Getting to an AI Policy Part 1: Challenges
Why institutional policies are slow to emerge in higher education
As we head into another semester, a friendly reminder that you can check out the Syllabi Policies for Generative AI repository, with over 150 examples across many disciplines, if you're trying to find the approach that's right for you. (You can also submit your own policy to the collection here.)
I mention that because I'm currently thinking about institutional policy, something that shapes those syllabi policies and yet is sorely missing across higher education. A good number of institutions do have policies, as can be seen in Tracy Mendolia's crowdsourced Padlet of University Policies on Generative AI and in this collection of Policies & Guidelines from Higher Education Strategy Associates' AI Observatory. I've also started a crowdsourced Institutional AI Policies & Governance Structures repository (consider submitting your institutional policy or guidance!).
This is the first post in a two-part series. Here, I discuss the challenges of establishing a policy around GenAI and what I think is driving some of the hesitation and lag. The second post will look at a framework for making an AI policy flexible enough to handle the changes that seem to keep coming.
It’s Not Just About a Policy
Yet two years into the GenAI boom, I keep hearing from faculty and leaders that their institutions don't have guidance or policy yet. Much of what I'm seeing is also more guidance than policy. Guidance is great to have, but it can feel less useful and less comforting to faculty navigating a tool that keeps moving.
There are several challenges that make institutions hesitant to produce policy or that delay their ability to do so. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. Each of those entities has to wrap its head around GenAI, not just for its own area but for other relevant areas such as teaching & learning, research, and student support. That process can significantly extend the time it takes to figure out the right policy.
That's naturally true of every policy: it rarely comes fast enough and is more often reactive than proactive.
Still, in my conversations and observations, the delay derives from three additional intersecting elements that all need to be in lockstep for an institution to actually take advantage of whatever possibilities GenAI has to offer. Ultimately, it's not just the policy; institutions also need to determine tool selection, support, and strategy in order to feel they have agency and direction rather than feeling subjected to the GenAI shift.
Which Tool(s) To Use
After the policy, an institution actually has to choose and invest in one or more tools. Tool selection requires evaluating different interests and weighing costs. It may also mean different levels of access, or entirely different tools, for different populations at the institution. This is easier (but not necessarily easy) for larger institutions because they often have the staff, technological infrastructure, and money to investigate, invest in, and deploy multiple tools to different parties.
Tool selection is a real challenge for institutions, but doubly so with GenAI. These tools are actively changing, sometimes day by day, and features are not the same across tools. That's why I find myself regularly moving across 2-3 tools when using AI (ChatGPT, Perplexity, and NotebookLM). Choosing the right tool is much like choosing the right Learning Management System (which isn't really possible at the end of the day; they are all lacking); you're going to have to find the good-enough tool.
But GenAI tools are not like LMSs. Switching to a different LMS is a much bigger lift for people than switching to a different GenAI tool, and that's something leaders are concerned about too. Choosing tools that will actually get used (especially given their institutional cost) is the hard part. After all, there are free versions out there, and lots of individuals, including educators, are paying for AI out of pocket.
What I see happening is that a lot of institutions come up with policies and then point to the basic version of Copilot or Gemini that can be turned on, which does offer data protection but not much else. At this stage, these versions are pretty limited in their abilities compared to what is being shown and shared across the internet. In fact, I would imagine that some of the disengagement or dismissal of GenAI among faculty comes from exposure to these basic tools and not finding them all that useful or intriguing for their use cases. One way to think about it: institutions are offering Notepad when their community really wants or needs MS Word.
So institutions have to come up with a policy, figure out which tools to invest in that meet their technical and legal criteria, and then hope both that those are the right tools and that different institutional players don't go off on their own to use tools they feel are more relevant to how they want to use GenAI.
This reminds me a bit of the COVID pandemic, when nearly every institution raced to Zoom, even institutions that already had Microsoft Teams or Google Meet. Some of the institutions that tried to steer their faculty and staff toward those institutional tools still found that folks went to Zoom regardless (it helped that Zoom made things free). Eventually, institutions were able to pull people away from Zoom, mostly because Teams and Meet got much better, though again, like LMSs, all of them are lacking in different ways.
Training, Support, & Guidance, Oh My!
But wait! There's more! Getting through that process takes you far, but the policy and the tool also need to be communicated, and folks need (and want) training, support, and guidance. A policy without proper communication, training, and support is just another unfunded mandate put on the backs of people who already have too much to do, leaving them more prone to burnout, or more likely to use their own tools if their voices weren't included in the process or they don't feel they can get the results they need from the chosen tool(s).
To be clear, there has been tremendous support and guidance at many universities. Increasingly, as I give talks and workshops at different institutions, they share the things they have already created or done to navigate this. So I'm not saying that institutions aren't doing this work. They are, but many are doing it in the dark: trying to guide in the absence of a policy, with a sense of hesitancy about just how much they can encourage. I've regularly done events at campuses where there is inevitably a representative from the IT department on hand to say something along the lines of "this is all great, folks, and also, we can't condone any institutional uses." I don't blame those folks; they're in the challenging situation of knowing very well that people are using these tools while not having good answers for them.
Side Note: Months ago, I had a fascinating meeting with a sales representative from one of the major AI tools. Without even being asked, the person volunteered the number of accounts on the platform that had been created with the domain of the institution I was at. It was a significant number, so I can very much understand why, when no policy has been established, IT and other representatives show up to AI training and support sessions to reiterate the institution's standing.
Institutions are doing this currently, but the work changes once a specific tool has been selected. Tool selection gives way to training and guidance on that tool's particular features, along with appropriate considerations about usage for different institutional populations. There also needs to be a communications strategy for the updates and shifts that are likely to occur with these tools. Institutions often already have infrastructure for communicating such changes, but this still adds to the list of ever-more communications going out to their community.
There's also the challenge of figuring out what the training and support will look like. We've all seen poor training, such as the 30-minute one-and-done session or the self-paced click-through module that checks boxes but doesn't necessarily represent learning. Of course, we need the basic 101 training on the selected tool, but then we need more.
Learning about GenAI is more than explaining the tool and offering some prompt guidance. It will likely require a shift in perspective: an ability to use questioning and curiosity to think differently than we might have in the past. It's partly why I continue to learn about other people's usage; it reveals different ways to work with GenAI and shows how the tool might do things differently from what I expected. That can't be taught in the same 30-minute training where one is just getting exposure to the tool.
Then, of course, there's the more specialized learning that needs to happen across the different academic and institutional departments. So it's not just "training" but intentional deployment of support: making sure everyone using these tools has a working understanding of them and is finding the best use cases for their area of work. Otherwise, a lot of money gets spent, and if only 10-15% of staff are leveraging the tool well, that doesn't necessarily recoup the cost of paying for it.
Strategy: Setting a Direction…
…even if you need to change it. To integrate thoughtful training and support, as well as to choose the right tool(s), an institution needs more than policy; it needs a strategy.
That strategy should certainly be aligned with the institutional mission and values, but it must also clarify the purpose and intention behind why GenAI is part of the conversation at the institution. The obvious answer is that industry and society are changing and education needs to respond and adapt. But the strategy probably also needs to go deeper, be specific, and reflect institutional priorities.
Ultimately, a strategy has to grapple with and square the possibilities and the problems that GenAI presents, anticipate the implications of how GenAI intersects with teaching and learning, and ponder how it will impact the rest of the institution's workflows. That's a tall order, filled with unknowns and how-to-get-there ambiguities.
Institutions want to get the strategy right, and the stakes of getting it wrong feel high. So there's hesitancy and reluctance; there's also a belief among some that there's no "there" there for GenAI in education or society. But the thing is, we have to borrow a play from our classrooms: we have to try, and keep trying, knowing full well we're going to get parts of it wrong. This isn't a one-and-done; it's an iterative, emergent approach to navigating a changing technology.
Of course, that's easier said than done. Staking out a direction and acting on it creates its own momentum, which then becomes hard to resist if a change of direction is needed. Mayhaps the strategy includes room for that (I'll have additional thoughts on this in part 2), but hesitation probably is no longer an option.
Yet we're two years into the GenAI movement, and there are enough whispers that agentic AI is coming right on the heels of GenAI. Even as imperfect tools (which describes literally every tool we use in our work), they are going to be used in many different ways: to replace some ways of doing things, to introduce new things we do, and to change how we engage with the real and digital worlds.
These technologies have arrived faster than nearly any other technology we've experienced historically. They feel as existential as the accumulation of those prior technologies, at least as far as education goes. They feel destabilizing, and it's hard to pinpoint exactly what they are and what they aren't (thanks, techbros and media!). Because of all that, a strategy is deeply needed at the institutional level, at the very least to help faculty, students, staff, and the community understand how the institution is making sense of a technology that has quickly become ubiquitous.
So there it is. It's a bit messy, but it's what I've been able to surface from a lot of conversations, reading, playing, and thinking about what I'm seeing in higher education. The institutional policy question is bound up with tool selection, training and support deployment, and strategic direction. Without those pieces in lockstep, institutions will struggle to move forward effectively.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International