AI Syllabi Policies - A Look at the Collection
Over 140 submissions to the crowd-sourced document and so much to learn from!
We’re not far from the start of the semester and so I figured I would reshare and talk a little bit about the crowd-sourced Syllabus AI policy collection. This document of AI policies in syllabi (spreadsheet version with sortable columns here) has become quite popular (tens of thousands of visits) and is frequently referenced as a resource; it’s been featured in Chronicle of Higher Ed and Inside Higher Ed as well as CNN and Forbes. If you have a policy to submit, you certainly can at this link. I’m still in awe of just how widely it has been shared and referenced.
Brief background: I started this collection in January 2023. I shared it out on different social media platforms and different listservs and groups that I belong to. Over the course of the year, it slowly grew to over 100 by the end of 2023. Currently, it’s at about 140+ syllabi policies submitted by faculty from around the world.
Exploring the Resource
With around 50 disciplines represented, this collection reflects a complex mix of approaches, concerns, and opportunities from educators at a range of institutions trying to navigate this new technology.
When looking through the collection, there appears to be a trend of cautious acceptance rather than outright prohibition (though that exists too). Many faculty recognize the inevitability of AI in students' future careers and are striving to incorporate AI literacy into their curriculum. A significant number of faculty demonstrated a desire to prepare students for a world where AI is increasingly prevalent, while still maintaining academic rigor and integrity.
Most faculty require students to disclose their use of AI tools, often asking for detailed information about prompts used and how AI-generated content was incorporated into their work. That makes sense, and I think increasingly, faculty will ask for links to the chat logs. Transparency makes a lot of sense as a way to help assess students’ learning. Citing where and how AI is used is important so long as it doesn’t become a power struggle with folks over-fixating on “proper citation.” I’m fully behind indicating that one’s work is influenced by something else so long as it’s not pedantic—if faculty are getting lost in battles (and deducting points) over commas and the perfect format of citations, they’re creating more hassle than it’s worth.
It’s clear these policies also reveal a significant tension between embracing AI as a learning tool and concerns about over-reliance or misuse. Many faculty express worry that students might use AI to bypass critical thinking or writing skills development. This tension is common and certainly some of the language reminds me of the days of Wikipedia and the concerns and handwringing that happened in the late 2000s and early 2010s.
Interestingly, approaches to AI integration vary considerably across disciplines. STEM, business, and technical fields tend to have more permissive policies, sometimes allowing unrestricted use of AI tools. In contrast, humanities and writing-intensive courses often have stricter guidelines, reflecting concerns about AI's impact on fundamental writing and critical thinking skills.
A common thread across many policies is the encouragement of critical evaluation of AI outputs. Faculty are urging students not to blindly accept AI-generated content, but to fact-check and critically assess it. This is the modern take on “don’t believe everything you read on the internet.” I think this is the most important part that we can do in all of this—support their critical thinking skills with generative AI—because it can be so convincing in its outputs at first glance. But peeling that digital onion often reveals little substance that can easily or directly go into outward-facing writing.
The policies also reveal varying levels of AI literacy among faculty themselves. While some demonstrate a nuanced understanding of AI's capabilities and limitations, others seem to struggle to recognize or make space for ways the tools can be beneficial. Obviously, faculty can have many different reasons for complete abstinence, but many of those reasons stem from a lack of understanding of what generative AI tools are (beyond the fear of plagiarism), the “slippery slope” of “authentic” student thinking, and an unwillingness to find ways that generative AI can be helpful to people. I see that mostly in the blanket statements where there is no acceptable use of the tool.
Many policies touch on the ethical implications of AI use, both in terms of academic integrity and discipline-specific concerns. For instance, healthcare-related courses mention the importance of patient privacy when using AI tools. These ethical considerations extend to discussions of AI's potential biases and limitations, encouraging students to think critically about the broader implications of these technologies.
Despite mentions of the ethical implications, there's a notable gap in most policies regarding accessibility and equity issues related to AI use. Few address the potential disparities in access to AI tools or consider how AI integration might impact students with different learning needs or technological access.
Notably, some faculty are taking a collaborative approach to policy development, involving students in the process of defining appropriate AI use. This strategy recognizes that students may have valuable insights into the practical applications and challenges of using AI in their academic work. Given the work that I did around co-creating with students the AI usage policy for students and faculty, this is an approach I can certainly get behind and endorse!
Some policies hint at the need for new ways to evaluate student learning and competencies in an AI-augmented world. I hope this is the start of a bigger shift that helps us consider how we measure and value certain skills in academia.
Final Thoughts
I have no clue how long this collection of policies will be useful. There’s lots that could change to make it moot. And at some point, it will be widely integrated into AI tools so that folks could simply say, “pull 10 different policies on AI in college syllabi on course X at level Y” and the given AI tool can generate it.
Still, for now, it seems to be a much sought-after resource that faculty can continue to learn from, borrow, and adapt. And, of course, they can (and hopefully will) continue to submit their current and updated policies (yes, if you’ve updated your policy, you can get it updated in the collection via that form).
My intention in creating this resource was for folks to be able to learn from one another—to have a sense of possibility and some sense of how others are creating the guardrails in their classes for navigating generative AI. I *think* I’ve succeeded. And it’s not that I’m big on hard rules—but so many faculty and students are seeking guidance on the boundaries that I think this resource helps folks think about what that will look like in their courses.
Thanks to everyone who has contributed to, explored or shared the resources. And thanks to the many folks who have reached out to me personally to thank me or ask me about the resource.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International