A Workshop Toolkit for AI Exploration
A fun way to help think about the concerns of AI & the resources to do it!
Here’s the TL;DR
The Narrative: I adapted a workshop from a previous one: imagine the worst-case scenarios, then use them to clarify concerns and anticipate solutions.
The Toolkit: Here’s what you need to try it with others. It has Creative Commons licenses. Go forth and play! If you do, LMK!
The Insights: Look, I used AI to clean up all the contributions in a handy-dandy output from the workshop!
The Narrative
The more I’m in conversation with people about generative AI (and, really, so many things in our world), the more I keep thinking about the old story of people in a dark room trying to figure out what they are touching, each person describing their part. When the lights are turned on, they realize it’s an elephant. Each has individual knowledge of and experience with a part of the elephant, but naming it is a collective effort.
When I was invited to the Public AI Summit to do a workshop on AI policy, I knew this would be an opportunity to learn more about the elephant. Part of why I was asked was another time I was in the dark, working with others to figure out what kind of elephant we were dealing with. At College Unbound in 2023-2024, I worked with students in a course I taught (AI and Education) to create a student-led institutional AI usage policy for faculty and students (read more about that here). I’m 99.9% sure it was the first higher ed institution to have an institutional policy driven by student insights and recommendations.
Planning the Workshop
Recognizing the constraints of this workshop (40 or so minutes, anywhere from 5 to 60 attendees, no breakout rooms), I knew I had to bring something that allowed for a lot of input, had momentum, and gave people chances to learn from one another.
This led me to recall an invitation from my colleagues, Megan Hamilton Giebert and Josh Luckens, to support a workshop they had run several times at NERCOMP called “Strike That, Reverse It: What Not to Do When Designing to Learn.” The premise is to come up with the worst-designed course across several dimensions, making it completely horrible. It’s a fun activity in part because people start to really think about what they’ve experienced, what they’ve seen, and possibly, what they’ve done. At a certain point, the scenario switches, and participants are informed that they now have to use what they’ve made to figure out the best way to design a course.
I really enjoyed it as did the participants. It helped that Josh & Megan put on a full theater production, hamming it up with calls coming from “The President” and the like.
As a workshop, I thought it had a lot to offer for thinking about GenAI. The first is that it provides a space for people to voice their concerns, their angst, and the things they have seen go wrong with other technologies and policies. But it also allows for problem-solving and crowdsourcing ways to address those concerns.
I had a chance to adapt it for an assignment design process at a university that brought me in to work with their faculty two weeks before the summit. Once I had adapted it for that audience and received really good feedback about the experience, I knew it could be used in other ways. I thought it could be the perfect fit for this activity, so long as I could hone it down from the original 90 minutes to 40.
The Structure
I created three activities focused on thinking about AI policies in participants’ fields, disciplines, and industries, and I bookended them with two reflections.
The first reflection was an ice-breaker that had them list in the chat any policy (besides AI) they had encountered that went horribly wrong (side note: a lot of folks mentioned back-to-office mandates, no surprise).
The next activity asked participants to consider: if we aspired to create the WORST AI POLICY EVER, what would it look like? In a Google Doc, they identified their field/industry and then explained one specific attribute per line. This got some laughs as people started to imagine what it could look like, along with some concern, because it’s not hard to imagine these things actually happening. People shared some insights and observations before we moved on to the next activity.
The second activity had participants then explain WHY that attribute was part of the worst AI policy ever. This was an important step (at least for me) because people came from so many different backgrounds and understandings of AI. I wanted folks to have the opportunity to learn and understand the issues a bit more clearly. Again, we had folks share some insights and observations about the collective thoughts.
With the final activity, participants then had to come up with specific guidance on how to address each attribute. They were encouraged to be detailed: “more transparency” wasn’t enough; they needed to spell out what that could actually look like.
We closed with a final reflection that drew more insights from the participants and opened up consideration of how this activity could be valuable to try out with others, and what purpose it serves.
Technical Decisions
I want to share some of the particular decisions I made to help the activity run smoothly. I should also note that I had an assistant in each session (Elisa Diop-Weyer and Mohsin Yousufi), and they were fantastic in helping things go smoothly.
I decided that working in a Google Doc would be the most effective means of hearing from many people at once, while giving them space to explore other people’s insights and ideas. The Zoom chat is too chaotic, and personally, I find Padlet unfriendly to navigate when I want to find and read specific contributions. But Google Docs can also be unwieldy, so I knew that using a table with rows would be more consistent for users than blank space.
Some folks would be incredibly uncomfortable letting 30-40 people jump into a Google Doc because there’s always a chance of someone deleting things. It’s true; in fact, someone did accidentally delete content at one point, but it was quickly restored. I was less worried about this because I could always use the revision history to recover anything lost. I also wanted an artifact that folks could come back to later to explore and review. After the workshop, I turned off editing on the document so that it stayed preserved.
I wanted to structure things progressively but not let people rush ahead; I thought it was important for us all to be at the same stage. So while we debriefed Parts 1 and 2, Elisa and Mohsin went into the Google Doc to add the next column (for Part 2: “why this is a bad policy attribute,” and for Part 3: “how to fix it”). I think this made for a better experience and created more cohesion.
The Experience
I like to think that it went really well; I received verbal and written feedback saying so, but most important for me is that the elephant was made more legible for me and others through the activity.
Finding our way through AI in our respective spaces is incredibly hard, but I do think it’s made more legible through collaborative engagement with others.
If you are looking for more about doing this or just looking for what it might mean to engage in this work in organizations collectively, I would be remiss if I didn’t mention these two co-authored pieces of mine published in EDUCAUSE Review: Cross-Campus Approaches to Building a Generative AI Policy and In the Room Where It Happens: Generative AI Policy Creation in Higher Education.
The Toolkit
As part of my output for this workshop, I have assembled the materials from the workshop so that you can adapt and use them for AI Policy or any other challenge you’re tackling. They all come with Creative Commons licenses.
The Outputs
Finally, we have the output of the collective efforts of some 75+ folks over two sessions at the Public AI Summit. With the help of AI, I built out a collective “Do’s & Don’ts” guide for AI Policy development and a comparative matrix on AI policy approaches. I used Claude and ChatGPT to help compose this document, and you can see the chat logs from both as additional tabs on the document.
Coming to an (AI) Agreement Output
The Update Space
Upcoming Sightings & Shenanigans
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
AI and the Liberal Arts Symposium, Connecticut College. October 17-19, 2025
Recently Recorded Panels, Talks, & Publications
The Learning Stack Podcast with Thomas Thompson (August, 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 180+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International