“For educational purposes, we have to make sure our systems have guardrails.”
Part 2 of my interview with Corrie Bergeron
In the last post, we talked with Corrie Bergeron. He shared some of his projects from earlier eras of AI, along with how he thinks about AI as a disruptive technology that requires us to rethink assessment. Want to find out what that means for our work? Go read part 1, then join us as we learn more about his current work and upcoming book!
If you have experiences around AI and education, particularly in the higher education space, that you would like to share, consider being interviewed for this Substack. Whether it is a voice of concern, curiosity, or confusion, we need you all to be part of the conversation!
Lance: Many of us are there! I’m curious about that idea of the faculty member using AI to get the 80%, and then refining and humanizing the remaining 20%. Where do you see the edge of that? I’m asking because there’s content development, course assignments, and then there’s evaluation. Do you see this as a tool that can be used across those areas, or are there places that feel like no-go zones?
Corrie: It’s a good question. I’m a little squishy when it comes to the idea of having the machine evaluate students. I know it can be done. Seven or so years ago, one or more of the big testing companies was using AI to score student essays. If the machine can score the essay, then it can write the essay.
So when ChatGPT showed up, I was like, okay, where have you been? We’ve been waiting for you. It’s about time. But I’m still squishy about it, because now you’re getting into what they call the dead web: machines writing content that machines then read, which produce something another machine reads, and then the machine generates the diploma, the cover letter, the résumé, which another AI scores, and then the AI decides whether or not to offer you the job. At what point is the human even in the loop? Why bother?
In the near term, next semester, the LMS we use has some really useful, well-thought-out AI tools built into it, in my opinion. Faculty are making good use of those tools and doing really interesting things with them, and they’re being transparent about that with students, as they should be.
For example, one of our nursing faculty is using role-play so students can practice communication techniques, such as assertive communication, with a simulated coworker. The simulation actually changes how it responds if the student uses the correct techniques. Then the students engage in a dialogue with a communication coach to talk through how that interaction went, and the whole experience gets debriefed and discussed in class, in the physical space.
That’s a really interesting use of the technology. It’s giving the tool a shot. How does this feel? How does this work? Especially in nursing, because that is a brutal profession, frankly. My wife is a nurse, our daughter is a fourth-generation nurse, so I’ve heard a lot of war stories. Being able to practice those kinds of workplace interactions is nontrivial, and it’s a really good use of the technology.
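An aside for readers who like to see the plumbing: here is a minimal sketch of how a role-play like the one Corrie describes might be wired up outside an LMS, assuming an OpenAI-style chat API. The persona prompt, the rubric baked into it, and the model name are my own illustrative inventions, not the actual tool his campus uses.

```python
# A minimal, hypothetical sketch of an adaptive role-play partner.
# The persona, rubric, and model choice are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COWORKER_PERSONA = """You are role-playing a stressed charge nurse who has
just handed a student an unreasonable assignment. If the student responds
with assertive communication (uses "I" statements, names the problem,
proposes a concrete alternative), soften your tone and negotiate.
If the student is passive or aggressive, stay curt and escalate slightly.
Never break character."""

def coworker_reply(history: list[dict]) -> str:
    """Return the simulated coworker's next line given the dialogue so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "system", "content": COWORKER_PERSONA}] + history,
    )
    return response.choices[0].message.content

# Example turn: the student practices an assertive opening.
history = [{"role": "user",
            "content": "I understand we're short-staffed, but I'm not yet "
                       "signed off on that procedure. Could I shadow you on "
                       "it today and take it solo next shift?"}]
print(coworker_reply(history))
```

The point of the adaptive persona is exactly what Corrie notes: the simulation responds differently depending on whether the student actually uses the technique, which is what makes the practice meaningful.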
Lance: Yeah, simulations are a big opportunity, and that's something you already have experience with?
Corrie: I’ve done a lot of simulations. And the big thing that a lot of folks don’t realize is that what’s important for an educational simulation is that it be cognitively accurate. You want cognitive fidelity, not necessarily physical fidelity.
It doesn't matter how photorealistic the office is if the business problem you're solving isn't realistic and the numbers on the spreadsheet don't move the way real numbers move in a real business.
You can do a lemonade stand simulation that will have you sweating like a Fortune 500 CEO trying to pull off a hostile takeover—and keep you up at night—with just a plain spreadsheet, if you design it right. You don’t need photorealistic graphics.
What’s important is that the system reacts in ways that activate your limbic system, so you’re having the cognitive and, if necessary, emotional reaction you need to have. As an instructional designer, you analyze the task to be performed and ask whether there are any affective or psychomotor aspects to it.
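To make that concrete, here is a toy sketch, in Python rather than a spreadsheet, of the kind of reactive numbers Corrie is describing. Every value and the demand formula are invented for illustration; the point is only that the numbers respond to the learner's decisions the way real numbers would, with no graphics at all.

```python
# A toy "plain spreadsheet" lemonade stand. All numbers are invented.
import random

cash = 20.00          # starting capital
cost_per_cup = 0.30   # ingredient cost

for day in range(1, 8):
    price = 1.50                       # the learner's pricing decision each day
    cups_made = 60                     # another decision: how much inventory to risk
    hot_day = random.random() < 0.5    # uncontrollable conditions
    base_demand = 80 if hot_day else 40
    demand = max(0, int(base_demand * (2.0 - price)))  # higher price, fewer buyers
    sold = min(cups_made, demand)
    cash += sold * price - cups_made * cost_per_cup
    print(f"Day {day}: hot={hot_day}, sold {sold}/{cups_made}, cash=${cash:.2f}")
```

Swap in a payroll deadline and a loan payment and you have the sweating-CEO version: cognitive fidelity without a single photorealistic pixel.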
Lance: There are clearly places where simulations won't work; with what we're seeing now, there are some things you just can't simulate. But where do you see high-value opportunities with simulations? As you've been engaging with faculty or just looking around, what feels like a rich opportunity that isn't being seized yet, especially using chatbots as they exist now for deeper learning?
Corrie: Anything involving risky human interaction. Clinical psychology, nursing, drug-abuse counseling. The danger there, of course, is that you don’t just need guardrails, you need castle walls. You have to be HIPAA-compliant, absolutely. But more than that, you have to make sure that what the student says never, ever leaves the classroom and can never be used in court against them. It has to be absolutely safe to totally screw up. And it has to be tested to hell.
These are crushing professions. Every therapist has a therapist; if they don’t, they’re in trouble. So the benefits of simulation, which we’ve known about for a long time, are greatest in areas where it’s dangerous or expensive to do things for real.
What’s new is that chatbots and AI are opening the door to cognitive simulations we maybe hadn’t realized we could do before. Lots and lots of role-plays: criminal justice, interviewing, interrogations—holy smokes, you can do all kinds of things. You can interview every historical figure anyone knows anything about.
What’s fascinating is that the role-play built into our LMS can look things up on the internet. If you tell it it’s Leonardo da Vinci, it’ll speak to you in Italian if you want. You tell it it’s Shakespeare, and by golly, you feel like you’re talking to William Shakespeare, because it knows everything we know about Shakespeare. It’s a little spooky.
So you can do all kinds of really interesting things. What’s important is making sure there are guardrails and not getting bent out of shape when things go sideways, because we know they’re going to. Grok turned into “Mecha-Hitler” for a few hours. We knew something like that was going to happen eventually, because Reddit exists. There are these weird forums there where some people take off their tinfoil hats. Garbage in, garbage out, and there’s a lot of garbage out there.
For educational purposes, we have to make sure our systems have guardrails. For K–12, even more so. And that’s a whole other challenge: getting parents to understand this, getting school districts to understand this. Because guess what? Students in high school and middle school know a whole lot more about this than their principals and superintendents. That’s another conversation.
I just got a note from one of our partner school districts saying they’re blocking YouTube next year. Good luck with that. Bold strategy, Cotton. Let’s see how it works out.
Lance: Thinking about guardrails, particularly in higher ed, maybe this isn't a direct line, but I'm thinking about what new roles might emerge for instructional designers in this space. I've heard folks worry that AI is getting good enough that a competent person using it can do a lot, and that instructional designers may be one of the knowledge-worker roles at risk.
So what’s their role now? What contributions do you see instructional designers making with AI that maybe weren’t there before, or that become more central or enhanced?
Corrie: I think it would probably be worthwhile to reread Reclaiming Instructional Design by David Merrill et al., because they spent many years trying to automate instructional design. Merrill was a serial theorist and tried to create a theory solid enough to make instructional design automatable. He eventually came to the conclusion that you can't do it.
There’s enough art in the art and science of instructional design that it isn’t completely mechanistic and can’t be reduced to a set of rules. I buy into that. I’ll admit I’m biased. I like Dr. Dave. I spent time at his house, ran his model trains in his basement, went to his summer institutes back when he was at Utah State. And I’ve been doing this long enough to know there’s a craft to it. It’s not just crank-it-out work. There’s artistry. There’s elegance to instructional design.
Yes, the machine can get you most of the way there. It can save you a huge amount of time. And frankly, if you don’t care about quality, you can crank out a lot of garbage that looks good and sell it. People who don’t know the difference will pay cheap money for something that’s mostly okay. But it will have holes. It will fall apart. And who knows, at some point, there may be lawsuits, especially if you’re doing anything that touches health or safety. If those Venn diagrams intersect with human lives, I do not want to be in the firing line.
There will absolutely be a role for instructional designers who know how to leverage these tools and who know how to use them to work more efficiently, better, cheaper, with more bells and whistles. In the industrial world, it’s faster, better, cheaper. That’s the name of the game.
But also better in a deeper sense. Being able to analyze 5,000 pages of technical documentation in minutes, turn it around, and hand it to an SME in a couple of days and say, “Okay, is this right?”—instead of six months later because it takes that long to dig through all the irrelevant garbage to find the core. Especially if you can write a GPT and train it to think like an instructional designer, to look for what matters. If you can build your own tools, you could be a monster at this if you’re smart about it.
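For readers who want to see what "build your own tools" might look like in practice, here is a minimal sketch of pointing a chat model at chunks of documentation through an instructional-designer lens, assuming an OpenAI-style chat API. The prompt, chunk size, and model name are my own illustrative choices, not a tool Corrie described.

```python
# A hypothetical sketch of triaging long technical documentation
# with an instructional-design lens. Prompt and parameters are illustrative.
from openai import OpenAI

client = OpenAI()

ID_LENS = """You are an instructional designer. From the documentation
excerpt, extract only what a course would need: the tasks a learner must
perform, prerequisite knowledge, safety-critical steps, and common errors.
Ignore marketing language and implementation trivia."""

def triage(doc_text: str, chunk_chars: int = 8000) -> list[str]:
    """Summarize each chunk of a long document through an ID lens."""
    notes = []
    for start in range(0, len(doc_text), chunk_chars):
        chunk = doc_text[start:start + chunk_chars]
        result = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[{"role": "system", "content": ID_LENS},
                      {"role": "user", "content": chunk}],
        )
        notes.append(result.choices[0].message.content)
    return notes  # hand these to the SME and ask, "Okay, is this right?"
```

The value is in the lens, not the loop: the designer still decides what matters and still puts the result in front of a subject-matter expert.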
Lance: I’m going to switch into some final questions focused on your forthcoming book. It looks at AI through science fiction—did I get that right? So why science fiction as a lens for educational futures?
Corrie: Yes. The working title is We Were Always Going to Get Here: Science Fiction, AI and the Future of Learning. It’s coming out from McFarland in mid-2026. I’m working with Pixy Ferris—she was really the driving force behind it—along with Chitra Singh and Giovanni Troiano.
Science fiction has already explored a whole range of possible futures. We're all science fiction fans, so it felt natural. Why not? It turns out there's a surprising amount of science fiction that engages with education or, if you squint and tilt your head a bit, has educational implications.
Because we’re all educators from different disciplines, we’re approaching the book as a kind of colloquium. We’re each writing a few chapters, and then we’re commenting on one another’s chapters. So the book becomes a conversation from multiple perspectives.
The chapters I’m writing include an overview to set the tone, much of which draws on the arc of technological adoption, and how AI is playing out the way that other disruptive technologies have. In chapter two, I look at “adjacent possibilities,” a term Steven Johnson talks about, and then at unintended consequences. Once you climb up on the counter, what else can you get into? And, “Oh dear, I didn’t mean for that to happen.”
In chapter three, I look at transformative visionaries including Thomas Edison, George Westinghouse, and yes, Elon Musk. I explore things like how the light bulb would have remained a laboratory curiosity if it weren’t for electrical distribution and billing systems. You’re not going to light a city unless you can get power into people’s houses.
But I don’t think Edison and Westinghouse were thinking about leveling mountains to get at coal, or filling landfills with 30-year-old windmill blades the length of football fields, or burying spent uranium fuel rods miles underground. And that’s before we even get into climate impacts and what, if anything, we’re going to do about them.
Those unintended consequences are a big part of what I'm trying to surface.
Lance: How might science fiction help us think about those unintended consequences?
Corrie: My daughter lives in Phoenix, and her electric bills are going up because they’re putting a data center outside of town. This stuff is nontrivial. Some of these things we’re thinking about, some we’re not, but they’re going to happen.
Why science fiction? Because we've already explored a lot of the possible futures. If you're familiar with futures studies, you can imagine an arrow pointing from now into the future. But that arrow can go in many directions. If you think of several concentric cones spreading out, there's a most-likely cone that's very narrow, then progressively less probable cones, and finally the wildly improbable futures.
The challenge is that some of those wildly improbable futures are actually quite desirable: world peace, an end to disease, an end to poverty. Those are very desirable. At the same time, some outcomes—utter dystopia, even the end of humanity—may be quite likely if we make the wrong decisions.
We’re having that argument right now. People are at absolute loggerheads over questions like: should we try to make AI safe? What happens if we do and China doesn’t, or if we don’t and China does? And of course, you and I have no control over those deliberations. I’m a fan of the Stoics: focus on the things you can do something about, and don’t lose sleep over the things you can’t.
There's the wave of history. Are we going to try to surf it, or resign ourselves to being crushed under it, like the final scene in Deep Impact (1998), standing on the beach as the wave comes in?
Lance: Nice callback. Who’s your audience? Does it include faculty? And if so, what’s the spark you’re hoping to light there?
Corrie: The audience is definitely faculty. Hopefully administrators. Parents, maybe. Hopefully students. We hope to sell a lot of books and maybe make a buck fifty. Heck yeah. We hope to reach a wide audience. They’re an academic publisher, so maybe we’ll get into a couple of libraries. It’d be nice if folks read it. We’re certainly putting a lot of effort into writing it.
Lance: What do you want folks to walk away with after reading it?
Corrie: The takeaway I hope for is this: don’t be scared. There’s no point in being scared; it’s a waste of energy. You might as well get excited, because this is an exciting time. We’re at a cusp in history. Out of the billions of people who’ve lived, not that many get to experience an inflection point like this.
This is a really neat moment. And it doesn't involve a world war—yet. Knock on wood. This is a remarkable time to be in educational technology. We're on the cutting edge. We get to play with these tools and have a profound impact on the next generation. What a chance to move the vector of that arrow in the center of the cone.
So many science fiction stories and TV episodes ask: what if you could go back in time and change the course of history? Well, what if you could do one thing tomorrow that would change the course of history—what would you do tomorrow?
The Update Space
Upcoming Sightings & Shenanigans
Continuous Improvement Summit, February 2026
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
Online Learning in the Second Half with John Nash and Jason Johnston: EP 39 - The Higher Ed AI Solution: Good Pedagogy (January 2026)
The Peer Review Podcast with Sarah Bunin Benor and Mira Sucharov: Authentic Assessment: Co-Creating AI Policies with Students (December 2025)
David Bachman interviewed me on his Substack, Entropy Bonus (November 2025)
The AI Diatribe Podcast with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form, and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International



