“...screw around with it. See what it does. And we have to do that, fearlessly.”
Part 1 of my interview with Corrie Bergeron
As part of my work in this space, I seek to highlight the folks I’ve been in conversation with or learning from over the last few years as we navigate teaching and learning in the age of AI.
If you have experiences around AI and education, particularly in the higher education space, that you would like to share, consider being interviewed for this Substack. Whether it is a voice of concern, curiosity, or confusion, we need you all to be part of the conversation!
Introduction
Corrie Bergeron is an educator, instructional designer, and learning systems administrator at a community college in northeast Ohio. A longtime innovator in teaching with technology, he has designed simulators for airlines, banks, and the US Navy, led curriculum development at PLATO, and now helps college faculty integrate AI into their course designs and assessment strategies. A historical re-enactor since college, he enjoys demonstrating medieval and renaissance music, poetry, storytelling, and handicrafts. He and his wife of 34 years (a retired nurse) have four grown children. In late 2026, Corrie and three co-authors will be releasing a book through McFarland Publishing that looks at the impact of AI on education through the lens of science fiction.
Lance Eaton: How did generative AI first show up in your work, directly and indirectly?
Corrie Bergeron: I first used it several years ago, and it looks a lot better now.
How did generative AI first show up? I got an email from a faculty member in late 2022. Nobody in my network had hit it yet, and I got this email saying, “Holy smokes, did you see this?” I was surprised that I hadn’t seen it, because I’m usually on top of things.
I spent my Christmas break going, “Wow…WOW.” This is going to change the water on the beans. I’ve been involved with AI for a long time. I did expert systems back in the ’80s. I did a project in graduate school with an incomplete installation of LISP with no documentation. So I’ve been playing with this stuff for a long time.
Back at PLATO in the ’90s, we did something called Math Problem Solving, which kind of simulated AI in DOS, with a coach based on a state engine. It looked at the state of the simulation and provided coaching to the student as they went through it, based on the state of the various objects in the system. It worked pretty well.

Then, a few years ago, working with Jack Mostow at Carnegie Mellon on the XPRIZE with RoboTutor, we were trying to model the student using facial recognition, analyzing the facial expressions of the students using the tablets.
Lance: Did this feel just like another AI thing along those lines?
Corrie: In some respects, GenAI is just the latest shiny thing to come along, and education has seen a lot of those. In some respects, we can treat it like that. On the other hand, AI is the printing press, it’s radio, it’s the internet. And it is going to fundamentally change things in ways we cannot predict. We need to be prepared for that. It is an absolutely disruptive technology, and it is going to change things in dramatic, fundamental ways. We just have to accept that and be prepared to ride it.
AI reminds me of a moment in 2007. Twitter is this new thing that I’m having to explain to people. When I explain it, they look at me like I’ve sprouted a second head. It still sounds dumb: you’ve got 140 characters, and people can subscribe to your thoughts. Why would anyone want that? But it’s literally changed politics. It’s changed the way we run the world.
If you want news, you read Twitter. You don’t watch television. You don’t watch cable news; that’s two hours behind breaking news. Twitter is what’s happening NOW. So, 2007. I’m following maybe a dozen people, a couple are following me, and somebody posts a link: “Hey, check this out.” I click the URL, and someone at a conference, in Hawaii I think, has taken their MacBook, turned it around, and is streaming live video over the internet.
I don’t have to crawl behind the machine, plug things in, set DIP switches, or dial into a server. I just click a link and watch live video from the other side of the planet. Whoa. It’s the 21st century.
Lance: That’s a moment many of us experienced in the 2000s!
Corrie: So there’s this presenter named David Warlick, a science teacher from Georgia I think, and he says something profound. He says, “We are the first generation in history who knows that we have to prepare the next generation for a future that we know we can’t imagine.”
Every generation before us assumed their kids would grow up in a world pretty much like the one they grew up in. I knew what the 21st century was going to be like because I watched The Jetsons. Flying cars, robot maids, space stations, moon bases. We’ve got all of those except the moon base, and I want my moon base.
But none of those things really changed the world. We have a space station, and now we’re figuring out how to bring it down safely. We have flying cars, but none of that changed the world. What changed the world is this [holds up smartphone]. A device with which I can access the sum total of human knowledge and talk to any person on the planet. And I use it to look at pictures of cats and argue with strangers.
This would blow my grandmother’s mind. She grew up in a shotgun shack in the bayou. But some things don’t change. She’d be really happy that I still make her cornbread dressing at Thanksgiving and Christmas.
Lance: I hear you about the cellphone—smartphone, whatever we want to call it—as an integrative technology that changes everything. Guide me from there. When you talk about the smartphone as changing the 21st century, walk me through from that to how AI is a technology that fundamentally changes things. What do you see that fundamentally changes?
Corrie: The question is: what doesn’t the smartphone fundamentally change? It has not fundamentally changed human psychology or human nature. It has not fundamentally changed human needs. It has changed how we go about getting those needs met.
We need food. We need companionship. We need meaning in our lives. How do we go about getting those things? That’s changed radically just in the last ten years. I can order anything I want and have it delivered to my doorstep within a week, maybe even today if it’s in the fulfillment center down the road. I can call up Uber Eats and get any cuisine on the planet. I have to pay for it, but my bank account is linked to this thing. And God help me if it gets hacked.
Lance: Let’s dig into education. Thinking about AI as fundamentally changing things, what is it changing in the teaching and learning space, particularly?
Corrie: That’s the thing; I can’t even think ten years from now. I’m thinking about next semester. We’ve got students using AI to write their papers. What do we do with that?
We’ve got software that claims to be able to detect AI, and we’re playing with it. We’re piloting it next semester. I’ve been playing with it, and okay, maybe. But you’re locking the barn door after the horse is gone and over the horizon. We have to fundamentally rethink how we assess learning.
That’s really hard, because it means rethinking how we design whole courses. We still have faculty who haven’t retired yet who lecture for eight weeks, give a midterm, lecture for eight more weeks, and give a final, and they don’t know any other way to teach. So what do we do? Their students are going to have to come into the testing center, sit down, and actually take the test while people are watching. I’m not sure there’s another way within those faculty’s comfort zone.
The pep talk I give students is this: look, I know you’ve been taught wrong for the last twelve years, but here in college, we really do care more about the process than the product. AI is great at shortcutting the process to get to the product. It’s very good at that. The problem is, it only gets you about 80% there, and that last 20% is garbage. And you have to be able to spot the 20% that’s garbage.
If you can’t, you’re going to be in deep trouble when you get out into the real world and try to get the machine to do your job for you. You have to know how to do it yourself. You have to be able to do the process. You have to be able to do the real thinking yourself.
That’s what we’re really all about in school, even if it doesn’t show. The product is merely a proxy for the process, because I can’t crack your head open and watch you think. So I have to get you to create an artifact. I don’t care if it’s a TikTok video. I just need some evidence that you’re thinking and that you’re the one doing the thinking, not the sand. Not the cleverly organized pile of sand, as Ethan Mollick puts it.
Lance: It’s something we’ve always been trying to communicate with students, and it feels quadruply important right now. I can’t help thinking about how that lands for students, though. I can imagine myself hearing that in a course I had no interest in whatsoever. I’m a social sciences and humanities kind of person, and I remember dragging myself through geology, weather, and climate.
One of the challenges is that no matter what classes we’re teaching, we always have some students who are there less out of choice and more because it’s part of the curriculum.
Corrie: And that is a challenge. When I was at PLATO, we always started each part with a WIFM: What’s in it for me? Everybody’s favorite radio station, WIFM. What are you going to get out of this? Because if you don’t have some intrinsic motivation, you are checked out.
I wish I could get every faculty member to read James Lang’s Cheating Lessons. We build incentives to cheat into our courses, and one of them is exactly what you said: “I don’t care. I just need a C. I just need to pass this bloody class. What’s the minimum I can get away with doing in order to skate through this?”
I was in the cafeteria a few years ago, and I heard one student say to another, “Just take that course online. You can cheat your way through it.” My God. And it was a medical terminology course. That’s foundational. That’s a gateway to the whole allied health field. If you don’t understand the difference between a neuron and a nephron, I do not want you at my bedside.
Lance: We’re really talking about Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. The system becomes prey to gaming.
I think about our whole grading system and the degree to which gaming elements are baked into grades, and the abstraction of what an “A” even really means. That feels even more fragile, maybe even irrelevant, in the age of AI. If grades feel irrelevant, why am I working toward this? Why wouldn’t I just use the secret levels available to power up?
Corrie: Exactly. It speaks to what’s likely to change and what’s not likely to change. There’s already a social movement toward devaluing college education. How much of that is temporary, how much of it is real and permanent—we don’t know. I don’t think the value of the branded credential is going to drop to zero anytime soon. But the value of demonstrated competence, I think, is going to rise.
This is getting slightly beyond the scope of this conversation, but there’s a whole lot wrapped up in what AI is going to replace and the disruption that’s coming in the short, medium, and long term. Right now, we’re seeing a lot of entry-level white-collar jobs not getting filled because managers are saying, “Maybe I don’t want to hire someone fresh out of college. Maybe I can take one of my current employees—a known quantity—give them Copilot, and just give them more work.”
Now we have a tool that lets people do more work for the same amount of money or maybe a little more money. “I’ll give you a raise, but I don’t have to add another headcount to get more productivity.”
Maybe we’re seeing a white-collar industrial revolution, on the scale of the mills that gave us the terms Luddite, sabotage, and saboteur. Those movements didn’t succeed, by the way, other than giving us those additions to the lexicon.
Progress and change are going to happen. We may be able to influence the change. We may be able to surf the wave instead of being crushed under it. History isn’t inevitable; it doesn’t have to happen a certain way. But history IS going to happen.
Lance: I think of Mark Twain’s line: History doesn’t repeat, but it sure does rhyme.
Corrie: It sure does rhyme. And that’s what I’m seeing. That’s been the thrust of a few presentations I’ve given along these lines. I was absolutely delighted and felt very validated when David Wiley used the same theme in a presentation he gave. I thought, “Yes, I’m not crazy.”
Because we’re seeing the same technology adoption curve again and again. First, it’s the shiny new thing. Isn’t it silly? Isn’t it funny? Then it’s a threat: we must not use it, we must not allow it. Then we use it to do the same things we’ve always done, because we don’t know any better, just faster, cheaper, and more productive.
Then somebody comes along and says, “Hang on a second. I can do this. I can do that. I can do this other thing.” And then they see something completely new that nobody had thought of before.
I call them the transformative visionaries. They come out of left field. They might be in a middle school classroom right now. They might be sitting in high school. We don’t know. But we do know they’re going to happen. We know they’re going to show up. They’re going to have some idea, and we’re just waiting for it to happen.
Lance: While we’re waiting for that to happen, you’ve been talking about the changes that need to happen: the question of how we assess learning, how we redesign whole courses. These can be big or small, but what are the kernels? What are the pieces you’re figuring out as generative AI is here? What are some small “t” truths or insights that you’re gathering that feel like part of whatever comes next in education?
Corrie: As educators, it’s our responsibility to teach our students how to use these tools. And that means we have to use them. We need to know how to use them. We need to not be afraid to use them. That means we have to play with them. We have to see where the boundaries are. There’s this great French term, bricolage, which basically means screw around with it. See what it does. And we have to do that, fearlessly. That also means we will fail, and we can’t be afraid of that. We can’t be afraid to try things and have them not work.
That means being willing to stand in front of our students and say, “Hi, I generated this test using AI. If it’s a bad test, I’ll throw it out. But I used AI to generate it from this OER material because I could do it in five minutes instead of five hours. And that gave me four hours and fifty-five minutes to sit with you, answer your questions, and bring my human experience, my humanness, rather than sit alone in my artist’s garret writing multiple-choice questions, which may not be the best use of my time now that I have a machine that can do that for me.”
Now, yes, I need to spend more than five minutes, because even though the machine can crank it out, I still have to review it.
This is where instructional designers and the like come in. The machine can crank out a lesson plan, an assignment, a test in seconds. But like I said, it’s only about 80% there. And the other 20% is pure, unadulterated garbage. I’ve seen some remarkably well-spoken garbage. You look at it and think, no, absolutely not. This is awful. What in the world?
If you don’t have that expertise, if you’re not steeped in learning theory, if you haven’t written thousands of multiple-choice questions, if you haven’t spent hours face-to-face with students, you might not recognize that. The human element is crucial, and it’s not going away anytime soon.
I don’t think even superintelligence is going to be human. It may simulate human reasoning. It may emulate human reasoning. But everything I’ve read so far (and I admit I can’t keep up; it’s like drinking from Niagara Falls) suggests that at heart, it’s still a machine.
There’s a je ne sais quoi to humanity that just isn’t there. The uncanny valley is still there. And we have something to bring to the party. Maybe I’m just an old guy hanging on to a thread of hope, but I think we always will.
Join us in the next post for Part 2!
The Update Space
Upcoming Sightings & Shenanigans
Continuous Improvement Summit, February 2026
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
Online Learning in the Second Half with John Nash and Jason Johnston: EP 39 - The Higher Ed AI Solution: Good Pedagogy (January 2026)
The Peer Review Podcast with Sarah Bunin Benor and Mira Sucharov: Authentic Assessment: Co-Creating AI Policies with Students (December 2025)
David Bachman interviewed me on his Substack, Entropy Bonus (November 2025)
The AI Diatribe Podcast with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form, and I’ll get back to you soon!
We periodically host small-group workshops and leadership sessions for higher ed teams. You can learn more about our current offerings here.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International