3 Years AI-go: Insights for the Future-Past
A keynote that looks forward by looking back...
I recently shared my talk from 3 years ago. I was revisiting it for a keynote I was giving in late February at the Continuous Improvement Summit, held by Embry‑Riddle Aeronautical University. It’s a very affordable conference with lots of rich programming. Given that it was happening near the anniversary of my first talk, I figured that I would take the opportunity to look back to help me think about where we were and where we are—with the hopes that it might help us figure out where we are going. As usual, the slides and resource document are included.
Thank you for inviting me as one of the keynote speakers. This talk is part of my larger work: about fifteen years in educational technology and teaching and learning in higher ed, and about twenty years teaching in higher ed.
One of the things I wanted to do with this talk is a kind of retrospective. I gave my first talk on generative AI in education in February 2023 for NERCOMP. If you’re in the New England area, they’re a great organization. You can find a recording of that talk here.
What the 2023 Talk Did
In that original talk (see the slides here), I tried to do several things. I acknowledged the different harms and concerns that generative AI represented. I gave a brief history of how we arrived at this moment. I articulated what is and isn’t possible with AI, and acknowledged the hype cycle, which is obviously still very much with us today. And I reminded people that we have always been a bit angsty about new technology; this goes back a very, very long time.
I then highlighted the types of conversations I was already seeing in February 2023. Those groups included educators who outright banned and punished AI use; those who avoided the conversation entirely; those who saw AI as a tool for the classroom; those who saw it as an instructor’s tool but not necessarily a student’s tool; and various critical takes that often resulted in dismissing or banning it.
But there was a sixth group that was largely absent from the conversation that I was trying to bring in: the student voice. Like so many times before, the conversation was happening about students and without them. I highlighted the student-centered approach I was engaged in at my institution at the time. In January 2023, I taught a course on AI and education and worked with students to develop and test an institutional policy around AI usage for faculty and students. It remains one of the most powerful moments in my educational experience because I took the time to turn toward students in trust and collaboration, and together we made a change at the institutional level. That felt both important and transferable to other institutions.
My final charge of that 2023 talk was simple, or maybe it wasn’t: whether we want it or not, AI is going to change how teaching and learning happen, and we need to figure out what that change looks like. There’s a lot of that talk that still holds up today. And while I’m proud of that, I’m also concerned.
The Strange Middle Space
Because in many ways, three years later, not too much has changed.
Some of this talk is about why I think we haven’t changed, recognizing some of the limitations of higher education, and some of it is a reiteration of the charge to figure this out while there’s still time. Generative AI has not gone away. While it hasn’t transformed higher education, it certainly has not left it untouched.
It’s been the subject of more writing than the pandemic, and I find it fascinating that all this talk about the decline of writing and critical thinking is happening when there is a flood of such writing and critical thought.
We now occupy a strange middle space where change is clearly happening, but not in the ways many expected or necessarily wanted.
Institutions formed committees. Faculty redesigned assignments. Policies may have been written or are still waiting to be written. A few programs experimented boldly; others are still waiting, and some refuse to acknowledge that things have changed. None of this was inactivity. Higher education has been working, often quietly, unevenly, and usually on top of everything else already demanded of it. And that distinction matters.
A Story About Timing
What I’ve come to believe over the past three years is that generative AI is not primarily a story about technology. It is a story about timing.
A fast-moving tool entered institutions designed to move deliberately, for very good reasons. And what we are seeing now is the result of those two speeds colliding. To boil down the last three years and the generative AI discourse in higher ed: many classrooms are adapting, but many institutions are hesitating.
Understanding why that gap exists may matter more than understanding the technology itself.
I think one thing I overestimated was the degree to which higher education would actively engage with and prioritize this moment. I didn’t necessarily expect a rapid transformation. But there were enough early conversations framing generative AI as a meaningful disruption that I believed institutions would recognize this as something requiring a coordinated response; something that demanded attention beyond semester-by-semester adjustment. There was a sense that we would collectively pause and ask: What does this mean for how we prepare students for the world they are already entering?
Instead, what emerged was something more complicated. A significant and disconcerting amount of indifference. Refusal to acknowledge that generative AI was here, or what it meant for teaching and learning. A reasonable amount of resistance, often grounded in genuine places of care, sometimes thoughtfully engaged, and at other times exhibiting the very lack of critical thinking we claim is disappearing among our students. Yet overall, diffusion happened. Engagement happened, but unevenly, locally, and often individually rather than collectively. And that realization has become increasingly concerning to me.
Where Change Has Happened
Because more than three years into generative AI, we now have students graduating who have had very little structured opportunity to critically engage with tools already shaping professional and social life. Despite an abundance of research, resources, and examples, engagement has often remained optional rather than intentional at the program or institutional level.
At the same time, real change has been happening, largely in classrooms. We’ve seen shifts in assessment, shifts in conversations about what students need to demonstrate, and shifts in how educators think about authorship, process, and professional responsibility.
Still, I’ve heard educators say things like: “This is your education — you can do with it what you want. I will evaluate what you submit and assume you are making choices about your own learning and future professionalism.” And then they pay almost no attention to whether the work reflects appropriate or inappropriate AI use. That makes rational sense if one isn’t going to change one’s assessment strategies to reflect the times, because in the absence of programmatic or institutional shifts, one is left spending countless hours and communications trying to certify that each piece of work is a legitimate product of the student.
Across institutions, individuals have experimented thoughtfully. Conversations have expanded. New instructional practices are emerging. Effective pedagogy did not disappear. As my friend and colleague Autumm Caines says: if you’re looking for AI teaching advice, good pedagogy is nothing new.
But these changes to teaching and learning are often happening without institutional alignment or shared direction. At the same institution, students can be shown powerful and critical ways to engage with these tools in one class, only to be told they cannot use them at all in another. We know variation across classrooms is normal. Yet what happens when the AI-enabled course is an elective while the AI-prohibited course sits at the center of a student’s major? What does the student lose in that moment, especially when the discipline itself is being very much disrupted by AI?
Complete resistance to and disengagement from AI in 2026 can feel like teaching a course without the internet in 2015. It has as much chance of harming our students as helping them.
And I want to be clear: this is not an argument that every instructor must use AI, or that every assignment should involve it, or that skepticism is misplaced. Many of the concerns raised in 2023 still matter today: labor, authorship, bias, environmental cost, surveillance, and equity. But avoiding engagement altogether does not protect students from those realities. It removes the opportunity for them to understand and navigate them critically.
Whether we invite AI into our classrooms or not, generative AI is already shaping how writing happens, how research begins, how ideas are explored, and increasingly how work itself is organized outside higher education. And that leaves us in a difficult but important position as educators. We are no longer deciding whether students will encounter these tools. We are deciding how prepared they will be when they do.
Pedagogical change is happening. Educators are experimenting. Students are already adapting. But institutional responses remain uneven, cautious, and often unclear.
Institutional Attention
Here’s an important disclaimer: I’m not trying to throw institutions under the bus. Individuals care. Faculty care deeply. Administrators care deeply. What I’m talking about is institutional attention: the ability of a complex organization to coordinate focus, resources, and support around emerging change.
Because when institutions hesitate, the burden of sense-making shifts onto individual faculty and individual students, each trying to navigate structural change largely on their own. What we are seeing right now is not the absence of change. It is change without coordination or adaptation without shared direction.
The last three years are less a story about technological disruption and more a story about institutional time. How do systems designed for deliberation encounter moments that demand responsiveness? Higher education institutions want to care, but they are struggling to prioritize AI within a landscape where everything already feels urgent, underfunded, and necessary.
And that raises the central question: if effective pedagogy still works (and it does) and if educators are already adapting (and many are), why do institutions find it so difficult to move with the same clarity? Put differently: why has higher education largely responded to generative AI semester by semester rather than systematically?
Why Institutions Struggle to Move
Why do institutions filled with thoughtful educators—people already adapting their teaching and thinking carefully about student learning—still struggle to move collectively when change is clearly underway? I don’t think the answer is resistance, nor is it a lack of awareness. The answer lives at the level of structure and how institutional attention works.
Higher education is much more than a collection of classrooms and topics. It is a coordination system. And generative AI arrived at a moment when that system was already stretched across too many priorities at once.
Not all faculty are disengaged from this moment. In fact, many faculty are deeply attentive to their disciplines, to their students, and to teaching and learning itself. They are constantly making decisions about pedagogy, assessment, curriculum, and how students develop knowledge and professional identity. That work has just become more complicated with the arrival of AI. But faculty attention exists inside institutional structures.
And institutions ask faculty to do many things simultaneously: teach well, support students, maintain disciplinary expertise, conduct research or professional work, participate in governance, serve committees, respond to accreditation demands, adapt to new technologies, and continually revise curriculum. So the issue is not that faculty lack attention. The issue is that institutions often lack the coordinated attention necessary to support and scale what faculty are already trying to figure out. Generative AI became one more domain where educators were experimenting locally while institutions struggled to focus collectively. What we’ve seen over the last three years is strong individual engagement paired with weak systemic coordination.
One of the challenges is the different questions that need to be asked at different levels.
Faculty rightfully ask: How does this affect writing in my class? What counts as learning now? What should students demonstrate?
Meanwhile, institutions must ask a different set of questions: What tools are allowed? What are the legal implications? How do we ensure equity across programs? Who pays for access? What guidance applies across disciplines?
Those questions move more slowly because they require alignment across units that operate on different timelines and responsibilities. So adaptation begins in classrooms first, and institutions follow later because coordination takes longer than experimentation.
Another structural challenge is that higher education institutions today operate as what we might call “everything institutions.” They are expected to provide: academic learning, workforce preparation, mental health services, technological infrastructure, community engagement, research production, belonging initiatives, accessibility support, and more.
Each expectation is legitimate. And each demands attention, staffing, funding, and leadership focus. So when generative AI arrives, especially at the tail end of a pandemic that reshaped nearly every institutional function, and before the political pressures foisted upon higher ed in recent years, leaders are not deciding whether AI matters. They are deciding when and how to decide, among dozens of equally urgent responsibilities.
AI also enters institutions differently than many previous educational technologies. It touches teaching immediately but raises broader institutional questions simultaneously. Adopting tools involves cost, privacy considerations, accessibility compliance, labor implications, intellectual property concerns, and long-term sustainability decisions.
From inside institutional leadership, moving slowly can feel responsible rather than hesitant. Deliberation becomes a form of care: an attempt to avoid unintended harm. The challenge is that technological change, and AI especially, does not slow down while deliberation occurs. And that creates tension between caution and relevance.
Temporal Misalignment: The Red Queen Problem
Technological systems advance through rapid iteration. Institutions evolve through governance, consultation, and shared decision-making. Curriculum revisions take semesters. Policies require review cycles. Consensus requires conversation.
These processes protect essential academic values: academic freedom, shared governance, and disciplinary autonomy. As a point of comparison, if we look at the world around us, we can see that it is in fact these processes that are helping to protect democracy. But they also mean institutions operate on a different temporal rhythm than emerging technologies. It can be easy, and I feel this regularly, to interpret stagnation as institutional failure.
What we are witnessing is better understood as a temporal misalignment. A fast-moving technology encountering a deliberative system designed for stability. This reminds me of the Red Queen in Through the Looking-Glass, running as fast as you can just to stay in place. Higher education appears both active and stalled at the same time. And both perceptions are accurate.
Understanding this changed my perspective. It became less useful to ask: why hasn’t higher education responded? And more useful to ask: how does higher education learn while change is already happening?
Because while institutions deliberate, students are already living in the future that these tools are shaping. That concern has stayed with me over the three years since that first talk.
So I have two strains of thought that emerged from this realization — one a bit more abstract, and one more practical and immediately useful. I’ll let you decide which is which.
The Infinite Plate
I’m a big fan of Douglas Adams for many reasons, but one of them is that I borrow from him regularly when trying to navigate the ridiculous and absurd — and I think many of us would agree there has been plenty of that to navigate in recent years.
In The Hitchhiker’s Guide to the Galaxy, Adams gives us two useful nuggets. I’ll explain the technical one first and then the easier, more digestible one. He introduces the Infinite Improbability Drive, an engine that can move a ship from one spot in the universe to nearly anywhere else almost instantly, bypassing light speed and all those real limitations. In the book, the drive was long thought impossible to create because it would require calculating an infinite improbability, and that calculation was impossible. Then someone reasoned: well, if creating the drive is a virtual impossibility, then it must be a finite improbability, which means the odds of creating it could be calculated after all. That’s the technical part.
The easy part is my favorite Douglas Adams quote of all time: “Time is an illusion. Lunchtime, doubly so.”
So what does this have to do with AI—other than that it also sometimes feels like a hallucination?
I took something really valuable from this that I keep thinking about, really, in all my work, but specifically with the overfilled plate of issues within higher ed. If we take the fact that higher ed’s “things to deal with” plate is always going to be full, dare I say, infinitely filled, then there will never be a moment when there is less on the plate. That becomes a constant, a fixed probability.
There’s always going to be more on the plate; it’s always going to be pressing. And because of that, I can give myself permission to take time to figure out AI without guilt, because in the face of the endless to-do, in some ways, it matters less that we don’t get to all the things.
Higher education’s plate is always full. There will never be a moment when institutions are not dealing with more priorities than time allows. There will never be a semester when everything is settled enough to finally focus on the next challenge. That condition is not temporary. It is structural.
So if we wait for the plate to clear before engaging AI thoughtfully, we will wait forever. And once we recognize that the plate will always be full, something changes. We can permit ourselves to make space intentionally, because some things matter enough to begin anyway. To me, that is a useful framing for leadership across institutions. Not perfect. Not mathematically precise. But a helpful rule of thumb. Lunchtime may be an illusion, but we can still decide to take lunch.
Centering Students
Which leads to the second thought: if institutional change moves slowly, then an important question becomes: who helps higher education learn in real time? And we largely overlooked the most obvious partners from the very beginning. Students.
Students should not be treated as an afterthought in the AI conversation. They should be involved from the start.
Students will not magically solve our policy challenges or even know what’s best pedagogically speaking (though I think they can be incredibly helpful in such spaces).
Rather, including students is essential because they are the ones who have to live inside the consequences of our indecision.
An institution’s hesitations don’t come across as deliberation for students paying for an unaffordable education; they come across as inconsistent with the promises those institutions made when students applied, accepted, and paid their bill.
Students experience one class building critical fluency, another class banning the tool entirely, another class ignoring it altogether, and then a workplace assuming competence. That inconsistency is confusing. And more importantly, it becomes inequitable. When the institution doesn’t teach students how to engage critically with the tools shaping their world, the hidden curriculum becomes even more powerful.
Students with access to strong networks, mentorship, and confidence will figure out how to navigate generative AI anyway. Students without those supports are more likely to either misuse it, fear it, or remain unprepared.
“Centering students” goes beyond asking them whether they like AI, or conducting a survey and calling it shared governance. I mean involving students in the work of: defining what responsible use looks like; identifying where AI helps and where it harms; clarifying what counts as learning in a course and a discipline; surfacing the pressures that lead to misuse; and shaping norms that are realistic rather than aspirational.
That’s what we tried to do in 2023 with the AI and Education course where students co-created the institutional policy for AI for students and faculty. It’s certainly not the only way to go about it, but it felt ethically and pedagogically aligned with what this moment demands.
I think there’s more to be said and done about including students and addressing institutional attention, but for now, I want to transition back to thinking about what’s next for teaching and learning.
A Better Question
For the last three years, a lot of institutional energy has been spent on the question: Does AI belong in education? I understand why that question felt urgent in 2023. But in 2026, I think that question is largely behind us.
A better question now is: how do we help educators adapt their practices with intention and clarity, while helping students learn how to navigate these tools critically and ethically? That is a very different kind of question. It turns our attention toward pedagogy, disciplinary judgment, and intentional experimentation: places where we already are with our teaching and also where we need to be for our students.
By experimentation, I mean small, deliberate experiments that help answer questions like: What learning outcomes are meaningfully human here? Where does AI help students practice, and where does it short-circuit learning? What do we want students to show that can’t be outsourced? What do we want students to learn to do with AI, because they will need it beyond this course? How do we meet students where they are and guide them to where we and they want to go?
Good pedagogy is not new. But some of the moves we use to enact it are changing. One framework we need to work with is: what are the new moves and new approaches to structure learning in a way that makes sense in the new learning landscape?
New Moves: Researching in an Age of AI
Let me illustrate what I mean by “new moves”.
Where do you go to do your initial scan of the literature and get a sense of a topic you are researching?
Most scholars today go online to Google Scholar, institutional databases, or other repositories. What we have to remember is that this is a fairly new practice. If you were doing research before the 1990s, there was a good chance you started with a card catalog. You would go to the physical library, pull up the cards, and start making connections. I remember starting to use platforms like JSTOR and having faculty tell me I wasn’t doing it right, that going online was absurd. Does that sound familiar?
If we’re being honest, the vast majority of us would not go back to the card catalog (if card catalogs still existed anywhere besides Etsy) when digital databases are an option. It would be inefficient and wouldn’t work.
So here’s another question: Can we identify problems and limitations we run into when using online databases for our research? Paywalls, difficulty narrowing focus, vast amounts of information to filter through, rusty search strategies, and a lack of interoperability and consistency, to name a few.
What we’re pointing to is that the systems we currently have are imperfect (I actually know this deeply, since my dissertation is literally about accessing academic research and the rise of academic piracy). But we still prefer these imperfect systems to the previous imperfect system. And, in fact, we still find ourselves having to teach students how to use online databases when they do research in their courses.
So what we’ve established is:
New research practices and technologies have emerged and been widely adopted.
Those new practices and technologies are imperfect.
We would still prefer these new practices even if we had both the card catalog and the digital database available.
We still have to show students how to use them.
This is a prime example of what I mean by a “move.” We moved, we adapted, and we recognized, despite imperfections, that the new system is still what works.
And that is the move we should be making with AI and research. Yes, these platforms are imperfect. But what they can surface, particularly deep research tools, is useful. Not perfect. Not the end of a research process. But a good start.
And that is where students are going already. So we can try to get them started in those traditional databases, or we can meet them where they are and teach them to use these tools well, just as we once had to teach them to use JSTOR, and before that, the card catalog. Telling them not to use AI tools is like telling them Wikipedia doesn’t count. At some point, we’re all better off showing them how to use it well, and then using that as a bridge to the databases where and when needed.
The resistance to using these tools is real. And so is this moment where students are already going there because it’s less friction-filled than the database system. Our goal is to help them use these tools well as a starting point.
Making Time for What Matters
I can already hear the response, because it’s the response many of us have internally: this sounds great, but when do we have time to figure out these new moves? And that’s where I return to Douglas Adams and the infinite plate. If we accept that the institutional plate will always be full, that there will always be more to do than we can reasonably accomplish, then waiting for a quiet moment to deal with AI means we will never deal with it. The more realistic move is not waiting for space. It is deciding what deserves space. And to me, preparing students for the world they are already living in deserves that space.
After all, what we choose to ignore becomes what students are left to navigate alone.
So here is what I think the “future-past” teaches us after three years. The early predictions weren’t entirely wrong. The biggest change is that AI has forced us to confront what pedagogy actually requires when students can generate fluent work without learning. Effective pedagogy still works. But the conditions around it have shifted. And the question is whether our institutions can support educators in making those shifts with clarity, without treating every classroom as an isolated experiment.
Closing
So I’ll end where I began. Many classrooms are adapting, but many institutions are hesitating. It’s not that higher education doesn’t care; it’s that we are running a deliberative system in a moment that demands responsiveness. And we can’t solve that by pretending AI isn’t here, or by expecting individual faculty to shoulder the entire burden, or by leaving students out of the conversation until we feel certain.
If we want to respond well and thoughtfully, we don’t need perfection. We need a shift: from reactive to responsive; from isolated to shared learning.
That’s what conferences like this one are about, moving from “does AI belong?” to “how do we adapt with intention and clarity?”; and from doing this to students to doing it with them. Because we are no longer deciding whether students will encounter generative AI. The question is what we decide is worth making time for.
Thank you all so much.
The Update Space
Upcoming Sightings & Shenanigans
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recent Recordings, Resources, & Writings:
Margin of Thought with Priten: Season 1, Episode 5: How Can We Center Pedagogy During the AI Tech Wave? (February 2026)
Online Learning in the Second Half with John Nash and Jason Johnston: EP 39 - The Higher Ed AI Solution: Good Pedagogy (January 2026)
The Peer Review Podcast with Sarah Bunin Benor and Mira Sucharov: Authentic Assessment: Co-Creating AI Policies with Students (December 2025)
David Bachman interviewed me on his Substack, Entropy Bonus (November 2025)
The AI Diatribe Podcast with Jason Low (November 2025): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
AI Syllabi Policy Repository: 200+ policies (always looking for more; submit your AI syllabus policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form, and I’ll get back to you soon!
We periodically host small-group workshops and leadership sessions for higher ed teams. You can learn more about our current offerings here.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International
I'm seeing the same pattern in K-12. Individual teachers are adapting, but schools as institutions are struggling to move with any coordination. Meanwhile parents are sidelined, trying to figure out what their kids actually need while the system deliberates behind closed doors. Your point about students living inside the consequences of institutional indecision is important. For younger kids, it's parents absorbing those consequences on their child's behalf.
Wonderful talk, Lance. So glad you picked up those themes (from 3 years ago!!) and returned to them. This piece reminds me of two ideas that I think are useful when thinking about change in higher ed. They have been on my mind, even more so lately.
Charlie Stross, the scifi writer, talks about corporations as "old, slow AI" and colleges and universities are among the oldest and slowest AIs around. Arvind Narayanan and Sayash Kapoor talk about AI as normal technology, meaning that when we think about the processes by which a new tech is adopted and adapted--that is diffused through society--they happen according to human norms and rules. After all, we've had self-driving cars for a while now...we just don't let them on the roads because we don't have confidence that they work well enough.
Institutions of higher ed are responding slowly to the threats and opportunities afforded by large AI models, and that's what we should expect. As you say, the work is not to speed things along, it is to think and take care, especially with how we engage students, in the work of figuring out how to use this technology to create educational experiences that are valuable in the broadest sense of value.