The Sleep of Liberal Arts Produces AI
A keynote at the AI and the Liberal Arts Symposium
This past weekend, I had the honor of being the keynote speaker at a really fantastic conference, the AI and the Liberal Arts Symposium at Connecticut College. I shared a bit about it before in my interview with Lori Looney. It was an incredible conference, thoughtfully composed, with a lot to chew on and think about.
It was also a brand-new talk in a slightly different context from many of my other talks and workshops, something I had to build entirely from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.
It was a real challenge, one I’ve been working on, off and on, for months, trying to figure out the right balance. It’s a work I feel proud of because of the balancing act I try to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with CC license).
Introduction
Greetings, everyone. I’m so glad to be here. I’ve been thinking about this talk since I was first invited back in the spring, and I can tell you that it’s gone through seventeen different versions—though I think I’ve finally landed on the right one. I think.
In finding my way to the title and focus, I returned to one of my favorite works of art. You’ll find that throughout this talk, I’ll be sharing several of my favorites—pieces that have shaped my personal and academic journey and reflect a deep love and care for the liberal arts. I say that as someone who has not one but two degrees rooted in that tradition.
Before we start, a quick warning—and a request offered in good faith. What I share today may feel provocative. At moments, it might even sound like I’m standing outside, critiquing what many here hold dear. Please know that isn’t my aim.
I take the role of keynote speaker seriously. A good keynote, I think, should push our thinking, test our assumptions, and still leave us with a way forward.
So, as you listen, I hope you’ll stay with me to the end—and tell me if I’ve done that. This is, at its core, a message of care: one crafted for and offered to a community I care deeply about.
Let’s begin.
A Side Note: My partner & I at Anchor Insights are having some fascinating conversations around higher ed, change management, and AI. One project we’re pursuing explores different ways to support middle- & lower-resourced institutions in operationalizing AI. If you’ve got 20 seconds, would you consider completing this 4-question survey? (If you haven’t already!) We will share the results here on Substack in November.
Francisco de Goya, The Sleep of Reason Produces Monsters

I first came across Francisco de Goya’s work at an exhibit at the Portland Museum of Art. The exhibit included a collection of his etchings, many of them unsettling, even monstrous. At the time, I was teaching a course on monsters at Emerson College, so naturally, his work caught my attention. I began reading more about Goya and eventually found my way to The Sleep of Reason Produces Monsters.
Take a moment to look closely at the image. Study it. Think about its title. What story do you see unfolding?
I imagine a few of you have already googled it or maybe even asked ChatGPT for help, but I’m especially curious to hear from those seeing it for the first time. What stands out to you? What do you make of it?
Some of its symbols are clear. A man slumps over his desk, asleep. Behind him, owls and bats circle, which in Goya’s time stood for ignorance and fear.
Interpretations
This etching comes from Los Caprichos, published in 1799, a series that dared to expose the superstition and hypocrisy of Enlightenment Spain.
Over the centuries, it has invited many interpretations:
Some see it as a warning: that when reason sleeps, ignorance and fear take control.
Others read it as subtler: that pure reason, severed from imagination, becomes monstrous itself.
And still others see a political message: that when the intellectual class grows self-satisfied, lulled by its own certainty, it breeds the very irrationalities it once sought to banish.
The central figure might be a scientist, a philosopher, or an artist—someone whose vocation was to stay awake. Yet fatigue, whether intellectual, moral, or civic, has crept in. The late 1700s were an age of upheaval and exhaustion. Much like ours. The fatigue is real; the weariness, familiar.
So I wonder: if Goya warned that the sleep of reason produces monsters, what might the sleep of the liberal arts produce? What happens when the very disciplines meant to keep us awake begin to dream instead?
The Sleep of Liberal Arts Produces AI
I invite you to sit with that question for a moment.
If Goya’s image belonged to our time, what would be gathering behind the desk?
What would our monsters look like? Any quick thoughts? Just one or two voices.
As the title of the talk suggests, for me the sleep of the liberal arts gives us AI. What does it mean to you?
What we fear, resist, or celebrate in AI often tells us more about ourselves than about the technology itself. When I think about the relationship between AI and the liberal arts, I see echoes across intellectual history:
Like the Enlightenment thinkers, we see ourselves as the last bastion of rationality in a world gone irrational, where fake news, fallacies, and fascism run rampant, and AI only accelerates the meltdown.
Like the Romantics, we fear the terror of unfeeling reason, the coldness of intelligence without soul, calculation without compassion. AI becomes the embodiment of that nightmare.
Like the Modernists, we feel the disillusionment of progress itself. We build machines meant to liberate us, only to find ourselves working harder, producing more, believing less.
And like the Postmodernists, we recognize that the pursuit of pure reason and objectivity breeds its own monsters: bias, exclusion, control. AI doesn’t stand apart from that logic; it perfects it. The algorithms don’t dream of progress or reason’s sleep. They are the dream, made from our data, our prejudices, our desires.
In that mirror, the liberal arts have urgent work to do. Not to restore an old faith in reason, but to teach us how to read the distortions it leaves behind.
What Is a Sleeping Liberal Arts?
What I want to do next is connect a few dots: observations and experiences with the liberal arts that have led me to this provocation:
The real threats and concerns of generative AI arise, in part, because of the liberal arts themselves.
Now, to be clear: there are many legitimate reasons to be concerned about AI, from the ethical and social to the political and technical. There are entire libraries of books, articles, and posts examining those issues, and we should all be engaging with them. Absolutely.
But today, I want to shift the lens a bit. To say: yes, it’s all that—but it’s also us. Before we focus too much on what’s “out there,” we need to look inward and ask where we may have fallen asleep.
So, I want to take you on three short journeys: three ways of looking through the glass at ourselves. These reflections draw from my own studies and experiences within the liberal arts. They’re not indictments but invitations: to examine the tensions between what we say we value and what we sometimes enact, and to consider how those tensions may have helped shape the moment we now find ourselves in.
Dismissing
I want folks to take a moment and think. Long and hard. Recall something that you were deeply interested in and excited about.
Ok, hands up: how many of you were told that this something, something you invested a lot of time and joy in, was irrelevant, meaningless, or made you less of a person?
Keep your hands raised if you continued to invest in that interest.
Oh gosh, isn’t that so sad? Watching those hands go down is like watching candles of curiosity being blown right out.
How many are still investing in that thing today?
And finally, how many of you are scholars in that thing today?
Many of us have been told what is and isn’t legitimate to explore and enjoy; it happens in every culture. I have my own long history with it as well. When I was a kid, I was told by my father and others that reading comic books and playing video games were a waste of time.
When I fell in love with audiobooks, a passion that I still hold today, I was told it didn’t count as reading and that I shouldn’t say that in front of my teachers.
In my senior year of high school, I got in trouble in English class because I did not do the summer reading. Not because I didn’t want to read, but because none of the ten books on the very limited list appealed to me. Instead, I read over 30 books that summer, some of them as long as 1,000 pages. But none of it counted; it wasn’t “literature”; it was genre fiction.
To be fair, it wasn’t all rejection. There were bright spots: an English teacher who offered a course on speculative fiction, college professors who brought film or comics into the classroom. But even those gestures came with a quiet asterisk, a sense that these were exceptions, fun detours.
Those experiences stayed with me. They shaped what I would later come to study. When I began my first master’s degree in American Studies, after earning a B.A. in History, I realized that popular culture could be a site of scholarship. In the mid-2000s, that realization led me to explore:
Zombie narratives as post-9/11 allegories of fear and resilience.
Representations of trans histories in Nip/Tuck.
The impact of audiobook narrators on Stephen King’s writing.
How cultural exploration stagnates under intensifying copyright regimes.
Through all of that scholarship over the years, a pattern became clear: the recurring urge to dismiss what is new, popular, or accessible. Every emerging medium is met first with suspicion.
Dime novels, for instance, were once condemned as corrupting influences on youth—yet they produced Louisa May Alcott, Charles Dickens, Robert Louis Stevenson, and Harriet Beecher Stowe.
Photography was scorned for decades as mechanical imitation before it was elevated to fine art.
Film, radio, comic books, television, video games, and digital media were all derided as shallow or dangerous until, much later, the academy found them worthy of study.
This cycle of dismissal is almost ritual: what people love first, institutions tend to distrust.
And yet, for me, popular culture has always been the doorway into deeper learning. DuckTales—and yes, I mean the cartoon—sparked my lifelong fascination with The Odyssey and the epics of gods and mortals. Final Fantasy II made me think seriously about redemption narratives and why we return to them again and again. The Hitchhiker’s Guide to the Galaxy audiobook taught me more about tone and irony than any writing workshop ever did.
These were not distractions from meaning. They were gateways into it.
So when I talk about the “sleep of the liberal arts,” this is one of the places I see it most clearly: in our inherited habit of dismissal. In the instinct to guard the borders of legitimacy rather than expand them. And when we do that, we risk cutting ourselves off from the very forms of imagination and expression that keep culture and education alive.
Fetishizing
We do, of course, want to study things deeply. We want rigor, consistency, and care in how we create and share knowledge. But somewhere along the way, we’ve begun to fetishize that process. We can treat the act of studying itself as sacred in ways that sometimes make learning less accessible, less human, and frankly, less joyful.
We make writing and learning unfun to do while simultaneously making what we write and what we learn hard to reach.
We have, I think, a holy—if occasionally questionable—fixation on writing as the ultimate proof of thought. Writing has been the marker of legitimacy, the visible evidence of intellect. We elevate writing as the “true form” of thinking, and the more formal and structured it is, the more it is assumed to have value.
Consider the academic forms that mark our progress: the entrance essay, the capstone, the thesis, the dissertation. These are supposed to be demonstrations of growth, but often they serve tradition more than students.
How else do we explain the pattern we see everywhere in higher education where the higher you climb, the more people fall away? The attrition rate from undergraduate to doctoral study is not simply about talent or commitment; it is also about the narrowness of what counts as legitimate expression.
I say this as someone who has been writing since I was eleven, and as someone who has:
Been a professional writer for more than twenty years.
Taught writing for nearly two decades.
Presented and published scholarship for seventeen years.
Earned six degrees—three of them master’s degrees and one a doctorate.
And yet, I can say with full honesty: I am a writer, scholar, and educator because of computers. Without that technological intervention, my path would be entirely different.
If I had been required to handwrite my way beyond middle school, I wouldn’t be standing here today. My educational trajectory changed the moment a computer entered my life and my teachers allowed me to type. I hadn’t changed internally, but the perception of me changed. My messy scrawl transformed into neat, legible lines. And suddenly, doors began to open.
That small shift—how “good writing” was visually judged—had a lifelong impact on my education. And while that was thirty-five years ago, those same hallmarks persist today. We still hold deep, often unspoken assumptions about what legitimate writing looks like, sounds like, and feels like.
You could see this reflex surface again when AI entered the classroom. The first instinct was not to explore or to ask, but often, to accuse. Within the year, we saw disproportionate suspicion cast toward Black students and multilingual writers. The “fetish” for particular kinds of writing—certain tones, certain sentence structures—reasserted itself, this time through algorithms and bias detection tools.
And it’s not just writing. We fetishize knowledge itself into forms that, intentionally or not, alienate as much as they enlighten. We construct explicit and implicit rules about what counts as real scholarship, then punish or exclude those who don’t know the hidden curriculum.
We make our fields, our ideas, and our insights illegible to students and to the public because we privilege research over communication and production over connection.
I specifically remember sitting in my American Studies seminar, trying to make myself smaller as we discussed a single chapter of Michel Foucault. Around me, some students were absorbed, others pretending. I wanted to contribute anything to prove I understood. I didn’t.
It took me fifteen years to finish that 200-page book, The History of Sexuality, Volume 1. And I finally did so, not because my intellect grew sharper, but because it was finally available as an audiobook.
Now, there are good reasons for complexity and theory; such texts serve real purposes.
But I can’t help wondering how many others have been turned away by the sheer inaccessibility of our language. Honestly, I would have loved a “Foucault-bot” to test ideas with. In fact, some of the clearest, most engaging philosophy I’ve encountered has come from creators like PhilosophyTube and ContraPoints: doctoral students who left academia to make difficult ideas tangible and humane.
I realize that calling this tendency fetishizing might sound harsh. It’s not meant to disparage deep scholarship or the beauty of complex thought. But from the perspective of those trying to enter, or just trying to understand, it can feel like a gatekeeping ritual. A way of keeping others from finding community, participation, or belonging in the intellectual worlds we claim to cherish.
Externalizing
Let me start with a quick show of hands for the educators in the room.
How many of you assign learning materials that your students have to purchase?
Now, keep your hand up if you know exactly how much those materials cost.
It’s always striking, isn’t it? Most of us know the content of what we assign, but not the price. And that’s understandable. We often inherit syllabi or build courses around texts we love and trust. Those materials become part of our pedagogical DNA. But they also create real, compounding costs for our students.
And if you’re skeptical of AI and think it’s something external to the conversation about textbooks, you might be surprised. Textbook publishers are already embedding AI into their platforms, monetizing it, and selling it back to us. Students’ activity on these platforms provides the raw data for those AI tools, which will inevitably yield more features for faculty and others and, therefore, more costs for students. That data is becoming another revenue stream. Yet another form of extraction.
I raise textbooks here not to moralize, but to illustrate a larger pattern: how easily we externalize the costs of knowing. We build systems that expect others (often the students and public) to bear the financial, temporal, and cognitive costs of accessing our ideas.
We want to serve the common good, to cultivate critical thinkers, to make knowledge a public trust and yet, we’ve quietly locked much of it behind paywalls.
I’ve seen this pattern play out in my own teaching. I taught American Literature for years. When I started, I did what most of us were taught to do: I assigned the big anthology, with pages so thin they were practically transparent, filled with far more content than any student could absorb in one semester. Of course, it cost about a hundred dollars.
Somewhere along the way, I discovered that almost everything I was assigning was already in the public domain. So the next time I taught the course, I collected those materials myself, compiled them into PDFs, and shared them freely.
That simple shift turned out to be my unintentional entry into Open Educational Resources. From there, I made it a goal: every course I taught would use OER or keep materials under twenty-five dollars. Eventually, I helped launch institutional OER initiatives because it just felt wrong to keep passing on the same costs I had once struggled to pay myself.
As I moved into doctoral work, I started thinking about openness on a larger scale. Nearly two-thirds of all research in the United States is funded by taxpayers. Yet most of the publications that result from that research are locked behind publisher paywalls. Libraries, hospitals, nonprofits: they all have to re-purchase access to that same scholarship year after year.
Let me ask: how many of you have published academic work?
And how many of you know how much it costs for someone outside your institution to read that work?
Here’s an example that encapsulates the issue. One of the most cited papers of all time, Ulrich Laemmli’s 1970 Nature article on bacteriophage protein structure, was written at a public university.
Over fifty years later, it’s still behind a paywall: $39.99 for the PDF. It’s been cited more than 270,000 times. Conceptually speaking, if each citation represented a single purchase, that one article would have generated about ten million dollars in sales for Nature (270,000 × $39.99 comes to roughly $10.8 million). And it won’t enter the public domain until seventy years after the author’s death. The author, by the way, is still alive.
Now imagine that scale across all the scholarship in this room: our work, our colleagues’ work, our students’ future work. How much knowledge is locked away and benefiting private interests over public?
And here’s the truly jarring part: many of those same publishers are now selling our work again, this time to AI companies, without our consent or compensation. I’ve come to call it academic fracking: extracting value from our intellectual commons, layer after layer, until nothing of public good remains.
That realization shaped my dissertation. I studied how scholars understood their identities in a world where they had to use illegal means—sites like Sci-Hub or Library Genesis—just to keep doing their jobs. I interviewed independent scholars, community college faculty, and Ivy League professors alike. All described the same paradox: to be a scholar today, you often have to break the law to access knowledge produced for the public good.
And that, to me, is the deepest irony. In the liberal arts, we often critique capitalism’s exploitative systems, yet we reproduce the same patterns in our own knowledge economy. We externalize the costs of learning and call it normal.
So when I say the liberal arts have been “asleep,” this is another form that sleep takes. We dream of being the stewards of human wisdom, but we’ve become the guards of gated communities where only those who can pay for entry get to join the conversation.
The Sleep of the Liberal Arts Produces AI
Okay. I know I’ve painted a bleak picture here. And I imagine a few of you are wondering how it all fits together. What’s the throughline? What’s the takeaway?
Who, after all, came to a keynote to be told this?
Don’t worry. We’re getting there.
But first, I want to be transparent about where I stand in this story. I share these critiques not as an outsider pointing fingers, but as someone who has lived and perpetuated these same patterns.
In the first half of my career, I was a border guard, deciding what counted as legitimate study. I fetishized the learning process, mistaking complexity for rigor. I externalized the costs of knowledge, assuming others would find their own way in.
All of this, I’ve had to unlearn. And I’m still unlearning.
And I know I’m not alone. I’ve worked with hundreds of faculty over the past fifteen years, across disciplines and institutions, and I still see these practices everywhere. They are not born out of malice, but out of inheritance. They’re baked into how we were taught to value knowledge, and how we were taught to protect it.
So when I talk about the “sleep” of the liberal arts, I’m not talking about apathy. I’m talking about habit. About what happens when patterns of thought become so natural that we mistake them for virtues.
I worry that a kind of survivorship bias permeates our collective self-understanding. Those of us who made it through the long trials of academic life learned to survive dismissal, to internalize the fetish for certain kinds of learning, to tolerate the externalized costs of access. And because we survived, we assume the system works. But maybe what we call “rigor” or “excellence” is, in some cases, simply what we’ve learned to endure.
Now, I want to be careful here. What I’ve described is only part of the picture. There are many here who have already been doing the hard work of waking up: faculty and librarians advancing open access, people experimenting with open pedagogy, artists and scholars bringing new media into the classroom. Those efforts matter deeply.
But if we step back and take the pattern as a whole, a troubling picture emerges. Over time, the liberal arts have:
Disregarded technologies and mediums that people, especially younger generations, find meaningful.
Fetishized certain ways of producing and demonstrating knowledge, making them exclusive and inaccessible.
Externalized the very costs of learning and access to the point where some must break the law just to engage with our scholarship.
When you see these tendencies side by side, it becomes clear. It’s not just that AI filled a void; it’s that we created one.
If you are a student or a curious learner today, where do you turn for knowledge that is alive, interactive, and open? Where do you find meaning-making that feels accessible, responsive, and available? Increasingly, the answer isn’t us—it’s AI.
We have, however unintentionally, built a discourse of disregard, exclusivity, and distance from the very people we most wish to reach. And in doing so, we have made it easier for them to look elsewhere for tools, for answers, and for belonging.
That’s what it means, in my view, for the liberal arts to be asleep. Not absent. Not dead. But dreaming while the world outside moves restlessly on.
Awakening the Liberal Arts
So. How do we awaken the liberal arts? How do we move from critique to renewal?
Because the truth is, the world really needs the liberal arts. They are not an ornament of education; they are its conscience. They give us the habits of attention, the curiosity, and the ethical imagination to navigate a world that sometimes feels like the worst possible timeline and now, a world reshaped by AI.
At a time when programming jobs are shrinking and automation is accelerating, what we need are not more coders, but more thinkers: people who can interpret, question, and connect. The liberal arts cultivate that lens. They teach us to see systems and stories, power and possibility, in every context we enter.
When AI floods us with information, the liberal arts teach us how to make meaning.
When AI predicts the next word, the liberal arts ask why that word matters.
When AI reflects our biases, the liberal arts trace where those biases come from.
The liberal arts are not a luxury in the age of AI. They are the operating system for surviving it.
But renewal requires movement. It’s not enough to defend what the liberal arts once were; we have to reimagine what they can become. For me, that begins with three essential shifts. They aren’t the only ones, but they feel foundational: ways of turning critique into practice, and practice into possibility.
Engaging
We can’t afford a wait-and-see model with AI.
Now, I know what that sounds like. Some of you might already be hearing echoes of tech-bro optimism. The idea that AI is inevitable, unstoppable, and that resistance is futile. You may be bracing for a techno-determinist pep talk.
That’s not where I’m going.
Your caution is right. But curiosity is also important here.
If the liberal arts have always been about anything, it’s the courage to engage and experiment, to interpret and make meaning through practice as much as critique. And that means we must be as creatively engaged with AI as we are critically concerned about it.
We have to make things with it—to write, to teach, to design, to experiment. We don’t have to accept everything it represents, but we can’t understand what it is doing to us if we never step inside the process.
We can’t afford to dismiss it.
History has already taught us that lesson. Every new technology of expression—from print to photography, film to radio, television to the internet—was met first with anxiety and resistance. Each came with its own mix of affordances and dangers, yes, but also with new ways to tell stories, to connect, and to question the world.
Why should AI be any different?
If we retreat into rejection, we’ll wake up one day to find that meaning-making has moved on without us.
So instead, I return to one of my favorite bits of cultural anthropology disguised as comedy and my second-favorite quote from Douglas Adams:
“1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.”
Let’s not be the caricatures in this quote, and let’s not wait decades to make sense of a form of expression and creation that is already here.
Let’s engage now so that when we critique, we do so from experience, not distance.
Unleashing
Let’s do another informal poll:
Show of hands: how many of you have had meaningful conversations about AI & its place within higher education, teaching and learning, or your discipline with your colleagues and other faculty?
Keep those hands up.
Ok, how many conversations (roughly)?
Lower your hands if you’ve had fewer than 5.
Lower them as we go: fewer than 10? 20? 50? 100? Countless?
Ok, you can put your hands down.
Now, how many of you have had meaningful conversations with students about GenAI?
What do I mean by meaningful here?
I mean non-punitive. I mean actual conversations with real dialogue, not just a one-sided explanation of appropriate use with questions limited to clarification.
Conversations where students had an opportunity to contribute to the meaning-making, as much as you were imagining a moment ago when you thought about meaningful conversations with your colleagues.
Now, show of hands: how many of you have had those kinds of meaningful conversations with students about GenAI?
Look around. Keep those hands up! How many conversations (roughly)?
Lower your hands if you’ve had fewer than 5.
Lower them as we go: fewer than 10? 20? 50? 100? Countless, so many you can’t even count?
Ok, you can put your hands down.
What does that tell us about who is engaging with AI and how we go about it? I did this at another talk earlier this week and got similar results.
If you are teaching part-time or full-time, you are likely in far more ongoing conversations with students than with colleagues; you’re apt to interact with more students, more times in a week, than with your colleagues, especially over the last three years.
So why is the frequency of meaningful conversations inverted?
There are lots of different reasons for this, but an evident one is that we do not trust our students or believe they are capable of the critical insight to figure out how to use AI well. But, of course, if we have so few conversations with them, or so many one-sided ones, then how are we going to know?
To me, the lesson to take from the fetishization within the liberal arts is a wholly different lens.
It’s one that I’ve learned from a tweet: Jesse Stommel’s four-word pedagogy, “start by trusting students.” But I have also learned similar lessons from so many great educators out there.
We need to rethink how we perform and value the production and demonstration of knowledge. We have to step into our work with students and the world at large with the first principle that everyone is a critical thinker.
Full stop.
If that person has made it to stand in front of you in whatever context, they have had to navigate an incredibly complex and brutal world to be there at that moment, across contexts and with a range of capital that we may never know or understand.
Our work is to see and understand that critical thinking exists and to build the bridge from who and where they are as critical thinkers into the subjects of our courses and the questions of our disciplines. For too long, we have said that they need to understand our disciplines, but only if we understand them can we make our disciplines legible and meaningful enough for them to find their way, with us as guides.
Opening
We have externalized the costs of knowing. And in doing so, we push students and the public at large toward these tools, which do and will include that very same paywalled content in the data they are built upon, ready for them to explore.
So I keep returning to the question of openness as a practice.
In my writing, my teaching, my scholarship, I try to live that practice. Even this talk—everything I’ve said here today—will be published in full on my newsletter Monday morning, freely available to anyone who wants it. Knowledge should not depend on who can afford the ticket price to the room.
That, after all, was the spirit of the 1940 Statement of Principles on Academic Freedom and Tenure, which states:
“Institutions of higher education are conducted for the common good and not to further the interest of either the individual teacher or the institution as a whole. The common good depends upon the free search for truth and its free exposition.”
Eighty-five years later, it’s clear that we’ve worked hard to uphold the first half—the search for truth—but faltered on the second: its free exposition.
It’s also clear that by “free exposition,” they originally meant the ability to share knowledge uninhibitedly. But the fact is, if it is behind a paywall, it is not freely exposed; it is knowledge enclosed.
If we want to awaken the liberal arts, we have to recommit to that second principle: to the public life of knowledge. To make what we produce, what we love, what we labor over, as open as possible.
That work can happen in small, concrete steps:
Post accepted manuscripts to your campus repository.
Use a Creative Commons license by default.
Reconsider what it means to assign expensive learning materials; set a reasonable cost cap and solve creatively within it.
Add a plain-language abstract—two hundred words, maximum—to every article, talk, or assignment.
Pair dense texts with a glossary, a visual, or a short explainer video.
Because if there is a paywall between learners and knowledge, we will continue driving people to AI.
If people can’t read us, they can’t join us.
If they can’t understand us, they can’t trust us.
We have to stop pushing costs outward to students, to the public, and to the next generation of learners.
Staying Awake

We began today with Goya’s image of a sleeper, haunted by the monsters that rise when reason drifts into dreams.
And throughout this talk, I’ve suggested that our own sleep—our dismissal of new media, our fetishization of inaccessible knowledge and knowledge practices, and our externalizing of learning’s costs—has created a void that AI has rushed to fill.
But as I look out at you now, I don’t see sleepers.
I see people who are awake: educators, scholars, and leaders who wrestle every day with how to help students navigate a world overflowing with information yet starving for wisdom.
I see you doing the quiet, often invisible work of connection at a time that rewards speed over depth.
That work has a name. Sociologist Allison Pugh calls it connective labor—the practice of seeing and being seen that she explores in The Last Human Job.
It’s what happens when you reflect a student’s understanding back to them, when you build trust in a classroom, when you guide someone through moral or intellectual uncertainty.
And that labor is under threat. We are living, as Pugh warns, in a depersonalization crisis: a world trying to automate the very work that makes us human. Systems of efficiency keep trying to measure what cannot be measured, to mechanize what cannot be mechanized.
And so we are rightly trying to navigate that, and it is a big challenge.
Our collective task, then, is not to reject technology, but to find the right relationships with it and with one another.
And this is not only an intellectual challenge. It’s a moral one. Greg Epstein reminds us that technology has become a kind of global religion, a tech theology with its own promises of salvation and its own hierarchies of the elect. Left unchallenged, it risks becoming what Ijeoma Oluo calls “a white man’s version of utopia.”
Our role, then, is to be the loving reformers, the heretics and humanists who insist that justice and dignity must remain at the center of our systems. We can do so, in part, by making sure we are deeply part of the discourse to come.
So let’s leave here not fearing monsters, but recognizing our shared purpose.
The work ahead is to design seeing systems—technologies, classrooms, institutions—that preserve time and space for connection.
To remember that a liberal arts education does not expire. It compounds, preparing people not for one job, but for a lifetime of making meaning.
Goya’s etching reminds us that the sleep of reason produces monsters.
But our awakening and our shared commitment to engaging, unleashing, and opening can ensure that the work of the liberal arts continues to move students and society toward deeper understanding, relationship, and care.
Thank you.
Note about AI usage: This text was developed over the last two months through a mixture of approaches: having ChatGPT interview me about my thinking and framing, how I wanted the talk structured, the examples I wanted to draw upon, and so on. I also used it to clean up transcripts from sessions where I opened Google Recorder to talk through ideas, insights, and things to add. I additionally leveraged it to refine the arc of the piece for better clarity, alignment, and connection. Finally, it helped edit and, at times, reduce the original text.
The Update Space
Upcoming Sightings & Shenanigans
The AIs Go Marching On: Finding Our Way with AI in Education, NCFDD, October 24, 2025
Teaching in Stereo: How Open Education Gets Louder with AI, RIOS Institute. December 4, 2025.
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025)
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International

Lance, your Goya progression haunts -- monsters sharpening through "Dismissing," "Fetishizing," "Externalizing," then vanishing by "Awakening." Monsters don't vanish when we stop seeing them; they move into the architecture.
You know I worked in admissions at a selective liberal arts college; now my work explores how institutions create or destroy belonging, purpose, and social capital. Your keynote maps a relational rupture. When we tell students their audiobooks "don't count" or gaming "isn't real learning," it isn't just snobbery -- it's a belonging injury that also erodes purpose (why strive if my learning doesn't matter?) and blocks social capital (who opens doors when your knowledge is deemed illegible?).
The parent whose kid learns history through Assassin's Creed isn't failing school; the school is failing to see an asset.
AI didn't invade. AI arrived as a belonging machine -- not because it should, but because we left a vacuum. It answers without judging. It meets comic-book analogies where they live. It's awake at 3 a.m., indifferent to whether you learned from YouTube or a $200 anthology. It supplies recognition, purpose cues, and connection institutions too often withhold.
"Start by trusting students" isn't sentiment -- it's survival. When we stop seeing our monsters -- dismissal, fetish, extraction -- they become the water we swim in. Students, finding neither belonging nor purpose in those waters, swim elsewhere -- even at highly selective colleges with 90%+ overall graduation rates where subgroup gaps remain, often by double digits.
So the question isn't whether liberal arts can "compete" with AI, but whether we can remember what we're for: build seeing systems that recognize diverse knowing, ignite purpose, and grow social capital -- in service of the common good -- so students don't have to leave to be seen.
Keep going. Appreciate you, LE.
A brilliant talk, Lance! I will return to it again and again for inspiration, course correction, and ideas.