Recently, at the conclusion of a talk, someone started off the Q&A by referring to me as a “shill for AI.” It was a rather interesting moment in my public speaking experience, where the critique came at me personally rather than at the technology. This post is about my reaction in the moment, what I would have liked to have said, and why it matters how we navigate GenAI in the discourse.
My hope in this post is to give myself some space to reflect and also to give others some thoughts about where we find ourselves. For many good reasons, there are lots of tensions around GenAI and a sometimes-growing antagonism between those who are critical of and rejecting AI and those who are engaging with AI (regardless of whether they do so critically or uncritically). I think there should be meaningful conversations amongst these groups (and all the variations among and beyond them). But, at times, the dismissiveness of one by the other leads me to wonder how thoughtful we are being about the discourse as a whole. In particular, we talk past each other in ways that reproduce the very dynamics we critique in GenAI and the world at large. We’re robotically (if I may) reacting (not responding) to one another’s comments with rejoinders that re-entrench our own perspectives without doing the work of learning and thinking that we rally to when we talk about what our students should be doing.
And as always, I’m just as caught up in this as anyone else.
The Comment
I had just finished a talk exploring where and how we might thoughtfully navigate GenAI’s ubiquity in teaching and learning. I walked through the challenge it represents in creating new complications in course alignment, leading us to consider how to rebuild our courses, and the toll that can take on faculty, especially right after spending so much time repeatedly reimagining education through the different seasons of COVID. I demonstrated some ways that GenAI could be incorporated into activities, assignments, and learning.
As usual, I did my best to leave time for questions, and this person was the first to raise their hand, saying something to the effect of:
“I’m just amazed at what a shill for AI you are. This is absolutely terrible. Everything that you said that’s good about AI: it’s stuff that we should be doing together. You kept talking about collaboration, and that’s just replacing the means. So, now, we’re not having conversations in class. We’re not having conversations with each other. That means lost jobs, not only among faculty but among students. I came here hopeful to learn something about AI, something it can do that human beings can’t do better or that can make our lives better, and I’m not getting that.”
Now, there’s a lot to take in with a statement like that. It’s not a question, and while there’s a critique of the content of the talk, it comes first as a direct and specific critique of me. In the moment, it did feel like this person was coming out swinging.
The challenge here is that the scene isn’t set up well to actually engage in conversation. I’ve just finished speaking for a long stretch, I’m standing between folks and lunch, and the commentary requires much more thoughtful nuance and exploration. It requires learning more about what this person fears, what they know or don’t know about GenAI, what they’ve tried, what they’ve encouraged in their teaching and students, and so much more. It requires a lot more questions and understanding of the person before a thoughtful response can be composed. All of which is to say that I wasn’t at my most thoughtful or caring when I responded.
My Response
“There’s a lot of different ways I could respond to that. I’ve recognized that there is no cure-all, as I said earlier on. I would say I wouldn’t consider myself a shill for AI. It is more that I’m looking at the world around us, I’m engaging with students, and we no longer live in a world where AI doesn’t exist. I’m looking for where the possibilities are.
These tools are substantially and experientially different for our students. I know students who are using them to support their learning in ways that are not cheating or bypassing their learning, but helping them to do and learn better. And I want to find more of those examples.
I cannot express enough how uncomfortable I feel about the real issues around AI, whether it’s the environmental costs, the energy, the bias, the misappropriation of people’s materials. All of those are valid. And also, any device we’re using has these hazards attached to it, and I can’t solve that big problem. That is a collective problem that we aren’t ready to solve.
I’ll tell you why we’re not ready to solve it: because we have all of those concerns about generative AI, but those are the same concerns that we have had with Wikipedia, the internet, our phones, and our laptops. Those are perpetual problems we are choosing not to make right. We can worry about all the issues that AI represents, but those data centers were already here and already depleting resources every time we stream video. All of those things come with costs that we don’t want to talk about.
So I recognize all of that. And also, AI is here, and people are using it. In my role, I can try to find ways that are helpful. What I am finding from talking with thousands of faculty at this point is that providing these approaches and insights is helpful. They may not be universally applicable, but people find parts of them to apply and use. That’s the best I can do in this context.
It’s not going to solve those bigger issues, but until we’re ready to lean into them, including how much is being asked of faculty, how little time they have, how little time they have for students, or how busy students are—all that big picture stuff—I am as lost as anybody else.”
As you can see, there are some points to be made, but it’s also a bit of my own reacting and not responding. As I finished, the person also added this:
“That just means we get to do evil stuff and pretend that it’s not evil. That’s what you’re saying.”
Now, I know I shouldn’t have taken the bait. I should have left it at that, or appreciated and acknowledged it before moving on, but I was caught up in the moment and said something along the following lines:
“Sure. But that cup of coffee that you’re drinking took some 30 gallons of water out of a community that could have used it, and was most likely created by exploited labor. That is capitalism in our modern world. I can’t solve for that. And some of what I am seeing with AI is that it’s helping students, in different ways, make sense of a world that is harder to navigate for them than it is for us in this room. That’s the best I can think of in this ordeal; otherwise, I’m going to be completely paralyzed.”
To be clear, my tone wasn’t angry; it conveyed more a sense of my own exasperation with the challenges before us and the fact that there are no easy or clear solutions.
As I finished, they acknowledged my response. By this point, others were raising their hands with questions and I moved on.
What I Should Have Said
I’m of two minds about what I said. The less-kind version of myself feels validated by having said things that drew in their personal decisions, in direct response to their making the comments personally about me. This version was further validated when others came up afterward to thank me and directly or indirectly apologize on this person’s behalf. It’s the version of me that is protective of the ego, the fast brain in Kahneman’s terms.
Yet, the kinder, slow-brain version of me wishes I had taken more time to unpack their concerns and work with them collaboratively to understand and separate the real concerns from the personal attack. It might look something like this:
I don’t know if we’re at the point where definitive universal statements can be made about whether GenAI can make our lives, particularly in the teaching and learning space, better. What it can do is a moving target that keeps changing and may not be consistent across users or versions of GenAI. Coupled with that are heaps of concerns and projections that make it even harder to figure out. It’s both similar to and different from other technologies we’ve experienced, and certainly one that has come at us much more quickly than anything before.
Because of that, I would agree that it is not a replacement for human connection and is unlikely to do many things better than humans when it comes to collaboration, care, and connection. Yet, I also know that those things are not universally available to our students or to everyone in society. And we keep making choices, as individuals and as a society, that tell me there is never going to be enough for everyone to get what they need, never mind want. Unfortunately, we are all complicit in myriad ways and decisions around this, and it is incredibly challenging (financially, emotionally, socially, cognitively, physically) to untangle this trajectory.
In that space, when I look at GenAI and think about what it has to offer, I see some additional possibilities for conversation, reflection, and understanding. These are not the same as collaboration, care, and connection, but they are close approximations that may still hold some promise for people without access to those things. When English-language learners can use GenAI to access opportunities that are closed to them due to linguistic bias, or when students can use it to make sense of concepts that the instructor seems unable to explain in a way that is useful to them, GenAI does have something to offer.
I know it comes with a lot of concerning and challenging tradeoffs, some of which are perpetual problems we navigate with all technologies; others raise new questions and challenges.
But you and I are on the same page: we’re looking for something to make us hopeful, something AI might do that feels redeeming or expansive. I don’t know that I have found that yet myself; what I have found are ways to use it for things I might otherwise do with humans if I had more time and opportunities. I have found it can serve some purposes that are probably not revolutionary right now but do open up other possibilities—in the classroom and in life.
For now, I’m willing to keep exploring because I know that when new technologies emerge, it takes time to figure out where they are and aren’t useful. Given how little time has passed, coupled with the near-cultural saturation of GenAI over the last 2.5 years, I think we’re still on that search together.
So I appreciate your concern and care for students. I don’t think I’m offering a model that says “always do this” but rather some ways to explore it, to see where it might further support, augment when necessary, or give space for new forms of collaboration, care, and connection when students also have more opportunities for conversation, reflection, and understanding.
This is far from the first critical response I’ve had to my work, but it is definitely the first where it was made personal. I like to think (but don’t know for sure) that when criticisms have come (or when folks have raised concerns about the implications of my ideas), I’ve done my best to thoughtfully engage with them. But the personal nature of this comment did throw me off my game a little bit. Yet, it also sent me down this pathway to ponder things I might not have otherwise and to consider what it means to respond to such challenges publicly. For that, I am grateful for the experience to reflect and work through.
How Would You Handle This?
I’m curious about those of you reading this who find yourselves navigating conversations about GenAI, often discussing it as something to acknowledge, consider, and integrate as part of the educational ecosystem. I imagine that many of you carry these concerns and challenges about technology in general and GenAI in particular.
How might you have responded to such a comment? More importantly, how would you do so in a way that did not just recreate the divide or dismiss the fear and angst at the core of the comment? Would it be different if it happened in front of a group at the end of a talk? What moves would you make?
I continue to be interested in how we have conversations across higher education (and education) as a whole that embrace the challenges we’re each figuring out with this particular technology (but really, nearly all new technology in the teaching and learning space). I read as much from the critics (e.g. Audrey Watters, John Warner, Brian Merchant, Gary Marcus, among many others) as I do from those who sit in the middle ground and those who (critically or uncritically) embrace GenAI.
There are ways that we’re all just talking past each other, only exacerbating the problem. We’re often performing in these spaces (newsletters, blogs, social media, books), each making the case for why ours is the right path. There’s a lot of dismissiveness across the continuum toward where other people stand. Understandably, there’s a range of fear, frustration, exhaustion, and more that’s keeping us from slowing down to understand where we each are on this journey and how we got there.
When I think about this experience, that’s the larger lesson I draw from it. This person, intentionally or not, set out to talk past me, and unfortunately, I talked past them. So we both left the conversation with less understanding. It’s something for me to chew on, and hopefully it gives readers who find themselves in similar circumstances an opportunity to prepare to engage with, rather than talk past, one another.
The Update Space
Upcoming Sightings & Shenanigans
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: June 23–July 3, 2025.
NERCOMP Thought Partner Program: Navigating a Career in Higher Education. Virtual. Mondays, June 2, 9, and 16, 2025, from 3–4pm (ET).
Teaching Professor Online Conference: Ready, Set, Teach. Virtual. July 22–24, 2025.
AI and the Liberal Arts Symposium, Connecticut College. October 17–19, 2025.
Recently Recorded Panels, Talks, & Publications
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education” co-authored with Esther Brandon, Dana Gavin, and Allison Papini. EDUCAUSE Review (May 2025).
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
Bristol Community College Professional Day. My talk on “DestAIbilizing or EnAIbling?“ is available to watch (February 2025).
OE Week Live! March 5 Open Exchange on AI with Jonathan Poritz (Independent Consultant in Open Education), Amy Collier and Tom Woodward (Middlebury College), Alegria Ribadeneira (Colorado State University - Pueblo) & Liza Long (College of Western Idaho).
Reclaim Hosting TV: Technology & Society: Generative AI with Autumm Caines.
2024 Open Education Conference Recording (recently posted from October 2024): Openness As Attitude, Vulnerability as Practice: Finding Our Way With GenAI, with Maha Bali & Anna Mills.
AI Policy Resources
AI Syllabi Policy Repository: 175+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International
I don’t think I would have been any better in the moment. But I believe that many of these fears, including my own, stem from a lack of choice. We’re told we have to use these tools or get left behind, and I don’t know that this is the case. I would assert that we have to learn about these tools, how they work, and how they might impact us. Then we can make the decision to use them or not. I also want to give my students the choice to use them or not. We live in a world where we have the illusion of choice (hundreds of options for dinner) but not when it really matters (healthcare, politics, housing, etc.).
I also fear that we may be making the same mistakes that we made with social media. We hyped it for years until it was too late to undo the damage. I have students now who are choosing to not have a social media presence and they are the better for it.
I don’t think the person’s comment was about GenAI. It was a globalized concern about the lack of human connection and a feeling of helplessness in the face of corporate and tech priorities being imposed (or seemingly imposed) on academic values. In other words, stuff way beyond the scope of your talk. I’ve faced similar questions in presentations I’ve given. Once someone characterizes a tech as “evil,” you’re having an entirely different conversation.