"They’re not necessarily trying to cheat; they’re just being efficient"
Part 2 of my interview with Tawnya Means
In the last post, we were talking with Tawnya Means. You can read that to learn more about the opportunities and tensions that GenAI opens up in teaching and learning, or jump right into this one, where we talk about play, agency, and character through the lens of AI. If you have thoughts about AI and education, particularly in the higher education space, consider being interviewed for this Substack.
Lance Eaton: Let’s talk about students. You mentioned classes previously; are you currently teaching at this moment, or was that hypothetical?
Tawnya Means: It’s hypothetical because I’m not actually teaching right now, but I love teaching. Some of the most valuable insights I’ve ever had came from saying, “I’m going to try this in class,” and then doing it. Sometimes it worked. More often, parts of it worked and parts didn’t. That kind of battlefield testing, trying ideas in real time, was where I learned the most.
These days, most of the teaching I do is with faculty and others in similar roles, helping them learn how to use AI. For example, I posted on LinkedIn today about a bot that could help you have a dialogue instead of a test. I invited people to try it and tell me: did it do what you expected? Did it not? What would you have liked it to do instead? That kind of experimentation—seeing how people engage with these tools—is incredibly valuable.
Whenever I give a talk, run a workshop, or even have a one-on-one conversation about AI, I try to include a hands-on element. When people actually do something, a few things happen. They get to be aspirational, saying, “I think that…” or “I want that…” which speaks to the vision level. Then they try it and say, “That’s much better than I expected,” or “That’s not what I wanted it to do.” That feedback loop connects the practical to the aspirational.
From there, people start making their own connections: “I’m not teaching that, but I am teaching this,” or “It didn’t quite work for me, but I can see how it might work for something else.” They begin linking the potential of the tool to their own contexts.
And that, I think, applies to just about anything we teach: writing, veterinary science, anything. You have to tie it back to purpose: What is this intended to do? What’s its value? Then you combine that reflection with active experimentation. AI gives us more opportunities to make those connections than we’ve ever had before.
Lance Eaton: Yes, the ability to play. I offer that up a lot as well: encouraging people to play. We need to keep experimenting, not just to test what we’re doing, but to learn new ways of interacting and figuring it out.
To that end, I know you mentioned you’re predominantly working with faculty, but have you had experiences or examples involving students? What has that interaction been like?
Tawnya Means: I’ve had a few really meaningful opportunities to talk with students in this space. One of the biggest things I’m noticing is that we need to be much clearer with them. We’re often not telling students what we actually expect. Either we avoid the conversation altogether, pretending AI doesn’t exist, or we go to the other extreme and say, “You absolutely must not use AI. You can’t even do a Google search or turn on spell check.”
We need to be clearer, because clarity reduces anxiety. And students have a lot of anxiety right now. They know the world is changing. They know their future jobs are changing. They know what will be expected of them outside of college is very different from what they’re currently being taught. That gap shouldn’t exist. College should be the place where it’s safe to try out what they’ll need to do later, where failure is low-stakes and part of the learning process.
When students have that clarity, they’re much more willing to explore. If you tell them, “This is my expectation. This is how I’m using AI. This is how I expect you to use it. This is how it can benefit you. This is how it connects to the real world,” they do incredible things.
I worked with a few student groups on a project where they built apps as part of their coursework. We met regularly. I’d say, “Here’s what I hope the app will do,” and they’d go off and try it. They’d come back, show me what they built, we’d discuss it, they’d make adjustments, and try again. In eight weeks, they built functional apps: things most of them never imagined they could actually make.
That back-and-forth of feedback, experimentation, and exploration gave them freedom and agency. They built something real. In the past, a similar assignment might have ended with a mockup, a wireframe, or a paper describing what the app could be. But now, with these tools, they could actually build it, test it, troubleshoot it, and see it work in real time. They hit technical challenges, solved problems, and gained experience they wouldn’t have been able to get before.
Lance Eaton: That sense of agency feels really powerful—to have something tangible, something they can point to. And I think about that in my own learning. The classes that resonate most with me are the ones where I can say, “Look, I made this. This is the thing.”
Tawnya Means: “This is the thing I built.” I think another aspect of the student perspective is that if they don’t see the value in what you’re teaching, they’ll default to efficiency. They’re not necessarily trying to cheat; they’re just being efficient. They’re thinking, I’ve got 67 other things to do this week, and writing a short reflection paper isn’t one of them. It doesn’t seem important to me. So they’ll just ask ChatGPT to summarize their learning and turn that in, because nothing is stopping them from doing it.
That kind of cognitive offloading happens when students don’t see value in doing the work themselves. Two things can help address that.
First, we can design assessments that make sense, where the purpose and relevance are clear. Students should understand why they’re doing an assignment, not just what they’re doing.
Second, we can model and teach them how to think with AI. For example, if I want them to write an essay and use AI meaningfully, I might show them my own process. I could say, “I’m using Perplexity to gather research. When it returns mixed results, I verify sources and decide which ones meet the quality I expect. Then I ask it for more like this one.” That iterative pushback process is key.
Once they have good resources, I might say, “Now let’s move to Claude and have it help generate a first draft.” But then they should go through many rounds of revision, maybe 70 iterations, pushing back, refining the tone, and making it sound like their voice. It’s their paper, not the AI’s.
Teaching them how to revise like that—saying things like, “I wouldn’t use that punctuation,” or “I’d emphasize this point instead”—helps them develop their own perspective. That process is far more cognitively demanding, and therefore far more growth-inducing, than just saying, “Okay, in three iterations I’ve got my essay.”
Lance Eaton: It reminds me of rethinking what it means to create and demonstrate learning—rethinking the whole workflow. I grew up right at the transition point. I remember using note cards and the card catalog in high school. By college, we were starting to get access to databases. By grad school, I was thinking, why would I ever go back to a card catalog?
And for people who did their doctorates fully on paper—just the thought makes me cringe. I wanted everything in Mendeley where I could highlight and annotate. That felt more accessible and meaningful.
I appreciate what you’re saying: rethinking workflows given these new tools. Thoughtful engagement with AI pushes us to reconsider how we produce and show learning. It’s going to look different than before. And I agree that within that difference, we might actually see more powerful student work.
Tawnya Means: Yes. Just consider: if I ask my students to write a paper on a particular topic and tell them they need to consider at least 17 viewpoints, they’d never, on their own, write out 17 personas or try to imagine all those perspectives. But with AI, they can. They can take that topic, give it to their preferred AI, say, “Generate 17 different perspectives,” and then merge three of them.
That gives students the opportunity to engage with far more viewpoints, to push back when they see bias, or when something doesn’t feel like a fair representation of what they know about the world. And sometimes, they’ll encounter a perspective they didn’t know existed and learn from it. AI opens up so many more opportunities to expand thinking and expose students to diverse ways of seeing.
Lance Eaton: Let me throw out a question. It circles back to something we touched on earlier, but it fits here. What’s the biggest challenge in making sure that AI strengthens rather than weakens student learning? How do we get there from here?
Tawnya Means: Maybe that’s where the human relationship piece comes in. Let’s say you’re my student and I give you a writing task. You go off and do it, then come back, and we talk about it. In that conversation, in that relationship, I can push back. I can ask, “How many times did you try this? Did you consider this? Did you notice the bias creeping in here?” That human connection allows for prompting, modeling, nudging—pushing deeper in a way AI can’t replicate.
And alongside that, we need thoughtful people designing the tools themselves. Behind the scenes, we should be shaping tools that actually do what we want them to do for learning. But in front of the student, there still needs to be a human presence—a teacher, a guide—making meaning with them in real time.
Lance Eaton: Yes! I recently read Allison Pugh’s book from last year, The Last Human Job. She introduces the concept of connective labor: the idea that many jobs, including teaching, ministry, therapy, and social work, are built around human connection. Much of that work is constructed in the space between people.
I’m thinking about that in relation to what you’re saying, and also about how little higher education invests in that part of teaching. There are incredible faculty development programs and brilliant instructional designers. I’ve worked in that space for nearly 15 years, but when you look at the ratio of availability, that relational piece often gets overlooked.
Then you think about large online institutions that rely almost entirely on adjunct faculty, often with very little support or agency in their courses. When you put all that together—the connective labor, the human relationship, the institutional structures—it’s a lot to chew on. I don’t even know if that’s a question, but it’s definitely something worth sitting with.
Tawnya Means: I haven’t read the book yet, but I’ve talked with a few people about it, and I definitely want to.
What I’ve been writing about recently connects to that. It’s around character development and service learning, and how we help people discover their passion rather than just their profession. If AI can streamline and speed up work, it can give us more time to pursue those passions instead of grinding through the repetitive tasks we’ve always had.
We need to put more emphasis on that shift. For too long, education has focused on content distribution and retrieval: I tell you something, and you tell it back to me. Compliance-based learning.
The problem is, we’ve placed more importance on making sure students understand accounting than on helping them understand how to relate to another person so they can help them with their accounting. That human connectedness—the relational, ethical, empathetic part—is what we need to elevate if we want to stay relevant in the world.
Lance Eaton: You’ve hinted at this a bit, but what are some ways you’re using AI that you find helpful, rewarding, or just genuinely exciting, and that keep you learning and engaged?
Tawnya Means: I love being able to have an idea, start a chat on my phone, open it on my computer, build out a proof of concept, and then share it with someone more technically skilled than me to ask, “Can you build this?” It’s giving me the ability not just to imagine something but to realize it. That’s been a lot of fun.
And while I’m building those proofs of concept, I like to make them semi-functional. I know there are smarter people who can do it better with an engineering background, but it feels great to create something myself, even something small, like a calculator that figures out savings from a particular activity. In the process, I’ve been learning programming, Python, and how to use GitHub. These are things I hadn’t done before. It’s fun, exciting, and takes my work from the theoretical to the tangible to something that actually works.
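(A quick aside for readers who want to try something similar: below is a minimal, hypothetical Python sketch of the kind of small savings calculator Tawnya describes. The function name, rates, and numbers are illustrative assumptions for this example, not details from the interview.)

```python
# Hypothetical sketch of a small "savings from an activity" calculator,
# the kind of proof of concept described above. All names and numbers
# are illustrative assumptions, not details from the interview.

def savings_from_activity(hours_saved_per_week: float,
                          hourly_cost: float,
                          weeks_per_year: int = 48) -> float:
    """Estimate annual savings from an activity that frees up time."""
    return hours_saved_per_week * hourly_cost * weeks_per_year


if __name__ == "__main__":
    # Example: automating a weekly reporting task saves 3 hours/week
    # for someone whose time is valued at $40/hour.
    annual = savings_from_activity(hours_saved_per_week=3, hourly_cost=40)
    print(f"Estimated annual savings: ${annual:,.2f}")
```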
The second thing I’ve been enjoying—it’s even in your email signature, “I wish I knew all the questions to ask”—is simply having conversations with AI. I’ll ask a question, learn something, and then go, “Wait, what about that? Tell me more. I’ve never heard that concept before; can you explain it?” Or, “That’s not what I thought it meant; how does it differ?”
That ability to explore freely, to follow curiosity wherever it goes, is both liberating and exciting. I don’t have to wait for someone to ask the right question. I can just ask it myself and see where it leads. I’m really enjoying that.
Lance Eaton: Those approaches resonate with me. I’ve got a keynote at the end of this week. It’s one where I’m trying to be both provocative and thoughtful, and I’ve been doing that exact thing: working with AI, having it interview me, picking apart my thinking. Having AI as a conversation partner to help shape something I’m proud of, something that’s deeply me but also informed by that dialogue.
I’ve also been doing more of the technical experimentation. I’d be curious to see a chart showing how many people have downloaded Python or similar programs over the last few years—say, comparing 2019 to 2022, and then 2022 to now. I bet that number has skyrocketed. I’d never used it before, but now I’m learning, tinkering, and trying to build small programs.
Tawnya Means: Yeah, and how many more people are using command prompts, or Visual Studio, or tools like that?
Lance Eaton: I’ve done them all on my computer now!
What you said about curiosity and AI reminds me of the early days when I first got online. That same sense of exploration. Or, when Wikipedia really took off. This feeling of discovery and open learning. For me, that’s what I love. I’m a nerd; I collect degrees; I thrive on that. So what you described captures perfectly what many of us are experiencing.
And for a final question. What are you grappling with right now? What’s the most persistent question you’re still trying to figure out?
Tawnya Means: Finding the right balance. Like I mentioned before, I’ll have an idea, start exploring it, and then build a proof of concept. Sometimes I get more and more wrapped up in refining that concept, even though it’s never going to be the final thing I use.
So I’m asking myself: how much time do I need to spend articulating what I’m trying to achieve versus how much time am I spending developing expertise in something I don’t actually want to be an expert in? I just want it to work.
That’s what I’m trying to figure out: how far do I take something before saying, “Okay, this is good enough. It’s a solid proof of concept. Someone else can take it from here.”
The Update Space
Upcoming Sightings & Shenanigans
I’m co-presenting twice at the POD Network Annual Conference, November 20-23.
Pre-conference workshop (November 19) with Rebecca Darling: Minimum Viable Practices (MVPs): Crafting Sustainable Faculty Development.
Birds of a Feather Session with JT Torres: Orchids Among Dandelions: Nurturing a Healthy Future for Educational Development
Teaching in Stereo: How Open Education Gets Louder with AI, RIOS Institute. December 4, 2025.
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
The AI Diatribe with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches to Eye Patches: A Phenomenographic Study of Scholarly Practices, Research Literature Access, and Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International