"... imagine a world where there’s no separation between learning and assessment..."
An interview with Tawnya Means
As part of my work in this space, I want to highlight some of the folks I’ve been in conversation with or learning from over the last few years as we navigate teaching and learning in the age of AI. If you have thoughts about AI and education, particularly in the higher education space, consider being interviewed for this Substack.
Introduction
Tawnya Means is a leading voice in educational innovation with over two decades of experience guiding institutions through technological transformation. As Founding Partner and Principal of the consulting firm Inspire Higher Ed, she has worked with universities across six continents on online learning strategy and AI integration. Her work focuses on where ancient educational wisdom meets cutting-edge technology, exploring how AI can create truly personalized, engaging learning experiences at scale. As a Gallup Certified Strengths Coach, she brings both visionary perspective and practical implementation experience to help institutions enhance the irreplaceable human connections that make education meaningful. Tawnya is the author of The Collaboration Chronicle: Human+AI in Education.
Interview Part 1
Lance Eaton: You’ve been in higher ed, but what was the catalyst moment to shift into the AI and education space?
Tawnya Means: My background is in information science and learning technologies. My PhD is in learning systems design and development. So I’ve always been in the space of figuring out what’s the latest, especially as it relates to online learning, blended learning, and technology-enhanced learning. That’s been my primary focus and the work I’ve done for almost 20 years now.
When generative AI was first launched (and I know AI has been around for 70-plus years with lots of research and hopefulness that maybe one day it’ll become what we want it to be), it felt very similar to when the iPhone was launched. It wasn’t just the device, it was the app library that exploded the whole concept of using your phone for so many things. Generative AI, I think, offers that same kind of expansion: the ability for anyone to build something that can extend or enhance an idea or take learning in a new direction.
I remember one of my early speaking engagements. There were maybe 40 to 60 people in the room. I demonstrated something that AI could do, and in the middle of my talk, people started clapping and cheering. I don’t even remember exactly what the demo was. It doesn’t really matter. The point is that people were so excited by the opportunity this opened up.
By then, I had already been following Ethan Mollick for a few months, reading his posts, and thinking, yes, yes, yes. I was rereading about Bloom’s two sigma problem (his finding that students tutored one-on-one perform about two standard deviations better than students in conventional group instruction) and realizing that we’ve never truly solved that. We’ve tried adaptive learning and other innovations meant to give people individualized experiences, to have a coach there side by side, but it’s never quite worked.
When I saw how flexible and adaptable generative AI could be—that you could just ask it for what you wanted—it felt different. Yes, early on there were plenty of failures and disappointments. People said, “This is never going to work.” And I kept thinking, no, it will. We just have to keep working on it.
Now, three years in, seeing how far and fast we’ve come, the conversation is more balanced. There are still fears. People worry it’ll take over the world, ruin everything, undermine academic integrity, replace faculty, but we’re also starting to see the return on investment. When you use these tools thoughtfully, you get something meaningful out of them. That’s rewarding in a way we haven’t often seen in education. Most innovations have taken years of research and effort before showing real impact.
Lance Eaton: Yeah, I appreciate that way of navigating this liminal space.
Tawnya Means: I think because AI has moved so fast, and because it fits into so many of the needs we already have, and because there’s so much opportunity for the average person to get involved, it’s just so much more powerful than any of the innovations we’ve tried in the past.
Lance Eaton: That emphasis on how much more powerful it feels—it carries so much weight. I want to follow that, but first, as you were talking about the two sigma problem, something flashed into my head. I’m curious about your thoughts about it.
I keep thinking about Salman Khan (of Khan Academy) and others who are pushing the idea that AI tutors are the future. But when I think about the mentor model, all of that work was grounded in human relationships. So are we setting up a trap where we see personalized learning and one-to-one AI approaches as the solution but lose sight of the fact that all that research was based on human-centered relationships?
Tawnya Means: I think that’s an amazing perspective because one of the things we’ve never really done is tease apart what that means. So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?
I’ll say “can’t,” though I’ll add a caveat to that. What I think AI can’t do, even if we get to AGI or ASI, is be real. It just can’t, because of its construct. It’s ones and zeros. It will always be ones and zeros. Even if it becomes self-aware, it’s still ones and zeros.
You and I can talk to each other and establish a relationship. That relationship doesn’t have to be built on me teaching you something and you proving that you know it. It can exist simply because we’re human. The part about verifying knowledge can be separated from that human relationship. We can ensure learning happens, but it doesn’t have to depend on the relationship itself.
So what would that look like? We’ve seen examples like the AI school in Texas where students have a tutor who works with them for a few hours, and then they meet with an instructor.
Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.
The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.
But the technical parts, such as frameworks like SWOT analysis or the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.
Lance Eaton: I appreciate your thinking on how we disentangle those two. My follow-up question is this: I don’t know that there’s a perfect answer, but how do we do that? And how do we know when, or whether, we’ve actually done it?
Tawnya Means: I have a couple of answers for that. One thought before I dive in is that if we look at successful models of teaching in the past, the best outcomes often came from apprenticeship or mentorship models. But we couldn’t scale those. Whether or not they were designed to be exclusive, they were exclusive. There just aren’t enough experts to be one-on-one with every novice. There will always be more novices than experts. That’s simply how expertise works.
So, coming back to what you’re asking: how do we know if it works? I’d say, “By your fruits, you shall know them.” If you’re able to start and run a successful business, then you’ve learned it. You’ve demonstrated the knowledge.
If we take that apprenticeship model and strip away its exclusivity, we can imagine a world where anyone who wants to learn something deeply can. Not everyone will become an expert in everything. That’s not possible or even desirable. But people can choose their areas of focus.
So, let’s develop a model where anyone who wants to build expertise can do so. We’ll know it works when learners can actually do the thing—when they demonstrate the skill or understanding in practice.
The space between point A and point B—between wanting to learn and achieving expertise—is what we have to figure out. That’s where we are right now. We’re experimenting: does dialogue-based instruction work? Does simulation work? Should a bot always respond in a consistent way, or is it better when it doesn’t?
That middle space is where the real exploration is happening. And I don’t think we’ll know what truly works until we start to see more concrete outcomes.
But then, just to think back on it, did we ever really know the apprenticeship model would work until it actually did? I mean, it took years for us to figure out that model.
Lance Eaton: It’s this interesting bind. It reminds me of what we often talk about in teaching. You’re planting seeds for trees you might never see grow. Especially in the liberal arts, you’re helping students develop practices, ways of working through problems, and ways of seeing the world. So I appreciate that we’re in that space between point A and point B.
I do wonder if point B is really when they’re out in the world and doing work, being the fully developed humans they are. And that raises an interesting question: are we going to know too late where it all leads?
Tawnya Means: But that’s assuming we’re following just one line, right?
Lance Eaton: Say more.
Tawnya Means: If we have a billion lines we’re following, then some people are going to get out into the world, so to speak, before others. We’ll have outcomes along the way that show whether we’re heading in the right direction.
If we realize we’re developing people who are, say, sociopathic, then clearly we’ve gone off course and need to stop. But I think there’s evidence we can collect along the way to see whether we’re on the right path, rather than waiting until the very end.
Lance Eaton: Thank you for following that thread. I appreciate that. So, let’s move back to another question I’ve been thinking about for you. You’re operating both in the classroom space and the institutional space. How do you see those as separate but intersecting? Where do they overlap, and how do institutions figure that out?
Because AI is happening in two different ways. It’s happening in the classroom, but it’s also happening institutionally. So how do we think about it in terms of teaching and learning, and how do we think about it operationally?
Tawnya Means: The way I think about it, there’s a bottom-up piece, a top-down piece, and a middle space connecting the two.
Top-down, the question is whether we have leaders who can articulate a clear vision, or whether they’re just saying, “Maybe we should look at this AI thing,” or, “If we don’t do something, we’ll be in trouble,” or, “That other school is doing this, let’s copy them,” or, “What’s the newest, shiniest thing we can throw money at?”
If that’s all that’s happening, then we have a problem. Leaders without a clear understanding of what it means to lead in this moment aren’t ready for this space. They might be good leaders historically or in other contexts, but in this space, they need a vision.
Then there’s the bottom-up piece. The people on the ground who are actually teaching. The ones saying, “I’m in class on Tuesday, I tried this, here’s what worked, here’s what didn’t. What am I missing? Where did this go off the rails? This really worked, but is it as good as I think it is?”
We need a lot of those people who are testing, trying, playing, and saying, “That sounded good in theory, but here’s why it doesn’t work in practice.”
And then in the middle, there has to be a conversation space. I think that’s where faculty development comes in. Historically, we’ve treated faculty development as fixing a deficiency, bringing faculty up to a certain level. But that’s not how I see it here. In this space, it’s about connecting the mission, vision, and values of the institution and its leadership with what’s actually happening in classrooms.
That middle space is where both perspectives meet and talk about what’s next. It’s where we build alignment between the strategic and the practical, between the big-picture direction and the lived experiences of teaching and learning.
Lance Eaton: I love that idea of developing that middle space. It makes me think about the dynamics of managing down and managing up.
It also makes me think about what you said earlier about the scale of this. No one person is going to figure it out. This is a giant elephant we’re all trying to understand, and there’s a lot of surface area to explore. We’ve got to collectively make sense of it.
The Update Space
Upcoming Sightings & Shenanigans
I’m co-presenting twice at the POD Network Annual Conference, November 20-23.
Pre-conference workshop (November 19) with Rebecca Darling: Minimum Viable Practices (MVPs): Crafting Sustainable Faculty Development.
Birds of a Feather Session with JT Torres: Orchids Among Dandelions: Nurturing a Healthy Future for Educational Development
Teaching in Stereo: How Open Education Gets Louder with AI, RIOS Institute. December 4, 2025.
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
The AI Diatribe with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches to Eye Patches: A Phenomenographic Study of Scholarly Practices, Research Literature Access, and Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin, and Allison Papini. EDUCAUSE Review (May 2025).
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International

As soon as I saw the title I immediately thought "apprenticeship!" and sure enough, that's what you were talking about.
About thirty years ago when I was at PLATO I gave a conference presentation on "cognitive apprenticeship and computer-based learning", the craft of deliberately designing scenarios to present increasingly complex problems to students. *skips digression on Vygotsky's Zone of Proximal Development* That's exactly the issue with Bloom's Two-Sigma Problem, of course. It's not likely that more than one or two students in a classroom are going to be in the same zone at the same time. And of course, human tutors get tired, cranky, have bad days, etc. (These were selling points for CBT systems decades ago.)
Ed-tech researchers have been chasing robot tutors that can provide individualized instruction for years, of course. The usual approach has been to try to model the state of the student's understanding. LLMs simply respond to the student's questions, perhaps picking up on textual indications of frustration or confidence, which sidesteps the modeling problem altogether.
On a different path, GenAI lets regular teachers crank out realistic scenarios by the bucketload, making authentic instruction+assessment much simpler (i.e. lower workload) than in the past.
Good stuff. I've passed it on to the informal "AI brain trust" at the community college where I work.