"The concern is for average work."
Part 2 of my interview with Jason Bock
In the last post, we started a conversation with Jason Bock—you can catch up there on some of the ways AI is helping with accessibility, or jump right into this one, where he talks about his work with students around AI. If you have thoughts about AI and education, particularly in the higher education space, consider being interviewed for this Substack.
PART 2
Lance Eaton: You mentioned earlier a project where you had your students playing ID and SME, if I understood this correctly, with an AI.
Jason Bock: In 2023, when GPTs were just emerging, I built a simple bot to simulate an instructor for a course in strategic communication and instructional design. At the time, it required coding and API tokens, since students couldn’t reasonably be expected to pay for access.
The bot acted as a subject matter expert whose first language was not English. I designed it to respond in slightly choppy English to mimic real-world communication challenges. The course focused on the instructional designer-SME relationship, blending goal setting, project management, communication skills, and influence strategies.
The exercise used instant messaging as the medium. That choice was intentional: in the workplaces I’ve seen over the last decade, threaded instant messaging has replaced email for most back-and-forth communication. It’s more immediate and easier to manage than long email chains. Using that format also aligned with the way LLMs naturally interact through text.
The simulation gave students practice in communicating with an SME under realistic conditions, both in style and medium, while also showing how AI could be programmed to create authentic professional interactions.
When I built that simulation, I set ChatGPT to limit the size of its responses so they resembled instant messages rather than long-form replies. It worked well as a realistic ID simulation.
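A minimal sketch of this kind of setup, assuming the current OpenAI Python SDK (the persona text, model name, and token limit are illustrative, not the exact configuration from the course):

```python
# Illustrative sketch of an SME role-play bot with IM-length replies.
# Persona, model, and limits are assumptions, not the original configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a subject matter expert in strategic communication whose first "
    "language is not English. Reply in slightly choppy English. Keep every "
    "reply short, like an instant message. You are busy; when pressed for "
    "deliverables, sometimes say you are not done and need more time."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def send_message(student_text: str) -> str:
    """Append the student's IM and return the SME's short reply."""
    history.append({"role": "user", "content": student_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # any chat-capable model works here
        messages=history,
        max_tokens=80,         # keeps replies IM-sized rather than long-form
        temperature=0.8,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(send_message("Hi! Can we schedule 30 minutes to review the module outline?"))
```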
Lance Eaton: I love that idea and may need to borrow it! How did it go?
Jason Bock: What surprised me was that almost none of my students had used instant messaging in a professional context. The exercise gave them practice not only in communicating with a subject matter expert but also in experiencing how professionals now interact.
I raised questions about conventions we take for granted. For example, I asked about using emojis. Five years ago, I would have called them unprofessional, but now they’re widely accepted in workplace communication. I also asked about abbreviations like “LOL.” Students felt uncomfortable with them, but I admitted that I use them myself to save time in professional IM exchanges.
The experiment worked well as both a skills exercise and a cultural exploration. It pushed students to think about the evolving norms of professional communication in digital spaces.
Unlike the role-switching exercise where students interacted with each other, the instant messaging simulation worked better with the chatbot. If students had role-played with each other, their shared insecurities and anxieties about professional IM would likely have exaggerated the awkwardness. With the chatbot, I could fine-tune its behavior—making it informal, adding a language barrier, and instructing it to act like a typical SME by often saying, “I’m not done, I need more time.” That gave students realistic pushback to navigate.
I also gave students goals to achieve in their conversations. Some goals matched what the AI was prepared to handle, while others went beyond its capacity. This highlighted the negotiation and problem-solving aspect of SME–instructional designer interactions.
Lance Eaton: Again, I might have to borrow that idea of changing or adding goals along the way for role-playing AI activities.
Jason Bock: Generative AI is particularly well-suited for roleplay. I’ve worked on games and escape rooms, and before AI, scenarios resembled “choose your own adventure.” Students might have only three choices: A, B, or C. Anything outside that was impossible. With AI, students can try D, E, or anything else, and the system will still generate a response.
Most conversations ended up fairly routine, but some were genuinely engaging because students pushed beyond predictable paths. That flexibility mirrors real life, where problems rarely come with only three clear options. It’s a more authentic way to simulate professional practice.
Lance Eaton: Right! I’d be interested to hear a little more about the use of AI as a statistics tutor that you mentioned in our lead-up to the interview.
Jason Bock: Since I started talking with you, we decided to discontinue the tutor project. Too many competitors were entering the space, and it wasn’t my full-time focus. Still, I learned several valuable lessons.
First, building a tutor that is close to 100% accurate is extremely difficult. Our approach was to use a closed set of homework questions supported by large datasets. That way, we could feed the AI the correct answers and restrict it to those. Even then, achieving near-perfect accuracy was tough. We came close, but fully locking down the AI removed some of its flexibility. At that point, we might as well have skipped AI altogether and just returned scripted responses.
What made it worthwhile was adding nuance. The system could tell a student, “You’re close, keep working at it,” or give feedback resembling what a human tutor might say. That layer made it more than just answer-checking. Still, maintaining accuracy while preserving nuance was a constant balancing act.
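A rough sketch of that closed-answer-set pattern, assuming a simple numeric answer key (the question IDs, tolerances, and helper names are hypothetical, not the project's actual code): the tutor judges correctness against a stored key, and the LLM is only asked to phrase feedback around a verdict it is handed, never to decide the answer itself.

```python
# Hypothetical closed answer set: the AI never grades open-ended input.
ANSWER_KEY = {
    "hw3_q2": {"answer": 0.42, "tolerance": 0.01},   # illustrative values
    "hw3_q3": {"answer": -1.96, "tolerance": 0.01},
}

def grade(question_id: str, student_answer: float) -> dict:
    """Compare a numeric answer to the key and classify it."""
    key = ANSWER_KEY[question_id]
    error = abs(student_answer - key["answer"])
    if error <= key["tolerance"]:
        verdict = "correct"
    elif error <= 10 * key["tolerance"]:
        verdict = "close"        # enables "you're close, keep working at it"
    else:
        verdict = "incorrect"
    return {"verdict": verdict, "error": error}

def feedback_prompt(question_id: str, verdict: str) -> str:
    """Build the prompt handed to the LLM so its wording stays grounded."""
    return (
        f"A student answered question {question_id} and the grader says the "
        f"answer is {verdict}. Write one short, encouraging tutoring message. "
        "Do not reveal the correct answer."
    )

print(grade("hw3_q2", 0.43))   # correct, within tolerance
print(grade("hw3_q2", 0.48))   # close, prompts encouragement rather than rejection
```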
Lance Eaton: What was the students’ experience with it?
Jason Bock: Student response at first was positive. But as generative AI became mainstream, expectations shifted. Students began comparing our tool to open-ended systems like ChatGPT, and they wanted much more than tightly constrained homework help. Their expectations outgrew what the closed tutor could realistically provide.
We began the project with excitement, but over time students came to see it as just another tool they could already access elsewhere.
One real advantage was that the AI was trained directly on the instructor’s homework set. But that introduced another challenge: user input errors on our side. We created hundreds of datasets and test cases in Excel, and the bigger issue turned out to be our own mistakes. If we entered flawed data, the AI confidently reinforced it.
In practice, the instructor and I made more errors than the AI. For example, if we coded the correct answer as –1 instead of +1, the system would reject the student’s right answer and insist they keep trying. That was far more frustrating to students than any AI limitation. The system was doing exactly what it was told, but bad input made it look unreliable.
This was a major lesson: accuracy depended less on the AI itself and more on the diligence of dataset creation and testing. With more systematic and careful design, those errors could be reduced, but it added a layer of complexity and effort that undercut some of the efficiency gains.
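A hedged sketch of the kind of dataset sanity checks that could catch entry errors like a flipped sign before they reach students (the field names and rules here are hypothetical, not the project's actual pipeline):

```python
# Hypothetical answer-key validation: recompute answers where possible and
# apply range checks to catch sign flips and unit mistakes in hand-entered data.
import math

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems found in one answer-key row."""
    problems = []
    # Recompute the answer from the raw dataset when a formula is available,
    # rather than trusting the hand-entered value.
    if "data" in entry and entry.get("stat") == "mean":
        recomputed = sum(entry["data"]) / len(entry["data"])
        if not math.isclose(recomputed, entry["answer"], rel_tol=1e-6):
            problems.append(
                f"{entry['id']}: stored answer {entry['answer']} != recomputed {recomputed}"
            )
    # Range checks catch sign flips even when no formula is available.
    lo, hi = entry.get("plausible_range", (-math.inf, math.inf))
    if not lo <= entry["answer"] <= hi:
        problems.append(f"{entry['id']}: answer {entry['answer']} outside {lo}..{hi}")
    return problems

rows = [
    {"id": "hw1_q1", "stat": "mean", "data": [2, 4, 6], "answer": 4.0},
    {"id": "hw1_q2", "answer": -1.0, "plausible_range": (0, 1)},  # the sign-flip case
]
for row in rows:
    for problem in validate_entry(row):
        print(problem)
```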
It’s a heavy lift for instructors to prepare enough material for AI to guide students through multiple iterations of problems. That experience gave me more respect for publishers like Cengage and McGraw Hill. The processes they use to ensure accuracy in courseware are intensive, and when errors slip through, I’m now more forgiving. Creating reliable content at scale is difficult work, and AI can only be as strong as the material it’s given.
Students initially liked the tutor we built, especially for walking them through processes. That matches what’s happening with tools like Khan Academy’s Khanmigo, Google’s Gemini homework helper, and ChatGPT’s own tutoring features. These tools can play the same role as a human tutor, and unless someone is skilled at prompting, they won’t simply hand over the answer. That said, human tutors sometimes do give answers outright when frustration builds, so it’s not a problem unique to AI.
Lance Eaton: That’s true; at least from my own daily experiences, I get plenty wrong. So where does that leave us?
Jason Bock: The open question is whether AI risks becoming too much of a crutch. Will students bypass the learning process and rely on being led step by step? Or will AI provide the scaffolding that lets them eventually solve problems independently? There aren’t clear answers yet, but the concern is legitimate, and it highlights the need for thoughtful integration rather than unchecked reliance.
In higher education and K–12, it’s essential to introduce students to AI. Banning it outright puts them at a professional disadvantage. Employers are already using it, and that use will only expand. The adoption curve may not match the pace AI companies hoped for, but the trend is clear.
I believe the most intelligent and creative humans will continue to outperform AI in key areas. There will always be instances where a person beats an AI at something—even in domains like chess, where machines excel. In creativity and other nuanced tasks, the best humans will maintain an edge.
The concern is for average work. Routine, middle-tier jobs are the most vulnerable.
Service jobs will persist, though some may be supplemented by robots, as we already see in restaurants. But the real pressure is on that middle space where AI can handle predictable knowledge tasks.
The people best positioned for the future are those who understand AI, regardless of how often they use it. They’ll be equipped for the jobs of tomorrow and many of today. By contrast, roles like copywriting illustrate the risk. Unless someone is at the very top of their field, AI is already reshaping that work.
Generic copywriting, like product descriptions for large websites, is one of the roles most disrupted by AI. I know people who lost jobs because their employers realized tools like ChatGPT or Claude could generate thousands of descriptions faster, with fewer errors, and in more polished language. That said, the best human copywriters still outperform AI in creativity, nuance, and originality.
This is why we need to engage with AI in education. Students must grapple with these realities now. At the same time, I share the frustration of seeing “copy-paste AI” work where students paste prompts into ChatGPT, submit the output without editing, and even leave in placeholders like “[insert your name here].” That shows no real learning.
But it raises deeper questions. Are we changing the nature of knowledge itself? Do students still need broad exposure to general knowledge if, historically, most forgot it immediately after the course anyway? Or should the focus shift toward knowing how to work with knowledge through curating, questioning, and applying it in an AI-rich environment?
Lance Eaton: That brings me back to wonder, where does AI fall flat?
Jason Bock: One area where I’m clear: I dislike AI-generated course content. It reads flat, formulaic, and uninspired, like a generic textbook. That may be because AI is trained on that style of writing, but it makes for dull learning materials. This reinforces the idea that AI should be a tool for augmentation, not a wholesale replacement for human-authored teaching content.
I share the view that most college textbooks are poorly designed for student engagement. The only time I’d use AI in content creation is for micro-level editing by asking it to generate several versions of a single sentence so I can choose the best fit. I avoid having it draft entire course materials because I want the content to carry a distinct, human voice rather than the flat, generic “American textbook” style.
Students overwhelmingly dislike textbooks, and in my experience, many don’t read or use them. AI-generated content that mimics that same style, whether produced by instructors or students, runs into the same problem. It becomes filler rather than meaningful learning material.
AI works best as an add-on tool, not a full solution. The broader issue is that many undergraduate students already feel disconnected from general education courses. They see them as irrelevant to their lives and career goals. There are, of course, exceptions where great instructors make the material engaging and valuable. But often the perception is that gen-ed courses, especially in subjects like English and writing, lack purpose.
That’s why it’s understandable when students ask, “Why do I need to learn this?” The challenge is less about AI itself and more about how higher education justifies and delivers knowledge in ways that feel authentic, useful, and connected to students’ futures. If AI can write more clearly than we can, why force students or ourselves to labor through weaker writing? It’s not an easy question, and there isn’t a simple answer.
Academics are poor test cases because many of us love learning for its own sake. We enjoy experimenting, exploring, and building knowledge. Our students often arrive with different expectations, shaped by a K–12 system focused on grades and compliance, reinforced by parents and society. By the time they reach college, many view education as transactional: they want credentials, skills, and pathways to jobs.
The problem is that students’ sense of what is “relevant” isn’t always aligned with what they actually need to know. We walk a line between respecting their goals and pushing them to grapple with material they may not see as valuable yet. Ungrading and reflective assignments can help, but trust is fragile. Many students don’t believe us when we say they can challenge ideas freely. Too many have been penalized in the past for not parroting the instructor’s view.
This is even harder online. Asynchronous courses magnify the transactional mindset: students want efficiency, not open-ended exploration. That’s where AI has potential. Used well, it can make courses more adaptive, responsive, and engaging, offering multiple pathways into material. But the underlying challenge remains—many students are asking, “Why should I learn this at all?”
AI can’t solve that existential question, but it can help us design courses that feel less like hoops to jump through and more like experiences that connect learning to what students value. The work ahead is rethinking both course design and culture, with AI as one tool in that process.
This is the double edge of AI. On one side, it’s a powerful tool for scaling accessibility, alternative formats, and unique scenarios: things no individual instructor has time or resources to create manually. On the other, it’s a force that can erode reasoning, communication, and authenticity if it’s treated as an authority rather than an assistant.
Lance Eaton: Thank you so much for sharing your thoughts and insights!
If you want to hear more from Jason, follow him on LinkedIn!
The Update Space
Upcoming Sightings & Shenanigans
I’m co-presenting twice at the POD Network Annual Conference, November 20-23.
Pre-conference workshop (November 19) with Rebecca Darling: Minimum Viable Practices (MVPs): Crafting Sustainable Faculty Development.
Birds of a Feather Session with JT Torres: Orchids Among Dandelions: Nurturing a Healthy Future for Educational Development
Teaching in Stereo: How Open Education Gets Louder with AI, RIOS Institute. December 4, 2025.
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
The AI Diatribe with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches to Eye Patches: A Phenomenographic Study of Scholarly Practices, Research Literature Access, and Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International


