"...this will always be a partnership..."
An interview with Jason Bock on AI, accessibility, & education.
As part of my work in this space, I want to highlight some of the folks I’ve been in conversation with or learning from over the last few years as we navigate teaching and learning in the age of AI. In this interview, I’m introducing Jason Bock. If you have thoughts about AI and education, particularly in the higher education space, consider being interviewed for this Substack.
Jason’s Introduction
Jason Bock is currently the Director for Online Education for the Eberly College of Arts and Sciences at West Virginia University, where he leads a team of instructional designers and graduate students building programs for WVU. He received his doctorate researching online students’ engagement with their institution. He is also an entrepreneur, creating a self-checking AI tutor for statistics and continuing to experiment with using AI tools to enhance student learning outcomes. In his spare time, he is an avid board gamer and actor, and he has designed commercial escape rooms.
Part 1
Lance Eaton: The first thing that struck me about your interest was your focus on accessibility opportunities with GenAI. Can you tell me about that?
Jason Bock: I’m a practitioner running online education at a public university college. For my role, this is a practical matter. All public institutions, and many private ones, have been mandated to become 100% accessible by April of next year (2026). Meeting that deadline will be very difficult, if it is even possible. Large public institutions across the country are in the same situation. It’s a very difficult requirement, and most of us lack the resources to make a meaningful impact on accessibility.
I’ve been interested in generative AI since it emerged and see it, as many others do, as an opportunity to personalize education. Higher education often ends up teaching to the middle, and AI offers a way to move beyond that. When resources are limited, you look for tools that can increase efficiency. AI has been a strong companion in creating alternative versions of materials. For example, it can take the poor-quality transcripts that YouTube generates and turn them into something far more accurate and useful for closed captioning. That has been one area where AI really shines.
We are also exploring ways to convert materials into different formats, such as turning text into video using tools like Notebook LM. The goal is to give students choice, which institutions have rarely had the resources to do outside of small pilot projects. AI has made that far more feasible.
Of course, AI has its limitations. With transcripts, it may not hallucinate as much, but it often tries to summarize or reshape content in ways that require careful oversight. Mistakes do happen, but we are focused on not letting perfect be the enemy of good. Producing many good-quality transcripts and alternative formats quickly is far better for our students than producing just a few flawless, fully human-engineered versions.
That’s where we are as an institution, and that’s where I am professionally. We’re actively experimenting with AI and embedding it into real workflows. This is not hypothetical; it is already part of how we build and deliver our courses.
Lance Eaton: Thank you for that. The first thought that comes to mind is the idea of choice. I’ve thought about and used choice a lot in my own teaching. I’ve done a la carte assignments for courses, where students pick and choose what they want to complete, and I’ve applied it to learning materials. One thing I’m thinking about, before AI and more so with it, is if and how this choice-expansion contributes to the over-contentification of courses. All of a sudden, every learning element in a course comes with ten things to choose from: three to five different assignments, or three learning items each in three different modalities. What was originally three items becomes, for each item, a text, a podcast, and a video, and with that comes the decision fatigue and content overflow we’re already experiencing.
Jason Bock: I agree that analysis paralysis is a real concern in education. We have to limit choices. Just because AI can create a hundred different ways to submit an assignment doesn’t mean that’s beneficial.
When I’ve used AI as an instructor, I focus on generating iterations of a similar situation so each student can have a unique case study. For example, in an instructional design class on communication, I wanted students to practice interacting with a subject matter expert. I could have pretended to be that expert for everyone, which would have been overwhelming. I could have asked colleagues to step in, but that would have produced inconsistent results and burdened them. Instead, AI quickly generated several similar scenarios. Each student acted as both a subject matter expert for one peer and an instructional designer for another. Because the roles weren’t paired, students couldn’t simply mirror each other. This made the assignment more effective without overwhelming them with options.
Lance Eaton: Yeah, I love using it to build out scenarios and examples. They can always be tweaked or adjusted, but the runway is much clearer.
Jason Bock: In other cases, I intentionally limit how students can submit work. I usually give three or four choices, no more, because otherwise students spend more time deciding than producing. One elegant approach I tried was redesigning discussion forums, which most online students dislike and consider busywork. We know social engagement is necessary in asynchronous classes, but higher education has often handled it poorly. My own discussion forums were no different.
In one course, instead of dictating a single format each week, such as a video one week and audio the next, I created six options across six discussion forums in a seven-week class. The rule was simple: to earn bonus points, students had to complete one of each format at some point in the course. Which week they chose to do each format was entirely up to them. If they wanted to start with an infographic, then do a video, then a traditional post, that was fine.
Students appreciated this flexibility. They enjoyed creating in different formats and responding to a mix of media. No two posts looked the same. I also made sure the prompts were expansive rather than restrictive. This approach reduced the monotony of grading, which often sets in by the 18th nearly identical discussion post. Instead, it was more engaging to evaluate a variety of mediums and responses.
Some students even experimented by conversing with an AI and then reflecting on the topic. That variety made the experience more interesting for everyone, including me as the instructor.
Lance Eaton: I love that. Gosh, my mind’s going so many different directions. The online discussion approach reminds me of a running joke I have when I talk about discussions in online courses: I took my first online class in 1999, and the structure of the discussion has not substantively changed in over 25 years.
I want to go back a little bit to that consideration of not letting perfect be the enemy of good and the idea of “it’s good enough.” I’m there, and I’ve thought about that a lot in my own work. Yet I think there’s an interesting discourse from the people for whom “good enough” is all they get, about what that tells them regarding their value as a human in society. What I have in mind is Allison Pugh’s The Last Human Job. She explores the idea of “connective labor,” which includes teaching, therapy, ministry, and other roles where relationships are central. She explains that the challenge of “good enough” is that, historically, it increasingly becomes the only option.
I’d be curious what might be the places where “good enough” is okay and where the human presence is essential.
Jason Bock: All of the transcripts and related work we’re doing with AI serve as supportive tools. I’m not taking AI output and immediately placing it into a course. Everything is reviewed by a human. We’re using AI to increase speed compared to doing the same work manually, which would also contain errors.
I’ve manually transcribed lengthy videos, and it’s not easy. It’s tedious, boring, and it’s easy to lose focus and miss what was said. Some people may enjoy or excel at that kind of meticulous work, but I don’t. While I value accuracy, it’s difficult and time-consuming to get everything exactly right. That’s why we use AI as an assistant. Mistakes will occur, but mistakes also happen in human-created materials. Expecting AI to be 100% perfect is unreasonable when our own work isn’t. My goal is to achieve results faster than, and comparable in quality to, what we’d get by spending the same human resources manually. That’s what I mean by not letting perfect be the enemy of good. We don’t review outputs with a fine-tooth comb, but nothing goes into the educational environment without a human reviewing it.
I’m very cautious about fully automated AI solutions, especially in accessibility. For example, one of the biggest challenges at universities is audio description for videos. If a student is visually impaired, they may miss information that isn’t spoken aloud, even in something like a PowerPoint presentation. Creating accurate audio descriptions is difficult and expensive.
A vendor demonstrated AI-driven audio description software. While impressive, it illustrated the problem of insufficient context. The tool produced results similar to Word’s automatic alt text generator—accurate in the narrow sense but not contextually meaningful. For example, AI might note that a person is drinking coffee from a red cup. But is the fact that the cup is red important to the lesson? Without context, the description may be irrelevant or misleading.
This is the problem with fully automated AI outputs. Without deep contextual understanding, they cannot reliably meet accessibility needs. That’s why we insist on human review, even when AI is part of the workflow.
Lance Eaton: Can you say more about human review in this work?
Jason Bock: I think this will always be a partnership, at least for the foreseeable future. I hesitate to make predictions, because every time I say AI won’t be able to do something, within a few years it often can. But right now, AI is not at a point where it can be completely self-sufficient.
It has to work in partnership with a human who verifies the output, ensures the context is correct, and confirms it hasn’t gone off track. This is especially important when explaining visual content whether in transcripts, alt text, or audio description. Even with straightforward transcripts, I want to make sure a human reviews the work so we know it’s accurate and useful.
AI is a force multiplier, but it is not a force unto itself.
Lance Eaton: I like that: a force multiplier versus a force unto itself. It emphasizes that the force (resisting a Star Wars reference here–oops, too late) is human. Let’s follow this thread. You had mentioned integrating AI into workflows. What are some of the ways you’ve integrated it?
Jason Bock: A lot of our work has focused on video transcripts. As I mentioned, we don’t have the resources to provide full audio description, but transcripts are especially valuable for long videos. They’re not just for students with disabilities. They support universal design, and research shows that many accessibility practices benefit all students.
One example was a program where the instructor included hour to hour-and-a-half lecture videos in an asynchronous course. Normally, that would be discouraged. But since the content involved complex software and data analysis walkthroughs, breaking it into three- or four-minute clips wasn’t practical. In this case, transcripts were essential, giving students a way to revisit the material without rewatching long videos.
AI improved these transcripts by inserting headers and subheaders. That allowed us to generate a table of contents, making it easy for students to search for topics and jump to the part they needed, like “the section on X or Y.” This turned the transcript into a valuable reference tool for everyone.
A lot of this comes down to prompt design. I always start by adding context, telling AI what course it’s working on and what I want it to do. For example, specifying that it’s a geography course, a psychology course, or a social work course helps AI frame vocabulary correctly. YouTube’s automated captions don’t do this; they lack any understanding of context.
AI has already caught important errors. In one social work course, YouTube captions had misrendered the name of Jane Addams, a key early leader in American social work. She founded settlement houses for immigrants and helped shape modern social work. YouTube had her as “Adams” with one “d”. Context-aware AI corrected that mistake.
YouTube handled the sounds correctly but defaulted to the most common spelling, “Adams,” because it lacked context. ChatGPT, which I mostly use in the paid version, handled it differently. Since I told it the course was in social work, it accurately recognized the name and even asked, “Did you mean Jane Addams with two Ds?” before correcting all instances. With the right context, it’s an excellent tool.
The next step I take is giving it limitations. The biggest struggle with transcripts is keeping AI from drifting into summaries instead of sticking to the spoken text. I tell it that it can only do three or four things. Sometimes it still strays, but usually we can pull it back.
First, I tell it exactly what it’s working on: “This is the YouTube transcript.” The problem with YouTube is that it transcribes the sounds but doesn’t provide punctuation. I ask the AI to add punctuation that makes sense. Generative text AI does this very well, since it defaults to the most common and usually correct usage.
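For readers who want to script this rather than paste into a chat window, here is a minimal sketch of the context-plus-constraints approach using the OpenAI Python client. The model name, course framing, and file path are illustrative assumptions, not Jason’s exact setup:

```python
# Sketch: constrained transcript-cleanup call. Model, course framing, and
# file path are illustrative assumptions, not an exact recipe.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are cleaning a YouTube auto-generated transcript for a social work "
    "course. You may ONLY: (1) add sensible punctuation; (2) add headers and "
    "subheaders if the video runs over ~10 minutes; (3) flag names or "
    "field-specific terms that look misspelled; (4) remove doubled words and "
    "filler phrases. Do NOT summarize, rephrase, or correct grammar, "
    "run-on sentences, or fragments."
)

raw_transcript = open("youtube_captions.txt").read()  # illustrative path

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable chat model
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"This is the YouTube transcript:\n\n{raw_transcript}"},
    ],
)
print(response.choices[0].message.content)
```

The hard list of allowed actions mirrors the practice Jason describes of telling the model it can only do three or four things; everything else is explicitly off limits.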
One quirk, though, is that it tends to overuse em dashes. I admit I’ve used them for years, but AI leans on them too heavily.
Lance Eaton: I feel robbed too! I would use them regularly, and now every time I see one in my own writing, I’m like, they’re gonna think it’s AI.
Jason Bock: It’s a challenge for those of us who used unusual punctuation long before AI, but aside from those quirks, ChatGPT does an excellent job punctuating transcripts. I allow it to add headers and subheaders for longer videos. For anything over about ten minutes, those additions are useful. For shorter videos, they tend to clutter the text.
I also ask it to scan the transcript for potential misspellings or misused words, with a focus on field-specific terminology and names. That’s how it flags issues like “Jane Adams” versus “Jane Addams.” Because I give it the course context, it catches errors YouTube often introduces—mistakes I might not notice myself.
Once that review is complete, I have it begin editing in small sections. Even with the extended memory in ChatGPT 5, it works best on manageable chunks of about a page of text at a time. Shorter videos can be processed in one pass, but for something like an hour-and-a-half lecture, breaking it into three- or four-minute pieces yields better accuracy.
At the end, and sometimes midway through if I see it veering off course, I ask it to check whether it changed any words, summarized content, or adjusted grammar beyond what I specified. This helps keep it aligned with the goal of faithful transcription rather than rewriting.
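A rough sketch of that chunk-and-audit loop, reusing the `client`, `SYSTEM_PROMPT`, and `raw_transcript` from the sketch above; the chunk size and audit wording are assumptions for illustration:

```python
# Sketch: clean a long transcript in page-sized pieces, then audit for drift.
def chunk_text(text: str, max_chars: int = 3000) -> list[str]:
    """Split on paragraph breaks into roughly page-sized pieces."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def clean_chunk(piece: str) -> str:
    """One constrained cleanup call per chunk, reusing SYSTEM_PROMPT above."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"This is the YouTube transcript:\n\n{piece}"},
        ],
    )
    return resp.choices[0].message.content

cleaned = "\n\n".join(clean_chunk(p) for p in chunk_text(raw_transcript))

# Final audit pass: ask whether words were changed, passages summarized, or
# grammar adjusted beyond the instructions; if it admits drift, restart.
audit = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        "Compare the edited transcript to the original. Were any words "
        "changed, passages summarized, or grammar adjusted beyond "
        "punctuation and headers? List every deviation.\n\n"
        f"Original:\n{raw_transcript}\n\nEdited:\n{cleaned}"
    )}],
)
print(audit.choices[0].message.content)
```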
The fourth step is removing vocalized pauses. I haven’t explicitly instructed AI to do this, but it often eliminates double words or filler phrases that speakers use as pause techniques. Most instructors reviewing the transcripts feel it improves how they come across.
I stop short of asking it to correct grammar, run-on sentences, or fragments. That’s how people actually speak, and I think students should experience it as it really sounds.
A story from COVID illustrates this. A student with hearing loss needed closed captions for every video. Many instructors were recording 20- to 30-minute lectures each week, and this student complained that the captions didn’t make sense. The provost called my boss, who then called me. I spent half the night reviewing the captions. They were accurate, but the instructor had a strong accent and spoke in incomplete sentences. The transcript reflected exactly what hearing students received. It wasn’t flattering to the instructor, but it was verbatim. That’s why we avoid “fixing” run-ons or fragments with AI; it’s better to keep the transcript authentic.
My last step is asking AI to recheck the entire transcript and confirm whether it changed any words. I usually trust its response, though sometimes I suspect it tells me what I want to hear. Still, there have been cases where it admitted to summarizing and offered to rework the section. When that happens, I typically start from scratch rather than try to get AI to iterate on itself for longer content. At this point, it still struggles with that, so restarting is more effective.
Lance Eaton: That’s solid process. What else have you noticed in working with AI for transcripts?
Jason Bock: I’ve caught AI being honest at times, admitting when it didn’t follow instructions. That gave me some confidence in how I use it.
Lance Eaton: Are there any other things you’ve learned in this process?
Jason Bock: One surprising discovery was that YouTube’s closed captioning doesn’t identify individual speakers. Tools like Zoom can do this by linking audio to the participant’s name in a window, but YouTube lacks that context. If you upload an interview to YouTube and rely on auto captions, the result is a block of text with no indication of who is speaking. For someone who depends on captions, that missing context is critical.
I experimented by telling ChatGPT about this limitation and asking it to mark where a new speaker logically begins. I expected poor results, but it performed better than I anticipated. It had the same kinds of flaws I’ve seen in dedicated AI transcription tools. For example, it might misassign a word or two from one speaker to the next, or cut slightly in the wrong place. I saw the same thing when I used AI transcription software for interviews in my doctorate.
Even so, ChatGPT did a pretty good job. It saved me time when preparing transcripts, because I didn’t have to listen through the entire video just to mark every speaker transition. I was impressed, especially since it was working only from YouTube’s raw, context-free text rather than audio data.
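As an illustration, an instruction along these lines might look like the following; the wording is invented for the sketch, not Jason’s exact prompt:

```python
# Sketch: asking the model to infer speaker turns from context-free captions.
# (Prompt wording is illustrative, not an exact instruction from the source.)
DIARIZATION_PROMPT = (
    "This is a YouTube auto-caption transcript of a two-person interview. "
    "YouTube does not indicate who is speaking. Using conversational cues "
    "(questions versus answers, names, topic shifts), insert 'Speaker 1:' "
    "and 'Speaker 2:' labels where each new turn logically begins. Do not "
    "change any words; a human will verify every boundary against the video."
)
```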
Lance Eaton: Ok, how else are you working and thinking about GenAI in your work?
Jason Bock: With text-based AI, it always comes down to fine-tuning the prompt and experimenting to get the best results. I noticed this especially with the rollout of ChatGPT 5. Before version 5, I was working with 4, which had become very consistent with my prompts.
Now, with 5, the results diverge more. Because the system selects which model variant to use based on the initial prompt, sometimes the output quality drops. About one out of every eight or nine times, the transcript is simply not good enough, and we have to start over. That’s where the human role comes in; listening to the video, checking against the transcript, and spotting when it has gone off track.
Generative AI also shows a common pattern: it may start out strong, but as it moves deeper into a long transcript, it begins to drift. By the third chunk, the divergence can become more severe. The key is to catch it early and either redirect it or restart.
Even with the need to start over that often, AI is still far faster than fixing transcripts manually, which is extremely time-consuming.
Lance Eaton: Are there any attempts with GenAI in your work that you’ve tried that you’re like, “Nope, it’s just not there yet”?
Jason Bock: Transcripts and format conversions work well with AI because the context is closed. Problems arise in areas like alt text or audio description, where context is open-ended. Without feeding AI a massive amount of information, it rarely gets alt text right.
I’ve experimented with these tools, but I still find I’m better at writing alt text than any AI system I’ve tested. That’s because I can look holistically at the learning context, the content, the course goals, and the specific moment in which the image appears. AI doesn’t have that perspective.
For example, take an image of an apple tree. In many courses, the best option is to mark it decorative. As visually impaired colleagues often say, if it doesn’t add meaning, it’s better left out. They don’t want their reading interrupted with “image: apple tree” if it contributes nothing.
But in a physics course, that apple tree might connect to the story of Isaac Newton and the discovery of gravity. In a psychology or social work course, it might symbolize branching pathways from a central source. The alt text should reflect those contexts. AI cannot consistently adapt at that level without extensive prompting and guidance.
In a biology course, the apple tree image might require alt text about the tree’s anatomy, which is very different from physics or social work examples. A generic alt text generator would only say “apple tree,” missing the point entirely. That’s an area where AI isn’t yet up to par.
My experiments haven’t produced results faster than writing alt text myself. In those cases, it makes more sense to default to doing it manually. Still, I keep testing because AI might eventually improve, especially with better prompt strategies. One idea is to have AI ask clarifying questions to gather the context it needs. For simple images, that’s unnecessary and slower than writing alt text directly. But for complex visuals, or for tasks like audio description, it might add value.
That’s why I already use this questioning strategy with transcripts. I ask AI to flag any names it thinks are misspelled, any scientific terms that look wrong, or anything that doesn’t make sense. Often, it catches errors where YouTube misheard a word, especially in videos heavy with specialized terminology. Sometimes I confirm the AI’s suspicion was incorrect, but many times it’s right and helps correct important mistakes.
So, while AI doesn’t save time on straightforward alt text, its ability to ask questions and surface possible errors makes it useful for complex, context-heavy educational content.
Lance Eaton: I’m curious if you’ve tried doing a custom GPT and giving it a clear intention about coming up with alt-text for courses; then uploading a batch of images and alt-text to prime it. Could that create better results?
Jason Bock: Arizona State’s partnership with OpenAI is a good example. Their tool for generating both alt text and extended descriptions of equations, tables, and charts shows that with enough context and integration, AI can handle more complex cases effectively. That aligns with your idea: batch processing plus course-level context can improve results.
For instance, starting a session with, “These are all chemistry course images,” sets a baseline. Then, instead of asking for one answer, prompt AI to give two or three options for each image. Reviewing variations and merging them into the final version is faster and more reliable than working from a single generic response.
This approach plays to AI’s strengths: context memory, rapid iteration, and pattern recognition across multiple items. It won’t replace human judgment, but it can reduce workload when you’re facing a large set of complex visuals.
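A hedged sketch of what that batch approach could look like in code, again with the OpenAI Python client; the course context, prompt wording, and file names are invented for illustration:

```python
# Sketch: batch alt-text drafting with course-level context; the model
# returns a few options per image for a human to merge and approve.
# Model name, prompts, and paths are illustrative assumptions.
import base64
from openai import OpenAI

client = OpenAI()
COURSE_CONTEXT = "These are all images from an introductory chemistry course."

def draft_alt_text(path: str) -> str:
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative vision-capable model
        messages=[
            {"role": "system", "content": COURSE_CONTEXT},
            {"role": "user", "content": [
                {"type": "text", "text": (
                    "Offer 2-3 alt-text options for this image in the course "
                    "context above. If the image adds no meaning, say "
                    "'decorative'. Ask a clarifying question if you need "
                    "more context."
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ]},
        ],
    )
    return resp.choices[0].message.content

for img in ["figure1.png", "figure2.png"]:  # a human reviews every draft
    print(img, draft_alt_text(img), sep="\n")
```

Asking for multiple options per image, rather than a single answer, is what makes the human merge step faster than writing from scratch.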
Join us back here for part 2 of the interview, where Jason will dive into some of the work he’s been doing with students.
The Update Space
Upcoming Sightings & Shenanigans
I’m co-presenting twice at the POD Network Annual Conference, November 20-23:
Pre-conference workshop (November 19) with Rebecca Darling: Minimum Viable Practices (MVPs): Crafting Sustainable Faculty Development
Birds of a Feather session with JT Torres: Orchids Among Dandelions: Nurturing a Healthy Future for Educational Development
Teaching in Stereo: How Open Education Gets Louder with AI, RIOS Institute. December 4, 2025.
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
The AI Diatribe with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin, and Allison Papini. EDUCAUSE Review (May 2025).
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International