In the last post, we started a conversation with Rob Nelson—you can catch up there about some of his practices in the classroom or jump right into this one, where he talks about what’s next on his bucket list for AI and education.
If you have thoughts about AI and education, particularly in the higher education space, consider being interviewed for this Substack.
Part 2
Lance Eaton: Ok, shifting gears, let’s talk about your career shift in this last year. Share more about that.
Rob Nelson: On one hand, it was time to move on from a job I had been doing for a long time, but on the other hand, I just got so interested in questions about the educational value of generative AI. It felt like an opportunity to make positive change in the institutions where we teach and learn. The writing I was doing online gave me a sense that I had something to say. I just followed that sense of purpose into a decision to say, “I'm done being a full-time bureaucrat. I'm going to talk with people about this stuff, write about it, and teach about it.”
Lance Eaton: When I think about what you're doing now alongside that background, I'm curious. I know there's the student part of this, but you've been working at this intersection of higher ed and technology integration. What comes from that experience and all that work?
Rob Nelson: Wearing the hat of a bureaucrat for 18 years and being involved in a lot of enterprise implementations taught me a great deal about how different members of the university community experience what enthusiasts call “the digital transformation.” During the past two decades, we have moved from pencil-and-paper processes run by humans to semi-automated software programs that are available 24/7. For most of us, that experience has been bad. It used to be that you could call a person on the phone, or go meet with them, and have them help you with your payroll, with your grade submission, with whatever it was you needed to figure out. Now, whatever you're trying to do, that experience is all mediated by digital software. And it's software that was designed to help people in the back office more than to help the people who are teaching our students or running the academic operations of a department or a school. That set of experiences, and what I see happening with AI, leaves me very skeptical that any institution is going to be able to manage the adoption of new tech effectively, and definitely not as quickly as the AI companies and the enthusiastic early adopters seem to think we should be able to do it.
Lance Eaton: What's the thing where you're like, “Shit, we'll have really gotten this wrong if X happens or Y happens”?
Rob Nelson: California State University just decided to give everybody ChatGPT as an educational experiment. They invested just under $17 million to do that, and I don't think anybody has a firm sense of what educational value ChatGPT is giving anybody, teachers or students. This sort of wild investment in an untested tool feels to me like the worst-case scenario in the sense that there's not much thought or guidance. The faculty weren't prepared for it, and the students all have it now. Not that they didn't already, but it is specifically sanctioned by the university system, and students are being encouraged to use it for their studies. I just think that kind of massive adoption without any thought or planning is the worst case. And that's what we appear to be doing in many places.
Lance Eaton: What makes that the worst case? I ask because, to what degree is there any external knowledge of what that decision process was?
Rob Nelson: I pick on CSU because they're a state institution, and they have sunshine laws that require them to do things like say how much they paid for it and make other disclosures. Look, the reality is these are experiments. This will play out in ways that are fundamentally uncertain. I try to remind myself that we just don't know. Here is what would make it the worst case: if we find out that the use of ChatGPT really does interfere with people's ability to think in educational environments, that substituting a ChatGPT prompt for the process of writing or learning leads to a diminishment of the student experience. A university or university system that promoted this as a learning tool would have a lot to answer for.
Lance Eaton: I'm curious if the only way to figure that out is to actually give it to students.
Rob Nelson: Let me back up. I should actually make this clear. My objection is the idea that an AI strategy involves spending $17 million on enterprise licenses for all your students and teachers. That $17 million could have gone to something else. It could have helped rescue some research programs that were biting the dust, but it also could have gone to funding the kind of small incremental projects that yield insights that then get moved through the organization. In other words, not just grant programs where you get a budget code and $500 to go do something with AI, but genuine support for experimental learning (call it an incubator if you want). A place, virtual or real, where people come together, share what they're doing, and learn from each other in cross-functional teams. So it's not just $500 and go teach your class with AI. It's an educational technologist and an instructional designer who will help you think about how to use AI, plus support for sharing what you learn.
Lance Eaton: Yeah, it's an intentional supportive process, not just “good luck.”
Rob Nelson: Yeah, exactly. That's where the money should be flowing, not to enterprise licenses for ChatGPT.
Lance Eaton: Ok, you're bringing that 18-year experience of thinking about systems and integration to thinking about AI and education. What is some of the work that you’re doing now, and what are you eager to sink your teeth into?
Rob Nelson: We've talked a lot about my classroom experience, but I'm really interested in the ways that AI can improve academic operations. There's a lot of education that happens outside the formal curriculum. Every time a student or a faculty member interacts with one of those digital platforms, that's an educational experience. It's usually a frustrating one, but it's an educational experience. I'm really interested in working with companies and institutions that are trying to find out what value large language models and other forms of generative AI have for the good functioning of a bureaucracy and administration.
Lance Eaton: What would be an example within this?
Rob Nelson: This has been my dream from the very beginning. I'm convinced that these things really are a better natural language interface. Yes, they have some problems: hallucinations, confabulations, whatever you want to call them. They're not 100% reliable, but it's very clear to me that with some of the technology we have in place, you can put a natural language interface or search on just about any data set you want. Students never read policy handbooks, never go to websites and pore over the wonderful web pages that our colleagues create. There is all this unused information designed to introduce students to what higher education is. I think a natural language interface to a student handbook or the course catalog would be a way to help somebody who comes to an institution without knowing the local, bureaucratic vocabulary, who couldn't name the offices or speak the institutional language or even think to look at a handbook, but who has questions and wants to learn how to navigate the institution. That's my dream: a natural language interface to a university catalog.
Lance Eaton: Yes! I've thought about this a lot. AI can open up the hidden curriculum in a lot of parts of life and act as a hidden-curriculum decoder for institutions.
Rob Nelson: I think that's an area where ChatGPT is really dangerous on its own. It may be able to incorporate some of your course catalog into its answers, but it's not going to be limited to finding those answers. In my dream, that natural language interface wouldn't just be telling students things. It would be pointing them to specific passages in a student handbook that answer their question, or going to find a department's website and saying, “If you want to major in art history, here is the specific web page on the history department's site that tells you how you should think about that.” And crucially, saying here is a human who really wants to talk to you about this. Make an appointment!
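To make Rob's dream a little more concrete, here is a minimal sketch of what a grounded catalog interface could look like. Everything in it is hypothetical: the handbook excerpts, URLs, and contact addresses are invented, and simple word overlap stands in for the embedding search and LLM summarization a production system would actually use. The point is the shape of the answer Rob describes: a specific passage, the page it came from, and a human to make an appointment with.

```python
# Toy sketch of a grounded "catalog Q&A" flow: instead of letting a model
# answer freely, retrieve the most relevant handbook passage and return it
# with its source page and a human contact. All data below is invented.

from dataclasses import dataclass

@dataclass
class Passage:
    text: str          # excerpt from the handbook or catalog
    source_url: str    # page the excerpt came from
    contact: str       # the human office a student should actually talk to

# Hypothetical excerpts; a real index would be built from the institution's own pages.
HANDBOOK = [
    Passage(
        "Students declare a major by meeting with the department's undergraduate "
        "chair and filing the major declaration form with the registrar.",
        "https://example.edu/handbook/declaring-a-major",
        "Art History undergraduate chair (arthistory-ugrad@example.edu)",
    ),
    Passage(
        "Tuition bills are issued monthly; payment plans are arranged through "
        "Student Financial Services.",
        "https://example.edu/handbook/billing",
        "Student Financial Services (sfs@example.edu)",
    ),
]

def answer(question: str) -> str:
    """Return the best-matching passage, cited, plus a human to follow up with."""
    # Crude retrieval: score each passage by shared words with the question.
    q_words = set(question.lower().split())
    best = max(HANDBOOK, key=lambda p: len(q_words & set(p.text.lower().split())))
    return (
        f'From the handbook: "{best.text}"\n'
        f"Source: {best.source_url}\n"
        f"A person who wants to talk with you about this: {best.contact}"
    )

if __name__ == "__main__":
    print(answer("How do I major in art history?"))
```

Constraining the system to return passages it actually retrieved, with their sources, rather than whatever a chatbot generates, is what guards against the danger Rob flags above.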
Lance Eaton: For folks who are going to read this interview, what are the questions they could ask that you'd be hungry to answer, ones we haven't explored or that you'd just like to be in conversation around?
Rob Nelson: The two ideas I've run across this year that I'm working with are AI as a normal technology and AI as a cultural or social technology. Arvind Narayanan and Sayash Kapoor, the authors of AI Snake Oil, are now writing about AI as a normal technology. The other one I won't talk about because it is so complex, but it's about understanding AI as a cultural or social technology. (For those interested, check out Large AI models are cultural and social technologies by Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans.)
The idea of AI as a normal technology invites us to think about it as a set of digital tools or an extension of the set of digital tools we already had, instead of thinking about it as an intelligence or another mind or an assistant or companion or servant or whatever. So for me, the question is, okay, if we take that frame and think about it as an expansion of our digital toolbox, where does that lead us? What aren't we thinking about yet? Where are some places we can find value in the classroom or in an administrative office that yield some positive result, some improvement, or some way of making our lives better? This does not need to be about efficiency or optimization. It could be better feedback mechanisms or more effective collaboration.
Lance Eaton: I love it. It just reminds me of one of the ways I've been thinking about AI recently and talking about with leaders at institutions. What if we think of GenAI as a utility? Just like with the internet and computers, we started to build a plan for how to make those things part and parcel of our institutions because there was a need for them, which supported different processes and things going on. It's one of those things where, as you said, the digital transformation happened across all universities and colleges over the last 30 years.
Rob Nelson: And the key for me there is that we need to give ourselves a long runway. It's going to take a long time to figure this out.
Lance Eaton: I think it's accepting that this is a long process and that it needs a first step.
Rob Nelson: This idea that we have to have it figured out tomorrow is just wrong. We should give ourselves permission to experiment, to make mistakes, to realize that we've taken a wrong turn or that this application doesn't work. And just give us time to really explore and innovate.
Lance Eaton: That's definitely where institutions have been struggling: getting to that point of knowing what the right step is. It's just hard to know.
Rob Nelson: As I said, the first step is to talk to your students about AI, especially in low-stakes environments where they're not worried about how it's going to affect their grade. The other thing is to agree we're going to be in this for the long haul.
Lance Eaton: Long indeed. Thank you, Rob!
If you want to hear more from Rob, check out his Substack and keep up with his work and his thoughts!
The Update Space
Upcoming Sightings & Shenanigans
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Public AI Summit, Virtual, Free from Data Nutrition Project & metaLAB at Harvard. August 13-14, 2025
AI and the Liberal Arts Symposium, Connecticut College. October 17-19, 2025
Recently Recorded Panels, Talks, & Publications
Dissertation: Elbow Patches to Eye Patches: A Phenomenographic Study of Scholarly Practices, Research Literature Access, and Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education” co-authored with Esther Brandon, Dana Gavin, and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
Bristol Community College Professional Day. My talk on “DestAIbilizing or EnAIbling?” is available to watch (February 2025).
OE Week Live! March 5: Open Exchange on AI with Jonathan Poritz (Independent Consultant in Open Education), Amy Collier and Tom Woodward (Middlebury College), Alegria Ribadeneira (Colorado State University - Pueblo) & Liza Long (College of Western Idaho)
Reclaim Hosting TV: Technology & Society: Generative AI with Autumm Caines
2024 Open Education Conference Recording (recently posted from October 2024): Openness as Attitude, Vulnerability as Practice: Finding Our Way With GenAI, with Maha Bali & Anna Mills
AI Policy Resources
AI Syllabi Policy Repository: 175+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International