"It’s building on each other’s work in a very immediate way."
Interview with Anna Mills, Part 2
In the last post, we were talking with Anna Mills. She shared about her recently published OER book on AI and Writing for students, along with how and why it was created. In this part of the interview, we explore some of her process and how this became an open educational resource.
If you have experiences around AI and education, particularly in the higher education space, that you would like to share, consider being interviewed for this Substack. Whether it is a voice of concern, curiosity, or confusion, we need you all to be part of the conversation!
Lance Eaton: I want to ask about that tension you mentioned: the instantaneous engagement that social media offers, and the way that creates a very present community. It’s great how, while you were working on this, you also found ways to make it social by sharing drafts. But I’m curious, as one incredibly busy person to another: how did you manage to start and finish this, and what was that process like?
Anna Mills: Having two rounds of grant funding from the Chancellor’s Office initiative really made it possible. That structure mattered. I had deadlines and a supervisor, so I could say, “Okay, this is real.” I basically took two summers to do it. I’d done an OER textbook on argument before, so I had some sense that if I have a little structure and guidance, I can do this. I built on that experience.
Sharing drafts very early on in Google Docs and having people respond was really sustaining. I don’t know if I could write a book in the traditional way, in isolation. I needed that response and collaboration with people saying, “Hey, I found this Google Doc, I like how she talks about this, I’m going to see what my students think.” That kept me going.
It was really that combination. And yeah, a shout-out to the California Community Colleges Chancellor’s Office. I could honestly use another round of grant funding to take it further. But even beyond that, it showed that if you give faculty some space, even a nominal amount of space, at a relatively low hourly rate, and you believe in their vision and their ability to create something others can use, that’s worth investing in. I’m really glad they did that.
Lance Eaton: Along those lines, you worked on this for two summers, both independently and with feedback from collaborators. Was AI used in any capacity in creating this OER?
Anna: I mean, I’m always debating how I should use it. I’m committed to transparency. That’s part of the ethos of the book, because that’s what I’m asking of students, and I have to model that. But I also felt like students might be less interested in reading it if they knew parts were AI-generated, so that was a bit of a disincentive.
Beyond that, I found I really wasn’t tempted to use it for creating the text, or even for outlining or brainstorming. This had to come out of my experience in the classroom and in social media discussions. It’s a synthesis. It was a chance for me to think through, what would I really say to a student? That process felt necessary and generative (pun intended!).
There wasn’t a point where I felt stuck and thought, “I need AI to write for me,” or “I need it to tell me what to say.” It felt like, no, I need to figure this out myself. I did use AI for feedback, though, and there was one place where I used it to generate discussion questions for a section. I labeled that clearly and included the chat transcript.
In some ways, I almost felt like I didn’t have time to use AI more. I do my own thinking first, and then maybe at a later stage it feels useful to ask AI, “What could I do with this?” or “How could I reformulate this?” or “How could I build on it?” But I had to get the ideas and the message clear through my own processes first.
Lance: Thinking about this as a work that’s for students, to what degree did you imagine this also as a text that exists in the world for faculty? Was there any thinking about how this might help faculty move from that space of “What am I trying to teach and evaluate?” Was part of this also for faculty who are struggling?
Anna: There was definitely the hope that it would make people feel like it’s okay to share what they’re conflicted about. I hoped faculty might draw on a section here or there, assign something when it complemented how they were addressing AI in their course, and that it would open up discussions in their classes and help those discussions feel more open-ended.
I also really wanted to combine that curious exploration with concrete guidance. So I wanted faculty to feel they could both say, “Here’s my evolving thinking, and I invite you to engage,” and also, “Here’s what I’m suggesting right now in my class,” in a way that’s easy to follow.
It’s a combination of saying, here we are in this landscape together and it’s interesting and overwhelming, and here’s something to hold onto for the moment, when you just want to move things one step forward.
Lance: What surprised you about the creation process? Was there anything unexpected that, looking back, you didn’t anticipate but are glad exists or that you’re still chewing on now?
Anna: Everything took longer and was harder than expected, but that’s always how it works for my writing process. I guess the idea that surprised me was the recommendation to use AI only for tutoring-style assistance; that’s not something I started out with.
Originally, I was going to use Leon Furze et al.’s AI assessment scale, classifying four different levels of use and suggesting different approaches at each. But as I worked on it, I felt it needed to be easier to understand and more actionable. I realized it could be simpler; I could stand behind a tutoring-style rule of thumb.
That was a really good experience: a big, complicated writing process leading to something that was, in the end, simpler and easier to communicate, which is always the holy grail for me.
Lance: I want to move us into OER, because how can we not? You’ve been a longtime advocate of OER, well before AI. Hearing that you were doing another OER textbook was no shock. But I’m curious, was there any hesitation? I ask because you’ve become well known and established in this space, particularly around AI and writing. Was there any thought about going the traditional publisher route, as others have? Was there tension there, and if so, how did you resolve it?
Anna: Yeah, definitely. I mean, it was exciting to be approached by one major publisher, and I was also asked to peer review an AI-related guide that another publisher was putting out. I’ve definitely debated it. There are real considerations around publicity and adequate, fair compensation, and I respect other people’s choices to work with commercial publishers. I can see the value in how they’re able to promote the work.
It’s kind of funny because I went the OER route in part because I’ve branded myself so heavily around OER on social media that I felt like I didn’t really have a choice. It would have been way too embarrassing.
But I also feel at home in the OER approach. In writing my first OER textbook, I reveled in the flexibility and control, and in how the work could keep living and growing organically. I could keep adding to my writing textbook, editing it, and if students gave me feedback, I could tell them, “Hey, I just changed that. I updated it right now.” That kind of ongoing, organic process is possible with OER, and it really fits my style and ethos.
Putting something out there with an open license means it doesn’t have to be perfect. It just has to be a useful contribution. Someone else can adapt it, change it, or edit it if they want. I find that incredibly freeing, and it helps me get past a lot of writer’s block and perfectionism. It lets me make more contributions, more often.
Lance: I get that. I’ve had colleagues and friends say, “You need to write a book on this,” and I respond, if I do, it’s going to be open. We can’t be in this space and suddenly say, “Okay, time to cash out,” at least not in the work we’ve been deeply involved in. It’s about leaning into the values and beliefs behind the work, and where it should live, especially in education.
Anna: For now, I feel like I can continue doing this work, offering faculty workshops that are compensated at a more reasonable rate, and that can sustain the writing and the time I spend on social media around these topics. I’m also grateful for the support of my partner, which makes these choices possible.
And honestly, I don’t think I’d be very good at selling my work in a commercial way. It’s just not my style. OER frees me to be more authentic: to say, “Here, I have this thing to offer,” without needing to justify its price.
Lance: I’m called to that aspect too. It feels much harder when your inclination is just to put the work out there. So in going with OER, was there anything about that choice that shaped how you approached the content or made you think differently about editorial decisions? You mentioned being able to edit it over time; were there other examples where OER really impacted your choices?
Anna: Yes. There’s one section that’s adapted from Long, Minervini, & Gladd’s chapter in their OER textbook on how to acknowledge and cite AI.
That’s one of the things I absolutely love about the OER ethos. If somebody has already written something and I really like how they did it, but I want to make certain changes to fit my vision, I can do that. I could just get right in there, edit their version, and say, “Okay, this is how it makes sense to me,” while also linking back to theirs and promoting their textbook.
So it’s not a competition. It’s building on each other’s work in a very immediate way. Even though I didn’t do that in too many other sections, I considered it. For example, in the section on which AI tools students should consider, I looked at Joel Gladd’s work and thought, “That’s really inspiring.” I considered adapting it, but I ended up doing my own thing. Still, that sense that I could adapt someone else’s work—that I don’t have to come up with the latest new acronym or framework—was freeing. If someone has already done it well, I can build on that.
Lance: Right—no need for a new acronym and all that.
Anna: Exactly. And I hope someone will adapt a piece of mine at some point.
I also really loved the public drafting process. It probably seems pretty crazy and radical. I was sometimes posting drafts almost as I wrote them, knowing they were rough. But I liked that sense of radical collaboration and vulnerability. I might be a little extreme about it, but it really fueled me.
Also, things in the field are changing so fast that even something rough can still be useful because it’s timely. Tools are changing, practices are shifting.
It also helped that I was participating in the Peer & AI Review + Reflection project. We were committed to producing OER materials that were both student-facing and faculty-facing. There was overlap and inspiration there, especially because the California Education Learning Lab was committed to making sure everything produced through the grants was OER.
So I could be writing something for the textbook and then talking with colleagues about it and asking, “Does this fit our project? Could we use these ideas?” Sometimes, it even fed into software design decisions, like changes to the user interface of the essay feedback tool we were using. I think the cross-pollination with other open projects and open-ethos work energized me.
Lance: Your approach of radical openness, and how you find it motivating, takes me back to the classroom. There’s a parallel between the publishing and editing process and the classroom space. We know that the longer the gap between when an assignment is submitted and when substantive feedback is given, the less care and interest there is in the learning itself. So it makes a lot of sense to me that this kind of immediacy would be motivating and help move your work forward.
Anna: Yeah, I’ve noticed that my students seem really excited about getting Google Docs comments in the margins since I’ve shifted to that. There was a nice parallel there. I’m writing and getting comments that way, and they’re writing, and I’m giving them comments.
Lance: As someone who’s been working in OER and AI, how do you feel about the likelihood of your work being swallowed up by AI tools and used without acknowledging your work or the text as its source?
Anna: I really think the companies could do more around acknowledgment. After something is generated, they could check whether there’s plagiarism or whether ideas need to be cited. That’s something AI systems could do a reasonable job of now. It’s a matter of investment on the companies’ part. I’d absolutely like to see that, and I do think it’s wrong that they haven’t tried harder to do it.
On a personal level, I don’t feel upset about my work being used as training data. I would just rather it not be plagiarized in the outputs, and I think that could be prevented technically, even if the training process doesn’t change. I’d like to see energy go toward advocacy for regulation or collaboration among companies to say, “Let’s do a better job of this.”
I don’t have a lot of hope that the training process itself is going to be effectively regulated, and I do like putting things out very publicly. I understand that other people have very different reactions to that, and some of this is probably personal disposition. But I do think there’s an opportunity not just to protest these systems and say they’re unethical, but to ask what could be done on the output side.
These tools are useful, and their capabilities are increasing to the point where they could check for plagiarism, fact-check themselves more, and consider who needs to be cited—what ideas require attribution. They could facilitate that, do some of it themselves, and help users do more of it. That’s where my instinct is: we can do a lot more to shape these systems. That’s what I’d like to see, rather than just polarization around “tech is evil.”
Lance: But that’s so much easier. Anna, come on! [in completely facetious tone]
Anna: I know. And so many people have given up on the question of what could be done, just out of frustration. But I tend to think there are some not-so-hard things that could be done, and maybe they actually will be, if we can muster the will to make that happen.
Lance: I have one last question. If this book does what you hope it will, what do you think will feel different in writing classrooms that are using it? What do you think will be perceptibly different in those spaces?
Anna: My hope, my dream, would be that students feel more confident and free to express a wide range of opinions, ideas, and questions about AI, and to think about possibilities and paths forward for their own use in classrooms. I’d also hope they feel reassured and supported by having something concrete to hold onto, such as “here are all the things we want to think about, all the possibilities, but for now, let’s use some very clear guidelines.”
For example: tutoring-style assistance, not creation of the document; not “give me all the ideas” or “organize this for me.” A grounded understanding of why the writing process itself is useful, including all its steps. And when we’re not sure, let’s keep things simple and, for now, use AI for feedback.
I’d hope that gives students a sense of stability, even as they explore the terrain and find their voices. And I’d hope they feel less fear and tension with instructors around “What should I not do?” and more of a sense that, okay, the teacher is figuring this out too. “Maybe I have something to contribute that they haven’t thought of, or maybe I don’t agree with them on something.”
Ideally, students would feel respected and feel they have a place in a larger conversation about how we should use AI, how our lives are changing with it, and what roles we want it to play. That sense of empowerment matters. And the best outcome, really, would be if they feel motivated to write their own reflections to share what they’re doing, how they’re thinking, and how they’re responding to what I’ve said.
The Update Space
Upcoming Sightings & Shenanigans
Continuous Improvement Summit, February 2026
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
Online Learning in the Second Half with John Nash and Jason Johnston: EP 39 - The Higher Ed AI Solution: Good Pedagogy (January 2026)
The Peer Review Podcast with Sarah Bunin Benor and Mira Sucharov: Authentic Assessment: Co-Creating AI Policies with Students (December 2025)
David Bachman interviewed me on his Substack, Entropy Bonus (November 2025)
The AI Diatribe Podcast with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your institutional AI policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form, and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International