“To me, it’s an access question.”
Part 2 of an interview with Emily Pacheco
In the last post, we were talking with Emily Pacheco. You can read that to learn more about where and why GenAI is showing up in the college admission space, or jump right into this one, where we discuss some of the concerns and challenges around GenAI in admissions and applying to college. If you have thoughts about AI and education, particularly in the higher education space, consider being interviewed for this Substack.
Lance Eaton: Let’s move into the actual applying stage. Where do you see AI’s role in that process?
Emily Pacheco: Yeah, that’s definitely a place where people usually start in this conversation. When people hear AI and college admission, that’s often where their first thoughts go. I’ve been in that space for a while, and after probably my eighth or tenth presentation on AI in admissions, someone at a conference raised their hand and said, “When we think about ethics around AI in college admission, isn’t it really just the essay we’re concerned about? What else do we care about?”
And honestly, that caught me a little off guard as I had been thinking about all the implications. But they were kind of right. Do we care if a student uses AI to brainstorm extracurriculars? As long as they’re not fabricating anything, that’s fine. Do we worry if it’s being used to help with the college search? No, not really. The main concern when it comes to student usage is in the admission essay writing process. That’s where much of the ethical debate is happening. There is definite concern around students’ usage of AI to write their essays, but also questions about AI being a part of reading and reviewing those essays, and concerns that AI might be making the final admission decision. It creates this perception that humans might become completely removed from the loop.
That’s definitely a big part of the conversation, and there are legitimate concerns there. But to me, the bigger question is: Is the essay even worthwhile anymore when there’s a technology that can write a really good one? Is it really the best indicator of student success? Is it helping us determine if the student would be a good fit for our institution?
It feels a bit absurd that we spend four years teaching high school students to write in a variety of forms: analytical essays, persuasive arguments, research papers, and informational writing, only to ask them, in their senior year, to produce a deeply personal, self-reflective piece and suggest it might be the most important thing they’ll ever write. Many spend months polishing this one essay that, truthfully, in most cases, won’t make a significant difference in their admission decision. Is this the best use of their time? Does this final product really help admission offices build a solid cohort of students?
The reality is that about 70% of schools don’t take the college essay into account when reading applications. For the remaining 30%, usually those with very competitive admission rates, the essay only meaningfully impacts a small fraction of applicants, maybe 15% at most. Yet we continue to devote a massive amount of time and energy to a process that is only decisive in a limited number of cases. Considering how easy it is now to produce a strong college essay with the help of AI or an expensive college coach, it’s hard to argue that the essay is still serving its original purpose. But AI didn’t break this system. Students with access to quality college counseling have long been getting essay coaching, some of it well within ethical bounds, and some clearly crossing the line. It’s been clear for a while that many college essays aren’t written by the student alone, yet we haven’t been penalizing students for that.
I see AI as a tool that could actually help level the playing field, giving all students access to the kind of support that was previously only available to a few. It’s interesting that we’ve largely stopped questioning the fairness of over-coached essays written with human help, yet we’re suddenly alarmed by the idea that a computer might offer the same advantage to everyone.
I believe there are far better ways than the essay to evaluate applicants and understand how they’ll thrive at our institutions. I’m hopeful that colleges will begin exploring new, more meaningful ways to assess students, approaches that reflect the full range of their potential and readiness.
Lance Eaton: I feel that parallel completely. The college essay feels like the same genre as the cover letter. This strange form of writing with rigid rhetorical expectations. You have to humble yourself, say all the right things, and hit the perfect prescribed tone. It’s performative. So, yeah, there’s an issue with this form of writing!
Emily Pacheco: And why? What’s the point? What are we actually gauging with the cover letter? What are we looking for that makes us say, “There it is, that’s the perfect candidate”? Said nobody. It’s totally just a hoop. The same goes for college essays.
I get asked all the time, “You must be reading so many essays that are obviously written by AI. Have you noticed that change?” My response is: we’ve been reading essays for decades that weren’t written solely by the student. For 30 or 40 years, many of these essays have been shaped by college counselors, private coaches, or even parents.
Can we really say the student wrote those essays? How often do I read something and think, “This sounds like a 17-year-old”? Almost never. And honestly, now when I suspect AI has been used, I think, “Good job.” The student is making use of the tools available to them, which, unlike private college counselors, are accessible to everyone.
To me, it’s a question of access. Everyone suddenly has the same tools, and now gets help that was previously unavailable. For years, we have known that wealthier students have help, such as parents who pay thousands for essay coaching, and nobody objected. But now that all students can access help for free, it’s suddenly seen as unethical.
The AI detection issue drives me crazy. The University of California system is one of the few that openly admits to using AI detection, flagging essays, and denying admission because they suspect AI use. I’ve asked their admissions counselors, “Are you also flagging essays clearly written by parents or coaches? What’s the difference?” It’s such a blatant inequity. That practice is, in my view, discriminatory.
Lance Eaton: I love that point. It’s such an important observation. We only seem to care now that everyone can use the same tools.
Emily Pacheco: Exactly. I don’t understand how the UC system can say, “We care if you use AI,” when they’ve rarely, if ever, flagged essays that are clearly not written by a 17-year-old. What’s the difference? That one really fires me up. The energy people spend on AI detection, believing it’s a viable path forward, is completely misplaced.
Lance Eaton: Ok, you’ve highlighted some really important and helpful ways AI is changing things. Where are the worry points for you?
Emily Pacheco: Yeah. I mean, that’s it: the students who get it and the students who don’t. You mentioned your institution, where everyone has access to Claude, and that’s great. But at the high school level, it’s rough right now.
At the university level, there’s at least some acknowledgment that AI will be a part of our students’ future. But most high schools are frozen. Many are outright banning it because they don’t know what to do with it. That’s one of my biggest concerns: who gets access and who doesn’t, and what kind of training high school students receive.
As we’ve seen, it’s the same students who already had privilege and access to resources who are now getting access to these new tools and training. The private schools I visit are the ones doing the most innovative things with AI. The public schools don’t have the same capacity; they don’t know what to do with it, so their response is often to ban it.
And it’s not just banning. Many are also instilling fear and disgust toward it, calling it a “cheater’s tool” or something that will lead to “brain rot.” They’re trying to scare kids away from it because they don’t know how to teach them how to use it. That’s really concerning.
Lance Eaton: “Scared straight,” but with the AI emphasized. “ScaAIred StrAIght”.
Emily Pacheco: Exactly. I can picture that on a poster at a school.
That’s a huge problem right now. And then, of course, there’s the issue of authenticity. I hear this constantly: “You must be seeing essays that are no longer authentic.” And my question is always, what do you mean by authentic? The way we use that term as educators confuses me.
Educators are telling students, “Just make sure your essay is authentic.” And with AI, they frame that as a risk: “If you use it, your authenticity is gone.” And I’m like… what does that even mean?
Lance Eaton: Talk about code-switching; there’s a whole layer of coded language and hidden curriculum in that.
Emily Pacheco: Right, because those things aren’t the same. When people use “authenticity,” what they really mean is, “Don’t plagiarize.” What they’re trying to say is: don’t copy and paste directly from ChatGPT, and don’t just throw in a generic prompt with no context about who you are and expect a solid response, because what comes out won’t be relevant or specific to you. That’s the real point, but they’re not saying it that way.
So there’s this whole question of authenticity: what does it actually mean in writing? And is it still important to teach writing the way we always have? That’s going to be a huge hurdle in the next five to ten years. And honestly, there are a lot of very sad people out there: the whole writing community.
Lance Eaton: Yeah, there’s a lot to figure out.
Emily Pacheco: There is. And I get it. When you’ve dedicated your entire career to writing and to teaching creative expression, it’s hard. Even in the network I work with, independent educational college consultants, a large portion of their work centers on the essay writing process. Check out The College Essay Guy for a great example of this. That’s what many value in their work, that’s their expertise: “I coach kids in essay writing. I’ve done it for 20 years. I have a process that really works.”
So when you come in and say, “Guess what? That may no longer be relevant. We need to rethink the entire process,” people don’t want to hear it. And I understand that. It’s a reckoning.
Lance Eaton: Yeah. The shakeup is about reimagining the kind of knowledge work we’ve always done, while still trying to figure out what knowledge work even looks like now, and what it can become. It’s going to be a struggle for many people.
One quick follow-up on that. You’ve talked a lot about the human side of this. Are there any aspects of the AI tools themselves, or specific tools, that generate concern or hesitation for you?
Emily Pacheco: Yeah. Going back to what I mentioned earlier about how universities do their outreach. This is where things also get concerning. The way schools market to students is increasingly based on the data points they collect from them. And that’s a slippery slope.
Would you want your kid in a system where they only get the “right” information if they click the right things, if they happen to be on the right platforms, or if they interact with certain technologies?
Suddenly, they end up on a VIP list because Yale or another university buys that data and decides, “These kids are more valuable.” Those students then receive a completely different recruitment experience.
That’s deeply worrying. We already have inequities in how admissions outreach happens. Even before AI, humans made biased choices, visiting only the wealthiest high schools, focusing on areas with the most financial return. Now, when those decisions are even more driven by algorithms, that bias gets automated and amplified.
Think about it: Max, a student in Palo Alto whose parent works at Oracle and who’s active on the Khan Academy app, will have a completely different digital footprint than Jane, a student in San Jose at a public school whose parents didn’t go to college. Their college search experiences will diverge dramatically, shaped by how AI interprets and targets them.
Lance Eaton: Yeah, it really reminds me, if you’re not already familiar, of Chris Gilliard’s work. He coined the term digital redlining. What you’re describing is exactly that, just amplified by AI. Where and how people show up or don’t in the data determines whether they’re even communicated with. It becomes self-reinforcing.
Emily Pacheco: Exactly. It becomes perpetual. You’re basically born into a digital profile. Where you’re born, who your parents are, your neighborhood, all of that starts shaping what you see online, what you interact with digitally, and that changes how institutions see you. It’s like you’re automatically sorted into a client box, and from there, every piece of information or opportunity you encounter is filtered through that identity. That’s a huge concern.
The other issue that comes up a lot is how universities are starting to use AI as one of their application readers. Reading essays takes an enormous amount of time and resources, so many institutions have started experimenting with AI readers. Few are talking about it publicly, but some are, and I give credit to those that are transparent.
For example, Virginia Tech did it right. They ran a two-year pilot comparing AI readers with human readers behind the scenes. This fall is their first year actually using AI as one of their readers. But before implementing it, they went public with their plan, explaining how they were monitoring for bias and adjusting their systems accordingly.
Then there’s the opposite approach, which I’ve personally seen at an institution I worked for. They weren’t transparent at all, and it was honestly terrifying learning what was going on behind the scenes. They assigned each applicant an AI-generated score, a number between one and five, and told their human readers that their ratings had to match the AI’s score.
So instead of the AI learning from people, the people were being trained to match the AI. Now that is seriously problematic.
Lance Eaton: Boom. I don’t think they understand how that works.
Emily Pacheco: It’s terrifying. And they weren’t even explaining to their readers what the number represented. I knew what it was; it was an AI-generated score assigned to each applicant, but it was impossible to see how the number was generated. And they were telling human readers that their scores needed to be within a certain numerical distance of that AI score. One reader who consistently did not match the AI score was called out on it in her performance review.
So yeah, these are two completely different approaches. In Virginia Tech’s system, the human assigns a score without seeing the AI’s number, and then they compare them afterward. If the numbers are too far apart, they bring in another human reader for a final human review. That’s the right way to do it.
There’s huge potential here if it’s implemented responsibly. Because honestly, humans are not great at reading applications objectively. We all carry bias. I’ll admit, if a student writes a highly political essay that I don’t agree with, it’s hard to be completely neutral about that. Computers don’t care about that in the same way. They can be calibrated, they can be monitored, and you can actually see where the bias shows up in their data, something that isn’t as easy to recognize in our human readers.
People forget that you can’t take bias out of humans, but you can at least measure and correct for it in an AI system. That’s what some institutions are doing: comparing AI scores with human scores, tracking when they diverge, and flagging cases for review.
And honestly, this help is needed. Admissions readers are burnt out. I was reading between 40 and 70 applications a day, spending an average of two to five minutes on each. You’re scanning, not analyzing. You’re missing nuance. I think AI could possibly do that better than we can, if used correctly. So it’s both a concern and a major area of potential.
Lance Eaton: Thank you. Final question: What are you chewing on right now? Where’s your mind focused these days? Are you working on something new?
Emily Pacheco: I’ve really been grappling with this idea that so many people are paralyzed by this technology—by all of it—because they can’t see the future. And that’s understandable. Humans like what we can see and understand. But this is the opposite of that. We’re talking about something new, transformative, maybe even disruptive, and we keep using words like “exciting,” “different,” and “scary,” yet we’re not really painting a picture of what it looks like.
I’ve been thinking: let’s tell the story. Let’s imagine it’s 2035. What does college admissions actually look like? I’ve spent some time recently writing stories from the perspectives of a university admissions officer, a college counselor, and a student in the future. Seeing it written out was fascinating and, honestly, beautiful to me. It made it real.
But I’ll admit, parts of it were also unsettling. Some of what I imagined was a little scary. I’m curious to see how others interpret these stories. My goal is to find a way to share them more broadly, to invite others into this informed educational fiction to help them see what our future might look like and imagine how we could be a part of making it what we want it to be.
In December, I’m doing a sort of trial run of this: a “future of admissions” session. I thought the timing was perfect for the end of the year. I’m going for a Charles Dickens approach: “the ghost of what’s to come”. I’ll tell three stories from different perspectives. Because I think that narrative framing can make it easier for people to engage with what’s coming, to see that this future might actually solve some of the persistent problems we have in higher ed.
Maybe if we can paint that picture, more people will feel willing to join the conversation instead of resisting it. And I think the ten-year frame is especially interesting. It’s far enough to stretch our thinking, but close enough that most of us will still be here to see how much of it came true or didn’t. Maybe we’ll say, “We were totally off,” maybe we’ll be surprised by what emerged that we couldn’t have predicted.
I’m planning to present this as a webinar, but I’d also like to turn it into an in-person event next year. Not just about college admissions, but bringing in others from across the education space. I recently met someone from San Francisco Bay University. They’re doing some really interesting work with program design, and that conversation sparked the idea. I’d love to organize a panel of three or four people who are pushing the boundaries of what higher ed could look like, people doing work that’s not the typical kind, and come together to talk about what the future looks like.
Lance Eaton: I really like that on several levels. First, it reminds me of Jane McGonigal’s Imaginable. She’s written about the gamification of life and how it can make things more meaningful. That book, Imaginable, is really about future-casting and creating games that help people think about possible futures.
And what you’re describing sounds similar. Imagine a workshop where the first part is just envisioning what higher ed might look like in ten years across different sectors. Then, move to five years out, and then, finally ask, “How do we get there?” That kind of arc could be powerful, especially if you’re bringing in multiple perspectives: admissions, alumni, faculty, libraries, all in one space. It could be such a rich conversation.
If you start to develop this, please let me know. I’d be deeply interested in being involved.
Emily Pacheco: Yeah, yeah—you nailed it. That’s exactly what I’ve been thinking. The idea is to help people picture it first, to really see what the future could look like, and then work backward: what steps do we take next year to get there?
Creating that visual, narrative picture helps people say, “Wow, this looks exciting. I want to be part of that.” And yes, bringing in multiple perspectives is key.
I’m imagining this as something flexible, not just one event. It could include panels, workshops, different formats.
Lance Eaton: It could even be a toolkit: something people could use if they want to experiment with future-casting and help others feel grounded in what’s ahead.
Emily Pacheco: Exactly. And we could tailor it for different conferences or audiences, making it relevant for each space. That’s what has been percolating for me. I’m starting with admissions in December, then expanding it into a broader higher ed conversation, and eventually bringing it to conferences.
This all comes from seeing how afraid many people are to even start the AI conversation. I keep asking, “How do I get them there?” That’s why I write, that’s why I present. I’m trying to reach people where they are and help them start thinking about AI in accessible ways.
If we can show people what the future could look like and that we all have a role in shaping it; that’s powerful. That’s really what I’m about.
Lance Eaton: Love it. Awesome. Thank you so much.
The Update Space
Upcoming Sightings & Shenanigans
Continuous Improvement Summit, February 2026
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
David Bachman interviewed me on his Substack, Entropy Bonus (November).
The AI Diatribe with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
“In the Room Where It Happens: Generative AI Policy Creation in Higher Education,” co-authored with Esther Brandon, Dana Gavin and Allison Papini. EDUCAUSE Review (May 2025)
“Does AI have a copyright problem?” in LSE Impact Blog (May 2025).
“Growing Orchids Amid Dandelions” in Inside Higher Ed, co-authored with JT Torres & Deborah Kronenberg (April 2025).
AI Policy Resources
AI Syllabi Policy Repository: 190+ policies (always looking for more; submit your AI syllabus policy here)
AI Institutional Policy Repository: 17 policies (always looking for more; submit your AI institutional policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form and I’ll get back to you soon!
We periodically host small-group workshops and leadership sessions for higher ed teams. You can learn more about our current offerings here.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International