“You have to experience things firsthand.”
Part 1 of an interview with Urmi Ashar & Jessica Mann
Introduction
Dr. Urmi Ashar is a physician, educator, and leadership coach who directs the Public Health Program in the John G. Rangos Sr. School of Health Sciences at Duquesne University. She works at the intersection of public health, higher education, and systems change, exploring how AI, community-engaged learning, and complexity science can reshape how students learn and how institutions serve their communities. As a Gaultier Fellow and faculty lead for the Council of Independent Colleges’ AI Ready initiative, she designs AI-enhanced “living labs” that center ethics, equity, and the future of work. A certified Gallup Strengths and iPEC Core Energy coach, she helps leaders and students build the self-awareness and adaptive capacity needed to stay fully human in an AI-driven world.
Jess Mann, Ph.D., serves as Assistant Vice President of Community Engagement at Duquesne University in Pittsburgh, PA, guiding initiatives that advance reciprocal partnerships and community-engaged learning across disciplines and communities. She also teaches in the university’s Education Leadership graduate programs. As a practitioner-scholar of student and faculty development, community-engaged teaching and learning, and democratic institutional strategy, she is particularly interested in the emerging role of AI in expanding access, easing administrative burdens, and deepening reflection within community-engaged teaching.
About the Interview Series
As part of my work in this space, I seek to highlight the folks I’ve been in conversation with or learning from over the last few years as we navigate teaching and learning in the age of AI.
If you have experiences around AI and education, particularly in the higher education space, that you would like to share, consider being interviewed for this Substack. Whether it is a voice of concern, curiosity, or confusion, we need you all to be part of the conversation!
Lance Eaton: When did you start thinking about AI, and where did that lead you?
Urmi Ashar: It really started when ChatGPT was first released. I distinctly remember being in the faculty lounge corridor when one of our colleagues reached out and said, “Holy smokes, ChatGPT was just released, and that is the end of all our assignments.” I had heard a little about what ChatGPT could do, and she was in a tizzy: “The students are going to use ChatGPT and our assignments will be useless as assessments.”
That semester, I had a student in my leadership class in the Master of Health Administration (MHA) program. This student submitted something that was just pure garbage. It did not make sense; it was just word salad. I remember thinking, if this is ChatGPT’s output, we do not have anything to worry about.
Then, in 2023, the next version was released. Around that time, we also had the Grefenstette Tech Ethics Symposium at Duquesne, which my students always participate in. My general approach has always been: if I am asking my students to do something, like prepare a poster for the symposium, then I need to play with it myself and get my fingers dirty. If I do not, I do not know what I am talking about. You have to experience things firsthand.
So I felt compelled to try using ChatGPT myself before I made any determination about its impact or its usefulness, and that curiosity eventually pulled me into redesigning how I was teaching and assessing in my public health courses.
Eaton: When did you start playing around with it for your own learning and classes?
Ashar: I had my first real conversation with ChatGPT in 2023, and I realized how different it was in just one year. That year, I ended up doing an experiment in class with my students. I teach public health courses, and I am the director of public health programs. Public health is built on understanding priority populations. Perspectives are everything because the same issue looks very different from a provider’s perspective versus a community’s perspective.
So I created four different scenarios in public health, each with a distinct persona deciding whether to move to Sheraden. Stepping into another person’s perspective is always a challenge for students who have led very sheltered lives: they tend to generalize their own experiences as everyone’s reality, and many students today lack much community experience. I created four personas, for example, a single mother caring for her elderly mother.
At the time, we were working with a neighborhood nonprofit and community hub called Jasmine Nyree in the Sheraden neighborhood, about 1.5 miles from Duquesne University. It is what we call a food desert and a service desert. Very little is truly accessible to the people who live there, and there are significant transportation barriers for people who cannot afford to own a car.
I asked the students to explore this neighborhood using three different chatbots. I did not prescribe which ones; they had to choose. They had to come back with a recommendation for each of the four personas on whether that person should move into that neighborhood.
Eaton: How did that go?
Ashar: The students undertook the exercise and came back with their responses. What I realized was that they were taking everything the chatbot presented at face value. I have been to that neighborhood, and I know what it is like. So we said, okay, let us look at the neighborhood more closely. How far is the grocery store. We used Google Maps, and Google gave a different set of answers. I asked them, how are you going to determine which one is right.
Eventually, they got there and said, “We will have to go and take a look.” I said that is one of the foundational things we do in public health. It is called a windshield survey. It sounds fancy, but it is really just getting in the car and experiencing the neighborhood for yourself.
Because I had a student doing her capstone project with this organization, we convinced her site mentor to take her through the neighborhood. She came back with a completely different story.
Not only that, but the grocery store, which was supposed to be 1.5 miles away, would actually take an hour and a half to walk to, because the area is not pedestrian-friendly.
Eaton: How did the students respond?
Ashar: The students realized that the terrain is very different from what the map suggests. They were surprised by how much the reality on the ground contradicted what the chatbots and Google had told them. For me, it was a realization: my goodness, this is fabulous. They were not just reading about a persona’s lived reality; they were stepping into that mindset and then testing it against the neighborhood itself, using ChatGPT as a thinking partner. The tool could simulate roles and perspectives, but the students were the ones inhabiting the personas. That is where my foray into using AI in this way really began.
Eaton: You were talking about personas that students interacted with. Were those AI personas?
Ashar: I created composite personas from my lived experience working with people. For example, I once worked with a veteran with an amputation and chronic pain who was struggling to find employment. I used that experience to create a persona of a veteran who developed an opioid addiction, is newly sober, and is trying to rebuild his life while parenting a two-year-old son. The central question was whether moving to Sheraden made sense for him.
All of this was rooted in a place-based context. Public health issues are not simple problems you fix with one answer. They are complex, adaptive issues that live inside complex adaptive systems. We were trying to understand neighborhoods through the lens of social determinants of health, and how where we live, work, and play shapes both opportunity and health. Many of us have become disconnected from how social determinants influence our choices and outcomes. One of the foundational principles of public health is that the communities we live in shape our health, and even our attitudes about what it means to pursue and sustain it. In public health, we often say that zip codes can become destiny, because people and places keep shaping each other.
In this assignment, AI helped surface that complexity. The AI was not the expert. It was more like a mirror that showed us the students’ questions, assumptions, and blind spots. They were inhabiting the personas, asking questions in the first person, and then checking those answers against the reality of Sheraden. I realized I was no longer standing at the front of the room as the “expert” with the right answer. I had shifted into a coaching role, sitting next to them, asking, “What did you ask, what did you get back, and what does that say about how you are thinking.” We were exploring the tool together and modeling how to learn, not just what to know.
Even in the classroom, students have told me, “It is only in public health classes that we are asked to figure out how to talk to other people, and I have never made as many friends in any other class.” This was even before AI entered the picture. I have had students visit me years later and say that because of my class, they got to know other people. I believe in group projects. I do not believe in silos. The AI assignment took that further. Students stopped trying to impress me with a single correct answer and started working as a team to make sense of a messy, real situation.
Eaton: Why do you think this is working?
Ashar: In a way, what we were seeing in that classroom was a small fractal of a larger pattern: how humans and tools interact inside a complex system, and how trust and curiosity can help a group reorganize its thinking together. The local pattern of one group of students making sense of Sheraden was reflecting something bigger about how our institution, and even our culture, is learning to live with AI.
My favorite way to explain this is with the body. If you have a paper cut that gets infected, it steals cognitive bandwidth from you. There is no boundary where you can say, “It is only my thumb.” It affects everything. Teams function the same way. One place of strain or mistrust pulls energy away from the whole. We are part of living systems. We need each other, and we are incomplete without each other.
That is why teams work best when there is real mutual trust. When trust is missing, you pay a tax in time, rework, emotional strain, and disengagement. You hold back, and the team never gets your best thinking. The same is true when we are trying to make sense of AI together. If we can lower the hierarchy between teacher and student, or faculty and administrator, and stay in the work as a team, AI can actually help us become better collaborators because it keeps throwing our thinking back to us.
I see that very clearly with my colleagues. I completely trust Jess as a collaborator. If I am not making sense, she will stop me before I go too far, or she will translate what I am trying to say so it lands with the audience we are trying to persuade. For example, she will help ensure our CIO understands our project, rather than walking away confused, thinking, “What are they talking about.” Used in this way, AI can start to transform our habitual cognitive assemblages, the familiar patterns of thought and reaction we fall into as individuals and teams. It makes those patterns visible, so we can choose different moves together. We will need high performing, high trust human teams to leverage that possibility, because the tool is powerful, but it is our ability to think together that decides whether it helps or harms.
Eaton: So I’m curious about the personas and the students’ experience interacting with them. What was it like for students to engage with these personas, and how did they respond?
Ashar: The AI was not taking on those roles. The students were the personas. I asked them to step into each life, speak in the first person, and then use the AI platforms as a thinking partner for their research. So a student might type something like, “I am a newly sober veteran with a two-year-old son I am responsible for. I am looking for a job and low-rent, safe housing. I am exploring moving to Sheraden. What should I be worried about. I would like to know if there are good schools and safe playgrounds nearby.” The tool responded to that voice, and that is what we worked with in class.
Eaton: Got it. So how did they interact with the AI?
Ashar: Most of my students are used to going to AI for quick answers. In this assignment, the structure forced them to slow down. That setup let me turn their prompts into data. I could say, “These are the questions you asked as this person. Why these questions. What made them feel obvious to you.”
Very quickly, their assumptions came to the surface. Students who had grown up in fairly comfortable suburbs would ask things like, “Where are the closest playgrounds and coffee shops,” or “How long would my commute be if I drive to work.” During class discussions they would say, “I can just use Uber if the bus is not great,” or “I am sure there is some daycare nearby I can afford.” I would hear comments like, “I assumed transportation would be fine because the person can walk,” or “because they are a veteran, I figured benefits would cover most of this.” Those questions told me more about the students’ reality than the persona’s, and that is exactly what we worked with in class. In that moment, AI stopped being a shortcut to an answer and became a shared object we could gather around, a way for us to look at our own thinking together.
Exploring the neighborhood then became a way to test what students thought and assumed, based on what they had asked the chatbot. They had to see that the answers were only as good as the questions they asked, and that they still had to verify those answers against reality. The map is not the terrain.
Unless you have worked in the field, you do not know the full extent of your ignorance. Humans lean on heuristic thinking, on shortcuts, because it helps us survive by recognizing patterns. But the moment you put a question into words, you expose how you see the world. That is what I told them: your question is a mirror of the world you grew up in. Many of them were assuming that their lived experience translated to everyone’s lived experience. When they realized that their assumptions were wrong, or that what they saw on the screen did not match the services and conditions on the ground in the community, it opened the door to questioning their own sense of reality through open, honest conversation, not accusation.
Jess also comes into my class to talk about implicit bias. We build modules where students do not just read about bias, they feel it. When they go through the implicit bias module, they sometimes get irritated or defensive, and I tell them that is often the moment learning is happening. Not because they aced a test, but because they have bumped into the limits of their old paradigm.
For me, the surest sign that learning has occurred is when you can say, “I have reached the edge of what I know, and I have to let something go.”
Eaton: Where did your work with AI in the classroom go from there?
Ashar: From there, it did not stay in that one classroom. I talk with Jess about almost everything I do, because we both teach and do research in community-engaged spaces. I come out of healthcare and she sits in community engagement, so our conversations became the place where I could test what I was seeing with my students. I told her the whole Sheraden story, the personas, the prompts, the windshield survey, and how the class started functioning more like a learning team than a room full of individuals trying to give me the right answer.
We started to see that this was more than a clever assignment. We were asking, “If students can use AI this way in one neighborhood, what would it look like to treat our courses as a kind of living lab for AI and community-engaged learning.” At that point, a lot of talk about AI in healthcare was still, “It is going to take over documentation, prior authorizations, all the routine processes,” and now we are already seeing pieces of that implementation. I was connecting what I was observing in the classroom with what I was watching roll out in healthcare.
Jess was one of the first people at my university who really heard that connection and took it seriously. In 2024, she reached out and said, “Why do you not apply for the Gaultier Fellowship and use it to build this out.” That invitation was a turning point. It gave me a formal container to design a larger AI teaching and learning project, instead of keeping it as a one-off experiment in a single course. As a Gaultier Fellow, I am now building AI teaching assistants into several public health courses as living labs, where students practice AI literacy, community-engaged work, and reflective teaming over time. In parallel, my work with the Future of Work Faculty Academy lets me take the same questions into a wider faculty space, asking not only how students will work with AI in the future, but who they need to become as collaborators and leaders in systems where AI is in the mix.
Eaton: I think that’s a great segue to hear from Jess about her experience with this.
Mann: The unit I lead is called the Office of Community Engagement. Our faculty fellowships are supported by our Center for Community-Engaged Teaching and Research, or CETR. CETR serves as the institutional hub for advancing reciprocal, community-engaged scholarship, teaching, and learning efforts. Grounded in the University’s Spiritan mission and commitment to social justice, the Center supports faculty, students, and community partners in co-creating knowledge that addresses complex social challenges while enhancing student learning and community well-being.
The Center’s mission is to cultivate ethical, mutually beneficial partnerships that integrate community expertise with academic inquiry. Through faculty development, partnership infrastructure, assessment, and institutional strategy, we strengthen community-engaged pedagogy and research across disciplines, ensuring that engagement efforts are rigorous, equitable, and responsive to community priorities. In this sense, our unit works across all ten schools of study and seeks faculty interested in community-engaged teaching, learning, and research. One way we support faculty is through fellowship opportunities, as Urmi described. The Gaultier Fellowship was created to support seasoned, engaged faculty by giving them a year to dive into a program, research agenda, or pedagogical intervention, with funding from our office and mentorship and support from our unit.
We’re constantly looking for folks who are doing something timely in their classroom or research agenda that aligns with the tenets of community engagement but is also innovative. We don’t want wash, rinse, repeat efforts.
I also teach in our educational leadership program, and I have been seeing an increase in AI usage, too. Unlike Urmi, I was not initially a fan. I was highly skeptical. But I knew the train was moving, and it was one of those situations where you either get on or you’re left behind.
Urmi and I had conversations about how AI is only as good as its inputs. We can either get in and start working from the inside—Trojan horse style—or, in true higher-ed fashion, be reactive to decisions made without our input. So we started playing with what it would look like to wrap this fellowship around the use of AI throughout an entire public health program, rather than just one course. That way, we could examine how different levels of understanding and disciplinary knowledge influence students’ use, and how AI informs different projects and approaches.
We just kind of took it from there.
Ashar: I submitted the proposal, and as I was preparing it, I was already naming some of the challenges we face. I am deeply concerned about the state of younger people today. This is the most anxious generation I have ever encountered. Many of them are even afraid to talk in class. I realized I needed to figure out how to create sandboxes where my students could have real experiences.
We often prevent students from having real experiences because we are trying to protect them from everything. Some risk is good for us; we learn from it. Many of my students had no idea how to function in the real world. Community-engaged learning allows them to experience that, because suddenly it is not just the faculty training them. There are real consequences in the community, and you see the impact of your actions. It changes my dynamic from being their teacher to being their coach, and it aligns everything beautifully.
I told Jess, I do not see any other option but community-engaged learning, because that is the only real world they live in. They create friendships in the virtual world, and there are no consequences. They do not realize that the pictures they see online are made up and cleaned up. Nobody lives such a perfect life, and then they get anxious when their own lives do not match that image.
As we named these challenges, it became clear that AI is part of the same ecosystem, not a separate issue. If my students are already living with AI in their pockets, then the fellowship cannot just be about community engagement; it also has to take AI seriously as part of the world they are learning to navigate. That raised a different question for me: if AI is already shaping how students live and learn, how do we intentionally build it into our program goals, rather than pretend it sits on the side. Around that time, I participated in a workshop at our Center for Teaching Excellence on the National Association of Colleges and Employers (NACE) competencies.
NACE has articulated a set of competencies, and I remember thinking: what if I aligned this program’s learning objectives with the institutional learning objectives, the Council on Education for Public Health objectives, and the NACE competencies. The thread that ties all of this together, again, is community engagement.
It prepares students in a way that feels closer to early scientific research, before chemistry became something you only met in equations and models. Back then, you learned by watching what happened in the dish in front of you. For my students, the community is that petri dish. We teach a lot in the abstract, but students often do not know the realities until they are in a neighborhood, with real people and real tradeoffs. That is exactly what I was trying to do with the first Sheraden experiment.
I built my proposal around the idea of bringing in social-emotional development. I am certified in Gallup’s CliftonStrengths assessment-based coaching and the Institute for Professional Excellence in Coaching’s Energy Leadership Index-based coaching. When we build students’ social-emotional learning, they can also develop stronger writing skills because they are more aware of their own thinking and voice. The reality I see in the classroom is not just what they can memorize, but how they behave, how they connect the dots. I always tell my students: I do not really care about rote memorization. I care about how they connect the dots, how they synthesize the information and make it practically applicable to their contexts and the communities they inhabit.
If you are going to use AI for papers I am asking you to write, then I want to know what part is your experience and what part is AI telling you. Students sometimes conflate the two. Part of this work is helping them tease that apart, so they can see their own thinking and growth more clearly.
Eaton: So how did you approach the classroom?
Ashar: I was very fortunate to have Jess help me navigate this. When we started, even at our university, the policy said we were only allowed to use Copilot. It was thanks to Jess connecting me with different people that we got our CIO involved, our vice president for strategic initiatives involved, and eventually, an exception was granted.
I am now the campus lead for BoodleBox, and we have created a different AI environment. AI literacy is not a separate add-on; it lives inside our public health classes. This is the first semester we have used it in three different classes, all of which I teach. We built our own AI literacy module, and students are using it at different levels. My seniors are learning how to build bots.
It was a little scary because, on the first day of the semester, I had only had access to BoodleBox for about fifteen days. I started by saying, “I do not know what I am doing, but bear with me.” I realized that this requires us to become more humble and more vulnerable. We have made authority synonymous with expertise, but in the AI world, we are asking a different question: what is intelligence? Is it knowing facts, or is it learning how to learn?
That shifted the dynamic with my students. When I said, “Look, I do not know all the answers, but we are going to figure this out together,” the critical thinking I started seeing was remarkable. In fact, today one of my classes had their final presentations. Jess was there, along with a couple of other colleagues from the university and our community partners. There is a night and day difference between last year’s students and this year’s students.
Do we have all the answers about how we are going to document this in academia? No. But we are constantly evolving our own thinking too.
Join us in the next installment to learn more about their work! If you want to catch them live, there is a great session coming up on February 11 at 2pm (ET) being held by the Coalition of Urban and Metropolitan Universities (CUMU) for its institutional members.
The Update Space
Upcoming Sightings & Shenanigans
Continuous Improvement Summit, February 2026
EDUCAUSE Online Program: Teaching with AI. Virtual. Facilitating sessions: ongoing
Recently Recorded Panels, Talks, & Publications
Online Learning in the Second Half with John Nash and Jason Johnston: EP 39 - The Higher Ed AI Solution: Good Pedagogy (January 2026)
The Peer Review Podcast with Sarah Bunin Benor and Mira Sucharov: Authentic Assessment: Co-Creating AI Policies with Students (December 2025)
David Bachman interviewed me on his Substack, Entropy Bonus (November 2025)
The AI Diatribe Podcast with Jason Low (November): Episode 17: Can Universities Keep Pace With AI?
The Opposite of Cheating Podcast with Dr. Tricia Bertram Gallant (October 2025): Season 2, Episode 31.
The Learning Stack Podcast with Thomas Thompson (August 2025). “(i)nnovations, AI, Pirates, and Access”.
Intentional Teaching Podcast with Derek Bruff (August 2025). Episode 73: Study Hall with Lance Eaton, Michelle D. Miller, and David Nelson.
Dissertation: Elbow Patches To Eye Patches: A Phenomenographic Study Of Scholarly Practices, Research Literature Access, And Academic Piracy
AI Syllabi Policy Repository: 200+ policies (always looking for more; submit your AI syllabus policy here)
Finally, if you are doing interesting things with AI in the teaching and learning space, particularly for higher education, consider being interviewed for this Substack or even contributing. Complete this form, and I’ll get back to you soon!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International




