The Gaps To Fill in Supporting Faculty & Staff with Generative AI
Why this work is needed and what it entails...
It’s still strange to think I’m getting the number of requests for talks and workshops that I am. I both grapple with it and see it as making sense. It makes sense because my cumulative work of the last 25 years has primed me for this in many ways: playing on the Internet in the 1990s, presenting at national conferences in undergrad, thinking about and iterating on my teaching for 17 years, learning about digital tools and the possibility/challenge tension they represent for the last 12 years, and being willing to invest a lot of time and energy into public thinking through presentations, 12+ years of blogging, social media, and other publications. I can say that I’ve put in the work to be at this point in my career.
Yet, it still feels strange and I struggle with it a bit. In part, that’s because some of it is just what I do. I seek out and learn about these things and think about their application. I know not everybody does that, but there’s a way in which I feel like what I do is not particularly inaccessible or impossible. It often relies on simple principles, some of which are the very principles we tell and teach folks at all levels of education.
I’m not unaware that this process after decades is likely to create specialization or deeper wells of knowledge and ways of seeing the world. Still, I sometimes wonder if folks overthink or worry about needing to have something more (a badge, a certification, an acknowledgment by others) to explore something and discuss it publicly. It’s probably a mixture of things.
It might also be just my disposition as someone who can appear to be a specialist but whose proclivities and history much more deeply indicate a generalist (for a good exploration of the differences between generalists and specialists, check out Range by David Epstein). Maybe a master generalist (whatever the hell that means), but a generalist nonetheless. Whether it’s my education (6 degrees, none in the same field), my writing (academic publications ranging from audiobooks as adaptations to the intertextuality of Jekyll & Hyde to student power in the LMS), or my work history (internet product editor, overnight residential counselor, full-time adjunct, writer, instructional designer), it can be hard to see the throughline of it all. I do think it exists and that I’m working more on a tapestry than a hustle.
Still, in my sensemaking, there are moments where I’m understood as a specialist but my roots are quite deeply situated in generalist tendencies. This means there’s also some cognitive dissonance when folks call me an expert in anything, particularly generative artificial intelligence and education. (However, I would fiercely argue that I’m an expert in audiobooks—hahaha).
The thing that I’ve struggled with a lot in the past ten months is how much I feel like I’m not necessarily offering much to the folks asking me to talk. I probably do understand generative AI more than the average person, but it doesn’t feel substantial. What I’m offering sometimes feels neither particularly insightful nor deep. Of course, I’ve felt this regularly throughout my career and have also been deeply validated about the help I’ve offered.
It can feel like “anyone can do it” by just diving into things, reading/watching the plethora of materials available, and being willing to try things out. Yet, I’m also starting to realize and appreciate what I (and many others, to be clear) bring to the (digital) table in this work. I’ll speak mostly for myself in what follows, but I have a hunch (and have seen) that others do this as well.
Because of sloppy reporting, utopian visions (hallucinations?) from Silicon Valley tech bros, and the avalanche of new companies throwing “AI” into their product descriptions like it was the free spot on a bingo card, there is so much fluff and stuff out there. Sifting through it all is impossible. Therefore, I offer a space to discuss it in layman’s terms and ground folks in what it is and isn’t. Just as important, I can often frame how generative AI might be relevant to one’s domain.
When folks ask me how I keep up with everything going on with AI—I laugh and say, “What makes you think I keep up with it all?” I can’t keep up with all of it—even in education—it’s impossible. But in the deep dives that I’ve done and ongoing learning that I’m doing, I do try to do two (actually, now three) things.
First, I revisit the folks that I’ve been reading and following along the way. I’ll have a future post about this, but there are definitely folks I follow to keep learning from what they are sharing. Over the years, these are the folks in my hivemind that have given me the tools to make sense of things. Second, I continue to add to that list, both because newer insights are always arriving and because some folks stop creating for one reason or another. Also, this is a cross-platform practice—it includes blogs, newsletters, YouTube channels, Instagram creators, groups on Facebook, folks on Twitter, and the like.
Finally, of late, I’ve also been using generative AI to help me out. Particularly with research or big documents, I will plug them into Claude or ChatGPT and ask it some questions to get the essential ideas and points from longer documents.
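For the technically curious, a scripted version of that practice might look something like the sketch below. It is a minimal illustration, not my exact workflow: it assumes the `openai` Python SDK and an `OPENAI_API_KEY` in your environment, and the model name and chunk size are placeholder choices. Splitting the document first keeps each request within the model’s context window.

```python
def chunk_text(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long document into paragraph-aligned chunks that fit in a prompt."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Start a new chunk when adding this paragraph would exceed the limit.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


def summarize(text: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for the essential ideas of each chunk, then join the answers."""
    from openai import OpenAI  # requires `pip install openai` and an API key

    client = OpenAI()
    summaries = []
    for chunk in chunk_text(text):
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Extract the essential ideas and key points from this text."},
                {"role": "user", "content": chunk},
            ],
        )
        summaries.append(resp.choices[0].message.content)
    return "\n\n".join(summaries)
```

The same pattern works with Anthropic’s SDK for Claude; only the client and call names change. The chunk-then-summarize step matters most for research papers and other documents too long to paste in whole.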
The culmination of this is that I can reasonably speak about generative AI in ways that make it accessible and understandable. I can cut through the fluff—offering practical and clear ways one can make use of it right when folks are leaving the sessions, rather than offering up abstract or complicated possibilities for the future.
In addition to clarity, I often provide a framework for folks to work within. In one of my first talks on exploring generative AI in education (back in February 2023), I made it a point to help folks think about where they are and what that might mean for them. I outlined different positions they might find themselves in and the challenges and benefits of being in those positions. It was an attempt for faculty to collectively name where they were and then use that information to figure out if they want to stay there.
Whether it’s that, or framing how generative AI can fit or be considered in different ways, what is often needed and appreciated is a way to fit it into the faculty or student frame. It doesn’t have to be elaborate or complex. It can be as simple as what I did in a talk at BU where I split it into 4 (imperfect) categories (Slide 39):
Lean toward engaging and using generative AI in relation to teaching and learning
Learn more and use generative AI more and maybe dabble with it in teaching and learning
Learn more and use generative AI more but not ready to bring it into teaching and learning
Lean away from generative AI and not wanting to use it in teaching and learning
It doesn’t have to be a complex framework—just something to help folks figure out where they are. Having something folks can see themselves in, or that they can use to make sense of the tool, can be incredibly helpful in trying to figure out what’s next.
My best friend, Kara Kaufman, helped so much with this one. She’s been teaching for 20 years and deeply loves her work (and does amazing work!). In conversations with her, I realized something that I think particularly helped me shift my thinking around this work and how I engage faculty in particular.
A mistake I see a lot of institutional leaders making in this work is assuming that skeptical faculty are luddites or just don’t want to do the work. This assumption is likely to create as many problems for faculty as generative AI itself because it fails to understand or listen to what’s really going on for many of them.
Validating the work that faculty have been doing is the first step to letting them know you see them and what they have had to do. Yes, some of this is the work they have had to do through the pandemic, but it also connects to a longer trail of technologies and practices that have forced them to constantly change and reinvent their work. And they’ve done it, time and again.
Sure, we can lament the faculty member who is still using transparency slides—that’s easy (and the more work I do in this realm, the more I see a certain “punching down” in this viewpoint). But when we caricature all faculty as resistant to learning and growing, reducing them to a simple binary, we lose a lot. We perform the same kind of alienation with our faculty as we do when we engage in binary thinking about our students.
So to me—most talks I give try to tap into and validate the work that faculty have done and continue to do. Without that validation, it just feels like we’re bulldozing through them and not sitting with the angst, frustration, and exhaustion.
Within that validation, I also try to acknowledge the overwhelm, so I aim to be honest about feeling overwhelmed, confused, challenged, and concerned myself about what generative AI means and how hard it is even for me to keep up with it all.
Having done this work for a while now, I think better work, buy-in, and growth comes from taking this moment to really validate faculty work and experiences (and this is just good teaching practice, regardless!).
I don’t try to kid folks that this will be easy or ceaselessly grand. It’s going to be challenging and challenging in lots of different ways we can’t predict. I hope to be an honest broker in guiding faculty toward making sense of generative AI.
Faculty and staff are not ignorant, nor do they lack precedent for skepticism—they’ve lived through how all the great technologies, from the computer to Web 2.0 to MOOCs and the like, were going to revolutionize education. While changes have occurred, they’re far from the promised revolutions and often required as much work as faculty were already doing. Actually, they tend to require more work because they are stacked upon the other requirements we increasingly ask of faculty.
To show up with a cheery disposition that this is going to be easy and great is both to offer a false promise and to turn off many faculty, who will read it as not really understanding the problems and challenges at hand.
I want them to succeed and so they need to know the challenges. That’s just good teaching and learning practice. It doesn’t mean I only focus on them but I do give reasonable attention to them. If we don’t define the full scope of the challenge, we’re setting them up for lots of issues that can do harm to them, to the students, and even the institution.
To that end, I assure them that I too grapple with lots of this—daily. I want the best possibilities of these tools. I also know that’s never going to happen, so I need to be contemplative about the harms, side effects, and the unexpected. I don’t hide my uncertainty in this regard and will use “I don’t know” as a response—wanting them to understand that there are no oracles of AI to save us, only what we figure out and share together.
Center the audience
Increasingly, as I end my talks and workshops, that’s the place I lean into. I can come in, provide some frameworks, be honest about the challenge, validate their work, and clarify some of what they are doing. But I’m a drop of water and they are the water-filled bucket. So, I tell them that. The wisdom is in the room, and they need to look to and learn from one another about what they’re already doing. Inevitably, the folks in the room collectively know more than I do, and their responsibility is to continue to learn, share, challenge, and grow together to figure out how generative AI is and isn’t going to fit at their institutions.
I tend to think that if we don’t center their abilities and what they bring to the conversation, we’re losing the greatest potential of all in addressing generative AI in teaching and learning.
And I do that in these rooms, often with administrators and others listening because I need them to hear and understand that. I’m happy to come and share my insights but the next move really needs to center the faculty and/or staff to draw upon what they are learning and doing.
As a practice, this is effective and meaningful learning—learning that centers the learner and community to build together what comes next. It’s the kind of thing we strive for in our classrooms and we need it in these rooms with faculty.
I do think a big change is going to happen (or even be forced) in education with generative AI. I want folks to engage with it as best as possible; that’s the only way we figure it out and maintain relevance. But folks also need to not feel the full weight of that in any one moment, which means I gotta have some jokes and opportunities to smile or laugh.
My favorite way to do this is to engage folks with a question: “What is generative AI? Wrong answers only.” It breaks some of the tension and people are super-creative and funny with their answers.
And folks need levity in this. The work ahead feels big. Generative AI has only been widely accessible for just under a year, and it’s felt like there’s so much going on (yes, definitely some AI hype, but also tangible considerations to figure out in that hype cycle). There are things to figure out, but that doesn’t mean we can’t take a moment to laugh and smile and feel some ease from that tension.
In the end, I think these are the things that many of us doing this work bring. We bring a very human element to what feels like a very inhuman and big challenge. We do the thing that I think great educators the world over do—we connect with the values and experiences of our learners and use that to build pathways for folks to step into a world that may still feel very much beyond what they were ready for.
In the hopes of both learning more and seeing what else is going on, I’m hoping to jumpstart a conversation in the comments.
For other folks doing this work or who have sat in on folks successfully doing this work—what else do they bring into this space that is useful and helpful in their approaches and style?