While I always try to give talks that are customized and personalized to the audience, given how many requests I get for talks with faculty, higher education leadership, and staff, there is inevitably overlap and there are points that get repeated. This is fairly common, or so I'm told by others who are often sought out to give talks, run workshops, and consult. So when an opportunity avails itself to give a much more personalized and unique talk, it can be exciting and heartwarming—and that was the place I found myself in last week when I gave a keynote at the University of Massachusetts Boston for their University Conference of Teaching, Learning & Technology.
Two of my three Master's degrees (yes, I'm a nerd) and the doctorate I'm currently working on are from UMASS Boston. I have clearly chosen them time and again for many different reasons, including the fact that they provide a great education and have been reasonably affordable. They are definitely an institution where I have done a lot of growing and learning, particularly as it relates to my professional standing these days.
Given that, I was elated to be asked to give a keynote on the subject of the good, the bad, and the unknown with AI. Over the course of discussing the talk, I had a lightbulb moment when I realized that I knew exactly how I wanted to frame it, and then everything fell into place.
What I got to do with this talk is not something I could do anywhere else (ok, maybe Salem State or North Shore Community College, where I've also had deep ties over the decades). I realized that the way I'm able to think about delving into the good, the bad, and the unknown with AI is in direct relation to my three programs at UMASS Boston.
My experience of the talk is that it went well and hit the mark. The audience seemed engaged; they were ready to respond when I asked them something. They laughed at my jokes. Afterwards, many took the time to say that it helped them think about things a little bit differently.
I think the challenge I faced was that it was to be a 20-minute talk with the goal of leaving them with some sense that all is not lost and that AI will not make everything worse. That's kind of a tall order if folks aren't already there. However, I think it still worked out that folks left feeling more grounded about navigating generative AI.
As always, the text, slides, and prompt-guide can be found in this resource.
The Text of the Talk
But, before anything else, I need everyone to get some paper and a pen.
It’s not a pop quiz.
And for folks on Zoom, your instructions will show up in the chat. You'll be doing a Jamboard.
Ok, this talk is called The Good, the Bad, and the Unknown.
Write down the following:
Something bad about AI
Something good about AI
Something unknown about AI
I’ll give you two minutes to write that down.
Be sure you get something for each item.
Don’t try to write a novel–just a sentence at most.
Great–now–you have 1 minute.
Your goal is to trade your paper with someone else and keep trading papers with at least 3-4 people.
The goal is to end up with a different paper and have no clue whose it is.
Ready–let the clock begin.
Well–good morning! And thank you for doing that! Hold onto those papers, we’ll come back to them.
So, I’m here to talk about generative AI today–I know, spoiler alert.
But I’m going to do it in two distinct ways that I hope–no, I know will be uniquely grounded in the value, skills, and force that comes from getting and providing an education at the University of Massachusetts, Boston.
One way is that this talk is going to be dependent on y’all. I think you just got a flavor of what that looks like.
I’m going to need you, your focus and your thought here in this room.
I’m here to engage with you. And hope going forward, you engage each other.
Make no mistake–whatever I’m here to say–it’s useless if y’all don’t look to learn from, share with, and be in communities with one another now and going forward.
But the second way–well, that entails me telling you a story that is both my story and UMASS Boston’s story; it’s the story of why and how I’m able to be a national speaker and writer on generative AI.
This comes from me and my education HERE at UMASS Boston…only enhanced with some generative AI images along the way…and of course, pictures of my pets–because I'm a professional.
I've become a thought leader on generative AI and education somehow, but much of it comes from my experience at UMASS Boston.
Without this foundation of criticism, optimism, and willingness to navigate ambiguity, I wouldn't be able to help others. And that happens because of all of you, here in this room and at this institution.
In order to make this story chronological, I’ve needed to retitle this presentation: The Bad, the Good, and the Unknown
My journey at UMASS Boston began 20 years ago when I entered a Master's in American Studies. After being an honor student all my life, this program kicked my butt.
A lot of new critical ideas, analyses, and understandings happened over those two years; I definitely grew intellectually on levels only rivaled by my PhD program.
American Studies was pivotal to more deeply developing a critical lens on the world–to see systems of power, and understand systematic racism, patriarchy, class warfare, and many other axes of inequality that exist in our world.
It sent me to explore these ideas in the world in front of us–in the popular culture and technologies that surround us–even when they’re invisible to us.
These skills of analyzing power take time to develop, often need to happen in relationships and in community.
It can't just be information–it can't just be Freire's banking model of education.
Studying American Studies has continually helped me to unearth the bad, the problematic, and the alienating aspects permeating all around us.
Such lessons and skills need a diversity of voices and experiences that help us to make sense of things, to understand the lived experiences of others and how power and systems play roles in constructing their reality and experiences.
So what does this have to do with AI? Well, this is the bad part, right.
So, before I go on, can someone read me "something bad about AI" from the paper they have? [ASK & RESPOND]
Despite how much I talk about AI, I’m not as big a fan as folks tend to think. I have a lot of reservations. We have to talk about the bad things.
AI flattens context, texture, and nuance.
It isn’t a weaver of language like humans. It’s a machine that uses what could only be called brute-force probability to stamp words, sentences, and paragraphs together like steel plates.
AI speaks in a unified voice laden with bias on different levels.
There is no "I" in the AI. It remains a collection of words infused with a variety of biases: from the data sources included, to the wonky means by which it creates outputs, to the weights and moderation of those outputs that the AI companies instill in it.
AI can reply to us but AI can’t respond to us.
AI is never making a case to help us understand one another individually, contextually, culturally or systematically. It’s calculating its reply and therefore, can’t actually touch us in the way that other humans can.
AI is a tool of hyper-productivity for productivity’s sake.
AI is a tool of capitalism created to make more work faster but never to make less work. We will have to produce even more things–much of which will then be fed into other AI tools for other people to process.
AI has many real world tangible harms.
It has significant environmental costs. It is a massive act of appropriation of ideas, works, and insights from people who will never be credited. And the way companies design content moderation for these tools has usually been through the exploitation of marginalized people.
We have to keep all of these in mind–and we have to recognize that in our world, such facts have never been enough to keep us from doing stuff or to stop the culture at large from barreling forward.
That doesn’t mean we can’t or shouldn’t do anything but it does mean that we need to acknowledge that ignoring it entirely or thinking it will go away does no one any good.
That’s what American Studies taught me–hiding from the truth of how things are will not help me or the world I live in.
And that brings me to my Master's degree in Instructional Design from UMASS Boston. This degree further empowered me to engage with technologies in general and educational technology in particular.
It helped me to think about the trade-offs of any technology and its place in education because, make no mistake–all education for millennia has been deeply embedded in technology.
We might have called that technology quill and papyrus or book and blackboard, but it’s always been there.
All learning in formal institutions requires us to wrestle with the benefits and drawbacks of technology.
And because of that technology, we can be together here in this hybrid space with folks hearing me well through the speaker system and others streaming in from their homes or offices across campus.
When used well, technology builds upon or supports human connection.
Referring back to my American Studies degree, it's worth considering that generative AI would have been amazing for me.
I get that Michel Foucault is important, but trying to read him broke my brain.
It took me 13 years to finally be able to read The History of Sexuality Volume 1–for those not in the know–that book is under 200 pages; but it’s deep philosophy which is just really hard for me.
Generative AI would have been immensely helpful to a first-gen graduate student like myself who felt inadequate among his peers with texts that were hard.
Being able to ask "stupid" question after "stupid" question, I would have been able to do so much better than how I stumbled my way through.
So what does this mean for AI? Well, let's ask the room. Can someone read me "something good about AI" from the paper they have? [ASK & RESPOND]
While I am skeptical of generative AI, I can still see some value for myself and others.
AI can answer things more quickly and at times, more accurately than I ever will be able to.
We are surrounded right now by millions of things we do not understand. Some of us don't understand how a toilet works; others cannot explain how the internet works–the very tool you use every day and may be using right now to check email. AI is not perfect in its answers, but it is closer than I will ever be on many things I don't know about.
AI can provide guidance.
AI can be something to help me get outside of my head and challenge my assumptions and misunderstandings. It might be parroting collective ideas but they might be ideas I may never come up with.
AI can open up a hidden curriculum of the world.
It can be used to unlock ideas, assumptions, and the parts of the world we don’t talk about. My favorite example is using it to unlock the resume, the cover letter, and the job description with others. There’s so much mystery around these things and yet, this is a tool perfect for navigating that process.
AI can help us be more critical about our information.
I know that sounds weird. If we know that a pitcher of water has a drop of poison, we’re likely to bring more awareness to all drinks in the room. That is, we can lean into AI’s fallibility with folks to raise our concerns about all information that comes our way.
AI can be an effective collaborator to move through some things–especially tedious things.
We all have those tasks or projects that we loathe in our work and personal lives. We can’t necessarily avoid them but we can minimize the energy it drains from us.
Each of these comes with caveats, but that is true of every technology, every pedagogy, every strategy we use to improve education. We can't avoid trade-offs.
The instructional design program at UMASS Boston has helped me to better see and evaluate the opportunity and possibility of these technologies while keeping in mind the best ways to implement them for effective and meaningful learning.
Again, that’s a lesson that already reverberates throughout the bones of this university. Seeing these challenges, these opportunities, and navigating through them is what UMB does best.
Now, you're probably experiencing a little bit of disorientation between the things I listed here and what I listed in the bad section. Of course! We are seeing both of these things, and that, in part, turns us to the next section.
We turn to the unknown. And here’s where my pursuit of a PhD in higher ed comes into play.
Like generative AI, so much about what comes next with higher education is unknown. Including if I’ll successfully defend my dissertation in September–wish me luck!
But navigating uncertainty is the core part of the program. It’s another program that brings a thoughtful and critical lens to this thing called higher education.
In this program, we examined higher ed through the lens of social justice, considering what it comprises: intersecting infrastructures of public, private, and for-profit institutions; a set of beliefs about how such institutions function–or don't; and a collection of ideas about what teaching and research mean.
We’re not good with the unknown and we often aren’t prepared for it. Yet, we get through it time and again.
Muddling through uncertainty is the tagline for the history of American higher education. There’s always been uncertainty about what’s to come; and yet, we find our way through; changed in some ways, the same in others.
This thing we call higher ed–it’s messy. And well, so is generative AI. And it’s ok to feel
that ambiguity,
that angst,
that fear,
that confusion,
that exhaustion.
These are perennial challenges of higher ed–and it’s ok to acknowledge them and also to not let them hinder us in figuring out whatever is next.
So with that, it’s time to share.
Can someone read me “something unknown about AI” from the paper they have? [ASK & RESPOND]
There are many unknowns to what AI represents for higher education.
How will AI change power and agency in the classroom and among students?
AI challenges us to rethink the way power and agency have existed in our educational structures. We gave primacy to faculty and to efficient means of assessment, and now we must navigate the changing power dynamics.
How will AI be used on us as scholars and educators?
We wonder just as much about how it might be used on us, with or without our permission, both inside and outside the institution. We don’t know what that future will hold.
How will scholars & educators be expected to use AI?
Just like email, the internet, learning management systems, and other institutional technologies, we anticipate we may have to use it but still aren’t sure what that looks like.
What will be the impact on all of us as a result of using these problematic tools?
We know that we have a deeply interwoven relationship with technologies that shape our experiences, lives, and the way we move through the world–sometimes to the degree that we don't even realize something is a technology; it becomes invisible to us, even though it deeply influences how we show up in the world.
Quick question: What is the most ubiquitous technology in the world? A technology that I know every person in this room and everyone on this campus absolutely has–without even having to meet them. Any guesses? [Answer: clothing]
What does it mean for AI to become that invisible to us?
What if this is a big nothing-burger?
Of course, this is also an unknown. Many wonder if this is the MOOC moment–the onslaught of massive open online courses that was going to completely change higher ed in the mid-2010s. There's still a lot to figure out about how viable or even useful generative AI will be. So we're also holding our breath to see if there is anything to get excited about.
That's a lot of unknowns. So as we sit here and try to figure out the future–it's ok to acknowledge that we don't know what's next. But that's always true, and we can look to each other to help find our way through it.
All the bad, good, and unknown are true and yet, I remain hopeful and continue to want to learn more about generative AI.
The thing is, I've taught for nearly 18 years. I've taught through all these technologies and shifts in education. Many of us have. I can't tell you what's next, but I do know where all these things have led me.
First, it’s led me to think more about my students and the worlds they live in and the things they must figure out. Because yes, these tools are so different from what I had–and yet, they also make the world so much harder to sort through.
How does a student as a knowledge worker conceive of their future when generative AI has arrived?
I don't know–but I do know that my goal is to connect as a human with that student, recognize their challenges, and find meaningful ways through it. Their world is just as scary and alienating to them as ours feels to us.
Yet, we also have more agency, wisdom, and abilities in this moment from navigating all the previous things. For me, that means I’m able and willing to figure out what comes next with them.
For me, a new technology is an opportunity to reevaluate what is true, what is doable, and what is relevant in the classroom and to do it with students.
Our collective and individual goal with any technology is to figure out where the line is.
Where is the line of appropriate and inappropriate use
who gets to decide that,
and how do we understand, acknowledge, and negotiate the moving of that line for individuals, communities, cultures, and across all of these.
Therefore, we can’t just teach them; we must learn with them, and we must learn from them.
With generative AI, we have to understand that in order to use it well–really well–the kind of well that can matter in situations of life and death, with certainty and critical understanding, we need two leaky buckets.
The first bucket is a working understanding of how generative AI works; its limitations, its capabilities, its biases, and how it fits into the larger cultural landscape of late-stage capitalism.
That's essential to knowing what it is and isn't, can and can't do–but more importantly, to sifting through the promises of a lot of tech-bros in Silicon Valley and the business hype about a techno-utopia that always results in buying a product to solve all our woes.
The second bucket is a deep knowledge of the subject matter you're working with it on. AI just doesn't know things; it only calculates responses, and that can only get one so far. When it comes to disciplinary knowledge, it still falls drastically short. Yes, it can produce facts, but it often can't produce knowledge.
As leaky buckets, we have to continue to re-immerse ourselves in the changes that regularly occur in AI and our disciplines.
We need our critical analytical skills to really navigate its outputs; it’s the thing that education has often provided and I can say first hand–it’s the thing that UMASS Boston does well.
And yes, we have to do this work with students and aid them to understand these two leaky buckets together–in conversation, in community, and yes, at times, in tension with students.
This is no easy task–there’s no ubiquitous technology that doesn’t initially instill a technopanic.
The internet made us stoopiderer; video games made us violent; comic books created a whole generation of juvenile delinquents; disembodied voices coming from radios in our homes rotted our brains; and penny dreadfuls and dime novels sent us all into a tizzy. And let us not forget that original sinful technology–the one that Socrates scolded with abandon–writing.
All of these changed what learning and culture were. We don't have to panic about it; we do have to think about it.
Rather than panic, we must persist.
Rather than dismiss it, we must navigate its uses for ourselves, our students, our culture.
In the end, my wish for all of you here is to consider this: I've been a student at UMASS Boston for a generation–dear god, I'm just realizing how long that sounds.
Let’s try again–I have earned several degrees from UMASS Boston over the past 20 years.
And they have all led me in the right direction time and again.
It had less to do with the content and more to do with how people connected with me. The learning happens inside and outside the classroom, and it has less to do with writing papers and completing tasks and more to do with forming relationships and further developing thinking and understanding.
UMASS Boston does this fantastically–and you should believe me, because I might soon have a doctorate from a fantastic university.
But seriously, in these halls exist folks who have fine-tuned my criticism of power and systems, sparked my curiosity and appreciation of learning, and helped me to make sense of the nonsensical–that is, higher education as a whole and more recently, generative AI.
We don’t have the answers to what generative AI will bring to our professions and practices. But we do have allies and collaborators and we should all be digging in, learning, sharing, and gaining more clarity together.
We'll only be able to harvest what good there is by having all of us work together to figure out what is bad and push the periphery of the unknown further and further out.
The room is always smarter than the individual–this room is no different. But so too are your classrooms, so continue to attend these events and also, continue to be in formal and informal rooms to share, learn, explore, play, and figure this out with one another, with those not in this room, and with your students–that’s the only way forward and quite frankly, it’s better than any other alternative I or generative AI can imagine.
Thank you!
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International