Last Year's AI Views Revisited
A retrospective & current concerns about the pace of engagement with generative AI
Last year at this time, I had been playing with ChatGPT for two months, having some rich conversations with professional friends, and building out an institutional strategy. On my other blog (By Any Other Nerd), I had posted ChatGPT, AI-Generative Tools, and Education...my turn. I thought I'd revisit some of its thoughts in today's post.
This post was the first time that I had publicly written about the work being done around generative AI at College Unbound (you can see the deeper dive into our student-centered policy development here). The blog post was kind of a "here's the plan" mixed with "here's how I've made sense of it in the first two months." There was also a splash of "hey folks, I know this is strange new land, but here's some guiding thinking about it."
I start the post with a statement about problematic usage—something I continue to grapple with:
"This presentation/paper/work was prepared using ChatGPT, an “AI Chatbot”. We acknowledge that ChatGPT does not respect the individual rights of authors and artists, and ignores concerns over copyright and intellectual property in the training of the system; additionally, we acknowledge that the system was trained in part through the exploitation of precarious workers in the global south. In this work I specifically used ChatGPT to craft my Q&A log to explore and inform my own understanding about the way the tool operates and responds to a series of questions."
Autumm Caines had introduced me to the work of Lawrie Phipps and Donna Lanclos and their ideas for properly acknowledging the problematic use of ChatGPT. I had used a version of this in a lot of my workshops and talks up through the summer, and then, slowly, I stopped using it. I think its message is still relevant, but front-loading it into the conversation rather than drawing out its points at different parts of my talks or writings felt less useful; it became something to state and put aside.
I also shared my public log of questions and answers that I was getting from ChatGPT so folks could get a sense of its abilities if they hadn't tried it yet. I still continue to do that at times, but not with every line of inquiry. Of course, ChatGPT helped immensely with that by allowing one to share chat logs publicly.
But once I got past all that, I explored the possibilities, challenges, real problems, and what I call the "big twisty, knotty problem for education." It still feels like much of it is the same nearly a year later.
For instance, I'm excited that some folks are delving into the possibilities--really trying to figure out how this can aid in learning or supporting faculty. I still hope we continue to do it critically (keeping those real problems in mind) and am glad there are lots of examples and resources helping to figure out how we use generative AI in effective ways.
I am curious to see what institutions like Arizona State University will do now that they have an enterprise model of ChatGPT for their institution. What does it mean to think, play, and learn with these tools, to have them available to everyone, and to know that they are part of the institutional framework?
I don't deny the numerous flaws and legitimate concerns that generative AI represents (both in that piece and currently), but there are countless flaws in all our technologies and systems of education. That's not to wash it all away; rather, the flaws themselves have never been enough to discredit, dismiss, or remove a technology before we at least try to figure out its place in education, where it works and where it doesn't. If we don't experiment or test it out more substantially and robustly, we can't learn anything about it beyond singular stories and experiences.
I have all sorts of issues with learning management systems around power, agency, surveillance, accessibility, and flexibility. I'm still a better educator for having them in my toolbox. The same is likely to be true for generative AI, in my opinion.
As I move into challenges, I think we're still doing a lot of grappling. In contrast to the pandemic, which happened all at once, the AI expansion is happening like a slow wave across academia. So we're all encountering these moments where our sense of trust in one or more students, or in our own ability to navigate this, is disrupted and challenged. Some of this is real; some of this is amplifying an already well-established deficit lens by faculty toward students (no, not all faculty, but a significant portion who view students as less-than and are already suspicious of them).
Just this week, an amazing, thoughtful colleague whose practice is asset-based grabbed some time with me to talk about her struggles when she realized a student was (in all likelihood) using generative AI in their work. This faculty member puts in 200% of care and thoughtful communication, with a genuine emphasis that she wants the student's authentic work; no matter where the student is, she'll bend over backward to work with them. When the student submitted something in sharp contrast to their other submissions in both style and caliber, along with very formulaic writing akin to ChatGPT outputs, she was right to wonder about its authenticity. And, of course, it was a bit of a punch in the gut to see a student (most likely) use it.
That leads me to the "big twisty knotty problem for education" from the post: the question of authenticity and where the right line(s) are for when and how to use this--and how we will even know.
In a keynote I posted on this Substack, I expand upon this concern:
"This leaves us in a vulnerable space where we know but don’t want to say that there are possibilities of students fooling us and passing our class without actually learning anything. And that idea challenges many of us as educators. It can make us feel inept or wondering what we are doing in this work.
And in this way, generative AI challenges power and the power of the learning space. The power of us as educators to know and hold knowledge in a particular way. What does it mean that students can choose to use this tool to challenge us or bypass us and our role as knowledge gatekeepers.
Now–I’m not saying that individually, we feel like we hold that power or we operate through that lens, but as representatives of a larger institution within higher education, that is, in fact who we are: Knowledge gatekeepers–deciding who goes forward with passing grades and who does not.
Generative AI leaves us wondering about our ability to hold this role which means it represents some level of power change that we’re not entirely comfortable with.
It feels very much like the vast majority of mental work that gets turned into tangible deliverables for evaluation in higher education are very quickly becoming possible to being generated by AI."
It's evident generative AI will be used and expected in many different areas, and yet my colleague and I both know from our extensive use that to use it well and successfully you need one of two things (and ideally both): deep knowledge of the content you are trying to elicit and deep understanding of how the AI tool works. Our hope is to figure out how to guide and support students to use it that way. Yet it can be really hard to name usage if your approach is to trust students and to make sure not to falsely accuse them (however couched in positive and affirming language) of doing things they might not have done.
In looking at the post, I think it still holds up pretty well a year later--especially as I continue to hear from leadership on different campuses about the conversations they are having. I also continue to see a range of usage and accuracy of knowledge among faculty, including those who are learning deeply about generative AI, those who still haven't really touched it, or worse, those who are relying on false understandings about generative AI.
And that's the thing that worries me at this point, nearly 14 months since the emergence of the tool. If faculty are not looking to fully understand the technology and how it works, shape its use in the classroom, or engage with students about it, we're going to see a lot more problems emerging on a few fronts:
These tools are going to be used or needed in different fields. Not because they are great or perfect, but because they are "enough." They satisfice in many ways, and businesses and organizations are in the business of "good enough," especially when it comes to employee output. Students are going to be expected to work with these tools in many industries because of the perceived efficiency they represent (which is true in some use cases and areas). Another year can't go by without graduates having some reasonable understanding of and experience in working with generative AI in professional and public ways if they are going to be seen as credible in certain areas. This will become another instance where higher ed is accused of not preparing students for the world they are living in, yet charging them a lot and making them fixate on arcane things like the proper placement of semi-colons in an APA reference. I'm not saying that such things happen everywhere or at scale, but accusations of over-concern about trifling things are always an easy target for higher education.
False accusations (as recent cases of students being accused of AI use when using Grammarly have demonstrated) are going to hurt higher education's reputation while it's also trying to be accessible and responsive to a wider range of students. Nothing will diminish a student's sense of belonging more than an accusation about something the institution actually can't prove. These accusations will also result in litigation, and that too will continue to damage higher ed's image.
Ferris State University is going to intentionally enroll two AI students into its courses, and I think that's an intriguing approach to learning more about how such tools can be used (also, it's a PR stunt). Yet some folks are going to enroll generative AI students into courses without anyone knowing, whether to take the course on their behalf or for malicious purposes. Some are going to do it as a political stunt, a means of undermining and dismissing the value and intention of higher ed (here's looking at you, Turning Point USA). And we're not prepared to respond to that, which means we'll react: exactly what is wanted, and exactly what happened in Fall 2023 when university presidents were hauled into a Congressional hearing.
There's probably more swirling in my head, but these are the three most prominent concerns rising to mind as I revisit this post. I think there have been rich conversations and developments over the last year, and also, I feel like lots of places still aren't effectively strategizing about what it means for their institutions to engage more intentionally with these tools.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International
Great post, Lance. I like the contrast you make between the "all-at-onceness" of the pandemic and the slow wave of generative AI. We tend to think of major developments as discrete, yet when I think about our collective approach to using digital technologies in higher ed, it feels like the 1-2 punch explains more about what's happening on campuses than either in and of itself.
Lance, quick correction: it’s Ferris State University, not the University of Michigan, that is enrolling “Ann” and “Fry.”