I’ve been in a lot of conversations with faculty, staff, students, and folks outside education about where we should be using generative AI. It’s a tricky line to figure out because it is so contextual.
In a conversation with my partner and friends last night, we were trying to figure out where it fits. It reminded me of the section “The Big Twisty Knotty Problem for Education” in this post and some of what I was poking at in this post from last year on generative AI divides.
There are places where I know I would have used it. I would have used it in my science classes in undergraduate—in part because I was taking them only because I had to, and I remember almost nothing from Geology or Weather and Climate. It probably would have helped me understand more than I did at the time. But just as important, I would have loved this tool in graduate school.
In my Master’s in American Studies, we encountered Foucault and I hit the limits of my ability to read theory. In fact, it took me about 14 years to read History of Sexuality Volume 1—and that only happened because of audiobooks and how they made things easier to follow. But in grad school, I was in classes with folks who loved Foucault and had been reading him for years. I was stuck trying to figure out up from down. If I had had generative AI, it would have immensely helped me make sense of his work in a way that trying to read his work never has—and not through lack of trying; I kept trying to read Foucault for those 14 years and hit the same walls, time and again.
The other courses where it would have been immensely helpful were my quantitative analysis courses (of which I took at least two across my different programs over the years). I don’t have a solid math background. I had a poor math education in high school (including getting suspended in one of my math classes—a great way to help me learn). It takes a lot for me to fully understand, and at times, because of all the symbols, it is literally all Greek (or Latin) to me—I’m that lost. So the textbooks and videos and such are not necessarily helpful when I need to break things down sentence by sentence to understand what is being said. Being able to get a generative AI to talk to me at a 5th-grade level would be a boon for learning math more coherently and actually understanding what I’m doing rather than just doing things and promptly forgetting them later.
So there is definitely a place for these tools in learning. There’s too much in any discipline that will challenge, alienate, and leave folks feeling stupid, when what they actually need is the patience and ability to ask ALL THE QUESTIONS and to approach the material in different ways that suit their needs.
Finding the Usage Balance
What’s hard about this is that many educators are going to expect students to develop that mastery in their courses, even when the courses might not have full relevance to the students’ larger goals. For some, that’s learning math or science or history—things that I 10000% think are important, while also knowing that many students who enter those courses with no interest are going to exit them with that same level of disinterest. I probably need to find the research on this, but anecdotally and personally, I see it all the time. We take the course because we have to, but so little, if any, of it sticks (I wouldn’t trust any advice I could give about geology, weather and climate, or quantitative analysis!).
But there’s no real payoff. I say this because, decades later, the absence of any retained understanding or knowledge of geology and weather and climate has not been detrimental to my life. One could make the argument that spending significant time in those courses was a waste of time because (1) no deep learning took place that remains with me decades later, and (2) no detrimental consequences have occurred. They were unnecessary hurdles to my learning, not intentional challenges that enhanced my learning. (I was an avid learner in college outside of the classroom—reading and listening to audiobooks, continually learning things on the early Internet—so none of this is about not being interested in learning, either.)
And the thing is, we all believe our disciplines are the most important, and yet there’s an interesting question of which disciplines that is actually true for and to what degree. Thus, the challenge is for us to figure out where AI is a tool, where it is a substitute for learning, and whether that matters. For my own part, the answer is starting to become clear through personal usage, and maybe that can help me in advising faculty and others to think about it as it slowly (but not perfectly) emerges in my mind.
In January, I was a guest on Jeff Selingo’s Next Office Hour and I mentioned something I’ve been realizing in the last six months:
“You need two buckets to work with AI,” Eaton said. “You need a good understanding of what generative AI is and how it works, and you need expertise in the subject so you understand the limitations of any questionable outputs.”
I would only add that when I say work with AI, I mean work with it well, in a way that makes it an aid to the work you are doing. This seems to make the most sense. Understanding generative AI (what it does, what it doesn’t do, what it actually is, and how to work with it effectively in that context) is important, but so too is domain knowledge of the area you are working in.
This may seem contradictory to what I said already, because it implies that those who have little domain knowledge (or interest) should not use generative AI (at least, that’s the converse of what I’m saying). But here I would say: not quite. In truth, generative AI is statistically going to be more accurate than anything I would come up with for answers regarding weather and climate, geology, or quantitative analysis. That is, even if it is wrong on some things, it is going to be more right than I ever will be, and even when it is wrong, I’m not going to know it. Still, it has better chances of figuring things out in those domains than I do.
However, if I want to be deeply knowledgeable in a particular domain, I want to be quite careful in how I use generative AI, because it would be incredibly important for me to understand its limitations and to aim to always have a better and deeper understanding of the subject, including the ways the AI tool might not get it.
This leads me to think that what we might see in the future is more splitting of the core curriculum into “Discipline 101” and “Discipline 101 for majors.” So there would be a generalized History, Science, Comp, etc. that more actively considers the role of generative AI for students, and then a “History 101 for History Majors” that focuses more specifically on the exact skills needed to understand the discipline in ways that can be more effectively discerned from what generative AI produces.
Folks in every discipline are likely to balk at this—and I get it. It’s hard for me to think about the disciplines I teach in (writing, literature, history, interdisciplinary studies) as being less relevant or essential. And yet, I can point to other “essential” disciplines that I was exposed to that are absolutely less relevant for me to know and learn, despite how they were framed. We can either recognize this or we can end up missing the reality of how generative AI can and could be used.
I worry about nuanced debates that linger on for years, if not decades, inhibiting decision-making and leaving institutions missing some likely changes. In some ways, it reminds me of the Internet and how it took the COVID pandemic for nearly all faculty to make use of online tools for their courses. Decades of lag before online tools became a de facto expectation feels like a place we can’t afford to be with generative AI.
And My Uses of Generative AI?
This reflection came about from my own ponderings about where and how I use it, and from noticing certain trends.
I do use generative AI pretty regularly. More days out of the week than not. I use it for trying out ideas, reorganizing information, reviewing content, making recommendations, and the like. I have it create images and come up with different ideas that I might not think about. I ask it to clean up language or to look for flaws or tonal styles in my writing.
Yet, there are places where I use it very little.
As 2024 continues to evolve like 2023 for me in terms of the presentations, talks, and consultations that I’m giving, I find that I use it in very limited capacities in this work. I’ll use it to help create support materials, such as visuals, case studies, personas, and other materials. But where I don’t really use it is in the presentations themselves or even the creation of the slides. Occasionally, I’ll use it to get an idea or two, but the arc of the presentation, the words I speak, and the ways I try to connect with faculty or other audiences—that’s all me.
Similarly with writing these posts or writing on my other blog. Obviously, I use generative AI in the research posts, but I feel like that’s a bit different from much of the other writing and presenting I’m doing. In many of these cases, I have certainly explored and tried to use generative AI, but I haven’t found it all that useful. That is, the level of tinkering I might do with what I get back is on par with the actual work I am going to do anyway because, at least when it comes to writing and speaking, I like to think I have a particular “voice”—something I have yet to see generative AI pick up on, regardless of how much of my own writing I put into it.
For instance, at one point, I took my writing and the text from my talks for 2023 and put them into single files to play with in ChatGPT and Google Notebook. Neither was particularly good at recreating my voice or at projecting and anticipating how to take topics in a particular direction with some prompting. Even after a few hours of playing, I found they both came up short.
Recently, I was providing an update on my dissertation as I make my way through Chapter 4. One area where I’ve been tempted to use generative AI has been my dissertation (gasp—I said it aloud!). Tempted, in part, to see what generative AI creates versus what I create. But again, after hours of tinkering and trying out different things, I found it could not produce things I trusted enough to do the work I needed to do. To be clear, my intention would not be to supplant my work; it was more in the realm of curiosity to see what it could do.
Now, some of that might have to do with the fact that both the methodology (phenomenography) and the topic (academic piracy) are not as widely written about or known. Maybe it would have been different if my dissertation focused on student success and used phenomenology. Who knows!?! But it was rather vague and lacking on the contours of phenomenography in terms of being able to apply its methodology.
However, I did find a great use for it in supporting my dissertation work. Because phenomenography isn’t as popular a methodology as others, it means sifting through a lot more dissertations to get a clear sense of what Chapter 4 can look like, and dissertations might also vary according to discipline.
So rather than scour the internet to get a hold of such pieces in order to see what made sense for my dissertation, I asked ChatGPT to provide me a sense of what the outline would look like.
Here's the prompt I used:
"you are an expert phenomenographic researcher who is working on their dissertation. Write out an outline of the key parts and sections of chapter 4 along with a description and recommended length of each (in word count) for each part or section. The dissertation is focused on phenomenography as its method and therefore recommendations for chapter 4 should keep that in mind."
ChatGPT provided a useful answer that, while far from perfect (even I knew to adjust some parts of it), gave me a starting structure that was clearer to work with. Here’s what it gave me:
As a phenomenographic researcher, your dissertation's Chapter 4 will primarily focus on the presentation and analysis of the qualitative data gathered through your research. Phenomenography, being a qualitative research method, aims to understand the different ways people experience or conceptualize a particular phenomenon. Here's a tailored outline for Chapter 4, keeping phenomenography as the central methodological approach:
Chapter 4: Presentation and Analysis of Findings
1. Introduction to the Chapter
Description: Brief overview of the research questions and a summary of the methodological approach (phenomenography). Set the stage for the findings and their relevance.
Recommended Length: 300-500 words
2. Description of the Participant Demographic
Description: Provide an overview of the participants, ensuring anonymity, to give context to the experiences and conceptions collected.
Recommended Length: 300-500 words
3. Presentation of Categories of Description
Description: Present the main categories of description that emerged from the data. This includes a detailed depiction of the qualitatively different ways participants understand or experience the studied phenomenon.
Recommended Length: 1000-1500 words per category
4. Analysis of Variations in Understanding
Description: Analyze the variations in the categories of description. Discuss how these variations relate to the research questions and the broader theoretical framework.
Recommended Length: 1500-2000 words
5. Outcome Space
Description: Discuss the outcome space, which represents the collective understanding of the phenomenon under study, encompassing all categories of description.
Recommended Length: 800-1000 words
6. Relationship Between Categories
Description: Explore and analyze how the categories of description relate to each other. This could involve hierarchical relationships, complementarity, or any other form of interconnection.
Recommended Length: 800-1000 words
7. Discussion of Findings in the Context of Existing Literature
Description: Contrast and compare your findings with existing literature. Discuss how your research contributes new insights or perspectives on the phenomenon.
Recommended Length: 1000-1500 words
8. Summary of Chapter
Description: Provide a concise summary of the key findings, their implications, and a transition to the next chapter.
Recommended Length: 300-500 words
9. References
Description: Include all references cited in this chapter.
Recommended Length: As required.
10. Appendices (if applicable)
Description: Include supplementary materials like full transcripts, extended data, or additional analysis.
Recommended Length: As required.
This structure ensures a thorough presentation and analysis of the phenomenographic data, adhering to the method's emphasis on the variation in experiences and conceptions. The suggested word counts provide a guideline but may vary depending on the complexity and depth of your data.
Now, nothing here is particularly surprising, and some of it is self-evident. Additionally, I've talked with the methodologist and my adviser on my committee, and they've said similar things. But here, I found the mixture of details and recommended lengths, all in a clear layout, helpful. Keep in mind, I blew right past the recommended lengths (as is my style when writing), but the clear, laid-out structure gave me a lot more help in seeing what I had to do and where. The result is that my Chapter 4 has a lot more momentum to it in terms of writing it up.
Where Is Your Line?
So that’s where I am. I’m sitting in this space where I find some of it useful, but not a lot of it, which leads me back to the start of this piece and to thinking about where it should and shouldn’t be.
And I don't think that's a problem per se. I'm a person who develops their thinking in a few particular ways: running, listening to audiobooks, conversation, and writing. And I enjoy all these methods.
But these methods are not universal; others do their deep thinking differently and leverage different tools. Some use art; some use meditation. People do sense-making in a variety of ways, with a diversity of tools and supports to help them do so.
For me, generative AI feels most like conversation or audiobooks whereas running and writing are places where I converse with myself.
I'm largely ok with that and also curious to see how others who are doing similar work are sense-making and positioning their use of generative AI.
For the most part, I'm seeing folks largely avoid it, use it as a supplement or support, or figure out how to help others use it. But if you're using it in a way that's particularly deep and integrated into your work, I'd love to hear more about that. That seems to be what Steven Johnson is doing, but I'm curious about others.
In the end, I guess I'm asking, have you figured out where you are and are not using generative AI in your own intellectual work?
Coda
I have no doubt that there are contradictions throughout this writing. Like other folks in this realm, I’m not trying to claim absolute certainty in any of this. I appreciate friendly and thoughtful provocations to what I am sharing here. These really are a mixed bag of thoughts that I’m slowly making sense of.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International
This mirrors my own experience pretty closely. I keep having a feeling that I should be using AI for everything and have loads of processes and applications for every little thing, but the truth is there are some things (writing) that I don't want to use AI for, and other areas where the impact of AI is more limited. But at the margin it continues to offer a lot of utility, and the constant stream of updates and new products means it is hard to settle on a consistent routine.
This line really resonated: "For me, generative AI feels most like conversation or audiobooks whereas running and writing are places where I converse with myself." As impressive as the capabilities of the latest generative tools are, I think what's changed is the way we relate to them. I know (and I keep saying) that we've had chatbots that seem like conversational partners since ELIZA, but the quality of the exchange is different and a steep increase in the number of people having the exchanges has been the real change. Like you, I'm only finding limited use cases that I think are actually useful. Writing is not an area where I find utility in generative AI. I do find it in the way I bounce off a question I ask it. Nice piece, Lance.