12 Comments
Mar 5 · Liked by Lance Eaton

Your pragmatic and applied approach is valuable, as is the idea of choosing where we decide to place that line (which to me is a moving and not always sharp-edged entity).

I do have to say, as someone who spent a chunk of my post-secondary education getting 2 and a fraction degrees in geology (though I diverged and never made it a career), that the value of learning is not in the facts and details we do not remember (though knowing that the mnemonic device for the geologic eras was a silly sentence I won't waste time with here still gives me COPDMTJC). The value was in the ways of thinking one learned: how to approach classification and unknown-identification schemes, seeing relationships in three dimensions, visioning, 2D (maps) to 3D (world) visualization. This is the stuff that actually was foundational for later learning, you know, up the old pyramid.

author

Hi CogDog,

YES!!! It is definitely a moving edge! When I do talks and workshops, I usually start with the fact that I've had 15 different feelings about that edge, and that was just that day!

And yes, I 1000% agree that the ways of seeing the world and sense-making are the higher value in learning. But that was the thing in both cases--the basic facts and the accompanying foundational learning were still lost on me. We can chalk it up to different teaching styles/approaches (this was the late 1990s, and having a color vision problem made things more complicated in geology) and poor science education in high school (long story--but my sophomore chemistry class was more about the aerodynamics of the paper airplanes we threw behind the teacher's back than actual chemistry).

To me, using GenAI to help me figure things out in my own way ("Explain geology to me using the planets in Star Wars") might have made things clearer or more engaging in the way that I needed for any of it to stick.


Good point. Part of learning inside a disciplinary community involves adopting complex attitudes toward what counts as true, how the community distinguishes between factuality and truth, sources of academic power and authority, genres, etc. Subjectivity, for example, is salient in literary studies, less so in geology and mathematics. Learning how to learn a discipline depends on building expertise as well as capacity. This, I believe, explains Lance's point about the difference between taking up a posture as a novice vs. as an expert.

My advice to teachers, based on limited experience, is to emphasize to students that AI is more like a hammer than a friend. It's a mechanical tool that appears to read and write like people do, but like a hammer, all it can do is take directions and follow them--sort of like digital farm irrigation systems, tractors, or a nail gun, operated through electricity, not through consciousness. So teachers might consider fostering and modeling a COMMAND posture, recognizing that with most tools you have to massage them or tweak them or finagle them, which means acting out a part with AI. It's a mistake to believe AI cannot make mistakes. The person controls the tool, and it's easy to pound a thumb instead of a nail.

Critically, in my theory (which could be garbage), students must understand that AI-produced text is synthetic and (un)authoritative, more like an on-the-fly encyclopedia consult than a peer, even when you ask it to write a poem or solve a problem. Everything it says and does has been said and done by a human; AI just breaks it up and borrows the parts wherever a part looks promising. Humans don't work that way. It contributes nothing new, though it can assemble bits of human parole (Saussure) in surprising ways. The outcome is not a written product but an elaborate search through texts for structurally relevant words and word particles.

The value IS in the ways of thinking we learn, CogDog--take it from Shakespeare's Monkey. The same is true of AI as a tool of learning for learners who think in ways appropriate to the disciplinary community: oddly, a textual delivery machine, like an audiobook.

author

Thanks for adding insights here, Terry!

I appreciate the analogy of AI as a tool--I think the challenge is that it can be different tools in different disciplines (all of which can do physical harm as much as help).

Yes--the idea that AI produces things uncritically, just through probabilities and without scrutiny, is a big consideration.


I forgot. Are you sure you want to leave AI out of History 101? It's likely that people who want to study history have built some expertise. By modus ponens, students in History 101 could benefit logarithmically more than a Generic Student 101 who has not declared any expertise.

author

Valid point--I don't think I would leave AI out of History 101 for majors, but I think (to connect to your previous point) it would be presented as a different set of tools than it might be in the non-majors History 101.


How so? Your approach seems to be based on the assumption that language bots can be thought of as lawnmowers, washing machines, audio mixers: different tools. Bots can be specialized, for sure, but in what sense are they a collection of different tools? Also, in my experience it can be a mistake to assume that the way something is presented to students is the way they will use it or think about it. University faculty in particular are so knowledge-driven; all learners need to come to grips with AI. The idea that AI is going to somehow weaken the human capacity to learn is limiting. I think your proposal to segregate students by way of presentation of the tool (likely involving rules of ethical behavior and "following directions"), especially at the onset of innovation, is a compromise that in the end is worse than the goodness of immediate benefits--sort of a 3/5ths clause in the Constitution/Institution.


This mirrors my own experience pretty closely. I keep having a feeling that I should be using AI for everything and have loads of processes and applications for every little thing, but the truth is there are some things (writing) that I don't want to use AI for, and other areas where the impact of AI is more limited. But at the margin it continues to offer a lot of utility, and the constant stream of updates and new products means it is hard to settle on a consistent routine.

author

Yeah--I also wonder about this for folks who do a lot of their thinking in writing. That's great for us, but it's not how all folks think. I know some folks who are brilliant, and writing is not how they think, so writing becomes an unhelpful hurdle rather than a clarifying space. Some of them, when they use AI (critically), find they can get more out of themselves than they would otherwise.


This line really resonated: "For me, generative AI feels most like conversation or audiobooks whereas running and writing are places where I converse with myself." As impressive as the capabilities of the latest generative tools are, I think what's changed is the way we relate to them. I know (and I keep saying) that we've had chatbots that seem like conversational partners since ELIZA, but the quality of the exchange is different, and the steep increase in the number of people having these exchanges has been the real change. Like you, I'm only finding a limited set of use cases that are actually useful. Writing is not an area where I find utility in generative AI. I do find it in the way I bounce off a question I ask it. Nice piece, Lance.

author

Thanks, Rob! Yeah--when I realized where it fits for me along those lines, it helped a lot...and I think for those of us who use writing to think, it still doesn't sit right for that. But for bouncing ideas and iterating, I think it has possibilities for my usage! Appreciate the read and the response as always!


See if you can get past thinking about what AI does for the teacher and how the teacher wants to "use" AI as a teaching tool to deliver content. Instead, think about how students might be best positioned to learn to use AI as a learning tool. Think about learning with AI rather than teaching with AI. In the end, learners are going to do what they are motivated to do and find effective. If they are motivated to get a good grade, they will act accordingly. If they are motivated by curiosity, they will act accordingly. The good grade means working for the teacher (3/5ths clause). The curiosity means working for learning. Teachers often think working for the teacher is learning. But learning science is well beyond that paradigm.
