13 Comments
Oct 21 · Liked by Lance Eaton

Standing ovation. Brilliant, brilliant talk, Lance. I will be sharing this with our faculty through my weekly edtech news feed. Bravo!


I don’t think gen AI is comparable to Wikipedia at all. Wikipedia has human editors, checks and balances etc (or at least it has evolved into having them). It doesn’t tell you to mix glue into pizza sauce like Google’s AI did the other day. As far as ethics go, Wiki is way further ahead. The problem with AI is that its barons don’t seem to really care about checks and balances, like Jimmy Wales did. It’s NOT technophobia, it’s fear of the insanely rich and irresponsible human beings in Silicon Valley who only care about their own profit, as we have seen with Zuckerberg, Musk and the like for the past two decades.

author

Hi Gabriela

I'm not comparing them as similar in what they do. What is similar are the reactions that educators had to both of them.

The risk of inaccuracy is the same concern faculty have expressed for both... I've been hearing other educators make this claim about Wikipedia for over 15 years because "anyone can edit it"...

There are other fears, sure, but the major ones that I have heard from thousands of faculty are the same ones that have come up with all sorts of previously existing technologies and that we have yet to address or solve for...

Oct 22 · Liked by Lance Eaton

This was my point exactly, that it’s not the same fear! It may sound like it is because of the way people frame or describe it, but it really isn’t. The fear behind AI is more profound, and it should be. It’s time we stop being naive about technology in education. PISA 2022 data was pretty clear, as far as K12 goes. And the Center for Humane Tech does a great job regarding the naïveté part…

Oct 21 · Liked by Lance Eaton

What a great, thoughtful meditation on AI and education... and at the same time an inspiring talk for the audience. Well done.

And many thanks for the shout-out!

author

thanks Bryan! If I'm gonna talk about climate in higher ed, it's an obvious mention :) now if they can just put it in audio :)


Wow, Lance! Masterful weaving together of so many important threads. I'm in the middle of writing and thinking about how the pandemic is the right place to start the story about the impacts of generative AI on education, and I find you there already...no surprise as you so often articulate insights that are bouncing around my head trying to get out.

I really like the snowmageddon reference because it is a good reminder that education is always being disrupted by external forces...the pandemic just happens to be a big one that preceded AI by a few short years.

author

thanks Rob! Yeah--I think that was a point that I wanted to drive home that we seem to keep forgetting! :)


That pipe allusion to Magritte! (I mean, the rest of it was great too, but LOVE it!)

author

thanks Liza! there's another post that I want to do about that and why I think it's a useful metaphor to work with....but had a lot of fun...also I made this video about how I came to this particular image: https://www.youtube.com/watch?v=S7nSbP-WyQs


Forgive me for going all McLuhan-y on y'all here. The medium of communication, he states, "shapes and controls the scale and form of human association and action" (McLuhan, 1964, p. 9), which, in his view, has greater importance for defining the meaning of human interplay than the substance of messages themselves. An example might be how the Ford Model T was proposed as a way for the common man to get out and see the landscape and, in the long term, destroyed it. It also transformed concepts of privacy, leading to changes in dating and sex behavior. The Model T, as a medium, requires that we focus our attention beyond its purely functional role as a means of transportation. The "message" of the Model T, therefore, is that the environment and social norms are not as important as man's [sic] need to move about independently, free of oversight.

As it applies to AI, as others have posted here, AI shifts the balance of human interplay, redefines the meaning of knowledge, shifts the methods by which we legitimize, validate, and attribute any knowledge claim, and adds another layer of accountability to individuals in positions of expertise.

The keynote proposes that the encounter with AI is no different than other encounters with technology, and I agree. It is just one more challenge for us to justify our relevance as educators. However, the "message in the medium" is something we most certainly should be paying attention to. The "message" of AI looks like this:

- All of the information created by humans over the millennia belongs to whoever has the greatest corporate power to take it and use it as a basis for serving a business model.

- The answers to questions are simply a matter of computing.

- The optimal means by which we overcome stopping points in cognitive movement is to avoid human friction.

- Human-level discourse is inefficient.

- It is socially acceptable to outsource human relationships to a generative proxy.

Is AI a nothingburger on par with Socrates' lament about writing? Yes, and no. Yes, it is yet another challenge to the status quo. No, because a society today + AI is not just a society + AI. It is a completely different society.


I love this!

I always tell educators to make changes that will make sense even if (not that I think it will) Gen-AI dies tomorrow.

I like your way of putting it better.

author

thanks Jason! much appreciate the similar thinking and sharing!
