Heard an anecdote. One university manager said they knew a certain staffer used ChatGPT to write something - because the results were good.
thanks Bryan! I've heard that too and cheers on the staffer for upping their game :)
I agree... but I wonder about opposed views:
-That a staff member uses ChatGPT means they didn't improve their own writing, but outsourced it.
-Relying on that AI meant supporting (albeit in a small way) a technology critiqued for a stack of reasons.
-Perhaps something human is lost.
Yeah--finding the line will continue to be challenging!
I guess it would depend on the context and the role. Do all folks care about improving their writing, or is it a means to an end? I think many folks feel the latter. And not all writing is worth "improving upon" or would be made better through an AI tool.
Writing instructions, a process, or other things may be within one's role but not their role entirely...leaning on AI there can be more effective. How much writing is perfunctory or even never read but needs to be done for some record or other? In those instances and others, I'm OK with it, because often the work being presented has its own dehumanizing elements to it.
Great points about context, especially the role. My favorite sad yet realistic use case is realtors deploying AI to write ad copy - not sexy, but a solid use.
I wonder how far our cultural divide over generative AI goes. I keep seeing people call for never touching the stuff.
I hear that...and also, wonder how long that will last or will it matter....does it become like social media, where some folks operate without and others see it as core to their work...
I also wonder what happens when more and more folks find that one useful thing it can do that encourages them...
there definitely will be hold-outs and also, what happens as leaders increasingly require it in different roles...
and this isn't me trying to say we all must use it...but I just see the pull of it in many areas being hard to resist...because generative AI as a tool is a reflection of the culture (hyper-productivity for the sake of capital)...
The social media model is a useful one, Lance.
I'm also pondering ebooks, which swept the world until they stopped, and now readers are kinda split 50-50 between digital and print materials.
Or we follow the classic S-curve and see the critics become holdouts.
You're on to something here, Lance. Your first essay in this series came up in a meeting I was in yesterday. I bragged to someone later that I was one of your early subscribers!
Here's why the series is resonating with me. I think the worst impacts of generative AI have been to push us to surveil our students using monitoring software (powered with AI!) to keep them on the straight and narrow path. Jason Gulya has a good essay up on The AI Edventure on how Coursera's latest announcement is pointing in this direction.
https://higherai.substack.com/p/meet-the-new-face-of-traditional
In my most dystopian moments, I imagine a world of monitoring software combined with predictive algorithms (also powered with AI!) auto-nudging students to engage more with LLM chatbots in order to demonstrate engagement, forcing us all down a path to AI College from Hell.
Reading your thoughtful approach and guidance to instructors working through these issues is giving me a vision of a much more hopeful future.
You're such a kind supporter--I appreciate it, Rob! Yeah--I like Gulya's writing on the subject too!
the surveillance thing is something I've been thinking about for years--I wrote this piece a few years ago about the surveillance issues of our LMSs: https://cuny.manifoldapp.org/read/the-new-lms-rule-transparency-working-both-ways-3a998d1f-b989-4100-881b-1a43968543c0/
I'm hoping my work can be its own gentle reminder and help for folks before they go full automation :)
Great essay. Thanks for pointing to it. This line: "Yet what the LMS has offered to me and many others is a Faustian bargain that promises efficiency and productivity at the cost of respect and privacy."
If you change one letter, it speaks to the current bargain on offer from generative AI. And of course, the concerns about data and LMSs are still just as live today.
Keep going with this fine series, Lance.
exactly!!!