11 Comments

Excellent account of where we find ourselves when it comes to AI detection (don't rely on it) and the larger questions, which should be about fostering trust and communication in our learning environments. Few would argue against trust and open communication. Instead, the objections I hear are arguments about the value of enforcing norms (can't let them get away with it) and the desire to hold students to the same expectations (not fair to the ones who put in the work). Plus, a genuine and understandable desire to continue using assignments and methods that have worked.

No easy answers here, but as you say, start by not using an unreliable tool to detect AI use and approach students with questions rather than accusations. I look forward to the rest of the series!

author

Hi Rob! I hear that--I'm hoping to address more of those questions (how, then, do we address improper AI usage in our classes?) in the next few posts! Thanks always for reading and sharing your thoughts!


This is a rich, nuanced perspective.

This is exactly the kind of discussion we need!

author

That's greatly appreciated, Jason--I realized after writing several emails like this that I needed a spot to send folks that summarized it, and that others might find it useful too! :)


Faculty perspective here: I've seen essays that every AI detector I could find said were AI-generated, submitted by students who openly admitted using it when asked. In other words, I don't think it's all just paranoia either. There have always been some students who cheat, and AI tools look like a gift to them. (I say "look" because AI is currently terrible at writing about music at the college level.)

But the cheaters have always been a small percentage--so I agree that a culture of distrust helps no one. I've also seen false positives, and, once, a student who intentionally tried to write like AI because they thought it sounded more educated (I'm not kidding).

I really want to read your next post!

author

I don't think it's paranoia either, for sure, Matthew--but I do see folks going right to AI or over-diagnosing work as AI without doing the legwork.

My other point is about those being caught with AI--those are the ones who need help. There are still folks using it, and using it so well, that we'll never know. There are whole wormholes on TikTok and Instagram Reels showing different ways to use GenAI effectively to hide their usage. That is, the ones who are using it really well, we're never going to know about. Those who are using it to a degree that it feels obvious are struggling students in need of help.


I also wonder about false negatives. In places where the detector is treated as proof, or strongly indicative, of wrongdoing, how many students who are a little bit better at prompting, a little more canny in how they use it, or just have an extra $20/month, are seeing their papers sail through the detector? It is at least an equity issue and certainly bolsters a culture of cheating. I can see this being as destructive in the long run as accusations based on false positives.

author

Hi Guy! Yes--and I think that's real as well--how easy it is to trick an AI. Students find hacks on TikTok and elsewhere that show them different prompts and ways an AI can rewrite GenAI work in order to pass through AI detectors.


Great write-up. AI detectors definitely should not be relied upon blindly!

One study referenced is worth a deeper analysis... the "bias" narrative comes from a flawed study looking at only 91 samples, calling GPT-4-polished content human and comparing university entrance essays to Grade 8 essays, introducing other variables into the analysis.

Hopefully, some helpful resources to continue the conversation.

Non-Native English Speaker Bias - https://originality.ai/blog/are-ai-checker-biased-against-non-native-english-speakers

Efficacy Study Round Up - https://originality.ai/blog/ai-detection-studies-round-up


This is a great write-up on detectors. I had a student who was absolutely devastated when one of her professors refused to accept her essay because "the detector said so," without realizing that it was just using probabilities and not really "detecting" the AI. Hilariously, when that student had to submit their next essay, they sat down with me to make sure it wouldn't show up as AI-made, and it was so easy to swap words around and use a thesaurus. However, it made the essay way worse in terms of the actual writing.

Jun 26 · Liked by Lance Eaton

Thanks for this information, and for the language in which it is written. I am glad to see that I am not alone in dealing with this issue. Reading freshman essays was never fun, and now I have developed yet another "gear" for reading and assessment just for AI detection. This one really makes reading papers as pleasant as gouging my own eyes out with a rusty spoon, because in this "gear" I am almost detached and robotic and miss the humanity of the paper.

If and when I suspect any AI usage (which I have clearly articulated in my syllabi), I use uber-careful, non-accusatory, non-assumptive language, because these days students have a lot more clout than professors (that's a whole different subject). But the burden of proof for me is that it is instinct or observation that prompts me to check in the first place, so this becomes a bit of a tautological game and a gamble. In my experience, half the time the student will just quietly exit the conversation or even the class, and the other half of the time, I am dragged across hot coals, even though I know the truth and see the difference between the writing samples.

In any case, your article, along with all the other readings and discussions I have done, makes me think that I eventually have to cave and usher in the proper usage of AI in class by instilling what I call "Digital Hygiene," with the hope that I can give my students the autonomy to use AI, but to do it responsibly. This reminds me of when a parent allows a child to consume something they should not in their presence, just to establish trust, demystify the allure, open dialogue, and use the moment to teach.

Thank you!
