In April, I had the honor of delivering a keynote for Advancing a Massachusetts Culture of Assessment (AMCOA), a great organization that includes some of the people I deeply respect and have had the opportunity to work with over the years.
This piece absolutely hit the mark. Your framing of GenAI not as a replacement but as a catalyst for rethinking assessment resonated deeply. I’ve been working on scaffolding models that integrate structured content mapping, spiral learning, and AI-powered feedback loops, and your point about aligning assessment with process, not just product, is critical.
I particularly appreciated the emphasis on authentic learning and the reminder that innovation should serve pedagogy, not just novelty.
One area I’d add to the conversation is the foundational role that skills like cursive handwriting can play in supporting the orthographic loop, the cognitive process linking visual perception, motor coordination, and memory consolidation.
There’s compelling research suggesting that even in AI-supported models, preserving these practices strengthens neural pathways for reading fluency and long-term retention. Cursive handwriting: https://extension.ucr.edu/features/cursivewriting
While we may disagree on the fate of cursive, your piece strengthens my conviction that GenAI’s highest use isn’t automation; it’s alignment. I’m eager to keep exploring how GenAI partnerships can reshape what’s possible, whether typed, printed, or (gasp) scrawled in Helvetica.
———————-
Note: The recommendation to explore your Substack and the link to AI + Education = Simplified came from my own GenAI partnership, with ELIOT (Emergent Learning for Observation and Thought).
Thank you for this thoughtful exploration of AI's role in education, Lance. Your point about the gap between technology integration promises and actual learning outcomes really resonates. The challenge you describe - faculty rebuilding that "rich, interconnected web" of learning outcomes, assessments, and materials after AI "scatters" everything - captures a recurring pattern in EdTech.
This recurring gap between EdTech innovation hype and true educational impact is something I've been thinking about extensively. There's a compelling analysis of this phenomenon at https://1000software.substack.com/p/technology-wont-save-schools that argues technology alone can't address the systemic challenges in education without fundamental changes to how we approach teaching and learning.
Your emphasis on "challenge, complexity, and connection" as foundations for skill development (drawing from Beane's work) seems particularly relevant here. I'm curious: when you work with faculty on AI integration, how do you help them distinguish between AI applications that genuinely support these foundational elements versus those that might inadvertently undermine them? What evidence of impact do you look for to know whether an AI-enhanced approach is truly serving learning versus just appearing innovative?
That's a great question, Christopher, and thank you for reading and thoughtfully engaging!
I think this is where I really lean into a direct conversation with students to help figure it out. There are assumptions we hold about what should and shouldn't happen that aren't always true (of ourselves, or our students). I'm not sure we can fully distinguish these without a lot more context and an exploration of the explicit (and often implicit) assumptions about learning built into that course.
Evidence, to me, would be a more active conversation, and a raising of questions about how we use these tools, sustained by students as they encounter new assignments and activities throughout a course. (At least, that's the short answer. :)
Fantastic observations as always, Lance Eaton. You have a way of painting a visual!
Aww, thank you, Christine! I try :)
The modern U.S. public education system, since the creation of the Department of Education, has never been about education, but rather programming. https://torrancestephensphd.substack.com/p/dumbing-down-students-so-everyone