3 Comments

I'd like to suggest adding a Step 2.5. It is important for faculty to reflect on what matters in the narrative of student learning for any given assignment and identify which tasks must be preserved in their traditional form. In other words, conduct a triage of instruction to determine what *cannot* have an AI component. Once you have established the "space" for what *can* have an AI component, then you can discuss what is possible within that space.

Second, it is useful for faculty to think incrementally about the use of AI so that it does not feel so expansive. I suggest starting with ways to streamline certain tasks at a relatively low-stakes level before considering transformative approaches that leap too far, too fast. (Think SAMR model.)


This is excellent. Really useful stuff. Do you have anything more on the icebreakers you used here?


hey College robot!

The "Wrong Answers Only" is really my go to....but I could imagine other questions might be:

What's the worst/most ridiculous answer you've gotten from an AI?

If AI were to become sentient, what would it be most disappointed about with humans?

It also helps to remind folks at the start that the name of the first big AI chatbot (ChatGPT), when pronounced in French, sounds like "cat, I farted"...
