Last week, I had the opportunity to attend the NERCOMP conference. What’s a NERCOMP? I know, I get asked that question whenever I talk about them. NERCOMP is a regional organization serving the Northeastern US, similar to (and often collaborating with, though separate from) the national organization, EDUCAUSE. NERCOMP does two great things for higher education institutions in this region: it provides vendor agreements at good discounts, and it offers rich professional development to a variety of folks in information technology, academic technology, tech leadership, instructional design, library spaces, and more.
I’ve been attending NERCOMP events and the annual conference since about 2012 and have helped plan and facilitate many of their professional development sessions. In fact, a colleague and I are doing a virtual workshop in June called “Support Faculty with MVP: Minimum Viable Practice for Sustained Learning”.
Since I’m on the subject, I’ll shout out two other cool upcoming events in April. One is a workshop by some colleagues of mine at Wentworth Institute of Technology on using bad design to understand good design, called “Strike That, Reverse It! What Not to Do When Designing Learning Experiences”. They also have Idea Day coming up, which lets the community participate in program planning for the year, surfacing ideas for professional development and moving them forward. Finally, I’ll mention a directly relevant offering coming up in June, their Thought Partner Program: AI in Teaching and Learning, a series of drop-in conversations about AI for folks to share and learn together. If you’re in New England, you should check whether your institution has a membership and sign up for some of these free or highly discounted events.
So NERCOMP is a pretty cool organization that holds its annual conference in Providence, RI. This year, not only was I attending, but I had the opportunity to lead a 4-hour workshop with three colleagues and then also do a 45-minute session with three other colleagues—all on AI (no surprise).
The Workshop
This session, Your New Teammate: Leveraging AI Tools and Practices to Better Support Faculty, included Adam Nemeroff (Quinnipiac University), Stephanie Payzant (Post University), Xiaorui Sun (Brown University), and myself. Now, a 4-hour workshop before two days of conferencing is a lot, but I think we found the right balance of talking and activity, as folks were largely engaged throughout all four hours, which is a hard thing to do.
Our goal was to create an idea exchange for educational developers to share, learn, and apply ideas about how they are or might want to use generative AI to support their work. To get there, we first laid some groundwork, introducing folks to the most popular AI tools and then explaining how some of the models work. We looked at a case study from Post University, which has integrated generative AI into its instructional design workflow. Then, we moved into playing around with a variety of examples that I generated using AI, and had folks test them out and review them. You can find those examples and their reviews here if you’re looking for ideas. Next, having had a chance to play with examples, we brainstormed and tested out other ways we might use generative AI (the output can be found in the resources). The day as a whole proved helpful for exploring and conversing about where educational developers are, where they want to be, and where they don’t want to be. All the workshop materials can be found here if you want to take a look.
The Presentation
Following right on the heels of this (as in, the workshop ended at 5pm and this session was at 8:30am the next morning), I also got to work with some of my favorite peoples, Esther Brandon (Harvard University), Dana Gavin (Dutchess Community College), and Allison Papini (Bryant University) for a very active morning session called “A Round of Musical ChAIrs: Reviewing, Implementing, and Revising AI Policies”.
Last year, the four of us ran the preconference workshop on creating an AI policy, and then we went on to write Cross-Campus Approaches to Building a Generative AI Policy for EDUCAUSE Review. So we figured we had to follow it up with another robust discussion about where people are, what they are doing with their policies, and how implementation is going. We were both surprised and not surprised to find that a lot of institutions have still not found their footing around this. If nothing else, the session helped normalize the disarray that higher ed finds itself in around generative AI. And, of course, here are the resources that we created and shared for this session.
The Conference as a Whole
For another year, generative AI was on a lot of folks’ minds. Lots of sessions discussed its usage and how to navigate it in the different areas where education and technology intersect. For the most part, it still feels like a mess of uncertainty, and I think that’s in part because the ground doesn’t quite feel settled: what we know about generative AI keeps shifting, and, well, it’s still unreliable.
I keep thinking about a comment by Maha Bali on a recent panel we were on. She said, “If the possibility is 1%, this is more dangerous than when the possibility is 50%, because you're more likely to miss it.” She was discussing the challenges and concerns of wrong outputs from AI (I loathe calling them “hallucinations” because of how that anthropomorphizes generative AI). So using generative AI feels more problematic with a 1% failure rate than with a 50% failure rate, because the errors are harder to catch and our trust in the tool increases uncritically.
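To make that intuition concrete, here's a minimal sketch of the math (the failure and review rates below are invented for illustration, not measurements from anywhere): if our vigilance drops as errors become rarer, a 1% failure rate can actually let more errors slip through than a 50% one.

```python
# Toy model of the "rarer errors are easier to miss" intuition.
# All rates below are assumptions for illustration only.

def undetected_errors(outputs: int, failure_rate: float, review_rate: float) -> float:
    """Expected number of wrong outputs that slip past human review."""
    errors = outputs * failure_rate
    return errors * (1 - review_rate)

outputs = 1000

# At a 50% failure rate, we distrust the tool and check nearly everything.
slipped_at_50 = undetected_errors(outputs, failure_rate=0.50, review_rate=0.99)

# At a 1% failure rate, the tool feels reliable, so we barely check.
slipped_at_1 = undetected_errors(outputs, failure_rate=0.01, review_rate=0.20)

print(f"50% failure, 99% reviewed: {slipped_at_50:.0f} errors slip through")  # 5
print(f" 1% failure, 20% reviewed: {slipped_at_1:.0f} errors slip through")   # 8
```

The specific numbers are made up, but the shape of the problem holds: reliability that is good enough to earn our trust, but not good enough to deserve it, is exactly where unchecked errors accumulate.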
Because of that 1%, we’re still scratching our heads, which I think is really interesting because nearly all of ed-tech is unreliable for some percentage of us, but we steamroll ahead with it. We know eproctoring technologies did real harm to some portion of students, but many institutions held, and still hold, tight to them. This isn’t me saying, well, let’s roll ahead with generative AI because we’re already committed to doing harm with other technologies, but it’s interesting how the shifting ground of AI coupled with its unreliability makes folks more hesitant. It’s like we’re saying, “constantly changing but reliable is fine, and static but unreliable is fine, but we can’t handle both at once!”
Still, one session I want to highlight was Building a Plane Mid-Air? Strategies for Faculty Engagement with AI from the folks at the Rhode Island School of Design. It was particularly excellent. Not only did they talk about how they are working through ideas with their faculty, but they actually brought their faculty—which doesn’t happen enough at NERCOMP and similar conferences. So we got to hear directly how thoughtfully and with what nuance the faculty were approaching generative AI in their classrooms. It was fantastic to see critical explorations of generative AI with students through the faculty lens.
The Keynote featured a "fireside chat" (with no fire--convention hall regulations and all) with Dr. Alondra Nelson, interviewed by Stan Waddell (with whom I collaborated last year on the EDUCAUSE Review article “10 Ways Technology Leaders Can Step Up and In to the Generative AI Discussion in Higher Ed”). Nelson is the former acting director of the White House Office of Science and Technology Policy, so she had a lot of insight to bring to the discussion around generative AI. She spoke to the challenges of how government can or should react to generative AI, whether it is something definitively different or subsumed under technology as a whole, and how, in general, we aren’t great at making that distinction. That is, every time there is a new feature or element of existing technology (social media is a great example—it wasn’t a new technology even when it was new, but a new arrangement of existing technologies), there’s a rush to specialize a focus on it and create a “czar” of some sort or another. Yet, realistically, it’s smarter to be more holistic and thoughtful about what we want from technologies as a whole. But that is much less compelling and requires more nuance than headlines and soundbites allow.
However, I was much more interested in her work on the 2022 Memo that promoted the idea that all research funded with taxpayer money (about 2/3 of all research in the country) be made publicly available. As some readers know, my (soonish-to-be-done) dissertation focuses on academic piracy and how scholars use academic pirate networks like SciHub & LibGen in order to access research literature so they can actually be scholars. Nelson's work on making research more publicly available speaks to the heart of so many issues in my research.
Final Thoughts
Personally, NERCOMP was a lot of fun and deeply engaging for me. Like many in the realm of education and technology in higher education, I sit in a place that lots of other folks don’t necessarily get or fully value. So being surrounded by folks with similar challenges, great insights to share, and who simply “get it” was something I deeply appreciated. It’s part of why I keep coming back to this organization.
But more importantly, I appreciated the more specific common challenges and concerns we all have around generative AI. The landscape is still hella confusing and unclear; 17 months since generative AI burst onto the scene, many institutions still don’t know what they’re doing. For sure, some institutions have created or updated policies (check out this padlet of higher ed institutional policies), and of course, lots of individual faculty have developed their own classroom policies (check out the AI Policy Syllabus collection for over 100 examples). Yet, there doesn’t seem to be a critical mass around any of it that other institutions can follow, even though there are folks staking out clear directions to pursue with generative AI.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International
Re: those "wrong outputs." I keep thinking that we don't hold AI image generators to the same standard that we do AI text generators. That is, if I ask Midjourney to create an image of a bunch of robots playing basketball, no one is sitting around saying, "That's not what robots playing basketball looks like!" Yet we expect text generators like ChatGPT to answer questions accurately.
I wonder if some of us are still thinking of AI text generators with IBM's Watson in mind, after it beat Ken Jennings at Jeopardy. That tech was designed to answer questions!