Love the question, Lance! My practice is similar. I have a series of custom GPTs focused on different phases of the learning design process. I only use them live in curated sessions with my community, students, and clients... because like you I am observing how they perform. I find it helpful to remind them up front that they are to "follow your instructions exactly". Sometimes I have to remind them to ask one question at a time and wait for the answer, rather than spewing out a wall of text. (If it does that, I find that telling it "You are the guide on the side, not the sage on the stage" snaps it back into its proper role.) Other than that, they generally perform well.
Hi Rebecca! I really like that idea of working through a series of Custom GenAIs as part of a process with folks. That could be so informative, especially if everyone starts from the same premise, to see how the outputs overlap and vary!
Thank you for sharing, Lance! With older models I found ChatGPT and Claude struggled to hold on to instructions within a chat thread, and I gave up relying on them. This was a timely reminder to try again. I’m looking forward to seeing how custom GPTs impact the quality of outputs.
Interesting that you can guide the machine this way. I wonder, though, about this instruction:
"Prioritize scholarly accuracy, verified information, and conceptual nuance. Ground analytic or theoretical answers in peer-reviewed research and include APA-formatted citations with real sources only when evidence is referenced."
Because no matter how well framed the response is, as far as I know GenAI still lacks the ability to fact-check and verify information; it can only write as if it has. Can it look up and locate information? Can it provide the web URLs for things it returns? Does it still write well-formatted citations for stuff that does not exist? Or am I wrong?
I also noted this in a very small task: I had asked NotebookLM to anonymize a Zoom chat by replacing participant names with asterisks using:
""Make a copy of this transcript where participants names are made private by replacing all letters in a name with an asterisk * except for hosts [name removed], Alan Levine, [name removed], and [name removed]"
It did it perfectly the first time, but I messed up the output trying a search and replace for another part of the file. So I went back and redid the whole process. This time, for some reason, it returned the anonymized names with asterisks but put the names right after in parentheses, calling for a prompt wrist-slap do-over.
Same data. Same prompt. Different results. And this is a rather trivially simple task.
With the citation question, what I find is that some models (ChatGPT) do provide a direct link to specific resources (embedded) and often do produce a reference list at the end. The majority of the time these sources are accurate and exist; at times they may not be perfect (e.g., often when citing Inside Higher Ed articles, it gets the author wrong but everything else right).
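One quick first pass on those references is just checking whether the links resolve at all. Here's a minimal sketch in Python; the URLs (and the idea that a HEAD request is enough) are my own assumptions for illustration, not anything the models guarantee:

```python
import urllib.request

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers a HEAD request with a non-error status."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "citation-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:  # DNS failure, timeout, 4xx/5xx, etc.
        return False

# Spot-check links a model returned (placeholder URLs for illustration).
for url in ["https://www.insidehighered.com/",
            "https://example.com/made-up-article"]:
    print(url, "->", "resolves" if url_resolves(url) else "check by hand")
```

Of course a link that resolves can still carry the wrong author, like your Inside Higher Ed example, so this only catches the fully fabricated ones.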
I would be curious about your experiment (which I like, though I feel like that might just be a Find & Replace if I'm trying to maintain any anonymity/privacy from an LLM) when you try it again... I feel like not only will it vary in the moment, but I also keep retrying things that didn't work or only worked "ok" every 3-6 months across different models to see if they have changed.
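If I did go the Find & Replace route, a small deterministic script avoids both problems Alan hit: the names never reach an LLM, and the same input gives the same output every run. A minimal sketch, assuming you already have the participant list (the names here are placeholders):

```python
import re

# Hosts whose names stay visible (placeholder names for illustration).
KEEP = {"Alan Levine"}

def mask(name: str) -> str:
    """Replace every letter in a name with an asterisk, keeping spaces."""
    return re.sub(r"[A-Za-z]", "*", name)

def anonymize(transcript: str, participants: list[str]) -> str:
    """Deterministically mask each participant name unless it is a host."""
    for name in participants:
        if name not in KEEP:
            transcript = transcript.replace(name, mask(name))
    return transcript

# Same data, same code, same result, every time.
chat = "10:02 Alan Levine: welcome!\n10:03 Jane Doe: hi all"
print(anonymize(chat, ["Alan Levine", "Jane Doe"]))
# 10:02 Alan Levine: welcome!
# 10:03 **** ***: hi all
```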