Developing the Research Insights Prompts
Documenting my process of developing prompts and iterating with ChatGPT's feedback to review research articles
As I mentioned in the last post, I'm starting to use generative AI as a tool for reviewing and exploring research, and today I'm going to break down how I did it. A challenge that many folks still struggle with is developing and working with prompts. I certainly struggled at times and figured I would share my process.
I'm sure many have heard the statement "work smarter, not harder." I attribute that quote to Scrooge McDuck because I can still picture the scene in my head in DuckTales where a young Scrooge uses a cycle contraption to shovel coal into the steam engine more effectively rather than shoveling by hand. Even if I bothered to look up who that quote is actually attributed to—in the folds of my brain, it's always Scrooge.
Part of the purpose of this Substack is to share things that I’m doing and make generative AI easier to understand, try out, and explore for others. In that vein, I’ve been wanting to start this series wherein I use generative AI to explore some of the research out there (both in general and specifically about generative AI) to help me continue to learn more and share more.
Many of us reading this most likely have the infamous "To Be Read" folder: a folder with dozens (if not hundreds or even thousands) of PDFs and other documents that we REALLY want to read when we get that pocket of time that almost never exists.
Since Claude and now ChatGPT have much bigger context windows (the amount of content you can upload to be analyzed and incorporated into the answers the generative AI provides), folks have been uploading articles and even books to "interview" them—so I'm not offering anything innovative or that hasn't been done. Yet, what I want to do is start doing this consistently with this series on the Substack and also share out those findings.
So I’ll spend a little bit here talking through how I’m setting up ChatGPT to do this so that you can follow along or try it if you think it might be helpful.
Step 1: Starting the Process
I started by trying to figure out what I wanted. I produced this prompt to get some initial ideas and chose an article to start playing with.
You're an instructional designer and faculty developer expert. You are effective in drawing out deep insights from research literature and finding effective ways to integrate those insights into practice as well as to use those insights to inform strategy. When reading research literature, what are the 10 most effective questions and frames used to identify insights from an article that help you as an instructional designer and faculty developer expert?
That produced the following questions, which I thought were a good start.
What is the main research question or hypothesis?
What are the theoretical frameworks or models used?
What methodologies are employed in the study?
What are the key findings or results?
How do these findings relate to existing literature?
What are the practical implications of the research for instructional design and faculty development?
What limitations are acknowledged in the study?
How can these insights inform strategy and decision-making in educational settings?
What innovative or creative ideas does this research inspire?
How can these insights contribute to my own professional development and knowledge?
But these were also a lot of questions with some overlap, so I then asked this prompt:
With those questions in mind, analyze this article and provide an 8-10 sentence summary of this article that is helpful for instructional designers.
I added another prompt after this:
What are the practical considerations and actions that an instructional designer and faculty developer should take based upon this article?
Step 2: Assessing the Article
These were giving me some insights about the content, but another part of me also wanted a high-level assessment of what I was looking at. I turned to asking it to assess certain qualities of the article with this prompt:
Assess the quality of the article. On a 10 scale (10 being the strongest), rate and explain the following aspects of the article:
Clear and strong argument
Evidence
Reliance on diverse perspectives and research
Clarity of ideas
Ability to directly apply ideas into practice
The results felt a bit off, so I refined the question to spell out what I meant by each quality:
Try that assessment again with the following considerations.
Assess the quality of the article. On a 10 scale (10 being the strongest), rate and explain the following aspects of the article:
Clear and strong argument: What the article is trying to communicate is structured in a way that guides the reader thoughtfully through all of its points.
Ease of Reading: The degree to which the authors rely on complex terminology and complicated language and assume what the audience knows.
Clarity of ideas: How straightforwardly the key points of the article can be deduced.
Evidence: How consistent are the authors in drawing upon research to validate claims and how well they demonstrate their evidence for their findings.
Reliance on diverse perspectives and research: The degree to which the research they draw upon represents a variety of standpoints in terms of time of publication, diversity of authorship, publication type, and other relevant factors.
Ability to directly apply ideas into practice: The ease of taking what the article discusses and translating it into things that people can do.
This still didn’t quite get to what I was looking for so I asked ChatGPT to help me better word the prompt for improved results:
How might I better reword or clarify the following prompt in order for you to more effectively analyze a research article? Ask any questions that will help you improve your response.
[Included categories & questions from above]
I got feedback that helped me adjust and I updated the categories. I figured one more round of feedback could be helpful and so asked:
Review these updated prompts. Are they missing anything else or could they be improved further?
I stopped after this because I could stay in this recursive loop for a while and wanted to get to actually testing the prompts out. However, this is one of the ways that I find ChatGPT particularly effective: have it create or edit something, then ask it for feedback on improving that thing, improve it accordingly, and ask for more feedback.
The current prompt that I'm now using for these articles is the following:
Assess the quality of the article. On a 10 scale (10 being the strongest), rate and explain the following aspects of the article. Be sure to draw on specific details of the article:
Writing Intention: Indicate what kind of writing piece this is, such as a theoretical argument, an empirical study, a practical explanation, a case study, etc. It can be more than one type of writing. Specify the main objective or thesis. Additionally, determine this piece's intended audience(s).
Ease of Reading: Review the degree to which the authors rely on complex terminology and complicated language and assume what the audience knows. Consider to what degree the overall structure and organization aid understanding.
Evidence: Clarify the types of resources and evidence that are used. How consistent are the authors in drawing upon additional scholarly research to validate claims, and how well do they demonstrate the evidence for their findings? Evaluate the reliance on qualitative, quantitative, mixed methods, primary sources, and other types of evidence.
Clear and strong argument: What the article is trying to communicate is structured in a way that guides the reader thoughtfully through all of its points. Identify if they engage with counterarguments or alternative perspectives.
Clarity of ideas: How straightforwardly the key points of the article can be deduced, including the logical flow of the article, the explicitness of key points, and how well the article summarizes its main ideas.
Reliance on diverse perspectives and research: Consider interdisciplinary approaches, global versus local perspectives, intersectional considerations, historical and contemporary lenses or other types of approaches. The degree to which the research they draw upon represents a variety of standpoints in terms of time of publication, diversity of authorship, publication type, and other relevant factors.
Ability to directly apply ideas into practice: The ease of taking what the article discusses and translating it into teaching, learning, and supporting faculty in developing teaching and learning.
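As an aside for anyone who wants to reuse this rubric beyond copy/paste, the categories can be stored as data and the prompt assembled on the fly, which makes it easy to add or reword a criterion later without rewriting the whole block. Here's a minimal Python sketch of that idea (the function name and the shortened category descriptions are my own illustration; the category names and the header wording come from the prompt above):

```python
# The rubric from the assessment prompt above, stored as data so individual
# criteria can be tweaked independently. Descriptions are shortened here;
# the full wording appears in the prompt in the post.
RUBRIC = {
    "Writing Intention": "Indicate what kind of writing piece this is and its intended audience(s).",
    "Ease of Reading": "Review the reliance on complex terminology and assumed audience knowledge.",
    "Evidence": "Clarify the types of resources and evidence used to validate claims.",
    # ...the remaining categories follow the same pattern.
}

def build_assessment_prompt(rubric):
    """Compose the full article-assessment prompt from the rubric."""
    header = (
        "Assess the quality of the article. On a 10 scale (10 being the "
        "strongest), rate and explain the following aspects of the article. "
        "Be sure to draw on specific details of the article:\n"
    )
    lines = [f"- {name}: {description}" for name, description in rubric.items()]
    return header + "\n".join(lines)

prompt = build_assessment_prompt(RUBRIC)
```

The nice side effect is that each article's assessment uses exactly the same wording, which matters if you ever want to compare ratings across articles.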
Step 3: Creating a Useful Summary
Now that I had the means of assessing the article, I wanted to find a good way of eliciting a purposeful summary of it. I find abstracts limiting in many ways (it makes me wonder what research there is about research abstracts—very meta). So a summary created from a frame relevant to me and my work was important.
Once again, I started with ChatGPT and asked the following:
what would be a useful prompt to ask for a summary of the article that provides more than the abstract and helps educational developers and faculty to know if the article is valuable to their work?
The output was useful and I only did small tweaking:
Please provide a detailed summary of the attached article, focusing on its key points, findings, and recommendations. In particular, highlight how the article's insights and conclusions are applicable to the work of educational developers and faculty. Discuss any specific strategies, methodologies, or theoretical perspectives presented in the article that can inform or improve teaching and learning practices. Additionally, identify any gaps, challenges, or limitations mentioned in the article that educational professionals should be aware of. The summary should offer enough depth to help educators determine the article's relevance and value to their work.
Step 4: Adding “Surprising” Findings and Keywords
Given what I do (and don't) know about generative AI, I'm skeptical about this question but thought it still might be worth asking to see if it yields anything of note. My hope is that I could draw out something that I might not notice or realize, or just something unexpected. I'm probably going to continue to tweak this one to see if it can elicit something better:
Are there any surprising or unconventional findings or approaches in this piece for educational developers or educators?
Suggestions are welcomed if you have any ideas to inject a little bit of whimsy and randomness into the prompt that could produce something close to what I'm looking for.
Then there are the keywords. I'm hoping to use this activity as a whole to build some kind of spreadsheet or database of research in my collection that has the relevant information, and I thought keywords would be a great way of sorting and searching.
To be clear, by database, I’m not thinking anything elaborate—a Google form to a spreadsheet, maybe an Airtable. If y’all got suggestions, send them along!
The first prompt that I tried went like this: "Include up to 10 keywords for this article that would be beneficial to educational developers." The results were OK, but 10 keywords felt a bit too long. I edited it to "Provide a bulleted list of strong keywords that would be beneficial to educational developers." Yet when I did that, it started to provide some descriptors and a preamble.
Finally, I landed on the following prompt for eliciting keywords:
Provide a bulleted list of 5-8 keywords (no explanations).
The keywords should draw from the article, be relevant and beneficial to help educational developers categorize this article.
The "(no explanations)" prevented any extra text, which I appreciated, and the bulleted list was so that it would be easier to copy. I find that when I copy/paste numbered lists, the formatting gets wonky in other spaces.
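Since the end goal is a spreadsheet or Airtable, that bulleted output can also be parsed into plain values before it ever hits the database. Here's a small Python sketch of one way to do that (the sample keywords and the CSV columns are hypothetical, purely for illustration):

```python
import csv
import io

def parse_keywords(bulleted):
    """Turn a bulleted keyword list from ChatGPT into a plain Python list."""
    keywords = []
    for line in bulleted.splitlines():
        line = line.strip()
        # Accept the common bullet characters ChatGPT tends to use.
        if line.startswith(("- ", "* ", "• ")):
            keywords.append(line[2:].strip())
    return keywords

# Hypothetical sample output from the keyword prompt above.
sample = """- generative AI
- faculty development
- instructional design
- prompt engineering
- research summaries"""

keywords = parse_keywords(sample)

# One row per article; keywords joined so they fit a single spreadsheet cell.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Article Title", "; ".join(keywords)])
csv_row = buffer.getvalue()
```

A row like this could then be appended to a running CSV (or pushed into Airtable via its API) so the keywords stay searchable across everything reviewed.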
Step 5: Pulling It All Together
At this point, when I tried to take all the things that ChatGPT had generated for an article, it turned into about 4-5 pages in a Google Doc. If I did all that work to elicit the material and then still had 4-5 pages to read, it felt like I wasn't necessarily changing up my work in any way.
I needed a little more summarization. I came up with the following prompt to bring it all together:
Compile together the review of this article, the detailed summary, the surprising and unconventional findings or approaches, and the keywords into one comprehensive response that reduces repetition but maintains all the details provided.
The challenge, of course, is that when I do this, the summaries lose a bit of substance and again, feel superficial in a way that might not be as helpful. Which inevitably means more tweaking and testing out.
Still, for now, I took it and then moved things one step further to try to integrate and connect all three articles to see what would happen. For that, I constructed this prompt:
Review the outputs across the articles in this thread and provide the following. What are interesting connections across the articles and new ideas when you combine these findings? These can be in general and particularly for educational developers and educators. How might one apply these findings in direct and specific ways for educational developers and educators?
I mentioned this in the original post, but I chose 3 articles at random, and Rob Nelson encouraged me to be a bit more intentional with my choices. He's right: if I had chosen 3 articles with similar foci within artificial intelligence, this too might have had more interesting results to consider. More to play with, right?
In the course of playing around with this, I created a GPT that I named “Edu Researcher” to also compare the differences. That will be a future post (both creating it and the different responses that I get from it).
Final Considerations
I've got a start but not a finish. I am hoping I can continue to tweak and refine the prompts and the process to see what comes of it and whether it's helpful and provides more accessible insights.
Inevitably, some folks are reading this and thinking this is a lot of work and I might as well read the articles and quit procrastinating. They might not be wrong in the long run, but for right now, playing more with the tools, understanding their limitations and benefits, and sharing those results with folks here is still quite useful for me in my own work. Even if I never get to doing this well with this tool, I've probably still been able to help folks learn more in the process—still a win.
Are you or anyone you know trying to do something like this? What kinds of prompts are you using, and what results are you getting?
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International