Research Insights #6: Student-Focused Studies Part 5
Even more research on generative AI and students!
It’s time to explore another set of studies I’ve stumbled upon that look at students’ use of generative AI. Let’s dig in!
Sun, D., Boudouaia, A., Zhu, C., & Li, Y. (2024). Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. International Journal of Educational Technology in Higher Education, 21(1), 14.
Generative AI summary
Part 1 - Study's Findings and Educational Implications
The study investigated the impact of ChatGPT-facilitated programming (CFP) on college students' programming behaviors, performances, and perceptions compared to self-directed programming (SDP). Key findings include:
Programming Behaviors: Students in the CFP mode exhibited more frequent debugging and error message interactions. They were also more likely to use ChatGPT for copying and pasting code and receiving feedback. In contrast, students in the SDP mode engaged more in reading feedback directly related to their code within their development environment.
Programming Performance: The study found no significant difference in the programming performance between students in CFP and SDP modes, despite the CFP group showing a slightly higher average score. This suggests that while ChatGPT may enhance certain programming behaviors, it does not necessarily translate into improved performance outcomes.
Perceptions and Experiences: Students' perceptions of ChatGPT improved post-intervention, particularly regarding its usefulness and ease of use. Their experiences highlighted ChatGPT's ability to provide contextualized responses, accurate and efficient feedback, and support across various stages of programming. However, concerns about inaccurate code, limited input/output interactions, and technical issues were noted.
Educational Implications: These findings suggest that while ChatGPT can enhance certain aspects of programming education, such as debugging efficiency and access to immediate feedback, it does not replace the need for a solid foundational understanding of programming concepts. Educators should consider integrating ChatGPT as a supplementary tool, focusing on how it can support specific learning objectives and enhance students' problem-solving skills.
Part 2 - Strategies, Methodologies, and Theoretical Perspectives
The study employed a quasi-experimental design, leveraging mixed methods to collect and analyze data, offering a comprehensive view of ChatGPT's impact on programming education. The Technology Acceptance Model (TAM) provided a theoretical framework, helping to understand students' acceptance and use of AI technologies in learning.
Strategies and Methodologies: The study's methodological approach allowed for a nuanced understanding of the impacts of ChatGPT on various aspects of programming learning. The use of video analysis, interviews, and surveys provided a multi-faceted view of students' interactions with ChatGPT, their behavioral changes, and their perceptions.
Theoretical Perspectives: The TAM framework helped elucidate students' acceptance of ChatGPT, showing that perceived usefulness and ease of use significantly influenced their intentions to use the technology. This aligns with existing literature suggesting that students' acceptance of new technologies can enhance learning experiences if they perceive them as beneficial and user-friendly.
Enhancing Value: To maximize the educational value of ChatGPT, educators should focus on integrating it in ways that complement traditional teaching methods, rather than replacing them. For example, ChatGPT can be used to provide additional support and resources, encourage exploration and inquiry, and offer personalized feedback, thereby enriching the learning environment and catering to diverse learning needs.
Part 3 - Gaps, Challenges, and Limitations:
Gap in Longitudinal Data: The study is conducted over a relatively short period, lacking longitudinal data to evaluate the sustained impact of ChatGPT on programming education. Educational developers should consider long-term studies to assess how continuous use of AI tools like ChatGPT influences learning outcomes, student engagement, and skill retention over time.
Challenge in Measuring Performance: While the study compares the performance of students in CFP and SDP modes, it primarily focuses on short-term performance metrics. A challenge here is understanding the depth of conceptual understanding and problem-solving skills acquired, beyond immediate task completion. Educational developers should explore more nuanced assessment methods to gauge the quality of learning, not just performance efficiency.
Limitation in Student Interaction Analysis: The research highlights differences in student behaviors in CFP and SDP modes but does not deeply analyze the quality of student interactions with ChatGPT. Understanding the nature of these interactions can provide insights into how students use AI to support their learning. Developers should delve into the cognitive processes students engage in when interacting with AI tools, which can inform the design of more effective AI-assisted educational interventions.
Generalizability Concern: The study is context-specific, focusing on a particular student demographic and programming language. There's a limitation regarding the generalizability of the findings to other contexts, such as different disciplines, educational levels, or cultural settings. Educational developers should consider contextual factors when applying these insights and design adaptive AI-integrated learning experiences suitable for diverse educational landscapes.
Part 4 - Application to Educational Development:
Incorporating AI into Curriculum Design: The findings suggest that ChatGPT can support programming education by providing immediate feedback and aiding in debugging processes. Educational developers should consider how AI tools can be integrated into the curriculum to support active learning strategies, enhance student engagement, and provide personalized learning experiences.
Developing AI Literacy Among Educators and Students: Given the increasing prevalence of AI in education, there is a need for educational developers to focus on building AI literacy among faculty and students. This includes understanding how AI can be used effectively for learning and being aware of its limitations. Training programs and workshops can be developed to equip educators with the skills to integrate AI tools like ChatGPT into their teaching practices effectively.
Ethical Considerations in AI Utilization: The study touches upon the potential for academic dishonesty with the use of AI in programming tasks. Educational developers must address the ethical implications of AI integration, including issues of plagiarism, data privacy, and the impact on student learning autonomy. Developing guidelines and policies on the ethical use of AI in educational settings can help mitigate these concerns.
Adapting Recommendations for Broader Application: While the study focuses on programming education, the insights gained can be adapted for broader application in various disciplines. Educational developers can explore how AI tools can support different subject areas, considering the unique needs and learning objectives of each field. For instance, AI can be used to provide tailored feedback in writing-intensive courses or to simulate complex scenarios in science education.
In summary, the study provides valuable insights into the potential of AI to enhance programming education, but it also highlights several gaps and challenges that need to be addressed. Educational developers have a crucial role in translating these insights into effective educational strategies, considering the broader implications for curriculum design, faculty development, and ethical AI use in education.
My Commentary
This study feels like it's just saying, "Hey, not much difference between students using ChatGPT to code and those who don't." That can feel both reaffirming ("we don't have to reinvent the whole curriculum") and challenging ("students are going to use it, so we need to be ready to teach both ways well").
Two thoughts come from this study. The first is a point I regularly discuss with folks: domain knowledge really matters when using AI because, without it, you can't assess the results. That the students were doing this in a context where their use was known, and with the goal of creating something that worked, reinforces the need to know one's stuff.
The other thought--and it's not really mentioned here, nor is it entirely clear this was the case--is that something about this article reminds me of the idea of failing faster. Failing faster, for me in the learning sense, is the idea that you can get effective or useful feedback about said failure (a loaded term to be sure, but the idea that you can course-correct faster is appealing). That students could more quickly turn around new results from instant feedback is something this article has me thinking about when it comes to generative AI's ability to help our learning.
Yin, J., Goh, T. T., Yang, B., & Hu, Y. (2024). Using a Chatbot to Provide Formative Feedback: A Longitudinal Study of Intrinsic Motivation, Cognitive Load, and Learning Performance. IEEE Transactions on Learning Technologies.
Generative AI summary
Part 1: Study's Findings and Educational Implications
Main Findings:
Chatbot-based Feedback and Intrinsic Motivation: The study found that chatbot-based feedback significantly increased learning interest, perceived choice, and value while decreasing perceived pressure over time compared to teacher-based feedback.
Impact on Cognitive Load: Chatbot-based feedback was more effective in reducing cognitive load, especially with content involving conceptual or challenging knowledge.
Learning Performance: Chatbot-based feedback was more efficient and effective in supporting the mastery of application-based knowledge compared to teacher-based feedback.
Educational Implications:
Personalized Learning: Chatbot-based feedback offers a more personalized and immediate feedback mechanism, which can be particularly beneficial in large classroom settings where individual attention is challenging.
Student Engagement: The increase in intrinsic motivation factors such as interest, choice, and value suggests that chatbots can significantly enhance student engagement in the learning process.
Cognitive Load Management: The ability of chatbots to reduce cognitive load points to their potential in designing more effective learning experiences, particularly in complex or conceptually challenging subject areas.
Part 2: Strategies, Methodologies, and Theoretical Perspectives
Strategies and Methodologies:
The study utilized a longitudinal quasi-experimental design, employing linear mixed models (LMM) and t-tests for data analysis, which provided robust insights into the temporal effects of chatbot-based feedback.
The chatbot was designed following six key principles, integrating concepts from Self-Determination Theory (SDT), Cognitive Load Theory (CLT), and feedback frameworks to enhance intrinsic motivation and learning efficacy.
Theoretical Perspectives:
Self-Determination Theory (SDT): The chatbot design emphasized autonomy, competence, and relatedness, which are core to SDT, aiming to boost intrinsic motivation.
Cognitive Load Theory (CLT): The design principles aimed at reducing extraneous cognitive load and promoting germane cognitive load, aligning with CLT to optimize learning.
Feedback Frameworks: The chatbot integrated evaluative and informative feedback components, ensuring that feedback was not only corrective but also instructive and supportive of learning.
Enhancing Document Value:
The study's findings could be particularly valuable for educational developers looking to integrate AI-based tools like chatbots into learning environments. By understanding the impact of such tools on intrinsic motivation and cognitive load, educators can better design interventions that enhance learning outcomes.
The methodologies and theoretical underpinnings provide a strong foundation for further research in this area, offering a framework for evaluating and implementing chatbot-based learning interventions.
Part 3: Gaps, Challenges, and Limitations
Gaps:
Lack of Diverse Contextual Application: The study primarily focuses on a specific educational setting and subject matter. There's a gap in understanding how chatbot-based feedback would function across various disciplines or in different educational contexts, such as in asynchronous learning environments or non-academic training settings.
Long-Term Impact: While the study is longitudinal over a semester, the long-term effects of chatbot-based feedback on intrinsic motivation and learning retention beyond the course duration are not explored.
Challenges:
Personalization and Adaptability: The study highlights the use of a chatbot for formative feedback but does not fully address the challenges in scaling this approach for personalized feedback based on individual student's learning pace and style.
Integration with Existing Systems: There is a challenge in integrating chatbot-based feedback systems with existing Learning Management Systems (LMS) and ensuring they complement rather than complicate the learning ecosystem.
Limitations:
Generalizability: The findings are based on a specific cohort and subject area, which may limit the generalizability of the results to other populations or educational settings.
Interaction Depth: The study doesn't deeply explore the nature and quality of interactions between students and the chatbot, particularly in terms of how these interactions compare with human feedback in depth and nuance.
Considerations for Educational Developers:
Developers should be cautious about applying these findings universally and consider piloting similar interventions in varied contexts to understand their broader applicability.
There's a need to explore adaptive algorithms that allow chatbots to provide more personalized feedback and assess how these technologies can be integrated seamlessly into existing educational frameworks.
Understanding the limitations of chatbot interactions can guide developers in designing chatbots that complement rather than replace human interaction, ensuring that the technology enhances rather than detracts from the learning experience.
Part 4: Application to Educational Development
Relevance to Educational Developers and Faculty:
The findings underscore the potential of chatbots to enhance intrinsic motivation and reduce cognitive load, which is crucial for designing effective learning environments. Educational developers can use these insights to advocate for and design chatbot-based feedback systems in their institutions.
The study's emphasis on principles derived from SDT and CLT provides a theoretical foundation that developers can use to guide the design and implementation of educational technologies.
Adapting and Applying Recommendations:
Feedback Design: Developers can use the study's insights to design chatbot feedback that is not only corrective but also supportive and instructive, following the outlined design principles to enhance learning outcomes.
Faculty Training: The study's results can inform training programs for faculty, helping them understand how to effectively integrate chatbot-based feedback in their teaching and how to complement these systems with human interaction to maximize learning benefits.
Curriculum Development: The findings can influence curriculum development, especially in designing activities and assessments that align with the capabilities of chatbot-based feedback systems, ensuring that these technologies are effectively utilized to support learning objectives.
Policy and Strategy: At a strategic level, educational developers can use the study's findings to inform policy development around technology integration in education, advocating for resources and support to implement chatbot-based feedback systems.
By understanding the specific benefits and limitations of chatbot-based feedback, educational developers can effectively integrate these tools into educational settings, enhancing learning experiences while being mindful of the technology's constraints and the need for human oversight.
My Commentary
This study seems to lean into what I was saying above: quicker feedback can be helpful in learning more quickly. I'm not praising learning faster for the sake of productivity, but because that learning cycle feels more natural and more likely to keep one's interest or enthusiasm to learn. As children, we're learning constantly through rapid feedback loops. As we grow, we're just as much unlearning as learning, and that requires more effort, so the learning cycle needs to be just as quick. I think finding more use cases for generative AI in specific disciplinary contexts, and determining which kinds of feedback are trustworthy enough to use with students, would go far in this continual quest to figure out what makes sense for deploying these tools.
Schuetzler, R., Giboney, J., Wells, T., Richardson, B., Meservy, T., Sutton, C., ... & Hughes, A. (2024). Student Interaction with Generative AI: An Exploration of an Emergent Information-Search Process.
Generative AI summary
Based on the article "Student Interaction with Generative AI: An Exploration of an Emergent Information-Search Process," here is a comprehensive analysis of the study's findings and their educational implications, as well as a discussion of the methodologies, strategies, and theoretical perspectives employed in the research:
Part 1 - Study's Findings and Educational Implications
Main Findings: The study explores how generative AI, specifically ChatGPT, has transformed the information search process for students. Key findings include:
ChatGPT supports diverse information needs, facilitating an iterative process of prompt adjustments.
It promotes the easy adoption of results due to its conversational nature and contextualized responses.
Students are willing to overlook or work around limitations like outdated data and inaccuracies due to the benefits ChatGPT offers.
Educational Implications:
Pedagogical Strategies: The study suggests a paradigm shift in how students gather and process information, impacting pedagogical strategies. Educators need to adapt teaching methods to integrate these new information-search behaviors.
Information Literacy: As ChatGPT alters traditional search processes, there's a growing need to teach students effective prompt crafting and critical evaluation of AI-generated content, enhancing their information and digital literacy skills.
Ethical Considerations: The study raises concerns about students' uncritical acceptance of AI-generated answers. Educators must emphasize the importance of verifying information and understanding the limitations of AI tools.
Part 2 - Strategies, Methodologies, and Theoretical Perspectives
Methodologies: The researchers used a cross-sectional survey design to collect data from 455 students, analyzing prompts submitted to ChatGPT and student feedback. Thematic analysis was employed to identify patterns in how students use ChatGPT for information searching.
Strategies and Theoretical Perspectives:
Prompt Engineering: The study highlights the importance of prompt engineering—crafting precise prompts to elicit desired responses from ChatGPT. This skill is crucial for effective information retrieval and generation in educational contexts.
Cycling Process: The research introduces a "cycling" concept, where students iteratively refine their queries based on ChatGPT's responses. This iterative process is a strategic adaptation to the conversational nature of AI, enhancing the information search process.
New Model of Information Search: The paper proposes a new model of information search specific to generative AI, emphasizing stages like informational need identification, prompt generation, response creation and evaluation, and adoption.
Application in Higher Education: The findings and strategies outlined in the study can be applied in higher education to:
Train students in effective prompt crafting and critical evaluation of AI-generated content.
Develop curriculum materials and assessment strategies that reflect the new information search behaviors facilitated by AI.
Foster a deeper understanding among educators and students of the strengths and limitations of AI in educational contexts.
Part 3 - Gaps, Challenges, and Limitations
Gaps and Limitations:
Sample Diversity and Generalizability: The study's data were collected from a single university with a majority of participants being business majors. This limitation could affect the generalizability of the findings across different disciplines and educational contexts. Educational developers should consider the diversity of student populations and academic disciplines when applying these findings.
Depth of Interaction Analysis: While the study provides insights into how students use ChatGPT, it may lack a deeper analysis of the cognitive and emotional processes involved in these interactions. Understanding these aspects could offer more nuanced strategies for integrating AI tools in education.
Long-Term Educational Impact: The document does not address the long-term effects of using generative AI on students' learning outcomes, critical thinking skills, and information literacy. Educational developers need to consider these longitudinal impacts when integrating AI into the curriculum.
Challenges:
Ethical Considerations: The study highlights students' willingness to accept AI-generated responses without critical evaluation. This raises ethical concerns about the reliance on AI for academic purposes and the potential for propagating misinformation.
Skill Acquisition: The necessity for prompt engineering and the iterative "cycling" process indicates a learning curve for effectively using ChatGPT. Educational developers face the challenge of creating training programs that equip students with these new skills.
Integration with Existing Pedagogies: Aligning AI tool usage with established pedagogical frameworks presents a challenge. Educators must thoughtfully integrate these tools to enhance learning without undermining critical thinking and problem-solving skills.
Part 4 - Application to Educational Development
Relevance to Educational Developers and Faculty:
Training in Prompt Engineering: The study underscores the importance of prompt engineering in eliciting useful responses from ChatGPT. Educational developers should create workshops or modules to teach students and faculty how to craft effective prompts, integrating these skills into information literacy curricula.
Developing Critical Evaluation Skills: Given the challenges in verifying the accuracy of AI-generated content, there's a need for developing programs that enhance students' abilities to critically assess information, discerning between credible and inaccurate sources.
Adapting Pedagogical Strategies: The iterative "cycling" process identified in the study can inform pedagogical approaches, encouraging an exploratory and reflective learning process. Educators can incorporate similar iterative cycles in problem-solving and research assignments to foster deeper engagement with content.
Adaptation and Application:
Curriculum Integration: Educational developers can guide faculty in integrating ChatGPT into their teaching, ensuring it's used as a complement to traditional learning resources rather than a replacement, thereby enhancing research, writing, and coding assignments.
Assessment Design: In light of the study's findings, educational developers can assist in designing assessments that evaluate not just the final product but also the process students engage in when interacting with AI tools, such as their prompt engineering and evaluation strategies.
Professional Development: For faculty, professional development sessions can be organized to explore the use of ChatGPT in teaching and learning, focusing on its benefits and limitations. This can help faculty to more effectively integrate this tool into their pedagogical practices, ensuring they are equipped to guide students in its use.
In summary, the document provides a foundation for educational developers to consider the nuanced implications of integrating generative AI into educational settings. By addressing the identified gaps and challenges and applying the study's insights, educational developers can facilitate a more informed and strategic incorporation of AI tools in teaching and learning processes.
My Commentary
I can hear the hackles rising among many readers--generative AI, with all its false and limited information, being used as part of the "information search process" taps into the same anxiety that many folks expressed about Wikipedia in the 2000s (and early 2010s). I do think the "cycling" concept is interesting because it's a natural scaffolding for revising and refining queries when searching for information (with or without generative AI--"Am I asking the right question?"), but just as importantly, it is a living example of research experiments. "I'm examining X through question Y; are the results useful? No? OK, maybe there's another question to be asking." It's this idea that excites me--the rapid questioning and feedback, good or bad, are experiments that allow students to keep iterating and changing their working hypotheses of whatever it is they're trying to do, and that's kinda cool.
Addy, T., Kang, T., Laquintano, T., & Dietrich, V. (2023). Who Benefits and Who is Excluded?: Transformative Learning, Equity, and Generative Artificial Intelligence. Journal of Transformative Learning, 10(2), 92-103.
Generative AI summary
Based on the detailed review of "Who Benefits and Who is Excluded? Transformative Learning Equity and Generative Artificial Intelligence" by Tracie Addy and colleagues at Lafayette College, here's an in-depth analysis:
Part 1 - Study's Findings and Educational Implications:
The primary finding of the study is the nuanced exploration of how generative artificial intelligence (AI) technologies, particularly in the context of higher education, can facilitate transformative learning. The article highlights how these technologies can support diverse student populations, including multi-language learners, students from marginalized linguistic communities, those with disabilities, and low-income students. However, it also addresses the significant limitations of generative AI, such as accessibility issues and the potential for reinforcing societal biases.
For the field of higher education, the implications are profound. The study suggests that generative AI can democratize access to educational resources and personalized learning experiences, thus potentially narrowing the achievement gaps among diverse student populations. For instance, the use of AI in language learning can provide multi-language learners with tailored support, enhancing their engagement and learning outcomes. Yet, the equity concern is paramount; if access to these advanced tools is uneven, it might exacerbate existing educational disparities.
Part 2 - Strategies, Methodologies, and Theoretical Perspectives:
The methodologies discussed, primarily through the lens of Mezirow's Theory of Transformative Learning, focus on critical reflection and the reevaluation of one's assumptions. This approach is vital in understanding and leveraging the benefits of generative AI in education. The study employs a survey methodology to gather students' perspectives, providing a grounded understanding of AI's impacts on their learning experiences.
Strategically, the document calls for an equity-minded approach in integrating AI into educational settings. It emphasizes the need to ensure that these technologies are accessible to all students, particularly those from historically marginalized groups. The discussion around the potential for AI to support neurodivergent students and those with disabilities is particularly noteworthy, illustrating how AI can be tailored to meet diverse learning needs and preferences.
Theoretically, the paper is anchored in the transformative learning theory, which is apt for understanding the potential shifts in perspectives and practices AI can induce in educational contexts. This theory is effectively used to frame the potential for AI to transform not just individual learning experiences but also broader educational practices and policies.
In applying these insights, educational developers should consider the following:
Engage in critical discussions about the equitable deployment of AI technologies in education, ensuring that these discussions are inclusive of diverse student voices.
Develop AI literacy among educators and students to foster a more nuanced understanding of the technology's potential and limitations.
Advocate for policies and practices that ensure equitable access to AI tools, with a particular focus on supporting students who may benefit the most from these technologies, such as those with disabilities or those from low-income backgrounds.
Part 3 - Gaps, Challenges, and Limitations:
The document by Addy et al. provides an in-depth exploration of the potential of generative AI in transformative learning within higher education. However, there are several gaps, challenges, and limitations that educational developers need to consider:
Equity and Accessibility: While the document emphasizes the potential benefits of AI for diverse learning groups, it also highlights the significant issue of access. There is a clear gap in how generative AI tools are accessible to students from different socioeconomic backgrounds, which could exacerbate existing educational inequalities. The document mentions the digital divide and the cost associated with accessing more sophisticated AI tools, pointing out that this could limit the transformative potential of AI in education.
Bias and Ethical Considerations: The document discusses the inherent biases within AI tools, which are reflective of the data used to train these models. This presents a challenge in ensuring that AI tools support equitable learning without perpetuating existing societal biases. There is a need for further exploration and mitigation strategies to address these biases, ensuring that AI tools are inclusive and equitable.
Methodological Limitations: The study primarily uses surveys to gather student perspectives, which may not capture the full range of experiences and interactions with AI. There might be nuances in how different student demographics perceive and utilize AI tools that a survey alone cannot capture.
Long-Term Impacts: The document does not extensively explore the long-term impacts of integrating AI in education. While it provides a snapshot of current perceptions and uses, the longitudinal effects on learning outcomes, skill development, and the labor market remain underexplored.
For educational developers, understanding these gaps and challenges is crucial in critically assessing the integration of AI in educational settings. They need to advocate for and implement strategies that address accessibility, mitigate bias, and continuously evaluate the long-term impacts of AI on education.
Part 4 - Application to Educational Development:
The insights from the Addy et al. document are highly relevant to educational developers and faculty, offering a roadmap for integrating AI in a manner that supports transformative learning:
Enhancing Accessibility: Educational developers should work towards making AI tools accessible to all students, regardless of their socioeconomic background. This could involve advocating for institutional subscriptions to AI tools, ensuring that all students have equal access to these technologies.
Addressing Bias: Developers and faculty need to be aware of the biases inherent in AI tools and work towards mitigating these biases in educational settings. This includes training faculty on the limitations of AI, encouraging critical engagement with AI outputs, and selecting AI tools that are transparent about their training data and methodologies.
Professional Development: There's a significant opportunity for educational developers to create professional development programs that equip faculty with the knowledge and skills to integrate AI tools effectively in their teaching. This includes understanding the pedagogical potential of AI, addressing ethical considerations, and leveraging AI to enhance student learning.
Curriculum Integration: The findings can inform the development of curricula that integrate AI in a way that supports critical thinking, problem-solving, and personalized learning. Educational developers can guide faculty in designing assignments and learning activities that leverage AI to enhance learning outcomes while also fostering an understanding of AI's limitations.
Continuous Evaluation: Given the rapid evolution of AI technologies, educational developers should establish mechanisms for the continuous evaluation of AI tools in educational settings. This includes assessing the impact of AI on learning outcomes, student engagement, and equity.
In conclusion, while the document provides valuable insights into the potential of AI in education, it is essential for educational developers to approach this integration with a critical lens, addressing the challenges and limitations while leveraging the opportunities to enhance teaching and learning in higher education.
My Commentary
I think this continues to be the space where I and many others who are "in the middle" of the extremes around generative AI and education sit. We understand and think a lot about the limitations, problems, and concerns that generative AI represents, from bias issues (the inputs, the companies, the outputs, the uses) to access concerns to environmental impact to who gets to use it versus who it gets used on. We see lots of that, and still we see ways that this can also help. Not transformational, and not the promise that this will raise all boats, but the ways this can help feel just as important to highlight. When I think about how a tool like this can expose more of the hidden curriculum of the world, it feels important to recognize those possibilities for the many people who are pushed to the margins. Thus, I appreciate an article that digs into this and helps us understand what students are seeing and experiencing in more complicated ways than some of the general narratives around students seem to represent.
Meta-commentary
I was at a conference and on a panel this week--lots of conversations about generative AI. Part of what you're seeing in my responses to these articles is informed by the conversations I've been having and the work I've been seeing folks do.
As I continue this series on student-focused research, I'm appreciative of the mixture of work being done and am starting to think about what it adds up to. Both in these articles and in some of the presentations and conversations, I'm seeing more attempts to work with these tools alongside students in different and distinct ways. It's making me wonder where we hold or put all of these findings so that others can benefit. What would a repository of AI research for educational application look like that could pull together the different possibilities and challenges discovered in these findings to help us figure out what we do (or don't do) next?
It can often feel like there's much being done and experimented with in classrooms around the world but little being captured to help our understanding. These studies give me hope that we're starting to find clearer pathways toward figuring out what works and where there are real challenges still to be worked out.
AI+Edu=Simplified by Lance Eaton is licensed under Attribution-ShareAlike 4.0 International
Your observation that there is much happening in classrooms with generative AI and much less to help us develop understanding seems right to me. My current framework for how this gets better is sharing knowledge within disciplines, which I think is happening, and your Research Insights series is one window I have into this. The second article is a good example, and it's not surprising that it comes from electrical engineering, given that their comfort with and knowledge of the tools gave them a head start. But this is happening in other disciplines as well. I spoke to colleagues in Germanic Languages and Critical Writing about work they're doing in the classroom this term. I expect it to be shared in meetings on campus, but next year at discipline-based conferences and in journals.
Thanks for this series, Lance! I've just read through the whole series and come away with 4 or 5 articles I want to dive deeper into.