Intentional Pedagogy with AI Technology

This Sheridan Center newsletter originally appeared in January 2023, and we are sending an updated version for the start of AY24-25.

Chances are that you have encountered conversations about artificial intelligence (AI) in the classroom. Educator responses seem to range from excitement to exasperation to exhaustion, and, as one might expect, there is a similarly wide range of integration at Brown and elsewhere. Some describe this as making a decision about “No AI, OK AI, [or] Go AI” (Report of the Yale Task Force on Artificial Intelligence, 2024), while Perkins et al. (2024) describe instructor choice on a five-point scale:
The good news embedded in all of these choices is that good teaching practices (such as those that support critical thinking, knowledge acquisition, and skill proficiency) already position us to productively engage with AI. Additionally, Brown’s students are well-positioned for this conversation: they have chosen an Open Curriculum in which they are intentional participants in their intellectual and personal growth.

Professor Steven Lubar (American Studies, History, and History of Art and Architecture) illustrates the process of developing an intentional pedagogy for the use of AI:

"In my Methods in Public Humanities (PHUM2020) course, I am, with some trepidation, encouraging students to use ChatGPT and similar tools. These new tools will be useful to them in their work after Brown, and we should help them learn to use them wisely."

Because of quickly changing norms and the range of teaching practices around AI, there are real and practical considerations to take into account as we move forward. This resource addresses three areas we encourage you to consider as you think through your approach with students this term.

Designing writing assignments

Turnitin’s DraftCoach can also be used as a learning tool to help students develop awareness of plagiarism norms and skills in areas like citation and paraphrasing. It can also open up space for conversations about disciplinary norms in citations and literature reviews. For graduate writers, Georgia Tech’s “Effective and Responsible Use of AI in Research” offers helpful guidance, framed by questions that treat AI as a generative and brainstorming tool, such as “How can students use AI effectively as a tool to help generate research ideas and approaches?” However, it also addresses cautions around data sharing, asking, “Is a student giving away valuable ideas or research results to an open platform (like ChatGPT) before the topic is peer reviewed and published?
Will you lose your intellectual property rights, such as patents?” Students should also note that thoughts and perspectives on the use of generative AI can vary significantly across disciplines; they should consult with their advisor(s) and Director of Graduate Studies to discuss discipline-specific expectations.

Academic integrity

A student’s name on any exercise (e.g., a theme, report, notebook, performance, computer program, course paper, quiz, or examination) is regarded as assurance that the exercise is the result of the student’s own thoughts and study, stated in his or her own words, and produced without assistance, except as quotation marks, references, and footnotes acknowledge the use of printed sources or other outside help.

Within this framing, consider how you might update your classroom guidelines or develop a new AI policy for students that details how they should, might, or cannot engage with it. While we certainly hope all students have read the academic code, they may not understand how the use of generative AI might complicate the assurance of one’s own work and/or require disclosure as outside help.

For example, Professor Monica Linden (Neuroscience) created a new syllabus statement about the ethical and effective use of ChatGPT in her course, NEUR 1930L: Neurobiology of Love. (See Prof. Linden's full original statement in this blog post.) Recently, she updated the statement to add:

"I think we will have the opportunity to creatively use ChatGPT and other generative AI products in this class. However, it is important that we use them responsibly. You should not be using ChatGPT to do the assignments for you. Every assignment is intentionally developing skills aligned with the course learning objectives. If you use ChatGPT instead of doing the work yourself, you are cheating yourself out of this opportunity to learn. If you aren’t here to learn, this class isn’t for you."

"(Also, this class is about love and attachment.
Are you really going to let an AI-bot try to understand that for you??)"

-Syllabus for NEUR 1930L (Neurobiology of Love), taught by Prof. Monica Linden

Professor Tara Nummedal’s (History and Center for Digital Scholarship) approach also places bounds on the use of AI:

"We are all learning to understand how, when, and whether to use generative artificial intelligence (AI) in our teaching, learning, and research. I recommend that you read this brief introduction from the Library to better understand how tools like ChatGPT work (and don't work), especially the page about the difference between chatbots and search engines. In particular, please be aware of the phenomenon of "hallucinations," that is, AI's surprisingly common habit of generating plausible but false information. I have tried out ChatGPT with a range of queries related to this class and concluded that our course material is too niche to make AI an effective research tool. Interestingly, it does tend to surface (that is, produce!) some common misperceptions about natural knowledge in early modern Europe, as well as some of the deep master narratives that this course is designed to challenge. This is fascinating in its own right, especially since knowledge-making is a core theme of our work.

In this class, therefore, we occasionally may experiment with ChatGPT together. However, because of my reservations, and because learning how "to craft, develop, articulate, and sustain an argument in a written paper, supporting claims with either original research or examples from lectures and assigned readings" is one of our learning outcomes, students will not be permitted to use ChatGPT or similar tools in the graded writing assignments (including the midterm) submitted for this class. AI-generated submissions, even if properly cited, will be treated as plagiarism and a violation of the Academic Code.
If you want to propose a project incorporating AI, however, please come talk with me about permission to obtain an exception to this rule."

In contrast, a model syllabus statement for Brown’s Master in Technology Leadership Program allows instructors class-level discretion on the use of AI. However, if the tools are used, they should be cited through the following conventions:

1. Clearly identify the use of AI-based tools in the work. Any work that utilizes AI-based tools must be clearly marked as such, including the specific tool(s) used. For example, if you use ChatGPT-3, you must cite "ChatGPT-3. (YYYY, Month DD of query). "Text of your query." Generated using OpenAI. https://chat.openai.com/"

Additional options for citation may include asking students to attach a log of prompts used with the AI tool. Bowen & Watson (2024) offer a number of ways to request AI acknowledgements, such as, “I used AI to do an outline/first draft which I then edited. Describe the nature of your contribution:” or “I used AI/friends/tutor to help me generate ideas. Describe that process:”

Other examples of syllabus statements can be found on the Sheridan Center's Creating a Brown University Syllabus page (see "Syllabus Statements Addressing Emergent Issues"). Even though it is past the first day of class, you can still publish an official update of your syllabus with revised language to reflect these changes, and ongoing classroom discussions around specific assignments are also useful. A Sheridan Center newsletter on Inclusive Practices for Addressing Academic Integrity offers additional ideas. Please reach out to Sheridan_Center@brown.edu if we can facilitate the development of this statement.

Ethics of use and cost

While generative AI offers new and exciting possibilities across campus and disciplines, instructor attitudes toward integration into student work vary considerably, even within shared departments or disciplines.
Clear, articulated guidelines will support your students’ learning most effectively.

Other resources to support Brown instructors and students include:
References

Georgia Tech Office of Graduate and Postdoctoral Education. (2024, July 10). Effective and responsible use of AI in research: Guidance for performing graduate research and in writing dissertations, theses, and manuscripts for publications. https://grad.gatech.edu/ai-research-and-teaching

Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6).

Playfoot, D., Quigley, M., & Thomas, A. G. (2024). Hey ChatGPT, give me a title for a paper about degree apathy and student use of AI for assignment writing. The Internet and Higher Education, 62. https://doi.org/10.1016/j.iheduc.2024.100950

Report of the Yale Task Force on Artificial Intelligence. (2024, June 18). https://provost.yale.edu/news/report-yale-task-force-artificial-intelligence

This resource was originally authored in January 2023 by Dr. Jenna Morton-Aiken, Senior Associate Director for Writing and English Language Support and Lecturer, Department of English. The resource was updated in September 2024 by Mary Wright. Thank you to Kristi Kaeppel, Anne Kerkian, and Christine Baumgarthuber for feedback on drafts and discussions of the topic.