Thoughts On Academic Integrity and AI Technology
TELC aims to share relevant information and useful strategies with UNIC lecturers.
This article draws on the AI policy of York University (Canada).
Information and Strategies for UNIC Lecturers
This article is included in this newsletter to help UNIC lecturers understand the capabilities and limitations of generative artificial intelligence (AI) technology, and to provide strategies for discouraging students from using it to engage in academic misconduct. The sections that follow describe these tools, with a focus on ChatGPT. As AI technology is constantly evolving, TELC aims to update lecturers as new information becomes available.
The Capabilities of Artificial Intelligence (AI) Technology
The media and educational institutions have recently focused on generative artificial intelligence (AI) tools. These tools can mimic, and in some cases outperform, human abilities to research, write, solve problems, create art and video, and even “learn” and evolve. As a result, there is concern that generative AI tools/apps (such as ChatGPT, GPT-3, and DALL-E, among others) could assist students in engaging in academic misconduct. Concerns about text-generating tools, in particular ChatGPT, have risen to the fore because of their ability to generate content (e.g., essays, programming code, maths problem solutions) that students could then submit for credit without their lecturer’s knowledge or approval.
Text-Generating Software (e.g. ChatGPT)
Text-generation systems such as ChatGPT operate on demand: when presented with an enquiry or prompt from a user, they draw on vast bodies of training data to produce fluent, human-like replies within seconds. Although the generated content is not always correct and may lack depth, this sort of technology can undermine the institution’s academic integrity efforts.
Recognizing the Limits
OpenAI introduced ChatGPT (GPT stands for Generative Pre-trained Transformer) on November 30, 2022. It is a large language model, trained on a vast quantity of text, that predicts the most likely next word in a sequence. ChatGPT can perform a wide range of tasks (e.g., answering questions, generating content, summarizing text, and translating material) across a wide range of topics. Its replies are frequently clear and impressive, which can create the mistaken impression that ChatGPT is more capable than it is. The current version of ChatGPT (3.5) has the following limitations:
- Its knowledge is fixed at a training cutoff of September 2021; ChatGPT is not connected to the Internet and therefore cannot look up more recent material;
- It sometimes produces inaccurate or illogical information, and it presents that information in an authoritative tone, which may mislead users;
- It is often unreliable at referencing sources and can generate incorrect or fabricated citations;
- Its output may be grammatically correct and easy to read, but it is usually shallow and lacking in original thought, and the same information may be repeated throughout a piece;
- It cannot synthesize information from multiple sources;
- It does not demonstrate critical thinking;
- It may reflect biases present in its training data.
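To make the core idea of next-word prediction concrete for students, the toy sketch below builds a tiny bigram model that always picks the most frequent word it has seen following the current one. This is an illustrative simplification only: real large language models such as ChatGPT use neural networks trained on vastly larger corpora, but the underlying task, predicting the next word, is the same.

```python
from collections import Counter, defaultdict

# Tiny training "corpus" (illustrative; real models train on billions of words).
corpus = ("the cat sat on the mat . "
          "the cat sat on the rug . "
          "the dog ate the bone .").split()

# Count which words follow which: a bigram frequency table.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return followers[word].most_common(1)[0][0]

def generate(start, length):
    """Greedily extend `start` one predicted word at a time."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the", 4))  # → "the cat sat on the"
```

The output is fluent-looking but mechanical: the model has no understanding of cats or mats, only statistics about which words tend to follow which, which is one way to frame ChatGPT's limitations for students.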
Introducing AI Technologies to Students
As a lecturer, expect students to be curious about these tools and to need clear guidance on whether they are authorized to use them for academic work in your course. Instructors are therefore urged to engage students in an open conversation about artificial intelligence apps and how they intersect with academic integrity.
Expectations of the Institution
When speaking with students, inform them of the following institutional expectations:
- Any unauthorized use of ChatGPT (or other AI technologies) on exams is considered a violation of academic integrity;
- Remind students of the Policy on Academic Honesty and provide examples of how unauthorized use of this technology can lead to violations such as:
- cheating, if they use an AI tool to gain an unfair advantage on an academic evaluation that their lecturer has not authorized;
- plagiarism, if they submit images created by others, for example through DALL-E or another image-generating tool, without their lecturer’s authorization;
- When giving assignment instructions, be extremely clear about your expectations. To help prevent confusion, communicate these expectations in several ways, such as in Moodle, in course guidelines, and in class;
- Discuss how different lecturers may have different expectations for AI tools; authorization by one lecturer does not mean others will permit their use.
Issues for Student Discussion
You can use the points raised above to further explore the ethical implications of AI technology in your discipline, as well as any issues that may arise. Some leading discussion questions might be:
- What do you know about artificial intelligence apps?
- Have you used them before? If so, how?
- What were your observations?
- What are some ethical difficulties that might come from utilizing these applications in your field/discipline?
- How can you use AI apps to help you study in an ethical manner? (Eaton, 2023)
Ethical Issues and Grey Zones
You can make students aware that numerous unresolved issues still surround AI apps and that there are no clear answers or recommendations as yet (Watkins, 2022). There are, for example, no established conventions for referencing content created by these apps. Beyond the challenges to academic integrity, there is also the possibility that by answering prompts and interacting with AI apps, users are helping the technology to improve and evolve. Moreover, ChatGPT captures a substantial amount of data from its users, which may be shared without the user’s knowledge or permission (Trust, n.d.).
Issues for Student Discussion
Such grey zones can also serve as a springboard for further classroom discussion and activities. For example, Eaton (2023) suggests the following:
- Provide a way for students to disclose their use of AI apps in their academic work. For example, they may begin with the following statement, adapted from OpenAI:
The author generated this [text, image, or video] in part with [insert provider], a language-, image-, code-, or video-generation model. The author verifies that they have reviewed, edited and revised the draft provided and takes responsibility for the content of this [assignment name]. (OpenAI, 2022)
- Collaborate with your students to develop class guidelines or a charter for the use of artificial intelligence apps in your classes. Is it appropriate to use? If so, what are some guidelines for using it?
- Add a discussion thread on artificial intelligence so that students may exchange knowledge, ask questions, post articles, and so on.
- Have students read OpenAI’s privacy policy and terms of use pertaining to ChatGPT and DALL-E. Discuss possible concerns and implications of having their data collected.
- If users are helping the technology to improve and evolve through providing prompts, what are some future implications for work in their field? Will improved AI technology be a source of help or a hindrance?
Using AI Technologies as a Learning Aid
Generative AI technology has created a need to rethink assessment techniques, while also providing some new opportunities for in-class learning.
Strategies for the Near Term
When it comes to assessment, some tactics that may be used in the short term include:
- replacing written assessments submitted through Moodle with in-class alternatives such as presentations, reflective writing assignments, in-class examinations, or short hand-written essays;
- requiring students to submit rough notes along with the final version of their work;
- While implementing the measures outlined above, keep in mind the accessibility and equity issues that may arise when changing assessment modes.
Long-Term Planning
Assessments may be modified over time so that students are not submitting work that AI software can readily produce. Some suggestions for redesigning assessments are provided below.
Consider broadening or replacing written assessments in the following ways:
- concentrate on the process of writing rather than the finished product;
- prioritize evidence of original ideas and critical thinking, higher-order abilities that AI technologies have so far proven poor at demonstrating;
- ask students to use current sources (post-September 2021);
- ask students to apply personal experience or expertise to academic subjects;
- or replace a written assessment with a multimodal one.
You may also wish to revise your grading standards or rubrics to emphasize deeper discipline-specific abilities, such as argumentation, use of evidence, or interpretative analysis, rather than writing mechanics and essay structure. This can help you reweight your evaluations toward student learning and away from tasks that AI systems can readily perform.
Many lecturers are now experimenting with incorporating generative AI technology into their assessment design. Bearing in mind the limits of AI, if you wish to incorporate such tools into assessments, students can use technology such as ChatGPT to exercise higher-order abilities in the following ways:
- generate a ChatGPT response to a particular question, and then write an analysis of the strengths and weaknesses of the ChatGPT response;
- fact-check the responses that ChatGPT provides to identify incorrect information;
- generate a paper from ChatGPT and evaluate its logic, consistency, accuracy and bias;
- use ChatGPT to create an outline that students can then use to develop an essay. (D’Agostino, 2023; Montclair State University, n.d.; Trust, n.d.)
Classroom Activities
In addition to formal assessments, generative AI tools may be used in ungraded or low-stakes learning activities in the classroom. Using this technology in lectures or discussions can help students learn how and when to use AI technology effectively, ethically, and in ways that align with your discipline’s norms and standards. Consider the following learning activities:
- using AI-generated text as a starting point for a class discussion on a given topic. What exactly does it get right? What is lacking? What changes would be required to bring it up to the academic standards of your field?
- having small teams of students experiment in using AI to create text about a given subject, and then comparing the results (what grade would they assign its response using a course rubric?) and/or the process (what prompts and tweaks were needed to generate the text?);
- holding a class debate against generative AI: use the tool to generate counterarguments that help students explore other perspectives and strengthen their own arguments;
- asking your students! Gather anonymous feedback about whether they are using the tool, what value it provides them, and how they think it should be used in your disciplinary or teaching context.
Detection and Future Considerations
If you use a detector to check student work for AI-generated content, be transparent with students about its use, and become informed about the limitations and privacy concerns of these tools. Most detectors require you to copy and paste the content you suspect was generated by AI apps; some, like OpenAI’s AI Text Classifier, require users to register. The reliability of detectors has also been questioned: an article in the Guardian, for instance, reports that OpenAI’s Text Classifier correctly identified only 26% of AI-generated English content.
Although current generative AI technology has limitations, continuous training and the release of new versions mean that many of these limitations are likely to be addressed. When it comes to safeguarding academic integrity, OpenAI is working on a watermark that would identify when text has been produced by AI apps, and Turnitin is also set to release an updated version of its software to detect AI-generated content.
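To give a sense of how a statistical text watermark could work in principle, the sketch below is a simplified toy based on published “green list” proposals; it is not OpenAI’s actual method. The idea: a watermarking generator prefers words from a pseudorandom “green list” seeded by the previous word, and a detector then measures what fraction of adjacent word pairs are green.

```python
import hashlib

def is_green(prev_word, word):
    """Pseudorandomly classify roughly half of all (prev, next) pairs as green.

    Both the generator and the detector can compute this function, so no
    information about the original prompt needs to be stored anywhere.
    """
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    """Detector side: fraction of adjacent word pairs that are green."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)
```

For a long passage of ordinary human text, the green fraction hovers around 0.5; a watermarking generator that systematically favors green words would push it toward 1.0, which is the detector’s signal. The sketch also hints at why watermarks are fragile: paraphrasing or reordering words changes the pairs and erodes the signal.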
Further Reading: Revising Assessments
- Bailey, J. (2022, December 7). Why Teachers Are Worried About AI
- D’Agostino, S. (2023, January 31). Designing Assignments in the ChatGPT Era
- Eaton, S. and Anselmo, L. (2023, January). Teaching and Learning with Artificial Intelligence Apps
- McClennen, N. & Poth, R. Education is about to radically change: AI for the masses
- McKnight, L. (2022, October 14). Eight ways to engage with AI writers in higher education
- McVey, C. (2022, December 05). POV: Artificial Intelligence Is Changing Writing at the University. Let’s Embrace It
- Mollick, E.R. & Mollick, L. (2022). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments
- Prochaska, E. (2023, January 23). Embrace the Bot: Designing Writing Assignments in the Face of AI
- Schulten, K. (2023, January 23). Lesson Plan: Teaching and Learning in the Era of ChatGPT
- Webb, M. (2022, August 04). What’s next for AI in higher education?
General Information about AI Technology and Education
- Alby, C. (n.d.). ChatGPT: Understanding the new landscape and short-term solutions
- Alby, C. (2023, January 7). Can ChatGPT be a blessing?
- Ditch That Textbook (2022, December 17). ChatGPT, Chatbots and Artificial Intelligence in Education
- Kovanovic, K. (2022, December 15). The dawn of AI has come, and its implications for education couldn’t be more significant
- Morrison, R. (2022, November 16). How to Identify AI Generated Text
- Schulten, K. (2023, January 24). How Should Schools Respond to ChatGPT? (Student Opinion)
References
- D’Agostino, S. (2023, January 12). ChatGPT advice academics can use now. https://www.insidehighered.com/news/2023/01/12/academic-experts-offer-advice-chatgpt
- Eaton, S. (2023). Teaching and learning with artificial intelligence apps. University of Calgary. https://taylorinstitute.ucalgary.ca/teaching-with-AI-apps
- Montclair State University (n.d.). Practical responses to ChatGPT. https://www.montclair.edu/faculty-excellence/practical-responses-to-chat-gpt/
- OpenAI (2022). Sharing and publication policy. https://openai.com/api/policies/sharing-publication/#content-co-authored-with-the-openai-api-policy
- Trust, T. (n.d.). ChatGPT and education. https://docs.google.com/presentation/d/1Vo9w4ftPx-rizdWyaYoB-pQ3DzK1n325OgDgXsnt0X0/mobilepresent?slide=id.p
- Watkins, R. (2022). Updating your course syllabus for ChatGPT. https://medium.com/@rwatkins_7167/updating-your-course-syllabus-for-chatgpt-965f4b57b003
- Wiggins, K. (2022). OpenAI’s attempts to watermark AI text hit limits. TechCrunch+. https://techcrunch.com/2022/12/10/openais-attempts-to-watermark-ai-text-hit-limits/