What are Predictive Language Models?
Predictive language models such as ChatGPT use large statistical models to generate natural-sounding text. ChatGPT, developed by OpenAI, is powerful enough to generate text-based responses such as letters, recipes, essays, and songs. Essentially, it does a very good job of predicting what a human would write next; however, it does not understand the content it generates and cannot determine whether the information is misleading (Weidinger et al., 2021).
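To make "predicting what a human would write next" concrete, the short Python sketch below builds a toy next-word predictor from word-pair counts. This is only an illustration of the statistical principle, not how ChatGPT actually works (ChatGPT uses large neural networks trained on enormous collections of text); the training sentence and function names are invented for the example.

```python
# Illustrative sketch only: a toy "predict the next word" model built from
# word-pair counts. Real systems like ChatGPT use neural networks trained on
# vast text corpora, but the core idea is the same: choose a likely next word
# given the words so far.
from collections import Counter, defaultdict
import random

# Tiny training text (invented for illustration).
training_text = (
    "the committee reviewed the report and the committee approved the report"
)

# Count which words tend to follow each word in the training text.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def generate(start_word, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    output = [start_word]
    for _ in range(length):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break
        # Sample in proportion to how often each word followed the last one.
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("the"))
# Possible output (varies): "the committee approved the report and the committee reviewed"
```

Notice that the toy model strings together plausible word sequences with no notion of whether they are true; the same limitation underlies why much larger models can produce fluent but misleading text.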
What can it do?
Predictive language models can write original text based on user prompts. The text is grammatically correct, and its paragraphs are well structured.
Examples include:
- Generating ideas
- Writing essays
- Creating recipes
- Writing creative works like poems and songs
- Producing cover letters and thank you letters
- Writing blocks of computer code
What can’t it do?
Given that predictive language models are statistical, they do not have the capacity to understand the text they generate.
Limitations include:
- Ensuring accuracy in its responses. It may provide text that seems real, but is entirely or partially fabricated. This may be difficult to detect, especially for non-experts.
- Distinguishing between factually correct and incorrect information (e.g., legal and medical advice); fundamentally, it cannot understand what it is producing
- Understanding emotion or empathy in the same way as a human
- Reasoning through a series of novel logical steps
What is the future of language models like this?
There will be other competitors in the predictive language model market. Google, for example, is expected to release its own version, called Bard, which will likely be integrated into Google Search. Microsoft is investing billions in OpenAI over the next several years; it is already testing ChatGPT integration within Teams Premium and expects to integrate it into other Microsoft platforms (such as Word, Excel, PowerPoint, and Bing search) by the end of the year. In short, this technology is expected to be everywhere in the future.
Academic Integrity
While detection tools are emerging, their accuracy is low, making detection difficult if not impossible. Given the low quality of detection and the high risk of inaccuracies, instructors are discouraged from taking a detection approach. Instead, instructors are encouraged to rethink their assessments to include higher-order thinking skills, consider alternative assessments (e.g., presentations, podcasts, videos), and discuss the importance of learning through writing in the classroom. Instructors should also remind students that using these tools to create written material in courses where they have not been permitted is a departure from academic integrity.
To help you have these conversations with your students, we have drafted a set of PowerPoint slides for you to use as a guide.
View the Predictive Language PowerPoint
“Departures from academic integrity include plagiarism, use of unauthorized materials, facilitation, forgery and falsification, and are antithetical to the development of an academic community at Queen’s.”
I-EDIAA Perspective
Predictive language models are designed and developed by people who make decisions about what is and is not appropriate on the platform, and the models are trained on vast amounts of text from the internet. As a result, the text generated by predictive language models reproduces biases, reinforces discrimination and privilege, and amplifies the stereotypes found on the internet. Using this tool in a classroom setting can perpetuate further harm to equity-deserving groups.
Please keep in mind that if instructors choose to use this platform as part of their course, they should provide alternatives for students and TAs who do not wish to engage with it.
Considerations when designing assessments to mitigate misuse
- When using a writing prompt, ask specific questions that require more complex understanding of the topic.
- Ask students to engage in reflections about their assessments.
- Situate assignments within a local context or situation.
- Consider scaffolding writing assessments so students are engaged with the processes throughout the course.
- Review course learning outcomes: is a written response required? What alternatives could also meet the course learning outcomes? What type of authentic assessment could be used?
- Consider where ChatGPT could be used to support student learning, rather than simply replacing current assessments with more traditional formats.
- Apply higher-order thinking, such as analysis and synthesis, to assessments.
- Think about alternative formats such as video, presentations, or podcasts.
- Use Writing Workshops as an approach to engage students with the writing process.
Your feedback matters
Your feedback is important to us. If you have any questions, resources, or ideas about Predictive Language Models, please complete the following form.
Teaching and Learning Resources
- https://teachingcommons.stanford.edu/news/ai-tools-teaching-and-learning
- https://www.viceprovostundergrad.utoronto.ca/strategic-priorities/digital-learning/special-initiative-artificial-intelligence/
- https://www.niu.edu/citl/resources/guides/chatgpt-and-education.shtml
- https://crlt.umich.edu/blog/chatgpt-implications-teaching-and-student-learning
Research articles
Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205. https://doi.org/10.1016/j.patter.2021.100205
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., ... & Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.