Overview
Since the release of ChatGPT in 2022, a range of algorithmic systems marketed as ‘artificial intelligence’ (AI) that are capable of generating and manipulating media such as text have become widely available. The Department of Political Studies’ policy on the use of these and similar tools in activities related to the Department, its courses, and its other work is as follows:
- This policy is an elaboration of the University’s regulations on AI and academic integrity (see “Unauthorized content generation” under “What is a departure from Academic Integrity?”, “Guidance and Updates Regarding the Continued Approach to Generative Artificial Intelligence Tools in Education”, and “Guidelines for AI use in Graduate Research”). Adhering to the University’s academic integrity standard is a matter of ethics, professionalism, and respect.
- Tools described as ‘AI’ may be used in a specific course for purposes explicitly described by the course instructor in the syllabus or other written communication. In this case some task delegation is not a violation of academic integrity, in the same way that sharing tasks in instructor-approved group work is not. If the use of AI is not explicitly permitted, it is prohibited by default.
- Delegating a task that a student is supposed to perform themselves to another person, service, or tool is a violation of academic integrity known as ‘contract cheating’.
- Suspected contract cheating will be subject to a formal investigation by the instructor/supervisor or a representative. If the investigation determines that there was a departure from academic integrity, a report will be filed with the Faculty of Arts and Science (FAS) Academic Integrity office, and the instructor/supervisor and/or the FAS will administer remedies or penalties/sanctions, which could include the requirement to withdraw from the university (see “Academic Integrity Procedures”).
- Large language models (LLMs) such as ChatGPT and Gemini are inherently unreliable, frequently provide incorrect information, and are non-deterministic, meaning that the same prompt will generate different outputs each time the system is used (for more information, please see the “What is ‘artificial intelligence’?” section below). Accordingly, even when their use is permitted for the purpose of generating or rephrasing text, these systems are not an appropriate substitute for sources that can be assessed by an instructor.
- Research conducted using algorithmic systems may violate the University’s regulations regarding research integrity (a subcategory of academic integrity; see “Integrity in Research”) in a variety of ways, such as if you do not know the character or provenance of the system’s training data, whether the data was legally obtained, or how the data was modelled.
- Ignorance is not a defence. Members of the Department who are unsure whether they understand this policy, or ‘artificial intelligence’ more generally, are encouraged to read the rest of this page, which explains and contextualizes the policy, and to contact Dr. Stephen Larin if they have any further questions.
What is ‘artificial intelligence’?
By Stephen Larin
Artificial intelligence (AI) is a marketing term for a wide range of computer-based algorithmic systems. It has no single, unambiguous definition, but generally refers to ‘apparently intelligent action performed by a machine’. An ‘algorithm’ is a step-by-step procedure for accomplishing some purpose (performing a calculation, solving a problem, pattern-matching, etc.), and there is no inherent relationship between algorithms and computers; recipes are algorithms, for example. All computer programs/apps are based on many different algorithms, and there is no clear line between ‘just a program’ and ‘artificial intelligence’.
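To make the idea concrete, here is a deliberately trivial illustration, not taken from any real AI system: an algorithm for computing an average, written out as explicit steps in Python. The names and numbers are made up for the example.

```python
# A simple algorithm: compute the average of a list of grades.
# Each line is one explicit step; the computer does exactly what is
# written, nothing more. (Illustrative sketch only; values are made up.)

def average_grade(grades):
    total = 0
    for grade in grades:        # Step 1: add up every grade
        total += grade
    return total / len(grades)  # Step 2: divide by the number of grades

print(average_grade([72, 85, 91]))  # prints 82.67 (rounded)
```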
Conceptually, there are three broad categories of AI: ‘artificial narrow intelligence’ (ANI), ‘artificial general intelligence’ (AGI), and ‘artificial super intelligence’ (ASI). ANI is a task-specific algorithmic system capable of some autonomous intelligent action; AGI and ASI are speculative computational systems imagined to be capable of autonomous intelligent action across a range of domains, at a level comparable to or greater than human intelligence, respectively. ANI is the only type of artificial intelligence that actually exists; the other two do not exist and may never exist. The strong influence of science fiction on popular perceptions of AI often leads to serious misunderstandings, which is why many researchers prefer terms such as ‘algorithmic system’.
ANI can be divided into two main types: symbolic ‘good old-fashioned AI’ and machine learning.
Symbolic AI was the dominant approach from the 1950s to the 1980s, and it is ‘symbolic’ in the sense that it is based on human-readable symbolic programming languages, similar to the symbols used in mathematics and formal logic. Its actions are determined by pre-programmed instructions that specify the range of options for a particular task and the best course of action.
Machine learning is the type of AI that has driven the surge of interest and development since the early 2010s. Unlike a symbolic AI system, a machine learning system is capable of doing things that its programmers did not foresee and program. It is designed with a core set of rules to follow, but ‘learns’ what to do within those parameters by being ‘trained’ on a dataset, and in some cases also through operation. There are several different approaches to machine learning, but the most influential is the ‘deep learning neural network’. Don’t take the name too seriously: it is aspirational, in the sense that it is supposed to model how brains work, but neural networks are not ‘artificial brains’ in any meaningful sense. Deep learning neural networks are very good at pattern-matching, and their performance has improved significantly since about 2012, but most people were unaware of these advances until ChatGPT was released in late 2022.
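The difference between pre-programmed rules and ‘learning’ from data can be sketched in a few lines of code. The toy example below is a crude word-counting classifier with made-up training messages; it is far simpler than a neural network, and it is not drawn from any real system. The point is only that the behaviour comes from the labelled data rather than from a hand-written list of rules.

```python
from collections import Counter

# Toy 'training data': messages labelled by a human.
# (Hypothetical examples; real systems train on millions of items.)
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to tuesday", "not spam"),
    ("draft essay attached for comments", "not spam"),
]

# 'Training': count how often each word appears in each category.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    # Score a new message by which category its words were seen in more often.
    scores = {label: sum(counts[word] for word in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize inside"))      # 'spam'
print(classify("comments on the draft"))  # 'not spam'
```

No programmer wrote a rule saying that ‘free’ or ‘prize’ indicates spam; that association was extracted from the examples, which is the basic idea behind ‘training’.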
Large language models
ChatGPT and most of the other systems that are marketed as AI are ‘large language models’ (LLMs), which are a particular type of deep learning neural network system. They are called large language models because they ‘model’ the pattern of linguistic relationships in large datasets of text.
For example, ChatGPT is based on terabytes of text from the Internet, ranging from books to Reddit posts. When OpenAI was developing ChatGPT, they used an algorithmic system that conducted a statistical analysis of the patterns of relationships between the different parts of that text dataset; the end product is ChatGPT’s ‘language model’, a representation of the patterns of association found in the training data.
The purpose of an LLM is to generate plausible, clear, and grammatically correct prose that is a linguistic match for its input. That’s it. It is crucial to recognize that no algorithmic system has the capacity for understanding, and when ChatGPT appears to be ‘answering your question’ (for example), it does not understand either your question or the answer, but is instead just generating the statistically best match between your text and the patterns in its model (with some deliberate randomization added in to help avoid repetitiveness and mimic creativity).
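That core mechanism can be caricatured in a few lines of code. The toy ‘model’ below is a deliberate oversimplification and not how ChatGPT is actually built: it counts which word follows which in a tiny made-up training text, then generates new text by repeatedly picking a likely next word, with some randomness. Real LLMs work on a vastly larger scale and with far more sophisticated statistics, but the basic character of the task, predicting plausible next words rather than understanding anything, is the same.

```python
import random
from collections import defaultdict, Counter

# Tiny 'training text' (purely illustrative; real models use terabytes).
text = "the cat sat on the mat and the cat slept on the mat".split()

# 'Model': for each word, count which words follow it.
model = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    model[current][nxt] += 1

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        followers = model[word]
        if not followers:
            break
        # Pick the next word in proportion to how often it followed this one:
        # a statistical match, with some deliberate randomness mixed in.
        word = random.choices(list(followers), weights=followers.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. 'the cat sat on the mat and'
```

Nothing in this sketch checks whether the generated sentence is true, and the same is true of a real LLM, which is the point of the next paragraph.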
This often happens to provide the right answer, especially when the input matches with something that is well-represented and uncontested in the training data, but the truth or falsity of the text that the system generates is irrelevant, and impossible for it to assess. Large language models, like all deep learning neural networks, are pattern-matching machines and “bullshit generators”, as Princeton computer science professor Arvind Narayanan puts it, using philosopher Harry Frankfurt’s term for speech that is intended to persuade without regard for the truth.
Here’s a brief conceptual summary (read the arrows as ‘is a type of’):
ChatGPT → large language model → deep learning neural network → machine learning → artificial narrow intelligence → algorithmic system
AI and academic integrity
The core principle of academic integrity is that all work that you submit for evaluation must be yours alone, because the university accreditation system is based on students demonstrating that they actually have the skills that their grades and degrees certify. Delegating a task that you were supposed to perform yourself to an algorithmic system is no different from delegating it to another person: both violate academic integrity.
The ‘that you were supposed to perform’ part is key. In group work, for example, whatever you submit is meant to be the product of a collaborative effort. Similarly, some instructors may not only permit the use of algorithmic systems in their course, but even encourage or require it for an assignment. Some instructors permit some algorithmic systems and prohibit others, based on the tasks that they perform.
For example, some instructors recommend that their students use reference management software such as EndNote or Zotero. These are algorithmic systems that automate most of the citation process: if you need to cite something while you’re writing, you call up the reference manager, choose the source you want to cite, and the manager will automatically insert the reference you need on that page and add it to your bibliography, all formatted according to whatever citation style you are using. Automation is usually appropriate for tasks that are not integral to learning and for which ‘cognitive offloading’ is helpful. This is the same reason that calculators aren’t prohibited in most Political Studies courses, but might be in those where students are meant to learn how to do some types of calculation on their own, if only so that they genuinely understand what a calculator is doing when it performs that type of calculation for them (which is very important in some professions).
On the other hand, algorithmic systems that generate or paraphrase/rewrite text are prohibited in most Political Studies courses. That is usually because writing is integral to both learning and evaluation in political science, and delegating that task undermines both.
For example, when writing a paper, we often don’t really know what we think about something until we’ve typed out a few sentences, read them aloud, and rewritten them many times. Writing is a kind of self-dialogue that allows us to work out complex ideas and analyses because it, too, is a kind of cognitive offloading that facilitates the development and application of many core skills, including precise conceptualization, logical organization, and clear communication.
Given all of this, it should be obvious why the unauthorized use of an algorithmic system both violates academic integrity and doesn’t make any sense. It violates academic integrity because you are pretending to have done something that you didn’t do. It doesn’t make sense because it undermines the pedagogical purpose of the assignment. If you don’t even try to do your own work, you will never be able to do it. It’s like training for a marathon by driving the route in a car.
Ask for help when you need it
If you are struggling with your work, contact your instructors and ask to meet with them during office hours. Student Academic Support Services also offers a variety of services and opportunities, including one-on-one consultation, to help students improve their study and writing skills. William Zinsser’s book On Writing Well: The Classic Guide to Writing Nonfiction (HarperCollins, 2006) is also highly recommended.
The politics of artificial intelligence
If you would like to learn more about the political and broader social implications of artificial intelligence, the documentary Coded Bias (2020) is a good place to start. Kate Crawford’s book Atlas of AI (Yale, 2021) is currently the best overview of the politics of artificial intelligence, and Sasha Luccioni et al.’s “The Environmental Impacts of AI – Primer” is a good introduction to that under-studied subject. Students are also encouraged to take “POLS 478: Politics of Artificial Intelligence”.