What is Generative Artificial Intelligence (AI)?

What are Generative AI tools?

Generative AI is a broad label for any type of artificial intelligence (AI) that can generate text, images, video, audio, simulations, computer code, or synthetic data.

Use cases for Generative AI models include:

  • Responding to questions and prompts.
  • Analyzing, improving, and summarizing text.
  • Writing computer code.
  • Translating text from one language to another.
  • Creating new ideas, prompts, or suggestions for a topic or theme.
  • Generating text with specific attributes, such as tone, sentiment, or formality.

What is a Large Language Model?

Large language models (LLMs) such as the one behind ChatGPT are large statistical models that generate natural-sounding text. ChatGPT, developed by OpenAI, can produce text-based responses such as letters, recipes, essays, and songs. Essentially, it does a very good job of predicting what a human would write next; however, it does not understand the content it generates and cannot determine whether the information is misleading (Weidinger et al., 2022).
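To make the idea of next-word prediction concrete, the following minimal sketch in Python shows a language model continuing a prompt one predicted token at a time. It is an illustration only: it uses the openly available GPT-2 model through the Hugging Face transformers library, since ChatGPT itself is accessible only as a hosted service.

    # A minimal sketch of next-word prediction with an open model (GPT-2),
    # illustrating the principle behind tools like ChatGPT: the model
    # repeatedly predicts a plausible next token given the text so far.
    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "A large language model works by"
    result = generator(prompt, max_new_tokens=25, do_sample=False)

    # The continuation is statistically plausible text; the model does not
    # understand or fact-check what it produces.
    print(result[0]["generated_text"])

The output reads fluently because the model has learned statistical patterns from vast amounts of text, not because it knows whether what it writes is true.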

Text Generator Products/Tools: 

Image Generator Products/Tools:

Know the Risks

As Generative AI tools continue to become more sophisticated and versatile, it is crucial to be cautious and to assess their capabilities, issues, and potential biases. These include legal, ethical, political, ecological, social, and economic concerns.

Biases and Harms

Generative AI tools, as they have been designed and developed, reproduce biases, reinforce discrimination, and amplify stereotypes, leading to further harm to equity-deserving groups. This is because they are trained on large amounts of data from the internet and do not distinguish between reliable and unreliable sources. For example, they reproduce collective writings (such as Facebook or Reddit comments, pornography, and fake news) as well as academic journals and “real” (fact-checked) news from across the world. Further, while ChatGPT can provide references, studies have shown that these references are often made up or nonexistent. But these are not the only harms reproduced by Generative AI. In the interactive graphic below, Sweetman (2023) highlights some of the harms that need consideration, including environmental, economic, and epistemic harms. Click on the hotspots (plus signs on the graph) to learn more about these harms and their implications:

Graph developed by Rebecca Sweetman, "Some Harm Considerations of Large Language Models (LLMs)," focusing on the relationship of Environment, Economy, Social Norms, and Knowledge Reproduction with Design and Development, Operationalization, and Future Legacy.

 

View this graph: Some Harm Considerations of LLMs in the eCampusOntario H5P Studio.