Ethical Concerns Regarding Generative AI
For instructors and students, understanding and evaluating the uses of Generative AI (GenAI) tools is important, whether or not you choose to incorporate these tools into your courses. Facilitate discussions with your students about ethical concerns related to GenAI, including the impacts of spreading disinformation, the lack of regulation of the companies that develop these technologies, and other dangers. Students will likely continue to use GenAI tools, and the UCLA community must understand both the limitations and the opportunities of these tools.

Below are some factors to consider regarding the ethical use of GenAI tools:

Data Use and Consent: GenAI tools are trained on vast amounts of data, and they collect and store data about their users. Some of these tools may have been trained on data used without the owners' consent, and there have been a myriad of lawsuits concerning copyright infringement ("The Times Sues OpenAI and Microsoft Over Use of Copyrighted Work" and "Boom in A.I. Prompts a Test of Copyright Law"). Always use commercially licensed GenAI tools to minimize the risk of intellectual property rights infringement. In a classroom setting, instructors should also consider what information they are asking students to share.

Cost & Limits to Access: While many GenAI tools are free, others charge for access to premium features, which can create barriers for students.

Bias: The datasets used to train GenAI tools may incorporate biased or incomplete data, potentially causing the tools to generate biased and/or discriminatory content ("ChatGPT is as Biased as We Are"). Students should ask themselves: How does the generated information impact or influence their thinking on this topic? Who is represented in the data? Is the data inclusive in terms of the material's scope and the perspectives it presents?

False Information & Hallucinations: Don't rely on GenAI tools as primary sources of information. The content generated by GenAI tools can contain "hallucinations" (fabricated information), be inaccurate, and/or be out of date ("Chatbots May 'Hallucinate' More Often Than We Realize"). Additionally, the underlying models powering GenAI tools may have been trained on biased or incomplete data ("Disinformation Researchers Raise Alarms about A.I. Chatbots"). Always use your judgment to evaluate the content generated by these tools, and verify the accuracy of AI-generated content against reliable sources.

  • Be sure to cite a generative AI tool whenever you paraphrase, quote, or incorporate into your own work any content (whether text, image, data, or other) that it created.

Energy & Environmental Impacts: Building, training, and using GenAI require a tremendous amount of energy, consume large quantities of water for cooling, and contribute to carbon emissions. While efforts are being made to make GenAI more sustainable, consider whether its use is worth the environmental impact, or whether you can use these tools more efficiently ("The Uneven Distribution of AI's Environmental Impacts" and "Hungry for Energy, Amazon, Google and Microsoft Turn to Nuclear Energy").

Labor Exploitation & Labor Harm: These systems are built on massive datasets of material from the internet that was created by humans. Training and improving these models also requires human workers to review and rate output. These workers are often poorly paid, employed on contract, and in precarious positions ("America Already Has an AI Underclass", "Cleaning Up ChatGPT Takes Heavy Toll on Human Workers", "AI needs to face up to its invisible-worker problem").

Adapted from materials by the UCLA Teaching and Learning Center, Widener University Library, and Amherst College Library.