Generative AI Literacy: Ethics & AI

This guide offers resources for beginners looking to understand Generative AI and how to use tools like ChatGPT effectively and ethically.

AI Use

Before using generative AI tools in your classes, review the syllabus for the instructor’s AI policy or consult your instructor directly.

Misinformation & Bias

Misinformation

While generative AI tools can help users with tasks such as brainstorming new ideas, organizing existing information, mapping out scholarly discussions, or summarizing sources, they are also notorious for not relying fully on factual information or rigorous research strategies. In fact, they are known for producing "hallucinations," a term used in AI research to describe plausible-sounding but false information generated by the system. Oftentimes, these "hallucinations" are presented in a very confident manner and consist of partially or fully fabricated citations or facts.

Certain AI tools have even been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead audiences. Referred to as "deepfakes," these materials can be used to subvert democratic processes and are therefore particularly dangerous.

Additionally, the information presented by generative AI tools may lack currency, as some systems do not have access to the latest information. Rather, they may have been trained on older datasets, generating dated representations of current events and the related information landscape.

Bias

Another potentially significant limitation of AI is the bias that can be embedded in the products it generates. Trained on immense amounts of data and text available on the internet, these large language models simply predict the most likely sequence of words in response to a given prompt, and will therefore reflect and perpetuate the biases present in that internet-sourced training data. An additional source of bias is that some generative AI tools are refined using reinforcement learning from human feedback (RLHF), and the human testers who provide this feedback are themselves not neutral. Accordingly, generative AI like ChatGPT has been documented to produce output that is socio-politically biased, occasionally even containing sexist, racist, or otherwise offensive information.

Related Recommendations  

  • Meticulously fact-check all of the information produced by generative AI, including verifying the source of all citations the AI uses to support its claims.
  • Critically evaluate all AI output for any possible biases that can skew the presented information. 
  • Avoid asking the AI tools to produce a list of sources on a specific topic as such prompts may result in the tools fabricating false citations. 
  • When available, consult the AI developers' notes to determine if the tool's information is up-to-date.
  • Always remember that generative AI tools are not search engines: they simply use large amounts of data to generate responses constructed to "make sense" linguistically, not to be factually accurate.

Academic Integrity

Plagiarism

Generative AI tools (GAI) have introduced new challenges in academic integrity, particularly related to plagiarism.

Plagiarism is typically defined as presenting someone else's work or ideas as one's own. While a GAI tool might not qualify as a "someone," using text generated by a GAI tool without citation may still be considered plagiarism because the work is not the researcher's own. Individual policies for using and crediting AI tools vary from class to class, so reviewing the syllabus and getting clear guidance from your instructor is important.

False Citations

Another area of academic integrity affected by GAI tools is that of false citations.

Providing false citations in research, whether intentional or unintentional, may violate the Academic Integrity Honor Code at Foothill College. GAI tools such as ChatGPT have been known to generate false citations, and even when the citations refer to real papers, the content ChatGPT attributes to them may still be inaccurate.

Related Recommendations

  • If GAI tools are permitted only for topic development in the early stages of research, you might not need to cite them at all, but it's still important to check with your instructor first.
  • If you are providing commentary or analysis on the text generated by a chatbot and are either paraphrasing its results or quoting it directly, a citation is always required. You can find more information on citing GAI tools on the Library's ChatGPT Guide.
  • Always look up citations to verify that they are accurate, and if you are citing information from a source, cite the original source rather than ChatGPT or whichever GAI tool you are using.

Privacy

Breaches of Privacy & Danger of Re-Identification

There are currently multiple privacy concerns associated with the use of generative AI (GAI) tools. The most prominent issues revolve around possible breaches of personal or sensitive data and the risk of re-identification. More specifically, most AI-powered language models, including ChatGPT, rely on large amounts of user-submitted data to be trained and to generate new information effectively. This means that personal or sensitive user-submitted data can become part of the material used to further train the AI without the user's explicit consent. Moreover, certain GAI policies even permit AI developers to profit from this personal or sensitive information by selling it to third parties. Even when no clearly identifying personal information is entered by the user, using the system carries a risk of re-identification, as the submitted data may contain patterns that allow the generated information to be linked back to an individual or entity.

Related Recommendations

  • Avoid sharing any personal or sensitive information via the AI-powered tools. 
  • Always review the privacy policy of GAI tools before using them. Be cautious about policies that permit the inputted data to be freely distributed to third-party vendors and/or other users.

Attribution

"Artificial Intelligence (Generative) Resources: Ethics & AI" by Georgetown University Library, used under CC BY-NC 4.0 / Library-specific information adapted for community college students at the Foothill Library.