Generative AI (ChatGPT, Gemini, & More): AI & Ethics

Guide & Resources on generative AI in education, including examples, explanations, and more.

Is Using AI Ethical in Education?

Using generative AI in your academics poses challenges around ethics, plagiarism, and misrepresentation. You must critically and academically review, verify, and authenticate every response you receive from generative AI tools such as chatbots. Do not rely on generated responses as your sole source of information; be prepared to fact-check everything generative AI supplies. The responsibility for academic integrity rests with the user (e.g. the student), not the evaluator (e.g. the instructor).


The Congressional Research Service created a report on how generative AI interacts with copyright law, with a subtheme of ethics. Please view the Generative Artificial Intelligence and Copyright Law page to learn more.

GPT & AI in Research

Using GPT & AI for Research

Generative AI can be queried to perform searches on your behalf. Google (Bard) and Microsoft (Bing) plan to incorporate their chatbot AI tools into their search products; soon, using Google Search may mean interacting with Google Bard first and foremost.

Currently, most chatbots are separate from search tools. While you can ask a chatbot a research question, it will not actively search the web for you (Bing Chat is the exception in certain situations). Instead, it generates information from its training dataset, which is (sometimes) updated periodically to keep current.

Even with the updates and developmental progress generative AI has made in the relatively short time it has been publicly available, it is still prone to errors, ranging from minor inaccuracies to full-blown hallucinations. When using chatbots for research, the responsibility is on you to ensure the information you receive is factually correct. If you are looking to chatbots to save time and effort in searching, bear in mind that the labor will instead shift to verifying that the research is authoritative and has integrity.

Chatbots come with up-front disclaimers stating inaccuracies will occur. Keep this in mind when performing research for papers, projects, and information searches.

AI's Ethical Implications

Time reported on January 18, 2023, that OpenAI (developer of ChatGPT) outsourced labor to a Kenyan business, whose workers identified, tagged, and reported textual descriptions of abuse, hate speech, violence, and other traumatic content. These laborers earned less than $2 per hour for their work. The article highlights the hidden reliance on human labor, in this case (and in many others) exploitative labor, behind the touted advances and intelligence of these tools.

Generative AI is inherently biased because of the datasets it works with. No dataset is perfect, and the data within these sets embodies the inaccuracies, biases, and faults of the Internet. The more information these chatbots process, the more unsafe they can become. MIT published a widely cited article on making chatbots that are not racist or sexist, a goal the article describes as nigh-impossible. The Washington Post published a story on a paper from the University of East Anglia showing that chatbots lean liberal in their political biases.

For a showcase of how biased and flawed chatbots can be, look no further than Microsoft's previous foray into chatbots with Tay.

Detecting AI Use

As of this writing, current efforts in AI detection are inaccurate. If a teacher needs to control the use of generative AI in the classroom, the focus should be on preventing its use, compensating for possible use, or integrating AI into instruction or assignment design, rather than on detecting it.

Turnitin offers an AI detection tool.

Another option is GPTZero, but like Turnitin's tool, it is not foolproof.

ChatGPT Doesn't Know?

Screenshot: ChatGPT stating it does not know whether it has access to DALL-E, which it does in the paid tier.

Here is an example of the quality of information narrow-AI LLMs such as ChatGPT can generate. The free tier's training data is two years old, so the model does not know that the paid tier has had access to DALL-E for several months (as of May 2024). When using generative AI, double-check all the information it provides you.