Using generative AI in your academics raises challenges around ethics, plagiarism, and misrepresentation. You must critically and academically review, verify, and authenticate every response you receive from generative AI tools such as chatbots. Do not rely on generated responses as your source of information; be prepared to fact-check everything generative AI supplies. The responsibility for academic integrity rests with the user (e.g., the student), not the evaluator (e.g., the instructor).
The Congressional Research Service published a report on how generative AI interacts with copyright law, which also touches on ethics. See the Generative Artificial Intelligence and Copyright Law page to learn more.
Generative AI can be queried to perform searches on your behalf. Google (Gemini) and Microsoft (Copilot) have plans to incorporate their AI tools into their search products, and you can now see "AI Summaries" at the top of your Google Search results. These summaries use generative AI in conjunction with search results to pull information and create a brief overview of your query. AI Summaries are convenient, but because of the generative AI integration they can still hallucinate or present information (or context) incorrectly.
Even with the updates and developmental progress generative AI has experienced in the relatively short time it has publicly existed, it is still prone to errors, ranging from minor inaccuracies to full-blown hallucinations. When using chatbots to perform research, the responsibility is on you to ensure the information you receive is factually correct. If you are looking to chatbots to save time and effort in searching, bear in mind that the labor will instead shift to verifying that the research is authoritative and has integrity.
Time reported on January 18, 2023, that OpenAI (the developer of ChatGPT) outsourced work to a Kenyan business whose laborers identified, tagged, and reported textual descriptions of abuse, hate speech, violence, and other traumatic content. These laborers earned less than $2 per hour for their work. The article highlights the hidden reliance on human labor, in this case (as in many) exploitative labor, even as the advances and intelligence of these tools are touted.
Generative AI is inherently biased because of the datasets it is trained on. No dataset is perfect, and the data in these sets embodies the inaccuracies, biases, and faults of the Internet. The more information these chatbots process, the more unsafe they can become. MIT published a widely-read article on building chatbots that are not racist or sexist, a task it describes as nigh-impossible. The Washington Post published a story on a paper from the University of East Anglia showing that chatbots lean liberal in their political biases.
For a showcase of how biased and flawed chatbots can be, look no further than Microsoft's previous foray into chatbots with Tay.
As of this writing, current AI-detection efforts are inaccurate. If a teacher needs to control the use of generative AI in the classroom, the focus should be on preventing its use, compensating for possible use, or integrating AI into instruction or assignment design, rather than on detecting it.
Turnitin has a detection tool, located here.
Another option is GPTZero, but like Turnitin's tool it is not foolproof.