The increasing use of AI-based Large Language Models (LLMs) like ChatGPT in academia has sparked a lively debate within the scientific community. Today, students and researchers can save time and boost work productivity by using AI to get input for essays, theses, dissertations, and research papers, ensuring timely submissions.
However, this growing dependence on ChatGPT is raising concerns about academic integrity, accuracy, and the long-term impact on researchers’ critical thinking and learning skills, especially since the content generated by ChatGPT and other LLMs is not always original or accurate.
Before addressing the question of whether ChatGPT produces plagiarized content, it is important to understand what constitutes plagiarism. Plagiarism is the use of someone else’s ideas, words, or text in a verbatim form or even a paraphrased version without providing due acknowledgement to the respective authors. It is considered a breach of academic integrity and is dealt with seriously by academic institutions.
Does ChatGPT plagiarize?
The original GPT-3 has 175 billion parameters¹, making it one of the largest and most powerful models for AI processing available today. It has been trained on an extensive dataset of text and code, allowing it to generate original content based on prompts, so technically, it does not copy pre-existing material verbatim. However, it may produce content that closely resembles the material it has been exposed to during training and often does not give due credit to the source. This is a violation of academic guidelines as the work is not entirely original.
Concerns regarding the use of ChatGPT
Given that an integral aspect of research is the contribution of new knowledge and perspectives to the existing body of work, originality is considered a key attribute of good research. However, ChatGPT’s content is derived from a huge amount of existing data and information, and relying on it means students miss out on developing essential research and writing skills in the process.
Therefore, the real issue here is whether students and researchers are genuinely learning and engaging with AI-generated material or if they’re using LLMs to cut corners and avoid doing the hard work themselves. It’s about understanding the content, not just letting AI do all the thinking for them.
In fact, given the extensive use of LLMs in academia, universities are crafting guidelines to manage the use of generative AI in academic settings. For instance, Harvard University has issued guidelines that focus on information security, data privacy, compliance, copyright, and academic integrity when using tools like ChatGPT.² Similarly, the APA style guide (among others) now recommends that researchers using LLMs like ChatGPT quote and cite the language model as they would any other source, and it even offers a citation format for ChatGPT texts.³
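For illustration, the APA style blog treats the model’s developer as the author of the output; a reference entry along the following lines is suggested there (the version date shown is an example, not a requirement):

```
Reference entry:
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model].
    https://chat.openai.com/chat

In-text citation:
(OpenAI, 2023)
```

Always check the current version of the style guide your institution or target journal follows, as guidance on citing generative AI is still evolving.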
Importantly, even as students and researchers increasingly turn to AI tools like ChatGPT for quick access to information and assistance with writing, it is essential to be aware of some of the significant issues regarding AI-generated content.
Why using ChatGPT for research or academic writing may not be a good idea
- Dated information: A primary concern with AI-generated content is the lack of up-to-date information. ChatGPT creates content based on a vast quantum of training data with a cutoff of September 2021, so more recent information is unavailable to it.
- Made-up references: AI can generate citations that look authentic but are fabricated. Sometimes ChatGPT (GPT-4) even provides DOI hyperlinks that, when checked, lead to articles other than the ones cited.⁴ This has serious repercussions for the integrity of academic output, which is why researchers should cross-check every citation, perhaps with the help of established online citation generators.
- AI hallucination: A significant concern with AI-generated content is AI hallucination. Also termed artificial hallucination or confabulation, it happens when text generated by LLMs is seemingly plausible but actually contains mistakes and inaccurate information. In fact, it may not be connected to reality at all. If the information is not validated, students and researchers could end up following flawed investigation routes, which could translate into misleading results and wasted research dollars. For policymakers, the inability to detect false research may ground policy decisions in incorrect information that could have monumental effects on society.
- Writing issues: AI-generated content is often excessively verbose. It leans on formal-sounding filler words and fails to vary word choice and sentence structure the way a human writer would, which makes it sound impersonal. ChatGPT’s output is also not free from biases and stereotypes, and it tends to reinforce a user’s own biases during the interaction. It must therefore be reviewed carefully and not used as is.
- Unreliable prompts: If prompts are not structured correctly, using ChatGPT for research can produce varied, unreliable, and possibly incomplete responses.
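As a first line of defense against made-up references, a short script can screen an AI-generated reference list for malformed DOIs before each one is verified by hand. The sketch below is a heuristic format check only (the regex covers the common `10.<registrant>/<suffix>` shape); a well-formed DOI can still point to a fabricated or mismatched reference, so every entry should additionally be resolved via the doi.org resolver and compared against the cited title and authors.

```python
import re

# Heuristic pattern for the common DOI shape: "10." + a 4-9 digit
# registrant code + "/" + a non-empty suffix. A format check only --
# it cannot confirm that the DOI resolves or matches the citation.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(candidate: str) -> bool:
    """Return True if the string is shaped like a DOI."""
    candidate = candidate.strip()
    # Strip a resolver prefix if the citation gives a full URL.
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if candidate.lower().startswith(prefix):
            candidate = candidate[len(prefix):]
            break
    return bool(DOI_PATTERN.match(candidate))

# Example: screen a list of AI-generated citations before trusting them.
citations = [
    "https://doi.org/10.1007/s40979-023-00137-0",  # well-formed
    "10.1234/example.5678",                        # well-formed
    "doi:11.99/not-a-doi",                         # malformed registrant
]
for doi in citations:
    verdict = "plausible format" if looks_like_doi(doi) else "malformed"
    print(f"{doi} -> {verdict}")
```

Passing this check is necessary but not sufficient: follow up by opening `https://doi.org/<doi>` for each surviving entry and confirming the landing page is the article actually cited.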
How do you avoid plagiarism while writing with AI tools?
- Proper monitoring: While using AI tools, it is important to individually check and recheck generated content for bias and inaccuracies. Make sure to verify each source and data point. Read through the content carefully to ensure that information is written and presented correctly and unambiguously.
- Build your awareness: Students and researchers should gain a good understanding of how to use AI tools ethically. They should be aware of established institutional policies and journal guidelines regarding the use of AI tools.
- Disclosure in cases where the use of AI tools is allowed: Ensure that full disclosure is made when using AI tools. Style guides such as APA, Chicago, and MLA provide guidelines on how to cite and acknowledge content generated through AI.
Lastly, keep in mind that while AI-based tools can be used as an e-research assistant to improve efficiency and complement your work as a researcher, they can never replace original thinking and engagement with the topic.
References:
1. https://pmc.ncbi.nlm.nih.gov/articles/PMC10028016/
2. https://provost.harvard.edu/guidelines-using-chatgpt-and-other-generative-ai-tools-harvard
3. https://apastyle.apa.org/blog/how-to-cite-chatgpt
4. https://edintegrity.biomedcentral.com/articles/10.1007/s40979-023-00137-0