
Generative AI tools like ChatGPT are writing papers, raising integrity concerns

It’s an F for academic integrity…for now
Generative AI tools are a double-edged sword in academia

Plagiarism detector Turnitin recently shared that, of the papers it scanned, over 22 million included at least 20 percent AI-generated content. Worse still, six million papers contained at least 80 percent AI-generated content.

Generative AI tools like ChatGPT are a wonderful aid for everything from streamlining customer support to summarizing reports. But they can just as easily be misused, whether for crafting flawless phishing campaigns or for writing entire research papers. If the Turnitin study is anything to go by, unchecked use of generative AI tools will erode academic integrity.

And it isn’t going to get better anytime soon. In an email to Economy Middle East, a Turnitin spokesperson pointed to a similar study conducted by Tyton Partners.


Nearly half of the students surveyed admitted to using generative AI tools, such as ChatGPT, monthly, weekly or daily. Shockingly, 75 percent of these students said they would continue to use the technology even if faculty or institutions banned it.

Time for a change

Turnitin believes generative AI writing tools have significantly disrupted education since the launch of ChatGPT, powered by GPT-3.5, in November 2022.

“Continued integration of GenAI in academia will further complicate academic integrity,” says Neil Sahota, UN AI advisor and CEO of ACSILabs.

This resonates with Dr. Gregory P. Gasic, co-founder of VMeDx. “With AI-generated content being so readily available and sophisticated, the lines between original thought, research-based conclusions, and AI-generated content [have] become blurred,” says Gasic.

He believes the use of generative AI for writing will also dramatically increase incidents of plagiarism. “If not used responsibly, AI tools like ChatGPT could turn into ‘intellectual shortcuts,’ leading to less personal effort and originality in the academic workspace,” says Gasic.

However, Turnitin believes there’s more to academic integrity than just weeding out AI-assisted writing.

“If we focus only on one piece—AI writing detection—we are missing the bigger, more important picture,” Turnitin tells us. They believe other pieces in the academic integrity puzzle are institutional policy updates, valuable conversations with students, and assessment evolution.


“Turnitin’s latest report highlights both the challenges and opportunities this technology presents,” says Sahota. “To begin with, universities and colleges will need to redefine their academic integrity frameworks to specifically address AI-authored content.”

Gasic is also in favor of defining and implementing new policies and regulations in academic institutions to mitigate the issue.

“Introducing checks and controls around the use of AI-powered tools, alongside educative initiatives to ensure understanding of their correct and incorrect usage, would be crucial,” says Gasic. “In parallel, classrooms should instill a deep respect for the academic integrity culture, emphasizing critical thinking, originality, and individual effort.”

Order in chaos 

Gasic believes generative AI tools like ChatGPT are a double-edged sword. While unchecked use might hurt academic integrity, he believes they may also promote a more accessible, efficient, and dynamic learning environment.

“They provide students and researchers with a tool for gathering and synthesizing information, sparking new ideas, and even offering a grammar and language aid,” says Gasic. “Using these tools, individuals can benefit from the immense scale of knowledge and speed of processing that AI affords.”

Turnitin, too, believes AI-generated writing is not a binary concept with rigid lines around what is or is not acceptable. Referring to it as “true disruption,” Turnitin says the advent of generative AI requires a rethink of many aspects of our world, from education to the workforce and beyond.


Instead of banishing the tech from schools outright, Turnitin calls for providing students with an environment where they can learn the technology’s strengths and weaknesses. They say that students must be taught to use it safely and ethically. This, they insist, will help prepare a future workforce that will be expected to use generative AI writing tools.

“If we don’t, they will enter the workforce at a disadvantage, having lost out on valuable instruction on how to harness their power and understand their limitations,” Turnitin tells us.

Sahota, too, believes using generative AI in education can take academic standards to a whole different level. He points to WTRI, which uses AI and cognitive science to clone expertise, as evidence that AI tools can be leveraged to teach critical thinking and research skills.

“We need to rethink how we approach AI writing in the classroom, because these tools are intricate and ever-evolving,” says Turnitin. “Therefore, we are obligated to teach students how to use them effectively.”

