Photo: Tada Images / Shutterstock.com
A massive influx of AI-generated articles into academia and research papers has prompted calls for a plan of action and policy measures from institutions. The issue has ignited discussions about transparency, integrity, and the potential consequences of AI-driven contributions.
The whole furore started when a peer-reviewed study published in Resources Policy, an Elsevier academic journal, included a seemingly misplaced sentence that avid ChatGPT users will promptly recognise: “Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table.”
This revelation sparked an investigation by Elsevier, as the publisher sought to understand the role of AI in the study and whether similar lapses have occurred elsewhere.
Academic journals are addressing the challenge of undisclosed AI use with differing policies. The JAMA Network prohibits listing AI generators as authors and demands disclosure of their use. Science journals require editorial permission for AI-generated content, while PLOS ONE mandates disclosure of the AI tool used, how it was evaluated, and an assessment of its validity, Wired reported.
There have been many instances where generative AI models have confidently given users false information, fuelling widespread misinformation. However, detecting AI use is also challenging, as current AI detection tools are unreliable at best. OpenAI has shut down its own AI detection tool over its low accuracy rate. Mind you, this is the same company that built ChatGPT; one would think that at least the maker of ChatGPT could easily detect AI-written content, but that is not the case.
A report by Ars Technica detailed how AI detectors have failed spectacularly, with some claiming that the US Constitution was written by AI.
However, a few ways exist to determine whether a research paper was written by AI or by a human. Guillaume Cabanac, a professor of computer science at the University of Toulouse in France, noticed odd terms such as ‘counterfeit consciousness’ appearing in place of artificial intelligence, which led him to coin the term ‘tortured phrases’ for such telltale signs of AI-generated content.
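As a rough illustration of the idea (and not Cabanac’s actual tooling), a simple script could flag known tortured phrases in a manuscript. The phrase list and matching logic below are illustrative assumptions:

```python
import re

# Illustrative examples of "tortured phrases": odd synonyms that surface
# when a common technical term has been machine-paraphrased.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "irregular timberland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase, expected in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, expected))
    return hits

sample = "Our counterfeit consciousness model relies on profound learning."
for phrase, expected in flag_tortured_phrases(sample):
    print(f"Suspicious: '{phrase}' (expected '{expected}')")
```

A real screening pipeline would need a far larger phrase list and context-aware checks, but the principle is the same: a human never writes ‘counterfeit consciousness’ by accident.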
Some researchers are taking the implications of generative AI in research journals seriously. One tool in the works can reportedly differentiate between science writing produced by AI and by a human with 99 per cent accuracy. Unlike catch-all AI writing detectors, it works only on science-based writing. However, the tool still has a lot of ground to cover.
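For a sense of how such a domain-specific detector might work, here is a minimal sketch that trains a simple stylometric classifier. The features, the placeholder training data, and the choice of logistic regression are assumptions for illustration, not the actual tool’s method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stylometric_features(paragraph: str) -> list[float]:
    """Extract a few simple writing-style signals from a paragraph."""
    words = paragraph.split()
    sentences = [s for s in paragraph.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return [
        len(words) / max(len(sentences), 1),                       # mean sentence length
        float(np.std(lengths)) if lengths else 0.0,                # sentence-length variation
        paragraph.count(",") / max(len(words), 1),                 # comma density
        sum(w[0].isupper() for w in words) / max(len(words), 1),   # capitalisation rate
    ]

# Hypothetical labelled corpus (placeholders): 1 = human, 0 = AI-generated.
human_paragraphs = [
    "We measured the reaction rate at three temperatures, and the results varied widely.",
    "Curiously, the control group outperformed our model on two of the five benchmarks.",
]
ai_paragraphs = [
    "The results demonstrate significant improvements. The method is effective and robust.",
    "In conclusion, the proposed approach achieves superior performance across all metrics.",
]

X = [stylometric_features(p) for p in human_paragraphs + ai_paragraphs]
y = [1] * len(human_paragraphs) + [0] * len(ai_paragraphs)

clf = LogisticRegression().fit(X, y)
print(clf.predict([stylometric_features("Some new paragraph to score.")]))
```

Restricting the classifier to one genre is what makes the reported accuracy plausible: science prose is stylistically narrow, so even a handful of features separates the two classes better than a general-purpose detector could.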
What will be the endgame for all this? As the academic community navigates this terrain, there is an increasing demand for new mechanisms that can detect AI content with respectable accuracy to preserve the hard work and integrity of scientific research.
In the News: Cybersecurity experts in danger as online threats now turn real