Jakarta, Indonesia Sentinel — A large-scale study has revealed that 13.5 percent of medical journal articles published in 2024, roughly 2 million papers, show signs of artificial intelligence (AI) use, particularly involving large language models (LLMs) such as ChatGPT and Google Gemini.
The study, published in Science Advances by a team of researchers from the United States and Germany, analyzed more than 15 million articles in the PubMed database to detect linguistic shifts linked to the growing adoption of generative AI.
The researchers observed a notable change in writing style: a move away from dense, technical language filled with nouns toward more expressive language built on verbs and adjectives.
“There was a significant shift away from the excess use of ‘content words’ to an excess use of ‘stylistic and flowery’ word choices, such as ‘showcasing,’ ‘pivotal,’ and ‘grappling,’” the researchers stated, as reported by Phys.org.
According to the study, prior to 2024, roughly 79 percent of newly added words in scientific abstracts were nouns. By 2024, that number had plummeted to just 34 percent, with the remainder made up of verbs and adjectives.
The research did not rely on AI metadata or labels to identify machine involvement. Instead, it employed statistical analysis to trace “AI fingerprints” by examining deviations in word patterns. The researchers analyzed anomalies in word usage to detect AI’s broader influence.
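The core idea of tracing such "fingerprints" can be illustrated with a short sketch: compare how often each word appears in a recent batch of abstracts against a pre-AI baseline, and flag words whose usage has surged. The corpora, threshold values, and word examples below are illustrative assumptions, not the study's actual data or parameters.

```python
# Toy illustration of "excess word usage" detection, as described above.
# All data and thresholds here are assumptions for demonstration only.
from collections import Counter

def word_frequencies(abstracts):
    """Relative frequency of each lowercase word across a list of abstracts."""
    counts = Counter()
    total = 0
    for text in abstracts:
        words = text.lower().split()
        counts.update(words)
        total += len(words)
    return {w: c / total for w, c in counts.items()}

def excess_words(baseline_abstracts, current_abstracts,
                 ratio_min=2.0, freq_min=0.08):
    """Words that are both common in the current corpus and heavily
    over-represented relative to the baseline corpus."""
    base = word_frequencies(baseline_abstracts)
    curr = word_frequencies(current_abstracts)
    eps = 1e-6  # smoothing for words never seen in the baseline
    return sorted(w for w, f in curr.items()
                  if f >= freq_min and f / base.get(w, eps) >= ratio_min)

# Hypothetical pre-2024 baseline vs. 2024-style abstracts.
baseline = ["the results indicate a correlation between dose and response",
            "we measured the response of cells to the treatment dose"]
current = ["our pivotal findings are showcasing a delve into the response",
           "showcasing pivotal results the study is grappling with dose effects"]

print(excess_words(baseline, current))  # prints ['pivotal', 'showcasing']
```

A real analysis at PubMed scale would need millions of abstracts, per-year baselines, and statistical controls for topic drift, but the ratio-of-frequencies intuition is the same.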
The study does not fault authors or institutions for using AI tools. Instead, it highlights the lack of effective detection systems and the potential risks to scientific credibility if AI use remains unregulated. The stylistic shift matters because, if left unchecked, it could undermine the quality and trustworthiness of scientific literature.
Going forward, academic journals and institutions face the challenge of updating internal policies to ensure AI use aligns with academic integrity. One proposed solution is to require authors to disclose any use of AI tools in the writing process.
The findings serve as a wake-up call: artificial intelligence is already reshaping the landscape of scientific writing. As a result, education and regulation are becoming increasingly urgent.
(Raidi/Agung)