In the long tradition of science, every discovery carries a trace of the hands that shaped it. From ink-stained notebooks to carefully typed manuscripts, the journey of knowledge has always left behind clues about how it was created.
Today, however, a new collaborator often sits quietly beside researchers — not in the laboratory or lecture hall, but inside a screen. Artificial intelligence has begun to assist with drafting text, refining language, and even shaping the structure of scientific papers. Its presence is subtle, sometimes helpful, and increasingly common.
Yet according to a recent study examining thousands of scientific publications, that presence is rarely acknowledged.
Researchers analyzing more than 75,000 scientific papers published since 2023 found that only 76 articles — roughly 0.1 percent — explicitly disclosed the use of AI writing tools, even though many academic journals now require such disclosure.
The findings suggest what scholars describe as a growing “transparency gap.” In principle, the rules appear clear: a large share of journals have introduced policies asking authors to reveal when generative AI tools help draft or edit manuscripts. In practice, however, those policies may not yet be shaping everyday behavior in academic publishing.
The study also observed that AI-assisted writing has expanded rapidly across disciplines since the release of widely accessible language models. The increase appears particularly noticeable in fields with high volumes of open-access publishing and among researchers writing in a second language, for whom AI tools can offer assistance with grammar and clarity.
None of this necessarily signals misconduct. Many scientists use AI in ways similar to traditional editorial support — for proofreading, paraphrasing, or improving readability. Journals generally do not forbid such use. Instead, they ask for transparency so that readers understand how a manuscript was produced.
Why, then, might disclosure remain so rare?
Some observers point to a lingering stigma around AI-assisted writing. Admitting the use of generative tools can raise questions about authorship, originality, or the intellectual contribution of researchers themselves. As a result, some authors may hesitate to include such acknowledgments, even when journals encourage it.
Others suggest the issue may be more practical than philosophical. Policies differ between publishers, definitions of “AI assistance” remain unclear, and enforcement mechanisms are limited. Without consistent standards or clear consequences, disclosure requirements can become easy to overlook.
The broader scientific community is still learning how to navigate this transition. Generative AI arrived rapidly in academic life, and editorial guidelines have often been written in response to its use rather than in anticipation of it.
In some ways, the situation reflects a familiar pattern in the history of science. New tools — from statistical software to automated sequencing machines — have repeatedly reshaped research practices before norms and policies had time to catch up.
For now, the study’s authors suggest that the challenge may not lie in preventing AI use but in encouraging open acknowledgment of it. Rather than relying on restriction alone, they propose that institutions and publishers explore new frameworks that support transparency while recognizing the evolving role of digital tools in research.
As scientific writing continues to adapt to new technologies, the quiet presence of AI may gradually become less controversial and more openly discussed.
For readers, editors, and researchers alike, the question may not simply be whether AI is used in science — but how clearly its role is described when knowledge is shared with the world.
AI Image Disclaimer: Images in this article are AI-generated illustrations, intended for conceptual purposes only.
Sources: Physics World, Times Higher Education, Nature, Financial Times, ScienceDaily

