Generative Artificial Intelligence (AI)

For Authors

The application of AI-assisted technologies and generative AI in scientific writing

This policy relates solely to the writing process; it does not cover the use of Artificial Intelligence (AI) technologies to analyze and draw insights from data as part of the research process.

Generative AI and AI-assisted technologies should be used by authors only to improve the language and readability of their work. Because AI can produce output that sounds authoritative yet may be inaccurate, biased, or incomplete, authors should carefully review and edit the results before incorporating them. Authors remain ultimately responsible and accountable for the contents of the work.

Authors who use AI and AI-assisted technologies should disclose this in their manuscript, and a corresponding statement will appear in the published work. Disclosing the use of these technologies promotes transparency and trust among authors, readers, reviewers, editors, and contributors, and facilitates compliance with the terms of use of the relevant tools or technologies.

Authors must not list AI or AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship entails duties and responsibilities that can only be attributed to and performed by humans, including the ability to approve the final version of the work and consent to its submission. Each co-author is responsible for ensuring that any questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. Before submission, authors should review the Ethics and Malpractice policy to ensure that their work is original, meets authorship requirements, and does not infringe upon the rights of others.

Using AI-assisted tools and generative AI to create figures, images, and artwork

Generative AI and AI-assisted tools must not be used to create or alter images in submitted manuscripts. Features within an image or figure must not be enhanced, obscured, moved, removed, or introduced. Adjustments to brightness, contrast, or color balance are permitted, provided they do not obscure or eliminate any information present in the original image. Submitted manuscripts may be screened with image-forensics tools or specialist software to detect possible image manipulation.

The sole exception applies when the research design or methodology explicitly involves AI or AI-assisted tools (for example, AI-assisted imaging techniques used to generate or interpret underlying research data, such as in biomedical imaging). In such cases, the methods section must describe the AI usage in a reproducible manner, specifying the name, version and extension numbers, and developer of the model or tool, together with a clear explanation of how the AI or AI-assisted tools were used to generate or modify the images. Authors must adhere to the AI software's usage policies to ensure content is attributed appropriately. For editorial assessment, authors may be asked to provide the pre-AI-adjusted images or the original composite raw images used to produce the final submitted figures, where applicable.

The use of generative AI or AI-assisted tools to create artwork, including graphical abstracts, is not permitted. In limited circumstances, authors may be allowed to use generative AI to produce cover artwork, provided that prior approval has been obtained from the publisher and the journal editor, all necessary rights to use the content are secured, and the content is attributed appropriately.


For Reviewers

Implementing AI-assisted technologies and generative AI in the peer review process

When invited to review a manuscript, reviewers must treat the manuscript as a confidential document. Reviewers must not upload a submitted manuscript, in whole or in part, to a generative AI tool, as this may violate the authors' proprietary rights and confidentiality, and, where applicable, data protection and privacy regulations.

Peer review reports may also contain confidential or sensitive information regarding the manuscript or its authors. Therefore, even if a reviewer intends to use an AI tool solely to improve the language or readability of a review report, uploading such content to an AI system is not permitted.

Peer review is a cornerstone of the scientific ecosystem and relies on human judgment, expertise, and accountability. Reviewers should not use generative AI or AI-assisted technologies to conduct the scientific assessment of a manuscript. These technologies lack the capacity for critical thinking and original evaluation required for peer review and may produce biased, inaccurate, or incomplete assessments.

Authors, by contrast, are permitted to use generative AI and AI-assisted technologies during the writing process before submission, provided that such use is transparently disclosed and limited to improving language and readability in accordance with the author guidelines above.

Innovative AI-powered tools that support editors and reviewers in the editorial process may be welcomed, provided that they fully respect data privacy, confidentiality, and ethical standards applicable to authors, reviewers, and editors.


For Editors

The implementation of AI-assisted technologies and generative AI in the journal editing process

Submitted manuscripts must be treated as confidential documents. Editors should not upload a submitted manuscript, or any portion of it, to a generative AI tool, as this may infringe upon the authors' proprietary and confidentiality rights and, where applicable, violate data protection and privacy regulations.

Editorial correspondence, including decision letters and reviewer notifications, may also contain sensitive information related to the manuscript or its authors. Accordingly, editors should not upload such communications to AI tools, even if the intention is solely to improve language quality or readability.

The editorial oversight of peer review is a responsibility that rests exclusively with human editors. Editors should refrain from using generative AI or AI-assisted technologies to evaluate manuscripts or to support editorial decision-making. Such technologies are unable to perform the critical reasoning and independent judgment required for editorial assessment and may introduce biased, incomplete, or inaccurate outcomes.

The editor remains fully responsible and accountable for the integrity of the editorial process, the final publication decision, and its communication to authors.