
Generative AI policies for journals

These policies were initially prompted by the rise of generative AI* and AI-assisted technologies, which researchers were expected to use increasingly; they have since been updated to reflect evolving good practice. The policies aim to provide greater transparency and guidance to authors, reviewers, editors, readers and contributors. Elsevier will continue to monitor developments in this area and will adjust or refine the policies as appropriate.

For authors

The use of generative AI and AI-assisted technologies in manuscript preparation - an overview

Elsevier recognizes the potential of generative AI and AI-assisted technologies (“AI Tools”), when used responsibly, to help researchers work efficiently, gain critical insights fast and achieve better outcomes. Increasingly, these tools, including AI agents and deep research tools, are helping researchers to synthesize complex literature, provide an overview of a field or research question, identify research gaps, generate ideas and provide tailored support for tasks such as content organization and improving language and readability. Authors preparing a manuscript for an Elsevier journal can use AI Tools to support them. However, these tools must never be used as a substitute for human critical thinking, expertise and evaluation. AI Tools should always be applied with human oversight and control. Ultimately, authors are responsible and accountable for the contents of their work. This includes accountability for:

  • Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output (including checking the sources, as AI-generated references can be incorrect or fabricated).

  • Editing and adapting all material thoroughly to ensure the manuscript represents the author’s authentic and original contribution and reflects their own analysis, interpretation, insights and ideas.

  • Ensuring the use of any tools or sources, AI-based or otherwise, is made clear and transparent to readers — for the use of AI Tools we require a disclosure statement upon submission.

  • Ensuring the manuscript is developed in a way that safeguards data privacy, intellectual property and other rights, by checking the terms and conditions of any AI Tool that is used.

Responsible use of AI Tools

Authors must check the terms and conditions of any AI Tool they use to ensure that the privacy and confidentiality of their data and inputs, including their unpublished manuscripts, are maintained. Particular care should be taken with any personally identifiable data. Authors must not generate images that duplicate or refer to existing copyrighted images, real people, or others' identifiable products or brands, nor any likeness of an individual's voice. Authors should also check AI-generated output for factual errors and for any potential bias.

Authors should also check the terms and conditions of any AI Tool they wish to use to ensure that they grant the AI Tool only the right to use their materials to provide the service to them, and no other rights to the materials they input (including, without limitation, the right to train the AI Tool on those materials). They must also ensure that the AI Tool does not impose constraints on the use of its outputs that could restrict the subsequent publication of the relevant article.

Disclosure

Authors should disclose the use of AI Tools for manuscript preparation in a separate AI declaration statement in their manuscript upon submission, and a statement will appear in the published work. Authors should document their use of AI, including the name of the AI Tool used, the purpose of the use, and the extent of their oversight (a hypothetical example is sketched below). Declaring the use of AI Tools supports transparency and trust between authors, readers, reviewers, editors and contributors, and facilitates compliance with the terms of use of the relevant AI Tool. Basic checks of grammar, spelling and punctuation need no declaration. AI use in the research process itself should be declared and described in detail in the methods section.

Authorship

Authors should not list AI Tools as an author or co-author, nor cite AI Tools as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Each (co-)author is accountable for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved, and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original, that it has not been previously published, that the stated authors qualify for authorship, and that the work does not infringe third-party rights, and should familiarize themselves with Elsevier's Ethics in Publishing policy before they submit.
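As an illustration only, a declaration along the following lines could be included in a LaTeX manuscript. This is a minimal sketch: the section heading, tool name, version and wording are hypothetical placeholders, and the required wording and placement are set by the relevant journal's Guide for Authors.

    % Hypothetical AI disclosure statement for a LaTeX manuscript.
    % Tool name, version and wording are placeholders, not a prescribed
    % template; follow the journal's Guide for Authors.
    \section*{Declaration of generative AI and AI-assisted technologies}
    During the preparation of this work the authors used ToolName v1.2
    to improve the language and readability of the manuscript. After
    using this tool, the authors reviewed and edited the content as
    needed and take full responsibility for the content of the
    published article.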

The use of generative AI and AI-assisted tools in figures, images and artwork

We do not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. Prohibited alterations include enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software may be applied to submitted manuscripts to identify suspected image irregularities.

The only exception is where the use of AI or AI-assisted tools is part of the research design or research methods (such as AI-assisted imaging approaches to generate or interpret the underlying research data, for example in the field of biomedical imaging). In that case, such use must be described in a reproducible manner in the methods section (a hypothetical example follows), including an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, together with the name of the model or tool, version and extension numbers, and manufacturer. Authors should adhere to the AI software's specific usage policies and ensure correct content attribution. Where applicable, authors may be asked to provide pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.
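By way of illustration, a methods-section description meeting these requirements might read as follows; the tool, version, manufacturer and workflow named here are hypothetical.

    Cell boundaries were segmented with SegmentTool v3.1 (ExampleVendor)
    using the default pretrained model. All AI-generated segmentation
    masks were inspected and, where necessary, corrected manually by two
    authors before quantification; pre-AI-adjusted images were retained
    for editorial assessment.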

The use of generative AI or AI-assisted tools in the production of artwork such as for graphical abstracts is not permitted. The use of generative AI in the production of cover art may in some cases be allowed, if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.

View Elsevier’s generative AI author policies for books.

For reviewers

The use of generative AI and AI-assisted technologies in the journal peer review process

When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.

This confidentiality requirement extends to the peer review report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their peer review report into an AI tool, even if it is just for the purpose of improving language and readability.

Peer review is at the heart of the scientific ecosystem, and Elsevier abides by the highest standards of integrity in this process. Reviewing a scientific manuscript implies responsibilities that can only be attributed to humans. Reviewers should not use generative AI or AI-assisted technologies to assist in the scientific review of a paper: the critical thinking and original assessment needed for peer review are outside the scope of this technology, and there is a risk that it will generate incorrect, incomplete or biased conclusions about the manuscript. The reviewer is responsible and accountable for the content of the review report.

Elsevier’s AI author policy states that authors are allowed to use generative AI and AI-assisted technologies in the manuscript preparation process before submission, but only with appropriate oversight and disclosure, as per our instructions in Elsevier’s Guide for Authors. Reviewers can find such disclosure at the bottom of the paper in a separate section before the list of references.

Please note that Elsevier owns identity-protected AI-assisted technologies that conform to the RELX Responsible AI Principles, such as those used during the screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These in-house or licensed technologies respect author confidentiality. Our programs are subject to rigorous evaluation for bias and comply with data privacy and data security requirements.

Elsevier embraces new AI-driven technologies that support reviewers and editors in the editorial process, and we continue to develop and adopt in-house or licensed technologies that respect authors’, reviewers’ and editors’ confidentiality and data privacy rights.

View Elsevier's generative AI reviewer policies for books.

For editors

The use of generative AI and AI-assisted technologies in the journal editorial process

A submitted manuscript must be treated as a confidential document. Editors should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.

This confidentiality requirement extends to all communication about the manuscript including any notification or decision letters as they may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their letters into an AI tool, even if it is just for the purpose of improving language and readability.

Peer review is at the heart of the scientific ecosystem, and Elsevier abides by the highest standards of integrity in this process. Managing the editorial evaluation of a scientific manuscript implies responsibilities that can only be attributed to humans. Editors should not use generative AI or AI-assisted technologies to assist in the evaluation or decision-making process for a manuscript: the critical thinking and original assessment needed for this work are outside the scope of this technology, and there is a risk that it will generate incorrect, incomplete or biased conclusions about the manuscript. The editor is responsible and accountable for the editorial process, the final decision and its communication to the authors.

Elsevier’s AI author policy states that authors are allowed to use generative AI and AI-assisted technologies in the manuscript preparation process before submission, but only with appropriate oversight and disclosure, as per our instructions in Elsevier’s Guide for Authors. Editors can find such disclosure at the bottom of the paper in a separate section before the list of references. If an editor suspects that an author or a reviewer has violated our AI policies, they should inform the publisher.

Please note that Elsevier owns identity-protected AI-assisted technologies that conform to the RELX Responsible AI Principles, such as those used during the screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These in-house or licensed technologies respect author confidentiality. Our programs are subject to rigorous evaluation for bias and comply with data privacy and data security requirements.

Elsevier embraces new AI-driven technologies that support reviewers and editors in the editorial process, and we continue to develop and adopt in-house or licensed technologies that respect authors’, reviewers’ and editors’ confidentiality and data privacy rights.

View Elsevier's generative AI editor policies for books.

*Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data. Examples include ChatGPT, NovelAI, Jasper AI, Rytr AI and DALL-E.

Policy updated September 2025.

Frequently asked questions