The use of generative AI and AI-assisted technologies in writing for Elsevier
Policy for Book and Commissioned Content Authors
This policy aims to provide greater transparency and guidance to authors, readers, reviewers, and editors in relation to generative AI and AI-assisted technologies. Elsevier will monitor developments in this area and will adjust or refine this policy when appropriate. Please note that the policy refers only to the writing process, and not to the use of AI tools to analyze and draw insights from data as part of the research process.
Where authors use AI and AI-assisted technologies in the writing process, these technologies should only be used to improve the readability and language of the work, not to replace key authoring tasks such as producing scientific, pedagogic, or medical insights, drawing scientific conclusions, or providing clinical recommendations. The technology should be applied with human oversight and control, and all work should be reviewed and edited carefully, because AI can generate authoritative-sounding output that may be incorrect, incomplete, or biased. The authors remain ultimately responsible and accountable for the contents of the work.
Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Each (co-)author is accountable for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved; authorship also requires the ability to approve the final version of the work and agree to its submission. Authors are further responsible for ensuring that the work is original, that the stated authors qualify for authorship, and that the work does not infringe third-party rights, and they should familiarize themselves with Elsevier’s Publishing Ethics policy before they submit.
The use of generative AI and AI-assisted tools in figures, images and artwork
Elsevier does not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This may include enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable if they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software might be applied to submitted manuscripts to identify suspected image irregularities.
The only exception is where the use of AI or AI-assisted tools is part of the research design or research methods (such as AI-assisted imaging approaches used to generate or interpret the underlying research data, for example in the field of biomedical imaging). In such cases, the use must be described in a reproducible manner in the methods section, including an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, along with the name, version and extension numbers, and manufacturer of the model or tool. Authors should adhere to the AI software’s specific usage policies and ensure correct content attribution. Where applicable, authors may be asked to provide the pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.
The use of generative AI or AI-assisted tools in the production of artwork such as for book or commissioned content covers or graphical abstracts is not permitted.
Elsevier will monitor developments around generative AI and AI-assisted technologies and will adjust or refine this policy should it be appropriate. More information about our authorship policy can be viewed here: https://www.elsevier.com/about/policies/publishing-ethics
This policy does not relate to tools such as spelling or grammar checkers, nor does it cover reference managers that enable authors to collect, organize, annotate, and use references to scholarly articles, such as Mendeley, EndNote, and Zotero. These tools can be used by authors without disclosure. The policy is specific to generative AI and AI-assisted tools, such as Large Language Models, which can generate output that may be used to create original content for publication.
This policy refers to generative AI and AI-assisted technologies, such as Large Language Models, only when they are used to create original content for publication. It does not prevent the use of AI and AI-assisted tools in formal research design or research methods; we recognize that the use of such technology is common in many fields. Where AI or AI-assisted tools are used in this context, they should be described as part of the methodology of the work, with details provided in the Methods section, where relevant, or in a separate section preceding the references or bibliography.
We ask authors who have used AI or AI-assisted tools to insert a statement at the end of their manuscript, immediately above the references or bibliography, entitled ‘Declaration of AI and AI-assisted technologies in the writing process’. In that statement, we ask authors to specify the tool that was used and the reason for using it. We suggest that authors follow this format when preparing their statement:
During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.