Reviewers' Update - information for reviewers about relevant Elsevier and industry developments, support and training.

September 15, 2025 | 5 min read
By Lipsa Panda
©istockphoto.com/AndreyPopov
As artificial intelligence (AI) advances at a rapid pace, its role in scholarly publishing continues to evolve. This Peer Review Week, we are reflecting on the implications of this technology for peer review in an era characterized by powerful innovation, rising expectations, and new responsibilities.
At Elsevier, we believe that while AI and other technologies can enhance scientific publishing, they cannot replace the expert judgment of trained reviewers and editors. In this article, we share perspectives from our Director of Generative AI, Peer Review Innovation Lead, and two Editors-in-Chief to explore how we are navigating this landscape with responsibility, transparency, and a deep respect for human insight.
The integration of AI into scientific publishing should be rooted in a clear philosophy: AI should serve to enhance workflows, not replace human judgment. According to Alina Helsloot, Director of Generative AI at Elsevier, “AI is a support tool, not a decision-maker.” She highlights this as she discusses how we integrate new technologies to ensure the quality of scientific research with human oversight.
While AI can assist in analyzing and summarizing information, the key dimensions of peer review require human interpretive judgment, lived expertise, and ethical responsibility.
“While our existing policies do not allow reviewers to upload manuscripts into AI tools, we are currently exploring ways in which these tools could support the peer review process in the future. When applied responsibly and with human oversight, AI tools could improve the efficiency of peer review,” Alina notes, “but they cannot replace the expert judgment of trained reviewers and editors.”
The real promise of AI lies in complementing human capabilities while upholding the values and discernment essential to science. “That’s why implementation must include safeguards, ethical oversight, and validation, particularly in the context of content evaluation,” Alina emphasizes.
Alina Helsloot
Peer review requires critical thinking, domain expertise, and ethical discernment — qualities that AI cannot replicate. Dr Bahar Mehmani, our Peer Review Innovation Lead, agrees and explains: “Ensuring trust is our guiding principle, and transparency plays a vital role in building that trust. With these principles in mind, we are exploring how AI can be reliably and responsibly used throughout the evaluation process.”
AI should be seen as an assistant to researchers, improving their literature analysis and reporting. In the context of peer review, it should help editors maintain ethical standards and allow reviewers to deploy their invaluable time where it is most needed.
For those wary of AI’s growing presence in peer review, the focus remains on constructive partnership and the shared responsibility of all stakeholders in scholarly publishing.
“We rely upon the vigilance and feedback of our in-house teams as well as authors, reviewers, and partners,” Dr Mehmani explains. “As we move forward, it’s essential for all parts of the scholarly ecosystem — publishers, funders, institutions, researchers, librarians, and societies — to work together in safeguarding the trust and integrity of scientific research. Maintaining the reliability of the scientific record is a collective effort that benefits from ongoing dialogue and collaboration.”
Dr Bahar Mehmani
Shaping the future of peer review requires active collaboration with the scholarly community. At Elsevier, editors play a pivotal role in guiding the responsible integration of AI into our journal workflows.
Professor Jim Jansen, Editor-in-Chief of Information Processing & Management, shares his perspective: “AI has the potential to be an assistive tool in peer review — supporting summarization, drafting, and literature searches. But a fully AI-driven review is unacceptable. We need guardrails — such as AI detectors integrated into content management systems — similar to similarity checkers.”
Professor Jim Jansen
He emphasizes that responsible AI use should be guided by researchers themselves. Similarly, Dr Francesco Mangano, Editor-in-Chief of the Digital Dentistry Journal, highlights the importance of human expertise, especially in specialized fields like dentistry: “We have a dedicated Editorial Board and reviewers who live and breathe the topic — they don’t need to be ‘instructed’ by AI. My role as an editor is to match the right reviewer with the right paper, so that there will be no need for AI to review the article.”
While acknowledging AI’s usefulness in literature searches, Dr Mangano draws a clear line when it comes to writing reviews: “I’ve told my reviewers to write their reports manually. AI can assist with literature searches, but it cannot replace our experience, judgment, and passion. Peer review is a matter of time, dedication, and respect.”
Dr Francesco Mangano
The future of scientific publishing and peer review will likely include more AI-powered tools. At Elsevier, we are committed to navigating this future responsibly, with the research community at the heart of every decision, always with human oversight, ethical safeguards, and transparency.
As we rethink peer review in the AI era, one principle remains clear: technology can support the process, but human insight must lead it.
Disclaimer: AI was used in refining the language of the article with human oversight.