Researchers have their say on what research integrity means in the age of AI

January 15, 2026 | 3 min read

By Ian Evans


Rather than weakening standards, researchers expect AI to operate within, and even reinforce, the systems that protect trust in the scholarly record. At the same time, they are concerned about whether institutions are ready for this technology, and they are clear about what is needed to build trust in AI research tools.

These are among the findings from Elsevier’s Researcher of the Future report, which indicates that while researchers are increasingly open to using AI, they remain deeply committed to the principles that underpin research integrity.

Established integrity mechanisms will remain as important as ever

Despite rising pressures across the research environment, the report shows that researchers continue to place high value on established integrity mechanisms.

  • 74% say peer-reviewed research is trustworthy

  • 78% rate research methodology as extremely or very important when judging the reliability of others’ work

  • 55% say they have successfully replicated other researchers’ work

Researchers remain strongly focused on rigour, reproducibility and methodological quality — even as workloads increase and new tools are introduced. They also recognise that integrity is an ongoing process, not a one-off checkpoint.

  • 85% agree that corrections and retractions help ensure the integrity of the scholarly record

  • 76% agree that publishers play a critical role in maintaining research integrity

This context matters when considering AI. Researchers are not looking to relax standards as technology evolves; they expect those standards to be upheld.

AI introduces new integrity risks, but researchers are alert to them

While AI offers practical benefits, the report shows that researchers remain cautious about its implications for trust.

  • Only 22% of researchers currently say AI tools are trustworthy

  • Others view AI tools as unreliable, particularly where transparency is lacking

Concerns are most acute in areas closely tied to the scholarly record, such as writing, citation and analysis. Researchers are aware that AI systems can generate fluent text and plausible outputs, but may also introduce errors, fabricate references or obscure the origin of information if safeguards are not in place. This scepticism reflects professional judgement rather than resistance to innovation. Researchers are evaluating AI through the same lens they apply to research more broadly: reliability, accountability and transparency.

Researchers are clear on what trust signals need to be present in AI tools

The Researcher of the Future report shows that researchers have clear, practical expectations for how AI should support integrity. When asked what would increase their confidence in AI tools, researchers point to familiar trust markers:

  • 59% trust AI tools more when references are automatically cited

  • 55% value AI systems trained on the most up-to-date scholarly literature

  • 55% value training on high-quality, peer-reviewed content

  • 49% say regular expert review of AI outputs would increase confidence

These expectations closely mirror how researchers assess research quality today. Integrity, from their perspective, depends on traceability, recency and human oversight, whether AI is involved or not.

Governance gaps are slowing confidence

While expectations are clear, the report highlights gaps in institutional readiness.

  • 45% of researchers say they feel undertrained in using AI

  • Only 32% agree that AI governance at their institution is good

In the absence of clear guidance, researchers often rely on individual judgement to decide how and when AI should be used. This can lead to inconsistent practices and uncertainty about responsibility, particularly in sensitive areas such as authorship, analysis and peer review.

The findings also reinforce that responsibility for integrity is shared. Researchers see institutions, publishers and technology providers as essential partners in setting standards and ensuring accountability.

Looking ahead

The Researcher of the Future report suggests that AI does not need to undermine research integrity — but it does need to earn trust. Researchers are already engaging with AI thoughtfully, guided by long-standing professional values and a strong commitment to quality.

When AI tools are transparent, well governed and aligned with scholarly norms, they have the potential to strengthen confidence in research rather than weaken it. With the right safeguards in place, AI can support researchers in maintaining high standards — even as the research landscape becomes more complex and fast-moving.

Contributor


Ian Evans

Content Director

Elsevier
