
Why AI literacy for researchers is the new essential skillset

January 30, 2026

By Ian Evans

Researchers are increasingly expected to engage with AI tools that influence how research is discovered, analysed and communicated. Findings from Elsevier’s Researcher of the Future report show that while AI adoption is accelerating, confidence and preparedness are not keeping pace.

Why it matters: AI adoption is moving faster than confidence

The Researcher of the Future report shows a sharp increase in AI use across the research community.

  • 58% of researchers say they have used AI tools for work, up from 37% the previous year

However, this growth has not been matched by a similar increase in preparedness.

  • 45% of researchers say they feel undertrained in using AI

  • Only 32% agree that governance of AI at their institution is good

This connects directly to research quality. The Researcher of the Future report shows that researchers remain deeply committed to integrity:

  • 74% say peer-reviewed research is trustworthy

  • 78% rate research methodology as extremely or very important

  • 55% say they have successfully replicated others’ research

AI literacy helps ensure these standards are upheld as AI becomes more embedded in writing, analysis and review.

Together, these findings suggest that many researchers are already using AI while still navigating uncertainty about how tools work, where their limitations lie, and what constitutes appropriate use.

Visit our AI literacy hub

How researchers define AI literacy

The report indicates that researchers define AI literacy in ways that closely align with established research norms.

When asked what would increase their confidence in AI tools, researchers point to familiar trust markers:

  • 59% trust AI tools more when references are automatically cited

  • 55% value AI systems trained on the most up-to-date scholarly literature

  • 55% emphasise training on high-quality, peer-reviewed content

  • 49% say regular expert review of AI outputs would increase confidence

In other words, AI literacy is less about mastering a new technology than about critically evaluating AI outputs, much as researchers already assess other forms of research evidence.

Discover LeapSpace, Elsevier’s research-grade AI-assisted workspace

Building AI literacy

While the Researcher of the Future report highlights the gap between AI use and confidence, UNESCO’s Guidance for generative AI in education and research offers practical direction on how that gap can be addressed — with a strong focus on human agency and capability, rather than technical mastery.

For researchers, AI literacy means strengthening everyday judgement:

  • Treating AI outputs as provisional, recognising that generative systems can produce confident but incorrect results

  • Verifying AI-generated text, citations and data against trusted, peer-reviewed sources

  • Being alert to bias and the dominance of majority perspectives in training data

  • Retaining responsibility for interpretation and conclusions, particularly in high-stakes research contexts

For institutions, literacy is enabled through support rather than control:

  • Providing structured training and continuous coaching on responsible AI use

  • Validating tools for their appropriateness, limitations and risks before encouraging adoption

  • Embedding AI literacy into existing research integrity and methodology training

  • Encouraging researchers to co-design and critically test AI uses within their disciplines

UNESCO emphasises that the long-term value of AI in research depends on protecting human agency — ensuring that AI augments, rather than replaces, critical thinking and scholarly judgement.

Explore Elsevier’s toolkit for embracing AI Literacy

Looking ahead

The Researcher of the Future report suggests that researchers are already approaching AI thoughtfully, guided by a strong commitment to quality and integrity. AI literacy is becoming essential not because researchers lack expertise, but because the tools they are using are evolving rapidly.

When supported by training, transparency and shared expectations, AI literacy for researchers becomes a foundation for responsible innovation — and a prerequisite for trust in an AI-enabled research future.

Contributor


Ian Evans

Content Director

Elsevier
