
Elsevier Connect

Why AI literacy for researchers is the new essential skillset

January 30, 2026

Ian Evans

Researchers are increasingly expected to engage with AI tools that influence how research is discovered, analysed and communicated. Findings from Elsevier’s Researcher of the Future report show that while AI adoption is accelerating, confidence and preparedness are not keeping pace.

Why it matters: AI adoption is moving faster than confidence

The Researcher of the Future report shows a sharp increase in AI use across the research community.

  • 58% of researchers say they have used AI tools for work, up from 37% the previous year

However, this growth has not been matched by a similar increase in preparedness.

  • 45% of researchers say they feel undertrained in using AI

  • Only 32% agree that governance of AI at their institution is good

This connects directly to research quality. The Researcher of the Future report shows that researchers remain deeply committed to integrity:

  • 74% say peer-reviewed research is trustworthy

  • 78% rate research methodology as extremely or very important

  • 55% say they have successfully replicated others’ research

AI literacy helps ensure these standards are upheld as AI becomes more embedded in writing, analysis and review.

Together, these findings suggest that many researchers are already using AI while still navigating uncertainty about how tools work, where their limitations lie, and what constitutes appropriate use.

Visit our AI literacy hub

How researchers define AI literacy

The report indicates that researchers define AI literacy in ways that closely align with established research norms.

When asked what would increase their confidence in AI tools, researchers point to familiar trust markers:

  • 59% trust AI tools more when references are automatically cited

  • 55% value AI systems trained on the most up-to-date scholarly literature

  • 55% emphasise training on high-quality, peer-reviewed content

  • 49% say regular expert review of AI outputs would increase confidence

AI literacy is less about learning new technology and more about critical evaluation of AI outputs, in much the same way researchers already assess research evidence.

Discover LeapSpace, Elsevier’s research-grade AI-assisted workspace

Building AI literacy

While the Researcher of the Future report highlights the gap between AI use and confidence, UNESCO’s Guidance for generative AI in education and research offers practical direction on how that gap can be addressed — with a strong focus on human agency and capability, rather than technical mastery.

For researchers, AI literacy means strengthening everyday judgement:

  • Treating AI outputs as provisional, recognising that generative systems can produce confident but incorrect results

  • Verifying AI-generated text, citations and data against trusted, peer-reviewed sources

  • Being alert to bias and the dominance of majority perspectives in training data

  • Retaining responsibility for interpretation and conclusions, particularly in high-stakes research contexts

For institutions, literacy is enabled through support rather than control:

  • Providing structured training and continuous coaching on responsible AI use

  • Validating tools for their appropriateness, limitations and risks before encouraging adoption

  • Embedding AI literacy into existing research integrity and methodology training

  • Encouraging researchers to co-design and critically test AI uses within their disciplines

UNESCO emphasises that the long-term value of AI in research depends on protecting human agency — ensuring that AI augments, rather than replaces, critical thinking and scholarly judgement.

Explore Elsevier’s toolkit for embracing AI Literacy

Looking ahead

The Researcher of the Future report suggests that researchers are already approaching AI thoughtfully, guided by a strong commitment to quality and integrity. AI literacy is becoming essential not because researchers lack expertise, but because the tools they are using are evolving rapidly.

When supported by training, transparency and shared expectations, AI literacy for researchers becomes a foundation for responsible innovation — and a prerequisite for trust in an AI-enabled research future.

Contributor


Ian Evans

Content Director

Elsevier

Read more about Ian Evans