
Five ways libraries can support researchers in the AI era

April 29, 2026


Key takeaways from Elsevier’s new “Researcher of the Future” report point to a vision for developing next-generation library services

From the rapid growth of generative AI to increased oversight and tighter funding policies, researchers are experiencing rapid changes in their work environment. Among these changes, Elsevier’s latest Confidence in Research report, “Researcher of the Future,” reflects the rapid integration of AI in research.

The report draws on the experiences of over 3,000 active researchers across academia, research institutions and R&D-led corporations in 113 countries. It also highlights three other key influences on research: increased time and publishing pressures; more emphasis on collaboration and mobility; and a growing movement towards demonstrating real-world impact. Above all, upholding research integrity remains a core value.

In two regional webinars in December, librarians joined other members of the wider research community to discuss the new report and ways that AI and research can evolve together to meet the challenges researchers currently face.


Read the full report: Researcher of the Future

The many observations about AI and research integrity in those discussions will resonate with librarians, and they suggest opportunities for libraries to grow strategically in this era. In this article, we share five ways that library services and advocacy can enable the researcher of the future.

1. Provide expertise in using research-integrated AI

While 58% of researchers reported using AI in their work (up from 37% in February 2024), only 22% of those who use it say they believe AI tools are trustworthy.

In particular, the integration of AI in research has raised concerns regarding transparency, bias and knowledge gaps, and prioritizing human capacities when using AI. Researchers need support in navigating these concerns to implement AI responsibly in their work and become more effective and efficient at using it.

By understanding both the limitations and possibilities of various AI tools for different research tasks, librarians are well-suited to partner with researchers in helping them meet their objectives. Below are some insights from the speakers.


Summary of findings from “Researcher of the Future”

How to prioritize transparency

Many AI tools can produce convincing responses to queries, but the sources may be unreliable and the process invisible. According to Professor Krasimira Tsaneva, Vice-President and Deputy Vice-Chancellor (DVC) for Research and Impact at University of Exeter, understanding how to think critically about AI-generated results is a key skill for researchers.

“It’s thinking about the models and how they work. I don’t trust blindly, I always benchmark or test across several tools, and think about the source of the answers and how to manage the uncertainty that creeps in.”

Knowing the source of data used by an AI tool is part of managing that uncertainty. While general-purpose AI models often draw on broad web-scale data, AI tools designed specifically to integrate with higher-quality data may be more likely to be trusted by researchers.

Krasimira also pointed to reproducibility as a core tenet of research rigor. If an AI tool references reliable, transparent, peer-reviewed data, it becomes much easier for a researcher to validate their results. Libraries can help researchers understand these distinctions and choose tools that fit their needs.

Why mitigating bias is essential

Webinar speakers discussed what they described as AI models’ “backwards-looking” basis – the fact that their training data is drawn from the imperfect record of the past – and how models may therefore lack the capacity to imagine the future.

Mara Franke, Director of Research and Training at PIH/IMB Rwanda, observes “in global health communities, there’s a lot of discussion about whether [AI is] going to be this great wall-breaker or going to cement systems of oppression that we have seen. I think it could go either way, depending on how we as the scholarly community use it.”

Similarly, Nereyda Ortiz Osejo, a PhD candidate at Texas A&M University, notes “I'm a sociologist and I have seen that ChatGPT would show me papers related more to economics than sociology. If we just rely on what AI can give us, sometimes it will erase part of the research that you can be in contact with.”

Learn how librarians are contributing to efforts that address bias in AI training data.

How to elevate human capacities

To use AI for research with integrity, the researcher’s field knowledge and processes remain critical. Krista Walton, Vice Chancellor for Research and Innovation at North Carolina State University, emphasized: “AI is not able to determine which questions matter. Only humans can weigh consequences and contextualize harm and understand cultural and societal impacts too.”

Librarians have long understood how leveraging tools for research can impact the results. AI tools are “just tools, not the research itself,” explains Nereyda. This knowledge positions librarians well for guiding researchers in their use to maximize the researcher’s expertise and skill.

Integrating AI specialists in the library team can support these efforts. Karim Boughida, Dean of University Libraries, Stony Brook University notes, “We hired many AI people. The role of the library is to offer AI services, and we get requests from everywhere, including from the office of research. They see the libraries because we serve everyone.”

2. Offer comprehensive training in both general-purpose and research-specific AI skills

The need for foundational training on how to use AI responsibly was underscored by the speakers’ views throughout the webinars. Many academic libraries are already taking the lead in providing education on how to use AI in teaching, study and research contexts at their institutions. But the availability of training is not a given. According to the report, only 45% of researchers feel that they have access to adequate training in how to use AI in their work.


Additionally, the "Researcher of the Future" report shares that pressure to publish has increased for 68% of researchers in the past three years. Time constraints add to this, with only 45% feeling they have enough protected time for their research alongside their administrative or teaching responsibilities.

The challenge of meeting these commitments reinforces the role libraries already serve in helping researchers learn efficient research skills such as search strategy, topic refinement, manuscript writing and preparation, and supporting them through the publication process.

Beyond research-specific AI tools, foundational training can also cover using general-purpose AI tools to enhance productivity alongside core research skills – something that researchers at all levels may appreciate if it helps lighten their current load.

3. Provide vetted AI tools built for the high standards of research

As not all AI tools are the same, it’s important for researchers to understand the differences and have access to tools developed for them. Xu Jie, Deputy Librarian of Wuhan University Library, emphasized the library’s role in “curating trusted AI tools and data. We collected transparent tools and data on the university website and told our researchers, students and teachers that these are the trustworthy resources.”

Vetting and advocating for research-grade AI tools (such as LeapSpace) built on trusted research collections also supports training and helps align policy efforts with responsible use.

4. Contribute to AI policy leadership

In “Researcher of the Future,” only 32% of respondents said they believe there is good governance of AI at their institution. And government policies, when they exist, can often lead to confusion.

Karim explains, “a lot of policymakers don’t understand AI at all. Some universities are now offering AI literacy for policymakers for both state and federal government, which is very interesting. That's a niche that we should pursue - so they understand the impact.”

Jie believes libraries can help by “bridging the policy and practice gap.” An example is helping navigate when multiple guidelines create conflicts.

“In China, for example, separate ministries provide guidelines for teachers and for researchers. But university lecturers are often researchers as well. They need to know all these rules, and it’s impossible for them to learn all this. So we need to work together to make all these rules and guidelines as simple as possible.”

Another way libraries can support clearer AI governance is by lending their expertise to both institutional and government-level policy discussions – sitting on committees or visiting government representatives and offering insight and knowledge.

5. Provide resources to support collaboration across disciplines

The movement towards collaboration across disciplines, institutions and geographies has grown. Interdisciplinary collaboration has risen the most, with 68% of researchers reporting increased collaboration with those from other disciplines.

This points to a complex ecosystem that relies more than ever on being able to quickly evaluate publications, institutions and resources that are outside a familiar domain. Jie says, “Scientific research is a universal thing; it needs collaboration. This AI era gives people more tools and power to collaborate internationally.”

Libraries can help researchers use research platforms to identify potential research partners. Research databases that support demographic reporting can parse geography, field, and institutional data to reveal potential directions to explore. And research databases with built-in AI tools can use these features to suggest avenues with interdisciplinary potential.

While the environment for researchers continues to evolve, these insights reinforce that libraries have an abundance of experience in finding ways to support researchers – and in doing so, can advance knowledge for the greater benefit of all.

To hear more from researchers and librarians discussing AI, research integrity, and findings in the “Researcher of the Future” report, visit these links to view the recordings.

Watch the Americas webinar

Watch the Europe, Africa, Asia Pacific webinar