AI in Higher Education - A Question of Trust

Last updated on: July 28, 2025

Elsevier’s AI tools break new ground, but represent continuity with the traditions of research


Standing still is not an option

Artificial Intelligence (AI) is transforming research and higher education. Following the wave of excitement triggered by the release of OpenAI’s ChatGPT in late 2022, AI has become a hot topic due to its potential to increase speed, efficiency, and personalization. Various products and services have emerged, creating a complex landscape, while leaders, librarians and faculty members have worked hard to master the new technology and understand its potential impacts on their programs, budgets, and jobs.

Over time, there has been a growing acceptance that AI is here to stay, with institutions working to realize its extraordinary promise, sometimes through localized pilots and sometimes as part of more ambitious university-wide transformation programs. While many questions remain unanswered, standing still is no longer an option.

Part of the solution or part of the problem?

Of course, no one in higher education can afford simply to be swept downstream. As the discussion around AI becomes more informed, questions about its potential downsides proliferate. What is the best way of countering the risks of inaccuracy, bias, or hallucination? Will AI tools compromise data security? What are the ethical implications of using AI? Lately, there have also been more deep-seated concerns about the impact of AI use on human critical faculties, particularly those of students.

Despite leading the adoption of AI tools, students are increasingly seeking guidance as they consider a changing job market. Unfortunately, many of the faculty members and librarians who are well placed to provide this guidance have themselves been playing catch up, juggling other responsibilities, and managing limited resources. Meanwhile, academic leaders – many of them navigating increasingly turbulent conditions – have often been disappointed by the pace of AI adoption, with just 34% believing their institutions had made good progress, according to a recent Elsevier report.

Given this heady combination of promise, risk, and urgency, it is not always clear whether AI is part of the solution or part of the problem.

Back to the future

So much has changed since AI went mainstream that orientation can be challenging, particularly in the absence of community-backed standards that indicate “what good looks like.” While this situation feels very contemporary, it's worth noting that Elsevier has been utilizing AI in its tools for more than 15 years. Many researchers and librarians may already be familiar with some of our AI-informed features, such as Scopus’ Author Profiles or the popular ScienceDirect Topics pages. Alive to the potential of the new technology, Elsevier’s product teams took the time to develop these innovations in close collaboration with the research community, focusing on the delivery of clearly defined benefits. We follow the same proven working practices today.

Feature: Author and Organization Profiles
Solution: Scopus
Use case: Powerful algorithmic data processing matches published articles to the 19 million+ Author Profiles and 94,000+ Organization Profiles in Scopus.
Benefit: Author and Organization Profiles help tell a more accurate and complete story about the output and influence of researchers and institutions than other solutions do.

Feature: ScienceDirect Topics
Solution: ScienceDirect
Use case: Machine reading technology extracts information from reference materials, enabling researchers to seamlessly access introductions to new subjects while they read.
Benefit: Helps researchers quickly build their understanding of a scientific topic, supporting learning, aiding the interpretation of scientific concepts, and enhancing their knowledge base across multiple disciplines.

Feature: Elsevier Journal Finder
Solution: Elsevier Journal Finder
Use case: Smart search technology and field-of-research-specific vocabularies help authors identify the Elsevier journal whose scope best fits their unpublished manuscript, based on keywords or an abstract.
Benefit: Helps authors save time by quickly locating the most suitable Elsevier journal for an unpublished research article.

Feature: United Nations Sustainable Development Goals (SDG) mapping
Solution: Scopus, SciVal, Digital Commons Data
Use case: Developed in collaboration with external experts, our sophisticated SDG queries map research related to the UN SDGs, aiming to highlight the impact of scholarly work on global sustainability efforts.
Benefit: Makes it easier for researchers and their institutions to highlight the impact of their scholarly work on global sustainability efforts.

Feature: SciVal Topics
Solution: SciVal
Use case: GenAI creates consistent titles and summaries across 94,000 Topics and 1,500 Topic Clusters; the pre-generated summaries do not vary by year or user.
Benefit: Enables users to gain a consistent view of each Topic or Topic Cluster and quickly understand the research fields and questions being addressed.

GenAI – raising the stakes

While the advent of mature Generative AI (GenAI) was a key event for the whole research community, Elsevier’s use of this technology has been a continuation of existing product development practices. This extensive practical experience is why many of our GenAI solutions – which surface evidence-based insights from our research and clinical platforms – were developed so quickly.

While “traditional” AI focuses on performing specific tasks intelligently, GenAI models create something new (for example, a response to a user query) based on underlying patterns in a dataset (for example, Scopus research abstracts). This technology raises the stakes, increasing both the power of the tools that harness it and the number of potential problems this entails. What follows is an account of how Elsevier systematically works to maximize these benefits while minimizing the associated risks.
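To make that distinction concrete, here is a rough, simplified sketch of how a generative response might be grounded in a set of curated abstracts, in the spirit of the approach described in the following sections. It is illustrative only: the `Abstract`, `AbstractIndex`, and `llm_complete` names are hypothetical stand-ins rather than Elsevier APIs, and a real system would use vector search rather than naive keyword overlap.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Illustrative only:
# `Abstract`, `AbstractIndex`, and `llm_complete` are hypothetical stand-ins,
# not Elsevier APIs.

from dataclasses import dataclass


@dataclass
class Abstract:
    title: str
    doi: str
    text: str


class AbstractIndex:
    """A toy in-memory index of curated, peer-reviewed abstracts."""

    def __init__(self, abstracts: list[Abstract]):
        self.abstracts = abstracts

    def search(self, query: str, k: int = 3) -> list[Abstract]:
        # Naive keyword overlap; a production system would use vector search.
        terms = set(query.lower().split())
        scored = sorted(
            self.abstracts,
            key=lambda a: len(terms & set(a.text.lower().split())),
            reverse=True,
        )
        return scored[:k]


def answer(query: str, index: AbstractIndex, llm_complete) -> str:
    """Ground the model's response in retrieved abstracts and ask it to cite them."""
    sources = index.search(query)
    context = "\n\n".join(f"[{s.doi}] {s.title}: {s.text}" for s in sources)
    prompt = (
        "Answer the question using ONLY the abstracts below, and cite the DOI "
        "of every abstract you rely on.\n\n"
        f"Abstracts:\n{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)
```

The essential point of the pattern is that the model is asked to answer only from the retrieved, vetted material and to cite it – the grounding idea the next sections expand on.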

Quality in, quality out – and vice versa

The famous computer science axiom “garbage in, garbage out” is especially pertinent to AI tools. Even the most technologically advanced solution can produce unreliable or incorrect responses if it is built on unvetted, unverified content, and those responses may still appear superficially plausible. For this reason, we build our GenAI tools to work with the highest-quality scientific content and data, utilizing three broad layers of assurance.

Peer review

For Elsevier-published content, such as the many journals covered by ScienceDirect AI, the foundation is still the robust peer review process that underpins the quality and validity of our articles; peer review remains a prerequisite for credible scientific communication. Similarly, non-Elsevier full-text articles covered by our AI tools are peer reviewed by their respective publishers, ensuring consistently high standards.

Research Integrity

Besides our independent network of 36,000 expert editors and 1.7 million expert reviewers, we have a growing team of in-house specialists in research integrity and publishing ethics. This human expertise is supplemented by a range of in-house AI-assisted technologies, including tools that help identify suitable reviewers or screen submitted manuscripts for completeness and plagiarism. The rigor of our editorial processes is one reason why Elsevier has the highest journal quality among major publishers, with 29% of global citations.

Data curation by independent experts

GenAI tools such as Scopus AI cover abstracts from across the whole corpus of peer-reviewed scientific literature. In this case, our dataset is drawn directly from the Scopus abstract and citation database. Scopus is source-neutral and curated by the independent subject matter experts of the Content Selection and Advisory Board (CSAB), which continuously re-evaluates indexed titles to ensure quality is maintained. In the age of misinformation, quality eclipses quantity, although breadth of coverage still matters: human selection and maintenance of content counts for more than simply adding extra records to a dataset, particularly if the source of those records is unknown or unreliable. Comprehensiveness is vital, but curation is king.

Responsible AI in practice

The term “Responsible AI” is now widely used, but in the absence of clear community standards for AI implementation, what does this actually mean? Responsible to whom? And for what? Elsevier’s AI technology is based on a set of public Responsible AI Principles, briefly summarized below:

We consider the real-world impact of our solutions on people

All our AI tools are informed by continuous input from researchers, librarians, academic leaders and clinicians. This virtual dialog guides everything from interface design decisions to new feature launches. As part of our broader societal commitment, we conduct an Algorithmic Impact Assessment based on the tool developed by the Canadian Government and endorsed by the Ada Lovelace Institute. We are currently developing approaches to support AI user accessibility and limit the environmental impact of our AI solutions.

We take action to prevent the creation or reinforcement of unfair bias

We minimize bias and hallucinations – the inaccuracies that AI models can generate – by grounding the responses of our tools in the trusted data sources described above and by using strict prompt engineering. Moreover, we actively stress-test our solutions, using both internal query sets and external ones such as Quora’s Insincere Questions Classification dataset, deliberately trying to provoke responses that create or reinforce unfair bias. Our AI solutions also integrate community feedback channels, allowing users to report issues, which our development team reviews manually.
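As a rough illustration of this kind of adversarial testing (and emphatically not Elsevier’s internal framework), the sketch below runs a set of provocative queries through a generative tool and records any responses judged to create or reinforce unfair bias. The file format and the `generate_response` and `judge` callables are assumptions made for the example.

```python
# Sketch of an adversarial bias-evaluation harness. Illustrative only: this is
# not Elsevier's internal framework, and the callables passed in are hypothetical.

import csv
from dataclasses import dataclass


@dataclass
class BiasVerdict:
    is_biased: bool
    reason: str = ""


def load_adversarial_queries(path: str) -> list[str]:
    """Load a provocative query set, e.g. an export of Quora's Insincere
    Questions data (the 'question_text' column name is an assumption)."""
    with open(path, newline="", encoding="utf-8") as f:
        return [row["question_text"] for row in csv.DictReader(f)]


def evaluate(queries, generate_response, judge):
    """Run each query through the tool; `judge` (a human reviewer's decision
    or a classifier) returns a BiasVerdict for the response."""
    findings = []
    for query in queries:
        response = generate_response(query)
        verdict = judge(query, response)
        if verdict.is_biased:
            findings.append(
                {"query": query, "response": response, "reason": verdict.reason}
            )
    return findings
```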

We can explain how our solutions work

Once again, transparency begins at the data level: we provide clarity about the content behind our AI solutions and the processes governing its selection and curation. As responses are generated, the Copilot feature shows exactly how they are being created (e.g., via vector search or keyword search), while the Reflection Layer in our tools indicates the level of confidence in each response. Finally, any claims or assumptions made by our tools are always backed up with the reference or (in the case of ScienceDirect AI) the full-text source used to make that statement.
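The sketch below shows, in deliberately simplified form, what a transparent, source-backed response can look like: the answer travels together with the retrieval method used, a confidence label, and the references that support it. The field names are assumptions for illustration, not the actual schema behind Scopus AI or ScienceDirect AI.

```python
# Sketch of a transparent, source-backed response object. Illustrative only:
# the field names are assumptions, not the actual Scopus AI / ScienceDirect AI schema.

from dataclasses import dataclass, field


@dataclass
class SourceReference:
    doi: str
    title: str
    snippet: str  # the passage (or full-text excerpt) supporting the claim


@dataclass
class GroundedAnswer:
    query: str
    retrieval_method: str  # e.g. "vector search" or "keyword search"
    confidence: str        # a reflection-layer-style confidence label
    answer: str
    references: list[SourceReference] = field(default_factory=list)

    def render(self) -> str:
        cites = "\n".join(f"  [{r.doi}] {r.title}" for r in self.references)
        return (
            f"{self.answer}\n\n"
            f"Retrieved via: {self.retrieval_method} (confidence: {self.confidence})\n"
            f"Sources:\n{cites}"
        )
```

Keeping this information alongside the answer is what allows an interface to surface it to the user rather than presenting a bare, unexplained response.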

We create accountability through human oversight

As mentioned above, we use human-led Algorithmic Impact Assessments to identify risks, weaknesses and strengths in our AI systems and to inform subsequent improvements. We apply both a quality evaluation framework and a harmful-bias evaluation framework to determine which types of queries perform well and which require adjustment. We then solicit continuous user feedback to identify new feature needs, adjust for poorly performing queries, and improve the overall performance of our solutions.
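By way of illustration, the sketch below shows one simple way evaluation results could be aggregated by query type to reveal which categories perform well and which need adjustment. The categories and the pass/fail scoring are assumptions for the example; they are not Elsevier’s evaluation framework.

```python
# Sketch of aggregating evaluation results by query type to see which categories
# perform well and which need adjustment. Illustrative only; the categories and
# pass/fail scoring are assumptions, not Elsevier's evaluation framework.

from collections import defaultdict


def pass_rates_by_category(results):
    """`results` is an iterable of (category, passed) pairs, where `passed`
    reflects a quality or harmful-bias check on one query's response."""
    totals = defaultdict(lambda: [0, 0])  # category -> [passed, total]
    for category, passed in results:
        totals[category][1] += 1
        if passed:
            totals[category][0] += 1
    return {cat: passed / total for cat, (passed, total) in totals.items()}


# Hypothetical example: comparative questions perform well, while speculative
# ones may need prompt or retrieval adjustments.
rates = pass_rates_by_category([
    ("comparative", True), ("comparative", True),
    ("speculative", False), ("speculative", True),
])
print(rates)  # {'comparative': 1.0, 'speculative': 0.5}
```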

We respect privacy and champion robust data governance

All our AI features and solutions align with Elsevier’s Privacy Principles, as well as relevant legislation such as the European Union’s General Data Protection Regulation (GDPR). We design our products to avoid unnecessary data retention. We avoid storing personal user information or chat history on our systems unless it is done in a compliant manner that improves a product’s performance – for example, to support personalization features. Our use of third-party large language models (LLMs) is private: there is no data exchange, and our data is not used to train them.

“As part of our design process, we consider what elements of the system need explaining and to whom, and how to go about explaining them. Ultimately, the aim is to ensure that different users can understand and trust the output.”


Harry Muncey, PhD

Senior Director of Data Science and Responsible AI at Elsevier

Promoting AI Literacy, Supporting Critical Engagement

“AI Literacy” is another frequently used term, but it can mean different things to different people. For example, librarians may be keen to foster research readiness among students. Students, however, may be focused on learning the AI soft skills (non-technical attributes such as communication, adaptability and critical thinking) that will one day make them more employable in the AI-enabled workplace. Similarly, university leaders may be thinking about boosting research outputs, while their Chief Technology Officers are anxious to reduce cybersecurity risks. In most cases, however, AI literacy spans three key areas:

  • Knowing how to use AI tools

  • Knowing how to recognize and respond to problems like hallucinations or misinformation

  • Engaging critically and ethically with the technology

All of these points are important, although critical engagement with AI is arguably the least widely explored. Effectively, this means using AI tools to support the thinking process rather than as a substitute for it. This human-centered critical stance is at the heart of Elsevier’s approach to AI – both in our advocacy of AI literacy and in the design of our tools, which are configured not to deliver end-stopped “answers” but to support insight generation and creative questioning strategies. As the biologist E.O. Wilson put it, "The right answer to a trivial question is also trivial, but the right question, even when insoluble in exact form, is a guide to major discovery" (Consilience, 1998).

The answer is the question

AI is transforming research and higher education. It certainly won’t solve all the problems facing the sector – if poorly managed, it has the potential to make some of them much worse – but it can also become a part of the solution. Elsevier’s AI tools break new ground, but stress continuity with the traditions of research. Our strategy is based on a tightly knit combination of trusted data and responsible technology, with human-led quality checks at every level. We seek innovation, but not for its own sake, following the guidance of our users and the broader research community to define clear, practical goals. Finally, our AI portfolio embodies an ever-curious mindset where a negative response can be more telling than a positive one, or finding the right question can be more important than finding the correct answer. It is this curiosity that has shaped our AI program over the last 15 years – and that we carry with us to meet the challenges and opportunities in our shared future.