Scopus AI minimizes hallucinations by drawing only on high-quality, curated content and by providing a reference for every claim or assertion it makes. It matches responses to the intent of your query and suggests alternative queries if no relevant academic papers are found.
Scopus AI was among the first products to pioneer what is rapidly becoming the gold standard for LLM use: the retrieval-augmented generation (RAG) fusion model, which improves the quality of both vector-search retrieval and LLM summary generation. Scopus AI responses are also regularly tested against two evaluation frameworks to reduce hallucination risks, and we are constantly working to minimize them further.
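To make the "RAG fusion" idea concrete, the sketch below shows the reciprocal rank fusion step such pipelines commonly use: several reformulations of the user's query are each run through vector search, and the resulting rankings are merged so that documents retrieved consistently across variants rise to the top. This is a generic illustration of the technique, not Scopus AI's actual implementation; the document IDs and query variants are invented.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Combine several ranked result lists into one ranking.

    Each list is ordered best-first; a document that appears near the
    top of many lists accumulates a higher fused (RRF) score.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical vector-search results for three reformulations
# of the same user query (document IDs are illustrative only).
results_per_query = [
    ["doc_42", "doc_17", "doc_08"],   # original query
    ["doc_17", "doc_42", "doc_99"],   # paraphrased query
    ["doc_08", "doc_42", "doc_55"],   # narrower query
]

fused = reciprocal_rank_fusion(results_per_query)
print(fused)  # documents retrieved consistently across variants rank first
```

The top fused documents would then be passed to the LLM, which summarizes them and cites each one as a reference.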
We take bias very seriously. If your query is biased, that bias may be reflected in the AI's response; even if your question is neutral, there may be bias in the Scopus documents the AI draws on. To mitigate this, we test Scopus AI against two rigorous evaluation frameworks, one of which specifically requires the AI to answer questions linked to areas of potential bias so we can identify and reduce inappropriate responses.
For instance, our prompt engineering helps the AI filter out ‘unsafe’ answers that exacerbate prejudice, harm or stereotypes against people of different genders, races, ages, or geographic, socioeconomic, cultural or religious backgrounds. If the AI detects bias in a document it uses, it will acknowledge this and give a reference for the source. Users can also report harmful responses, which we actively review.
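As a rough illustration of what prompt-level guardrails of this kind can look like, the sketch below prepends safety instructions to the grounding documents before the model is asked to answer. The wording, function, and structure here are assumptions made for the example, not Scopus AI's actual prompts.

```python
# Illustrative only: one common way to inject safety guidance via
# prompt engineering in a RAG pipeline (assumed, not Scopus AI's prompts).

SAFETY_INSTRUCTIONS = (
    "Answer strictly from the supplied documents and cite each one. "
    "Do not produce content that promotes prejudice, harm, or stereotypes "
    "based on gender, race, age, or geographic, socioeconomic, cultural, "
    "or religious background. If a supplied document itself appears "
    "biased, say so explicitly and still provide its reference."
)

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble a grounded prompt with the safety preamble prepended."""
    context = "\n\n".join(
        f"[{i + 1}] {doc}" for i, doc in enumerate(documents)
    )
    return (
        f"{SAFETY_INSTRUCTIONS}\n\n"
        f"Documents:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt(
    "What does the literature say about ageing and productivity?",
    ["Example abstract A ...", "Example abstract B ..."],
))
```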