Ensuring responsible AI use and data privacy in Elsevier's AI tools

For more than a decade, Elsevier has been using AI and machine learning technologies responsibly in our products. Even as the technology has evolved and grown more widely used, we have prioritized data privacy and security every step of the way. Elsevier AI solutions are deployed and maintained using the same security architecture, reviews, audits, and validation that apply to all Elsevier solutions. Generative AI capabilities have been engineered to meet our demanding, standards-compliant security requirements, with a key focus on protecting customer activity. Our security team is actively engaged in all aspects of the engineering and deployment of Elsevier AI capabilities.

Through our enterprise-level agreements with AWS, Microsoft Azure, OpenAI, and Anthropic, we have zero-retention contracts in place, ensuring your prompts and documents are never used to train any large language models (LLMs). Elsevier also does not train any LLMs hosted within our private, closed cloud environments with customer data. By using Elsevier AI solutions, your organization benefits from our enhanced data privacy and enterprise-grade safeguards.

Frequently asked questions

How does Elsevier ensure responsible AI use and protect user data privacy in its AI solutions?

  • Elsevier has a long history as a trusted source for curated, peer-reviewed scientific content with domain-specific knowledge. Elsevier's Five Responsible AI Principles help to drive responsible, ethical and appropriate use.

  • For each use case and solution, we select the most appropriate large language model from a carefully chosen range of leading providers—including OpenAI, Anthropic and others—hosted securely on cloud services from Microsoft Azure or AWS. We tailor model selection to meet the specific needs of the task, ensuring both performance and safety.

  • At Elsevier, we recognize that the proper handling of personal data is very important to our customers and the communities we serve. As such, we are committed to behaving with integrity and responsibility regarding data privacy. All user inputs and data are treated in line with our Privacy Policy and Responsible AI principles.

  • We treat personal data in line with applicable privacy laws, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). We also take further steps to ensure we meet the privacy expectations of our users and the scientific community. You can read more about our privacy principles here: https://www.elsevier.com/about/policies-and-standards/privacy-principles

  • Our use of third-party LLMs is private, meaning there is no data exchange and no use of our data to train their public models. Elsevier does not examine user search prompts at the individual or organizational level. We review only aggregated, anonymized patterns, such as prompts yielding no results or low satisfaction, to help improve overall system performance and relevance.

What is the high-level architecture flow for Elsevier’s AI solutions?

  • A user's prompt or document is sent securely, using TLS 1.2 or higher, to the trusted Elsevier environment. The prompt is parsed for intent, broken down into sub-queries, and run through an embeddings model to retrieve information from our content store (a minimal sketch of this flow appears after this list).

  • The prompt, along with the retrieved content, is then sent using TLS 1.2 or higher to our foundation model providers within the trusted Elsevier environment.

  • A grounded, generated response is then presented to the user in the Elsevier AI solution.

  • User prompts and the responses in their conversation history are secured in databases encrypted with AES-256.

  • Our architecture and associated contracts preclude third-party model providers from logging or training models based on users’ prompts.
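
To make this flow concrete, here is a minimal, illustrative Python sketch of the retrieval-augmented pattern described above. The function names (embed, retrieve, generate, answer) are hypothetical stand-ins, not Elsevier APIs, and their bodies are dummy placeholders.

```python
# Illustrative sketch of the retrieval-augmented flow described above.
# All names below are hypothetical stand-ins, not Elsevier APIs.

def embed(text: str) -> list[float]:
    """Stand-in for the embeddings model; returns a dummy vector."""
    return [float(ord(c) % 7) for c in text[:8]]

def retrieve(query_vector: list[float]) -> list[str]:
    """Stand-in for retrieval from the curated content store."""
    return ["Relevant passage from peer-reviewed content."]

def generate(prompt: str, context: list[str]) -> str:
    """Stand-in for the foundation-model call (made over TLS 1.2+)."""
    return f"Grounded answer drawing on {len(context)} retrieved passage(s)."

def answer(user_prompt: str) -> str:
    # 1. Parse and embed the prompt to query the content store.
    vector = embed(user_prompt)
    # 2. Retrieve grounding content within the trusted environment.
    context = retrieve(vector)
    # 3. Send the prompt plus retrieved content to the foundation model.
    return generate(user_prompt, context)

print(answer("What is known about CRISPR off-target effects?"))
```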

How is data encrypted and routed when using Elsevier’s AI tools?

  • Elsevier strictly controls what content is shared with, retained by, or used for training by vendors. Neither Microsoft (Azure) nor AWS (Bedrock) retains Elsevier's content or customer prompts for training or storage.

  • User prompts remain private; only aggregated, anonymized insights are used by Elsevier to improve the service.

  • Data security and encryption: Proxies direct data to the appropriate Azure-hosted OpenAI model, which could be located anywhere in the world. Data is encrypted in transit using TLS 1.2 or higher and nothing is stored at rest.

  • Elsevier has zero-retention contracts in place with our foundation model providers. This ensures that your prompts and documents are never stored or used to train any large language models (LLMs). By using Elsevier’s AI solutions, your organization benefits from our enhanced data privacy and enterprise-grade safeguards.
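
As an illustration of the in-transit requirement above, the following Python sketch shows how a client can refuse connections below TLS 1.2 using the standard library's ssl module. The endpoint URL is used purely for demonstration; this is not Elsevier client code.

```python
import ssl
import urllib.request

# Build a client-side TLS context that refuses anything below TLS 1.2,
# mirroring the in-transit requirement described above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any HTTPS endpoint works for the demonstration.
with urllib.request.urlopen("https://www.elsevier.com", context=context) as resp:
    print(resp.status, resp.getheader("Content-Type"))
```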

How does Elsevier manage its cloud infrastructure?

  • All Elsevier AI services, including our product environments, are hosted in leading cloud data centres provided by Amazon Web Services (AWS) or Microsoft Azure. Services may be hosted in Europe or the US based on application and regulatory requirements.

  • We protect your data wherever it goes. At rest, it’s locked down with Advanced Encryption Standard (AES)-256 encryption. When your data is in transit, we use TLS 1.2 or higher, which not only encrypts data but also authenticates the server and verifies data integrity.

  • We use industry best practices such as web application firewalls, application and infrastructure vulnerability scanning, secure code reviews, bug bounties, and other preventive, detective, and response controls to protect our systems and your data from attackers.

  • Our architecture and associated contracts preclude third-party model providers from logging or training models based on users’ conversations.

  • All cross-border transfers of personal data are subject to appropriate safeguards compliant with the GDPR, including the EU Standard Contractual Clauses. Customer personal data is not transferred to China.
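
For illustration, the sketch below shows AES-256 encryption at rest of a conversation record using AES-GCM from the third-party cryptography package (pip install cryptography). It is a generic example of the standard named above, not Elsevier's storage implementation; in production, the key would be held in a key management service rather than in application code.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 256-bit key (in production this would come from a key
# management service, never be generated inline like this).
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# Encrypt a record before it is written to storage.
nonce = os.urandom(12)  # standard GCM nonce size
record = b"user prompt and response pair"
ciphertext = aesgcm.encrypt(nonce, record, None)

# Decrypt when the record is read back; GCM also verifies integrity.
assert aesgcm.decrypt(nonce, ciphertext, None) == record
```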

What steps has Elsevier taken to lower its environmental impact and support sustainable AI development?

  • Elsevier continually works to advance sustainability through our products and through our research advocacy and social responsibility programs. We have taken several specific steps to reduce the environmental impact of our AI tools. For example:

  • We use a multi-model approach, which keeps environmental impact low by allowing smaller models to handle less intensive tasks. Instead of always relying on large models, which consume more energy, smaller models are applied to tasks that don't require as much computational power, reducing overall energy consumption (a minimal routing sketch appears at the end of this page).

  • We utilize Microsoft Azure and AWS, whose data centres are powered by green electricity.

  • Our robust data governance program minimizes unnecessary data storage and processing, indirectly contributing to energy efficiency.

  • Elsevier, part of RELX, prioritizes environmental responsibility by reducing our carbon impact and advancing sustainable practices. With a strong alignment to the UN Sustainable Development Goals—particularly in Climate Action and Responsible Consumption—we are committed to shaping a sustainable future. Learn more about our environmental efforts across RELX here.

  • Elsevier backs up our responsible AI use policy through a comprehensive approach that integrates our Responsible AI Principles into the development lifecycle of our solutions. Learn more about Elsevier's Responsible AI Principles.
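
As referenced earlier on this page, here is a minimal, hypothetical sketch of multi-model routing: a cheap heuristic sends less intensive tasks to a smaller, lower-energy model. The model names and the complexity heuristic are invented for illustration and do not reflect Elsevier's production logic.

```python
# Illustrative multi-model router. Model names and the complexity
# heuristic are hypothetical, not Elsevier's production logic.

SMALL_MODEL = "small-efficient-model"  # hypothetical name
LARGE_MODEL = "large-capable-model"    # hypothetical name

def estimate_complexity(prompt: str) -> float:
    """Toy heuristic: longer, question-dense prompts score higher."""
    return len(prompt) / 500 + prompt.count("?") * 0.2

def route(prompt: str) -> str:
    # Less intensive tasks go to the smaller model, cutting energy use.
    return SMALL_MODEL if estimate_complexity(prompt) < 0.5 else LARGE_MODEL

print(route("Define 'peer review'."))                    # -> small-efficient-model
print(route("Compare methods X and Y in depth... " * 40))  # -> large-capable-model
```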