Clarivate AI for Academia

Pushing the boundaries of research and learning with AI you can trust.

Talk to us

Artificial intelligence (AI) is transforming research, teaching and learning. Clarivate makes sure you can safely and responsibly navigate this new landscape, driving research excellence and student learning outcomes.

Trusted Academic AI

Clarivate AI-based solutions provide users with intelligence grounded in trustworthy sources and embedded in academic workflows, thus reducing the risks of misinformation, bias, and IP abuse.

  • A wealth of expertly curated content
  • Deep understanding of academic processes
  • Rigorous testing and validation of results
  • Close partnership with the academic community
  • Strong governance, driven by academic principles

AI newsletter
Think forward with Clarivate AI

A monthly newsletter that will keep you informed on our latest AI news and product developments

Subscribe now

Web of Science™ Research Assistant

Discover a new, conversational way to understand topics, gain insight, locate must-read papers, and connect the dots between articles in the world’s most trusted citation index.

  • Natural language search of documents, in multiple languages.
  • Guided workflows and contextual data visualizations.
  • Responses to scholarly questions and commentaries on relevant articles.
  • Links to Web of Science articles and result sets for further exploration.

Watch the demo Learn more

ProQuest Research Assistant

The ProQuest Research Assistant harnesses AI's capabilities and applies them responsibly and reliably as a research companion for students. Powerful features allow users to:

  • Easily craft more effective and targeted searches
  • More effectively review, analyze, and interrogate documents
  • Quickly evaluate the usefulness of each document for your research
  • Receive guidance on next steps including choosing a research topic and understanding key concepts

Watch the video Learn more

Alethea Academic Coach

Nurture students' learning skills and critical thinking with Alethea. This AI-based coach guides students to the core of their course readings, helping them distill key takeaways and prepare for effective class discussions.

  • Chat-based interactions, questions and prompts
  • Proven learning principles combined with GenAI
  • Insights to support students at risk of falling behind
  • Built into your academic environment

Watch the demo Learn more

Primo Research Assistant

Transform your library discovery experience with an ideal starting point for users seeking to find and explore learning and research materials. Answers are grounded in the Ex Libris Central Discovery Index, one of the world’s most extensive scholarly indexes.

  • Search intuitively, using natural language to find what you need
  • Enrich the research experience with narrated answers, references, and links to full text sources
  • Discover fresh perspectives and ideas to gain new insights
  • Maximize the use of your library’s electronic collection

Foundation for Innovation:

Clarivate Academic AI Platform

The Academic AI platform serves as a technology backbone, enabling accelerated and consistent deployment of AI capabilities across our portfolio of solutions.

  • Employing a Retrieval Augmented Generation (RAG) architecture alongside document insights and metadata capabilities to ground answers in scholarly content
  • Using rigorous testing methodologies to ensure accuracy and integrity of answers
  • Centralizing management of Large Language Models (LLMs) for enhanced performance and relevance, all within a private and secure environment that protects user data
  • Facilitating a common, intuitive user experience across solutions, helping users easily navigate products
  • Enabling translation of AI queries into multiple languages, promoting global accessibility and inclusivity
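The retrieval-grounding step described above can be sketched in a few lines. This is a minimal, self-contained illustration of the general RAG pattern, not Clarivate's implementation: the corpus, the overlap-based scoring, and all function names are hypothetical stand-ins.

```python
# Minimal RAG sketch: retrieve relevant passages from a curated corpus,
# then build a prompt that grounds the LLM's answer in those passages only.
# All names and the scoring method are illustrative, not Clarivate's pipeline.
from collections import Counter

CORPUS = {  # stand-in for a curated scholarly index
    "doc1": "CRISPR gene editing enables precise genome modification.",
    "doc2": "Transformer models dominate natural language processing.",
    "doc3": "Gene therapy trials show promise for inherited disorders.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive term overlap with the query."""
    q_terms = Counter(query.lower().split())
    scores = {
        doc_id: sum(q_terms[t] for t in text.lower().split() if t in q_terms)
        for doc_id, text in CORPUS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(query: str, doc_ids: list[str]) -> str:
    """Constrain the generation step to the retrieved passages."""
    context = "\n".join(f"[{d}] {CORPUS[d]}" for d in doc_ids)
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

hits = retrieve("gene editing and gene therapy")
prompt = build_prompt("gene editing and gene therapy", hits)
```

A production system would replace the term-overlap scorer with dense vector retrieval over a large index, but the grounding contract is the same: the model narrates, the curated sources supply the facts.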

Learn more

In partnership with the community

The Clarivate Academia AI Advisory Council is being formed to ensure that generative AI is developed in collaboration with the academic community. The council will help foster the responsible design and application of GenAI in academic settings, including best practices, recommendations and guardrails.

Contact us

Committed to responsible application of AI

At Clarivate, we’ve been using AI and machine learning for years, guided by our AI principles.

  • Deliver trusted content and data, based on authoritative academic sources
  • Ensure proper attribution and easy access to cited works
  • Collaborate with publishers, ensuring clear usage rights
  • Do not use licensed content to train public LLMs
  • Adhere to evolving global regulations

Frequently Asked Questions

How do you use large language models, and how do you ensure the accuracy of their output?

We do not train public LLMs. We use commercially pre-trained Large Language Models as part of our information retrieval and augmentation framework. Currently, this includes the use of a Retrieval Augmented Generation (RAG) architecture among other advanced techniques. While we use pre-trained LLMs to support the creation of narrative content, the facts in this content are generated from our trusted academic sources. We test this setup rigorously to ensure academic integrity and alignment with the academic ecosystem. Testing includes validation of responses by academic subject matter experts who evaluate the outputs for accuracy and relevance. Additionally, we conduct extensive user testing that involves real-world research and learning scenarios to further refine accuracy and performance.

Is publisher content or user data shared with LLM providers?

We are committed to the highest standards of user privacy and security. We do not share or pass any publisher content, library-owned materials, or user data to large language models (LLMs) for any purpose.

Where do the AI-generated answers come from?

While the LLM is a key tool for providing a fluent narrative, answers to user queries are based on our extensive collection of curated scholarly content. This means that our AI-generated responses draw on trusted academic sources, such as Web of Science, ProQuest One, the Ex Libris Central Discovery Index, and local library collections, rather than the broad (and potentially inaccurate or biased) internet content that common chatbots might use.

How do you mitigate AI-induced inaccuracies?

We strongly believe that we have a critical responsibility to the academic community to mitigate AI-induced inaccuracies. We continuously test our solutions and the results they produce, including through dedicated beta programs and close collaboration with customers and subject matter experts. Our data science expertise helps ensure system accuracy, fairness, robustness and interpretability. By pairing this with our trustworthy, curated content, we significantly reduce the risk of ‘hallucinations’ and misinformation.

How can users verify AI-generated responses?

Ensuring clarity and trust in our solutions is one of our top priorities. Our conversational discovery and AI-powered tools present a list of the academic resources on which their responses are based, so that you can always explore relevant materials for further context.

How is user data protected?

Data privacy and trust are top priorities when designing our AI tools. We comply with data privacy regulations and adhere to evolving global AI legislation.

Only the content that users themselves input into a query is transmitted to the LLM. No additional data is shared during this process, ensuring the protection of sensitive information. Furthermore, we do not use any LLM API endpoints directly but access LLMs through a private setup. This ensures that data entered by users in a query stays protected and cannot be seen or accessed by any other party. This approach to data protection aligns with our practices in academic search, where we apply our knowledge of securely managing user information. For more information on our privacy and data protection program, visit: https://clarivate.com/privacy-center/
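The data-minimization pattern described above can be illustrated with a short sketch. Clarivate's actual setup is not public, so the request shape, the stubbed private endpoint, and all names here are hypothetical; the point is only that the query text alone crosses the boundary to the model.

```python
# Illustrative sketch of the data-minimization pattern described above.
# All names are hypothetical; the private endpoint is a stub.
from dataclasses import dataclass

@dataclass
class UserRequest:
    user_id: str          # stays inside the application boundary
    session_token: str    # stays inside the application boundary
    query_text: str       # the only field forwarded to the LLM

def call_private_llm(prompt: str) -> str:
    """Stub for a privately deployed model endpoint (no public API call)."""
    return f"narrated answer for: {prompt}"

def answer(request: UserRequest) -> str:
    # Forward only the user's query text; identifiers and session
    # metadata are never included in the payload sent to the model.
    return call_private_llm(request.query_text)

resp = answer(UserRequest("u-123", "tok-abc", "What is CRISPR?"))
```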

How are sources ranked and prioritized?

The ranking and prioritization of sources by our AI-based discovery tools vary according to the specific characteristics of the user’s query, the user persona, and the context of each query.

The approach is similar to the way ranking is traditionally done in our discovery solutions. This enables us to present the most relevant and valuable sources first, ensuring that the information provided matches the user’s needs as closely as possible.

How do your tools handle sensitive or biased queries?

Our tools provide the means for academic discovery, exploration, and research. But sometimes users may enter queries that are inherently biased, prejudiced, or seek sensitive information.

While our tools will not normally block user queries, the foundational large language models we use for narration capabilities are trained to recognize sensitive or offensive queries and handle them appropriately.

This means that such questions will typically be answered with a balanced perspective, drawing on our curated academic content. Please note that there is no guarantee that all sensitive or offensive content will be identified, and errors might occur. If you do encounter such content, please use the feedback mechanism to report it.

Speak to our team

Learn how Clarivate can help you advance innovation.

 

Contact us
