Clarivate
Academic AI
Pushing the boundaries of research and learning with AI you can trust
Artificial intelligence is accelerating change across academia. Clarivate helps institutions adapt with confidence to drive research excellence, student outcomes and library productivity.
Agents: The next chapter in Academic AI
Clarivate delivers AI agents that accomplish complex, multi-step tasks — saving time, accelerating academic workflows and keeping humans in control.
Purpose-built agents
Task-specific agents help researchers, students and librarians achieve more, with greater efficiency and precision, while upholding academic integrity.
Agent builder
A flexible development environment requiring little to no coding expertise, enabling institutions to create, customize and deploy their own AI tools.
Community tools
Facilitating collaboration through shared templates, workflows and tools — allowing institutions to re-use, repurpose and build on each other’s AI solutions.
AI literacy: A micro-course to build your AI confidence
Clarivate and ACRL-Choice have teamed up to develop a newsletter-based course for academic librarians. Learn core AI literacy concepts in a flexible, self-paced format. This newsletter series delivers eight weeks of bite-sized content grounded in the ACRL AI literacy framework.
AI you can trust
Our solutions promote responsible and effective use of AI in academic and research environments:
Grounded in a wealth of expertly curated, authoritative sources
Embedded in research, learning and library workflows
Backed by rigorous quality testing and evaluation frameworks
Developed in close partnership with the academic community
Governed by academic principles
Our academic AI solutions
TDM Studio
Driving real impact, at scale
Clarivate AI solutions are integrated into core academic workflows — saving time, boosting productivity and supporting better research and learning outcomes at scale.
Built on the Clarivate Academic AI Platform
The Academic AI platform is the technology backbone that enables accelerated, consistent deployment of AI capabilities across our portfolio of solutions.
- Employing Retrieval Augmented Generation (RAG) architecture to ground answers in scholarly content
- Centralized management of Large Language Models (LLMs) for enhanced performance and relevance
- Private and secure environment to protect users’ and publishers’ data
- Unified, intuitive user experience across solutions
- Multilingual support to promote global accessibility and inclusivity
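The Retrieval Augmented Generation pattern mentioned above can be sketched in a few lines: retrieve passages from a curated corpus, then constrain the generation step to those passages. This is a minimal illustration of the general technique, not Clarivate's implementation; the corpus, the term-overlap scoring, and the prompt wording are all illustrative assumptions.

```python
# Minimal RAG sketch: retrieve from a curated corpus, then build a
# prompt that grounds the LLM in the retrieved passages only.

def retrieve(query, corpus, k=2):
    """Rank documents by naive term overlap with the query (toy scorer)."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_prompt(query, passages):
    """Instruct the model to answer only from the retrieved sources."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below; cite them as [n].\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

corpus = [
    "Citation indexes track references between scholarly articles.",
    "Peer review evaluates manuscripts before publication.",
]
passages = retrieve("how do citation indexes work", corpus)
prompt = build_prompt("How do citation indexes work?", passages)
# `prompt` would then be sent to a privately hosted LLM for narration.
```

In a production system the toy term-overlap scorer would be replaced by semantic or hybrid retrieval over an index of licensed scholarly content, but the grounding structure of the prompt is the same.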


In collaboration with the community
Clarivate AI-powered solutions are developed in close partnership with customers and the academic community through beta programs and dedicated forums.
Established in 2024, the Clarivate Academia AI Advisory Council includes senior leaders from libraries and higher education. The council provides best practices, recommendations, and guardrails to help address AI opportunities and challenges.
AI resources for you


Evaluating the quality of generative AI output: Methods, metrics and best practices
By Christine Stohn and Marta Enciso, Clarivate. As generative AI (GenAI) tools become more embedded in academic workflows—from research discovery to learning support—questions about output quality are moving to the…


Evaluating the quality of generative AI output: Methods, metrics and best practices
Generative AI is becoming an increasingly accepted practice in academic research and learning. As its use expands, ensuring the quality and reliability of AI-generated content is a critical priority for the scholarly community. Unlike traditional systems, Large Language Models (LLMs) produce variable outputs that challenge conventional quality assessment methods. How can system providers and institutions...


Demystifying AI (and what it means for libraries)
The discussion starts with the need for “Academic AI” and the key principles driving this specialized approach, informed by inputs from the library community. In addition, the panelist provides a straightforward introduction to AI, explaining the role of large language models (hint: they are not used for providing facts and information), the process of converting...


Research Smarter with the Web of Science Research Assistant
AI as a technology presents significant opportunities and challenges for academic research. It must be packaged in a way that solves real-world problems for researchers while ensuring trustworthiness and reliability. With this goal in mind, we partnered with the research community to develop the Web of Science™ Research Assistant, a responsible, generative AI-powered tool...
Committed to responsible application of AI
At Clarivate, we’ve been using AI and machine learning for years, guided by our AI principles.
We are committed to:
- Delivering trusted content and data, based on authoritative academic sources
- Ensuring proper attribution and easy access to cited works
- Collaborating with publishers, ensuring clear usage rights
- Never using licensed content to train public LLMs
- Aligning with evolving global regulations

Frequently Asked Questions
We do not train public LLMs. We use commercially pre-trained Large Language Models as part of our information retrieval and augmentation framework. Currently, this includes the use of a Retrieval Augmented Generation (RAG) architecture among other advanced techniques. While we use pre-trained LLMs to support the creation of narrative content, the facts in this content are generated from our trusted academic sources. We test this setup rigorously to ensure academic integrity and alignment with the academic ecosystem. Testing includes validation of responses by academic subject matter experts who evaluate the outputs for accuracy and relevance. Additionally, we conduct extensive user testing that involves real-world research and learning scenarios to further refine accuracy and performance.
We are committed to the highest standards of user privacy and security. We do not share or pass any publisher content, library-owned materials, or user data to large language models (LLMs) for any purpose.
While the LLM is a key tool to provide a fluent narrative, answers to user queries are based on our extensive collection of curated scholarly content. This means that our AI-generated responses draw from trusted academic sources, such as Web of Science, ProQuest One, Ex Libris Central Discovery Index, Ebook Central, as well as local library collections, rather than broad (and potentially inaccurate or biased) internet content that common chatbots might use.
Depending on the product and your content subscription options, our AI tools will use various content types, such as scholarly journals, books/book chapters, conference proceedings, reports, reviews, case studies, magazines and news content to generate the responses. The coverage of some of this content spans from the 1800s to today.
We strongly believe that we have a critical responsibility to the academic community to mitigate AI-induced inaccuracies, and we take many steps toward achieving this goal:
- The prompts used in our products are crafted by expert prompt engineers who ensure that the settings of the LLM are optimized to maximize faithfulness to the source content and minimize hallucinations. Our system is designed to provide references to the source text when delivering an answer. Additionally, our systems handle negative rejections, ensuring that they do not fabricate an answer when one cannot be provided due to insufficient information. Instead, the system presents an explanation to the user for the lack of response.
- The information presented to users always originates from our trustworthy, curated content. We use a combination of RAG/RAG Fusion models to ensure that the information users see is based on the vetted content your library can access via Clarivate solutions. Our tools offer full transparency regarding the content that was used to generate the response and ensure proper attribution to the specific sources used.
- We continuously test our solutions and the results they produce, including through dedicated beta programs and close collaboration with customers and subject matter experts. Our data science expertise helps increase system accuracy, fairness, robustness and interpretability in a programmatic way.
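The negative-rejection behavior described above can be sketched as a simple guard in front of the generation step: if retrieval finds no sufficiently supported source, the system explains itself instead of letting the model improvise. The relevance threshold and the data shapes below are illustrative assumptions, not details of Clarivate's system.

```python
# Illustrative "negative rejection" guard for a RAG pipeline: refuse
# to generate when no retrieved source clears a relevance threshold.

RELEVANCE_THRESHOLD = 0.5  # assumed cutoff, tuned per system in practice

def answer_or_reject(query, retrieved):
    """retrieved: list of (score, passage) pairs from the retriever."""
    supported = [(s, p) for s, p in retrieved if s >= RELEVANCE_THRESHOLD]
    if not supported:
        # Negative rejection: explain the lack of an answer rather
        # than fabricating one.
        return {"answer": None,
                "reason": "No sufficiently relevant sources were found "
                          "for this question in the available collection."}
    citations = [p for _, p in supported]
    return {"answer": f"(LLM narrative grounded in {len(citations)} sources)",
            "sources": citations}

rejected = answer_or_reject("obscure topic", [(0.1, "weak match")])
answered = answer_or_reject("citation analysis", [(0.9, "strong source")])
```

The key design point is that the refusal is decided by the retrieval layer, where relevance is measurable, rather than left to the LLM's own judgment.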
Ensuring clarity and trust in our solutions is one of our top priorities. Our conversational discovery and AI-powered tools present a list of academic resources on which their responses are based, so that you can always explore relevant materials for further context.
Data privacy and trust are top priorities when designing our AI tools. We comply with data privacy regulations and adhere to evolving global AI legislation.
As part of the user query, only the content that users themselves enter into the query is transmitted to the LLM. No additional data is shared during this process, ensuring the protection of sensitive information. Furthermore, we do not use public LLM API endpoints directly; instead, we access LLMs through a private setup. This ensures that data entered by users in the query stays protected and cannot be seen or accessed by any other party. This approach to data protection aligns with our practices in academic search, where we apply our knowledge in securely managing user information. For more information on our privacy and data protection program, visit: https://clarivate.com/privacy-center/
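The data-minimization rule described above, that only the query text reaches the model, can be illustrated as an explicit allow-list applied to the session record before the LLM call. The field names here are hypothetical, not Clarivate's actual schema.

```python
# Illustrative data minimization: of the full session record, only the
# query text and retrieved context are forwarded to the LLM call.

def build_llm_payload(session):
    """Allow-list only the fields generation needs; drop everything else."""
    return {
        "query": session["query"],
        "context_passages": session.get("context_passages", []),
    }

session = {
    "query": "What is bibliometrics?",
    "user_id": "u-12345",        # identifying metadata, never forwarded
    "institution": "Example U",  # identifying metadata, never forwarded
    "context_passages": [
        "Bibliometrics is the statistical analysis of publications."
    ],
}
payload = build_llm_payload(session)
# payload contains the query and context, but no user_id or institution
```

An allow-list (copy only named fields) is safer than a deny-list (delete known-sensitive fields), because new metadata added to the session later is excluded by default.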
The ranking and prioritization of sources by our AI-based discovery tools will vary according to the specific characteristics of the user’s query, the user persona, and the context of each query.
Our approach to ranking and prioritization is similar to the way it is traditionally done in our discovery solutions. This context-aware approach enables us to present the most relevant and valuable sources first, ensuring that the information provided matches the user's needs as closely as possible.
Our tools provide the means for academic discovery, exploration, and research. But sometimes users may enter queries that are inherently biased, prejudiced, or seek sensitive information.
While our tools will not normally block user queries, the foundational large language models we use for narration capabilities are trained to recognize sensitive or offensive queries and handle them appropriately.
This means that such questions will typically be answered with a balanced perspective, drawing on our curated academic content. Please note that there is no guarantee that all sensitive or offensive content will be identified, and errors might occur. If you do encounter such content, please use the feedback mechanism to report it.
Most of our tools support a variety of languages, allowing users from different regions to interact with our products in their own language. While coverage may vary by tool and language, we are proud to support a wide range of languages across all world regions.
The conversation regarding carbon emissions caused by AI is part of a wider discussion around systems, data storage and sustainability which cannot be solved by any one organization. This is an important industry-wide challenge, which we are working with our vendors, customers and community to understand and address.
By continuously focusing on actions and outcomes at Clarivate, we are making a positive impact on our business, our people and our planet. We are mindful of the need to reduce waste in systems and data storage and are committed to reaching net zero carbon emissions before 2040. Our products and services are designed, developed and deployed following environmental and sustainable best practices, including optimizing to reduce waste and pollution such as CO2 emissions. We are building a comprehensive climate transition plan that includes setting Science Based Targets (SBTs).
We are working in close partnership with all our cloud systems and data storage providers (Amazon, Google and Microsoft). Each of these companies has its own sustainability commitments and ambitions to reach net zero by 2030 or 2040, most of which aim to reduce emissions at the source by using green energy rather than offsetting.
Our Environmental Management Statement outlines our framework to adjust existing working practices towards our Net Zero before 2040 target, which includes our usage of AI services. For more information, please see our Sustainability Report.
In academia, our Academic AI platform serves as a technology backbone, enabling a centralized and consistent deployment of AI capabilities across our portfolio of solutions and promoting efficient performance. We choose AI models that balance performance, cost, and sustainability. For each task, we always prioritize high-quality output with the least resource-intensive technology.
Our centralized platform approach also helps eliminate system redundancies, reducing both resource consumption and emissions. Additionally, we use caching and text compression mechanisms to reduce the workload on the LLM, making LLM calls more efficient.
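The caching mechanism mentioned above can be sketched as a thin wrapper that keys responses by a hash of the prompt, so an identical request is served from memory instead of triggering a new model call. The class and the stand-in model function below are illustrative, not Clarivate's implementation.

```python
# Illustrative response cache for LLM calls: identical prompts are
# served from the cache, reducing cost, latency and energy use.
import hashlib

class CachedLLM:
    def __init__(self, llm_fn):
        self._llm_fn = llm_fn  # the real (expensive) model call
        self._cache = {}
        self.calls = 0         # counts actual model invocations

    def generate(self, prompt):
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        if key not in self._cache:
            self.calls += 1
            self._cache[key] = self._llm_fn(prompt)
        return self._cache[key]

# Stand-in for a real model call.
llm = CachedLLM(lambda p: f"summary of: {p[:40]}")
first = llm.generate("Summarize recent work on open access.")
second = llm.generate("Summarize recent work on open access.")  # cache hit
# llm.calls == 1: the second request never reached the model
```

Real deployments typically add an expiry policy and may normalize or semantically cluster prompts before hashing, so that trivially different phrasings can share a cached response.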
We continue to engage with stakeholders across academic communities to explore and implement best practices for reducing the environmental impact of AI.
Speak to our team
Learn how Clarivate can help you advance innovation.