Clarivate
Academic AI
Pushing the boundaries of research and learning with AI you can trust
Artificial intelligence is accelerating change across academia. Clarivate helps institutions adapt with confidence to drive research excellence, student outcomes and library productivity.
Agents: The next chapter in Academic AI
Clarivate delivers AI agents that accomplish complex, multi-step tasks — saving time, accelerating academic workflows and keeping humans in control.
Purpose-built agents
Task-specific agents help researchers, students and librarians achieve more, with greater efficiency and precision, while upholding academic integrity.
Agent builder
A flexible development environment requiring little to no coding expertise, enabling institutions to create, customize and deploy their own AI tools.
Community tools
Facilitating collaboration through shared templates, workflows and tools — allowing institutions to re-use, repurpose and build on each other’s AI solutions.
AI literacy: A micro-course to build your AI confidence
Clarivate and ACRL-Choice have teamed up to develop a newsletter-based course for academic librarians. Learn core AI literacy concepts in a flexible, self-paced format. This newsletter series delivers eight weeks of bite-sized content grounded in the ACRL AI literacy framework.
AI you can trust
Our solutions promote responsible and effective use of AI in academic and research environments:
Grounded in a wealth of expertly curated, authoritative sources
Embedded in research, learning and library workflows
Backed by rigorous quality testing and evaluation frameworks
Developed in close partnership with the academic community
Governed by academic principles
Our academic AI solutions
TDM Studio
Driving real impact, at scale
Clarivate AI solutions are integrated into core academic workflows — saving time, boosting productivity and supporting better research and learning outcomes at scale.
Built on the Clarivate Academic AI Platform
The Academic AI Platform is the technology backbone that enables accelerated, consistent deployment of AI capabilities across our portfolio of solutions.
- Employing Retrieval Augmented Generation (RAG) architecture to ground answers in scholarly content
- Centralized management of Large Language Models (LLMs) for enhanced performance and relevance
- Private and secure environment to protect users’ and publishers’ data
- Unified, intuitive user experience across solutions
- Multilingual support to promote global accessibility and inclusivity
In collaboration with the community
Clarivate AI-powered solutions are developed in close partnership with customers and the academic community through beta programs and dedicated forums.
Established in 2024, the Clarivate Academia AI Advisory Council includes senior leaders from libraries and higher education. The council provides best practices, recommendations, and guardrails to help address AI opportunities and challenges.
AI resources for you
Evaluating the quality of generative AI output: Methods, metrics and best practices
By Christine Stohn and Marta Enciso, Clarivate
Generative AI is becoming an increasingly accepted practice in academic research and learning. As its use expands, ensuring the quality and reliability of AI-generated content is a critical priority for the scholarly community. Unlike traditional systems, Large Language Models (LLMs) produce variable outputs that challenge conventional quality assessment methods. How can system providers and institutions...
Demystifying AI (and what it means for libraries)
The discussion starts with the need for “Academic AI” and the key principles driving this specialized approach, informed by inputs from the library community. In addition, the panelist provides a straightforward introduction to AI, explaining the role of large language models (hint: they are not used for providing facts and information), the process of converting...
Research Smarter with the Web of Science Research Assistant
AI as a technology presents significant opportunities and challenges for academic research. It must be packaged in a way that solves real world problems for researchers while ensuring trustworthiness and reliability. With this goal in mind, we partnered with the research community to develop the Web of Science™ Research Assistant, a responsible, generative AI-powered tool...
Committed to responsible application of AI
At Clarivate, we’ve been using AI and machine learning for years, guided by our AI principles.
We are committed to:
- Delivering trusted content and data, based on authoritative academic sources
- Ensuring proper attribution and easy access to cited works
- Collaborating with publishers to ensure clear usage rights
- Not using licensed content to train public LLMs
- Aligning with evolving global regulations
Frequently Asked Questions
We do not train public LLMs. We use commercially pre-trained Large Language Models as part of our information retrieval and augmentation framework. Currently, this includes the use of a Retrieval Augmented Generation (RAG) architecture among other advanced techniques. While we use pre-trained LLMs to support the creation of narrative content, the facts in this content are generated from our trusted academic sources. We test this setup rigorously to ensure academic integrity and alignment with the academic ecosystem. Testing includes validation of responses by academic subject matter experts who evaluate the outputs for accuracy and relevance. Additionally, we conduct extensive user testing that involves real-world research and learning scenarios to further refine accuracy and performance.
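The RAG pattern described above can be sketched in a few lines: retrieve curated passages relevant to the query, then instruct the model to answer only from that context, with citations. The corpus, keyword scoring, and prompt wording below are illustrative stand-ins, not Clarivate's actual implementation.

```python
# Minimal RAG sketch: retrieve curated passages, then build a prompt that
# constrains the LLM to answer strictly from the cited context.
# CORPUS and the scoring heuristic are hypothetical examples.

CORPUS = {
    "doc-1": "Retrieval Augmented Generation grounds LLM answers in retrieved source text.",
    "doc-2": "Curated scholarly sources reduce the risk of fabricated claims.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[tuple[str, str]]) -> str:
    """Instruct the model to answer only from the supplied, cited context."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below and cite their ids.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

def answer(query: str) -> str:
    passages = retrieve(query)
    prompt = build_prompt(query, passages)
    # A real system would send `prompt` to a privately hosted LLM here;
    # this sketch just returns the grounded prompt.
    return prompt

print(answer("How does retrieval augmented generation ground answers?"))
```

In a production setting the retrieval step would use a search index over licensed scholarly content, and the prompt would be sent to a privately deployed model rather than returned directly.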
We are committed to the highest standards of user privacy and security. We do not share or pass any publisher content, library-owned materials, or user data to large language models (LLMs) for any purpose.
While the LLM is a key tool to provide a fluent narrative, answers to user queries are based on our extensive collection of curated scholarly content. This means that our AI-generated responses draw from trusted academic sources, such as Web of Science, ProQuest One, Ex Libris Central Discovery Index, Ebook Central, as well as local library collections, rather than broad (and potentially inaccurate or biased) internet content that common chatbots might use.
Depending on the product and your content subscription options, our AI tools will use various content types, such as scholarly journals, books/book chapters, conference proceedings, reports, reviews, case studies, magazines and news content to generate the responses. The coverage of some of this content spans from the 1800s to today.
We strongly believe that we have a critical responsibility to the academic community to mitigate AI-induced inaccuracies, and we take many steps toward achieving this goal:
- The prompts used in our products are crafted by expert prompt engineers who ensure that the settings of the LLM are optimized to maximize faithfulness to the source content and minimize hallucinations. Our system is designed to provide references to the source text when delivering an answer. Additionally, our systems handle negative rejections, ensuring they do not fabricate an answer when one cannot be provided due to insufficient information. Instead, the system explains to the user why no response was given.
- The information presented to users always originates from our trustworthy, curated content. We use a combination of RAG/RAG Fusion models to ensure that the information users see is based on the vetted content your library can access via Clarivate solutions. Our tools offer full transparency regarding the content that was used to generate the response and ensure proper attribution to the specific sources used.
- We continuously test our solutions and the results they produce, including through dedicated beta programs and close collaboration with customers and subject matter experts. Our data science expertise helps increase system accuracy, fairness, robustness and interpretability in a programmatic way.
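A negative rejection, as mentioned above, can be sketched as a simple guard: if no retrieved passage clears a relevance threshold, the system refuses with an explanation rather than letting the model generate an unsupported answer. The threshold and scoring here are illustrative assumptions, not the actual production logic.

```python
# Illustrative negative-rejection guard: refuse with an explanation when
# retrieval finds no sufficiently relevant support for the query.

RELEVANCE_THRESHOLD = 0.5  # assumed cutoff; tuned per system in practice

def grounded_or_reject(query: str, scored_passages: list[tuple[float, str]]) -> str:
    """Return an explanation of refusal, or proceed with supported passages."""
    supported = [p for score, p in scored_passages if score >= RELEVANCE_THRESHOLD]
    if not supported:
        # Negative rejection: explain the lack of an answer, never fabricate one.
        return "No answer: the available sources do not cover this question."
    # Otherwise the supported passages would be passed to the LLM as context.
    return f"Answering from {len(supported)} supported source(s)."

print(grounded_or_reject("obscure topic", [(0.1, "weakly related text")]))
print(grounded_or_reject("covered topic", [(0.9, "directly relevant text")]))
```

The key design point is that the refusal path is decided before generation, so the model is never asked to answer from insufficient evidence.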
Ensuring clarity and trust in our solutions is one of our top priorities. Our conversational discovery and AI-powered tools present a list of academic resources on which their responses are based, so that you can always explore relevant materials for further context.
Data privacy and trust are top priorities when designing our AI tools. We comply with data privacy regulations and adhere to evolving global AI legislation.
As part of the user query, only the content that users themselves input into the query is transmitted to the LLM. No additional data is shared during this process, ensuring the protection of sensitive information. Furthermore, we do not call public LLM API endpoints directly; we access LLMs through a private setup. This ensures that data entered by users in the query stays protected and cannot be seen or accessed by any other party. This approach to data protection aligns with our practices in academic search, where we apply our knowledge in securely managing user information. For more information on our privacy and data protection program, visit: https://clarivate.com/privacy-center/
The ranking and prioritization of sources by our AI-based discovery tools will vary according to the specific characteristics of the user’s query, the user persona, and the context of each query.
Our approach to ranking and prioritization is similar to that traditionally used in our discovery solutions, enabling us to present the most relevant and valuable sources first so that the information provided matches the user's needs as closely as possible.
Our tools provide the means for academic discovery, exploration, and research. But sometimes users may enter queries that are inherently biased, prejudiced, or seek sensitive information.
While our tools will not normally block user queries, the foundational large language models we use for narration capabilities are trained to recognize sensitive or offensive queries and handle them appropriately.
This means that such questions will typically be answered with a balanced perspective, drawing on our curated academic content. Please note that there is no guarantee that all sensitive or offensive content will be identified, and errors might occur. If you do encounter such content, please use the feedback mechanism to report it.
Most of our tools support a variety of languages, allowing users from different regions to interact with our products in their own language. While coverage may vary by tool and language, we are proud to support a wide range of languages across all world regions.
We recognize that sustainability is a key concern for the academic community. At Clarivate, we share this focus and are committed to reducing the environmental impact of our AI-powered solutions.
How we reduce our AI footprint
The Clarivate Academic AI Platform serves as the backbone for our AI solutions and is designed to carefully balance performance, cost, and environmental impact. Our approach:
- We use pre-trained models. Pre-trained LLMs avoid the large energy consumption and carbon emissions required to train new models from scratch.
- We choose the right model for the task. We prioritize lightweight models (e.g., GPT-4o mini or o4-mini) over large, general-purpose models to reduce energy consumption while maintaining high-quality output.
- We optimize prompts and outputs. By shortening prompts, setting clear length limits, and minimizing unnecessary tokens, we reduce the computational resources required to generate a response.
- We cache responses for repeated use. Caching LLM outputs avoids repeated calls for identical requests. For example, if one user asks for a document’s key concepts, that response is stored and served to subsequent users without triggering a new LLM call.
- We use text compression. Compressing documents before sending them to an LLM reduces the volume of processed data, lowering energy use and carbon emissions.
- We build on optimized cloud infrastructure. Our solutions are deployed on infrastructure from leading cloud providers, who are committed to ambitious sustainability targets and increasingly powered by renewable energy. Additionally, using cloud infrastructure can help reduce energy use and emissions compared to maintaining traditional on-premises systems.
- We enable flexibility and choice. We acknowledge that institutions and individuals may wish to make their own decisions about when and how to use AI. Where possible, we provide flexibility, allowing AI features to be enabled or disabled in line with institutional policies and user preferences.
- We maintain centralized AI governance. A cross-functional team of AI experts provides oversight to ensure consistent, accountable, ethical and environmentally sustainable management of our AI systems.
- We apply AI thoughtfully. Before implementation, we assess different technologies to determine whether AI offers clear advantages and is the most effective solution for the task at hand.
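The caching step described above can be sketched as memoization keyed by a hash of the request: identical requests reuse a stored answer instead of triggering a new model call. The `call_llm` stub below is a hypothetical stand-in for a real model invocation.

```python
# Sketch of response caching keyed by a hash of the request, so repeated
# identical requests are served from the cache instead of re-invoking the
# model. call_llm() is an illustrative stand-in, not a real API.

import hashlib

_cache: dict[str, str] = {}
llm_calls = 0  # counts how often the (expensive) model is actually invoked

def call_llm(prompt: str) -> str:
    """Stand-in for an expensive LLM invocation."""
    global llm_calls
    llm_calls += 1
    return f"summary of: {prompt}"

def cached_answer(prompt: str) -> str:
    """Return the cached response for this exact request, computing it once."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(prompt)  # only the first request pays the cost
    return _cache[key]

cached_answer("key concepts of document 42")
cached_answer("key concepts of document 42")  # served from cache
print(llm_calls)  # the model ran only once
```

For example, if one user asks for a document's key concepts, every subsequent identical request is answered from the cache, saving both latency and the energy cost of another generation.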
Part of a broad digital ecosystem
AI usage is one part of academia’s broader digital infrastructure. We continuously evaluate and improve our data storage, network management, and system designs with an aim to improve efficiency and lower environmental impact.
Our sustainability commitment
We report our greenhouse gas (GHG) emissions annually through the Carbon Disclosure Project (CDP), in line with the GHG Protocol, measuring and managing Scopes 1 and 2. We are working toward reporting on relevant Scope 3 emissions. We have established internal goals and are tracking progress against our 2040 commitment to achieve net zero emissions, ensuring transparency and alignment with global standards.
You can learn more in our Sustainability Report and Environmental Management Statement.
Sustainability in AI is rapidly evolving, and we’re committed to transparency, collaboration and adoption of best practices. We welcome feedback and ideas from the academic community to help shape a future of responsible and sustainable AI.
Speak to our team
Learn how Clarivate can help you advance innovation.