CiteScore: A Non-Rival for the Journal Impact Factor

Clarivate is Independent and Unbiased

We believe an organization independent from journal publishers is best positioned to provide journal evaluation metrics that customers and stakeholders can trust, free from any perception of bias. Clarivate Analytics is neutral: we are not a publisher, and we have no plans to become one. We are fully transparent about the methodology used to calculate the JIF. Our reputation for high-quality, trusted metrics has been hard earned and reflects our long history of independence and transparency.

Whilst Elsevier has published its methodology for CiteScore, we believe researchers may have legitimate questions about Elsevier’s position as a major journal publisher. Elsevier will inevitably have better records for its own journals than for the other journals used to generate CiteScore. Commentators have also questioned the neutrality of the metric. For example, an initial review of the CiteScore listings by the researchers behind Eigenfactor.org suggests that Elsevier’s own journals have received a boost to the detriment of those of its biggest competitor, Springer Nature. The Eigenfactor.org study notes that Elsevier journals score roughly 15% higher relative to their competitors under CiteScore than they do under the JIF.

Breadth of Coverage

The Web of Science, across all databases, indexes 32,925 journals vs. the 22,256 in Scopus. We provide the broadest coverage available for research discovery, while also focusing on quality metrics for evaluation.

Journals selected for inclusion in the JCR undergo a rigorous vetting process by our team of full-time experts. We carefully weed out predatory and non-peer-reviewed journals, so you can be confident that only the best journals are eligible to receive a JIF.


Additionally, the databases in the Web of Science Core Collection include cover-to-cover indexing: our indexing and quality-assurance methods ensure that no titles, articles, or cited references are overlooked. So when you look at a journal in the JCR, you can be sure that every page has been captured and recorded for your bibliometric purposes.

JIF: A Quality Benchmark

There has long been debate about our use of “citable items” in the JIF calculation. We believe this methodology is fair, and it has long been accepted in the publishing industry. Elsevier claims that by counting ALL document types in CiteScore, it is somehow leveling the playing field and making things more transparent. So what are citable items in the JCR world? It’s really not a secret: citable items are all materials indexed in the Web of Science (the Science and Social Sciences Citation Indices) as articles or reviews. It’s as simple as that – whenever you are in the Web of Science, just look at the document type and you’ll know whether we consider it citable for the JCR.

In a recent posting, Ludo Waltman, the Deputy Director of CWTS, pointed out that “Including publications of all document types in the calculation of CiteScore is problematic. It disadvantages in an undesirable way journals such as Nature, Science, New England Journal of Medicine, and Lancet. These journals publish many special types of publications, for instance letters, editorials, and news items. Typically these special types of publications receive relatively limited numbers of citations, and in essence CiteScore therefore penalizes journals for publishing them.” The net result could be that journals publish fewer of these items in order to bump up their CiteScore figures, to the detriment of journal quality and reader needs.
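To make the arithmetic behind Waltman’s point concrete, here is a minimal sketch with invented counts for a hypothetical journal (the numbers are illustrative only, not drawn from either database). Citations to front matter count in the numerator under both approaches; the difference is whether the front matter itself sits in the denominator:

```python
# Hypothetical documents published in the counting window: (type, citations).
docs = (
    [("Article", 6)] * 50      # research articles, ~6 cites each
    + [("Review", 10)] * 10    # reviews, ~10 cites each
    + [("Editorial", 0)] * 20  # front matter, rarely cited
    + [("Letter", 1)] * 20     # front matter, rarely cited
)

total_cites = sum(c for _, c in docs)  # 420; counted by both metrics

# JIF-style: denominator is citable items only (articles and reviews).
citable = sum(1 for t, _ in docs if t in {"Article", "Review"})
print(total_cites / citable)    # 420 / 60  = 7.0

# CiteScore-style: denominator is every document, front matter included.
print(total_cites / len(docs))  # 420 / 100 = 4.2
```

With identical citation counts, the CiteScore-style ratio drops from 7.0 to 4.2 purely because the journal publishes editorials and letters.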

As for our full JIF calculation formula, it’s always been freely available:

JIF in year Y = (citations in year Y to items published in years Y−1 and Y−2) ÷ (number of citable items published in years Y−1 and Y−2)
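In code, the calculation is a simple ratio over publication counts. The sketch below uses invented data for a hypothetical journal; the function name and data layout are ours, not Clarivate’s implementation:

```python
def impact_factor(cites_in_year, citable_items, year, window=2):
    """Citations received in `year` by items published in the preceding
    `window` years, divided by the citable items published in those years."""
    prior = range(year - window, year)
    numerator = sum(cites_in_year[year].get(y, 0) for y in prior)
    denominator = sum(citable_items.get(y, 0) for y in prior)
    return numerator / denominator

# Invented counts for a hypothetical journal.
cites_in_year = {2016: {2011: 40, 2012: 60, 2013: 95, 2014: 210, 2015: 180}}
citable_items = {2011: 70, 2012: 75, 2013: 80, 2014: 90, 2015: 105}

print(impact_factor(cites_in_year, citable_items, 2016))  # (210 + 180) / (90 + 105) = 2.0
```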

We work with editors and publishers in varied ways to foster transparency in how we index their content and produce JCR metrics. Journals covered in the Web of Science span many styles and formats, so it is through collaboration with publishers that we ensure their content is indexed in a way that meets their expectations while holding to our policies for consistency. Through this process, we provide:

  • Regular calls with high-volume publishers to create an open and recurring forum to address questions or changes to journal content
  • On-demand review of authority files (the indexing instructions for each journal, including specification of section document types)
  • Direct access to our JCR, Bibliographic Policy, and Operations leads throughout the year to address JCR-related inquiries, particularly leading into the JCR’s annual production and post-release

Elsevier claims that CiteScore’s three-year window is “optimal,” but the truth is that no single time frame has ever been shown to be optimal for journal citation data. Because some fields move more quickly than others, some journals will develop robust data in the two-year window the standard JIF uses, while others need a longer window to measure excellence – and this is why we developed the five-year JIF. Users of the JCR get the best of both worlds with these variations on the JIF time frame.
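Reusing the hypothetical `impact_factor` sketch above, widening the window is a one-argument change; in a slower-moving field, the extra years capture citations the two-year window misses:

```python
# Same invented counts as above; only the window changes.
two_year = impact_factor(cites_in_year, citable_items, 2016)             # 390 / 195 = 2.0
five_year = impact_factor(cites_in_year, citable_items, 2016, window=5)  # 585 / 420 ≈ 1.39
```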

So what’s the early verdict on CiteScore? Citing the problems with bias and the treatment of documents, Phil Davis wrote on the Scholarly Kitchen: “Taken together, it doesn’t appear that the CiteScore indicator can be considered a viable alternative to the Impact Factor.” The Eigenfactor.org group also stated: “By neglecting to count the front matter in its denominator, Impact Factor creates incentives for publishers to multiply their front matter. By counting front matter fully in the denominator, CiteScore does the reverse. Because we value and enjoy the front matter in many of our favorite journals, we see the Impact Factor as the lesser of evils in this regard.”

Time will tell whether CiteScore ends up as a worthwhile journal metric, but Clarivate Analytics has full confidence in our own metrics, given their long history of quality and utility.