{"id":5122,"date":"2016-12-14T09:47:43","date_gmt":"2016-12-14T09:47:43","guid":{"rendered":"https:\/\/clarivate.com\/?p=5122"},"modified":"2025-08-04T16:02:14","modified_gmt":"2025-08-04T16:02:14","slug":"citescore-a-non-rival-for-the-journal-impact-factor","status":"publish","type":"post","link":"https:\/\/clarivate.com\/academia-government\/blog\/citescore-a-non-rival-for-the-journal-impact-factor\/","title":{"rendered":"CiteScore: A Non-Rival for the Journal Impact Factor"},"content":{"rendered":"<p><strong>Clarivate is Independent and Unbiased<\/strong><\/p>\n<p>We believe an organization independent from journal publishers is best positioned to provide journal evaluation metrics that customers and stakeholders can trust and which are free from the perception of bias. Clarivate Analytics is neutral: we are not a publisher and we have no plans to become one.\u00a0 We are fully transparent with the methodology used to calculate JIF. Our reputation for high-quality, trusted metrics has been hard earned and is a reflection of our long history of independence and transparency.<\/p>\n<p>Whilst Elsevier has published its methodology for CiteScore, we believe there are legitimate questions that researchers may have in connection with Elsevier\u2019s position as a major journal publisher. Undoubtedly Elsevier will always have better records on their own journals than others used to generate CiteScore. Commentators have also noticed and questioned the neutrality of this metric. For example, <a href=\"http:\/\/eigenfactor.org\/projects\/posts\/citescore.php\" target=\"_blank\" rel=\"noopener\">an initial review of the CiteScore listings by the researchers behind Eigenfactor.org<\/a> suggests that Elsevier\u2019s own journals have a received a boost to the detriment of those of its biggest competitor (Springer Nature). The Eigenfactor.org study notes that Elsevier journals perform 15% higher than their competitors with CiteScore than they do with JIF.<\/p>\n<p><strong>Breadth of Coverage<\/strong><\/p>\n<p>The Web of Science, across all databases, indexes 32,925 journals vs. the 22,256 in Scopus. We provide the broadest coverage available for research discovery, while also focusing on quality metrics for evaluation.<\/p>\n<p>Journals selected for inclusion in the JCR undergo a rigorous vetting process by our team of full-time experts. We carefully weed out any predatory and non-peer-reviewed journals, so you can be confident that only the best journals are eligible to be given a JIF score.<\/p>\n<p>For more information on our journal selection process, click <a href=\"http:\/\/wokinfo.com\/essays\/journal-selection-process\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<p>Additionally, the databases in the Web of Science Core Collection include cover-to-cover indexing: our indexing and quality assurance methods ensure that no titles, articles, or cited references are overlooked. So when you look at a journal in the JCR, you can be sure that every page has been captured and recorded for your bibliometrics purposes.<\/p>\n<p><strong>JIF: A Quality Benchmark<\/strong><\/p>\n<p>There has long been debate about our use of \u201ccitable items\u201d in the JIF calculation. We believe this is a fair methodology and it has been long accepted in the publishing industry. Elsevier claims that by counting ALL document types in CiteScore, they are somehow leveling the playing field, making things more transparent. So what are citable items in the JCR world? 
We work with editors and publishers in varied ways to foster transparency in how we index their content and produce JCR metrics. Journals covered in the Web of Science span many styles and formats, so it is through collaboration with publishers that we ensure their content is indexed in a way that meets their expectations while holding to our policies for consistency. Through this process, we provide:

- Regular calls with high-volume publishers, creating an open and recurring forum to address questions or changes to journal content
- On-demand review of authority files (the indexing instructions for each journal, including the specification of section document types)
- Direct access to our JCR, Bibliographic Policy, and Operations leads throughout the year to address JCR-related inquiries, particularly leading into the JCR's annual production and post-release

Elsevier claims that CiteScore's three-year window is "optimal," but the truth is that no optimal time frame has ever been established for journal citation data. Because some fields move more quickly than others, some journals develop robust citation data within the two-year window the standard JIF uses, while others need a longer window to measure excellence, which is why we developed the five-year JIF. Users of the JCR get the best of both worlds with these variations on the JIF time frame.
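To make the contrast concrete, here is a minimal, hypothetical sketch of the two calculation styles as described above: a two-year window with an articles-and-reviews denominator versus a three-year window counting all document types. The `Item` structure and the sample counts are illustrative assumptions, not either company's actual data model or data.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A published item in a journal, with the citations it received in the metric year."""
    year: int
    doc_type: str   # e.g. "article", "review", "letter", "editorial", "news"
    citations: int  # citations received in the metric year

# JIF "citable items" per the JCR definition above: articles and reviews only.
CITABLE_TYPES = {"article", "review"}

def jif_style(items: list[Item], year: int) -> float:
    """Two-year JIF-style ratio: citations to the prior two years' output,
    divided by citable items (articles and reviews) only."""
    window = [it for it in items if year - 2 <= it.year <= year - 1]
    citations = sum(it.citations for it in window)
    citable = sum(1 for it in window if it.doc_type in CITABLE_TYPES)
    return citations / citable if citable else 0.0

def citescore_style(items: list[Item], year: int) -> float:
    """Three-year CiteScore-style ratio: all document types count in the denominator."""
    window = [it for it in items if year - 3 <= it.year <= year - 1]
    citations = sum(it.citations for it in window)
    return citations / len(window) if window else 0.0

# Illustrative journal: well-cited articles and reviews plus uncited front matter.
journal = (
    [Item(2015, "article", 10) for _ in range(50)]
    + [Item(2014, "review", 12) for _ in range(20)]
    + [Item(2015, "editorial", 0) for _ in range(30)]  # dilutes CiteScore only
)

print(f"JIF-style (2016):       {jif_style(journal, 2016):.2f}")
print(f"CiteScore-style (2016): {citescore_style(journal, 2016):.2f}")
```

With this toy data, the 30 uncited editorials lower only the CiteScore-style value (7.40 versus 10.57 for the JIF-style ratio), which is exactly the front-matter dynamic that Waltman and the Eigenfactor group describe.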
So what's the verdict on the appearance of CiteScore on the horizon? Citing the problems with bias and the treatment of documents, Phil Davis wrote on the [Scholarly Kitchen](https://scholarlykitchen.sspnet.org/2016/12/12/citescore-flawed-but-still-a-game-changer/): "Taken together, it doesn't appear that the CiteScore indicator can be considered a viable alternative to the Impact Factor." The Eigenfactor.org group also [stated](http://eigenfactor.org/projects/posts/citescore.php): "By neglecting to count the front matter in its denominator, Impact Factor creates incentives for publishers to multiply their front matter. By counting front matter fully in the denominator, CiteScore does the reverse. Because we value and enjoy the front matter in many of our favorite journals, we see the Impact Factor as the lesser of evils in this regard."

Time will tell whether CiteScore ends up being a worthwhile journal metric, but Clarivate Analytics has full confidence in our own metrics, given their long history of quality and utility.