Learning from history: Understanding the origin of the JCR

This article is part of a 2018 JCR blog series exploring journal metrics, history, transparency, and features. See more in this series.

For the past decade or so, the annual release of the Journal Citation Reports (JCR) has led to a frenzy of discussion about Journal Impact Factors (JIFs): their meaning, their value, their rises and falls, and their imperfections. Before all that distraction sets in, it’s a good moment to examine the genesis of the JCR data. The construction of the JCR reflects several key ideas that were important to Dr. Garfield[i] and his colleagues at the Institute for Scientific Information (ISI). Many of those values have been overlooked or overrun in the fetishizing of the JIF; it’s time to bring them forward again.

The first JCR began with this statement:

“This book is the product of more than ten years’ research”

Preface to Journal Citation Reports, Volume 9 of the 1975 SCI[ii]

By 1975 ISI was preparing the 11th release of the Science Citation Index (SCI). The annual compilation of these data was, to Dr. Garfield, “a unique and unprecedented opportunity to look at references and citations not just as tools for information retrieval, but to look at them also as characteristics of the journals they linked.” The JCR was presented as an extension of the SCI itself, not as a separate work or new direction. Indeed, the first 14 years of the JCR were published as the closing volumes of each year’s Citation Indexes. The data that built the JCR are therefore the same article citation data found in the Citation Indexes; the JCR reorganizes the article- and author-based citation index so that it can be used for the express purpose of understanding the citation properties of journals.

Like the Citation Indexes, the JCR is “…based on the principle that there is some meaningful relationship between one paper and some other that it cites or that cites it, and thus between the work of the two authors or two groups of authors who published the papers.” In the construction of the JCR, the idea of meaningful citation linkage is projected onto the journal. Thus, the cited and citing matrices that form the majority of the data in the JCR are meant to answer the questions thought critical to understanding a journal’s role in the literature: “…who uses a particular journal? How frequently? For what purposes?” Journal-to-journal relationships provide objective information about which scholarly communities are using a journal, and therefore about the topical focus of its content. The JCR demonstrates that citation linkage, created by scholarly authors in the process of publishing their work, is intrinsically relevant at scale.

The placement of the JCR as an “annex” to the SCI had the direct consequence that the journals appearing in the JCR would be all, and only, those selected for indexing in the SCI. As a discovery service, the SCI needed to balance selectivity, which gives authority and consistency, with breadth, which ensures that the global scholarly community and the diversity of topics are represented. From its earliest days, the source materials in Clarivate Analytics products have been validated by the selection process, which has been a lodestar through the 54-year history of the Citation Indexes[iii]. The criteria for selection have always considered both quantitative and qualitative features of publications; the JCR is not a driver of journal selection at Clarivate Analytics but the beneficiary of it. Predatory journals are not listed in the JCR because they are not indexed for Web of Science. Selectivity ensures that the titles in the JCR have been pre- (and post-) certified as contributors to the scholarly literature.

The JCR takes citation data from selected sources and summarizes them at the level of the journal. It moves upward in organizational complexity from the article-to-article network of the Citation Indexes to a journal aggregate. That makes it necessary to define operationally what a “journal” is, for the purpose of both data and metrics. Here again is a central idea that has too often been lost in consideration of the JCR. Dr. Garfield said of his research using the first 10 years of the SCI:

“I began to study journals as socio-scientific phenomena as well as communications media.”

Studying journals from this perspective drives an operational definition of “journal” in the JCR that is more than merely a shell or container around a group of articles; a journal must be allowed to operate as a phenomenon within research and in the communication of that research. Contrary to an article-centric notion that treats the journal as a simple sum of its articles, this holistic notion of “journal” allows a dynamism to exist between articles and journals, with each contributing to the understanding of the other. Journals are created and maintained, used and valued, by a network of individuals: publishers and editors who set the direction; authors who submit; reviewers who comment, critique, and improve; and readers, both those who cite and those who do not. Journals and the individual items they contain are interdependent, but each has unique properties due to its level of organization in the scholarly communication system.[iv]

The JCR data reflect this concept of the journal entity explicitly in the construction of the journal metrics, particularly the JIF. Citations in the JIF numerator are not summed across individual articles but aggregated to the journal title, treating the journal itself as the cited entity. Citations linked to all aspects of the journal’s content are included.[v]

However, scholarly articles and reviews form the core offerings of the journal to the literature and make up the majority of the content intended to engage the research community in discussion. While educational, editorial, or news content contributes to how many journals accomplish their communications goals, scholarly journals exist for the primary purpose of offering research reports, scholarly commentary, and reviews. The JIF takes the absolute citation frequency and scales it according to the count of that scholarly content, that is, the “citable items” published.

“The JCR impact factor is basically a ratio between citations and citable items published.”
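In symbols, that ratio is the familiar two-year calculation. As a sketch of the standard definition (where, as discussed below, the numerator counts citations to all of the journal’s content while the denominator counts only the citable items):

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to all of the journal's content published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

On purely hypothetical numbers: a journal whose 2015–2016 content drew 10,000 citations in 2017, and which published 400 citable items across those two years, would have a 2017 JIF of 10{,}000 / 400 = 25.0.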

Over 95% of the JIF numerator in any given year is made up of citations linked to items also counted in the denominator of the JIF[vi], but the proportion of non-citable materials in a journal, and the number of citations those materials receive as part of the JIF, vary. The JCR maintains that citation is a scholarly recognition, but that a journal’s influence, even within the specific realm of citations, is not solely defined by scholarly articles. When a journal publishes a news item or editorial that is cited broadly in the literature two or three years later, those citations reflect the journal’s impact in the literature and are a part of how the journal functions as a “communication [medium]”.

The JCR has a journal-centric focus that makes the characteristics of its metrics specific to the journal. Other common metrics can be calculated for a wide variety of materials, which makes them useful for comparisons and benchmarks in many applications. Average citations can be calculated for any group of items: the materials in a journal, or the research articles published by a university. An h-index can be calculated for any countable population with any countable property. The definition of these metrics is not specific to the unit of analysis, only to the arithmetic of their calculation. The JIF, however, is defined only for a journal, and its mathematical properties are determined by and for journal analysis. The differential between the numerator population and the denominator population is not an accident; it is a choice about how to represent a journal’s complex content.
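To illustrate that unit-agnosticism, the h-index can be stated without reference to any particular entity; all it requires is a set of items and a citation count for each (a sketch of the standard definition):

\[
h \;=\; \max\bigl\{\, k \;:\; \text{at least } k \text{ items in the set have} \ge k \text{ citations each} \,\bigr\}
\]

Nothing in that definition says whether the set is an author’s papers, a journal’s articles, or a university’s output. The JIF, by contrast, is defined only over a journal’s content and publication window.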

This is not to put the JIF forward as applicable to every assessment of all research. Rather, it is to explain some aspects of the JIF and the JCR as they reflect fundamental principles of citation indexing. Dr. Garfield, while enthusiastic about its possibilities for understanding the dynamics of scholarly communication, did not propose the JCR, or citation data generally, as the sole component of a journal’s value. He insisted that it be used “within a total framework proper to the decision to be made, the hypothesis to be examined, and rarely in isolation without consideration of other factors, objective and subjective.” He also noted, “Caution is advisable in comparing journals, especially journals from different disciplines,” citing the variations between fields in the role of journal literature and the differences in both the age and the extent of citations across subject areas.

“…I have deliberately, and with some difficulty, restrained my own enthusiasm about the value of what some may find at first sight to be merely another handbook of data.”

Dr. Garfield’s vision of how the JCR could serve as a resource for the study and evaluation of journals led him to provide not just summary indicators like the JIF and total citation count, but the vast and detailed array of the Citing and Cited Journal tables. The first JCR devoted over 1,100 of its 1,330 pages to these tables, set in 7-point type. The citing and cited data tables have appeared in every JCR, and they are by far the largest part of the data presented in any year. What some may see as a super-abundance of data points is actually a fundamental value statement on transparency in reporting data. The journal network was set out in great detail, both by journal pairs (Journal A cites Journal B) and by the distribution of the citation exchange across time, precisely because the details behind the compiled metrics were, and still are, considered critically important. This presentation allowed full visibility into the way each journal interacted with others, both for the development of the field and for the calculation of metrics.

As a consequence, but not as a cause, of these data being provided, the contributions of journal self-citation and of journal-to-journal citation dependencies have been part of the JCR since 1975. Dr. Garfield’s introduction to the JCR and the glossary of terms both mention the role of journal self-citation and its possible effects on the citation metrics of the JCR. Users were always encouraged to understand the JCR citation metrics and to view the data that created the numbers. Dr. Garfield acknowledged both the possibility of error and the great efforts that were extended for the specific purpose of diminishing it. Responsible, informed use of JCR metrics was part of the original design.

From its start, the JCR was considered an ongoing research process, as well as an offering to the community of researchers, meant to spur their own exploration of citation science as it is mediated by and reported in journals.

If you think the JCR is just Journal Impact Factors, you’re missing the point.

Follow the JCR 2018 blog series for further updates.

[i] Throughout this essay, I refer to Dr. Eugene Garfield as “Dr. Garfield.” He was an icon, advisor, guide, and mentor; I never cease to feel I owe him the honorific.

[ii] All of the quotes are taken from the introductory material written by Dr. Garfield as preface, introduction, and explanation of the first JCR, published in 1976.  The full text is available here: http://garfield.library.upenn.edu/papers/jcr1975introduction.pdf.

[iii] Garfield, E. (1990). “How ISI selects journals for coverage: Quantitative and qualitative considerations.” Essays of an Information Scientist, 13(22): 185-193. Full text available at: http://www.garfield.library.upenn.edu/essays/v13p185y1990.pdf

[iv] McVeigh, M. (2018). “A Journal is as a Journal does.” Available at: https://clarivate.com/blog/science-research-connect/journal-journal-four-emergent-properties-journals-scholarly-communication/

[v] Hubbard, S.C. and McVeigh, M.E. (2011). “Casting a wide net: the Journal Impact Factor numerator.” Learned Publishing, 24: 133-137. doi:10.1087/20110208. Available at: https://onlinelibrary.wiley.com/doi/pdf/10.1087/20110208

[vi] McVeigh, M.E. and Mann, S.J. (2009). “The Journal Impact Factor Denominator: Defining Citable (Counted) Items.” JAMA, 302(10): 1107-1109. doi:10.1001/jama.2009.1301.