Journal Citation Reports 2021: The evolution of journal intelligence – transcript

Ideas to Innovation

[music]

Voiceover: The Ideas to Innovation podcast from Clarivate.

Joan Walker: Hello. I’m Joan Walker and welcome to the Ideas to Innovation podcast. In this brand new series, we’ll be talking to the people who live and breathe the process of turning ideas into innovation. The smartphones and technologies that we depend on, the medicines that we rely on, the electricity that powers our day-to-day life, they were all once ideas before becoming inventions, inventions that have changed our lives for the better. Join the conversation with experts and industry leaders to discuss innovation at its core.

For almost 50 years, the global research community has relied on the data in the Journal Citation Reports to identify the world’s leading journals. The JCR is based on data from the Web of Science, the world’s largest publisher-neutral global citation database, and its journal intelligence metrics play a key role for funders, institutions, librarians, and researchers. Academic publishers across the globe use the reports to evaluate their journals’ impact in their field and promote them to the research community. Each year’s update is met with great anticipation.

Today, we welcome guests from The Institute for Scientific Information at Clarivate as well as publishers, Hindawi, Frontiers, and SAGE Publishing, to talk about the recent 2021 Journal Citation Reports release and the treasure trove of new features inside. Together, we’ll be exploring the fundamental role the JCR plays in supporting academic research and accelerating the pace of innovation by providing transparent publisher-neutral journal intelligence and helping those in the research community make better-informed, more confident decisions.

With me today are Dr. Nandita Quaderi, Editor-in-Chief and Editorial VP of the Web of Science, Dr. Martin Szomszor, Director at the Institute for Scientific Information, and three academic publishers, Mathias Astell, Chief Journal Development and Marketing Officer at Hindawi, Frederick Fenter, Chief Executive Officer at Frontiers, and David Ross, Vice President of Open Research at SAGE Publishing.

Nandita, Martin, Mat, Fred, and Dave, it’s great to have you with me today. I’d love to hear you tell us a little bit more about yourself, how you started out in your respective careers, and what keeps you inspired in your day-to-day work. That’s a big question. Nandita, let’s start with you first.

Nandita Quaderi: Thank you, Joan. I started my career in academia. After my PhD and postdoctoral training in molecular genetics, I went on to establish my own research lab, investigating the causes of a rare birth defect. I then made the move into scholarly publishing and spent 13 years there. Most recently, I was publishing director at Springer Nature where I had responsibility for the open access Nature Research journals. I joined Clarivate in 2018.

What keeps me on my toes is making sure our Web of Science selection process strikes the right balance between screening out poor-quality content and not creating an impossible barrier for new journals, especially those from developing regions.

Joan: Thank you. It seems that what you’re suggesting there is that you are really looking at the rigorousness of the work.

Nandita: Absolutely. It’s the rigor of our evaluation process that really sets us apart from the competition.

Joan: Excellent. Now, Martin, over to you. Same question.

Martin Szomszor: Thank you. My background is in computer science. I spent about 10 years in academia as a researcher before taking a job at a newly founded technology company called Digital Science. I worked there for about six years, during which time I built up the data science unit and, for a while at the end, ran the consultancy team. I joined Clarivate three years ago to work at the Institute for Scientific Information to help reinvigorate the R&D capability within the company. My main interest is in the data, how we can use it to understand the world around us, and ultimately make better choices.

Joan: Martin, thank you. That’s excellent. Mat, good to have you with us today. It’s good to have some publisher perspectives in here. Tell us. What’s your story?

Mathias Astell: Hi, Joan. I started my career working in higher education colleges, helping students with guidance and attainment, before moving into publishing. I’ve worked in academic publishing for the last eight or nine years, primarily in open access and open science-focused roles at a number of academic publishers, including SAGE, Nature Research, Springer Nature, The British Medical Journal, Wiley, and now Hindawi, where I currently oversee journal development for all of Hindawi’s 230 open access journals.

The reason I love science publishing and academic publishing, and the reason that I stay in the field, is really the veracity of information. With the rise of the internet, the ability to determine what is good-quality information has become harder and harder. I see academic publishing, and the role of peer review in the collaborative, community-driven way in which journals are developed and published, as one of the bastions of ensuring good-quality knowledge and information.

Joan: Again, we’re back to the rigor and the truth of the material, aren’t we?

Mat: Yes.

Joan: Absolutely. Fred, hello. Tell me about you.

Frederick Fenter: Terrific. Hi. Thanks so much for having me on this presentation. I trained as a chemist. I have a PhD in chemistry. I worked in research until my mid-30s or so when I moved into publishing. I began my publishing career at Elsevier here in Switzerland. Then I moved on to a number of roles and that has allowed me to see publishing from a number of different perspectives. I launched a small university press here in Switzerland. I was involved in a number of innovative projects at the time, one of which was the launch of Frontiers.

I was here as the publishing consultant at the very beginning of Frontiers. At that time, the open access movement was just starting to gain a little bit of momentum so it was very exciting, and voila. What I would say is that working with Frontiers and having seen the publishing world from a number of perspectives, I think what keeps me going is the fact that open access publishing with the power of dissemination and with the recognition that journal publishing gets is really, I think, the most impactful publishing service that we can provide to our research communities.

Joan: Excellent. Thank you for that. Now, Dave, how about you?

David Ross: Good morning, everyone. I got into publishing somewhat by accident. I was an engineer by training, but due to the recession of the early ’90s, I found the opportunities in that field rather limited, and found myself in publishing via a summer of cash-in-hand work for a company called Verso and the New Left Review. Here I am, 25 years later. Why have I stayed in it for 25 years? What started as a general sense of being involved in something meaningful, the dissemination of scholarly knowledge rather than the production of widgets, I now more precisely identify as a passionate belief in the importance of the authenticated version of record and everything that goes around it.

For the last seven or eight years, I’ve led the development of SAGE’s open access program, and I’m now the VP of Open Research. With that, I have oversight of policy and infrastructure as well as the journals program. For us, the main challenge right now, and one I find fascinating, is to try and find a viable route to OA for mainly unfunded humanities and social science disciplines.

Joan: Wow. That’s an interesting development for you-

Dave: Yes, very much so.

Joan: -from your beginnings as you describe. Well, thank you to each and every one of you for giving some background as to how you got to where you are now.

[music]

Voiceover: The Ideas to Innovation podcast from Clarivate.

Joan: Now, can we get to the heart of our discussion today and talk about the 2021 Journal Citation Reports, which launched on the 30th of June? Nandita, tell us a bit more about the history of the JCR and why this is such a big deal to so many in the research community.

Nandita: Sure. The JCR first appeared in 1975 in print as part of SCI, the Science Citation Index, the forerunner of the current Web of Science. It was a summary at journal level of the article citation network captured in the SCI. It also introduced the Journal Impact Factor, the JIF, the world’s most famous, if not infamous, journal-level citation metric. The JCR was included in SSCI, the Social Sciences Citation Index, in 1978, but it wasn’t until 1989 that it appeared as a separate product.

Since then, the JCR has evolved. It’s much more than a summary of the journal citation network. It provides article-level transparency of the drivers of journal impact, and shows the dynamic relationship between article performance and journal performance. The journal profile page provides a comprehensive and nuanced view of what makes a journal valuable to its authors and readers, such as OA contribution and the institutions and countries that create the author community.

We have more exciting developments this year. For the first time, the JCR will include content from journals in the Arts & Humanities Citation Index, AHCI, and from the multi-disciplinary Emerging Sources Citation Index, ESCI. This means the number of journals in the JCR grows from around 12,000 to around 21,000, an increase of approximately 75%. This also brings us full circle, with the JCR including all the journals in our citation database.

We’re also introducing a new metric, the Journal Citation Indicator that Martin will tell us more about in a moment. It’s the unique richness, the depth, the breadth, and the curation of JCR data that makes it such an invaluable tool, and why the annual release is such a big deal. I can still vividly remember back to my days in publishing, desperately waiting for the new JCR data to be released (it was often late in the day in the UK) and to find out how my journals fared against the competition.

Joan: That’s lovely, actually. I can imagine. I’ve got equivalent things going around my head when you race to read a review and you rip open that particular newspaper or publication to see where you rate.

Nandita: Exactly. It’s that mixture of fear and trepidation that makes it all so exciting.

Joan: Absolutely. That feeling of, “That’s wrong. That’s not fair. They’ve got it all completely wrong.”

[laughter]

Nandita: There is that.

Joan: Yes. Now, Mat and Fred, I’m interested from a publisher perspective. What does the JCR mean for you and the journals that you work with? Maybe Mat, if we could start with you first.

Mat: As Nandita said, as a publisher, it is one of those times of year when you do wait with bated breath to see what the results will be within the JCR. It’s not the only way in which we look at the assessment of our journals. As a signatory of the Declaration On Research Assessment, we at Hindawi are strong proponents of using multiple ways to assess research and journal performance. We believe that the impact factor shouldn’t be used to assess individual research performance or scientific output, or be the only measure of the output of a journal.

That doesn’t mean that we don’t see the JCR as still probably the most important and comprehensive source for assessing and understanding how our journals are serving their research audience globally. Now with the inclusion of all journals indexed in the Web of Science and the addition of this new, more inclusive field-normalized metric that Nandita just alluded to, I think the JCR will continue to play a really, really key role in the assessment of journals.

Joan: Fred, what would you add to that?

Fred: I would add that I think a keyword is trust. I think, first of all, the JCR is a resource that is compiled by professionals that are dedicated to the objective and independent evaluation of titles. This is something which is recognized across the industry. There’s real vetting that takes place of the titles. This is something which contributes to the benchmarking that all of us publishers do every year when the JCR is published.

I think I’d like to add that no matter which business model your company is working with, the journals are benchmarked on a level playing field. That goes back to this concept of trust. As a publisher, when your journals come out at the top of the JCR, people need no further convincing. It’s very, very helpful in terms of bringing credibility to open science, frankly. People see that the open science journals are landing at the top of the categories in the JCR, and this has been really instrumental in terms of contributing to the acceptance of open access and open science practices.

I’d just like to add one last thing: the JCR is very much appreciated by our scientific communities. When editorial projects are recognized in the JCR as a core resource for their communities, it does act as a stamp of quality.

Joan: Understood. Yes. Excellent. Now, Dave, if I can check in with you. SAGE is primarily a humanities and social science publisher, so this must have a big impact for you. What does the new JCR mean for your journals? I would think they’ll be excited to be included in the JCR, especially if they weren’t before.

Dave: Yes. As others have said, the power of the new JCR and this new metric combined is that it’s going to cover so much more of the literature, everything that’s currently in the ESCI and everything that’s in the AHCI. Equally important is that it’s field-normalized. I can’t speak for the entire HSS community, and it’s going to be important to watch how the research community itself reacts, but the introduction of a new metric that normalizes across disciplines, I do think, will be welcomed by HSS. As others have suggested, you never rely on a single metric when attempting to assess the impact of research, but in my opinion, the addition of another citation-based metric to the existing basket can only be a good thing.

Joan: Yes.

Dave: We’ll certainly be promoting it on all of our journal sites alongside all the existing metrics. With specific regard to the social sciences, I do think this field normalization is going to be very important. They’ve often felt like the poor cousins when impact is judged on absolute citation numbers. Normalizing this across all disciplines, I think, will be really welcomed.

Mat: I would echo that as well, Joan. Not just in the humanities and social sciences, but even in areas that were included in the JCR historically, areas such as engineering, maths, and the computer sciences, which perhaps had citation patterns that didn’t fit the assessment model the impact factor put in place. This new field-normalized metric is really going to help give clarity to the impact that those subject areas, and the journals in those areas, have as well.

Joan: Brilliant. Now that’s a really good springboard, actually, into my next question, because I want to talk about the new metric that was introduced this year, the Journal Citation Indicator, the JCI. Martin, can I ask you what exactly is the new metric? Can you tell us, or tell me, how is it calculated?

Martin: Sure. It is a metric to measure the citation impact of a journal. For a long time, we have associated citation impact with influence and utility. Basically, if a paper has been cited, it’s been used by someone. However, the value of a citation, the currency it holds, if you like, varies between disciplines. Ten citations for an article in mathematics do not carry the same value as ten citations for an article in chemistry.

Joan: Why would that be?

Martin: Well, partly, this is because of publication mechanics, for example, how many papers are published overall, how frequently they’re published, and how many references each paper contains. It’s also due to variety in research culture, for example, why people make citations, how old the material they tend to cite is, and any bias or influence they might have about who they cite. The combination of those two factors means there is variation across fields and disciplines.

The Journal Citation Indicator uses what we call field normalization to account for these differences. The value of the metric is standardized. Rather than count the absolute number of citations, we benchmark it against a baseline which represents the average or expected amount within the field. Take our earlier example of the maths paper with 10 citations. If the expected amount in that field is 10, then we would say it’s about average, and the value would be 1.0. Whereas in chemistry, for example, the average might be 20, so a paper with 10 citations is only getting about half the expected amount, and the value would be 0.5.

Joan: I see.

Martin: We calculate this metric for all articles and reviews published in a journal in the prior three-year period, and then take the average, the mean, to derive the Journal Citation Indicator value.
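
To make the arithmetic Martin describes concrete, here is a minimal Python sketch of a field-normalized average. The function names, data model, and baseline values are assumptions for illustration only; they are not Clarivate’s actual implementation or data, which also accounts for publication year and document type.

```python
# A minimal sketch, assuming a toy data model: each item carries its raw citation
# count and a pre-computed baseline (the expected citations for its field, year,
# and document type). Names and numbers are illustrative, not Clarivate's data.

def normalized_citation_score(citations: int, expected_citations: float) -> float:
    """Ratio of actual citations to the field baseline (1.0 = about average)."""
    if expected_citations <= 0:
        raise ValueError("expected_citations must be positive")
    return citations / expected_citations

def journal_citation_indicator(items: list[dict]) -> float:
    """Mean normalized score over a journal's articles and reviews from the
    prior three-year window, as described in the episode."""
    scores = [
        normalized_citation_score(item["citations"], item["expected_citations"])
        for item in items
    ]
    return sum(scores) / len(scores)

# Martin's worked examples: 10 citations against a baseline of 10 scores 1.0
# (mathematics); 10 citations against a baseline of 20 scores 0.5 (chemistry).
items = [
    {"citations": 10, "expected_citations": 10.0},
    {"citations": 10, "expected_citations": 20.0},
]
print(journal_citation_indicator(items))  # 0.75 for this two-item toy journal
```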

Joan: Excellent. There is already a fixed expectation, say, per subject?

Martin: That’s correct, yes. Those vary according to time, and field, and discipline.

Joan: Now, can I ask this next question? Forgive me if this sounds contentious, but I wonder if some people are also thinking the same thing. Why do we need a new metric? It seems there are already quite a few out there.

Mat: As Dave touched on and I echoed, adding this field-normalized standard metric that’s also applicable to everything in the JCR is an important step. It’s a much more inclusive step. It’s one that takes into account the performance of articles in relation to the rest of their field rather than just the wider literature as a whole. I think that’s really going to enable a clearer understanding of the effectiveness and impact of published outputs. I think having multiple citation metrics is only beneficial as long as we understand the roles that they’re playing and that they are not all equal but make up a tapestry, effectively, of how a journal performs.

Joan: Indeed. I think, Martin, that’s where you were heading in your description of needing the new metric.

Martin: Yes. This year in the JCR, we’re including more journals, especially those from the Emerging Sources Citation Index. For some time now, we’ve been getting feedback from publishers, editors, and the academic community that they would like to see metrics for those journals. There’s an extra reason to include this metric: it provides visibility across a much bigger dataset.

Joan: Back to you, Mat, for a second. Do you think it’s going to be useful for Hindawi and the journals you work with directly?

Mat: Yes, I think so. All of our journals are fully open access. I think it’s going to be very, very welcome for those journals in the categories of the JCR that are now being included. I think we have almost 200 journals that will now appear in the JCR. Previously, only 80 or so of those would have appeared, but now all will be included. We have strong portfolios in engineering, maths, and the interdisciplinary physical sciences. I think they’re all going to be really well-represented by this new metric. As others have said, researchers respect and pay attention to the JCR, and so having these journals included on a level footing, I think, is really useful and valuable for the wider research community.

Joan: Yes, indeed. It feels like a very big welcome to– I was going to say an exclusive club, but it’s a very well-heeled, very rigorous club. You can completely understand why, as Nandita said a few minutes ago, people would dive straight in and see where they rate. Fred, can I come to you? Word on the street is that you have a strong data-led culture at Frontiers. Now, I’m curious to hear how you think the Journal Citation Indicator could help with portfolio management and competitor analysis, et cetera.

Fred: Terrific. First of all, thanks for pointing out the fact that we do actually have a very strong data-led culture at Frontiers. I can confirm that our data team is very much looking forward to getting their hands on the new data and on these new reports.

I think what’s really interesting about it, as has already been mentioned, is the way that the normalization is being carried out and the way that they’re providing data on time series. Even the young titles in the categories can be compared to the other titles right from the very beginning. I think there’s going to be something very interesting that evolves from the fact that these time series are going to be made available.

The article-type normalization is something that I’m particularly interested to see play out. We haven’t talked about that specifically, but this idea of trying to normalize the performance of titles based on the distribution of article types, I think, is also going to provide some very interesting insights. Personally, given the way the normalization is set out, I think we’re going to see a clustering of a lot of titles around the value 1.0. I’m looking forward quite a bit to seeing exactly what kind of spread we get around this key normalization value.

Frontiers is a multidisciplinary publisher and this will, as has been mentioned, provide us a way of really being able to benchmark across all disciplines. I think that’s very, very important. I think at a higher level, the ESCI really needed its own metric. It needed to be included in a very data-driven way into the rest of the JCR. I’m confident that this new metric will be well-received by the community because of the reputation that Clarivate has in terms of being a trusted provider of this type of data.

Joan: Yes, understood. Now, just so I’m absolutely clear, can you explain the difference between the Journal Citation Indicator and the JIF? From what I understand, the JIF is the most well-known metric in the JCR and usually receives a lot of attention from the academic community. Is the Journal Citation Indicator meant to replace the JIF?

Martin: No, it’s not meant as a replacement. It’s meant to complement the Journal Impact Factor.

Joan: I see.

Martin: The main difference is that the Journal Impact Factor is a measure of the entire journal, whereas the Journal Citation Indicator is a metric that’s aggregated from the individual articles that have been published. For the JIF calculation, we include all citations to the journal, including citations to front matter such as news, editorials, letters, and so on, as well as what we call unlinked citations. These are references where we can tell which journal was cited, but the other information wasn’t accurate enough for us to determine exactly which article was cited. This total citation count for the journal is then normalized against the journal’s size, which we measure by counting the number of citable items.

For the Journal Citation Indicator, we look at the individual articles and reviews that are published and calculate this field-normalized metric. The Journal Citation Indicator reflects the average of those across a certain time period. It’s a bit like comparing two different sporting metrics about team performance. One might be an aggregate for the team, such as the number of games won, where they finished in the league table, or so on. Another might be derived from the individual player performance, such as average runs in a game, or a batting average, or something.

Usually, these are correlated. With sports, the teams that are most successful overall typically contain the strongest players. The differences between those measurements reveal variations in the composition of the team, if you like. That’s why having both of these metrics gives you slightly different perspectives on the citation impact of a journal.
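
As a rough illustration of the structural difference Martin describes, the sketch below contrasts a whole-journal ratio (in the spirit of the JIF) with an article-level average (in the spirit of the Journal Citation Indicator). The function names and figures are hypothetical, and the real calculations use specific citation windows, document types, and field baselines that are not modeled here.

```python
# Hypothetical, simplified contrast between a whole-journal ratio and an
# article-level average. Not the actual JIF or JCI formulas.

def jif_style_ratio(total_citations_to_journal: int, citable_items: int) -> float:
    """Aggregate 'team-level' figure: all citations to the journal (front matter
    and unlinked citations included) divided by the number of citable items."""
    return total_citations_to_journal / citable_items

def jci_style_average(normalized_article_scores: list[float]) -> float:
    """'Player-level' figure: the mean of field-normalized scores for the
    journal's individual articles and reviews."""
    return sum(normalized_article_scores) / len(normalized_article_scores)

# Toy journal: 500 citations in the window and 100 citable items, with per-article
# normalized scores already computed against their field baselines.
print(jif_style_ratio(500, 100))                     # 5.0
print(jci_style_average([1.2, 0.8, 1.5, 0.9, 1.1]))  # ~1.1
```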

Joan: That’s a great analogy, though. Thank you for that, Martin, because that sporting analogy clarifies it for me. Now, if a journal has got both a JIF and a Journal Citation Indicator, as I understand from what you’re saying many of them will, what then? How would you know when to use which?

Martin: That’s a good question. Like many use cases for bibliometrics, it depends on what your question is. For example, if you want to compare the overall citation impact of a flagship journal with other influential journals in the field, you would stick with the JIF. If you want to compare a journal from the Emerging Sources Citation Index against its peers, or see how it stacks up against a flagship journal, then you can use the Journal Citation Indicator for that.

If the journal you want to compare has a strong focus on, say, original research articles and doesn’t publish much front matter such as news and editorials, it might be more appropriate to use the Journal Citation Indicator to make the comparison, because you’ll be benchmarking like with like in terms of article types. The opposite applies as well. If you have a journal whose influence rests more strongly on its front matter material, then the Journal Impact Factor is going to give you a reflection of that.

Joan: Yes, understood. Thank you for that. Now, those familiar with the research world might be wondering how this fits in with the Declaration On Research Assessment or DORA. Martin, how should the research community make sure they’re using metrics responsibly, especially in view of DORA recommending less emphasis on journal-led metrics for assessment?

Martin: Indeed, that’s an important question, and Mat already touched on it earlier. Clarivate strongly supports initiatives like DORA and the Leiden Manifesto that urge people to consider the full range of contributions made by a researcher and not just rely on citation performance. Specifically for researcher evaluation, there isn’t any logical reason to measure someone’s citation impact based on the journals they’ve published in. There are much better ways to get an accurate picture of this using article-level metrics. In the Web of Science author profiles, we’ve recently added a new visualization feature called beamplots for exactly this purpose, and it incorporates field normalization.

The main thing, and others have alluded to this already, is something we’ve argued in a global research report called Profiles, not Metrics. If you squeeze the information about a researcher or an institution or a journal into a single point metric, or you compile a league table, you lose valuable information about the composition of that portfolio. That risks inaccurate or misleading interpretation of the data. It’s essential that you have access to a tool like the JCR, as it allows you to unpack the data to get a more informed perspective.

Nandita: If I could just add another point, just to hammer home that point that Martin and others have been making?

Joan: Absolutely.

Nandita: Firstly, we’ve got the argument that we should be using a range of metrics rather than a single point metric. Also, both the JIF and the JCI, as their names suggest, are journal-level metrics. The problem arises when you take a metric that’s meant to compare journals and use it to compare individuals, to compare researchers.

Joan: Yes, indeed. The question, maybe, then, Nandita, is how does one get around that?

Nandita: There are other metrics available for researchers. Martin’s just mentioned the beamplots, for example. The important thing is you take a metric and you use it for the type of item it’s meant for. You use article-level metrics to compare articles, journal-level metrics to compare journals, and metrics that compare researcher performance to measure researchers’ performance.

Joan: It’s key to have the right tool, isn’t it, to make the proper measurement, the proper assessment?

Fred: If I may, I’d also like to add that I think most institutions and most scientific communities, in general, are doing an effective job of discouraging the lazy association of journal-level metrics with the intrinsic value of individual articles or with individual researcher outputs. I think, as Nandita was saying, that this new JCI increases the richness of the set of metrics that are being used. This is the type of context that will help avoid the misuse of journal-level metrics.

Joan: Indeed, because obviously, the JCR is a big deal. Publishers have been talking a lot about metrics in recent years and how to use them responsibly. Do you think that misuse in the industry gives journal metrics a bad name, a bad rep, a bad press? Do you think that actually does happen?

Dave: I think it has, and people have alluded to this already. It’s about using them responsibly.

Joan: Yes. Within your world, the publishing world, do people call out other publishers and say, “No, hang on. You’ve misrepresented. You’ve used the wrong tool”?

Dave: I don’t think it’s so much publishers calling out publishers. As Fred was just suggesting, it’s the use of metrics to assess the value or impact of an individual researcher and their work. Metrics are loved by people who like to measure things, but some things are very hard to measure. The actual value or import of a piece of academic research is something that often cannot be measured in a quantifiable way, particularly in the social sciences, where you’re dealing with ideas and concepts rather than things with clearly measurable outputs and economic impacts.

To try and shoehorn the assessment of that into even a very broad basket of metrics is very difficult. Each of these pieces of work does need to be judged on its own merits. There is a problem there, though, with the proliferation of research: how would you do that? There is an ever-growing amount of research being published every year, and that is why people fall back on metrics. It is very important that everyone realizes that and uses as wide a range of metrics as possible, alongside qualitative assessment.

Mat: It’s a very well-made point and everybody is talking around the same areas. I think one point of difficulty is that the impact factor is a single number. It’s very easy to look at it and say, “Okay, there’s a number that I can compare against something.” I think it’s the role of publishers and those that run journals to give that broad spectrum of data and metrics. I think journal-level metrics are an important part of the metric ecosystem but, as everybody here has said, they should be used for what they’re intended, providing an understanding of specific elements of a journal’s output, and not as a proxy for all impact.

That goes for citation metrics in general. We should be, as publishers, trying to be more and more transparent about all the metrics related to a journal and giving an understanding of journal usage, geographical impact, and how journals play into economic environments, patents, and policy. Reliance on a single area actually disadvantages many other areas, not only, as Dave said, in the social sciences and humanities, but also in practitioner-led fields where citation is not the primary measure of impact. For us as publishers, the more metrics we have to build a better picture of the role a journal plays for its community, the better.

Joan: Yes, absolutely. There is a lot to digest here. It’s fascinating to get all of your insights into the evolution of such an important part of the research ecosystem. It’s really fascinating from where I’m sitting. I think Dave used a great expression. You can’t shoehorn a measurement in or out of whatever field it is. Very, very interesting. Nandita, tell me. If people do have questions about the more nitty-gritty details of the JCR, its new features, et cetera, where can they go to learn more?

Nandita: I would say the Clarivate blog would be the best place to start. You can use #JCR to search and find posts describing further details on the addition of new content, the inclusion of early access content, the Journal Citation Indicator, and details of the new features on the new JCR interface.

Joan: Thank you for that. It’s been a really lively, excellent discussion today. Thank you, everyone. Mat Astell.

Mat: Thank you, Joan and everyone.

Joan: Dave Ross.

Dave: Thank you very much for inviting me. It’s been a fascinating conversation.

Joan: Nandita Quaderi.

Nandita: Thanks, everyone.

Joan: Fred Fenter.

Fred: Thank you. It’s been a great pleasure to be part of this.

Joan: Martin Szomszor.

Martin: Thanks, Joan.

Joan: A fascinating insight into the Journal Citation Reports and what this year’s new features mean for the global research community. Please follow and listen to Ideas to Innovation for engaging, informative, and inspirational content with insights you can use. Now available on Apple Podcasts, Google Podcasts, Spotify, and other podcast directories. Share, like, review, or join the conversation with your comments on Twitter, LinkedIn, and Facebook by clicking on the share link. Thank you for joining us. Until next time, I’m Joan Walker. Goodbye.

Voiceover: The Ideas to Innovation podcast from Clarivate.

[music]

[00:38:12] [END OF AUDIO]