Artificial Intelligence: Building bridges between science and policy – TRANSCRIPT

Ideas to Innovation - Season Two


Jaron Porciello: We want to continue to provide more literacy, more conversations, more dialogues around it, because AI is not actually autonomous, right? It is controlled by humans and by companies. We do get ourselves into some pretty new territory, where we are letting machines make decisions that humans had once made.

Intro: Ideas to Innovation from Clarivate.

Neville Hobson: If there’s one big topic that’s capturing imaginations, stimulating excitement and provoking fear and uncertainty all at the same time, it’s artificial intelligence. Today, it’s hard to miss the topic across the media, social networks, business and industry journals, in and out of boardrooms and at all levels in organizations and governments around the world. It comes under many labels from AI to machine learning from LLMs and generative AI to DALL-E and ChatGPT, and a lot more.

Much discussion continues on whether AI will ultimately be for better or worse. As with any seismic shift in technology and wider society, the answer will depend on how we engage with, deploy, and use the new tools many of us are already experimenting with.

Welcome to Ideas to Innovation, a podcast from Clarivate, with information and insight from conversations that explore how innovation spurs incredible outcomes by passionate people in many areas of science, business, academia, technology, sport, and more. I’m Neville Hobson.

In this episode, we’ll focus on some of the practical matters surrounding AI in the broad business context.

I’m delighted to welcome our guest, Jaron Porciello. Jaron is an information and data scientist, associate professor of the practice in information and data science at the Lucy Family Institute for Data and Society at the University of Notre Dame in Indiana in the US. She’s also the co-founder and chief technology officer of a startup focused on providing data science and AI services to the public sector and governments.

Welcome, Jaron.

Jaron Porciello: Thank you, Neville. It is absolutely wonderful to be here.

Neville Hobson: Yeah, really pleased to have you, thank you. So we have a big topic to talk about, even as we narrow the focus a bit. So you work at Notre Dame, which is in Indiana, as I mentioned, but you’re remote, right? Working from your base in Ithaca in upstate New York. Have I got that right?

Jaron Porciello: That is correct.

Neville Hobson: Okay, so I understand that your research focuses on building programs that use machine learning tools to help humans make better decisions. Bit of a mouthful, isn’t it? Tell us more about your work at Notre Dame and what you do.

Jaron Porciello: Thanks, Neville. I’d be delighted to, and as you say, it is a bit of a mouthful. So as you mentioned, I am a professor of the practice, and one of those words actually means a lot more to me than the other, and it’s not professor, right?

So my work is very applied and interdisciplinary. So as an information scientist, I draw from fields such as computer, library, cognitive and data science, as well as information management, to think about effectively managing and leveraging data and knowledge. And I use those approaches primarily in the field of international agriculture and rural development to think about science policy issues, especially around development and the use of evidence, which the use of evidence has its own long and colorful history.

And so the communities that I work with are primarily funders and governments, international UN and multilateral agencies to first better understand global goals such as the Sustainable Development Goals, and especially with the background in agriculture and rural development, SDG2, which is the goal to achieve zero hunger by 2030.

I also work with scientists and publishers and technical agencies in low and middle income countries to better understand the breadth and depth of knowledge gaps, or conversely, where we might have better evidence and data, but maybe it’s not reaching policy makers.

And so I recently joined the University of Notre Dame, as you mentioned, after 15 years at Cornell University, because I’m really interested in integrating conversations around food and hunger and data science as part of discussions that are happening in other disciplines, such as climate, peace building, and even the liberal arts.

Neville Hobson: That’s quite a lot you’re involved in. And in fact, I’m intrigued by what you’re doing, what you have been doing at Cornell with regard to this program called Ceres2030. Can you tell us a bit about that?

Jaron Porciello: Neville, I’d be really glad to tell you about Ceres2030. It’s a really interesting project. So it started in 2018, and it started with a discussion with G7 funders who were really struggling with, what are the ways in which they were going to achieve SDG2? Where should they put their investments for maximum impact? And this is really around the field of agriculture and agricultural interventions.

And I suggested to them that we could think about using methods in evidence-based decision-making. And this doesn’t have a strong rooted history within the field of agriculture, but it does in health and the medical field. So working with funders in this way, we came up with a strategy to say, let’s work with some of the best scientists in the world to evaluate agricultural interventions. However, that’s a huge topic, right? That’s an absolutely enormous domain. And so what we agreed to do was bring in AI (this was in 2018, so before the conversations we’re having today) to think about how we bridge that gap between what’s happening in the scientific field, where we have lots of technical knowledge, and the methods we can use to better understand evidence-based decision-making and ways that we can achieve global goals, such as Sustainable Development Goal number two.

So that’s exactly what we did. Using AI, I was able to identify early and mid-career researchers from around the world. We had 85 researchers working together in interdisciplinary teams, each focused on a pretty specific question in agricultural development. You know, what are opportunities to improve the uptake of climate-resilient crops and plants, for instance?

And you know, by focusing in on these specific questions, we were able to better align them to policy goals. And, you know, we started to galvanize and get people really excited about this question. And I was really pleased to work with the publisher Nature to put out a special focus collection of eight systematic evidence reviews in 2020, all of which went through peer review, with 85 scientists working in interdisciplinary groups. So you can imagine, this is really exciting to see how we can bring people together in this way and then leverage the tools of AI to enrich what each group is looking to do and the knowledge that they can gain.

Neville Hobson: That’s really interesting. That’s a great foundational element I think you’ve mentioned. It actually leads me to think about something you’ve spoken about. And indeed, you and I have talked about this before, which is this whole notion of building bridges between science and policy, where AI is an instrumental element in how you do that. I think you started touching on that in what you were saying about what you’re doing with Ceres2030.

Can you talk a bit more about whether that actually fits in this context, and what actually happens in the policy environment? Tell us more about that aspect of it.

Jaron Porciello: Sure, I’d be happy to. I mean, I think both science and policy, they have their own cadence, right? They have their own calendars, their own belief structures, and they follow their own processes that are sort of maybe completely opaque to each other, right?

So in the scientific field, we know about the peer review process. In policy domains, they know about legislation, how to push things through their own policy cycles. The thing that’s really interesting, though, is that we’re all looking to achieve similar goals, right? We want to end poverty. We want to ensure that our climate is improving and not getting worse. But in order to do that, we need to have a common bridge, a common vocabulary where we can bring these ideas together. And again, going back to this methodology of evidence-based decision-making, it focuses on a few key things. It focuses on who are the populations we’re trying to reach. What are the interventions? What are those outcomes? And what can we use as a basis of comparison? But again, the communities may think about the problems in similar ways, but they’re not using similar words, nor are they even looking at the same evidence spaces. And AI really gives us an opportunity to think about fusing those two worlds, through this simple way of saying, we are all asking the same questions. Can we use AI to ensure that what I say to you means the same thing? And can we test what that looks like, between scientist and policymaker, going back and forth?

Neville Hobson: Got it, okay. So I can tell from how you’re talking about this, Jaron, that this is something you really, really are interested in. It’s quite clear to me. And I’m just wondering, to draw out a little bit more from you in that area, what drew you into this in the first place, this sort of academic area that’s a lot more than just research? You are very passionate about this and I’m curious to know what drives you in it. Is it the AI element or is it more about the people? What is it exactly?

Jaron Porciello: No, thanks Neville. Yes, so it is an area that…

Neville Hobson: Hahaha

Jaron Porciello: I am really passionate about. And I have to say, you know, this term socio-technical is really important to me, because socio-technical brings the social with the technical. And I think it’s really important to have that balance. But I’m really interested in the ways that people work, right? How do we work? How do we self-organize? How do we settle debates? How do we build consensus with each other? You know, neighbours, communities, going all the way up to global governance. So how people work is really interesting and important to me.

And I think overall my experiences working for the past 15 years with farmers and scientists and students all across the world, but especially in sub-Saharan Africa and Southeast Asia, on issues related to agriculture and data collection, helped me understand the value of data as something much more than what exists in the flat files or databases or things that we interact with technically.

So data collection and reporting in the knowledge economy that we all live in is what gives both visibility and validity to people and ideas. And I think we see this all the time within marginalized communities, but within any community: if we don’t have data about the people or the issues, then we cannot raise the agenda. And I’ll just say one more thing quickly here. We talk about the negative issues of AI a lot, but I think the beauty of AI is that it does give us an opportunity, where we use it responsibly and respectfully, to actually identify more data about people and use that as an input for decision-making and championing good causes.

Neville Hobson: Hmm, that’s really a good narrative. Let’s explore the business landscape and how we see it today, in mid-2023. With AI, it seems to me that we’re at a crossroads, with many routes to choose from in terms of the direction we might want to go in the business context. In our context, we’re looking at outcomes, aren’t we? Not so much the tools or the means, whether it’s public ChatGPT or generative coding AI, but what they enable us to do.

Let me set the scene a little bit in that area. For example, Clarivate has invested in a new, internally developed AI tool to help us identify outlier characteristics that indicate that a research journal may no longer meet our quality criteria.

This new technology has substantially improved Clarivate’s ability to identify and prioritise our re-evaluation efforts, underpinning research integrity to ultimately increase trust in the scientific record. So that’s one example.

How do you see the landscape? And one thing I think you touched on in your earlier comments: what about the risk and the downsides of all of this?

Jaron Porciello: Thanks, Neville. Yeah, we’re living in such a rich time, actually. And it’s incredible to see how, just within the last six to eight months, these conversations around AI have really shaped so many things that we do, right? And you and I work in a knowledge business, right? We work in a knowledge economy. So I think this touches us every day, but again, this isn’t limited to people who are working in the knowledge economy. AI is really touching so many different people in so many different industries.

And the opportunities for new regulations, for thinking about the ways in which we manage risk and the ways in which we also manage fear, I think are incredibly important. It’s also quite fun to see what people are actually experimenting with and where they’re finding value and utility in these different types of tools.

So let me maybe just pause there for one moment, because I want to go back to the example that you gave and this idea of hybrid human intelligence, which is what you’ve just described, right? This is the way I think most of us are used to working with artificial intelligence: it may be unsupervised, but it’s not autonomous. And I think the conversations we’re having right now are leading people to believe that maybe AI is autonomous. And that’s something we want to continue to provide more literacy, more conversations, more dialogues around, because AI is not actually autonomous, right? It is controlled by humans and by companies, right? The new tools that we’re putting out are created by programmers, by computer scientists. So I want to help us think through the fact that AI is really not an autonomous engine at this time, but it is driving a lot of new services. And if companies are not willing to invest in regulations, practices, protocols, whatever name you want to give it, that help ensure that humans are still making the decisions at the end of the day, then I think we do get ourselves into some pretty new territory, right? Where we are letting machines make decisions that humans had once made.

Neville Hobson: Yeah, yeah, there is a risk of that, I think. And indeed, listening to some of the voices who are loud in the public square, let’s say, talking about AI and what it could do, kind of accelerates some of the concerns about autonomous activities, you know, Skynet is upon us and the Terminator, all that kind of science fiction stuff that enters into the picture.

And that’s inevitable, I suppose, with something like this. Hence governments talking about, listen, we’ve got to get a handle on this. You hear words like guardrails and regulation and so forth.

So we’re at the very early stages of this, and I was actually having a conversation with someone just a week or so back about experimentation in organizations to understand how this works. Many organizations have started to do that. And these are the companies and businesses that aren’t talking about this loudly in public; they’re just getting on with trying to figure out how this works in their own organizations. That’s a pretty good approach, it seems to me. I think the answer, in that context, is that you need to understand what this thing actually is that everyone is calling artificial intelligence. It’s not just a single thing, and there’s a lot to learn. There are risks and downsides, and we rely on governments in particular to steer us in many areas, but it’s not just governments. That’s a big topic in itself, Jaron, and we could spend another half an hour on it, but we don’t have time for that bit. So I think you’ve helped shine a good spotlight on where we currently are at this midpoint of 2023 that I mentioned earlier. A lot of people are experimenting with this; we have an example from Clarivate, and many other companies are doing similar things. Your introduction to the work areas you’re in, particularly agriculture, which is a strong interest of yours and where a lot of your experience has been derived, is really interesting and highly relevant to the state we’re at and how this is moving along. So you’ve given us a good sense of the state of, let me call it, pragmatic AI in 2023.

Let’s peek ahead a bit. We’re looking at a horizon a decade from now. Say it could be more, it could be less, but 2033. Let’s take a look at that. And you know, you could look at the tea leaves in your cup or your crystal ball and tell us some thinking here, because I’m really keen to know what you see in this whole notion of bridging the gap between science, policy, public and indeed private sectors, all of that together.

How do you see all of this evolving in the next decade? Is that too short a time span, or too long? What should we be expecting in this broad area by 2033? What’s your thinking on this?

Jaron Porciello: Such a really good and important question. And I think it’s important, too, when we look at time horizons: we’ve seen so much progress within just the last six to eight months that I don’t see any reason for us to think we won’t continue to see really accelerated progress with the AI technology itself. But where we’ve centred on some themes in this conversation is, what’s the human element as part of all of it? And for me, I’m going to take a little bit of an optimistic view here. Because in 10 years, I actually think AI is going to help us a lot. It’s going to help us understand the traceability of knowledge, where knowledge originated as its own data point, so that we can mitigate the risk of bad actors who want to perpetuate misinformation. But the optimistic part of that is that we are bringing new voices, new knowledge, new data into conversations that we haven’t been able to hear from before. And if I connect this back to the example you gave of Clarivate and indexing journals, I think one opportunity there, on a very broad scale, is to ensure that when AI is making decisions about where the high-quality journals are and what that selection criteria includes, we’re able to reach more regional journals that have just been sitting in repositories or been locally indexed and are not really part of the global conversation, in large part because the technology just hasn’t been powerful enough to bring those systems and those journals into a broader academic or scholarly communication system, and we had to rely too much on human curation.
So I would say this idea of traceability of knowledge is something I really hope powerful computation and these systems are able to solve for, as well as thinking about inclusivity and data justice a little bit more. We really are not getting the full picture of knowledge currently in our AI systems, right? What they’re trained on currently is not representative, I would say, of so many types of perspectives, opinions, traditional knowledge, indigenous knowledge. All of these other really important knowledge and evidence systems are not currently included because they’re not digitized, they’re not available in large corpora. But the technology is powerful enough to help us think through how we might be able to do that in the future.


Neville Hobson: Yeah, I agree with that assessment, absolutely. It makes me think a bit about something that started probably a decade ago, when tools emerged that could process vast quantities of unstructured data: not stuff in databases or tables, but conversational data, words people mentioned in conversations that are recorded in some form, from which you can pluck out the words. That’s a very difficult task for humans to do. Think of the legal profession as a good example, but journals are a great one too, to sort what’s good, what’s not, what’s this about, what’s that about. Now we’ve got a means, and this is accelerating, it seems to me.

So that’s maybe a second conversation we might return to one of these days, Jaron. But for now, this has been terrific. Thank you so much for sharing your knowledge and insights on a topic that to many is of concern and to others is very exciting. It actually reminds me of something I noticed myself before we started speaking today. If someone were to ask me, is AI an opportunity or a threat? The answer is yes. That’s the answer to that question. Is it an opportunity or a threat? Yes.


Jaron Porciello: Exactly.


Neville Hobson: Depends on your point of view. So again, thank you for being with us on this, on this exciting topic. Thank you so much. Appreciate your time.


Jaron Porciello: Thank you, Neville. It has been a real pleasure and I look forward to all of those follow-up conversations. Thank you so much.


Neville Hobson: You’re very welcome. So you’ve been listening to a conversation about the pragmatic uses of artificial intelligence with our guest, Jaron Porciello, information and data scientist and associate professor at the University of Notre Dame.

For information about artificial intelligence and how Clarivate uses it, visit

We’ll be releasing our next episode in a few weeks’ time. Visit for information about Ideas to Innovation. And if you enjoyed this episode, please consider sharing it with your friends and colleagues, rating us on your favourite podcast app, or leaving a review.

Until next time, thanks for listening.

Outro: Ideas to Innovation from Clarivate.