Conversations in Healthcare
Presenter: The BioWorld Insider Podcast.
Lynn Yoffee: This is the BioWorld Insider Podcast. I’m Lynn Yoffee, BioWorld’s publisher. The increasing use of artificial intelligence technologies across all stages of drug development is presenting interesting new challenges for regulators around the world.
From discovery and lead optimization to trial data recording and analysis, the technology’s applications raise new questions about the transparency of algorithms and their meaning. But even trickier has been a question of growing importance for companies working with AI to accomplish these ends.
Can an artificial intelligence algorithm be an inventor? If so, can the AI system apply for or receive a patent? As a sci-fi fan, I’m sure some of our listeners might grin along with me after I admit that I can hear echoes of HAL 9000 saying, “I’m sorry, Dave. I’m afraid I can’t do that.”
Today, joining Michael Fitzhugh, BioWorld’s managing editor, to discuss these questions is Ryan Abbott, a professor of law and health sciences at the University of Surrey School of Law and an adjunct assistant professor of medicine at the David Geffen School of Medicine at UCLA. He also leads the Artificial Inventor Project, which is pushing to expand IP rights to the products of machines.
We also have Jim Belfiore, who is Senior Vice President of Innovation. He’s also our colleague here at Clarivate, BioWorld’s parent company. Jim has led, advised, guided, and taught pioneers across a multitude of industries, including aerospace, defense, medical devices, pharma, and more. Prior to joining Clarivate, he led the innovation consultancy Sensorinus and held a variety of leadership positions at IHS Markit. Michael, over to you.
Michael Fitzhugh: Thanks, Lynn. Ryan, Jim, welcome.
Ryan Abbott: Thank you. Very excited to be here.
Jim Belfiore: Thank you. Really appreciate the opportunity to talk today.
Michael: Both South Africa and Australia made headlines when patent offices in each country made interesting but different decisions on patent applications made in the name of the artificial intelligence system DABUS, short for Device for Autonomous Bootstrapping of Unified Sentience. The subject of the patent sounds funny in the way that even the most amazing patents sometimes can. It’s for a food container and devices and methods for attracting enhanced attention. Ryan, what is this attention-seeking food container, and who or what invented it?
Ryan: Well, you’ve delved right into the most nitpicky part of being a patent attorney, which involves technically combining two national applications for different inventions into an international one. But that’s okay, because I think people are very interested in patent law and patent attorney work generally, so it was easy to get this project into public consciousness.
We had an AI that invented two new things in the sense that it automated what it is that traditionally makes someone an inventor. One of the things that it came up with was a novel food container based on fractal geometry and the other was essentially a light that flashes in a particular way to attract attention. We filed separate applications on these, but at the international stage, we combined them. You get two inventions for the price of one issued in South Africa, although in some jurisdictions, they will split back out.
Michael: How have the patent offices received these patent filings in DABUS’ name?
Ryan: Well, substantively, they have received them well. We filed these initially in the UK and in Europe because they would do a substantive examination before looking at the inventorship issue. Both inventions were found to be new, non-obvious, and useful, which is what is required to get a patent generally. When we corrected the filings and said, “Actually, we don’t have a traditional human inventor in this case,” they were rejected or denied by the US, the UK, Germany, Europe, and Australia.
South Africa was the first to approve it. It was approved in July. The AI is listed as the inventor, but the owner of the patent is the AI’s owner, Dr. Steven Thaler, who owns and operates the artificial intelligence. A few days after that, the Federal Court of Australia ordered IP Australia to reinstate our applications on the basis that there was no prohibition against patenting something just because an AI made it and no prohibition on listing the AI as the inventor. At least in our case, Dr. Thaler was best entitled to the patents. The US, the UK, Germany, and Europe are still working on it, but they’ll get there.
Michael: Just as a side note, can you tell me anything about these inventions, or does the content of the inventions maybe not matter? Clearly, they don’t matter as much as the issues at hand, but I can’t help but wonder, is there any purpose for either of them, and does that matter?
Ryan: Well, indeed, the applications matter as much as really any patent someone files. Most patents end up not being enforced or being terribly commercially valuable, but I think these two were just about as good as anything other than a cure for COVID. One is a beverage container that basically looks like the outside of a snail shell, which is an example of fractal geometry. That’s something that might help with transporting or storing beverage containers, or helping robotic arms grip and transfer them. The other one is [unintelligible 00:05:55] that could attract attention. That might be useful in an emergency situation. If a plane crashes at night and you’re looking to draw attention from either a rescue crew or perhaps an AI that is looking out for someone to rescue. Both inventions have received some commercial interest in licensing, although that’s a little tricky with the current patent situation.
As you point out, these could have been any two inventions. What everyone is interested in this case is the fact that we didn’t have someone who did the traditionally inventive part of the inventive act. I didn’t say that well, but you know what I meant, I think.
Michael: Yes. That is really the next thing I want to ask about, and I’m going to direct this to Jim. I’ve worked for Clarivate for about seven years, and as I’ve gotten to know the company better, I’ve gotten to know its core beliefs and mission better. One of those core beliefs that I’ve become more familiar with is that human ingenuity can transform the world and improve our futures. What’s your take on what Ryan’s talking about here? Should AI systems enjoy the same inventorship rights as humans?
Jim: That’s a great question, which just cuts right to the heart of the matter. In terms of AI systems, one of the things that I find fascinating is that we are talking about, and by we, I don’t just simply mean in this particular podcast, but the topic before all of us in many industries and in governments, we are taking for granted perhaps the definition of what an AI system is or what an Automata is. It’s very similar to when we talk about topics such as innovation. I can put five people in a room and ask them what is innovation and I can get 10 descriptions.
When we ask the question, should an AI system enjoy the same inventorship rights as a human, we need to have a very clear definition as to what that system actually is. Now, this is nothing new. We can go back to the Turing test. There are a number of different ways of parameterizing it, and many scientists and people, including Dr. Thaler, the co-creator of DABUS, have weighed in on what makes for a self-organizing system. We need to have a clear definition that we all agree upon as to what that system is.
Once we have that definition, then, at least in my opinion, can we start to ask whether that kind of system enjoys, or should enjoy, a certain level of rights, whether to inventorship and, perhaps more importantly, to ownership of property or of concepts. I would say that right now we are in a period of transition, as has happened throughout history. As human beings have gone through the industrial revolutions, through significant changes in how we organize and how we work, and as we have enabled tools and technologies to help us improve, these kinds of existential questions have arisen.
With AI, in particular, we’re now at a point where we are creating intelligences and intelligence systems that are doing things that human beings are not capable of doing, and they are providing ways of insight for human beings that otherwise would not be possible. A quick example would be Alexander Fleming’s discovery, for example, of happenings in a Petri dish that led to the field of antibiotics.
I will not sit here and say that AI cannot have rights as inventors, I would say that we don’t understand yet what a clear and applicable definition of an AI system is, at least such that we can apply laws to it. Now, that’s just my opinion. In fact, it’s very much in the realm of Mr. Abbott and others to help define that. This is something that is a very, very important question to resolve. That’s my take at least.
Michael: What characteristics would help you, or society in general, more clearly make that judgment, would you say?
Jim: That is also a very important and a very, very detailed question. I would say this independent of the characteristics, because they don’t necessarily have to be human characteristics, although human characteristics are certainly the ones that would be most recognizable to other human beings. In fact, in many AI systems, at least as I understand them (I am a layperson and an enthusiast of the technology), the ability to create a memory and the ability to have an emotional reaction may very well be parameters that can be used to measure an artificial intelligence or self-organizing system and its ability to create ideas.
I would say that the inspectability of artificial intelligence systems is a characteristic that will become of paramount importance. Right now, we can identify that an artificial intelligence system, especially one that leverages machine learning that is really trying to develop concepts based on data that it’s using to create its models and extend on them, we know when they are creating insights, but we don’t necessarily know what led them to a specific insight. I think it’s important that as we define what an AI system is, that can have rights of inventorship or rights of ownership, that the inspectability of those AI systems would be of paramount importance.
Ryan: Michael, if I might just hop in on that, I think that was a great answer from Jim. I think your point was well taken about human ingenuity transforming the world, and that really isn’t so different even once we have AI ingenuity transforming the world. We are moving increasingly from a paradigm in which we want to encourage people to do things like find new drugs to one in which we want to encourage people to build machines that will do things like find new drugs.
Fundamentally legal rules and the benefits of legal rules are designed for human beings and society, but it is through things like patent protection of AI-generated inventions that we will influence the behavior of the people who make use of and build AI that will result in more innovation and more social benefits for everyone. Even though AI is stepping into the shoes of people and doing human sorts of things, at the end of the day, the rules are always there for our benefit as a society.
Sometimes we benefit more when we turn over tasks to AI that AI can do a better job of doing, whether that’s finding a new drug, driving a car, or diagnosing melanoma in a suspicious skin lesion. Even though we listed in our applications an AI as an inventor, we didn’t do that as a matter of providing an AI any sort of right. AIs aren’t legal persons. They can’t own property, and we’ve never suggested they should own property, because even though you could change the law, an AI wouldn’t care about getting a patent, and it would be awfully difficult for it to exploit one.
For us, it was much more a question of, are the incentives right for encouraging socially valuable output from AI? Are we being transparent about how an invention was generated? Do we want to allow someone to essentially fudge the matter and claim credit for work they haven’t done? At the end of the day for us, even though we’re listing an AI as an inventor, it’s a very human-centric framework that we’re advocating for.
Michael: I have trouble grasping the transparency part, less so the incentive part. Clearly AIs are, at some point in their lifespan, if you want to call it that, generated and put together by humans. If a human creates an AI, why is the human not the inventor?
Ryan: This gets to Jim’s point about defining artificial intelligence. It is challenging when you put five people in a room and get 10 different definitions of innovation, and even harder when you start regulating for innovation; you really need to have a standard set of definitions. Right now, as people are trying to regulate AI around the world, people have very different ideas of what that term means. Just as with the definitions of innovation and AI, inventorship is not only a concept that differs significantly between jurisdictions but one that often isn’t that well defined within jurisdictions. It’s one of the mushiest concepts in patent law. When you have a research group, and the group says, “Yes, these four of us are inventors,” that has different meanings to different people.
Of course, there are people involved all over AI-generated inventions, at least in designing the AI, and if you ever had an AI that made an AI, well, someone started the AI in the first place or started with the original AI. The question for us, and for me as a patent attorney, is: what is the thing that makes someone an inventor? It’s generally not finding a problem to be solved. Sometimes it could be programming an AI to solve a specific problem, but not if the people programming the AI and the people using the AI are different people and the programmer may not know specifically what the AI is being used to solve.
Sometimes it is recognizing the value of AI output, but AI can recognize the value of its own output, depending on the design of the AI. In our cases, it wasn’t that there weren’t people involved; it’s that no one did what it is that traditionally makes someone an inventor, and that activity was at least functionally automated by the AI. I think an analogy is the self-driving car context. People design self-driving cars, sometimes very large teams of people, but when we get a fully self-driving car that will take me from point A to point B, there really isn’t a sense in which a person is directly responsible for the driving. The AI really is functionally automating the activity of driving.
Michael: In terms of incentivizing innovation, are our traditional incentives, economic and societal recognition, falling short in driving innovation, and does AI offer something additive there? Tell me more about that.
Ryan: Though people have tended to focus on the AI inventorship issue, it is certainly not the most important commercial issue with these applications. The most important commercial question is: if you don’t have a traditional human inventor, can you get a patent at all? The outcome of the US and UK patent offices saying, “Well, you can’t list an AI, but also there’s no person you can list in this case based on what you’ve disclosed,” is that you can’t get a patent at all. Increasingly, companies, for example, Siemens, have been publicly stating that they have had instances in which they were unable to apply for patents because of this phenomenon.
If you look, for example, at AI making music, that’s something AI has done for decades, but the music has been really bad; it’s only been academically interesting. AI is getting good enough, though, that the music is now becoming tolerable. In the next several years, AI may start making music that’s commercially very valuable. If it does, there are commercially significant implications: if an AI makes the best-selling song, can you own or license it, and sell it to movies, video games, and streaming services? The answer right now differs by jurisdiction.
Similarly with AI inventions, not only do we now have instances where you don’t have traditional human inventors, but as AI continues to improve, we’re going to get a lot more of that. Under current law in the United States, the law basically says to companies: even if you can use an AI in your research and development more effectively than a person, you can’t do that if you need a patent. You have to rely on people to do the research, or keep a human in the loop, but that may not be the most efficient way to innovate, for example, when having AI optimize industrial components or repurpose drugs for new indications. That is really the commercial and innovation harm being caused by current policies.
Michael: I want to get at this element of risk a little bit more. Ryan, what’s at stake if societies don’t create a legal recognition for AI inventorship?
Ryan: There are a number of issues raised by these cases. Again, as I mentioned, I think commercial issue number one is: can you get protection for things invented by AI? There are ways to do that, for example, by legally deeming someone to be an inventor. That’s something the United Kingdom does with AI copyright: they take the human producer of the work to be the author when there isn’t a traditional human author. You could also say that for an AI-generated invention you don’t need to list an inventor, or find some other way of saying that someone who doesn’t traditionally qualify qualifies here. All of which, though, is in some sense acknowledging that AI is automating inventive activity.
If we don’t have a system for that, then it again says to businesses, “If patents are critical to your business model, then this is not something that you can do or use.” As AI continues to improve, I think in the coming years we will see it become really the standard way that people solve problems in some industries. I do think optimizing industrial components and repurposing drugs are two examples where AI is used very heavily. If you’re in an industry like the pharmaceutical industry, where you really need a patent, then you have to design systems with people in them to point to as inventors, or fudge the thing and hope nobody notices. The end result is that outdated laws are really interfering with the pace of scientific progress.
Michael: Jim, you told my colleague [unintelligible 00:21:11] Sammy that there might be ripple effects from rejecting the prevailing anthropocentric view of the patent system. Like what?
Jim: Again, there are a lot of cause-and-effect relationships here. As we talk about this in terms of risk, engineers might call this FMEA, failure mode and effects analysis, for example. I’m not saying that AI is a failure mode, but if we were to propagate through the unintended consequences, there are actually a variety of consequences and controversies that are going to be associated with AI as inventor, regardless of whether or not an AI is then allowed to be named as an inventor on a patent.
You talked a little earlier about incentivization, and this is a topic that’s under a lot of discussion with some of my colleagues here at Clarivate. AI inventorship potentially removes capital barriers to research and development, and that could create IP asset inflation, which could cause erosion in the value of invention and innovation and could be rather upsetting to R&D incentivization.
It could incentivize trade secret usage, removing disclosure from the public sphere and inhibiting general technology development, including that of some inventing machines. Now, I could list a number of other “here’s what could happen” scenarios; the problem, of course, is that nobody’s got a crystal ball that can say anything very specific. My crystal ball is a bag of glass shards, so I’ve long since stopped using it. I would say that rejecting AI as inventor is as bad an outcome as embracing our robot overlords with open arms and simply saying whatever is decided would be best.
In fact, I’m reading a book right now, as I may have mentioned earlier: The Age of AI: And Our Human Future, by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher.
They’ve got a really good couple of chapters toward the end of that book looking at what happens if we accept this and what happens if we reject it. The react-and-adapt process is probably the most common one for human beings, and in some regards, it’s going to be the least efficient way we grapple with these issues.
If we do nothing, then we’re going to be in a reactionary mode with regard to each of the different cases of AI inventorship that surface. We’ve seen this with technologies that don’t necessarily go through a patenting process, whether it’s the ethical use of deepfake technologies or related uses in some pharmaceutical fields. What I would say is that if we reject the technology, if we simply say, “No, this cannot be allowed to be considered an inventor; this cannot be something that can own property,” I suspect the genie is already out of the bottle. There will be actors in other parts of the world that, regardless of whether they are seeking patent protection, are going to continue to push and evolve the technologies and the capabilities. Rather than address these issues on a stage and with parameters of our own choosing, it will be done in an arena that is not of our own choosing. That could be far more difficult.
Michael: Ryan, how do you think this issue is going to develop in the year ahead? Jim is talking about the shards in his bag. [chuckles] Do you see a picture of potential compromise ahead, in which legislators, inventors, and IP experts come together to amend laws and share definitions in a way that makes room for the assignment of patents to AI inventors?
Ryan: Well, again, just to be clear, the AI in this case will never own a patent, and we have never argued the AI should own a patent. Again, not because it would be a philosophically terrible outcome. In point of fact, most patents are owned by corporations, and a fairly small number of corporations have an awfully large number of patents.
The idea of an artificial person owning property or owning patents is really something we already accept fairly well. We do that, though, not because companies are morally entitled to rights, but because they’re members of our legal community in ways that benefit people. We think that because you can have companies as amoral artificial legal persons, it makes it easier for commerce to take place and promotes entrepreneurship.
Ultimately, we as a society get more and better sorts of outputs. Again, I don’t know that that works well in this case with an AI, simply because an AI is less sensitive to receiving property than the company that owns it would be, and it would also be very difficult to enforce.
In terms of recognizing that AI is factually inventing, that it is important to protect this output, and that it is important to be transparent about how innovation occurs, I actually, surprisingly, do think that we are headed in the right direction. Whereas a few years ago people did seem to think it was a bit far out, now it is much less the case that people wonder whether this thing is happening. We have already had not only the decisions in South Africa and Australia but also legislative activity in India, where a parliamentary consultation recommended that the law change explicitly to protect this.
The president of South Korea recently announced that AI-generated inventions should be protected, and the UK Intellectual Property Office, while it has been contesting our applications, is also now running a public consultation to potentially make a recommendation to Parliament about changing the law. That does seem to me like a very real possibility right now.
I think that countries, especially countries that are IP exporters, like the US and the UK are recognizing that AI and IP is going to be a critical area for them in the near future for their industrial strategy and for their IP strategy. There’s a growing consensus that it is vital to protect AI-generated innovation.
Michael: Thank you so much for illuminating this issue for us and educating me on it as we went, and thank you both for engaging in this matter in a way that is clearly driving innovation forward. I really appreciate your time.
Ryan: Thanks so much.
Jim: Thank you very much.
Lynn: A truly fascinating discussion. Thank you both, Ryan and Jim. As always, BioWorld will continue to keep you informed of all the most important scientific, clinical, and business updates. That’s our show for today. If you need to track the development of drugs, turn to bioworld.com, follow us on Twitter, or email firstname.lastname@example.org. If you’re enjoying the podcast, don’t forget to subscribe. Thanks for joining us.
Presenter: BioWorld published by Clarivate is a subscription-based news service, but all of our COVID-19 content, over 6,000 articles, and data entries since the start of the pandemic are freely accessible.