Gaining insight with artificial intelligence

Part 3 of Prioritizing drug safety

In part three of this blog series about best practices in toxicology, we continue the discussion from previous articles, which outlined elements of the discovery stage that can impact safety and considered a drug’s toxic effects on specific organs and in specific patients via toxicogenomics, pharmacogenomics and pharmacoepigenomics.

To reach a deeper understanding of the mechanisms behind toxicity and of the subtle genetic differences that influence a drug’s maximum tolerated dose or efficacy, we need to access and process large quantities of experimental data. Incorporating artificial intelligence (AI) and machine learning algorithms into data analysis workflows can make that possible.


Role of artificial intelligence

Artificial intelligence (AI) and machine learning algorithms can be incorporated into data analysis workflows for quick, efficient management and processing of large amounts of data in a way that is meaningful and free of human bias. To use AI effectively, we need to understand the common types of toxicology data for which it could be useful and the potential approaches to handling them.

OMICs data analysis would primarily be geared toward identifying biomarkers and constructing adverse outcome pathways through computational methodologies that prioritize and validate the genes involved in adverse outcomes. The datasets generated by high-throughput screening rely heavily on computational methods to crunch the numbers, generate dose-response curves and predict the median lethal concentration, or LC50, for a set of compounds, as in the sketch below. There is also growing demand for machine learning methods to tackle image recognition and classification in the context of toxicology.
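As one illustration, a dose-response curve can be fit with a four-parameter logistic (Hill) model and the LC50 read off the fitted parameters. The sketch below uses SciPy; the concentration series, viability values and initial guesses are illustrative assumptions rather than data from any particular screen.

```python
# A minimal sketch of fitting a dose-response curve and estimating an LC50
# with a four-parameter logistic (Hill) model. The data and parameter values
# below are illustrative, not from any specific screening platform.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, lc50, hill):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / lc50) ** hill)

# Hypothetical viability readings (%) across a concentration series (uM)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
viability = np.array([98, 97, 93, 85, 62, 31, 12, 5], dtype=float)

# Initial guesses: bottom, top, LC50, Hill slope
p0 = [0.0, 100.0, 1.0, 1.0]
params, _ = curve_fit(four_param_logistic, conc, viability, p0=p0)

print(f"Estimated LC50: {params[2]:.2f} uM (Hill slope {params[3]:.2f})")
```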

Finally, pharmacokinetic and pharmacodynamic (PK/PD) modeling can be used to predict drug-induced side effects with significant accuracy, which has huge implications for personalized medicine (see the sketch below).
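To make this concrete, the sketch below links a one-compartment oral pharmacokinetic model to a simple Emax pharmacodynamic model. Every parameter value is an illustrative assumption, not a property of any real drug.

```python
# A minimal sketch of a one-compartment oral PK model linked to an Emax
# pharmacodynamic effect. All parameter values are illustrative assumptions.
import numpy as np

def concentration(t, dose, ka, ke, vd, f=1.0):
    """Plasma concentration after a single oral dose (one-compartment model)."""
    return (f * dose * ka) / (vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def effect(c, emax, ec50):
    """Emax pharmacodynamic model linking concentration to effect."""
    return emax * c / (ec50 + c)

times = np.linspace(0, 24, 97)                                  # hours
conc = concentration(times, dose=100, ka=1.2, ke=0.15, vd=42)   # mg, 1/h, 1/h, L
resp = effect(conc, emax=100, ec50=1.5)                         # % of maximal effect

print(f"Cmax ~ {conc.max():.2f} mg/L at t = {times[conc.argmax()]:.1f} h")
print(f"Peak predicted effect ~ {resp.max():.1f}% of Emax")
```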


What are some of the challenges involved with AI?

While AI offers an exciting solution to our ever-growing problem of analyzing big data, we need to err on the side of caution. As with any new technology, there are some challenges associated with developing AI.

First, machine learning needs high-quality training datasets to make accurate predictions. Whether the task is an ADMETox prediction model, such as one that predicts a compound’s ability to block the hERG channel, or tissue pathology classification based on image recognition, the quality of the underlying data largely determines prediction accuracy (see the sketch below). Similarly, other computational algorithms used for target deconvolution and biomarker prediction rely on the quality and granularity of previously published knowledge of the interactome and of disease and toxicity associations. Thus, it is critical to draw this information from reliable, experimentally validated sources.
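As a hedged illustration of why training data quality matters, the sketch below trains a small random forest to flag potential hERG blockers from Morgan fingerprints. The SMILES strings and labels are placeholders; any real model would need a large, well-curated, experimentally validated training set.

```python
# A minimal sketch of a hERG-blockade classifier built on Morgan fingerprints.
# The SMILES strings and 0/1 labels below are placeholders for illustration only.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def featurize(smiles, n_bits=2048):
    """Convert a SMILES string to a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    return np.array(fp)

# Placeholder training data: (SMILES, hERG blocker yes/no)
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC", "CCCCCCCC"]
labels = [0, 1, 0, 1, 0]   # hypothetical labels, not experimental results

X = np.array([featurize(s) for s in smiles])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=2)
print(f"Cross-validated accuracy (toy data): {scores.mean():.2f}")
```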

Second, to be incorporated into routine drug discovery workflows, computational techniques need to be reproducible and validated. The Allen Institute for Artificial Intelligence has reported a lack of reproducibility and an absence of guidelines for reporting and publishing new algorithms, and suggested that a checklist would help streamline the use and improve the validity of computational techniques in the future; a small illustration follows below.
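Two items such a checklist commonly touches on, fixing random seeds and recording the software environment, are sketched below. These specific steps are our own example, not the institute’s published checklist.

```python
# A minimal sketch of two practical reproducibility steps: fixing random seeds
# and recording the software environment alongside the results. Illustrative only.
import json
import platform
import random

import numpy as np

SEED = 42  # fix all sources of randomness so results can be regenerated
random.seed(SEED)
np.random.seed(SEED)

# Record the environment so the run can be reconstructed later
environment = {
    "python": platform.python_version(),
    "numpy": np.__version__,
    "seed": SEED,
}
with open("run_metadata.json", "w") as fh:
    json.dump(environment, fh, indent=2)

print(json.dumps(environment, indent=2))
```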


Implications

The ballooning cost of healthcare has forced the question of whether current discovery and development methods are even sustainable. As the cost of bringing a drug to market increases each year, we need to take a hard look at the underlying factors contributing to these numbers. Drug-induced toxicity and adverse events sit at the core of these issues.

Interested in learning more? Read the full report: “Prioritizing drug safety: Implementing toxicity testing best practices in every phase of drug research.”