Passing the (FDA) test: how data can make clinical trials more effective

The future of pharma and healthcare hinges on how successfully we manage big data


As outgoing FDA Commissioner, Dr. Scott Gottlieb made his thoughts on clinical trials clear. He called for the clinical trial process to become more innovative and streamlined, providing a shorter time to market for new drugs, stating:

Opportunities can be delayed or stymied by a clinical research enterprise that is often extraordinarily complex and expensive. Efforts to streamline medical product development based on advancing science can be frustrated by legacy business models that discourage collaboration and data sharing, and the adoption of disruptive technologies that make clinical research more effective. Without a more agile clinical research enterprise capable of testing more therapies or combinations of therapies against an expanding array of targets more efficiently and at lower total cost, important therapeutic opportunities may be delayed or discarded because we can’t afford to run trials needed to validate them.

With only 14 percent of all drugs in clinical trials winning FDA approval, and the cost of drug development still rising, it’s becoming increasingly necessary to make clinical trials more efficient. However, pharma companies are also facing new challenges in drug development – including the “data deluge” overwhelming researchers. Yet even as organizations struggle with the sheer volume of data available, that same data could hold the answer to improving clinical trials. As my colleague Tim Hoctor, VP of Life Science Solutions Services at Elsevier, explained:

Thanks to advances in genomic sequencing, the digitization of existing research, and the rise in connected and wearable devices, new data is continually being produced. And hidden among these millions of published records, and the data pharma companies have collected themselves, there are valuable insights waiting to be discovered. In order to make use of this data and find the insights, organizations must take a ‘big data’ approach. This now means looking to tools like AI and machine learning for help analyzing datasets.

As an information analytics company, Elsevier’s “bread and butter” is data, so we have been exploring what we can do with big data to help improve clinical trials. As a first step, this led us to look at the safety of trials – and to take a step back in the process to see how animal tests inform trial design.

What can we learn from animal testing?

In 2018, alongside Dr. Thomas Steger-Hartmann, head of Bayer AG’s Investigational Toxicology department, we conducted a big data research project to answer this question. The study measured how well animal testing predicts human reactions to drugs. For 3,290 approved drugs and formulations, we analyzed more than 1.6 million adverse events reported for both humans and the five animal species most commonly used in FDA and EMA regulatory documents. Our investigation revealed that:

  • Some animal tests are more predictive of human response than others, depending on the species and symptom being reported.
  • When it comes to cardiac events, such as arrhythmia, animal and human responses are often very similar.
  • However, some events that were identified in animals have never been reported in a human.
  • And some events observed in humans have never been reported in an animal study.

The findings will help pharmaceutical companies decide which animal tests are the most predictive for specific drugs or compounds. Then, when a drug reaches human trials, it will already have been tested in the most suitable way, making those trials safer and more relevant.

This approach would also help reduce the amount of unnecessary animal testing, especially where the results of animal tests have been shown to differ from the effects a drug will have on humans. While providing tangible outcomes that could improve the safety of clinical trials, the study also emphasizes the benefits of using publicly available data in clinical trial design.
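To make the idea of concordance concrete, here is a minimal sketch of the kind of calculation such a study rests on. Everything in it – drug names, species, events and counts – is invented for illustration and is not data from our study; the metrics shown (sensitivity, specificity and the positive likelihood ratio for each species–event pair) are standard ways of expressing how predictive an animal finding is of the corresponding human one.

```python
# Minimal sketch of an animal-to-human concordance calculation.
# All drugs, species, events, and counts are hypothetical illustrations,
# not data from the Elsevier/Bayer study.

reports = {
    # (drug, species, event) observations; humans are one "species" here.
    ("drug_a", "rat", "arrhythmia"), ("drug_a", "human", "arrhythmia"),
    ("drug_b", "rat", "arrhythmia"),
    ("drug_b", "dog", "emesis"), ("drug_b", "human", "emesis"),
    ("drug_c", "dog", "emesis"),
}
drugs = {drug for drug, _, _ in reports}

def concordance(species: str, event: str) -> dict:
    """Treat the animal finding as a diagnostic 'test' for the human
    finding and tabulate a 2x2 table across all drugs."""
    tp = fp = fn = tn = 0
    for drug in drugs:
        animal = (drug, species, event) in reports
        human = (drug, "human", event) in reports
        if animal and human:
            tp += 1
        elif animal:
            fp += 1
        elif human:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    # LR+ > 1 means the animal finding raises the odds of seeing
    # the same event reported in humans.
    lr_plus = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    return {"sensitivity": sensitivity, "specificity": specificity, "LR+": lr_plus}

for species, event in [("rat", "arrhythmia"), ("dog", "emesis")]:
    print(species, event, concordance(species, event))
```

At the scale of the real study – thousands of drugs and more than a million adverse events – the same tabulation simply runs over far larger sets, which is where automated pipelines become essential.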

What can we learn from existing trials?

After our animal testing project, the next step was to consider how we can use data to analyze the safety of drugs that make it to market. My preliminary study compared the events reported in humans during pre-marketing trials with those reported post-marketing.

This project is a good example of how data can be used to pinpoint specific issues and draw conclusions that can help make trials more predictive. For example, in a clinical trial setting, patient compliance is likely to be much higher than when the drug is introduced to a wider patient group after it goes to market.

This means a trial isn’t a true reflection of how patients will actually take the drug outside a clinical setting – and comparing the two datasets can better predict a drug’s true post-marketing performance. In addition, because the number of patients taking a drug during a pre-marketing trial is typically much lower, it’s unsurprising that more adverse events are reported after a drug has been licensed.
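As a purely hypothetical illustration of that pre/post comparison (the counts and event names below are invented, not results from my study), one simple approach is to compare per-patient reporting rates between the two settings and flag events that only surface, or surface disproportionately, after licensing:

```python
# Hypothetical sketch: comparing adverse-event reporting rates before
# (clinical trial) and after (post-marketing) a drug is licensed.
# All counts and event names are invented for illustration.

premarket = {"patients": 2_000,
             "events": {"nausea": 120, "dizziness": 40}}
postmarket = {"patients": 500_000,
              "events": {"nausea": 18_000, "dizziness": 25_000,
                         "qt_prolongation": 900}}

for event in sorted(set(premarket["events"]) | set(postmarket["events"])):
    pre_rate = premarket["events"].get(event, 0) / premarket["patients"]
    post_rate = postmarket["events"].get(event, 0) / postmarket["patients"]
    if pre_rate == 0:
        # Never seen during trials: a post-marketing-only signal.
        print(f"{event}: new post-marketing signal ({post_rate:.4f} per patient)")
    elif post_rate > 2 * pre_rate:
        print(f"{event}: post-marketing rate {post_rate / pre_rate:.1f}x the trial rate")
```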

While these examples show how existing data can be used to help redesign clinical trials, these types of big data studies would be difficult for many organizations to replicate at scale. During these projects, my partners and I spent time manually building statistical models as a proof of concept. Realistically, AI, machine learning and deep learning tools will be able to analyze this data much faster. So to ensure the industry can use existing data to improve clinical trials, we need to combine data science skills with the power of these technologies.

Building a data platform for all

Without this link between technology and skills, data will only be as effective as a researcher’s ability to analyze it. Research scientists are not data scientists, and they face many challenges in generating outcomes from big data (made concrete in the sketch after this list):

  • Data is often siloed, and therefore difficult to retrieve.
  • Data is stored in varying formats.
  • Considerable time is spent cleansing and prepping data.
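As a purely hypothetical illustration of what that harmonization work involves (the field names and record shapes below are invented, not any real system’s data model), this sketch normalizes the same adverse-event fact arriving from two silos in different formats into one canonical record:

```python
# Hypothetical sketch: harmonizing records from two silos that describe
# the same fact in different formats. Field names are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class AdverseEvent:
    drug: str    # normalized, lower-cased drug name
    event: str   # normalized, lower-cased event term

def from_lab_csv(row: dict) -> AdverseEvent:
    # Silo 1: a flat CSV export with 'Drug' and 'Reaction' columns.
    return AdverseEvent(drug=row["Drug"].strip().lower(),
                        event=row["Reaction"].strip().lower())

def from_safety_json(doc: dict) -> AdverseEvent:
    # Silo 2: nested JSON documents from a safety database.
    return AdverseEvent(drug=doc["product"]["name"].strip().lower(),
                        event=doc["report"]["term"].strip().lower())

records = {
    from_lab_csv({"Drug": " Drug A ", "Reaction": "Nausea"}),
    from_safety_json({"product": {"name": "drug a"},
                      "report": {"term": "nausea"}}),
}
# After normalization, the two silos collapse to one deduplicated fact.
print(records)  # {AdverseEvent(drug='drug a', event='nausea')}
```

Real pipelines add vocabulary mapping (e.g., coding event terms to a standard dictionary such as MedDRA) and provenance tracking, but the principle is the same: get everything into one queryable shape before any AI is applied.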

We undertook our big data projects using internal expertise and high-quality data within Elsevier, but not every company can do the same today. I was part of the team that developed Elsevier’s Entellect to change this by building a bridge between technology and scientific knowledge. Entellect is a life sciences platform that empowers research and discovery by ensuring that all available data is gathered, harmonized, formatted, and made AI-ready – and that it can be used by scientists without specialist data skills. As my colleague Dr. Jabe Wilson, Consulting Director for Text and Data Analytics at Elsevier, commented:

There are few areas that won’t benefit in some way from AI, but the potential is particularly promising for those workflows that require scientists to process very large volumes of data in order to find an ‘answer.’ Entellect specifically handles the exact sorts of data challenges that life sciences companies encounter – from handling huge volumes of existing data stored in individual electronic lab notebooks to finding information in scientific literature.

Once data is “clean,” researchers can use AI to help build and design better clinical trials, improve patient safety, and make sure new drugs get to the patients who need them. This could mean using AI to select the animal tests most predictive for the condition being studied, or selecting patients more carefully so they’re representative of the population being treated. It could also help in combining clinical trial results with other data from the trial participants to predict any differences in how drugs are taken outside a trial setting. Jabe also believes AI can have benefits in other related areas:

Drug safety and pharmacovigilance are data-heavy areas that stand to benefit from AI. This is an area where we already have clients using Entellect to gather, cleanse and connect tens of thousands of different unstructured medical documents to make them standardized and searchable.

With AI and machine learning, we have the potential to uncover insights more quickly and effectively than ever before. If companies can overcome the challenges of analyzing big data effectively, these tools could well deliver the innovation in clinical trials the FDA Commissioner called for. Beyond improving trials themselves, machine learning and AI offer a route to stronger hypotheses and more accurate predictions of research outcomes – streamlining development before drugs ever reach clinical trials and generating a more effective drug pipeline.

The future of pharma and healthcare hinges on how successfully we manage big data – and clinical trials are one part of that future. As Bryn Roberts, Global Head of Operations for Roche Pharmaceutical Research & Early Development and Site Head in Basel, has said:

Imagine what we will be able to do in decades to come, when individuals have access to their complete healthcare records in electronic form, paired with high quality data from genomics, epigenetics, microbiome, imaging, activity and lifestyle profiles … supported by a platform that enables individuals to share all or parts of their data with partners of their choice, for purposes they care about, in return for services they value – very exciting!
