The AI software Trial Pathfinder offers hope to clinicians looking to safely expand and diversify clinical drug trials for cancer and other diseases. Chunhua Weng and James Rogers outline a Stanford University research project, led by Ruishan Liu, which revealed the possibility of including people in trials who have often been excluded for reasons of age, health or gender.
- Clinical trial success often depends on enrolling people who meet study criteria within a specific time period.
- Ruishan Liu’s team created an open-source AI tool dubbed Trial Pathfinder.
- Liu’s team identified restrictive, non-beneficial criteria when they analyzed clinical trials for other cancer types.
- Liu’s study could inspire AI-based trial participant selection for other diseases.
Clinical trial success often depends on enrolling people who meet study criteria within a specific time period.
Sometimes the inability to find enough suitable people for a clinical trial leads to weaker conclusions. But new software allows clinicians to include more diverse participants safely by analyzing real-world data from cancer patients.
Most clinical trials endeavor to sign up low-risk people, excluding elderly individuals, pregnant women and people with conditions other than the one being targeted. Such exclusions weed out those who are physically frail or at greater risk of drug toxicity, in order to keep the trial population uniform. But this strategy also excludes people who might benefit from the treatment being studied, and it sometimes yields a smaller-than-optimal study group that compromises, delays or even ends a trial.
“Researchers are increasingly recognizing that eligibility criteria for clinical trials should be simplified, be made less restrictive and be better justified clinically than is currently the case.”
Eligibility criteria are often established using data from older trials or through arbitrary trial design decisions. A recent study examined how electronic health records (EHRs) could be used to modify a trial's eligibility criteria and grow the participant pool, but researchers lacked a reliable software tool that uses EHRs to emulate trials.
Ruishan Liu’s team created an open-source AI tool dubbed Trial Pathfinder.
Liu’s Stanford University team’s AI software uses EHRs to compare survival statistics of people who did or did not receive a specific drug. The software evaluates the effect of including or dropping each eligibility criterion from the original trial, allowing scientists to optimize the criteria by balancing inclusiveness against individual safety.
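The core idea of toggling criteria can be illustrated with a minimal sketch. The patient fields, criterion names and thresholds below are hypothetical, not taken from Trial Pathfinder or the Flatiron database; they simply show how dropping one criterion changes the size of the eligible cohort.

```python
from dataclasses import dataclass

# Hypothetical patient records; field names and thresholds are
# illustrative only, not the criteria used in the actual study.
@dataclass
class Patient:
    age: int
    ecog: int          # performance-status score
    platelets: float   # x10^9 per litre

COHORT = [
    Patient(58, 0, 210.0),
    Patient(79, 1, 95.0),
    Patient(66, 2, 160.0),
    Patient(84, 1, 130.0),
]

CRITERIA = {
    "age_under_75": lambda p: p.age < 75,
    "ecog_0_or_1":  lambda p: p.ecog <= 1,
    "platelets_ok": lambda p: p.platelets >= 100.0,
}

def eligible(patients, active):
    """Return the patients meeting every criterion named in `active`."""
    return [p for p in patients
            if all(CRITERIA[name](p) for name in active)]

full = eligible(COHORT, CRITERIA.keys())
relaxed = eligible(COHORT, ["ecog_0_or_1", "platelets_ok"])  # drop age cap
print(len(full), len(relaxed))  # relaxing the age cap admits more patients
```

Re-running a trial emulation on each such relaxed cohort, and comparing survival outcomes, is what lets the tool trade inclusiveness against safety.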
Liu’s team worked with the Flatiron Health EHR-derived database, which covered 61,094 people with non-small-cell lung cancer treated at 280 US cancer facilities. The researchers studied 10 clinical trials of drugs used to treat that form of cancer. Trial Pathfinder emulated the trials by selecting the people who met each original trial’s eligibility criteria.
Based on the treatments patients actually received, the software assigned eligible patients to an emulated treatment group. About 250 participants in the Flatiron data set matched a given trial’s criteria. The software compared participant outcomes using a value called the overall-survival hazard ratio, which measures how much a treatment improves an individual’s survival prospects. In real-world care, physicians sometimes hold biases based on a patient’s expected prognosis, resulting in inconsistent drug assignments. Because clinical trials randomize precisely to avoid such biases, Liu’s team used “inverse probability of treatment weighting” to create more neutral estimates of treatment outcomes.
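Inverse probability of treatment weighting corrects for non-random treatment assignment by weighting each patient by the inverse of their estimated probability of receiving the treatment they actually got. The sketch below uses simulated toy data (not real patient records) with a single confounder to show how the weighted estimate recovers the true effect where a naive comparison does not.

```python
import numpy as np

# Toy cohort: illustrative simulated data only.
# Confounder x (e.g., 1 = older patient) makes treatment less likely
# while also lowering the outcome, biasing a naive comparison.
rng = np.random.default_rng(0)
n = 10_000
x = rng.integers(0, 2, n)                # confounder
p_treat = np.where(x == 1, 0.2, 0.8)     # treatment probability depends on x
t = rng.random(n) < p_treat              # treatment indicator
# True treatment effect is +1.0; x independently lowers the outcome.
y = 1.0 * t - 2.0 * x + rng.normal(0, 1, n)

# Naive comparison of group means is confounded by x.
naive = y[t].mean() - y[~t].mean()

# Step 1: estimate propensity scores P(t = 1 | x) from the data.
ps = np.where(x == 1, t[x == 1].mean(), t[x == 0].mean())

# Step 2: inverse-probability weights.
w = np.where(t, 1.0 / ps, 1.0 / (1.0 - ps))

# Step 3: weighted difference in mean outcomes.
iptw = (np.average(y[t], weights=w[t])
        - np.average(y[~t], weights=w[~t]))

print(f"naive estimate: {naive:.2f}")  # inflated by confounding
print(f"IPTW estimate:  {iptw:.2f}")   # near the true effect of 1.0
```

The same weighting idea applies when the outcome is survival time and the estimate is a hazard ratio rather than a difference in means.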
Trial Pathfinder then recalculated the hazard ratio in trial emulations that excluded individual eligibility criteria. A metric from cooperative game theory, the Shapley value, captured each criterion’s average effect on the hazard ratio, showing how each one affected safety and inclusiveness.
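A Shapley value averages a player's marginal contribution over every order in which players can be added. A minimal sketch, with a hypothetical stand-in for the emulation step (the criterion names and penalty numbers are invented, and the real tool estimates hazard ratios from patient data rather than from a formula):

```python
from itertools import permutations

CRITERIA = ("age_limit", "lab_threshold", "ecog_score")

def hazard_ratio(included):
    """Hypothetical stand-in for a trial emulation: maps a set of
    active eligibility criteria to a hazard-ratio-like score."""
    penalties = {"age_limit": 0.05, "lab_threshold": -0.10, "ecog_score": 0.02}
    return 1.0 + sum(penalties[c] for c in included)

def shapley_values(players, value):
    """Exact Shapley values: each player's marginal contribution to
    `value`, averaged over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        current = frozenset()
        for p in order:
            phi[p] += value(current | {p}) - value(current)
            current = current | {p}
    return {p: total / len(orderings) for p, total in phi.items()}

print(shapley_values(CRITERIA, hazard_ratio))
```

Criteria with a positive Shapley value raise the hazard ratio when enforced, so they are candidates for relaxation; exact enumeration like this only works for small criterion sets, and larger ones require sampling orderings.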
“Using this data-driven approach to select a smaller subset of the original eligibility criteria would increase the eligible population in this database from 1,553 to 3,209, on average, while achieving a lower overall-survival hazard ratio.”
One result suggested that more older people and more women could have participated in the trials. As Liu’s team added trial comparisons, they found that participant diversity grew when trials relaxed certain laboratory-value thresholds in their eligibility criteria.
Liu’s team identified restrictive, non-beneficial criteria when they analyzed clinical trials for other cancer types.
Trial Pathfinder estimated a potential 53% increase in the number of trial participants for other cancers, such as melanoma, breast cancer and colorectal cancer. With less restrictive patient criteria, the emulated trials retained favorable survival outcomes. When evaluating follow-up data from 22 additional cancer trials, the team found that, despite differences in eligibility criteria, relaxing some thresholds would not increase participants’ toxicity risks.
“This was demonstrated by monitoring eligibility-criteria differences and finding that the omission of some criteria was associated with minimal to no changes in the number of treatment withdrawals from these trials owing to adverse events.”
Trial Pathfinder allows a robust evaluation of how relaxing specific eligibility criteria affects treatment effectiveness. Researchers can now use EHR data and such algorithms to enhance both participant safety and diversity.
Liu’s study could inspire AI-based trial participant selection for other diseases.
Challenges remain in improving EHR data quality. Some problems arise from the variable methods used to assess patient outcomes. Some companies treat trial protocols as trade secrets, making them hard to access. And while the curated Flatiron database offers uniformity, other EHR systems provide less accurate and complete data.
In the future, Trial Pathfinder could adopt practices and standardizations recommended by the global consortium Observational Health Data Sciences and Informatics.
“This could be achieved by Trial Pathfinder using the widely adopted OMOP (Observational Medical Outcomes Partnership) Common Data Model standardization approach, which would improve its interoperability with the vast number of different types of EHR data.”
By adopting such AI software, policy makers could encourage trial sponsors to share trial protocols freely, improving consistency between publicly available summaries and full protocols.
About the Authors
Chunhua Weng and James R. Rogers are members of the Department of Biomedical Informatics at Columbia University.
This document is restricted to personal use only.