Grand Rounds August 1, 2025: Clinical Trial Notifications Triggered by Artificial Intelligence-Detected Cancer Progression (Kenneth L. Kehl, MD, MPH)

Speaker

Kenneth L. Kehl, MD, MPH
Assistant Professor of Medicine and Physician
Dana-Farber Cancer Institute

Keywords

Artificial Intelligence; Cancer; Notification; Enrollment; Patient Identification

Key Points

  • Historically, fewer than 10% of adults with cancer enroll in clinical trials. At the same time, many trials struggle to reach their accrual goals. One possible contributor is that many trials of novel cancer therapies have specific molecular eligibility criteria.
  • Dana-Farber Cancer Institute (DFCI) developed MatchMiner, a computational matching tool, to connect patients to trials. However, identified patients often were not at a point in their treatment when information about trials was relevant. The research team was interested in whether they could train an artificial intelligence (AI) model to identify “trial-ready” patients.
  • The team conducted an implementation pilot, providing clinicians and research staff with weekly spreadsheets containing AI-generated predictions of clinical trial “readiness.” The majority of identified patients were found to be ineligible upon registered nurse (RN) review, and most of those who were eligible opted not to move forward with the trial referral. At the end of the 9-month pilot, 6 AI-identified patients had consented and enrolled in a therapeutic trial.
  • To assess the impact of AI-driven identification of trial-ready patients, the team launched OPTIONS (Optimizing Precision Trials with an artificial Intelligence-driven Oncologist Notification System). The primary outcome of the trial was enrollment in any DFCI therapeutic clinical trial.
  • Patients with solid tumors were randomized to either a control group, in which they could be identified through the standard MatchMiner workflow, or 1 of 2 intervention groups. In the intervention arms, the treating oncologists of genomically matched patients with progressive disease and anticipated treatment changes were contacted via email. In the second intervention arm (group 3), patients who met the readiness criteria were manually reviewed before the oncologists were contacted.
  • They found that, though the AI models successfully predicted which patients with active or progressive cancer were likely to need treatment changes, sharing the trial information with oncologists did not increase trial enrollment.
  • This intervention addressed 1 barrier to trial participation. Other barriers may include eligibility criteria that go beyond genomics and recent progression, as well as factors related to patient or oncologist preference, such as the motivation for participating, the complexity of the trial, and time toxicity.
  • Dr. Kehl concluded with a reminder that while AI can accelerate clinical cancer research by rapidly identifying clinical trial options for patients, impact requires integration. AI must be applied thoughtfully and continuously evaluated, and researchers should be aware of the pitfalls and shortcuts associated with the technology.

Discussion Themes

The DFCI team is currently working on MatchMiner-AI, an open-source tool that they hope will improve the accessibility of clinical trials for all patients by providing a list of relevant trial options. They are running a pilot study focused on integrating MatchMiner-AI into the existing MatchMiner tool.

It’s easier to train a model than to deploy it in a complicated healthcare context. Even when a tool performs as hoped, implementation challenges still need to be worked out.

The study team considered training the model on a more proximal task – i.e., “Predict whether this patient will enroll in a clinical trial.” However, they were concerned that this would introduce biases – a pertinent concern with AI models – based on which patients typically have the opportunity to enroll in clinical trials.

While there may be use cases in which providing the trial information directly to patients would be more efficient, this would need to be done carefully. Information about worsening cancer, for instance, is best contextualized in a conversation with an oncologist.

Grand Rounds July 18, 2025: State of Clinical Trials: An Analysis of ClinicalTrials.gov (Adrian F. Hernandez, MD, MHS; Rebecca D. Sullenger, MPH; Sara Bristol Calvert, PharmD; Karen Chiswell, PhD; Christopher J. Lindsell, PhD)

Speakers

Adrian F. Hernandez, MD, MHS
Executive Director
Duke Clinical Research Institute

Rebecca D. Sullenger, MPH
Duke University School of Medicine
MD Student | Class of 2026

Panelists

Sara Bristol Calvert, PharmD
Director of Projects
Clinical Trials Transformation Initiative

Karen Chiswell, PhD
Statistical Scientist
Duke Clinical Research Institute

Christopher J. Lindsell, PhD
Director, Data Science and Biostatistics
Duke Clinical Research Institute

Keywords

Clinical Trials; Enrollment; Pragmatic Clinical Trials; Policy; Data Science

Key Points

  • A study of clinical trials from 2007 to 2010 found that the field was dominated by small trials and contained significant heterogeneity in methodological approaches, including reported use of randomization, blinding, and Data Monitoring Committees.
  • Clinical trials in the United States may be limited by legal, regulatory, and cost-related barriers. In a study of patient enrollment for cardiovascular clinical trials, the authors concluded that the U.S. had more trial sites than Eastern Europe or South America, but enrolled significantly fewer patients per site. These trends highlight the need for improved clinical trial infrastructure.
  • The presenters noted several promising trends in the field: growth in pragmatic clinical trials; high interest in clinical trial innovation from regulatory bodies and funding agencies; and the rapidly evolving capacity of clinical trials, particularly around accessibility.
  • The presenters provided an updated picture of the clinical trials landscape in the U.S., based on retrospective analyses of interventional clinical trials registered on ClinicalTrials.gov between 2018 and 2024.
  • They found that many trials remain small, lack a control group, and are incomplete after 5 years. Although small clinical trials without controls may be appropriate or necessary in specific contexts, such trials are also less likely to produce actionable data.
  • National policies prioritizing a more rapid, rigorous evidence generation system will likely be necessary to create a clinical trial ecosystem best equipped to advance public health.
  • In light of these insights, the team shared 5 potential policy approaches to improve the evidence-generation system in the U.S.:
    • Streamline trial start-up processes, institutional review board approvals, and contracting;
    • Enable scalable technologies to support greater participation;
    • Invest in modern clinical trial design strategies;
    • Require public reporting of key performance indicators and pay-for-performance results; and
    • Create stronger data sharing requirements and accountability rules.

Discussion Themes

Though the team utilized existing fields in ClinicalTrials.gov for their data, future research may use keyword searches (e.g., “adaptive platform trials”) or natural language processing to investigate the state of clinical trials.

The panelists debated the value of small (<100 participants) trials. Though such trials have a time and place, the high proportion of Phase III trials that enrolled fewer than 100 participants was surprising and concerning.

There are some limitations to ClinicalTrials.gov, particularly in data entry: the more complex the trial, the more difficult it is to submit in a timely fashion. The system may pose a barrier to embracing modern clinical trial design strategies.

Academia will also need to make policy changes to facilitate a healthier clinical trials ecosystem. As career development and promotion pathways are currently structured, researchers are incentivized to lead small, potentially duplicative trials. Institutions need to reward, compensate, and value individual contributions to large-scale programs; i.e., to prioritize the informative trial over the individually led trial.

Grand Rounds March 21, 2025: Generative Artificial Intelligence in Clinical Trials: A Driver of Efficiency and Democratization of Care (Alexander J. “AJ” Blood, MD, MSc)

Speaker

Alexander J. “AJ” Blood, MD, MSc
Associate Director, Accelerator for Clinical Transformation Research Group
Instructor of Medicine at Harvard Medical School
Cardiologist and Intensivist
Brigham and Women’s Hospital

Keywords

Artificial Intelligence; Cost; Large Language Models; Enrollment; Eligibility; Recruitment

Key Points

  • The Accelerator for Clinical Transformation (ACT) is a research group that seeks to use emerging technology to expand access to healthcare and improve the quality and quantity of healthcare delivery. The group focuses on team-based models and scalable applications.
  • It’s becoming more expensive and time-consuming to move a drug from clinical trials to approval. Patient recruitment is the leading driver of costs in clinical trials, and 55% of trials that fail to complete cite low accrual rates as the reason for study termination. There’s pressure from industry to conduct clinical trials in a way that is faster, cheaper, and better for both patients and the research environment.
  • ACT conducted a pilot study in which they embedded a Large Language Model (LLM) tool called RECTIFIER into an active clinical trial of patients with heart failure. RECTIFIER is an AI-powered, comprehensive software application able to ask and answer questions about unstructured clinical data. In the pilot, RECTIFIER determined patient eligibility with higher accuracy and specificity than study staff, indicating its potential to streamline screening.
  • LLMs are the engines that power the software. Two key challenges must be taken into consideration to use these tools effectively: 1) there’s a context window – a limit to the amount of Electronic Health Record (EHR) data you can pull in; and 2) using LLMs is expensive.
  • Following up on the pilot study results, ACT conducted a prospective randomized controlled trial: the Manual Versus AI-Assisted Clinical Trial Screening Using LLMs (MAPS-LLM) trial. MAPS-LLM compared two methods for analyzing a randomized pool of potentially eligible participants: manual review by study staff, and RECTIFIER-augmented review by study staff. The primary endpoint was eligibility determination.
  • They found that AI-assisted patient screening using the RECTIFIER system significantly improved eligibility determination and enrollment compared with manual screening in a heart failure clinical trial.
  • ACT concluded that implementing AI-assisted tools like RECTIFIER can enhance clinical trial efficiency, reduce resource utilization, and promote equitable recruitment, potentially leading to faster trial completion and earlier patient access to novel therapies. Generative AI is likely to play a significant role in the future of clinical trials.
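
The screening pattern described above – posing eligibility questions to an LLM against chart text that must fit within a context window – can be sketched in a few lines. This is a minimal, hypothetical illustration, not RECTIFIER itself: the function names, the character budget, and the `ask_llm` callable are all assumptions.

```python
def chunk_notes(notes, max_chars=8000):
    """Split concatenated EHR note text into pieces small enough for a model's context window."""
    text = "\n\n".join(notes)
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def screen_patient(notes, question, ask_llm, max_chars=8000):
    """Ask one eligibility question against each chunk of the chart.

    `ask_llm` is any callable that sends a prompt to an LLM and returns its text
    response; the criterion is marked as met if any chunk yields a "yes".
    """
    answers = [
        ask_llm(f"{question}\n\nChart excerpt:\n{chunk}")
        for chunk in chunk_notes(notes, max_chars)
    ]
    return any(a.strip().lower().startswith("yes") for a in answers)
```

In practice the chunking, prompt wording, and yes/no aggregation rule all affect accuracy and cost, which is why the talk emphasized the context-window and expense trade-offs.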

Discussion Themes

Study staff in the MAPS-LLM intervention arm were able to redirect the time they would have spent reviewing charts and manually screening the EHR toward contacting and managing patients.

Eligibility rates were equivalent between the two arms; the difference was that the AI-augmented group was able to assess twice as many potentially eligible patients.

While this tool can do a lot of analytical work, a human element will be essential to utilizing it effectively and to bringing “human intelligence” to participant enrollment.

The ACT team has started to pilot this technology in other disease areas, including cardiology more broadly, endocrinology, oncology, and gastroenterology.