Grand Rounds February 21, 2025: Texting for Behavior Change: Lessons Learned Across 2 Interventions to Improve Chronic Care Management (Michael Ho, MD, PhD; Sheana Bull, PhD)

Speakers

Michael Ho, MD, PhD
Kaiser Permanente Colorado

Sheana Bull, PhD
University of Colorado School of Public Health

Keywords

Text Messaging; Artificial Intelligence; Chatbots; Health Behaviors

Key Points

  • Ample evidence now demonstrates the benefit of text messaging in support of health behaviors and access to care. Texting is ubiquitous, which increases reach; grounding message design in behavioral theory increases its impact; and texting can improve adherence to medical appointments and health behaviors.
  • Two NIH Collaboratory Trials, Nudge and Chat 4 Heart Health (C4HH), test the effectiveness of text messaging interventions to support behavior change. Nudge randomized patients to receive usual care, generic texts, behavioral texts, or behavioral texts plus chatbot messages. Their primary outcome was medication adherence.
  • C4HH, the subsequent trial, is randomizing patients to receive a generic text message curriculum; an AI chatbot messaging curriculum; or AI chatbot messages plus proactive pharmacist support. Its primary outcome is cardiovascular risk factors, measured by adherence to the American Heart Association’s “Life’s Essential 8.”
  • Nudge used an opt-out consent approach, whereas C4HH used an opt-in consent approach. In the former, the research team noted, patients who identified as Black or Hispanic, as well as primary Spanish speakers, were more likely to remain in the study. An opt-out approach in the appropriate context may be a way to diversify clinical trial populations and improve the external validity of results.
  • The use of AI chatbots allows users to generate questions in their own words and the system to retrieve a response from a closed, curated library.
  • Message engagement is key to text messaging interventions. Participants in the Nudge study who were randomized to optimized texts asked more questions, which related to medications, refill logistics, and costs. The study team hypothesizes that the optimized texts led to greater patient engagement, and therefore more questions about medications.
  • Over 12 months, the Nudge study found no significant difference in prescription refill rates between the 3 intervention arms and usual care. C4HH is ongoing and will send a higher volume of messages in an effort to engage patients and change behavior.
  • So far, the top 5 topics in messages initiated by C4HH participants have been healthy eating, physical activity, managing cholesterol, quitting smoking, and medication management.
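The closed-library retrieval approach described above, where user questions are answered only from a curated set of responses, can be sketched roughly as follows. The example library entries, similarity measure, and threshold are illustrative assumptions, not the trials' actual implementation.

```python
import re

# Sketch of closed-library retrieval: match a free-text question to the
# closest entry in a curated library, and never generate free text.
# Library content and the 0.3 threshold are hypothetical.
CURATED_LIBRARY = {
    "how do i refill my prescription":
        "You can request a refill by replying REFILL or calling your pharmacy.",
    "what are the side effects of my medication":
        "Side effects are listed on your medication guide; contact your care team with concerns.",
    "how much does my medication cost":
        "Costs vary by plan; your pharmacist can review lower-cost options with you.",
}

def tokenize(text):
    """Lowercase and split into word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def answer(question, library=CURATED_LIBRARY, threshold=0.3):
    q = tokenize(question)
    best_key, best_score = None, 0.0
    for key in library:
        k = tokenize(key)
        score = len(q & k) / len(q | k)  # Jaccard similarity on word sets
        if score > best_score:
            best_key, best_score = key, score
    if best_score >= threshold:
        return library[best_key]  # respond only from the closed library
    return "I'm not sure about that one; a team member will follow up."
```

Because responses come only from the curated library, a low-similarity question falls through to a human handoff rather than to generated (and potentially inaccurate) text.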

Discussion Themes

The study team had to be very careful to ensure that patient health data, including cell phone numbers and the messages sent, were encrypted. Vendors and phone carriers could not access these data, which were not stored on their servers.

One of the challenges they encountered was that their systems weren’t integrated into the health care organizations’ pharmacies or electronic health records. The integration piece will be key to any future sustainability.

Because technology can evolve significantly over the course of, say, a 5-year study, investigators interested in conducting research in this area may benefit from developing the skills to use interactive interventions or a SMART (sequential multiple assignment randomized trial) design.

Grand Rounds March 21, 2025: Generative Artificial Intelligence in Clinical Trials: A Driver of Efficiency and Democratization of Care (Alexander J. “AJ” Blood, MD, MSc)

Speaker:

Alexander J. “AJ” Blood, MD, MSc
Associate Director, Accelerator for Clinical Transformation Research Group
Instructor of Medicine at Harvard Medical School
Cardiologist and Intensivist
Brigham and Women’s Hospital

Date: Friday, March 21, 2025, 1:00-2:00 p.m. ET

Please click the link below to join the webinar:

https://duke.zoom.us/j/96025253609?pwd=xr6PQHaPDQ24b2FFaytZw3HblN3k7e.1

Passcode: 646677

One-Tap Mobile

+13052241968,,96025253609#,,,,*646677# US

+13092053325,,96025253609#,,,,*646677# US

Audio Only Option

+1 305 224 1968

+1 309 205 3325

International numbers available: https://duke.zoom.us/u/aJtwMRxLu

Webinar ID: 960 2525 3609
Passcode: 646677

Grand Rounds December 6, 2024: Opportunities and Challenges in the Use of Large Language Models for Post-Marketing Surveillance of Medical Products (Michael E. Matheny, MD, MS, MPH)

Speaker

Michael E. Matheny, MD, MS, MPH
Director, Center for Improving the Public’s Health Through Informatics
Professor of Biomedical Informatics, Biostatistics, and Medicine
Vanderbilt University Medical Center
Staff Scientist, Geriatrics Research Education and Clinical Care Service
Associate Director, VA ORD VINCI
Tennessee Valley Healthcare System VA

Keywords

Artificial Intelligence; Large Language Models; Surveillance; Medical Products

Key Points

  • Increasingly, leaders in many disciplines are finding new applications for artificial intelligence (AI). Within healthcare, the technology is being used to support clinical decision-making, image processing, drug discovery, and clinical trials, as well as ambient and autonomous AI applications.
  • Large Language Models (LLMs) are a subset of generative AI. Since 2012, LLMs have emerged as a promising new technology with rapid growth, evolution of capacity and reach, and many potential applications in healthcare and clinical research.
  • There is significant interest in using LLMs to assist with patient trial matching, clinical trial planning, and the development of trial protocols and consent documents.
  • Another key area that LLMs could provide support in is medical product safety surveillance, with potential applications in adverse event detection, probabilistic phenotyping, and information synthesis.
  • The post-marketing surveillance space draws on an ecosystem of healthcare data: imaging, radiology reports, insurance claims, structured data, medical literature, and social media. These sources could be integrated to support LLM-based reasoning and extraction.
  • Key challenges in safe and effective use of LLMs for this purpose include the lack of evaluation for medical product surveillance, the complexities of prompt engineering, hallucination risk (i.e., false positives), and the fact that evolving models over time challenge stable performance estimates.
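One of the applications named above, adverse event detection, is often framed as an extraction task: prompt the model to return structured findings from a clinical note, then validate the output before use. The sketch below assumes a hypothetical model endpoint (not shown) and an illustrative JSON schema; the validation step is one simple guard against the hallucination risk noted above.

```python
import json

# Illustrative prompt for LLM-based adverse drug event (ADE) extraction.
# The wording and schema are hypothetical, not a validated surveillance prompt.
PROMPT_TEMPLATE = (
    "Extract suspected adverse drug events from the clinical note below. "
    "Respond with JSON: a list of objects with 'drug' and 'event' fields. "
    "If none are present, respond with [].\n\nNote:\n{note}"
)

def build_prompt(note):
    """Fill the extraction prompt with a clinical note."""
    return PROMPT_TEMPLATE.format(note=note)

def parse_ae_response(raw):
    """Parse the model's reply, rejecting malformed output.

    Returns a list of {'drug': ..., 'event': ...} dicts, or None to flag
    the reply for human review (e.g., non-JSON or unexpected structure).
    """
    try:
        events = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(events, list):
        return None
    return [e for e in events
            if isinstance(e, dict) and "drug" in e and "event" in e]
```

In practice the parsed events would feed downstream review rather than be reported directly, consistent with the evaluation and drift concerns raised in the key points.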

Discussion Themes

A segment of the clinical workforce could be trained to be “super users,” partnering with development teams to ensure that these tools work appropriately in a clinical environment.

There is substantial interest in using LLMs to support clinical decision-making. However, studies have shown that the quality of the AI output can influence the performance of the clinicians. Especially in high-risk clinical environments, any drift in those algorithms could result in adverse clinical outcomes. The life cycle approach to conceptualization, development, implementation, surveillance, and maintenance will be necessary to achieve and maintain performance.

July 1, 2024: Latest Podcast Features Michael Pencina and Brian Anderson of Coalition for Health AI

In a new episode of our Rethinking Clinical Trials podcast, Drs. Michael Pencina and Brian Anderson of the Coalition for Health AI speak with host Dr. Adrian Hernandez about public-private partnerships in a trustworthy health AI ecosystem. Pencina and Anderson presented on their experiences during the March 8 session of PCT Grand Rounds.

Listen and subscribe to the podcast on SoundCloud or Apple Podcasts, and view the full March 8 PCT Grand Rounds webinar.

March 6, 2024: In This Week’s PCT Grand Rounds, Public-Private Partnerships in Health AI

In this Friday’s PCT Grand Rounds, Michael Pencina of Duke University will present “Public-Private Partnerships in the Trustworthy Health AI Ecosystem.”

The Grand Rounds session will be held on Friday, March 8, 2024, at 1:00 pm eastern.

Pencina is a professor of biostatistics and bioinformatics and the vice dean for data science in the Duke University School of Medicine. He is the director of the university’s Duke AI Health initiative and the chief data scientist for Duke Health.

Join the online meeting.

August 2, 2023: Want to Play a Game? AI and Machine Learning in This Week’s PCT Grand Rounds

In this Friday’s PCT Grand Rounds, Eric Perakslis of Duke University will present “AI & ML: Want to Play a Game?”

The Grand Rounds session will be held on Friday, August 4, 2023, at 1:00 pm eastern.

Perakslis is a professor in population health sciences and the chief research technology strategist in the Duke University School of Medicine.

Join the online meeting.

May 4, 2022: Ethics Core Members Pen Guest Editorial for AJOB Focus on Machine Learning in Healthcare

In a guest editorial in the American Journal of Bioethics, members of the NIH Pragmatic Trials Collaboratory’s Ethics and Regulatory Core introduced the issue’s target article and peer commentaries on artificial intelligence and machine learning in healthcare. Prof. Kayte Spector-Bagdady and Drs. Vasiliki Rahimzadeh and Kaitlyn Jaffe, who are Core members, were joined by coauthor Dr. Jonathan Moreno in writing the editorial.

The target article of the themed collection proposes a research ethics framework for the clinical translation of healthcare machine learning. In several peer commentaries accompanying the article, experts offer their perspectives on the proposed framework, including critiques of “the insufficiency of current ethics and regulatory solutions to adequately protect communities at higher risk for [machine learning] bias.”

Read the full editorial, “Promoting Ethical Deployment of Artificial Intelligence and Machine Learning in Healthcare.” Learn more about our Ethics and Regulatory Core.

June 7, 2019: In Dreams Begin Responsibilities: Data Science as a Service—Using AI to Risk Stratify a Medicare Population and Build a Culture (Erich Huang, MD, PhD)

Speaker

Erich S. Huang, MD, PhD
Co-Director, Duke Forge
Departments of Biostatistics & Bioinformatics and Surgery
Duke University School of Medicine

Topic

In Dreams Begin Responsibilities: Data Science as a Service—Using AI to Risk Stratify a Medicare Population and Build a Culture

Keywords

Data science; Data liquidity; Data standards; Machine learning; Duke Forge; Application programming interface; Artificial intelligence

Key Points

  • Duke Forge focuses on bringing the best methodological approaches to actionable data problems in health. It is motivated by a framework of value-based healthcare to address societal inequities in health.
  • Essential components to building a data science culture include clinical subject matter expertise, quantitative and methodological expertise, and software architecture and engineering expertise, along with interoperable tools and applications.
  • Like freight shipping containers, health-relevant data needs standardized containers that make any type of data easy to pack, grab, combine, and move around. The aim should be to build a “data liquidity ecosystem” equivalent to freighters, cranes, trains, and trucks that facilitate the logistics of health data transport.
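The shipping-container analogy above can be made concrete with a single standardized record shape into which heterogeneous sources are packed, so downstream tools handle one format. The field names, source labels, and mappings below are hypothetical illustrations, not Duke Forge's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HealthRecord:
    """A standardized 'container' for health-relevant data (illustrative)."""
    patient_id: str
    source: str   # originating system, e.g., "ehr" or "claims"
    kind: str     # record type, e.g., "lab" or "diagnosis"
    code: str     # code from a standard vocabulary (LOINC, ICD-10, ...)
    value: str    # result or observation; empty if not applicable

def from_ehr_lab(row):
    """Pack an EHR lab result (hypothetical field names) into the container."""
    return HealthRecord(row["mrn"], "ehr", "lab", row["loinc"], row["result"])

def from_claims_dx(row):
    """Pack an insurance claim diagnosis into the same container."""
    return HealthRecord(row["member_id"], "claims", "diagnosis", row["icd10"], "")

# Once packed, records from different systems combine trivially.
combined = [
    from_ehr_lab({"mrn": "123", "loinc": "2093-3", "result": "210 mg/dL"}),
    from_claims_dx({"member_id": "123", "icd10": "E78.0"}),
]
```

The design choice mirrors the freight analogy: the container's shape is fixed and source-agnostic, so the "cranes and trucks" (pipelines, APIs) need to handle only one interface.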

Discussion Themes

If we’re trying to build an ecosystem, then the electronic health record (EHR) platform needs to be evaluated by whether it is truly participatory in this ecosystem. If not, then its deficiencies must be remediated.

The faster we can move to the cloud and use building blocks that “snap” together, the faster we can get answers. We want to be building applications instead of infrastructure.

Algorithms don’t have ethics; some have hidden biases. Algorithms need to be scrutinized and tested for such biases. They also must be secured so they cannot be manipulated.

Read more about Duke Forge and check out articles on the blog.

Tags

#pctGR, @Collaboratory1, @DukeForge