Grand Rounds April 10, 2026: Impact of Behavioral Science-Based Electronic Health Record Tools on Deprescribing for Older Adults (Julie Lauffenburger, PharmD, PhD)

Speaker

Julie Lauffenburger, PharmD, PhD
Associate Professor of Medicine
Brigham and Women’s Hospital and Harvard Medical School

Keywords

Adaptive trial design; Behavioral science; Deprescribing; Electronic health record (EHR); Inappropriate prescribing; NUDGE-EHR; Overprescribing

Key Points

  • Older adults are often overprescribed medications or prescribed potentially inappropriate medications, such as benzodiazepines, non-benzodiazepine sedative hypnotics, or strongly anticholinergic medications; long-term use of these medications is associated with a 30% increased risk of hospitalizations and falls.
  • Medication management or optimization in older adults is often difficult due to a tendency to maintain the status quo, time constraints, patient preference, or diffusion of responsibility, and existing interventions for medication management are highly resource intensive.
  • Behavioral science techniques employed in the NUDGE-EHR and NUDGE-EHR-2 trials may enhance the effectiveness of electronic health record (EHR) tools that alert clinicians to inappropriate medications during patient visits.
  • NUDGE-EHR was a 16-arm, two-stage adaptive pragmatic trial among 216 primary care providers and their older adult patients, conducted from October 2020 to August 2022, that examined 14 promising EHR tools using 9 different behavioral principles, with deprescribing as the primary outcome.
  • The 2 most promising tools were carried forward into a second, 3-arm parallel pragmatic trial, NUDGE-EHR-2, conducted in a different health system from November 2022 to March 2024. The EHR tools used pop-up windows to suggest deprescribing, and the study provided a set of helpful options to providers, including a tapering algorithm, instructions for patients, orders for alternative medications, and referrals to behavioral health providers, to make the process faster and easier.
  • Deprescribing increased by 6.5% to 10.4% over usual care. Active discontinuation by primary care providers appeared to drive the results.

Discussion Themes

The adaptive trial design of the first NUDGE-EHR study helped inform the more traditional confirmatory trial NUDGE-EHR-2.

The way EHR tools are used varies widely from provider to provider. Tools may be adapted over time so they work best for the individual provider.

 

Read more about the NUDGE-EHR study.

 

March 24, 2026: Study Design Paper Published for Chat 4 Heart Health

Dr. Michael Ho and Dr. Sheana Bull, principal investigators for Nudge

The study design paper for Chat 4 Heart Health has been published online in Trials. Congratulations to the study team on reaching this important milestone for all NIH Collaboratory Trials!

The Chat 4 Heart Health trial is testing the comparative effectiveness of 3 text messaging delivery strategies that have been shown to improve individuals’ self-management health behaviors, including physical activity and medication adherence. The study will provide evidence regarding the best population-based strategy for universal delivery to engage patients in self-management to improve the American Heart Association’s “Life’s Essential 8” measures for improving and maintaining cardiovascular health.

The study is being led by Mike Ho of Kaiser Permanente Colorado and Sheana Bull of the Colorado School of Public Health and is supported by a grant award from the National Heart, Lung, and Blood Institute.

Read the full study design paper.

January 8, 2026: Study Design Paper Published for BEST-ICU

Dr. Michele Balas and Dr. Eduard Vasilevskis, principal investigators for BEST-ICU

The study design paper for BEST-ICU has been published online in Trials. Congratulations to the study team on reaching this important milestone for all NIH Collaboratory Trials!

The BEST-ICU trial is evaluating 2 strategies grounded in behavioral economics theory and implementation science to increase adoption of the ABCDEF bundle in the intensive care unit and improve care for critically ill adults across a variety of healthcare systems. The ABCDEF bundle is a multicomponent, evidence-based intervention to improve team-based care.

The study is being led by Eduard Vasilevskis of the University of Wisconsin and Michele Balas of the University of Nebraska and is supported by a grant award from the National Heart, Lung, and Blood Institute.

Read the full study design paper.

Grand Rounds July 11, 2025: Novel Approaches to Recruiting Clinical Sites for Embedded Pragmatic Clinical Trials: Insights from the AIM-Back Trial (Trevor Lentz, PT, PhD and Tyler Cope, PT, DPT, ACT)

Speakers

Trevor Lentz, PT, PhD
Tyler Cope, PT, DPT, ACT
Duke Clinical Research Institute
Duke Department of Population Health Sciences
Durham Veterans Administration

Keywords

AIM-Back; Clinical site recruitment; Cluster randomized trial; Low back pain; Recruitment funnel

Key Points

  • Low back pain is a common and disabling condition that is more prevalent in the veteran population. Typical low back pain care involves imaging and pharmacologic treatments that don’t always resolve pain and may lead to more invasive injection-based or surgical measures that often don’t result in better outcomes.
  • Research has shown that non-drug treatments (eg, cognitive behavioral therapy [CBT], yoga, physical therapy [PT]) are effective but not often used.
  • The AIM-Back trial (Improving Veteran Access to Integrated Management of Back Pain), an embedded pragmatic cluster randomized trial, sought to restructure care practices in Veterans Administration (VA) healthcare systems to promote and facilitate 2 clinical non-drug pathways that established guidelines support as first-line treatment for low back pain.
  • Two care pathways were developed in coordination with VA clinicians, veterans, and caregivers: (1) Sequenced Care Pathway – This pathway provided an initial onsite physical therapy evaluation and treatment session followed by weekly telehealth physical activity training for 6 weeks. The patient then saw the physical therapist again and was either discharged or given 6 weeks of training in psychologically informed practices to help manage pain. (2) Pain Navigator Pathway – In this pathway, a local site clinician trained by the study team as a pain navigator discussed and facilitated alternative treatments for low back pain (eg, PT, yoga, CBT, massage). Patient follow-up at both 6 and 12 weeks assessed progress and outcomes.
  • AIM-Back used a novel and intentional recruitment method, borrowing the concept of the business sales funnel, to generate as many site leads as possible. The recruitment process was systematic, involving a 3-step framework: (1) identify leads, (2) approach leads, and (3) engage and select sites.
  • In step 1, leads were identified through Warm Market methods (sites known to the researchers), by Leveraging Data (evaluating lists of providers for potential fit), and through traditional Promotional Outreach efforts (advertising through networks and listservs). AIM-Back identified 184 leads from 53 VA healthcare systems.
  • Step 2 involved approaching leads through email messages. AIM-Back learned that promoting the trial as a way to help clinicians solve their problems, rather than asking clinicians to help with the research, was more likely to secure a site. AIM-Back received responses from 23 VA healthcare systems.
  • In step 3, AIM-Back engaged personnel at all levels, from leadership to clinicians, to assess feasibility and buy-in at the site. AIM-Back selected 19 participant sites within 10 VA healthcare systems.
  • The Promotional Outreach strategy proved most effective, accounting for 9 of 19 sites (47.4%). The Leveraging Data strategy netted 6 sites (31.6%), and 4 sites (21.1%) came from the Warm Market strategy. Site recruitment took approximately 3.6 to 3.8 months on average.
  • In total, 17 sites enrolled 1817 Veterans, with most sites (n=16) meeting or exceeding the minimum enrollment goal. Sites that chose not to participate cited reluctance to change their existing programs, a lack of clinicians or resources, or participation in similar trials.

Discussion Themes

AIM-Back messaging evolved over the course of recruitment from a traditional trial marketing email to a shorter, more personal email that leveraged the standing of Duke University. This more personal approach to recruitment led to better relationships with sites during the trial.

Project management software can be helpful for tracking follow-up with site leads and communication during the recruitment process.

One overall goal of AIM-Back was to set up a new clinical program that could continue after the end of the trial. Sites were given training materials for the centralized study components and support from AIM-Back was stepped down slowly. Sites that chose to continue the intervention trained a physical activity/whole health coach and a PT for the psychologically informed PT portion of the intervention.

Indicators of a potentially successful site included qualitative signs of high engagement, such as strong interest and excitement about the study, along with a sufficient patient population.

Read more about the AIM-Back trial design.

June 24, 2025: Study Design Paper Published for ARBOR-Telehealth

The study design paper for ARBOR-Telehealth has been published online in BMJ Open. Congratulations to the study team on reaching this important milestone for all NIH Collaboratory Trials!

The ARBOR-Telehealth study is evaluating the use of a telehealth physical therapy strategy for patients who present to primary care clinics with low back pain in rural communities. A secondary aim of the study is to compare the effectiveness of the study’s risk-stratification approach.

The study is being led by Richard Skolasky and Kevin McLaughlin of Johns Hopkins University and is supported by a grant award from the National Institute of Arthritis and Musculoskeletal and Skin Diseases.

Read the full study design paper.

April 7, 2025: Study Design Paper Published for IMPACt-LBP

The study design paper for IMPACt-LBP has been published online in BMJ Open. Congratulations to the study team on reaching this important milestone for all NIH Collaboratory Trials!

IMPACt-LBP is evaluating the effect of first-contact patient referral to physical therapists and doctors of chiropractic for low back pain. The study team is seeking to determine if initial contact with these “primary spine practitioners” will improve physical function and pain, decrease opioid prescriptions, improve patient satisfaction, and reduce the costs and use of health care services in patients with low back pain when compared with usual medical care.

The study is being led by Christine Goertz, Adam Goode, and Hrishikesh Chakraborty of Duke University and Jon Lurie of Dartmouth Hitchcock Medical Center and is supported by a grant award from the National Center for Complementary and Integrative Health.

Read the full study design paper.

March 11, 2025: Study Design Paper Published for MOMs Chat & Care Study

The study design paper for the MOMs Chat & Care Study has been published online ahead of print in Contemporary Clinical Trials. Congratulations to the study team on reaching this important milestone for all NIH Collaboratory Trials!

The MOMs Chat and Care Study is testing the effectiveness of an integrated care model approach at 2 levels of intensity designed to facilitate timely, appropriate care for Black birthing people to reduce their risk for severe maternal morbidity. Patients in both study arms will receive close clinical and behavioral health monitoring and navigation to timely care and services.

The study is being led by Stephanie Fitzpatrick at the Feinstein Institutes for Medical Research and is supported by an R01 grant award from the National Institute of Nursing Research.

Read the full study design paper.

March 4, 2025: PRIM-ER Team Develops Innovative Statistical Techniques for Stepped-Wedge Trials

Researchers with PRIM-ER, an NIH Collaboratory Trial, published 2 innovative statistical techniques for evaluating intervention effects in stepped-wedge, cluster randomized trials. The new models, which use Bayesian methods, outperformed traditional analytic methods and other Bayesian approaches in simulations and real-world applications.

The article was published online in Statistics in Medicine.

In cluster randomized trials with stepped-wedge designs, the clusters are randomized into several groups, and all groups start the trial in the control condition. Groups of clusters cross over to the intervention condition on a staggered timeline, and all groups receive the intervention before the end of the trial.

Stepped-wedge designs can be advantageous when simultaneous rollout of the intervention to all clusters is infeasible, when withholding the intervention from any cluster would be unethical, or when there is a risk of contamination between intervention subjects and control subjects. However, stepped-wedge designs can also introduce confounding by time, because the intervention is rolled out to clusters in waves and temporal trends during the study can influence its outcomes.
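The staggered rollout described above can be written as a schedule matrix. The following is a minimal, hypothetical sketch (the function name and the group and period counts are illustrative, not taken from the PRIM-ER study): each row is a group of clusters, each column a time period, with 0 for control and 1 for intervention.

```python
def stepped_wedge_schedule(n_groups: int, n_periods: int) -> list[list[int]]:
    """Classic stepped-wedge rollout: every group starts in the control
    condition, each group crosses over one period later than the previous
    one, and all groups receive the intervention by the final period.
    (Illustrative helper, not from the published PRIM-ER methods.)"""
    if n_periods != n_groups + 1:
        raise ValueError("classic design uses one more period than groups")
    return [
        [1 if period > group else 0 for period in range(n_periods)]
        for group in range(n_groups)
    ]

schedule = stepped_wedge_schedule(3, 4)
# Group 1: [0, 1, 1, 1]
# Group 2: [0, 0, 1, 1]
# Group 3: [0, 0, 0, 1]
```

The first column (all zeros) and last column (all ones) make visible the two properties noted above: every cluster begins in control and ends in the intervention, and the staggered middle columns are where time trends can confound the intervention effect.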

(Learn more about stepped-wedge designs in the Living Textbook.)

The PRIM-ER researchers tested 2 new Bayesian hierarchical penalized spline models to improve the estimation of intervention effects in stepped-wedge trials. The first model focuses on immediate intervention effects and accounts for large numbers of clusters and time periods. The second model extends the first by accounting for time-varying intervention effects. The researchers applied both models to data from PRIM-ER.

Read the full report.

PRIM-ER tested a multidisciplinary primary palliative care intervention in a diverse mix of emergency departments in the United States to improve the delivery of goal-directed emergency care of older adults. The study was supported by the National Institute on Aging. Learn more about PRIM-ER.

October 15, 2024: Case Study Describes a Reassessment of Sample Size in an Ongoing Cluster Randomized Trial

A new case study from the NIH Pragmatic Trials Collaboratory highlights an interim reassessment of sample size during an ongoing cluster randomized trial. The case study was published this week in the Living Textbook of Pragmatic Clinical Trials.

Researchers in cluster randomized trials must account for potential correlation among outcomes within clusters in the design and analysis of their trial by estimating the intraclass correlation when calculating the target sample size. Often they use preliminary data from the planned enrollment sites to estimate this correlation. However, when preliminary data are unavailable at the time of study design, they may use interim data collected during the trial itself to reassess the trial’s sample size.

The contributors of the case study focus on FM-TIPS, an NIH Collaboratory Trial, to describe an approach to conducting an interim reassessment of sample size in an ongoing trial. Read the full case study.

FM-TIPS is examining whether the addition of transcutaneous electrical nerve stimulation to routine physical therapy improves movement-evoked pain compared with physical therapy alone among patients with fibromyalgia. The trial is supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases through the NIH HEAL Initiative. Learn more about FM-TIPS.

The contributors of the case study include members of the FM-TIPS study team and leaders of the NIH Collaboratory’s Biostatistics and Study Design Core. David-Erick Lafontant is a statistician, Bridget Zimmerman is a clinical professor of biostatistics, and Emine Bayman is an associate professor of biostatistics—all at the University of Iowa. Megan McCabe is an assistant professor of biostatistics at the University of Alabama at Birmingham. Patrick Heagerty is a professor of biostatistics at the University of Washington. Liz Turner is an associate professor of biostatistics and bioinformatics at Duke University.

September 12, 2024: NIH Collaboratory Biostatisticians Evaluate Analytic Models for Individually Randomized Group Treatment Trials

Headshot of Dr. Jonathan Moyer
Dr. Jonathan Moyer

To avoid inflation in the rate of type I error, or false positives, in individually randomized group treatment (IRGT) trials, researchers should choose an analytic model that accounts for the correlations in outcome measures that arise when study participants receive an intervention from the same source, according to a report from the NIH Pragmatic Trials Collaboratory’s Biostatistics and Study Design Core.

The report was published online ahead of print in Statistics in Medicine.

Many IRGT trials randomly assign individuals to study arms but deliver the study intervention through shared “agents,” such as clinicians, therapists, or trainers. After randomization, interactions between participants who share the same agent can lead to correlations in study outcomes. The delivery agents may be nested in or crossed with study arm, and participants may interact with a single agent or multiple agents. There has been no systematic effort to identify the appropriate analytic models for these complex study designs.
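Under a simple variance-components view (an illustrative assumption, not the specific models evaluated in the report), the correlation a shared agent induces between two participants’ outcomes is the proportion of total outcome variance attributable to the agent:

```python
def agent_induced_icc(var_agent: float, var_residual: float) -> float:
    """Correlation between outcomes of two participants treated by the same
    agent under a variance-components model:
        outcome = fixed effects + agent random effect + residual error.
    (Illustrative formula; the published report examines richer designs.)"""
    return var_agent / (var_agent + var_residual)

# If the agent effect contributes one-ninth as much variance as the residual:
agent_induced_icc(0.1, 0.9)  # -> 0.1
```

Even a modest agent-level variance component yields a nonzero correlation, which is exactly what a naive analysis that treats participants as independent ignores, producing the type I error inflation described above.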

To address this knowledge gap, members of the NIH Collaboratory’s Biostatistics and Study Design Core conducted a simulation study to examine the performance of a variety of analytic models for IRGT trials in which complex clustering arises from participants interacting with multiple agents or single agents in both nested and crossed designs. They found substantial inflation in the type I error rate in studies with nested designs when the analytic model did not account for participants interacting with multiple agents.

Read the full article.

This article is the latest in a series of reports completed this year by members of the Biostatistics and Study Design Core to explore analytic approaches to clinical trials with complex clustering and other novel design features.

Lead author Jonathan Moyer, a statistician in the NIH Office of Disease Prevention, led a discussion of complex clustering in pragmatic trials in a session of the NIH Collaboratory’s weekly Rethinking Clinical Trials webinar series: “The Perils and Pitfalls of Complex Clustering in Pragmatic Trials.”

Learn more about the NIH Collaboratory’s Biostatistics and Study Design Core.