Grand Rounds April 10, 2026: Impact of Behavioral Science-Based Electronic Health Record Tools on Deprescribing for Older Adults (Julie Lauffenburger, PharmD, PhD)

Speaker

Julie Lauffenburger, PharmD, PhD
Associate Professor of Medicine
Brigham and Women’s Hospital and Harvard Medical School

Keywords

Adaptive trial design; Behavioral science; Deprescribing; Electronic health record; EHR; Inappropriate prescribing; NUDGE-EHR; Overprescribing

Key Points

  • Older adults are often overprescribed medications or prescribed potentially inappropriate medications such as benzodiazepines, non-benzodiazepine sedative hypnotics, and strongly anticholinergic drugs; long-term use of these medications is associated with a 30% increased risk of hospitalizations and falls.
  • Medication management or optimization in older adults is often difficult due to a tendency to maintain the status quo, time constraints, patient preference, or diffusion of responsibility, and existing interventions for medication management are highly resource intensive.
  • Behavioral science techniques employed in the NUDGE-EHR and NUDGE-EHR-2 trials may enhance the effectiveness of electronic health record (EHR) tools that alert clinicians to inappropriate medications during patient visits.
  • NUDGE-EHR was a 16-arm, two-stage adaptive pragmatic trial among 216 primary care providers and their older adult patients, conducted from October 2020 to August 2022, examining 14 promising EHR tools based on 9 different behavioral principles, with deprescribing as the primary outcome.
  • The 2 most promising tools were carried forward into a second, 3-arm parallel pragmatic trial, NUDGE-EHR-2, conducted in a different health system from November 2022 to March 2024. The EHR tools used pop-up windows to suggest deprescribing. To make the process faster and easier, the study provided clinicians with a set of helpful options, including a tapering algorithm, instructions for patients, orders for alternative medications, and referrals to behavioral health providers.
  • Deprescribing increased by 6.5% to 10.4% over usual care. Active discontinuation by primary care providers appeared to drive the results.

Discussion Themes

The adaptive design of the first NUDGE-EHR study helped inform the more traditional confirmatory trial, NUDGE-EHR-2.

The way EHR tools are used varies widely from provider to provider. Tools may be adapted over time so the tool works best for the individual provider.

 

Read more about the NUDGE-EHR study.

 

April 8, 2026: Behavioral Science-Based Electronic Health Record Tools, in This Week’s Rethinking Clinical Trials Grand Rounds

In this Friday’s Rethinking Clinical Trials Grand Rounds, Julie Lauffenburger of Harvard Medical School will present on the “Impact of Behavioral Science-Based Electronic Health Record Tools on Deprescribing for Older Adults.”

The Grand Rounds session will be held on Friday, April 10, 2026, at 1:00 pm eastern.

Lauffenburger is an associate professor of medicine at Brigham and Women’s Hospital and Harvard Medical School.

Join the online meeting


Or join the Grand Rounds mailing list to receive calendar invitations.

Grand Rounds June 27, 2025: Building Electronic Tools To Enhance and Reinforce CArdiovascular REcommendations for Heart Failure (BETTER CARE-HF) (Amrita Mukhopadhyay, MD, MS)

Speaker

Amrita Mukhopadhyay, MD, MS
Eugene Braunwald, MD Assistant Professor of Cardiology
The Leon H. Charney Division of Cardiology, Department of Medicine
Division of Healthcare Delivery Science, Department of Population Health
NYU School of Medicine
NYU Langone Health

Keywords

Heart Failure; Electronic Health Record; Prescribing

Key Points

  • Heart failure is a major public health issue and a leading cause of hospitalization, affecting over 6 million Americans. Mineralocorticoid receptor antagonists (MRAs) are a potentially life-saving treatment but are under-prescribed in patients with heart failure with reduced ejection fraction (HFrEF). Closing this treatment gap could save over 20,000 lives in the U.S. annually.
  • Electronic Health Record (EHR) tools could be a low-cost, scalable way to improve prescribing. However, there’s wide variability in EHR tool development and design. The optimal delivery and timing of EHR tools is unknown.
  • EHR tools fall into 2 categories: alerts and messages. Alerts apply to a single patient at a time and pop up during a clinical encounter; messages apply to multiple patients at once and are seen between encounters. The BETTER CARE-HF team designed both in accordance with cognitive load theory and nudge theory, applying the concepts of positioning, the split-attention effect, the default option, the transient information effect, and social influence.
  • They hypothesized that A) among patients with HFrEF who are evaluated by a cardiologist in the outpatient setting, an alert or a message will improve prescribing of MRA as compared to usual care, and B) the alert would be more effective than the message.
  • The researchers approached the pilot study as a “qualitative phase,” in which they would solicit feedback from participants and refine the intervention. They made several modifications to the EHR alerts and messages in response, and noted that guiding frameworks and pilot-testing were critical to designing an electronic intervention.
  • The pilot study was followed by a pragmatic trial that took place in over 60 practices in the NYU Langone Health System. Patients were cluster-randomized to an alert arm, message arm, or usual care. The primary outcome was new MRA prescription during the study period.
  • In the alert arm, nearly 30% of MRA-eligible patients were newly prescribed MRA – a highly statistically significant increase. The alerts were effective across all practice settings but were especially effective in high-volume settings.
  • In the message arm, 15.6% of MRA-eligible patients were newly prescribed MRA, compared to 11.7% in the usual care arm. This was still a statistically significant increase, but the messages were less effective than the alerts. Looked at another way, the number of MRA-eligible patients needed to yield one new prescription (the number needed to treat) was 25.6 in the message arm, compared to 5.6 in the alert arm.
  • An automated, EHR-embedded, tailored, and selective alert delivered at the time of the visit more than doubled prescribing of MRA as compared to usual care. Well-designed EHR tools could save lives.
  • Despite EHR tool effectiveness, busy physicians may still be hesitant. Too many tools can cause fatigue and burnout; concerns about workload and time costs can hinder uptake. Conversely, EHR tools that save time and reduce cognitive load may be more beneficial in busy practices. A post-trial survey indicated that cardiologist perceptions were generally favorable towards the BETTER CARE-HF tools, with some notable differences when asked about workflow.
  • The research team is conducting a multi-center trial to assess the effectiveness of the alert at other institutions, specifically across 3 high-volume health systems around the country. They are actively seeking other institutions to join the trial and encouraged attendees to reach out if interested.
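The number-needed-to-treat figures above follow directly from the reported prescribing rates: NNT is the reciprocal of the absolute risk difference between an intervention arm and usual care. A minimal sketch of the arithmetic (the function name is illustrative, not from the study):

```python
def nnt(intervention_rate: float, control_rate: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk difference."""
    arr = intervention_rate - control_rate  # absolute risk reduction (here, absolute increase in prescribing)
    if arr <= 0:
        raise ValueError("intervention rate must exceed control rate")
    return 1.0 / arr

# Message arm: 15.6% of eligible patients newly prescribed MRA vs. 11.7% under usual care
print(round(nnt(0.156, 0.117), 1))  # → 25.6, matching the reported figure
```

By the same arithmetic, the alert arm's reported NNT of 5.6 implies an absolute increase of roughly 18 percentage points over usual care, consistent with the "nearly 30%" prescribing rate reported above.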

Discussion Themes

The research team started by compiling EHR data on the current gap in care at NYU Langone. Having that real-time data helped the health system, and the physicians within it, recognize that the intervention was necessary, despite their assumption that they were already delivering high-quality care.

This intervention was targeted to a specific population (cardiologists at NYU Langone) and a specific treatment (MRA) for a specific condition (HFrEF). In a different setting, or with a different treatment, implementation may need to be adjusted.

Dr. Mukhopadhyay noted that folks who saw how the intervention worked were often surprised by how rarely the alert was triggered. She suspects that the selective nature of the intervention helped drive the intervention’s effectiveness by preventing burnout.

Working with a single IRB that understood the intention behind a learning health system helped standardize regulatory expectations across sites and facilitated onboarding.

Grand Rounds April 25, 2025: Automated Response Technology Integrated into EMR and Physician-Patient Communication (Ming Tai-Seale, PhD, MPH)

Speaker

Ming Tai-Seale, PhD, MPH
Professor
Departments of Family Medicine and Medicine (Bioinformatics)
University of California San Diego School of Medicine
Director of UC San Diego Learning Health Systems Science Center

Keywords

Electronic Health Record; Artificial Intelligence; MyChart; Patient Messages; Large Language Models; Clinician Well-Being; Mental Health

Key Points

  • Physician work is increasingly centered on the electronic health record (EHR), which consumes nearly 50% of scheduled clinic time. The volume of patient messages in MyChart increased significantly from 2020 to 2022 and remains much higher than pre-pandemic levels.
  • Research published in Health Affairs and JAMA Network Open suggests that this influx of inbox messages is detrimental to physicians’ well-being. The emotional timbre of messages from patients plays a role as well; in an analysis of EHR in-basket messages, the research team found messages from patients that contained expletives, vitriol, and personal attacks.
  • The research team sought to examine the association between generative AI (GenAI)-drafted replies for patient messages and physician time spent answering messages. They also examined the quality of GenAI-drafted replies for messages dealing with mental health concerns.
  • The team created a prompt within the EHR that gave physicians the option to either use an AI-generated response as a starting point or to start with a blank reply. Messages eligible for responses drafted by GenAI included refills, results, paperwork, and general questions.
  • The pilot study took place from June 16 to July 12, 2023, targeting primary care attending physicians at the University of California San Diego. 52 physician volunteers received the intervention; the 70 physicians in the control arm did not.
  • In the pilot study, clinicians who were given the option of a GenAI-drafted reply spent more time reading patient messages. There was no change in average reply time.
  • When clinicians received messages dealing with mental health issues, replies drafted by more recent versions of GenAI had more utility than older versions.
  • The physicians expressed that they valued the GenAI-drafted replies as a compassionate starting point for their communication. They noted areas for improvement, like a robotic tone, and emphasized the continued need for human oversight and intervention.
  • The study team acknowledged potential risks when using large language models (LLMs) in mental health communication. These included a loss of human touch and empathy; overreliance and deskilling; and privacy and security risks.
  • This is an ongoing effort. Next steps include using LLMs to facilitate analyses of qualitative data on electronic patient-clinician communication; triangulating qualitative and quantitative data in the EHR; and aiming for a more comprehensive understanding of mental health communication and how LLMs might improve its quality.

Discussion Themes

Anecdotally, the researchers have heard from physicians that the automated response technology (ART) – which Epic and Microsoft continue to refine – seems to have improved. But issues still remain, such as GenAI recommending that patients see clinicians from external hospital systems.

When a modified GenAI-drafted reply was sent to a patient, a disclaimer was included: “Part of this message was generated automatically.” The research team felt that it was important to provide this transparency and disclose to patients when AI contributed to the messaging they received.

Health systems and professional organizations must develop standards advocating for equity in the implementation of and access to these tools.

Grand Rounds March 28, 2025: A Cross-Sectional Study of GPT-4–Based Plain Language Translation of Clinical Notes to Improve Patient Comprehension of Disease Course and Management (Anivarya Kumar, BA; Matthew Engelhard, MD, PhD)

Speakers

Anivarya Kumar, BA
Fourth-Year Medical Student
Duke University School of Medicine

Matthew Engelhard, MD, PhD
Assistant Professor, Department of Biostatistics & Bioinformatics
Duke University School of Medicine

Keywords

Health Literacy; Large Language Models; Artificial Intelligence; Electronic Health Records

Key Points

  • Limited health literacy (HL) has tangible effects on morbidity and mortality: it’s associated with higher rates of hospital admissions and readmissions, medication nonadherence, healthcare costs, and all-cause mortality. 9 in 10 adults have limited HL, and health literacy rates are 2 to 3 times lower in marginalized populations.
  • 71% of patients report accessing their electronic health records (EHRs) to read documentation from their clinical visits, particularly the discharge summary notes (DSNs). But clinical notes have low levels of readability, hindering patients’ ability to engage in shared decision-making.
  • The research team looked at whether a Generative Pre-trained Transformer 4 (GPT-4)-based plain language translation of DSNs could improve patient comprehension of disease course and management.
  • 533 patients, recruited from a pool of EHR users, were randomly assigned 4 DSNs to assess. After reading the DSNs – 2 translated into more accessible language, 2 untranslated – patients answered questions assessing their objective comprehension, subjective comprehension, confidence, and time spent on each DSN.
  • Compared to the untranslated DSNs, objective understanding of the translated DSNs increased by 6.1%; subjective understanding increased 18%; confidence increased 45%; and average time spent with the DSNs decreased 51%.
  • The research team concluded that GPT translation of DSNs significantly improved patient comprehension of disease course and management and optimized time spent reading them. The effect was significantly greater in marginalized populations with historically low health literacy, reducing the gap in comprehension scores between patient populations.
  • Limitations included the use of standardized DSNs as opposed to real-world DSNs; the use of MyChart when enrolling patients, leading to a participant group with a higher baseline HL; and the modest number of Hispanic patients enrolled in the study.
  • Race is a significant and independent factor for HL. Preliminary data suggests that GPT translation can help close this gap. The research team identified this as an area for further study.

Discussion Themes

While discharge instructions alone can be great for providing patients with action items, they lack some of the context that DSNs can provide, lending the patient a more complete understanding of their condition.

The advantages of providing pre-generated materials, as opposed to pointing patients to a large language model (LLM) like ChatGPT for a more interactive explanation of their condition, include the potential for screening by a healthcare professional and less of a burden on the patient.

The study team ended up favoring “semantically-focused” translations over translations that focused solely on simplifying the language or avoiding jargon. When the LLM was asked to focus on semantics, it was more likely to define concepts and their implications.

Health literacy and reading level are not necessarily equivalent, and patient-centered, accessible language is important to consider when applying LLMs. This may require further investigation, e.g., through qualitative interviews.