July 10, 2025: Researchers Consider the P Value’s Usefulness in Healthcare Systems Research

The P value is a statistic frequently used in biomedical research for the presentation of study findings. It is typically used to make a dichotomous decision about whether a finding is “statistically significant” based on a predetermined threshold, typically P < .05.

Although the peer-reviewed journals in which researchers aspire to publish their work are anchored to P values, the information used to drive decisions in healthcare is not. At the NIH Pragmatic Trials Collaboratory’s 2025 Annual Steering Committee Meeting, a panel led by Greg Simon, leader of the Health Care Systems Interactions Core, discussed P values versus decision-maker perspectives.

Communities, partners, and healthcare systems leaders make decisions based on many multidimensional factors.

“We care about health outcomes, but we also care about cost and the satisfaction of members, patients, and employees. Any attempt to roll those up into one statistic is really problematic,” Simon said.

Key Takeaways

  • Where possible, measure and report on what is meaningful to partners, including effect sizes, confidence intervals, cost, and patient and employee satisfaction.
  • Recognize that P values are a useful metric, but they are only one piece of a larger toolbox.
  • Understand that what is important depends on context, the audience, and local and national priorities.
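The first takeaway can be illustrated with a minimal sketch: alongside a P value, report the effect size and its confidence interval so partners can judge the magnitude of the difference, not just its significance. The numbers below are simulated for illustration only and do not come from any NIH Collaboratory trial.

```python
"""Report an effect size and 95% CI alongside the P value (toy data)."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
control = rng.normal(50, 10, n)   # e.g., a cost or satisfaction score
treated = rng.normal(47, 10, n)   # simulated modest improvement

t_stat, p = stats.ttest_ind(treated, control)
diff = treated.mean() - control.mean()                      # effect size
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
ci = (diff - 1.96 * se, diff + 1.96 * se)                   # 95% CI

print(f"P value: {p:.3f}")
print(f"Mean difference: {diff:.2f}")
print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```

A decision-maker reading the interval and the point estimate learns how large the effect is and how uncertain it is, which a lone P value cannot convey.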

The panelists included Corita Grudzen, co–principal investigator for the PRIM-ER trial; Rich Platt, co-lead of the NIH Collaboratory’s Distributed Research Network; and Liz Turner, co-lead of the Biostatistics and Study Design Core.

This summer, we are sharing highlights from the 2025 Annual Steering Committee Meeting. Access the complete collection of meeting materials.

March 12, 2025: In This Week’s PCT Grand Rounds, Spillover as a Potential Source of Bias in Pragmatic Trials


In this Friday’s PCT Grand Rounds, Sean Mann of RAND will present “Spillover Due to Constraints on Care Delivery: A Potential Source of Bias in Pragmatic Clinical Trials.”

The Grand Rounds session will be held on Friday, March 14, 2025, at 1:00 pm eastern.

Mann is a senior policy analyst at RAND.

Join the online meeting.

March 10, 2025: Developing Monitoring Plans Warrants Special Attention in Pragmatic Clinical Trials

In an article published online ahead of print, leaders from the NIH Pragmatic Trials Collaboratory share lessons learned about the importance of independent oversight by a safety office or data and safety monitoring board in pragmatic clinical trials, even for trials deemed to have minimal risk.

Challenges specific to pragmatic trials include:

  • complexity, quality, and timing of a real-world data pipeline, especially in trials with many heterogeneous sites
  • embedding of interventions in clinical workflows, so investigators have less control over treatments or interventions
  • potential for incidental and collateral findings

“We recommend regular, rigorous data quality checks, ongoing monitoring of adherence to interventions, and including someone who is knowledgeable about pragmatic clinical trials and novel research designs in the development of Data and Safety Monitoring Plans and Data and Safety Monitoring Boards,” the authors wrote.

Because pragmatic trials aim to reflect real-world conditions, they are conducted in settings that cannot be closely controlled. Close monitoring, whether by independent monitors or a data and safety monitoring board, is therefore critical to a successful study that produces meaningful results.

The authors drew on experiences from 7 of the NIH Collaboratory Trials and the expertise of the Coordinating Center, the Ethics and Regulatory Core, the Biostatistics and Study Design Core, and the Health Care Systems Interactions Core.

The article was published in Contemporary Clinical Trials.

March 4, 2025: PRIM-ER Team Develops Innovative Statistical Techniques for Stepped-Wedge Trials

Researchers with PRIM-ER, an NIH Collaboratory Trial, published 2 innovative statistical techniques for evaluating intervention effects in stepped-wedge, cluster randomized trials. The new models, which use Bayesian methods, outperformed traditional analytic methods and other Bayesian approaches in simulations and real-world applications.

The article was published online in Statistics in Medicine.

In cluster randomized trials with stepped-wedge designs, the clusters are randomized into several groups, and all groups start the trial in the control condition. Groups of clusters cross over to the intervention condition on a staggered timeline, and all groups receive the intervention before the end of the trial.

Stepped-wedge designs can be advantageous when simultaneous rollout of the intervention to all clusters is infeasible, when withholding the intervention from any cluster would be unethical, or when there is a risk of contamination between intervention subjects and control subjects. However, stepped-wedge designs can also introduce confounding by time: because the intervention is rolled out to clusters in waves, temporal trends during the study can be mistaken for intervention effects.
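The staggered rollout described above can be pictured as a treatment-indicator matrix. This is a generic sketch of a stepped-wedge layout, not the PRIM-ER schedule; the 4 sequences and 5 periods are illustrative choices.

```python
"""Treatment-indicator layout of a stepped-wedge design:
rows are groups (sequences) of clusters, columns are time periods,
0 = control condition, 1 = intervention condition."""
import numpy as np

n_sequences, n_periods = 4, 5
design = np.zeros((n_sequences, n_periods), dtype=int)
for g in range(n_sequences):
    # Sequence g crosses over to the intervention at period g + 1,
    # so every sequence starts in control and ends on the intervention.
    design[g, g + 1:] = 1

print(design)
# [[0 1 1 1 1]
#  [0 0 1 1 1]
#  [0 0 0 1 1]
#  [0 0 0 0 1]]
```

Reading down any column shows why time is a potential confounder: later periods contain progressively more intervention exposure, so a secular trend over time is partially aligned with the intervention itself.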

(Learn more about stepped-wedge designs in the Living Textbook.)

The PRIM-ER researchers tested 2 new Bayesian hierarchical penalized spline models to improve the estimation of intervention effects in stepped-wedge trials. The first model focuses on immediate intervention effects and accounts for large numbers of clusters and time periods. The second model extends the first by accounting for time-varying intervention effects. The researchers applied both models to data from PRIM-ER.

Read the full report.

PRIM-ER tested a multidisciplinary primary palliative care intervention in a diverse mix of emergency departments in the United States to improve the delivery of goal-directed emergency care of older adults. The study was supported by the National Institute on Aging. Learn more about PRIM-ER.

January 15, 2025: Designing for Diversity, in This Week’s PCT Grand Rounds


In this Friday’s PCT Grand Rounds, Chris Lindsell of Duke University will present “Design for Diversity.”

The Grand Rounds session will be held on Friday, January 17, 2025, at 1:00 pm eastern.

Lindsell is professor and co-chief of biostatistics and bioinformatics, director of data science and biostatistics at the Duke Clinical Research Institute, and director of biostatistics and bioinformatics at the Duke Clinical and Translational Science Institute—all at Duke University.

Join the online meeting.

October 15, 2024: Case Study Describes a Reassessment of Sample Size in an Ongoing Cluster Randomized Trial

A new case study from the NIH Pragmatic Trials Collaboratory highlights an interim reassessment of sample size during an ongoing cluster randomized trial. The case study was published this week in the Living Textbook of Pragmatic Clinical Trials.

Researchers in cluster randomized trials must account for the potential correlation among participants within clusters in the design and analysis of their trial by estimating the intraclass correlation coefficient when calculating the target sample size. Often they use preliminary data from the planned enrollment sites to estimate the correlation. However, when preliminary data are unavailable at the time of study design, they may instead use interim data collected during the trial itself to reassess the trial’s sample size.
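The role the intraclass correlation plays in the target sample size can be sketched with the standard design-effect formula. This is the generic textbook inflation, not the FM-TIPS calculation; the numbers are illustrative.

```python
"""Standard design-effect inflation for a cluster randomized trial."""
import math

def design_effect(cluster_size: int, icc: float) -> float:
    """DE = 1 + (m - 1) * ICC, where m is the average cluster size."""
    return 1 + (cluster_size - 1) * icc

def cluster_trial_n(n_individual: int, cluster_size: int, icc: float) -> int:
    """Inflate an individually randomized sample size by the design effect."""
    return math.ceil(n_individual * design_effect(cluster_size, icc))

# An individually randomized design needing 400 participants nearly doubles
# with clusters of 20 and ICC = 0.05 (DE = 1.95):
print(design_effect(20, 0.05))
print(cluster_trial_n(400, 20, 0.05))
```

Because the required sample size is so sensitive to the ICC, a poor initial estimate can leave a trial badly under- or overpowered, which is what motivates reassessing it against interim data.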

The contributors of the case study focus on FM-TIPS, an NIH Collaboratory Trial, to describe an approach to conducting an interim reassessment of sample size in an ongoing trial. Read the full case study.

FM-TIPS is examining whether the addition of transcutaneous electrical nerve stimulation to routine physical therapy improves movement-evoked pain compared with physical therapy alone among patients with fibromyalgia. The trial is supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases through the NIH HEAL Initiative. Learn more about FM-TIPS.

The contributors of the case study include members of the FM-TIPS study team and leaders of the NIH Collaboratory’s Biostatistics and Study Design Core. David-Erick Lafontant is a statistician, Bridget Zimmerman is a clinical professor of biostatistics, and Emine Bayman is an associate professor of biostatistics—all at the University of Iowa. Megan McCabe is an assistant professor of biostatistics at the University of Alabama at Birmingham. Patrick Heagerty is a professor of biostatistics at the University of Washington. Liz Turner is an associate professor of biostatistics and bioinformatics at Duke University.

September 12, 2024: NIH Collaboratory Biostatisticians Evaluate Analytic Models for Individually Randomized Group Treatment Trials


To avoid inflation in the rate of type I error, or false positives, in individually randomized group treatment (IRGT) trials, researchers should choose an analytic model that accounts for the correlations in outcome measures that arise when study participants receive an intervention from the same source, according to a report from the NIH Pragmatic Trials Collaboratory’s Biostatistics and Study Design Core.

The report was published online ahead of print in Statistics in Medicine.

Many IRGT trials randomly assign individuals to study arms but deliver the study intervention through shared “agents,” such as clinicians, therapists, or trainers. After randomization, interactions between participants who share the same agent can lead to correlations in study outcomes. The delivery agents may be nested in or crossed with study arm, and participants may interact with a single agent or multiple agents. There has been no systematic effort to identify the appropriate analytic models for these complex study designs.

To address this knowledge gap, members of the NIH Collaboratory’s Biostatistics and Study Design Core conducted a simulation study to examine the performance of a variety of analytic models for IRGT trials in which complex clustering arises from participants interacting with multiple agents or single agents in both nested and crossed designs. They found substantial inflation in the type I error rate in studies with nested designs when the analytic model did not account for participants interacting with multiple agents.
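The inflation the Core observed can be illustrated with a toy Monte Carlo simulation of my own construction, far simpler than the study’s designs: under a true null effect, a naive t-test that ignores the correlation induced by shared delivery agents rejects much more often than its nominal 5% level.

```python
"""Toy simulation: shared-agent clustering inflates type I error
when the analysis ignores it (true intervention effect is zero)."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_agents, per_agent = 2000, 10, 10
agent_sd, resid_sd = 0.5, 1.0  # shared-agent effect -> within-agent correlation

rejections = 0
for _ in range(n_sims):
    # Treatment arm: participants share agents, so outcomes are correlated.
    agent_effects = rng.normal(0, agent_sd, n_agents)
    treated = (np.repeat(agent_effects, per_agent)
               + rng.normal(0, resid_sd, n_agents * per_agent))
    # Control arm: independent outcomes with the same total variance.
    control = rng.normal(0, np.sqrt(agent_sd**2 + resid_sd**2),
                         n_agents * per_agent)
    _, p = stats.ttest_ind(treated, control)  # naive: assumes independence
    rejections += p < 0.05

print(f"Empirical type I error: {rejections / n_sims:.3f}")  # well above 0.05
```

The rejection rate climbs well past 0.05 because the t-test’s standard error is too small once outcomes within an agent move together; models that include an agent-level term restore the nominal rate.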

Read the full article.

This article is the latest in a series of reports completed this year by members of the Biostatistics and Study Design Core to explore analytic approaches to clinical trials with complex clustering and other novel design features.

Lead author Jonathan Moyer, a statistician in the NIH Office of Disease Prevention, led a discussion of complex clustering in pragmatic trials in a session of the NIH Collaboratory’s weekly Rethinking Clinical Trials webinar series: “The Perils and Pitfalls of Complex Clustering in Pragmatic Trials.”

Learn more about the NIH Collaboratory’s Biostatistics and Study Design Core.

August 20, 2024: NIH Pragmatic Trials Collaboratory Launches New Self-Paced Learning Path on Pragmatic Trial Study Design

The NIH Pragmatic Trials Collaboratory has launched a new interactive learning path that provides essential knowledge to research teams on how to choose the most appropriate study design for a pragmatic clinical trial.

The learning path is a series of self-paced training modules that include expert videos, reference materials, and knowledge checkpoints. Its content covers key information for designing a study, including:

  • Choosing between an explanatory or pragmatic study design
  • How to make decisions about randomization
  • Choosing between parallel and stepped-wedge design

The learning path features expert insights from Liz Turner, PhD, co-chair of the NIH Collaboratory’s Biostatistics and Study Design Core, adding to the program’s robust online training resources. This new tool is free and takes about 1 hour to complete. Learners will receive a certificate upon completing the course.


“We created this innovative learning path to help research teams work through the many issues and considerations that come up in the design phase of a pragmatic trial,” Turner said. “It guides learners through the decision-making process for study design in a fun and fast-paced learning environment.”

“I hope research teams take advantage of this exciting new resource that answers common questions about pragmatic trial study design,” said Kevin Weinfurt, PhD, a co-principal investigator for the NIH Pragmatic Trials Collaboratory Coordinating Center. “Making informed decisions at the trial’s design stage is critical to ensure a trial can be successful in producing reliable evidence.”

To access the learning path, visit the learning module page to sign up. Simply click “get this course” and then “sign up” to create an account in our learning management system.

The NIH Pragmatic Trials Collaboratory Coordinating Center led the development of the study design learning path in partnership with Symphony Learning Partners.

July 10, 2024: Asking Different Causal Questions in Randomized Trials, in This Week’s PCT Grand Rounds


In this Friday’s PCT Grand Rounds, Miguel Hernán of Harvard University will present “Causal Estimands: Should We Ask Different Causal Questions in Randomized Trials and in the Observational Studies That Emulate Them?”

The Grand Rounds session will be held on Friday, July 12, 2024, at 1:00 pm eastern.

Hernán is the Kolokotrones Professor of Biostatistics and Epidemiology and the director of the CAUSALab at Harvard T.H. Chan School of Public Health. Researchers at the CAUSALab generate, analyze, and interpret data to support decision-makers in making better decisions about what works in medicine, public health, and policy.

Join the online meeting.

April 3, 2024: In This Week’s PCT Grand Rounds, a New Look at P Values for Randomized Trials


In this Friday’s PCT Grand Rounds, Erik van Zwet of Leiden University Medical Center will present “A New Look at P Values for Randomized Clinical Trials.”

The Grand Rounds session will be held on Friday, April 5, 2024, at 1:00 pm eastern.

Dr. van Zwet is an associate professor in the Department of Biomedical Data Sciences at Leiden University Medical Center in the Netherlands.

Join the online meeting.