Grand Rounds April 21, 2023: Personalised Cooler Dialysate for Patients Receiving Maintenance Haemodialysis (MyTEMP): A Pragmatic, Cluster-randomised Trial (Amit Garg, MD, MA, FRCPC, FACP, PhD; Stephanie N. Dixon, PhD MSc)

Speakers

Amit Garg, MD, MA (Education), FRCPC, FACP, PhD
Associate Dean, Clinical Research, Schulich School of Medicine and Dentistry
Lead, Institute for Clinical Evaluative Sciences Kidney, Dialysis and Transplantation Provincial Program
Director, Institute for Clinical Evaluative Sciences (ICES) Western Facility
Nephrologist, London Health Sciences Centre
Professor, Medicine, Epidemiology & Biostatistics, Western University

Stephanie N. Dixon, PhD MSc
Staff Scientist, Institute for Clinical Evaluative Sciences Kidney, Dialysis and Transplantation Research Program
Biostatistician, London Health Sciences Centre

 

Keywords

MyTEMP, Pragmatic Clinical Trials, Ethics, Biostatistics

 

Key Points

  • For each hemodialysis treatment, clinicians typically set the temperature of the dialysate on the machine to 36.5 or 37.0 degrees Celsius. The reasoning for this temperature is unclear, though it likely represents what was considered the average body temperature of most patients.
  • In a recent international survey of more than 270 centers, nearly half now use cooler dialysate (less than or equal to 36.0 degrees C) in patient care. This change in practice is based on data suggesting that cooler (vs. standard) temperature dialysate is beneficial. However, in two recent systematic reviews the overall quality of evidence for dialysate cooling was deemed low, with a high risk of bias.
  • The MyTEMP trial is a pragmatic, cluster randomized controlled trial in Ontario, Canada, to determine if adopting a default center-wide policy of personalized cooler dialysate is superior to a standard temperature dialysate of 36.5 degrees C.
  • MyTEMP was an innovative, pragmatic trial implemented as part of a learning healthcare system: it used covariate-constrained randomization, was registry-based, and was embedded in routine care delivered by more than 2,000 nurses at 84 centers (a sketch of the constrained randomization approach follows this list).
  • During the 4-year study period, about 8,000 patients were randomized to the personalized cooler dialysate and about 7,400 to the standard temperature dialysate. The mean dialysate temperature in the standard group was 36.4 degrees C, and the mean temperature in the cooler group was 35.8 degrees C.
  • The primary composite outcome was cardiovascular mortality or hospital admission with myocardial infarction (MI), stroke, or heart failure. The risk of these events is high in the hemodialysis population, with a cumulative incidence of 30% at 4 years. There was no appreciable difference in the primary outcome estimate for the cooler temperature group. Additionally, the cooler temperature group reported a higher likelihood of discomfort.
  • The lack of cardiovascular benefit and the higher likelihood of patient discomfort provide no justification for adopting cooler dialysate as a center-wide policy over a standard temperature of 36.5 degrees C. After MyTEMP, centers in Ontario stopped adopting colder temperature dialysate as a center-wide policy, and patients reported less discomfort during hemodialysis care.
  • Cluster randomized trials of hemodialysis center-wide policies raise complex ethical issues, and many patients who receive hemodialysis are vulnerable. A patient or their nephrologist could decide to opt out of the randomly allocated center-wide default policy and of providing symptom data, but not of the use of de-identified health records. The REB approved the MyTEMP request to use an altered patient consent process because the research was deemed of minimal risk to patients.
  • MyTEMP worked with patients and caregivers to develop the trial, and Kidney Patient and Family Advisory Councils guided the choice of additional outcomes. Participants were debriefed on the trial results.
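
The covariate-constrained randomization mentioned in the key points can be sketched in a few lines. The code below is a minimal illustration of the general technique, not the MyTEMP allocation program; the number of centers, the center-level covariates, and the balance cutoff are all hypothetical.

    import itertools
    import random
    import numpy as np

    # Hypothetical center-level covariates (e.g., center size, historical event rate),
    # standardized; one row per center.
    rng = np.random.default_rng(2023)
    n_centers = 10                       # kept small for illustration (MyTEMP had 84)
    covariates = rng.normal(size=(n_centers, 2))

    def imbalance(assignment, covariates):
        """Sum of squared differences in arm-wise covariate means."""
        arm = np.asarray(assignment, dtype=bool)
        diff = covariates[arm].mean(axis=0) - covariates[~arm].mean(axis=0)
        return float(np.sum(diff ** 2))

    # Enumerate all 1:1 allocations, keep the best-balanced 10%,
    # and pick the final allocation at random from that constrained set.
    candidates = [
        tuple(1 if i in combo else 0 for i in range(n_centers))
        for combo in itertools.combinations(range(n_centers), n_centers // 2)
    ]
    scores = [imbalance(a, covariates) for a in candidates]
    cutoff = np.quantile(scores, 0.10)
    constrained_set = [a for a, s in zip(candidates, scores) if s <= cutoff]
    final_allocation = random.Random(42).choice(constrained_set)
    print(final_allocation)              # 1 = cooler-dialysate policy, 0 = standard policy

The idea is simply to enumerate (or sample) candidate allocations, score each on center-level covariate balance, and randomize only within the best-balanced subset.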

Learn more

Read about the MyTEMP trial in The Lancet.

Read about the MyTEMP statistical analysis plan.

Discussion Themes

We run into situations where we are talking to stakeholders and thinking about criteria for waiver of consent, and it just doesn't sit right: the risks and benefits are equal, but I still wouldn't want to be randomly assigned. There is a natural assertion of autonomy. Did some of these questions of autonomy vs. risk come up? I think it is very complex, and it is not black and white. You need to identify the principles before the process. In terms of the dialysis component, in routine care we get consent when we start treatment, but there are many things happening in the background. As a clinician, when I am providing care I'm not discussing these concerns; we are talking about other things. Am I delivering great medicine when I'm not sure what to do here, given the practice variability?

Tags

#pctGR, @Collaboratory1

April 10, 2023: Li Receives New PCORI Award to Develop Causal Inference Methods for Stepped-Wedge Cluster Randomized Trials

Dr. Fan Li, a member of the NIH Pragmatic Trials Collaboratory’s Biostatistics and Study Design Core since 2013, has received approval of a 3-year funding award from the Patient-Centered Outcomes Research Institute (PCORI) to develop causal inference methods for stepped-wedge cluster randomized trials—a design that has been increasingly adopted in pragmatic trials. Li is an assistant professor of biostatistics at the Yale School of Public Health.

The new study, entitled “Toward Improved Design and Analysis of Stepped Wedge Trials: An Estimand-Aligned and Efficiency-Focused Framework,” will contribute new methods and software for planning and analyzing stepped-wedge cluster randomized trials that enable investigators to (a) target transparent causal estimands under the counterfactual outcomes framework and (b) leverage baseline information to achieve higher statistical efficiency.

This is Li’s second PCORI award. Read a summary of his previous PCORI award.

An estimand is a precise description of the treatment effect reflecting the scientific question, and is ideally a model-free concept. The research team will contribute weighted-average treatment effect estimands that recognize that unequal cluster sizes may contribute to variation in treatment effects across cluster-periods. In addition, pragmatic trials that adopt a stepped-wedge cluster randomized design frequently collect baseline data on the patient-centered outcomes and/or patient-level characteristics. The research team will study and operationalize estimand-aligned methods that effectively leverage such baseline variables through parametric regression and nonparametric machine learning methods.
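
As a rough schematic of the kind of estimand being described (the notation here is illustrative, not the study team's formal definition), let Y_{ijk}(z) denote the potential outcome of participant k in cluster i during period j under treatment z, let N_{ij} be the number of participants in that cluster-period, and let w_{ij} >= 0 be analyst-chosen weights. A weighted-average treatment effect can then be written as

    \Delta_w = \frac{\sum_{i,j} w_{ij} \cdot \frac{1}{N_{ij}} \sum_{k=1}^{N_{ij}} E\left[ Y_{ijk}(1) - Y_{ijk}(0) \right]}{\sum_{i,j} w_{ij}},

where setting w_{ij} = N_{ij} yields a participant-average effect, w_{ij} = 1 yields a cluster-period-average effect, and other weights interpolate between the two.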

Li has assembled a multidisciplinary team for this study, including Dr. Patrick Heagerty, professor of biostatistics at the University of Washington and a cochair of the NIH Collaboratory’s Biostatistics and Study Design Core. In addition, Drs. Jeffrey Jarvik, principal investigator of the LIRE NIH Collaboratory Trial, and Douglas Zatzick, principal investigator of the TSOS NIH Collaboratory Trial, serve as stakeholders for the study. The stakeholder team also includes colleagues from the NIA IMPACT Collaboratory, Drs. Thomas Travison and Monica Taljaard.

Grand Rounds March 10, 2023: Estimands in Cluster-Randomized Trials: Choosing Analyses that Answer the Right Question (Brennan Kahan, PhD)

Speaker

Brennan Kahan, PhD
MRC Clinical Trials Unit
University College London (UCL)

 

Keywords

Cluster-Randomized Trial, Estimands, Cluster-Average Treatment Effect, Participant-Average Treatment Effect, Informative Cluster Size

 

Key Points

  • The TRIGGER trial inspired statistician Brennan Kahan to ask the questions, “But what if we’d chosen a different analysis?” and “How much would standard errors really change?”
  • An estimand is a precise description of the treatment effect that a researcher aims to estimate from the trial. This concept is especially important in cluster-randomized trials, where it must be determined whether the estimand reflects a participant-average versus a cluster-average treatment effect, and a marginal versus a cluster-specific effect. The difference relates to how the data are weighted.
  • The two estimands will differ when there is informative cluster size in a trial. Informative cluster size refers to situations in which outcomes or treatment effects differ between large and small clusters, for example when patients given the same medication experience better outcomes in a large hospital than in a small hospital.
  • Which estimand (participant-average or cluster-average) to use depends on the study question. The participant-average treatment effect describes the effect on the patient population of switching from one intervention to another, whereas the cluster-average treatment effect better reflects an intervention’s impact on the clusters themselves.
  • Mixed-effects models or Generalized Estimating Equations (GEE) are the most common analysis methods for cluster-randomized trials, but when informative cluster size is present, both are biased. To avoid this bias, Independence Estimating Equations (IEEs) and cluster-level summaries can be used to estimate either cluster- or participant-average effects (a simple numerical illustration follows this list).
  • To the presenter’s knowledge, the occurrence of informative cluster size has never been formally evaluated. Statisticians working on pragmatic trials should consider the estimand and tailor the analysis to the chosen estimand.
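
As a concrete illustration of the informative-cluster-size distinction summarized above, the sketch below simulates cluster-randomized data in which larger clusters have larger treatment effects and then contrasts two simple estimators: an unweighted individual-level comparison (the quantity an unweighted independence-estimating-equations analysis targets) and a comparison of cluster-level means. The data-generating numbers are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    n_clusters = 200

    # Informative cluster size: the treatment effect grows with cluster size.
    sizes = rng.integers(5, 100, size=n_clusters)
    treat = rng.binomial(1, 0.5, size=n_clusters)          # cluster-level randomization
    effects = 1.0 + 0.02 * (sizes - sizes.mean())

    cluster_means, all_y, all_z = [], [], []
    for n, z, d in zip(sizes, treat, effects):
        y = rng.normal(loc=z * d, scale=1.0, size=n)        # outcomes within one cluster
        cluster_means.append(y.mean())
        all_y.append(y)
        all_z.append(np.full(n, z))

    y_ind = np.concatenate(all_y)
    z_ind = np.concatenate(all_z)
    cluster_means = np.array(cluster_means)

    # Participant-average effect: every participant weighted equally.
    participant_avg = y_ind[z_ind == 1].mean() - y_ind[z_ind == 0].mean()

    # Cluster-average effect: every cluster weighted equally.
    cluster_avg = cluster_means[treat == 1].mean() - cluster_means[treat == 0].mean()

    print(f"participant-average estimate: {participant_avg:.2f}")
    print(f"cluster-average estimate:     {cluster_avg:.2f}")
    # With informative cluster size, the two estimates differ systematically.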

Learn more

Many of the concepts described in the Grand Rounds presentation are outlined in the article published in the International Journal of Epidemiology: Estimands in cluster-randomized trials: choosing analyses that answer the right question

Discussion Themes

The summary measure scale is very important in determining the best estimand for a specific cluster-randomized trial.

Equal cluster sizes do not necessarily make the choice of estimand irrelevant, because the analyzed sample sizes may still differ across clusters. Reweighting may be necessary, and it may require additional assumptions.

It’s difficult to determine definitively whether there is informative cluster size, but it would be interesting to evaluate it in pilot cluster data.

Stratification would not solve the issue of informative cluster size in the analysis.

Tags

#pctGR, @Collaboratory1

March 8, 2023: Biostatistics Core Sponsors This Week’s PCT Grand Rounds on Estimands in Cluster Randomized Trials

In this Friday’s PCT Grand Rounds, Brennan Kahan of University College London will present “Estimands in Cluster-Randomized Trials: Choosing Analyses That Answer the Right Question.” This session is sponsored by the NIH Pragmatic Trials Collaboratory’s Biostatistics and Study Design Core Working Group.

The Grand Rounds session will be held on Friday, March 10, 2023, at 1:00 pm eastern.

Kahan is a senior research fellow in the Institute of Clinical Trials and Methodology at University College London.

Join the online meeting.

March 7, 2023: Webinar on Causal Inference in Pragmatic Trials Coming to IMPACT Collaboratory

In the March 16 session of IMPACT Grand Rounds, the NIA IMPACT Collaboratory will host Dr. Eleanor Murray, who will present on the topic of causal inference in pragmatic trials.

From the announcement:

Join us for IMPACT Grand Rounds on Thursday, March 16 at 12pm ET with Dr. Murray, who will be presenting on “Causal Inference in Pragmatic Trials.”

Eleanor (Ellie) Murray, PhD, is an assistant professor of Epidemiology at Boston University School of Public Health with expertise in causal inference. Her work focuses on improving methods for evidence-based decision-making and human-data interaction, as well as improving the translation of methodological advances into practical applied work. Application areas include HIV, HPV, cancer, cardiovascular disease, reproductive health research, tuberculosis research, access to care, psychiatric disorders, musculoskeletal disorders, social and environmental epidemiology, and maternal and adolescent health. Dr Murray also conducts meta-research evaluating bias in existing research. She has an ScD in Epidemiology and MSc in Biostatistics from Harvard TH Chan School of Public Health, where she also did her postdoctoral fellowship in the Program on Causal Inference, an MPH in Epidemiology from Columbia Mailman School of Public Health, and a BSc in Biology from McGill University. Dr Murray is the co-host of the podcast Casual Inference, an Associate Editor for Social Media at the American Journal of Epidemiology, and a science communicator under the handle @epiellie on Twitter.

Zoom Conferencing
Join from PC, Mac, iOS or Android: https://hebrewseniorlife.zoom.us/j/97344810673
Dial-In:  +1 312 626 6799 (US Toll) or  +1 470 250 9358 (US Toll)
Meeting ID:  973 4481 0673

February 14, 2023: IMPACT Collaboratory to Host Grand Rounds on Treatment Effect Heterogeneity in Cluster Randomized Trials

Dr. Fan Li, a member of the NIH Pragmatic Trials Collaboratory’s Biostatistics and Study Design Core, will present “Methods for Designing Cluster Randomized Trials to Detect Treatment Effect Heterogeneity” during IMPACT Grand Rounds on Thursday, February 16, at 12:00 pm eastern. IMPACT Grand Rounds is hosted by the NIA IMPACT Collaboratory.

Fan Li, PhD, is an assistant professor in the Department of Biostatistics at Yale School of Public Health, and faculty member in the Center for Methods in Implementation and Prevention Science and the Yale Center for Analytical Sciences. He is the principal investigator of a Patient-Centered Outcomes Research Institute (PCORI)–funded methods award that investigates new study planning methods and software for testing treatment effect heterogeneity in cluster randomized trials.

Zoom Conferencing
Join from PC, Mac, iOS or Android: https://hebrewseniorlife.zoom.us/j/97344810673
Dial-In:  +1 312 626 6799 (US Toll) or  +1 470 250 9358 (US Toll)
Meeting ID:  973 4481 0673

Read more about this IMPACT Grand Rounds session.

August 16, 2022: Biostatistics Core Develops Tools and Strategies for Common Research Challenges

In an interview at the NIH Pragmatic Trials Collaboratory’s annual Steering Committee meeting and 10th anniversary celebration, we asked Dr. Liz Turner and Dr. Patrick Heagerty to reflect on the role of the Biostatistics and Study Design Core Working Group in helping the NIH Collaboratory Trial teams design their trials and analyze the data, and to discuss their focus for the Core's future contributions to pragmatic clinical trials.

Based on your experience working with the NIH Collaboratory Trials, what are some of the common challenges brought to the Core?

Given the pragmatic nature of the NIH Collaboratory Trials, most use a design that involves some kind of clustering of outcomes. This could be a cluster randomized design or an individually randomized group treatment trial. As a consequence, nearly all projects face the challenge of how to account for clustering in both the design and analysis of the trial.

For the NIH Collaboratory Trials that use a cluster randomized design, one of the most common challenges is deciding between a stepped-wedge design and a standard parallel-arm design. The Core’s recommendation is clear: only use a stepped-wedge design if you have to! Likewise, only use a cluster randomized design if you have to and, if possible, use an individually randomized design. Nevertheless, a cluster randomized design is often the design of choice to address a pragmatic research question, and a stepped-wedge cluster randomized design may be the only way to perform a randomized evaluation of an intervention (for example, when all centers wish to receive the intervention in order to agree to participate in the trial).
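
For readers less familiar with the stepped-wedge layout described above, the design can be pictured as a matrix of clusters by periods in which every cluster starts under control and crosses over to the intervention at a randomly assigned step, so that all clusters eventually receive it. The dimensions below are arbitrary and for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n_clusters, n_sequences = 12, 4          # arbitrary illustration
    n_periods = n_sequences + 1              # one baseline period plus one step per sequence

    # Randomly assign clusters to crossover sequences, then build the 0/1 exposure
    # matrix: rows are clusters, columns are periods, 1 = intervention period.
    sequence = rng.permutation(np.repeat(np.arange(1, n_sequences + 1),
                                         n_clusters // n_sequences))
    schedule = (np.arange(n_periods)[None, :] >= sequence[:, None]).astype(int)
    print(schedule)   # every row starts at 0 (control) and ends at 1 (intervention)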

From an analysis perspective, common challenges involve how to handle missing outcome data and how to handle longitudinal (that is, repeated) measures data. For both design and analysis, as you can imagine, the COVID-19 pandemic has posed huge challenges, including how to handle the disruption of an ongoing stepped-wedge trial (as in the GGC4H NIH Collaboratory Trial). In short, clustering of outcomes is the biggest theme (and challenge) across the NIH Collaboratory Trials.

What strategies have NIH Collaboratory Trials used to overcome these barriers?

A common strategy used by the NIH Collaboratory Trials to overcome these barriers has been to leverage what we call the “Core group process.” This dynamic process is driven by the NIH Collaboratory Trials and supported by the Core, together with NIH Collaboratory leadership. The process is centered around the monthly Core meeting to which all NIH Collaboratory Trial teams are invited and that involves all Core members. These meetings provide dedicated time for each study team to provide project updates and elicit feedback from the Core and the other NIH Collaboratory Trial teams. In particular, all the study teams are invited to present at least once during the UG3 planning phase and on multiple occasions during the UH3 implementation phase. Core members are also available for ad hoc, smaller group meetings, as requested. What this process allows is for the NIH Collaboratory Trials to present challenges and for us to jointly identify solutions.

How are the NIH Collaboratory Trials’ experiences with the Core helping the field of pragmatic research?

Through the challenges and ideas that have been brought to the Core, the NIH Collaboratory Trials have pushed the field of pragmatic research. In particular, through the Core group process, they have pushed the Core to solve methodological challenges and provide tools to tackle the design issues that arise in the changing research landscape.

A key example of the Core’s methodological work was inspired by the STOP CRC NIH Collaboratory Trial and is related to the design and analysis choices faced in the unique context of embedded pragmatic trials. This example addresses a common challenge in embedded pragmatic trials, namely how to handle varying cluster sizes, something that arises in so many of the NIH Collaboratory Trials. The research, recently published in Contemporary Clinical Trials, highlights that a seemingly natural analysis in this context may produce a biased inference about intervention effectiveness, which is clearly problematic.

The second example is the Core’s recently published Statistical Analysis Plan Checklist for Addressing COVID-19 Impacts. Development of this tool was inspired by the many challenges faced by the NIH Collaboratory Trials as a result of the COVID-19 pandemic, such as delayed recruitment (as in the BackInAction NIH Collaboratory Trial) and adjustments to how interventions were delivered (as in the ACP PEACE NIH Collaboratory Trial).

What do you think the Core can contribute over the next decade?

The Core has a lot to contribute over the next decade. A key goal is to ensure we are building and diversifying the next generation of statisticians who are experts in pragmatic trials and who can engage deeply in the design and analysis of pragmatic trials embedded in healthcare systems.

To achieve this, we need to continue to bring trainees into the Core, as we have done over the past 6 years, through funded graduate research assistant positions. By doing this, we should be able to not only build the next generation of pragmatic trial experts but also build scholarship in pragmatic trial methodology by identifying methodological gaps that need to be filled so that the NIH Collaboratory Trial study teams—and pragmatic trialists in the broader research community—have the best methods available to them.

The opportunity to participate in a cross-institution working group such as ours is surprisingly rare. As a consequence, we are in a unique position to not only build the next generation of experts but also to strengthen our own collective expertise and knowledge by learning from each other’s perspectives.

March 30, 2022: Two Weights Make a Wrong: New Article From the Biostatistics and Study Design Core

In a new article in Contemporary Clinical Trials from the NIH Pragmatic Trials Collaboratory Biostatistics and Study Design Core, the authors share analytic considerations for cluster randomized trials with hierarchical nesting of participants within clusters. The authors illustrate the problem using theoretical derivations, a simulation study, and data from the STOP CRC NIH Collaboratory Trial as an example.

“We conclude that an analysis using both an exchangeable working correlation matrix and weighting by inverse cluster size, which may be considered the natural analytic approach, can lead to incorrect results. That is, two weights make a wrong. The bias is minimal when there is homogeneity of treatment effects according to cluster size but unacceptable when there is heterogeneity of treatment effects according to cluster size. In addition, we show that only an analysis with an independence working correlation matrix and weighting by inverse cluster size always provides valid results for the UATE [unit average treatment effect] estimand.”
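
To make the contrast in the quotation concrete, the sketch below fits the two analyses being compared, inverse-cluster-size weighting with an exchangeable versus an independence working correlation, using the GEE implementation in statsmodels on simulated data. This is an illustrative sketch under invented data-generating assumptions, not the authors' analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    rows = []
    for cluster in range(60):
        size = int(rng.integers(4, 80))
        trt = int(rng.binomial(1, 0.5))
        effect = 0.5 + 0.01 * size                   # treatment effect varies with cluster size
        for outcome in rng.normal(loc=trt * effect, scale=1.0, size=size):
            rows.append({"cluster": cluster, "trt": trt, "y": outcome,
                         "inv_size": 1.0 / size})
    df = pd.DataFrame(rows)
    X = sm.add_constant(df["trt"])

    # Analysis 1: exchangeable working correlation + inverse-cluster-size weights
    # (the combination the article flags as potentially biased).
    fit_exch = sm.GEE(df["y"], X, groups=df["cluster"],
                      cov_struct=sm.cov_struct.Exchangeable(),
                      weights=df["inv_size"]).fit()

    # Analysis 2: independence working correlation + inverse-cluster-size weights.
    fit_ind = sm.GEE(df["y"], X, groups=df["cluster"],
                     cov_struct=sm.cov_struct.Independence(),
                     weights=df["inv_size"]).fit()

    print(fit_exch.params["trt"], fit_ind.params["trt"])

With treatment effects that vary by cluster size, as simulated here, the two point estimates can diverge, which is the behavior the article describes.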

Read the full article.

December 16, 2021: NIH Collaboratory Publishes COVID-19 Checklist for Statistical Analysis Plans in Pragmatic Trials

A new tool from the NIH Collaboratory assists investigators in identifying impacts of the COVID-19 public health emergency on ongoing pragmatic clinical trials. The Statistical Analysis Plan Checklist for Addressing COVID-19 Impacts summarizes impacts on trial conduct that study teams should document, measure, analyze, and report.

The new checklist was developed by the NIH Collaboratory’s Biostatistics and Study Design Core Working Group. Since the beginning of the COVID-19 pandemic, many of the NIH Collaboratory Trials have had to postpone recruitment, alter methods of participant engagement, and modify tools for research assessment and intervention delivery.

The leaders of the Biostatistics Core, Dr. Patrick Heagerty and Dr. Liz Turner, spoke in a recent interview about the impacts of the pandemic on the NIH Collaboratory Trials. Early next year, the Coordinating Center will report the results of a survey of the study teams about their experiences with these impacts.

Download the Statistical Analysis Plan Checklist for Addressing COVID-19 Impacts.

December 14, 2021: A Year of New Insights From the NIH Collaboratory

NIH Collaboratory researchers in 2021 shared study results, generated new knowledge, and developed innovative research methods in pragmatic clinical trials. Their work included insights from the Coordinating Center and Core Working Groups, analyses from the NIH Collaboratory Distributed Research Network, and results and methodological approaches from the NIH Collaboratory Trials.

So far this year, the NIH Collaboratory has produced 3 dozen articles in the peer-reviewed literature, including the primary results of the PPACT and TSOS trials, the study design of the Nudge and OPTIMUM studies, insights into the COVID-19 pandemic from the EMBED and ACP PEACE studies, and more:

NIH Collaboratory Coordinating Center

NIH Collaboratory Distributed Research Network

ACP PEACE NIH Collaboratory Trial

BackInAction NIH Collaboratory Trial

EMBED NIH Collaboratory Trial

GRACE NIH Collaboratory Trial

HiLo NIH Collaboratory Trial

LIRE NIH Collaboratory Trial

Nudge NIH Collaboratory Trial

OPTIMUM NIH Collaboratory Trial

PPACT NIH Collaboratory Trial

PRIM-ER NIH Collaboratory Trial

PROVEN NIH Collaboratory Trial

SPOT NIH Collaboratory Trial

TSOS NIH Collaboratory Trial