Grand Rounds March 10, 2023: Estimands in Cluster-Randomized Trials: Choosing Analyses that Answer the Right Question (Brennan Kahan, PhD)

Speaker

Brennan Kahan, PhD
MRC Clinical Trials Unit
University College London (UCL)

 

Keywords

Cluster-Randomized Trial, Estimands, Cluster-Average Treatment Effect, Participant-Average Treatment Effect, Informative Cluster Size

 

Key Points

  • The TRIGGER trial inspired statistician Brennan Kahan to ask the questions, “But what if we’d chosen a different analysis?” and “How much would standard errors really change?”
  • An estimand is a precise description of the treatment effect that a researcher aims to estimate from the trial. The concept is especially important in cluster-randomized trials, where it must be determined whether the estimand reflects a participant-average or a cluster-average treatment effect, and whether it is marginal or cluster-specific. The difference relates to how the data are weighted.
  • The two estimands will differ when there is informative cluster size in a trial. Informative cluster size refers to situations in which outcomes or treatment effects differ between large and small clusters. An example is when patients given the same medication experience better outcomes in a large hospital than in a small hospital.
  • Which estimand (participant-average or cluster-average) to use depends on the study question. The participant-average treatment effect describes the effect on the overall population of switching from one intervention to another. The cluster-average treatment effect better enables evaluation of an intervention’s impact on the clusters themselves.
  • Mixed-effects models and Generalized Estimating Equations (GEEs) are the most common analysis methods for cluster-randomized trials, but when informative cluster size is present, both are biased. To avoid this bias, Independence Estimating Equations (IEEs) and cluster-level summaries can be used to estimate either cluster- or participant-average effects.
  • To the presenter’s knowledge, the occurrence of informative cluster size has never been formally evaluated. Statisticians working on pragmatic trials should consider the estimand and tailor the analysis around the chosen estimand.
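The weighting distinction in the key points above can be illustrated with a small simulation. This is a hypothetical sketch (the cluster sizes, effect sizes, and random seed are invented for illustration): when the treatment works better in large clusters, i.e., cluster size is informative, the participant-average effect, which weights every participant equally, differs from the cluster-average effect, which weights every cluster equally.

```python
import random

random.seed(1)

# Hypothetical setup: the treatment effect is larger in big clusters,
# i.e., cluster size is informative.
clusters = []
for size in [10, 10, 10, 200, 200, 200]:
    effect = 2.0 if size >= 100 else 0.5  # per-cluster treatment effect
    clusters.append([effect + random.gauss(0, 1) for _ in range(size)])

# Participant-average effect: every participant weighted equally,
# so the three large clusters dominate the estimate.
all_outcomes = [y for c in clusters for y in c]
participant_avg = sum(all_outcomes) / len(all_outcomes)

# Cluster-average effect: every cluster weighted equally,
# regardless of how many participants it contains.
cluster_means = [sum(c) / len(c) for c in clusters]
cluster_avg = sum(cluster_means) / len(cluster_means)

print(f"participant-average effect: {participant_avg:.2f}")
print(f"cluster-average effect:     {cluster_avg:.2f}")
```

With these invented numbers, the participant-average estimate sits near the large-cluster effect (2.0) because most participants are in large clusters, while the cluster-average estimate sits near the unweighted mean of the per-cluster effects (1.25). If cluster size were not informative, the two estimands would coincide.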

Learn more

Many of the concepts described in the Grand Rounds presentation are outlined in the article published in the International Journal of Epidemiology: Estimands in cluster-randomized trials: choosing analyses that answer the right question

Discussion Themes

The summary measure scale is very important in determining the best estimand for a specific cluster-randomized trial.

Equal cluster sizes do not necessarily make the choice of estimand irrelevant, because sample sizes may still differ. Reweighting may be necessary, and additional assumptions may be required.

It’s difficult to determine definitively whether there is informative cluster size, but it would be interesting to evaluate in pilot data.

Stratification would not solve the issue of informative cluster size in the analysis.
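One point above, that cluster-level summaries avoid the bias from informative cluster size when targeting a cluster-average effect, can be sketched with a hypothetical two-arm simulation (the cluster sizes, effects, and helper function are invented for illustration). Each cluster is collapsed to a single summary, so every cluster counts equally no matter how many participants it contains.

```python
import random

random.seed(7)

def simulate_arm(sizes, base, bonus_if_large):
    """Return one summary (the mean outcome) per simulated cluster."""
    means = []
    for m in sizes:
        mu = base + (bonus_if_large if m >= 100 else 0.0)
        outcomes = [mu + random.gauss(0, 1) for _ in range(m)]
        means.append(sum(outcomes) / m)  # collapse cluster to one number
    return means

# Hypothetical trial: the treatment only helps in large clusters,
# so cluster size is informative.
control = simulate_arm([20, 20, 150, 150], base=0.0, bonus_if_large=0.0)
treated = simulate_arm([20, 20, 150, 150], base=0.0, bonus_if_large=1.0)

# True cluster-average effect = (0 + 0 + 1 + 1) / 4 = 0.5
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated cluster-average effect: {effect:.2f}")
```

Because each cluster contributes exactly one summary, the large clusters cannot dominate the estimate, and an unweighted comparison of the summaries targets the cluster-average estimand even under informative cluster size.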

Tags

#pctGR, @Collaboratory1

October 1, 2018: Dr. Greg Simon Uses a Pie Eating Contest Analogy to Explain the Intraclass Correlation Coefficient

In a new video, Dr. Greg Simon explains the intraclass correlation coefficient (ICC) with an analogy to a pie eating contest. The ICC is a descriptive statistic that measures the correlation among members of a group, and it is an important tool for cluster-randomized pragmatic trials because it helps determine the sample size needed to detect an effect.


“When we randomize treatments by doctors, clinics, or even whole health systems, we need to think about how things cluster, and the intraclass correlation coefficient is the measure of that clustering. When we think about sample sizes in pragmatic clinical trials, it’s important to understand what an intraclass correlation coefficient actually is.”

For most pragmatic trials, the ICC will be between 0 and 1. If the outcomes in a group are completely correlated (ICC=1), then all participants within the group are likely to have the same outcome. When ICC=1, sampling one participant from the cluster is as informative as sampling the whole cluster, and many clusters will be needed to detect an effect. If there is no correlation among members of the groups (ICC=0), then the available sample size for the study is essentially the number of participants.
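The link between the ICC and the required sample size can be sketched with the standard design-effect formula, 1 + (m − 1) × ICC, where m is the average cluster size. The numbers below are invented for illustration:

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from cluster randomization: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Assumed inputs for illustration only.
n_individual = 400   # participants needed if randomizing individuals
m = 50               # average cluster size

for icc in [0.0, 0.01, 0.05, 1.0]:
    deff = design_effect(m, icc)
    n_cluster = n_individual * deff  # size needed for the cluster design
    print(f"ICC={icc:<5} design effect={deff:6.2f}  required n={n_cluster:8.0f}")
```

Even a small ICC inflates the required sample size substantially when clusters are large, which is why the ICC figures so prominently in planning cluster-randomized pragmatic trials.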

For more on the ICC, see the Intraclass Correlation section in the Living Textbook or this working document from the Collaboratory’s Biostatistics and Study Design Core.

STOP CRC Trial: Analytic Challenges and Pragmatic Solutions


Investigators from the STOP CRC pragmatic trial, an NIH Collaboratory Trial, have recently published an article in the journal eGEMs describing solutions to issues that arose in the trial’s implementation phase. STOP CRC tests a program to improve colorectal cancer screening rates in a collaborative network of Federally Qualified Health Centers by mailing fecal immunochemical testing (FIT) kits to screen-eligible patients at clinics in the intervention arm. Clinics in the control arm provided opportunistic colorectal-cancer screening to patients at clinic visits in Year 1 and implemented the intervention in Year 2. In this cluster-randomized trial, clinics are the unit of analysis, rather than individual patients, with the primary outcome being the proportion of screen-eligible patients at each clinic who complete a FIT.

The team dealt with several challenges that threatened the validity of their primary analysis. One related to potential contamination of the primary outcome due to the timing of the intervention rollout: for control participants, the Year 2 intervention actively overlapped with the Year 1 control measurements. Another stemmed from a lack of synchronization between the measurement and accrual windows. To address these issues, the team slightly modified the study design and developed several sensitivity analyses to better estimate the true impact of the intervention.

“While the nature of the challenges we encountered are not unique to pragmatic trials, we believe they are likely to be more common in such trials due to both the types of designs commonly used in such studies and the challenges of implementing system-based interventions within freestanding health clinics.” (Vollmer et al. eGEMs 2015)

The EDM Forum Community publishes eGEMs (generating evidence & methods to improve patient outcomes) and provides free and open access to this methods case study. Readers can access the article here.


Report from NIH Collaboratory Workshop Examines Ethical and Regulatory Challenges for Pragmatic Cluster Randomized Trials

A new article by researchers from the NIH Collaboratory, published online this week in the journal Clinical Trials, explores some of the challenges facing physicians, scientists, and patient groups who are working to develop innovative methods for performing clinical trials. In the article, authors Monique Anderson, MD, Robert Califf, MD, and Jeremy Sugarman, MD, MPH, MA, describe and summarize discussions from a Collaboratory workshop on ethical and regulatory issues relating to pragmatic cluster-randomized trials.


Pragmatic Cluster-Randomized Trials

Many of the clinical trials that evaluate the safety and effectiveness of new therapies do so by assigning individual volunteers to receive either an experimental treatment or a comparator, such as an existing alternative treatment or a placebo. However, this process can be complex, expensive, and slow to yield results. Further, because these studies often take place in specialized research settings and involve patients who have been carefully screened, there are concerns that the results gathered from such trials may not be fully applicable to “real-world” patient populations.

For these reasons, some researchers, patients, and patient advocacy groups are interested in exploring different methods for conducting clinical trials, including designs known as pragmatic cluster-randomized trials, or CRTs. In a pragmatic CRT, groups of individuals (such as a clinic, hospital, or even an entire health system) are randomly assigned to receive one of two or more interventions being compared, with a focus on answering questions about therapies in the setting of actual clinical practice—the “pragmatic” part of “pragmatic CRT.”

Pragmatic CRTs have the potential to answer important questions quickly and less expensively, especially in an era in which patient data can be accessed directly from electronic health records. Just as importantly, that knowledge can then be fed back to support a “learning healthcare system” that is constantly improving in its approach to patient care.  However, while cluster-randomized trials are not themselves new, their widespread use in patient-care settings raises a number of potential challenges.

For example: in a typical individually randomized clinical trial, patients are enrolled in a study only after first providing written informed consent. However, in a CRT, the entire hospital may be assigned to provide a given therapy. In such a situation, how should informed consent be handled? How should patients be notified that research is taking place, and that they may be part of it? Will they be able to “opt out” of the research? What will happen to the data collected during their treatment? And what do federal regulations governing clinical trials have to say about this? These are just a few of the questions raised by the use of pragmatic CRTs in patient-care settings.


The NIH Collaboratory Workshop on Pragmatic Cluster-Randomized Trials

The NIH Collaboratory Workshop on Pragmatic CRTs, held in Bethesda, Maryland in July of 2013, convened a panel of experts in clinical trials, research ethics, and regulatory issues to outline the challenges associated with conducting pragmatic CRTs and to explore ways for better understanding and overcoming them. Over the course of the intensive 1-day workshop, conference participants identified key areas for focused attention. These included issues relating to informed consent, patient privacy, oversight of research activities, ensuring the integrity of data gathered during pragmatic CRTs, and special protections for vulnerable patient populations. The article by Anderson and colleagues provides a distillation of discussions that took place at the workshop, as well as noting possible directions for further work.

In the coming months and years, the NIH Collaboratory and its partners, including the National Patient-Centered Clinical Research Network (PCORnet), plan to build on this workshop experience. Together, they hope to explore these issues in greater detail and propose practical steps for moving forward with innovative clinical research methods, while at the same time maintaining robust protections for patients’ rights and well-being.


Jonathan McCall, MS, and Karen Staman, MS, contributed to this post.


Read the full text of the article here:

Anderson ML, Califf RM, Sugarman J. Ethical and regulatory issues of pragmatic cluster randomized trials in contemporary health systems. Clin Trials 2015 [Epub ahead of print]. doi:10.1177/1740774515571140
For further reading:

Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA 2003;290(12):1624-32. PMID: 14506122; doi: 10.1001/jama.290.12.1624.

The Ottawa Hospital Research Institute Ethical Issues in Cluster Randomized Trials Wiki.

Special Report: Ethical Oversight of Learning Health Systems. Hastings Center Report 2013;43(s1):S2–S44, Si–Sii.

Sugarman J, Califf RM. Ethics and regulatory complexities for pragmatic clinical trials. JAMA 2014;311(23):2381-2. PMID: 24810723; doi: 10.1001/jama.2014.4164.

Collaboratory Investigators Publish Article on Ethical and Regulatory Complexities for Pragmatic Clinical Trials in JAMA


“Ethics and Regulatory Complexities for Pragmatic Clinical Trials,” a Viewpoint article by Jeremy Sugarman, MD, MPH, MA, and Robert Califf, MD, was published online in JAMA today. In the article, the authors draw on early experiences from two large networks conducting pragmatic clinical trials, the NIH Collaboratory and the National Patient-Centered Clinical Research Network (PCORnet), to describe 10 ethical and regulatory complexities facing this new field of research. Topics covered include informed consent, risk determination, the role of gatekeepers, and institutional review board review and oversight, among others, as well as the ongoing need for further discussion and research as a key part of efforts aimed at creating a learning healthcare system.

Dr. Sugarman is chair of the Regulatory/Ethics Core of the NIH Collaboratory and deputy director for medicine of the Johns Hopkins Berman Institute of Bioethics. Dr. Califf is the principal investigator of the NIH Collaboratory Coordinating Center and director of the Duke Translational Medicine Institute.