February 2, 2023: New Tool Developed to Assess Intervention Complexity

The NIH Pragmatic Trials Collaboratory has developed an online tool to evaluate the complexity of delivery of a trial intervention. To develop the tool, principal investigators of embedded pragmatic clinical trials (ePCTs) shared critical drivers of complexity that affected their ability to implement an intervention and discern treatment effects. An article describing the tool and its development was published today in Contemporary Clinical Trials.

“The complexity of the intervention delivery can have implications for study planning, ability to maintain fidelity to the intervention during the trial, and/or ability to detect meaningful differences in outcomes,” the authors wrote.

The tool consists of 6 domains encompassing internal and external factors that can affect complexity.

  • Internal factors pertain to the intervention itself:
    • the degree to which the intervention requires re-engineering of existing work flows and tasks;
    • the number of components in the intervention to be delivered; and
    • the level of familiarity or extra training needed for those delivering the intervention.
  • External factors are related to intervention delivery at the system level:
    • the degree to which delivery of the intervention depends on the setting in which it is implemented;
    • the number of health care systems and clinics involved in delivering the intervention; and
    • the number of steps between the intervention and the outcome’s intended effect.

The authors hope the tool will enable communication about potential challenges of intervention delivery.

“By raising awareness about and increasing preparedness for the potential pitfalls of delivering the intervention within the ePCT design, we hope that this version of the tool will be useful to the trial team and its health system partners during trial planning and conduct,” they wrote.

Future work is planned to further refine the tool based on input from other networks that conduct ePCTs. The tool is free to use and available online.

December 7, 2022: Ethics and Regulatory Grand Rounds Series Explores Stepped-Wedge Designs This Friday

This Friday’s PCT Grand Rounds will feature the next installment of our special series, Ethical & Regulatory Dimensions of Pragmatic Clinical Trials. Monica Taljaard and David Magnus will present “The Stepped Wedge Cluster Randomized Trial: Friend or Foe?”

The Grand Rounds session will be held on Friday, December 9, 2022, at 1:00 pm eastern.

Taljaard is a senior scientist in the Clinical Epidemiology Program at the Ottawa Hospital Research Institute and a professor of epidemiology and community medicine at the University of Ottawa. Magnus is the director of the Stanford Center for Biomedical Ethics and the Thomas A. Raffin Professor of Medicine and Biomedical Ethics and associate dean for research at the Stanford University School of Medicine.

Join the online meeting.

This special Grand Rounds series features moderated webinar discussions with panels of experts. The sessions focus on a range of topics, including the ethics of data sharing; ethical and regulatory considerations in the design and conduct of pragmatic trials; pragmatic research involving patients with dementia; and the use of waivers and alterations of consent.

Read the full program.

Grand Rounds October 28, 2022: The HERO (Healthcare Worker Exposure Response & Outcomes) Program: An Online Community to Support Observational Studies, Randomized Trials, and Long-Term Safety Surveillance (Emily O’Brien, PhD, FAHA; Russell Rothman, MD, MPP)

Speakers

Emily O’Brien, PhD, FAHA
Associate Professor
Duke Clinical Research Institute
Duke University School of Medicine
Department of Population Health Sciences

Russell Rothman, MD, MPP
Senior Vice President, Population and Public Health
Director, Vanderbilt Institute for Medicine and Public Health
Vanderbilt University Medical Center


Keywords

HERO Registry; HERO TOGETHER; Hydroxychloroquine; COVID-19; PCORI; PCORnet

Key Points

  • On March 21, 2020, in response to the COVID-19 pandemic, PCORI contacted leadership at the Duke Clinical Research Institute and PCORnet, and a decision was made to focus on healthcare workers. The HERO Program was fully approved and began recruiting participants on April 22, 2020.
  • The HERO Registry aimed to create a diverse virtual community of healthcare workers and their families and communities, ready for future COVID-19 research.
  • The HERO Registry explored topics that mattered most to participants and found these topics changed with time. Important issues early on included COVID-19’s effects on the workplace, vaccine access and willingness, and impact on home life. Later, burnout and lack of appreciation and support became larger issues.
  • The first trial undertaken by the HERO Program, HERO-HCQ, evaluated the efficacy of hydroxychloroquine (HCQ) to prevent COVID-19 in healthcare workers. Over 1300 participants were recruited from the HERO Registry. No statistically significant benefit was found.
  • The HERO TOGETHER study leveraged the HERO Registry to estimate real-world incidence of safety events among vaccinated individuals. The most common safety events reported included non-hospitalized arthritis/arthralgia and non-hospitalized non-anaphylactic allergic reaction.

Learn more

On the HERO Program website.

Discussion Themes

– The HERO Registry survey collected a broad range of information. Participant feedback revealed that shorter, more targeted surveys focusing on the highest-value information may have been less burdensome for participants.

– The creative, multifaceted approach to recruitment, which included diverse stakeholder engagement, could be successful in creating research registries for other important health issues.

Tags

#pctGR, @Collaboratory1

August 16, 2022: Biostatistics Core Develops Tools and Strategies for Common Research Challenges

In an interview at the NIH Pragmatic Trials Collaboratory’s annual Steering Committee meeting and 10th anniversary celebration, we asked Dr. Liz Turner and Dr. Patrick Heagerty to reflect on the role of the Biostatistics and Study Design Core Working Group in helping the NIH Collaboratory Trial teams design their trials and analyze the data, and to discuss their focus for the Core's future contributions to pragmatic clinical trials.

Based on your experience working with the NIH Collaboratory Trials, what are some of the common challenges brought to the Core?

Given the pragmatic nature of the NIH Collaboratory Trials, most use a design that involves some kind of clustering of outcomes. This could be a cluster randomized design or an individually randomized group treatment trial. As a consequence, nearly all projects face the challenge of how to account for clustering in both the design and analysis of the trial.

For the NIH Collaboratory Trials that use a cluster randomized design, one of the most common challenges is deciding between a stepped-wedge design and a standard parallel-arm design. The Core’s recommendation is clear: only use a stepped-wedge design if you have to! Likewise, only use a cluster randomized design if you have to and, if possible, use an individually randomized design. Nevertheless, a cluster randomized design is often the design of choice to address a pragmatic research question, and a stepped-wedge cluster randomized design may be the only way to perform a randomized evaluation of an intervention (for example, when all centers wish to receive the intervention in order to agree to participate in the trial).

From an analysis perspective, common challenges involve how to handle missing outcome data and how to handle longitudinal (that is, repeated) measures data. For both design and analysis, as you can imagine, the COVID-19 pandemic has posed huge challenges, including how to handle the disruption of an ongoing stepped-wedge trial (as in the GGC4H NIH Collaboratory Trial). In short, clustering of outcomes is the biggest theme (and challenge) across the NIH Collaboratory Trials.
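
As context for the clustering challenge the Core describes, the standard design effect formula shows how quickly clustering inflates the required sample size. This is a general textbook calculation, not a Core-specific tool, and the sketch below assumes equal cluster sizes:

```python
import math

# Design effect (variance inflation) for a cluster randomized trial with
# equal cluster sizes: DE = 1 + (m - 1) * ICC, where m is the cluster size
# and ICC is the intraclass correlation coefficient.
def design_effect(m, icc):
    return 1 + (m - 1) * icc

# Clusters needed per arm: inflate the individually randomized sample size
# by the design effect, then divide by the cluster size.
def clusters_per_arm(n_individual_per_arm, m, icc):
    return math.ceil(n_individual_per_arm * design_effect(m, icc) / m)

# Even a small ICC matters when clusters are large: with m = 100 and
# ICC = 0.01, DE = 1.99, nearly doubling the required sample size.
```

Stepped-wedge designs and unequal cluster sizes require further corrections, which is part of why the Core recommends involving a statistician early.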

What strategies have NIH Collaboratory Trials used to overcome these barriers?

A common strategy used by the NIH Collaboratory Trials to overcome these barriers has been to leverage what we call the “Core group process.” This dynamic process is driven by the NIH Collaboratory Trials and supported by the Core, together with NIH Collaboratory leadership. The process is centered around the monthly Core meeting to which all NIH Collaboratory Trial teams are invited and that involves all Core members. These meetings provide dedicated time for each study team to provide project updates and elicit feedback from the Core and the other NIH Collaboratory Trial teams. In particular, all the study teams are invited to present at least once during the UG3 planning phase and on multiple occasions during the UH3 implementation phase. Core members are also available for ad hoc, smaller group meetings, as requested. What this process allows is for the NIH Collaboratory Trials to present challenges and for us to jointly identify solutions.

How are the NIH Collaboratory Trials’ experiences with the Core helping the field of pragmatic research?

Through the challenges and ideas that have been brought to the Core, the NIH Collaboratory Trials have pushed the field of pragmatic research. In particular, through the Core group process, they have pushed the Core to solve methodological challenges and provide tools to tackle the design issues that arise in the changing research landscape.

A key example of the Core’s methodological work was inspired by the STOP CRC NIH Collaboratory Trial and is related to the design and analysis choices faced in the unique context of embedded pragmatic trials. This example addresses a common challenge in embedded pragmatic trials, namely how to handle varying cluster sizes, something that arises in so many of the NIH Collaboratory Trials. The research, recently published in Contemporary Clinical Trials, highlights that a seemingly natural analysis in this context may produce a biased inference about intervention effectiveness, which is clearly problematic.
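
The published analysis itself is not reproduced here, but the underlying issue, often called informative cluster size, can be shown with a toy example: when outcomes are related to cluster size, weighting clusters equally and weighting individuals equally estimate different quantities, so a seemingly natural analysis choice silently changes the target of inference. All numbers below are invented:

```python
# Two summaries of the same clustered data that disagree when cluster
# sizes vary and outcomes are related to cluster size.

def cluster_level_mean(clusters):
    # Mean of per-cluster means: every cluster gets equal weight.
    return sum(sum(c) / len(c) for c in clusters) / len(clusters)

def individual_level_mean(clusters):
    # Pooled mean: every individual gets equal weight, so large clusters dominate.
    flat = [y for c in clusters for y in c]
    return sum(flat) / len(flat)

# One small cluster with high outcomes, one large cluster with low outcomes.
clusters = [[10.0, 10.0], [2.0] * 8]
# cluster_level_mean   -> (10 + 2) / 2 = 6.0
# individual_level_mean -> 36 / 10    = 3.6
```

Neither summary is wrong, but they answer different questions, which is why the choice must be made deliberately at the design stage.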

The second example is the Core’s recently published Statistical Analysis Plan Checklist for Addressing COVID-19 Impacts. Development of this tool was inspired by the many challenges faced by the NIH Collaboratory Trials as a result of the COVID-19 pandemic, such as delayed recruitment (as in the BackInAction NIH Collaboratory Trial) and adjustments to how interventions were delivered (as in the ACP PEACE NIH Collaboratory Trial).

What do you think the Core can contribute over the next decade?

The Core has a lot to contribute over the next decade. A key goal is to ensure we are building and diversifying the next generation of statisticians who are experts in pragmatic trials and who can engage deeply in the design and analysis of pragmatic trials embedded in healthcare systems.

To achieve this, we need to continue to bring trainees into the Core, as we have done over the past 6 years, through funded graduate research assistant positions. By doing this, we should be able not only to build the next generation of pragmatic trial experts but also to build scholarship in pragmatic trial methodology by identifying methodological gaps that need to be filled so that the NIH Collaboratory Trials study teams—and pragmatic trialists in the broader research community—have the best methods available to them.

The opportunity to participate in a cross-institution working group such as ours is surprisingly rare. As a consequence, we are in a unique position not only to build the next generation of experts but also to strengthen our own collective expertise and knowledge by learning from each other’s perspectives.

Grand Rounds August 5, 2022: Economic Evaluation of Platform Trial Designs (Jay JH Park, PhD, MSc)

Speaker

Jay JH Park, PhD, MSc
Assistant Professor
Department of Health Research Methods, Evidence and Impact
Faculty of Health Sciences
McMaster University

Keywords

Platform Trial, Study Design, Pragmatic Clinical Trials

Key Points

  • Adaptive trial design is an umbrella term for trials that use accumulating data in a formal, prespecified way: we decide in advance how we will look at the data and how we will react to it, with one or more interim reviews at which we adapt the trial design if the data say we should. The most common types of adaptive trial design are sequential designs and response-adaptive randomization.
  • Platform trial design refers to trials that are designed with the flexibility to add new interventions. They use a series of documents called “master protocols” that outline trial plans and standard operating procedures for evaluating multiple interventions. Platform trials can be conducted using adaptive or fixed sample trial designs.
  • To evaluate these trial methods, we performed an economic evaluation to determine the costs and time required to conduct a single platform trial versus multiple independent trials. We administered a survey, using purposive sampling, to international experts with a publication record on platform trials and master protocols. The response rate was low (10%). Participants were asked how long it takes, and what it costs, to set up, conduct, and analyze a two-arm trial, and, for a platform trial, how long it takes and what it costs to add a new intervention.
  • The main outputs were the set-up cost and time (comparing single-study set-up and total set-up cost and time across different trials and scenarios) and the total cost and time (set-up, conduct, and analysis). Comparing a single platform trial with a single 2-arm trial, a single conventional trial takes considerably less time and money to set up. There was not much difference in cost between a platform trial and a multi-arm trial, but across multiple interventions a single platform trial does appear to save money, and it requires less total person-time.
  • The key takeaway from the simulation is that a platform trial has larger set-up requirements but can save money and time in the long run. The platform trial model is not easy, but it is not impossible; as we saw during the COVID-19 pandemic, it can be a fast and effective way to evaluate important therapies.
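
The set-up-versus-long-run tradeoff in that takeaway can be sketched with a toy cost model. The function names and all cost figures below are invented placeholders, not numbers from the survey:

```python
# Toy comparison: k interventions evaluated in separate two-arm trials
# versus one platform trial with a shared, more expensive set-up.

def separate_trials_cost(k, setup, conduct):
    # Each intervention pays its own set-up and conduct costs.
    return k * (setup + conduct)

def platform_trial_cost(k, platform_setup, add_arm, conduct):
    # One shared set-up, then a marginal cost to add and run each intervention.
    return platform_setup + k * (add_arm + conduct)

# With an assumed set-up premium for the platform (130 vs 50) and a small
# per-arm addition cost (10), the platform costs more for a single
# intervention but less once several interventions share the infrastructure,
# mirroring the "long run" takeaway.
```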

Discussion Themes

– Did the total cost include both coordinating center costs and site-level costs? We did not get into the site-level costs.

– These platform trials are almost a public good; there needs to be a sustainable model for the infrastructure so that it continues to grow and evolve and no one bears the full cost. Does the academic environment create some disincentives as well? Who becomes the PI is an important question, because their institution gets the overhead. I’m not sure what the right answer is. There’s a lot of red tape in the academic world, and it’s an example of the kinds of barriers there are to innovative approaches. How can we chip away at those barriers? At the individual level there is interest and commitment, and we see the value in this, but there are barriers that do get in the way. Calling them out is step one.

Read the full study.

Tags

#pctGR, @Collaboratory1

July 18, 2022: New Article Offers Recommendations for Pragmatic Trials in Emergency Medicine

(From left) Dr. Edward Melnick, Co-PI of EMBED, and Dr. Corita Grudzen, PI of PRIM-ER

Investigators planning emergency department (ED)-based pragmatic clinical trials (PCTs) face multiple decisions during the planning phase to ensure robust and meaningful study findings in this unique setting; however, there is no guide for planning and conducting these studies.

A new article in Academic Emergency Medicine fills the gap by providing recommendations for the design of PCTs in emergency settings. Among the authors are investigators from EMBED and PRIM-ER, both NIH Pragmatic Trials Collaboratory Trials.

The authors recommend that investigators planning ED-based PCTs strongly consider using the PRECIS-2 wheel diagram during the study design phase to increase the chances that the results of the trial are generalizable across the intended practice settings. The PRECIS tool was developed to help investigators work through study design decisions and avoid designing a trial that does not meet their own intentions.

The authors expand upon the 9 domains within the PRECIS-2 framework to identify points for investigators to consider in the design of ED-based PCTs, and they provide examples of successful studies. Using the PRECIS-2 framework can help investigators navigate unique challenges in the ED setting, including time-sensitive conditions, limited availability of electronic health data for the patient population, and added complexity surrounding capacity and consent for certain vulnerable ED populations with mental illness or substance use disorder.

The authors also address special considerations for randomization, human subjects concerns, and electronic health record integration.

Finally, the authors provide an analysis that highlights the advantages, disadvantages, and rationale for the use of 4 common randomized PCT study design types and examples of similar trials set in the ED.

Read the full text of the article.

April 1, 2022: ICD-Pieces: Improving Care for CKD, Diabetes and Hypertension in Health Systems (Miguel A. Vazquez, MD; George (Holt) Oliver, MD, PhD)

Speaker

Miguel A. Vazquez, MD
Professor of Medicine
University of Texas Southwestern Medical Center
Dallas, TX

George (Holt) Oliver, MD, PhD
Vice President Clinical Informatics
Parkland Center for Clinical Innovation
Dallas, TX

Keywords

ICD-Pieces; Chronic kidney disease (CKD); EHR data collection; Diabetes; Hypertension

Key Points

  • ICD-Pieces focuses on the chronic conditions of diabetes, hypertension, and CKD. These conditions are common and under-recognized, and they can have serious complications.
  • PIECES is health information technology software developed to help primary care practices provide comprehensive, evidence-based care for patients with these chronic conditions.
  • Researchers enrolled 11,000 patients with CKD in the study. The interventions included controlling blood pressure with medication, avoiding hypoglycemia, use of statins, and avoidance of NSAIDs.
  • Interventions were determined to be feasible. Outcomes for study populations are to be determined with further analysis of the data.

Discussion Themes

Pragmatic trials are often practical laboratories for implementation science.

An inherent challenge of collecting data from the EHR is delay. It may help to have part of the study team embedded in the clinical setting for data collection, or to collect the data at the time of the intervention rather than waiting.

Read more about ICD-Pieces.

Tags

#pctGR, @Collaboratory1

March 11, 2022: Understanding a Patient’s Daily Experience Through Mobile Devices and Wearables: Lessons Learned From the 8,000 Patient MIPACT Study and Implementation in a National Pragmatic Trial (Sachin Kheterpal, MD, MBA; Jessica Golbus, MD; Nicole Pescatore, MPH)

Speakers

Sachin Kheterpal, MD, MBA
Professor of Anesthesiology
Associate Dean for Research IT
PI for MIPACT and Co-PI for THRIVE
University of Michigan

Jessica Golbus, MD
Clinical Instructor, Cardiovascular Medicine
Co-I for MIPACT
University of Michigan

Nicole Eyrich, MPH
MIPACT Clinical Research Project Manager
University of Michigan

Keywords

MIPACT; Patient-reported outcomes; Virtual recruitment; Cohort Identification Toolkit; VALENTINE study; THRIVE study; Wearable data research; Propofol

Key Points

  • The MIPACT study combined patient reported outcome data with electronic health record data and data collected from wearable devices.
  • The MIPACT study followed over 7,000 participants for 3 years. Both in-person and virtual recruitment had similar success rates. Participant diversity was a priority during recruitment.
  • The VALENTINE Study was a prospective randomized controlled study using mobile wearable devices to enhance cardiac rehabilitation.
  • Older participants in the VALENTINE study were receptive and capable of using wearable technology to assist data collection for the study.
  • THRIVE is a pragmatic clinical trial studying propofol anesthesia in 22 states and 2 countries.

Discussion Themes

Economic barriers to participation may be present when use of a wearable device is an inclusion criterion for a study. There are various methods to reduce these barriers, such as providing participants with the wearable device and a minimal data plan.

There may be limits to the types of participant reported outcomes that can be collected remotely. Ideally, participants will decide what data researchers will have access to.

Read more about the MIPACT, VALENTINE, and THRIVE studies. Read results from MIPACT and VALENTINE.

Tags

#pctGR, @Collaboratory1

February 25, 2022: The Next Generation of Patient-Centered Trials – No Site Visits, Home-delivery of Meds and Patient-reported Outcomes – The CHIEF-HF Trial (John Spertus, MD, MPH, FACC, FAHA)

Speaker

John Spertus, MD, MPH, FACC, FAHA
Daniel Lauer/Missouri Endowed Chair and Professor
University of Missouri – Kansas City
Clinical Director of Outcomes Research
Saint Luke’s Mid America Heart Institute

Keywords

Heart failure; Canagliflozin; INVOKANA; Kansas City Cardiomyopathy Questionnaire (KCCQ); tele-health; tele-trials

Key Points

  • The CHIEF-HF trial (Canagliflozin: Impact on Health Status, Quality of Life and Functional Status in Heart Failure) was a double-blind randomized clinical trial of the medication Canagliflozin for heart failure.
  • CHIEF-HF was designed to learn if patients have fewer symptoms after 3 months of treatment with Canagliflozin.
  • CHIEF-HF used a novel trial design that involved no site-visits and electronic monitoring of patient engagement. The follow up rate for this study was over 97%.
  • The Total Symptom Score on the KC Cardiomyopathy Questionnaire was the key outcome measure in the CHIEF-HF Trial.
  • Heart failure patients treated with Canagliflozin experienced statistically significant improvements in symptoms.
  • Sites in the study chose recruitment methods that worked best for them. More personalized recruitment strategies were most successful.
  • Difficulties of electronic trials include the technology limitations of the participants, and electronic consent concerns from regulatory agencies. Positives of electronic trials are high enrollment and completion rates.

Discussion Themes

How do technology heavy studies enroll diverse and underserved populations who may not have access to smartphones and wearable technology?

Virtual studies such as this one, which enrolled at 6 to 8 times the rate of a traditional trial, can save quite a bit of money on overhead expenses.

Read more about the CHIEF-HF trial.

Tags

#pctGR, @Collaboratory1

February 18, 2022: Building a Resource: The Process of Developing a Trans-stakeholder Framework to Enable Pediatric Drug Development (Perdita Taylor-Zapata, MD)

Speaker

Perdita Taylor-Zapata, MD
Best Pharmaceuticals for Children Act (BPCA) Program Lead and NICHD Program Officer
Obstetric and Pediatric Pharmacology and Therapeutics Branch
National Institute of Child Health and Human Development

Keywords

NIH Best Pharmaceuticals for Children Act; Pediatric Trial Network; Trial design; Pediatric drug development

Key Points

  • The current model for pediatric drug development can be slow and neglect neonates and rare pediatric conditions.
  • The NIH Best Pharmaceuticals for Children Act (BPCA) allows the NIH to conduct clinical trials with off-patent drugs in children.
  • Goals of the BPCA program include developing novel trial designs and including diverse and understudied populations.
  • A new framework to enable pediatric drug development could identify resources to assist in drug development, identify areas in need of further research, provide a pathway for integrating approaches, and connect pediatric researchers.
  • The BPCA program used a rigorous, systematic approach to develop a comprehensive resource listing best practices for pediatric drug trials.

Discussion Themes

Most data collected through the opportunistic model presented is PK data to determine dosing so that a more traditional drug trial can be conducted in the future.

With the right infrastructure in place, such as the Pediatric Trials Network, the time required to conduct trials can be substantially reduced.

Read more about the BPCA and its commitment to diversity in pediatric drug trials.

Tags

#pctGR, @Collaboratory1