
Real World Evidence: Mobile Health (mHealth)


Section 3

Advantages and Considerations for mHealth in Pragmatic Trials

The use of digital technologies in ePCTs offers many advantages but also poses challenges. In this section, we describe some of the special considerations.

Mobile Apps

Mobile app-based studies can be built with existing frameworks such as Apple ResearchKit for iPhones and iPads and ResearchStack for Android. Potential participants can download an app in order to participate, and they typically receive a secure token that links them to a particular study (Dameff et al. 2019). These frameworks provide a foundation for efficiently building mobile-based studies, which have been gaining traction in recent years. For example, a PubMed search on the keyword ResearchKit returned 15 clinical studies (Chan et al. 2017; Goyal et al. 2017; Webster et al. 2017; Zens et al. 2017; Crouthamel et al. 2018; Egger et al. 2018; Hausmann et al. 2018; Radin et al. 2018; Hershman et al. 2019; Yamaguchi et al. 2019; Yoshimura et al. 2019; Rubin et al. 2019; Ahmad et al. 2020; Wang et al. 2020; Inomata et al. 2020). To successfully use mobile apps in pragmatic clinical trials, investigators need to consider whether prospective patients own a smartphone capable of running the app, and whether they are willing to download, authenticate, provide consent as necessary, and actually use the app. Mobile apps have the potential to be used for standalone data collection, and they can be linked to existing real-world data sources, such as the electronic health record.
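To make the enrollment pattern above concrete, the sketch below shows one way a study backend might validate a participant's enrollment token and link that participant to a study. This is a minimal, hypothetical illustration in Python; the token format, study registry, and function names are invented for this sketch and are not part of ResearchKit, ResearchStack, or any specific platform.

```python
import hashlib
import hmac

# Hypothetical server-side registry of token hashes -> study identifiers.
# Raw tokens would be distributed to prospective participants (for example,
# by their clinic) and never stored in plain text on the server.
STUDY_TOKENS = {
    hashlib.sha256(b"DEMO-1234-ABCD").hexdigest(): "med-adherence-pilot",
}

def link_participant(token, participant_id):
    """Return the study ID the token unlocks, or None if the token is invalid."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    for stored_hash, study_id in STUDY_TOKENS.items():
        if hmac.compare_digest(digest, stored_hash):  # constant-time comparison
            # In a real system, the (participant_id, study_id) link would be
            # written to a secure enrollment database only after consent.
            return study_id
    return None

print(link_participant("DEMO-1234-ABCD", "participant-001"))  # med-adherence-pilot
```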

The FDA MyStudies app is an example of a mobile app that can be used for research. It is a customizable and reusable application for mobile devices that links electronic health data with the patient perspective for use in clinical research. Users can configure the app for specific research studies through a web-based configuration portal and use a secure patient data storage environment that is compliant with Federal Information Security Management Act (FISMA) security standards, HIPAA, and 21 CFR Part 11. The open-source code is available on GitHub, and more information can be found at the MyStudies Mobile App Quick Overview for Research. In response to the COVID-19 pandemic, the COVID MyStudies app replaced the FDA MyStudies app to allow for remote patient enrollment and informed consent for studies of the disease. However, effective September 2022, the COVID MyStudies app is no longer available due to expiration of funding. For more information, refer to Phasing Out of the COVID MyStudies Application (App) on the FDA website.

Eligibility

Eligibility criteria outlined in explanatory clinical trials often fail to fully capture the actual demands placed on individuals asked to participate. For example, randomization procedures for an explanatory trial of an mHealth intervention meant to improve medication adherence cannot feasibly be conducted with patient blinding. Further, such a trial would need to account for or assume some background use of other common tools, such as pharmacy reminders and digital health assistants. Pragmatic trials attempt to expand the pool of eligible participants to more closely match the clinical population seen in real-world settings. It bears noting that pragmatic mHealth trials still rely on a network of physical, social, and institutional assumptions about participant abilities and burdens, chiefly, consistent access to and comfort with the requisite technology. mHealth interventions, as well as many digital health interventions in the broad sense, seek to close this gap further by engaging patients/consumers through platforms they already use, on technology that is otherwise ubiquitous and familiar even to technologically naive individuals.

The same is true of pragmatic evaluations of these interventions, where recruitment and retention exist within a two-stage framework: 1) recruitment and retention of the individual, and 2) their requisite technology. When done successfully, the opportunities for evaluation of a demonstrably representative study population and dissemination to other contexts and health systems are vastly improved.

Patient Engagement and Recruitment

Mobile apps and other digital technologies can facilitate remote engagement and recruitment and have the potential to reduce barriers to engagement and increase efficiency for enrollment. For example, Hugo is a freely available app that allows the combination of clinical data from all of a person’s encounters (pharmacy, labs, health systems, and payers) with patient-generated and patient-reported data for use by the patient (Beckman and Gupta 2018). Although the use of Hugo is relatively new in “sync-for-science” research, a recent study demonstrated its feasibility for enrollment and continued engagement in 25 people after percutaneous coronary intervention (Dhruva et al. 2019). Hugo Health can be downloaded in the app store for use on a smartphone or tablet or accessed through a website, and a person can add a study code (given to them by their healthcare provider) to join a study.

The Clinical Trials Transformation Initiative (CTTI) has a set of recommendations for Optimizing Mobile Clinical Trials by Engaging Patients and Sites that includes

  • Recommendations for maximizing value and minimizing burden for study participants
  • Addressing challenges for sites
  • A checklist for trial sponsors for selecting and equipping sites for mHealth trials
  • A checklist for investigative sites intended to help with budgeting and contracting

The recruitment of a diverse set of eligible individuals represents the fundamental challenge to pragmatism in mHealth trials, as each consideration represents a trade-off between representativeness and real-world applicability versus the practical ability to deliver materials and conduct an effective evaluation.

While identifying participants using administrative electronic health record data may be possible, and even preferable, the traditional consent process often proves insufficient, as it sacrifices either pragmatism or a priori informed consent. In the first case, the team may elect to identify and approach patients in the same manner as they would for standard clinical trials (where consent occurs face-to-face, often during or adjacent to an in-person clinical encounter), simultaneously limiting the sample size, representativeness, and diversity of the resulting sample. An advantage to this strategy, however, is that study staff are available to help patients overcome any initial technologic barriers to participation (usernames and passwords, data transfer and syncing, etc.). In the second case, patients meeting criteria may be identified administratively and enrolled in a program using some form of technology-mediated outreach (text messaging in particular), with either an opt-in or opt-out notification procedure. This avoids the limitations associated with traditional consent but raises concerns about the ability to provide participants with adequate notice and the information necessary for consent to participate.

Conducting Interventions Remotely

For some studies, the digital technology can be used to deliver the intervention. In our case example of the Nudge study, the intervention is a text message reminding patients with chronic cardiovascular conditions to refill their medications (described in detail in Case Example From the Nudge Study). Another recent example is a pragmatic clinical trial conducted to determine if a smartphone app is effective in reducing menstrual pain (Wang et al. 2020).
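As an illustration of how such a reminder might be generated, the sketch below checks hypothetical pharmacy fill records and sends a text when a refill appears overdue. The data fields, phone numbers, and the use of Twilio as the SMS gateway are assumptions made for this example only; they are not a description of how the Nudge study actually delivers its messages.

```python
from datetime import date, timedelta
from twilio.rest import Client  # one example SMS gateway; any provider would work

# Hypothetical pharmacy fill records; field names are invented for this sketch.
fills = [
    {"phone": "+15555550100", "drug": "atorvastatin",
     "last_fill": date(2024, 5, 1), "days_supply": 30},
]

def needs_reminder(fill, today=None, grace_days=3):
    """True if the last fill's supply ran out more than grace_days ago."""
    today = today or date.today()
    run_out = fill["last_fill"] + timedelta(days=fill["days_supply"])
    return today > run_out + timedelta(days=grace_days)

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
for fill in fills:
    if needs_reminder(fill):
        client.messages.create(
            to=fill["phone"],
            from_="+15555550199",
            body=f"Reminder: your {fill['drug']} prescription may be due for a refill.",
        )
```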

Completing Follow-up Remotely or Research at Home

Patients can connect their personal devices to healthcare systems either to retrieve information for personal use and continuity of care or to deliver information for clinical care and/or research, such as glucose levels, levels of pain after surgery, or information about hospitalizations or readmissions (Dameff et al. 2019).
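One common way to move such patient-generated data is through a FHIR interface, when the receiving health system exposes one. The sketch below packages a blood glucose reading as a FHIR Observation and posts it to a placeholder endpoint; the URL and patient reference are illustrative, and real systems would also require authentication (e.g., SMART on FHIR), which is omitted here.

```python
import json
from datetime import datetime, timezone
import requests

# A patient-generated blood glucose reading expressed as a FHIR Observation.
# LOINC 2339-0 = "Glucose [Mass/volume] in Blood"; endpoint and IDs are placeholders.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "2339-0",
                         "display": "Glucose [Mass/volume] in Blood"}]},
    "subject": {"reference": "Patient/example-123"},
    "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
    "valueQuantity": {"value": 104, "unit": "mg/dL",
                      "system": "http://unitsofmeasure.org", "code": "mg/dL"},
}

resp = requests.post(
    "https://ehr.example.org/fhir/Observation",        # placeholder FHIR endpoint
    data=json.dumps(observation),
    headers={"Content-Type": "application/fhir+json"},
)
print(resp.status_code)
```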

Ascertainment of a Range of Outcomes

Digital technology can be used to assess a range of outcomes, such as patient health status, daily activities, sleep, hospitalization, or patient-reported outcomes.

Setting

The setting domain provides perhaps the greatest advantage to mHealth interventions, particularly for pragmatic trials. With common methods of evaluating pragmatism (Thorpe et al. 2009), trial settings are described according to how similar or different the intervention context is from the setting where patients typically receive care. In the case of mHealth, this comparison is made not between the message or application and an office visit, but rather between the types of health-related activities patients are apt to engage in with their personal electronics. Text messaging, push notifications, web-enabled treatment or symptom monitoring, and other related interventions all seek to structure the manner in which patients interact with healthcare resources in the social environment patients already inhabit, piggybacking on platforms and technology-use behaviors already deeply ingrained in daily routines.

Organization

The organizational setting or context is another highly variable consideration for mHealth interventions and pragmatic trials. While some interventions may exist on ubiquitous platforms (e.g., SMS text messaging, email), the integration of data sent to and received from patients may rely on an IT infrastructure that only exists within highly specialized medical systems, academic settings, or worse yet—at the study site itself. This issue is magnified with interventions using less common or bespoke platforms, where the expertise necessary to maintain and navigate these systems is highly localized.

Authenticity

Authenticity of the data being captured in pragmatic mHealth trials deserves special consideration. When patients share devices, or when caregivers or other loved ones primarily manage the devices being used to gather patient data, it is critical to confirm who is entering data at any given time. This is particularly pertinent to studies targeting older adults, individuals with multimorbidity, or those with lower SES. In this case, consideration of data authenticity has both scientific and practical implications, as older and lower-SES populations are those not typically receiving outreach or education in other formats and are therefore theoretically more likely to benefit from an intervention if it is efficacious. Some methods of authentication, including biometric authentication, metadata, and time stamps, may be viable in some cases. However, individuals who share or do not manage their own devices are less likely to own devices capable of sophisticated biometric authentication. In these cases, it may be advisable to simply include a question with each observation about whether the patient or someone else is entering the data.
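A lightweight way to operationalize that last suggestion is to store, with every patient-reported observation, an explicit "who entered this?" field alongside basic metadata such as a timestamp and device identifier. The sketch below shows one possible record structure; the field names are illustrative and not drawn from any specific data-capture platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PatientReportedEntry:
    """One patient-reported observation with minimal authenticity metadata."""
    participant_id: str
    item: str
    value: str
    entered_by: str = "patient"  # e.g., "patient", "caregiver", "other"
    entered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    device_id: Optional[str] = None  # optional metadata for later review

entry = PatientReportedEntry(
    participant_id="P-0042",
    item="pain_score",
    value="6",
    entered_by="caregiver",          # answer to the "who is entering data?" question
    device_id="tablet-shared-01",
)
print(entry)
```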

 



REFERENCES


Ahmad FA, Payne PRO, Lackey I, et al. 2020. Using REDCap and Apple ResearchKit to integrate patient questionnaires and clinical decision support into the electronic health record to improve sexually transmitted infection testing in the emergency department. J Am Med Inform Assoc. 27:265–273. doi:10.1093/jamia/ocz182. PMID: 31722414.

Beckman AL, Gupta S. 2018. Empowering people with their healthcare data: an Interview with Harlan Krumholz. Healthcare. 6:238–239. doi:10.1016/j.hjdsi.2018.08.002. PMID: 30143459.

Chan Y-FY, Wang P, Rogers L, et al. 2017. The Asthma Mobile Health Study, a large-scale clinical observational study using ResearchKit. Nat Biotechnol. 35:354–362. doi:10.1038/nbt.3826. PMID: 28288104.

Crouthamel M, Quattrocchi E, Watts S, et al. 2018. Using a ResearchKit smartphone app to collect rheumatoid arthritis symptoms from real-world participants: feasibility study. JMIR MHealth UHealth. 6:e177. doi:10.2196/mhealth.9656. PMID: 30213779.

Dameff C, Clay B, Longhurst CA. 2019. Personal health records: more promising in the smartphone era? JAMA. 321:339. doi:10.1001/jama.2018.20434. PMID: 30633300.

Dhruva SS, Mena-Hurtado C, Curtis J, et al. 2019. Learning how to successfully enroll and engage people in a mobile sync-for-science platform to inform shared decision making. J Am Coll Cardiol. 73:3039. doi:10.1016/S0735-1097(19)33645-9.

Egger HL, Dawson G, Hashemi J, et al. 2018. Automatic emotion and attention analysis of young children at home: a ResearchKit autism feasibility study. NPJ Digit Med. 1:20. doi:10.1038/s41746-018-0024-6. PMID: 31304303.

Goyal S, Nunn CA, Rotondi M, et al. 2017. A mobile app for the self-management of type 1 diabetes among adolescents: a randomized controlled trial. JMIR MHealth UHealth. 5:e82. doi:10.2196/mhealth.7336. PMID: 28630037.

Hausmann JS, Berna R, Gujral N, et al. 2018. Using smartphone crowdsourcing to redefine normal and febrile temperatures in adults: results from the feverprints study. J Gen Intern Med. 33:2046–2047. doi:10.1007/s11606-018-4610-8. PMID: 30105481.

Hershman SG, Bot BM, Shcherbina A, et al. 2019. Physical activity, sleep and cardiovascular health data for 50,000 individuals from the MyHeart Counts Study. Sci Data. 6:24. doi:10.1038/s41597-019-0016-7. PMID: 30975992.

 

Inomata T, Iwagami M, Nakamura M, et al. 2020. Association between dry eye and depressive symptoms: large-scale crowdsourced research using the DryEyeRhythm iPhone application. Ocul Surf. doi:10.1016/j.jtos.2020.02.007. PMID: 32113987.

Radin JM, Steinhubl SR, Su AI, et al. 2018. The Healthy Pregnancy Research Program: transforming pregnancy research through a ResearchKit app. NPJ Digit Med. 1:45. doi:10.1038/s41746-018-0052-2. PMID: 31304325.

Rubin DS, Dalton A, Tank A, et al. 2019. Development and pilot study of an iOS smartphone application for perioperative functional capacity assessment. Anesth Analg. doi:10.1213/ANE.0000000000004440. PMID: 31567326.

Thorpe KE, Zwarenstein M, Oxman AD, et al. 2009. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. Can Med Assoc J. 180:E47-57. doi:10.1503/cmaj.090523. PMID: 19372436.

Wang J, Rogge AA, Armour M, et al. 2020. International ResearchKit app for women with menstrual pain: development, access, and engagement. JMIR MHealth UHealth. 8:e14661. doi:10.2196/14661. PMID: 32058976.

Webster DE, Suver C, Doerr M, et al. 2017. The Mole Mapper Study, mobile phone skin imaging and melanoma risk data collected using ResearchKit. Sci Data. 4:170005. doi:10.1038/sdata.2017.5. PMID: 28195576.

Yamaguchi S, Waki K, Nannya Y, Nangaku M, Kadowaki T, Ohe K. 2019. Usage patterns of GlucoNote, a self-management smartphone app, based on ResearchKit for patients with type 2 diabetes and prediabetes. JMIR MHealth UHealth. 7:e13204. doi:10.2196/13204. PMID: 31017586.

Yoshimura Y, Ishijima M, Ishibashi M, et al. 2019. A nationwide observational study of locomotive syndrome in Japan using the ResearchKit: The Locomonitor study. J Orthop Sci Off J Jpn Orthop Assoc. 24:1094–1104. doi:10.1016/j.jos.2019.08.009. PMID: 31492535.

Zens M, Woias P, Suedkamp NP, Niemeyer P. 2017. “Back on Track”: a mobile app observational study using Apple’s ResearchKit framework. JMIR MHealth UHealth. 5:e23. doi:10.2196/mhealth.6259. PMID: 28246069.


Version History

Published March 16, 2020

Added information related to COVID-19 (changes made by K. Staman and L. Stewart)


Real World Evidence: Mobile Health (mHealth)


Section 2

Opportunities in mHealth

Digital health is a broad, inclusive term referring to the use of digital technology to improve health services, health research, or health outcomes. Mobile health (mHealth) is a type of digital health that uses wireless mobile devices, such as mobile phones, tablets, and laptops, to deliver health-related interventions.

mHealth offers unique opportunities to improve health service delivery, implement public health interventions, facilitate healthy behaviors, and ultimately improve health outcomes. People increasingly use mobile technology regardless of age, socioeconomic class, and primary language (Hunsaker and Hargittai 2018). Delivering health interventions through familiar mobile devices that people already use can increase the reach of and access to those interventions among diverse populations. mHealth can provide health tools to people of lower socioeconomic status (SES) who may not have access to more expensive technologies or in-person health care. mHealth can also address sensitive health topics, such as mental health, unintended pregnancy, and death and dying, that people may not feel comfortable discussing. This approach gives people the flexibility to engage in health services and interventions at convenient locations and times based on their individual needs.

Common mHealth applications for health providers include mobile applications for medical education, tips for providers when deciding upon or prescribing medications for patients, and collection of patient-reported outcomes with mobile screening tools and intake assessments. On the patient side, mHealth services are offered by healthcare systems via mobile applications, web-based platforms, patient portal apps, and text messaging to encourage chronic disease self-management, appointment attendance, and adherence to medical regimens. These mHealth services are often, but not always, linked to electronic health record systems.

Currently, there are over 318,000 health-related, commercially available mobile applications (Institute for Health Informatics 2015; Research 2 Guidance). These applications, initiated by industry, health systems, clinicians, and researchers, offer tools for diet, exercise, medication, health education, disease-specific information, monitoring features, and games. The applications target a wide variety of health concerns, ranging from pregnancy, infant well-being, sexual health, and fitness to chronic disease and end-of-life care (Flores Mateo et al. 2015; Fedele et al. 2017; Hanlon et al. 2017; Gyselaers et al. 2019; Portz et al. 2020). While some apps are designed to help individual patients, others target caregivers, and some are created for large-scale public health campaigns (e.g., smoking cessation, suicide prevention, breastfeeding).

Evidence suggests that mHealth interventions likely improve antecedents to health behavior, including self-efficacy, disease-specific knowledge, health literacy and numeracy, and motivation (Hamine et al. 2015; McKay et al. 2018; Aromatario et al. 2019). However, very little is known about the effectiveness of mHealth in improving health and health services (Free et al. 2010). Adoption rates of mobile apps are high, but use declines over time (Tang et al. 2016). Little is known about mHealth engagement and the associations between engagement and health behaviors and outcomes. There are also barriers to mHealth implementation, including interoperability with electronic health record systems, regulatory and security issues, and clinical provider and workflow concerns (Bradbury et al. 2014).

 



REFERENCES


Aromatario O, Van Hoye A, Vuillemin A, et al. 2019. How do mobile health applications support behaviour changes? A scoping review of mobile health applications relating to physical activity and eating behaviours. Public Health. 175:8–18. doi:10.1016/j.puhe.2019.06.011. PMID: 31374453.

Bradbury K, Watts S, Arden-Close E, Yardley L, Lewith G. 2014. Developing digital interventions: a methodological guide. Evid-Based Complement Altern Med ECAM. 2014:561320. doi:10.1155/2014/561320.

Fedele DA, Cushing CC, Fritz A, Amaro CM, Ortega A. 2017. Mobile health interventions for improving health outcomes in youth: a meta-analysis. JAMA Pediatr. 171(5):461–469. doi:10.1001/jamapediatrics.2017.0042.

Flores Mateo G, Granado-Font E, Ferré-Grau C, Montaña-Carreras X. 2015. Mobile phone apps to promote weight loss and increase physical activity: a systematic review and meta-analysis. J Med Internet Res. 17(11):e253. doi:10.2196/jmir.4836. PMID: 24648848.

Free C, Phillips G, Felix L, Galli L, Patel V, Edwards P. 2010. The effectiveness of M-health technologies for improving health and health services: a systematic review protocol. BMC Res Notes. 3(1):250. doi:10.1186/1756-0500-3-250. PMID: 20925916.

Gyselaers W, Lanssens D, Perry H, Khalil A. 2019. Mobile health applications for prenatal assessment and monitoring. Curr Pharm Des. 25(5):615–623. doi:10.2174/1381612825666190320140659. PMID: 30894100.

Hamine S, Gerth-Guyette E, Faulx D, Green BB, Ginsburg AS. 2015. Impact of mHealth chronic disease management on treatment adherence and patient outcomes: a systematic review. J Med Internet Res. 17(2):e52. doi:10.2196/jmir.3951. PMID: 25803266.

 

Hanlon P, Daines L, Campbell C, McKinstry B, Weller D, Pinnock H. 2017. Telehealth interventions to support self-management of long-term conditions: a systematic metareview of diabetes, heart failure, asthma, chronic obstructive pulmonary disease, and cancer. J Med Internet Res. 19(5):e172. doi:10.2196/jmir.6688. PMID: 28526671.

Hunsaker A, Hargittai E. 2018. A review of Internet use among older adults. New Media Soc. 20(10):3937–3954. doi:10.1177/1461444818787348.

Institute for Health Informatics. 2015. IMS Institute Patient Adoption of MHealth Report. https://www.iqvia.com/-/media/iqvia/pdfs/institute-reports/patient-adoption-of-mhealth.pdf. Accessed March 9, 2020.

McKay FH, Cheng C, Wright A, Shill J, Stephens H, Uccellini M. 2018. Evaluating mobile phone applications for health behaviour change: a systematic review. J Telemed Telecare. 24(1):22–30. doi:10.1177/1357633X16673538. PMID: 27760883.

Portz JD, Elsbernd K, Plys E, et al. 2020. Elements of social convoy theory in mobile health for palliative care: scoping review. JMIR MHealth UHealth. 8(1):e16060. doi:10.2196/16060. PMID: 31904581.

Research 2 Guidance. 325,000 mobile health apps available in 2017 – Android now the leading mHealth platform. https://research2guidance.com/325000-mobile-health-apps-available-in-2017/. Accessed March 9, 2020.

Tang C, Lorenzi N, Harle CA, Zhou X, Chen Y. 2016. Interactive systems for patient-centered care to enhance patient engagement. J Am Med Inform Assoc JAMIA. 23(1):2–4. doi:10.1093/jamia/ocv198. PMID: 26912537.


Version History

Published March 16, 2020


Real World Evidence: Mobile Health (mHealth)

Section 1

Introduction

Contributors

Christopher E. Knoepke, PhD, MSW
Jennifer D. Portz, PhD, MSW
Sheana Bull, PhD, MPH
Lisa Sandy, MPH
Thomas Glorioso, MS
Joy Waughtal, MPH

Phat Luong, MS
Adrian Hernandez, MD
Michael Ho, MD, PhD

Contributing Editor
Karen Staman, MS

Mobile and digital technologies hold intriguing theoretical benefits for health, particularly in supporting patient self-management of common, chronic conditions. The ubiquity of consumer electronic devices (cell phones, tablets, computers, etc.), combined with a growing body of knowledge about using administrative and patient-reported outcome data to identify, target, and tailor interventions to specific patients, provides increasing opportunities to improve how patients manage their health. These possibilities, coupled with the profitability of many digital health applications, have led to explosive development of these programs and associated interventions (Griffiths et al. 2006).

Unfortunately, little is known about either the efficacy or effectiveness of these applications. Less is known, arguably, about the logistic and practical issues of identifying and targeting individual patients and healthcare providers for digital health interventions using administrative data, either in the context of care delivery or for the purposes of research. While a small number of digital health interventions have been subjected to traditional efficacy trials, even fewer have been evaluated using pragmatic methods (more closely mirroring the manner in which these programs are used by typical patients in the context of their care), and most are not evaluated at all (Eysenbach et al. 2011; Larsen et al. 2019).

A Pivot to Virtual Technologies During the COVID-19 Pandemic

“The COVID-19 pandemic has considerably disrupted nearly all aspects of daily life, including healthcare delivery and clinical research. Because pragmatic clinical trials are often embedded within healthcare delivery systems, they may be at high risk of disruption due to the dual impacts on the conduct of both clinical care and research” (O’Brien et al. 2022).

Beginning in 2020, the widespread disruptions caused by the COVID-19 pandemic included delays or pauses in research site activation and in-person staff training, challenges to data collection strategies, travel restrictions, and the need to adapt the delivery of the intervention. Social distancing also affected operations, research teams, and patients during the shift from in-person to virtual interactions.

A recent analysis of the pandemic’s effect on several of the NIH Collaboratory pragmatic trials found that teams needed to carefully consider whether the study intervention would be effective when delivered virtually. However, the studies “least affected by healthcare operations-related disruptions were those with enrollment systems already in place and those relying heavily on automated data collection through the electronic health record and/or mobile technologies.”

Key benefits of pandemic-related modifications included expanded outreach capabilities and greater inclusiveness when using virtual interventions. The authors highlighted a need in the post-COVID era for research identifying the impacts of virtual interventions and data collection on study populations, completeness of data, and participant engagement.

In this chapter, we will outline many of the possibilities, advantages, and challenges associated with mobile health (mHealth) interventions, with a particular focus on design and evaluation of these programs using pragmatic trial methodologies. We will illustrate many design and evaluation challenges, culminating with a discussion of how these considerations influence the ongoing development of the “Personalized Patient Data and Behavioral Nudges to Improve Adherence to Chronic Cardiovascular Medications (Nudge)” project: an NIH Collaboratory-funded pragmatic clinical trial of a targeted, cell phone-based medication adherence intervention specifically for patients with common cardiovascular conditions (hypertension, hypercholesterolemia, diabetes, atrial fibrillation, and coronary artery disease; NCT03973931).

 


Resources

Grand Rounds

Introducing the Digital Medicine Society (Andy Coravos, MBA, Jen Goldsack, MS, MBA)

Advancing the Use of Mobile Technologies for Data Capture & Improved Clinical Trials (John Hubbard, PhD, Barry Peterson, PhD, Cheryl Grandinetti, PharmD)

Using Nudges to Improve the Delivery of Health Care (Mitesh S. Patel, MD, MBA)

The Democratization of Medicine: Open Access, Social Media, AI, Apps, and Empowering the Patient as the Future of Clinical Research

REFERENCES


Eysenbach G, CONSORT-EHEALTH Group. 2011. CONSORT-EHEALTH: improving and standardizing evaluation reports of Web-based and mobile health interventions. J Med Internet Res. 13(4):e126. doi:10.2196/jmir.1923. PMID: 22209829.

Griffiths F, Lindenmeyer A, Powell J, Lowe P, Thorogood M. 2006. Why are health care interventions delivered over the Internet? A systematic review of the published literature. J Med Internet Res. 8(2):e10. doi:10.2196/jmir.8.2.e10. PMID: 16867965.

 

Larsen ME, Huckvale K, Nicholas J, et al. 2019. Using science to sell apps: evaluation of mental health app store quality claims. Npj Digit Med. 2(1):1-6. doi:10.1038/s41746-019-0093-1. PMID: 31304366.

O’Brien EC, Sugarman J, Weinfurt KP, et al. 2022. The impact of COVID-19 on pragmatic clinical trials: lessons learned from the NIH Health Care Systems Research Collaboratory. Trials. 23(1):424. doi:10.1186/s13063-022-06385-8. PMID: 35597988.

Version History

Published March 16, 2020

Added information related to COVID-19 (changes made by K. Staman and L. Stewart)


Building Partnerships and Teams to Ensure a Successful Trial

Section 5

Advice From Healthcare System Leadership

We represent healthcare systems leaders and researchers who have partnered with the NIH Collaboratory Trials. Through our experience, we have learned that the successful conduct of ePCTs requires shared vision and priorities, continuous communication and commitment, and, frequently, compromise. Our experience has led us to develop this section with recommendations for investigators who wish to conduct ePCTs.

The job of a healthcare system is to efficiently and affordably deliver high-quality healthcare to patient populations. Pragmatic clinical trials offer an opportunity to combine the work of caring for patients with the generation of real-world evidence that could improve the quality of care for patients in a reliable way. This may help create a cycle that leads to continual learning, i.e., the learning health system (Committee on the Learning Health Care System in America 2013). However, the conduct of research is an embellishment of the mission of most healthcare systems, not a primary obligation, and participation is voluntary.

“The purpose of the healthcare system is not to do research, but to provide good healthcare. Researchers often have a tail-wagging-the-dog problem. We assume that if we think something is a good idea, the healthcare system will too … We need to remember that we’re the tail and the healthcare system is the dog.” —Greg Simon, MD, MPH (PI of SPOT)

Healthcare systems have constrained bandwidth, and participating in learning activities involves significant costs and challenges: there are direct costs but also intangible costs, which can be substantial, such as personnel time, IT time, distraction of clinical staff, and the potential for supply chain issues (Sands 2018). Additionally, when partnering with researchers, it takes time to define relationships and develop trust; the process can be complex, lengthy, and costly (e.g., legal costs). Participating in an ePCT may also involve NOT participating in competing interventions or other quality improvement activities or practice changes. If operational and research timelines fall out of sync, communication and compromise are often necessary. Finally, for multi-site studies, communication between and within organizations can be challenging, as these delivery systems each have site-specific, complex communication structures and, often, differing priorities.

“We have to think about how the research that we will do will either interfere with or add value to the health system. Ideally, we would like to leave something behind to the people who have helped us, and to do that, we really need to understand the values, priorities, and expectations of the healthcare systems, and to realize that the people we are talking to about pragmatic clinical trials often have bigger fish to fry.” —Greg Simon, MD, MPH from the January 2020 Collaboratory Grand Rounds: What is a Pragmatic Trial and How do I Start?

Because of these challenges, the successful design and conduct of ePCTs is predicated on a strong partnership between clinical researchers and delivery systems partners. In Real-World Advice for Generating Real-World Evidence, we establish 5 principles of partnership (Sands et al. 2019).

5 Principles of Partnership
 
  1. Establish and maintain a durable partnership. This partnership should involve an empowered champion to provide enterprise leadership throughout the course of the study. Participation must provide benefit for each collaborating organization. Credit (such as authorship and presentations) and intellectual property should be equitably shared and determined ahead of time.
  2. Select research topics of mutual interest. A research question that will interest a healthcare system must be clinically, operationally, or strategically important and provide value to the healthcare system, such as providing an adoptable solution to a problem or clarifying alternatives between different treatments or therapies.
  3. Give precedence to operational imperatives. The activity should be largely transparent or similar to normal workflow and inserted into the operational cadence; the schedule cannot be driven by research timing. Investigators may need to accommodate changing institutional priorities.
  4. Recognize that data security is essential. Security systems should avoid transfer or duplication of data.
  5. Build data systems for learning healthcare. Use standards that promote interoperability and aggregation of data, such as the CMS requirements for meaningful use.

For healthcare system partners, the idea of improvement through systematic and scientific thinking is part of the value of participating. Healthcare systems leaders who volunteer to conduct embedded research may also participate for a variety of other reasons, including:

  • Evidence generation is for the greater good and may improve the care of patients
  • Early access to new knowledge of best practices
  • Research is in keeping with the mission of the healthcare system
  • The conduct of research is a market differentiator and/or a reputation builder
  • As a strategy for physician recruitment
  • As part of performance improvement initiatives/formal evaluation
  • Core to being a learning healthcare system
  • Research is an important activity within the domain of corporate responsibility and advocacy
  • Research is an extension of a commitment to “evidence-based medicine” (Sands et al. 2019)

When the National Academy of Medicine advanced the idea of a Learning Health System over a decade ago, they stated that a “strong synchrony of efforts” would be required between healthcare system partners and clinical researchers (Committee on the Learning Health Care System in America 2013). If the partnership is symmetric, and “strongly synchronized,” then the embedded pragmatic research should be a “win-win”; it should feel equitable and be broadly perceived as such.


Resources

Watch a video where Dr. Greg Simon is tasked with convincing healthcare system leadership to invest in implementing the intervention from the Suicide Prevention Outreach Trial (SPOT). In this hypothetical situation, the trial is complete and the intervention reduced suicide attempts in at-risk populations.

The Communicating with Health System Partners handout outlines points in a study when research teams may need to engage with health system leaders, clinic-level managers, and frontline staff.

REFERENCES

Committee on the Learning Health Care System in America, Institute of Medicine. 2013. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Smith M, Saunders R, Stuckhardt L, McGinnis JM, editors. Washington (DC): National Academies Press (US). http://www.ncbi.nlm.nih.gov/books/NBK207225/. Accessed January 28, 2015.

Sands K, Platt R, Perlin JB. 2019 Sep 4. Real World Advice for Generating Real World Evidence. NEJM Catalyst. https://catalyst.nejm.org/doi/full/10.1056/CAT.19.0621.

Sands K. 2018. Partnering with Stakeholders: Keys to Success. https://dcricollab.dcri.duke.edu/sites/NIHKR/KR/Panel-1-Combined.pdf. Accessed February 26, 2020.


Version History

October 3, 2022: Made minor nonsubstantive text corrections (changes made by K. Staman and L. Stewart)

August 27, 2020: Made minor nonsubstantive text corrections (changes made by L. Wing).

Published Feb 25, 2020


Monitoring Intervention Fidelity and Adaptations


Section 6

Additional Resources

Journal articles
  • Tuzzio et al. Healthc (Amst), 2019. Pragmatic clinical trials offer unique opportunities for disseminating, implementing, and sustaining evidence-based practices into clinical care: Proceedings of a workshop
  • Weinfurt et al. BMC Med Res Method, 2017. Pragmatic clinical trials embedded in healthcare systems: generalizable lessons from the NIH Collaboratory

Websites
  • RE-AIM. From the website description: The goal of RE-AIM is to encourage program planners, evaluators, readers of journal articles, funders, and policy-makers to pay more attention to essential program elements, including external validity, that can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions.
  • Dissemination and Implementation Models in Health Research and Practice. From the website description: D&I researchers often utilize theory-based approaches that evolve over time based on empirical testing. As you “shop” for an appropriate model, you should review the core concepts, proposed relationships, and outcomes to be sure you are fully informed beyond what is depicted on the graphic. It is helpful to gather associated literature to define concepts and identify any issues or recommended adaptations based on prior studies. There is likely no comprehensive model that will perfectly fit every study, so it may be necessary to either adapt a model and/or combine multiple models for your study.

Online presentations
  • Videocast of NIH Workshop on D&I. A full-day videocast workshop hosted by the NIH on the topic Pragmatic Clinical Trials: Unique Opportunities for Disseminating, Implementing, and Sustaining Evidence-Based Practices into Clinical Care (May 2017).
  • RE-AIM Introduction Video. A 10-minute YouTube video in which an originator of RE-AIM, Dr. Russell Glasgow, describes the framework (March 2019).
  • Using RE-AIM. Beginning With the End in Mind: Using RE-AIM to Guide Program Planning, Implementation, and Evaluation, a webinar conducted by Dr. Laura Balis, University of Wyoming Extension (February 2019).

Grand Rounds webinars
  • November 8, 2019: Lumbar Imaging with Reporting of Epidemiology: Initial Results and Some Lessons Learned (Jeffrey Jarvik, MD, MPH; Patrick Heagerty, PhD)
  • April 19, 2019: Trauma Survivors Outcomes & Support (TSOS) Pragmatic Trial: Revisiting Effectiveness & Implementation Aims (Doug Zatzick, MD)
  • February 23, 2024: Virtual Vigilance: Monitoring of Decentralized Clinical Trials (Adrian Hernandez, MD; Christopher J. Lindsell, PhD)




Version History

June 3, 2025: Added Grand Round (changes made by K. Staman).

Published March 2020


Monitoring Intervention Fidelity and Adaptations


Section 5

Intervention Adaptation Strategies and Examples

This section describes strategies to anticipate how to work with health systems that could potentially adapt the ePCT intervention, and includes real-world case studies from the NIH Collaboratory Trials.

Common Strategies to Anticipate Changes and Monitor Adherence and Adaptations

  • Build and nurture relationships with health system partners. Consider using different communication strategies and communicate often. For example, ask a chief medical officer to write an e-mail to explain the importance of the intervention. Conduct debriefing meetings with site teams to understand what unplanned changes have occurred or might be planned in the near future.
  • Document the aspects of the intervention that are essential features. Assess how much room there is to modify the intervention from the target. For more, see the PCORI methodology standards for studies of complex interventions.
  • Request, monitor, and act on data regularly (eg, monthly, quarterly). Detect and assess changes made within the healthcare setting and to the intervention. For example, if the researcher has data on orders (eg, time frame for a dialysis session), they could assess if and when the orders were changed.
  • Monitor for change over time. External changes, including changes to guidelines, reimbursement policy, the electronic health record, and public health, may all affect standard of care and/or the implementation of the intervention.
  • Periodically verify that the software or EHR is functioning as expected, for example, to assess whether manual changes were made by a physician (a minimal sketch of such a check follows this list).
  • Offer training to staff to encourage fidelity. Training can be in-person or virtual, half-day or full-day. Repeat training sessions can be offered as a booster.
  • Conduct brief qualitative check-in meetings or site visits. Use ethnographic methods to observe, document, and learn how the intervention is being implemented and which variations and levels of intensity are being used. Document anecdotes and stories that go along with the intervention over time. This information could inform your results and dissemination products, such as toolkits that a healthcare system could use after the trial is over.
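As a concrete, purely illustrative example of the verification strategy above, the sketch below scans a pull of imaging report text for an intervention phrase, in the spirit of the LIRE-type benchmark text described later in this section, and flags site-months where the text is missing. The snippet text, column names, and data are invented for this example.

```python
import pandas as pd

# Invented phrase standing in for automatically inserted intervention text.
INTERVENTION_SNIPPET = "Findings such as these are common in people without back pain"

reports = pd.DataFrame({
    "site": ["A", "A", "B", "B"],
    "report_month": ["2024-01", "2024-02", "2024-01", "2024-02"],
    "report_text": [
        "IMPRESSION: Mild degenerative changes. " + INTERVENTION_SNIPPET,
        "IMPRESSION: Disc bulge at L4-L5.",  # snippet missing: possible delivery break
        "IMPRESSION: Normal study. " + INTERVENTION_SNIPPET,
        "IMPRESSION: Facet arthropathy. " + INTERVENTION_SNIPPET,
    ],
})

reports["has_intervention_text"] = reports["report_text"].str.contains(
    INTERVENTION_SNIPPET, regex=False
)
coverage = reports.groupby(["site", "report_month"])["has_intervention_text"].mean()
print(coverage)  # values well below 1.0 flag a delivery problem to investigate
```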

Intervention Adaptation Scenarios from the NIH Collaboratory Trials

“Begin with the mindset that there will be some adaptations during the trial. There will be multiple time points along the way for adaptations, including those during program delivery and at the end (eg, post-lessons learned) and before finalizing toolkits (eg, implementation and adaptation guides).” –Russell Glasgow, PhD (Nudge Study Dissemination & Implementation Workgroup)

Next we describe a few real-world scenarios from the NIH Collaboratory Trials. All these trials are PCTs embedded in healthcare systems. Scenarios include considerations for whether the study team has an a priori plan to anticipate and handle changes; whether the team uses a particular framework or method to track modifications; and how the team plans to work with their partners to adapt the intervention.

Case Study: PRIM-ER

Background: The PRIM-ER (Primary Palliative Care for Emergency Medicine) NIH Collaboratory Trial is an ongoing ePCT evaluating the effect of a primary palliative care program on outcomes for older adults with serious illness in diverse emergency department (ED) settings. Using a pragmatic, cluster-randomized, stepped-wedge design, the study is being conducted across a diverse group of 35 EDs that vary in specialty geriatric and palliative care capacity, geographic region, payer mix, and demographics. The intervention includes evidence-based, multidisciplinary primary palliative care education; simulation-based workshops on communication in serious illness; clinical decision support; and provider audit and feedback. The hypothesis is that older adults with serious, life-limiting illness who are cared for by providers with primary palliative care skills will be less likely to be admitted to an inpatient setting; more likely to be discharged home or to a palliative care service; and will have higher home health and hospice use, fewer inpatient days and ICU admissions at 6 months, and longer survival than patients seen before implementation.

Intervention monitoring plan: Over the study period, the team documented and evaluated changes as they occurred within the participating health systems. The team tracked modifications at each site to better understand their impact on intervention fidelity. And, as described in the trial’s protocol publication (Grudzen et al. 2019), the RE-AIM framework will be used to analyze the quality of the implementation.

A complex intervention such as PRIM-ER consists of the core features (i.e., functions) and the components and processes that promote the core features (i.e., forms). Before the full implementation phase of the trial, the study team identified the core functions that needed to be standardized across all intervention sites as well as the forms that could be adapted. The team remains flexible and transparent in communicating with key stakeholders about the components that can and cannot be tailored so that the study retains a level of standardization and integrity in design. For maximum buy-in from stakeholders, the study team is encouraging sites to adapt, when possible, the processes for each intervention component to their local site’s context in order to carry out the deliverables of the intervention. The team will collaborate with each site and assess and approve any new processes that arise to ensure the fidelity of the intervention while allowing adaptation to local context. The team will also share suggestions with stakeholders about what worked at other sites that have completed the intervention.

If substantial changes occurred in leadership, the PRIM-ER study team organized a teleconference with the new and existing leaders in the health system. During the call, expectations and potential strategies to maximize continued engagement and partnership were discussed. Upon completion of each site’s 3-week intervention period, the program manager completed post-intervention reflection notes that identified the impact of system changes on the intervention.

The PRIM-ER study team also used the RE-AIM framework to evaluate specific components of the intervention. For example, the team evaluated the clinical decision support (CDS) tool at the delivery-site level through a mixed-methods analysis, which included a 30-minute quantitative survey administered at the site level 12 months post-intervention and qualitative interviews with principal investigators and physician champions. The post-implementation survey was developed using the RE-AIM framework and served multiple purposes. It was an opportunity to 1) ensure the CDS tool screenshots and detailed information collected during implementation were accurate, 2) understand barriers and facilitators in implementation and maintenance, 3) assess whether changes had been made to the CDS post-intervention and the drivers for those changes, and 4) understand any COVID-19 impacts, as half of the sites implemented the intervention before the pandemic. The qualitative interviews were conducted as part of a larger assessment of best practice alert implementation, adoption, effectiveness, and maintenance processes. Results of this evaluation are currently under peer review.

“The application of theory and delineation of forms and functions, as well as prospective adaptation monitoring of large complex interventions, can support the balance of fidelity with adaptability to encourage successful interventions among a variety of clinical environments.” (Hill et al. 2020)

Other Examples of Monitoring Plans from the NIH Collaboratory Trials

Nudge (Personalized Patient Data and Behavioral Nudges to Improve Adherence to Chronic Cardiovascular Medications)

Intervention: Tests the effectiveness of automated mobile phone text reminders, sent at scale based on pharmacy data, prompting patients to refill medications, with the goal of improving adherence and outcomes.

Monitoring plan: The team monitored for system changes that affected the intervention through monthly meetings with all site PIs. Quarterly partner meetings with patients, providers, and health system leaders were held to keep them apprised of study updates and to obtain feedback about study issues. Modifications of study procedures were tracked and evaluated for their impact on intervention fidelity and outcomes. The team used the RE-AIM framework to evaluate the implementation, short-term sustainability, and dissemination of the intervention.

LIRE (Lumbar Imaging with Reporting of Epidemiology)

Intervention: Tests the effectiveness of a simple intervention whereby epidemiologic benchmarks are inserted into lumbar spine imaging reports.

Monitoring plan: Although the team did not use a formal framework, they tracked EHR data every 6 months to verify whether the intervention text was being implemented as designed. Troubleshooting by the site PI uncovered instances where it was not being delivered; a switch in the radiology reporting system led to a break in the automatic insertion of the intervention text.

PROVEN (Pragmatic Trial of Video Education in Nursing Homes)

Intervention: Offers and shows an advance care planning video to patients admitted to the nursing home within partner nursing home systems.

Monitoring plan: The team did not use a formal framework but did employ structured "real-time" monitoring of video intervention fidelity at 119 nursing homes. They worked with the facilities to integrate a novel report in the EHR that documented whether the video was offered and shown to patients per the implementation protocol. The research team created facility-level adherence reports (the proportion of enrolled patients offered and shown a video) and provided them as feedback to the program implementation site leader at each facility (a minimal sketch of such a calculation follows this table). At monthly telephone meetings, the PROVEN implementation team reviewed these reports with the site leaders and discussed strategies to improve adherence. Facility site leaders were also asked every 6 months about existing or new initiatives external to the trial that focused on advance care planning or reduction in hospital transfers.

ICD-Pieces (Improving Chronic Disease Management with Pieces)

Intervention: Uses a novel technology platform to enable the use of EHR data to improve care of patients with the triad of chronic kidney disease, diabetes, and hypertension within primary care practices or medical homes in the community.

Monitoring plan: The team did not use a formal framework for monitoring but did conduct regular calls with their partners to learn about and document changes to the implementation. The team had planned to monitor fidelity through automatic detection of order sets but subsequently had to add manual efforts. The Data and Safety Monitoring Board (DSMB) asked the team to develop ways to measure fidelity by tracking various metrics related to implementation of the intervention and to monitor for separation between the control and experimental groups; the study team was blinded to this comparison, but the DSMB had access to and reviewed these data. To document the changes sites made, practice facilitators reviewed weekly lists of patients who should have been flagged and enrolled compared with those who were not enrolled. Discrepancies were communicated to the clinic by providing updated patient lists or by phone. Manual reminders to clinic staff about who was eligible also helped supplement the automated process.
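The sketch below illustrates, with invented data, how a facility-level adherence report of the kind described for PROVEN might be computed (the proportion of enrolled patients offered and shown the video). It is not the study team's actual reporting code; column names and values are assumptions for this example.

```python
import pandas as pd

# Invented enrollment records: one row per enrolled patient.
enrollments = pd.DataFrame({
    "facility": ["A", "A", "A", "B", "B"],
    "video_offered": [True, True, False, True, True],
    "video_shown": [True, False, False, True, True],
})

# Facility-level adherence report: counts and proportions offered/shown.
report = enrollments.groupby("facility").agg(
    enrolled=("video_offered", "size"),
    pct_offered=("video_offered", "mean"),
    pct_shown=("video_shown", "mean"),
)
print(report)
```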



REFERENCES


Grudzen CR, Brody AA, Chung FR, et al. 2019. Primary Palliative Care for Emergency Medicine (PRIM-ER): protocol for a pragmatic, cluster-randomised, stepped wedge design to test the effectiveness of primary palliative care education, training and technical support for emergency medicine. BMJ Open. 9:e030099. doi:10.1136/bmjopen-2019-030099.


Hill J, Cuthel AM, Lin P, Grudzen CR. 2020. Primary Palliative Care for Emergency Medicine (PRIM-ER): applying form and function to a theory-based complex intervention. Contemp Clin Trials Commun. 18:100570. doi:10.1016/j.conctc.2020.100570.


Version History

June 3, 2025: Updated case studies, added row to table, minor updates to text (changes made by K. Staman).

Published March 2020


Monitoring Intervention Fidelity and Adaptations


Section 4

Frameworks for Characterizing Fidelity and Adaptations

Whether or not study teams choose to use a formal framework to characterize or report adaptations to their study interventions, tracking such modifications is key to understanding the internal validity of the study. One prominent framework is FRAME.

FRAME

The Framework for Reporting Adaptations and Modifications-Enhanced (FRAME) is an approach developed, and recently expanded, by Wiltsey Stirman and colleagues to help study teams identify and report modifications to interventions or implementation strategies, both planned and unplanned (Stirman 2013; Wiltsey Stirman 2019). FRAME, an update and synthesis of earlier adaptation research and models (Glasgow et al. 1999), helps researchers weigh the impact of a modification against fidelity to the intervention. The FRAME approach can be used to track and document aspects of the intervention’s implementation, such as why, when, and where the change occurred; the nature of the change; the target of the change; and, importantly, the goal of the change, for example, to improve effectiveness, increase reach or engagement, or reduce cost.

As an example, a recent study documented the process and outcomes of adapting the Savvy Caregiver Program (SCP) for Korean American dementia caregivers according to the 8 domains in FRAME (a schematic example of capturing these domains as a structured record appears after the list):

  1. What was modified
  2. Who participated in recommending and deciding on the modification
  3. When the modification occurred
  4. Whether the modification was planned
  5. Whether the modification was fidelity-consistent
  6. Whether the modification was temporary
  7. At what level of delivery the modification was made
  8. Why the modification was made (Jang et al. 2024)
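A simple way to make these domains actionable during a trial is to capture each modification as a structured record with one field per domain. The sketch below is a schematic example only; the field names paraphrase the FRAME domains and do not represent an official FRAME coding instrument.

```python
from dataclasses import dataclass

@dataclass
class FrameModification:
    """One intervention modification, coded against the eight FRAME domains above."""
    what_was_modified: str
    who_decided: str           # who recommended/decided on the modification
    when_in_study: str         # e.g., "pre-implementation", "mid-trial"
    planned: bool
    fidelity_consistent: bool
    temporary: bool
    level_of_delivery: str     # e.g., "individual", "clinic", "system"
    reason: str                # goal of the change, e.g., "increase engagement"

mod = FrameModification(
    what_was_modified="session length shortened from 120 to 90 minutes",
    who_decided="site facilitators, with study team approval",
    when_in_study="mid-trial",
    planned=False,
    fidelity_consistent=True,
    temporary=False,
    level_of_delivery="clinic",
    reason="reduce burden and increase engagement",
)
print(mod)
```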

The study team found that the primary reasons for adaptations were to improve engagement (62.5%) and fit with recipients (43.8%) (Jang et al. 2024).

The authors state: “The FRAME categorization provided a detailed understanding of the process and nature of adapting the SCP [Savvy Caregiver Program] and served as a foundation for further implementation and scale-up. FRAME not only serves as a guide for adapting evidence-based interventions but also promotes their replicability and scalability” (Jang et al. 2024).

Other Frameworks

RE-AIM

The RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework was designed to help study teams evaluate “essential program elements including external validity that can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions” (Glasgow et al. 1999; www.re-aim.org). Read more about RE-AIM in the Dissemination and Implementation Chapter.

RAPICE

The approach called Rapid Assessment Procedure Informed Clinical Ethnography (RAPICE) uses ethnographic methods to collect and analyze qualitative data about a clinical intervention in a relatively short period (Palinkas and Zatzick 2019).

Additional Considerations

The Living Textbook identifies other considerations related to potential changes to the intervention:



REFERENCES


Glasgow RE, Vogt TM, Boles SM. 1999. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 89:1322-1327. doi:10.2105/ajph.89.9.1322. PMID: 10474547.

Glasgow RE, Harden SM, Gaglio B, et al. 2019. RE-AIM Planning and Evaluation Framework: adapting to new science and practice with a 20-year review. Front Public Health. 7:64-64. doi:10.3389/fpubh.2019.00064. PMID: 30984733.

Jang Y, Hepburn K, Haley WE, et al. 2024. Examining cultural adaptations of the savvy caregiver program for Korean American caregivers using the framework for reporting adaptations and modifications-enhanced (FRAME). BMC Geriatr. 24:79. doi:10.1186/s12877-024-04715-w.

 


Palinkas LA, Zatzick D. 2019. Rapid Assessment Procedure Informed Clinical Ethnography (RAPICE) in pragmatic clinical trials of mental health services implementation: methods and applied case study. Adm Policy Ment Health. 46:255-270. doi:10.1007/s10488-018-0909-3. PMID: 30488143.

Stirman SW, Miller CJ, Toder K, Calloway A. 2013. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 8:65. doi:10.1186/1748-5908-8-65. PMID: 23758995.


Version History

March 17, 2026: Added Devon Check and Hayden Bosworth as contributors (changes made by G. Uhlenbrauck).

June 3, 2025: Updated with new literature (Jang et al) and revised section (changes made by K. Staman).

August 6, 2020: Added link to TSOS case study using RAPICE in the Dissemination and Implementation chapter (change made by L. Wing).

Published March 2020


Monitoring Intervention Fidelity and Adaptations


Section 2

Anticipating Changes That May Affect Intervention Fidelity

“External changes can give rise to unexpected challenges for the trials, including decisions regarding how to respond to new clinical practice guidelines, increased difficulty in implementing trial interventions, achieving separation between treatment groups, and differential responses across sites.” (Curtis et al. 2019)

As described in the Assessing Feasibility chapter, it is important for study teams to pilot their ePCT intervention and assess feasibility with the partner health system as much as possible before launching the implementation phase. Piloting provides the real-world feedback needed to understand the capabilities, capacities, and workflows of sites delivering the intervention.

During the study’s implementation phase, a variety of changes within clinics, hospitals, and health systems may affect delivery of the embedded intervention, and researchers should expect such changes to occur. One example is the incorporation of a component of the intervention into usual care at a control site or cluster, whether through unintentional spillover of intervention effects; healthcare system initiatives, guidelines, or policies aimed at a similar problem; or changes in staffing, clinic workflow, or leadership. In the early stages of implementation, it is beneficial to identify and monitor the aspects of the intervention and its delivery context that are vulnerable to internal and external changes and that could drive adaptations. It is equally important to know in advance which features (core functions) of the embedded intervention are so essential to its effectiveness that modifying them could negatively affect the study’s outcomes and impact.

Examples of Changes That Can Drive Adaptations

Within a clinical setting
  • Competing clinical initiatives
  • Workflow complexities
  • Maintaining expertise amid clinical staff turnover or reduction
  • Resource and tool constraints
  • Abandonment of intervention tasks
  • Intervention design is overly complicated or burdensome
  • Discovering new efficiencies

Across a health system
  • Quality improvement initiatives
  • Leadership buy-in
  • Leadership, clinician, and staff turnover
  • Champion or facilitator turnover
  • EHR system updates, transitions, or other technology/tools changes
  • Data collection processes
  • Data quality and pertinence to process and outcomes
  • Changes to health system priorities, incentives, or ownership
  • Clinical guideline changes

At the community level
  • Competition
  • Emerging community priorities
  • Social media trending topics (eg, vaccinations)

At the state or national level
  • Healthcare system market consolidation
  • Policy changes
  • Payment coverage changes, especially Medicare and Medicaid
  • Regulatory changes


REFERENCES


Curtis LH, Dember LM, Vazquez MA, et al. 2019. Addressing guideline and policy changes during pragmatic clinical trials. Clin Trials. 16:431-437. doi:10.1177/1740774519845682. PMID: 31084378.


Version History

June 3, 2025: Minor updates to the text (changes made by K. Staman).

Published March 2020

Introduction

Monitoring Intervention Fidelity and Adaptations


Section 1

Introduction

A primary goal of conducting embedded PCTs (ePCTs) is to contribute high-quality evidence needed to establish and sustain a learning healthcare system. Embedded PCTs by nature are conducted in dynamic, complex healthcare delivery settings where unanticipated events and changes will happen. In addition to the importance of designing ePCT interventions with implementation in mind, it is likewise essential to monitor both planned and unplanned changes to the intervention during the trial’s execution phase. Evaluating such changes will help inform whether the intervention was successful or not. It may also be useful in the implementation, dissemination, and sustainability of an effective intervention, or at least its core functions, within the health system (Denis et al. 2002).

Interventions that are more pragmatic on the PRECIS-2 spectrum deliberately build flexibility into the intervention’s components, delivery, and adherence in order to be “fit for purpose” (Loudon et al. 2015; Norton et al. 2019). Researchers should expect health systems to make slight adjustments so that the intervention fits within clinical workflow and care provision and minimizes additional work and burden, and they should plan for course corrections that promote effective trial implementation and sustainability in the real-world contexts in which the trials are set. Consistent monitoring and reporting of intervention adaptations will also be needed to assess fidelity to the intervention’s design, to support the analysis, and to determine whether an adaptation had a positive or negative impact on the effectiveness or reproducibility of the intervention and potentially on the findings (Wiltsey Stirman et al. 2019).

In this chapter, we describe different types of changes a study team may encounter while conducting an ePCT. We also introduce strategies that teams can use to anticipate, monitor, and document adaptations to their intervention to support study analysis and sustainability, and to set the stage for dissemination and implementation of successful interventions in other healthcare settings.


REFERENCES

Denis JL, Hebert Y, Langley A, Lozeau D, Trottier LH. 2002. Explaining diffusion patterns for complex health care innovations. Health Care Manage Rev. 27:60-73. PMID: 12146784.

Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. 2015. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 350:h2147. doi:10.1136/bmj.h2147. PMID: 25956159.

Norton WE, Zwarenstein M, Czajkowski S, et al. 2019. Building internal capacity in pragmatic trials: a workshop for program scientists at the US National Cancer Institute. Trials. 20:779. doi:10.1186/s13063-019-3934-y.


Wiltsey Stirman S, Baumann AA, Miller CJ. 2019. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 14:58. doi:10.1186/s13012-019-0898-y. PMID: 31171014.


Version History

Published March 2020

Missing Data and Intention-to-Treat Analyses

Analysis Plan


Section 5

Missing Data and Intention-to-Treat Analyses

In many randomized clinical trials, the primary analysis is an intention-to-treat (ITT) analysis, an approach based on the treatment assignment as randomized rather than the actual treatment received. One rationale for the ITT approach is that it evaluates the real-world effects of the intervention. However, a common misconception is that the ITT analysis will be unbiased regardless of crossover or missing data.

To understand the effects of crossover and dropout in an ITT analysis, it is useful to understand the 2 types of treatment effect that are generally of interest: the ITT effect and the average causal effect. The ITT effect measures the intervention effect as randomized; the average causal effect measures the intervention effect as actually received. In the ideal situation with perfect compliance and no missing outcome data, the ITT effect and the average causal effect are identical. This section of the Living Textbook considers the population-level causal effects in situations in which there is noncompliance or missing outcome data. Missingness in covariates may require further consideration.
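In counterfactual (potential outcomes) notation, a minimal sketch of these two estimands is shown below, where Z denotes randomized assignment, Y the observed outcome, and Y(a) the potential outcome if treatment a were received; the notation is introduced here for illustration and does not reproduce the white paper’s derivation.

\text{ITT effect} \;=\; E[\,Y \mid Z = 1\,] \;-\; E[\,Y \mid Z = 0\,]

\text{Average causal effect} \;=\; E[\,Y(1)\,] \;-\; E[\,Y(0)\,]

With perfect compliance (treatment received equals Z) and no missing outcomes, the observed Y equals Y(Z), so the two contrasts coincide; noncompliance and dropout break this equivalence in the ways described below.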

In the presence of treatment noncompliance, the ITT effect and the average causal effect usually are not the same. In the absence of study dropout, the ITT effect can be estimated using standard methods and ignoring noncompliance. However, the ITT effect is diluted by crossover. A large crossover rate diminishes the ITT effect and reduces the statistical power of the analysis.
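As an illustration of this dilution, the following simulation sketch (hypothetical numbers, not drawn from any Collaboratory trial) assigns participants to two arms, lets 30% of the control arm cross over to the active treatment, and compares the resulting ITT estimate with the true causal effect.

# Illustrative sketch: how crossover dilutes the ITT estimate.
# All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # participants per arm (large, to reduce simulation noise)
true_effect = 5.0                # average causal effect of actually receiving treatment
crossover_rate = 0.30            # 30% of the control arm receives treatment anyway

# Potential outcomes: Y(0) ~ Normal(50, 10), Y(1) = Y(0) + true_effect
y0 = rng.normal(50, 10, size=2 * n)
y1 = y0 + true_effect

z = np.repeat([1, 0], n)         # randomized assignment
received = z.copy()
control_idx = np.where(z == 0)[0]
crossed = rng.random(control_idx.size) < crossover_rate
received[control_idx[crossed]] = 1   # control participants who cross over

y = np.where(received == 1, y1, y0)  # observed outcome depends on treatment received

itt_estimate = y[z == 1].mean() - y[z == 0].mean()
print(f"True causal effect: {true_effect:.2f}")
print(f"ITT estimate with {crossover_rate:.0%} crossover: {itt_estimate:.2f}")

In this setup the ITT estimate settles near true_effect × (1 − crossover_rate), which is one way to see why a large crossover rate reduces the power to detect the ITT effect.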

In the presence of dropout, the validity of a complete-case ITT analysis (that is, a standard analysis that ignores missing data) rests on the untestable assumption that study dropout introduces no selection bias. This assumption is violated, for example, if those who drop out of the study are “sicker” than those who remain. In that situation, even when the dropout pattern does not differ across treatment arms, the naive estimators that ignore missing data are biased for the originally targeted population-level ITT effect whenever the ITT effect among “sicker” participants differs from that in the general population. When the dropout pattern does differ across treatment arms, the naive estimators are biased even for the ITT effect in the population represented by the participants remaining in the trial. The assumption can be weakened to no selection bias by study dropout within levels of a set of measured baseline factors; under this weaker assumption, valid ITT effect estimates can be obtained with methods that adjust for measured baseline selection bias due to dropout, such as inverse probability weighting or g-estimation.
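The sketch below illustrates one such adjustment, inverse probability weighting, using hypothetical column names (assigned, severity, observed, outcome); it assumes dropout depends only on randomized arm and a measured baseline severity score, and it is not the specific method prescribed by the white paper.

# Illustrative sketch: inverse probability weighting (IPW) for dropout
# that depends on measured baseline factors. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def ipw_itt_estimate(df: pd.DataFrame) -> float:
    """ITT effect estimate weighted for dropout.

    Expected columns:
      assigned : randomized arm (0/1)
      severity : measured baseline severity score
      observed : 1 if the outcome was observed, 0 if the participant dropped out
      outcome  : outcome value (NaN when observed == 0)
    """
    # Model P(outcome observed | arm, baseline severity) with logistic regression
    design = sm.add_constant(df[["assigned", "severity"]])
    retention_fit = sm.GLM(df["observed"], design,
                           family=sm.families.Binomial()).fit()
    p_observed = np.asarray(retention_fit.predict(design))

    observed = df["observed"].to_numpy() == 1
    assigned = df["assigned"].to_numpy()
    outcome = df["outcome"].to_numpy()

    # Each complete case is up-weighted by 1 / P(observed | baseline factors)
    weights = 1.0 / p_observed

    def weighted_mean(arm):
        mask = observed & (assigned == arm)
        return np.average(outcome[mask], weights=weights[mask])

    return weighted_mean(1) - weighted_mean(0)

The weighting reconstructs the randomized comparison that would have been observed had no one dropped out, under the stated (and untestable) assumption that dropout depends only on the measured baseline factors included in the retention model.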

For a detailed explanation using the causal counterfactual framework to understand these issues, see "Analyses of Randomized Controlled Trials in the Presence of Noncompliance and Study Dropout," a white paper from the NIH Pragmatic Trials Collaboratory’s Biostatistics and Study Design Core.



Version History

April 30, 2024: Added an item to the Resources sidebar as part of the annual content update (changes made by D. Seils).

June 23, 2022: Updated the name of the NIH Collaboratory in the contributors list and made nonsubstantive changes as part of the annual content update (changes made by D. Seils).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

May 27, 2020: Reordered the sections of this chapter as part of the annual content update (changes made by D. Seils).

May 1, 2020: Made nonsubstantive changes to the Resources sidebar as part of the annual content update (changes made by D. Seils).

Published August 5, 2019.