What is a Pragmatic Clinical Trial?
Section 3
Differentiating Between RCTs, PCTs, and Quality Improvement Activities
Understanding the differences among traditional, explanatory RCTs, PCTs, and quality improvement (QI) activities is important because different ethical and regulatory guidelines may apply. RCTs and PCTs both constitute research, which is defined in federal regulations as an activity intended to create generalizable knowledge (45 Code of Federal Regulations [CFR] 46). In contrast, QI activities have been defined as “data-guided activities designed to bring about immediate improvements in health care delivery in particular settings” (Lynn et al. 2007). When a PCT is intended both to improve care locally and to provide generalizable knowledge, as in a learning health system, the distinction between QI and research is arguably fuzzy (Faden et al. 2013), especially as QI techniques become more sophisticated and extensive (Finkelstein et al. 2015). At its core, however, QI is designed to change local processes to achieve accepted standards of care, while pragmatic research is designed to determine the standards themselves (Finkelstein et al. 2015). A single trial may also have some elements that are more pragmatic (e.g., accepting all-comers with very limited exclusion criteria) and others that are more explanatory (e.g., no flexibility in how the intervention is delivered across settings) (Loudon et al. 2015). The key differences between RCTs, PCTs, and QI activities are shown in the following table; we explore the continuum of pragmatism in study design in more detail in the next section.
Key Differences Between Traditional RCTs, PCTs, and QI Activities
| Attribute | RCT | PCT | QI |
|---|---|---|---|
| Who develops the study questions? | Researchers | Clinical decision makers (patients, clinicians, administrators, and policy makers) (Califf and Sugarman 2015) | Clinicians, administrators, and policy makers |
| What is the purpose? | Create generalizable knowledge; determine causes and effects of treatments | Create generalizable knowledge, improve care locally, and inform clinical and policy decisions (Johnson 2014) | Improve care locally; inform clinical and policy decisions |
| What question does it answer? (Thorpe et al. 2009) | Can this intervention work under ideal conditions? | Does this intervention work under usual conditions? | How do I best implement this intervention? |
| Who is enrolled? | A cohort of patients with explicitly defined inclusion and exclusion criteria | Diverse, representative populations (Johnson 2014); inclusion and exclusion criteria still apply but tend to be broader | Patients in routine clinical care |
| Who collects data? | Researchers; data collection occurs outside of routine clinical care | Clinicians at the point of care, in cooperation with researchers; EHRs and registries are used as sources of research data | Clinicians at the point of care |
| What is studied? | “A biological or mechanistic hypothesis” (Califf and Sugarman 2015) | “The comparative balance of benefits, burdens and risks of a biomedical or behavioral health intervention at the individual or population level” (Califf and Sugarman 2015) | “Systematic, data-guided activities designed to bring about immediate improvements in health care delivery in particular settings” (Lynn et al. 2007) |
| What is compared? | Treatment vs. placebo or no treatment | The comparative effectiveness of real-world alternatives | Accepted standards based on published guidelines |
| Is the study randomized to control for potential biases? | Yes; usually at the individual level | Yes; may use experimental designs and randomization schemes such as cluster randomization (randomization by hospital or unit) or stepped wedge randomization, in which clusters cross over at random from control to intervention over time until all clusters are exposed (Hemming et al. 2015) | Varies |
| What is the setting? | Medical centers designated as research sites | Multiple, heterogeneous settings (Johnson 2014) | Local clinic or hospital; may include multiple clinics or hospitals |
| Adherence to the intervention | Strictly enforced (Zwarenstein et al. 2008) | Flexible, as it would be in usual care (Zwarenstein et al. 2008) | Normal practice |
| Outcomes | May be surrogates or process measures (Zwarenstein et al. 2008) | “Directly relevant to participants, funders, communities, and healthcare practitioners” (Zwarenstein et al. 2008) | Directly relevant |
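For readers who want a concrete picture of the stepped wedge scheme mentioned in the table, the sketch below (in Python) shows its mechanics: clusters are placed in random order, then crossed over one at a time from control (0) to intervention (1) until, by the final period, all clusters are exposed. The cluster names, period count, and one-cluster-per-step timing are illustrative assumptions, not drawn from any particular trial.

```python
import random

def stepped_wedge_schedule(clusters, periods, seed=None):
    """Build an illustrative stepped wedge exposure schedule.

    Clusters are randomly ordered, then each crosses over from
    control (0) to intervention (1) at successive periods.
    Assumes periods > number of clusters so that every cluster
    is exposed by the final period.
    """
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)  # the randomization step: random crossover order
    schedule = {}
    for i, cluster in enumerate(order):
        crossover = i + 1  # period 0 is all-control; one cluster crosses per step
        schedule[cluster] = [1 if t >= crossover else 0 for t in range(periods)]
    return schedule

# Illustrative example: 4 hospitals, 5 measurement periods
sched = stepped_wedge_schedule(["A", "B", "C", "D"], periods=5, seed=42)
for name, row in sorted(sched.items()):
    print(name, row)
```

Note the defining properties of the design: every cluster starts in the control condition, exposure is never withdrawn once it begins, and all clusters finish in the intervention condition; only the order of crossover is randomized.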
REFERENCES
Califf RM, Sugarman J. 2015. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clin Trials. 12:436–441. doi:10.1177/1740774515598334. PMID: 26374676.
Faden RR, Kass NE, Goodman SN, Pronovost P, Tunis S, Beauchamp TL. 2013. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics. Hastings Cent Rep. Spec No:S16–27. doi:10.1002/hast.134. PMID: 23315888.
Finkelstein JA, Brickman AL, Capron A, et al. 2015. Oversight on the borderline: quality improvement and pragmatic research. Clin Trials. 12:457–466. doi:10.1177/1740774515597682. PMID: 26374685.
Hemming K, Haines TP, Chilton PJ, Girling AJ, Lilford RJ. 2015. The stepped wedge cluster randomised trial: rationale, design, analysis, and reporting. BMJ. 350:h391. doi:10.1136/bmj.h391. PMID: 25662947.
Johnson K. 2014. Introduction to pragmatic clinical trials. https://dcricollab.dcri.duke.edu/sites/NIHKR/KR/Introduction%20to%20pragmatic%20clinical%20trials.pdf. Accessed August 2, 2017.
Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. 2015. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 350:h2147. doi:10.1136/bmj.h2147. PMID: 25956159.
Lynn J, Baily MA, Bottrell M, Jennings B, Levine RJ, et al. 2007. The ethics of using quality improvement methods in health care. Ann Intern Med. 146:666–673. doi:10.7326/0003-4819-146-9-200705010-00155. PMID: 17438310.
Thorpe KE, Zwarenstein M, Oxman AD, et al. 2009. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 62:464–475. doi:10.1016/j.jclinepi.2008.12.011. PMID: 19348971.
Zwarenstein M, Treweek S, Gagnier JJ, et al. 2008. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 337:a2390. doi:10.1136/bmj.a2390. PMID: 19001484.