Rethinking Clinical Trials

A Living Textbook of Pragmatic Clinical Trials


Real World Evidence: Clinical Decision Support

Section 5: Evaluating CDS

Contributors

Brian Douthit, MSN
Rachel Richesson, PhD, MPH
Keith Marsolo, PhD
Edward R. Melnick, MD, MHS
Corita R. Grudzen, MD
Lesley Curtis, PhD

Contributing Editor
Karen Staman, MS

The evaluation of CDS depends on the intent of the CDS itself. Unfortunately, there is no one-size-fits-all solution to evaluation; careful consideration must be given to the tool’s context and purpose. The PRIM-ER trial, for example, is neither powered nor designed to evaluate its CDS tool. For multisite trials, evaluation processes must be consistent and standardized to truly evaluate outcomes. In the following section, we provide more detail on CDS evaluation and describe some potential solutions.

What Is Success in CDS?

It is difficult to speak of the success of CDS tools in generalities, as they take many forms and serve many uses. However, the success of a CDS tool should be judged on both clinical and nonclinical domains: patient outcomes, end-user outcomes, functionality, workflow fit, and others. A successful CDS tool should solve the issue it was built to address, lend itself to measurement and monitoring, and feel transparent, accessible, useful, and noninterruptive to the end user.

Evaluating the success of a CDS system may be split into two general categories: formative and summative evaluations (Lobach 2016). Formative evaluation refers to the processes and factors that ensure a CDS is feasible and functions as intended. These evaluations often take the form of focus groups, Delphi studies, workflow analyses, and structured interviews. The discussions and analyses typically focus on the build, the data requirements, the feasibility (both the longevity of the CDS tool and the resources, cost, and ability to build it), and the overall strategy of the tool. In other words, a formative evaluation answers the questions: Should it be built? Can it be built? And how do we build it? Once the build is complete, post hoc testing ensures that the tool is indeed working as intended: Is it reaching the correct audience at the right time with the right data (ie, the Five Rights of CDS)?

Although formative evaluations are crucial, especially in the planning phase, summative evaluations cannot be overlooked. Summative evaluation refers to the process of evaluating the effects and outcomes of the CDS (Lobach 2016). Because the purpose of CDS in PCTs is often to support care processes and improve outcomes, measuring the success of a CDS tool without considering summative processes is incomplete. In contrast to formative evaluations, summative evaluations take the form of process measures (see Section 4) and assessments of the tool’s effects on specified clinical outcomes. This often takes the form of a trial, whether a full randomized controlled trial, a quasi-experimental study, or an observational study. The choice also depends heavily on the purpose of the tool; if a CDS tool has been developed purely for a business process, cost analysis may be the most appropriate form of summative evaluation. Again, depending on intent, a mix of analyses may be needed to fully evaluate the impact and success of the CDS tool against its intended design. To evaluate data-related attributes that may influence the success of a CDS tool, both formative and summative factors must be included.

Summary of Formative and Summative Evaluations

Formative evaluations: CDS feasibility and function
  • Should it be built? Can it be built?
  • Ensuring tool functionality (technical function)
  • The Five Rights of CDS
  • Usability and human factors analysis

Summative evaluations: process measures and outcomes
  • Clinical and nonclinical outcomes
  • Cost analysis
  • End-user satisfaction

Designing, Evaluating, and Implementing a CDS Intervention

The evaluation of CDS systems and CDS interventions is not formulaic; it depends on the purpose of the CDS tool and the nuances of the environment in which it is implemented. As mentioned previously, there are many types and methods of CDS evaluation, but it is quite possible that a direct metric is unavailable and a proxy measure must be used instead (Lobach 2016). Outcomes must be defined early in the development of a CDS intervention, and there must be a plan for measuring them, whether directly or by proxy. An example is using cost data to show adherence to appropriate lab-ordering guidelines, because it may be impossible to accurately review each lab order in a health system for appropriateness. In this case, the team should ensure that accurate cost data are available and put in place a means to extract those data, with measurements beginning before implementation.
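The cost-data proxy described above amounts to a pre/post comparison around the implementation date. The sketch below illustrates the idea; the order records, cost values, and go-live date are all made-up assumptions, not data from any actual trial:

```python
from datetime import date
from statistics import mean

# Hypothetical lab-order records: (order_date, cost_usd).
orders = [
    (date(2020, 1, 15), 42.0),
    (date(2020, 2, 20), 55.0),
    (date(2020, 4, 10), 30.0),
    (date(2020, 5, 5), 28.0),
]

GO_LIVE = date(2020, 3, 1)  # assumed CDS implementation date

pre = [cost for d, cost in orders if d < GO_LIVE]
post = [cost for d, cost in orders if d >= GO_LIVE]

# Mean cost per order serves as a proxy for guideline adherence,
# since reviewing every order for appropriateness is infeasible.
print(f"Pre-implementation mean cost:  ${mean(pre):.2f}")
print(f"Post-implementation mean cost: ${mean(post):.2f}")
```

In practice the pre-implementation measurements would be collected prospectively, as the text notes, so that the extraction pipeline is validated before the tool goes live.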

Another major challenge in measuring CDS outcomes (especially in research) is selecting the appropriate analytical methods and avoiding contamination (Lobach 2016). Depending on the CDS tool, it may be intended for use only in niche cases rather than frequently. Decision support is important in these cases, as the trigger may respond to a clinical situation the clinician is less familiar with because of its rarity, and support from a tool would be extremely beneficial for ensuring adherence to a protocol. However, if the tool is seldom used, the data available to evaluate its success may be inadequate to achieve the desired power. If the evaluation is conducted as a trial rather than a quality improvement project, consenting subjects may also limit the sample size, especially in predictive decision support, where the chance of an event occurring in a given subject is unknown and targeted recruitment is therefore difficult. These factors must be considered when planning the evaluation phase of a CDS tool, as extended time for data collection may be needed. CDS interventions evaluated in trials also pose the unique challenge that the patient is the unit of randomization while the clinician is the one who interacts with the tool. To avoid contamination, cluster randomization may be used, but again sample size becomes a concern unless the trial is particularly large.
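The sample-size penalty of cluster randomization can be made concrete with the standard design-effect formula, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. The values below are purely illustrative assumptions, not figures from any trial discussed here:

```python
import math


def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from randomizing clusters (eg, clinics)
    rather than individual patients."""
    return 1 + (cluster_size - 1) * icc


def inflated_sample_size(n_individual: int, cluster_size: float, icc: float) -> int:
    """Patients needed under cluster randomization, given the sample
    size required for an individually randomized design."""
    return math.ceil(n_individual * design_effect(cluster_size, icc))


# Illustrative values: 400 patients under individual randomization,
# about 16 patients per cluster, ICC of 0.0625.
deff = design_effect(16, 0.0625)
n = inflated_sample_size(400, cluster_size=16, icc=0.0625)
print(deff, n)
```

Even a modest ICC nearly doubles the required sample here, which is why the text cautions that cluster randomization is viable mainly in large trials.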

Maintenance Considerations

Unfortunately, the work does not end when a CDS tool is implemented. Throughout its lifetime, the tool requires constant maintenance and reevaluation of its functionality and appropriateness. Iterative evaluation may include reviewing how often the tool is used, considering user feedback, updating the displayed content as evidence-based practice evolves, expanding the tool to other patient populations or clinicians, modifying the level of the alert (including how interruptive it is to the workflow), or retiring the tool if it is no longer necessary. This can be a daunting process and is difficult without the right resources in place. While this is not solely the responsibility of the research team, the lifecycle of the tool should be considered.
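One concrete form of the usage review described above is summarizing clinician responses from an alert log. This is a minimal sketch assuming a hypothetical log format and an arbitrary override-rate threshold; real systems log far richer context:

```python
from collections import Counter

# Hypothetical log of clinician responses to a single CDS alert.
alert_log = [
    "accepted", "overridden", "overridden", "accepted",
    "overridden", "dismissed", "accepted", "overridden",
]

counts = Counter(alert_log)
total = len(alert_log)

for response, n in counts.most_common():
    print(f"{response}: {n}/{total} ({n / total:.0%})")

# A persistently high override rate is a common signal that an alert
# needs tuning (eg, narrower trigger criteria or a less interruptive
# display). The 40% threshold here is an arbitrary illustration.
override_rate = counts["overridden"] / total
if override_rate > 0.4:
    print("Override rate is high; consider revising the trigger logic.")
```

Trends in these counts over time can feed directly into the iterative decisions listed above, from content tweaks to retirement.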


CHAPTER SECTIONS

  1. Introduction
  2. Definitions and Uses
  3. Uses in PCTs: Experiences From the NIH Collaboratory Trials
  4. Designing and Building CDS Tools for Pragmatic Clinical Trials
  5. Evaluating CDS
  6. Disseminating and Sharing CDS
  7. Additional Resources

REFERENCES


Lobach DF. 2016. Evaluation of clinical decision support. In: Berner ES, editor. Clinical Decision Support Systems: Theory and Practice. 3rd ed. Switzerland: Springer: 147-161.



Version History

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 30, 2020


Citation:

Douthit B, Richesson RL, Marsolo K, et al. Real World Evidence: Clinical Decision Support: Evaluating CDS. In: Rethinking Clinical Trials: A Living Textbook of Pragmatic Clinical Trials. Bethesda, MD: NIH Pragmatic Trials Collaboratory. Available at: https://rethinkingclinicaltrials.org/chapters/conduct/real-world-evidence-clinical-decision-support/evaluating-cds/. Updated December 3, 2025. DOI: 10.28929/134.
