
Real World Evidence: Clinical Decision Support


Section 7

Additional Resources

  • PC-CDS Learning Network: A collaborative seeking to inform and connect stakeholders to address both current challenges and opportunities in CDS.
  • CDS Connect Project: A project aimed at promoting the uptake of CDS artifacts that support evidence-based standards of care. CDS authoring, dissemination, and implementation resources are available.
  • AHRQ Learning Health Systems: Resources and information regarding learning health systems, which provide a useful framework for designing and evaluating PCTs using CDS.
  • Improving Outcomes with Clinical Decision Support: An Implementer's Guide, Second Edition: A guide to CDS implementation.
  • HL7 CDS Hooks: An HL7 standard for integrating external CDS services with EHR workflows.
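As a small illustration of the CDS Hooks style of integration, a CDS service responds to a hook invocation with JSON "cards" that the EHR renders. The sketch below builds a minimal card response; the trigger text and service name are invented, and the field names follow the published CDS Hooks specification.

```python
import json

def make_card(summary: str, indicator: str, source_label: str) -> dict:
    """Build one CDS Hooks card; the spec keeps 'summary' under 140 characters."""
    return {"summary": summary, "indicator": indicator, "source": {"label": source_label}}

# A CDS Hooks service responds to a hook invocation with a JSON object
# containing a list of cards.
response = {"cards": [make_card(
    "Patient may be eligible for lung cancer screening",  # invented trigger text
    "info",                                               # one of info/warning/critical
    "Screening guideline service",                        # invented service name
)]}
print(json.dumps(response, indent=2))
```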




Version History

October 24, 2022: Added 2 new resources (changes made by K. Staman & L. Stewart).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 30, 2020



Section 6

Disseminating and Sharing CDS

Vision of the NIH Collaboratory

The goal of the NIH Collaboratory is to build a national infrastructure to support PCTs, including trials with positive results that can be adopted by any organization, regardless of size. When CDS is part of the intervention, it needs to be presented in a way that can be easily adapted by local organizations. Researchers should include plans to share their CDS-based interventions; some options are outlined below.

Why and When to Disseminate

Data and resource sharing is fundamental to the mission of the NIH Collaboratory, and sharing details about CDS tools used in the NIH Collaboratory Trials is consistent with that mission. Pragmatic trialists should plan to share details about effective CDS tools in order to disseminate their interventions beyond their organizations. This can be done through many channels, including publications and conferences. Beyond these established academic channels, publishing the CDS tool itself can save potential implementers months of work and the costs of local adoption, and it should be considered the gold standard of dissemination.

AHRQ has developed CDS Connect as a repository for CDS artifacts that identify and codify new evidence-based standards of care. Another such hosting platform is the SMART (Substitutable Medical Applications, Reusable Technologies) App Gallery, where apps are hosted for use on the web, via smartphones, or within the EHR. OpenCDS is a similar resource hosting standards-based, open-source CDS tools. These platforms may require some work on the part of the adopting institution, as FHIR profiles or terminology standards may differ from local practice, so extra mapping work may be necessary. EHR vendors also maintain ways of sharing artifacts with and between their customers; while this is not open source, it provides another mechanism for disseminating CDS tools.



Version History

December 3, 2025: Updated hyperlinks (changes made by G. Uhlenbrauck).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 30, 2020



Section 5

Evaluating CDS

The evaluation of CDS depends on the intent of the CDS itself. Unfortunately, there is no one-size-fits-all approach to evaluation; careful consideration must be given to the tool’s context and purpose. The PRIM-ER trial, for example, is not powered or designed to evaluate the CDS tool itself. For multisite trials, evaluation processes must be consistent and standardized to truly evaluate outcomes. The following section gives more detail on CDS evaluation and some potential solutions.

What Is Success in CDS?

To speak of the success of CDS tools in generalities is difficult, as they take many uses and forms. However, the success of a CDS tool should be judged on both clinical and nonclinical domains: patient outcomes, end-user outcomes, functionality, workflow fit, and others. A successful CDS tool should solve the issue it was built to address, be measurable and monitorable, and should feel transparent, accessible, useful, and noninterruptive to the end user.

Evaluating the success of a CDS system may be split into two general categories: formative and summative evaluations (Lobach 2016). Formative evaluation refers to the processes and factors that ensure that a CDS is feasible and functions as intended. Often these evaluations manifest as focus groups, Delphi studies, workflow analysis, and structured interviews. These discussions and analyses often focus on the build, the data requirements, the feasibility (both in terms of longevity of the CDS tool, and in terms of resources, cost, and ability to build) and the overall strategy of the tool. In other words, a formative evaluation answers the questions: Should it be built? Can it be built? And how do we build it? Once the build is complete, post hoc testing ensures that the tool is indeed working as intended: Is it reaching the correct audience at the right time with the right data? (ie, The Five Rights of CDS).

Although formative evaluations are crucial, especially in the planning phase, summative evaluations cannot be overlooked. Summative evaluation refers to the process of evaluating the effects and outcomes of the CDS (Lobach 2016). The purpose of CDS in PCTs is often to support care processes and improve outcomes, so measuring the success of a CDS tool without consideration of summative processes should be considered incomplete. In contrast to formative evaluation, summative evaluations take the form of process measures (see Section 4) and the effects of the tool on specified clinical outcomes. This often takes the form of a trial, whether a full randomized controlled trial, a quasi-experimental design, or an observational study. The choice is also highly dependent on the purpose of the tool; if a CDS tool has been developed purely for a business process, cost analysis may be the most appropriate form of summative evaluation. Again, depending on intent, a mix of analyses may be needed to fully evaluate the impact and success of the CDS tool based on its intended design. To evaluate data-related attributes that may influence the success of a CDS tool, both formative and summative factors must be included.

Summary of Formative and Summative Evaluations

Formative evaluations (CDS feasibility and function):
  • Should it be built? Can it be built?
  • Ensuring tool functionality (technical function)
  • The Five Rights of CDS
  • Usability and human factors analysis

Summative evaluations (process measures and outcomes):
  • Clinical and nonclinical outcomes
  • Cost analysis
  • End-user satisfaction

Designing, Evaluating, and Implementing a CDS Intervention

The evaluation of CDS systems and interventions is not formulaic; it depends on the purpose of the CDS tool and the nuances of the environment in which it is implemented. As mentioned previously, there are many types and methods of CDS evaluation, but it is very possible that a direct metric is not available and a proxy measure must be used instead (Lobach 2016). Outcomes must be defined early in the development of a CDS intervention, and there must be a plan for outcome measurement, whether direct or by proxy. An example is using cost data to show adherence to appropriate lab ordering guidelines, as it may be impossible to accurately review each lab order in a health system for appropriateness. In this case, it should be confirmed that accurate cost data are available, and a means to extract these data should be put in place, with measurements occurring prior to implementation.

Another major challenge in measuring CDS outcomes (especially in research) is selecting the appropriate analytical methods and avoiding contamination (Lobach 2016). Depending on the CDS tool, it is possible that the tool is not intended to be used very often, but only in niche cases. Decision support is important in these cases, as the trigger may be in response to a clinical situation that the clinician is not as familiar with due to its scarcity, and support from a tool would be extremely beneficial in ensuring adherence to a protocol. However, if seldom used, the data available to evaluate its success may not be adequate to achieve desired power. If being conducted as a trial rather than a quality improvement project, the consenting of subjects may also hinder the sample size, especially in predictive decision support where the chance of an event occurring in a subject is not known, making targeted recruitment difficult. These factors must be considered when planning the evaluation phase of a CDS tool, as extended time for data collection may be needed. CDS interventions being evaluated in trials also pose the unique challenge of having the patient as the unit of randomization, while the clinician solely interacts with the tool. To avoid contamination, cluster randomization may be used, but again sample size becomes a concern unless the trial is particularly large.
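The sample-size concern with cluster randomization can be made concrete with the standard design-effect calculation: randomizing clusters inflates variance by a factor of 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intraclass correlation. A minimal sketch, with illustrative numbers:

```python
def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from randomizing clusters instead of individuals."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_patients: int, cluster_size: float, icc: float) -> float:
    """Approximate number of independent observations a cluster trial provides."""
    return n_patients / design_effect(cluster_size, icc)

# Illustrative example: 2,000 patients in clinics of 100, with a modest
# intraclass correlation (ICC) of 0.05 for the outcome.
deff = design_effect(100, 0.05)              # 1 + 99 * 0.05 = 5.95
n_eff = effective_sample_size(2000, 100, 0.05)
print(round(deff, 2), round(n_eff, 1))       # 5.95 336.1
```

Even a modest ICC can shrink 2,000 enrolled patients to the statistical equivalent of a few hundred independent observations, which is why "particularly large" trials are often needed.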

Maintenance Considerations

Unfortunately, the work does not end when a CDS tool is implemented. For its lifetime, there must be constant maintenance and reevaluations of its functionality and appropriateness. Iterative evaluation may include reviewing how often it is being used, considering user feedback, tweaking the displayed content as evidence-based practice evolves, expanding the tool to other patient populations or clinicians, modifying the level of the alert (including how interruptive the alert is to the workflow), or retiring the tool if it is no longer necessary. This can be a daunting process and is difficult without the right resources in place. While this is not solely the responsibility of the research team, the lifecycle of the tool should be considered.


REFERENCES


Lobach DF. 2016. Evaluation of clinical decision support. In: Berner ES, editor. Clinical Decision Support Systems: Theory and Practice. 3rd ed. Switzerland: Springer: 147-161.


Version History

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 30, 2020

Designing and Building CDS Tools for Pragmatic Clinical Trials

Real World Evidence: Clinical Decision Support


Section 4

Designing and Building CDS Tools for Pragmatic Clinical Trials

The design and implementation of CDS tools should not only include careful consideration of their content and purpose, but their method of monitoring for success in terms of outcomes, functionality, and how they fit in the context of other CDS tools that are currently in place (see Section 5 for further details).

In 2022, the Food and Drug Administration (FDA) released guidance for industry and FDA staff about clinical decision support (CDS) software (FDA 2022). CDS is a broad term that encompasses a range of functions, including “computerized alerts and reminders for providers and patients, clinical guidelines, condition-specific order sets, focused patient data reports and summaries, documentation templates, diagnostic support, and contextually relevant reference information” (FDA 2022). Some CDS software meets the regulatory definition of a medical device and is therefore subject to regulatory oversight by FDA.

The FDA guidance outlines criteria, based on the 21st Century Cures Act, for determining which CDS software are not devices. CDS software is not a device if all of the following are true:

  1. it is not intended to acquire, process, or analyze a medical image (for example, from computed tomography [CT], x-ray, ultrasound, or magnetic resonance imaging [MRI]) or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system from within, attached to, or external to the body;
  2. it is intended for the purpose of displaying, analyzing, or printing medical information about a patient or other medical information (such as peer-reviewed clinical studies and clinical practice guidelines);
  3. it is intended for the purpose of supporting or providing recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and
  4. it is intended for the purpose of enabling a health care professional to independently review the basis for any recommendations, so that the professional does not rely primarily on those recommendations when making a diagnosis or treatment decision about a patient.

If software falls under FDA’s regulatory oversight, the next step is to determine whether an investigational device exemption (IDE) is required. This determination is based on risk; evaluating the risk of software depends on its function, who it is intended for, and what the user is supposed to do with the information the software provides. Note that a device being non-significant risk does not mean the research is minimal risk; those determinations are distinct.

The FDA encourages engagement early and often to understand how the regulations and policies apply to a particular product or technology. For specific questions, the presenters encouraged researchers to reach out to the FDA’s digital health experts at DigitalHealth@fda.hhs.gov.

In addition to the CDS guidance, there are other policies that may apply to a digital health technology, and the FDA created a Digital Health Policy Navigator to help people find the relevant guidance for their situation.

Resources Needed

To understand what resources are needed to build CDS implementations for PCTs, researchers must consider the EHR systems, workflows, policies, and personnel at each research site. The work required to implement CDS can vary by system depending on what data they already collect and how their system is configured. Some sites may not have the technical staff required to integrate new CDS into the EHR or may have policies that prohibit the addition of new CDS (to prevent the “alert fatigue” experienced by many clinicians). The cost of developing CDS can also be significant, as many organizations charge fees for building CDS, and there are expenses for consultation, development, coding, quality assurance, data pulls, and numerous other services. When developing a PCT using CDS, it is important to budget for these expenses and to investigate feasibility and costs with the partner health system ahead of time. The time required is also often underestimated: depending on organizational priorities, there may be a considerable waiting period before the build is complete. This refers not only to the time of the individuals on the development team but to time in a general sense; development, testing, and implementation may take many months. For this reason, the PCT timeline should be flexible and allow adequate time for development and validation.

To reduce build time, prepare a detailed specification of the required data elements before making the request. For example, if a specific diagnosis is required, it is beneficial to identify the list of ICD-10 codes that would correctly identify the needed patient population. Pointing the builder to a value set or predefined phenotype saves time in communicating specific data needs. It also helps to put the capabilities of the EHR into perspective. For instance, creating a list of all the data requirements for the recruitment tool and mapping them to existing sources of data collection in the EHR allows the research team to assess the feasibility and level of specificity possible with an EHR-based recruitment tool. From there, it may be possible to further assess data quality and workflow considerations to determine feasibility.
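As a sketch of what handing the builder a value set looks like in practice, the snippet below checks patient diagnosis codes against a code list. The codes shown are real ICD-10-CM type 2 diabetes codes used purely for illustration; an actual request would point to a published value set rather than a hand-maintained list.

```python
# Illustrative code list; a real build would reference a published value set.
DIABETES_T2_CODES = {"E11.9", "E11.65", "E11.22"}

def matches_value_set(patient_codes, value_set):
    """Return the patient's diagnosis codes that fall within the value set."""
    return sorted(set(patient_codes) & value_set)

# A patient with type 2 diabetes (E11.9) and hypertension (I10):
print(matches_value_set(["E11.9", "I10"], DIABETES_T2_CODES))  # ['E11.9']
```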

Most importantly, building, measuring, and maintaining a CDS tool requires a team. There will be a number of individuals external to the PCT team that will be involved depending on particular domains of expertise that are required. First, there must be technical and informatics experts available to build the tool into the EHR (or desired channel) using specifications provided by the research team. They may also give guidance to facilitate interoperability and determine if the requested data are available for the tool. A data manager will provide additional insight into data collection for multisite trials and facilitate trial evaluation (i.e. surveys, interview data, and heuristics). Another important individual to consult may be a clinical expert, or an informaticist well-versed in the appropriate workflow targeted by the PCT. Understanding workflow is essential, as integrating CDS tools with workflow has been found to be a predictor of their success (Khalifa 2014). In addition, there is a noted synergy between workflow and data experts; understanding when the data will be available contributes greatly to determining the feasibility of building a CDS tool. The required data may exist, but it might not be available when it is needed. Finally, involving key stakeholders in the development and implementation process is crucial, including end-users. Participatory design in CDS development has been found to be an effective method to assure adherence and functionality of CDS (Jeffery et al 2017). Other stakeholders will be dependent on the PCT, and may also involve leadership, financial services, and patients. These examples are not meant to be exhaustive, as there may be many other consultants who are necessary to engage. For instance, security experts may be needed when developing a CDS intervention using a mobile app. 
Always consider what resources are needed prior to conducting a trial; the information these experts provide can save considerable time and frustration when developing a complex CDS tool.

Specificity and Utility

Paramount to the functionality of CDS is the ability to target the intended recipient of the intervention. EHR data are complex and variable, and defining the patient in terms of these data can be difficult. For EHR-based CDS, it is important to define a phenotype: the data that define the characteristics of a specified cohort. Identifying a specific phenotype allows for consistent targeting of a heterogeneous population, making the CDS functional and feasible to implement. However, there are challenges when selecting or creating a phenotype. In a 2013 study, Richesson et al. examined a number of validated phenotypes for diabetes mellitus and found that the different definitions yielded meaningfully different results (Richesson et al 2013). When selecting a phenotype, especially if there are multiple choices for a similar population, it is important to select the definition that best coincides with the research being conducted, the practice setting, and the purpose of the CDS tool. This may also mean selecting a more specific set of inclusion criteria, depending on the required specificity. (See the Electronic Health Records–Based Phenotyping chapter of the Living Textbook.)

To determine the level of specificity required for the CDS, the workflow should be closely examined, with an outlined use case developed for this initial analysis. First, decide how frequently the alert should fire. Should it fire at a certain point in the workflow? Does it warrant multiple warnings to the same clinician? Is it critical enough to justify false positives, or would these cause alert fatigue? By asking questions like these, the criteria for the CDS tool can be developed or modified from the original phenotype.

Hypothetical Example: A Screening Tool for Lung Cancer Implemented in a Primary Care Setting

Using guidance from the American Cancer Society, an alert to suggest screening to the clinician is instituted if the patient is age 55 to 74 and is a current smoker. Testing of the alert before implementation shows that 70% of the patients at this clinic would be recommended for screening, which is not specific enough for practical use. To target patients at highest risk, only those with a 30 pack-year smoking history are included in the alert criteria, dropping the population captured by the alert to an acceptable number.
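The tightened criteria from this hypothetical example could be expressed as a simple eligibility check. The function name and thresholds come straight from the example above; this is an illustration, not a validated screening rule.

```python
def lung_screening_alert(age: int, current_smoker: bool, pack_years: float) -> bool:
    """Fire the screening alert only for the highest-risk patients:
    age 55-74, current smoker, and at least a 30 pack-year history."""
    return 55 <= age <= 74 and current_smoker and pack_years >= 30

print(lung_screening_alert(60, True, 35))  # True
print(lung_screening_alert(60, True, 10))  # False: under 30 pack-years
```

Tightening or loosening the tool is then a matter of adjusting these thresholds and retesting against local data, as the surrounding discussion describes.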

From this example, initial criteria are tested and found not specific enough, so further criteria are added. This change could also take the form of modifying a diagnosis list, redefining upper or lower limits of a laboratory value, or including additional criteria such as family history. There are many options, and the criteria are limited only by the available data. The example also shows the value of pretesting: simulating the CDS tool before implementation is highly recommended and can save time and frustration. If the CDS targeting criteria are too specific, criteria must be removed or relaxed. Regardless, there should be an expectation of the alert’s specificity, whether derived from previous evidence or from the data of the health institution where the tool will be implemented. If the specificity of the tool is not calibrated, it will not have the desired effect: the tool may be retired or removed because it is not usable, or it may even cause harm if recommendations are given to a patient who should not have met the criteria.


REFERENCES


Food and Drug Administration. 2022. Clinical Decision Support Software - Guidance for Industry and Food and Drug Administration Staff. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software

Jeffery AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. J Am Med Inform Assoc. 2017;24(6):1102-1110. doi:10.1093/jamia/ocx060. PMID: 28637180.

Khalifa M. 2014. Clinical decision support: strategies for success. Procedia Comp Sci. 37:422–427. doi:10.1016/j.procs.2014.08.063.

Richesson RL, Rusincovitch SA, Wixted D, et al. 2013. A comparison of phenotype definitions for diabetes mellitus. J Am Med Inform Assoc. doi: 10.1136/amiajnl-2013-001952.


Version History

February 22, 2024: Updated text with information from FDA’s 2022 Guidance (changes made by K. Staman).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 30, 2020



Section 3

Uses in PCTs: Experiences From the NIH Collaboratory Trials

In PCTs, CDS can be integral to the conduct of studies by supporting the intervention or by serving as the intervention itself. Here, we will give some use cases of CDS and how they may support or be the focus of PCTs.

Recruitment

One obvious and practical use of CDS is in recruitment for trials. Recruitment of subjects can be very time-consuming and costly without CDS if a member of the research team must manually audit patient charts for inclusion and exclusion criteria, potentially several times a day depending on the research protocol. CDS can facilitate patient recruitment by automatically identifying patients with specific lab values, diagnoses, ages, genders, and other inclusion and exclusion criteria specified by the protocol. A registry can perform this function, track patient flow for future reporting, and drive CDS logic based on registry inclusion to contact patients and deliver intervention components. A CDS tool may alert teams when a patient becomes trial eligible or indicates receptivity to participating. Building these CDS tools is feasible with the appropriate resources.
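An automated screen of the kind described above reduces to checking each patient record against the protocol's criteria. The sketch below is purely illustrative: the criteria, field names, and diagnosis code are invented for the example, not drawn from any particular trial.

```python
# Hypothetical protocol criteria; values and field names are invented.
CRITERIA = {"min_age": 18, "max_age": 80, "required_dx": "I50.9", "max_creatinine": 2.0}

def is_eligible(patient: dict) -> bool:
    """Check one patient record against the trial's inclusion/exclusion criteria."""
    return (
        CRITERIA["min_age"] <= patient["age"] <= CRITERIA["max_age"]
        and CRITERIA["required_dx"] in patient["diagnoses"]
        and patient["creatinine"] <= CRITERIA["max_creatinine"]
    )

cohort = [
    {"id": 1, "age": 66, "diagnoses": ["I50.9"], "creatinine": 1.1},
    {"id": 2, "age": 85, "diagnoses": ["I50.9"], "creatinine": 1.0},  # excluded: age
]
print([p["id"] for p in cohort if is_eligible(p)])  # [1]
```

In a production build this logic would live inside the EHR or a registry query rather than a standalone script, but the structure of the check is the same.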

There are many ways to identify a list of potential participants for a trial. If patients are to be approached or assessed for eligibility by their care provider, the CDS may need to be integrated with the workflow. A systematic review found that workflow integration was a more important factor than the actual algorithm of the CDS tool (Köpcke and Prokosch 2014). Aside from the data requirements, it is also important to consider how the recruitment tool will be used and who the intended audience is. Does the research protocol involve acutely ill patients who arrive through the emergency department? If so, are the data available at the time they are needed? In an emergency, not all of the desired data will be available at the point of care through the EHR, which may affect the desired functionality of the tool.

Supporting the Interventions

Many PCTs focus on the application of knowledge—often in the form of clinical practice guidelines—to practice. Some of these guidelines are complex and require multiple components to implement properly, including communications and shared data, information, and knowledge across care teams representing different disciplines and even organizations. Order sets, links to external information, and flowcharts may be created to facilitate information sharing. For instance, there could be a new protocol supporting pain management. To standardize the process, CDS could be used to ensure all clinicians prescribe the correct medications and interventions according to the protocol. In this case, the CDS is not being tested itself. Rather, it is simply supporting the research process by ensuring the intervention is administered as intended.

Case Example: PRIM-ER

The Primary Palliative Care for Emergency Medicine (PRIM-ER) trial is a pragmatic, cluster randomized trial designed to shift emergency care for seriously ill older adults away from treatment of acute illness and injury to goal-concordant palliative care in appropriate patients (Grudzen et al 2019). The intervention includes palliative care education, simulation-based workshops, CDS to identify patients, and audit and feedback (Grudzen et al 2019).

The CDS for PRIM-ER is an alert intended to help emergency department providers identify patients who could be candidates for palliative care, including those who had already had consultations or contacts with social services, hospice, or palliative care experts, and patients who had terminal or critically serious conditions. The investigators and research staff initiated discussions with sites 6 to 7 months before implementation. At the first meetings, the investigators described the study and suggested CDS tools used at the host organizations. The meetings included clinical investigators and local IT staff who helped identify which CDS supportive tools would work best in that organization and how. Over the next 6 to 7 months, the investigators provided mapping documents for sites to use or adapt that included known local terms or ICD/CPT and other codes mapped to concepts that serve as triggers to identify patients for palliative care. Sites could use the information to check the feasibility of local mapping and to decide which triggers and tools would be most useful at the specific site. By engaging local experts in design, the investigators hoped to develop CDS that not only complements local workflows and existing systems but is also sustainable.

The customized nature of the CDS design and monitoring at each site required deliberate and thoughtful approaches by the research team. While investigators had anticipated that the CDS would need to be flexible, they expected it could be more standardized across sites than it was, particularly among sites that used the same EHR system. However, because the sites all have different populations, workflows, and policies, the CDS had to be customized at each place. For example, at the first site, the investigators developed an interruptive alert that fired if a potentially eligible patient presented to the emergency department. At a different site, which did not allow interruptive alerts per organizational policy, the investigators used a passive banner on the patient record instead. They determined that the banner would be sufficient for the trial because it addressed the required function (ie, it helped make the emergency department provider aware of the patient’s condition and possible alternatives, including a palliative care consultation). By focusing on the function of the CDS rather than its form, the investigators deployed a variety of approaches across sites that differed in design but were true to the spirit of the CDS.

Additionally, the study investigators intended for the CDS to be not only useful but also sustainable. Therefore, the investigators encouraged every site to monitor its CDS, and although they did not prescribe the methods or the exact metrics for this, they did suggest monitoring it as part of Plan, Do, Study, Act (PDSA) cycles. Different sites had different outcome metrics due to the unique nature of each emergency department and health system. At some sites, the audit was based on the number of alerts; at others, it was based on the number of referrals to social work, palliative care consults, hospice, etc.

Based on their experience with PRIM-ER, the investigators have provided the following success factors for CDS:

  • Engage clinical and technical (EHR) experts at each site
  • Allow a lot of time to develop the tools
  • Focus on function, not form
  • Enable local adaptation of supportive CDS tools
  • Continuously monitor at all sites but allow the evaluation to be customized
  • Continuously engage with local stakeholders

CDS as the Intervention

In addition to serving as a supporting tool for complex interventions or other research activities, CDS tools themselves may be the interventions under evaluation in a PCT. In these cases, the CDS tool may include order sets (for specific medications or test/procedure orders), reminders, practice recommendations, clinical calculators, and others. If CDS is the primary mode of delivery for the intervention, it must be evaluated with the full rigor of the PCT. This evaluation includes the impact on patients and also the acceptability for clinicians—a requirement for pragmatic interventions so that they will be widely adopted into practice.

To measure the effectiveness of CDS, there must be a reliable way to measure patient outcomes; to be feasible, it needs to require little or no additional data collection (Richesson et al 2020). It is important to consider how outcome measures will be collected as part of the feasibility assessment of the CDS tool. Often, the research team has extra resources as a part of a research grant, and there is dedicated effort for data collection. This includes measures to determine if the target audience viewed the CDS, their actions (ordered a medication, discontinued an order, documented in a flow sheet, dismissed/ignored, etc), and the number of alerts. Asking end-users to rate the CDS as they see it in practice, collecting qualitative feedback about the CDS, or gathering data in a more structured format are commonly used approaches.
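Process measures like those above are typically computed from the EHR's alert log. A minimal sketch of that computation follows; the log records and field names are hypothetical, invented for illustration.

```python
# Hypothetical alert-log records; field names are invented for this sketch.
alert_log = [
    {"clinician": "A", "action": "ordered"},
    {"clinician": "B", "action": "dismissed"},
    {"clinician": "A", "action": "dismissed"},
    {"clinician": "C", "action": "ordered"},
]

def alert_summary(log):
    """Summarize alert volume and the share of alerts acted upon."""
    total = len(log)
    acted = sum(1 for entry in log if entry["action"] == "ordered")
    return {"alerts": total, "acted_on": acted, "acceptance_rate": acted / total}

print(alert_summary(alert_log))  # {'alerts': 4, 'acted_on': 2, 'acceptance_rate': 0.5}
```

A real evaluation would distinguish more actions (discontinued an order, documented in a flow sheet, ignored) and pair these counts with the qualitative end-user feedback the text describes.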

Regardless of the format, there must be ample evidence to support the use of the intervention as evidence-based practice, and it must be possible for others to monitor its success without the support of a research team (ie, pragmatic on the PRECIS scale). In other words, careful planning is required to ensure a feasible way to measure the impact and effectiveness of the CDS tool, both in terms of effort and the availability of a reliable data source. In addition, if these interventions are implemented in a PCT and found to be effective in achieving the desired outcome, then researchers need to think about and plan for how they can be disseminated and adopted by other organizations.

Case Example: EMBED

Buprenorphine (BUP) is an effective treatment option for patients with opioid use disorder, and although patients with opioid use disorder often present to the emergency department, BUP is rarely initiated as a part of routine emergency department care (Melnick et al 2019a). The Pragmatic Trial of User-Centered Clinical Decision Support to Implement Emergency Department-Initiated Buprenorphine for Opioid Use Disorder (EMBED) is designed to test the use of CDS as an intervention for improving the rates of initiation of BUP for patients with opioid use disorder who are treated in the emergency department (Ahmed et al 2019; Melnick et al 2019a; Melnick et al 2019b; Ray et al 2019).

This 18-month parallel, cluster-randomized trial will evaluate the intervention in 20 emergency departments in 5 healthcare systems nationally (Melnick et al 2019b).

Design

To design the CDS, the investigators elicited feedback from 26 emergency department physicians to determine the necessary elements, which were that the CDS "(1) identify patients appropriately, (2) avoid workflow disruptions, (3) streamline clerical burden, and (4) help users understand the treatment process" (Melnick et al 2019a; Melnick et al 2019b). The intervention also needed to be vendor agnostic and capable of integration within multiple healthcare systems, and because the CDS is the intervention being tested, it was important to be sure that it worked the same at each organization. While the EMBED trial required this approach, this may not be essential in other studies if there is a single vendor.

The challenges and solutions encountered by investigators are described in detail elsewhere (Melnick et al 2019a), and we highlight a few of them here:

  • Challenge: Each health system in the trial uses a different EHR platform and/or different build of the same vendor’s product, and there was limited ability to customize the CDS available from the EHR vendor.
    • Solution: The investigators created an EHR-integrated web application with a graphical user interface similar to the final design prototype. The web application "automates a care pathway that includes patient-specific orders (emergency department medications, prescriptions, and referral) and documentation (a note in the chart reflecting the use of the app and discharge instructions)" (Melnick et al 2019a).
  • Challenge: In the healthcare system where the CDS was being built, initial design and implementation took approximately 6 months. When it was proposed for dissemination in the 4 other health systems in the trial network, the informatics leaders in these systems expressed reservations due to "(1) the resources required for local customization and maintenance of a nonstandards-based intervention and (2) the security limitations and potential loss of control of a centralized, nonstandards-based solution hosted outside of their system" (Melnick et al 2019a).
    • Solutions: The investigators addressed organizational issues first, including adding resources to build CDS and address security concerns. They customized the alert for each site and tested the data for sensitivity and specificity. They also tested the tool at each site for user acceptability.
Case Study from NOHARM

The Nonpharmacologic Options in Postoperative Hospital-based and Rehabilitation Pain Management (NOHARM) pragmatic trial tests a suite of CDS tools embedded in the shared EHR of 4 semiautonomous healthcare systems. In addition to self-management educational materials and a portal-based conversation guide, the NOHARM intervention includes CDS tools to advance patient-centered, guideline-concordant care. Pragmatic, EHR-based strategies have not yet been tested as a means of enhancing postoperative pain care and aligning care with evidence-based standards. The NOHARM intervention seeks to make patients aware of the benefits and effective use of nonpharmacologic pain modalities. Its goal is to encourage patients to combine nonpharmacologic care with medications and other approaches to managing their postoperative pain and thereby diminish reliance on opioids after surgery. Nonpharmacologic pain care (NPPC) is both integral to current guidelines and consistently underutilized despite a robust evidence base.

Because perioperative care spans diverse sites, providers, and workflows, EHR CDS offers a unique opportunity to strategically insert defaults and prompts at feasible points in the workflow to advance NPPC. CDS can be specified to trigger on "EHR events," such as order placement, entry of a severe pain score, or registration for a clinic visit. Although the frequency and timing of such events vary widely depending on the type and location of surgery, by leveraging CDS, the NOHARM intervention can introduce NPPC content at appropriate times matched to a patient's surgical type, setting (inpatient vs. outpatient), and phase of perioperative care.
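Event-driven triggering of this kind can be sketched as a simple mapping from EHR events to content. The event names and content identifiers below are hypothetical, not the actual NOHARM build.

```python
# Hypothetical event-to-content routing for NPPC decision support.
# Event names and content IDs are illustrative assumptions.
NPPC_TRIGGERS = {
    "surgical_order_placed": "portal_conversation_guide",
    "severe_pain_score": "nppc_modality_prompt",
    "clinic_visit_registration": "nppc_education_reminder",
}

def route_cds(event_type):
    """Map an EHR event to the NPPC content to surface, if any.

    Returns None for events that should not trigger NPPC content,
    so most EHR activity passes through silently.
    """
    return NPPC_TRIGGERS.get(event_type)
```

A real implementation would also filter on surgical type, care setting, and perioperative phase before surfacing content, as the trial description notes.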

The NOHARM CDS tools inform care from the placement of the initial surgical order until 3 months after a patient's surgery, when the patient rotates off study. Surgical order placement triggers the sending of a portal-based conversation guide that educates patients about NPPC and prompts them to select three modalities to include in their pain management plan. All NPPC options are validated for postoperative pain management and include movement (walking, yoga, and Tai Chi), relaxation (meditation, breathing, music, guided imagery, muscle relaxation, and aromatherapy), and physical (acupressure, massage, cold or heat, and TENS) modalities.

The guide additionally queries patients about their pain-related anxiety, confidence, and medication use. The conversation guide is built on an EHR questionnaire base with embedded HTML and graphics to optimize user experience and engagement. Because the guide uses portal questionnaire functionality, patients’ NPPC selections and item responses are saved in the EHR and can be incorporated in logic and algorithms that drive CDS farther along the surgical care pathway. Patients must submit their questionnaire responses to save them, as shown in the Figure.

Patient information included in the NOHARM trial registry includes patients' responses to the conversation guide items and clinical documentation during routine care; these are triangulated to individualize CDS to reflect patients' characteristics, preferences, and level of interaction with the NOHARM intervention. The Epic EHR CDS is mapped to perioperative workflows and prompts nurses, physicians, and physical/occupational therapists to discuss and support the patient's preferred nonpharmacologic options for pain management. For example, when a patient is admitted for surgery, if they have not yet entered NPPC preferences via the portal, the alert pictured below will prompt a nurse to solicit the patient's NPPC preferences and enter them in EHR flowsheets.

In contrast, if the patient has already made NPPC selections via their portal prior to surgery, the inpatient nurse will see this alert screen and be prompted to deliver pain management training, as feasible, and document what has been done.
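The branching between these two admission alerts can be sketched as follows; the patient record field and alert labels are illustrative assumptions, not the production Epic logic.

```python
def inpatient_nurse_prompt(patient):
    """Choose which admission alert a nurse sees, based on whether the
    patient already entered NPPC preferences via the portal.

    `patient` is a dict; the 'nppc_preferences' field name is an
    illustrative assumption, not an actual EHR schema.
    """
    prefs = patient.get("nppc_preferences") or []
    if not prefs:
        # No portal selections yet: prompt the nurse to solicit
        # preferences and document them in flowsheets.
        return "solicit_preferences"
    # Selections exist: prompt the nurse to deliver training on the
    # chosen modalities and document what was done.
    return "deliver_training"
```

Because the portal questionnaire files responses directly into the EHR, this branch can be evaluated at admission without any manual chart review.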

Nurses are presented with the dedicated NOHARM interface, below, to enter a patient’s NPPC preferences. The preferences will file directly to flowsheets and can be used to drive CDS.

After a patient’s preferences have been documented, new orders will populate, directing providers along the patient’s perioperative care pathway to support the patient’s selected preferences.

As part of the NOHARM intervention, instructions were developed specifically for inpatient nurses, physical and occupational therapists, and post-anesthesia care unit or postoperative nurses, so that patients can be supported in their pain management preferences across the entire care continuum.

Dr. Andrea L. Cheville, Co-PI of NOHARM, provides an update on the project in this interview.

NOHARM has engaged more than 48,000 patients toward a target of roughly 100,000 patients total. Dr. Cheville said the project has demonstrated that patients' receptivity to different NPPC modalities is dynamic over the course of their perioperative journey. For example, patients who initially select heat and massage prior to surgery may try, and ultimately prefer, aromatherapy or guided imagery. Nurses play a critical role in helping patients identify the modalities that are likely to offer the greatest benefit.

Intervention Outcomes

The use of CDS varies across studies, and CDS does not always need to provide prescriptive suggestions to support care processes. CDS supports a variety of outcomes, including those in the following categories (Bright et al 2012):

Summary of CDS Outcome Categories
Category Example Measures
Clinical Morbidity/mortality, length of stay, quality of life, adverse events
Healthcare process Impact on user knowledge, adherence to guidelines/process
Healthcare provider workload, efficiency, and organization Clinician workload, efficiency, throughput, burnout
Relationship-centered Patient satisfaction
Economic Cost, resource utilization
Healthcare provider use and implementation User acceptance and satisfaction

When developing a CDS tool as an intervention, it is important to consider from the beginning how it will be evaluated. As the categories above suggest, it may be desirable to measure multiple outcomes, especially for research. Positive outcomes in multiple categories provide stronger evidence of the intervention's effectiveness and a stronger case for other institutions to adopt it. For instance, a tool shown to have a positive effect on both clinical and economic outcomes is much more likely to be adopted. Regardless, all proposed outcome measures, with operationalized definitions for data collection, should be specified in the PCT research protocol. If other potential outcomes are discovered to be important after initiation of the trial, investigators should write an amendment to the protocol and begin data collection as soon as possible, especially if it is relatively early in the trial.

The feasibility of easily and reliably collecting the data needed to measure outcomes should be strongly considered. Once the required data are identified, will they be available? If so, can they be collected electronically/automatically, or will manual abstraction or measurement be required? Will the data be available at the right time in the workflow to be useful? If not, are there secondary proxy measures that may be available? If many of the data points must be collected manually, this poses a great threat to the longevity of the tool. The overall "success" of a CDS tool, therefore, includes measures of both clinical effectiveness (ie, patient impact) and practical implementation (eg, acceptability, EHR integration).


Resources

Examples and resources for CDS-assisted recruitment in PCTs:

A real-time screening alert improves patient recruitment efficiency. AMIA Annu Symp Proc. 2011;2011:1489-1498. PMID: 22195213.

Desiderata for major eligibility criteria in breast cancer clinical trials. AMIA Annu Symp Proc. 2015;2015:2025-2034. PMID: 26958302.

Effect of a clinical trial alert system on physician participation in trial recruitment. Arch Intern Med. 2005;165(19):2272-2277. doi:10.1001/archinte.165.19.2272. PMID: 16246994.

REFERENCES


Ahmed OM, Mao JA, Holt SR, et al. 2019. A scalable, automated warm handoff from the emergency department to community sites offering continued medication for opioid use disorder: lessons learned from the EMBED trial stakeholders. J Subst Abuse Treat. 102:47-52. doi:10.1016/j.jsat.2019.05.006. PMID: 31202288.

Köpcke F, Prokosch H-U. 2014. Employing computers for the recruitment into clinical trials: a comprehensive systematic review. J Med Internet Res. 16(7):e161. doi:10.2196/jmir.3446. http://www.jmir.org/2014/7/e161/. PMID: 24985568.

Grudzen CR, Brody AA, Chung FR, et al. 2019. Primary Palliative Care for Emergency Medicine (PRIM-ER): protocol for a pragmatic, cluster-randomised, stepped wedge design to test the effectiveness of primary palliative care education, training and technical support for emergency medicine. BMJ Open. 9(7):e030099. doi:10.1136/bmjopen-2019-030099. PMID: 31352424.

Melnick ER, Holland WC, Ahmed OM, et al. 2019a. An integrated web application for decision support and automation of EHR workflow: a case study of current challenges to standards-based messaging and scalability from the EMBED trial. JAMIA Open. doi:10.1093/jamiaopen/ooz053. PMID: 32025639.

Melnick ER, Jeffery MM, Dziura JD, et al. 2019b. User-centred clinical decision support to implement emergency department-initiated buprenorphine for opioid use disorder: protocol for the pragmatic group randomised EMBED trial. BMJ Open. 9(5):e028488. doi:10.1136/bmjopen-2018-028488. PMID: 31152039.

Ray JM, Ahmed OM, Solad Y, et al. 2019. Computerized clinical decision support system for emergency department-initiated buprenorphine for opioid use disorder: user-centered design. JMIR Hum Factors. 6(1):e13121. doi:10.2196/13121. PMID: 30810531.

Richesson RL, Rusincovitch SA, Wixted D, et al. 2013. A comparison of phenotype definitions for diabetes mellitus. JAMIA. 20(e2):e319-26. doi:10.1136/amiajnl-2013-001952. PMID: 24026307.


Version History

October 24, 2022: Added NOHARM Case Study information and made minor nonsubstantive text edits (changes made by K. Staman & L. Stewart)

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 30, 2020

Definitions and Uses

Real World Evidence: Clinical Decision Support


Section 2

Definitions and Uses

What Is Clinical Decision Support?

CDS encompasses a wide variety of tools, such as alerts and reminders, clinical practice guidelines, customized order sets, data visualization dashboards or interfaces, documentation templates, diagnostic support, and other reference information—all tailored to clinicians' data, information, and knowledge needs (see Table; AHRQ 2022).

CDS provides clinicians, staff, patients or other individuals with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care. CDS encompasses a variety of tools to enhance decision-making in the clinical workflow. These tools include computerized alerts, prompts, and reminders to care providers and patients; clinical guidelines; condition-specific order sets; focused patient data reports and summaries; documentation templates; diagnostic support, and contextually relevant reference information, among other tools. — Office of the National Coordinator (ONC)

The idea of CDS tools emerged from the recognition that medicine and healthcare have become data and knowledge intensive. Clinicians have limited cognitive capacity to assimilate or process high-volume, high-dimensional data and can require tools that filter, prioritize, and present this information. Consider, for example, the protocols that health systems have established for suspected pneumonia, which often include a combination of labs, medications, and imaging. This evidence-based knowledge can be represented by a CDS tool in the form of an order set, alleviating cognitive burden and saving time that could otherwise be spent with the patient. CDS allows for more streamlined workflows and helps clinicians adhere to standard clinical processes. Without such tools, clinicians may be overwhelmed with keeping track of patient data and acting on it in a timely manner.
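As a rough illustration, such an order set can be represented as structured data that expands into individual orders for one-click entry. The specific orders below are illustrative placeholders, not a clinical recommendation.

```python
# Hedged sketch of a pneumonia order set as data; orders shown here
# are illustrative, not an actual protocol.
PNEUMONIA_ORDER_SET = {
    "name": "Suspected community-acquired pneumonia",
    "labs": ["CBC", "blood cultures x2", "basic metabolic panel"],
    "imaging": ["chest x-ray, PA and lateral"],
    "medications": ["empiric antibiotic per local antibiogram"],
}

def flatten_orders(order_set):
    """Expand an order set into (category, order) pairs so the EHR can
    queue them for signature in a single action."""
    return [
        (category, item)
        for category in ("labs", "imaging", "medications")
        for item in order_set[category]
    ]

orders = flatten_orders(PNEUMONIA_ORDER_SET)
```

Representing the protocol as data rather than free text is what lets the EHR enforce completeness and keep the content maintainable as guidelines change.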

Further, our knowledge of biomedicine is growing exponentially (Fontelo and Liu 2018). No clinician can keep up with the medical literature or guidelines. Therefore, tools that help clinicians search, retrieve, and integrate relevant knowledge and guidelines are important and have been a driving force for CDS.

Table. Types of CDS Formats
Name Description Examples
Order sets Structured sets of orders based around an objective or clinical problem with logic that can specify when and how they appear Blood transfusion order sets, TPN order sets, stroke/TIA order sets, admission order sets
Dashboards Visualizations, whether interactive or passive, often to aid in decision-making and monitoring for a large number of individuals Advanced sepsis monitoring with integrated machine learning algorithms, hospital-acquired infection monitoring
Tailored forms and flowsheets Structured documentation templates to attempt to standardize responses and choices, or to standardize a documentation process among several individuals Interdisciplinary surgical checklist, structured H+P documents, care pathways
Dynamic guidelines Multi-step tools that guide a clinician to a decision based on how they answer a number of questions Catheter removal protocols, chemotherapy protocols
Infobuttons and reference guides Integrated links and resources to provide knowledge at the time of decision-making Integrated drug reference information in a medication administration record, drug dosing calculators
Alerts and reminders Passive or active notifications that guide decisions by giving additional information (often by other methods as described in this table) or providing additional functionality to make a more informed decision Allergy/drug interaction alerts, vaccine reminders, duplicate therapy alerts, critical lab results
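As a toy illustration of the last category in the table, a minimal allergy check at order entry might look like the following. The matching here is deliberately simplistic (exact, case-insensitive); real systems match on drug classes and cross-sensitivities using terminology services.

```python
def allergy_alert(order_drug, allergy_list):
    """Return an alert message if the ordered drug matches a documented
    allergy, else None.

    Illustrative only: production allergy checking uses coded drug
    terminologies and class-level cross-reactivity, not string equality.
    """
    hits = [a for a in allergy_list if a.lower() == order_drug.lower()]
    if hits:
        return f"ALERT: patient has a documented allergy to {hits[0]}"
    return None  # no match: the order proceeds without interruption
```

Even this trivial rule shows the core design question of the section that follows: the same logic could fire passively, interruptively, at selection time, or at signing time.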

CDS Delivery, Design, and Challenges

The near-ubiquitous adoption of electronic health records (EHRs) creates the potential for CDS tools to be integrated into clinical workflows. In fact, CDS has become a primary focus of informatics and a driving use case for the adoption of EHR systems. There are many examples where CDS has been integrated into EHR systems to support improvements in clinical care. For example, CDS has been shown to improve adherence to guidelines (Lu et al 2016; Silveira et al 2016) and to improve patient outcomes (Heekin et al 2018). CDS has also been used in patient-facing applications (Goehringer et al 2018) and through mobile health technologies (Raghu et al 2015). The reach of CDS is ever expanding, and it is proving to be an effective mode of delivering real-world interventions.

While CDS can be very effective, there are also many reported cases where CDS shows no benefit or is potentially detrimental (Roshanov et al 2013; Matui et al 2014). Despite the potential of EHR CDS, it has not been adopted and scaled in the way informatics visionaries had intended. There are several reasons for this, including the lack of a central resource for CDS guidance and problems with interoperability. Even when CDS is shown to be effective at one site, it often cannot be "scaled up" or rapidly implemented in other organizations and settings because of the lack of standardization of EHR systems (ie, function, data) and clinical workflows. However, several factors can be controlled during design.

To start, if clinicians do not trust or find value in a CDS tool, its guidance will most likely not be followed. Many studies have shown that a large proportion of CDS alerts are ignored or dismissed (Carroll et al 2012), often due to a lack of specificity and executable action. In other words, if a CDS tool does not offer an executable recommendation relevant to the patient, it is likely to be ignored and will not have the desired effect. Most clinicians can easily recall a time when a CDS alert was unhelpful or interrupted their workflow. A reminder to schedule a Pap smear for a male patient is not an effective use of an alert. For a less extreme example, imagine the frustration of a consulting surgeon attempting to document a note and being interrupted to order a flu shot. Other side effects of ineffective CDS include alert fatigue, workarounds, and the dissemination of out-of-date content (Ash et al 2007). At best this frustrates clinicians, and at worst it creates unsafe conditions. However, not all CDS tools are alerts; for research purposes, CDS can be built in many different ways and can target a broad range of clinical stakeholders, and the majority of current alerts are not "hard stops" requiring the user to interrupt their workflow and interact with the alert.

Despite the many challenges of implementing CDS that is useful and safe, there is tremendous enthusiasm for the potential of CDS to transform and improve care. In fact, the routine use of automated CDS is a fundamental component of the national healthcare reform strategy endorsed by the Centers for Medicare & Medicaid Services (CMS) and the Office of the National Coordinator (ONC). Requirements for CDS will almost certainly increase over time, and meeting them will be a challenge in terms of both implementation and usefulness in real-world workflows. There are many ways to mitigate these challenges and move toward success in CDS design, which remains a major focus of clinical informatics.

Next, we describe the 5 rights of CDS framework and the GUIDES checklist to provide a foundation for how to think about CDS design.

5 Rights of CDS

Each potential CDS application must be carefully considered and thoughtfully designed to achieve the desired organizational goals without any unintended effects. This process is partially tied to the 5 Rights of CDS framework, a well-known framework used to plan, assess, and deploy CDS interventions. First developed by Osheroff (Osheroff et al 2007), this framework has been adopted by several developers, implementers, researchers, and organizations such as the Agency for Healthcare Research and Quality (AHRQ).

The 5 rights of CDS are as follows:

The right information, to the right person, in the right format, through the right channel, at the right time in the workflow.

The right information refers to what content is presented to the end-user of the CDS tool. This may seem straightforward, but there are several considerations to keep in mind. First, the information presented should be derived from a reputable source, such as evidence-based practice, government regulations, or clinical practice guidelines from an accredited agency. It should not be based solely on expert opinion, or lack consensus from the targeted audience on the tool's recommendations (Campbell 2016). If the information is not universally accepted, there may be issues with adherence and acceptance by the end-user, and the tool will not have the intended effect. Often, to fine-tune the CDS tool, override rates and comments are collected (either by the tool or qualitatively) and analyzed to understand why an alert is causing unneeded disruption in the clinician's workflow.
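Override-rate analysis of this kind can be sketched in a few lines. The log layout (alert type plus an overridden flag) is an assumption for illustration, not a standard export format.

```python
from collections import defaultdict

def override_rates(alert_log):
    """Compute the override rate per alert type.

    `alert_log` is a sequence of (alert_type, was_overridden) pairs;
    this layout is illustrative, not any vendor's actual log schema.
    """
    fired = defaultdict(int)
    overridden = defaultdict(int)
    for alert_type, was_overridden in alert_log:
        fired[alert_type] += 1
        if was_overridden:
            overridden[alert_type] += 1
    # Rate per alert type; types with high rates are candidates for
    # retirement or refinement of their trigger logic.
    return {t: overridden[t] / fired[t] for t in fired}

log = [("dose", True), ("dose", True), ("dose", False), ("allergy", False)]
rates = override_rates(log)
```

Pairing these rates with free-text override comments is what turns the numbers into actionable redesign decisions.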

The right person refers to the CDS tool reaching the individual who can take action based on the information given. This can mean a number of different members of the care team (in different combinations), including nurses, physicians, respiratory therapists, physical therapists, pharmacists, patients, and patient caregivers. The information may need to be adjusted depending on the audience, especially in team-based CDS. For instance, there may be a protocol around antibiotic dosing, with different alerts targeting the nurse, physician, and pharmacist. The nurse should not receive dosing instructions, as that is not in their scope of practice; instead, there may be an alert when scanning a medication if there is concern for overdose. Upon ordering, dosing information may be better suited to the pharmacist or physician. Again, if an individual cannot take action, they should not receive an alert; this only increases alert fatigue.

The right format refers to how the CDS tool is presented, whether as an alert, order set, infobutton, clinical calculator, clinical practice guideline, or any other format. The developers and implementers of CDS should consider what problem needs to be solved and how it may be solved in the least interruptive format possible. Again, if all CDS were interruptive alerts, there would be severe alert fatigue, and the use of these tools overall would become worthless. Format may be chosen based on acuity, or it may depend on what makes the most sense. For instance, if clinicians have trouble remembering the correct orders to initiate a blood transfusion, an order set may be developed so there is no delay in the patient receiving the appropriate blood product. This not only ensures the correct ordering process but reduces cognitive burden on the clinician, and does so in a non-interruptive, transparent, customizable format. Second, tying into the right person, the information should be presented in a way that is usable to the end-user. There should be just enough information to be usable without causing cognitive overload. Alert fatigue is a serious concern in CDS systems (Agency for Healthcare Research and Quality 2019), and cognitive overload is a contributing risk factor. Information should also be tailored to the audience; considerations include what information should be given to a clinician versus a patient.

The right channel refers to how the CDS tool is delivered. This may be through the EHR, a patient portal, another clinical system (such as a separate computerized physician order entry system or radiology service), a smartphone app, or by paper. This is becoming an increasingly important factor to consider with the expansion of digital health, as patients and caregivers are now becoming the primary focus of many decision support tools. The EHR should not always be considered the primary modality of delivering care, and consideration should be made in how to expand the access of decision support. In addition, downtime procedures should be put in place for when access to the EHR (or other system) is unavailable, making paper decision support tools not yet completely obsolete.

Finally, the right time in the workflow refers to the fit of the CDS tool into current clinical processes. This is often considered one of the more difficult parts of the 5 rights of CDS, as informatics solutions such as CDS must fit into workflows that may have been in place for many years without consideration of new technology. This often results in reworked workflows, which can lead to resistance and strain on the end-user. Conversely, CDS deployed around a workflow may be ineffective if the information is not delivered in a timely manner or if the information needed for the CDS tool to function is not yet available. For instance, there may be an order set or alert based on past medical history; if the patient is new to the health system, or arrives in an emergency and is not identifiable on arrival, this information may not be available for some time. There may also be multiple points at which information could be presented. For example, a warning could be developed for prescribing a sleep aid to someone who is receiving high doses of narcotics, as the combination could cause severe respiratory depression. Should the alert be presented when the drug is selected during order entry, or only when the user attempts to sign the order? Implementation of such alerts requires an in-depth understanding of the clinical workflow and the needs of the end-user. The alert should minimize interruption to the workflow and be presented at the most effective time to reduce alert fatigue and support clinical processes by saving time and frustration.
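The timing question can be made concrete with a sketch that defers the warning until signing. All event names, drug-class fields, and thresholds here are illustrative assumptions, not a real EHR rule.

```python
def sedative_opioid_alert(event, order, active_meds):
    """Fire the respiratory-depression warning only at signing, not at
    drug selection, to reduce interruptions while browsing options.

    Event names and medication fields are illustrative assumptions.
    """
    if event != "order_sign":
        return None  # defer: the prescriber may still change the order
    if order["class"] == "sedative" and any(
        m["class"] == "opioid" and m["dose_high"] for m in active_meds
    ):
        return "Warning: sedative ordered with high-dose opioid on board"
    return None
```

Whether signing is actually the right trigger point depends on the local workflow; the design question, not this particular answer, is the point of the example.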

While the 5 rights are a good set of principles, they do not fully address all implementation issues. Specifically, when developing a CDS tool, you must consider how the new tool will fit with those already in place. Careful consideration is needed to prevent unintended consequences, such as unexpected changes to workflow or conflicting information. Additionally, continued performance monitoring is needed to ensure the intended function of the tool and to monitor patient safety, both in terms of intervention efficacy and whether clinicians act on the CDS. Multisite CDS may also present unique challenges, whether different workflows, data availability, or other factors that affect how the CDS should be implemented.

These challenges represent areas that are open to research and development, many of which have been called out as priority areas to explore (Osheroff et al 2007). The Agency for Healthcare Research and Quality (AHRQ) (see Additional Resources) is an active funder and coordinator of collaboration in this space. Despite the need for more detailed guidance, the 5 Rights of CDS provide a solid framework for conceptualizing the early design of a CDS intervention.

GUIDES Checklist

The GUIDES checklist is a self-assessment tool developed to help potential CDS implementers (and health system leaders) identify and address factors that affect the success of CDS interventions (Van de Velde et al. 2018). The checklist identifies 4 domains with a total of 16 factors essential to the success of CDS:

Domain Description Factors
CDS Context The CDS system is built for a specific purpose with measurable outcomes and adequate input from stakeholders and users.
  • The ability to achieve defined quality objectives
  • The quality of patient data is adequate
  • Stakeholders and users accept CDS
  • CDS can be added with existing workload, workflows, and systems
CDS Content The CDS system contains relevant, actionable, and accurate information.
  • The information in the CDS tool is trustworthy
  • The information contained in the CDS is relevant to the situation in which it is presented
  • There is a call to action within the CDS tool
  • The amount of CDS is manageable for the end-user
CDS System CDS systems should be well designed to accommodate different workflows and clinical situations.
  • The CDS is easy to use and follows usability principles
  • The CDS is reaching the correct people
  • The CDS is in the correct format and is well designed
  • The CDS is available at the right time for the end-user
CDS Implementation The rollout of CDS systems should be seamless, and there should be a plan for potential pitfalls.
  • Information regarding the CDS and its functions should be available to the end-user
  • Address barriers to CDS compliance
  • Implementation of CDS is stepwise, and CDS improvements are done in continuous intervals
  • A strong governance structure for CDS is in place
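For teams that want to track the checklist systematically, the 16 factors can be encoded as data for a structured self-assessment. The sketch below abbreviates the factor wording from the table above and is purely illustrative.

```python
# Hypothetical encoding of the GUIDES domains; factor labels are
# abbreviations of the checklist items, not official identifiers.
GUIDES = {
    "context": ["quality objectives", "patient data quality",
                "stakeholder acceptance", "workflow fit"],
    "content": ["trustworthy", "relevant", "call to action",
                "manageable volume"],
    "system": ["usable", "right people", "right format", "right time"],
    "implementation": ["user information", "barriers addressed",
                       "stepwise rollout", "governance"],
}

def unmet_factors(assessment):
    """Return the factors rated unmet, sorted for stable reporting.

    `assessment` maps factor label -> bool (True means the factor is
    judged satisfied); missing factors count as unmet.
    """
    return sorted(f for fs in GUIDES.values() for f in fs
                  if not assessment.get(f, False))
```

Even this minimal structure makes the gap list explicit, which is the practical output an implementation team acts on.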

The 5 Rights and the GUIDES checklist are well-known frameworks to help conceptualize and evaluate CDS, but others exist. Regardless of the framework you use, the core principles are usually the same: CDS should be based on established evidence, be easy to use, target the right people, and be evaluated regularly. While these frameworks are a solid base to work from, there are many other considerations, including how different CDS tools interact with each other within an EHR system in real settings. Next, we describe specific use cases for CDS in PCTs and further explore specific considerations for their implementation and evaluation.


Resources

GUIDES Checklist
A self-assessment tool developed to help potential CDS implementers (and healthcare system leaders) identify and address factors that affect the success of CDS interventions.

REFERENCES


Agency for Healthcare Research and Quality. Section 2 - Overview of CDS Five Rights. https://healthit.ahrq.gov/ahrq-funded-projects/current-health-it-priorities/clinical-decision-support-cds/chapter-1-approaching-clinical-decision/section-2-overview-cds-five-rights. Accessed May 27, 2020.

Agency for Healthcare Research and Quality. Clinical Decision Support. https://www.ahrq.gov/cpi/about/otherwebsites/clinical-decision-support/index.html. Accessed October 14, 2022.

Ash JS, Sittig DF, Campbell EM, Guappone KP, Dykstra RH. 2007. Some unintended consequences of clinical decision support systems. AMIA Annu Symp Proc. 26-30. PMID: 18693791.

Campbell R. 2016. The Five Rights of Clinical Decision Support: CDS Tools Helpful for Meeting Meaningful Use. http://library.ahima.org/doc?oid=300027. Accessed May 27, 2020.

Carroll AE, Anand V, Downs SM. 2012. Understanding why clinicians answer or ignore clinical decision support prompts. Appl Clin Inform. 3(3):309-317. doi:10.4338/ACI-2012-04-RA-0013. PMID: 23646078.

Fontelo P, Liu F. 2018. A review of recent publication trends from top publishing countries. Syst Rev. 7(1):147. doi:10.1186/s13643-018-0819-1. PMID: 30261915.

Goehringer JM, Bonhag MA, Jones LK, et al. 2018. Generation and implementation of a patient-centered and patient-facing genomic test report in the EHR. EGEMS. 6(1):14. doi:10.5334/egems.256. PMID: 30094286.

Heekin AM, Kontor J, Sax HC, Keller MS, Wellington A, Weingarten S. 2018. Choosing wisely clinical decision support adherence and associated inpatient outcomes. Am J Manag Care. 24(8):361-366. PMID: 30130028.

Lu MT, Rosman DA, Wu CC, et al. 2016. Radiologist point-of-care clinical decision support and adherence to guidelines for incidental lung nodules. J Am Coll Radiol. 13(2):156-162. doi:10.1016/j.jacr.2015.09.029. PMID: 26577875.

Matui P, Wyatt JC, Pinnock H, Sheikh A, McLean S. 2014. Computer decision support systems for asthma: a systematic review. NPJ Prim Care Respir Med. 24:14005. doi:10.1038/npjpcrm.2014.5. PMID: 24841952.

Office of the National Coordinator for Health Information Technology. 2018. Clinical Decision Support. https://www.healthit.gov/topic/safety/clinical-decision-support. Accessed May 27, 2020.

Osheroff JA, Teich JM, Middleton B, Steen EB, Wright A, Detmer DE. 2007. A roadmap for national action on clinical decision support. J Am Med Inform Assoc. 14(2):141-145. doi:10.1197/jamia.M2334. PMID: 17213487.

Raghu A, Praveen D, Peiris D, Tarassenko L, Clifford G. 2015. Engineering a mobile health tool for resource-poor settings to assess and manage cardiovascular disease risk: SMARThealth study. BMC Med Inform Decis Mak. 15:36. doi:10.1186/s12911-015-0148-4. PMID: 25924825.

Roshanov PS, Fernandes N, Wilczynski JM, et al. 2013. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 346:f657. doi:10.1136/bmj.f657. PMID: 23412440.

Silveira PC, Ip IK, Sumption S, Raja AS, Tajmir S, Khorasani R. 2016. Impact of a clinical decision support tool on adherence to the Ottawa Ankle Rules. Am J Emerg Med. 34(3):412–8. doi:10.1016/j.ajem.2015.11.028. PMID: 26682677.

Van de Velde S, Kunnamo I, Roshanov P, et al. 2018. The GUIDES checklist: development of a tool to improve the successful use of guideline-based computerised clinical decision support. Implement Sci. 13(1):86. doi:10.1186/s13012-018-0772-3. PMID: 30126421.


Version History

December 3, 2025: Updated hyperlinks (changes made by G. Uhlenbrauck).

October 24, 2022: Minor updates to text. Added a reference (changes made by K. Staman & L. Stewart).

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 30, 2020

Introduction

Real World Evidence: Clinical Decision Support


Section 1

Introduction

The use of clinical decision support (CDS) functionality plays an increasingly prominent role in pragmatic clinical trials (PCTs)—whether by enabling conduct of the trial, supporting the delivery of new interventions, or serving as the intervention under evaluation. Key to its success is the automation of routine tasks and the transformation of knowledge into action at the point of care. Well-built CDS can save time and relieve cognitive burden on the part of the end-user. CDS can additionally assist research teams in identifying potential participants, initiating recruitment, and optimizing outcome collection. Several of the NIH Collaboratory Trials provide examples of CDS used to support trial conduct.

In the Primary Palliative Care for Emergency Medicine (PRIM-ER) trial, CDS is used to identify seriously ill patients in the emergency department who might benefit from palliative care (Grudzen et al 2019). Here, the tool automates cohort identification by querying the EHR for patient-level factors and automatically calculating risk scores. Without such a tool, staff would need to perform manual calculations and individual chart searches. Instead, the CDS facilitates this process and delivers alerts to clinicians to prioritize care and assessment. In this case, the study tests a new intervention with CDS as its mode of delivery.

In contrast, in the Emergency Department-Initiated Buprenorphine for Opioid Use Disorder (EMBED) trial, investigators tested CDS to identify and facilitate management of patients with opioid use disorder in the emergency department (Ahmed et al 2019; Melnick et al 2019a; Melnick et al 2019b; Ray et al 2019). Buprenorphine is a well-established therapy that is commonly used in settings other than the emergency department. Here, CDS was tested as the intervention itself, bundled with implementation and engagement efforts (Melnick et al. 2022). CDS brings new potential to established interventions, expanding their effectiveness and their ability to be disseminated and implemented.

This chapter aims to define CDS, describe best practices for designing and evaluating decision support, and examine special considerations for using CDS in PCTs.


REFERENCES


Ahmed OM, Mao JA, Holt SR, et al. 2019. A scalable, automated warm handoff from the emergency department to community sites offering continued medication for opioid use disorder: lessons learned from the EMBED trial stakeholders. J Subst Abuse Treat. 102:47-52. doi:10.1016/j.jsat.2019.05.006. PMID: 31202288.

Grudzen CR, Brody AA, Chung FR, et al. 2019. Primary Palliative Care for Emergency Medicine (PRIM-ER): protocol for a pragmatic, cluster-randomised, stepped wedge design to test the effectiveness of primary palliative care education, training and technical support for emergency medicine. BMJ Open. 9(7):e030099. doi:10.1136/bmjopen-2019-030099. PMID: 31352424.

Melnick ER, Holland WC, Ahmed OM, et al. 2019a. An integrated web application for decision support and automation of EHR workflow: a case study of current challenges to standards-based messaging and scalability from the EMBED trial. JAMIA Open. ooz053. doi:10.1093/jamiaopen/ooz053. PMID: 32025639.

Melnick ER, Jeffery MM, Dziura JD, et al. 2019b. User-centred clinical decision support to implement emergency department-initiated buprenorphine for opioid use disorder: protocol for the pragmatic group randomised EMBED trial. BMJ Open. 9(5):e028488. doi:10.1136/bmjopen-2018-028488. PMID: 31152039.

Agency for Healthcare Research and Quality. Clinical Decision Support. https://www.ahrq.gov/cpi/about/otherwebsites/clinical-decision-support/index.html. Accessed October 14, 2022.

Ray JM, Ahmed OM, Solad Y, et al. 2019. Computerized clinical decision support system for emergency department-initiated buprenorphine for opioid use disorder: user-centered design. JMIR Hum Factors. 6(1):e13121. doi:10.2196/13121. PMID: 30810531.


Version History

October 24, 2022: Minor updates to the text. Added new reference (changes made by K. Staman & L. Stewart)

July 2, 2020: Minor corrections to layout and formatting (changes made by D. Seils).

Published May 29, 2020

Preparing for Data Sharing

Data Sharing and Embedded Research


Section 7


Preparing for Data Sharing

Investigators need to consider and prepare for data sharing throughout the ePCT lifecycle—from writing the grant through publication of results and sharing data sets—all of which can take time and resources (Li and Rockhold 2019).

  • Grant submission: The Draft NIH Policy for Data Management and Sharing and supplemental draft guidance proposes that applicants for research funding submit a plan describing how scientific data will be managed and shared.
  • Trial registration: After funding has been awarded, investigators will be asked to provide a data sharing statement on ClinicalTrials.gov as part of trial registration (Taichman et al. 2016).
  • Conduct: During data collection, continuous data curation, cleaning, and preparation for sharing can substantially strengthen the evidence generated from ePCTs and help speed the dissemination process.
  • Dissemination: Investigators will be asked by medical journals to provide a data sharing statement as a condition of publication (Taichman et al. 2017).
  • Sharing of data: After trial completion, data will need to be shared in a repository using a mechanism that promotes re-use and proper citation of the data (Pierce et al. 2019).

To help investigators think through the considerations for their data sharing plans and statements, NIH Collaboratory Trials are given a Data and Resource Sharing Informational Document and an Onboarding Data and Resource Sharing Questionnaire during the onboarding process. Upon closeout, NIH Collaboratory Trials are provided a Closeout Data and Resource Sharing Checklist and are expected to utilize this checklist to provide a final data share package, which is shared on the Living Textbook Resources page.

There are organizations, such as Vivli, that can support the data sharing process at all of these stages, as well as make data available for requests. For more information, see the Grand Rounds, Preparing for Clinical Trial Data Sharing and Re-use: The New Reality for Researchers.

When preparing for data sharing, investigators should understand the unique aspects of sharing data from research that uses healthcare system data from embedded research. NIH Collaboratory leadership and NIH Collaboratory Trials principal investigators, along with their colleagues, highlighted these considerations when responding to the draft policy on data sharing.

The main topics covered in the response are:

  • Assessing and mitigating re-identification risk: Embedded pragmatic research occurs in a different context than traditional research. It uses routinely collected data from electronic health records and claims databases, and may involve detailed data on large populations, often including hundreds of thousands of patients. In many cases, these studies are conducted with waiver of informed consent. Before sharing data, investigators may need to do more than simply remove or alter explicit identifiers; they may also need to remove or alter data elements that could enable re-identification through data linkage.
  • Protecting secondary subjects: Embedded pragmatic trials require different considerations to protect the privacy and confidentiality of those involved, who include not only the participants in the trial, but also friends and family members of participants, providers, healthcare systems, and members of vulnerable classes.
  • Use of data enclaves: Health systems are often voluntary participants in embedded research with the goal of answering specific questions. They may not be willing to bear the risk of having sensitive organizational information used to address unrelated topics. Their providers are often unable to opt out of embedded research in which their delivery system participates. The potential for disclosure of sensitive information regarding providers or health systems could be substantial, with commensurate harm. Data archives and enclaves are acceptable data sharing mechanisms in routine use that can help mitigate these risks. The Centers for Medicare and Medicaid Services Virtual Research Data Center is an example of a research enclave: it permits investigators to conduct research on approved topics by working with the data inside the enclave, and only aggregated data can be removed. This has proven to provide a good balance between access and protection of patients’ privacy.
  • Credit those who share data: As argued in “Credit Data Generators for Data Reuse,” we need to develop and mandate the use of a data set ID that will link the use and published analysis of a data set back to the original researchers (Pierce et al. 2019).
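The first point above—removing or altering quasi-identifiers rather than only explicit identifiers—can be illustrated with a small sketch. All field names and generalization rules here are hypothetical (loosely modeled on HIPAA Safe Harbor conventions), not a prescribed de-identification method; a real study would pair any such transformation with a formal re-identification risk assessment.

```python
from datetime import date

def deidentify(record):
    """Generalize quasi-identifiers in one patient record before sharing.
    Hypothetical sketch: removes explicit identifiers, truncates ZIP codes,
    top-codes extreme ages, and reduces dates to year to frustrate linkage."""
    out = dict(record)
    out.pop("name", None)   # explicit identifiers are removed outright
    out.pop("mrn", None)
    out["zip"] = record["zip"][:3] + "00"          # keep only the 3-digit ZIP prefix
    out["age"] = "90+" if record["age"] >= 90 else record["age"]
    out["admit_year"] = record["admit_date"].year  # year only, not the full date
    del out["admit_date"]
    return out

record = {"name": "Jane Doe", "mrn": "A12345", "zip": "27710",
          "age": 92, "admit_date": date(2019, 3, 4)}
shared = deidentify(record)
```

The point of the sketch is that linkage attacks work on combinations of innocuous fields (ZIP, age, admission date), so those fields are coarsened, not just the name and record number removed.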

Other signatories include participants in the National Academy of Medicine’s Clinical Effectiveness Research Innovation Collaborative of the Leadership Consortium for Value and Science-Driven Health Care, and leaders of the Health Care Systems Research Network.

The full letter is available for download and includes the list of signatories.

 


REFERENCES


Li R, Rockhold F. 2019. Preparing for Clinical Trial Data Sharing and Re-use: The New Reality for Researchers. https://rethinkingclinicaltrials.org/news/september-27-2019-preparing-for-clinical-trial-data-sharing-and-re-use-the-new-reality-for-researchers-rebecca-li-phd-frank-rockhold-phd/. NIH Collaboratory Grand Rounds.

Pierce HH, Dev A, Statham E, Bierer BE. 2019. Credit data generators for data reuse. Nature. 570(7759):30–32. doi:10.1038/d41586-019-01715-4. PMID: 31164773.

Taichman DB, Backus J, Baethge C, et al. 2016. Sharing clinical trial data: a proposal from the International Committee of Medical Journal Editors. Ann Intern Med. 164(7):505. doi:10.7326/M15-2928. PMID: 26792258.

Taichman DB, Sahni P, Pinborg A, et al. 2017. Data sharing statements for clinical trials: a requirement of the International Committee of Medical Journal Editors. Lancet. doi: 10.1016/S0140-6736(17)31282-5. PMID: 28596041.


Version History

February 25, 2025: Updated hyperlinks (change made by G. Uhlenbrauck).

March 22, 2023: Updated hyperlinks (changes made by G. Uhlenbrauck).

Published May 20, 2020

Incentive Structure and Citations for Data Sets

Data Sharing and Embedded Research


Section 6


Incentive Structure and Citations for Data Sets

Increased data sharing is expected to bolster scientific advancement and research integrity; however, the incentive structure for academic researchers is designed to reward publication in scholarly journals, not the creation of data sets that can be shared and re-used to generate new knowledge. Some have suggested changing the incentive structure to recognize that the generation of data that others use for secondary research is a valuable scientific contribution (Pierce et al. 2019; Popkin 2019). We note that investigators may need to devote considerable effort to annotating data sets and analytic programs in a way that makes publicly available data sets sufficiently easy for others to use. Providing financial resources to support this effort can address part of this need. However, true success will require shifting the paradigm from simply requiring data sharing to creating incentives for investigators to want their data sets to gain wider use.

One way to do this will be for universities to revise their appointment, promotion, and tenure (APT) processes to incorporate effective data sharing into decision-making and to recognize creators of data sets that gain meaningful use by others (Hernandez 2019). However, accomplishing this requires a well-defined system for citing data sets and linking researchers to their data so that academic researchers can get credit for their work (Pierce et al. 2019). In a recent article, “Credit Data Generators for Data Reuse,” Pierce et al. describe a mechanism that links a persistent identifier to an author’s ORCID iD and to the digital object identifier (DOI) of the published article, ensuring appropriate credit in a “virtuous cycle.”

Figure from Pierce et al. Nature 2019. Used with permission.

The infrastructure for sharing data should ensure that data are cited properly, and data management strategies that encourage making data sets “FAIR” (findable, accessible, interoperable, and reusable) (Wilkinson et al. 2016) have been endorsed by the US National Academies of Sciences, Engineering, and Medicine and the European Commission.

“If a system linked data sets to individuals and reliably tracked the subsequent uses of those data, would institutions incorporate these metrics into the promotion process?

“The answer is an unambiguous ‘yes’,” says Antony Rosen, vice-dean for research at Johns Hopkins School of Medicine in Baltimore, Maryland. “Having an objective method to assess the uses of data would give faculty additional ways to communicate the contributions of their work.”—from Pierce et al. 2019

How to attach a DOI to a data set:

Digital object identifiers (DOIs) are unique, persistent identifiers that can be attached to data sets or other objects. These persistent identifiers can be cited in order to give credit for the creation of the data set.

DOIs are essentially a permanent name of an entity (or object) on a digital network that does not change even when the location (or URL) or other characteristics change.

To assign a DOI to a clinical data set (or other object), an individual should:

1. Deposit the data set in an appropriate data repository, which can include public or private enclaves or archives, as described in the section Data Sharing Solutions for Embedded Research. The journal Scientific Data also provides a list of public repositories for clinical data.

2. Acquire a URL through the data repository for the data set and assemble the metadata.

3. Contact a registration agency appropriate for the domain of data to be shared. For clinical data sets, DOIs can be obtained through services such as Figshare, Zenodo, Crossref, or Dryad, among others.

For reference, the registration agency used to create DOIs for the Living Textbook chapters is Crossref, an agency dedicated to the scholarly communication of research outputs.
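The metadata assembled in step 2 can be sketched in code. The payload below follows the general shape of the DataCite REST API's JSON:API format; the prefix, URL, and all field values are placeholders, and the exact schema should be checked against the registration agency's current documentation before use.

```python
def doi_metadata(prefix, landing_url, title, creators, publisher, year):
    """Assemble DataCite-style metadata for minting a data set DOI.
    Field names mirror the public DataCite REST API; values are placeholders."""
    return {
        "data": {
            "type": "dois",
            "attributes": {
                "prefix": prefix,            # DOI prefix assigned by the agency
                "url": landing_url,          # repository landing page (steps 1-2)
                "titles": [{"title": title}],
                "creators": [{"name": c} for c in creators],
                "publisher": publisher,
                "publicationYear": year,
                "types": {"resourceTypeGeneral": "Dataset"},
            },
        }
    }

# Step 3 would POST this payload to the registration agency with repository
# credentials; the DOI then resolves to the landing page even if the URL changes.
payload = doi_metadata("10.9999", "https://repo.example.org/datasets/42",
                       "Example ePCT analysis data set", ["Doe, Jane"],
                       "Example Repository", 2020)
```

In practice most repositories (e.g., those listed by Scientific Data) handle this registration step for the depositor, so the payload is usually generated from a deposit form rather than written by hand.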

 


REFERENCES


Hernandez AF. 2019. Open Science: Are we there yet? [accessed 2020 Feb 12]. https://rethinkingclinicaltrials.org/news/august-9-2019-open-science-are-we-there-yet-adrian-hernandez-md/. NIH Collaboratory Grand Rounds

Pierce HH, Dev A, Statham E, Bierer BE. 2019. Credit data generators for data reuse. Nature. 570(7759):30–32. doi:10.1038/d41586-019-01715-4. PMID: 31164773

Popkin G. 2019. Data sharing and how it can benefit your scientific career. Nature. 569(7756):445–447. doi:10.1038/d41586-019-01506-x. PMID: 31081499.

Wilkinson MD, Dumontier M, Aalbersberg IjJ, et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 3(1):160018. doi:10.1038/sdata.2016.18. PMID: 26978244.


Version History

Published May 20, 2020

Case Example From the Nudge Study

Real World Evidence: Mobile Health (mHealth)


Section 4

Case Example From the Nudge Study

Up to 50% of patients do not take their cardiovascular medications as prescribed, which results in increased morbidity, mortality, and healthcare costs (Brown and Bussell 2011). Interventions to improve adherence include patient education, reminders, pharmacist support, and financial incentives, and they have produced mixed results—some demonstrating benefit, but many showing small or even negative effects (NICE Clinical Guidelines No 76 2009). Adherence interventions have been limited by 1) inclusion of adherent patients who may not need an intervention; 2) resource-intensive approaches involving pharmacists and/or behavioral health; and 3) lack of attention to evidence-based strategies to motivate human behavior (Costa et al. 2015).

Brief behavioral interventions can influence decision-making and are impactful. Principles of behavioral economics have been incorporated into health interventions to “nudge” people to achieve improved health outcomes (Matjasko et al. 2016). A behavioral nudge is a small change in choice framing that alters people’s behavior in a predictable way. A prior study testing financial incentives through elimination of copayments for cardiovascular medications in the year after acute myocardial infarction improved adherence by 4% to 6% (Choudhry et al. 2014); however, financial incentives are not generalizable and are unlikely to be sustainable. Behavioral nudges, such as commitments (e.g., asking patients for demonstrated commitment to change through a pledge), norms (using examples of others who take action), and salience (making information or recommendations resonant through use of stories) build on a well-evidenced body of behavioral science theory and have been shown to improve health behaviors such as smoking cessation and weight loss (Matjasko et al. 2016). These have yet to be tested to improve medication adherence.

Mobile and digital technologies for health promotion and disease self-management offer an intriguing, low-resource, and as yet untested opportunity to adapt behavioral "nudges" using ubiquitous cell phone technology to facilitate medication adherence.

The objectives of the Personalized Patient Data and Behavioral Nudges to Improve Adherence to Chronic Cardiovascular Medications (Nudge) study, a two-part, multi-center study, are as follows:

Objective 1: We developed and programmed a theoretically informed technology-based (a) nudge message library and (b) chat bot content library using multiple and iterative N of 1 within-subject studies to optimize content for a range of diverse patients. N of 1 participants came from three participating healthcare systems: University of Colorado Health System, VA Eastern Colorado Health Care System, and Denver Health Medical Center.

Objective 2: We conducted a pilot intervention to demonstrate feasibility of delivering the intervention and preliminary effects in two of the three healthcare systems. Throughout the process, we engaged patient, provider, and health systems stakeholders in designing, refining, and implementing the pilot intervention.

As the next step, we are building off of this work to conduct a pragmatic clinical trial to improve medication adherence and patient outcomes.

Approach

Objective 1

We drafted a complete library of proposed text messages, informed by principles of behavior change and behavioral economics, principally:

  1. Communicating social norms. Social norms can activate and guide behavior in positive ways when a message normalizes positive behaviors, such as medication adherence, placing non-adherence outside the definition of typical behavior. In other contexts, social norms have been shown to improve healthy food choices, physical activity, everyday health behaviors (e.g., using the stairs vs. elevators), and even reduce home energy use.
  2. Behavioral commitments. A behavioral commitment is affirmatively stating that the desired behavior (i.e., filling one’s prescription) will occur. Prior research has demonstrated a strong desire among individuals to act consistently with their prior commitments, and eliciting commitments to engage in a specific behavior has been shown to be effective at improving a range of behaviors, including substance use changes, safety-seeking behavior in the context of suicide prevention, and judicious use of antibiotics among clinicians. Commitments to fill one’s prescription can be elicited via text messaging and may lead to greater concordance between individuals’ commitments and their behaviors.
  3. Narrative stories: Narrative stories are increasingly recognized as an important way to increase vividness and comprehension of medical information and outcomes (Thompson and Kreuter 2014). One issue underlying medication non-adherence is likely a failure to recognize or understand the potential negative consequences of the behavior, e.g., stroke, heart attack, or even death. Narrative interventions—particularly ones that describe stories of negative outcomes—may be particularly effective at helping patients concretely understand the potential risks of non-adherence, spurring them to take action (improving medication adherence) to prevent negative outcomes.

Following the initial drafting of these messages, we conducted a series of N of 1 trials, with the a priori goal of refining the nudge messages, defining the best delivery method, and tailoring the interventions for diverse audiences, including Spanish-speaking patients and veterans. We engaged patients, providers, and health systems to provide feedback on the messages themselves, intervention design, and outcomes as well as engaging them in routine feedback during the study to help address potential barriers to implementation and help ensure the sustainability of the program. Concurrent with these activities, we established the IT infrastructure across the three healthcare systems for the study and built the library of nudge and chat bot messages.

Objective 2

We pilot tested the delivery of and response to text messages, with specific interest in demonstrating the feasibility of delivering the intervention and preliminary effects at two of the three healthcare systems. The pilot comprised three arms: 1) generic medication refill reminder, 2) behavioral nudge, or 3) behavioral nudge plus an artificially intelligent (AI) chat bot. We engaged patient, provider, and health system leader stakeholders in designing, refining, and implementing the pilot intervention.

We beta-tested the delivery of the text messages and chat bot messages in a two-phased pilot randomized controlled trial within two healthcare systems to ensure feasibility and acceptability. Figure 1 offers a diagram of the process for message delivery for each of the study arms. We enrolled 210 patients from the two healthcare systems for the two-part pilot study, inclusive of patients who 1) received regular cardiovascular care through one of the two systems, 2) were prescribed at least one medication for long-term management of common cardiovascular conditions (Table 1), 3) did not return the opt-out consent form, and 4) had at least one medication refill at least 7 days overdue during the pilot period.

Figure 1. Pilot intervention

Patient Identification: Using administrative claims data, we developed codes to identify eligible patients currently being treated for one of the five cardiovascular conditions of interest, in particular those prescribed one or more of the classes of medications typically associated with those conditions (Table 1). Once an eligible patient had a “refill gap” (a period during which they should have refilled a medication but had not) of at least 7 days, we sent an opt-out consent letter. Once the deadline for response to the opt-out consent had passed, we randomized patients accordingly.
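The refill-gap trigger described above amounts to a small calculation. This is an illustrative sketch, not the study's actual code; the function names and the days-supply convention are assumptions.

```python
from datetime import date, timedelta

def refill_gap_days(last_fill, days_supply, today):
    """Days past the expected refill date (0 if the refill is not yet due)."""
    expected_refill = last_fill + timedelta(days=days_supply)
    return max(0, (today - expected_refill).days)

def triggers_outreach(last_fill, days_supply, today, threshold=7):
    """Patients enter the pilot once a refill is at least 7 days overdue."""
    return refill_gap_days(last_fill, days_supply, today) >= threshold

# A 30-day supply filled January 1 is due January 31; by February 10 the
# gap is 10 days, so the opt-out consent letter would already have been sent.
```

In a claims-based pipeline this check would run over each patient's most recent fill for every index medication, with randomization deferred until the opt-out response deadline has passed.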

Table 1. Cardiovascular conditions and associated classes of medications

Hypertension: Beta-blockers (B-blockers), calcium channel blockers (CCBs), angiotensin-converting enzyme inhibitors (ACEi), angiotensin receptor blockers (ARBs), thiazide diuretics
Hyperlipidemia: HMG-CoA reductase inhibitors (statins)
Diabetes: Alpha-glucosidase inhibitors, biguanides, DPP-4 inhibitors, sodium-glucose cotransporter inhibitors, meglitinides, sulfonylureas, thiazolidinediones, and statins
Coronary artery disease: P2Y12 inhibitors, B-blockers, ACEi or ARB, and statins
Atrial fibrillation: Direct oral anticoagulants, B-blockers, CCBs

Message Description: Patients randomized to receive text messages (Figure 1) received a combination of such messages as long as their index medications had not been refilled. As noted above, the content and framing of the messages were informed by behavioral economics principles, and these messages were compared with generic reminders (to account for the Hawthorne effect or simple forgetfulness on the part of the patient) and with a pre-programmed chat bot that attempted to problem-solve common barriers to medication adherence. Message types and timing of delivery included:

  1. Generic text: A generic reminder text was delivered to patients to refill their medication at days 1, 3, 5, and 7 after they had been labeled as non-adherent. In the day 1 text message, patients had another opportunity to opt out of the study with text such as “text STOP if you wish to withdraw from this study.” The texts stopped once a patient had filled their medication.
  2. Behavioral nudge: A behavioral nudge text was delivered to patients to remind them to refill their medications at days 1, 3, 5, and 7 after they had been labeled as non-adherent (Figure 2). In the day 1 text message, patients had another opportunity to opt out of the study with text such as “text STOP if you wish to withdraw from this study.” The texts stopped once a patient had filled their medication. The content of the behavioral nudge text messages varied with each text and was derived from the text message library built as part of Objective 1.

Figure 2. Schematic of the text messages for each of the text messaging arms

  3. Behavioral nudge plus AI chat bot: A behavioral nudge text was delivered to patients to remind them to refill their medications at days 1 and 3 after they had been identified as non-adherent. In the day 1 text message, patients had another opportunity to opt out of the study with text such as “text STOP if you wish to withdraw from this study.” The texts stopped once a patient had filled their medication. If the patient had not filled their medication by days 5 and 7, an AI chat bot conducted an interactive chat to assess barriers to filling the medication as described in Objective 1.
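The per-arm schedules above can be summarized in a small dispatch function. The arm and message-type names are illustrative stand-ins, not identifiers from the study's implementation.

```python
def message_for(arm, day_overdue, refilled):
    """Return the message type to send on a given day of non-adherence,
    or None. Encodes the day 1/3/5/7 schedule described above."""
    if refilled or day_overdue not in (1, 3, 5, 7):
        return None  # messaging stops once the medication is filled
    if arm == "generic":
        return "generic_reminder"
    if arm == "nudge":
        return "behavioral_nudge"
    if arm == "nudge_plus_chatbot":
        # nudge texts on days 1 and 3, interactive chat bot on days 5 and 7
        return "behavioral_nudge" if day_overdue in (1, 3) else "chatbot_session"
    raise ValueError(f"unknown study arm: {arm!r}")
```

Centralizing the schedule this way makes the contrast between arms explicit: the third arm differs from the second only in what happens on days 5 and 7.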

The AI chat bot assessed for common barriers to medication adherence: 1) socioeconomic factors; 2) provider-patient/healthcare system factors; 3) condition-related factors; 4) therapy-related factors; and 5) patient-related factors, using a script we were already employing in a medication adherence study. Communication about all of these barriers was pre-programmed in the chat bot. For each barrier, the AI chat bot problem-solved with the patient, identified commonly used successful approaches to overcoming the barrier, and asked the patient to choose and enact one solution to improve medication adherence.

For example, patients were asked whether they had difficulty remembering what medications to take and when to take them; those who did were asked whether using a medication diary, involving a caretaker, or setting an alarm on their phone would help. For those who agreed to try a strategy, the AI chat program checked in one week later to see how the strategy was going. Those who did not agree or identify a strategy were offered other options, and the process was repeated until they identified a strategy. If issues arose that were not pre-programmed into the AI chat bot library, the chat bot referred the patient to the study pharmacist at each site for consultation and assistance. For example, a patient may have stopped taking a medication because of a side effect; the chat bot would document this information through interactive chat, then refer the patient to a study pharmacist to see whether there were alternative medications. Dr. Bull, the co-PI of the study, has programmed libraries very similar to this AI chat bot approach and used them for behavior change in prior interventions.
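The problem-solving loop described above can be sketched as follows. The barrier categories come from the study description, but the strategy lists, data structure, and escalation rule are hypothetical simplifications of the chat bot's scripted behavior.

```python
STRATEGIES = {
    # Illustrative strategies per barrier domain; a real message library
    # would be far richer and clinically reviewed.
    "patient-related": ["medication diary", "caretaker involvement", "phone alarm"],
    "socioeconomic": ["generic substitution", "90-day mail-order supply"],
}

def resolve_barrier(barrier, patient_accepts):
    """Offer strategies for one barrier until the patient accepts one;
    otherwise escalate to the site's study pharmacist. `patient_accepts`
    stands in for the patient's interactive chat replies."""
    for strategy in STRATEGIES.get(barrier, []):
        if patient_accepts(strategy):
            return {"plan": strategy, "follow_up_days": 7}  # check in a week later
    # Barriers with no pre-programmed strategy (or none accepted) go to a human.
    return {"plan": "refer to study pharmacist", "follow_up_days": None}
```

The one-week follow-up and the pharmacist fallback mirror the workflow in the text: automate what is scripted, and hand anything unscripted to a person.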

Although behavioral nudges have not been tested for the improvement of medication adherence, the use of nudges builds on a substantial body of knowledge and has been shown to improve other health behaviors (Matjasko et al. 2016). If mobile and digital technologies can improve adherence to medication for patients with cardiovascular disease, there may be an opportunity to expand the use of nudges to facilitate medication adherence for a multitude of conditions.



REFERENCES


Brown MT, Bussell JK. 2011. Medication adherence: WHO cares? Mayo Clin Proc. 86:304–314. doi:10.4065/mcp.2010.0575. PMID: 21389250.

Choudhry NK, Bykov K, Shrank WH, et al. 2014. Eliminating medication copayments reduces disparities in cardiovascular care. Health Aff (Millwood). 33:863–870. doi:10.1377/hlthaff.2013.0654. PMID: 24799585.

Costa E, Giardini A, Savin M, et al. 2015. Interventional tools to improve medication adherence: review of literature. Patient Prefer Adherence. 9:1303–1314. doi:10.2147/PPA.S87551. PMID: 26396502.

Matjasko JL, Cawley JH, Baker-Goering MM, Yokum DV. 2016. Applying behavioral economics to public health policy. Am J Prev Med. 50:S13–S19. doi:10.1016/j.amepre.2016.02.007. PMID: 27102853.

 

NICE Clinical Guidelines No 76. 2009. Medicines Adherence: Involving Patients in Decisions About Prescribed Medicines and Supporting Adherence.  Chapter 8. Interventions to Increase Adherence to Prescribed Medicine.

Thompson T, Kreuter MW. 2014. Using written narratives in public health practice: a creative writing perspective. Prev Chronic Dis. 11:130402. doi:10.5888/pcd11.130402. PMID: 24901794.


Version History

Published March 16, 2020