Abstract
Pulmonary embolism (PE) is a serious condition that presents a diagnostic challenge in which diagnostic errors are common. The literature suggests that a gap remains between PE diagnostic guidelines and adherence in healthcare practice. While system-level decision support tools exist, the clinical impact of applying a human-centred design (HCD) approach to PE diagnostic tool design is unknown.
Design A before–after study, with the preintervention period serving as a non-concurrent control.
Setting Inpatient units at two tertiary care hospitals.
Participants General internal medicine physicians and their patients who underwent PE workups.
Intervention After a 6-month preintervention period, a clinical decision support system (CDSS) for diagnosis of PE was deployed and evaluated over 6 months. A CDSS technical testing phase separated the two time periods.
Measurements PE workups were identified in both the preintervention and CDSS intervention phases, and data were collected from medical charts. Physician reviewers assessed workup summaries (blinded to the study period) to determine adherence to evidence-based recommendations. Adherence to recommendations was quantified with a score ranging from 0 to 1.0 (the primary study outcome). Diagnostic tests ordered for PE workups were the secondary outcomes of interest.
Results Overall adherence to diagnostic pathways was 0.63 in the CDSS intervention phase versus 0.60 in the preintervention phase (p=0.18), with fewer workups in the CDSS intervention phase having very low adherence scores. Further, adherence was significantly higher when PE workups included the Wells prediction rule (median adherence score=0.76 vs 0.59, p=0.002). This difference was even more pronounced when the analysis was limited to the CDSS intervention phase only (median adherence score=0.80 when Wells was used vs 0.60 when Wells was not used, p=0.001). For secondary outcomes, use of both the D-dimer blood test (42.9% vs 55.7%, p=0.014) and CT pulmonary angiogram imaging (61.9% vs 75.4%, p=0.005) was lower during the CDSS intervention phase.
Conclusion A clinical decision support intervention designed with an HCD approach improved some aspects of diagnostic decision-making, such as the selection of diagnostic tests and the use of the Wells probabilistic prediction rule for PE.
- Standards of care
- Evidence-Based Practice
- Decision support, clinical
- Decision support, computerised
- Hospital medicine
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
The development of clinical decision support tools focuses on information representation and providing recommendations to healthcare providers for various clinical decisions. The design of diagnostic decision support tools for pulmonary embolism (PE) has relied on content representation of its key diagnostic elements. Because such diagnostic decisions are often complicated by uncertainty, time constraints and heuristic reasoning, effective support should consider strategies to mitigate failures in clinical reasoning, in addition to presenting knowledge and supporting higher-order cognitive tasks. Furthermore, all such tools should ideally be evaluated as they are deployed to understand their impact and potential barriers to uptake and incorporation into clinical workflows.
WHAT THIS STUDY ADDS
A computerised decision support tool for diagnosing PE incorporating a human-centred design (HCD) approach was deployed in a real-world care setting and rigorously evaluated for its impact. The deployment led to positive changes in diagnostic test utilisation and physician compliance with diagnostic testing guidelines.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
Healthcare institutions that deploy system-level clinical diagnostic decision support interventions are encouraged to prioritise tools based on an HCD approach. Rigorous evaluation is required after deployment.
Introduction
The evidence-based diagnostic approach for patients with suspected acute pulmonary embolism (PE) follows a recommended decision path that begins with clinical assessment and applying a validated prediction rule combined with D-dimer measurement when appropriate. If necessary, this is followed by imaging studies. Diagnostic pathways for PE exist; however, latent threats to diagnostic clinical reasoning often lead to situations where the diagnosis is either delayed or missed or leads to avoidable testing with direct consequences to patient safety, efficiency, cost and radiation exposure.1 The widespread recognition of the complexity of PE diagnosis and the shortcomings of usual care practices have triggered many efforts to develop decision support tools for PE.2–7
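The recommended decision path above can be sketched as a small triage function. This is an illustrative simplification only, not the study's CDSS logic; the Wells cut-off of ≤4 for "PE unlikely" and a D-dimer threshold of 500 µg/L are commonly cited values assumed here for the sketch:

```python
from typing import Optional

# Illustrative sketch of the evidence-based PE diagnostic pathway:
# clinical prediction rule (Wells) first, D-dimer when appropriate,
# imaging only if necessary. Thresholds are assumptions for this sketch.

def pe_next_step(wells_score: float, d_dimer_ug_l: Optional[float] = None) -> str:
    """Return the recommended next diagnostic step for suspected PE."""
    if wells_score > 4:
        # "PE likely": proceed directly to imaging; D-dimer adds little
        return "imaging (CTPA or V/Q scan)"
    if d_dimer_ug_l is None:
        # "PE unlikely": start with rule-out D-dimer testing
        return "order D-dimer"
    if d_dimer_ug_l < 500:
        return "PE excluded; no imaging needed"
    return "imaging (CTPA or V/Q scan)"

print(pe_next_step(5))          # imaging (CTPA or V/Q scan)
print(pe_next_step(2))          # order D-dimer
print(pe_next_step(2, 300))     # PE excluded; no imaging needed
```

Avoidable testing of the kind described in the text (eg, imaging a low-probability patient despite a negative D-dimer) corresponds to workups that bypass these branches.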
Despite the myriad of clinical decision support tools, utilisation of available diagnostic tests remains suboptimal.8–10 Furthermore, medical decision-making is vulnerable to uncertainty, intuition-driven reasoning and distorting heuristics.11–13 In prior published work,14 15 our team developed a clinical decision support system (CDSS) integrated with the regional hospital’s electronic medical record system. The new CDSS addresses vulnerability issues in PE diagnosis and demonstrates that human-centred design (HCD) can successfully integrate required content with natural cognitive diagnostic workflows. Briefly, the design of this CDSS was initiated by a University of Calgary (UCalgary) PhD student who employed visualisation science theories to present key elements of Bayes’ theorem without showing the mathematical detail.16 17 Initial HCD iteration phases focused on best practices in data visualisation of the uncertainty surrounding the presence/absence of disease, paired with feedback from internal stakeholder engagement and usability testing. Next, a clinically trained UCalgary MSc student evaluated the CDSS in a preclinical study using simulated clinical scenarios.15 In total, 30 physicians with varied expertise were randomised to the CDSS or a refresher PE diagnosis lecture. Results showed substantial improvement in physician diagnostic performance for both the CDSS and the traditional intervention following presentation of clinical scenarios. Importantly, this preclinical study demonstrated that physicians at all levels of training/experience exhibit a suboptimal baseline performance level.
The PE CDSS was then iteratively redesigned, paired with usability testing and prepared for deployment in regional hospitals.14 15 This produced a final version of the PE CDSS tool. Key to this redesign was the integration of a supportive (rather than directive) philosophy guiding practicing physicians in their decision-making by providing a tiered wealth of evidence-based information while leaving final clinical decisions to their discretion.18–21
While the development of a CDSS to support the practice of evidence-based medicine is promising, substantial evaluation of the impact on diagnostic processes is critical to determine effectiveness and identify challenges to physician uptake. In this CDSS clinical deployment study, we describe the evaluation of the effectiveness of an HCD-developed PE diagnostic CDSS in a clinical setting. The primary objective was to evaluate the CDSS’s impact on physician adherence to evidence-based diagnostic guidelines for PE,22 23 with a secondary objective to assess the impact on the utilisation of PE diagnostic tests.
Methods
Study design
We employed a before–after study design with the preintervention period as a non-concurrent control.24 25 Preintervention PE workups (August 2017 to February 2018) served as a control to compare against postintervention PE workup data (July 2018 to December 2018). A technical verification phase (February 2018 to June 2018) occurred between the preintervention and postintervention phases, during which PE workup quality was not formally assessed. This phase ensured data integrity as part of the new PE CDSS deployment into the existing hospital electronic health record (EHR).
Patient and public involvement
No patients were involved in the design and conduct of this study.
Study setting
The CDSS was deployed at two UCalgary-affiliated hospitals. Inpatient medical and surgical units with patients at risk of developing PE-suggestive symptoms were included.
Intervention
The CDSS was hosted on an external webpage (https://project-b.researchcalgary.ca) developed and supported by the UCalgary Clinical Research Unit, which provided a secure link to the study site EHRs. The webpage was triggered whenever a PE diagnostic test (ie, D-dimer, imaging study, Doppler of lower extremities) was ordered. Healthcare providers ordering PE diagnostic tests were shown a small prompting window with a link to the CDSS page from within the EHR. Figure 1A,B shows sample screenshots of the initial dialogue and the post-workup summary page, highlighting the start and endpoints of the PE CDSS, respectively.
Study protocol
The study was conducted in three phases: preintervention, CDSS deployment and technical testing, and CDSS intervention.
Preintervention phase
Case identification of PE workups was done by accessing EHRs. EHRs were initially searched for instances of D-dimer testing, CT pulmonary angiogram (CTPA), ventilation–perfusion (V/Q) scans, Doppler of lower extremities and two-dimensional (2D) echocardiograms. From this dataset, the EHR numbers of patients in study hospital units were compiled and restricted to include only those where a general internal medicine (GIM) specialist was the most responsible physician. This restriction was applied because subsequent CDSS implementations would only occur on clinical services where GIM physicians were involved in the PE workups, typically as the admitting or consulting specialty. These admission health records (hospital charts) were then collected by a research assistant.
CDSS technical testing
During this phase, we tested functional and integration aspects of the CDSS within the existing EHR (Sunrise Clinical Manager (SCM), Allscripts, Richmond, British Columbia, Canada). Operationally, when a healthcare provider enters an electronic order for any PE diagnostic test, a popup window within SCM appears and directs the user to a webpage hosting the CDSS. Each CDSS session automatically links to the medical record number of the specific patient undergoing the PE workup. To support the practical PE diagnostic timeline, in-progress workup results were accessible from any computer terminal within the hospital for 24 hours. We evaluated the impact in actual healthcare practice, where medical providers could select and close popups and apply clinical reasoning by considering recommendations in the context of their experience and patients’ clinical presentations. A final summary screen showed a complete list of PE diagnostic tests with results and recommendations. Physicians were empowered to override the CDSS and were welcome to provide comments recording the reason(s) for overriding.
CDSS intervention phase
The same case record identification procedure was applied during this phase as in the preintervention phase. A second case identification stream was generated automatically: each CDSS use triggered an email notification to the study team. This additional mechanism created a partial or complete CDSS log used to identify patients’ health record information for further analysis. We also offered monthly orientation sessions during the CDSS intervention phase for rounding physician groups in the study’s hospital units. These sessions included a 15 min presentation about the PE CDSS (ie, access, diagnostic pathways, documentation) followed by a question-and-answer period. Laminated cards with reminders and study contact information were also attached to computers.
Data collection
Two research assistants completed hospital chart reviews for all included cases in the preintervention and CDSS intervention phases. Clinical summary data were also collected. Data were collected using a standardised form, including all relevant clinical data from the patient’s hospital stay and the date of PE workup initiation (see online supplemental appendix A).
Inclusion criteria
A PE workup was included if any PE diagnostic test was initiated by clinicians practising on a study inpatient unit. Specifically, cases with one or more of the following tests were defined as eligible PE workups: a D-dimer blood test, CTPA or a V/Q scan.
Exclusion criteria
We excluded PE workups initiated on hospital units outside our target sample (eg, emergency room, respirology, haematology) or requested by specialist healthcare providers (eg, respirologists, haematologists). Further, workups meeting any PE CDSS exclusion criteria were excluded (ie, unstable patients, recent venous thromboembolism of <3 months, therapeutic anticoagulation therapy, pregnancy, upper extremity deep vein thrombosis, and age of <18 years).
Medical reviewer recruitment for final PE workup quality assessment
We recruited a group of practising physicians to perform expert chart summary reviews blinded to the case study period. GIM physicians were invited by email with a summary of the study objectives and a volunteer request. Nine volunteer physicians were identified and invited to in-person orientation meetings (see online supplemental appendix B). The research support team presented sample chart summaries to the nine physician reviewers and conducted group sessions to review scoring instructions (online supplemental appendix A) and the joint scoring of the sample cases. The reviewers’ task was highly objective: a simple documentation of the sequence of tests ordered by the clinical team. The group sessions confirmed that this was a simple task for the reviewing physicians and did not involve any subjective judgement on their part. Following the training session, chart summaries were compiled into batches and distributed equally among the reviewing physicians, who were blinded to whether the case was preintervention or CDSS intervention. On review completion, the physicians returned completed reviews by email or in print for final data entry.
Measured outcomes
Adherence to evidence-based guidelines for PE diagnosis (the primary study objective) was measured by counting the correct diagnostic steps taken within each chart summary and dividing by the predetermined number of correct steps based on published diagnostic guidelines for PE.22 23 This was determined for each diagnostic scenario encountered within the PE diagnostic step recommendations and the clinical spectrum of presentation. This evaluation resulted in a value between 0 and 1.0, with 1.0 indicating perfect adherence and lower values indicating incomplete adherence. Volunteer reviewers analysed each patient’s chart summary and compared the actual diagnostic approach with the best diagnostic option for that presentation. The list of 12 diagnostic options (see online supplemental appendix A) was generated based on the full range of PE diagnostic pathways, with each option given a predetermined score of correct steps.
Actual PE workups, as conducted and described in chart summaries, were awarded points based on adherence to one of the best-fitting options (as judged by the medical reviewer), and points were taken away for missing steps (eg, option 3: proceeding with an imaging study immediately without documentation of Wells or ordering a D-dimer).
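The scoring scheme described above reduces to a simple ratio of documented correct steps to the best-fitting option's predetermined step count. A minimal sketch, with hypothetical step names standing in for the study's actual 12 options:

```python
# Minimal sketch of the adherence score: correct diagnostic steps taken,
# divided by the predetermined number of correct steps for the
# best-fitting option. Step names below are hypothetical placeholders.

def adherence_score(steps_taken: set, correct_steps: set) -> float:
    """Fraction (0 to 1.0) of an option's correct steps that were taken."""
    if not correct_steps:
        raise ValueError("an option must define at least one correct step")
    return len(steps_taken & correct_steps) / len(correct_steps)

# Hypothetical best-fitting option requiring three documented steps
option = {"document_wells", "order_d_dimer", "defer_imaging"}
workup = {"order_d_dimer", "defer_imaging"}  # Wells not documented
print(round(adherence_score(workup, option), 2))  # 0.67
```

Missing steps (eg, proceeding to imaging without documenting Wells or ordering a D-dimer) simply reduce the numerator, producing the sub-1.0 scores analysed in the Results.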
Secondary outcomes focused on assessing PE CDSS’s possible impact on clinical practice behaviours. Specifically, we evaluated if the intervention affected diagnostic test utilisation (ie, D-dimer, V/Q scans, CTPA, 2D echocardiograms and Doppler ultrasonography of the legs).
Statistical analysis
Sample size analysis centred on detecting an adherence improvement of 10% (0.1) in the PE diagnostic pathway adherence measure when using the CDSS. This assumption was supported by CDSS tool pilot testing, which showed an improvement of 15% on the same measure.26 The sample size calculation (assuming a SD of 0.3, α of 0.05 and power of 80%) indicated that at least 140 PE workups were needed in each of the preintervention and CDSS intervention phases: case volumes achievable in 6-month periods for each phase.
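The stated sample size follows from the standard two-sample comparison-of-means formula. A back-of-envelope check (normal quantiles hardcoded; rounding conventions may differ slightly from the software the authors used):

```python
import math

# Two-sample comparison-of-means sample size:
#   n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2
z_alpha = 1.959964  # z for two-sided alpha = 0.05
z_beta = 0.841621   # z for 80% power
sigma = 0.3         # assumed SD of the adherence score
delta = 0.1         # minimum detectable improvement (10%)

n_per_group = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
print(math.ceil(n_per_group))  # 142, consistent with "at least 140"
```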
Statistical analysis consisted of simple between-group comparisons for either categorical baseline characteristics and outcomes (ie, use of D-dimer or imaging studies) or ordinal outcome measures (ie, adherence with diagnostic pathway steps). A χ2 test was used for dichotomous primary and secondary measures. The Kruskal-Wallis test was used for ordinal variables because the diagnostic adherence measure is ordinal in nature and potentially non-normal in distribution, and this non-parametric test is robust in such situations.
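For the ordinal adherence scores, the Kruskal-Wallis test compares mean ranks across groups. A pure-Python sketch of the H statistic on synthetic scores (tie correction omitted for brevity; in practice a library routine such as scipy.stats.kruskal, which also returns the p-value, would be used):

```python
# Kruskal-Wallis H statistic, computed from pooled ranks:
#   H = 12 / (N(N+1)) * sum_i n_i * (mean_rank_i - (N+1)/2)^2
# Tie correction is omitted for brevity. Scores below are synthetic.

def average_ranks(values):
    """Map each distinct value to its average (1-based) rank."""
    s = sorted(values)
    ranks = {}
    i = 0
    while i < len(s):
        j = i
        while j + 1 < len(s) and s[j + 1] == s[i]:
            j += 1
        ranks[s[i]] = (i + j + 2) / 2  # average of ranks i+1 .. j+1
        i = j + 1
    return ranks

def kruskal_wallis_h(*groups):
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    ranks = average_ranks(pooled)
    dev = sum(
        len(g) * (sum(ranks[x] for x in g) / len(g) - (n + 1) / 2) ** 2
        for g in groups
    )
    return 12 / (n * (n + 1)) * dev

pre = [0.4, 0.5, 0.6]    # synthetic preintervention adherence scores
post = [0.7, 0.8, 0.9]   # synthetic CDSS-phase adherence scores
print(round(kruskal_wallis_h(pre, post), 2))  # 3.86
```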
Results
In total, 345 charts were retrieved in the preintervention phase and 343 in the CDSS intervention phase, of which 203 (59%) and 168 (50%), respectively, satisfied inclusion criteria and underwent analysis. Figure 2 provides exclusion details.
Table 1 provides the baseline characteristics of patients with PE workups included in the study. The average patient age was 65.6 years in the preintervention phase and 64.2 years in the CDSS intervention phase. Among eligible study patients with a PE workup, 152 (74.9%) had a Wells score ≤4 in the preintervention phase versus 115 (68.5%) in the CDSS intervention phase.
Primary outcome
The difference in overall adherence to evidence-based diagnostic guidelines was not statistically significant (median 0.60 in the preintervention phase vs median 0.63 in the CDSS intervention phase, p=0.18). Figure 3, a box plot of diagnostic concordance scores, demonstrates a modest change in central tendency paired with a reduction in PE workups with very low diagnostic guideline concordance. There was, however, a statistically significant improvement in overall adherence when the workup documented use of the Wells prediction rule (median adherence score=0.76 with Wells vs 0.59 without, p=0.002) (figure 4). This adherence improvement with the Wells prediction rule was even more pronounced in a related analysis limited to PE workups undertaken during the CDSS intervention phase (median adherence score=0.80 with Wells vs 0.60 without, p=0.001) (figure 5).
Secondary outcomes
CDSS intervention phase PE workups demonstrated a noticeable difference in diagnostic test utilisation compared with the preintervention phase. Table 2 reveals a decrease in the relative frequencies of both D-dimer blood tests (42.9% vs 55.7%, p=0.014) and CTPA imaging studies (61.9% vs 75.4%, p=0.005) in the CDSS intervention phase. Additionally, the relative frequency of V/Q imaging studies increased (17.9% vs 9.9%, p=0.025). There was no change in the frequency of Doppler studies of the lower extremities, but there were fewer 2D echocardiogram studies in the CDSS intervention phase. These reductions in specific diagnostic tests occurred despite a higher proportion of PE workups yielding a PE diagnosis in the CDSS intervention phase (23.8% vs 15.3%, p=0.037). The CDSS intervention was also associated with improved documentation of Wells prediction rule results in either the electronic CDSS tool or the EHR.
Discussion
We conducted an extensive clinical care innovation process and formal impact assessment of an electronic PE diagnosis CDSS intervention on clinical care decisions in tertiary care hospitals. Our results suggest some positive effects of the CDSS intervention on PE diagnostic care in hospitals. While the central tendency of the diagnostic adherence score suggested a change towards improvement (ie, fewer PE workups with low diagnostic adherence scores), the change was not statistically significant. We did find statistically significant improvements in diagnostic adherence scores when the Wells prediction rule for PE (a core element of the tool) was used and documented. In addition, there were significant and desirable differences in using specific PE diagnostic tests in the CDSS intervention phase relative to the preintervention period. Specifically, there was a favourable change in the pattern of using imaging studies (more V/Q scans and less CTPA) and a decrease in ordering D-dimer blood tests when not indicated. This pattern was paralleled with more PE cases diagnosed in the CDSS intervention phase. These observed differences collectively indicate an overall improvement in the PE diagnostic approach.
While this finding may not be surprising given the known value of prediction rules in clinical reasoning,27 it is another crucial reminder of the impact and value of system-level integration of clinical prediction rules.2 4–7 For example, Roy et al2 showed that ER units randomised to use a handheld decision support system had improved PE diagnostic decision-making compared with posters or pocket cards. Additionally, Jimenez et al6 demonstrated that mandatory PE CDSS use significantly reduced CTPA frequency in a retrospective ER study. Our study demonstrates how CDSS introduction can improve PE diagnostic care. Further, our study extends earlier work by demonstrating CDSS benefit in a largely voluntary-use deployment through the automatic triggering of the tool within the electronic ordering system when PE-related orders were entered. This approach embeds the CDSS in busy clinical workflows without a need for intervening research personnel, automatically presenting the tool at the moment it is needed to improve care. Here, despite some of the generally perceived disadvantages of CDSSs (eg, interruption in patient–physician communication, increase in unnecessary referrals), CDSS adoption has been associated with positive impact in several domains of patient care, such as work efficiency, personalised care, confidence in decision-making and reduced numbers of laboratory and imaging workups.28
The diagnostic reasoning process is fraught with challenges that extend beyond clinical competence, poor judgement and faulty evaluation/processing of clinical data.29 Generally, systems need to shift towards a patient-centred diagnostic process paradigm, with collaborative activity at the system level that positively influences the diagnostic process.30 While the value of CDSS is well recognised,30–32 poor health information technology design can be a patient safety risk.33–35 For example, CDSS design methods that do not model practitioners’ behaviour will disrupt clinical workflows and carry a high potential for decision support to be ignored or overridden.36 Furthermore, a complex design can hinder engagement or cognitively overload users.37 38
Our PE diagnosis CDSS adhered to HCD principles achieved through iterative redesign39 and usability testing.40 41 Specifically, human factors knowledge was incorporated to demonstrate usability and proper integration within clinical workflows. At the same time, a simple and intuitive user interface ensured adequate interaction with evidence-based recommendations at the point of care. Moreover, a preclinical testing phase within a simulated environment provided early insight into functionality and potential impact on clinical diagnostic performance for PE. This study also contributes to the healthcare literature on diagnostic CDSSs by achieving the often difficult, high-level standard of evaluation in a real-world knowledge translation setting.
Our study has some caveats and limitations. First, our study involves a quasi-experimental design. A cluster randomised clinical trial (RCT) would have been a more robust evaluation design. However, a cluster RCT was not feasible here as we lacked sufficient hospital environments to evaluate this CDSS tool; the CDSS was deployed on an EHR platform shared by a few hospitals. Mitigating this concern, the control preintervention PE workups were from the same GIM services as the CDSS intervention PE workups, and the physicians involved in the workup were comparable in the control versus intervention periods. Furthermore, the physician reviewers who scored adherence to the PE workups were blinded to the study phases (ie, intervention vs control). In addition, the reviewers participated in a standardised training process, and they were equipped with standardised tools that simplified the review task (ie, a simple data form with clear criteria for identifying diagnostic steps in PE workups).
A second caveat is that the CDSS was deployed in our hospital EHR system without firm diagnostic step enforcement. Specifically, there were no hard stops where an entry for a test was declined based on available results of other tests and/or clinical data, and no enforcement required physicians to acknowledge or enter a comment to proceed. As such, the ‘dose’ of our CDSS intervention may have been lower than it could have been; a CDSS with strict enforcement of diagnostic steps might produce more substantial positive effects on the quality of PE workups. It is also possible that clinicians chose not to use the CDSS after closing the initial popup or decided not to complete the CDSS diagnostic pathway. Instead, we prioritised integrating a guidance and support layer, avoiding perceptions of the tool being obstructive or a nuisance. This approach was taken in anticipation that a supportive CDSS mode may be more appropriate and acceptable for clinicians working in such diverse and dynamic clinical contexts. The literature suggests that CDSSs with ‘hard stops’ are reserved for decisions that put a patient at risk of harm (eg, known severe allergic reaction to a medication), to avoid burdensome acknowledgements that might trigger workaround behaviours and overrides by clinicians.36 42 Given the above, we believe the deployment and adoption of CDSSs will be multidimensional and complex, and thus may benefit from qualitative approaches from implementation science as we continue to promote this CDSS’s use in clinical practice.43
A third caveat is that our orientation sessions were limited to the first day of each month during the CDSS intervention phase. Other measures included laminated reminder cards and posters in meeting rooms, without much daily in-person interaction with physicians or one-on-one training. This might have reduced the visibility of the PE CDSS and limited its use, as well as users’ perceptions and interest. This is yet another aspect of CDSS intervention intensity: our CDSS rollout was passive, and a more intensive campaign promoting the CDSS could potentially increase uptake and the ultimate impact on clinical care.
A final caveat is that the findings presented here are from a GIM inpatient clinical setting, where patients tend to be immobile and have clinical risk factors predisposing them to PE. As such, the PE workups we assessed tend to have a high clinical pretest PE probability. This contrasts with the fast-paced ER setting, where many patients with low pretest PE probability also undergo such workups. Our CDSS intervention warrants further evaluation in ER settings using the HCD approaches described here.
In summary, the deployment of an HCD-based PE CDSS led to favourable shifts in the diagnostic approach to PE in the setting of GIM inpatient units. Our results shed light on the promise of well-designed CDSS interventions for complex care processes such as PE and the inherent challenges in successfully deploying such tools. Fundamental questions remain about the advantages and disadvantages of forced-function approaches (vs unenforced/supportive CDSS recommendations), the best ways to onboard providers to such tools, and how to engage patient stakeholders in CDSS tool design and use. As future research tackles these questions, health systems should embrace EMR-embedded CDSS interventions to address diagnostic and therapeutic care gaps in various clinical domains.
Ethics statements
Patient consent for publication
Ethics approval
The University of Calgary Conjoint Health Research Ethics Board approved the study.
Acknowledgments
We would like to acknowledge Drs Barry Baylis, Michelle Grinman and Oliver Haw For Chin for their contribution to clinical chart summary reviews.
References
Supplementary materials
Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
Contributors GA and WAG conceived the design and study idea. WF, JC, HTS, JK and JNB provided conceptual support for study design, outcomes and deployment strategies. WO, SM and NL carried out the chart reviews, data collection, data analysis, recruitment and training of medical reviewers. JNB carried out the usability evaluation and technical integration. WAG, GA, WF, JC, JS and AB contributed to clinical chart summary reviews. GA carried out the physician orientations, provided clinical support for integration and usability evaluation, took the lead in writing the manuscript and is the author acting as guarantor. All authors provided critical feedback and commented on the manuscript.
Funding This study was funded by Alberta Innovates (no grant/award number).
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.