Research & reporting methodology

Harnessing the full potential of hospital-based data to support surgical quality improvement

Abstract

Surgical departments commonly rely on third-party quality improvement registries. As electronic health data become increasingly integrated and accessible within an institution, alternatives to these platforms arise. We present the conceptualization and implementation of an in-house quality improvement platform that provides real-time reports, is less onerous on clinicians and is tailored to an institution’s priorities of care.

Background

The use of quality improvement registries is widespread in surgical care.1 2 Over 700 hospitals participate in the American College of Surgeons' National Surgical Quality Improvement Program (ACS NSQIP) and may benefit through improvement in specified quality measures.3–5 For example, in an analysis of 118 participating centres, ACS NSQIP was found to have potentially prevented 200–500 surgical complications and 12–36 deaths annually.4 Multicentre QI registries also exist that focus on specific surgical subspecialties, intervention types or patient populations. Examples include the Society for Vascular Surgery's Vascular Quality Initiative (SVS VQI), the ACS Trauma Quality Improvement Program (ACS TQIP), the National Health Service National Emergency Laparotomy Audit and the Australian Orthopaedic Association National Joint Replacement Registry.6–9

These platforms support an individual hospital's quality improvement efforts through external benchmarking. Patient and procedure characteristics, process measures and outcomes are collected in accordance with a data dictionary. These data are then used to define a hospital's risk-adjusted performance relative to other participating hospitals.2 Of course, the utility of these results relies on the quality and scope of data inputs. Participating hospitals must therefore employ trained clinical reviewers and rely on, or require, surgeon input to accurately capture procedure characteristics and perioperative outcomes. Often, external QI platforms are hosted on third-party software and accordingly require dedicated data extraction and input on top of standard clinical documentation. Furthermore, no single QI registry captures the full scope of emergency and scheduled surgical care at a given institution; therefore, many hospitals participate in multiple QI registries.2 At the authors' institution, for example, we have simultaneously contributed to ACS NSQIP, ACS TQIP, the Society for Thoracic Surgery Registry and SVS VQI.

Hospitals collect abundant patient data. However, these data generally exist in different formats across various platforms (eg, time-stamped administrative data on patient encounters, ambulatory clinic records, inpatient ward vital signs, laboratory data, pharmacy medication dispensing and imaging results). Some of these data may not be captured within the primary electronic medical record (EMR) software in a format suitable for analysis (eg, a pulmonary function test report that can be downloaded from the EMR as a PDF but whose values, such as forced expiratory volume in one second, cannot be searched via the EMR). A healthcare analytics team at our institution was assembled to harness the untapped potential of internally generated data and created what is known as an Enterprise Data Warehouse (EDW). Built on IBM's PureData for Analytics system, the EDW is created and updated by automated algorithms that clean raw patient data and, following deterministic linkage, repackage them for more efficient queries.10 In other words, raw data from various hospital sources are first cleaned by standardising formats and removing redundant information. Next, these refined data are linked using a patient's unique medical record number (deterministic rather than probabilistic linkage). The linked data are then organised to accurately reflect a patient's course within and across multiple healthcare visits. Thus, stagnant and disarrayed data are converted into a normalised and optimised format that lays the groundwork for powerful healthcare analytics, including the use of machine learning.10 With the inception of the EDW, our author group saw an opportunity to develop a novel platform for surgical quality improvement. This article describes the conceptualisation, development and use of this platform for vascular surgery quality improvement.
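As a conceptual illustration of the cleaning and deterministic linkage steps, the minimal Python sketch below standardises two hypothetical source extracts and joins them on the medical record number. The table names, column names and in-memory data frames are assumptions for illustration; the EDW itself runs on IBM's PureData platform and is not built with this code.

```python
import pandas as pd

# Hypothetical extracts from two hospital source systems (illustrative names and values only).
admissions = pd.DataFrame({
    "mrn": ["0012345", "12345 ", "0067890"],
    "admit_time": ["2021-03-01 08:15", "2021-03-01 08:15", "2021-04-10 14:30"],
})
labs = pd.DataFrame({
    "mrn": ["12345", "67890"],
    "troponin_ng_l": [12.0, 85.0],
    "collected_time": ["2021-03-02 06:00", "2021-04-11 05:45"],
})

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Standardise formats and drop redundant rows before linkage."""
    out = df.copy()
    out["mrn"] = out["mrn"].str.strip().str.lstrip("0")  # normalise the identifier
    for col in out.columns:
        if col.endswith("_time"):
            out[col] = pd.to_datetime(out[col])           # consistent timestamps
    return out.drop_duplicates()

# Deterministic linkage: an exact join on the unique medical record number,
# rather than probabilistic matching on names or dates of birth.
linked = clean(admissions).merge(clean(labs), on="mrn", how="left")
print(linked)
```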

The concept

With respect to vascular services, the aforementioned registries collect data that are already largely captured in our institution's EMR. Vascular QI metrics could therefore, in principle, be captured without involving an external data collection platform, reducing redundant data entry. Furthermore, an internal data platform offers the opportunity to customise quality metrics to reflect the unique types of patients treated at our institution. Whereas the vascular surgery division previously relied on semiannual and quarterly reports from ACS NSQIP and SVS VQI, an internal platform would provide real-time quality auditing and the opportunity for real-time alerts in the event of deviations in performance.4 11

While there are limited data on the relative value of internal versus external benchmarking, it is clear that the quality of the underlying data is instrumental to performing meaningful queries. Moreover, it has been recognised that in sufficiently large organisations (eg, hospitals), internal benchmarking is a valid quality improvement strategy with the advantages of being rapidly responsive, well integrated with existing information technology infrastructure and protective of confidentiality.12 We sought to develop a high-quality internal benchmarking system for vascular surgery quality improvement that allows continuous and historical analysis of performance at the authors' institution and informs discussion and interventions to improve future performance.

The project was named the Urban Angel Vascular Quality Improvement Program (UA-VQIP). The vascular patient cohort would be defined from the EDW, categorised by procedure and admission type and then queried for specific quality metrics. These data would populate an interactive dashboard updated daily. This concept offers three major benefits over existing external QI platforms.

First, delayed, period-specific reports can slow plan-do-study-act (PDSA) cycles. Delayed impact assessment is often cited as a barrier to effective QI implementation.13 By providing access to data in near real time, UA-VQIP would overcome the delay associated with periodic reports from external QI registries.

Second, UA-VQIP was conceptualised to minimise the labour associated with data extraction and processing. Whereas external QI registries rely on staff to manually enter data into third-party software, UA-VQIP's code pulls specific data elements from specified hospital encounters based on known formats in the EDW. In most instances, discrete data elements (eg, serum troponin at a specified time during hospitalisation) are extracted directly from the EDW. In other instances, the code reviews text notes (eg, operative records) and, using a natural language processing algorithm, searches for specified text flags/phrases (eg, 'Vascular Quality Case Peripheral Artery Disease').
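The extraction logic can be illustrated with a minimal sketch: a structured pull of a postoperative troponin value and a simple keyword search for a text flag in an operative note. The table and column names are hypothetical, and the pattern match below is a simplified stand-in for the natural language processing step rather than the production algorithm.

```python
import re
import pandas as pd

# Hypothetical laboratory results already linked within EDW-style tables.
labs = pd.DataFrame({
    "encounter_id": [101, 101, 102],
    "test": ["troponin", "troponin", "troponin"],
    "hours_post_op": [6, 48, 12],
    "value_ng_l": [10.0, 95.0, 14.0],
})

# Structured pull: first troponin drawn within 72 hours of surgery for each encounter.
postop_troponin = (
    labs.query("test == 'troponin' and hours_post_op <= 72")
        .sort_values("hours_post_op")
        .groupby("encounter_id", as_index=False)
        .first()
)

# Text-flag search: operative notes are scanned for standardised phrases
# (the flag phrase mirrors the example given in the text).
FLAG = re.compile(r"vascular quality case peripheral artery disease", re.IGNORECASE)

def has_flag(note_text: str) -> bool:
    """Return True if the dictated note contains the agreed text flag."""
    return bool(FLAG.search(note_text))

print(postop_troponin)
print(has_flag("... Vascular Quality Case Peripheral Artery Disease ..."))  # True
```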

Third, since the platform is designed internally, there is potential for growth and adaptation—metrics can be updated or further customised to most accurately reflect evolving care patterns, benchmarks and new data inputs (eg, patient-reported outcomes).

A system like UA-VQIP can be used to identify and address evident outliers in quality of care (eg, mortality) through patient-specific care review and discussion. In addition, quality benchmarks are defined and can evolve through regular review of practice guidelines and published literature. For example, one quality metric in UA-VQIP is designed to monitor compliance with the SVS guidelines for adequate postoperative imaging follow-up of endovascular aortic aneurysm repair.14 As mentioned above, there is no evidence to our knowledge demonstrating the superiority of external versus internal benchmarking in quality improvement. By benchmarking internally against historical performance, UA-VQIP can support quality improvement based on the unique considerations at a given institution.

Development of the quality improvement platform

Our team of vascular surgeons, a vascular nurse practitioner, interventional radiologists and residents led the development of UA-VQIP. All hospitalisations under the vascular surgery service and specified outpatient procedures (day surgery or interventional radiology) were included. To relate quality metrics to appropriate patient groups, 14 subgroups were defined (box 1) and identified in the EDW according to the aforementioned text flags in addition to structured data originating from the operating room booking system (a simplified sketch of this classification logic follows box 1).

Box 1

Patient subgroups

  1. Aortobifemoral bypass.

  2. Above knee amputation.

  3. Below knee amputation.

  4. Carotid endarterectomy.

  5. Endovascular abdominal aortic aneurysm repair.

  6. Endovascular thoracic aortic aneurysm repair.

  7. Advanced endovascular aortic aneurysm repair.

  8. Open abdominal aortic aneurysm repair.

  9. Thoracic outlet syndrome decompression surgery.

  10. Haemodialysis access surgery.

  11. Peripheral artery surgery.

  12. Peripheral artery bypass with vein graft.

  13. Peripheral artery angioplasty and/or stenting.

  14. All vascular surgery inpatients.
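As referenced above, a simplified sketch of how an encounter might be assigned to one of these subgroups is shown below. The flag phrases and booking codes are illustrative assumptions, not the actual mapping maintained in the UA-VQIP extraction code.

```python
from typing import Optional

# Hypothetical mapping of dictated text flags and operating room booking codes
# to UA-VQIP patient subgroups (illustrative values, not the production mapping).
FLAG_TO_SUBGROUP = {
    "vascular quality case carotid endarterectomy": "Carotid endarterectomy",
    "vascular quality case evar": "Endovascular abdominal aortic aneurysm repair",
    "vascular quality case peripheral artery disease": "Peripheral artery angioplasty and/or stenting",
}
BOOKING_TO_SUBGROUP = {
    "VASC-AKA": "Above knee amputation",
    "VASC-BKA": "Below knee amputation",
}

def assign_subgroup(note_text: str, booking_code: Optional[str]) -> str:
    """Assign an encounter to a subgroup; default to the all-inpatient group."""
    lowered = note_text.lower()
    for flag, subgroup in FLAG_TO_SUBGROUP.items():
        if flag in lowered:
            return subgroup
    if booking_code in BOOKING_TO_SUBGROUP:
        return BOOKING_TO_SUBGROUP[booking_code]
    return "All vascular surgery inpatients"

print(assign_subgroup("... Vascular Quality Case EVAR ...", None))
```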

Key quality metrics for our service were identified through consensus within the vascular team based on relevance to our practice and clinical importance.11 Following the Donabedian model of quality, 17 process and outcome measures relating to patients' perioperative course were identified (table 1).15 At the outset of the initiative, benchmarks for each metric were established by consensus of the vascular team, supported by published literature, practice guidelines and benchmarks in existing QI programmes.

Table 1
Quality metrics

The data processed by UA-VQIP are used to populate an interactive dashboard with multiple navigable menus. The homepage provides a summary of all patients across the 17 metrics, with off-target metrics coded in red (figure 1). Each of the 17 metrics has its own interactive detailed overview (figure 2). A colour-coded mosaic plot indicates the performance of each applicable subgroup. A bar plot shows temporal trends in performance and can be restricted to a specified subgroup. Finally, a detailed tabular summary is shown, which includes the date, patient identifier, caregiver and admission type to allow individual patient record review.

Figure 1

Tabular summary of all metrics for all patients. Red=off target.

Figure 2

Mosaic plot of subgroups for the serum glucose at discharge metric; red=off target (top left). Bar chart displaying overall performance over fiscal years relative to the benchmark in red (top right). Tabular display summarising encounter details (bottom). ABF, aortobifemoral bypass; AKA, above-knee amputation; BKA, below-knee amputation; CEA, carotid endarterectomy; EVAR, endovascular abdominal aortic aneurysm repair; TEVAR, endovascular thoracic aortic aneurysm repair; AEVAR, thoracic/abdominal aortic aneurysm repair with custom endovascular graft; OAAA, open abdominal aortic aneurysm repair.
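To illustrate the kind of aggregation behind the dashboard's trend display, the following sketch computes the proportion of encounters meeting a metric per fiscal year and plots it against a benchmark line. The data, column names and benchmark value are hypothetical, and the production dashboard is not built with this code.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical per-encounter results for a single quality metric (illustrative only).
df = pd.DataFrame({
    "fiscal_year": ["2019", "2019", "2020", "2020", "2021", "2021"],
    "subgroup": ["CEA", "EVAR", "CEA", "EVAR", "CEA", "EVAR"],
    "on_target": [1, 0, 1, 1, 0, 1],   # 1 = metric met for the encounter
})
BENCHMARK = 0.80                        # illustrative divisional benchmark

# Temporal trend: proportion of encounters meeting the metric per fiscal year.
trend = df.groupby("fiscal_year")["on_target"].mean()

fig, ax = plt.subplots()
trend.plot(kind="bar", ax=ax)
ax.axhline(BENCHMARK, color="red", linestyle="--", label="benchmark")
ax.set_ylabel("Proportion on target")
ax.set_title("Serum glucose at discharge (illustrative)")
ax.legend()
plt.tight_layout()
plt.show()
```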

During the initial development of the UA-VQIP platform, repeated data validations were performed by a research assistant who reviewed the accuracy of the output relative to manual review of medical records. Periodic validation of random samples continues to be performed each quarter. Formal measures of validity are not calculated because, when an error is identified, the code that pulls UA-VQIP data from the EDW is corrected.
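A minimal sketch of this kind of validation check, assuming a hypothetical sample in which platform output is compared against manual chart review, might look like the following:

```python
import pandas as pd

# Hypothetical random sample comparing UA-VQIP output with manual chart review
# (illustrative encounter identifiers and values only).
sample = pd.DataFrame({
    "encounter_id": [201, 202, 203, 204],
    "platform_value": [1, 0, 1, 1],   # metric met according to UA-VQIP
    "manual_value": [1, 0, 0, 1],     # metric met according to chart review
})

agreement = (sample["platform_value"] == sample["manual_value"]).mean()
discrepancies = sample[sample["platform_value"] != sample["manual_value"]]

print(f"Agreement in this sample: {agreement:.0%}")
# Discrepant encounters are reviewed and the extraction code corrected.
print(discrepancies)
```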

In summary, the creation of the UA-VQIP platform required an intensive co-creation model involving data scientists and clinicians to: (a) develop the scope of the initiative, (b) specify the data and reporting requirements, (c) identify, extract, transform and report the required data in a timely manner, (d) develop a data visualisation solution and (e) test and correct data errors before implementation. Specifically, UA-VQIP required three key individuals over an initial 6-month platform development period: a lead vascular surgeon, a data engineer and a data scientist with expertise in analytics and data visualisation.

Use of the platform

With the UA-VQIP platform, quality improvement efforts in vascular care at our institution are supported in four main ways.

First and foremost, systematic deviation in a quality metric can be addressed through interventions designed according to the PDSA approach: iteratively testing the impact of small initiatives and refining them to realise sustained improvement.15 Our group has committed to reviewing UA-VQIP data at least every 2 weeks during vascular surgery multidisciplinary rounds. These meetings involve discussion of rare events (eg, mortality) as well as variations in care over short (weeks) and longer (months to years) terms. During these meetings, each of the 17 targets is reviewed. Off-target measures are examined to identify the patient subgroup(s) driving the deviation. Potential causes are discussed and later examined through data audits. For example, following discussion at rounds, our nurse practitioner reviewed a random sample of records with elevated blood glucose at discharge and identified that a large proportion of patients were not being prescribed an insulin sliding scale appropriately. In late February 2020, the division implemented a standard for insulin sliding scale prescription in all patients with diabetes. A subsequent finding of persistently high hyperglycaemic rates in March 2021 prompted an increase in insulin sliding scale dosing. Time series analyses to assess the impact of these interventions were planned; unfortunately, their implementation coincided with the first and third waves of the COVID-19 pandemic (and the corresponding ramp-downs of scheduled surgery). The author group plans to report broader impacts of UA-VQIP in future papers once patient volumes are more consistent.
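A simple before-and-after comparison of the kind used to examine an intervention's effect could be sketched as follows; the data, roll-out date and column names are illustrative only and do not reproduce the division's actual analysis.

```python
import pandas as pd

# Hypothetical discharge glucose results around a February 2020 intervention
# (illustrative data; a real analysis would use the full UA-VQIP extract).
df = pd.DataFrame({
    "discharge_date": pd.to_datetime(
        ["2019-12-05", "2020-01-20", "2020-03-15", "2020-04-02", "2020-05-11"]
    ),
    "elevated_glucose": [1, 1, 0, 1, 0],   # 1 = off target at discharge
})
INTERVENTION = pd.Timestamp("2020-02-24")  # illustrative roll-out date

before = df.loc[df["discharge_date"] < INTERVENTION, "elevated_glucose"].mean()
after = df.loc[df["discharge_date"] >= INTERVENTION, "elevated_glucose"].mean()
print(f"Off-target rate before: {before:.0%}; after: {after:.0%}")
```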

Second, individual providers use the UA-VQIP platform to audit their own patients' outcomes, which helps build a culture of reflective, forward-thinking practice and meets the criteria for clinical practice audit defined by the Royal College of Physicians and Surgeons of Canada for maintenance of certification activities.16

Third, the UA-VQIP platform captures rare but serious events (eg, stroke following carotid endarterectomy, death following elective surgery) with transparency so that these can be individually discussed during divisional quality rounds.

Fourth, certain metrics (eg, imaging follow-up, discharge medication prescription) can lead to immediate action for the individual patients who did not meet the quality metric. For instance, a patient who has missed imaging follow-up after endovascular aneurysm repair can be contacted to return for follow-up.
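As a sketch of how such an actionable list might be generated, the code below flags patients overdue for post-EVAR imaging. The column names and follow-up threshold are assumptions for illustration; the real metric follows the SVS imaging surveillance guidance cited above.

```python
import pandas as pd

# Hypothetical EVAR follow-up records (illustrative column names and threshold).
evar = pd.DataFrame({
    "patient_id": ["A1", "A2", "A3"],
    "repair_date": pd.to_datetime(["2022-01-10", "2022-03-05", "2022-06-20"]),
    "last_imaging_date": pd.to_datetime(["2022-02-01", None, None]),
})
FOLLOW_UP_WINDOW_DAYS = 60   # illustrative threshold for first post-EVAR imaging

today = pd.Timestamp("2022-09-01")
overdue = evar[
    evar["last_imaging_date"].isna()
    & ((today - evar["repair_date"]).dt.days > FOLLOW_UP_WINDOW_DAYS)
]
# Patients on this list can be contacted and booked for imaging follow-up.
print(overdue[["patient_id", "repair_date"]])
```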

Finally, an additional use of the UA-VQIP platform revealed itself during the COVID-19 pandemic: the ability to capture real-time patient volumes. This allowed the authors' institution to assess the magnitude of the backlog of scheduled vascular surgical procedures and informed the allocation of constrained operating room resources throughout the hospital.17

UA-VQIP has considerable potential for supporting lasting improvements in the quality of care for our patients. However, two major challenges warrant emphasis.

First, ensuring data accuracy is critical. Development of the platform began in April 2018 but required significant refinement over time. Text flags had to be added prospectively to dictations, and coding errors in the algorithms used to extract data from the EDW resulted in misclassification of patients and outcomes. Refining the platform has been, and remains, a priority to ensure data validity. In the first year of implementation, iterative data audits of all fields were performed. Since then, random sampling of patients within quality measures has been performed. For example, the vascular surgery group is currently using UA-VQIP to identify barriers to appropriate EVAR imaging follow-up, while simultaneously screening for coding inconsistencies and errors. Because of the iterative and ongoing correction of code, a formal error rate and accuracy have not been calculated. However, with more experience and stability of vascular volumes and case mix, these measures can be quantified through manual chart reabstraction.

Second, because UA-VQIP is a single-institution data platform, the sample size remains relatively small. As a result, risk adjustment of outcomes based on patient characteristics over time is not possible, although confounding is partly accounted for through stratification by procedure type and admission type. This will be addressed in future updates as the sample size accrues.

In addition to refining analytics, we hope in the future to expand the scope of quality metrics in UA-VQIP. Emerging evidence for novel practices, regional funding for specific priorities of care or new hospital-wide tools to capture patient-reported outcomes may require new quality metrics. For example, Health Quality Ontario's 2017 report identifies patient care transitions (eg, hospital discharge home) as an important locus for lapses in the quality of care.18 In fact, over 20% of vascular patients return to an emergency room or are readmitted after discharge, within a median time of 7 days.19 We are currently discussing the roll-out of early virtual follow-up after discharge and could track compliance with this programme and its impact through UA-VQIP.

Conclusion

This article has presented our experience developing an in-house QI platform that is less onerous on clinicians, is tailored to our institution's patients and can be adapted to evolving priorities of care. As health data become ever more abundant and accessible, opportunities for data-driven quality improvement abound. While the relative value of internal benchmarking has not been compared with that of external benchmarking, we hypothesise that there will be less need for third-party data platforms to consolidate and analyse data already coded within medical records.