Abstract
Clinical classification systems have proliferated since the Apgar score was introduced in 1953. Numerical scores and classification systems enable qualitative clinical descriptors to be transformed into categorical data, offering both clinical utility and a common language for learning. The clarity of the classification rubrics embedded in a mortality classification system provides a shared basis for discussion and comparison of results. Mortality audits have long been seen as learning tools, but they have tended to be siloed within a department and driven by individual learner need. We suggest that the learning needs of the system are also important; a system-level approach facilitates learning from small mistakes and problems, not just from serious adverse events.
We describe a mortality classification system developed for use in the low-resource context and how it is ‘fit for purpose’, able to drive individual trainee, departmental and system learning. The utility of this classification system is that it addresses the low-resource context, including relevant factors such as limited prehospital emergency care, delayed presentation and resource constraints. We describe five categories: (1) anticipated death or complication following terminal illness; (2) expected death or complication given the clinical situation, despite preventive measures being taken; (3) unexpected death or complication, not reasonably preventable; (4) potentially preventable death or complication, with quality or systems issues identified; and (5) unexpected death or complication resulting from medical intervention. We document how this classification system has driven learning at the individual trainee and departmental levels, supported cross-learning between departments, and is being integrated into a comprehensive system-wide learning tool.
- Morbidity and mortality rounds
- Quality improvement methodologies
- Graduate medical education
- Healthcare quality improvement
- Audit and feedback
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Introduction
In 1953, Virginia Apgar realised that obstetric evaluation and documentation of the status and well-being of a newborn baby lacked consistency, and determined to translate key elements of a baby’s appearance and activity into objective categories, each with a numerical classification. The introduction of her proposed scoring system for systematic observations of the neonate established a (now globally adopted) ‘simple, clear classification or grading of newborn infants which can be used as a basis for discussion and comparison of the results of obstetric practices.’1 2
Since that time, the understanding that numerical classification enables qualitative clinical descriptors to be transformed into categorical data has expanded. Scoring systems have proliferated, with multiple systems now in use in critical care,3 4 surgery5 6 and other disciplines. The focus of scoring systems in medicine has predominantly been to inform clinical decision-making, support diagnosis and develop an accurate prognosis: an emphasis on clinical utility.
Less attention has been paid to the potential educational and quality improvement value of classification and scoring systems at an organisational level. Translating clinical descriptors into numerical systems supports the development of a common language for learning. The clarity of classification rubrics builds a basis for discussion and comparison of results across both time and geography. The value of that common language for learning is evident in the almost 13 000 publications that cite the Apgar score. Developing learning opportunities and addressing the learning needs of the system, not just the needs of individual trainees, is a newer development in medicine, influenced by the attention to evidence-based care as well as systems thinking in healthcare.7
The learning needs of the system are well recognised, and mortality audits have long been seen as learning tools. Their value, especially with regard to maternal, neonatal and paediatric outcomes, has been described in both the high-income country (HIC)8–10 and the low- and middle-income country (LMIC)11 contexts. Yet the ability of a system to learn from small mistakes and problems, rather than from serious adverse events, is rare.12
The aim of this paper is to describe a mortality classification system developed for use at AIC Kijabe Hospital. We illustrate how it is fit for purpose in LMICs, and describe how it has driven learning opportunities first at the departmental level and then across the institution, because it provides a common language across departments and disciplines. We explain how this scoring system was incorporated into performance indicators, advancing the expectation that all mortalities would be audited and classified. We point to future work assessing the system performance of the score.
Methods
Developing a mortality classification system
In searching for a foundation for the mortality classification system, we reviewed the surgical, anaesthesia and trauma literature for potential starting points. It was important to address the multiple small issues and problems, not just the catastrophic events, that impact care and patient outcomes, including survival.
The trauma literature contributed an important discussion on preventable deaths from a broader system perspective, including prehospital care. However, it also demonstrated the variability of definitions, with multiple uses and descriptions of ‘preventable’, ‘potentially preventable’ and ‘unpreventable’ deaths. The call for consensus by Oliver and Walter in 2016 identified ‘inconsistency in the methodology and terminology between studies of trauma deaths in defining what constitutes ‘preventable’…. The differing definitions of preventability make comparison between systems and studies both difficult and unreliable.’13 Oliver and Walter also described the importance of context in defining preventable deaths. Context is particularly important in LMICs, where access and capability within health systems are highly variable. This means that a substantial number of patients arrive at hospital well past the point where medical intervention would be likely to make a difference.
The anaesthesia literature includes the Australian anaesthesia mortality scoring system (see table 1), which is more complex but adds specificity.14 Eight categories were described, broadly grouping deaths as due to anaesthesia, not due to anaesthesia, or not assessable. The classification does not seek to identify broader contributors to the mortality, only whether the anaesthesia component of care had a role (table 2).
In the surgical literature, there was a much more consistent common language for complications than for mortalities per se. The surgical literature used Shackford’s nomenclature for describing complications,15 which was later developed into a broader definition separating complications related to patient disease from complications due to error (table 3).16
From this literature came the first three adapted categories of mortality for Kijabe Hospital (table 4):
Expected death, not preventable: given the clinical situation, death was likely despite the hospital taking preventive measures. This would include a multitrauma patient whose early identification, resuscitation and surgical care were appropriate, but whose injuries were ultimately not compatible with life. This category also allowed ongoing full evaluation of the patient’s likely clinical course throughout admission, with either the decision to continue life-saving measures as long as possible or, if critical care resources were exhausted or care was likely to be futile, the choice of palliation.
Unexpected death, not reasonably predictable or preventable: a death that was completely unexpected and for which, on further investigation, nothing could reasonably have been done to prevent it. Examples include a primigravid 23-year-old woman who underwent a caesarean section but died from a saddle embolus on postoperative day 2, or an unexpected cardiac arrest due to an undiagnosed congenital arrhythmia in a patient admitted for elective surgery.
Potentially preventable death: hospital quality or systems issues identified, whether gaps in policy, procedure, training, staffing skill mix, human resources, fatigue/scheduling, or human factors such as similarly labelled medications. This category combined the ‘potentially preventable’ and ‘preventable’ categories from the trauma literature. Deaths in this category may include an unrecognised surgical complication, or an unrecognised nosocomial sepsis in which worsening early warning scores were missed for 48 hours, by which time the patient was in septic shock and not salvageable.
It was clear that, in a setting with resource limitations and variable system reliability, two significant categories of mortality were not fully captured by the above three categories.
Patients who were dead on arrival (DOA), palliative on arrival or preterminal on arrival, for whom the system had failed and further preventive measures would be futile and would exceed system resources. For our context, it seemed important to differentiate those patients we did not even try to save: while under some circumstances their deaths may have been preventable, delays in presenting to our facility made medical care futile. Given the LMIC context, it was essential to capture where the prehospital system fails patients before they reach our facility, and how often that happens. This category included circumstances where a patient was DOA or where a family was requesting inpatient palliative care and death was imminent. It also included circumstances such as national healthcare strikes, in which patients arrived dead or unsalvageable due to national system failures.
Preventable deaths due to frankly iatrogenic causes, often contextually unique to the resource-limited setting. Individual iatrogenic causes, such as sepsis due to contaminated total parenteral nutrition (TPN) or a significant medication error resulting in death or disability, were included in this category. However, additional factors, including unreliable electrical and water supplies or old/secondhand equipment, can be direct causes of mortality in low-resource settings. For example, an old anaesthesia machine or ventilator malfunctioning in theatre and a child dying, or the hospital oxygen system failing with insufficient cylinder oxygen to fill the need, may result in unnecessary mortality that needs to be categorised independently of clinician care, as it is not medical error in the traditional sense. In these cases, inability to replace large expense items may mean system failures beyond the control of any individual nurse, physician or department. This category of deaths would be applied if a patient event indicated an urgent need to ‘stop things now’ due to an identified ongoing risk to patient safety.
After these considerations and after consultation with department leads, a five-category mortality classification system was implemented at AIC Kijabe Hospital in November 2018.
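For teams seeking to replicate this approach, the classification lends itself to a simple digital representation. The following is a minimal illustrative sketch in Python (not the actual Kijabe reporting tool; the field names are hypothetical) of how the five categories and a per-death audit record might be encoded for monthly reporting:

```python
from dataclasses import dataclass
from enum import IntEnum


class MortalityCategory(IntEnum):
    """The five mortality categories, numbered as described above."""
    ANTICIPATED_TERMINAL = 1           # anticipated death following terminal illness
    EXPECTED_DESPITE_PREVENTION = 2    # expected given clinical situation, despite preventive measures
    UNEXPECTED_NOT_PREVENTABLE = 3     # unexpected, not reasonably preventable
    POTENTIALLY_PREVENTABLE = 4        # quality or systems issues identified
    IATROGENIC_OR_INFRASTRUCTURE = 5   # resulting from medical intervention or infrastructure failure


@dataclass
class MortalityRecord:
    """One audited death, as classified by departmental consensus."""
    patient_id: str
    department: str
    month: str                     # reporting month, eg '2018-11'
    category: MortalityCategory
    learning_points: str = ""


# Example: a consensus-classified death reported to the monthly audit.
record = MortalityRecord(
    patient_id="P-0001",
    department="paediatrics",
    month="2018-11",
    category=MortalityCategory.POTENTIALLY_PREVENTABLE,
    learning_points="worsening early warning scores not escalated",
)
```

Encoding the categories as an ordered enumeration keeps classifications consistent across departments and makes monthly aggregation straightforward.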
Implementation
Data for every death in every department, including the emergency department and outpatient departments, were collected systematically on a monthly basis from November 2018. The authors of this paper include four successive directors of clinical services for the hospital from 2018 to 2022, representing the phases of development, consultation, implementation and data collection throughout that period.
Departmental mortality review
All department heads were instructed to report every mortality, even the simplest and most expected death, monthly to the director of clinical services. Every case was to be audited and classified by consensus among a team of more than one rater within each department, and the final classification of every death was to be included with the monthly report.
Multidisciplinary audit meeting
A weekly hospital-wide audit meeting was already in place at the time of the mortality classification roll-out, with a rotating roster led by each of the key departments (internal medicine, paediatrics, neonatology, outpatient/emergency department, general surgery, paediatric surgery, orthopaedics, neurosurgery, ear, nose and throat (ENT), plastic surgery, obstetrics). This schedule ensured that every department presented outcome data to a wider audience of medical, nursing and clinical officer staff, as well as students, on a quarterly basis. Quarterly reporting classifying all mortalities was added to this audit as an expectation for every department, for hospital-wide presentation and discussion.
Total hospital mortality
This was aggregated across departments to compare with other published and accessible mortality data for similar hospitals in the region, country and continent. At the commencement of data collection, a benchmark of <5% was set for this measure based on available published hospital mortality rates in Kenya in 2018.
Department-specific mortality
Once data collection had proven consistent and successful, each department head was asked to set their own departmental metrics for high-quality outcomes in the different mortality categories by benchmarking their area of practice against local, regional and global literature. This submission formed part of the quality assurance and quality improvement measures once baseline data had been collected.
Of note, for every department, a metric of 0% was set for category 5 (iatrogenic/infrastructure failure) deaths, with the expectation that these should be ‘never’ events. A requirement was implemented that all category 5 events must be immediately escalated to the Director of Clinical Services and hospital Chief Executive Officer, as immediate resourcing may be required to solve the problem (eg, additional oxygen supply or a generator), or a significant change to services may be needed (eg, shutting down theatres for equipment malfunction, or cessation of TPN reconstitution in the event of multiple invasive bacterial infections).
Conversely, no hospital-wide metrics were set for category 1 and 2 deaths (DOA, palliative or unpreventable by the time of arrival), as these outcomes were beyond the control of individual clinicians or departments.
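As an illustration of how such metrics might be operationalised, the following sketch flags breaches of departmental thresholds and of the hospital-wide category 5 ‘never event’ rule. The thresholds shown are hypothetical, loosely modelled on the departmental goals reported in the Results; this is not the hospital’s actual tooling.

```python
from collections import Counter

# Hypothetical quarterly thresholds: maximum allowable deaths per category.
# Real metrics were set per department by each department head.
DEPARTMENT_METRICS = {
    "paediatrics": {4: 3, 5: 0},   # eg, <4 category 4 deaths per quarter
    "emergency":   {4: 0, 5: 0},
}


def review_quarter(department, categories):
    """Compare one quarter's classified deaths against departmental metrics.

    Any category 5 event breaches the hospital-wide 'never event' rule
    and must be escalated immediately, regardless of local thresholds.
    """
    counts = Counter(categories)
    alerts = []
    if counts[5] > 0:
        alerts.append("ESCALATE NOW: category 5 event(s); notify the "
                      "Director of Clinical Services and the CEO")
    for category, maximum in DEPARTMENT_METRICS.get(department, {}).items():
        if counts[category] > maximum:
            alerts.append(f"category {category}: {counts[category]} deaths "
                          f"exceeds departmental maximum of {maximum}")
    return alerts


print(review_quarter("paediatrics", [2, 3, 4, 4, 5]))
```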
Results
The first 3 years of routine data collection after implementation yielded significant insights and quality achievements. Mortality measures were calculated, with some metrics set by the hospital for quality assurance and improvement.
Departmental mortality review
From 2018, all mortalities were reviewed under the leadership of each department head (internal medicine, paediatrics, neonatology, outpatient/emergency department, general surgery, paediatric surgery, orthopaedics, neurosurgery, otolaryngology, plastic surgery, obstetrics). The department heads consulted the attending physicians to review every mortality case as a team, minimising the potential bias of reviewing only cases with ‘good learning points’ or of glossing over contributory factors to mortality that might otherwise escape disclosure. These reviews were centrally collected and reported monthly to the Director of Clinical Services. All mortalities were scored by more than one rater as part of routine departmental audits, reducing the risk that a single reviewer would miss a significant finding. During the implementation phase, 100% reporting was achieved from every department within 6 months, with consistent reporting thereafter.
Multidisciplinary audit meeting
The consistent expectation across departments of disclosure of all deaths and assigned categories ensured a transparent approach to sharing mortality data and the learning points that arose as a result. Since all-cause mortalities were audited, learning points, whether minor or major, became available beyond the individual or department to a broader multidisciplinary group, across all patient age groups. This drove the learning cycle beyond a few residents or consultants and encouraged system-wide change. One example was the determination that late recognition of deterioration was a root cause of mortality in the wards and the emergency department, leading to the implementation of early warning score charts in the emergency department and inpatient wards across the hospital.
Total hospital mortality and metrics
This was aggregated across departments, more as a matter of interest, to compare with other hospitals in the region, country and continent. Overall all-cause mortality for the hospital in year 1 of data collection was 0.23%; this was the first time the hospital had aggregated its data to determine where it sat compared with other national and regional reported outcomes (table 1).
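The aggregation itself is simple arithmetic: total deaths across all departments divided by total admissions. A brief sketch with invented counts (not actual Kijabe data) illustrates the computation against the <5% benchmark:

```python
# Invented department-level (deaths, admissions) counts for one year.
dept_stats = {
    "internal medicine": (34, 5200),
    "paediatrics":       (21, 4800),
    "surgery":           (9, 6100),
}


def total_hospital_mortality(stats):
    """All-cause hospital mortality (%), aggregated across departments."""
    deaths = sum(d for d, _ in stats.values())
    admissions = sum(a for _, a in stats.values())
    return 100.0 * deaths / admissions


print(f"{total_hospital_mortality(dept_stats):.2f}%")  # 0.40% for these invented counts
```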
Department-specific mortality and metrics
Some departments, such as obstetrics, had well-established metrics set by global entities. For obstetrics, the goal for in-hospital maternal mortality was aligned with the WHO 2030 goal of <0.07% of births, and fresh stillbirths were set at <0.6% of total births among Kijabe Hospital maternity clinic patients, aligned with the WHO 2030 goal of <0.6% of total births. The internal medicine and adult critical care team set a departmental goal of zero category 5 deaths by 2020, with category 4 deaths to decrease by 50% every quarter (from an initial 6% at first data collection, a goal that was more than achieved). The paediatric inpatient team set a goal of <4 category 4 deaths per quarter by December 2019, with <2 category 5 deaths per year by December 2019. The emergency department set its metrics at zero category 4 or 5 deaths. The orthopaedic team set their metrics at <1% of patients for categories 1–3, and 0% for categories 4–5.
Category 1 data, especially in the emergency department and the internal medicine ward, proved to be a valuable additional classification. Individual deaths in this category had previously been accompanied by a sense of helplessness, if not moral distress. The data collected in this category were felt to be an indicator of broader health system maturity and responsiveness, and of community awareness of severity of illness, timely prehospital care and timely referral from referring clinics and institutions. All clinicians knew of cases that arrived too late, or with incorrect diagnoses and missed therapeutic windows; capturing the magnitude of this occurrence has started to build a foundation of data that can be used to measure the extent of external gaps in diagnosis, treatment or timely referral. Category 1 death data were felt to be an important tool to bring visibility to systems issues and guide advocacy in public health in targeted areas, or to provide specific feedback to referring hospitals about optimising resuscitation or referral pathways before sending patients to our institution.
Category 3 and 4 (unexpected/non-preventable and potentially preventable) data presented the greatest opportunities for discussion and learning, at both an individual and a systemic level. Department-wide audits of individual cases proved a fruitful opportunity to interrogate the clinical record leading up to the event, ask good probing questions, model a culture of learning and use a fishbone-type analysis of causal categories to avoid a culture of blame, and thereafter to assign a classification aligned with the resulting departmental actions or quality improvement projects. The outcomes of these department-level discussions were shared for all category 3/4 deaths at the hospital-wide multidisciplinary audit the next time the department was rostered to present its data, with a transparent approach and an invitation for clinicians in the audience to question the classification assignment and/or the actions arising to improve future patient safety and quality.
Category 5 data were encouraging, in that category 5 events (such as a 10× opioid dosing error that occurred in paediatrics), when they did occur, were acknowledged by the department, internally analysed using root cause methodology and then transparently shared with the broader multidisciplinary team. This demonstrated a cultural shift from a ‘blame’ culture to a culture of transparency, in which, at a hospital-wide audit, a department was comfortable sharing a ‘never’ event, its analysis of what led to it, and the systems improvements implemented as a result of the root cause analysis.
Feedback loops
The greatest learning came from departments whose mortality reports included actionable change for their team, and who reported back monthly or quarterly on the results of action taken to address the issues raised. System learning at Kijabe Hospital still tends to be departmentally focused; however, the hospital is well positioned to begin examining processes at the institutional level where issues recur across departments.
Discussion
The development of this rigorous and contextually appropriate mortality classification system draws together multiple sources into a simple yet effective tool for classifying deaths from medical, surgical, traumatic and other causes. In considering the potential utility of any scoring system, a balance needs to be struck between specificity (such as the detail present in the Australian anaesthesia mortality system) and practicality. A tool that is too complex risks lack of adoption and implementation failure; a tool that is too simple risks masking significant differences in causality and areas for system improvement. Every scoring system must manage this tension. In developing the Kijabe system, we chose a relatively simple, context-informed approach that met both the learning needs of the organisation and the limited auditing time and resources available, yet was sufficiently rigorous to drill down into the specifics of a case where that would benefit the learning of trainees or of the system as a whole.
Historically, learning in medicine has been driven by audit and feedback. Clinical audits have the potential to provide a rich data set for health system change, yet the standard process tends to focus on learning derived from the care of a specific patient, especially where there has been a bad outcome. Trainees and their teachers are usually separated from data synthesis opportunities that might provide insight into how the health system is working for multiple patients with similar conditions, even when multiple such patients with related poor outcomes pass through their care and their clinical audit process. Clinical audits in many settings are viewed by physicians and nurses as reactive rather than proactive. Reactive processes, especially when led by supervisors, tend to feel punitive, and the ‘blame game’ when things go wrong does not facilitate a healthy learning environment.17
One advantage of the mortality classification system is that it gives a common language to all departments. In our experience at Kijabe, where all departments meet every Friday and take turns presenting their own data, the mortality classification mechanism promotes and supports a goal of collaborative learning across institutional departments, not just within a department. Taking a systems perspective on a routine data-gathering process, the mortality classification ensures that process change can be tracked as it is implemented across different contexts, to see what works, where and for whom. A common language opens opportunities to foster cross-learning within institutions, across departmental boundaries. This is particularly important because system issues in the theatre, for example, may relate to and impact multiple departments.
The implementation of this classification system had some limitations, which institutions in resource-limited contexts seeking to replicate it are also likely to encounter. At a hospital-wide level, the goal for total all-cause mortality for our hospital was set at a benchmarked level of <5%, based on published hospital mortality rates in East Africa. This was understood at the time to be a likely poor proxy for hospital-wide quality of care, since systems issues (internal and external) could impact this number at any time, and hospital mortality is a poor overall metric of quality because only a small fraction of deaths are likely to be sensitive to changes in the quality of in-hospital care.11 For example, a private, elective surgical facility will always have a lower total hospital mortality rate than a public, all-comers facility with an emergency department and acute admissions. This indicator may also reflect a region or country’s healthcare resources more than individual hospital performance per se. An example occurred for our institution when a national public healthcare strike in 2016/2017 caused an increase in late presentations, and resource demands overwhelmed our facility, increasing mortality across the paediatric medical, newborn, paediatric surgical and obstetric units, with ORs of death of 3.9 (95% CI 2.3 to 6.4), 4.1 (95% CI 2.4 to 7.1), 7.9 (95% CI 3.2 to 20) and 3.2 (95% CI 0.39 to 27), respectively.18 Nonetheless, it was felt to be important to know the initial mortality rate for patients at our hospital, with a goal of reducing it over time without substantially changing the demographics or reasons for admission of our patients.
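For readers unfamiliar with how such interval estimates are produced, the following sketch computes an odds ratio and its Woolf 95% CI from a 2×2 table. The counts shown are invented for illustration only; the ORs quoted above come from the published analysis in reference 18.

```python
from math import exp, log, sqrt


def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table:

        a = deaths during the strike,  b = survivors during the strike
        c = deaths before the strike,  d = survivors before the strike
    """
    or_estimate = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = exp(log(or_estimate) - z * se_log_or)
    upper = exp(log(or_estimate) + z * se_log_or)
    return or_estimate, lower, upper


# Invented counts for illustration only.
print(odds_ratio_with_ci(30, 270, 12, 388))  # OR ~3.6 (95% CI ~1.8 to ~7.1)
```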
Proper attribution of error, especially for category 3 and category 4 deaths, is a valid concern. Attribution was supported by having multiple people review each mortality, which in itself significantly improved the examination of each death. All category 4 deaths are also presented at a multidisciplinary audit and discussed by senior consultants from all specialties within the hospital. This process is designed to promote learning and is, by its very nature, not completely precise. However, the identified concerns, arising actions and learning outcomes that reach across departmental silos are valuable, especially for low-rate events. The variability possible in assigning a category is best addressed by multiple raters in robust discussion, with presentation of the rationale from the departmental audit at the multispecialty audit.
An additional limitation is that, at a departmental level, the initial classification of deaths at Kijabe Hospital is currently determined by consensus opinion of a small consultant-level team. Additional rigour is added by these categories being discussed and debated at a multidisciplinary audit, in which category determinations are questioned and occasionally changed as a result. This, of course, relies on the clinical information and circumstances as shared by the responsible team with a panel of their specialty peers, which lends itself to potential limitation and bias in the information shared.
Manaseki-Holland et al noted that ‘preventable death can only be directly measured by the judgement of expert clinical observers who retrospectively review case notes.19 Such judgement-based assessments have generally reported low reliability, meaning that they lack consistency across repeated reviews. Thus, current and future policy and research agendas that propose measuring preventable mortality, should push us to define, and if possible, improve the measurement characteristics of those estimates. Only then can we use case note review measurements in research to validate standardised mortality rates, to design operational systems for learning from mortalities within hospitals, and to compare preventable deaths between hospitals’.
An approach applying root cause analysis to all deaths would be more rigorous and objective, and would inevitably further increase consistency across repeated reviews, but it requires human resourcing that is often beyond the capabilities of a constrained LMIC institution. Every institution needs to start somewhere, and having a classification system to aspire to seems better than no objective classification system at all.
Of note, the inclusion of a separate category 5 was, in our context, a decision that carried some risk at the time of implementation. Overtly disclosing (even internally) that the death of a patient was either potentially preventable or frankly iatrogenic required a degree of transparency that historically had not been well modelled in our hospital, region or country. Healthcare globally is trending towards an expectation of open disclosure to patients when the medical care provided does not meet evidence-based standards, with transparency and humility aligning with the well-documented frequency of medical error in healthcare provision. However, the cultural context of our hospital also carries known risks of retaliation from patients’ relatives, increasing frequency of legal action for perceived or actual medical negligence, and risk to the reputation of the institution and its affiliated donors/sponsors. Category 4 and 5 deaths, however, were and continue to be reported, with increasing acceptance of open disclosure over the course of data collection. This acceptance demonstrates an increasingly mature organisational culture that prioritises patient safety and quality.20
Conclusion
A mortality scoring system provides an important and distinctly different learning perspective from clinical audits, which tend to be case based and held within an individual department and clinical discipline. With a classification and data-based reporting structure, learning is no longer solely the purview of individual trainees or practitioners; broader communities of practice can support and drive it.21 22 Comparisons with gold standards can be more easily achieved, as aggregated mortality data for specific demographics and conditions can be a proxy for evaluating the quality of service provision, and good-quality data may support evaluation of large-scale quality improvement programmes in LMICs, where data on hospital outcomes are often missing.11
The mortality classification system we describe provides a simple tool to promote a transparent, learning healthcare culture.
Ethics statements
Patient consent for publication
References
Footnotes
Twitter @mardi_steere, @mbadam
Contributors Concept and design: MS and MBA. Data acquisition, analysis: FM, RED and EM. Interpretation of data: MS, MBA, FM, RED and EM. First draft: MBA and MS. Revising critically for intellectual content: MS, MBA, FM, RED and EM. Responsible for content in final draft: MS, MBA, FM, RED and EM. Note: each author was in the role of medical director at different times between 2018 and 2022, and actively working with the mortality classification system implementation, with the exception of MBA, who was a paediatrician using the MCS at a departmental level.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.