Introduction In South Africa (SA), prehospital emergency care is delivered by emergency medical services (EMS) across the country. Within these services, quality systems are in their infancy, and issues regarding transparency, reliability and contextual relevance have been cited as common concerns, exacerbated by poor communication and ineffective leadership. As a result, we undertook a study to assess the current state of quality systems in EMS in SA, in order to determine initial priorities for their development.
Methods A multiple exploratory case study design was used that employed the Institute for Healthcare Improvement’s 18-point Quality Program Assessment Tool as both a formative assessment and a semistructured interview guide, applied to four provincial government EMS and one national private service.
Results Services generally scored higher for structure and planning. Measurement and improvement were found to be more dependent on utilisation and perceived mandate. There was a relatively strong focus on clinical quality assessment within the private service, whereas in the provincial systems, measures were exclusively restricted to call times with little focus on clinical care. Staff engagement and programme evaluation were generally among the lowest scores. A multitude of contextual factors were identified that affected the effectiveness of quality systems, centred around leadership, vision and mission, and quality system infrastructure and capacity, guided by the need for comprehensive yet pragmatic strategic policies and standards.
Conclusion Understanding and accounting for these factors will be key to ensuring both successful implementation and ongoing utilisation of healthcare quality systems in emergency care. The result will not only provide a more efficient and effective service, but also positively impact patient safety and quality of care of the services delivered.
- prehospital care
- qualitative research
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
The importance of quality systems in the prehospital emergency care (PEC) setting is becoming increasingly recognised, given that the delivery of PEC services frequently occurs against the backdrop of demanding environments, often with limited resources, and for patients of varying and unpredictable acuity.1–4 As PEC-focused tools for measuring and understanding patient safety and quality of care have been developed and implemented, recognition of the importance of formal systems for governing such activities has grown.4–9
In South Africa (SA), a mix of government-funded and private emergency medical services (EMS) deliver PEC across the country.10 Within these services, quality systems are in their infancy.11 Among PEC clinicians, the general perception of EMS quality systems in the country is poor.11 Concerns regarding system transparency, reliability and contextual relevance have been cited as common reasons for this.11 These issues have been exacerbated by apparent poor communication, ineffective leadership and a historical association of the use of quality systems as a punitive mechanism.11
Recent National Department of Health policy reviews have highlighted the importance of systems for developing, implementing and monitoring the quality of healthcare in the country.12 While significant advances have been made in improving the scope of practice, training and education of PEC clinicians, little has been done towards developing formal quality systems aimed at assessing and maintaining standards of quality of care and patient safety in the PEC setting in SA.
There are a multitude of potential factors that could affect these systems as a whole. Therefore, in order to determine priorities for focus regarding their development and improvement, it is important to first understand the current state of EMS quality systems in the country. Given this need, we undertook a study to assess prehospital EMS quality systems in SA.
A multiple exploratory case study design was used in order to achieve the study aim.13 14 For the purposes of this study, a case was defined as the quality programme or system of performance measurement of a participating service. The definition of a case was purposely kept broad given that quality measurement by EMS in SA is limited, and the existence or scope of formal quality systems is likely to be equally limited.11 The quality systems of four provincial government EMS and one national private EMS organisation were used for the purposes of this study.
Primary data collection
Multiple sources and data types were used and collected to achieve the study aim.14 The Institute for Healthcare Improvement’s Quality Program Assessment Tool was employed as the primary means of data collection (online supplementary file 1). The tool uses a categorical rating scale of 0–5 to answer 18 key questions across six broad criteria, namely:
Quality improvement activities.
Staff involvement in the quality programme.
Evaluation of the quality programme.
The tool was used both as a formative assessment of each participating service’s quality programme and as a semistructured interview guide to further explore the results obtained from the formative assessment. Data were collected via interviews with directors and leaders of the participating services with in-depth knowledge of their respective service’s operations. To maintain anonymity, their specific titles have been omitted. All interviews were conducted in English and recorded for transcription and analysis. Reflective notes were maintained during, and immediately after, each interview for verification of the interview results during analysis.
Secondary data collection
Multiple sources of secondary data were collected to support the primary data, grouped into two categories. Category A secondary data were made up of the results of a targeted literature review to identify policy-focused guidance for EMS organisations in SA regarding the implementation of a quality programme; and/or the development, implementation and utilisation of methods to assess quality of care. A search of several key websites was conducted, including: The Health Professions Council of SA—the healthcare licensing body of the South African National Department of Health (SADoH); the SADoH; and Statistics South Africa—the statistical service of the South African national government. Category B secondary data were made up of publicly accessible quality and/or performance reports published by the participating services.
Setting and population
The delivery of prehospital emergency medical care in SA is based on a three-tiered system of basic, intermediate and advanced life support levels of qualification. Each level is licensed for independent practice and governed by a national registration board, yet delivered primarily through provincial government-funded EMS, with several private EMS located in the larger cities across the country servicing medical insurance clients. Given the variations in geography and population distribution across SA, the four provincial prehospital emergency medical services of KwaZulu-Natal (KZN), Western Cape (WC), Limpopo (LP) and North West (NW) provinces were purposively selected to be as inclusive of this variation as possible (figure 1). There is limited evidence to suggest that private EMS in SA are more advanced regarding the utilisation of quality assessment tools and frameworks.11 As a result, a national private EMS organisation was additionally included as part of the multiple case review.
For the primary data collection, descriptive statistics were used to describe and summarise the categorical formative assessment. Conventional content analysis, as described by Hsieh and Shannon, was used to sort and analyse the interview data.15 Prior to analysis, each interview transcript was reread for content familiarisation. First-level coding was conducted through the extraction of meaning units from each transcript, which were summarised into codes using open coding. Once completed, similar codes were combined and organised into clustered subcategories. Throughout the first-level coding and subcategory development, the reflective notes were referenced for verification. Interview transcriptions were analysed using MAXQDA 2016 (VERBI Software, Berlin, Germany).
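As an illustration of the descriptive step above, per-criterion summaries of the 0–5 categorical ratings could be computed along the following lines (a minimal sketch; the criterion names and scores here are hypothetical and do not reproduce the study’s data or the full 18-question tool):

```python
from statistics import median

# Hypothetical formative-assessment ratings (0-5 categorical scale) for one
# service, grouped by assessment criterion. Names and values are illustrative
# only, not the study's actual data.
scores = {
    "Structure": [4, 3, 4],
    "Planning": [3, 3],
    "Measurement": [1, 2, 1],
    "Quality improvement activities": [2, 1],
    "Staff involvement": [1, 0],
    "Evaluation": [0, 1],
}

def summarise(scores):
    """Return the median and range per criterion: a simple descriptive
    summary suitable for ordinal/categorical rating data."""
    return {
        criterion: {
            "median": median(vals),
            "min": min(vals),
            "max": max(vals),
        }
        for criterion, vals in scores.items()
    }

summary = summarise(scores)
```

Medians and ranges (rather than means) are used here because the scale is ordinal; any equivalent spreadsheet summary would serve the same purpose.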
For the secondary data collection, document analysis as described by Bowen was used to sort and analyse the supporting data.16 Eligible documents were retrieved and scanned for relevance based on the inclusion criteria. A full-text review was conducted if the document remarked on quality systems, quality of care or quality indicators (QIs). Supporting excerpts, quotations or passages that made reference to EMS in general or by case example were extracted and synthesised. Data were extracted using a standardised data extraction form (Microsoft Excel 2010; Redmond, Washington, USA).
The utilisation and triangulation of multiple methods and data sources attempt to safeguard against potential implications that findings are simply an artefact of a single method, a single source or a single investigator’s bias.16 Therefore, for the purposes of this study, multiple methods were used to ensure internal validity and trustworthiness of the overall study, as described by Guba, and included17: the quality programme formative assessment and supporting documentation; the in-depth qualitative exploration of the assessment via recorded interviews and transcripts; reflective notes; national and/or provincial legislation, policies and directives; and published organisational performance reports.
Consent for participation was provided by each of the participating services and individuals prior to data collection.
Patient and public involvement
No patients were involved in the development of the research question, study design or data collection. The results of the study will be disseminated to participants in the form of a peer-reviewed publication, once complete.
The services included for the case review covered a multitude of social and healthcare demographics found across the country (table 1). There was equal variation in the outcomes of the formative assessment, where services generally scored higher for structure and planning (table 2). Measurement and improvement, however, were found to be more dependent on the services’ utilisation and perceived mandate. There was a relatively strong focus on clinical quality assessment and improvement within the private service, whereas in the provincial systems, QIs reported were exclusively restricted to call times and available vehicle resources, with little to no focus on clinical care. Given the limited scope of QIs measured and reported, it was somewhat predictable that staff engagement and programme evaluation were generally among the lowest scores for the participating services (see table 3 for subcategories and supporting quotes from the qualitative analysis of the quality programme assessment).
South Africa: population 57 458 000; households 16 671 000; public transport use 46.2%.
Western Cape (WC)
Population: 6 650 000 (11.6%); households: 1 877 000 (11.3%); public transport use: 44.7%.
The provincial service’s higher scores in the formative assessment were largely within structure and planning, where a hybrid centralised/decentralised system of subdistrict engagement with two ‘centralised’ quality nodes (ie, one urban and one rural) was employed for the service’s quality system. Within this system were staff primarily dedicated to quality assessment and monitoring. Despite this strength, it was acknowledged that a lack of higher-level leadership had had an impact on the programme (1.1). Similarly, while a comprehensive quality plan existed, it was acknowledged to be outdated and inconsistently reviewed and/or updated.
The most significant points to emerge regarding measurement and improvement related to the service’s understanding of its mandate, and the view that the service operated more as a transport company than a medical company, especially given the sociopolitical history of the region (1.2, 1.3). In light of this, it was felt that reporting on time-based measures of performance was wholly appropriate. Similarly, much of the focus on improvement activities was centred on transport, and on improving interfacility transport booking and operations in particular. The service acknowledged that improvements could be made in terms of staff engagement; however, they felt their public engagement had improved significantly in recent years. Unfortunately, the primary driver for this had been an exponential increase in attacks on ambulances in the community (1.4).
KwaZulu-Natal (KZN)
Population: 11 245 000 (19.5%); households: 2 905 000 (17.4%); public transport use: 40.9%.
The service scored low for structure in the formative assessment compared with the other services. The decentralised approach adopted towards measurement and evaluation made coordination difficult, which was further exacerbated by the perceived rudimentary means by which data were captured and shared (2.1). While the service acknowledged the lack of described roles, responsibilities and accountabilities within its quality plan, the content of the plan was otherwise described as comprehensive and underwent regular evaluation and update (2.2).
The service scored highest in measurement, where a strong focus was placed on continuous monitoring for trend analysis. As with the WC, this focus was strongly associated with the service’s perceived mandate and utilisation (2.3, 2.4). The service scored low for staff and public engagement, where it was acknowledged that, while some effort was made towards this, there was still much to be improved on (2.5, 2.6).
Limpopo (LP)
Population: 5 854 000 (10.2%); households: 1 579 000 (9.3%); public transport use: 41.9%.
The LP EMS quality system scored relatively highly within the structure and planning categories of the formative assessment. There was a strong focus on strategic planning, where their quality system and planning were firmly entrenched into the broader provincial health structures (3.1). The importance of this relationship with the provincial health system was emphasised as a driver for potential improvements in service quality monitoring (3.2).
It was acknowledged that much could be done to improve quality measurement and improvement within the service, which scored lower in the formative assessment. The service focused primarily on response time targets and complaints for the measurement and reporting of quality and performance (3.3). The notion of relationships was echoed in these sections, where feedback from the facilities the service interacted with was also seen as an important measure of quality.
Despite the low scores for staff engagement and evaluation, these had been areas earmarked for attention in the service’s current strategic plan. Staff attitude was acknowledged, and planned for, as an important driver of general service success (3.4). Similarly, technology was earmarked as a driver of success, both for staff engagement and for community accountability (3.5).
North West (NW)
Population: 3 925 000 (6.8%); households: 1 210 000 (7.3%); public transport use: 41.3%.
The NW scored low across all questions and categories in the formative assessment. This was unsurprising considering that (unbeknownst to the authors at the time of data collection) the provincial government, including the health system and EMS, had been placed under administration. On deeper examination, several key factors became apparent that highlighted the difficulties faced by EMS in the province.
From a managerial perspective, the extremely decentralised manner in which the service was structured made coordination and oversight complicated, and significantly hindered process and/or plan implementation (4.1). Coupled with this, the service found it difficult to retain high-level clinical staff, further hampering its ability to implement and sustain a clinically focused quality programme (4.2). From an operations point of view, a recent audit had recognised that the province’s non-personnel resources were poorly matched to the needs of their daily activity (4.3, 4.4).
The QIs that were reported by the service were limited to time-based measures, and vehicle and staff counts. Furthermore, the service lacked their own standalone committees regarding complaints and patient safety, which were instead incorporated into broader general provincial health service committees and structures.
National private service
Based on the formative assessment and interview, several strengths were highlighted within the service, largely centred around structure. There was a strong clinical focus within the quality system of the service, with representation up to executive level (5.1). Furthermore, while input was collected from across the service’s branches, much of the planning came from a centralised office, providing overall strategic direction (5.2). Similarly, there was a relatively strong focus on quality improvement activities within the service. While input and scope were somewhat limited, a robust and comprehensive process was consistently followed when a project was carried out (5.3).
In contrast, the service acknowledged that there was room for improvement with regard to programme planning and evaluation. While a quality management plan existed, it was outdated and seldom reviewed, at least in any formal capacity. Likewise, while several clinically focused indicators were consistently reported and discussed at a high level, the system was acknowledged to be outdated and rudimentary, largely manually captured, and difficult to change, as it was not fit for purpose (5.4). This was perceived to have had an impact on both general quality monitoring and monitoring for sustained improvement.
Of all the categories, staff and patient engagement were perceived to be the weakest, and an area for improvement within the service. The strengths the service enjoyed in this area were largely a result of the service’s private hospital group parent company (5.5).
Nationally and provincially focused policy documents were included as part of the secondary data collection (table 4). Several concentrated on the development and implementation of quality and patient safety systems yet were almost exclusively limited to health facilities. Despite this, they were in depth and pragmatic in their approach towards outlining the steps required to implement effective quality systems. While these may not all be applicable to the EMS setting, several of the concepts outlined in these documents were considered useful towards the development of similar systems for EMS.
All of the EMS-focused documents were limited to high-level/strategic ‘statements’ regarding quality or patient safety. None of the documents found reported any measures of clinical quality, with the focus restricted solely to call times and call volumes. Furthermore, no policy-related documents were found that outlined minimum standards, or that provided steps towards the development and/or implementation of a quality system or clinically focused QIs for EMS.
Healthcare organisational case studies have been identified as an important methodological approach towards describing the factors facilitating and impeding quality systems.18 This was echoed in our study, where several broad observations were made regarding EMS quality systems in SA. From a system structure perspective, a centralised approach with appropriate and engaged senior/executive-level management established responsibility for the system and facilitated greater control over its direction, whereas decentralisation hampered collection and reporting and, as a consequence, accountability. Leadership has previously been identified as an essential component of health quality systems, and featured in this study as both a driver of success when incorporated and a barrier when inadequate or unaccounted for.11 19–21 A lack of cohesive vision and/or mission regarding quality, and of leadership to develop and drive these concepts, has also been associated with organisations that consistently struggle to improve quality; both were similarly lacking or poorly developed within the services assessed in this study.21
Factors associated with infrastructure, support and capacity have also been identified as key drivers of success of quality systems in healthcare.19–21 While structure was among the highest-scored attributes in the participating services’ assessments, insufficient capacity was often identified as a weak link in this study. The combination of leadership and capacity has been described as a primary driver of a quality culture in healthcare quality systems; another component reported as both an enabler of high-quality systems when present, and a barrier to success when absent.19–21 Given the lack of each of these components in the participating services, it is unsurprising that culture did not feature as a common observation or discussion point within the assessments and interviews.
All participating services were limited in their measurement of adverse events, technical quality of care and patient-reported measures, with the primary focus largely centred on time-based measures. This is in contrast to the increasing focus on non-time-based measures of quality evident in the literature.22 This limitation was widely acknowledged, and partially justified by the perceived purpose of EMS and what was understood to be the mandate of these services in SA. Non-time-based measures of safety and quality have previously provided a strong base from which focused quality improvement programmes have led to meaningful improvements in patient outcomes in the PEC setting. The lack of such measures could in part explain the generally poor results observed regarding quality improvement in this study.
Resources and technology were a common feature among the interviews as a potential driver for improvement in quality systems. Of note, there was limited discussion regarding the perceived benefits offered by technology during the evaluation of the WC, the only service using a computer-aided dispatch system and electronic patient records. Technology nonetheless remained a specific solution identified by the remaining services as the answer to many of the problems they faced regarding quality. These contrasting views are evident in the literature, where the importance of technological resources has often been debated, and where a lack of consensus regarding their influence and status has led to them being described as ‘probationary’ when it comes to their role in quality systems.19 20
There was little to no supporting documentation in the way of national policies and/or guidelines for EMS in either implementing quality systems, measuring quality or reporting performance. Furthermore, there was a general lack of policy outlining minimum standards for EMS quality systems altogether. This was evident in the variation of the results of the quality programme assessment and further highlights the need for such guidance. To be effective in both implementation and use, it is essential that appropriate high-level guidance and minimum standards regarding quality systems be outlined, as a driver for change.23 24
In order to deliver safe, high-quality care, it is crucial that the system or mechanism responsible for monitoring and maintaining this process is equally efficient and effective in doing so. Understanding the factors affecting this process is essential towards identifying areas and priorities for improvement within the system. The outcomes of this study provide a base from which the factors affecting quality systems in EMS in SA can be addressed. However, as systems evolve and mature in their approach towards quality and safety, so will the factors that affect the success of the system. As such, quality system evaluation should become a regular, scheduled component of the system itself. Towards this, our study has described one approach that can be used as an objective, repeatable measure of quality system development.
The nature of the questions that case study research in general, and this article in particular, attempts to answer limits the overall extent to which the results are generalisable and/or reproducible. We attempted to address this through the previously described approach towards enhancing the validity and trustworthiness of the methodology. Despite this, the results of this study need to be understood within the context in which they were studied, with an appreciation of the impact this context has on the observations and their broader potential implications. While the specific observations of this study may not be generalisable, the outcomes are nonetheless consistent with what is known in the literature.
A multitude of factors were identified that affected the effectiveness of quality systems, centred around leadership, vision and mission, and quality system infrastructure and capacity, guided by the need for comprehensive yet pragmatic strategic policies and standards. Understanding and accounting for these factors will be key to ensuring both successful implementation and ongoing utilisation of healthcare quality systems in PEC in SA. The result will not only provide a more efficient and effective service, but also positively impact patient safety and quality of care of the services delivered.
The authors would like to thank Raveen Naidoo for facilitating the assessments and interviews for data collection.
Contributors All authors conceived the study. IH conducted the data collection and analysis and takes responsibility for the paper. IH drafted the manuscript, and all authors contributed substantially to its revision.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not required.
Ethics approval Stellenbosch University Health Research Ethics Committee (Ref no. S15/09/193).
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon reasonable request.