
Ethiopian Pediatric Society Quality Improvement Initiative: a pragmatic approach to facility-based quality improvement in low-resource settings
  1. Jacquelyn Patterson1,
  2. Bogale Worku2,
  3. Denise Jones1,
  4. Alecia Clary3,
  5. Rohit Ramaswamy4,
  6. Carl Bose1
  1. 1Pediatrics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
  2. 2School of Medicine, Addis Ababa University, Addis Ababa, Oromia, Ethiopia
  3. 3Health Policy and Management, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
  4. 4Department of Maternal and Child Health, and the Public Health Leadership Program, Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
  1. Correspondence to Dr Jacquelyn Patterson; jackie_patterson@med.unc.edu

Abstract

Objectives To describe critical features of the Ethiopian Pediatric Society (EPS) Quality Improvement (QI) Initiative and to present formative research on mentor models.

Setting General and referral hospitals in the Addis Ababa area of Ethiopia.

Participants Eighteen hospitals selected for proximity to the EPS headquarters, prior participation in a recent newborn care training cascade and minimal experience with QI.

Interventions Education in QI in a 2-day workshop setting followed by implementation of a facility-based QI project with the support of virtual mentorship or in-person mentorship.

Primary and secondary outcome measures Primary outcome—QI progress, measured using an adapted Institute for Healthcare Improvement Scale; secondary outcome—contextual factors affecting QI success as measured by the Model for Understanding Success in Quality.

Results The dose and nature of mentoring encounters differed based on a virtual versus in-person mentoring approach. All QI teams implemented at least one large-scale change. Education of staff was the most common change implemented in both groups. We did not identify contextual factors that predicted greater QI progress.

Conclusions The EPS QI Initiative demonstrates that education in QI paired with external mentorship can support implementation of QI in low-resource settings. This pragmatic approach to facility-based QI may be a scalable strategy for improving newborn care and outcomes. Further research is needed on the most appropriate instruments for measuring contextual factors in low/middle-income country settings.

  • continuous quality improvement
  • global health
  • healthcare quality improvement
  • paediatrics

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • This is a programme evaluation of a pragmatic approach to quality improvement led by the Ethiopian Pediatric Society.

  • We evaluate hospital progress with quality improvement methodology using a novel adaptation of an Institute for Healthcare Improvement Scale.

  • Although hospitals were balanced across the in-person mentorship and virtual mentorship groups with regards to location and census, they were not randomised.

  • Data were collected via medical record abstraction; when the medical record was erroneous per the experience of the quality improvement team, an estimate of compliance with a care process was used for baseline data.

Introduction

In 2016, the neonatal mortality rate in Ethiopia was 28/1000 live births; 90 000 newborns died in Ethiopia during that year.1 Many of these deaths were preventable. In response to this public health crisis, the Ethiopian Ministry of Health, in collaboration with the Survive and Thrive Global Development Alliance (S&T GDA) and the Ethiopian Pediatric Society (EPS), initiated a training programme for providers of newborn care in hospitals countrywide.2 Through this programme, midwives learnt evidence-based practices for newborn resuscitation, early newborn care and care of the small baby using the American Academy of Pediatrics’ Helping Babies Survive (HBS) suite of educational programmes.3

The HBS curriculum, and particularly the first programme in the suite entitled Helping Babies Breathe (HBB), has been adopted in many other low/middle-income countries (LMICs).4 5 Training in HBB reduces perinatal mortality, and the likelihood of sustained reductions in mortality increases when companion strategies are used to maintain knowledge and translate it into practice.6–11 However, translation of knowledge into practice is frequently impeded by systems barriers, including lack of resources, inadequate staffing and poorly organised processes of care.12 To eliminate some of these barriers, local strategies must be employed. Quality improvement (QI) methods that promote local adaptation of proven interventions through iterative testing may be key to sustaining a reduction in perinatal mortality following training.13 14

While many Ministries of Health in LMICs are developing QI expertise, capacity-building for facility-based QI is still needed.15 Consequently, to date, successful facility-based QI in LMICs has typically involved concentrated coaching by a QI expert.16–22 Because such models are labour intensive and resource intensive, virtual consultation has been increasingly explored as a complementary or even alternative approach.23 24

In 2017, the EPS began a pilot project called the EPS QI Initiative to test a strategy to improve adherence to newborn care practices. The initiative included QI education using Improving Care of Mothers and Babies, a QI guide developed by the S&T GDA, to support facility-based QI efforts in LMICs.25 In addition to training in QI, the EPS QI Initiative provided either virtual or in-person mentoring for each hospital-based team. In this manuscript, we examine critical features of the EPS QI Initiative and present formative research on mentor models for QI teams in LMIC settings. Finally, we review key elements of the EPS QI Initiative using the Consolidated Framework for Implementation Research (CFIR), an implementation science framework of constructs associated with effective implementation.26

Methods

Features of the EPS QI Initiative

Selection of hospitals

The director of the EPS invited 20 hospitals in the Addis Ababa area to join the QI initiative. He selected hospitals that participated in the countrywide HBS training cascade and had little experience in QI methodology. This group of hospitals included general and referral hospitals in both rural and urban locations. Executives at all 20 hospitals gave permission for their hospital to participate. Two were later excluded because they did not participate in baseline data collection (see below). Therefore, 18 hospitals participated in the final cohort.

Baseline data collection

The initiative focused on newborn care in the labour and delivery ward. In an effort to track implementation of practices recommended in the HBS programmes, the EPS identified 15 key newborn process and outcome indicators from the HBS curriculum for continuous monitoring and evaluation by the cohort. These included the following dichotomous process indicators: (1) stimulation to breathe at birth, (2) administration of positive pressure ventilation, (3) cord clamping after 1 min, (4) skin-to-skin for 1 hour after birth, (5) early initiation of breastfeeding, (6) temperature measurement, (7) vitamin K administration, (8) tetracycline administration, (9) BCG administration, (10) polio vaccination and (11) kangaroo mother care (for newborns <2000 g). Additionally, four dichotomous outcome indicators were included: (1) crying at birth, (2) hypothermia (temperature <36.5°C), (3) stillbirth (including fresh vs macerated) and (4) death prior to discharge.

A midwife from each hospital participated in an initial workshop in June 2017 to learn how to abstract data from the medical record for monitoring of the quality indicators. At this workshop, participants were also trained in how to implement low-dose, high-frequency (LDHF) practice of newborn care skills, such as bag-mask ventilation, at their hospital. (While there is evidence to support improved knowledge translation with LDHF practice following HBS training,7 9–11 27 time constraints did not permit covering LDHF practice during the initial HBS training cascade.) Following the workshop, hospitals began collecting baseline data using tablets provided by the initiative and a purpose-designed database built with REDCap electronic data capture tools (REDCap, Vanderbilt University, Nashville, Tennessee, USA).28 29 The midwife at each hospital abstracted data describing every birth and the subsequent care of the newborn in the labour and delivery ward from paper medical records and entered these data into the tablet-based database. The midwife tasked with this responsibility received a small financial incentive. Digital data were then transferred via the internet to a central computer in the EPS office. Baseline data were collected during the months following the workshop.
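
To illustrate the structure of the abstracted data, the sketch below shows how a single birth record with these dichotomous indicators might be represented and how monthly compliance with a process indicator could be tallied. The field names and Python representation are illustrative assumptions and do not reproduce the initiative's actual REDCap instrument.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of one abstracted birth record; field names are
# illustrative only and are not the initiative's actual REDCap variables.
@dataclass
class BirthRecord:
    hospital_id: str
    birth_month: str                        # e.g. "2017-10"
    birth_weight_g: Optional[int]
    stimulated_at_birth: Optional[bool]     # process indicators
    ppv_given: Optional[bool]
    cord_clamped_after_1min: Optional[bool]
    skin_to_skin_1h: Optional[bool]
    early_breastfeeding: Optional[bool]
    temperature_measured: Optional[bool]
    vitamin_k_given: Optional[bool]
    tetracycline_given: Optional[bool]
    bcg_given: Optional[bool]
    polio_vaccine_given: Optional[bool]
    kangaroo_mother_care: Optional[bool]    # only applicable for newborns <2000 g
    cried_at_birth: Optional[bool]          # outcome indicators
    hypothermic: Optional[bool]             # temperature <36.5°C
    stillbirth: Optional[str]               # None, "fresh" or "macerated"
    died_before_discharge: Optional[bool]

def monthly_compliance(records, indicator):
    """Per cent of births in a list of records for which a dichotomous
    indicator was met, ignoring births with missing data for that indicator."""
    observed = [getattr(r, indicator) for r in records
                if getattr(r, indicator) is not None]
    return 100 * sum(observed) / len(observed) if observed else None
```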

Mentor selection and training

The EPS director conducted a search to identify local mentors with strong understanding of Ethiopian health systems, knowledge of QI methodology, experience in coaching healthcare providers and evidence of ability to motivate change. After an in-depth interview process, three candidates were selected to serve as mentors for hospital-based QI projects. All three mentors were neonatal intensive care nurses with at least 5 years of clinical experience; each had prior leadership experience, but limited QI experience.

The mentors studied the QI guide and subsequently participated in a 1-day training to review QI methodology, to learn how to facilitate the QI training workshop and to practise successful QI coaching. During this mentor training, two authors of the QI guide (CB and JP, also authors of this manuscript) conducted a detailed review of the basic steps for QI methodology outlined in the guide. Additionally, these authors briefed the mentors on how to facilitate the QI training workshop for the cohort and also led sessions on effective coaching using both instruction and simulation cases to highlight successful coaching strategies.

Quality improvement training workshop

In December 2017, the head nurse midwife and one additional, self-selected representative from the labour and delivery ward of each hospital attended a 2-day workshop on QI methods. We used Improving Care of Mothers and Babies as a teaching tool. On the first day, two authors of the QI guide (CB and JP, also authors of this manuscript) taught the following basic QI steps: creating a team, deciding what to improve, choosing the barriers to overcome, planning and testing change, and determining if the change resulted in improvement. They taught this portion of the workshop in English with interpretation into Amharic. Participants applied key knowledge from these steps in small group practice exercises with the help of Ethiopian facilitators. These facilitators served as mentors for the cohort in the subsequent months (see Mentor role below). Representatives received all written material in English, the designated language of healthcare professionals in Ethiopia.

On the second day, participants reviewed baseline data of key indicators from their hospital in the form of run charts.30 The data manager for the initiative plotted these run charts using the REDCap data and a purpose-designed template in Excel. With the assistance of a facilitator, each hospital identified gaps in their quality of care and selected an indicator for improvement based on its importance (eg, to families or the health authority), expected amount of improvement and the potential impact of the improvement. Although the initiative did not use a collaborative model in which all hospitals conduct QI on the same indicator,31 independent selection of indicators by the hospitals still resulted in the majority pursuing the same gap in quality (see below). After selecting an indicator, hospital representatives began the initial planning of a project including completion of an aim statement using the model described in the QI guide. Following this training, representatives returned to their hospitals to form a QI team and complete a project.

Mentor role

We initially planned to provide QI training, followed by the addition of in-person mentorship for only one half of the hospitals. The plan to limit in-person mentorship to a subset of the hospitals was made to permit the evaluation of the impact of mentorship on the success of implementing a QI project. The EPS director assigned hospitals to QI training alone or QI training with mentorship with the goal of achieving balance between groups with respect to rural versus urban location of the hospitals and low versus high delivery census. The EPS director also considered geographic proximity with allocation of the groups, such that each group included hospitals located both near to and far from the EPS central office. The nine hospitals in the mentorship group were clustered into sets of three based on ease of travel between them; subsequently, each mentor was assigned one of these sets of three hospitals to mentor with in-person visits. However, subsequent discussions with participants at the QI training workshop suggested that there was a very low likelihood of successful execution of a QI project in the absence of mentorship. In response to this, and in appreciation of the primary objective of the initiative to improve care in participant hospitals, the plan was modified to provide virtual mentorship to support QI activities in hospitals not receiving in-person mentorship. Thus, each mentor was assigned three hospitals for virtual mentorship in addition to their three hospitals for in-person mentorship.

The EPS director instructed mentors to interact with their hospitals once monthly by phone for the virtual mentorship subset and in person for the in-person mentorship subset. The mentor, in conjunction with the QI team, determined the content and length of mentoring sessions with consideration for the status of the QI project and perceived challenges in moving forward. Mentors recorded the length and nature of each interaction for all mentoring encounters. Mentors also participated in a monthly conference call with the EPS director and QI guide authors to discuss successes and challenges with coaching and for collective support and learning.32

Monitoring and evaluation

Patient-level data collection continued following the second workshop. During this period, a data manager at the EPS produced a comprehensive monitoring and evaluation report for each hospital that summarised the hospital’s monthly data for all key indicators. The comprehensive report contained a running monthly tally of total births and births <2000 g, rates of compliance with each process indicator per month in both table and run chart format, and rates of outcomes for each outcome indicator per month in table format. The data manager provided run charts for the outcome indicator hypothermia only, as other outcomes (eg, mortality) were sufficiently rare that conclusions could not be inferred from monthly graphic data. The comprehensive report also included a detailed table on missing data for each process and outcome indicator.

The data manager also produced an indicator-specific monitoring and evaluation report for key indicators directly linked to the QI team’s process or outcome selected for improvement. The report included a running weekly tally of total births, rates of compliance with any relevant process indicator per week in both table and run chart format, and rates of outcomes for any relevant outcome indicator per week in table format.
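
The reports were assembled with a purpose-designed Excel template; purely to illustrate the weekly format, the sketch below computes a weekly compliance rate for one indicator from record-level data and plots it as a run chart against a baseline median. The file name, column names and baseline cut-off date are assumptions for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative only: the initiative used an Excel template, not this script.
# Assumed columns: 'birth_date' (one row per birth) and 'skin_to_skin_1h' (0/1).
births = pd.read_csv("hospital_births.csv", parse_dates=["birth_date"])

weekly = (births.set_index("birth_date")
                .resample("W")["skin_to_skin_1h"]
                .mean()
                .mul(100))          # per cent of births with the indicator met each week

# Baseline median from October 2017 through the first week of December 2017
# (the exact cut-off date is an assumption).
baseline = weekly.loc["2017-10-01":"2017-12-09"].median()

ax = weekly.plot(marker="o", title="Skin-to-skin care for 1 hour after birth")
ax.axhline(baseline, linestyle="--", label=f"Baseline median ({baseline:.0f}%)")
ax.set_xlabel("Week")
ax.set_ylabel("Births with indicator met (%)")
ax.legend()
plt.tight_layout()
plt.show()
```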

Hospitals received comprehensive reports summarising monthly data at approximately 3-month intervals, and indicator-specific reports summarising weekly data at approximately monthly intervals. As a result of delayed data extraction in some hospitals and a period of poor internet access for a subset of the hospitals, reports were not always delivered at the prescribed intervals. Thus, data were not continuously available to guide QI teams in their work.

Evaluation of the EPS QI Initiative

Hospital-based monitoring of improvement

QI teams determined whether there was improvement in their chosen process or outcome using simple run chart rules. Teams identified change by a shift (six or more consecutive data points all located above or below the median) or trend (five or more consecutive points all going up or all going down) in the data. For 16 of the teams, the data manager calculated the baseline median depicted on the run charts using weekly data from October 2017 through the first week of December 2017. Two teams reviewed their baseline data and determined from their own experience that the chart data inaccurately reflected compliance with their chosen process of care. These teams determined an approximate baseline rate through either an educated guess or direct observation of that process of care for a subset of deliveries. Both teams selected rates that were worse than those calculated from the chart data. These approximated rates were adopted as the baseline for subsequent QI work, and strategies were implemented to improve the quality of these data in the medical chart. As teams did not consistently record the timing of the changes they implemented, in this report we considered all data from the second week of December onwards as occurring after initiation of the QI project.
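
A minimal sketch of these run chart rules is shown below. Teams applied the rules by inspecting their run charts visually, so the code is illustrative only; the treatment of points falling exactly on the median and of tied consecutive values is a simplifying assumption.

```python
def has_shift(values, median, run_length=6):
    """Shift: six or more consecutive points all above or all below the median.
    Points exactly on the median break the run (a simplifying assumption)."""
    run_sign, run = 0, 0
    for v in values:
        sign = (v > median) - (v < median)   # +1 above, -1 below, 0 on the median
        run = run + 1 if sign != 0 and sign == run_sign else (1 if sign != 0 else 0)
        run_sign = sign
        if run >= run_length:
            return True
    return False

def has_trend(values, run_length=5):
    """Trend: five or more consecutive points all going up or all going down.
    Ties between consecutive points break the trend (a simplifying assumption)."""
    run_dir, run = 0, 1
    for prev, curr in zip(values, values[1:]):
        direction = (curr > prev) - (curr < prev)
        run = run + 1 if direction != 0 and direction == run_dir else (2 if direction != 0 else 1)
        run_dir = direction
        if direction != 0 and run >= run_length:
            return True
    return False

# Example: weekly compliance rates (%); median of the baseline period is 40.
weekly_rates = [35, 42, 38, 44, 47, 52, 55, 58, 61, 63]
print(has_shift(weekly_rates, median=40))   # True: six consecutive points above the median
print(has_trend(weekly_rates))              # True: five or more consecutive rising points
```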

Progress with QI methodology

We evaluated progress with QI methodology using a QI progress scale adapted from a scale for quality collaboratives published by the Institute for Healthcare Improvement (see online supplemental table 1).33 This scale was originally designed for a collaborative model in which a single intervention is implemented across all sites. Since the EPS QI Initiative involved different interventions at each site, the two authors of the QI guide adapted this scale to remove references to a single change package and to customise language to match the methodology presented in the QI guide. Two representatives from each QI team gathered at a final workshop in April 2018 to present their team’s QI work and discuss sustainability of QI efforts. Two authors of this report independently rated the hospitals on their QI progress based on these team presentations and run chart data using the QI progress scale. Any discrepancies in the two investigators’ ratings were resolved through consensus.


Context

We used the Model for Understanding Success in Quality (MUSIQ) to evaluate the context in which QI work was conducted in this cohort.34 MUSIQ is a conceptual model describing contextual factors that influence QI success. It has been used in high-income settings to understand the context around QI projects in a paediatric hospital, a state QI collaborative, verification visits to healthcare organisations and an improvement advisor training programme.35 36

The MUSIQ Survey was revised to reflect the setting of this QI initiative, with the following adaptations: the preamble was adjusted to reflect the details of this initiative; the organisation was specified as the hospital and the microsystem as the delivery room; and three questions (#32, 34, 36) were deleted because of lack of relevance in this initiative. The final survey comprised 33 questions in the following five domains: (1) QI team, (2) the organisation (in this initiative, defined as the hospital), (3) the microsystem (in this initiative, defined as the delivery room), (4) support and (5) environment. The final three questions on the survey addressed outcomes of the specific QI project, including a question on perceived success.

We used a back-translation strategy to produce a translated survey in Amharic.37 First, the revised English survey was translated into Amharic by an external translator and then back-translated into English. The investigators reviewed the original survey and back-translated survey for points of confusion, discussed discrepancies and came to consensus on the final Amharic survey. A committee comprising the three mentors for the initiative reviewed the final survey and suggested additional edits for clarification.

Up to six individuals from each hospital independently completed the MUSIQ Survey, with representation from the following personnel: QI team leaders, QI team members, heads of the labour and delivery ward and hospital administrators. The MUSIQ scoring system assigns the following numeric values for the Likert scale responses to each question: totally agree=7, agree=6, somewhat agree=5, neither agree nor disagree=4, somewhat disagree=3, disagree=2, totally disagree=1, don’t know=0. We calculated median domain scores and interquartile ranges (IQRs) for each hospital.
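
A minimal sketch of this scoring is shown below, assuming a flat table of survey responses with hypothetical column names; it maps the Likert text to the MUSIQ numeric values and computes the median and IQR of each domain for each hospital. Whether ‘don’t know’ responses were retained when computing medians is not specified; here they are kept.

```python
import pandas as pd

# MUSIQ numeric values for the Likert responses (per the scoring system above).
LIKERT = {"totally agree": 7, "agree": 6, "somewhat agree": 5,
          "neither agree nor disagree": 4, "somewhat disagree": 3,
          "disagree": 2, "totally disagree": 1, "don't know": 0}

# Assumed layout: one row per respondent per question, with hypothetical columns
# hospital_id, domain (qi_team/hospital/delivery_room/support/environment), response.
responses = pd.read_csv("musiq_responses.csv")
responses["score"] = responses["response"].str.strip().str.lower().map(LIKERT)

domain_summary = (responses.groupby(["hospital_id", "domain"])["score"]
                  .agg(median="median",
                       q1=lambda s: s.quantile(0.25),
                       q3=lambda s: s.quantile(0.75)))
print(domain_summary)
```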

Patient and public involvement and oversight

Neither patients nor the lay public were involved in the design, conduct, reporting or dissemination plans of our research. However, the professional community represented by participant midwives advised the EPS about modifications in the design (ie, type of mentorship). Our Institutional Review Board exempted this study from review.

Results

Demographics of participating hospitals and selection of QI project

Among the hospitals receiving virtual mentorship, six were rural and seven were general hospitals; among the hospitals receiving in-person mentorship, five were rural and five were general hospitals. The number of annual deliveries ranged from 1296 to 5728 (median 2940) among the hospitals receiving virtual mentorship and from 1404 to 8732 (median 2348) among the hospitals receiving in-person mentorship.

Hospitals most commonly selected skin-to-skin care (n=10) for their improvement project because this process was identified as one with poor compliance. Hospitals also chose to improve temperature measurement (n=5), hypothermia (n=1), hand washing (n=1) and delayed cord clamping (n=1; see online supplemental table 2).

Mentor encounters

All hospitals received at least one encounter from their mentor per month (table 1). Encounters for the hospitals receiving virtual mentorship were nearly all virtual (95%) and most lasted 30 min to 2 hours (81%). In contrast, encounters for the hospitals receiving in-person mentorship were predominantly in person (84%) and most lasted more than 2 hours (74%). While mentors interacted with a variety of QI team members, QI team leaders were most frequently involved in mentor encounters for both groups. Data collectors were involved in one-third of encounters for hospitals receiving virtual mentorship; QI team members were involved in one-third of encounters for hospitals receiving in-person mentorship. The majority of mentor encounters with hospitals receiving virtual mentorship focused on assessing progress, with directed coaching on the QI process only one-quarter of the time (figure 1). In contrast, mentor encounters with hospitals receiving in-person mentorship involved coaching on the QI process nearly half of the time, with time spent encouraging and motivating the team during one-third of visits.

Table 1

Mentor encounters to support quality improvement projects

Figure 1

Themes of mentor encounters for hospitals receiving virtual mentorship and those receiving in-person mentorship. Encounters that involved more than one theme are displayed in all relevant categories. QI, quality improvement.

In addition to the encounters to support QI projects, mentors engaged with hospitals in both groups around data entry and transmission issues (n=36 additional encounters) and assistance with development of a presentation for the April workshop (n=11 additional encounters). Among those hospitals receiving in-person mentorship, mentors also made in-person visits focused on specific reinforcement of clinical training and other reasons such as discussion of finances for purchasing supplies (n=14 additional encounters).

Progress with QI methodology

Hospitals implemented one to four changes during the initiative (figure 2, top panel). In general, hospitals receiving in-person mentorship implemented two or three changes, whereas hospitals receiving virtual mentorship implemented one or two. Educating staff was the most common change implemented across the cohort (figure 2, bottom panel). During the 5 months following QI training, all teams implemented a large-scale intervention (one that affected all providers and patients in the labour and delivery ward) targeting the process or outcome they had selected for improvement (figure 3). Two teams were able to progress to sustaining improvement through more permanent or extensive changes in the system.

Figure 2

Data describing the number of changes (top) and nature of changes (bottom) implemented by hospitals in the initiative.

Figure 3

QI progress score of hospitals in the initiative. Scores indicate the progress at each hospital during the 5 months following QI training. QI, quality improvement.

Contextual factors affecting QI

Two to six individuals from each hospital responded to the MUSIQ Survey. Median domain scores for the entire cohort indicate that hospital respondents ‘agreed’ that contextual factors predicting QI success within the domain of the QI team were accessible for their QI work (online supplemental table 3). Respondents ‘somewhat agreed’ to ‘agreed’ that factors predicting QI success were accessible within the domain of the delivery room; for all other domains (support, hospital, environment), respondents ‘somewhat agreed.’

Discussion

The EPS QI Initiative demonstrates a successful, pragmatic approach to conducting mentored, facility-level QI in low-resource settings. Hospitals in the initiative demonstrated that they could successfully engage in QI by implementing at least one large-scale intervention in their labour and delivery ward with the support of QI training and a mentor. We noted a number of strongly distinguishing constructs as described in the CFIR that may have contributed to the success of this initiative. These constructs included elements of the process, namely external change agents and reflecting and evaluating, and elements of the intervention, namely adaptability and complexity.26

External change agents

External QI mentors supporting the novice teams in this cohort were key to the overall success of the initiative, reinforcing the wealth of literature on the importance of mentorship for QI programmes in LMICs.38–44 The dose and nature of mentoring encounters may have been affected by a virtual versus in-person approach, and in turn, could have affected QI progress, though definitive conclusions cannot be drawn from this study. The approach to external change agents in this initiative was pragmatic, training healthcare workers with relatively little prior QI experience to become mentors. While these mentors spent the majority of in-person encounters coaching on the QI process or encouraging the team, they were more likely to focus on assessment of progress in virtual encounters. This difference may relate to the challenges of interacting virtually; however, it is unclear if the differences seen in the virtual mentorship versus in-person mentorship encounters would have been lessened if more experienced mentors were supporting the teams. Given the high cost of in-person mentorship noted in the literature, virtual mentorship remains an attractive approach for scale-up in low-resource settings that deserves further evaluation.45

Reflecting and evaluating

Reflecting on and evaluating progress with a QI project through the provision of data in visual run charts may have been key to the overall success of the initiative. The importance of real-time data feedback in QI interventions in LMICs has been previously described.46 During this initiative, the data collector at each hospital received a small, monthly stipend to support their data collection efforts. Additionally, this initiative required a full-time data manager at the EPS who assembled monitoring reports for the facilities using data downloaded from REDCap and a purpose-designed Excel template. This data-driven approach, although simple, may have motivated teams to improve their care. However, we recognise that the method of reflecting and evaluating used in this initiative, namely internal continuous data collection paired with external production of run charts, may not be sustainable in many low-resource settings. The large amount of data collection required for a continuous monitoring and evaluation approach, particularly in the absence of electronic medical records, is burdensome. Furthermore, accurate documentation of care in the medical record remains a barrier to data collection in many low-resource settings. Many of the teams in this initiative, challenged by inaccurate documentation of processes of care, invested time getting buy-in from their colleagues to ensure accurate data collection for the purposes of improvement. This experience further supports the need for data quality assessments as a precursor to data-driven QI interventions.21 Lean approaches to data collection that may be more realistic in low-resource settings, such as purposive sampling across a wide range of conditions, are an alternative to support data-driven QI work.47 Additionally, QI work in LMICs may benefit from programmes that allow automated generation of run charts from electronic data.

Adaptability

The QI approach of locally derived systems solutions to improve newborn care and outcomes may have been key to this initiative’s success. Adaptation to reflect local context has been previously shown to improve both adoption and sustainability.48 Two-thirds of the teams in the initiative demonstrated at least modest improvement through implementation of solutions specific to their hospital. The process or outcome that was the focus of each QI project was self-selected by the team based on their identification of a gap in quality and particular interest in closing that gap. The adaptability of this initiative included allowing for the selection of a gap in quality that was not part of the key indicators being monitored. Although this adaptability may have heightened the motivation of teams to invest in QI work, comparative evaluation of QI success across teams with disparate projects was a challenge. As an alternative, we evaluated their progress with QI methodology using an adapted Institute for Healthcare Improvement Scale. This adapted scale has not been validated. To our knowledge, there are few methods that have been validated to rate QI teams on their progress outside of a determination of improvement in the process or outcome selected for their project. Tools to rate QI progress, particularly in LMIC settings, are needed to support research focused on implementation of QI.

Complexity

The basic steps of QI taught in the workshop, and supported by the QI guide, were intuitive, relatively easy to apply and largely free of QI jargon. This simplified approach to QI methodology may have been key to the teams’ progress in our initiative.

Teams were encouraged to choose ‘low-hanging fruit’ in order to establish early success. Most chose a simple process of care that was applicable to all newborns as the subject of their project. It is unclear if the pragmatic strategy employed in this initiative would be effective in addressing more complex outcomes such as stillbirth or mortality. These outcomes likely require more difficult and larger systems changes that would be challenging for a novice QI team to implement. Additionally, the teams in both groups commonly resorted to education as the change they implemented, an observation consistent with literature on novice QI teams.49 Education is a necessary but often insufficient intervention to effect lasting systems changes.

Context as evaluated by MUSIQ

We used MUSIQ, a tool originally designed and evaluated in high-resource settings, to evaluate contextual factors that may have affected success in this initiative. MUSIQ has only recently been applied to LMIC settings.50–52 Results from the EPS QI Initiative suggest that QI support in hospitals in LMICs may be less available compared with high-resource settings. For example, respondents to the MUSIQ Survey in our initiative more often reported that they somewhat agreed they had access to QI support in several domains, compared with respondents in high-income countries who commonly totally agreed or agreed in these same domains.36 In addition, wide IQRs in our cohort for some domains suggest considerable variability in the hospital support and environment for QI. Despite this variability, we could not draw conclusions regarding the influence of specific contextual factors on QI success in the participant hospitals. Nevertheless, it is important to understand context and causality in QI initiatives, and it is possible that an instrument such as MUSIQ that is designed for use in high-resource health systems did not transfer well to the Ethiopian context despite our adaptations. Future research should continue to develop methods for strengthening data collection as well as dealing with flawed, uncertain, proximate and sparse data.53

Limitations

While this study allows us to explore two pragmatic approaches to mentored QI in LMICs, we cannot draw conclusions regarding the impact of mentorship on QI success in this small cohort of hospitals. The non-randomised design for assignment of mentorship, while practical for travel of the mentors during programme implementation, also limits the extent to which we can separate the effects of mentorship from those of facility characteristics on QI success. Virtual mentorship, although novel, was put into place during programme implementation when it became obvious during QI training that teams might be at high risk of failure without external support. Given this late addition to the programme, and because available technology limited virtual mentorship to phone calls, it is possible that more rigorous virtual mentorship with video capability would have produced different results. In the QI workshop setting, we discovered that English proficiency was not as strong as we anticipated among participants. As such, it is possible that use of the QI guide by QI teams was decreased because it was available only in English. Two hospitals estimated their baseline data due to grossly inaccurate medical record documentation. It is possible these QI teams were motivated to underestimate the quality of their care in order to demonstrate greater improvement. Interpretation of the QI progress scores reported in this study is limited by their method of assignment (two independent reviewers; resolution of discrepancies by consensus). The value of this scoring system could be strengthened for future studies by using blinded, external reviewers to assign scores, with formal adjudication of discrepancies. Finally, the short follow-up period does not allow us to address questions regarding sustainability of QI. Future studies should address the sustainability of facility-driven QI across multiple projects, and with decreasing external support as QI teams become more experienced.

Conclusion

QI methodology was successfully implemented in this cohort of hospitals in low-resource settings with the support of education in QI paired with external mentorship. Mentoring appears to be essential for progress of QI, and further research is needed on the relative costs and effectiveness of different mentoring approaches. This pragmatic approach to facility-based QI may be a scalable strategy for improving newborn care and outcomes. Development and validation of tools to evaluate both progress with QI methodology as well as contextual factors relevant for QI success in low-resource settings is needed. Future work should also focus on whether pragmatic, lean QI strategies improve important outcomes that have complex antecedents.

Acknowledgments

We would like to acknowledge the three mentors who served as external change agents for this initiative: Azeb Sibhatu, Megerssa Kumera and Fikirte Tilahun. We would also like to acknowledge Tekleab Mekbib for his translation of the MUSIQ Survey, and Sara Berkelhamer, Kate McHugh and Renate Savich for their facilitation of the June workshop which began this initiative.

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Contributors JP developed the project design, provided oversight during the execution of the project, assisted with data analyses and wrote the initial draft of the manuscript. BW assisted with development of the project design, recruited sites, provided in-country oversight during the execution of the project, assisted with data analyses and reviewed the manuscript. DJ assisted with the provision of data during the conduct of the project and with data analyses, and reviewed the manuscript. AC assisted with data analyses, particularly with analyses of the MUSIQ data, and reviewed the manuscript. RR assisted with development of the data analytic plan and with data analyses, and with the writing and reviewing of the manuscript. CB assisted in the development of the project design, provided oversight during the execution of the project, and assisted with data analyses and writing of the manuscript.

  • Funding This work was supported by a grant (#50008) from the Laerdal Foundation, Stavanger, Norway.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval The University of North Carolina at Chapel Hill Institutional Review Board exempted this study from review.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.