
Saving 20 000 Days and Beyond: a realist evaluation of two quality improvement campaigns to manage hospital demand in a New Zealand District Health Board
Lesley Middleton1, Diana Dowdle2, Luis Villa2, Jonathon Gray1,3, Jacqueline Cumming1

1Health Services Research Centre, Victoria University of Wellington, Pipitea Campus, Wellington, New Zealand
2Ko Awatea, Counties Manukau District Health Board, Auckland, New Zealand
3South West Academic Health Science Network, Exeter, Devon, UK

Correspondence to Dr Lesley Middleton; lesley.middleton@vuw.ac.nz

Abstract

Background The current paper reports on a realist evaluation of two consecutive quality improvement campaigns based on the Institute for Healthcare Improvement’s Breakthrough Series. The campaigns were implemented by a District Health Board to manage hospital demand in South Auckland, New Zealand. A realist evaluation design was adopted to investigate what worked in the two campaigns and under what conditions.

Methods A mixed-methods approach was used, involving three phases of data collection. During the first phase, a review of campaign materials and relevant literature, as well as key informant interviews were undertaken to generate an initial logic model of how the campaign was expected to achieve its objective. In phase II, the model was tested against the experiences of participants in the first campaign via a questionnaire to all campaign participants, interviews with campaign sponsors and collaborative team leaders and a review of collaborative team dashboards. In phase III, the refined model was tested further against the experiences of participants in the second campaign through interviews with collaborative team leaders, case studies of four collaborative teams and a review of the overall system-level dashboard.

Results The evaluation identified four key mechanisms through which the campaigns’ outcomes were achieved. These were characterised as ‘an organisational preparedness to change’, ‘enlisting the early adopters’, ‘strong collaborative teams’ and ‘learning from measurement’. Contextual factors that both enabled and constrained the operation of these mechanisms were also identified.

Conclusions By focusing on the explication of a theory of how the campaigns achieved their outcomes and under what circumstances, the realist evaluation reported in this paper provides some instructive lessons for future evaluations of quality improvement initiatives.

  • quality improvement
  • collaborative, breakthrough groups
  • healthcare quality improvement

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Introduction

In 1995, the Institute for Healthcare Improvement (IHI) developed the Breakthrough Series,1–3 a collaborative model for improving quality in healthcare which has since been widely used as a vehicle for change.4 This model is designed to assist organisations in making ‘breakthrough improvements’ by applying existing knowledge to a chosen topic area.1 Typically, a large number of collaborative teams from multiple healthcare organisations come together for a short period (6–15 months) to learn about best practice, then implement and test changes to achieve improvements in their local organisations. Further joint learning sessions offer the opportunity for teams to share experiences and to learn from one another, and regular measurement is used to track change.1

Quality improvement initiatives following the Breakthrough Series Collaborative model typically focus on a single area for change where evidence already exists about best practice but is not widely applied.4 In this paper, we report on two consecutive quality improvement campaigns run by a New Zealand District Health Board that adapted the model to the broad topic of reducing demand for hospital care. We present insight on the causal mechanisms that generated the campaigns’ outcomes and the contextual conditions that helped or hindered the operation of these mechanisms.

Counties Manukau District Health Board (CMDHB) is one of 20 District Health Boards in New Zealand, responsible for planning and funding services for its South Auckland population of 563 210 people (2018/2019 estimate).5 Its multi-ethnic population has high numbers of Māori (indigenous), Pacific and Asian peoples and a significant proportion of residents living in socioeconomic deprivation with its consequent impact on health and health service provision.5 In 2011, faced with a growing demand on inpatient beds at its main hospital, CMDHB began an 18-month quality improvement initiative, the ‘20 000 Days Campaign’. The goal was to achieve a reduction in hospital demand equating to 20 000 bed days by returning 20 000 well and healthy days to the local community.6 This campaign concluded on 1 July 2013 and was then followed by a further 1-year campaign, ‘Beyond 20 000 Days’, which aimed to continue to return healthy and well days to the population of Counties Manukau.7

A campaign project team that included a campaign manager, a campaign clinical lead, improvement advisors, project managers and a communications coordinator was the locus of centralised activity during both campaigns. Both campaigns were backed up by a communications and marketing budget that gave the campaigns a distinctive visual profile throughout the hospital and other services. Interested staff were invited to form collaborative teams and suggest improvement ideas which had the potential to realise the campaigns’ aims. At the start of the 20 000 Days Campaign, an evidence-based session was conducted to select change ideas that held the greatest promise of reducing demand on the hospital. Evidence was sourced from international experience and local pilots, and drew on insights from previous work looking to integrate primary and secondary care across the region.8–10 Initially, 13 collaborative teams of between 8 and 10 members each were assembled (of which 10 completed the campaign).

In the Beyond 20 000 Days Campaign, an open call was again made for collaborative teams, with 40 proposals short-listed and reviewed through a ‘Dragons’ Den’ selection process. The aim of this process was to identify ideas that were most likely to achieve the campaign’s objective and that had the potential to involve a wider set of health professionals working across primary and community care. The latter was an attempt to shift the dialogue beyond hospital-based teams and to create sustainable change across the whole health system. In this campaign, 16 collaborative teams of 8–10 members were initially assembled and 14 completed the campaign. In both campaigns, topics were unique to each collaborative team and encompassed a diversity of proposed changes to reduce hospital admissions and length of stay, increase access to community-based support, and reduce harm to patients and readmissions (table 1).

Table 1

The collaborative teams that completed the campaigns and their aims (adapted from Middleton et al12)

As per the Breakthrough Series Collaborative model, teams in both campaigns attended 5 or 6 days of ‘learning sessions’, which focused initially on quality improvement methods for planning, implementing and evaluating small changes quickly, and later on sharing experiences and results and considering how to spread innovations. Between these learning sessions, the teams engaged in ‘action periods’ during which they tested and implemented changes in their own settings and collected data to measure change.1

CMDHB commissioned an independent evaluation of both the 20 000 Days Campaign and the Beyond 20 000 Days Campaign in order to understand how the campaigns achieved their outcomes and to inform the development of further initiatives.11 12 Rather than a focus on ‘does it work’, which others have argued may not be helpful for improvement initiatives,13 the emphasis of this evaluation was on how and in what contexts the campaigns worked. In the remainder of this paper, we present the evaluation design and a summary of the results, which are then discussed in terms of their relevance for evaluations of future quality improvement programmes.

Methods

A realist evaluation design was adopted to investigate what worked in the two campaigns and under what conditions.14 A key focus for realist evaluations is understanding causation, specifically how a programme or an initiative achieves its outcomes, and how the causal mechanisms responsible for generating those outcomes are both enabled and constrained by the myriad contextual factors within which the programme or initiative is located.14

The initial 20 000 Days Campaign was launched in October 2011 and concluded in July 2013. The subsequent Beyond 20 000 Days Campaign began in July 2013 and finished in June 2014. Given this timeline, data collection for the evaluation was undertaken in three phases between March 2013 and November 2014, as summarised in figure 1.

Figure 1

Three phases of data collection and analysis (adapted from Middleton et al12).

Phase I: elicit the theory of how the 20 000 Days Campaign is expected to bring about change

A realist sampling strategy15 was developed to test a provisional theory of how the initial campaign worked to achieve its effects. This provisional theory was developed in the first phase of research from the following data sources:

  • A review of campaign planning documents.

  • A review of relevant literature.

  • Eight semi-structured interviews to clarify the assumptions underpinning the campaign: with campaign sponsors selected to cover clinical, improvement, project management and senior leadership expertise (n=4), and with collaborative team leaders selected to cover the broad groupings of hospital-based and community-based topics (n=4).

A campaign logic model was developed, which highlighted the sequence of activities and policy mechanisms central to how the 20 000 Days Campaign was expected to achieve its goal. Fieldwork in phase II was designed around refining and testing these mechanisms.

Phase II: test claims against the experiences of participants in the 20 000 Days Campaign

During the second phase, three sets of data were collected to assess the 20 000 Days Campaign:

  • Eleven semi-structured interviews with a cross section of campaign sponsors (n=6) and collaborative team leaders (n=5), conducted 8 months after the initial campaign finished (March 2014), regarding their experiences of the campaign, including key achievements and challenges. Two of the campaign sponsors had been interviewed in the first phase; the other four represented a broader set of senior leadership. The collaborative team leaders were different from those interviewed in the first phase.

  • A questionnaire emailed to all participants in the 20 000 Days Campaign 9 months after the campaign ended (April 2014), probing the utility of six features of the campaign’s design and implementation. The questionnaire was adapted from two previously validated instruments designed to capture the attributes of successful quality improvement programmes.16 17 It was emailed to 150 participants in total, with 39 replies (a response rate of 26%). However, further investigation revealed that the number of active campaign participants was closer to 80, suggesting that the replies were more representative than the initial response rate implied (ie, 39/80=48.75%). Most respondents were collaborative team members (54%), with the remainder comprising collaborative team leaders (19%), clinical expert advisors (19%) and project managers or other expert advisors (8%). Every collaborative team was represented.

  • Secondary analysis of 8 of the 10 collaborative teams’ ‘dashboards’ of quantitative outcomes and measures, in order to understand how the teams that completed the campaign measured their achievements.

By the end of phase II, the provisional campaign logic model had been refined further. Within the sequence of inputs, activities, outputs and outcomes, four policy mechanisms were distilled as central to how the campaign achieved its results. These were:

  1. Campaign benefits from an organisational climate prepared to try new approaches.

  2. Campaign outputs reliant on early adopters being prepared to step up and lead change.

  3. Short-term outcomes reliant on team members engaged in cycles of learning and improvement.

  4. Medium-term outcomes requiring confidence in team measurement practices.

Figure 2 presents the refined campaign logic model built from the insights at the end of phase II, which were then available for further theory refining in phase III.

Figure 2

Campaign logic model (adapted from Middleton et al12).

Phase III: test claims against the experiences of participants in the Beyond 20 000 Days Campaign

The third phase of data collection related to the subsequent Beyond 20 000 Days Campaign. The four policy mechanisms refined in phase II were tested further via:

  • Semi-structured interviews with team leaders of nine collaborative teams.

  • In-depth case studies of four teams; the four teams were chosen to reflect the diversity of collaborative team activity occurring during the Beyond 20 000 Days Campaign and featured:

    • A team attempting a complex change.

    • A team attempting a less complex change.

    • A team whose change ideas experienced difficulties.

    • A team with a high degree of participation from people outside of CMDHB.

    These case studies included 20 interviews with a range of health professionals and improvement advisors, as well as analyses of team dashboards, presentations and business case documentation.

  • A secondary analysis of the CMDHB system-level dashboard that was used across both campaigns.

Each data source was analysed and compared with the others to produce the overall evaluation findings. Analysis was a continual process of interrogating claims embedded in the campaign logic model to progressively develop a more refined theory of how the two campaigns achieved their outcomes. When evaluating a programme using a realist approach, attention is paid to the change the programme is intended to create, who is intended to do something differently, the resources provided to enable that change in behaviour, how recipients respond to those resources and the contexts that shape that response.18
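To illustrate how such realist reasoning can be recorded during analysis, the following is a minimal sketch, not part of the published evaluation, of a context-mechanism-outcome (CMO) record, populated with an example drawn from the first mechanism reported in the Results section:

```python
# Minimal illustrative sketch (not part of the published evaluation)
# of a record for realist context-mechanism-outcome (CMO) reasoning.
from dataclasses import dataclass


@dataclass
class CMOConfiguration:
    context: str    # conditions that enable or constrain the mechanism
    resource: str   # what the programme offers participants
    reasoning: str  # how recipients respond to that resource
    outcome: str    # the change the programme is intended to create


# Example populated from the first mechanism reported in the Results section.
preparedness = CMOConfiguration(
    context="Culture receptive to change; visible senior management support",
    resource="Campaign branding and widely communicated evidence of demand",
    reasoning="Participants feel inspired to try new approaches",
    outcome="Staff volunteer change ideas and form collaborative teams",
)
print(preparedness)
```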

Patient and public involvement

There were no funds or time allocated for patient and public involvement, so we were unable to involve patients. A key insight for future campaigns, however, confirmed the need for patient codesign to ensure patients and family members contributed to the prioritisation of change ideas.

Results

Monthly dashboards19 monitored the difference between projected demand and actual bed use and concluded that the target of saving 20 000 bed days was met during the course of the 20 000 Days Campaign. This conclusion rested on the assumption that if actual use was less than predicted, the hospital had made a bed day saving. CMDHB used two growth models to track changes. The first model, which extrapolated from past activity combined with demographic growth, showed that 23 060 days had been saved by 1 July 2013. The second model, which used only demographic growth from 2011, concluded that 34 800 days had been saved by 1 April 2015.
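To make this counterfactual logic concrete, the sketch below shows how a demand projection converts into bed days saved. All figures and the simple compound-growth form are hypothetical illustrations; the CMDHB growth models were more elaborate than this:

```python
# Minimal sketch of the bed-day-saving logic described above. All figures
# and the simple compound-growth form are hypothetical; the CMDHB growth
# models (extrapolation from past activity, demographic growth) were more
# elaborate than this.

def project_demand(baseline_bed_days: float, annual_growth_rate: float,
                   years_elapsed: float) -> float:
    """Project bed-day demand from a baseline using compound growth."""
    return baseline_bed_days * (1 + annual_growth_rate) ** years_elapsed


def bed_days_saved(projected: float, actual: float) -> float:
    """A 'saving' is any shortfall of actual use against projected demand."""
    return projected - actual


# Hypothetical example: 250 000 baseline bed days, 3% annual growth,
# measured 1.75 years into a campaign.
projected = project_demand(250_000, 0.03, 1.75)
actual = 240_000  # hypothetical observed bed days
print(f"Bed days saved: {bed_days_saved(projected, actual):,.0f}")
```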

The focus of the external evaluation was not to replicate this internal monitoring but to explore how and why the campaigns achieved their results. Having distilled four policy mechanisms as provisional explanations, we discuss each of these mechanisms in turn below, along with the contextual factors that shaped how participants reasoned in response to the resources (both financial and intellectual) offered by the campaigns.

Organisational preparedness to change

Providing advice on the features of a successful campaign, the IHI highlights the importance of ‘creating will’ and the need to align campaign objectives with the wider direction of travel in an organisation.1 The 20 000 Days Campaign benefited from a culture receptive to change. While some of those interviewed believed the professional branding and marketing was key in building motivation for the campaigns, a more common observation was that the campaigns tapped into a deeper CMDHB culture of being innovative, that is, a culture of being prepared to try new things, as explained by one interviewee:

I think that our population is very diverse, and our staff reflect and embrace this diversity and I think that it was an opportunity to do something different which I think is embedded in the psyche of Counties staff. I have been here for a while and what impresses me is the receptiveness to do something differently, and be as creative as we can to embrace the diversity of our whole population. (20 000 Days Campaign’s collaborative team leader)

Figure 3 presents questionnaire responses to statements about organisational support for quality improvement during the course of the 20 000 Days Campaign. There was broad agreement that the campaign goal of reducing hospital demand by returning 20 000 well and healthy days to the Counties Manukau community was widely communicated to staff, that CMDHB senior management showed interest in the campaign and that there was intent to integrate quality improvement across the organisation. There was slightly less agreement regarding direct involvement by executives in quality improvement activities and translation of the campaign’s goals into CMDHB policy; however, the majority of respondents rejected the statement that ‘little value is placed on quality improvement’ within CMDHB.

Figure 3

Organisational support for quality improvement in the 20 000 Days Campaign (adapted from Middleton et al12).

When asked about the overall effectiveness of the 20 000 Days Campaign, nearly 80% of respondents either agreed or strongly agreed with the statement that the campaign had contributed to building a culture of quality improvement within CMDHB; 84% agreed or strongly agreed that it covered the relevant topics; and 71% agreed or strongly agreed that the campaign was a success. There was far greater variability in responses to the statement that the campaign had only a weak link with reducing demand on hospital beds; 55% either disagreed or strongly disagreed; 29% were neutral; and 16% agreed or strongly agreed, possibly indicating some uncertainty about the types of change that would have the most impact.

Interviews for the subsequent Beyond 20 000 Days Campaign also highlighted the value of senior management being seen to prioritise the campaign’s improvement work. Most interviewees viewed senior leadership support as critical to building a receptive climate for change. Some uncertainty around the types of change that would have the most impact was also evident in the case study interviews. In these cases, teams were being directly challenged to quantify bed days saved and to identify what changes would make the most difference in relation to the campaign’s objectives.

In summary, the expectation that campaign participants would be ‘inspired’ by the goals of the campaigns was enabled by an organisational culture receptive to change, widely communicated evidence of the need to manage hospital demand and visible senior management support. A key constraining factor, likely to raise uncertainty in the minds of participants, was the lack of clarity about which types of change would have the biggest impact on managing demand.

Enlisting the early adopters

In the 20 000 Days Campaign, there was an early emphasis on ‘working with the willing’. As this campaign transitioned into the Beyond 20 000 Days Campaign, the importance of enlisting the ‘early adopters’ was mediated by a Dragons’ Den selection process, whereby proposed change ideas were assessed in terms of which were most likely to help manage hospital demand.

While the campaign sponsors sought to be more active choosers of collaborative team topics in the second campaign, an openness to working with those with an idea and appetite for change continued to be important:

I learnt the hard way it is important that people bring the important topic to us, our role is to help them implement it with our expertise in methodology. They need to be owners of the topic. (Campaign sponsor)

While the campaigns were running, there was minimal funding available for other initiatives, so staff who wanted to pursue new programmes or services had an incentive to adapt their ideas to fit the campaigns’ objectives. On occasion, this meant there was little interest in using the Breakthrough Series Collaborative model to redesign care processes: some pre-existing project teams wanted to implement and spread changes quickly rather than use the framework as intended, to test and learn from new ideas. A further factor constraining how the process of enlisting the early adopters worked in practice was that frontline staff were sometimes unable to access the institutional resources they needed, leading to delays in implementing their proposed change solutions.

In summary, the expectation that campaign participants would ‘take ownership’ of the Breakthrough Series Collaborative model meant being prepared to produce project charters and dashboards, undertake Plan, Do, Study, Act (PDSA) cycles, attend learning sessions and receive expert coaching. This activity was enabled in situations where the campaign sponsors successfully mediated between harnessing the energy of those team leaders with ideas for change and marshalling those ideas into a collective effort towards the campaign’s goals.

Strong collaborative teams

One of the fundamental expectations of the IHI approach to quality improvement is that collaboration makes it possible to learn more and improve faster than working in isolation. Figure 4 presents questionnaire findings relating to collaborative team dynamics during the 20 000 Days Campaign, with respondents indicating widespread involvement in team processes.

Figure 4

Participation in 20 000 Days collaborative teams (adapted from Middleton et al12).

Interviews with team members involved in the Beyond 20 000 Days Campaign further clarified the features of effective collaborative teams in the campaign: (1) leaders who motivated their teams to use their skills to improve patient care, (2) a willingness by teams to undertake small-scale tests of change rather than move prematurely to large-scale implementation, (3) a culture receptive to learning from what patients valued, (4) having sufficient flexibility in organisational processes and structures to allow teams to test their change solutions and (5) being able to release several team members to attend learning sessions during which the Breakthrough Series Collaborative model was taught.

Conversely, constraints on developing effective collaborative teams during the campaigns included a lack of knowledge about how to apply the improvement methodology if only a small number of team members participated in learning sessions, and difficulties managing the additional work requirements involved in pursuing improvement solutions while continuing to deliver patient care.

Campaign sponsors sought to create a culture in which collaborative teams could be formed and then stopped if they were not seeing the changes predicted to occur. In practice, this meant that in the first campaign, 13 collaborative teams started, 10 completed the campaign and 8 went on to implement their changes permanently; the other 5 teams returned to business as usual. In the second campaign, 16 teams started, 14 completed and 11 went on to implement their changes permanently, while the remaining 5 teams returned to business as usual.

Learning from measurement

In campaigns using the Breakthrough Series Collaborative model that have a single clinical focus, collaborative teams benefit from collective wisdom around measurement, which is often communicated to teams at the outset. By contrast, in the CMDHB campaigns, the diversity of improvement activity meant that each team was responsible for developing its own process and outcome indicators. These indicators were then combined to produce team dashboards showing progress at key points during the campaigns.

A secondary analysis of eight dashboards from the 20 000 Days Campaign found substantial variation in the teams’ measurement practices. Across these eight teams, 49 process and outcome indicators were originally planned, of which 51% were still being measured at the end of the campaign; the least consistent team tracked 25% of its planned indicators, while the most consistent tracked 87.5%. Indicators were abandoned because the small numbers involved made it difficult to see an effect, because the targets were not achievable by the team alone, or because of difficulties in accessing data.
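As an illustration of this kind of secondary dashboard analysis, the sketch below computes overall and per-team indicator tracking rates. The team names and indicator counts are hypothetical, not the actual dashboard data:

```python
# Sketch of the dashboard consistency analysis described above.
# Team names and indicator counts are hypothetical, not the actual data.
planned_vs_tracked = {
    "Team A": (8, 7),  # (indicators planned, still measured at campaign end)
    "Team B": (4, 1),
    "Team C": (6, 3),
}

total_planned = sum(planned for planned, _ in planned_vs_tracked.values())
total_tracked = sum(tracked for _, tracked in planned_vs_tracked.values())
print(f"Overall: {total_tracked / total_planned:.0%} of planned indicators measured")

for team, (planned, tracked) in planned_vs_tracked.items():
    print(f"{team}: {tracked / planned:.1%} of planned indicators tracked")
```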

A secondary review of team measures from the Beyond 20 000 Days Campaign found more robust reporting of process measures, most evident in the internal business cases developed to secure ongoing funding. A review of these business cases indicated that around 2 years were needed to develop and define robust measures, despite each campaign being designed to last for a shorter period. Interviewees also highlighted the work undertaken by teams to get the smaller implementation measures ‘polished’, but reiterated the ongoing challenge of quantitatively linking their improvements to the larger campaign goal of giving back healthy and well days to the local community.

A central idea of both campaigns was that teams learn by critically reflecting on what their measures tell them about the impact they are having. For participants, confidence in their measures became strongly linked to whether the results they collected helped them assemble the evidence to make the case for ongoing financial resources to sustain and spread their initiative once the time-limited campaign had finished. During the campaigns, teams often experimented with a wide range of outcome measures, for example, reduced waiting times and improved patient functioning scores. However, after the Beyond 20 000 Days Campaign finished, those teams that wanted to make the case for permanent funding paid increased attention to backing up their claims for improvement with long-range predictions around saving bed days.

Summary

Figure 5 presents an updated summary of the causal mechanisms argued to underpin the campaign logic model that incorporates results from the evaluation of both campaigns. The update presents the sequence of reasoning by campaign participants and the different contexts that influenced how this reasoning was triggered.

Figure 5

Updated summary of the causal mechanisms argued to underpin the campaign logic model (adapted from Middleton et al12).

From the beginning, the campaigns’ communications, with the tagline ‘Wouldn’t you rather be at home’, sought to link the campaigns to what patients valued. By the Beyond 20 000 Days Campaign, rather than placing a strong emphasis on the management of hospital bed days, a ‘well day’ concept was increasingly used to appeal to health professionals’ fundamental interest in doing what is right for patients.

Those sponsoring the campaigns explained that they saw the days-saved target as driving practical change, rather than as a research-based measure that needed to control for all the variables. Others have pointed out that this distinction between measurement for research and measurement for improvement is one of the hallmarks of improvement science, with measurement for improvement requiring simple measures to evaluate changes, rather than more elaborate and precise measures to produce new generalisable knowledge.20

Campaign sponsors drew on this distinction when they explained that the first target, reducing bed days by 20 000, was good enough for improvement. That said, the change in wording of the final goal for the Beyond 20 000 Days Campaign recognised the ongoing uncertainty around attribution and the need to find a goal that was more ‘realistic’, as explained further:

In [Beyond 20 000 Days] we were fortunate in that a lot of the argument [over numbers] has died away now. In two years we have worked through quite a lot …there is still a background of unhappiness around is it pure enough …and the answer is not a lot. It is messy and dirty but is it good enough for improvement. It probably is. That is why we kept the 20 000 days language in Beyond 20 000 days. (Campaign sponsor)

By the end of the Beyond 20 000 Days Campaign, there was an increased interest in using emotionally resonant individual patient stories as proof of success. For example, the Beyond 20 000 Days summary booklet described how:

… it’s the difference the campaign has made in the lives of the patients and families it has touched that truly shows the value of what the Beyond 20 000 Days Campaign has achieved. Patients with heart failure in the Healthy Hearts: Fit to Exercise programme successfully completed the 8.4 km Round the Bays fun run ….

Discussion

Many improvement programmes struggle to identify causal mechanisms because they exert their effects during periods when a trend in the direction of the change being promoted is already evident. Given this ‘rising tide’, it can be difficult to find a specific causal link between what a campaign has achieved and improved outcomes arising from the implementation of best practice across the board.21 Such campaigns have nonetheless been justified as having an important role in building explicit commitment for change and improvement.22

The evaluation of the two CMDHB campaigns reported in this paper employed a realist design to build an understanding of how the campaigns achieved their outcomes, incorporating findings from systematic reviews and other quality improvement studies,23–27 alongside an analysis of the experiences of those participating in the campaigns. Reviews of quality improvement initiatives have typically found variable evidence for their effectiveness4 28 29 (but see Wells and colleagues for a recent review evaluating initiatives that met accepted quality and reporting criteria).30 In response to the equivocal nature of such evidence, Dixon-Woods and colleagues highlight the importance of more sophisticated evaluation approaches for assessing quality improvement initiatives. In particular, they recommend a theory-based evaluative design to elicit the causal mechanisms at work and identify the ‘conditions of context’ that are necessary to bring about the desired outcomes.31 32

Our findings confirm the importance of several well-established attributes. These include the influence of the organisational context on the success or failure of quality improvement initiatives33 and the importance of access to expertise to help with the change, clinical champions and regular reviews.34 The CMDHB campaigns benefited from a culture responsive to change and innovation, a willingness to adapt as one campaign morphed into another and visible support from senior management. Factors constraining the organisation’s preparedness to change centred on the lack of consensus about which changes would be most likely to lead to improved performance.

Effective team processes and leaders who encourage team members to apply their knowledge and skill are routinely identified as contributing to the success of quality improvement initiatives.16 35 36 In line with this, we found that important enabling factors that contributed to the success of the CMDHB campaigns included a high level of participation in collaborative team learning using PDSA cycles and team leaders who motivated rather than just managed their collaborative teams. Constraints on the success of collaborative team-based learning during the campaigns included the need to continue to deliver care while undertaking quality improvement work, and a failure to engage fully with the methods of planning, implementing and evaluating changes if not enough team members attended the learning sessions.

Recognising the complexity of healthcare improvement, a common tension involves the push for uniformity alongside the need for local initiatives.37 This tension ran through both campaigns as leaders sought to harness the natural creativity of staff rather than exert ‘top-down control’,38 while at the same time seeking to select ideas that were most likely to achieve the objective of managing hospital demand and to be sustainable once the campaigns finished. Over the period of the campaigns, CMDHB’s forecast revenue reduced, which put increasing pressure on teams that wanted to continue with the additional funds they had received during the life of the campaign. When successful change packages were ready to be ‘handed back’ to the main business, teams needed to make a business case to their individual service managers. At the start of the second campaign, the process of deciding which collaborative teams would be chosen paid more attention to reviewing the potential cost implications of each proposed change. Campaign sponsors became more aware that the improvement initiatives needed to be resource neutral, or to use existing resources more effectively, if they were to continue.

The quest for quality improvement is increasingly shaped by the recognition that health is a complex adaptive system with multiple interacting agents who have ‘degrees of discretion to repel, ignore, modify or selectively adopt top down mandates’ (Braithwaite, p2).37 While the campaigns could be subdivided into clearly mandated parts following the systems thinking embedded in the Breakthrough Series Collaborative model, this research has focused on understanding how the campaigns worked by exploring the reasoning applied by participants in different contexts (see figure 5). The contribution made by this research responds to calls for a more complete understanding of the role and influence of context in implementing quality improvement strategies, so that they not only achieve their goals but are also sustainable and transferable.39 40

Conclusions

To avert a projected growth in demand of 20 000 hospital bed days, a New Zealand District Health Board ran two consecutive quality improvement campaigns using the IHI’s Breakthrough Series, but allowing collaborative teams to test multiple ideas rather than collectively implement the same clinical best practice. The evaluation reported in this paper adopted a realist design to uncover the mechanisms by which the campaigns achieved their outcomes, as well as the enabling and constraining factors that made each of the four mechanisms identified more or less likely to generate its effects within the specific context of the CMDHB campaigns. By focusing on the explication of theory, the identification of causal mechanisms, and the enabling and constraining contextual features that shape the performance of those mechanisms, the evaluation provides instructive lessons for designing evaluations of future quality improvement initiatives.

Acknowledgments

A very appreciative thank you to those involved in both campaigns who shared their insights and experiences. A particular acknowledgement and thanks to Janet McDonald and Claire O’Loughlin for writing support and David Mason for his technical support.

References

Footnotes

  • Contributors JG and DD designed the campaigns and contributed to the development of the logic of how the campaigns operated. LM designed the evaluation with support from JC and, along with LV, undertook interviews, analysed questionnaire responses and undertook case studies. LV was responsible for the secondary analysis of team dashboards. LM led on analysing and drafting the full report of evaluation findings with input from LV, JC, DD and JG. All authors read and approved the final manuscript.

  • Funding This work was supported by Counties Manukau District Health Board, which commissioned an independent evaluation of the campaigns.

  • Competing interests DD, JG and LV were employed by the organisation delivering the campaigns. DD and JG had direct involvement in delivering the campaigns.

  • Patient consent for publication Not required.

  • Ethics approval This evaluation was reviewed and approved by the Victoria University of Wellington Human Ethics Committee (#20306) and by Counties Manukau District Health Board’s ethics committee. All participants gave informed consent.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement No data are available.