
Achieving a ‘top-down’ change agenda by driving and supporting a collaborative ‘bottom-up’ process: case study of a large-scale enhanced recovery programme
Fatai Ogunlayi, Philip Britton
Kent Surrey Sussex Academic Health Science Network, Crawley, UK
Correspondence to Fatai Ogunlayi; fatai.ogunlayi@nhs.net

Abstract

There is increasing recognition that organisations need to look beyond their boundaries for new innovations. However, best practice that has been developed externally may need a different implementation process if a successful change is to be achieved. Using an enhanced recovery programme as an example, we report a case study that combines the best of a top-down approach with the principles of bottom-up collaborative working to successfully embed a large-scale quality improvement programme commissioned to improve the adoption of enhanced recovery in elective surgery. We describe a large-scale change programme that was established, coordinated and driven from within a central ‘top’ organisation but delivered and owned locally by individual organisations working collaboratively across the southeast region of England. We discuss why we believe our methodology of implementing this programme was successful, the important triggers for success and the lessons we learned from the programme.

  • quality improvement
  • quality improvement methodologies
  • collaborative
  • breakthrough groups
  • healthcare quality improvement
  • enhanced recovery
  • large-scale change

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Introduction

There is increasing recognition that organisations need to look beyond their boundaries for new change ideas.1 One of the most refreshing aspects of the National Health Service (NHS) is that it provides a forum for sharing best practice. However, best practice that has been developed externally may need a different implementation process if a successful change is to be achieved, particularly on a large scale.

One option is to implement such change using a prescriptive, top-down directive approach. This approach avoids duplication of effort by providing central coordination, clear accountabilities, timely reporting of performance and the resources required to deliver the necessary change on a large scale.2 However, inadequate engagement and lack of local ownership on the front line are often linked with top-down initiatives and regularly cited as barriers to successful implementation of projects.3 Even when a change initiated from the top down has been successful, the success can be short lived and the results unsustainable without an underlying change in behaviour.4

The alternative approach is to drive the change from the bottom up by being less directive, encouraging and empowering people to achieve change locally.5 This allows for locally tailored solutions, with local clinicians implementing the improvement initiatives.6 However, such change can be slow to happen, and the rate of adoption of even the most strongly evidenced changes can vary substantially, resulting in undesirable variation across the system.7

For large-scale change to be successful, there is increasing recognition that it must combine the benefits of a top-down approach, such as central coordination and pooled resources, with bottom-up engagement in which teams on the ground take control and ownership of the implementation process.8 9

Enhanced recovery is a good example of best practice that has been proven to improve quality, efficiency and patient experience but has not been adopted universally throughout the NHS. There are presumably many reasons for this, but often the main reason a change is not adopted is the absence of a catalyst for change. Overall, there is limited understanding in the literature as to why enhanced recovery has not been universally adopted.10

In this paper, rather than reporting the clinical results associated with an enhanced recovery programme, we report a case study and describe the methodology used to successfully embed enhanced recovery, on a large scale, into the acute hospital Trusts across Kent, Surrey and Sussex using a top-down agenda to drive a collaborative bottom-up process. We discuss why we believe our way of implementing enhanced recovery was successful, the important triggers for success, the lessons we learnt from the programme, the problems we experienced during the programme and the problems we foresee in the future.

Enhanced recovery

Enhanced recovery (ER), also known as fast-track surgery or enhanced recovery after surgery, is a multimodal, evidence-based approach to delivering care that is designed to optimise the whole surgical care pathway starting at primary care and continuing through the period before, during and after surgery (ie, including the preoperative, intraoperative and postoperative periods).11

ER has its roots in elective colorectal surgery from the work of Henrik Kehlet12 and has since spread to many other pathways including gynaecology,13 orthopaedics,14 urology15 and elective caesarean section.16 The benefits of ER include reduced complications, better patient experience and reduced length of stay (LOS).17 LOS is often used as a measure of success because it is easier to measure than some of the other parameters, and it is estimated that 140 000–200 000 bed-days per year have been saved nationally across musculoskeletal, urology, colorectal and gynaecology pathways as a result of the introduction of ER.11 It is, however, acknowledged that reduced LOS does not necessarily equate with better patient experience and higher quality of care. Therefore, caution must always be used when assuming that an improvement in LOS equates with an improvement in the level of care.

The implementation process for ER has been well documented in the literature,11 and the many benefits of ER have been well articulated in various publications.11 16 17 ER was supported by the Department of Health through the formation of a national programme, the Department of Health’s Enhanced Recovery Partnership Programme, designed to encourage widespread adoption and spread of ER. Despite this national programme and other strategies for implementation, it is evident that implementation has been variable and that some hospitals have been more successful at implementing ER than others.17 What is lacking is not the evidence base for ER but rather an improved understanding of how ER is implemented and of the experience of implementation in NHS settings.10

The programme

Our ER programme spanned the Kent, Surrey and Sussex region in the south of England. The programme was hosted by the Kent Surrey Sussex Academic Health Science Network (KSS AHSN), an organisation with experience in delivering large-scale change programmes in a region with the appetite to share and collaborate. The programme ran for over 4 years, from May 2011 to December 2015, and 10 acute Trusts (comprising 16 hospitals) and 9 independent sector providers participated in the programme. At the start of the programme, all providers reported some level of activity in terms of ER implementation, but there were huge variations in the extent of implementation and level of engagement.

Our aim

Our aim was to develop a programme that would achieve universal uptake of ER across the participating organisations and ultimately reach 100% compliance with ER. At the time the programme was established, there were four pathways with sufficient evidence to support spread and adoption: colorectal, gynaecology, orthopaedics and urology. Due to service reconfiguration, major urological surgery was performed by only a handful of Trusts in the region, and as such we did not formally include it in our programme. The operations included in the programme were hysterectomy (gynaecology), excision of rectum and colectomy (colorectal) and hip and knee replacement (orthopaedics).

Methodology

For the programme to be successful, we needed buy-in and ongoing commitment to the programme. For this reason, the first 7 months of the programme were used to set up the programme structure and engage the clinical community. Aware from the work of Donabedian18 that structure plays a key role in the model of quality, we used those initial months to consult intensively on, and set up, the structure for the programme. An overview of the programme methodology is illustrated in figure 1. The critical features of the programme included:

Figure 1

Programme methodology. This diagram shows the key features of the programme.

1. Clinical and non-clinical leadership

Central team

A central programme team was established, led by the clinical director and supported by the programme manager. The central team consisted of the clinical director, three consultant surgeons (orthopaedics, gynaecology and colorectal), three consultant anaesthetists, a specialist nurse and the programme manager. The clinical director was selected to lead this programme based on their credibility as a former medical director of an acute Trust and regional lead for planned care, and their ability to influence peers across the region. The rest of the clinical leads were selected following competitive recruitment, and one of the key essential criteria for selection was the ability to influence change. All clinical leads were contracted to work on the programme through secondments from organisations participating in the programme. The central programme team met every quarter to review progress, worked across all participating hospitals to provide specific knowledge and advice as required, participated in the peer reviews, attended regular meetings with the hospital ER teams and presented at the collaborative events. The clinical director and the programme manager were responsible for the day-to-day management of the programme. Clinical engagement and leadership were crucial to the programme and, through the leadership of the clinical director, we also established a network of clinical champions across the region and locally within individual organisations to ensure sustainable ER pathways were implemented. The central clinical team provided appropriate support to the clinical champions on the front line by leading the change through a mutual understanding of how clinical pathways had to change and by demonstrating a full understanding of the obstacles in the way of change. The members of the central team were clinicians who still worked within their own organisations in the region; this was critical for ensuring the engagement and local ownership needed to drive the bottom-up approach.

Hospital ER team

Each participating organisation was required to establish a project team consisting of an executive sponsor (to provide board-level involvement and commitment to the programme), a clinical lead for each pathway (a clinical champion who led the engagement of clinicians at the organisation for the relevant pathway) and a programme lead (for coordinating functions and project management). Organisations identified their own clinical leads, and any incentives associated with these roles were decided by the individual organisation; however, they were encouraged to appoint people who had an interest and had already made progress with ER. Organisations also decided their own processes and governance arrangements, although experience from other organisations about the best structure was regularly shared. Commitment from the clinical champions was critical and was maintained with the support of the clinical director and the central clinical team. The hospital ER teams were the ‘bottom-up’ part of the process and were the people leading the change locally. They supervised local data collection, provided important support to the clinical teams, coordinated the programme of ER implementation in their organisations and cascaded the comparative data to all those involved with the programme. The organisations were also encouraged to report the progress of the project to their quality board (or equivalent).

2. Measurement support

Lack of information and poor clinical data are often cited as barriers to improvement.3 Our ER programme was clinically led, as described above, but also data driven. Importantly, these were data collected locally by the individual organisations but analysed and published centrally to allow for collaborative comparison. Ownership of the data by each organisation was critical to its ability to use the data to drive the required change. Figure 2 outlines our measurement approach.

Figure 2

Measurement framework for the programme. CCGs, Clinical Commissioning Groups; CEO, Chief Executive Officer; SCNs, Strategic Clinical Networks.

Along the lines of Don Berwick’s description of ‘pathway 2’ in his work on connecting measurement to improvement,19 our measurement approach in the programme was in three parts:

  1. We established a clinically defined set of data that would allow clinical teams to understand and improve the processes and outcomes for their organisations. A clinical review of the literature and guidelines was used to define a list of measures evidenced as having the most impact on patient outcome. These measures were then used to create a care bundle20 for each pathway so that clinical teams could focus their effort on a defined number of key measures.

  2. We collected a consistent set of data that would facilitate meaningful benchmarking for sharing of best practice. A data dictionary was created to ensure there was consistency in how data were being collected for each audit cycle. The data dictionary included elements such as definition of the pathway population and inclusion and exclusion criteria for each measure. A versatile collection tool was developed, which allowed data to be collected online and also via an Excel spreadsheet. These provided data validation and ensured standardisation of data.

  3. We cultivated an open and transparent culture where data were shared for the benefit of driving improvement. We used one of the early learning collaborative events to discuss the importance of having a culture of transparency and openness with regard to sharing results to facilitate sharing of best practice. Detailed benchmark reports were shared regularly across clinical teams. The detailed report was a comprehensive set of outputs produced for each pathway and included data about process (care bundle compliance), outcome (LOS) and a balancing measure (readmission), benchmarked by Trust and by consultant (a minimal sketch of how such a compliance and benchmarking calculation might look follows after this list). These detailed benchmarking reports formed the basis of discussion during our twice-yearly learning collaborative events.

    Every 6 months, summary reports were produced for the chief executive of each acute provider; these analysed the organisation’s performance on quality and outcomes and highlighted areas needing investigation with input from clinical teams. Commissioners and NHS England also received copies of the report.
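To make the measurement approach concrete, the sketch below illustrates how an all-or-none care bundle compliance metric and a Trust-level benchmark might be computed. This is a minimal illustration only: the bundle elements, column names and figures are hypothetical assumptions, not the programme’s actual data dictionary or results.

```python
import pandas as pd

# Hypothetical audit records; the real field definitions came from the
# programme's data dictionary (pathway population, inclusion/exclusion
# criteria for each measure).
records = pd.DataFrame({
    "trust":      ["A", "A", "B", "B", "B"],
    "consultant": ["A1", "A2", "B1", "B1", "B2"],
    "los_days":   [3, 5, 4, 2, 6],
    # Illustrative bundle elements (1 = delivered, 0 = not delivered).
    "preop_counselling":  [1, 1, 1, 0, 1],
    "carb_loading":       [1, 0, 1, 1, 1],
    "early_mobilisation": [1, 1, 1, 1, 0],
})

bundle = ["preop_counselling", "carb_loading", "early_mobilisation"]

# All-or-none compliance: a patient counts as compliant only if every
# element of the care bundle was delivered.
records["bundle_compliant"] = records[bundle].all(axis=1)

# Benchmark by Trust (the same groupby over "consultant" would give the
# consultant-level view): process (compliance) alongside outcome (LOS).
by_trust = records.groupby("trust").agg(
    compliance=("bundle_compliant", "mean"),
    mean_los=("los_days", "mean"),
    cases=("bundle_compliant", "size"),
)
print(by_trust)
```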

3. Collaborative learning events

Implementing change is challenging, and even more so on a large scale. However, some of these challenges can be overcome by working collaboratively with others who understand and have experienced the complexities of change in healthcare.21 A coordinated top-down programme provided this structured collaboration. For this programme, we established formal collaborative learning events, akin to the Institute for Healthcare Improvement breakthrough series,22 to share learning, discuss challenges and share data. Through these collaborative events, benchmarking became a more effective tool for incentivising improvement and enabled best practice to spread faster across the region. We developed a vibrant network of clinical teams, managers and data analysts who got together twice a year at these collaborative learning events to review and compare performance and to assess what changes in data and clinical measures might be required. These collaborative learning events were a key engagement tool, which is critical in any bottom-up approach, and they helped create a collective identity for the programme through which people were mobilised to take responsibility for their own local change initiatives.

During the course of the programme, we held nine such events, attended by over 1000 delegates in total, with many people attending multiple events. The agendas for these events were codesigned and codelivered by the participating organisations, and feedback from evaluation forms circulated at each event was used to inform future events. Teams from participating organisations were able to present the progress of their local work, highlighting key successes and challenges, thus enabling best practice to be shared. The stability of the designated clinical and programme leads in each organisation was a key factor in creating a sustainable network and ensured good attendance at collaborative events. The aim of these collaborative events was to facilitate team learning and in turn enhance motivation.23 One good example of this happening in practice was an anaesthetic team learning through one of these collaborative events that their hip and knee patients had a longer LOS than the regional average. After discussion, they decided that they needed to change their anaesthetic technique. They moved from a predominantly general anaesthetic technique, with postoperative opiates, to a mainly regional anaesthetic technique, avoiding opiates where possible. This meant fewer patients with sedation and nausea postoperatively and more patients able to mobilise early postoperatively, resulting in earlier discharge from hospital.

4. Motivation/incentives

Peer review visits

A vital component of the programme was the peer-to-peer support process that was established through peer reviews. The purpose of the review was to allow the Trust involved to showcase its achievements while also discussing areas of difficulty and its future plans. This approach, similar to the education model of ‘collaborative’ peer review proposed by Gosling,24 focused on parity, reciprocity and dialogue between participating organisations and the central team. It enabled areas of best practice to be picked up and shared with others and also allowed the review team to offer assistance when required and to make recommendations.

The peer review process had three parts to it:

  1. Previsit information: a data pack of key information relating to ER compliance and outcomes was prepared and shared with the review team prior to the visit. The purpose of the data was not to investigate underperformance but rather to stimulate and aid discussion.

  2. The visit: this was a half-day visit. Although the visit was informal in nature, the set-up process was formal, with an official letter from the ER programme director to the executive sponsor of the participating organisation. This ensured that the right people were sitting in the room and thus facilitated meaningful discussion with the people who could make things happen. It was essential that the executive sponsor and the lead clinicians attended this visit and were supported by the wider multidisciplinary team for each clinical area.

    The aim was to create a safe environment for frank and open discussion. The visiting review team included at least three regional clinical leads, consisting of a surgeon, an anaesthetist and a specialist nurse, supported by the programme manager. It is worth noting again that the regional clinical leads were peers from within participating organisations, which helped remove any perception of being ‘done to’ that is often associated with top-down programmes.

    Although the peer review visit was optional, all organisations requested to be visited and the feedback was that they valued the peer-to-peer support provided by the visit.

  3. Follow-up report: a summary report of the visit, including key findings, was drafted after the visit. The report highlighted any recommendations discussed during the visit and also highlighted best practice that could be shared with other organisations. The content of the reports was agreed with the organisations before being shared across the network.

Commissioning for Quality and Innovation (CQUIN) payments

There is a body of evidence suggesting that many improvement projects fail to progress because of a failure to align organisational incentives with the project objectives.3 19 25 As part of our programme, we worked with the commissioners across the region to incentivise implementation of ER by introducing a CQUIN payment. We believe the CQUIN encouraged engagement, particularly at management level within organisations, mainly because of the risk of loss of income associated with non-compliance. Top-down initiatives are usually associated with top-down targets that often lead to gaming and misreported data.8 However, in this programme, participating hospitals were involved in developing their own targets and often set more stretching targets in order to drive the desired improvement.

The CQUIN had two components: (1) achievement of a minimum level of data completeness (80%) for eligible patients and (2) achievement of a preagreed target for compliance with the ER components. The CQUIN structure was designed to use a sliding scale, with individualised stretch targets that reflected baseline performance, so that organisations with a lower baseline had a bigger improvement target. This was used to encourage a shift in the underlying distribution of performance across the region.
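A worked sketch of how such a sliding scale with individualised stretch targets might operate is shown below. The stretch rule, thresholds and figures are illustrative assumptions for exposition, not the actual CQUIN terms agreed with commissioners.

```python
def cquin_payment_fraction(baseline: float, achieved: float,
                           ceiling: float = 0.80) -> float:
    """Illustrative sliding scale. The stretch target is assumed to
    close half of the gap between an organisation's baseline and a
    regional ceiling, so lower-baseline organisations carry a bigger
    improvement target. Returns the fraction of the payment earned."""
    target = baseline + 0.5 * (ceiling - baseline)  # assumed stretch rule
    if achieved >= target:
        return 1.0
    if achieved <= baseline:
        return 0.0
    # Linear sliding scale between baseline and individualised target.
    return (achieved - baseline) / (target - baseline)

# A lower-baseline Trust must improve more for the same payment:
print(cquin_payment_fraction(baseline=0.40, achieved=0.55))  # ~0.75
print(cquin_payment_fraction(baseline=0.70, achieved=0.74))  # ~0.80
```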

Celebrate achievements

The programme also recognised the success of provider organisations in delivering service improvement by making a number of ‘achieving excellence in quality’ awards. The awards recognised different levels of achievement for each organisation based on audited evidence of improvement in the programme and, although the awards were supported centrally, we ensured that they incentivised and rewarded local success and achievement. A communication tool kit was developed to help providers maximise PR opportunities from the awards and publicise their service improvement success. Celebrating success in this way became even more important after CQUINs were no longer in place to support the programme.

Website

An ER website hosted by the AHSN was set up at an early stage of the programme. This provided a portal for data presentation and also a forum for sharing tools and techniques, particularly all the documentation associated with the individual care pathways. These were regularly shared between Trusts and lessened the work needed to set up some of the ER pathways in individual Trusts.

What was achieved

Adoption of ER status

Our ER programme saw widespread adoption of all elements of ER across the region. Compliance with ER measures increased while variation across the region decreased, and we saw a reduction in LOS across the region. The key results from the programme are illustrated in figure 3, which shows:

Figure 3

Programme results. (A) Box plot showing improvement and reduction in variation in care bundle compliance for each pathway. (B), (C) and (D) Time series analyses showing improving care bundle compliance alongside improving length of stay for each pathway.

  • Panel A: box plot of care bundle compliance for each pathway. There was a shift in performance across all pathways: comparing ER compliance in the first year of the programme with the last year shows an increase in the mean and median scores and reduced variation in all pathways.

  • Panels B–D: time series analyses of care bundle compliance compared with LOS for each pathway. These show that ER compliance improved and LOS decreased over the same period.

Other measures of improvement

It is beyond the scope of this paper to describe and report all the improvements seen as a result of ER; indeed, this paper is written as a case report of implementing a change process rather than a report of the benefits of ER. However, as discussed earlier, improvement in LOS is only one of many measures of improvement, and one of the aims of this programme was to highlight all areas of improvement, many of which were reported by the clinical teams from their ER areas. Furthermore, the programme also encouraged individual organisations to establish ER in pathways outside of the three core pathways. Other areas where ER started to be introduced included caesarean section, breast, bariatric, pancreatic and upper GI surgery. Analysis of unplanned readmissions as a balancing measure provided reassurance that the improvement in LOS was not at the expense of an increase in adverse outcomes.

In addition to the successful roll-out and wide-scale implementation of ER (and the subsequent improvements seen in LOS), one of the key additional benefits of our programme has been its critical role in developing a culture of sharing and collaboration for improvement among the participating hospitals. The programme has played a key role in bringing organisations together to share good practice and form a network. Establishing an ongoing network or community of practice with a shared interest is one of the fundamental factors for successful sustainability,26 and this is evident in the development of the emergency laparotomy and fracture neck of femur programmes now ongoing across the region. The success of the ER programme and the establishment of an engaged clinical community created a launch pad for these regional improvement programmes. An area of further research would be to explore and determine the direct role the ER programme has played in the development of these other large-scale programmes.

Efficiency savings

One of the questions often asked about improvement programmes concerns cost–benefit analysis. This paper does not attempt to estimate the financial savings associated with this programme but, as a case study, examines what attempts were made to estimate cost savings as the programme progressed.

The simplest method for calculating the financial savings would have been in terms of bed days saved from the reduced LOS. A crude estimate of efficiency savings can be made by taking the reduction achieved in LOS, converting it to bed days and multiplying by the average cost of a bed day. This assumes that a reduction in LOS translates into efficiency savings through the release of bed capacity. Theoretically, this should be the case, but whether savings are made depends on how that bed capacity is released. Sometimes the beds may be closed. Sometimes those beds may be used for additional patients, and sometimes the bed space may be used more creatively, as in the example of a North West London Trust, where it was used to create dining space to encourage mobilisation and improve patient experience.11 However, as we know, closing one bed on a ward does not always, and indeed does not usually, translate into a cost saving equivalent to the proportional cost of that bed. Other additional costs and benefits associated with implementation of the programme would also need to be considered, for example, administration and set-up costs, CQUIN payments received, additional costs due to ER (eg, nurse follow-up after discharge and carbohydrate loading drinks) and other costs (such as follow-up attendances at outpatients).
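As a worked illustration of the crude calculation described above (all figures below are invented for exposition and are not programme results):

```python
# Crude bed-day savings estimate: LOS reduction x annual cases x cost
# per bed day. Every input here is an illustrative assumption.
los_reduction_days = 1.5   # assumed mean reduction in LOS per patient
cases_per_year = 2000      # assumed elective cases on ER pathways
cost_per_bed_day = 300     # assumed average cost of a bed day (GBP)

bed_days_saved = los_reduction_days * cases_per_year    # 3,000 bed days
gross_saving = bed_days_saved * cost_per_bed_day        # GBP 900,000

# As the text notes, this gross figure overstates realisable savings:
# beds may not actually close, and ER has its own costs (set-up,
# CQUIN payments, nurse follow-up, carbohydrate loading drinks,
# outpatient follow-up) that would need to be netted off.
print(f"Bed days saved: {bed_days_saved:,.0f}; gross saving: £{gross_saving:,.0f}")
```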

The biggest factor to consider is how to cost improved patient experience, reduced complications and early mobilisation.

At the beginning of the programme, we did not consider how we would carry out a cost–benefit analysis or how we would answer questions about efficiency savings. It is accepted that any such analysis will always be difficult but, in hindsight, there would have been benefit in agreeing at the start of the programme what analysis we could and could not use. This may have informed what data we collected to answer this question more reliably and, most importantly, would have informed the discussions beforehand about what answers we could expect in terms of results, their analysis and the overall benefits of the programme. The host organisation for this programme, KSS AHSN, is in the process of developing a fully validated cost–benefit tool that could be used to calculate the return on investment from a programme such as this. The results from the ER programme are being used to help develop this tool.

Discussion

Our programme is a good example of central drive initiating local collaboration and has been very successful in increasing the uptake of ER across the Kent, Surrey and Sussex region of England. We believe we achieved this success by embracing the advantages of a top-down methodology while investing time and effort in the key elements of the bottom-up approach, as described in this case study. Measurable outcomes, such as LOS, improved considerably, and many outcomes that are less easy to measure improved as well. While the programme was centrally initiated and could be viewed as a ‘top-down’, centrally driven improvement initiative, it was successfully implemented by getting buy-in from the people on the ground who were actually going to make the change happen and deliver the new ER service. The programme was clinically led, with a driven core central team, and delivered by highly engaged and motivated hospital teams. The clinical leadership provided essential drive for continuing engagement and gave direction to the programme.

Our measurement framework, which was built on robust and validated comparative data across all providers, together with the agreed principles of transparency, provided meaningful benchmarking that facilitated learning to improve care and reduce variation. The collaborative learning events that underpinned the entire programme provided a space for the vibrant network of clinicians and managers to come together and bounce around ideas. The peer-to-peer support, through the peer review process, ensured organisations felt supported and were able to formally share their innovative thinking. Finally, by aligning the programme to CQUIN and also having the ‘achieving excellence in quality’ awards, the programme provided recognition and encouraged teams to celebrate success stories.

However, we recognise that there is always room for improvement. The programme focused on engaging and energising the clinical community to collaborate and share ideas for improvement, and this was critical to its success. However, it did not offer structured training in quality improvement tools and techniques, which might have provided additional value to the teams. We know that focusing on improvement science alone would not have been enough; we would also have needed to provide training in three interrelated types of skills (technical, soft and learning skills) to promote long-term sustainability.27

We did our best to share the comparative data that were collected by sending these data to the individual Trusts on a regular basis. However, we are aware that the cascade system within Trusts for spreading such information is not always as robust as it could be and that not all the relevant data reached every member of the teams on a regular and reliable basis.

The central clinical team was set up at an early stage, and its role was codefined by participating organisations as the programme progressed. It was initially envisaged that one of the main functions of the central group would be to send working parties out to individual Trusts to help them set up individual ER programmes and to resolve snags that appeared during implementation. In practice, the central team was not invited to send in working parties, and its main role proved to be in the collaborative events and the peer review visits, along with providing ad hoc advice to the clinicians within each hospital as requested. The central clinical team’s empathy with the local clinicians regarding the barriers to change helped to encourage the local clinical leads to persist with the programme at the outset and, as they saw results, they engaged further with the programme. The central clinical team’s role in agreeing the care bundles and reviewing the comparative data was also invaluable, as was its function in sharing learning and good practice across the region.

CQUINs were removed during 2014/2015 as the commissioners felt that ER should by then be business as usual. The impact of this was a loss of momentum across some of the providers.

While CQUINs helped to focus minds and drive the project forward initially, we must be aware that such incentives sometimes provide only short-term gain, and the impact of their inevitable removal should be considered at the start of a programme. In other words, a CQUIN may be counterproductive when thinking about the sustainability of a programme, and we could perhaps have been more proactive in mitigating the effect of CQUIN cessation by further developing our internal incentives. We are also aware that the CQUIN money did not always get allocated to the individual services that had delivered the improvement.

As would be expected with a bottom-up approach for a programme of this size and scale, not all organisations implemented ER at the same pace, and the speed of uptake varied depending on the starting point and the capabilities of the ER teams. Highlighted below are some key points in relation to this that may be useful when setting up future programmes:

  • A longer set-up time was required in some organisations to establish the project team and get the team up and running.

  • Some organisations had historically invested more into their ER programmes. This was particularly pertinent with respect to specialist nurses. Organisations that had appointed specialist nurses at an early stage to implement ER moved at a much faster pace than those who had not.

  • Organisational processes and procedures hindered some hospitals in making the required progress. Approval of funding for carbohydrate loading drinks provided a good example of this; sometimes it was simply a difference in the process for approving funding that created the delay.

  • Organisations had different competing priorities in terms of improvement projects.

Long-term sustainability

The challenge we were left with at the end of the programme was how to end a centrally driven programme once it has achieved its aim. Should all elements of the programme be stopped, and an assumption made that it is now business as usual for hospitals? Or should we switch to a sustainable mode in which the effort and resource required to maintain the programme are less intense? And what about the infrastructure that supported the programme, for example, the data collection tool; do we maintain it or scrap it?

Towards the end of the programme, we discussed these points with the clinical and management teams from all participating hospitals, again engaging front-line opinion to drive bottom-up decision making. A number of options were put forward and discussed. One was stopping the programme but carrying out a snapshot audit to see if the compliance level had been maintained. Another option discussed was to cease all data collection but maintain the learning collaborative events, although it was not clear how these events would be facilitated without a central team. Opinion was divided.

We agreed on developing a data collection tool that would allow each organisation to continue to collect data individually, with a built-in functionality to provide immediate analysis. The central team has been disbanded, and the learning collaborative events have ceased. The development of new pathways, such as fracture neck of femur, has provided the opportunity to continue working with a network of engaged but different clinicians.

The learning from this programme has already contributed to ongoing improvement in the healthcare system across the region, and the legacy will remain for the foreseeable future. However, when setting up any improvement programme in the future, it is important to consider, both at the beginning and throughout the programme, how it will end to ensure that the impact and best practice from any such improvement programme are sustained.

Acknowledgments

The ER Clinical Reference Group, participating hospitals across Kent, Surrey and Sussex and ER programme team members, in particular, Kay Mackay for her inspirational leadership at the start of the programme.


Footnotes

  • Contributors Both authors have made substantial contributions to the following: conception and design, acquisition of data or analysis and interpretation of data; drafting the article or revising it critically for important intellectual content; and final approval of the version published. Both authors have agreed to be accountable for the article and to ensure that all questions regarding the accuracy or integrity of the article are investigated and resolved.

  • Funding The ER Programme was funded by Kent Surrey Sussex Academic Health Science Network (KSS AHSN).

  • Disclaimer The views and opinions expressed are those of the authors and not necessarily those of KSS AHSN.

  • Competing interests FO is a peer reviewer for BMJ Open Quality.

  • Ethics approval Ethical approval for this project was not required due to it being a quality improvement project.

  • Provenance and peer review Not commissioned; externally peer reviewed.