Our initial aim had been to achieve ≥90% compliance with documentation of indication and prescribing guidelines by June 2015. We reached 100% compliance by March 2015, and this was maintained for a further 18 months. We noted a decrease in compliance in late 2016, which coincided with a temporary vacancy in our antimicrobial pharmacist post, but this subsequently improved and was maintained at a median of 90% to the end of 2017 (figure 2).
We subsequently reduced the frequency of audits and feedback from weekly to 3-weekly (from 13 April 2015) and then to monthly (from 6 July 2015 onwards). Compliance was maintained at 100% over the following 18 months, including through three further changeovers of junior doctors (July 2015, January 2016 and July 2016).
We found that there was an overall reduction in antibiotic consumption following the interventions (72.5 defined daily doses (DDDs) per 100 bed-days used in 2014, compared with 62.1 in 2015). This was associated with a sustained decrease in antibiotic expenditure for the hospital (compared with 2014, this equated to a €105 000 decrease in expenditure for 2015). Although we did not target the overall level of antibiotic consumption, and other antimicrobial stewardship interventions were introduced during this time, we believe that our improvement project contributed to this reduction by increasing awareness of good prescribing practice among prescribers. We also documented that this sustained decrease in overall antimicrobial consumption occurred while the national median consumption for acute hospitals was increasing (online supplementary material).
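The consumption metric above normalises total antibiotic use to hospital activity. A minimal sketch of the calculation follows; the function name and the bed-day figures are illustrative assumptions, not the project's actual activity data.

```python
# Illustration of the DDD per 100 bed-days metric (WHO ATC/DDD methodology).
# The bed-day and DDD totals below are made up for the example.

def ddd_per_100_bed_days(total_ddd: float, bed_days: float) -> float:
    """Antibiotic consumption normalised to hospital activity."""
    return total_ddd / bed_days * 100

# e.g. a hospital recording 18 000 bed-days and dispensing 11 178 DDDs
# in a year would report:
rate = ddd_per_100_bed_days(11_178, 18_000)
print(round(rate, 1))  # 62.1
```

Because the denominator is bed-days rather than admissions, the metric remains comparable across years even when hospital activity changes.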
We subsequently participated in two further national point prevalence surveys, which also demonstrated an improvement in the proportion of antibiotic prescribing courses that had a documented indication and were in line with local prescribing guidelines. As we had found in our weekly audits, these surveys also demonstrated an improvement in prescribing quality indicators (such as documentation of planned duration or stop date) that were not included in data feedback or targeted interventions (figure 3).
Lessons and limitations
We believe that our focus on using FLO was a large factor in reaching, and surpassing, our initial project aim. Allowing prescribers time to understand and accept ownership of the weekly audit data meant that we were more likely to get positive engagement in the process when it came to seeking ideas for change. Goldmann describes seven rules for engaging clinicians in quality improvement.21 Applying these rules probably further contributed to our success, as follows:
Emphasising improvement, rather than assurance: we avoided any discussions around quality assurance (eg, compliance with national standards), but rather focused on local data and interventions.
Avoiding ‘mystical’ language: we avoided using any technical language associated with quality improvement methods (eg, model for improvement, PDSA, Lean, etc), which might have alienated prescribers unfamiliar with these methods or terminology.
Relating improvement work to what matters to clinicians: we focused on using the sense of professional pride, along with the competitive nature of clinicians, as levers in our improvement process. Incorporating data feedback and engagement sessions into a clinical case discussion meeting also helped to set the improvement project in the context of direct patient care.
Accommodating clinicians' workload and schedule: we incorporated our data feedback and engagement sessions into a pre-existing weekly meeting attended by paediatricians and junior doctors.
Being upfront about the fiscal agenda: we did not emphasise the financial benefits of the improvement project during the initial engagement sessions, though we did include feedback at subsequent sessions on how cost savings from the project helped to support appointment of an additional clinical pharmacist.
Providing relevant data: we tested a variety of metrics for feedback and chose a combined measure that resonated with and was well understood by prescribers. The practice of hand drawing the run chart each week also appeared to help engage prescribers, by making the feedback process relatively informal and introducing a ‘performance’ element.
Emphasising the academic case for quality improvement: while we did not emphasise the academic potential of the project as part of the initial engagement sessions, we did feed back to prescribers when the project was presented at national or international meetings, and when it was selected as a finalist for the 2016 Health Service Executive (HSE) Excellence Awards. Results from the project were also presented at hospital research and quality improvement seminars.
A core concept in FLO is that of ‘Wise Crowds’, whereby a sufficiently large and diverse group is likely to have a more detailed and nuanced knowledge of how a healthcare process or system works.17 Thus, with appropriate facilitation, such groups are often better placed to identify problems and potential solutions than a smaller group of subject matter experts. We found this to be the case in our project, as prescribers were able to identify key drivers for improvement that we had not considered in our initial project planning (eg, leveraging the competitive nature of clinicians), along with potential interventions to support improvement. By not presenting the project as a ‘fait accompli’, this approach also helped to foster ownership of the project among prescribers. This approach has been shown to be more effective in delivering sustained improvement, compared with the approach of seeking ‘buy-in’ for a predetermined set of interventions.19 Similarly, a recent study found that allowing physicians to choose local antimicrobial stewardship interventions was associated with a sustained improvement in the quality of antimicrobial prescribing.22
A 2017 Cochrane review on interventions to improve antimicrobial prescribing practices for hospital inpatients found that ‘interventions that included feedback were more effective than those that did not. However, there were too few studies with goal setting or action planning to assess their effect in addition to feedback.’23 A 2012 Cochrane review on the impact of audit and feedback found ‘empirical evidence from non-health literature to suggest that goal setting can increase the effectiveness of feedback, especially if specific and measurable goals are used’.24 Setting a clear goal, in the form of a SMART aim, and providing regular feedback to prescribers were key components of our project. We also incorporated action planning, as we continued to engage with front line staff to identify additional interventions that could be made to consolidate and sustain improvements once the initial goal had been met.
Toma et al have recently produced a framework for balanced accounting of the impact of antimicrobial stewardship interventions.25 This framework includes the consideration of unexpected effects (both positive and negative) after the intervention has been implemented. Using FLO allowed us to achieve a balanced account by facilitating the modification of the initial planned interventions over time in an iterative fashion. This led to the identification of several pleasant surprises, such as the value of leveraging the competitive nature of clinicians, the impact of hand-drawn run charts and the fact that the planned educational interventions were not required. This is in keeping with the derivation of FLO from complexity science, which includes the principle of ‘emergence’. Emergent properties arise from the interaction of individual elements in a complex adaptive system, are greater than the sum of the parts and are difficult to predict by studying the individual elements.16 Thus, unexpected effects are largely unpredictable at the start of an intervention, but are still to be expected.
We had expected to see a reduction in compliance with our audit measures following the 6 monthly changeover of junior doctors, and had planned for additional engagement sessions around these times to reinforce the improvement process. However, we were pleasantly surprised to see that compliance remained at 100% in our ongoing audits, despite the influx of a large number of new staff. Discussions with junior doctors who were involved in the initial improvement process, and with junior doctors who started in the hospital in July 2015, appeared to confirm that we had witnessed a change in the culture around empiric antibiotic prescribing. Junior doctors who were involved in the initial phase of the improvement expressed a sense of pride in the results and ownership of the process, and confirmed that the prescribing supports we had put in place fitted well into their practice and were easy to use. Junior doctors starting in the hospital in July 2015 described being informed by their peers that following the empiric prescribing guidelines and using the prescribing tools, such as the laminated guideline summary card, was considered the norm for the hospital. Rogers described the concept of a ‘tipping point’, whereby once 10%–20% of a given population has adopted a new product or practice, its spread and adoption become self-sustaining.26 We believe that the time put into fostering front line ownership of empiric antibiotic prescribing paid off by ensuring we reached this tipping point relatively quickly, once the interventions were introduced.
A key factor in the success of this improvement process was being able to incorporate our data feedback and engagement sessions into a pre-existing weekly medical meeting. There was already a strong culture of open discussion and learning from clinical scenarios at this meeting, which facilitated engaging prescribers in addressing empiric antibiotic prescribing. The generalisability of our improvement process, however, may be limited if such an open forum does not exist for other groups of prescribers.
One of the challenges we faced during the initial phase of the project was in relation to obtaining data for improvement. We discovered that the ED electronic record system was designed to focus on presenting complaints, triage and flow within the ED, and that extracting data on admission diagnoses and empiric therapy was cumbersome. We chose instead to incorporate data collection into daily ward rounds. This had the advantage of providing qualitative feedback on prescribing practice through discussions with nursing and medical staff, prompted by requesting data on which children on the ward were receiving antibiotics. A disadvantage to this approach, however, was that we were not able to collect data on children admitted with an infectious diagnosis who were not commenced on antibiotic therapy (data that may have been available via the ED electronic patient record system).
We encountered confounding factors in our data, particularly when there was an over-representation of children admitted under an individual paediatric team or within a particular age group (eg, neonates). This was addressed by spreading our data collection out across different days of the week, and by ensuring all wards were sampled. Including a higher number of children in each weekly audit would have also helped to address this issue. However, because we excluded surgical admissions, admissions under subspecialties, and children who were on antibiotic therapy directed by the microbiology/infectious diseases service, we found that it was difficult to consistently include more than ten children in each weekly audit. While this meant we had a small sample size for our audit data, it did allow us to feed data back on a weekly basis. Having weekly data, despite its limitations, ensured the results were close to real time for prescribers, and probably helped to foster ownership. We were also able to demonstrate an overall reduction in antibiotic consumption and expenditure that was temporally related to our interventions, and subsequent participation in national antibiotic point prevalence surveys corroborated the improvements in documentation of indication and compliance with empiric guidelines that we had seen in our weekly audit data.
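The weekly feedback described above amounts to plotting each audit's compliance proportion on a run chart with a median centre line. A minimal sketch of that calculation, using entirely illustrative audit counts (not the project's data), is:

```python
# Sketch of weekly compliance and the run chart centre line.
# The (compliant, audited) pairs below are illustrative; real weekly
# audits in the project typically included ten or fewer children.
from statistics import median

weekly_audits = [(9, 10), (8, 9), (10, 10), (7, 8), (9, 10), (10, 10)]

compliance = [100 * compliant / audited for compliant, audited in weekly_audits]
centre_line = median(compliance)

for week, pct in enumerate(compliance, start=1):
    print(f"Week {week}: {pct:.0f}% compliant")
print(f"Median (run chart centre line): {centre_line:.0f}%")
```

Small weekly denominators make individual points noisy, which is why the median, rather than any single week's value, is the appropriate summary for the centre line.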
We believe that there were a number of ‘soft’ skills that contributed to the success of this project, including communication techniques, accessibility and visibility of the antimicrobial stewardship team. This probably explains a dip in compliance with our quality indicators seen in late 2016, following a period when the hospital was without an antimicrobial pharmacist. While we demonstrated sustainability of our improvement, maintaining this over a longer term does require having core team members in place and actively engaged in antibiotic stewardship activities.
Finally, we found that the high profile of the improvement project and sense of ownership among prescribers helped to foster a wider interest in quality improvement among clinical staff. We found that demonstrating a successful quality improvement project acted as a ‘proof of concept’ that quality improvement methodologies work. While we cannot claim that our project was the sole driver for this, we have seen a steady increase in the number of successful quality improvement projects being undertaken by clinical staff from across the hospital.