Analysis

Translating evidence into practice: a model for large scale knowledge translation

BMJ 2008; 337 doi: https://doi.org/10.1136/bmj.a1714 (Published 06 October 2008) Cite this as: BMJ 2008;337:a1714
Peter J Pronovost, professor1, Sean M Berenholtz, assistant professor1, Dale M Needham, assistant professor2

1Department of Anesthesiology and Critical Care Medicine, Johns Hopkins Quality and Safety Research Group, 1909 Thames Street, Baltimore, MD 21231, USA
2Division of Pulmonary and Critical Care Medicine, Baltimore, MD 21287, USA

Correspondence to: P J Pronovost ppronovo@jhmi.edu

Accepted 17 July 2008

Changes that can improve patients’ health are often difficult to get into practice, even when backed by good evidence. Peter Pronovost, Sean Berenholtz, and Dale Needham describe a collaborative model that has been shown to work

Evidence based therapies that prevent morbidity or death are often not translated into clinical practice. One reason is that research often neglects how to deliver therapies to patients.1 Consequently, errors of omission are prevalent and cause substantial preventable harm.2

Attempts to increase the reliable use of evidence based therapies have generally focused on changing doctors’ behaviour.3 However, doctors work in a healthcare team within a larger hospital system, which must be considered when attempting to improve the reliability of patient care.

Models to increase the reliable use of evidence based therapies typically focus on translating evidence into practice or on the best methods to run a collaborative; few if any have done both.4 Our model embeds an explicit method for knowledge translation in a collaborative model for broader dissemination of knowledge into practice.

Model to translate evidence into practice

We have described an integrated approach to improve the reliability of care5 that has been associated with substantial and sustained reductions in bloodstream infections associated with central lines.6 The approach has five key components:

  • A focus on systems (how we organise work) rather than care of individual patients

  • Engagement of local interdisciplinary teams to assume ownership of the improvement project

  • Creation of centralised support for the technical work

  • Encouragement of local adaptation of the intervention

  • Creation of a collaborative culture within the local unit and larger system.

This approach has matured into the Johns Hopkins Quality and Safety Research Group translating evidence into practice model (figure). The resources required to develop, implement, and evaluate programmes using this model are substantial. Thus, the model is intended for large scale collaborative projects, in which centralised researchers support the technical development (for example, summarise the research evidence and develop measures) and local teams throughout a hospital perform the adaptive work (engage staff in the project, tailor interventions to fit the local work processes, and identify how to modify work so that all patients can receive the intervention). Below, we describe the model and its application to reduce infections associated with insertion of central lines.6

Fig 1 Strategy for translating evidence into practice

Summarise the evidence

The first stage involves summarising the evidence for interventions to improve a specific outcome. The interdisciplinary team of centralised researchers and clinicians reviews the relevant research using a standard evidence based medicine approach to identify the interventions with the greatest benefit and the lowest barriers to use. The team agrees on the top interventions (maximum of seven) and converts them into behaviours.7 In our efforts to reduce infections associated with central lines, for example, we identified five interventions: wash hands before insertion, use full barrier precautions, prepare the insertion site with chlorhexidine antiseptic, avoid the femoral site for insertion, and remove unnecessary lines.8

Identify local barriers to implementation

The evidence based intervention will become part of a work process, so it is important to appreciate the context surrounding that work. Specifically, it helps to physically walk through the steps with clinicians to observe what is required to deliver the intervention to patients. This helps identify defects: points where the intervention is not implemented as intended. For example, while observing insertion of central lines, we watched clinicians gather the equipment essential for complying with recommended practice (sterile gloves, full sterile drape, etc) from up to eight different locations. To make compliance easier for clinicians we introduced a central line cart storing all the necessary supplies.

To understand the context in which the intervention will be implemented, researchers must ask all stakeholders why it is difficult or easy for them to comply with recommended practices.9 The researcher’s role is to listen carefully and discern what staff may gain or lose from implementing the intervention. In our project, we learnt that nurses were reluctant to question or challenge doctors who failed to follow recommended practice and that doctors did not like being questioned by nurses in front of patients or other staff. Although clinicians agreed with the recommended practices, these cultural barriers prevented reliable delivery. To address them, we implemented a comprehensive safety programme that included methods to improve culture, teamwork, and communication.10

Measure performance

The research team must develop performance measures to evaluate how often patients actually receive the recommended therapy (process measures) or evaluate whether patient outcomes improve (outcome measures). The choice of process or outcome measures has been debated, although outcome measures are preferred if valid and feasible.11 12 We chose to measure infection rates (an outcome measure) because the Centers for Disease Control and Prevention provides standardised, scientifically rigorous definitions and because most hospitals already collect data on infections.8 We could not develop a valid and feasible measure of compliance with evidence based practices for central line insertion: lines are inserted at unpredictable times, making independent observation difficult to coordinate, and self reported compliance would be likely to overestimate performance.

A rigorous and iterative process for developing and collecting performance measures is required to reduce selection, measurement, and analytical biases.5 13 Validity of the outcome measure for our project was already established by Centers for Disease Control guidelines. The numerator was the number of infections and the denominator was the number of catheter days. Data for the numerator and denominator were collected by each hospital’s infection control practitioners, independent of the intensive care team. But we had to develop data collection forms, a database, a data quality control plan to reduce missing data, and an analytical plan to provide intensive care units with regular reports on their performance. During the project, the local interdisciplinary team at each participating unit received monthly feedback on the number of infections in the unit and quarterly feedback on the rate of infections per 1000 catheter days. To evaluate system-wide performance in a collaborative project, appropriate statistical models should account for variation of data over time and clustering of data within hospitals.14
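To make the measurement arithmetic concrete, the sketch below computes the quarterly feedback rate per 1000 catheter days and fits a Poisson model that accounts for clustering of observations within hospitals, in the spirit of the statistical approach cited above.14 It is a minimal illustration only: the figures are invented, and the choice of Python with pandas and statsmodels is our assumption, not part of the original project.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical quarterly data for three hospitals (quarter 0 = baseline)
    df = pd.DataFrame({
        "hospital":      ["A", "A", "B", "B", "C", "C"],
        "quarter":       [0, 1, 0, 1, 0, 1],
        "infections":    [4, 1, 6, 2, 3, 0],                  # numerator
        "catheter_days": [1480, 1520, 2100, 1990, 910, 870],  # denominator
    })

    # Feedback measure reported to each unit:
    # infections per 1000 catheter days
    df["rate_per_1000"] = 1000 * df["infections"] / df["catheter_days"]
    print(df[["hospital", "quarter", "rate_per_1000"]])

    # A Poisson generalised estimating equation treats each hospital as a
    # cluster; `exposure` supplies catheter days so that the quarter
    # coefficient describes change in the infection rate over time.
    model = sm.GEE(df["infections"],
                   sm.add_constant(df["quarter"]),
                   groups=df["hospital"],
                   family=sm.families.Poisson(),
                   exposure=df["catheter_days"])
    print(model.fit().summary())

In practice many more clusters and time periods would be needed for a stable analysis; the point here is only the structure of the calculation.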

After pilot testing is completed, baseline performance should be measured to understand the opportunity for improvement and the size of improvement after the intervention is implemented. For our project, most teams obtained infection data for the three months before implementation.

Ensure all patients reliably receive the intervention

The final and most complex stage is to ensure that all patients reliably receive the intervention. The interventions must fit each hospital’s current system, including local culture and resources. While there is no formula for redesigning care processes, certain tactics seem effective.9 15 16 17 Informed by evidence and our experiences, we developed a “four Es” approach to improve reliability: engage, educate, execute, and evaluate.5 This differs from the established plan-do-study-act cycle in that it is applied to the project as a whole rather than to each step within it. Also, while the plan-do-study-act cycle approaches change linearly, the four Es approach recognises the importance of culture change, contextual factors, and engaging staff in the project. Finally, our approach places more emphasis on robustly measuring the primary goal and less on measuring secondary goals.

Engage—We engaged staff by sharing real life stories of patient tragedies and triumphs and by estimating the harm attributable to omitting the intervention in their unit or hospital given their baseline data. We informed each unit of its annual number of infections and patient deaths attributed to the infections.18

Educate—We educated all levels of staff by providing the original scientific literature supporting the proposed interventions, along with concise summaries and a checklist of the evidence.

Execute—To execute an intervention effectively, we designed an implementation “toolkit” based on the identified barriers to implementation. This toolkit provides a framework for redesigning care processes built on three principles: standardise care processes, create independent checks (such as checklists), and learn from mistakes. We created a checklist of the five evidence based behaviours described above, which a nurse completed to check compliance as the clinician prepared for central line insertion.19 If unambiguous and behaviourally specific, a checklist democratises knowledge, levelling understanding of best practice among doctors, nurses, and patients. The local team evaluated every infection to identify whether it was preventable.
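As an illustration of the checklist as an independent check, the sketch below encodes the five behaviours as data and flags any item not observed before the line is placed. The item wording follows the behaviours listed earlier in the article; the data structure and function name are our own hypothetical choices, not the project’s actual tooling.

    # The five evidence based behaviours, encoded as a checklist
    CENTRAL_LINE_CHECKLIST = [
        "Wash hands before insertion",
        "Use full barrier precautions",
        "Prepare the insertion site with chlorhexidine antiseptic",
        "Avoid the femoral site for insertion",
        "Remove unnecessary lines",
    ]

    def items_not_complied_with(observed):
        """Return checklist items the nurse has not ticked, so the
        procedure can be paused before the line is placed."""
        return [item for item in CENTRAL_LINE_CHECKLIST
                if not observed.get(item, False)]

    # Example: everything observed except chlorhexidine preparation
    print(items_not_complied_with({
        "Wash hands before insertion": True,
        "Use full barrier precautions": True,
        "Avoid the femoral site for insertion": True,
        "Remove unnecessary lines": True,
    }))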

Evaluate—To evaluate whether the intervention was successful, teams compared their baseline data with performance measures collected during and after execution of the intervention. During the project, median rates of central line associated infections per 1000 catheter days were reported quarterly to each unit and compared with past performance using simple run charts. Across all 103 units, the median infection rate per 1000 catheter days decreased from 2.7 (interquartile range 0.6-4.8) at baseline to 0 (0-2.4) in the 18 months after the intervention. Over the 18 month observation period, more than half of the units reduced their infection rate to zero, and the overall mean rate fell by 66%.6 Although we cannot prove that the intervention caused this reduction, no other improvement interventions were occurring at the time. Teams should also evaluate regularly for unintended consequences of the intervention, which may arise from decreased attention to other processes of care or from new harms unintentionally introduced by the intervention and its associated system changes.
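The comparison itself is simple arithmetic. The sketch below summarises unit level rates at baseline and follow-up using medians and interquartile ranges, mirroring how the results above are reported; the numbers are invented for illustration.

    import statistics

    # Hypothetical rates per 1000 catheter days for a handful of units
    baseline  = [2.7, 4.8, 0.6, 3.1, 1.9]
    follow_up = [0.0, 2.4, 0.0, 0.0, 1.1]

    for label, rates in (("baseline", baseline), ("18 months", follow_up)):
        q1, median, q3 = statistics.quantiles(rates, n=4)
        print(f"{label}: median {median:.1f} (IQR {q1:.1f}-{q3:.1f})")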

We have recently added two more Es to the model: endure and extend. To endure (that is, sustain the improvement), teams were asked to integrate the project into their hospital’s quality improvement efforts. This included obtaining resources to continue measuring and feeding back performance, securing dedicated time for teams to continue the work, and incorporating training on the intervention into staff orientation. To extend, teams were asked to work with hospital quality improvement leaders to spread the intervention to the emergency department and operating room suites, where central lines are also inserted.

Future directions

To improve patient health, research knowledge must be translated into routine practice. Such knowledge translation is an emerging science in which researchers must partner with practising clinicians. Our model proved successful in a collaborative that achieved a large and sustained reduction in infections associated with central line insertion in Michigan.6 Although we did not formally evaluate why the collaborative succeeded, the available information allows some educated guesses. Our model combined culture change and evidence with rigorous measurement. We engaged individuals by telling real stories of tragedy and by showing how their baseline performance could help or harm the next patient. Our interventions were evidence based, and clinicians (especially doctors) perceived the measures and results as valid. Doctors saw the results, found satisfaction in their work, and demanded new programmes. We centralised the technical work, which takes substantial resources, yet let local teams decide how they would implement the evidence given their own resources and culture. Finally, the project provided social support and local ownership. We found that it takes about a year to develop and pilot a new programme before it is ready for wide scale use.

This model is generalisable and can be applied to inpatient and outpatient settings. For example, we are currently implementing a safe surgery programme in Michigan and piloting programmes in the emergency department and the outpatient diabetes service. Future efforts could adapt this model to any clinical setting.


Footnotes

  • We thank Victor Dinglas for help in preparing the figure and Christine G Holzmueller for help in editing the manuscript. We also thank members of the Quality and Safety Research Group, the Michigan Health and Hospital Association, and staff from participating Michigan hospitals who helped us develop and implement this project and mature this conceptual model.

  • Contributors and sources: This manuscript was prepared by the authors, all of whom are practising critical care physicians and have postgraduate qualifications in clinical investigation. PJP wrote the initial draft of the manuscript and all authors contributed to redrafting. PJP is guarantor.

  • Competing interests: PJP and SMB received grant support from the Michigan Health and Hospital Association and the Robert Wood Johnson Foundation. They have also received lecture fees from various healthcare organisations to speak about quality and patient safety. SMB has had grant support from the National Institutes of Health/National Heart Lung and Blood Institute and the Agency for Healthcare Research and Quality and received consulting fees from the Michigan Health and Hospital Association and the Veterans Health Administration. DMN has had grant and contract support from the National Institutes of Health/National Heart Lung and Blood Institute and a clinician-scientist award from the Canadian Institutes of Health Research.

References