Interventions employed to improve intrahospital handover: a systematic review
  1. Eleanor R Robertson1,
  2. Lauren Morgan1,
  3. Sarah Bird2,
  4. Ken Catchpole3,
  5. Peter McCulloch1
  1. 1Quality, Reliability, Safety and Teamwork Unit, Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK
  2. 2University of Oxford Medical School, John Radcliffe Hospital, Oxford, UK
  3. 3Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, California USA
  1. Correspondence to Mrs Eleanor R Robertson, Nuffield Department of Surgical Sciences, University of Oxford, Level 6, John Radcliffe Hospital, Headington, Oxford, Oxfordshire OX3 9DU, UK; eleanor.robertson{at}nds.ox.ac.uk

Abstract

Background Modern medical care requires numerous patient handovers/handoffs. Handover error is recognised as a potential hazard in patient care, and the information error rate has been estimated at 13%. While accurate, reliable handover is essential to high quality care, uncertainty exists as to how intrahospital handover can be improved. This systematic review aims to evaluate the effectiveness of interventions aimed at improving the quality and/or safety of the intrahospital handover process.

Methods We searched for articles on handover improvement interventions in EMBASE, MEDLINE, HMIC and CINAHL between January 2002 and July 2012. We considered studies of: staff knowledge and skills, staff behavioural change, process change or patient outcomes.

Results 631 potentially relevant papers were identified from which 29 papers were selected for inclusion (two randomised controlled trials and 27 uncontrolled studies). Most studies addressed shift-change handover and used a median of three outcome measures, but there was no outcome measure common to all. Poor study design and inconsistent reporting methods made it difficult to reach definite conclusions. Information transfer was improved in most relevant studies, while clinical outcome improvement was reported in only two of 10 studies. No difference was noted in the likelihood of success across four types of intervention.

Conclusions The current literature does not confirm that any methodology reliably improves the outcomes of clinical handover, although information transfer may be increased. Better study designs and consistency of the terminology used to describe handover and its improvement are urgently required.

  • Hand-off
  • Implementation Science
  • Quality Improvement
  • Quality Improvement Methodologies
  • Transitions In Care


Introduction

The practice of modern medicine relies upon increasingly complex work environments and supporting processes. One process central to the delivery of safe care is handover (handoff). Clinical handover has been defined as: ‘the transfer of professional responsibility and accountability for some or all aspects of care for a patient, or groups of patients, to another person or professional group on a temporary or permanent basis’.1 The publication of healthcare-derived error reports2 ,3 prompted significant changes both in the delivery of care and how clinicians view error.4 Following the introduction of reduced working hours, the number of shift-handovers in Europe and the USA has increased by up to 40%,5–7 and handover between doctors has become formalised. It has been estimated that an average inpatient will require 24 handovers during their hospital stay,8 and that 1.6 million handovers occur in the USA per year.9 The safety benefit derived from shorter working weeks should be viewed in the context of the additional risks associated with an estimated 13% error rate derived from handover.10

Handover failures typically contribute to a cascade of failures involved in adverse outcomes, rather than being sole causes, making the estimation and investigation of handover-derived harms difficult. Common consequences of handover failures, such as near misses and delays in care, are difficult to assess for their overall contribution to potential harm.10

Handover can be viewed simply as a human interaction where information is sent, received and processed.11 However, this process receives multiple inputs relating to the core healthcare tasks of: prescribing; investigation requests; receiving and interpreting results; ensuring continuity of care; and delivery of patient advice.12 ,13 It is therefore possible to nest this human-to-human interaction within a wider system of work which requires the examination of the action and interaction of technology, environment, tasks and the organisation surrounding the handover.14

Recognition of the potential risks of handover errors has led many researchers to attempt to improve handover using a range of methods, both simple and multi-component. Interventions generally target information transfer directly, individual behaviour or the wider system. Approaches have included process standardisation; training and education; changes to the physical environment; use of technology; explicit signalling of accountability transfer; and others.15 The diversity of methods used to evaluate the results has been even greater, but these can be grouped as dealing with patient outcome, staff satisfaction, compliance with protocols, time taken for handover and information transfer (eg, completeness or accuracy of information transfer).

Uncertainty remains as to the most effective method for improving intrahospital handover. This systematic review aims to evaluate interventions which have been developed to improve the quality and/or safety of the intrahospital handover process with a view to enabling hospital practitioners and researchers to focus on refining the most effective interventions.

Methods

Scope of review

We aimed to include studies of attempts to improve all types of handover within hospital settings. We therefore developed a search strategy based on the PICO question:16 ‘In [POPULATION; groups of clinical staff handing over information about patients under their care], do [INTERVENTION; systematic intentional interventions] compared to [COMPARISON; no intervention] improve [OUTCOMES; patient outcome, staff satisfaction, time taken or information transfer]?’ Inclusion criteria for studies comprised: (a) includes an intervention developed with the intent of improving handover quality and/or safety; (b) set within an intrahospital environment; (c) uses both preintervention and postintervention assessment (of any factor relating to the intervention) to evaluate improvements; and (d) assesses any of: knowledge and skills of staff, time taken for handover or related tasks, staff behavioural change or patient outcomes.
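For illustration only, the four inclusion criteria can be expressed as a simple screening predicate. The sketch below is ours; the field and set names are hypothetical and do not appear in the published protocol.

```python
from dataclasses import dataclass

# Hypothetical screening record; field names are illustrative, not from the protocol.
@dataclass
class CandidateStudy:
    has_improvement_intervention: bool   # criterion (a)
    intrahospital_setting: bool          # criterion (b)
    pre_and_post_assessment: bool        # criterion (c)
    assessed_outcomes: set               # criterion (d)

# Outcome types accepted under criterion (d), paraphrased from the text above.
ELIGIBLE_OUTCOMES = {
    "knowledge_and_skills", "time_taken", "behavioural_change", "patient_outcomes",
}

def meets_inclusion_criteria(study: CandidateStudy) -> bool:
    """Return True only if all four inclusion criteria (a)-(d) are satisfied."""
    return (
        study.has_improvement_intervention
        and study.intrahospital_setting
        and study.pre_and_post_assessment
        and bool(study.assessed_outcomes & ELIGIBLE_OUTCOMES)
    )

# Example: a study measuring only handover duration would still qualify under (d).
example = CandidateStudy(True, True, True, {"time_taken"})
assert meets_inclusion_criteria(example)
```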

The protocol was registered with an international database of prospectively registered systematic reviews, PROSPERO (registration number: CRD42012001995).

Search strategy

We searched EMBASE, MEDLINE, HMIC and CINAHL for papers published in English between January 2002 and July 2012, using a search strategy based on the terms below. Synonyms of handover, intrahospital transfer and intervention were used as follows (an illustrative sketch of how these groups might be combined appears after the list):

  • Handover(s), hand over(s), hand-over(s), handoff(s), hand off(s), sign out(s), sign off(s), shift to shift(s), inter shift(s)

  • Patient transfer(s), intrahospital transfer(s), intra hospital transfer(s), intrahospital transport(s), intra hospital transport(s)

  • Intervention(s), improv* (wildcard truncation covering improve/improvement/improvements/improving, etc.), quality, safety, strateg* (covering strategy/strategies/strategic, etc.), training, instrument(s), standardi* (covering standardisation/standardization/standardisations, etc.), mnemonic(s).
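As a sketch only: the exact syntax differs between EMBASE, MEDLINE, HMIC and CINAHL, and the (handover OR transfer) AND intervention structure below is an assumption rather than the published strategy, but it illustrates how the synonym groups might be combined into a single Boolean query.

```python
# Illustrative only: combine the synonym groups into one Boolean query string.
# The "*" truncation symbol and the OR/AND structure are assumptions about the
# database interface; the published searches were run in each database separately.
handover_terms = [
    "handover*", "hand over*", "hand-over*", "handoff*", "hand off*",
    "sign out*", "sign off*", "shift to shift*", "inter shift*",
]
transfer_terms = [
    "patient transfer*", "intrahospital transfer*", "intra hospital transfer*",
    "intrahospital transport*", "intra hospital transport*",
]
intervention_terms = [
    "intervention*", "improv*", "quality", "safety", "strateg*",
    "training", "instrument*", "standardi*", "mnemonic*",
]

def or_group(terms):
    """Join one synonym group with OR and wrap it in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Assumed structure: (handover OR transfer synonyms) AND (intervention synonyms).
query = or_group(handover_terms + transfer_terms) + " AND " + or_group(intervention_terms)
print(query)
```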

Exclusion criteria and paper selection

The studies identified in the searches were de-duplicated and their abstracts were reviewed by one reviewer for compliance with the inclusion criteria. The resulting full text articles were independently reviewed by two reviewers in consultation with a third. Data were extracted independently onto data collection forms and the reviewers then met and compared responses. Differences of opinion about eligibility were resolved by mutual agreement and, where agreement was not reached, an external opinion was sought (PM).

Data extraction and synthesis

Where available, the information in box 1 was extracted from each paper. The interventions were categorised into three overarching categories: ‘person’ interventions (focusing on training people, improving awareness and changing culture), ‘information system’ interventions (focusing on rationalising systems of information delivery) and ‘wider system’ interventions (focusing on improving the technology and infrastructure underlying the handover process). An illustrative sketch of this extraction structure is given after box 1.

Box 1

Data extraction protocol

  • Context

    • Number of hospitals

    • Medical speciality setting

    • Type of handovers

  • Study type

    • Study design

    • Timeline (observation, intervention and follow-up) 

    • Outcome measures

  • Intervention type

    • Person

      • Teamwork training (TwT) classroom

      • TwT coaching

      • Video-reflexive techniques

      • Medical supervision

    • Information system

      • Standard operating procedures (SOP/protocol)

      • Minimum dataset (including checklists)

      • Mnemonics

    • Wider system

      • Information technology

      • Continuous process improvement

  • Outcomes

    • Measures of information transfer (information transfer, error, forgotten tasks)

    • Measures of satisfaction with the process (staff and patient)

    • Measures of compliance with the prespecified protocol for the handover

    • Duration (handover length, time to treatment and overtime requirements)

    • Clinical outcomes (adverse events and patient outcomes)
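To make the protocol in box 1 concrete, a minimal sketch of one extraction record and the three overarching intervention categories follows. The category labels are taken from the box; the field names and structure are ours and purely illustrative.

```python
from dataclasses import dataclass

# Minimal sketch of one data-extraction record mirroring box 1.
PERSON = {"TwT classroom", "TwT coaching", "video-reflexive techniques", "medical supervision"}
INFORMATION_SYSTEM = {"SOP/protocol", "minimum dataset", "mnemonic"}
WIDER_SYSTEM = {"information technology", "continuous process improvement"}

@dataclass
class ExtractionRecord:
    number_of_hospitals: int
    specialty_setting: str
    handover_type: str
    study_design: str
    timeline_days: dict            # e.g. {"observation": 28, "intervention": 28, "follow_up": 28}
    intervention_components: set   # drawn from the three category sets above
    outcome_measures: set          # e.g. {"information transfer", "staff satisfaction"}

def overarching_categories(record: ExtractionRecord) -> set:
    """Return which of the three overarching categories the intervention touches."""
    categories = set()
    if record.intervention_components & PERSON:
        categories.add("person")
    if record.intervention_components & INFORMATION_SYSTEM:
        categories.add("information system")
    if record.intervention_components & WIDER_SYSTEM:
        categories.add("wider system")
    return categories
```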

Quality assessment

Assessment of the quality of included papers was undertaken using a modified Downs and Black (D&B) checklist.17 This checklist was designed to evaluate the quality of both randomised and non-randomised studies of healthcare interventions on the same scale. It has 27 questions in three sections covering reporting, external validity and internal validity (bias and confounding), and has previously been adapted for use with handover studies.15 We reasoned that it was legitimate to omit questions which, given the nature of the studies we were evaluating, would add nothing to the final assessment. We therefore considered the original and handover-adapted versions of the checklist together and, on this principle, produced a hybrid version for our use. This led us to exclude seven questions deemed unsuitable (Q5, Q9, Q12, Q14, Q17, Q25 and Q26) and to adapt three others (Q4, Q8 and Q21). As a result of these changes, the maximum score a study could achieve was reduced from the original 27 to 20.
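A minimal sketch of the resulting scoring, assuming one point per retained question (which the stated reduction from 27 to 20 implies); the question numbering follows the original checklist.

```python
# Sketch of the modified Downs and Black scoring described above, assuming one
# point per retained question, consistent with the reduced maximum of 20.
ALL_QUESTIONS = set(range(1, 28))        # Q1..Q27 of the original checklist
EXCLUDED = {5, 9, 12, 14, 17, 25, 26}    # questions deemed unsuitable
ADAPTED = {4, 8, 21}                     # questions reworded but still scored

RETAINED = ALL_QUESTIONS - EXCLUDED
MAX_SCORE = len(RETAINED)                # 27 - 7 = 20
assert MAX_SCORE == 20 and ADAPTED <= RETAINED

def modified_db_score(points: dict) -> int:
    """Sum the points awarded (0 or 1 each, assumed) for retained questions only."""
    return sum(points.get(q, 0) for q in RETAINED)
```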

In a similar fashion, we adopted a modification of a recognised guideline to evaluate intervention transferability. The Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines were developed in 2009 to promote standardised reporting of healthcare quality improvement interventions.18 For the purposes of this review, Q8, Q9a, Q9b, Q9c, Q14a, Q16b and Q16c were used to critique the included papers on the reporting of their intervention. We also recorded whether there was a specific mention of the SQUIRE guidelines.

Results

A total of 29 studies were identified for inclusion in this review. The search of EMBASE, MEDLINE, HMIC and CINAHL provided a total of 631 citations and, following de-duplication, 437 papers remained (figure 1). Of these, 329 were excluded after abstract review as not matching the inclusion criteria. The full text of the 108 remaining citations was reviewed in more detail; 79 of these did not meet the inclusion criteria and were excluded. The remaining 29 papers met the inclusion criteria (figure 1) and are displayed in the summary table (see online supplementary table S1).

Figure 1

PRISMA 2009 flow diagram.
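As a minimal arithmetic check, the selection counts reported above are internally consistent; the variable names below are ours.

```python
# Consistency check on the reported study-selection counts (PRISMA flow).
citations_identified = 631
after_deduplication = 437
excluded_on_abstract = 329
full_texts_reviewed = 108
excluded_on_full_text = 79
included = 29

assert after_deduplication - excluded_on_abstract == full_texts_reviewed   # 437 - 329 = 108
assert full_texts_reviewed - excluded_on_full_text == included             # 108 - 79 = 29
```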

The designs of the included studies comprised: two randomised controlled trials (RCTs);19 ,20 one controlled preintervention/postintervention trial;21 25 uncontrolled preintervention/postintervention trials;22–46 and one Plan-Do-Check-Act design.47

A total of 11 759 handovers were included in studies which gave this information, with a median of 103 handovers per study. Ten studies19 ,21 ,26 ,27 ,31 ,32 ,39 ,41 ,44 ,46 gave no information on the number of handovers they included (see online supplementary table S1).

Study duration

Of those studies which gave information on the length of time for each study component, the median length of time (days) for preintervention data collection was 28 (range 4–224), for the intervention was 28 (range 1–252), for the gap between the intervention and the commencement of postintervention data collection was 10.5 (range 0–365) and for the postintervention data collection period was 28 (range 4–224). Seven studies gave no information on any component of their study design timeline34–37 ,41–43 and 14 gave no information on one or more study timeline components.19 ,21–23 ,25 ,29–32 ,39 ,40 ,45–47
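For illustration of how each component summary above is derived, a sketch with placeholder durations (these are not the real per-study values, which are not reproduced here):

```python
from statistics import median

# Placeholder durations (days) for one study component; illustrative only, but
# chosen so that they reproduce the reported preintervention summary of
# median 28 (range 4-224).
preintervention_days = [4, 14, 28, 28, 56, 224]

def summarise(days):
    """Return (median, minimum, maximum) for a list of durations in days."""
    return median(days), min(days), max(days)

med, lo, hi = summarise(preintervention_days)
print(f"median {med} days (range {lo}-{hi})")   # median 28.0 days (range 4-224)
```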

Study environments

The majority of the studies (22) were performed in one ward environment.20–41 Four studies were performed in more than one environment19 ,41 ,45 ,46 and three gave no detail on the study environment.42–44

Improvement strategies

The included studies took varied approaches to handover improvements. Fifteen studies were mono-component interventions19 ,21 ,22 ,27 ,31 ,34 ,37–44 ,46 and the remainder contained two or more components. Seven studies shared an intervention component: two interventions used the SIGNOUT mnemonic20 ,45 and five used the SBAR mnemonic in its original21 ,41 ,42 ,47 or slightly adapted format.44

Outcome measures

The studies evaluated their interventions using a total of 82 discrete outcome measures, each study using between one and five measures (median of three). Since there was no prior classification system available, we developed a simple pragmatic classification to allow us to consider studies with a similar outcome focus together. Based on the measures in the papers chosen for inclusion, we grouped study outcome measures as relating to: Information transfer, Staff satisfaction, Handover duration, Clinical outcomes and Compliance with handover protocol. Two studies evaluated their interventions with two outcome measures;39 ,41 six used three;19 ,23 ,26 ,27 ,42 ,47 one study used four;22 and one used five outcome measures.36 No primary outcome measure was common to all the studies (see online supplementary table S2).

The studies are presented in online supplementary table S3 and online supplementary appendix by type of intervention—information system, person or wider system—and if a study contained a component from more than one category, the study is represented twice.

Seventeen studies reported a statistically significant change in at least one of their outcome measures,20 ,24 ,25 ,28–35 ,37 ,38 ,43–46 while 10 did not.19 ,22 ,23 ,26 ,27 ,36 ,39 ,41 ,42 ,47 Improvements in information transfer were the most commonly reported successes, being found in more than half of the studies examining this,20 ,24 ,25 ,28 ,29 ,32 ,33 ,37 ,38 ,43–45 and staff satisfaction was the next most commonly improved, in 35% of studies;28–32 ,34 ,35 ,43–46 a similar proportion reported improvements in time taken and compliance with protocols. Of the 10 studies19 ,21 ,22 ,30 ,32 ,35 ,36 ,40 ,41 ,47 which attempted to evaluate changes in patient outcome, only two35 ,40 reported a significant benefit: one reported a 12% decrease in adverse events (need for cardiopulmonary resuscitation (CPR), extracorporeal membrane oxygenation (ECMO) and acidosis) (p<0.001)35 and the other reported a significant reduction in length of stay (p=0.047).40 There was no obvious difference between the success rates of multi- and mono-component interventions, and none of our defined categories (standardisation tools, team training approaches or quality improvement programmes) seemed to be clearly associated with a better chance of a positive outcome.

There were two RCTs in the study selection, and we considered these separately. One19 focused on the use of a computerised reporting system to speed up handover, and found that it achieved this aim without apparently increasing the risks of adverse events or care errors. The method of randomisation was poorly described and the concealment of treatment allocation was not clear. Although the senior assessor who judged whether clinical errors had occurred was blinded to treatment group, the data supplied to this clinician apparently came from the residents under study and were therefore unblinded, resulting in a high risk of bias. The other RCT20 evaluated the benefit of supervisor feedback on handover performance among internal medicine residents, but suffered from similar defects in randomisation and blinding of assessors. This study reported a significant improvement in compliance with the protocol but also carried a high risk of bias.

The quality scores of the included studies according to the modified D&B checklist ranged from 1 to 17, with a median score of 9 (IQR 7.5–12). There was no statistically significant difference in the median D&B score between positive and negative studies (Mann–Whitney U test, p=0.248).
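A minimal sketch of this comparison, using placeholder score lists rather than the scores extracted in the review:

```python
from scipy.stats import mannwhitneyu

# Placeholder D&B scores for studies with and without a significant finding;
# illustrative only, not the scores extracted in this review.
positive_study_scores = [7, 8, 9, 10, 11, 12, 13]
negative_study_scores = [6, 8, 9, 9, 10, 12]

stat, p_value = mannwhitneyu(positive_study_scores, negative_study_scores,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```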

Discussion

Our findings in context

We embarked on this review from the viewpoint that handover is important, frequently the focus of improvement studies and difficult to characterise.48 Failures in handover can produce a wide variety of untoward outcomes, ranging from lack of event awareness and loss of significance to dropped or missing information required to perform tasks.49 In medicine, the serious consequences which can ensue are well recognised, as is the disparate and unsatisfactory nature of handover processes in many settings. This explains the large number of studies devoted to improving handover processes. Unfortunately, this review shows that the poor quality of most studies leaves us unable to draw many firm conclusions about how handover may be optimised. The large majority of published studies are small, uncontrolled, unblinded before/after comparisons, often with a short or undefined follow-up period. The only outcome category which was apparently improved in more than 50% of the studies which examined it was information transfer. Time taken for the process, compliance with protocol and staff satisfaction were all improved in a minority of studies, while clinical outcome improvements were reported in only two of 10 studies. This does not exclude the possibility that the positive findings in some of these studies were valid, but the lack of strong trends and the poor study designs mean that we cannot have much confidence in this. At present, it appears that information transfer is the aspect of handover in which interventions most readily show change; whether this results in any beneficial outcome beyond better recording of data is, however, unclear.

Information transfer

It seems rational to use information transfer as a key outcome measure for evaluating handover since reliable transfer of information is the principal purpose of formal handover. However, we need to consider carefully what we wish to know about information transfer in order to measure it effectively. We suggest that the functional value of a handover session can be effectively measured by evaluating three aspects of information transfer—completeness, accuracy and organisation. The last of these is essential to ensure that the most important data are not obscured by other items and are easy to identify because the information is presented in a structured way. However, we recognise that other taxonomies for describing information transfer may also be valid, for example, that proposed by Patterson and Wears50 or by Pezzolesi et al,51 and that ultimately empirical trials will determine whether our suggestion proves the most useful.

The need for a taxonomy

Another major problem identified by the review is the lack of any common language or taxonomy for describing or classifying handovers, improvement methods or types of outcome. Other fields of study have found this a major handicap to progress49 and we therefore recommend that attempts are made to harmonise terminology and definitions; this would greatly assist others trying to repeat the work. A particular difficulty, however, is the great heterogeneity of handover settings and types which exist in healthcare. Developing a taxonomy which can adequately describe all of these is challenging, and arguably considering them all together, as we have done, may be inappropriate, depending on the question posed. Had an agreed taxonomy existed, it would have helped us to make more sense of the literature by allowing us to identify whether there were subgroups where the findings allowed us to hypothesise (and the data available would allow no more than this) that certain intervention types were particularly valuable.

We nevertheless suggest that handovers themselves require a descriptive template covering setting, personnel, means of information transfer, standardisation of procedure, feedback and summarisation, task allocation and recording. We have used a four-category classification to divide the approaches to improvement reported in the studies we found, but feel that further refinements to this could be made. For the present, however, we recommend the classification of outcomes into measures of staff satisfaction, information transfer, protocol compliance and clinical outcome. Not only did this deal with all the papers in the current study in a satisfactory manner, but it also lends itself readily to analysis of the data using the Kirkpatrick four-level evaluation model for training and educational interventions.52
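One possible alignment of these four outcome categories with the Kirkpatrick levels is sketched below; the pairing is our suggestion for analysis, not a mapping stated in the studies reviewed.

```python
# A possible alignment of the four outcome categories with Kirkpatrick's levels.
# The pairing is a suggestion only, not asserted by the source studies.
KIRKPATRICK_ALIGNMENT = {
    "staff satisfaction": "Level 1: Reaction",
    "information transfer": "Level 2: Learning",
    "protocol compliance": "Level 3: Behaviour",
    "clinical outcome": "Level 4: Results",
}
```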

Need for improved study design and reporting standards

The evidence we found in this review has to be regarded as very unreliable because the studies were of poor design and therefore susceptible to bias from multiple sources. This was reflected in the low scores on the modified D&B scale used. Secular trends may give a false impression of improvement caused by interventions; observers may find it very difficult to avoid bias in assessing subjective endpoints; and short follow-up periods can give an unrealistic impression of impact if they capture a fleeting improvement in performance which quickly fades. The two randomised studies19 ,20 should be less susceptible to bias but their unusual design, the lack of clinically relevant endpoints and the lack of true blinding decrease their internal validity significantly. Generally speaking, the transferability of the studies in this review was also low, as reflected in the scoring using the SQUIRE guidelines.

Limitations

The limitations of our own study were partly a consequence of the problems of the literature we studied. A more comprehensive search, not restricted by language or date range and including the ‘grey literature’, might have yielded further studies, but it seems unlikely that this would have improved the overall quality or reduced the heterogeneity of the studies. An example of the heterogeneity was the duration of the study periods, which varied by a factor of at least 50 for each component of the study. These two aspects of the literature, heterogeneity and poor quality, were the principal causes of our inability to reach strong conclusions. Our initial hypothesis was very broad, and we might perhaps have achieved more insight into the literature had we focused on a smaller and less heterogeneous subgroup of handover types. Any such restriction would of course have affected the applicability of our findings. We felt it was important to assess the quality of study design and reporting in these studies, since the generally poor level of scientific rigour in these areas is such an important contributor to the difficulty in reaching definitive conclusions from this literature at present. We used modifications of the SQUIRE and D&B checklists to study transferability and validity, respectively. Our modifications were designed to allow evaluation of an enormously heterogeneous and often poorly described group of studies. Several questions in both checklists were not appropriate for evaluation of handover studies of the types included in our search, either because they were entirely irrelevant or because they were partially irrelevant and attempting to answer them would increase rather than decrease uncertainty in the evaluation of the studies. We recognise that the truncated evaluations we used have not been fully validated, but we feel the logic used in producing them means that they are more likely to be both valid and discriminatory than the use of the full versions of the tools involved. Further work could verify this hypothesis; at present, we have to accept that our quality and transferability assessments should be considered with caution.

Recommendations

We recommend that future studies of handover improvement should use a standardised taxonomy to describe the key aspects of handover, although we recognise that handovers are so heterogeneous that it is unlikely that any individual study will need to record data about every aspect. We include a proposal for a framework on which such a template could be constructed (see online supplementary table S4) and strongly suggest that this is adopted for future study designs.

We strongly recommend that the standards of study design and reporting for handover intervention research are strengthened by adoption of some basic desiderata of clinical research. Researchers should consider using a control group not subject to contamination by the intervention, using standard definitions of terms to describe the handover and to measure the intervention and the outcomes, and employing blinded observers or hard objective endpoints, as well as a realistic and specified duration of follow-up. As a consequence of this review, we recommend that a consensus be reached on a core standardised handover assessment method. This would enable inter-intervention comparison and aid the development of a strong evidence base as to which improvement methods are of benefit. We recommend that some form of information transfer assessment be included in this method, but that consideration be given to including an outcome from each of the four categories we identified. We would also recommend that future interventional trials follow the SQUIRE reporting guidelines,18 which would enable future researchers and clinicians to repeat their findings and would aid the dissemination of improved safety processes between institutions.53

Acknowledgments

The authors would like to thank Tatjana Petrinic, Outreach Librarian, Cairns Library in the University of Oxford, for her assistance in generating a comprehensive search strategy for this systematic review.

Footnotes

  • Contributors This systematic review was led by ER. ER developed the search methodology with assistance from TP and PM. ER screened the abstracts and the full text articles. ER, LM and SB reviewed the included articles and extracted the information on to data collection sheets. ER wrote the first and subsequent drafts and PM with ER wrote the final draft of the paper. All authors contributed to the writing process by commenting on and editing drafts.

  • Funding This paper presents independent research funded by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research programme (Reference Number RP-PG-0108-10020). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

  • Competing interests None.

  • Ethics approval The study was approved by Oxford A Ethics Committee (REC:09/H0604/39).

  • Provenance and peer review Not commissioned; externally peer reviewed.