Prioritising recommendations following analyses of adverse events in healthcare: a systematic review

Purpose The purpose of this systematic review was to identify an appropriate method—a user-friendly and validated method—that prioritises recommendations following analyses of adverse events (AEs) based on objective features. Data sources The electronic databases PubMed/MEDLINE, Embase (Ovid), Cochrane Library, PsycINFO (Ovid) and ERIC (Ovid) were searched. Study selection Studies were considered eligible when reporting on methods to prioritise recommendations. Data extraction Two teams of reviewers performed the data extraction which was defined prior to this phase. Results of data synthesis Eleven methods were identified that are designed to prioritise recommendations. After completing the data extraction, none of the methods met all the predefined criteria. Nine methods were considered user-friendly. One study validated the developed method. Five methods prioritised recommendations based on objective features, not affected by personal opinion or knowledge and expected to be reproducible by different users. Conclusion There are several methods available to prioritise recommendations following analyses of AEs. All these methods can be used to discuss and select recommendations for implementation. None of the methods is a user-friendly and validated method that prioritises recommendations based on objective features. Although there are possibilities to further improve their features, the ‘Typology of safety functions’ by de Dianous and Fiévez, and the ‘Hierarchy of hazard controls’ by McCaughan have the most potential to select high-quality recommendations as they have only a few clearly defined categories in a well-arranged ordinal sequence.


Supplementary file 2. Narrative summary of the methods
Brandrud et al developed the CPO Scale (Change Process and Outcome evaluation instrument) based on a systematic literature search and expert opinion. 11 The scale consists of 20 items, each scored from 1 to 5. The results are combined into 3 success levels (successful, promising and uncertain) based on the mean score of items 16 and 19. The projects within each of the 3 success levels are then ranked according to their individual sum scores. The CPO Scale is validated.
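The level-then-rank logic described above can be sketched as follows. This is an illustrative sketch only: the cut-offs (4.0 and 3.0) separating the three success levels are hypothetical placeholders, as the published instrument defines the actual thresholds.

```python
# Illustrative sketch of the CPO Scale ranking described above.
# The success-level cut-offs (high=4.0, low=3.0) are hypothetical
# placeholders; the published instrument defines the real thresholds.

def success_level(mean_score, high=4.0, low=3.0):
    """Map the mean of items 16 and 19 to a success level."""
    if mean_score >= high:
        return "successful"
    if mean_score >= low:
        return "promising"
    return "uncertain"

def rank_projects(projects):
    """projects: dict name -> list of 20 item scores (1-5).
    Group projects by success level, then rank within each level
    by the individual sum score (highest first)."""
    order = {"successful": 0, "promising": 1, "uncertain": 2}

    def key(item):
        name, scores = item
        level = success_level((scores[15] + scores[18]) / 2)  # items 16 and 19
        return (order[level], -sum(scores))

    return [name for name, _ in sorted(projects.items(), key=key)]
```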
In Coburn et al, after a systematic literature search, interviews and expert opinion, experts rated each intervention using 4 criteria on a 1 to 5 scale. 12 The mean scores are calculated, and interventions with a mean score of 4 or higher on 3 of the 4 criteria are included. Some interventions with lower scores are also included if the experts consider the intervention important.
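The inclusion rule above is a simple threshold count, sketched here under the stated assumptions; the function and variable names are illustrative, and the expert-override flag stands in for the informal "important intervention" exception.

```python
# Sketch of the Coburn et al inclusion rule: include an intervention when
# its mean expert rating is 4 or higher on at least 3 of the 4 criteria.
# expert_override models the exception for interventions the experts
# consider important despite lower scores. Names are illustrative.

def include(criterion_ratings, expert_override=False):
    """criterion_ratings: list of 4 lists, each holding the experts'
    1-5 ratings for one criterion. Returns True when selected."""
    means = [sum(r) / len(r) for r in criterion_ratings]
    criteria_met = sum(m >= 4 for m in means)
    return criteria_met >= 3 or expert_override
```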
The arrangement of de Dianous and Fiévez is based on the bowtie method. 13 They identify four categories; from strongest to weakest barrier, these are avoid, prevent, control, and limit (reduce or mitigate). Each has a different effect on the occurrence of an unwanted event. For example, an unwanted event can be 'falling down when working at height'. An avoid barrier makes the occurrence of the unwanted event impossible: the hazard 'working at height' is eliminated, so no one will be 'working at height' and therefore no one can 'fall down when working at height'. A prevent barrier puts obstacles in place before the unwanted event can occur. The hazard 'working at height' still exists, but a safety belt attached to the person 'working at height' and a fence will prevent the person from falling down. A control barrier will not stop the unwanted event from occurring, but it will lead to a safe situation afterwards. A limit barrier can reduce the consequences of an unwanted event: the person 'falls down when working at height' and lands on an inflatable cushion, reducing the chance of major injury.
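Because the four categories form a clearly defined ordinal sequence, prioritising recommendations against them amounts to a sort by barrier strength. The sketch below assumes each recommendation has already been assigned a category; the data structure is illustrative.

```python
# Sketch: rank recommendations by the strength of the barrier they create,
# using the ordinal sequence of de Dianous and Fiévez (strongest first).
# The (description, category) pair structure is an illustrative assumption.

BARRIER_RANK = {"avoid": 0, "prevent": 1, "control": 2, "limit": 3}

def prioritise(recommendations):
    """recommendations: list of (description, category) pairs.
    Returns them ordered from strongest to weakest barrier."""
    return sorted(recommendations, key=lambda rec: BARRIER_RANK[rec[1]])
```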
Flottorp et al developed a worksheet, based on a systematic literature search and expert opinion, on which every recommendation is scored from 1 to 5 on 4 questions. 14 They suggest that at least 2 people should independently assess the recommendations and discuss the results. They do not assign fixed weights to the criteria, but they do state that if a recommendation scores low, its priority is also likely to be low.
Based on the failure modes, effects and criticality analysis (FMECA) methodology, the severity of the potential effect, the likelihood of occurrence and the likelihood of detection are classified on a scale of 1 to 10 through consensus between team members in the study of  The product of these three numbers is the risk priority number (RPN). The maximum RPN is 1000, and at least one improvement is determined when the RPN is higher than 100. A priority classification is established by taking into consideration the value of the criticality index, the extent of the expected reduction in criticality, and the volume of work and expenditure needed to develop the proposal. Ultimately, the recommendations are prioritised per time limit, from 12 months to over 48 months.
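The RPN calculation described above is a straightforward product of three ratings; a minimal sketch, with the action threshold of 100 taken from the text and the function names chosen for illustration:

```python
# Sketch of the FMECA risk priority number (RPN): the product of severity,
# occurrence and detection ratings, each on a 1-10 scale, so the maximum
# RPN is 10 * 10 * 10 = 1000. An RPN above 100 triggers at least one
# improvement, per the description above. Function names are illustrative.

def rpn(severity, occurrence, detection):
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("each rating must be on a 1-10 scale")
    return severity * occurrence * detection

def needs_improvement(severity, occurrence, detection, threshold=100):
    return rpn(severity, occurrence, detection) > threshold
```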
The experts in the Testik et al study construct a pairwise comparison matrix A with comparison values on a 1 to 9 scale. 21 A normalised comparison matrix N is then obtained, from which the relative weights w are derived. λmax, the largest eigenvalue of matrix A, is used to evaluate the consistency of the pairwise comparisons through the consistency index (CI = (λmax − n) / (n − 1)). CI is compared with the random index (RI) to obtain the consistency ratio (CR = CI / RI). A CI of 0 indicates a perfectly consistent matrix, although slight inconsistencies are tolerated up to a CR of 0.1. The relative weights w corresponding to each comparison are ranked, and the one with the highest weight is identified as the highest priority.
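The weight and consistency computations above follow standard analytic hierarchy process (AHP) mechanics, sketched below. The random index values are the commonly used Saaty table, an assumption here rather than something taken from the study, and the column-normalisation method for the weights is likewise one standard choice.

```python
import numpy as np

# Sketch of the AHP consistency check described above. The random index
# (RI) values are the commonly used Saaty table (an assumption here),
# and the weights come from the column-normalisation method.

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def weights_and_cr(A):
    """A: n x n pairwise comparison matrix (1-9 scale with reciprocals).
    Returns the relative weights w and the consistency ratio CR."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Normalised comparison matrix N; weights w are its row means.
    N = A / A.sum(axis=0)
    w = N.mean(axis=1)
    # Estimate lambda_max, then CI = (lambda_max - n) / (n - 1).
    lam_max = float(np.mean((A @ w) / w))
    ci = (lam_max - n) / (n - 1)
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr
```

A CR at or below 0.1 means the pairwise judgements are acceptably consistent; the criterion with the largest weight in w gets the highest priority.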