Background Inadequate checking of safety-critical issues can compromise care quality in general practice (GP) work settings. Adopting a systematic, methodical approach may improve the standardisation of processes and the reliability of task performance, strengthening the safety systems concerned. This study aimed to revise, modify and test the content and relevance of a previously validated safety checklist for the current GP context.
Methods A multimethod study was undertaken in Scottish GP involving consensus-building workshops with users and ‘experts’ to revise checklist content, regional testing of the modified checklist and a follow-up usability evaluation survey of users. Quantitative data underwent descriptive statistical analysis, and selected free-text survey comments are presented.
Results A redesigned checklist tool consisting of eight themes (eg, medication safety) and 61 items (eg, out-of-date stock is appropriately disposed of) was agreed by 53 users/experts, with items reclassified as mandatory (n=25), essential (n=24) and advisory (n=12). In total, 42/55 practices tested the tool and submitted checklist data (76.4%). The mean aggregated results demonstrated 92.0% compliance with all 61 checklist items (range: 83.0%–98.0%). A total of 25/42 GP managers responded to the survey (59.5%) and reported high mean levels of agreement on the usefulness of the checklist (77.0%), ease of use (89.0%), learnability (94.0%) and satisfaction (78.4%).
Conclusions The checklist was comprehensively redesigned as a practical safety monitoring and improvement tool for potential implementation in Scottish GP. Testing and evaluation demonstrated high levels of checklist content compliance and strong usability feedback, but some variation was evident indicating room for improvement in current safety-critical checking processes. The checklist should be of interest in similar GP settings internationally and to other areas of primary care practice.
- audit and feedback
- general practice
- human factors
- safety management
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
In the past decade, the quality of general practice (GP) has come under increased scrutiny in the UK and internationally.1 A recent evidence review suggests that around 1%–2% of clinical consultations in this setting may involve a patient safety incident.2 However, wider work system problems are also known to contribute to safety incidents and to undermine overall practice performance.3 Inconsistent and unreliable checking processes are known contributory factors in these types of safety incidents (eg, the unsafe management of controlled drugs, inadequate emergency equipment maintenance and medication storage, and variable patient identification checks), which may impact negatively on patients, families, visitors or GP team members.1–11
As part of the evolving patient safety agenda in GP worldwide,12 13 there is growing interest in ‘checklists’ to standardise necessary checking processes and act as cognitive aids to help ensure ‘high reliability’ in safety-related task completion by care teams. The expectation is that this will support workforce safety performance and provide further safeguards against systemic failure and preventable harm to patients and care providers. In response, a preliminary safety checklist prototype for the Scottish GP work system was recently developed and validated in a participatory design process with users such as GP managers, administrators, nurses and doctors.14 This study identified priority safety hazards across the GP work system to inform consensus around checklist content, which was also strongly aligned with published evidence and UK health and safety legislation and policy obligations.
However, while the study demonstrated the potential for a timely, integrated checklist approach to reducing these types of system-wide risks to ‘as low as reasonably practicable’, it was clear that further redesign, user testing and usability evaluation work were necessary to improve the overall utility, and thereby the future success, of this type of safety intervention.15–17 Redesign was needed to align with changing evidence and legislation, and the original study included only limited user testing and usability evaluation. As part of the next development phase, this latest study aimed to achieve the following:
To critically review and update the previously validated safety checklist content to the current GP context, that is, recheck relevance, validity and feasibility with intended users.
To classify checklist items to guide users on the priority importance of routinely checking related safety performance issues.
To identify a policy, legal or evidence-based rationale to support inclusion of each checklist item.
To test the feasibility and usability of the safety checklist in the GP work environment.
The multimethod study18 was undertaken in three phases:
Checklist redesign steering group
A group of ‘expert’ professionals (n=9) was convened to lead the redesign, testing and refinement of the checklist, including development of related guidance and identification of regulatory obligations and supporting empirical or policy evidence. ‘Expertise’ was conferred based on specialist knowledge and experience of frontline practice, patient safety research, human factors principles and methods or previous journal publications of relevance. The group comprised six experienced clinicians and managers with national patient safety improvement, research or educational roles; a national safety programme manager; a project officer for safety research and a safety researcher/human factors specialist.
Prioritisation of checklist content
As a first step, the group developed and agreed the following three-level classification system and descriptors to help guide future users and provide informed feedback after checklist completion:
Mandatory classification—‘where a legal, professional, contractual or regulatory obligation existed for the check to take place’.
Essential classification—‘where a failure to check the item would have the potential for harm to occur to patients, GP team members, or practice visitors or impact negatively on the performance and reputational risk of the practice’.
Advisable classification—‘where periodic checking of the item would be a voluntary demonstration of high-quality safe system practice’.
Critical review of checklist content
The group met for a 3-hour interactive workshop in July 2017 to review the original prototype checklist (consisting of 6 safety domains, 22 subcategories and 78 related items). NH chaired this process, asking the group to carefully review and discuss the checklist on an item-by-item basis. After each item was reviewed and discussed, the group agreed its priority category once all options had been considered; where there was disagreement, more in-depth discussion and debate took place until consensus was reached. In the following 2-week period, the updated checklist with the priority classifications was emailed to group members for further reflection and comment. No further alterations were subsequently made.
Initial user validation and testing of updated checklist
In September 2017, the updated checklist, item priorities and related guidance were presented to 15 GP managers from three regional health boards—Ayrshire and Arran (n=13), Greater Glasgow and Clyde (n=1) and Tayside (n=1)—at a further half-day workshop for volunteers interested in receiving instruction in how to test it in a ‘real-life’ setting. The practice managers reviewed the content during a 1-hour small group work session. Feedback was collated and discussed with the main checklist development group. In November 2017, the revised checklist was tested by each practice. The following month, representatives from participating practices met with the project steering group for a final 3-hour workshop to reflect on the user testing phase, discuss the aggregated results of the study and provide further feedback on what went well and any challenges identified.
Sense checking of final checklist
In May 2018, the pilot study findings were presented as part of a 1-hour workshop with 29 practice managers at the annual NHS Education for Scotland (NES) conference. A final review of the checklist content (now renamed ‘MoRISS’) was also undertaken, with no alterations suggested, nor were any issues raised with the underlying principle of the checklist as a potential ‘solution’ to the safety problems identified in GP. The final part of the session involved discussion on the practicalities of checklist implementation and the related leadership and improvement role of practice managers.
Implementation and testing
As part of the 2018/19 enhanced service element of the Scottish general medical services contract,19 local implementation of the MoRISS checking tool was financially incentivised on a voluntary basis among all 55 general practices in Ayrshire and Arran Health Board (AAHB). The checklist data, including reported system improvements, were uploaded to a spreadsheet by each participating practice and emailed to AAHB to facilitate financial payment. These data were then passed on to NES for analysis.
A short online questionnaire survey (online supplemental appendix 1) of checklist usability with the practice managers of participating practices was conducted by NES 1 week after phase 2 was completed.
Study data collection and analysis
Quantitative checklist data were analysed using simple descriptive statistics (eg, frequency counts, means, ranges). Selected free-text responses from the online questionnaire survey are presented to illustrate the reported survey findings.
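The descriptive analysis described above can be sketched as follows. This is a minimal illustration only, using hypothetical yes/no data per practice; the function name, data layout and values are assumptions for demonstration and do not reflect the study dataset.

```python
def compliance_summary(results):
    """Return per-practice compliance (%) plus the aggregated mean and range.

    `results` maps a practice identifier to a list of booleans, one per
    checklist item, where True means the item was found compliant.
    """
    per_practice = {
        practice: 100.0 * sum(checks) / len(checks)
        for practice, checks in results.items()
    }
    scores = list(per_practice.values())
    summary = {
        "mean": sum(scores) / len(scores),  # aggregated mean compliance
        "min": min(scores),                 # lower end of the range
        "max": max(scores),                 # upper end of the range
    }
    return per_practice, summary


if __name__ == "__main__":
    # Hypothetical results for three practices against 61 checklist items.
    demo = {
        "practice_a": [True] * 58 + [False] * 3,
        "practice_b": [True] * 61,
        "practice_c": [True] * 55 + [False] * 6,
    }
    per_practice, summary = compliance_summary(demo)
    print({k: round(v, 1) for k, v in per_practice.items()})
    print({k: round(v, 1) for k, v in summary.items()})
```

The same calculation can be grouped by item classification (mandatory, essential, advisory) or by safety domain simply by slicing the boolean lists before summarising.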
Patient and public involvement statement
Patient and public involvement was not actively considered in this phase of the checklist development, but the process of involving this group in the codesign of relevant aspects of safety-critical checking is acknowledged as a necessary next step.
Phase 1—redesign, validation and rationale
The revised checklist retained the original six safety domains but was reduced from 78 to 61 related priority items to be systematically checked by practice teams (table 1). A total of 16 items were therefore deleted from the original checklist, 2 were merged into 1 and 4 were slightly reworded for precision and clarity. A total of 25 checklist items (40.3%) were reordered and prioritised as ‘mandatory’ in line with the new grouped classification system; a further 25 were categorised as ‘essential’ (40.3%) and the final 12 were classified as ‘advisable’ (19.3%). A fuller description of the new categories, with related checklist items and the policy, regulatory, legal or ‘good practice’ rationale for inclusion, is provided in online supplemental appendix 2.
A preliminary conceptual model illustrating the design process, purpose and anticipated benefits and outcomes of MoRISS implementation for national stakeholders at all healthcare system levels in Scottish GP was also agreed and developed by the ‘expert’ steering group (figure 1).
Phase 2—outcomes of MoRISS implementation and testing
Mean global results by grouped items and examples of non-compliances
A total of 42 practices participated in testing the MoRISS checklist (76.4%). The mean global results for all participating practices demonstrated 92.0% compliance with all 61 checklist items (range: 83.0%–98.0%; table 2). The aggregated mean result for compliance with the 25 ‘mandatory’ checklist items was 96.0% (range: 88.0%–100.0%). Examples of identified non-compliance included public and employer’s liability insurance not being up-to-date and displayed; sharps containers being unavailable or within reach of children; and controlled drugs not being securely stored.
The aggregated mean result for the 25 ‘essential’ checklist items was 90.0% (range: 80.0%–100.0%). An example of an issue reported at below 85.0% compliance included: ‘the location of emergency equipment is adequately signposted throughout the premises (eg, prominent notice in each room)’. The aggregated mean result for the 12 ‘advisory’ checklist items was 89.0% (range: 58.3%–100.0%). An example of an issue reported at below 90.0% compliance included: ‘(vaccinations) your usual supplies are available in sufficient quantities’.
Aggregated mean results by safety domains
A breakdown of the aggregated mean results by each of the six safety domains of the checklist is outlined in the online supplemental appendix (table 6). The safety domain ‘information systems’ was the area of practice with the lowest overall compliance (86.6%; range: 40.0%–100.0%), while the safety domain ‘practice team issues’ was reported as the highest (97.7%; range: 93.3%–100.0%).
Phase 3—MoRISS usability evaluation
Professional and practice characteristics of online survey participants
A total of 24 GP managers responded to the post-implementation online survey on the MoRISS checklist (43.6%). Details of the professional and practice characteristics of participants are outlined in table 3. A majority of GP managers reported more than 5 years of experience in their role (58.3%) and were based in training practices (62.5%), while 50% of practices had patient list sizes greater than 6000.
Usability of MoRISS checklist
The great majority of respondents reported high mean levels of agreement with survey statements (table 4) related to the usefulness of the checklist (77.0%), its ease of use (89.0%) and learnability (94.0%) and their overall satisfaction with MoRISS (78.4%). Representative comments on usability issues from users included:
‘Methodical and concise approach that identified areas needing improved, it was easy to follow and use…’.
‘As a relatively new manager I found it a particularly useful tool which also allowed me to engage with staff in safety issues within the practice…ideal starting point for new managers but also something to learn from those who are more experienced’.
‘The system allows a combined check of various aspects within one document. The document is easy to follow and to complete’.
Perceived impacts of MoRISS implementation
In terms of the reported impacts of MoRISS implementation, respondents indicated high levels of agreement (table 5) with a range of related statements: for example, that the checking of safety-related issues was now more of a practice priority (83.3%); that the checklist identified worrying issues that caused, or had the potential to cause, a risk to patients and the practice (67.0%); and that this type of checklist monitoring system should be in routine use in GP (88.0%). Representative comments from users on the perceived impacts of MoRISS implementation included:
‘Helped focus on important issues some of which were “not on radar’’’.
‘Great idea and I would roll it out at GP training level, and any other GP training events as appropriate’.
‘Flexibility to share sections with other team members in sharing responsibility…’.
‘Gave me reassurance that things were done right or up-to-date…’.
‘Carrying out this process actually engaged full team in safety in the practice…’.
Examples of reported improvement actions by safety domain
A range of system-wide improvement actions were reported by all participating practices. Selected examples across each of the safety domains included:
Medication safety: Monthly stock takes—new practice nurse to initiate, review date in 1 month; increase signposting of location of emergency equipment in reception and clinical corridor; include refrigerator alarm and battery in monthly checks.
Health and safety: Alarm evacuation test to be repeated and included as part of automated prompts as a reminder; lacking a designated first aider, now identified and to be trained; fleece jackets to be ordered for staff working in cold areas; inadequate hot water for nurses, plumber working on repair; public and employer’s liability insurance revealed out of date certificate, now rectified.
Patient access and identification: updated information for patients on practice leaflet, website, posters and recorded message; staff training organised to reinforce importance of two forms of patient identification.
Information systems: develop a business continuity plan, and schedule an automated 3-month review; information technology (IT) data back-up undertaken and a regular schedule now in place.
Housekeeping safety: all clinical staff to be trained in the correct assembly and destruction of sharps boxes and task now added to ‘new start’ induction process.
Practice team: annual automated IT system prompts to update Cardiopulmonary Resuscitation (CPR) training for all clinicians; reflective discussions of practice safety culture to be added to programme for team meetings.
The study met its objectives in successfully engaging end-users to further redesign and validate a codesigned priority safety checklist process for the Scottish GP working environment, which was then implemented, tested and evaluated across a single health board region, with a usability survey also undertaken to inform further design improvements. Additionally, included checklist items were further categorised to guide users on their agreed priority importance, while a policy, regulatory, good practice or legal rationale was identified and aligned with each item to provide a supporting evidence base for inclusion.
The revised checklist represents a comprehensive attempt to identify and prioritise safety-critical issues across the GP work system, which require to be routinely checked to minimise associated risks to patients, visitors, care team members and the practice organisation. In this context, ‘risk’ is judged to be ‘as low as reasonably practicable’ (ALARP)20—a risk management concept that is well established in safety-critical sectors but less so in healthcare settings in terms of how we understand the nature of risk and safety. In terms of ensuring the identified MoRISS checklist risks are adequately understood and managed, the ALARP principle involves justifiably demonstrating that the time, cost and energy involved in reducing each risk further by each practice would be ‘grossly disproportionate’ to the patient and staff safety benefits gained.
The agreed checklist content reflects known and previously reported organisational and clinical risks and safety issues that have impacted negatively, or have the potential to do so, on medication safety,21 housekeeping,3–5 information technology systems,22 practice team issues,3 5–8 patient access and identification3 4 and health and safety legal matters.3 6 7 While the overall mean ‘compliance’ scores for all participating practices across the aggregated mandatory, essential and advisory checklist items were above 90%, variation in each category existed at the practice level that required local actions for improvement—with self-reported evidence of a range of improvements being implemented as a result of checklist use. However, given the very small number of practices that were ‘fully compliant’ against all 61 safety-critical checklist items in this ‘one-off’ study, periodic application of MoRISS is likely to be necessary to ensure levels of ‘high reliability’ in checking-related tasks. This mirrors the maxim that improving patient safety and staff well-being is a continuous journey23 and not a one-off activity, particularly in a clinical work setting characterised by its complexity and by uncertainty in managing patients with multimorbidity and associated polypharmacy.2 12 13
The implementation of any checklist in healthcare is highly people dependent, and ‘success’ in terms of sustainability and impact is therefore largely predicated on its perceived utility and usefulness to users.15–17 Overall, the great majority of participants agreed or strongly agreed that MoRISS was ‘useful’, easy to apply and learn, and ‘satisfactory’ in terms of it being ‘…a better checking system than our current approach…’. The actual or potential impact of checklist implementation was also evident, with almost all participants reporting that it led to local improvements or reassured them of existing good practices. For most, the findings were also shared and discussed with the care team, which augurs well as a further important source of local feedback and learning on patient safety-critical and staff well-being issues.
Implications for policy and practice
The aforementioned conceptual model of the MoRISS intervention illustrates the broad implications for policy and practice associated with implementation (figure 1). In the Scottish GP policy context, the checklist tool provides a further intervention for potential inclusion to support the goals of the Scottish Patient Safety Programme in Primary Care24 and the quality improvement (QI) activity of GP Quality Clusters in Scotland.25 As the health board participants in this study demonstrated, implementation of MoRISS as part of a regionally enhanced contractual service is possible and potentially paves the way for similar arrangements in other regions. In terms of implementation feasibility, the original study14 reported that participants ‘…unanimously agreed that the checklist should be consistently applied at least three times per calendar year (that is, once every 4 months) in order to ensure necessary checking of identified safety issues within acceptable timescales’. Informal feedback from users during the redesign sessions in this study suggested that this was still the case, with some suggesting that a minimum of twice per calendar year might also be sufficient. Additionally, the checklist is being used to introduce patient safety and staff well-being issues as part of the national practice managers’ vocational training scheme and as evidence of continuing professional development (CPD) and QI activity by qualified GP managers.14 In the original checklist study, it was strongly suggested that checklist ‘ownership’ could be a patient safety and staff well-being leadership responsibility of practice managers (potentially linked to their annual appraisal), and this appears to be borne out by their contributions at all stages of the checklist development and testing.
Outside Scotland, MoRISS aligns well with the dimensions of the safety and measurement framework set out in UK Health Foundation guidance23: namely, in terms of ‘improving reliability of work processes’ and ‘building capacity to monitor aspects of safety on a daily basis’. In the English healthcare context, the prototype checklist aligns well with the Royal College of General Practitioners’ Patient Safety Toolkit,26 while the Medical Protection Society has designed a computerised ‘audit and feedback’ system based on the checklist content for its UK and Ireland GP membership (Personal Communication, Julie Price, MPS). The tool may also help support practices in meeting some Care Quality Commission obligations.27
However, we are under no illusions about the difficult issues involved in the effective implementation and sustainability of checklist use in healthcare practice—particularly given that a checklist as a tool is described as a ‘technical thing’ for what is a ‘sociocultural problem’.15–17 In this respect, new approaches are more likely to be normalised if they are integrated within existing contexts and adapted for the specific requirements of different healthcare settings.28 In considering the international appeal of MoRISS, if we take the Australian GP setting as an example, there are at least three important and relevant contexts that should be considered in relation to MoRISS and other checklist systems: practice accreditation29; existing QI initiatives and incentives30; and education and training, including CPD.31 32
While integrating MoRISS within these contexts may help facilitate its adoption, there is also a potential quid pro quo: MoRISS may help make accreditation, QI and CPD more feasible for GP teams and clinicians. As a first step in potentially supporting accreditation, it will almost certainly be necessary to adapt MoRISS to local contexts. This does not necessarily need to be a lengthy or resource-intensive process in every setting. The MoRISS domains and items are comparable to Australian Standards and Criteria, as illustrated by the following two examples. The first is MoRISS item 39, ‘all clinicians are registered with regulators’, which is reflected in Criterion GP3.1 A: ‘members of our clinical team have current national registration where applicable and have accreditation/certification with their relevant professional association’. The second is MoRISS item 11, ‘cold chain temperature recording at least once daily’, which is comparable to Criterion GP6.1: ‘Your practice must monitor and record the minimum and maximum temperatures of refrigerators in which any vaccine is stored at least twice a day on each day the practice is open…’. The MoRISS tool could be applied to demonstrate compliance and related improvements.
Existing QI initiatives and CPD expectations provide other context-specific examples. Since August 2019, accredited GPs in Australia have been financially incentivised to undertake continuous QI activities. Practice teams are encouraged to identify their own priority areas, set their own targets and design their own interventions. Checklists like MoRISS are one potential method that practices may consider selecting and applying to achieve their QI goals. The other context is CPD, training and education. GPs in Australia are encouraged to submit QI projects as CPD-accredited activities during the triennium, with each eligible project worth 40 points. The use of MoRISS as a tool by GPs to assess relevant safety practices and drive local improvements as part of CPD obligations is also a possibility.
Study strengths and limitations
Key strengths include the multimethod, participatory design approach adopted in redeveloping the checklist with multiple users. We also cited a clear rationale for the inclusion of each checklist item as supporting evidence. In user testing research, a sample of around 15 participants is recommended,33 which was exceeded in this study. Further strengths are that we tested the checklist widely in a ‘real-life’ setting and collected further feedback for improvement from the intended users. In terms of limitations, a more in-depth qualitative approach to observing and interviewing users on the utility of the checklist and any barriers to implementation would have been more rigorous, but this was not feasible due to a lack of resources. A larger study testing the checklist with users (but also with design inputs from patients) in multiple health boards would have provided further insights into variations in checklist results at scale, as well as more diverse feedback on implementation issues. Studying checklist usage over 12 months, rather than monitoring a one-off application by users, would also have been more beneficial. A further limitation is that participating practices were financially incentivised to take part in the study, which in itself attracts various biases; however, voluntary feedback was also received from users in different educational settings over the course of the study that supported our main findings. The thorny issue of how the checklist can be implemented in routine practice, and by what mechanism (either regionally or nationally), remains unresolved for now.
Future research, improvement and evaluation
The evidence demonstrates the need for in-depth consideration of the complex sociocultural issues (eg, external imposition, oversimplifying a complex environment) and the lack of attention to human-centred design (eg, usability issues, lack of frontline involvement in the development process) that can act as barriers to checklist implementation, use and success in healthcare.15–17 The mixed evidence of engagement with, and success of, the WHO Safer Surgery Checklist illustrates the problems associated with these types of interventions in complex healthcare systems.15 It is important that we begin to understand the overall utility of this safety intervention in potential readiness for wider implementation (assuming there is a mechanism to incentivise this regionally or nationally—otherwise implementation by frontline practices will only be achieved on a voluntary basis).
The GP environment is frequently characterised by a reactive approach to tackling quality and safety care problems (eg, significant event analysis).34 However, the proposed checklist is a proactive intervention aimed at identifying system-wide hazards in the workplace (ie, anything that can cause harm) and risks (ie, the likelihood of exposure to a hazard causing harm) and designing and implementing remedial solutions before people are harmed or performance is negatively affected. While there was significant multidisciplinary input into the tool development, it is envisaged that local practice managers will have the key leadership role in the implementation of the checklist and assume responsibility for managing and co-ordinating any related system changes and improvements that arise.
The MoRISS checklist was redesigned by users as a potentially practical safety monitoring and improvement tool for the Scottish GP setting. User testing demonstrated high levels of compliance with checklist items, but variation was evident and self-reported improvements were made. While this safety intervention can be applied immediately on a voluntary basis by practices, a policy mechanism to enable routine implementation nationally may be more useful to facilitate sustainability and provide evidence of safety-related monitoring, learning and improvement at the practice system level. Further research is necessary to provide in-depth evidence of the utility of the tool, any barriers to implementation and to provide further insights into the ‘state of safety’ at the regional and national levels. The checklist can potentially be contextualised for use in family practice settings internationally and other settings such as dentistry and community pharmacy.
The MoRISS checklist tool
A copy of the checklist, evidence-based rationale and supporting spreadsheet can be downloaded here: https://learn.nes.nhs.scot/1032/patient-safety-zone/patient-safety-tools-and-techniques/moriss-checklist
We are very grateful to the participating general practice teams from Ayrshire and Arran, Greater Glasgow and Clyde, and Tayside Health Boards for contributing to the further design and testing of the MoRISS Tool.
Twitter @pbnes, @duncansmcnab
Contributors PB was responsible for the study idea/design and leading the data collection, analysis and reporting. PB, CdW, TC, JM, PY, PW, NH, JG, JF, DM all contributed to data collection and analysis. PB drafted and led the writing of the manuscript to which CdW, TC, JM, PY, PW, NH, JG, JF and DM also contributed to this process as well as critically appraising and revising different iterations. All authors read and approved the final manuscript.
Funding The study was jointly funded by NHS Education for Scotland and Healthcare Improvement Scotland.
Competing interests None declared.
Patient consent for publication Not required.
Ethics approval Under UK ‘Governance Arrangements for Research Ethics Committees,’ ethical research committee review is not required for service evaluation or research which, for example, seeks to elicit the views, experiences and knowledge of healthcare professionals on a given subject area.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon reasonable request. Data are available on request from NHS Education for Scotland.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.