Abstract
Background Patient safety learning systems play a critical role in supporting safety culture in healthcare organisations. A lack of explicit standards leads to inconsistent implementation across organisations, causing uncertainty about their roles and impact. Organisations can address inconsistent implementation by using a self-assessment tool based on agreed-on best practices. Therefore, we aimed to create a survey instrument to assess an organisation’s approach to learning from safety events.
Methods The foundation for this work was a recent systematic review that defined features associated with the performance of a safety learning system. We organised the features into themes and rephrased them into questions (items). Face validity was checked through independent pre-testing to ensure comprehensibility and parsimony, and through clinical sensibility testing, in which a representative sample of quality leaders at a large teaching hospital (The Ottawa Hospital) answered two questions judging each item for clarity and necessity. If more than 20% of respondents judged a question unclear or unnecessary, we modified or removed it accordingly. Finally, we checked the internal consistency of the questionnaire using Cronbach’s alpha.
Results We initially developed a 47-item questionnaire based on a prior systematic review. Pre-testing resulted in the modification of 15 questions; 2 were removed and 2 were added to ensure comprehensiveness and relevance. Face validity was assessed through yes/no responses, with over 80% of respondents confirming the clarity and 85% the necessity of each question, leading to the retention of all 47 questions. Data collected from the five-point responses (strongly disagree to strongly agree) for each question were used to assess the questionnaire’s internal consistency. Cronbach’s alpha was 0.94, indicating high internal consistency.
Conclusion This self-assessment questionnaire is evidence-based and on preliminary testing is deemed valid, comprehensible and reliable. Future work should assess the range of survey responses in a large sample of respondents from different hospitals.
- Quality improvement
- Patient safety
- Surveys
- Safety management
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
Development and implementation of safety learning systems (SLS) vary from one hospital to another. There is a notable gap in the availability of validated and reliable tools for hospitals to self-assess the implementation of their SLS at various stages. This highlights the urgent need for a high-quality, standardised survey tool specifically designed for SLS evaluation.
WHAT THIS STUDY ADDS
We developed an evidence-based self-assessment questionnaire that on preliminary testing is deemed valid and reliable.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
With this tool, hospitals and health systems can pinpoint areas in their SLS needing enhancement, while researchers can assess the effectiveness of SLS programmes across the board.
Background
Many healthcare systems and accrediting bodies require institutions to implement a safety learning system (SLS)1 2 to track patient safety incidents, the lessons arising from them and the completion of actions to mitigate or prevent similar events in the future.3 An SLS typically includes a digital platform and associated policies and procedures to support reporting, investigating, learning and improving following an unsafe act and/or patient harm. The premise underlying SLSs is that errors and harmful events might recur if predisposing factors are not addressed.4 They also contribute to a culture of safety by creating the expectation that events are followed up and acted on in a fair and transparent manner.5 6
Although most hospitals and accrediting bodies have adopted and promoted SLSs, their value is a subject of debate.7 Indeed, multiple failures in their implementation and use could be detrimental; for example, failure to report events, to investigate and learn from them and, most importantly, to act to prevent their recurrence.8–10 We recently performed a systematic review to identify the features associated with the effective use of SLSs.11 These features were classified according to the stages of the SLS derived from the Patient Safety and Incident Management Toolkit.12 Most of the included literature focused on the reporting and feedback of incidents; event analysis and mechanisms to support learning were less prominent.11 That literature might reflect actual practice, in which organisations spend more time collecting data than developing action plans for improvement.13 Further, we documented variability in the SLS factors evaluated, which likely reflects an underlying inconsistency in how organisations implement their SLS.11 Thus, it seems plausible that the effectiveness of SLSs results from a combination of local factors,10 in addition to generic factors. These generic factors include cultural factors (eg, lack of safety culture), educational factors (eg, lack of training on the SLS and statistical analysis) and governing and regulatory bodies (eg, legislation that supports reporting and protects reporters).11 14 To deepen our understanding of the impact of an SLS, a unified approach is crucial for evaluating the complete range of activities required after a safety event. Currently, there is a notable gap in the availability of validated and reliable tools for hospitals to self-assess the effectiveness of their SLS at various stages.15 This highlights the urgent need to create a high-quality, evidence-based, standardised survey tool specifically designed for SLS evaluation.
This paper outlines the creation of such a tool, which is intended for diverse applications, whether by individual hospitals, entire health systems or researchers. With it, hospitals and health systems can pinpoint areas in their SLS needing enhancement, while researchers can assess the effectiveness of SLS programmes across the board.
Methods
Overview
Our work consisted of three steps (figure 1). First, based on our previous systematic review, we developed questions pertaining to the specific barriers and facilitators of SLS effectiveness. Next, patient safety and methodological experts (AJF, DM, SM, KT) reviewed the questions for their comprehensibility and perceived importance. Third, we tested the questions on a representative sample of staff members responsible for aspects of patient safety at a single teaching hospital in Ottawa, Ontario, Canada (The Ottawa Hospital).
Patient and public involvement
Patient and public involvement was outside the scope of this study.
Step 1: establishing candidate questions and their response variables
We developed the questionnaire using findings from our systematic review, which identified 68 factors associated with SLSs.11 We organised the questions into the same domains and subdomains used in the review, which were derived from the Canadian Patient Safety Institute’s Patient Safety and Incident Management Toolkit.12
We used an iterative approach to develop the questionnaire, carefully refining questions to eliminate redundancy. To that end, we grouped questions that evaluated the same area or process of the SLS and formulated single comprehensive statements to cover them. This process reduced the initial 68 statements to 47 questions for evaluation. We chose a five-point response scale (strongly disagree, disagree, partially agree, agree and strongly agree) to allow accurate SLS assessment and to help participants better understand the questions’ intent.
Step 2: input from methodology experts (pre-testing)
To ensure the clarity and relevance of the questions together with their response variables, we invited three health services researchers who participated in the systematic review to assess them independently. Using an online tool, each reviewer provided feedback on the relevance and clarity of every question. To simplify their input, they indicated whether the question should be included in the final SLS survey tool using the following responses: ‘no’, ‘uncertain’, ‘yes, with modification’ or ‘yes, as is’. The reviewers were asked to ground their responses such that ‘no’ meant no relevance or poor clarity, and ‘yes’ meant high relevance and clarity. If two or more reviewers rated the relevance of a statement ‘no’, the statement was removed from the draft. If a reviewer rated a statement ‘yes, with modification’, they were asked to suggest new wording for consideration by the research team. If two or more rated ‘uncertain’, the reviewers were contacted for clarification and the statement was edited. The research team met to discuss the feedback and make changes accordingly. We also asked the researchers to determine the target group for each question, either quality and safety (Q&S) managers and leaders or frontline workers, to ensure precise and accurate results.
Step 3: questionnaire testing in a representative sample of respondents (clinical sensibility and internal consistency)
This step assessed the necessity, clarity and internal consistency of the questionnaire by seeking input from Q&S managers and leaders at The Ottawa Hospital. We identified all Q&S experts employed at the hospital (nurses, physicians and others) with a role in managing patient safety events and invited them to participate in the study (N=56). All participants were expected to be aware of the Q&S management activities in their hospital and were required to answer the entire questionnaire. We sent all participants a letter detailing our study objectives and emphasised that their participation was voluntary, anonymous and would not affect their employment. Three follow-up emails were sent 2 weeks apart as reminders to answer the questionnaire. Participants were not compensated. The survey (LimeSurvey) was distributed using the hospital’s internal network, which facilitated tracking of completion rates and analysis.
We asked the experts to answer each survey question about the hospital by selecting one response. These data were used to measure the internal consistency of the questionnaire rather than to evaluate the hospital’s SLS. Participants were also asked to answer two clinical sensibility check questions (‘Do you find the question easy to understand (clear)?’ and ‘Do you think this question is important (necessary)?’) by choosing either ‘yes’ or ‘no’ for each question. We developed these two questions (based on prior work by Karen et al16) to assess each question’s perceived utility and comprehensibility. We determined that questions should be modified or removed if more than 20% of respondents selected ‘no’ for ‘clear’ or ‘necessary’, respectively. We set this threshold based on experts’ and researchers’ feedback to strike a balance between addressing potential issues and avoiding unnecessary changes that could affect the validity of the survey results.
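To make the decision rule concrete, the following is a minimal sketch of the 20% flagging logic, with hypothetical item names and tallies (illustrative only, not the study’s actual analysis code):

```python
# Minimal sketch of the >20% 'no' flagging rule described above.
# Item names and tallies are hypothetical, for illustration only.

THRESHOLD = 0.20  # modify/remove an item if >20% of respondents answered 'no'

# item -> counts of 'no' answers for the clarity and necessity checks, plus n respondents
tallies = {
    "item_01": {"clear_no": 3, "necessary_no": 1, "n": 20},
    "item_02": {"clear_no": 5, "necessary_no": 0, "n": 20},
}

for item, t in tallies.items():
    unclear = t["clear_no"] / t["n"]          # proportion rating the item unclear
    unnecessary = t["necessary_no"] / t["n"]  # proportion rating the item unnecessary
    flagged = unclear > THRESHOLD or unnecessary > THRESHOLD
    status = "review for modification/removal" if flagged else "retain"
    print(f"{item}: {status} (unclear={unclear:.0%}, unnecessary={unnecessary:.0%})")
```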
Statistical analysis
We report frequencies and descriptive statistics for all questions in the survey. We assessed internal consistency for the whole questionnaire and at the domain level using the Cronbach’s alpha coefficient, a statistic that measures the internal consistency, or reliability, of a set of survey items. We used it to help determine whether a collection of items consistently measures the same underlying concept.17 It quantifies the level of agreement on a standardised 0–1 scale. Because Cronbach’s alpha underestimates the reliability of domains with only two questions, for those we used the Spearman-Brown coefficient.18 For both statistics, we considered values of 0.7 or greater to indicate reliability.18 All statistical analysis was conducted using Microsoft Excel V.16.
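For reference, the standard formulas for these two statistics are shown below, where $k$ is the number of items, $\sigma_i^2$ the variance of item $i$, $\sigma_t^2$ the variance of respondents’ total scores and $r$ the correlation between the two items of a two-item domain:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_t^2}\right)
\qquad\qquad
\rho_{\mathrm{SB}} = \frac{2r}{1+r}
```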
Results
Step 1
From the 68 factors identified in a previous systematic review,11 we developed a questionnaire consisting of 47 statements. These questions were organised within the domains and subdomains of the Patient Safety and Incident Management Toolkit (figure 2).12
We also thought it important to know respondents’ years of experience and place of work, as these might affect our interpretation of their answers and, subsequently, the development of recommendations. Accordingly, we added ‘Section A: Demographics’ to the questionnaire (table 1).
Step 2
Based on the feedback from the three researchers, all the questionnaire items were deemed relevant. The clarity assessment resulted in the following modifications: 15 questions were edited (4 of which were rated ‘uncertain’ by two researchers and 11 rated ‘yes, with modification’ by at least one); 2 questions were removed; and 2 questions were added. The reviewers also reported that all 47 questions could be answered by Q&S managers and leaders, whereas frontline workers could answer only 30 of them (table 1). At the end of this step, the questionnaire contained 47 statements in addition to four demographic questions that all participants were required to answer.
Step 3
Of the 56 employees with a role in managing patient safety events at The Ottawa Hospital, 20 (36%) provided complete responses, 19 (34%) provided incomplete responses and 17 (30%) did not participate. Only complete responses were included in the analysis. Respondents were doctors (n=11; 55%), nurses (n=3; 15%), engineers (n=2; 10%), pharmacists (n=1; 5%), physiotherapists (n=1; 5%) and quality coordinators (n=2; 10%). Their years of experience in quality and safety ranged from 1 to 25 years, with an average of 4.35 years (SD=4.72).
Regarding clarity and necessity, no question exceeded the threshold of 20% negative responses; thus, all were judged clear and necessary (table 1). Two questions received low scores (<90%) for clarity: ‘I must report the same incident using different approaches (eg, patient files, nursing reporting system, safety department)’ was rated unclear by three participants, and ‘The management has a defined communication approach to share learning externally’ was rated unclear by four participants. The two questions that received the lowest scores for necessity were ‘Management ensures analysis teams are composed of representative expertise’ and ‘The management has a defined communication approach to share learning internally’; both were reported unnecessary by three participants.
To obtain a more accurate measure of the internal consistency and reliability of the questionnaire, and to mitigate the effect of low participation, we used Cronbach’s alpha.19 This statistic depends mainly on the correlations among participants’ responses to the questionnaire items. We measured it across all participants’ responses (strongly disagree to strongly agree) to all the questions, and separately for the responses within each domain.
Based on the participants’ responses to the 47 statements in the questionnaire, Cronbach’s alpha was 0.94. The Cronbach’s alpha coefficients for each domain were as follows: domain A ‘before the incident’, 0.77; domain B ‘immediate response’, 0.77; domain C ‘prepare for analysis’, 0.61; domain E ‘follow through’, 0.92; domain F ‘close the loop’, 0.88. For domain D ‘analysis process’, which has only two questions, we calculated the Spearman-Brown reliability coefficient instead, which was 0.89, indicating excellent agreement.
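As an illustration of how these coefficients can be computed from coded Likert responses (1=strongly disagree to 5=strongly agree), the following is a minimal Python sketch; the study itself used Microsoft Excel, and the toy data here are hypothetical:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert codes (1-5)."""
    k = scores.shape[1]                           # number of items in the (sub)scale
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def spearman_brown(scores: np.ndarray) -> float:
    """Spearman-Brown reliability for a two-item domain (eg, domain D)."""
    r = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]  # inter-item correlation
    return 2 * r / (1 + r)

# Hypothetical toy data: 6 respondents x 4 items, coded 1-5
rng = np.random.default_rng(seed=42)
toy = rng.integers(1, 6, size=(6, 4))
print(f"alpha = {cronbach_alpha(toy):.2f}")
print(f"Spearman-Brown (first two items) = {spearman_brown(toy[:, :2]):.2f}")
```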
Table 1 also shows the percentage of participants who agreed (the total of ‘partially agree’, ‘agree’ and ‘strongly agree’) with each question based on the status of the SLS in their hospital. The questions ‘All staff have access to an electronic system for reporting safety events’ and ‘The management has a defined communication approach to share learning externally’ received the two lowest agreement scores (40% and 50%, respectively).
The percentages of combined ‘disagree’ and ‘strongly disagree’ scores were also calculated at the domain level. Domains C (‘prepare for analysis’) and F (‘close the loop’) received the highest disagreement scores (30.8% and 29.4%, respectively), compared with domains A (‘before the incident’, 20.8%), B (‘immediate response’, 21.2%), D (‘analysis process’, 17.5%) and E (‘follow through’, 12.5%).
In summary, step 3 confirmed the high reliability and validity of the 47 questions and did not result in the removal or modification of any.
Discussion
We systematically developed a survey to assess institutional SLSs. The tool builds on a previous systematic review of SLS outcome factors and was refined with input from experts in patient safety and quality, ensuring comprehension, face validity and internal consistency. The study had three steps. Initially, we developed the survey’s questions and their corresponding response options, culminating in a 47-question draft. In the second step, experienced researchers who were also involved in the systematic review conducted a pre-test, which led to the elimination of two questions, the revision of 15 and the introduction of two new ones. Additionally, to reduce bias, it was determined that the 17 questions focused on SLS management should be answered exclusively by Q&S experts and leaders, while the rest could be addressed by both Q&S professionals and other staff members involved in the study. The final step involved clinical sensibility testing (evaluating the clarity and necessity of the questions) by Q&S experts at The Ottawa Hospital. Although four questions (numbers 15, 30, 44 and 45) received the lowest scores in these areas, they were retained by the research team because their approval ratings exceeded the 80% benchmark set for question inclusion.
Four demographic questions were also added. The first three (‘Institution’, ‘Position title’ and ‘What is your professional designation?’) can help direct improvement efforts to the right hospital department, while the fourth (‘Years of experience’) can help include or exclude participants based on their experience, which might affect their answers. Finally, the questionnaire was tested for internal consistency using the participants’ evaluation of their SLS; a Cronbach’s alpha of 0.94 indicated high reliability.
Although this study did not intend to evaluate the hospital’s SLS, we grouped participants’ responses into either ‘agree’ or ‘disagree’ and present only the percentage agreement in table 1 for simpler reporting. It is also important to note that improving the SLS is a continuous process; decision-makers can therefore work on any area of the system, even those with high agreement percentages. We also wish to highlight that the domains ‘prepare for analysis’ and ‘close the loop’ received the highest totals of ‘disagree’ or ‘strongly disagree’ scores.
This study’s unique strength lies in the rigorous steps taken to develop the survey. To ensure the accuracy of the collected data, we followed the methods of Karen et al16 for face validity checks, based our questions on evidence from a systematic review and piloted the survey at a large, reputable hospital. This pilot allowed us to assess the applicability, clarity and necessity of all items. Finally, we incorporated feedback from both SLS experts and experienced Q&S professionals working within the SLS at The Ottawa Hospital.
Our newly developed questionnaire has many points in common with a 44-item self-assessment tool developed by the WHO in 2020.20 That tool was developed based on feedback from international experts in SLSs. Both questionnaires use similar rating scales and cover many of the same major topics, including safety culture, leadership support, training, analysis and investigation, blame-free culture, availability of resources, feedback and the internal and external sharing of learning. In contrast, our questionnaire was rigorously developed to enhance accuracy and clarity. We carefully formulated the questions to ensure understanding across diverse participant backgrounds. The questions were organised into domains to focus improvement efforts. We divided participants into two groups to obtain tailored, precise results. To ensure practicality and relevance, we incorporated feedback from Q&S leaders working directly within a large accredited hospital’s SLS. Finally, demographic data will inform recommendations, allowing us to target improvements for specific departments or employee groups.
Another study developed and validated an online questionnaire to test recently graduated doctors’ knowledge and experience of patient safety and incident reporting, and to assess related attitudes and behaviours.21 It was based on previously published questionnaires for medical students and nurses, and included 21 questions that partly overlap with ours, covering the principles of patient safety in their hospitals, views on local reporting, training, the reporting stage, a blame-free environment, involvement in incident discussions and a single question on feedback. Our questionnaire has more items to assess feedback, in addition to a detailed evaluation of the management role and the analysis process.21 It was also tested for validity and reliability to ensure highly precise data collection. Being based on a systematic review, our collected data and subsequent recommendations are more generalisable and applicable.
The strengths of this study include the testing of the face validity of the questionnaire. A group of experienced health services researchers who are experts in patient safety reviewed and gave feedback on each item to ensure it was fit for purpose. The questionnaire was evaluated again in the clinical sensibility test by experts in the field. Finally, internal consistency was measured and found to be high. We are therefore highly confident that no irrelevant questions were included and that all critical topics were covered. In addition, our questionnaire was based on a systematic review of 22 primary research studies conducted worldwide, which collected the views of a range of healthcare professionals on the barriers and facilitators of SLSs. This should enhance the generalisability of our results. Being standardised, the questionnaire might also help to compare SLS performance among different hospitals.
All the safety experts, leaders and managers at The Ottawa Hospital were invited to participate in the internal consistency assessment; however, the response rate was low at 36%. Domain C had the lowest Cronbach’s alpha score (0.61), which may be explained by its small number of questions (six), a factor known to affect the accuracy of the test; nonetheless, the result is close to the 0.7 lower reliability limit. All those rating the hospital’s SLS were experts in Q&S who might be concerned about the reputation of their hospital, so the possibility of bias in their responses exists. However, the generalisability of the questionnaire is not expected to be affected, as the questions were based on the results of a systematic review that included studies from all over the world. Furthermore, testing the questionnaire for clarity, necessity and reliability depended mainly on participants’ general knowledge of Q&S. Despite these theoretical considerations, further testing in a large sample of participants from multiple settings is required.
Conclusion
This SLS self-assessment questionnaire is an evidence-based tool that, on preliminary testing, is deemed valid and reliable. The questionnaire created and evaluated in this study provides a reliable means for healthcare providers to self-assess their SLS. Using it within a larger sample of respondents from different institutions will be required to further assess the survey’s psychometric properties and to determine possible differences in SLS practices.
Data availability statement
All data relevant to the study are included in the article or uploaded as supplementary information.
Ethics statements
Patient consent for publication
Ethics approval
The Research Ethics Board at The Ottawa Hospital (OHSN-REB Number: 20210441-01H (2658)) approved the protocol.
Footnotes
Contributors Corresponding author HAM was responsible for the research at all its stages (protocol development, planning, survey development, data collection and analysis, writing of the manuscript, editing and publication). AJF is the research supervisor, responsible for planning, supervision and revision of the final manuscript. All contributing authors participated in the development of the protocol, revision and approval of the analysis, writing of the manuscript and revision of its successive drafts; they also participated in the pre-testing step of survey development.
Funding The corresponding author was responsible for all expenses related to the study; however, the University of Ottawa might provide reimbursement.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.