Abstract
Objective We aimed to investigate the perception of the implementation success of reporting and learning systems in German hospitals, the perceived relevance of the implementation outcomes and whether and how these implementation outcomes are monitored. A reporting and learning system is a tool used worldwide for patient safety that identifies and analyses critical events, errors, risks and near misses in healthcare.
Methods A pretested exploratory cross-sectional online survey was conducted with reporting and learning system experts from 51 acute care hospitals. For communicative validation, the results were discussed in person in an expert panel discussion (N=23).
Results Fifty-three per cent (n=27) of the participants (N=51) in the online survey perceived that their reporting and learning system had been comprehensively and successfully implemented. However, no service or patient outcomes were reported that would ultimately capture the concept of implementation success. Most of the participants rated the implementation outcomes acceptability and sustainability as (highly) relevant. In total, 44 measures were provided to monitor implementation outcomes. However, most of the quantitative measures were based on the (relative) number of entered reports, and the qualitative measures were reported in relation to the ‘quality of the report’. In general, the measures were poorly specified.
Conclusion There is an underestimated need to develop validated ‘implementation patient safety indicator(s) (sets)’ to monitor implementation outcomes of reporting and learning systems. We also identified a potential need to facilitate awareness of the concept of implementation success and its relevance for patient safety. Drafts of indicators that could be used as a starting point for the further development of ‘implementation patient safety indicators’ were provided.
- patient safety
- risk management
- incident reporting
- implementation science
- outcome assessment
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Key messages
What is already known on this topic?
To the best of our knowledge, no previous study has investigated the implementation outcome monitoring of reporting and learning systems (RLSs) in hospitals.
What this study adds?
Based on our research, we provide initial drafts of implementation measures that can be used to develop validated, consensus-based qualitative and quantitative ‘implementation patient safety indicators (sets)’ to monitor RLS implementation outcomes.
How this study might affect research, practice or policy?
Considering our results and recent scientific discussions, there might be a need for additional implementation outcomes and for greater awareness of implementation science in practice.
Introduction
A reporting and learning system (RLS) is a tool used worldwide for patient safety that identifies and analyses critical events, errors, risks and near misses in healthcare.1 It aims to facilitate individual and structured organisational learning, for example, in hospitals, to enhance patient safety by preventing errors and harm.2 3 An RLS is also called a (critical) incident reporting system or patient safety learning system.4 An RLS can be based on the following four core phases2: (1) the preparation phase: informing the reporting target group about what should be reported, who should report it and how to report it, also considering the infrastructure for confidentiality and the type of RLS; (2) the reporting phase: collecting and documenting reports, and expert-based analysis of incoming reports, including classification; (3) the analysis phase: risk and system analysis; and (4) the intervention phase: deriving (preventive) interventions, including their dissemination.2 Countries have different requirements for what is reported, who reports it, how reports are made and how data are analysed, communicated and used.2 An RLS can be voluntary or mandatory and is typically required by law.2 Reporting is performed in several settings, usually by healthcare providers.2 Nevertheless, an RLS has some limitations, such as a lack of quantitative data and capture rates.1 Therefore, combining it with other instruments or methods, for example, medical chart reviews, is recommended to improve patient safety most effectively.1
When an RLS is successfully implemented, it has the potential to improve patient safety.3 The relevance of implementation is underlined by Proctor et al5 because poor implementation can affect the effectiveness and, ultimately, the implementation success of an intervention.5 Implementation monitoring relies on specific implementation indicators. Ideally, implementation outcomes are monitored by both qualitative and quantitative indicators.6 Quantitative indicators can have a predictive function regarding the quality of healthcare, measured against a reference range or value.6 Qualitative indicators capture phenomena that are difficult to monitor with numerical values, such as individual perspectives on safety.6 The established implementation outcomes are those defined by Proctor et al5: acceptability, adoption, appropriateness, implementation costs, feasibility, fidelity, penetration and sustainability.5 7 In particular, implementation outcomes serve as key intermediate outcomes in relation to service system or clinical outcomes to assess implementation success.5 ‘Implementation success comprises both innovation and implementation effectiveness in a setting of the daily routines of healthcare services, including the measurement of service, client and implementation outcomes’.6 With this framework in mind, it is important to develop a better understanding of how the implementation of an RLS is perceived and monitored, because this knowledge can help identify and develop implementation indicators (figure 1).
Only a few studies have investigated the implementation of RLSs in hospitals.4 8 These studies predominantly aimed to explore facilitators of or barriers to RLS implementation, the characteristics of various RLSs and contextual factors. Interestingly, a review showed that the implementation of an RLS was reported to be successful without measuring all the relevant types of implementation outcomes.1 To the best of our knowledge, no studies have investigated the implementation outcome monitoring of RLSs in hospitals. Although several implementation measures exist,6 there are no validated measures with which to monitor RLS implementation outcomes in hospitals. Therefore, we aimed to investigate perceptions of the implementation success of RLSs in German hospitals, the perceived relevance of implementation outcomes and whether and how these implementation outcomes are monitored.
Methods
Study design
An exploratory cross-sectional closed online survey was conducted in accordance with the scientific quality standards of online research.9 Because of the predominantly exploratory design of the study, saturation of the qualitative data from the open questions was a goal. Following Tran et al,10 we calculated a minimum sample size of N=50 for data saturation in mixed qualitative and quantitative surveys. As no new main categories could be developed, we assumed that data saturation was reached.
To recruit the hospitals, we used email addresses obtained from the associated partner of the executive department of risk management at the Medical University Hannover. Criterion-based convenience sampling was conducted to recruit individuals who were directly responsible for RLSs in German acute care hospitals. The participants had to be quality and/or risk managers or quality assurance and RLS representatives. Fulfilment of these criteria was verified by JS using participant contact data.
Questionnaire development
The development of the structured, password-protected online survey questionnaire (see Online Questionnaire) was predominantly based on the conceptual framework of Proctor et al5 and the systematic translation and cross-validation of its defined implementation outcomes.11 The requirements for questionnaire development were considered, for example, questionnaire construction, scale levels, question types and question wording.
Before an expanded two-step pretest was conducted, an expert-based evaluation was performed considering the face and content validity, consistency and wording of the questions, the difficulty level, the filter construction and the time required to complete the questionnaire. The participants (n=6) had expertise in research methods, implementation and nursing science as well as clinical risk management. The first step of the pretest included a cognitive pretest (n=7) and a standard pretest (n=8) with the target group. After the questionnaire was revised, a final standard and technical pretest of the electronic questionnaire (n=4) was conducted. The final questionnaire (see Online Questionnaire) was subdivided into three parts and contained a total of 29 questions (n=15 open questions and n=14 closed questions). Part (A) contained closed questions regarding RLS characteristics and open questions regarding implementation monitoring (‘How is the implementation of the RLS monitored?’) and the perception of implementation success (‘Based on which criteria do you recognise that a local RLS is implemented successfully?’). In part (B), the respondents were asked to assess the relevance of the implementation outcomes on a 4-point Likert scale (‘highly relevant’, ‘relevant’, ‘slightly relevant’ and ‘not relevant’). Additionally, the respondents were asked how they would capture each of the implementation outcomes following Proctor et al.5 Part (C) contained questions on the characteristics of the participating hospitals as well as an open question for further remarks.
Data collection and analysis
The voluntary online survey was conducted between 20 July 2018 and 12 October 2018. Three reminders were sent: in mid-August, in mid-September and at the end of September 2018. First contact with potential participants was made via email or telephone. An individual password was sent via email after a participant provided informed consent for the pretest and/or the final survey. Data were password protected, with limited access, on the institutional server. Adaptive questioning was applied, and one or two items were displayed per screen (15 screens). The participants were able to review and revise their answers before final submission. A summary of the results could be downloaded after completing the review.
Following scientific standards for qualitative research in terms of the communicative validation of the results, a face-to-face panel discussion was conducted at an expert network meeting of quality and risk managers (N=23) on 2 April 2019 in one of the participating hospitals. The panel discussion was moderated by SK, and field notes were taken.
All quantitative data were checked for missing answers and plausibility. Missing data were not replaced.
A comprehensive open (inductive) and structured (deductive) content analysis of the qualitative data from the open survey questions was conducted.12 The structured analysis was guided by Proctor et al’s implementation outcomes and was performed independently by two authors (JS and SK). The subcategories were developed through open content analysis. The categories and the coding tree were checked against the raw data and were discussed and revised until a final coding tree was agreed on. The coding rules were documented. Quantitative data, for example, the characteristics of the hospitals and implementation content data, were analysed separately and interpreted at the end of the analysis.
The measures were additionally analysed in relation to the RLS phases to identify measurement gaps.
Considering the multiple optional answers, the absolute number of participants (N) and absolute and relative frequencies were provided to avoid misinterpretation. Where appropriate, the range, standard deviation and median were calculated. The field notes from the expert meeting were documented and used to interpret the results.
Patient and public involvement
Patients were not involved.
Results
A total of 51 hospitals from 13 of the 16 federal states of Germany participated (table 1). Forty-two per cent (n=21) of the responding hospitals (N=50) had 600 or more planned inpatient beds. Sixty-four per cent (n=38) reported having RLS experience of between 5 and 10 years or of more than 10 years.
The participant response rate was 81%, calculated as the number of people who ultimately participated (n=51) divided by the number of people who provided informed consent (N=63). The questionnaire completion rate was 100%: all 51 people who started the questionnaire submitted it. However, not every question was answered by all of the participants. The questions regarding ‘feasibility’ and ‘fidelity’ were skipped most often, and the results for the ‘sustainability’ question were unclear.
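Expressed as simple ratios (using only the figures reported above), the two rates are:

$$\text{response rate} = \frac{51}{63} \approx 0.81 = 81\%, \qquad \text{completion rate} = \frac{51}{51} = 100\%.$$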
All of the participants (N=51) reported that their hospital used an RLS. Sixty-one per cent (n=31) of the responding hospitals (N=51) had both local and hospital-wide RLSs, and 39% (n=20) had a local RLS. The decision to implement an RLS in the responding hospitals (n=49) was made between 2002 and 2017.
In 94% (n=48) of the hospitals (N=51), all of the staff members were allowed to send reports to the RLS. In one of the hospitals, education staff were not allowed to send reports. Patient complaints were included in the RLS of one hospital if they complied with the RLS reporting requirements, and at another hospital, patients and relatives were allowed to report. The responses of these two hospitals did not vary notably from the others.
Perception of implementation success and implementation outcome relevance
Regarding whether an RLS was ‘completely’, ‘partially’ or ‘not yet’ successfully implemented, 53% (n=27) of the participants (N=51) reported that a local RLS had been completely and successfully implemented, 45% (n=23) reported a system that was partially successfully implemented, and 2% (n=1) reported a system that was not yet successfully implemented. The participants were asked what criteria were used to determine that the reporting system had been implemented successfully. All of the reported criteria were implementation outcomes or measures intended to capture implementation success (table 2); none of them addressed service or patient outcomes. The provided quantitative and qualitative measures (n=16) covered the following implementation outcomes of Proctor et al5: adoption, fidelity, penetration and sustainability. Four other implementation outcomes were referenced: visibility, utilisation, awareness and acceptance. Of these, utilisation was most often covered by the measures (n=37), predominantly via the (relative) reporting rate (table 2).
Regarding the perceived relevance of the implementation outcomes (Proctor et al5), acceptability and sustainability were most often rated as highly relevant or relevant (96% and 98%, respectively). Feasibility, appropriateness, fidelity and implementation costs were more often rated as slightly relevant. In particular, approximately 61% of the participants rated implementation costs as slightly relevant or not relevant to the monitoring of implementation outcomes (figure 2).
Implementation outcome monitoring
Most of the participants, 77% (n=39 of 51), reported that local RLS implementation was monitored, and 24% (n=12 of 51) reported that it was not.
In total, 44 measures were provided to monitor implementation outcomes (tables 2 and 3). However, several of the measures could not be further specified in detail due to a lack of data.
The overall provided measures, covering several implementation outcomes, could be subdivided into quantitative (n=20) and qualitative (n=24) measures. The outcome implementation costs was covered by both quantitative and qualitative measures regarding ‘costs’ and ‘benefits’. ‘Types of costs’ were operationalised as labour costs, fringe costs, realisation costs, analysis costs, insurance charges, software costs and combined ratio, and on-costs in terms of low costs of RLS reporting and high costs of RLS analysis and interventions. ‘Benefits’ were operationalised by (process) efficiency and effectiveness, for example, a decrease in harmful events, benefits from improvement interventions, risk assessment and risk prevention, unidentified problem fields, and benefits for employees and patients without quantification. Interestingly, the participants responded that a cost–benefit calculation was impossible. One participant reported that there was no identifiable benefit of an RLS and that problems were discussed internally. The categories ranged from an estimation or calculation of costs to a lack of interest in or monitoring of costs. Finally, it remained unclear how the implementation costs were calculated.
The 28 measures (table 3) that were provided to monitor RLS implementation outcomes addressed the RLS phases as follows: the preparation phase (n=1), ‘supply and demand of regular training’; the reporting phase (n=10), for example, the (relative) number of entered reports; the analysis phase (n=8), for example, the number of analysed cases; and the intervention phase (n=6), for example, ‘the stage of reporting analysis and deduced interventions’. Of these, four could be assigned to two phases among the reporting, analysis and/or intervention phases. Seven measures could not be assigned to a specific RLS phase.
Quantitative measures to monitor implementation outcomes
Most of the quantitative measures used to monitor implementation outcomes (tables 2 and 3) were operationalised as the (relative) number of entered reports; overall, they predominantly comprised the (relative) number of reports, cases or interventions.
All of the implementation outcomes except implementation costs were addressed by the ‘reporting rate’, the most frequently indicated measure (n=42). It was operationalised by the point in time of the report and by the trend (a decreasing or increasing number of reports).
Additionally, the implementation outcomes utilisation, acceptability, appropriateness and penetration were addressed by the ‘relative reporting rate’, which was related to the number of beds, to the standard of one report/year/hospital bed, and to organisational units or professional groups, for example, the number of nurses, physicians and employees (not further defined).
Three reference values to monitor the increase or decrease in the reporting rate were reported by the participants; the critical reference values were <5 reports/week (≤260 reports/year; hospital with >600 planned beds), <150 reports/2 years (≤75 reports/year; hospital with 300–599 planned beds) and usually a minimum of 1 report/month (≥12 reports/year; hospital with 50–299 beds). The measurement time point was defined as once per year or lacked further definition.
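To make these heterogeneous reference values comparable, the annualised reporting rate could be computed per hospital and checked against the locally defined reference value. The following is a minimal illustrative sketch in Python (the function names and the example figures are hypothetical assumptions; the reference values are those reported above):

```python
def annual_reporting_rate(reports: int, period_years: float) -> float:
    """Annualise a raw report count over an observation period."""
    return reports / period_years


def relative_reporting_rate(reports_per_year: float, planned_beds: int) -> float:
    """Reports per year per planned inpatient bed (participants mentioned a
    standard of one report/year/hospital bed)."""
    return reports_per_year / planned_beds


# Hypothetical example: a hospital with 450 planned beds and 120 reports in 2 years.
rate = annual_reporting_rate(reports=120, period_years=2)    # 60 reports/year
rel_rate = relative_reporting_rate(rate, planned_beds=450)   # ~0.13 reports/year/bed

# Reported critical reference value for hospitals with 300-599 planned beds:
# fewer than 150 reports in 2 years (<75 reports/year).
is_critical = rate < 75
print(f"{rate:.0f} reports/year, {rel_rate:.2f} per bed, critical: {is_critical}")
```

As discussed below, such rates are limited by the reporting system itself and should be interpreted alongside qualitative measures rather than as measures of safety.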
A deeper analysis showed that the quantitative measure ‘reporting rate’ was reported in combination with the qualitative measures ‘quality’ or ‘content’. In this context, we were able to identify some complementary quantitative and qualitative measures that addressed appropriateness. Appropriateness was quantitatively addressed by the ‘number of deduced interventions’ and often qualitatively addressed by the ‘deduction of interventions as a result of analysed reports’, including (transregional) projects, ‘the deduction of recommendations’ and ‘realised recommendations’.
Qualitative measures to monitor implementation outcomes
More than half of the measures were qualitative (tables 2 and 3) and addressed all of the implementation outcomes. Most often, the provided measures were operationalised by the ‘quality of the report’. This measure addressed six of the implementation outcomes: acceptability, adoption, appropriateness, fidelity, penetration and sustainability. Additionally, the acceptability and adoption outcomes were related to several measures with a specific focus on staff, for example, ‘staff participation in reporting analysis meetings’, ‘staff follow-up regarding the RLS’ and ‘the supply of and demand for regular trainings’. Appropriateness was covered by the ‘deduction of interventions as a result of analysed reports’, for example, regional projects. Interestingly, the risk potential of reports appeared as a cluster of several measures covering this outcome: ‘risk evaluation of the reports’ in general or a ‘risk evaluation of the reports before and after a preventive intervention’, as well as the need to analyse ‘repeated events (yes/no)’. Additionally, a ‘criticality of reports (potential risks)’ measure was provided; in this case, it addressed the implementation outcome fidelity. Feasibility was covered by different measures considering the realised phase of the RLS or access, for example, ‘RLS realisation and/or implementation’, ‘stage of reporting analysis and deduced interventions’ or ‘several conditions for RLS access’. Sustainability was addressed by several measures considering trust and routinisation, for example, ‘progression of trust’ and ‘RLS integration in routines and intranet’.
Input from expert panel discussion
The expert panel discussion (N=23) of the study findings showed that some of the participants perceived the implementation monitoring to be helpful.
The ‘quality of the reports’ measure was debated: it could either be used to monitor whether training on reporting according to RLS criteria had been successfully implemented, or it could lead to low RLS acceptability or adoption because of the high requirements placed on reporting. The experts recommended not using this measure and suggested that the RLS analysis team should instead decide whether a report meets the criteria for an RLS before analysis. Another suggestion was to use non-anonymous test reports to check the success of training and thus facilitate better adoption. The participants stated that acceptability was one of the most important implementation outcomes. They added that capturing the ‘time to feedback on the report and/or realised interventions’ was, from their point of view, most important for the effective implementation of an RLS.
Discussion
More than half of the participants reported that they perceived their RLS to have been comprehensively and successfully implemented and that RLS implementation was monitored. This is not surprising because German hospitals have been legally required to provide RLSs for years. With the framework of implementation success5 in mind, we found that implementation outcomes were used to judge implementation success and that acceptability and sustainability were perceived as highly relevant or relevant. However, it remains unclear whether the implementation success framework of Proctor et al is represented in practice in the sense of a holistic view of service, patient and implementation outcomes; we are unsure whether the framework is known in daily practice. Currently, there is limited evidence regarding the application of implementation outcomes in hospital settings.13 Considering the vague answers given, it can be assumed that there might also be a terminology gap between the language of quality improvement (improvement science) and the language of implementation science. There are also different definitions of implementation outcomes.7
Additionally, from a scientific point of view, there are differences, commonalities and synergy opportunities, as well as complementary expertise, regarding the problems, principles, approaches and outcomes of interest between improvement and implementation science.14 Improvement science focuses on quality and safety improvement in healthcare, while implementation science intends to improve the implementation of (evidence-based) innovations in practice.14 The fact that there is little evidence that an RLS can ultimately improve patient safety1 makes it generally difficult to measure implementation success.
Implementation outcome measures
In total, 44 measures were provided to monitor implementation outcomes. However, several measures were poorly specified and did not cover all of the RLS phases appropriately. Most often, the (relative) reporting rate was used, or was considered for use, to monitor implementation outcomes. Only a few reference values were provided, and these values varied widely. The validity of this measure should be discussed, since the number of voluntary reports stems from an unknown sample of potential risks and failures, and reporting is influenced by a variety of factors.15 Thus, the interpretation of the total or relative reporting rate of an RLS is limited by the system itself. Although the reporting rate cannot be used to measure safety,16 it can be interpreted in relation to ‘acceptability’ and ‘adoption’ or reporting behaviour.17
Wu et al18 investigated the implementation of self-reporting systems of adverse events and showed that ‘perceived usefulness’ (acceptability or acceptance) is significantly associated with the reporting rate and that ‘perceived ease of use’ (feasibility) is associated with both ‘behavioural intention’ (adoption) and ‘perceived usefulness’ (acceptability or acceptance).18 Several other factors influence the willingness and motivation to report (adoption), for example, workload and intrinsic motivation.18 19 Reporting behaviour can vary in relation to the perception of factors regarding the organisation of error-reporting procedures.19 Howell et al16 aimed to identify recommendations to enhance the appropriateness of RLSs. Their study showed significantly higher reporting rates in relation to factors of safety culture, such as ‘trusted hospital officials that encourage reporting (…) keep reports confidential (…), and keep staff informed about incidents (…) and feedback on changes made (…)’.16 The reporting rate was lower if there were sanctions for incidents.16
Regarding the outcome penetration, reporting rates can differ significantly between medical specialties or professional groups.16 No significant association was found between the overall reporting rate and the number of full-time nurses per bed, whereas a negative association was found between the overall reporting rate and the number of clinicians per bed.16 Although some studies have shown associations between the reporting rate and selected measures, the main unit of data to be reported remains unclear. Ultimately, a high or low reporting rate remains a biased measure in the context of learning and report quality.17
The ‘quality of the report’ measure was the most often reported qualitative measure in our study, and it was a topic of debate in the feedback group. To improve adoption, it is recommended to simplify RLS access to reduce the need for training.15 Additionally, healthcare providers should not be responsible for report categorisation.15
Based on the survey data and considering the results of a scoping review of implementation indicators,6 initial recommendations for potential RLS measures for implementation outcome monitoring can be offered. Of course, indicators must be developed with high indicator and test quality, and reference values must be defined based on evidence or empirical learning.6 Essentially, to monitor RLSs, a set of indicators that covers all of the RLS phases is needed, for example (see the sketch after this list):
- the preparation phase (adoption): ‘the number of interventions made to enhance RLS access divided by the number of suitable RLS access interventions’;
- the reporting phase (adoption, appropriateness and feasibility): ‘the time (most suitable based on learning) from reporting to feedback from analysis’;
- the analysis phase (acceptability): ‘the number of reports analysed with staff participation divided by the number of reports analysed’;
- the intervention phase (acceptability, adoption, appropriateness and feasibility): ‘the time (most suitable based on learning) from first feedback (from the ward or) to the realisation of interventions’ and ‘the number of minimally realised (implemented) RLS interventions or clusters of interventions divided by the number of suitable and critical reports’.
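To illustrate how the ratio-based drafts might be operationalised for monitoring, the following is a minimal sketch (the data structure, field and function names, and the example figures, are hypothetical assumptions, not validated indicators; the time-based drafts would additionally require timestamped report and feedback data):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RlsMonitoringData:
    """Hypothetical RLS monitoring data for one observation period."""
    access_interventions_made: int      # interventions made to enhance RLS access
    access_interventions_suitable: int  # suitable RLS access interventions identified
    reports_analysed: int
    reports_analysed_with_staff: int    # reports analysed with staff participation
    suitable_critical_reports: int
    interventions_realised: int         # minimally realised (implemented) interventions


def ratio(numerator: int, denominator: int) -> Optional[float]:
    """Return a ratio indicator, or None if the denominator is zero."""
    return numerator / denominator if denominator else None


def draft_indicators(d: RlsMonitoringData) -> dict:
    """Compute the ratio-based draft indicators described above."""
    return {
        "preparation (adoption)": ratio(d.access_interventions_made,
                                        d.access_interventions_suitable),
        "analysis (acceptability)": ratio(d.reports_analysed_with_staff,
                                          d.reports_analysed),
        "intervention (realisation)": ratio(d.interventions_realised,
                                            d.suitable_critical_reports),
    }


# Hypothetical example period
data = RlsMonitoringData(access_interventions_made=3, access_interventions_suitable=4,
                         reports_analysed=40, reports_analysed_with_staff=28,
                         suitable_critical_reports=25, interventions_realised=18)
for name, value in draft_indicators(data).items():
    print(f"{name}: {value:.2f}" if value is not None else f"{name}: n/a")
```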
Regarding the coverage of the RLS phases, our study shows that fewer measures were provided for the preparation, analysis and intervention phases. In the literature, these three phases are a crucial part of an RLS.2 Taking a closer look at existing indicators for implementation outcomes, quality is often used to monitor implementation, for example, in terms of the time needed for performance, the correctness of interventions following a protocol or the number of supervisions.6 Therefore, it can be assumed that quality is important in implementation monitoring. In our study, acceptability was rated as most important, and it is indeed crucial in implementation processes.5 Costs were perceived as less important for implementation; in accordance with previous results, the cost–benefit relationship is difficult to determine. However, cost monitoring is a precondition for conducting RLSs.15
Recent research has focused not only on quality but also on learning from implementation and on learning outcomes to address health service complexity and to enhance process and participative learning.17 20 It is recommended that implementation success be discussed in terms of not only ‘predetermined’ outcomes but also ‘perceived usefulness’ and ‘lessons learnt’.20 This supports the need for additional implementation outcomes and for a focus on implementation success, although it is difficult to prove effects. It has been recommended that complementary quantitative and qualitative indicators be combined to address healthcare complexity.6 Such combined measurement gives a broader picture of the findings, with predefined endpoints enriched, for example, by individual learning experiences. A combined measurement allows benchmarking and can also reveal unknown factors of complex settings and contexts at different levels.
Additionally, indicators of anticipated and actual implementation success considering the different implementation phases can be proposed.7
Limitations
Scientific standards were rigorously followed. Although data saturation was confirmed, in line with the pretest results, the representativeness of the data should be interpreted carefully because of the exploratory study design. In some cases, the qualitative data were difficult to analyse because some of the provided sentences were unclear or did not address the question posed; however, these sentences were discussed thoroughly and excluded in cases of divergence. Although the questions on the implementation outcomes were thoroughly tested, some respondents had difficulty answering them. This might be related to the fact that the participants were trained in the language of quality improvement rather than that of implementation science. Since the rigorous pretests also addressed understandability, we believe that the differentiated view of implementation monitoring itself could have had an impact.
Conclusions
We found that there might be a need to facilitate awareness of the concept of implementation success and its relevance, including the relevance of well-chosen and validated implementation outcome indicators. Although (relative) reporting rates are often used in practice to cover nearly all implementation outcomes, some of the other measures provided can serve as a starting point for developing validated and consensus-based qualitative and quantitative ‘implementation patient safety indicators (sets)’ to monitor RLS implementation outcomes. Considering these results and recent scientific discussions, additional implementation outcomes could be incorporated to amend the conceptual framework of Proctor et al.5
Data availability statement
Data are available upon reasonable request. All data relevant to the study are included in the article or uploaded as supplementary information. Data are available from the Fliedner Fachhochschule Düsseldorf.
Ethics statements
Patient consent for publication
Ethics approval
This study involves human participants. Ethical approval was granted by the German Society for Nursing Science (8 March 2018; Nr. 18-002). Participants gave informed consent to participate in the study before taking part.
Footnotes
Contributors SK designed the study and drafted and revised the manuscript. SK, TW, SOB, MICF and MR conceptualised the questionnaire. MICF and JS organised the recruitment of the participants. SB, TW, SOB and SK conducted the analyses and interpretation of one of the pretests. JS and SK conducted the analyses. SK, JS and MICF interpreted the survey data. TW, SOB, MR, JS, SB and MICF read all versions of the article and made revisions. SK accepts full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish.
Funding Fliedner Fachhochschule Düsseldorf, University of Applied Sciences, funded this study (funding number: 201701). The university had no role in the design of the study; the collection, analysis and interpretation of the data; or the writing of the manuscript.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.