Fourteen years of quality improvement education in healthcare: a utilisation-focused evaluation using concept mapping
  1. Frida Smith1,2,
  2. Patrik Alexandersson1,
  3. Bo Bergman1,
  4. Lisa Vaughn3,4,
  5. Andreas Hellström1
  1. Technology Management and Economics, Center for Healthcare Improvement, Chalmers University of Technology, Gothenburg, Sweden
  2. Regional Cancer Centre West, Gothenburg, Sweden
  3. Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, Ohio, USA
  4. Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA

  Correspondence to Dr Frida Smith; frida.smith{at}chalmers.se

Abstract

Background The need for training in quality improvement (QI) for healthcare staff is well acknowledged, but the long-term outcomes of such training are hard to evaluate. Behaviour change, improved organisational performance and results are sought, but these variables are complex, multifactorial and difficult to assess.

Aim The purpose of this article is to explore the personal and organisational outcomes identified by participants over 14 years of university-led QI courses for healthcare professionals.

Method Inspired by the Kirkpatrick model for evaluation, we used concept mapping, a structured mixed method that captures and visualises rich data by involving stakeholders throughout the process. In total, 331 previous course participants were included in the study by being asked to respond to two prompts, and 19 stakeholders took part in the analysis process by sorting the resulting statements.

Result Two maps, one for personal outcomes and one for organisational outcomes, show clusters of the responses from previous course participants and how the outcomes relate to each other in meta-clusters. Both maps show possible long-term outcomes described by the previous course participants.

Conclusion The results of this study indicate that training in quality improvement with a strong experiential pedagogical approach may foster long-term improvement capability in the course participants and, even more importantly, long-term improvement capability (and increased improvement skills) in their respective organisations.

  • quality improvement
  • continuing professional development
  • continuous quality improvement
  • healthcare quality improvement

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Introduction

The challenges facing healthcare and the need for continual improvement of care processes necessitate a transformation of healthcare systems to allow for the co-existence of what Batalden and Stoltz1 describe as the domains of ‘professional knowledge’ and ‘improvement knowledge’.2 Thus, the traditional view of medical knowledge generation, dominated by randomised controlled trials, must co-exist with a more pragmatic epistemological position.3–5 Such a viewpoint embraces the importance of continual improvement and learning for the development of actionable knowledge in the local context, and the recognition that everyone in healthcare has two tasks in their job: doing it and improving it.6

Even when it is accepted that improvement knowledge is necessary to transform the healthcare system, translating this knowledge into practice, or ‘doing’, poses a challenge,7 8 and is a clear expression of the ‘knowing-doing gap’,9 known in the specific context of healthcare as the ‘quality chasm’.10 Consequently, the role of education and training of healthcare staff and students in bridging this gap has been widely discussed and studied.11–13 The need for improvement knowledge in healthcare has led to increased interest in the content of suitable quality improvement (QI) curricula,14 the effectiveness of educational design and how to establish links between QI knowledge and skills and longer-term improvements in organisational performance.15 Several systematic reviews of the effectiveness of QI education (see refs 16–18) have used the well-known Kirkpatrick model for evaluation of educational interventions.19 The original model differentiates educational outcomes on four levels: (1) reaction, how learners react to the education, for example, satisfaction and perceived usefulness; (2) learning, what the learners actually learnt; (3) behaviour, whether learners have incorporated the new skills and use them in their daily practice; (4) results, new and/or improved organisational practice and performance due to changed behaviour. A fifth level, addressing organisational and societal outcomes, has been added in the Kirkpatrick plus model.20

In any training, behaviour change and improved organisational performance are sought, but these levels are also difficult to assess. The fourth level, results, is especially challenging since organisational performance is multifactorial and depends on more than the individual skills of a course participant.15 The time that may pass between training and identifiable impact in an organisation is another complicating factor.

The purpose of this article is to explore the personal and organisational outcomes identified by participants over 14 years of university-led QI courses for healthcare professionals. We use self-reported evaluations collected after the courses ended, sometimes many years later. Because of the complexity of evaluating the impact of education over time, we captured self-reported personal and organisational outcomes with open-ended prompts. In this way, we obtained data from respondents who are inside the complex organisational systems and who have the freedom to report the kinds of impact they can identify as insiders. By asking course learners from different years (the earliest finished in 2006), we were also able to identify perceived long-term impact several years after the education.

Study setting

The region Västra Götaland in Sweden employs around 45 000 people within the healthcare sector, making it one of the largest public organisations in the country. Swedish healthcare is considered to provide high-quality care in many clinical dimensions but could be more focused on the integration and understanding of non-clinical patient needs.21 One strategy for the region Västra Götaland to address these non-clinical needs was to promote QI as a knowledge field complementary to traditional medical and nursing knowledge.1

The development of QI education within the region started in the late 1990s with newly established positions focusing on QI in hospitals and regional governance. However, these professionals came from many different backgrounds, and there was an identified lack of a shared language and view on how to organise and manage QI. Thus, a collaboration with researchers in QI at Chalmers University of Technology began in 2004 with a 2-year (corresponding to 20 full-time study weeks) academic advanced training programme in QI for healthcare professionals (eg, quality managers, quality improvers, middle managers with an explicit interest in QI). In addition to this 2-year course, multiple shorter courses have been created for specific target groups (eg, resident physicians, contract nurses in cancer care, top management). In total, 331 healthcare professionals have undertaken education activities within these QI courses over 14 years (see table 1). All course evaluations conducted directly after training have shown very high satisfaction, and the courses have been perceived as useful by course participants and their managers (cf. levels 1 and 2 in the Kirkpatrick model). In fact, 100% of the course participants stated that they would recommend the course they had taken to a colleague. However, the personal and organisational long-term impacts have not previously been evaluated.

Table 1

Courses involved in the study setting

From the beginning, the QI courses were designed to combine theory with practice, striving for application in and impact on the learners’ organisations. A perspective of experiential learning22 was embedded in the course activities, including a QI project of the participants’ own choosing, carried out in their own organisation and perceived as important to the organisation and its customers (often patients). All courses have had 23–28 participants, with classes 1–2 days/month. Every session has combined didactic lectures on specific subjects with small-group discussions, short exercises applied to the participants’ own organisations and individual/joint reflections. Overall, each course contained the same content, as described in table 2, but in varying depth depending on the length of the course.

Table 2

Teaching methods and educational content

Methods

Patient and public involvement

No patients were involved in this study, but the methodology involves stakeholders in several of the phases as described below.

Concept mapping methodology

Since the education model of the courses in improvement knowledge was inspired by experiential learning, a co-design approach to evaluation was employed. Evaluating educational programmes can be done in various ways: as an immediate response to the content and perceived quality of the course, or in terms of the impact on the person attending the course as well as on the organisation he or she acts in. Utilisation-focused evaluation23 emphasises intended use by intended users, claiming that for an evaluation to be of interest to the people or organisation it is produced for, the stakeholders need to understand and feel ownership throughout the evaluation process. Therefore, involving the intended users can establish ‘direction for, commitment to, and ownership of the evaluation every step along the way’.23

The mixed methods approach of concept mapping (CM) combines qualitative and quantitative data collection and analysis through a structured process of brainstorming, card sorting, multidimensional scaling (MDS) and hierarchical cluster analysis. As a co-design methodology, it enables not only collaborative design and data collection but also interpretation of the results by the stakeholders. The results of a CM study provide a data-driven visual representation of the thoughts or ideas of a specific group of stakeholders.24 Because no pre-defined evaluation questions are used, participants can include whatever they feel is relevant during the brainstorming phase, enabling input not easily captured by traditional evaluations. Because the included stakeholders themselves provide the data, by completing prompts and sorting statements into categories, the methodology responds to the ideas of utilisation-focused evaluation.23 By allowing the participants to perform the tasks individually, and thereby not aiming for consensus on which subjects are considered most important to discuss, CM avoids the risk of some aspects not being observed and scrutinised. Instead, it captures the rich nature of the topic of investigation, where all voices are heard. Further, CM has been used to evaluate and develop nursing education,25 as well as to examine QI and equality in healthcare.26 27 Overall, CM methodology has proven to be reliable and valid.28

Data collection and analysis

The flowchart in figure 1 shows the three phases of the CM process adapted to this study, together with the data collection and the number of participants in each phase.

Figure 1

Flow chart of concept mapping process.

Prior to the first phase, a research group was formed to design the study, identify participants, design the data collection task, encourage participation, perform the analysis and interpret the results. This group consisted of four people with experience designing and teaching the courses (FS, PA, BB, AH) and one methodologist with expertise in CM (LV).

Pilot study and idea generating phase

First, a pilot evaluation was conducted to refine the prompts and background questions and to check the usefulness of CM for evaluation as experienced by the stakeholders. Twenty-four participants of an ongoing course in quality-driven organisational development were asked to test the material and give comments. All agreed that the method was useful, and after minor adjustments to the background questions, a web link was sent to 331 previous participants from courses provided between 2004 and 2016 (see table 1). After nine background questions, participants were asked to complete two different focus prompts with 3–5 open-ended responses each. The prompts were as follows:

  1. For me personally, the course in improvement knowledge has led to…

  2. For my organisation, the course in improvement knowledge has led to…

Three reminders were sent out. Prompt 1 gathered a total of 351 complete statements and prompt 2 a total of 291 complete statements.

Sorting phase

To eliminate redundant ideas and delete ideas that did not respond to the focus prompts, the statements were edited by the research group. This resulted in 81 remaining statements for prompt 1 and 78 for prompt 2.

For each of the two prompts, three participants from each of the four course categories (see table 1) were asked to individually sort the statements into categories using an online tool, giving 24 invited sorters in total. Sorters also named the categories to describe their content from their own perspective. Of the 24 invited sorters, 10 sorted prompt 1 and 9 sorted prompt 2 (see figure 1).
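
In CM, such individual sorts are typically aggregated into a statement-by-statement co-occurrence matrix, which is then converted into a dissimilarity matrix for the multidimensional scaling described in the next section. The Python sketch below illustrates this aggregation with invented toy data; it is not the study's own code (the analysis was carried out in R, see Software), and the statement ids and piles are hypothetical.

```python
import numpy as np

# Toy sorting data: each sorter's result maps a statement id to a pile number.
# (Hypothetical; in the study, 10 sorters handled the 81 prompt-1 statements
# and 9 sorters the 78 prompt-2 statements.)
sorts = [
    {"s1": 1, "s2": 1, "s3": 2, "s4": 2, "s5": 3},
    {"s1": 1, "s2": 2, "s3": 2, "s4": 2, "s5": 3},
    {"s1": 1, "s2": 1, "s3": 1, "s4": 2, "s5": 2},
]
statements = sorted(sorts[0])
n = len(statements)

# Co-occurrence matrix: how many sorters placed each pair of statements in the same pile.
co = np.zeros((n, n))
for sort in sorts:
    piles = np.array([sort[s] for s in statements])
    co += (piles[:, None] == piles[None, :]).astype(float)

# Dissimilarity matrix used as MDS input: pairs sorted together by fewer sorters
# get a larger dissimilarity and should end up further apart on the map.
diss = 1.0 - co / len(sorts)
np.fill_diagonal(diss, 0.0)
```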

Analysis phase

MDS was used to analyse the results of the individual sorting. An MDS algorithm produces x and y coordinates for each idea, so that ideas that were often sorted together appear close together on the map and ideas that were seldom or never sorted together appear further apart. To assess the goodness of fit of the map to the source data, a stress value was calculated. The stress value indicates how well a multidimensional pattern of objects is represented by a lower-dimensional map (usually, as in our case, a two-dimensional map); a good fit is indicated by a low stress value. In a study by Rosas and Kane,28 69 published CM papers had stress values that ranged from 0.17 to 0.34, with a mean of 0.28 and an SD of 0.04. A stress value of less than 0.39 is suggested to be acceptable by Sturrock and Rocha.29
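
As an illustration of this step, the sketch below runs non-metric MDS on a placeholder dissimilarity matrix (random data standing in for a matrix built as in the previous sketch) and computes a stress-type goodness-of-fit value in the spirit of Kruskal's stress-1. This is a hedged sketch, not the study's R analysis; scikit-learn and SciPy are assumed to be available.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

# Placeholder dissimilarity matrix (random, symmetric, zero diagonal) standing in for
# the statement-by-statement matrix built from the sort data; 81 statements as for prompt 1.
rng = np.random.default_rng(0)
a = rng.random((81, 81))
diss = (a + a.T) / 2
np.fill_diagonal(diss, 0.0)

# Non-metric MDS projects the statements onto a two-dimensional map.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(diss)

# Stress-type fit measure in the spirit of Kruskal's stress-1, computed here against the
# raw input dissimilarities for simplicity (textbook stress-1 uses the monotonically
# transformed disparities): sqrt(sum (d_ij - dhat_ij)^2 / sum d_ij^2).
dhat = squareform(pdist(coords))
iu = np.triu_indices(len(diss), k=1)
stress = np.sqrt(((diss[iu] - dhat[iu]) ** 2).sum() / (diss[iu] ** 2).sum())
print(round(float(stress), 2))  # the text treats values below roughly 0.39 as acceptable
```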

Next, hierarchical cluster analysis was applied to the x and y coordinates. This partitions the map of all ideas into a smaller set of clusters of related ideas. To balance having enough clusters to capture the diversity of ideas against maintaining a manageable level of detail, multiple cluster solutions were reviewed by the research group before deciding on the final number of clusters for the two maps. Finally, the clusters were named by first computing x and y coordinates for all category labels created by the respondents in the sorting stage, and then the distance between each label and each cluster centre. The labels located closest to each cluster centre were used as the final cluster labels, but they were sometimes adjusted or changed by the research group in order to find a label that best illustrated the content of the cluster.
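
A minimal Python sketch of these two steps, under stated assumptions, is given below: Ward linkage is used (the paper does not state which linkage method was applied), the coordinates are random placeholders for the MDS output, and the category labels and their positions are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
coords = rng.normal(size=(81, 2))   # placeholder for the 81 prompt-1 statement coordinates

# Hierarchical clustering of the 2-D coordinates (Ward linkage is an assumption here).
link = linkage(coords, method="ward")

# Review several candidate cluster solutions; the research group settled on
# 8 clusters for prompt 1 and 7 for prompt 2.
for k in range(5, 11):
    labels = fcluster(link, t=k, criterion="maxclust")
    print(k, "clusters, sizes:", np.bincount(labels)[1:])

# Cluster naming: place each sorter-supplied category label at some x/y position
# (in the study, computed from the statements it contained) and attach it to the
# nearest cluster centre. Label names and positions below are hypothetical.
labels = fcluster(link, t=8, criterion="maxclust")
centres = np.vstack([coords[labels == c].mean(axis=0) for c in range(1, labels.max() + 1)])
candidate_labels = {"increased understanding": np.array([0.1, 0.3]),
                    "professional network": np.array([-0.4, 0.2])}
for name, xy in candidate_labels.items():
    nearest = cdist(xy[None, :], centres).argmin() + 1
    print(name, "-> cluster", nearest)
```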

Finally, two extra analyses, not part of the CM methodology, were performed. To find out whether the statements in the clusters came from participants of the same or different courses, the origin of each statement was traced back to the idea-generating phase. In addition, all original responses that had been eliminated due to redundancy were qualitatively coded into the cluster with similar responses in the respective maps.30 The percentage of all responses coded to each cluster was calculated and presented as a proxy for the salience of the individual clusters across all original respondents.
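
The salience calculation itself is simple: the share of all original responses coded to each cluster. A small hypothetical sketch (the counts below are invented, not the study's data):

```python
from collections import Counter

# Hypothetical coding of original responses into final clusters.
coded = ["cluster 3"] * 5 + ["cluster 1"] * 3 + ["cluster 2"] * 2

counts = Counter(coded)
salience = {c: round(100 * k / len(coded)) for c, k in counts.items()}
print(salience)  # {'cluster 3': 50, 'cluster 1': 30, 'cluster 2': 20}
```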

Software

Well-established web applications were used for idea generation31 and for the sorting phase.32 R software was used for the data analysis.33

Results

The results of the MDS and the hierarchical cluster analyses are shown in the two concept maps in figures 2 and 3. Every statement is represented by a dot. For each cluster, three illustrative statements, represented by numbers, are presented. In the interpretation process, the research group agreed on an eight-cluster solution for the personal outcomes from prompt 1 while seven clusters were considered sufficient for the organisational outcomes from prompt 2.

Figure 2

Concept map visualising the personal outcomes of quality improvement education.

Figure 3

Concept map visualising the organisational outcomes of quality improvement education.

Personal outcomes (prompt 1)

For the concept map relating to personal outcomes, the stress value was 0.18, which indicates a good fit to the data. Cluster 3 represents the largest number of original responses, with 19% of the total.

Organisational outcomes (prompt 2)

The concept map related to organisational outcomes has a stress value of 0.29, which also indicates a good fit to the data. Cluster 4, improved methodology and structure of improvement work, had the largest number of responses, with 35% of the original responses, closely followed by cluster 2, organisation development (31% of original responses).

Analysis—interpreting the maps

Personal outcomes (prompt 1)

The concept map (figure 2) was polarised, with cluster 3 (personal development and confidence in my professional role), cluster 6 (professional network), cluster 7 (new career paths) and cluster 8 (increased legitimacy) to the left of the map: clusters that all seem to relate to personal development for the individual course participant. On the right-hand side of the map, cluster 5 (theoretical improvement knowledge), cluster 1 (increased understanding/insight) and cluster 4 (broadening perspective and critical thinking) relate more to the competence achieved by the course participants. Thus, in the interpretation process, the research group found it adequate to further combine the clusters into what could be called meta-clusters: individual development and individual improvement competence. The middle cluster, cluster 2 (organisation development), can be seen as the participants’ application of their achieved development and improvement capability to the development of the organisation, thus binding the clusters together. As a whole, the map indicates that the participants perceived a sustainable integration of improvement knowledge into their own practice, with a transfer to their organisations.

We also analysed the results further by identifying the course category to which the respondents behind each statement belonged. This was done in order to find out whether there were any differences between, or within, the different courses. We did not find any such differences. Furthermore, none of the original responses to prompt 1 were negative or critical, even though the focus prompt was open-ended and the participants had every opportunity to express such views.

Organisational outcomes (prompt 2)

Similar to the personal outcomes map, additional relationships between the clusters visualised in figure 3 emerged: clusters 1 and 7 were seen as related to leadership and employee motivation and well-being, and clusters 4, 5 and 6 as related to the organisations’ structure and ability to perform improvement work. Cluster 6 seems to integrate the two sides by emphasising dissemination aspects. Thus, two meta-clusters were identified: organisational improvement capability, and leadership and employee empowerment. Finally, cluster 3 separates out due to its clear relation to cancer patients, as emphasised in one of the course respondent categories (contract nurses). As such, this cluster consists of only a small percentage of the original responses (4%). However, in scrutinising the responses in cluster 2, a number of them are about involving patients in improvement projects. We therefore identified a third meta-cluster: patient focus. For organisational outcomes, there was variation in responses across participants of the different course categories. For example, the contract nurses’ statements are over-represented in clusters 1 and 3. The statements of participants of the 2-year course are slightly over-represented in clusters 2 and 4, that is, in the meta-cluster organisational improvement capability.

Salience of ideas

For personal outcomes, the percentage of original responses was relatively evenly distributed over the eight clusters, with cluster 3 having the highest frequency at 19% and cluster 8 the lowest at only 4%. This pattern was not seen for organisational outcomes, where the range instead was from 35% (cluster 4) to 2% (cluster 3).

Discussion

In this study, we have used CM methodology to explore the outcomes reported by course participants from QI education for healthcare professionals led by Chalmers University during a 14-year period. Two maps, one for personal outcomes and one for organisational outcomes, show clusters of the responses from previous course participants and how the outcomes relate to each other in meta-clusters. Inspired by the Kirkpatrick model for evaluation,19 we sought a method that could address complex outcomes as well as capture the rich nature of self-reported data. Particularly since the fourth level, results (as well as the later additions of the Kirkpatrick plus model), has been considered challenging to assess,19 34 CM offers a well-suited approach. Positive outcomes related to both the organisation and society, here represented by the patient focus, were reported. The additional analysis of the salience of the stakeholders’ ideas further enhanced the results of this evaluation of long-term outcomes. The results show positive outcomes from both personal and organisational perspectives, which complements the previous evaluations of the satisfaction and usefulness (level 1) of the courses (see study setting). Moreover, given the long period of evaluation (14 years), the participant responses to the two prompts indicate that these outcomes may be considered long term and sustainable for the individual, the organisation and patients.

For organisational outcomes, there was variation in responses across participants of the different course categories. For example, the contract nurses’ statements are over-represented in clusters 1 and 3. The statements of participants of the 2-year course are slightly over-represented in the clusters 2 and 4, that is, in the meta-cluster organisational improvement capability. This finding is not surprising as the 2-year course is more advanced, and the participants often come to the course as leaders of quality management.

Very few responses about organisational outcomes were critical in nature; these included outcomes like ‘being away from work’, ‘left the organisation’ and ‘less clinically active’. The non-positive responses included in the concept map are statement 11 from cluster 1 and statements 57, 26, 76 and 77 from cluster 7. Overall, the stakeholders described positive outcomes of the education for their organisation.

The reported outcomes for the organisations are interesting since healthcare improvement capability is an area of much current discussion; it has been questioned whether it is possible to increase organisational improvement capability through courses in QI (see, for example, refs 7 35 36). Babich et al8 concluded that hospitals focusing on either projects or staff training gained limited value compared with organisations that strengthened their organisational systems, structures or processes aimed at improvement efforts. This is partly in line with Adler et al,7 who concluded that improvement projects stimulate the creation of skills, systems, structure, strategy and culture. Adler et al7 emphasise that these changes take a long time to achieve, especially the latter ones (structure, strategy and culture). In our study, we interpret the results as indicating that at least some structural changes, and possibly some strategic changes, have occurred, as evidenced by the many responses related to organisational development capability (clusters 2, 4, 5 and 6, with 82% of the original responses). Complementing the conclusions of Babich et al,8 we found that it is possible to build organisational improvement capability through suitably designed QI training.

Our findings are similar to the results of Eid and Quinn37 in a study of residents’ training in improvement techniques. They found that the outcome of training on improvement capabilities was multifactorial, depending on: (1) trainee characteristics; (2) the training course; and (3) the work environment. In the training evaluated in this study, the following applies to all courses1: (1) the participants were selected by their respective organisations to spread the insights from the course to other parts of the organisation and, in many cases, also to lead the subsequent improvement work; (2) the training emphasised that not only the individual participants but also their respective organisations and their customers (patients) were customers of the course, meaning that not only individual skills and improvement capability were targeted but organisational QI objectives as well; (3) the training created personal development and increased the confidence and legitimacy of staff working with QI (eg, personal outcome clusters 3 and 8) and increased motivation and well-being (eg, organisational outcome cluster 1).

Methods discussion

The open-ended format of the CM prompts gave valuable information from stakeholders that might not have been captured using only pre-defined standard evaluation questions. As participant involvement has been an important theme in the courses, the method allowed the former course participants to act as informants concerning course outcomes. We found that CM helped to capture the complexity of the impact of training, which corresponds with the findings of Hagell,25 who highlighted the usefulness and potential of the method for the evaluation and development of higher health science education. Further, CM relates well to the more widely recognised use of affinity diagrams within QI work,38 but can add additional aspects and validity through its mixed methods design.28 Although the response rate might seem low, the stress value showed a good fit of the data and sufficiency for uncovering conceptual similarities of ideas in the sorting task,39 and the number of stakeholders in the sorting phase was consistent with other CM studies.28 In addition, CM aims for a broad sampling of ideas rather than a representative sampling of persons,28 but we achieved both by involving similar numbers of stakeholders from all courses throughout the period from 2004 to 2016.

The data collection through web links provided by the software was reported to be user-friendly, with the exception of the link shutting down after 4 hours, which forced the participant to start from scratch again. This affected the final response rate, and we also found some abandoned sorting tasks in the second phase. An additional limitation is the reliance on self-reported data; other themes/ideas might have emerged had another method been used.

Conclusions

This study offers an alternative approach to evaluating and capturing the complexity of educational outcomes. The use of CM can be especially helpful when assessing complex and multifactorial outcomes like results/organisational performance (cf. level 4, Kirkpatrick),19 and organisational/societal results (cf. level 5, Watkins et al).20

The results of this study indicate strong self-reported outcomes in the participants’ organisations in the form of improved organisational improvement capability and increased improvement skills. They also indicate that training in QI with a strong experiential pedagogical approach may foster long-term improvement capability in individuals (personal outcomes). As similar results have been obtained by Kaminsky et al,11 we also conclude that QI training may indeed be a catalyst for organisational change.

Acknowledgments

The authors would like to thank all involved stakeholders for responding to the different steps of the analysis. We would also like to thank Katrín Ásta Gunnarsdóttir and Anna Genell for providing updated R-code.

References

Footnotes

  • Contributors FS, PA, BB and AH designed the study and performed the analysis. FS, PA, AH and BB facilitated the partnership with the involved stakeholders. FS and AH did the data collection. LV provided methodological expertise. BB performed the statistical analysis. All authors contributed by commenting on the drafts and agreed to the final version of this manuscript.

  • Funding This work was supported by the Regional Cancer Center West, Gothenburg, Sweden.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement Data are available upon reasonable request.