Article Text


Improving access and flow within Child and Adolescent Mental Health Services: a collaborative learning system approach
Jamie Stafford (1), Marco Aurelio (1), Amar Shah (2)

1. Quality Improvement, East London NHS Foundation Trust, London, UK
2. East London NHS Foundation Trust, London, UK

Correspondence to Jamie Stafford; Jamie.stafford@nhs.net

Abstract

Long waiting times for Child and Adolescent Mental Health Services (CAMHS) have been linked to poorer outcomes for those seeking care. CAMHS teams in England have seen recent increases in referrals, resulting in challenging waiting times nationally. Although recent health policy has brought an increase in funding and staffing, it is believed that only 25% of those needing care receive it. Between trusts there is considerable variation in waiting times, leaving some waiting far longer for care than others. East London NHS Foundation Trust has been seen to have longer waiting times for CAMHS than other organisations across the country. Between June 2017 and September 2018, seven CAMHS teams were supported to use quality improvement (QI) as part of a collaborative learning system with the aim of improving access and flow. Each team was encouraged to understand its system using basic demand and capacity modelling alongside process mapping. From this, teams created project aims and driver diagrams, and used Plan Do Study Act cycles to test changes iteratively. Data were displayed on control charts to help teams learn from changes. Teams were brought together through a facilitated collaborative learning system to help them learn from each other and accelerate change. Of the seven teams that began the collaborative learning system, six completed a project. Across the collaborative learning system there were improvements in average waiting times for first, second and third appointments, and in the percentage of appointments cancelled. Of the individual teams involved, three saw an improvement in their project outcome measures, two saw improvements only in their process measures and one did not see an improvement in any measure. In addition to service improvements, teams used the process to learn more about their pathways, engage with service users and staff, build QI capability and learn together.

  • continuous quality improvement
  • outpatients
  • PDSA
  • quality improvement
  • waiting lists

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Problem

East London NHS Foundation Trust (ELFT) is a mental health and community health trust serving a population of around 1.5 million people across East London (City and Hackney, Tower Hamlets, Newham), Bedfordshire and Luton. To support the emotional and psychological well-being of children and young people, ELFT provides Child and Adolescent Mental Health Services (CAMHS). There are five separate CAMHS across the trust, made up of interdisciplinary teams of mental health professionals. Young people access CAMHS for a wide range of mental health interventions, including the assessment and treatment of psychosis, neurodevelopmental disorders, emotional and behavioural disorders, conduct disorders and eating disorders.

Nationally, referrals to CAMHS are increasing, with only 25% of young people needing treatment receiving it.1 For those able to access services, waiting times for a first appointment can vary from 2 to 27 weeks depending on geography, leading 85% of trusts to suggest they are unable to meet the demand for their services.2 This naturally causes further anxiety for young people and their families, and risks having a detrimental effect on their health outcomes.

At ELFT, the number of referrals received per 100 000 population is 2000, which is less than the national average of 2730. Despite this, the average waiting time for an appointment is 9 weeks, 2 weeks longer than the national average.3 Similarly, another report placed ELFT among the ten trusts with the longest median waiting times for CAMHS in 2017, at 90 days from referral.4 It is important to note that this figure does not reflect any potential differences in case complexity or the wider sociodemographic factors which have been seen to influence waiting times.5

Thus, with some shared challenges but different contexts, how could the teams learn together to accelerate change? Between June 2017 and September 2018, ELFT used quality improvement (QI) to empower teams to tackle access and flow across CAMHS. We brought together seven teams in a collaborative learning system to gain a deeper understanding of their flow, using QI methods to test changes and learn together.

This paper builds on previous learning from ELFT and elsewhere in using QI methods to tackle waiting times6 7 and in using improvement collaboratives as a means of accelerating learning across teams. It describes a collaborative learning system designed to tackle access and flow across seven CAMHS teams at ELFT. Focusing mainly on the learning system, it provides a brief overview of the work of each team, the change ideas tested and the results achieved.

Background

Long waiting times for CAMHS have been seen to result in poorer outcomes for those seeking care, relating to worsening of symptoms and the potential for families to disengage from treatment.8 Since the publication of Every Child Matters9 some 16 years ago, successive governments have attempted to redress the inequalities faced by children with mental health problems and solve one of the ‘burning injustices of our time’.10 The 2015 Future in Mind11 report outlined the government’s aspirations for children and young people and made 49 recommendations on how to support them. This was later backed by the Five Year Forward View for Mental Health,12 which made young people a priority and provided £1.5 billion in extra funding over 5 years. Increases in funding have been coupled with increases in staffing in order to cope with rising demand for services. Underlying this are the principles of having high-quality services which can be accessed in a timely manner.13

However, focusing solely on accessibility can prove problematic and can result in pushing the problem downstream, increasing waiting times at different parts of the pathway.14 Understanding flow through the system requires analysis of the occurrence of bottlenecks15 and of variation in demand.16 QI methods have been used to tackle this in a range of settings, with the Health Foundation identifying three key steps: understanding the system, testing different solutions and measuring for improvement.17

At ELFT, we have been using the Model for Improvement (MFI)18 as our QI approach to tackle wicked problems since 2015. Previous work at ELFT has combined this approach with collaborative learning systems to accelerate improvement in efforts to reduce violence on inpatient mental health wards6 and to reduce waiting times and DNAs (did not attends) across community teams.7 This allowed us to begin to develop an understanding of the impact that bringing people together in a collaborative learning system can have on improvement work.

Beginning in June 2017, ELFT used QI to try to improve access and flow across CAMHS. Across the five CAMHS services in the trust, seven separate teams set out to gain a deeper understanding of their flow and to improve an aspect of the quality of care they provide. They were asked to consider what matters to them and their service users, to spend time understanding their system, and to use QI to make meaningful and impactful changes. Each team identified a local issue related to flow in its system and developed a QI project to tackle it. The teams that were part of this work are described in table 1.

Table 1

Teams involved in the CAMHS collaborative learning system

Measurement

Each team developed a family of measures containing outcome, process and balancing measures.19 Teams were able to pick some specific measures relevant to their projects, but all teams had the following as part of their family of measures (a sketch of how such measures might be derived from routine appointment data follows the list):

  • Outcome measure: overall time in days from referral to discharge (or, in some projects, to the decision made around treatment).

  • Process measures: time from referral to first, second and third appointments; percentage of appointments not attended (total, first, second and third appointments); percentage of appointments cancelled (total, first, second and third appointments).

  • Balancing measures: number of referrals, number of discharges.
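
The logic behind these measures is simple enough to sketch in code. The following Python fragment is a minimal, illustrative example of how two of the process measures might be derived from appointment records; the column names, records and figures are invented for illustration and are not the fields used by Rio or the trust’s reporting systems.

  import pandas as pd

  # Invented appointment records; column names are illustrative, not Rio fields.
  appts = pd.DataFrame({
      "referral_id": [1, 1, 1, 2, 2],
      "referral_date": pd.to_datetime(["2017-01-09"] * 3 + ["2017-01-16"] * 2),
      "appt_date": pd.to_datetime(["2017-02-20", "2017-03-13", "2017-04-03",
                                   "2017-02-06", "2017-03-06"]),
      "status": ["attended", "attended", "DNA", "attended", "cancelled"],
  })

  # Process measure: average days from referral to first appointment.
  per_referral = appts.groupby("referral_id").agg(
      referral_date=("referral_date", "first"),
      first_appt=("appt_date", "min"),
  )
  days_to_first = (per_referral["first_appt"] - per_referral["referral_date"]).dt.days
  print(f"mean days to first appointment: {days_to_first.mean():.1f}")

  # Process measures: percentage of appointments not attended or cancelled.
  print(f"DNA: {(appts['status'] == 'DNA').mean():.0%}, "
        f"cancelled: {(appts['status'] == 'cancelled').mean():.0%}")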

Data were displayed using control charts, a widely used type of analysis for improvement work.20 A control chart is a statistical tool used to distinguish between common cause and special cause variation within a system, enabling teams to understand whether changes might have resulted in an improvement.21 Data for all teams were displayed fortnightly, with a baseline period from January 2017 to October 2017. The testing period for the teams ran from October 2017 to September 2018.
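
To illustrate the mechanics behind such a chart, the fragment below sketches a simple individuals (XmR) chart calculation, a common choice for fortnightly data of this kind. The series is invented, not the teams’ data; in practice, limits would be fixed from the baseline period and a fuller set of special cause rules applied.

  import numpy as np

  # Invented fortnightly series of mean days to first appointment.
  values = np.array([52, 49, 55, 47, 50, 48, 31, 30, 28, 33, 29, 27], dtype=float)

  centre = values.mean()                   # centre line
  mr_bar = np.abs(np.diff(values)).mean()  # average moving range between points

  # Standard XmR control limits: centre +/- 2.66 * average moving range.
  ucl = centre + 2.66 * mr_bar
  lcl = max(centre - 2.66 * mr_bar, 0.0)   # waiting times cannot be negative

  # One common special cause rule: any point outside the control limits.
  outside = np.where((values > ucl) | (values < lcl))[0]
  print(f"CL={centre:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}, signals at {outside}")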

Aggregated control charts for all the teams in the collaborative learning system were also created using Life QI, an online platform for recording and managing improvement work. Dashboards for each team were also created using SQL Server Reporting Services (SSRS). It is out of scope to detail the specific measurement plan of each individual team, but an example can be found here.22 Where possible these data were collected from the trust’s clinical record system, Rio, but in some cases manual collection was performed by the teams.

One team measured service users’ experience of the service as their outcome measure, and counted the number of positive responses to indicate overall satisfaction with the service.

Design

Our organisational approach to QI uses the MFI, which involves clarifying what we are trying to accomplish, establishing how we will know whether a change is an improvement, and identifying changes that we believe will result in an improvement.18 Change ideas are then tested using Plan, Do, Study, Act (PDSA) cycles, which enable teams to test ideas on a small scale, learn quickly and build knowledge about what works through multiple iterative cycles.18 This process has been seen to empower front-line staff to make changes, which is important as their knowledge of front-line processes and culture is vital in making change happen. Each team followed the ELFT sequence of improvement, which is detailed in figure 1 alongside the IHI MFI.

Figure 1

The ELFT approach to QI. ELFT, East London Foundation Trust; QI, quality improvement.

Based on previous learning at ELFT, the CAMHS collaborative learning system used the following principles as design concepts:

  • A shared goal and purpose across all the teams.

  • A shared theory of change, with a driver diagram created together to visualise how the teams believed they would improve access across their services.

  • A measurement system, with standardised measures that were collected and shared transparently across all the teams.

  • A way to learn from each other, with face-to-face learning sets every 6 weeks.

  • A support structure, with a project board, an executive sponsor for the whole collaborative learning system, local sponsors for each project and improvement advisors coaching each project team.7

Shared goal and purpose across all the teams

Each project team was supported to develop a measurable aim relevant to its individual context and project. These aims were devised using knowledge of the services, feedback from service users and service data, and are detailed in table 1. Broadly speaking, all teams intended to improve access to, and flow within, their clinical pathways.

In order to develop shared purpose across the collaborative learning system, a shared aim of improving access and flow across CAMHS was also developed. This helped build camaraderie around the work and enabled supportive relationships to form between teams.

Shared theory of change

Driver diagrams were used by each team to express their project theories of change and their change ideas.18 These were created using divergent and convergent thinking tools such as nominal group technique and affinity diagrams. For greater depth, three teams constructed demand and capacity models of their pathways. This highlighted variation, enabled analysis of how their clinical capacity was currently used and helped them make predictions about the impact a change might have. An example of the outputs is shown in figure 2.

Figure 2

Sample Pareto chart and output from demand and capacity model.
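
To give a sense of what such a model involves, the sketch below compares weekly demand (referrals converted into clinical hours) against fixed assessment capacity. All figures are illustrative assumptions, not ELFT data.

  # Invented weekly referral counts; average demand roughly matches capacity.
  weekly_referrals = [24, 31, 19, 28, 35, 22, 26, 30]
  hours_per_assessment = 1.5    # assumed clinical time needed per new referral
  weekly_capacity_hours = 40    # assumed assessment hours available per week

  backlog_hours = 0.0
  for week, n in enumerate(weekly_referrals, start=1):
      demand_hours = n * hours_per_assessment
      backlog_hours = max(backlog_hours + demand_hours - weekly_capacity_hours, 0.0)
      print(f"week {week}: demand {demand_hours:4.1f}h, backlog {backlog_hours:4.1f}h")

Although average demand here (about 40 hours per week) matches capacity, week-to-week variation still allows a backlog to build, which is why the teams examined variation rather than averages alone.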

We also developed a high-level learning system driver diagram that drew on the individual project level theories of change, as shown in figure 3. This brought the teams together around a shared aim, and theory of change.

Figure 3

System-level driver diagram. CAMHS, Child and Adolescent Mental Health Services.

Measurement systems

Supporting teams to understand variation in their own services was an important way to begin the projects, and also provided a vital reference point against which they could monitor their progress, acting as a consistent feedback loop to enable learning through testing. A family of measures was selected to support learning at QI project team level, as well as at macro system level. The resulting dashboard was published monthly, and teams also had access to weekly data via ELFT’s ‘Quality and Performance Dashboard’. Through thoughtful interaction with these sources, teams were able to make informed decisions about the effectiveness of change ideas. This has been detailed in the measurement section.

A way to bring the teams together and learn together

Teams tested the ideas from their driver diagrams using PDSA cycles. This allowed them to start by testing on a small scale, incrementally building confidence in the effectiveness of the change idea. To help do this, teams regularly reviewed their data to determine whether changes had resulted in an improvement.

To bring this learning together, the central QI team supported the running of a 6-weekly learning set, bringing the teams together with the directorate leadership for co-learning and co-coaching. Given the dispersed nature of the teams, we encouraged attendance by promoting the use of virtual platforms alongside in-person attendance, and by holding learning sets across the sites where teams were based.

The content was codesigned with CAMHS leadership in response to how teams were progressing and what would be most helpful to accelerate learning. Learning sets ran for an hour and mixed short teaching on a specific improvement topic with time for the teams to think about how to apply that learning in their own contexts.23

In order to make the work visible to the wider CAMHS staff in the trust, we published a 6-weekly newsletter to celebrate progress among the teams. This coincided with the learning sets, containing updates, stories of what was being tested and progress from the teams. Each project was registered on the ‘Life QI’ web platform, making data, driver diagrams and updates visible to all trust staff. This helped develop a community around the work. Team leads added monthly updates, which formed a report to sponsors, enabling them to know where to celebrate success and where to work with the teams to resolve challenges.

Support structure for the learning system

Having a consistent group of leadership, management and QI support around the teams provided an ecosystem for the projects to thrive. Project teams were able to link in with these support mechanisms throughout their work, and the relationships that formed meant help was accessible and rapid.

To support the teams to overcome barriers, each project had the support of a local sponsor (a senior team lead) and the CAMHS clinical director. The collaborative learning system itself was sponsored by the chief operating officer, a member of the organisation’s executive team. As highlighted by the Health Foundation,6 senior organisational leadership plays an important part in the success of a learning system. These sponsors ensured that teams were supported to progress, and helped create the right conditions for the improvement projects to flourish.

Support in the application of QI methods was provided by two improvement advisors from the central QI team. They met with teams on a fortnightly basis to provide coaching on a range of QI methods: for instance, which control charts were appropriate for the measures the teams were looking at, or what was the most effective strategy to test change ideas. Teams were also able to gain support outside these meetings by phone and email.

In addition to the regular learning sets described above, project board meetings were held every 2 months. These were chaired by the executive sponsor and provided another opportunity for teams, senior CAMHS leaders and improvement advisors to come together to review how the work was progressing. Projects were asked to self-rate their progress using a simple tool aligned to the key steps in the ELFT sequence of improvement and focused on which ideas were being tested. If projects were facing shared challenges, this was an opportunity to surface them and develop a strategic plan to provide extra support.

Strategy

Each project team was composed of between four and eight members from the wider service. Projects began by using process maps to understand their system, and by collecting and reviewing data to describe demand and capacity. This allowed teams to consider their systems through a different lens, to identify quality issues that they wanted to work on, and subsequently to develop change ideas to test. Theories of change were developed using driver diagrams, with each team’s driver diagram specific to its context.

Change ideas were tested using PDSA cycles. Each cycle contained a theory and a prediction, and teams were supported to use data (qualitative and quantitative) to build their knowledge of both the change idea and the system in which it was being tested. Table 2 provides a summary of the different change ideas tested, mapped against the specific change concepts used.18

Table 2

Project team aims and change ideas mapped against change concepts
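
As an illustration of the structure a well-documented cycle takes, the sketch below records a change idea together with its prediction and the learning from a small-scale test. The fields and the example test are hypothetical assumptions; they do not reflect the Life QI schema or any particular team’s PDSA.

  from dataclasses import dataclass

  @dataclass
  class PDSACycle:
      """One iterative test of change, recorded with an explicit prediction."""
      change_idea: str       # the idea under test
      prediction: str        # Plan: what the team expects to happen
      scale: str             # how small the test is kept
      observed: str = ""     # Do: what actually happened
      learning: str = ""     # Study: observed result compared with the prediction
      next_step: str = ""    # Act: adopt, adapt or abandon

  cycle = PDSACycle(
      change_idea="Telephone reminder two days before the first appointment",
      prediction="First-appointment DNAs will fall for reminded families",
      scale="One clinician's new referrals for one fortnight",
  )
  cycle.observed = "1 DNA in 9 reminded appointments vs 3 in 10 the fortnight before"
  cycle.learning = "Consistent with the prediction; each call took about 5 minutes"
  cycle.next_step = "Adapt: widen the test to the whole duty rota for one month"

Recording the prediction before the test is what turns a change into a learning cycle: the Study step compares what happened against what was expected, a discipline discussed further in the Lessons and limitations section.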

Data over time, displayed on control charts, were used to help the teams learn which change ideas may have resulted in an improvement. Once a team had a strong degree of belief in the effectiveness of a change idea, it was then implemented.

The collaborative learning system was facilitated by the QI team (an improvement advisor, a Darzi fellow and a QI data analyst), with the support of an executive director (the chief operating officer) and the clinical director for children’s services. The teams came together every 6 weeks to provide a space for methodological support as well as for peer-to-peer learning. Each learning set was largely aligned to a specific step in the ELFT sequence of improvement described in figure 1. In the early stages of the projects, teams requested specific teaching on QI methodology (eg, driver diagrams, PDSA cycles), and as the projects progressed the teams requested more time for coaching support from the group. The sequence of learning sets is described below:

  • Learning Set One—Developing a shared purpose.

  • Learning Set Two—The psychology of waiting, service user involvement, aims and driver diagrams.

  • Learning Set Three—Driver diagrams, developing measurement systems.

  • Learning Set Four—Reviewing tests of change, reviewing dashboards.

  • Learning Set Five—Sharing PDSAs and service user involvement.

  • Learning Set Six—Understanding where we are now, what more is possible and introducing quality control.

  • Learning Set Seven—Project storytelling.

Following each learning set, we distributed each team’s data as dashboards, together with a summary of what was covered and what teams needed to do before the next learning set. To support sharing of learning across the organisation, we also published quarterly newsletters updating staff on the progress of the teams.

Storytelling is a powerful tool in improvement, and we encouraged teams to consider it during their projects. At the conclusion of the collaborative learning system in September 2018, each team was asked to develop a story detailing its improvement journey. Teams were encouraged to be creative and to use a variety of means to share what they did, how it felt and the results of their work. Stories ranged from diaries and interviews with service users to videos and animated storybooks.

Results

Team-level results

Seven teams began the collaborative learning system, with one team deciding to drop out mid-way through. Of the six teams finishing the learning system, three saw improvements in their outcome measures: City and Hackney ADHD, Tower Hamlets Triage and Luton Emotional and Behavioural. Improvements in process measures were seen by two teams. One team did not see an improvement in outcome or process measures. The results are detailed below:

  • City and Hackney ADHD team saw a reduction in the average number of days from referral into the pathway from 87 days to 18 days (outcome measure). This equates to an 80% reduction. A reduction in DNAs for second appointments (8% to 0%) and third appointments (14% to 0%) was also achieved (process measures).

  • Tower Hamlets Triage team saw a reduction in the average number of days from referral into their service from 20 days to 10 days (outcome measure). This equates to a 50% reduction. No change was observed in their process measures.

  • Luton Emotional and Behavioural team saw a reduction in the average time from referral to discharge from 247 days to 234 days (outcome measure). This equates to a 5% reduction. No change was observed in their process measures.

  • Newham Emotional and Behavioural team saw no change in the time young people spent in treatment, with a mean of 380 days (outcome measure). A reduction in appointment cancellations from 19% to 14% was observed (process measure).

  • Tower Hamlets Neurodevelopmental team saw no change in the time taken to complete assessments (outcome measure). No change was observed in their process measures.

  • City and Hackney Crisis team saw no change in the levels of experience reported by service users, with a mean of 10 positive responses from a total of 14 survey questions (outcome measure). No change was observed in their process measures.

  • Bedford CAMHS withdrew prior to testing.

Outcome measures for all the teams are presented in the dashboard in figure 4.

Figure 4

Outcome measures results dashboard. ADHD, attention-deficit/hyperactivity disorder; E&B, Emotional and Behavioural.

System-level results

As described in the measurement section, data were also collected at learning system level in order to assess impact across the participating CAMHS teams. The results are detailed below:

  • Time from referral to first appointment saw a reduction from an average of 49 days at baseline to 47 days; a 3% reduction.

  • Time from referral to second appointment saw a reduction from an average of 120 days at baseline to 93 days; a 22% reduction.

  • Time from referral to third appointment saw a reduction from an average of 167 days at baseline to 126 days; a 25% reduction.

  • The percentage of appointments cancelled reduced from 15% at baseline to 14%.

  • The number of accepted referrals per fortnight reduced from 77 at baseline to 63.

There was no change from baseline in the average number of referrals, discharges or the percentage of appointments not attended (DNA). These results are displayed on the appropriate control charts in figures 5–7.

Figure 5

System-level outcome measure results.

Figure 6

System-level process measure results.

Figure 7

System-level balancing measure results.

Lessons and limitations

This was the first time that ELFT CAMHS teams had come together to work on a large-scale improvement effort using QI. We found that a coordinated approach was particularly helpful for building community and sharing learning among teams. Within this paper, we have attempted to present the work undertaken as part of a collaborative learning system for CAMHS teams working on aspects of access and flow in their systems. Where possible we have described the results of each team and briefly detailed the change ideas tested. However, it is beyond the scope of this paper to discuss each team’s improvement journey in detail, and we have chosen instead to focus on providing an overview. Consequently, the contribution of each change idea has not been explored fully.

One significant challenge we encountered was the heterogeneous nature of the teams involved and the ensuing tension between allowing teams autonomy and maintaining alignment to a shared aim. Distinct local contexts meant teams found different purposes for their projects, which meant that alignment of the project aims was appropriately limited. We have rationalised this insofar as the teams were all working towards a common goal or purpose statement.

Another challenge was finding appropriate data sets for the specific quality issues being targeted. In some cases, data sets were already in place that provided useful information for the teams, while in other cases the teams needed to develop measures and collect new data independently. Combining large computerised data sets with manually compiled data was challenging, particularly when attempting to analyse data at system level.

Access to Life QI, a platform for managing QI projects, made recording of PDSAs easier for the teams. Despite this, we still found that rigorous application of PDSAs, with clear predictions and recorded learning from tests, was challenging. This is commonplace in improvement work and can limit the learning from interventions and the future scalability of the work.24 To tackle this, we encouraged teams to review and update their PDSA documents together each time they met. Teams were also encouraged to critique each other’s PDSAs at learning sessions, and PDSAs were shared in trust-wide communications to motivate teams to document them properly. This sense of shared accountability has been seen elsewhere as a helpful facet of improvement collaboratives.25

We did not perform a formal evaluation to help understand which factors of the collaborative learning system were most helpful in accelerating and facilitating change. The evidence base around both the impact of learning sets on improvement outcomes26 and how best to evaluate them remains varied.27 However, there is some evidence to suggest that more focus should be paid to the mechanisms through which collaboratives foster the cultural, human and social sides of change28 as well as improvement skills.29 In future learning collaboratives, we would encourage conveners to use simple methods to enhance understanding of the factors that amplify the conditions for success.

Lastly, we found storytelling a powerful part of the collaborative learning system. The pieces produced at the end spoke not only to the results of the work, but also to personal journeys of learning. Storytelling is an important mechanism for capturing tacit or internalised knowledge and, in turn, externalising it for others to learn from.30 We would encourage learning set facilitators to deliberately design this into the way teams share learning with each other.

Conclusions

As we have previously seen at ELFT, there is value in the collaborative learning system approach in bringing teams together to accelerate learning for improvement. System-wide results here indicate an improvement in flow across the CAMHS pathways participating in the learning system, with service users being seen more quickly for first, second and third appointments. The percentage of appointments cancelled also reduced, as did the number of referrals accepted. It is believed that both were related to improved communication with service users and improved screening processes.

It is important to note that, while not all teams reported an improvement in their objective outcome measures, the work provided an opportunity for engagement around the following:

  • Learning about their systems of work.

  • Becoming familiar with using a systematic QI method and tools to think differently about how to tackle problems.

  • Learning from each other’s experiences as part of a collaborative.

This work was done without any extra resource being provided to teams, aside from improvement methodology and data support from the central improvement team and the expenses incurred in bringing the teams together. Some members of the teams had experience of working on QI projects and had completed training in improvement methodology, which we felt was a contributory factor in the success.

Acknowledgments

We would like to thank the following: the project teams, for all their hard work and dedication to improving their systems for service users; Paul Calaminus, who provided executive director sponsorship; the CAMHS directorate leaders; the QI team, who provided data and communications support; and the Darzi Fellowship, which helped shape and support the development of this work.

References

Footnotes

  • Twitter @DrAmarShah

  • Contributors JS and MA worked directly with teams during the project, with supervision from AS throughout. JS and MA wrote large portions of the report detailing the project, with additions and editing from AS. All sections of the report were worked on by each author.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Methods section for further details.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement All data relevant to the study are included in the article.