Article Text

Improving personalised care through the development of a service evaluation tool to assess, understand and monitor delivery
Louise Johnson1, Hayden Kirk2, Beth Clark1, Stephanie Heath3, Carolyn Royse1, Carl Adams4, Mari Carmen Portillo1

1School of Health Sciences, University of Southampton, Southampton, Hampshire, UK
2Adult Services, Solent NHS Trust, Southampton, UK
3Wessex Cancer Alliance, Southampton, UK
4Academy of Research and Improvement, Solent NHS Trust, Southampton, UK

Correspondence to Dr Louise Johnson; lj1b06{at}soton.ac.uk

Abstract

Systematically implementing personalised care has far-reaching benefits for individuals, communities and health and social care systems. If done well, personalised care can result in better health outcomes and experiences, more efficient use of health services and reduced health inequalities. Despite these known benefits, implementation of personalised care has been slow. Evaluation is an important step towards achieving the ambition of universally delivered personalised care. There are currently few comprehensive assessments or tools that are designed to understand the implementation of personalised care at a service or system level, or the cultural, practical and behavioural factors influencing this. The aim of this paper is to describe the development and testing of a system-wide evaluation tool. The tool offers a process through which healthcare systems can better understand the current delivery of personalised care and the factors influencing this. With a focus on implementation, the development of the tool was informed by the Consolidated Framework for Implementation Research, and its content is structured using behaviour change theory (the COM-B model of behaviour change). The tool consists of four mirrored surveys, which were developed using an iterative exploratory design. This included a series of testing cycles, in which its structure and content were continually refined. To date, it has been used by 24 clinical services, involving 397 service users, 313 front-line practitioners, 73 service managers and 40 commissioners. These services have used the evaluation process to initiate quality improvement, targeted at one or more aspects of personalised care. The use of the COM-B model increases the likelihood of those improvements being sustained, through identification of the core factors that enable or limit personalised care behaviours among healthcare staff. We have shown this process to be applicable in a wide range of settings; thus, it potentially has broad application as a tool for cultural change and quality improvement. The next stage of this work will focus on implementation and evaluation, to fully understand if and how the tool can be used to drive improvements in personalised care delivery.

  • Quality improvement
  • Patient-centred care
  • Health policy
  • Quality measurement

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

WHAT IS ALREADY KNOWN ON THIS TOPIC

  • Developing systems that are able to deliver more personalised care is a strategic priority for healthcare providers across the world, due to the benefits it brings to individuals, communities and wider healthcare systems.

WHAT THIS STUDY ADDS

  • There are currently few comprehensive tools that enable understanding and evaluation of personalised care delivery from multiple perspectives. The WASP Service Evaluation Tool, developed through this project, enables healthcare providers, commissioners and policy-makers to better understand the how and why of personalised care delivery within their system.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • This evaluation is a powerful lever for change. Using behaviour change frameworks, the insights from the evaluation process can be used to support targeted and sustained improvements to personalised care implementation.

Problem

Personalised care is a core focus within the UK National Health Service (NHS) Long Term Plan1 and is described in detail in the comprehensive model of personalised care.2 This model establishes a whole-population approach to support people to manage their physical health, mental health and social well-being, build community resilience and make informed decisions and choices when their health changes. Research shows that this approach can lead to better outcomes and experiences,3 4 as well as a reduction in avoidable use of health services,5 and reduced health inequalities.6

The Wessex region (UK) covers Dorset as well as Hampshire and the Isle of Wight (HIOW) and has a population of almost 2.5 million. Like many parts of the UK, the region recognises the important role that personalised care must play in the development and sustainability of services; it is a key strategic priority. Yet delivering care that is more personalised is not straightforward. It requires a fundamental change in systems and processes, knowledge and skills, attitudes and behaviours, and it will not happen by chance. While these efforts are led primarily by front-line teams working with people with health and care needs, others within the wider system also have a role in enabling this change; a whole system approach is required.

As part of their strategic plans to enable the workforce to support and deliver personalised care approaches, the regional steering boards for personalised care in both HIOW and Dorset commissioned the development of a Personalised Care Service Evaluation Tool. The aims were: (1) to produce a tool to capture insights into personalised care delivery and the factors influencing this, from the perspectives of multiple stakeholders and (2) to test this within at least five different healthcare specialties/condition groups. The objectives were to provide: (1) specific insight into personalised care delivery at a system level; (2) a clear understanding, involving multiple stakeholders, around the enablers and barriers to personalised care delivery; (3) informed identification of areas for targeted quality improvement; and (4) a system through which change can be re-evaluated.

Background

Personalised care means people have choice and control over the way their care is planned and delivered. It is based on 'what matters' to them and their individual strengths and needs.2 The components of personalised care are outlined in the NHS Operating Model7; delivering this is one of the key ambitions of the NHS Long Term Plan.1

Practising personalised care within routine health and social care delivery is reliant on a complex combination of many behaviours, decisions and interactions.8 While most clinical staff strive to deliver care that is personalised, cultural, operational and system barriers may prevent them from doing so. As a result, the adoption of personalised care has been slow.9

To achieve greatest benefits, personalised care interventions must be comprehensive, intensively delivered and robustly integrated into routine care.10 Achieving integration requires structural change (in terms of healthcare pathways and delivery) and cultural and attitudinal change among clinicians and service users.9 The responsibility for embedding care that is authentically personalised does not rest solely with the clinician and the person in receipt of care11 but also with those who manage, lead and commission services. Understanding these perspectives is an important step in achieving the ambition of universally delivered personalised care.2

Although tools exist to evaluate personalised care at an individual service user level, these typically relate to process or experience and focus on one aspect of the operating model, for example, shared decision-making.12 There are few comprehensive assessments or tools designed to understand the implementation of personalised care at a service or system level or the cultural, practical and behavioural factors influencing this.

Using the Consolidated Framework for Implementation Research (CFIR)13 as an overarching guide, we developed a process for clinical services to evaluate their readiness to adopt behaviours that align to the personalised care model, within the systems in which they work. The CFIR comprises five major domains, which are associated with effective implementation: the intervention, the inner setting, the outer setting, the individuals involved and the process by which implementation is accomplished. It provides a practical guide for systematically assessing implementation and is particularly suited to understanding system-wide transformation.13

Measurement

Our process gives a system-wide view, by involving multiple stakeholders, including service users. We primarily use self-reported data to understand and compare the behaviours and perceptions of different groups of individuals (service users, front-line practitioners, managers and commissioners). Data is collected through paper and/or electronic surveys (depending on individual preference).

Design

The Wessex Academy for Skills in Personalised Care (WASP) Service Evaluation Tool was developed and tested using an iterative exploratory process, drawing on principles of ‘design thinking’.14 15 This approach was particularly important given the complex nature of personalised care delivery and the multiple factors that we aimed to capture within the design of the tool.

Patients and members of the public were involved at several stages of this project. We involved two people living with a long-term condition and one colleague from the voluntary sector (Stroke Association, UK) as core members of our development and steering group; these individuals contributed to all aspects of project design and delivery. When developing and trialling the service user survey, we received input from patients with lived experience, to evaluate its acceptability and usability.

The broad structure for the evaluation process and content of the tool was developed and agreed by a multiprofessional group with expertise in personalised care and behaviour change, including nurses, allied health professionals and psychologists. This overarching structure was informed by the CFIR, which emphasises that implementation is influenced at multiple levels—from external (outer setting) influences through to the central role of the individual practitioners and service users.13 Our evaluation process was designed to incorporate system-wide views, using surveys to understand the perceived behaviours (and the factors driving those behaviours) of clinical staff, service managers and commissioners, and comparing this to the experience of service users. Collectively, data from a broad range of professionals enabled an understanding of factors that align to the inner and outer setting.

The content of each survey was developed by the expert group, informed by a review of the evidence base in this field. To ensure the assessment gave a comprehensive and relevant overview, the content was mapped to key policy documents, including the personalised care operating model (NHS England). Feedback on content and acceptability was sought from service users and voluntary sector representatives, using a think aloud approach.16

The COM-B behaviour change model17 was used to underpin survey content. This model proposes that capability, opportunity and motivation interact to generate a specific behaviour. In the context of delivering personalised care, healthcare staff must perform certain behaviours—they must act and interact in a certain way. For any behaviour to be performed or maintained, the individual must have the capability, opportunity and motivation to do this. Capability is defined as the individual’s psychological and physical capacity to engage in the activity concerned, for example, having the necessary knowledge and skills. Motivation is defined as the brain processes that energise and direct behaviour, for example, habitual processes, emotional responding and analytical decision-making. Opportunity is all the factors that lie external to the individual that make the behaviour possible or prompt it.
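
To make the model's logic concrete, the following minimal sketch (our illustration, not part of the WASP tool; the class and field names are invented) encodes the COM-B proposition that a behaviour is generated only when capability, opportunity and motivation are all present.

```python
from dataclasses import dataclass

# Minimal, illustrative encoding of the COM-B proposition: a behaviour
# occurs only when capability, opportunity and motivation are all present.
# The class and field names are our own, not part of the WASP tool.
@dataclass
class CombProfile:
    capability: bool   # psychological/physical capacity, eg, knowledge and skills
    opportunity: bool  # factors external to the individual, eg, time and resources
    motivation: bool   # processes that energise and direct the behaviour

    def behaviour_enabled(self) -> bool:
        return self.capability and self.opportunity and self.motivation

# A practitioner with the skills and motivation but no organisational
# opportunity (eg, lacking time or resources) cannot sustain the behaviour:
print(CombProfile(True, False, True).behaviour_enabled())  # False
```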

For service users, we seek to understand their experience of personalised care and which elements are most important to them. Service users are asked 25 questions across three domains: what matters to you; developing a personalised care plan; and information and support. Each question has a 4-point response scale (‘this always happens’ to ‘this never happens’).

For front-line practitioners, we collect self-reported data relating to what they do (personalised care behaviour). These surveys include 25 questions across three domains: understanding the person; developing a personalised plan; and information and ongoing support. These questions are mapped to those within the service user survey and are answered using a 4-point response scale (‘I do this often’ to ‘I never do this’). To understand the factors driving these behaviours, we explore enablers through a series of questions underpinned by the COM-B behaviour change model.17

For service managers and commissioners, we focus on how they enable personalised care delivery within the services that they are responsible for. Both the service manager and the commissioner surveys ask 27 questions across three domains: enabling capability; creating opportunities; and enhancing motivation. Word count and readability statistics for each survey are given in table 1.

Table 1

Readability statistics

Behaviour questions are mirrored across the surveys, to enable an understanding of concepts from multiple perspectives, for example (an illustrative sketch follows this list):

  • Front-line clinician: I ask people how they would like to be contacted (eg, email, text, phone, face to face).

  • Service lead/manager: Staff are enabled to communicate with service users in a range of different formats (eg, email, text, phone, face to face).

  • Commissioner: When we commission services, we ensure that service users are offered choice about how they are contacted (eg, email, text, phone, face to face).

  • Service user: I am asked about how I would like to be contacted (eg, email, text, phone, face to face).
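
As an illustration only, such mirroring could be represented in the digital package as a simple mapping from a behaviour concept to its stakeholder-specific wording. The sketch below uses the example above; the concept key and data structure are hypothetical, our own devising rather than the tool's actual implementation.

```python
# Hypothetical representation of one mirrored behaviour concept, keyed by
# stakeholder group; the wording is taken from the example above.
mirrored_items = {
    "contact_preference": {
        "clinician": "I ask people how they would like to be contacted "
                     "(eg, email, text, phone, face to face).",
        "manager": "Staff are enabled to communicate with service users in a "
                   "range of different formats (eg, email, text, phone, face to face).",
        "commissioner": "When we commission services, we ensure that service users "
                        "are offered choice about how they are contacted "
                        "(eg, email, text, phone, face to face).",
        "service_user": "I am asked about how I would like to be contacted "
                        "(eg, email, text, phone, face to face).",
    }
}

# The mirrored structure lets one concept be compared across all four surveys.
for group, wording in mirrored_items["contact_preference"].items():
    print(f"{group}: {wording}")
```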

In each staff survey, the second section explores factors that enable the reported behaviours to happen. In total, 12 statements are provided, which relate to the capability, opportunity and motivation aspects of the COM-B model. The respondents are asked which of these statements apply to them, for example (a short tallying sketch follows this list):

I have (tick all that apply to you):

  • Conversational skills to better enable personalised conversations (capability).

  • Resources available to help me do this (opportunity).

  • A belief that it is necessary (motivation).
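
As a minimal sketch of how such 'tick all that apply' responses could be tallied by COM-B component, consider the following; the statement-to-component mapping uses only the three example statements above (not the tool's actual 12), and the response records are invented.

```python
from collections import Counter

# Hypothetical mapping from checklist statements to COM-B components
# (a made-up subset, not the tool's actual 12 statements).
statement_component = {
    "Conversational skills to better enable personalised conversations": "capability",
    "Resources available to help me do this": "opportunity",
    "A belief that it is necessary": "motivation",
}

# Each respondent's record is the set of statements they ticked (invented data).
responses = [
    {"Conversational skills to better enable personalised conversations",
     "A belief that it is necessary"},
    {"Resources available to help me do this",
     "A belief that it is necessary"},
]

# Count endorsements per COM-B component across respondents.
tally = Counter(statement_component[s] for ticked in responses for s in ticked)
for component, count in tally.items():
    pct = 100 * count / len(responses)
    print(f"{component}: endorsed by {pct:.0f}% of respondents")
```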

Copies of each survey are available in online supplemental files 1-4.

Strategy

Early versions of the surveys were shared with a small number of individuals (n=3) who had not otherwise been involved in their development. Feedback was used to improve survey structure and clarity. Each staff survey was then transferred into electronic survey software (SurveyMonkey). Service user surveys were kept in paper format.

In our first round of pilot testing, the self-assessment was trialled by four clinical services. This allowed us to assess the relevance of each item and the feasibility of the process (informed by survey analytics data). Response data was analysed using Excel and manually collated into a team report using descriptive statistics. Data was presented using stacked bar graphs, aster diagrams and infographics. Reports were shared and discussed with each clinical service, who provided feedback on ease of interpretation, relevance and usefulness of data. Feedback was used to refine and streamline content and usability of both the surveys and the report.
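
The analysis itself was done in Excel. As an illustration, an equivalent descriptive summary and stacked bar chart could be produced in Python (pandas/matplotlib) as sketched below; the item names and response counts are invented for illustration only.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical response counts on the 4-point scale for two survey items;
# item names and numbers are invented, not the study's data.
data = pd.DataFrame(
    {
        "this always happens": [12, 5],
        "this often happens": [18, 10],
        "this sometimes happens": [9, 16],
        "this never happens": [3, 11],
    },
    index=["Asked what matters to me", "Involved in decisions about my care"],
)

# Convert counts to row percentages and draw a stacked bar chart,
# mirroring the descriptive presentation used in the team reports.
percentages = data.div(data.sum(axis=1), axis=0) * 100
ax = percentages.plot(kind="barh", stacked=True)
ax.set_xlabel("% of service user responses")
plt.tight_layout()
plt.show()
```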

In our second round of pilot testing, the tool was used with a further eight services. An iterative process was adopted through this second stage, to enable both the surveys and report to be continually refined. Through this process, we also developed a better understanding of how clinical services engaged with the process of a service evaluation and how they used this to inform improvements. Completing this process with eight teams allowed us to test the usefulness of the self-assessment in a broad range of settings (table 2).

Table 2

Population and setting of services involved with each round of testing

Once structure and content were finalised, the package of surveys was digitised. The digital package allows surveys to be completed electronically and the report template to be auto-populated. For service users, the option of a paper survey is maintained, with data entered manually by an administrator if required.

Results

To date, 24 clinical services have completed a WASP Service Evaluation and have subsequently received a WASP Report. These teams deliver services in a range of settings, primarily for people with chronic and long-term conditions (table 2). In total, 397 service users, 313 front-line practitioners, 73 service managers and 40 commissioners have completed a WASP self-assessment survey (as of October 2022). As each service uses a different approach to sampling, often using multiple strategies to collect service user data, it is not possible to report a response rate. Characteristics of service user and front-line staff respondents are shown in table 3.

Table 3

Respondent characteristics

The WASP Report is divided into three themes: understanding the person; developing a personalised plan; and ongoing support. Each section includes insights into healthcare professional ‘behaviours’ and the factors that enable those behaviours to happen (the capabilities, opportunities and motivations).

Findings from our preliminary data highlight differences between healthcare professionals' perception of what they deliver and service users' experience of care. These differences were seen across all themes and were greatest in relation to understanding what matters, making shared decisions and offering choice. While 85% of front-line clinicians reported that they always or often ask open questions and explore what matters to the person, only 37% of service users experienced this. Similar differences were found when asked about setting personalised goals (68% of front-line staff reported doing this; 22% of service users experienced it) and making joint decisions (81% of front-line staff reported doing this; 37% of service users experienced it). In relation to ongoing support, 89% of front-line staff reported that they always or often offer choice, while only 27% of service users agreed. Written communication was not always provided to service users in a way that was understandable, with only 33% regularly receiving information about their condition that they found helpful, despite 70% of front-line staff believing that they had provided it.
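
These staff versus service user gaps can be summarised directly from the percentages reported above. The short sketch below reproduces that arithmetic; the figures come from the preceding paragraph, and the behaviour labels are our shorthand.

```python
# Staff-reported vs service-user-experienced percentages from the text above.
reported = {
    "explores what matters (open questions)": (85, 37),
    "sets personalised goals": (68, 22),
    "makes joint decisions": (81, 37),
    "offers choice in ongoing support": (89, 27),
    "provides helpful written information": (70, 33),
}

for behaviour, (staff_pct, user_pct) in reported.items():
    gap = staff_pct - user_pct
    print(f"{behaviour}: staff {staff_pct}% vs service users {user_pct}% "
          f"(gap {gap} percentage points)")
```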

Amalgamated data for the capability, opportunity and motivation aspects of the tool is provided in figure 1. This data relates to front-line staff and the factors they perceive to enable them to work in a personalised way.

Figure 1

Aggregated enablers of personalised care delivery, as reported by front-line clinicians. These aster charts represent the proportion of front-line staff who reported having the capability, opportunity or motivation outlined in each statement. The fuller the segment, the more people reported having that particular aspect.
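
For readers unfamiliar with aster charts, the sketch below shows their general construction in matplotlib: a polar bar chart in which the fill of each segment encodes a proportion. The labels and values are invented for illustration and are not the study's data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented proportions of front-line staff endorsing each enabler;
# labels and values are illustrative, not the study's data.
labels = ["Knowledge", "Skills", "Resources", "Time",
          "Social support", "Role belief", "Habit", "Prompts"]
proportions = [0.82, 0.78, 0.49, 0.45, 0.49, 0.71, 0.38, 0.42]

n = len(labels)
angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
width = 2 * np.pi / n

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
# Each segment's radius encodes the proportion endorsing that enabler;
# a fuller segment means more staff reported having it.
ax.bar(angles, proportions, width=width, bottom=0.0, align="edge",
       edgecolor="white")
ax.set_ylim(0, 1)
ax.set_xticks(angles + width / 2)
ax.set_xticklabels(labels, fontsize=8)
ax.set_yticklabels([])
ax.set_title("Aster chart: enablers of personalised care (illustrative)")
plt.tight_layout()
plt.show()
```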

Overall, capability was the strongest domain, with front-line practitioners typically reporting that they have the knowledge and skills to understand the person and develop personalised plans. They were less confident in their capability to deliver those plans, which may be due (in part) to lack of service provision. Across all areas, front-line staff reported that they lacked evidence that service users consider personalised approaches important. They also identified specific areas for improving capability; for example, around half of respondents identified the need to improve their knowledge regarding how to personalise information (49%) and how to support individuals to access ongoing support (54%).

Consistently across themes, front-line staff reported a lack of opportunity to deliver personalised care. Less than half (49%) of respondents felt they had the social opportunity to personalise care delivery (eg, colleagues who work in a similar way). Views around physical opportunity were also low, with the same proportion (49%) reporting a lack of resources. Time, prompts and measures were also identified as challenges.

With regards to motivation, participants typically agreed that personalised care was part of their role (71%) and believed that it is necessary (69%). However, they lacked automatic motivation such as external reporting requirements and being in the habit of exhibiting specific personalised care behaviours.

Although we have provided examples relating to service user and front-line staff comparisons, the report also contains insight into the views and experiences of service managers and commissioners, to give a full, system-wide view.

While it is possible for a service to complete a WASP Service Evaluation as a standalone process, it is also offered as part of a programme of training, mentorship and quality improvement support. Teams taking part in this programme use their WASP Report to inform quality improvement projects and to monitor the change that results from their implementation.

Lessons and limitations

Given that practitioners may experience uncertainty or difficulty in adopting personalised care,18 it is important to understand the factors influencing personalised care delivery in a specific service or setting, so that solutions can be found and gain local buy-in. One of the strengths of the WASP Service Evaluation is that it uses behaviour change theory to provide a comprehensive insight into local practice, directing and informing the process of quality improvement. Various models and frameworks have been designed to understand the processes and factors involved in implementation. Use of the CFIR as an overarching framework enabled the WASP tool to be developed with implementation as a central concept.

It is important to note that the WASP Service Evaluation is not an implementation tool; rather, it is designed to capture information and insight regarding local delivery of personalised care, from the perspective of multiple stakeholders. Used alongside implementation frameworks, of which there are many,19 20 it can be an enabler for personalised care delivery. Using a broader range of tools to (1) inform future development of the tool and (2) develop and describe strategies and processes for how the tool is used will help to maximise its value and impact. For example, although designed for use in technology-driven interventions, frameworks such as the non-adoption, abandonment, scale-up, spread, and sustainability (NASSS) framework21 address key determinants of organisational readiness and could be used to evaluate the longer term impact of the WASP tool in relation to sustained adoption, non-adoption and roll-out of personalised care initiatives.

Our surveys are structured using the COM-B framework,17 to provide detailed insights into what happens in practice (behaviours) and the assets that drive (or do not drive) those behaviours (the capabilities, opportunities and motivations). For example, do staff within the service have training needs; are there organisational processes that are inhibiting personalised care delivery; or are there cultural aspects that need addressing? This detailed understanding can assist healthcare teams to target improvements in a systematic and informed way, which in turn may increase the likelihood of those improvements being sustained.

Clinical services will all be at different stages of readiness to deliver comprehensive personalised care. Data collection takes time and each service needs to consider their strategy to achieve this. Understanding findings from the report and acting on this requires a culture that is open to challenge and change. We have found that the assessment process is better suited to those services that are already contemplating personalised care or have an external motivator, such as a service or commissioning objective.

Although the core components of personalised care are unlikely to change, the strategic landscape is forever evolving, with differing priorities, language and initiatives in the personalised care field. To avoid becoming outdated, the WASP Service Evaluation process has been designed to capture insights into the core principles of personalised care delivery. However, one limitation is that, as priorities shift, elements of the tool may require review. While a strength is that the process can be used in any setting, a limitation is that it may not cover aspects of personalised care that are very specific to a certain population, and some aspects of the assessment may be less relevant to certain staff or patient populations. Services must therefore use their findings alongside other sources of information and take time to understand and interpret their report.

Conclusion

Data collected through the WASP evaluation process gives services a unique understanding of the 'how and why' of personalised care delivery. Importantly, the process and the data generated provide insight into (1) different perceptions of personalised care delivery across the health and social care system and (2) the factors that enable (or inhibit) those personalised behaviours. Facilitating understanding of personalised care delivery from the perspective of a range of stakeholders has the potential to unlock opportunities to implement meaningful change. The WASP Service Evaluation is valuable as one part of the process of quality improvement: it allows improvement initiatives to be identified and targeted, and repeat assessments can be used to monitor change.

We have shown this process to work in a wide range of settings; it therefore has potentially broad applicability as a tool for cultural change and quality improvement. A strength of this evaluation process is that it collates information from a range of stakeholders and perspectives, using a behaviour change approach to identify both what and how personalised care delivery can be improved within complex healthcare systems. The next stage of this work will focus on validation and an evaluation of impact, to fully understand if and how the tool can be used to drive improvements in personalised care delivery, including how it can be used over time to understand and monitor change.

Data availability statement

Data are available upon reasonable request.

Ethics statements

Patient consent for publication

Acknowledgments

The authors would like to thank all project steering board members for their invaluable contributions to this work and Dorset Clinical Commissioning Group and Hants and Isle of Wight Integrated Care Board for providing the initial quality improvement funding that enabled initiation of the project.

References

Footnotes

  • Twitter @PhysioLouiseJ, @WASP_Pers_Care

  • Contributors LJ, HK, SH and CA were responsible for the concept and developing early versions of the Wessex Academy for Skills in Personalised Care Service Evaluation. LJ, HK, SH, CA, CR and BC developed and designed subsequent iterations and were involved in testing and data collection. LJ, CR and BC were responsible for data review, analysis and reporting. MCP provided academic supervision and methodological expertise. All authors reviewed and contributed to the manuscript. LJ, HK, BC and MCP finalised the manuscript. LJ is guarantor and accepts responsibility for the conduct of the work and the finished manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting, or dissemination plans of this research. Refer to the Design section for further details.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.