A systematic review of reliable and valid tools for the measurement of patient participation in healthcare
  1. Nicole Margaret Phillips1,2
  2. Maryann Street1,2
  3. Emily Haesler1
  1. School of Nursing and Midwifery, Deakin University, Burwood, Victoria, Australia
  2. Deakin University Centre for Quality and Patient Safety Research, Burwood, Victoria, Australia
  Correspondence to Assoc Prof Nicole Margaret Phillips, School of Nursing and Midwifery, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia; nikki.phillips@deakin.edu.au

Abstract

Introduction Patient participation in healthcare is recognised internationally as essential for consumer-centric, high-quality healthcare delivery. Its measurement as part of continuous quality improvement requires development of agreed standards and measurable indicators.

Aim This systematic review sought to identify strategies to measure patient participation in healthcare and to report their reliability and validity. In the context of this review, patient participation was constructed as shared decision-making, acknowledging the patient as having critical knowledge regarding their own health and care needs and promoting self-care/autonomy.

Methods Following a comprehensive search, studies reporting reliability or validity of an instrument used in a healthcare setting to measure patient participation, published in English between January 2004 and March 2014 were eligible for inclusion.

Results From an initial search, which identified 1582 studies, 156 studies were retrieved and screened against inclusion criteria. Thirty-three studies reporting 24 patient participation measurement tools met inclusion criteria, and were critically appraised. The majority of studies were descriptive psychometric studies using prospective, cross-sectional designs. Almost all the tools completed by patients, family caregivers, observers or more than one stakeholder focused on aspects of patient–professional communication. Few tools designed for completion by patients or family caregivers provided valid and reliable measures of patient participation. There was low correlation between many of the tools and other measures of patient satisfaction.

Conclusion Few reliable and valid tools for measurement of patient participation in healthcare have been recently developed. Of those reported in this review, the dyadic Observing Patient Involvement in Decision Making (dyadic-OPTION) tool presents the most promise for measuring core components of patient participation. There remains a need for further study into valid, reliable and feasible strategies for measuring patient participation as part of continuous quality improvement.

  • Patient-centred care
  • Shared decision making
  • Healthcare quality improvement

Background

Patient participation in healthcare is a key component of high-quality care. It is associated with improved patient outcomes, including shorter hospital stays,1 reduced readmission,2 improved functional status3 and reduced mortality.4 Patient participation contributes to enhanced decision-making, reduced medical error and adverse events, improved adherence, optimised self-management and increased staff retention.5 ,6 With this in mind, facilitation of patient participation is internationally recognised as essential for consumer-centric, high-quality healthcare delivery.

The concept of patient participation remains ill-defined.7 ,8 Synonymous terms, including empowerment, patient-centredness, partnership and collaboration,6 imply patient participation, but they do not necessarily encapsulate the conceptual depth of the term. A recently published conceptual model of patient-centredness8 included 19 behaviours (eg, communicating, engaging, listening, supporting) and 19 subcomponents (eg, empathy, individualisation, respect, trust) that underpin the broad concept.

Despite lack of a standardised definition of patient participation, the international trend towards its measurement as part of continuous quality improvement9–12 requires development of agreed standards and measurable indicators.13 The Organisation for Economic Co-operation and Development has developed benchmark standards9 that have been embraced internationally in quality health frameworks.9 ,10 However, reliable measurement strategies remain somewhat elusive. Earlier reviews14 ,15 have reported instruments designed to capture the patient experience, and found that existing instruments fail to adequately capture the concept of participation and/or have not been validated. With many countries now requiring the measurement of patient participation to be demonstrated as part of health system accreditation,9 ,10 it is critical that evidence-based, reliable and valid measurement strategies are identified and explored.

This systematic review sought to identify strategies to measure patient participation in healthcare and report on their reliability and validity. After considering the broad concepts in the empirical literature,8 we determined that the core requirements for patient participation include shared decision-making (SDM), acknowledging the patient as having critical knowledge regarding their own health and care needs and promoting self-care/autonomy (see figure 1). Interpersonal skills provide support for this process. These components were considered key to the measurement of patient participation, and the measurement strategies identified for this review were required to clearly focus on these concepts to meet inclusion criteria.

Figure 1

Conceptual model of key components of patient participation.

Methods

Quantitative and qualitative studies providing evidence on strategies to measure patient participation in healthcare, in which participants were aged >18 years, were eligible for inclusion. Outcomes of interest to the review were the psychometric properties of tools used to measure patient participation and, for qualitative studies, the opinions, attitudes, perceptions and experiences of patients towards strategies to measure their participation in healthcare. Studies published in languages other than English, literature and systematic reviews, non-research studies and studies focused on patient participation in health research were ineligible for inclusion. Because the concept of patient participation in healthcare has advanced considerably in the past 10 years, from paternalistic models to the patient as a key player in healthcare,6 we considered only papers published in the decade from January 2004 to March 2014.

A comprehensive search was conducted in CINAHL, the Cochrane database, Evidence-Based Medicine (EBM) reviews, EMBASE, the Joanna Briggs Institute database, MEDLINE, PsycINFO, ProQuest, Mednar and Google Scholar. The search terms included patient participation, patient involvement, patient-centred care (PCC), evaluation, measurement, assessment, outcome and process assessment, and healthcare delivery (see online supplementary appendix 1). Studies were initially appraised for inclusion based on title and abstract, and those appearing to meet inclusion criteria were retrieved for full appraisal.

Quantitative studies were assessed independently for methodological quality by two reviewers using the criteria in table 1 to determine the appropriateness of analysis techniques and psychometric statistics. Criteria for appraising internal validity included appropriateness of the study design, data collection methods, sample size and sample selection; adequate description of both the measurement tool and the variables and concepts it represented; and appropriate reporting of measures of validity, reliability, feasibility and utility.16–18 Criteria for judging external validity related to how well the tool and its validity and reliability could be generalised across patients and clinical settings.16–18 Any reviewer disagreement during the screening and methodological appraisal process was resolved through consultation with a third reviewer (MS). The primary reviewer (EH) extracted data using a standardised form,16–18 and the second reviewer (NP) confirmed data extraction.

Table 1

Considerations for psychometric statistics used in this review

No qualitative studies were identified for inclusion in this review; therefore, the prepublished methods16–18 for appraisal, data extraction and synthesis of qualitative studies are not included in this publication.

Review results

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram23 is presented in figure 2. Initial searches identified 1582 studies that potentially met inclusion criteria. Use of search terms describing patient-centredness and outcome and process assessment contributed to the identification of a large volume of papers. After review of titles and abstracts, 156 studies were retrieved and fully screened against the inclusion criteria. A total of 123 publications did not meet the review objective or inclusion criteria and were excluded (see online supplementary appendix 2). Methodological appraisal was conducted on the 33 included studies24–57 (see online supplementary table S2).

Figure 2

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) review flow.

The 33 included studies reported 24 patient participation measurement tools. The majority of studies were descriptive psychometric studies using prospective, cross-sectional designs. Online supplementary appendix 3 summarises the methodological appraisals.

There was significant heterogeneity in terms of concepts/variables used to measure patient participation. Patient-centredness was central to the design of many tools. Almost all tools focused on aspects of patient–professional communication. The majority of tools included a specific item referring to SDM; however, in some studies, this concept was implicit rather than explicit. Some tools included facilitation of self-care. Online supplementary table S2 provides an overview of the participants, clinical settings and tool descriptions from included studies.

Review findings

The credibility of tools used for measuring patient participation in healthcare is determined by their ability to accurately capture the patient's experience of participation. Validity and reliability are crucial to the demonstration of credibility.58 A summary of the psychometric properties of tools measuring patient participation reported in each study is provided in online supplementary tables S3 (validity) and S4 (reliability). Online supplementary table S5 presents the internal consistency (reliability) properties for items on the measurement tools that related to the three components considered most central to the concept of patient participation in this review.

Tools completed by patients

Fifteen tools designed for completion by patients met the inclusion criteria for this review. These tools were developed and tested in the USA (n=6), the Netherlands (n=4), the UK (n=1), Canada (n=1) and other European nations (n=3). Sample sizes were generally moderate to large (87–19 568 participants), and a broad range of clinical settings were represented. Tools were tested in general/family practice (n=6), community-based care settings (n=3), general medicine (n=3), rehabilitation (n=1), surgery (n=1) and specialty care settings (n=7), with some tools tested in more than one clinical setting.

Measurement of SDM

Of the concepts considered in this review to be core to patient participation, SDM was represented most often on the patient-completed tools. Only two patient-completed tools, the Patient–Doctor Relationship Questionnaire (PDRQ-9) and the Patient Enablement Instrument (PEI),43 did not include at least one item or subscale related to SDM.

As presented in online supplementary table S5, internal consistency for items related to SDM on patient-completed tools ranged from alpha=0.63 to alpha=0.88. Of those tools for which an alpha coefficient was reported, the only one that failed to reach adequate reliability was the Patient Participation Emergency Department (PPED) questionnaire (alpha=0.63).32 Ethnicity was identified as one factor that influenced the reliability of SDM items on the Prenatal Interpersonal Processes of Care (PIPC),40 suggesting that the heterogeneity in patient populations used in the studies contributed to the variability in findings between tools.
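
These internal consistency figures are Cronbach's alpha values, for which values of at least 0.70 are conventionally treated as adequate for group-level comparisons. As a minimal, illustrative sketch only, the snippet below shows how an alpha coefficient is calculated from item-level responses; the item names and data are hypothetical and not drawn from any of the included studies.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert-type items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]                              # number of items in the (sub)scale
    item_variances = items.var(axis=0, ddof=1)      # variance of each individual item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical SDM subscale: three 5-point items answered by six patients
responses = pd.DataFrame({
    "sdm_item_1": [4, 5, 3, 4, 2, 5],
    "sdm_item_2": [4, 4, 3, 5, 2, 4],
    "sdm_item_3": [5, 4, 2, 4, 3, 5],
})
print(f"alpha = {cronbach_alpha(responses):.2f}")  # below ~0.70 is usually judged inadequate
```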

Aside from the wide variety of clinical settings and types of patients completing the measurement tools, there was heterogeneity in the way tools addressed SDM. Some tools, including the PPED,32 the PIPC40 and the Measurement of Involvement in Decision Making (IDM),27 specifically measured the patient's level of involvement in SDM. Other tools measured the opportunity to be involved in SDM,25 whether the patient was asked if they wished to be involved in SDM36 or the level of importance the patient placed on SDM.24

The SDM Questionnaire (SDM-Q) addressed SDM in the most comprehensive manner, with inclusion of 11 items that measured different components of SDM, such as the level to which a patient's thoughts were considered, weighing up treatment options with the doctor, selection of treatment together and joint responsibility for the decision.35

Measurement of patient as an expert in his or her care needs

Only four patient-completed tools were considered to have included measurement of the patient as an expert of his or her own care needs. Items that related to this aspect of patient participation in healthcare were included in the modified Perceived Involvement in Care Scale,33 ,34 the PPED,32 the PIPC40 and the Client Centred Care Questionnaire (CCCQ).25 ,26 As presented in online supplementary table S5, the items related to the patient's expertise in his or her care needs had good-to-excellent reliability (alpha=0.82 to alpha=0.90).

There was significant variability in the way this component of patient participation was addressed in the measurement tools. The CCCQ, which is designed for use in the home care setting, had the strongest focus on the expertise of the patient regarding his or her own needs, including items reflecting the opportunity the patient is given to draw on his or her own experience of care needs and to arrange and organise his or her own care requirements.25 ,26

Contribution to one's own care

Contribution to self-care was included in the unnamed tool by Arnetz et al,24 the CCCQ,25 ,26 the PEI, the PDRQ-9, the Interpersonal Processes of Care (IPC)36 and the PIPC.40 On the tools for which the alpha coefficient was reported (see online supplementary table S5), reliability of items addressing this component of patient participation was moderate to good (alpha=0.75 to alpha=0.86).

As with the other components of patient participation, there was wide variability in the wording of items related to the contribution the patient makes to his or her care. The PEI focused on the term ‘enablement’, which the tool developers interpreted as the patient's ability to understand and cope with illness; a component of this was being able to contribute to self-care and keep oneself healthy.43 The PIPC included items appraising how often the patient is provided with assistance to monitor symptoms and engage in activities to promote a healthy pregnancy. The PDRQ-9 initially included one item reflecting the patient's ability to contribute to his or her care, but this item was removed in the final version.

Almost all the tools designed for completion by patients measured aspects of interpersonal skills inherent to the promotion of patient participation. Characteristics commonly evaluated included provision of information, asking questions, listening, having sufficient time and respect.

Tools completed by family caregivers

Only two tools designed for completion by family caregivers, the Family Perceived Involvement and the Family Importance of Involvement, were identified for inclusion in this review. The tools are designed to be used together and, as such, provide a measure of actual participation as well as the significance the family places on being involved. The tools address family involvement in SDM, recognise the family as having information critical to the patient's care needs and include assessment of the family's involvement in direct care. However, these tools are designed for use in residential aged care and would not be appropriate for other clinical settings.44

Tools completed by observers

Five different patient participation measurement tools designed for completion by an observer were identified for inclusion in this review. The observer-completed tools were tested in general/family practice settings (n=5), a medical school (n=1) and outpatient clinics (n=3). The tools were tested in the USA (n=3), the Netherlands (n=1), Germany (n=2) and the UK (n=1). Some observer-completed tools were tested in more than one clinical setting and geographical location. These tools are primarily designed to measure SDM as an iterative process between patient and healthcare provider within a consultation.

This review identified five studies45–49 exploring the Observing Patient Involvement in Decision Making (OPTION) tool, four of which examined patient interactions with general practitioners (GPs), while one included patients attending a psychiatric consultation. The internal consistency of the OPTION ranged from moderate45 to strong46 and very strong.47 Both inter-rater reliability47 ,48 and intrarater reliability48 were strong. The OPTION item addressing whether the clinician elicited the patient's preference regarding SDM had the weakest reliability of all the items.48
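
Inter-rater reliability for observer-scored instruments such as the OPTION is typically reported as a chance-corrected agreement statistic. The sketch below is a rough illustration under stated assumptions: the item scores from two observers are hypothetical, and weighted Cohen's kappa is used as the chosen statistic, which is only one of several agreement measures used in the included studies.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical OPTION-style item scores (0-4) given by two observers to the
# same ten recorded consultations
rater_a = [3, 2, 4, 1, 3, 2, 0, 4, 3, 2]
rater_b = [3, 2, 3, 1, 4, 2, 1, 4, 3, 2]

# Quadratic weighting penalises large disagreements more than near-misses,
# which suits ordinal rating scales
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")  # values above ~0.80 are often read as strong agreement
```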

The Four Habit Coding Scheme (4HCS) is an observer coding system that takes 2–5 min to complete, but coders required up to 10 h of training to achieve high inter-rater reliability.51 Two studies50 ,51 investigated the tool's psychometric properties and compared them with those of a second observer measure of patient participation, the Measure of Patient-Centred Communication (MPCC) coding system.51 Internal consistencies for two of the 4HCS subscales were inadequate,51 although inter-rater reliability was strong.50 ,51 There was no correlation between the 4HCS and patient ratings of consultations.50 ,51 The MPCC50 showed strong inter-rater reliability, although there was low correlation with the 4HCS and no correlation with patient perception of patient-centredness.50

The Rochester Participatory Decision-Making Scale (RPAD) is an observational coding system focused on the physician's communication style and manner of engaging in SDM. The RPAD had poor-to-no correlation with the MPCC coding system or with patient surveys measuring trust in and satisfaction with the physician and perception of the physician's knowledge of the patient. There was low correlation between the RPAD and simulated patients’ ratings of trust in the physician.52

Wilkerson et al53 developed an observational scale to measure the patient-centred communication skills of medical students. The embedded PCC scale was compared with a sample of medical students' Objective Structured Clinical Examination (OSCE) scores. The tool had poor internal consistency and poor correlation with the OSCE physical examination and history-taking scores, both of which include assessment of communication skills. There was low correlation with the overall OSCE score and moderate correlation with the OSCE communication scale score.53

Tools completed by more than one stakeholder

As with observer-completed tools, the tools designed for completion by more than one stakeholder focus on patient participation, particularly SDM, as an iterative process. Three measurement strategies were identified; however, one research team did not report any psychometric properties of their quantitative tool.56 The three tools were used in the UK, Indonesia and Canada. Two were used in general practice settings, and the third was used in internal medicine.

One research team54 ,55 developed the dyadic-OPTION—a revised version of the observer-completed OPTION tool—to measure SDM from patient, observer and clinician perspectives. After adaptation and face validation,55 the dyadic-OPTION was trialled in a pilot study.54 There was a moderate correlation between the patient and GP dyadic-OPTION scores and a moderate correlation between the GP-OPTION scores and observer-OPTION scores. There was almost perfect correlation between the two observer scores.54 The tool is designed for general practice but could be adapted to other settings.

The dyadic-SDM model is also under development. A sample of physicians and patients used the tool following a consultation, and 259 unique dyads were available for analysis. Exploratory and confirmatory factor analyses were conducted on the seven subscales, most of which were found to have strong internal consistency. Four scales were found to have similar meanings for both physicians and patients, suggesting that dyadic analysis of these scales would provide a reliable and valid measurement for both parties in a consultation. However, correlations between the subscales and items on the OPTION scale were poor.57
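
Exploratory factor analysis of this kind groups questionnaire items into underlying subscales according to their shared variance. The sketch below is only a rough illustration under assumed conditions (random item responses and an assumed 21 items forming seven subscales) and is not a reproduction of the authors' analysis; it simply shows how item loadings on candidate factors can be inspected.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical matrix: 259 dyad responses to 21 five-point items
# (assumed to form seven subscales of three items each)
item_responses = rng.integers(1, 6, size=(259, 21)).astype(float)

fa = FactorAnalysis(n_components=7, random_state=0)
fa.fit(item_responses)

# Loadings matrix (items x factors): items loading strongly on the same
# factor are candidates for a common subscale
loadings = fa.components_.T
print(loadings.shape)  # (21, 7)
```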

Discussion

As whole measurement instruments, few of the reviewed tools designed for completion by patients provided an overall valid and reliable option for measuring patient participation. The PPED, Consumer Quality Index (CQI)-Cataract and SDM-Q currently have inadequate psychometric properties. There were insufficient psychometric properties reported to evaluate the reliability and validity of the IDM, PDRQ-9 and Individualised Care Scale. The final version of the IPC was untested. The CQI-Dialysis is a tool designed for a specific clinical setting and peripheral to patient participation. The PEI had a moderate correlation with a tool that measured patient satisfaction (the Consultation Satisfaction Questionnaire), but did not correlate with observer ratings.43 The Consumer Assessment of Healthcare Providers and Systems (CAHPS) showed promise; however, the item explicitly referring to SDM was removed from the tool.

None of the observer-completed tools reported in this review provide a reliable and valid measure of patient participation. These tools focus on measuring communication skills that imply a level of SDM, and many had moderate-to-good construct validity. However, except for the OPTION and 4HCS, they do not explicitly address SDM and/or acknowledge the patient's expertise in their own needs. The RPAD, MPCC and embedded PCC scales have inadequate psychometric qualities. The OPTION, 4HCS, MPCC, RPAD and MRCGP examination have little-to-no correlation with patient-completed measurement tools. As they included specific items addressing core elements of patient participation, the OPTION and 4HCS tools showed the most promise. The challenge is resolution of the disparate evaluation of patient participation among observers, clinicians and patients.

The dyadic-OPTION is reliable and valid54 in measuring all stakeholder perspectives of care. The tool was only in pilot stage in the studies included in this review, but appears to offer a reasonable tool for patient participation measurement.54 ,55 A second dyadic-SDM model in trial stages57 also shows promise, but its length currently reduces utility.

Comparison with other measures of patient participation or care quality

Comparison with other measures of patient participation was not included in many of the studies; however, some tools demonstrated moderate-to-strong correlation with other measures of patient satisfaction, such as patient rating of overall satisfaction with care,25 ,27 the Patient Satisfaction Questionnaire33 and the Schmidt Perception of Nursing Care Survey.42

However, low correlation between many of the patient-completed tools and other patient participation measurement tools, other measures of patient satisfaction or overall ratings of the health professional was an issue.32 ,35 ,43 Further, when observer-completed tools were compared with other measures of patient participation and care quality, correlations were poor. For example, the MPCC had low correlation with the patient's perception of the level of patient-centredness of the consultation, and there was also low correlation between the MPCC and the 4HCS.50 The 4HCS had low or no correlation with the Roter Interaction Analysis Scheme, the duration of consultations, or patient ratings of the level of information provided, the level of respect given by the doctor and the doctor's competence.51 Siriwardena et al49 found only a moderate correlation between SDM items on the OPTION and the MRCGP examination, and comparisons of the OPTION with patient-rated outcome measures showed less than favourable psychometric properties.46 ,48
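
Convergent validity in comparisons of this kind is usually assessed by correlating total or subscale scores from two instruments applied to the same encounters. The snippet below is a minimal sketch with made-up scores (the variable names and values are hypothetical); a low coefficient, as frequently reported in the included studies, indicates that the two instruments are not capturing the same construct.

```python
from scipy.stats import spearmanr

# Hypothetical per-consultation totals from a patient-completed tool and an
# observer-completed tool applied to the same eight encounters
patient_scores = [42, 55, 38, 60, 47, 51, 33, 58]
observer_scores = [12, 18, 20, 22, 11, 16, 14, 21]

rho, p_value = spearmanr(patient_scores, observer_scores)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```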

It remains unclear why correlations between patient participation measurement tools and various other measures of consultation quality are generally low. Perhaps these findings reflect the low priority that some patients place on their participation in healthcare, or the failure of some tools to capture the essence of participation. Jonsdottir et al34 found a negative correlation between evaluation of information provision and patient satisfaction with their physician, suggesting that patients do value receiving information. However, it has been noted that patients are often unprepared for a participatory consultation,56 which may influence their evaluations if the experience is outside their expectations.

Point of time in care at which tool is administered

Consideration should be given to the variation in the administration methods used for the measurement tools included in the review. Observer tools were completed during the administration of care; however, there was wide variation in the time between care delivery and administration of patient-completed measurement tools. While some tools were completed immediately following consultations, others were completed by patients via postal surveys up to 12 months following care. The delay in administration of the measurement tools may have influenced patient recall and perception of their own participation in care.

Feasibility of tool delivery

Formal evaluation of the feasibility of delivering the measurement tools was not included in most of the studies. Patients reported that the PEI tool was easy to use; however, the time required to explain the surveys to patients was onerous for registrars and limited the feasibility of administering the tool to large samples.43 The length of many tools also suggests that they may not be feasible for regular delivery. For example, the IPC (85 items), the CQI-Dialysis (71 items), the CQI-Cataract (41 items), the tool by Arnetz et al24 (53 items) and the PIPC (30 items) are long questionnaires that may not be practical for completion by patients in many clinical settings.

Tools completed by observers were generally shorter than patient-completed tools. The disadvantage of this measurement strategy is the requirement for a trained, non-participant observer capable of viewing consultations between clinicians and their patients, either in real time or on video. Beyond the practicalities of training observers and the time spent coding consultations, the issue of patient confidentiality could reduce the feasibility of these tools in everyday clinical practice.

While the tools completed by more than one stakeholder provided the most promise as a measurement of patient participation, the time required to train raters, deliver the instrument and analyse results may preclude their use in many clinical settings. These tools may be best situated as research aids or, as in many of the studies, for examining the skills of student healthcare professionals.

Limitations

The findings of this review are limited by the inclusion dates. We chose to report on tools developed in the past 10 years because comprehensive reviews of earlier tools have already been published14 ,15 and because the concept of patient participation has progressed significantly over the previous decade. However, the commentary in this field confirms that earlier tools have not attained acceptable levels of reliability and validity in the measurement of patient participation.8 ,14 ,15 ,45 As outlined in online supplementary appendix 2, there was variability in study methods and rigour, sample sizes were often inadequate and some tools were poorly described.

Conclusion

Few reliable and valid tools for measurement of patient participation in healthcare have been developed in the preceding 10 years. The review identified few tools that measured all three concepts considered key to patient participation. Of those reported in this review, the dyadic-OPTION tool presents the most promise for measuring core components of patient participation. There remains a need for further exploration into valid, reliable and feasible strategies for measuring patient participation as part of continuous quality improvement.

Footnotes

  • Contributors NMP: substantial contributions to the concept and design of the work, second reviewer for all identified papers and review of manuscript. MS: substantial contributions to the concept and design of the work and review of manuscript. EH: substantial contributions to the concept and design of the work, first reviewer for all identified papers and preparation of the manuscript.

  • Funding The review was funded by a seeding grant from Centre for Quality and Patient Safety Research, Faculty of Health, Deakin University.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.
