Introduction Potentially inappropriate medicine prescriptions and low-value diagnostic testing pose risks to patient safety and increase health system costs. The aim of the Clinical and Healthcare Improvement through My Health Record usage and Education in General Practice study was to evaluate a scalable online quality improvement intervention that integrated education on a national shared electronic health record with education on rational prescribing and rational pathology and imaging ordering by Australian general practitioners (GPs).
Methods The study was a parallel three-arm randomised trial comprising a prescribing education arm, a pathology education arm and an imaging education arm. Currently practising GPs in Australia were eligible to participate and randomised on a 1:1:1 basis to the study arms after consenting. The response to the intervention in reducing potentially unnecessary medicine prescriptions and tests in each arm was assessed using the other two arms as controls. The primary outcome was the cost per 100 consultations of predefined medication prescriptions, pathology and radiology test ordering 6 months following the intervention, compared with 6 months prior. Outcomes were assessed on intention-to-treat and post hoc per-protocol bases using multilevel regression models, with the analysts blinded to allocation.
Results In total, 106 GPs were enrolled and randomised (prescribing n=35, pathology n=36, imaging n=35). Data were available for 97 GPs at the end of trial (prescribing n=33, pathology n=32, imaging n=32) with 44 fully completing the intervention. In intention-to-treat analysis, there were no significant differences in the rates of change in costs across the three arms. Per protocol, there was a statistically significant difference in the rate of change in pathology costs (p=0.03). In the pathology arm, the rate of increase in pathology costs was significantly lower by $A187 (95% CI −$A340, −$A33) than the prescribing arm, and non-significantly $A9 (95% CI −$A128, $A110) lower than the imaging arm.
Discussion This study provides some evidence of reductions in costs for low-value pathology test ordering among those who completed the relevant online education. The study experienced slow uptake and low completion of the education intervention during the COVID-19 pandemic. Changes were not significant for the primary endpoint, which included all participants. Improving completion rates and combining the education with real-time feedback on prescribing or test ordering may increase the overall effectiveness of the intervention. Given the purely online delivery of the education, there is scope for upscaling the intervention, which may provide cost-effectiveness benefits.
Trial registration number ACTRN12620000010998.
- Healthcare quality improvement
- General practice
- Continuing education, continuing professional development
- Primary care
Data availability statement
Data for this study are available on reasonable request to the corresponding author, subject to ethics approvals.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
Potentially inappropriate medicine prescriptions and low-value diagnostic testing pose risks to patient safety and increase health system costs. No previous studies have investigated a scalable online education intervention incorporating use of a national shared health record to improve prescribing and test ordering quality.
WHAT THIS STUDY ADDS
While completion rates were suboptimal, this study provides some evidence of cost savings from reductions in low-value pathology test ordering among those who completed the relevant online education.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
Scalable, online educational interventions have the potential to deliver health system-level quality improvement and cost savings; however, barriers to uptake, completion and cost-effectiveness need addressing.
Potentially inappropriate medicine prescribing and low-value diagnostic testing pose risks to patient safety, increase health system costs1–4 and are the subject of significant health policy interest and numerous recommendations and guidelines. The Choosing Wisely guidelines are recent examples of international efforts to reduce unnecessary or low-value care.5 However, guidelines require implementation strategies to optimise their effectiveness in changing clinical behaviour.6 While there is little evidence of reductions in inappropriate prescribing and low-value care resulting from the dissemination of the Choosing Wisely guidelines in isolation, clinician-focused Choosing Wisely implementation interventions have been successful, with multicomponent interventions more likely to result in intended changes than single-component interventions.7 Among multicomponent Choosing Wisely interventions, strategies that included clinician education, behavioural nudges, review of service utilisation or new clinical pathways were most likely to demonstrate positive changes in intended outcomes.7 Notably, there have been relatively few randomised controlled trials (RCTs) of Choosing Wisely guidelines implementation strategies.7 This is consistent with the paucity, more generally, of controlled trials of education strategies to empower primary care practitioners to reduce potentially inappropriate medication prescribing8 9 or low-value test ordering.10 11
Arguably, guideline implementation strategies should address barriers to uptake at both the provider and system levels.6 My Health Record (MHR), Australia’s national online patient-controlled health record, provided an opportunity to test an education intervention aimed at improving the use of a system-level enabler. MHR, established in 2012 and administered by the Australian Digital Health Agency, makes individual health information available on a secure online platform. All Australians have an MHR, unless they opted out prior to 31 January 2019.12 In the Clinical and Healthcare Improvement through My Health Record usage and Education in General Practice (CHIME-GP) trial, we tested an intervention that linked education regarding the system-level enabler (MHR) and clinician-level enablers (education, audit and peer interaction) to reduce potentially inappropriate medicine prescribing or low-value pathology or imaging requests by general practitioners (GPs). We posited that a multifaceted education intervention including peer interaction, coupled with the enhanced audit capacity facilitated by MHR, would aid clinical behaviour change. This is consistent with Johnson and May’s schema, in which effective clinical behaviour interventions restructure new practice norms and link them with peer and reference group behaviours.13 We are not aware of any previous RCTs incorporating the use of a nationally available shared health record into online practitioner education for reducing low-value care. In addition, cost information is vital in decisions regarding resource allocation, and in decisions by policy-makers and funders on whether to support scaling up of interventions.14 In this paper, we report the primary outcomes of the CHIME-GP study, including medicine prescribing and test costs, and the health economic analysis. The qualitative and educational outcomes of this mixed methods study will be the subject of future publications.
The study setting included general practices in Australia. The aim of the trial was to test whether the intervention would result in a difference between intervention and control group GPs, in the cost per 100 consultations of preselected potentially inappropriate medicine prescriptions or low-value pathology or diagnostic imaging requests, in the 6 months before and after intervention. The effect of the intervention in each education arm (ie, prescribing, pathology and imaging) was assessed using the other two arms as controls. The design of this randomised three-arm parallel trial has been reported in detail previously.15
Inclusion criteria for GP study participants
Participants had an Australian Pharmaceutical Benefits Scheme (PBS) prescriber and Medicare provider number.
Participants undertook clinical work at least 1 day/week in a practice with MHR access and an electronic health record (EHR) system compatible with PenCS (the health analytics company that provided data extraction services).
No absence from clinical work of more than 8 weeks over the study period.
We initially stipulated that participants practise in an Australian state or territory where pathology and imaging results were included in MHR. Due to slow recruitment during the COVID-19 pandemic, we removed this criterion. The primary outcomes comprised the cost per 100 consultations of preselected medicine prescriptions, pathology and diagnostic imaging requests 6 months before and after intervention. Secondary outcome measures included changes in the rates per 100 consultations of preselected medicine prescriptions, pathology and diagnostic imaging requests 6 months before and after intervention. The prescriptions and diagnostic test requests included in the study were specified a priori for the education sessions and assessed across all three arms of the trial. The specific prescriptions and pathology/imaging requests were selected based on Choosing Wisely recommendations, their frequency in general practice, cost to Medicare and potential for adverse outcomes in patients.4 5 10 16 17
The study was powered for testing significance in subgroup analyses (eg, change in prescribing rates or change in pathology/diagnostic test requests) with a 1:1 intervention:control allocation and a medium intervention effect (f2=0.15) at 80% power and α=0.05. We assumed a practice-level intracluster correlation coefficient of 0.05 and an average of three participating GPs per practice. Thus, the target recruitment was a minimum of 31 participants in each of the three arms of the trial (n=93). To allow for 25% attrition, the study aimed to recruit 40 participants per arm (n=120).
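The clustering and attrition adjustments described above reduce to simple arithmetic. The following is a sketch only, not the study’s actual calculation: the order of operations is an assumption, and the study rounded its recruitment target to 40 per arm, whereas dividing 31 by (1 − attrition) gives a slightly more conservative 42.

```python
import math

def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from randomising correlated GPs within practices."""
    return 1 + (cluster_size - 1) * icc

def recruit_target(n_per_arm: int, attrition: float) -> int:
    """Inflate the analysable per-arm target to cover expected drop-out."""
    return math.ceil(n_per_arm / (1 - attrition))

# Study assumptions: ICC 0.05, three GPs per practice, 25% attrition,
# minimum 31 analysable participants per arm.
print(round(design_effect(3, 0.05), 2))   # 1.1
print(recruit_target(31, 0.25))           # 42 (the study rounded to 40)
```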
Following ethics approval, the trial was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12620000010998). Multiple agencies, including Medcast, PenCS, the Australian Digital Health Agency, the University of Wollongong, the Royal Australian College of General Practitioners and Primary Health Networks, sent invitations to practices by email, fax or other electronic media. More than 100 000 invitations, including follow-up reminders, were distributed to potential participants Australia wide. Because the COVID-19 pandemic affected recruitment during the study period, we extended the recruitment period to 8 months (January to August 2020) instead of the initially planned 4 months (January to April 2020) and conducted the education intervention in two waves instead of one. Wave 1 activities (all three arms) were undertaken in June to August 2020, and wave 2 (all three arms) in September to November 2020. On completion of their education activities, GPs received $A200 and were eligible for professional development points.
We initially planned for randomisation to be at the practice level. Ultimately, we registered and randomised consenting participants individually to one of the three intervention arms. Allocation was on a 1:1:1 basis. A stratified randomisation approach was used to ensure a balance of participants according to practice size (up to five GPs vs six or more GPs) and geographical location categorised by the Australian Rural Remote and Metropolitan Area classification.18 The study statisticians applied a computerised stratified randomisation algorithm (RALLOC in Stata V.17; StataCorp 2021, Texas) to ensure a balanced allocation across the three arms according to practice size and remoteness area. The statisticians provided the randomisation sequence to the project officer, who then allocated participants to the three trial arms on a first-come, first-served basis. The statisticians remained blinded to the education intervention assigned to each group for the intention-to-treat and post hoc per-protocol analyses. The study participants were not blinded to their allocation.
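We did not have access to the actual RALLOC procedure. Purely as an illustration of the stratified 1:1:1 allocation idea described above, a minimal Python sketch follows; the permuted-block mechanism, block count, seed and stratum labels are assumptions for illustration, not the study’s parameters.

```python
import random

ARMS = ["prescribing", "pathology", "imaging"]

def stratified_sequence(strata, n_blocks=20, seed=1):
    """Pre-generate a 1:1:1 allocation sequence for each stratum.

    Each block contains every arm exactly once, so allocation stays
    balanced within strata (e.g. practice size x remoteness).
    """
    rng = random.Random(seed)
    sequences = {}
    for stratum in strata:
        seq = []
        for _ in range(n_blocks):
            block = ARMS[:]
            rng.shuffle(block)       # random order within the block
            seq.extend(block)
        sequences[stratum] = seq
    return sequences

# Example stratum: small practice, metropolitan location.
seqs = stratified_sequence([("small", "metro")])
```

Consenting participants would then receive the next arm in their stratum’s sequence, first come, first served, mirroring the project officer’s role in the trial.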
We used the Australian Choosing Wisely recommendations to inform the education content, along with other expert sources,5 19–23 for each arm of the trial. Medcast designed and delivered the educational intervention for the CHIME-GP study, embedding online peer-to-peer interaction and learning (ie, a virtual community of practice)24 in a multicomponent suite of educational activities.15 Each wave of the education intervention was delivered over a 3-month period with a total time commitment of approximately 6 hours per participant. The intervention content and procedures of the three arms of the trial are summarised in figure 1.
To assess clinical behaviour change and the resultant health economic impacts, we undertook an audit of participants’ prescribing, pathology and imaging requests using deidentified data extracted from the participants’ EHRs. Using an automated process, data were extracted for 6 months before and after intervention and included: age and sex of patients; consultation rates at GP level; and rates of prescribing, pathology and imaging requests at GP level. We assessed consultation rates by extracting episodes from the EHR that were coded as a clinic visit (referred to as consultations).
We assessed costs with reference to Australia’s universal health insurance scheme Medicare using the schedules of Australian government reimbursement for the pathology and imaging tests—Medicare Benefits Schedule (MBS)—and prescriptions (PBS). The MBS and PBS items were identified along with their costs for the 2020/2021 financial year. For MBS items, the Medicare Item Reports were used to determine the actual average cost.25 This was calculated as total Medicare expenses divided by the total number of services nationally. The scheduled PBS dispense price was used for all prescriptions.
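The actual-average-cost rule for MBS items reduces to a single division; a sketch with invented figures (the dollar amounts and service counts are hypothetical, for illustration only):

```python
def average_mbs_cost(total_benefits_paid: float, total_services: int) -> float:
    """Actual average item cost: total national Medicare expenses / services."""
    return total_benefits_paid / total_services

# Hypothetical item: $A1 500 000 paid across 75 000 services nationally.
print(average_mbs_cost(1_500_000, 75_000))  # 20.0
```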
Between-arm differences in changes in medication prescribing, pathology and imaging requests, 6 months before and after intervention, were assessed using linear mixed models with GP participant ID included as a random effect to account for repeated measures. For the primary outcomes, we undertook intention-to-treat analyses, where all participants for whom we had data were included. This also included participants who did not undertake or complete the education modules, were absent from the practice or who withdrew but still permitted use of their data. Post hoc per-protocol analysis included GP participants who had completed the educational interventions, were not absent from practice for more than 8 weeks and had not withdrawn from the study. P values <0.05 were considered statistically significant.
We used the funding to Medcast to undertake the education intervention ($A367 824) as the cost component of the health economic analysis. We compared the full cost of the intervention with the outcomes from both the intention-to-treat and per-protocol analyses. We estimated the change in cost per 100 consultations before and after intervention for prescriptions, pathology and imaging requests, relative to the change observed in the respective two control arms combined, and calculated trial-specific cost-benefit ratios. We applied projections to the per-protocol results to provide an indication of cost outcomes for ‘completers’ if the intervention were to be scaled up. As multifaceted education interventions have been shown to have long-term impact on prescribing and test ordering habits,26 27 we assumed that the impact of the education among ‘completers’ would last for 2 years. All costs are reported in Australian dollars ($A).
In total, 106 GP participants enrolled in the trial: 56 participants in wave 1 and 50 in wave 2. Nine participants withdrew from the study (six after randomisation but prior to the educational activities commencing, and three after the education intervention commenced). Two participants who withdrew permitted their data to be extracted for analysis. Participants cited lack of time as a reason for withdrawal, and one participant had been incorrectly enrolled as they did not meet the eligibility criteria. In the final data collection, data extraction was unsuccessful for two participants. Figure 2 summarises the participant allocation.
Clinical data extracts were received for 97 GPs; 33 in the prescribing, 32 in the pathology and 32 in the imaging arm. The data extracted included records of 593 388 clinic visits, 197 742 pathology and radiology test requests and 209 296 medication prescriptions. The average age of participating GPs was 49 years (SD 11) and 36% were female. Thirty-eight per cent of GPs were working in small practices (up to five GPs) and the remaining 62% in large practices (six or more GPs). The majority (69%) of GPs worked in metropolitan or inner regional zones; 30% worked in rural zones; and only 1% in remote zones. According to the Socio-Economic Indexes for Areas (SEIFA),28 29% of GPs worked in the most disadvantaged areas compared with 26% who worked in the most advantaged areas (based on quintiles). GPs had on average 3104 consultations (SD 2939) during the preintervention period (6 months). The average patient age was 48 years (SD 24); 58% of them were female. See table 1 for a description of the participant sample by study arm.
Before intervention, the average costs per 100 consultations were $A203 (SD $A169) for prescribing, $A721 (SD $A390) for pathology and $A52 (SD $A51) for imaging across the sample. After intervention, these costs had reduced to $A178 (SD $A89) for prescribing, increased to $A826 (SD $A410) for pathology and increased to $A71 (SD $A65) for imaging. Overall, there were no significant differences in the rates of change in costs preintervention to postintervention across the three arms. We assessed changes in prescribing and pathology and imaging requests per 100 consultations as secondary outcome measures. As with the primary outcomes, there were no statistically significant differences in the rate of change across the three arms. See table 2.
Post hoc per-protocol analysis
Forty-four GPs were included in the per-protocol analyses: 15 in the prescribing, 15 in the pathology and 14 in the imaging arms. The average age was 50 years (SD 11) and 36% were female. Forty-one per cent worked in small practices (up to five GPs) and the remaining 59% in large practices (six or more GPs). The majority (64%) of GPs worked in metropolitan zones, 34% in rural zones and 2% in remote zones. According to SEIFA,28 27% of GPs worked in the most disadvantaged areas and another 27% of GPs worked in the most advantaged areas (based on quintiles). GPs had on average 3561 consultations (SD 3785) during the preintervention period (6 months). The average patient age was 48 years (SD 24), with 58% female. Before intervention, the average cost per 100 consultations was $A203 (SD $A93) for prescribing, $A747 (SD $A305) for pathology and $A48 (SD $A47) for imaging requests across the per-protocol sample. After intervention, these costs had reduced to $A198 (SD $A82) for prescribing, increased to $A846 (SD $A381) for pathology and increased to $A71 (SD $A60) for imaging requests. For pathology costs, the rates of change preintervention to postintervention across the three arms were significantly different (p=0.03). Pathology costs were significantly lower by $A187 (95% CI −$A340; −$A33) in the pathology arm compared with the prescribing arm, and non-significantly $A9 (95% CI −$A128; $A110) lower compared with the imaging arm. There was no significant difference in the rates of change across arms for prescribing and imaging costs. Changes in prescribing and pathology and imaging requests per 100 consultations reflected those of the costs, with a significant difference in pathology requests across the arms (p=0.03). See table 3 for per-protocol analysis results.
Intention-to-treat health economic analysis
On an intention-to-treat basis, we estimated the pathology arm achieved the largest relative reduction in costs at −$A69 (95% CI −$A180; $A43), followed by the imaging arm at −$A9 (95% CI −$A29; $A10) and the prescribing arm at −$A5 (95% CI −$A57; $A48). None of these changes were statistically significant. Relative to the average costs across the sample of $A203 (SD $A169) for prescribing, $A721 (SD $A390) for pathology and $A52 (SD $A51) for imaging, the reductions amounted to 17% for imaging, 10% for pathology and 2% for prescribing. During the postintervention period, the estimated total reduction across the three arms was −$A86 693 (95% CI −$A278 378; $A104 979). Projected over 2 years, the total estimated cost reduction was −$A346 772 (95% CI −$A1 113 514; $A419 917).
Compared with the cost of the education, this amounted to a net loss of $A21 052 and a benefit-to-cost ratio of 0.94 for participants: for every dollar of funding for the education, the programme saved $A0.94 in MBS and PBS costs during the following 2 years. Based on the savings per 100 consultations, and an average GP consultation rate of 5438 consultations per annum,29 30 the cost reductions amounted to $A497 (−$A6190; $A5194) for prescribing, $A7453 (−$A19 621; $A4714) for pathology and $A1029 (−$A3157; $A1101) for imaging during the 2 years following the education.
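The net-loss and benefit-to-cost figures above follow directly from the reported numbers: the 6-month intention-to-treat reduction is projected over 2 years (four 6-month periods) and set against the education cost. A sketch reproducing the arithmetic:

```python
EDUCATION_COST = 367_824        # funding to Medcast, $A
SIX_MONTH_REDUCTION = 86_693    # estimated total reduction across arms, $A

projected_2yr = SIX_MONTH_REDUCTION * 4       # four 6-month periods
net = projected_2yr - EDUCATION_COST          # negative => net loss
bcr = projected_2yr / EDUCATION_COST          # savings per education dollar

print(projected_2yr)    # 346772
print(net)              # -21052
print(round(bcr, 2))    # 0.94
```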
Per-protocol health economic analysis
As with the intention-to-treat health economic analysis, there were no statistically significant changes observed in the per-protocol health economic analysis. On a per-protocol basis, we estimated the pathology arm achieved the largest relative reduction in costs at −$A98 (95% CI −$A215; $A20), followed by the prescribing arm at −$A32 (95% CI −$A74; $A11) and the imaging arm at −$A20 (95% CI −$A45; $A4). During the postintervention period, the estimated total reduction across the three arms was −$A91 022 (95% CI −$A203 188; $A21 140). Projected over 2 years, the estimated total cost reduction across the three arms was −$A364 088 (95% CI −$A812 753; $A84 561), with a benefit-to-cost ratio of 0.99 for completers. Based on the savings per 100 consultations, and an average GP consultation rate of 5438 consultations per annum,29 30 the cost reductions amounted to $A3442 (−$A8071; $A1187) for prescribing, $A10 612 (−$A23 430; $A2206) for pathology and $A2196 (−$A4856; $A465) for imaging during the 2 years following the education.
To model the likely cost reductions among completers of a wider rollout we have calculated three scenarios representing low, medium and high completion among Australia’s 30 000 GPs31 and the expected savings within 2 years of completion. This does not include costs of the education, which as an online activity are not expected to increase linearly with participant numbers (see table 4).
We are not aware of any other RCTs implementing guidance on potentially inappropriate prescribing and testing that used a purely online education method and were designed in the context of a centralised health record such as MHR. Coupling the intervention with real-world data extracted from primary care records allowed a meaningful assessment of the educational intervention’s impact. Overall, the study did not demonstrate statistically significant differences across the arms of the trial in rates of change for medication prescribing, test requests or costs. However, in the sample who completed the education, there were statistically significant differences in changes in pathology costs and rates between the pathology and prescribing intervention arms. Potentially, pathology test ordering was more amenable to change from this type of intervention than prescribing or imaging test ordering, with decisions to reduce the number of tests requiring less negotiation with patients. It may also be that pathology records in MHR provide more assistance to doctors in reducing pathology requests than medication records provide for deprescribing or imaging records for imaging requests.
There are few RCTs of education interventions to reduce potentially inappropriate pathology test ordering. A systematic review of education interventions identified six qualifying studies that aimed to reduce groups of pathology tests.11 Of the four trials comparable to this study, three demonstrated some significant changes in test reductions in the order of 7.9–13%. All four studies used a combination of education and regular feedback to GPs on their ordering behaviours.32–35 One study comparing education plus feedback with feedback alone concluded that the combination was more effective.36 Adding contemporaneous test ordering feedback may have augmented the effect of the intervention in our study.
Our study focused on imaging requests for the lower back in the imaging arm. The literature concerning interventions to reduce inappropriate imaging requests for low back pain is also limited. A recent systematic review that identified five RCTs found no evidence of change in practitioners’ ordering behaviour resulting from education or guideline dissemination; audit and feedback provided weak evidence of effect, while clinical decision support and targeted reminders provided the strongest evidence of effect.10 The negative results from our study are consistent with the literature and add to the evidence that clinical behaviour change in this decision-making context requires quite intensive support.
Reducing inappropriate prescribing has received relatively more attention in the literature and is often referred to as ‘deprescribing’. A recent ‘umbrella review’ which synthesised data from systematic reviews of interventions for deprescribing in primary care37 highlighted that deprescribing was a complex process. The authors concluded that deprescribing requires communication and support for the patient by the practitioner, as well as attention to the risk-benefit balance in stopping medications, patient preferences and comorbidities.37 A systematic review of deprescribing interventions in older community-dwelling adults concluded that physician-education only interventions were not effective.8 While the study populations differ, the results of our study support those of the literature and highlight the complexity of effecting reductions in potentially inappropriate medications.
The findings from this study should be interpreted in light of the study’s limitations. The acceptance rate of GPs into the study was very low. This raises concerns about recruitment bias and the possibility that the cohort was particularly motivated regarding the interventions, or otherwise differed from the Australian GP population. There is also some evidence in the findings of overlapping effects of the education across arms, particularly between the pathology and imaging arms. This had the potential to reduce the relative effects of the education intervention specific to each arm. The online education session completion rate was suboptimal. While not unusual in online medical education trials,38 this likely reduced the effect of the intervention in the cohort overall. Potentially, the $A200 incentive payment for completion was insufficient in relation to the time commitment. We are uncertain of the extent to which prescribing, pathology and imaging results were accessible in MHR across the cohorts, potentially confounding results. Finally, we note that recruitment and the intervention occurred in the context of the COVID-19 pandemic, which is expected to have had a negative impact on uptake of the trial and completion rates for the education sessions. We also note that we only counted costs directly related to reduced expenditure on the items included in the education intervention. Wider health system costs due to potentially inappropriate medicines (eg, associated hospitalisations) may exceed the costs of the potentially inappropriate medicines themselves.1 As we did not assess downstream health system effects of the intervention, it is likely that we have underestimated the resulting health system cost savings.
Future research directions
Future directions for research include investigating pathology-specific online education interventions, improving education completion rates, incorporating reinforcing activities such as real-time feedback of performance, investigating efficiencies in the cost of educational delivery and estimating total health system costs associated with the intervention.
This study demonstrated some evidence of reductions in low-value pathology ordering for participants who completed the educational intervention. As the healthcare workforce increasingly accepts and adopts online education engagement in a post-COVID world, scalable and effective education interventions have the potential to deliver quality improvement and cost savings to the health system.
Patient consent for publication
This study involves human participants and was approved by UOW and ISLHD Health and Medical Human Research Ethics Committee (HE 2019/367). Participants gave informed consent to participate in the study before taking part.
The authors thank the general practitioners who were involved in the project. We also thank the Operations Team: CM, Ms Alyssa Horgan, Ms Libby McCardle, Ms Terese Haberle, Ms Jessica McKenzie, Ms Edweana Wenkart and Mr Cheran Gul.
Contributors AB accepts full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish. He made substantial contributions to the conception of the study, the study design, interpretation of results, drafting of the manuscript and review of the manuscript. CK made substantial contributions to the study design, data analysis, interpretation of results, drafting of the manuscript and review of the manuscript. JM made substantial contributions to the study design, interpretation of results, drafting of the manuscript and review of the manuscript. CM made substantial contributions to participant recruitment, drafting of the manuscript and review of the manuscript. SB made substantial contributions to the conception of the study, the study design, participant recruitment and review of the manuscript. MB and JJR made substantial contributions to the study design, interpretation of results and review of the manuscript. AB, CK, JM, CM, JJR, SB and MB read and approved the submitted manuscript.
Funding The funding of this project was provided by the Australian Digital Health Agency and Medcast.
Disclaimer The funders did not have any role in the trial design, analysis or interpretation of results, nor placed any restrictions on publication.
Competing interests SB is medical director and a shareholder in Medcast, who provided the medical education for the study and commissioned the independent evaluation. SB did not have access to any study data or contribute to analysis or interpretation of results.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.