Abstract
Background The National Stroke Audit has been used to audit and provide feedback to health professionals and stroke care services in Australia since 2007. The Australian Stroke Clinical Registry was piloted in 2009, and the number of hospitals participating in the registry is increasing. Considering the changing data landscape in Australia, we designed this study to evaluate the stroke audit and to inform strategic direction.
Methods We conducted a rapid review of published literature to map features of successful data programmes, followed by a mixed-methods study, comprising national surveys and interviews with clinicians and administrators about the stroke audit. We analysed quantitative data descriptively and analysed open-ended survey responses and interview data using qualitative content analysis. We integrated data from the two sources.
Results We identified 47 Australian data programmes; successful programmes were usually funded by government sources or professional associations and typically provided twice-yearly or yearly reports.
The evaluation included 106 survey participants and 19 interview participants (14 clinicians and 5 health administrators). The Stroke Audit was consistently perceived as useful for benchmarking, but there were mixed views about its value for local quality improvement. Time to enter data was the most frequently reported barrier to participation (88% of survey participants), due to the large number of datapoints and features of the audit software.
Opportunities to improve the Stroke Audit included refining Audit questions, developing ways to automatically export data from electronic medical records and capturing accurate data for patients who transferred between hospitals.
Conclusion While the Stroke Audit was not perceived by all users to be beneficial for traditional quality improvement purposes, the ability to benchmark national stroke services and use these data in advocacy activities was a consistently reported benefit. Modifications were suggested to improve usability and usefulness for participating sites.
- audit
- quality improvement
- quality measurement
Data availability statement
Data are available on reasonable request. The datasets used and/or analysed during this study will be available from the corresponding author on reasonable request after the publication of results.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
Audit and feedback are commonly used to monitor and improve quality of care. The Australian National Stroke Audit is conducted every 2 years for each care setting and provides a snapshot of quality of care of Australian stroke services. Given only 40 cases/site are audited every 2 years, it was important to evaluate the benefits of Audit participation.
WHAT THIS STUDY ADDS
Participation in the Audit was valuable to new stroke services and sites with no previous Audit involvement. Sites with long-term participation in the Audit reported equivocal benefit for local quality improvement. Perceived benefits of participating in the audit included availability of data for national advocacy activities and the opportunity to educate clinicians about the stroke guidelines.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
The Audit will continue to provide access to national data for advocacy activities in Australia. Adjustments to Audit reports will occur to allow more sophisticated benchmarking of ‘like’ services. Research and development activities are underway to investigate systems to automatically export data from electronic medical records to the Audit.
Introduction
Audit and feedback is commonly used to improve the quality of healthcare, professional practice and healthcare outcomes.1 2 Audit and feedback consists of measuring the performance of individuals, healthcare teams or services against professional standards or evidence-based recommendations, and then providing a summary to those audited.1 In a Cochrane review of 140 randomised trials, audit and feedback led to a median 4.3% absolute improvement in compliance with recommended practice, but the effect varied substantially between trials.1
In Australia, Stroke Foundation is a registered charity, which drives quality improvement by developing and maintaining clinical guidelines and coordinating national audits.3 4 The National Stroke Audit (hereafter referred to as the Audit) commenced in 2007 and is conducted annually, alternating between acute and rehabilitation care. The Audit comprises an organisational resource survey, and a retrospective medical record audit of 40 consecutive patient admissions at each participating site to monitor adherence to guideline recommendations. Participation is voluntary and participation rates are high; 89% of acute admissions for stroke5 6 and 92% of inpatient rehabilitation admissions for stroke7 are to hospitals that participate in the Audit. Sites provide in-kind staff time to collect data, and coordination, data analysis and reporting are managed by Stroke Foundation.
The data landscape in Australia has evolved since the Audit was initiated. The Australian Stroke Clinical Registry (AuSCR) was piloted in 2009 to collect data on core processes of acute stroke care and patient reported outcome measures in all consecutive admissions of stroke or transient ischaemic attack.8 The AuSCR provides an additional mechanism for delivering audit and feedback to participating sites, although with fewer datapoints included than the Audit. The number of acute hospitals participating in the AuSCR is steadily increasing, although it has yet to achieve national coverage.8 Concurrently, more hospitals have transitioned to electronic medical records9 10 and maintain local stroke data systems.
Considering the changing context regarding stroke data programmes in Australia, the aim of this study was to inform strategic direction and opportunities to maximise the impact of the National Stroke Audit programme by:
Identifying characteristics of successful data programmes for diseases other than stroke in Australia.
Understanding the views of clinical and administrative stakeholders about the Audit.
Understanding factors influencing site involvement.
Identifying opportunities to improve the Audit.
Methods
Rapid review
To identify national clinical data programmes, we searched grey literature (Google and Google Scholar) using combinations of the terms audit OR registry AND Australia. We restricted our search to publications within the last 10 years and programmes which communicated primarily in English. The search was not intended to be exhaustive but to focus on the largest and most established data programmes. One person conducted the search and a second person searched for additional relevant information that may have been missed. We included information sourced from websites, reports, policy documents and scientific papers in academic journals. We included data programmes (audits or registries) that were large in scale (more than one site).
One person extracted data (funding, governance, setting, stakeholders, outputs, programme longevity, participants and clinical condition) into a table, where these data were available. A second person checked the accuracy of data extraction in 10% of entries. We identified the data programmes that appeared to be most successful based on our nominated criteria of longevity (existing for 3 or more years), size (five or more sites) and provision of feedback to sites.
Survey
We used mixed methods (surveys and interviews) to seek perspectives of clinicians and state-based administrators involved in stroke care.
We created a survey comprising demographic information, closed-ended questions about participation in and attitudes to the Audit, and open-ended questions about changes to the Audit (see online supplemental appendix 1). The survey was piloted in Qualtrics by the research team and edited prior to distribution.
Supplemental material
The link to the online survey was emailed to contact people from hospitals that had previously participated in the National Stroke Audit (295 potential participants from 169 hospitals on the Stroke Foundation National Stroke Audit database), on 25 October 2021. Two reminders were sent at weekly intervals. Respondents were invited to indicate their interest in participating in follow-up interviews.
Interviews
We developed two interview guides: one for clinicians and another for state-based administrators involved in stroke care. The interview guide for clinicians was piloted and no adaptations were required.
We emailed interested clinician survey respondents and Stroke Foundation emailed state-based administrators in three states, inviting them to participate in interviews. We interviewed clinicians and administrators separately. When possible, group interviews were organised; individual interviews were organised when only one participant nominated for a proposed time, or to suit participants’ availability. Repeat interviews were not conducted and field notes were not made.
EL, a physiotherapist and researcher, facilitated all interviews via Microsoft Teams. She presented the aims of the research to participants and clarified the research team’s independence from Stroke Foundation. Automatic transcripts were reviewed for accuracy and were not returned to participants. Data from preceding interviews were sometimes presented for discussion with subsequent participants to determine consistency in participants’ views. Data saturation was reached with no new themes identified within the data from the last three clinician interviews.
Analysis
We analysed quantitative survey results descriptively using SPSS version 27. We dichotomised attitudes to the Audit into positive (strongly agree/somewhat agree with listed statements) and not positive (strongly disagree/somewhat disagree/neither agree nor disagree).
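The dichotomisation described above can be expressed as a simple mapping. The sketch below is illustrative only: the actual analysis was performed in SPSS version 27, and the response strings and function name here are hypothetical.

```python
# Minimal sketch of collapsing 5-point Likert responses into the two
# attitude categories used in the analysis (illustrative only; the
# study's analysis was conducted in SPSS version 27).
POSITIVE_RESPONSES = {"strongly agree", "somewhat agree"}

def dichotomise(response: str) -> str:
    """Return 'positive' for strongly/somewhat agree; otherwise 'not positive'
    (strongly disagree, somewhat disagree, neither agree nor disagree)."""
    return "positive" if response.strip().lower() in POSITIVE_RESPONSES else "not positive"

responses = ["Strongly agree", "Neither agree nor disagree", "Somewhat disagree"]
print([dichotomise(r) for r in responses])
# → ['positive', 'not positive', 'not positive']
```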
EL and TL conducted qualitative content analysis11 on open-ended survey responses and interview data. Data from each transcript pertinent to three topics of interest (views about the Audit; factors influencing participation; strategies to improve Audit) were highlighted and grouped using tables in Microsoft Word. Themes pertaining to each area of interest were inductively identified and illustrative quotes are presented.
We compared data from both sources and presented the integrated findings according to our study aims. We have reported numbers of responses when presenting survey data, and an overview (eg, numerous, some, rarely) of how often interview participants discussed themes. Interview participants were sent a summary of results but did not provide feedback on the findings.
Patient and public involvement
People with stroke and carers provide input into the Stroke Audit report and executive summary. People living with stroke were not directly involved with the design or analysis of this study, which was explicitly designed to evaluate the usefulness of the audit for clinical services.
Results
Rapid review
We identified 47 Australian data programmes, comprising 22 audits and 25 registries. Details of these programmes are presented in online supplemental appendix 2. Both audits and registries included state-based and national programmes and a range of clinical conditions. We identified six audits12–17 and six registries18–23 which met our predetermined criteria of success (collected data over 3 years or more, involved five or more sites and provided feedback or access to data to participating sites). These successful programmes involve most or all applicable sites in Australia, and many had obtained long-term funding from government sources (eg, Department of Health) or from the relevant professional association. Participation in three audit programmes is mandatory for the relevant health professionals (surgeons, breast cancer surgeons, vascular surgeons).12–14 Most programmes produce twice-yearly or yearly reports and offer information to benchmark sites' performances.
Supplemental material
Audit evaluation
There were 106 survey responses included. Response rates were 36% (106/295) for individuals and 52% (88/169) for organisations. All states and territories were represented at similar proportions as in the Audit. About two-thirds of organisations treated ≥100 stroke patients annually. Most respondents were nurses with ≥11 years of experience (table 1), reflecting the role that nurses commonly play in overseeing site participation in the Audits.
Nineteen people (14 clinicians, 5 administrators) from 5 states/territories participated in interviews. Four group (2–4 participants) and three individual interviews were conducted with clinicians (see table 2 for demographic details). One group (three participants from one state) and two individual interviews were conducted with administrators from three states. Interviews lasted between 30 and 60 min.
Views about the Audit
The majority of survey respondents (n=58, 59%) participated in both acute and rehabilitation audits; 19% participated in the acute audit only and 17% in the rehabilitation audit only. Just over 40% of respondents participated in the AuSCR, and one-third reported using local data programmes. Interview participants suggested that the lack of alternative data programmes relevant to rehabilitation service delivery strengthened the value of the rehabilitation audit.
We are more interested in the rehab [Audit] rather than the acute. And that’s probably because AuSCR doesn't give us much in the rehab space [Clinician_1]
Virtually all survey respondents (n=96, 98%) participated in the Audit to monitor local quality improvement. The next most frequent response was benchmarking against peer hospitals (n=83, 85%). These findings were validated at interview, with all administrators reporting the Audit provided access to national data and the ability to benchmark against other sites and states. However, administrators had conflicting views regarding usefulness of Audit data for improving stroke care; administrators from two states reported that the small numbers audited and the delay between care provision and receiving feedback limited the Audit’s usefulness, whereas a third state reported using Audit data to set state priorities and evaluate changes.
It’s been really useful to review our priorities…to say…these are all the things we're trying to achieve, but what do we need to focus on first? And how do we need to prioritize that and then using that as a benchmark? So then…we use this as a mechanism in which to measure and evaluate the changes that we do try to put in place. [Administrator_1]
Most clinician interviewees reported no longer using Audit data to initiate or monitor quality improvement projects. One exception was a respondent from a site without a dedicated stroke unit who reported using Audit data to advocate for improved clinical care provision.
Having that hard data to feedback to colleagues, managers, the hospital has been incredibly helpful. I feel like I'm doing the battle that the…bigger hospitals had ten years ago, bringing thrombolysis in and I had a lot of resistance to that within the hospital, so…instead of me just being this, “Hey yeah, let’s do it”, it’s me coming along and saying “We've been part of this nation-wide thing, this is where we sit”…It’s also been a way to get…allied health and some nursing staff on board about some of the data we should be collecting and what should be documented and trying to improve our clinical pathways [Clinician_2]
Most survey respondents indicated positive attitudes to the Audit with high (≥80%) levels of agreement with the nominated statements (table 3). Excluding the one negatively worded statement, the item with the lowest level of agreement was about Audit data accurately reflecting clinical practice (66% agreement).
Interview participants frequently described concerns that data were only collected about processes of care received within each hospital, which did not reflect care received by patients transferred between acute and regional hospitals or retrieved via the stroke ambulance.
We are an ECR [endovascular clot retrieval] centre, so we transfer a lot of patients in for ECR, which the Audit doesn't really reflect…The patients are being discharged through us to another…hospital…So then it shows that we're doing poorly in secondary prevention which is not actually true because…they're actually being transferred elsewhere and they'll get all of that work up…I think the data doesn't really reflect what’s happening to our patients [Clinician_3]
Further, participants reported concerns due to the small numbers of cases reviewed.
I always…flag for people to be cautious with the data that’s presented, for example, our thrombolysis rates. So it’s only 40 patients, when we did our audit for 2021, there was only one patient in those 40 that was thrombolysed. And so that’s 3%. And that’s not reflective of our overall 12-month lysis rate [Clinician_4].
Interview participants also suggested that the Audit was not sensitive enough to capture when care was tailored to an individual’s needs, because the questions did not reflect when care was not indicated.
You don't actually know what the reason is that we're not mobilizing people. It might be a haemorrhagic stroke and blood pressure consistently above 170 [Clinician_6]
Factors influencing site involvement in the Audit
Barriers
In the survey, the most frequently reported barrier to participation was time to collect and enter data (n=84, 88%), followed by inconsistency of data entered by different staff/different sites (n=41, 43%) (table 4). In interviews, the time burden was raised frequently. Participants also spoke about the additional time required to answer repetitive or inapplicable questions due to lack of skip logic within the Audit tool.
One of the things I do find frustrating is if you say that the patient is not on any anticoagulants, and still have to go through and say “no” to everything [Clinician_4]
Most clinician interview participants discussed strategies such as providing training or referring colleagues to the data dictionary to ensure data were entered consistently. However, some respondents reported difficulties extracting data due to problems with unclear, or discipline-specific jargon.
I tried to assist with the rehab Audit once. And some of the questions I had to Google…I had no idea [Clinician_6]
Further barriers that had not been noted in the survey were raised in interviews. The delay between care provision and receiving the Audit report was a major problem for some. Many respondents kept local datasets or participated in the AuSCR or state-based initiatives, which included more patients and enabled more timely reporting, although with a reduced amount of data for each patient audited.
Numerous clinicians also reported that the Audit collected data that they did not feel were useful; some participants considered certain processes of care inappropriate to deliver during the acute hospital stay.
Things like carer training…It’s rehab, why is it in the acute Audit not in the rehab Audit? [Clinician_9]
Incentives
Incentives were more commonly reported than barriers in the survey. These included benchmarking with other services (n=80, 83%), and monitoring and improving patient care (n=76, 79%).
In contrast to the survey results, interview participants rarely reported that being involved in the Audit highlighted improvements in care. Nonetheless, most intended to continue participating in the Audit, because this was viewed as ‘doing the right thing’.
It wouldn't be good for our reputation if we pulled out [Clinician_5]
Additional incentives were raised in interviews. Participating in the Audit was seen to improve awareness of guidelines, care delivery and documentation, and when more than one person was involved in entering audit data, this provided opportunities to enhance teamwork.
We have many rotating staff coming through and…it’s a great way for staff to have a better understanding…around benchmarking around the guidelines and the KPIs that we're trying to achieve for stroke clients…it’s also a useful reminder to get more staff on board to know what the stroke guidelines are and what we should be aiming for as a team [Clinician_10]
Identifying opportunities to strategically improve the Audit
The most common free-text survey response about how the Audit should be refined (n=29) was to change the questions (table 5). Interview participants suggested reviewing terminology to improve clarity and to remove questions that were not included in reports or were not relevant for all patients. Additional questions were suggested that could guide clinical practice.
Rather than ask me what their motor deficits, speech are, say “Have they had a NIHSS score in ED”…That’s a helpful piece of data….The stuff that we actually need to try and improve clinically [Clinician_12]
The next most suggested improvements in the survey were to improve the Audit tool/software (n=15) and to enable data collection across hospital transitions (n=8). Almost all interview participants spoke about the need to improve ease of Audit data entry. Sites with electronic medical records and administrators spoke about the possibility of exporting routinely collected data to populate the Audit.
What you don't want is someone have to go through and click, click, click, for each patient. Ideally it’s some way that we can send data…to the Audit in…data points that we are already collecting [Administrator_4]
Changing the frequency of the Audit was suggested in surveys (n=5) and at interview. Interview participants who took part in the rehabilitation Audit and those who were new to the acute Audit recommended that the frequency remain every 2 years.
I wouldn't wanna do it more frequently than bi-yearly because it just doesn't give you enough time to actually implement and see significant change [Clinician_14]
Others felt that the Audit should be conducted more frequently, with the potential to collect more data on fewer areas of focus, or to use the Audit within local projects.
I would love to see the audit cycle more frequently and I would like to see…“Let’s focus on continence, let’s focus on discharge care planning” and drill down further with the questioning and the type of reports and the type of professional development and support that comes out of the Stroke Foundation to sit alongside that [Clinician_5]
I think there needs to be…a core spine of things which are continually collected across all sites…and then a wider choice of things that that can be done…as a spotlight…If there was…a new…treatment…we could use [the Audit] to spotlight to see where the practice was translating [Clinician_13]
Discussion
With a growing prevalence of data programmes in acute stroke, the benefit of the Audit over the continuous minimum data collected via the AuSCR or locally held databases was not universally recognised. Nonetheless, most participants were in support of the Audit continuing. Strengths of the Audit were its national coverage and Stroke Foundation’s track record of using audit data to advocate for improved stroke services.
Unlike other well-established data programmes in Australia we identified, the National Stroke Audit is not supported by government or health departments, but by a charity. Participation in the Stroke Audit is voluntary, as for many (but not all) other Australian data programmes. The tradition of participating in the Audit was important at some sites, but the programme’s longevity brings specific challenges. Externally coordinated audits tend to be effective early in their implementation, but impact decreases over time.24 This was evident in our study, with sites new to the Audit reporting clear benefits of participation, whereas sites with more Audit experience reported that participation did not provide new information. To enhance the Audit’s usefulness, it must be refined to provide benefits to Audit users, beyond them altruistically contributing data for national advocacy purposes.
Currency of data and frequency of audit cycles were important to study participants, both of which are indicators of audit quality.1 25–27 Respondents in our study frequently expressed reservations about collecting data on only 40 patients/site every 2 years. Looking further afield, most international stroke data programmes are registers or registries which continuously collect minimum datasets28; we identified only three ongoing international stroke audit programmes (Sentinel Stroke National Audit Programme (SSNAP; England, Wales and Northern Ireland),29 Scottish Stroke Care Audit30 and Irish National Audit of Stroke31). While these audit programmes are designed to collect data on 90%–100% of hospital admissions with stroke, feedback is still only published yearly (Scottish and Irish audits)30 31 or quarterly (SSNAP),29 similar to the Australian data programmes identified in the rapid review. Balancing the need for comprehensive, up-to-date data and regular feedback cycles with reducing the data entry requirements is not straightforward. Including data from every patient with stroke in the National Stroke Audit in its current version would increase the time burden, which was already a major barrier to participation. One solution would be to partner with health and research teams in Australia that are striving to integrate health data across different sectors.32 The development of automated data extraction to facilitate inclusion of more patients and provide more frequent feedback of results without additional data collection burden would be of value for Audit effectiveness as well as usefulness and sustainability. Consequently, Stroke Foundation and the AuSCR team are currently investigating rebuilding the Audit tool with a specific focus on improved technologies to allow the automatic transfer of data from electronic medical records or existing databases (K. Hill, Stroke Foundation, personal communication 8 December 2022).
Limitations of the study include the select sample of clinicians who consented to participate; it is likely that people who felt most strongly about the Audit would volunteer to provide feedback. Strengths of this study include that survey respondents had similar proportionate state representation to general Audit respondents, and the geographical and professional diversity of the interview participants. The authorship team brought both content expertise and objectivity to the evaluation, with three members (EL, KL and TL) who were closely familiar with the Audit, and one member (TS) with extensive health services research expertise and no previous work in stroke. All authors were university employees independent of the Stroke Foundation.
Conclusions
There was strong support for the Audit to continue, with widespread appreciation of the value of the Audit for national benchmarking and advocacy activities, but inconsistent reports on its usefulness for facilitating local quality improvements. Other benefits from Audit participation included staff education and awareness of guidelines. Suggested modifications include developing systems to facilitate automatic data entry and timely feedback, and collecting data that accurately reflect care provision when patients transition between different services. Other value-added propositions include having the flexibility to use Audit infrastructure to conduct local audits on focused areas with the ability to add locally relevant datapoints.
Ethics statements
Patient consent for publication
Ethics approval
This study received approval from Flinders University Human Ethics Low Risk Panel (Project no 4840). Participants gave informed consent to participate in the study before taking part.
Acknowledgments
We thank Kelvin Hill and Lisa Murphy from Stroke Foundation for their assistance.
Supplementary materials
Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
Twitter @E_A_Lynch
Contributors All authors contributed to study design and write-up of results. EL led data collection, qualitative analysis and is study guarantor, KL led literature review, TL assisted qualitative analysis, TS led quantitative analysis. All authors reviewed and approved the manuscript.
Funding This work was commissioned by Stroke Foundation (no award/grant number).
Competing interests This evaluation was commissioned by Stroke Foundation, which oversees the National Stroke Audit. The authors received payment from Stroke Foundation via their university and conducted the evaluation independent from Stroke Foundation.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.