Article Text

AMU patient list generation: from junior scribe to junior doctor
  1. Zahra Ravat,
  2. Amil Sinha,
  3. Alistair Jellinek,
  4. Nigel Page
  1. Acute Medical Unit, Sandwell and West Birmingham Hospitals NHS Trust, Birmingham, UK
  1. Correspondence to Dr Zahra Ravat; zahra.ravat4{at}nhs.net

Abstract

This quality improvement project (QIP) aimed to assess the impact of automating patient list generation on the acute medical unit (AMU) at Sandwell and West Birmingham Hospitals NHS Trust. The AMU patient list categorises patients as requiring ‘clerking’, ‘post-take’ (PTWR) review or ‘post-post-take’ (PPTWR) review for the morning ward round. During weekdays, this list need only include the patients in the AMU. For weekends, the list must also include ‘outliers’, that is, patients transferred to different wards (which may lack resident medical teams over the weekend) but still requiring PTWR/PPTWR. The list is created by the junior doctor on their night shift, a daily necessity due to the high AMU patient turnover.

A pilot study, followed by three complete ‘plan-do-study-act’ (PDSA) cycles, was conducted over 2021/2022. Cycle 1 (pre-intervention) and cycle 2 (post-intervention) assessed the impact of the generator on weekdays. This was adapted for the weekend over cycles 2 and 3. The process measure assessed was the time taken for list generation. The outcome measure was the total number of patients clerked per night. The balancing measure was doctors’ attitudes.

The intervention reduced the time taken for list generation by an average of 44.3 min (66.3%) during weekdays and 37.8 min (42%) during weekends. Run charts demonstrated significance for the reduction in weekday list generation time. Both weekdays (63.5% decrease, p<0.00001) and weekends (50.5% decrease, p=0.0007) had significant reductions in total negative attitudes. Both weekdays and weekends had ‘time-consuming’ as the most frequently selected attitude pre-intervention, whereas ‘easy to make’ was most frequently selected post-intervention. Some junior doctors reported the generator enabled clerking of extra patients, supported by non-significant increases in the averages for this outcome.

This QIP demonstrates how the automation of labour-intensive administrative tasks results in notable time savings, thereby improving doctor attitudes and well-being and facilitating the delivery of quality patient care.

  • Quality improvement
  • Healthcare quality improvement
  • Information technology
  • Efficiency, Organizational

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

WHAT IS ALREADY KNOWN ON THIS TOPIC

  • A previous quality improvement project (QIP) demonstrated that list generation is a time-consuming administrative task which is often delegated to junior doctors, and showed that the use of third parties (naturally with associated costs) can reduce the administrative burden.

WHAT THIS STUDY ADDS

  • Demonstrates how automation of administrative tasks, using a flexible Excel-based generator, has positive impacts on junior doctor attitudes and can reduce workload, in a cost-effective and transferable manner.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

  • Demonstrates time-saving, ease-of-use and transferability (scalability) as key principles for QIP interventions.

Introduction

Problem description

This quality improvement project (QIP), conducted at Sandwell and West Birmingham Hospitals NHS (National Health Service) Trust (SWBH), was based on findings from a pilot study, which identified that acute medical unit (AMU) patient list generation was viewed as time-consuming and inefficient by night shift junior doctors. This QIP was used to ascertain the extent of this inefficiency and address it accordingly.

The night shift junior is responsible for clerking patients, managing unwell patients on the AMU, attending emergency response team calls (EMRTs), responding to nursing concerns and creating the patient list for the morning ward round (WR). The AMU consultant reviews clerked patients within 24 hours as part of the ‘post-take’ WR (PTWR). The patient is reviewed for the third time by a registrar (or higher) the following day as part of the ‘post-post-take WR’ (PPTWR), and on the regular WR every subsequent day thereafter. The patient list should clearly identify the type of review, that is, clerk/PTWR/PPTWR/WR (indicating consultant or registrar review, as well as urgency), that each patient on the AMU requires, alongside the patient’s name, hospital number and location, and an empty column in which notes/jobs can be written.

Cerner Millennium’s PowerChart/Unity is the electronic patient record system (EPR) used at SWBH NHS Trust. Unlike other EPRs, PowerChart does not automatically generate clerking lists for accepted medical referrals, nor lists for PTWR patients, nor allow clerking doctors to assign themselves to referred patients. Moreover, the look back and look forward functions do not correctly produce outlier lists for the weekend. Therefore, the details of accepted referrals are transcribed into an AMU referrals book; the clerking team sign their names next to their selected patient and commence management. The night doctor will cross-reference against this book to produce the patient list.

Due to the high patient turnover, creating an updated daily ward list is a critical administrative task that facilitates smooth AMU morning WRs. However, manual patient list generation is time-consuming and may divert juniors from higher priority tasks, such as the clerking of new patients and the completion of jobs.

Available knowledge

Working on call and out of hours is known to be particularly stressful for junior doctors,1 with 'workload while on duty' found to be ‘excessive’ for junior doctors.2 Mistakes are more frequently made during on-call night hours, with 37% of juniors surveyed 'never' achieving a ‘natural break on night shift', and more than half 'never' achieving the 'recommended 20–45 min nap'.3

Previous QIPs have attempted to alleviate the workload of on-call doctors by reducing the time spent on routine tasks.4–6 Manual list generation by junior doctors is one such routine task that has previously been reported to be time-consuming for surgical junior doctors6; one surgical QIP used clinical secretaries and developed new software to expedite this process; however, that team had access to an in-house applications developer (with the expectation that the intervention would become ‘cost-neutral’ in the long term).6 In contrast, this QIP demonstrates a replicable means of automating list generation for newly admitted patients in acute medicine, which is cost-free and easy to implement in hospitals with limited resources.

Rationale

This QIP is based on a pilot study (n=12) which found that a sample of junior doctors held negative attitudes towards list generation, describing the current manual process as ‘time-consuming’. A summary of the pilot data has been included within this paper.

The pilot study found that weekend list generation was more time-consuming than weekday list generation. This is because the weekend WR list must include outlier patients who have been transferred to other wards (from the AMU). As consultant-led WRs do not occur during the weekend on other wards, it is imperative that new patients who have not been reviewed by seniors on the PTWR and PPTWR are also included on the weekend list, otherwise they will not have senior review until Monday. Therefore, creating the weekend list accurately is key to maintaining safety, but can be time-consuming.

Reductions in the time taken for juniors to produce the list may have several measurable consequences: more patients clerked, improved doctor attitudes or better care for those already clerked, and condensed handovers. Data on potential disruptive factors (interruptions) during list generation were also collected.

Specific aims

The purpose of the project was to determine whether reducing the time taken for list generation would have a positive impact on the clerking of patients, and to assess the impact of the intervention on junior doctors.

Process measure

  • Time taken for the night shift junior to generate the patient list.

Outcome measure

  • Number of patients the night shift junior clerks.

Balancing measure

  • Night shift junior’s attitudes towards list generation.

This report has been written in accordance with the Revised Standards for Quality Improvement Reporting Excellence 2.0.7 This report demonstrates the implementation of three PDSA cycles in fulfilling the above aims and streamlining the process of list generation.

Methods

Context

A pilot study was conducted to estimate the time taken for list generation, elucidate the attitudes of junior doctors towards list generation and identify confounding variables (Results section). This retrospective pilot study took the form of an electronic survey distributed to all FY1 doctors who had undertaken night shifts over a 2-month period (from 5 August 2021 to 23 September 2021). Twelve responses were returned. Surveys asked doctors to retrospectively estimate the time they had taken to generate the list and their attitudes towards the task. These data highlighted that the time taken for list generation during the weekend was greater than during weekdays; this finding was expected due to the added complexity of including outlier patients in the weekend list (refer to Problem description). Thus, data for the weekdays and weekends were segregated for analysis.

Interventions

PDSA cycle 1, pre-intervention, produced results that were corroborated by the pilot data (refer to the Results section). Therefore, an automated solution to list generation was proposed (online supplemental appendix A). Visual Basic for Applications was used to write the program for list generation on Microsoft Office Excel 2010 (online supplemental appendix C).

Supplemental material

The intervention, the Excel generator, initially requires unlocking by enabling macros (online supplemental appendix F). After importing PowerChart/Unity data into the generator and clicking the appropriate button (online supplemental appendix F), the generator automatically produces a list of all acute medical patients in the AMU and in the emergency department (ED), organised by location (bay and bed number), with each patient allocated as clerk/PTWR/PPTWR/WR. This allocation uses the length of hospital stay as an approximation for the type of assessment required (through a ‘scale’). The scale is based on national standards: that new (stable) patients should be clerked/initially assessed within 4 hours of referral to medicine (from the ED) and that secondary review of the new patient (PTWR) should occur within 14 hours if ‘outside the working day’.8 The limitation of this approximation is that patients with a length of stay close to (or on) the transition point between assessment types may be misallocated. However, all listed patients will need face-to-face review regardless of allocation, thus preserving safety.
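
The full program is provided in online supplemental appendix C and is not reproduced here. As a minimal illustrative sketch of the allocation idea only, assuming hypothetical cut-off values and that the hours elapsed since referral have already been derived from the imported PowerChart/Unity data, a VBA function of the following form could map length of stay to review type:

    ' Illustrative sketch only: the published generator (online supplemental
    ' appendix C) may differ. The cut-offs below are assumptions chosen to
    ' mirror the national standards cited above.
    Function AllocateAssessment(hoursSinceReferral As Double) As String
        If hoursSinceReferral < 4 Then
            AllocateAssessment = "Clerk"     ' not yet clerked (within 4-hour window)
        ElseIf hoursSinceReferral < 14 Then
            AllocateAssessment = "PTWR"      ' clerked, awaiting consultant post-take review
        ElseIf hoursSinceReferral < 38 Then  ' assumed cut-off for the second senior review
            AllocateAssessment = "PPTWR"
        Else
            AllocateAssessment = "WR"        ' established patient: regular ward round
        End If
    End Function

In this sketch the exact thresholds, particularly the PPTWR/WR boundary, are assumptions for illustration rather than values taken from the published generator.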

To mitigate the error rate, automatic flagging of patients on the boundary alerts the doctor to those requiring manual cross-checking (individually entering the asterisked patient’s notes to determine whether they are for clerk/PTWR/PPTWR). This flagging system ensures the doctor only needs to confirm the assessment status of a fraction of patients, whereas previously the electronic notes of every patient in the AMU (40+ when full), and of those waiting in the ED, had to be entered individually.
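
A corresponding sketch of the boundary-flagging idea (again illustrative only; the tolerance window and thresholds below are assumptions, not the published implementation) might asterisk any allocation that sits close to a cut-off:

    ' Flag allocations whose length of stay falls within a tolerance of an
    ' allocation cut-off, prompting the doctor to cross-check the notes.
    Function FlagIfBorderline(hoursSinceReferral As Double, allocation As String) As String
        Const ToleranceHours As Double = 2   ' assumed window around each cut-off
        Dim cutOffs As Variant
        Dim i As Integer
        cutOffs = Array(4, 14, 38)           ' same illustrative thresholds as above
        For i = LBound(cutOffs) To UBound(cutOffs)
            If Abs(hoursSinceReferral - cutOffs(i)) <= ToleranceHours Then
                FlagIfBorderline = allocation & " *"   ' asterisk prompts a manual check
                Exit Function
            End If
        Next i
        FlagIfBorderline = allocation        ' clear of all cut-offs: no flag needed
    End Function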

The generator was advertised via daily texts and, during PDSA cycle 2, made more accessible via a Google Drive (containing both the generator and a demo video), which was hyperlinked to the AMU Trust intranet page with assistance from the AMU consultants. Multiple in-person demonstrations were also provided (online supplemental appendix A).

Interventions implemented following PDSA cycle 3 focused on introducing the generator to City Hospital; this included presenting the generator at the Trust acute internal medicine ‘Quality Improvement in Healthcare Day’ and further advertising of the generator through the Trust e-newsletter (online supplemental appendix A).

Study of the interventions

By using a pre-intervention and post-intervention study design, and analysing confounding factors (eg, interruptions by EMRTs/colleagues), any changes observed could be confidently attributed to the intervention itself. One notable confounding factor was the possibility that the increasing experience of doctors could contribute to increases in clerking numbers over time. However, clerking numbers stayed relatively constant over time, with the R2 values for the outcome measure almost 0 (online supplemental appendix A2).

Measures

Pre-intervention and post-intervention data were collected via electronic forms using the ‘Google Forms’ platform. Forms were distributed to the juniors the evening before the night shift and chased the following morning. The form included the following measures.

The process measure was recorded by the night doctors themselves, who were asked to note the time from when they started list generation to when they printed the list. PDSA cycle 1 (pre-intervention) results were corroborated by the pilot data. Potential confounding factors were also analysed, such as the number of interruptions by doctors, nurses and EMRTs while making the list.

The outcome measure data were obtained from the AMU referral books only. Cross-checking (by two independent investigators) of the Google Forms data against the AMU referral books found a tendency for doctors to overestimate their clerking numbers when self-reporting, compromising validity. Hence, form data were not used for assessing this measure. Night shifts in which the junior clerking was assisted by the registrar (as noted in the books) were excluded.

The balancing measure was measured by multiple choice answers and a free-text section on the form. Forms have traditionally been viewed as high in reliability but low in validity.9 However, for discerning doctor attitudes, this measurement approach was judged here to be the most valid. Bias in question item construction was mitigated by presenting doctors with a list of 10 dichotomous statements (5 negatively phrased and 5 positively phrased, eg, ‘very time-consuming’ vs ‘it takes no time at all’), from which the doctor selected the three that best described their attitudes. The ‘free-text’ section allowed doctors to elaborate their thoughts further, bolstering validity.

Analysis

Forms for WRs taking place on Monday–Friday mornings were collated for the weekday data. Saturday–Sunday morning forms were collated for the weekend data. Discrepancies in raw data (eg, incorrectly inputted dates) were resolved among the investigators.

The time taken for list generation (process measure) was determined by calculating the difference between recorded start and finish times. Run charts were produced and the shift rule was applied, with a single shift defined as 'six or more consecutive points either all above or all below the median (values falling on the median neither adding to/nor breaking a shift)'.10
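
As an illustrative VBA sketch of how the shift rule can be applied mechanically (assuming the recorded times sit in a worksheet column and the median is supplied separately, eg, =LongestShift(A2:A30, MEDIAN(A2:A30)); this is not the authors’ analysis code):

    ' Returns the longest run of consecutive points lying entirely above or
    ' entirely below the median; points on the median neither add to nor
    ' break a run. A result of 6 or more indicates a significant shift.
    Function LongestShift(values As Range, medianValue As Double) As Long
        Dim longest As Long, current As Long
        Dim lastSide As Integer              ' 1 = above median, -1 = below, 0 = none yet
        Dim cell As Range, side As Integer
        For Each cell In values
            If cell.Value > medianValue Then
                side = 1
            ElseIf cell.Value < medianValue Then
                side = -1
            Else
                side = 0                     ' exactly on the median
            End If
            If side <> 0 Then
                If side = lastSide Then
                    current = current + 1
                Else
                    current = 1
                    lastSide = side
                End If
                If current > longest Then longest = current
            End If
        Next cell
        LongestShift = longest
    End Function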

Clerking numbers (outcome measure) were analysed by using t-tests to determine if there were significant differences between cycles (pre-intervention and post-intervention).

Analysis of attitudes (balancing measure) involved comparing the frequencies of most selected attitudes within a cycle, as well as the proportional change in the frequency of positive versus negative attitude selection pre-intervention and post-intervention. Pearson’s χ2 2×2 test was used to determine if there was a significant association between changes in attitude selection and intervention implementation.
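
For reference, for a 2×2 table with cell counts a, b, c and d and total N=a+b+c+d, the Pearson χ2 statistic (1 degree of freedom) takes the standard form below, given here without Yates’ continuity correction; whether a correction was applied is not stated in the text:

    \chi^2 = \frac{N\,(ad - bc)^2}{(a+b)(c+d)(a+c)(b+d)}, \qquad N = a+b+c+d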

An inductive coding thematic analysis approach was used to analyse the free-text responses.11 Full coding can be found in online supplemental appendices A3,B,B1,E.

Results

Pilot study (mornings of 5 August 2021–23 September 2021)

A two-tailed t-test identified a significant difference (p=0.0209) in the time taken to make the patient list on weekend versus weekday night shifts (table 1). Therefore, weekend and weekday list generation data were analysed separately.

Table 1

Process measure data on time taken to generate the AMU patient list, pre-intervention and post-intervention, by the night doctor on weekdays and weekends

There was no notable difference between the pilot data and the pre-intervention data. This demonstrates the reliability of the forms (ie, doctor reporting) for the process measure.

The pilot study found that the mean number of patients clerked over weekday nights was 4.40±1.31, and the mean number clerked over weekend nights was 3.89±0.928. No positive correlation between clerking numbers and time/dates was found (weekdays R2=0.0005, weekends R2=0.008; online supplemental appendix A2).

Finally, pilot surveys asked juniors ‘In your view, why do we make the patient list?’ Free-text answers were inductively coded and three overarching themes were identified (online supplemental appendix A3):

  1. To ensure consultant (/senior) review.

  2. To ensure all patients are seen.

  3. To help out/assist juniors.

Pilot surveys found attitudes to both weekday and weekend list generation were overwhelmingly negative: 83.3% of attitudes during weekdays and 88.9% of attitudes during weekends (online supplemental appendix A1). The attitude profile was similar between weekday and weekend groups, with the majority of doctors in both groups selecting ‘time-consuming’.

PDSA cycle 1: pre-intervention (mornings of 1 October 2021–12 November 2021)

Pre-intervention forms largely followed the format of the pilot surveys with minor adjustments: forms were prospectively completed immediately after the night shift and two optional free-text questions were included. Twenty-one responses were obtained for the weekday analysis and six responses were obtained for the weekend analysis.

The mean number of patients clerked over weekday night shifts was 4.00±1.08 and over weekend night shifts was 3.60±1.14.

The average time taken to generate the pre-intervention weekday list was 66.8±26.0 min, whereas weekend list generation took 90.0±39.8 min (figure 1A).

Figure 1

(A) Run chart showing time taken to generate night-shift weekday lists: the line indicates the median and circles indicate the runs showing a significant shift. (B) Run chart showing time taken to generate night-shift weekend lists: dashed lines indicate the pre-intervention and post-intervention means, median=60, the outlier has been circled. The greyed areas represent the time taken for creation and implementation of the intervention. Weekday mean pre-intervention=66.8±26.0 min and post-intervention=22.5±22.6 min (3 s.f.). Weekend mean pre-intervention=90.0±39.8 min and post-intervention=60.3±37.8 min (3 s.f.). PDSA, plan-do-study-act.

Attitudes to both weekday and weekend list generation were overwhelmingly negative, but more so for the weekend. Most weekday pre-intervention doctors chose ‘time-consuming’ (15 respondents) and ‘tedious (not easy) to generate’ (13 respondents) as their primary attitudes towards list generation (figure 2A). Weekend pre-intervention respondents most commonly selected ‘time-consuming’ (five respondents) and a ‘source of stress’ (four respondents) (figure 2B). 89.4% of the total attitudes towards weekday list generation were negative (figure 2A), while 93.3% of total attitudes towards weekend list generation were negative (figure 2B).

Figure 2

Doctors were asked to select three items which best describe their attitudes towards weekday (A) list generation and weekend (B) list generation. Figures on the bars represent the frequencies with which doctors selected the attitudes. Pearson’s χ2 2×2 test was used to determine significant associations. For ‘time-consuming’ versus ‘no time at all’ during weekdays, the association was highly significant (χ2 statistic=14.524, two-tailed p=0.0001). For ‘easy’ versus ‘tedious’ to generate, the association was extremely significant (χ2 statistic=17.740, two-tailed p<0.0001). For ‘source of stress’ versus ‘not a source of stress’, the association was very significant (χ2 statistic=10.154, two-tailed p=0.0014). For ‘useful’ versus ‘pointless’, the association was not found to be significant (χ2 statistic=5.488, two-tailed p=0.0192). For patients missed versus not missed, the association was considered to be significant (χ2 statistic=4.000, two-tailed p=0.0455). Overall, the association between positive versus negative attitudes, before and after the weekday intervention, was found to be extremely significant (χ2 statistic=50.212, two-tailed p<0.0001). For weekends, ‘easy’ versus ‘tedious’ to generate, the association was significant (χ2 statistic=6.346, two-tailed p=0.0118). For ‘time-consuming’ versus ‘no time at all’ during weekends, the association was not significant (χ2 statistic=2.500, two-tailed p=0.114). For ‘source of stress’ versus ‘not a source of stress’, the association was not significant (χ2 statistic=0.933, two-tailed p=0.334). For ‘useful’ versus ‘pointless’, the association was not found to be significant (χ2 statistic=0.178, two-tailed p=0.673). For patients missed versus not missed, the association was not found to be significant (χ2 statistic=2.500, two-tailed p=0.114). Overall, the association between positive versus negative attitudes, before and after the weekend intervention, was found to be extremely significant (χ2 statistic=11.437, two-tailed p=0.0007).

An optional free-text question about the predominant software and strategies used to generate the patient list from PowerChart/Unity was included. Twenty-two answers were submitted, with two main strategies identified (online supplemental appendix B):

  1. Copying and pasting into MS Office Word only (15).

  2. Copying into MS Office Excel±transferring this into Word (7).

Three of these answers additionally highlighted the need to cross-check the clerking (referral) book to locate outliers.

A final optional question, ‘add any other comments on the morning WR patient list creation’, was included for weekdays and weekends. All themes identified through the thematic analysis of this question were negative (online supplemental appendix B1).

Tertiary outcomes included analysis of interruptions while making the list. Mean interruptions for weekday night shifts pre-intervention were 3.32±1.37 (online supplemental appendix D).

Pre-intervention data were analysed and a focus group was held among the investigators; an automated list generator (online supplemental appendix F) was proposed as the intervention following PDSA cycle 1 (online supplemental appendix A).

The patient list generator was created and implemented during late November and throughout December (figure 1, shaded areas).

PDSA cycle 2: post-intervention weekdays (mornings of 1 January 2022–20 June 2022)

The second PDSA cycle assessed the impact of the automated patient list generator on weekday night shift doctors.

The mean number of patients clerked over post-intervention weekday night shifts was 4.28±1.27. This was a small increase compared with pre-intervention study results but was not found to be significant (one-tailed p=0.243, t-value=−0.704).

The average time taken using the generator was 22.5±22.6 min (table 1). The intervention therefore decreased weekday list generation time by approximately two-thirds (figure 1A). The run chart of weekday data demonstrated a run of 6 consecutive data points above the median prior to the intervention and a run of 8 consecutive data points below the median after the intervention was implemented, indicating a significant shift in the time taken to generate the list (figure 1A).

Post-intervention surveys found that 25.9% of total weekday attitudes towards list generation using the generator were negative, as compared with 89.4% pre-intervention (figure 2A). ‘Easy to make’ (17) and ‘NOT a source of stress’ (8) were the most selected attitudes following implementation of the intervention (figure 2A). The positive change in attitudes after the weekday intervention, was extremely significant (χ2 statistic=50.212, two-tailed p<0.0001).

Mean interruptions for post-intervention weekdays were 0.778±0.620. This was a significant reduction (one-tailed p=0.000024, t-value=4.59) (online supplemental appendix D).

Thematic analysis of free-text weekday and weekend post-intervention responses still identified a few negative themes, namely, the mislabelling of some patients and concerns over the occasional missed patient (online supplemental appendix E).

However, most themes were positive. Junior doctors generally felt the generator saved time and was easy to use, and a small number reported that use of the generator enabled the clerking of extra patients.

See the ‘Interventions’ section for cycle 2 ‘Act’ details.

PDSA cycle 3: post-intervention weekends (1 January 2022–20 July 2022)

The first half of cycle 3 was conducted concurrently with cycle 2. Cycle 3 involved distributing the same post-intervention form to the weekend night shift juniors.

The average time taken to create the weekend patient list using the generator was 60.3±37.8 min (table 1), a reduction of approximately one-third (figure 1B). Furthermore, with the outlier removed (figure 1B), the mean time taken was 52.2±23.7 min, a reduction of 42%. Due to a limited sample size, analysis of the weekend run chart could not be performed to assess the significance of this reduction.

The mean number of patients clerked over post-intervention weekend night shifts was 4.36±1.15 (3 s.f.). This was a small increase compared with pre-intervention study results but was not significant (one-tailed p=0.111, t-value=−1.27, df=17, SE of difference=0.598).

Post-intervention forms found that 42.9% of total attitudes towards list generation were negative, as compared with 93.3% pre-intervention (figure 2B). Overall, this association was found to be extremely significant (two-tailed p=0.0007). ‘Easy to make’ (11) was the most selected attitude post-intervention (figure 2B).

Weekend free-text responses were collated with cycle 2 (post-intervention weekday free-text data) for the thematic analysis (results in section above, 'PDSA cycle 2').

See the ‘Interventions’ section for cycle 3 ‘Act’ details.

PDSA cycle 4: City Hospital feedback (August 2022 onwards)

One of the investigators performed an in-person demonstration of the generator to the AMU Consultants at City Hospital. Verbal feedback was good. Data collection is ongoing for this cycle.

Discussion

Addressing procedural inefficiencies is key to achieving high standards of care, an integral component of robust healthcare systems.12 This QIP demonstrates how automation of labour-intensive administrative tasks, such as generation of the morning AMU WR list, can help to this end. This QIP was conducted in the AMUs of Sandwell and West Birmingham Hospitals NHS Trust, which uses an EPR system called Unity/PowerChart.13 This system does not produce clerk/PTWR lists for newly admitted patients. Consequently, the night shift junior doctors were expected to produce the AMU morning WR list manually.

A pilot study, conducted prior to the main QIP, found that junior doctors generally understood the basic need for an updated list: to ensure senior review, to ensure all patients are seen and to assist juniors. However, the list also identifies the type of assessment (clerk/PTWR/PPTWR/WR) needed for all patients accepted under the medical team within the first 48 hours following referral.

The pilot study indicated that junior doctors found the process of list generation ‘time-consuming’. Factors contributing to time consumption included the need to enter each AMU patient’s electronic notes to determine whether they were for clerk/PTWR/PPTWR/WR, and the need to screen the AMU referral book for additional patients who might otherwise be missed because they were in the ED or on other wards. This manual process is labour-intensive and error-prone for the fatigued night junior. Interruptions from staff and EMRTs can also prolong this process.

During PDSA cycle 1, doctors reported using at least two different software packages (Word±Excel) and cross-checking the AMU referral book to locate outliers. The intervention, an automated Excel-based program, ensured doctors only needed to use one program to produce the lists.

The PDSA cycle 1 process measure found that doctors were spending over an hour making this list during their night shifts. These results were in line with the findings of a QIP conducted at Royal Devon and Exeter Hospital, which found that (prior to automation) the on-call teams were spending ~90–120 min/day on creating the colorectal surgery list.6

Following the implementation of the intervention, the time taken to make the list was reduced by an average of 44.3 min (66.3%) during weekdays (table 1). This reduction was demonstrated to be significant through the application of the shift rule on the weekday run chart. For weekends, time taken was reduced by an average of 29.7 min (33%). However, if the outlier (circled in figure 1B) reporting 165 min to generate the list (>70 min more than the second longest time) is excluded, there is an average reduction of 37.8 min (42%), which is demonstrated in figure 1.

The balancing measure found an overall positive impact to the intervention for both weekdays and weekends. Both weekdays (63.5% decrease, p<0.00001) and weekends (50.5% decrease, p=0.0007) had highly significant reductions in total negative attitudes (figure 2). Both weekdays and weekends had ‘time-consuming’ as the most frequently selected attitude prior to the intervention, whereas ‘easy to make’ was the most frequently selected attitude following the intervention. For weekdays, there was also an expected reduction in the number of interruptions (p=0.000024).

Free-text answers demonstrated an increase in positive themes; junior doctors felt that the generator saved time and was easy to use, and a small number reported that they felt use of the generator enabled the clerking of extra patients (although the outcome measure did not show significant increases). These themes were consistent with a previous QIP on surgical list automation.6 The generator was also positively received by senior colleagues, who awarded this QIP first prize in the Trust’s ‘welearn’ QIP competition.

Limitations and challenges

Limited sample sizes presented a challenge for analysing the data. A larger sample size was needed to interpret the weekend run chart, as well as to determine significance for the total number of patients clerked (outcome measure). Nevertheless, the 42% reduction in weekend list generation time, with the outlier excluded, was of an appreciable magnitude (figure 1).

Missing data were also a challenge during this QIP. Some of this can be accounted for by the investigators themselves, who were also on the on-call night rota and were therefore excluded from the study. Other missing data were due to the night doctor failing to complete the form post-shift, despite reminders.

Reductions in time taken following the intervention may have been underestimated, as junior doctors were asked to record the time up until the final print. One free-text respondent reported that hardware issues significantly added to the time taken: ‘…Could not find a printer and kept having to log on.’

The generator was not completely error-free. This was identified as a recurring concern in the thematic analysis. However, the time-saving capacity of the intervention appeared to outweigh this concern.

The predominant challenge for this QIP is sustaining the positive impact; repeated signposting of the intervention to new juniors is vital to ensuring longevity.

Conclusions

The success and positive reception by junior doctors at Sandwell General Hospital led to the generator’s introduction to City Hospital, highlighting the scalability of the system within a single NHS Trust (whose hospitals share an EPR system). Furthermore, the flexibility of the Excel software means that this generator can easily be adapted for PowerChart/Unity-based sites outside of SWBH NHS Trust.

This QIP shows how automating repetitive, labour-intensive (but essential) administrative tasks can significantly increase the time available to junior doctors by reducing their work burden, as well as improve their general attitudes and well-being. This QIP also highlights essential features (notably time-saving, ease of use and scalability) that make automated systems successful.

Data availability statement

Data are available on reasonable request. Not available.

Ethics statements

Patient consent for publication

Acknowledgments

Thank you to all of the AMU consultants in Sandwell and West Birmingham NHS Trust whose support made this possible.

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Contributors ZR and AS contributed equally to the study design, implementation and data analysis. ZR and AS contributed equally to the drafting, revision and submission of the final manuscript. AJ contributed to the implementation and data analysis. AJ contributed to the drafting and revision of the manuscript. NP contributed to the implementation of the project and revision of the manuscript. ZR is acting as guarantor.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Ethical Considerations This study involves human participants but the methodology was approved by the Clinical Effectiveness Project Facilitator for Sandwell and West Birmingham NHS Trust. Participants gave informed consent to participate in the study before taking part.

    All patient-identifiable data in the demonstration video were obscured with video-editing software. Furthermore, the forms did not collect respondents’ personal data. Google Drive was used to disseminate the generator; however, public access was restricted to read-only, with use of the generator only permitted after download (not in-browser). The blank generator on the Google Drive does not store any patient data and is, therefore, GDPR (General Data Protection Regulation) compliant.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.