Reducing delay in laboratory reports for outpatients from 16% to <3% at a non-profit hospital in New Delhi, India
Saru Bhartia1, Pradaya Wahi1, Rinu Goyal2

  1. Quality, Sitaram Bhartia Institute of Science and Research, Delhi, India
  2. Laboratory Medicine, Sitaram Bhartia Institute of Science and Research, Delhi, India

Correspondence to Saru Bhartia; saru.bhartia@sitarambhartia.org

Abstract

In early 2013, several outpatients at Sitaram Bhartia Institute of Science and Research in New Delhi, India complained that their laboratory results were not ready at the promised time. We reviewed 3 months of data and learnt that 16% of outpatient results were not ready when patients returned to collect them. We formed a multidisciplinary team to address the problem. After conducting a time-and-motion study, process mapping and discussions, the team identified two key problems: (1) the laboratory consultant did not have a set time to validate results and (2) the reasons for delay in laboratory reports were not documented, which made specific causes hard to identify and solve. The team decided to set a fixed time for the consultant to verify results and to document the reason for each delayed report. The team used Plan-Do-Study-Act (PDSA) cycles to finalise the verification system and to set up the documentation system. Documentation led to the identification of new problems, which were also solved using PDSA cycles. Delays in reports reduced significantly, from 16% in March 2013 to less than 3% over a period of 4 months. We have sustained these gains for the past 5 years.

  • healthcare quality improvement
  • laboratory medicine
  • quality improvement

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

Problem

In early 2013, several patients from the outpatient clinics of Sitaram Bhartia Institute of Science and Research complained that their laboratory reports were not ready when they came at the appointed time to collect them. This was inconvenient for these patients, who had to make an additional trip to the hospital to receive their results. In addition, some experienced a delay in diagnosis and treatment.

Sitaram Bhartia Institute of Science and Research, a non-profit hospital and medical research centre in New Delhi, serves a middle-income population of whom only 40% have health insurance. It operates an accredited laboratory which functions 24 hours a day and processes approximately 500 tests daily, providing services in the fields of biochemistry, immunoassay, haematology, clinical pathology, microbiology and serology. The laboratory is staffed by 4 consultants and 14 technicians, and is equipped with an automated analyser, a bar code system for sample identification and bidirectional interface systems to ensure that reports are generated efficiently and without manual errors.

Delays in laboratory reports were analysed for March, April and May 2013: 16%, 13% and 13% of results, respectively, were not ready at the time patients had been asked to pick them up. We formed a quality improvement team to reduce the proportion of delayed reports, with the aim of reducing the average percentage of delayed outpatient laboratory reports to less than 5% within 3 months.

Background

Timeliness is often used as a benchmark for laboratory performance. Clinicians depend on getting results in time to achieve early diagnosis and treatment of their patients.

Studies at tertiary hospitals and a teaching hospital have documented several interventions during the pre-analytical, analytical and post-analytical phases that could help improve timeliness. These studies suggested ideal phlebotomy practices, bar coding of samples, adoption of a laboratory information system (LIS), use of fully automated analysers, training of technical staff and many other measures that could help reduce delay.1–3

Another study, on the timeliness of surgical pathology reports, demonstrated that fixing workflow subprocesses increased compliance with timely generation of reports. That study focused on developing log sheets to be attached to pathologist requests, sending daily reminders to pathologists to verify completed reports the same day and fixing login problems in the affected system.4

The most commonly used measure of timeliness is turnaround time (TAT), and a review of the literature reveals several different approaches to defining it. TAT can be classified by test (eg, potassium), priority (eg, urgent or routine), population served (eg, inpatient, outpatient, emergency department) and the activities included (eg, from the time of ordering or from the time of receipt of the sample in the laboratory). Lundberg outlined the performance of a laboratory test as a series of nine steps: ordering, collection, identification, transportation, preparation, analysis, reporting, interpretation and action. Although the laboratory can, and perhaps should, be involved in all these steps, many laboratories restrict their definition of TAT to intralaboratory activities.5

Other studies have defined laboratory TAT as the time from ‘receipt of the specimen’ until the ‘availability of the result’, and total TAT as the time from ‘the physician's request’ until ‘the physician views the result’.6

The literature also suggests that a single definition of TAT is not adequate for all types of tests or all settings. The definition applied should be based on the type of patient served (intensive care unit, emergency or casualty service) and the priority of the request (STAT/urgent or routine). One study concluded that hospitals need to evolve their own definition of TAT, in consultation with laboratory personnel, clinicians and clients (the users), before using TAT as a quality parameter for laboratory services.7

In our hospital, TAT is considered the time from sample collection to report generation. For samples received by the laboratory from the outpatient clinics between midnight and 13:30, the reports have to be available to patients at 18:00 the same day; for samples collected after 13:30, the reports have to be available at 18:00 the next day.
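
For illustration, this reporting rule can be expressed as a short calculation. The sketch below is ours, not part of the hospital's LIS, and the names are hypothetical:

```python
from datetime import datetime, time, timedelta

CUTOFF = time(13, 30)   # samples collected up to 13:30 are due the same day
RELEASE = time(18, 0)   # reports are due to patients at 18:00

def report_due(collected: datetime) -> datetime:
    """Return the time at which the report for a sample is due."""
    due_date = collected.date()
    if collected.time() > CUTOFF:
        due_date += timedelta(days=1)  # collected after the cut-off: due next day
    return datetime.combine(due_date, RELEASE)

# A sample collected at 10:05 is due at 18:00 the same day;
# one collected at 15:45 is due at 18:00 the next day.
print(report_due(datetime(2013, 6, 11, 10, 5)))   # 2013-06-11 18:00:00
print(report_due(datetime(2013, 6, 11, 15, 45)))  # 2013-06-12 18:00:00
```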

Patients were complaining that they were not getting their reports when they came to the hospital to collect them. We formed a team to work on resolving this problem, consisting of staff from the laboratory, the front desk and the quality department. Though the team had baseline data on delays, there were no data on the reasons for delay. To get some idea of the underlying causes, we mapped the existing process (from sample collection to report preparation; see online supplementary appendix 1), conducted a time-and-motion study and held discussions with the laboratory staff. All this work was done between 17 May and 23 May 2013. The time-and-motion study brought to light that the laboratory consultant had no defined time to do the final verification of tests before releasing the reports, and that this could be the main reason for delay. During discussion, the staff identified requirement of a new sample, equipment breakdown and the need for repeat tests as other possible reasons for delay, but no one was sure how frequent these were, as reasons for delay were not documented.

Measurement

The outcome measure for this improvement was the percentage of outpatient laboratory reports that were reported late. Alongside the outcome measure, we also reviewed two process measures monthly. The first was the number of delayed reports due to verification not being done by the consultant during the set time. The second was the percentage of delayed reports for which the reason for delay was filled in the LIS.

We collected baseline data for March, April and May 2013 from the LIS. The analysis revealed that results were not ready at the appointed time in 1050/6621 (16%), 851/6740 (13%) and 850/6387 (13%) reports in March, April and May 2013, respectively.
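
These percentages follow directly from the monthly counts; a minimal check of the arithmetic (our illustration, not part of the LIS):

```python
# Baseline delays from the LIS, as (delayed reports, total reports) per month
baseline = {
    "March 2013": (1050, 6621),
    "April 2013": (851, 6740),
    "May 2013": (850, 6387),
}

for month, (delayed, total) in baseline.items():
    print(f"{month}: {delayed}/{total} = {100 * delayed / total:.0f}% delayed")
# March 2013: 1050/6621 = 16% delayed
# April 2013: 851/6740 = 13% delayed
# May 2013: 850/6387 = 13% delayed
```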

Design

To identify interventions for improvement, we held a brainstorming session with staff members from the laboratory, the front desk (who distributed the laboratory reports), the quality department and the Head of the Laboratory Department. Several change ideas were suggested in the session, and a consensus was reached to work first on three of them. These ideas are described in detail below.

First, based on our learning from the time-and-motion study, the team wanted to standardise the time at which the consultant verified reports before release. It is hospital policy that the laboratory consultant verifies all reports before they are released: the consultant reviews the results in the software and, if he finds any outliers or critical values, may repeat the tests or discuss the report with the referring physician before releasing it. Before the initiative, verification of reports was not a priority for the consultant and he would do it as and when he got time. Sometimes, reports that had to be released the same day were verified and released the next day instead. The project team and the senior consultant reached a consensus on a specific time frame, between 14:30 and 16:00 (2 hours before the pick-up time given to the patient), for doing the final check so that reports could be released on time.

Second, the team decided to introduce a practice of specifying the reason for each delayed report in the remarks column of the LIS on a daily basis. This step was introduced to create awareness so that the staff could study the reasons for delay and take action to avoid them.

Lastly, the team decided to create a system that would inform patients about all report delays in advance so that they could avoid making unnecessary trips to the hospital. Though this change would not directly reduce the delays, it would reduce the inconvenience caused to patients.

The team discussed each change idea in detail and decided to test each one using a Plan-Do-Study-Act (PDSA) cycle, one of the most commonly used tools in quality improvement.8 Roles were assigned to team members and assumptions were tested; once an idea had been tested, the team discussed what they had learnt and decided to either implement, adapt or discard the change idea. These changes are described in detail in the Strategy section and a summary can be reviewed in table 1.

Table 1

Plan-Do-Study-Act (PDSA) cycles to test change ideas for reducing delays in laboratory reports

Strategy

Change 1: using a set time to verify laboratory results

The team first decided to test whether the laboratory consultant would be able to do a final verification of reports within a scheduled time frame, decided in consensus with the consultant. This PDSA cycle was run from 11 June to 20 June 2013 for biochemistry and immunoassay tests, which comprised about 77% of all tests. Samples collected between 00:00 and 13:30 were scheduled for verification (final check) between 14:30 and 16:00. A total of 2180 tests were booked during this PDSA. Compliance with doing the final check of reports at the scheduled time was 90%, and as a result only 5.5% of biochemistry and immunoassay reports were delayed during the 10 days of the cycle. Seeing this reduction in delays, we scaled up the change with a second PDSA cycle in July, spreading the scheduled verification time to haematology, clinical pathology and the other sections of the laboratory. Since then, a final check for all tests has been done by a scheduled time. After this change, overall delays reduced from 13% to 6%.

Change 2: document reasons for delayed samples

The second change to be tested through a PDSA cycle was whether it would be practical for the laboratory consultant to fill in the reasons for delay in the LIS daily. The PDSA cycle was run from 25 June to 29 June 2013. During these 5 days, compliance with filling in the ‘reason for delay’ was merely 17%: out of seven delayed reports, the consultant filled in the reason for only one. With such low compliance, it was not possible to understand the reasons for delay. The team met again and the consultant reassured them that he would try to comply with the new practice. In the discussion, the consultant identified changes to his daily routine that he thought would help him document reasons for delay: he would fill in the reasons in the LIS immediately after verifying the reports, and would not start any training sessions or meetings until he had finished doing so. To test whether this new routine would help, a second PDSA cycle was planned.

The second PDSA cycle was run for the month of July. Compliance with filling in remarks for delayed reports was 63%, a significant improvement on the previous cycle. This PDSA verified that it was possible for the consultant to fill in reasons for delay.

We analysed the ‘reasons for delay’ filled in by the consultant. Out of the 61 delayed tests, 40 (66%) were due to equipment breakdown; the equipment was old and an order had already been placed for its replacement. Three (5%) reports were delayed because a fresh sample was required, seven (11%) because internal quality controls were out of the acceptable range and, the biggest surprise, 11 (18%) because the sample had never been received!
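
For illustration, this tally can be reproduced from the documented counts (a sketch of ours using the figures above):

```python
from collections import Counter

# Reasons for delay documented in the LIS for July 2013
reasons = Counter({
    "equipment breakdown": 40,
    "sample not received": 11,
    "internal quality control out of range": 7,
    "fresh sample required": 3,
})

total = sum(reasons.values())  # 61 delayed tests in all
for reason, count in reasons.most_common():
    print(f"{reason}: {count} ({100 * count / total:.0f}%)")
# equipment breakdown: 40 (66%)
# sample not received: 11 (18%)
# internal quality control out of range: 7 (11%)
# fresh sample required: 3 (5%)
```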

All 11 of these samples were from patients undergoing preventive health check-ups, which include several tests requiring urine, stool and fasting and postprandial blood samples. When such a patient gave the fasting blood sample, the software booked all of the tests, including the urine, stool and postprandial tests, as if the patient had given every sample. This meant that if a patient did not give a stool, urine or postprandial blood sample, the software still showed these tests as booked and their reports as not ready on time, wrongly inflating the delays. The laboratory staff therefore created a new protocol of not booking samples and not generating bar codes until the samples were received. We planned another PDSA cycle to test this protocol.

Change 3: booking samples in the software only after the samples were received

This change was identified as part of the learning from the second change. A PDSA cycle was run to try the new protocol, under which samples would not be booked and bar codes would not be generated until the samples were received. The protocol was tested from 12 August to 31 August 2013. During the PDSA, 110 samples were not received and so were not booked. Had this protocol not been in place, all 110 of these samples would have been booked and shown as delayed. For the month of August there were 89 delays (1.3%); without the protocol there would have been 199 (89+110), inflating the percentage of delays to 2.8%. After this PDSA, the protocol was implemented and has become part of the workflow processes of the laboratory.
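
The logic of this protocol is simple; the sketch below is our illustration with hypothetical names, not the hospital's actual LIS code:

```python
from itertools import count

_barcode_seq = count(1)

def generate_bar_code(test: str) -> str:
    """Stand-in for the LIS bar code generator (hypothetical)."""
    return f"BC-{next(_barcode_seq):06d}"

def book_received_samples(ordered_tests: dict, received: set) -> dict:
    """Book only those ordered tests whose sample has actually arrived."""
    booked = {}
    for test, sample_type in ordered_tests.items():
        if sample_type in received:
            booked[test] = generate_bar_code(test)
        # Tests whose samples have not been received stay unbooked, so they
        # can no longer appear in the LIS as booked-but-delayed.
    return booked

# Example: a health check-up where only the fasting blood sample has been given
order = {
    "fasting glucose": "fasting blood",
    "postprandial glucose": "postprandial blood",
    "urine routine": "urine",
    "stool routine": "stool",
}
print(book_received_samples(order, {"fasting blood"}))
# {'fasting glucose': 'BC-000001'}
```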

Change 4: inform patients about any delay in reports

The team wanted to test whether the front desk staff would be able to inform patients about delays by telephone. Delayed reports would flash on the computer screens of the front desk staff, who would then promptly inform the patients. The PDSA cycle was run from 1 October to 12 October 2013 (2 and 6 October excluded). Twenty-eight reports were delayed during this period, and the front desk called all of these patients, citing the reason for delay using a standard script. After this PDSA cycle, the front desk staff incorporated calling all patients whose reports were delayed into their standard workflow.

Although the second and fourth changes did not directly impact TATs, they laid the foundation for improvement. Putting documentation of reasons for delay into practice (change 2) made the staff aware of the causes of delay on a daily basis and helped them take action to avoid future delays. Informing patients about delayed reports (change 4) created transparency and motivated the laboratory staff to reduce delays. It also saved unnecessary trips to the hospital for patients whose results were not yet ready.

Results

Once the PDSA cycles were completed, we implemented the tested changes. We set a goal of <5% delayed tests as the outcome measure and have met it ever since the changes were implemented. A major reduction in delays happened quickly once the consultant adhered to the scheduled time for final verification of results (change 1). Since the changes were made, the delay has been less than 3% in most months (see online supplementary appendix 2). This is a significant improvement from the baseline of 16% delay.

Verification of reports during a set time has become a protocol and is followed diligently by the consultant. In most months no reports are delayed because of this protocol; in the occasional month when the protocol is not followed, a few reports are delayed. We have therefore depicted the data in a run chart9 showing only the months in which there were delays due to verification not being done on time (see online supplementary appendix 3). The second process measure was whether the reasons for delay were being documented daily in the LIS. Review of the data showed that documentation of reasons for delay improved from a median of 77% to 93% (see online supplementary appendix 4). These two process measures, along with the outcome measure, are reported every month to the Medical Director.

Data on these measures were not available for June 2014, September 2014, December 2015 and January 2016 due to an error in the LIS.

Lessons and limitations

We learnt three key lessons from this project. First, working with all stakeholders helps identify efficient solutions. The main problem behind the delays was that the consultant was verifying results whenever it was convenient for him, which meant that some were verified too late to give to the patient. This was solved by his setting aside a fixed time each day to verify results. It did not require more work or more resources (he was doing the work already); it just meant that he organised his time to make the whole system work better: better quality of care for patients with the same amount of resources. Second, new issues arise during implementation. Some of the problems that we worked on were unknown at the start of the project (eg, that results were counted as delayed even if specimens had not been submitted). Third, measurement was an important driver of change. Sharing the data with laboratory staff helped them understand the scope of the problem, daily recording of the reasons for delayed reports helped us understand the causes of delay, and transparency about delays motivated the staff to improve.

The main limitation of our project was that we did not have a patient representative or a clinician from the outpatient clinics on the team. Because of this, we did not get direct feedback from patients about whether they were happy with the new system. However, we already have a patient feedback system in place (that is how this project started, when we realised we were getting many complaints about delays) and we are no longer hearing about delays.

Conclusion

Our team successfully reduced the proportion of delayed reports from a baseline of 16% to a mean of 3%, sustained over the past 5 years. Continuous measurement of delays, documentation of reasons for delay, clarity on the key tasks needed to address the common causes of delay and regular meetings to review the data were essential to this success. The sustained improvements in this project have motivated the laboratory staff to expand this work to reduce delays for patients in the emergency and inpatient departments.

Other hospitals could apply these same lessons to reduce delays in reporting laboratory test results. The causes of delay and their solutions are unlikely to be identical elsewhere, but the steps of using data to identify common problems, using participatory decision-making to identify possible solutions and then using PDSA cycles to test and refine those solutions can be applied in most contexts.

Acknowledgments

We acknowledge the support and encouragement of Mr Abhishek Bhartia, Director and Dr Sneh Bhargava, Medical Director. We also acknowledge the effort put in by Ms Megha Dhingra, Quality Executive.

References

Footnotes

  • Twitter @Saru4q

  • Contributors SB wrote the main document except for the background section, conceptualised the improvement project and provided leadership in applying the quality improvement methodology. SB also worked with RG on data analysis and interpretation. PW wrote the background, annotated the outcome and process measure graphs, supported the QI process, coordinated meetings with all stakeholders and collected data. RG provided leadership in engaging the laboratory staff and was actively engaged in the brainstorming sessions to come up with change ideas.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval This project was deemed to be an improvement project and not a study on human subjects. Therefore, ethics approval was not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.