Abstract
This improvement initiative, guided by the Model for Improvement framework, was designed to reduce waiting times and visit duration in the intravitreal therapy clinic while improving patient and staff experience. In keeping with our aim to provide good-quality, patient-centred care and to improve continually, we optimised the appointment profile and patient flow.
We involved a multidisciplinary team (one consultant, junior doctors, staff nurses, technicians, and a receptionist), as well as patients and relatives, to understand the main sources of delay in the clinic. Process mapping, a fishbone diagram, and run charts, together with feedback from patients and staff, provided insight into the possible root causes of the delays experienced by our patients. The results of the inquiry led us to focus our actions on optimising appointment scheduling.
After implementing the new scheduling profile (with a gap in the middle of the session), several plan-do-study-act cycles and a comparative qualitative study based on interviews with 10 patients showed that waiting times decreased and that patient and staff experience improved.
This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non-commercial and it is otherwise in compliance with the license.
Problem
This improvement initiative was designed to reduce waiting times in the intravitreal therapy (IVT) clinic by optimising the appointment schedule. We operate an outpatient clinic service (Figs. 1 and 2) staffed by retina specialists, nurses, healthcare assistants, technicians, receptionists, and administrators. Our aim, aligned with our organisation’s vision and the NHS Five Year Forward View (2014),[1] is to provide good-quality, patient-centred care.[2-5] However, negative feedback in our Friends and Family (F&F) test[6] showed that the service was not optimal: patients and relatives complained of long waits once they arrived in the clinic. This prompted me to design this improvement initiative, guided by the Model for Improvement framework.[5, 7, 8]
Background
My inquiry[9,10] included regular meetings with a multidisciplinary team (one consultant, junior doctors, staff nurses, technicians, and a receptionist) to understand the main sources of delay in the clinic. We also involved patients and clinicians outside this team through informal meetings and dialogue; there was a unified desire to improve the clinic flow. Process mapping,[11] a fishbone diagram,[12] run charts (Figs. 3 and 4),[13] and feedback from patients[14,15] and staff provided insight into the possible root causes of the problem.[16] The results of my inquiry led me to take actions directed at how the system ought to work and what changes needed to be made. Our service has unique features and faces various challenges in balancing supply and demand: there is uncertainty in patients’ arrival times and in their individual needs for prompt assessment. We focused on optimising appointment scheduling. A well-designed appointment system, matching supply and demand, would allow us to deliver a timely and convenient service while improving patient and staff experience.
The vast majority of patients in the IVT clinic require services that can be delivered within a fixed length of time. Appointment slots were therefore given at equal intervals (5-15 minutes) (Figs. 5 and 6). The length of time patients have to wait (i.e. the difference between a patient’s appointment time, or their arrival time if they are late, and the time when they are actually seen by the nurse) is a common cause of frustration and inconvenience.[17] The Patient’s Charter,[18] introduced in 1991, set the standard that outpatients should be ‘given a specific appointment time and be seen within 30 minutes of that time’.
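That waiting-time definition is straightforward to operationalise. The sketch below is a minimal illustration of the calculation in Python; the function name and example times are hypothetical and not taken from our clinic’s data system.

```python
from datetime import datetime

def waiting_minutes(appointment: str, arrival: str, seen_by_nurse: str) -> float:
    """Waiting time as defined above: time seen by the nurse minus the later of
    the booked appointment time and the actual arrival time (if the patient is late)."""
    fmt = "%H:%M"
    booked = datetime.strptime(appointment, fmt)
    arrived = datetime.strptime(arrival, fmt)
    seen = datetime.strptime(seen_by_nurse, fmt)
    start_of_wait = max(booked, arrived)  # a late arrival restarts the clock
    return (seen - start_of_wait).total_seconds() / 60

# Illustrative example: booked 09:10, arrived 09:25 (late), seen 09:40 -> 15-minute wait
print(waiting_minutes("09:10", "09:25", "09:40"))
```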
See supplementary file: ds5445.pptx - “Figure 1”
Baseline measurement
The objectives agreed with all our stakeholders[19] were: 1) to reduce the waiting time to see the nurse (30-minute target) and 2) to improve patient and staff experience. A balanced set of measures[20] linked to these objectives was used to assess the extent to which our targets were being achieved. They included: 1) outcome measures (1.1. time waiting to see the nurse; 1.2. overall visit duration), 2) process measures (the number of slots available for appointments in each session), and 3) balancing measures (were the changes causing new problems in other parts of the service, for example delays in parallel clinics?). To understand how patients and staff experienced the changes, we looked for evidence of the impact on their satisfaction using their narrative stories.[21-24]
These indicators were chosen with our stakeholders’ values and interests in mind. Patients and relatives stated freely in the F&F survey that “time waiting” and “time spent” in the clinic were important to them. Delays and backlogs directly affect staff morale and job satisfaction.[25] Clinic managers and the clinical director wanted evidence that the number of patients seen per clinic was not reduced, so we provided volume activity data. The “30-minute target to see the nurse” measure also allowed us to compare our performance with evidence of best practice, as recommended by the Royal College of Physicians.[26]
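For illustration only, the sketch below shows how the outcome measures could be computed from a simple list of visit records; the field names and example times are hypothetical and not drawn from our actual dataset.

```python
from datetime import datetime

# Hypothetical visit records; field names are illustrative only
visits = [
    {"appointment": "09:00", "arrival": "08:50", "seen_by_nurse": "09:20", "left_clinic": "10:05"},
    {"appointment": "09:10", "arrival": "09:30", "seen_by_nurse": "10:15", "left_clinic": "11:40"},
]

def minutes_between(earlier: str, later: str) -> float:
    fmt = "%H:%M"
    return (datetime.strptime(later, fmt) - datetime.strptime(earlier, fmt)).total_seconds() / 60

# Outcome measure 1.1: waiting time to see the nurse
# (zero-padded HH:MM strings compare chronologically, so max() picks the later time)
waits = [minutes_between(max(v["appointment"], v["arrival"]), v["seen_by_nurse"]) for v in visits]
# Outcome measure 1.2: overall visit duration, from arrival to leaving the clinic
durations = [minutes_between(v["arrival"], v["left_clinic"]) for v in visits]

print(f"Waiting >30 min to see the nurse: {sum(w > 30 for w in waits)}/{len(waits)}")
print(f"Mean visit duration: {sum(durations) / len(durations):.0f} min")
```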
See supplementary file: ds5482.pptx - “Figure2 (2)”
Design
As a first step, we modelled patient arrivals. Patients are expected to arrive according to their appointment time, but in practice many are unpunctual to some degree. Variability in the arrival process contributes to deficiencies in clinic performance such as clinician idle time, patient congestion, and prolonged waiting. Our purpose was to gain insight that could inform improved patient scheduling.
We used run charts (Figs. 3 and 4)[13] to assess our performance measures. They display performance over time, showing clearly whether things are stable, improving, or deteriorating, rather than giving only a general impression of how things are. The data collected in November 2014 highlighted common-cause but also exceptional (special-cause) variation in the operation of our clinics.[27] With a fishbone diagram,[12] we identified some of the sources: some patients did not arrive at their allocated time, and some patients needed to be given priority (e.g. frail patients on special transport). This disrupted patient flow, and not seeing patients in the order of their appointments caused delays for other patients and inefficiency.
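To make the effect of arrival variability concrete, the toy simulation below (not based on our clinic’s actual arrival data; the unpunctuality and service-time parameters are assumptions) shows how unpunctual arrivals feeding a single nurse at fixed 10-minute slots can generate long waits even when average capacity matches demand.

```python
import random

random.seed(1)

def simulate_session(booked_slots, service_min=10.0, jitter_sd=15.0):
    """Toy single-server model: patients booked at `booked_slots` (minutes from session
    start) arrive with Gaussian unpunctuality and are seen strictly in order of arrival."""
    arrivals = sorted(max(0.0, t + random.gauss(0, jitter_sd)) for t in booked_slots)
    nurse_free, waits = 0.0, []
    for arrived in arrivals:
        start = max(arrived, nurse_free)  # wait if the nurse is still with the previous patient
        waits.append(start - arrived)
        nurse_free = start + service_min
    return waits

booked = [i * 10 for i in range(15)]      # 15 consecutive slots at 10-minute intervals
waits = simulate_session(booked)
print(f"Patients waiting >30 min: {sum(w > 30 for w in waits)} of {len(waits)}")
```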
After various meetings and in-depth discussion of our findings and ideas, we concluded that we needed to focus on optimising our clinic appointment schedule (originally 15 consecutive appointments at 10-minute intervals) to provide some internal “slack” and make it easier to absorb variation in demand (patients arriving too early or too late for their appointments).
We hypothesised that a 30-minute gap in the middle of the session could improve our system significantly by absorbing any backlog accumulated during the first half of the session (Figs. 5 and 6). Because of the high demand for clinic appointments, the number of patients seen per session had to be kept stable to maintain activity, so the first appointment had to be given 10 minutes earlier and the last one 10 minutes later. Receptionists and nurses agreed to start their day 10 minutes earlier; I influenced this by empowering colleagues, sharing our common vision, and building relationships.
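As a sketch only (the 09:00 and 08:50 start times and the exact slot arithmetic are illustrative, not our clinic’s actual booking template), the change to the booking profile can be expressed as a small schedule generator:

```python
from datetime import datetime, timedelta

def booking_profile(first_slot: str, n_slots: int = 15, interval_min: int = 10, gap_min: int = 0):
    """Return appointment times: n_slots at a fixed interval, with an optional
    gap inserted after the first half of the session."""
    start = datetime.strptime(first_slot, "%H:%M")
    offset, slots = 0, []
    for i in range(n_slots):
        if gap_min and i == n_slots // 2:  # mid-session gap to absorb any backlog
            offset += gap_min
        slots.append((start + timedelta(minutes=i * interval_min + offset)).strftime("%H:%M"))
    return slots

old_profile = booking_profile("09:00")              # 15 consecutive 10-minute slots
new_profile = booking_profile("08:50", gap_min=30)  # earlier start, 30-minute mid-session gap
print(old_profile)
print(new_profile)
```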
After implementing the new scheduling profile, I used the plan-do-study-act (PDSA) method[28] and a comparative qualitative study, interviewing 10 patients, to assess the impact.[16] Waiting times decreased (Table 1), and patient and staff experience of the clinic journey improved.
My leadership role in this initiative was as host.[29] Through collaborative working between staff and patients, and by encouraging, engaging, and stimulating colleagues and patients,[30] I influenced the activities of our service and achieved the progress described below.
Table 1. Comparison of results before (P) and after (N) the change in booking schedule
Number of patients: P 76; N 106
Clinics: P 6; N 8
Injected patients: P 44 (57.9%); N 48 (45.3%)
Non-injected patients: P 32 (42.1%); N 58 (54.7%)
Arrival >15 minutes early: P 19 (25.0%); N 32 (30.2%)
Arrival >30 minutes early: P 2 (2.6%); N 9 (8.5%)
Arrival >60 minutes early: P 0; N 0
Arrival >15 minutes late: P 7 (9.2%); N 5 (4.7%)
Arrival >30 minutes late: P 4 (5.3%); N 4 (3.8%)
Arrival >60 minutes late: P 2 (2.6%); N 1 (0.9%)
Waiting >30 minutes to see the nurse: P 29 (38.2%); N 2 (1.9%)
Waiting >60 minutes to see the nurse: P 2 (2.6%); N 0
Non-injected visit duration >1 hour: P 27 (84.4%); N 21 (36.2%)
Injected visit duration >2 hours: P 39 (88.6%); N 20 (41.7%)
A first test of change using a PDSA cycle[28] to trial our idea showed that the changes implemented resulted in measurable improvements across a set of measures (Table 1). The percentage of patients waiting more than 30 minutes to see the nurse fell from 38.2% to 1.9%. Visits lasting over three hours fell from 10.4% to 1.9%. Activity was maintained. Reducing the visit duration of injected and non-injected patients allowed staff to see the same number of patients per session and take their break earlier, resulting in a more efficient process that required less staff time. Further details are given in Table 1.
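As a quick arithmetic check, the headline proportions quoted above can be re-derived from the counts in Table 1:

```python
# Counts taken directly from Table 1
before_total, after_total = 76, 106
before_over30, after_over30 = 29, 2  # patients waiting >30 minutes to see the nurse

print(f"Before: {before_over30 / before_total:.1%}")  # 38.2%
print(f"After:  {after_over30 / after_total:.1%}")    # 1.9%
```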
I not only undertook an inquiry into the outcomes of the technical actions; I also developed an orientation of inquiry that measured experience, to show improvement in the social and organisational process.[10, 16, 24, 31] I assessed the impact of the changes on patient and staff satisfaction by listening to their stories (Table 2).[21, 24] I am well aware that this type of qualitative evidence is more equivocal than quantitative measures (“minutes waiting” or “time spent in the clinic”), but it is very important for understanding what is actually going on: it gives a greater depth of insight and highlights key points to consider in future courses of action.
Table 2. Qualitative evidence. Extracts from interviews, emails and correspondence
The unexpected
‘The surprising thing was that although I thought we would need to reduce the number of investigations carried out in each visit, we didn’t have to reduce any tests at all. What we did was try to improve flow and visit duration started reducing.’
The staff recognised the benefits of getting patients seen in order and preventing backlogs. Staff commented on the improvement in clinic environment and reduction in stress.
‘The knock-on effect of getting patients seen within 30 minutes of their arrival time has been really good for patients. Personally, I’ve really enjoyed it. Before, I used to come to work every day and it could feel patients were unhappy with the long waiting times from the beginning of the clinic. It has made us realise that keeping a steady flow facilitates the clinic dynamic and makes everything run more efficiently, with a reduction in visit duration.’ (Staff nurse)
The relationships between the members of the multidisciplinary team have changed as understanding and respect for each role has grown. The receptionist, nurses, and technicians can see the impact they all have on patient care.
‘It’s led to a rapid improvement both in process time and visit duration and it didn’t cost anything. We just adjusted the booking times and we are really proud of what we have achieved.’ (Clinic manager/coordinator)
The changes put in place brought a range of benefits, including smoother running of the clinics, quicker access to see the nurse, shorter visit duration, and a higher percentage of patients satisfied with their experience. If this improvement is maintained, we may be able to see more patients per session in the future.
Strategy
I used a realistic evaluation framework[39] in combination with quantitative and qualitative methods[40] to analyse my initiative. Using more than one method produced a more complete picture of the situation and helped me answer questions such as: How does context influence practice? Did leaving a 30-minute gap in the middle of the session affect our service delivery, and if so, in what ways and for whom? Which cultural resources are necessary to sustain the changes?
The key finding in our first round of inquiry was a mismatch between when demand presented (patient arrivals) and when capacity was available (a nurse free to see the patient), with an amplification effect: the backlog worsened as patients travelled down the multistep pathway. It was clear that we could not change when hospital-transport patients arrived, so we had to change our working pattern.
As we continued with the process of inquiry, it became clear from real-time data that leaving a gap in the middle of the session (mechanism) in the IVT clinic (context) resulted in a steadier flow of patients through the pathway (outcome), a context-mechanism-outcome (CMO) configuration,[39] which in turn improved patient and staff experience. These were perceived to be positive developments by patients, doctors, nurses, healthcare assistants, technicians, administrators, and managers.[2, 4, 15, 29, 41-43] An unintended consequence was the perception that clinic slots would be “wasted” if all patients arrived at their allocated times; some members of staff regarded the surplus appointment capacity as waste. In contrast, some nurses and receptionists interviewed felt the gap in the middle of the session provided “extra” space in the clinic for potential walk-in patients.
Post-measurement
As with any large-scale change in a complex environment,[8, 44, 45] where numerous other things also went on during the period of the work, it was impossible to know for sure that the results we had seen were directly caused by the changes made. My role as “realistic evaluator”[39] was concerned with finding out the contingencies between mechanisms and context, identifying CMO pattern configurations through consultation with stakeholders and developing explanatory theory about how the mechanisms produced the outcomes and how they could be used in real practice.
We needed to determine whether it was the gap in the middle of the session that improved patient flow, despite patients not arriving at their allocated times, or whether the clinic environment simply supported patients being seen according to their arrival time rather than their allocated time. In this initiative there was no change in clinic staff that might have affected the interpretation or obscured the analysis. The same nurses were able to deliver more streamlined care after the introduction of the mid-session gap, which smoothed demand and maintained patient flow. This supports the view that the introduction of the 30-minute gap produced the reduction in waiting time and visit duration. However, that is not to say that the same finding would hold in a different clinical setting; this will be tested through a process of cumulation[46, 47] in the future. There may have been more than one mechanism in operation at the same time. What was most important in this improvement initiative was the process of developing, testing, and refining the CMO configurations; this has the potential to unearth the various permutations and so better explain what actually occurred.
Further tests in different contexts will allow us to refine the CMO configurations and the explanatory theory (the use of tailored booking profiles as a function of specific patient population, clinic attendance, staff attitudes, and sustainability). A cumulative understanding[39] of how tailored booking plays out differently in different contexts will produce patterns of outcomes for subsequent action and implementation in any context. As part of the analysis, all the claims, concerns and issues of people involved in the particular setting who are affected by the evaluation (fourth-generation evaluation)[48, 49] will be gathered. We will look for patterns and comparisons.
Tailored appointment scheduling improved patient flow in our clinic, and there is evidence that enhancing patient flow increases patient safety.[12] Scheduling should therefore underpin improvement tools and programmes such as Optimizing Patient Flow, developed by the IHI,[5] the Esther project in Sweden,[50] and NHS innovations to improve the quality of healthcare services.[12] In my opinion, clinic appointment scheduling processes in the NHS are generally not designed deliberately: they tend to grow in response to internal constraints, without data-driven practices, and rely too heavily on behaviour change to accommodate changes in patient flow. More emphasis should be placed on appropriate scheduling.
See supplementary file: ds5449.pptx - “Figures 3-6”
Lessons and limitations
As with any large-scale change in a complex environment,[8, 44, 45] where numerous other things also went on during the period of the work, it was impossible to know for certain that the results we saw were directly caused by the changes made. My role as “realistic evaluator”,[39] discussed above, involved constantly moving between principle (theory) and practice, which I found very stimulating.
It was difficult to determine whether the mid-session gap itself improved patient flow, or whether the clinic environment simply supported patients being seen in order of arrival rather than allocated time; more than one mechanism may have been operating at the same time. Testing whether the same finding holds in different clinical settings, through a process of cumulation,[46, 47] is something I did not have time to accomplish during this improvement initiative. What has been most important in my learning is the process of developing, testing, and refining the CMO configurations, because this has the potential to unearth the various permutations and so better explain what actually occurred.
A cumulative understanding[39] of how tailored booking plays out in different contexts will produce patterns of outcomes to guide subsequent action and implementation. As part of that analysis, I will gather the claims, concerns, and issues of all the people in the setting who are affected by the evaluation (fourth-generation evaluation)[48, 49] and look for patterns and comparisons.
As argued above, tailored appointment scheduling improved patient flow and deserves more emphasis: in the NHS, booking profiles are too often left to grow around internal constraints rather than being designed deliberately from data.
In a complex organisation like the NHS, a systematic approach to securing and maintaining the engagement of multiple, diverse perspectives is essential for changes to work. I will assess very carefully the readiness and capacity of my team to respond effectively to the adaptive challenge. During my initiative I came across examples of negative adaptive capacity (e.g. no questioning of top-management decisions and no bottom-up challenge to decisions). As a leader, I have to expose conflict, challenge the norms, and allow creative deviance. My role is to stimulate the necessary adaptive work on the part of all team members. This is an approach I have to develop further.
I am committed to continuing my development and to ensuring the benefits are long-lasting for myself, my colleagues, our service, patients, and carers. To continue on my life-long journey of improvement and capability development (Fraser and Greenhalgh, 2001), I will pursue the unexpected and will collect data on balancing measures to gain a better understanding of the impact of the initiative on the wider system (are patients in parallel clinics suffering longer waits?) so that I can take preventive action. I will need to provide clear, visible leadership and support to give staff the confidence to try new approaches, even if they do not succeed the first time. There is no single cause of disrupted patient flow, and no single intervention will address the problem holistically. Improving flow is a system-wide undertaking that requires strong and consistent leadership and widespread commitment from health professionals throughout the organisation, and I will have to develop my leadership skills further to succeed.
Conclusion
The involvement of a multidisciplinary team (one consultant, junior doctors, staff nurses, technicians, and a receptionist), as well as patients and relatives, was essential to understand the main delays in the clinic and their impact on the quality of our service. Process mapping, a fishbone diagram, and run charts, together with feedback from patients and staff, provided insight into the possible root causes of the delays experienced by our patients. The results of the inquiry led us to focus our actions on optimising appointment scheduling.
After implementing the new scheduling profile (with a gap in the middle of the session), several plan-do-study-act cycles and a comparative qualitative study based on interviews with 10 patients showed that waiting times decreased and that patient and staff experience improved.
References
1 NHS Five Year Forward View (2014) [online]. Available at www.england.nhs.uk/ourwork/futurenhs/ (accessed 1st of April 2015).
2 Berwick D. What “patient-centered” should mean: confessions of an extremist. Health Affairs 2009;28(4):w555–65 [online]. Available from: http://ehis.ebscohost.com.libezproxy.open.ac.uk/ehost/detail?vid=3&sid=537136c2-d416-47f2-8a95-3b8a98a861e7%40sessionmgr114&hid=109&bdata=JnNpdGU9ZWhvc3QtbGl2ZSZzY29wZT1zaXRl#db=a9h&AN=48469437 (accessed 20th September 2014).
3 Coulter A, Fitzpatrick R and Cornwell J (2009). Measures of Patients’ Experience in Hospital: Purpose, Methods and Uses, The Point of Care, The King’s Fund [online]. Available at www.kingsfund.org.uk/publications/measures-patients-experience-hospital (accessed 10th February 2015).
4 Parand A, Dopson S, Renz A, et al. The role of hospital managers in quality and patient safety: a systematic review. BMJ Open 2014;4(9):e005055.
5 Institute for Healthcare Improvement (2015). How to improve [online]. Available at www.ihi.org/resources/Pages/HowtoImprove/default.aspx (accessed 30th May 2014).
6 NHS Choices. The NHS friends and family test [online]. 2013. Available at www.nhs.uk/nhsengland/aboutnhsservices/pages/nhs-friends-and-family-test.aspx (accessed 28th May 2015).
7 Langley GL, Moen R, Nolan KM, et al. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance (2nd edition). San Francisco: Jossey-Bass Publishers. 2009.
8 Powell AE, Rushmer RK and Davies HTO. A Systematic Narrative Review of Quality Improvement Models in Health Care. Edinburgh: NHS Quality Improvement Scotland [online]. 2009. Available at http://www.healthcareimprovementscotland.org/previous_resources/hta_report/health_care_improvement_models.aspx (accessed 15th July 2014).
9 Dewey J. Logic – the theory of inquiry, New York: Henry Holt & Co. Nursing, 1938:5–103.
10 Ramsey CM. Developing Productive Inquiry, Section 5, (B839, Making a difference: the management initiative) Milton Keynes, The Open University.2012.
11 NHS Institute for Innovation and Improvement. Cause and effect (fishbone) [online].2008. Available at www.institute.nhs.uk/quality_and_service_improvement_tools/quality_and_service_improvement_tools/cause_and_effect.html (accessed 1st November 2014)
12 NHS Institute for Innovation and Improvement. Process mapping [online]. 2008. Available at http://www.institute.nhs.uk/quality_and_service_improvement_tools/quality_and_service_improvement_tools/process_mapping_-_an_overview.html (accessed 7th March 2015)
13 Institute for Healthcare improvement. Run charts [online]. 2014. Available at www.ihi.org/resources/Pages/Tools/RunChart.aspx (accessed 12th October 2014)
14 Clarke E and Eby M. Block 4, Box 1.1 (K342 Enhancing the patient experience), Milton Keynes, The Open University. 2012.
15 Carel H. ‘Havi Carel: the role of the patient in improving quality’. The King’s Fund, 6 June 2013 [online]. Available at http://www.kingsfund.org.uk/audio-video/havi-carel-role-patient-improving-quality (accessed 28th June 2014).
16 Glasby J, Beresford P. Who knows best? Evidence-based practice and the service user contribution. Critical Social Policy 2006;26:268–84.
17 Murray M, Berwick DM. Advanced access: reducing waiting and delays in primary care. JAMA 2003;289:1035-40.
18 Stocking B. Patient's charter. BMJ. 1991 Nov 9;303(6811):1148-9.
19 Hartley J and Benington J (2010). Leadership for Healthcare, Policy Press, Bristol.
20 Institute for Healthcare Improvement (2013). ‘Measures’ [online]. Available at: http://www.ihi.org/knowledge/Pages/Measures/default.aspx (accessed 11th November 2014).
21 Labov W (1972). ‘The transformation of experience in narrative syntax’ in Labov, W. (ed.) Language in the inner city: studies in black English vernacular, Philadelphia, University of Pennsylvania Press.
22 Ramsey CM. Narrative: from learning in reflection to learning in performance. Management Learning 2005;36:219–35.
23 Ramsey CM. Narrating development: professional practice emerging within stories. Action Research 2005b;3:279–95.
24 Riessman CK. Narrative Analysis, London, Sage. 1993.
25 NHS Institute for Innovation and Improvement. Bottlenecks [online]. 2008. Available at http://www.institute.nhs.uk/quality_and_service_improvement_tools/quality_and_service_improvement_tools/bottlenecks.html (accessed 20th October 2014)
26 Royal College of Physicians. How user friendly is your OPD [online]. 2004. Available at https://www.rcplondon.ac.uk/sites/default/files/documents/how-user-friendly-outpatient-department.pdf (accessed 10th March 2015).
27 Ogrinc GS and Headrick LA. Fundamentals of Healthcare Improvement: A Guide to Improving Your Patient’s Care, Illinois. 2008
28 NHS Institute for Innovation and Improvement. Plan-Do-Study-Act [online]. 2008. Available at http://www.institute.nhs.uk/quality_and_service_improvement_tools/quality_and_service_improvement_tools/plan_do_study_act.html
29 Cornwell J. ‘Jocelyn Cornwell: How can organisations support patients to lead quality improvement?’ The King’s Fund, 6 June 2013 [online]. Available at http://www.kingsfund.org.uk/audio-video/jocelyn-cornwell-how-can-organisations-support-patients-lead-quality-improvement (accessed 28th June 2014).
30 Berwick D. Don Berwick: clinical leadership of health care reform. The King’s Fund [video]. 2013. Available at http://www.kingsfund.org.uk/audio-video/don-berwick-clinical-leadership-health-care-reform (accessed 28th June 2014).
31 Ramsey CM. Developing Productive Inquiry, Section 5, (B839, Making a difference: the management initiative) Milton Keynes, The Open University. 2012.
32 Senior B and Fleming J. ‘The Leadership of Change’, in Senior B and Swailes S. Organizational change, 3rd edn,Prentice Hall, London. 2006.
33 Batalden PB and Davidoff F (2007). What is “quality improvement” and how can it transform healthcare?, Quality and Safety in Health Care 2007;16:2–3.
34 Department of Health. High Quality Care for All: NHS next stage review final report CM 7432. 2008.
35 Berwick D. Don Berwick: clinical leadership of health care reform. The King’s Fund [video]. 2013. Available at http://www.kingsfund.org.uk/audio-video/don-berwick-clinical-leadership-health-care-reform (accessed 28th June 2014).
36 Bloor G and Dawson P. Understanding professional culture in organizational context. Organization Studies 1994;15:275–96.
37 Martin J. Organizational Culture: Mapping the Terrain, London, Sage. 2002
38 Schein EH. ‘The levels of culture’ in DiDomenico, M., Vangen, S., Winchester, N., Boojihawon, D.K. and Mordaunt, J. (eds). Organizational Collaboration: Themes and Issues, London, Routledge. 2004.
39 Otte-Trojel T, de Bont A, Rundall TG, et al. How outcomes are achieved through patient portals: a realist review. J Am Med Inform Assoc 2014;21:751-7.
40 Institute for Healthcare Improvement. ‘Measures’ [online]. Available at: http://www.ihi.org/knowledge/Pages/Measures/default.aspx (accessed 11th November 2014).
41 Heifetz R, Grashow A and Linsky M. The Practice of Adaptive Leadership, Boston, MA, Harvard University Press. 2009.
42 King’s Fund, The (2015). Patient-centred care [online]. Available at www.kingsfund.org.uk/projects/pfcc?gclid=CjwKEAjw9bKpBRD-geiF8OHz4EcSJACO4O7ToP6NRfWUV5R2WsLlis8K3tndgH5npXe0dgJ3sYlhPhoCtefw_wcB (accessed 10th April 2015).
43 West MA (2004). Effective Teamwork (2nd edn), The British Psychological Society & Blackwell.
44 Clarke J, Davidge M and Lou J. The how-to guide for measurement for improvement. Patient Safety First [online]. 2009. Available from http://www.patientsafetyfirst.nhs.uk/ashx/Asset.ashx?path=/How-to-guides-2008-09-19/External%20-%20How%20to%20guide%20-%20measurement%20for%20improvement%20v1.2.pdf (accessed 6th December 2014).
45 Redfern S, Christian S, Norman I. Evaluating change in health care practice: lessons from three studies. Journal of Evaluation in Clinical Practice 2003;9:239-250.
46 Pawson R, Greenhalgh T, Harvey G, et al. Realist review - a new method of systematic review designed for complex interventions. J Health Serv Res Policy 2005;10(Suppl 1):21-34.
47 Greenhalgh T, Humphrey C, Hughes J et al. How do you modernize a health service? A realist evaluation of whole scale transformation in London. Milbank Quarterly 2009;87:391-416.
48 Guba E and Lincoln Y. Fourth Generation Evaluation Sage, Newbury Park, California. 1989.
49 Koch T. Beyond measurement: fourth-generation evaluation in nursing. Journal of Advanced Nursing 1994;20:1148–55.
50 King’s Fund. Esther Project [online]. 2014. Available at www.kingsfund.org.uk/sites/files/kf/goran-henriks-lessons-from-sweden-esther-project-kings-fund-may12.pdf (accessed 20th March 2015).
Declaration of interests
No conflict of interest.
Acknowledgements
This improvement initiative was carried out as part of the NHS Leadership Academy Mary Seacole Programme. I am greatly indebted to the NHS Leadership Academy and my organisation, Moorfields Eye Hospital, for providing me with this excellent opportunity to develop my skills and knowledge. The invaluable help of Tarun Sharma, intravitreal clinic coordinator at Moorfields at Croydon University Hospital, is greatly appreciated.
Ethical approval
According to the policy defining activities that constitute research at Moorfields Eye Hospital, this work met the criteria for operational improvement activities that are exempt from ethics review.
Supplementary materials
Supplementary Material for Waiting time reduction in intravitreal clinics by optimization of appointment scheduling: balancing demand and supply
Extra information supplied by the author