

A provider feedback intervention to increase uptake of colorectal cancer screening in a Swiss academic general practice
  1. Pau Mota1,
  2. Reto Auer1,2,
  3. Alexandre Gouveia1,
  4. Kevin Selby1
  1. Center for Primary Care and Public Health (Unisanté), University of Lausanne, Lausanne, Switzerland
  2. Institute of Primary Health Care (BIHAM), University of Bern, Bern, Switzerland
  Correspondence to Dr Kevin Selby; kevin.selby@hospvd.ch

Footnotes

  • Contributors PMM planned and performed the intervention, collected and analysed the data and prepared the first draft. AG and RA planned the intervention and gave feedback on drafts. KS planned the intervention, analysed the data, gave feedback on drafts and finalised the manuscript.

  • Funding The authors received no specific funding for this work.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Patient consent for publication Not required.



Problem

Recent data suggested that only 33% of eligible patients were up to date with colorectal cancer (CRC) screening at our academic primary care practice.1 A new cantonal screening programme will not be fully functional until 2022. Given our currently low screening participation and the lag time before mailed outreach from the screening programme takes effect, we aimed to increase screening by focusing on measures not included in the organised programme that might help medical residents improve screening uptake by their patients.

Background

CRC is the second leading cause of cancer mortality in Switzerland. Screening programmes for CRC can reduce CRC mortality and have shown a good cost-benefit ratio, as early detection can improve survival.2 3 The most recently available data for our outpatient clinic in Lausanne, Switzerland, show that around 33% of patients are up to date with CRC screening.1 4 A systematic CRC screening programme was initiated in our canton that offers the choice between two fully reimbursed screening tests: the faecal immunochemical test (FIT) or screening colonoscopy.5 The Department of Ambulatory Care and Community Medicine at the University of Lausanne developed a pilot project to implement the screening programme across the canton of Vaud. The CRC screening programme started in 2015, with our department identified as a pilot unit to test the e-platform, the patient information delivery process and the entry of patients into the programme.6 The programme has a target screening population of 170 000 people, who will be mailed letters inviting them to attend a dedicated counselling visit about bowel cancer screening with their general practitioner. The main goals of this project were to design, implement and evaluate measures to increase the uptake of the bowel cancer screening programme in an academic outpatient primary care practice. We aimed to answer two questions: How can we increase CRC screening uptake? Which screening test (FIT or colonoscopy) is used more often? The success of the different measures was assessed by the increase in the number of patients included in the cantonal programme, a proxy for increased screening uptake.

Measurement

Our main outcome measure was the proportion of eligible patients enrolled in the cancer screening e-platform by each group of residents. The medical residents in our outpatient clinic are separated into four groups, each supervised by an attending physician. Our clinic’s electronic health record does not allow for easy identification of patients eligible for screening or the tracking of quality metrics such as screening uptake. We therefore relied on the number of inclusions in the cantonal programme made by physicians via the programme e-platform as a proxy for increased screening uptake. We did not verify actual screening completion. On entry into the programme, patients could be registered for screening with FIT or colonoscopy, for temporary exclusion (already up to date with screening or short-term illness) or for permanent exclusion (identified as high risk and not screening eligible, or meeting other definitive exclusion criteria). We checked the number of inclusions on the platform at the end of each month, stratified by group, to compare intervention with control groups before and after the intervention was implemented. We used billing reports from August 2014 to August 2015 to identify patients who had attended at least one primary care visit and were born in the years eligible for inclusion (ages 50–69 years). We expected the total number of eligible patients seen by each group of residents to remain stable for the duration of the study.
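
As a rough illustration of how this proxy measure can be assembled, the sketch below (Python/pandas) derives the eligible panel from a billing export and the per-group proportion included from an e-platform export. The column names (patient_id, birth_year, resident_group) and the single reference year are assumptions made for illustration; the actual extracts were processed partly by hand and are not published in this form.

```python
import pandas as pd

REFERENCE_YEAR = 2015  # assumed reference year for the 50-69 year age window

def eligible_patients(billing: pd.DataFrame) -> pd.DataFrame:
    """Patients with at least one primary care visit, born in screening-eligible years."""
    age = REFERENCE_YEAR - billing["birth_year"]
    return billing.loc[age.between(50, 69)].drop_duplicates("patient_id")

def inclusion_proportion(eligible: pd.DataFrame, platform: pd.DataFrame) -> pd.Series:
    """Proportion of eligible patients entered on the e-platform, per resident group."""
    included = eligible["patient_id"].isin(platform["patient_id"])
    return included.groupby(eligible["resident_group"]).mean()
```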

Statistical analyses

Our primary outcome was the cumulative proportion of eligible patients from each group of residents enrolled in the cantonal programme. This proportion was compared between intervention and control groups at 8 months after intervening in the first intervention group (Red) and 4 months after intervening in the second group (Orange). We also tracked the number of inclusions per month, the proportion of patients included for each test type (FIT vs colonoscopy) and the proportion with a temporary exclusion from the programme (most often those already up to date with screening). We started monitoring e-platform inclusions for each group just after presenting information about the programme to all residents.
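
The monthly tracking can be sketched in the same way. The snippet below computes, per resident group, the cumulative proportion of eligible patients included by month; the eligible panel sizes are those reported in the Results, while the inclusion_date and resident_group columns of the e-platform export are assumptions for illustration.

```python
import pandas as pd

# Eligible panel sizes per resident group, as reported in the Results section
GROUP_SIZES = pd.Series({"Red": 232, "Green": 252, "Orange": 249, "Blue": 260})

def cumulative_uptake(platform: pd.DataFrame) -> pd.DataFrame:
    """Cumulative proportion of eligible patients included, per group and month."""
    counts = (platform
              .assign(month=platform["inclusion_date"].dt.to_period("M"))  # inclusion_date must be datetime
              .groupby(["resident_group", "month"])
              .size()                      # inclusions per group and month
              .unstack(fill_value=0)       # groups as rows, months as columns
              .sort_index(axis=1))
    return counts.cumsum(axis=1).div(GROUP_SIZES, axis=0)
```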

Design

This was a non-randomised intervention with two intervention groups and two parallel control groups of residents, using sequential Plan-Do-Study-Act (PDSA) cycles. Our project ran from September 2015 to August 2016. We focused on four groups of medical residents (Red, Orange, Green and Blue), each with seven medical residents in general internal medicine and around 250 patients in the target age range for CRC screening. These groups are used for the supervision of residents and meet weekly for teaching and problem solving.

Baseline intervention in all groups

Our baseline intervention was to inform all residents about the new cantonal screening programme via the weekly morning report and weekly conferences with attending physicians. We also gave tutorials on, and access to, the electronic platform for including new patients in the screening programme, and placed patient information sheets in every consultation room. All materials were available to residents on our outpatient clinic website.

Intervention groups

We implemented and tested a multidimensional, data-driven, sequential quality improvement intervention based on the PDSA method. We intervened in two groups of residents (Red and Orange), while the other two groups served as controls (Blue and Green).

Strategy

We used three PDSA improvement cycles and collected outcome data continuously throughout to assess the results of each cycle.

PDSA cycle 1

Our first cycle was based on the intervention delivered in all groups in September 2015, which was to provide all residents with information about the programme, patient materials and access to the online platform. We then measured uptake and saw that it was low, at <5% in all groups. We asked residents in the Red group about possible interventions and their feasibility in practice. From all the residents’ proposals, we chose a single action that fit our setting, was not time-consuming and did not require funding. We also showed the results of the baseline intervention to the clinic’s senior leadership and discussed possible interventions with them to get their support and buy-in.

PDSA cycle 2

For the second PDSA cycle, beginning in December 2015, we gave each medical resident a report at the end of each month listing their patients eligible for screening and the proportion that had been included in the cantonal programme. Our assumption was that increasing awareness of the proportion of patients screened by each resident would motivate them to take the necessary organisational steps and change their prescribing behaviour, improving their future performance. We were careful to keep the information private, and residents were not given a comparison with their peers. We also gave attending physicians reports for their own patients. We handed out the reports in person during weekly meetings with all available residents and supervising physicians, when they also had an opportunity to discuss the reports with their peers. We did not compare the proportion of patients already included in the programme between residents, nor did we say what proportion would be desirable.
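
For illustration only, a per-resident report of this kind could be generated from the same two extracts as in the sketch below; the resident_id column assigning patients to residents is an assumption, and in the project the reports were in fact prepared manually.

```python
import pandas as pd

def resident_report(eligible: pd.DataFrame, platform: pd.DataFrame,
                    resident_id: str) -> pd.DataFrame:
    """One resident's eligible patients, flagged as included in the programme or not."""
    panel = eligible.loc[eligible["resident_id"] == resident_id].copy()
    panel["included"] = panel["patient_id"].isin(platform["patient_id"])
    share = panel["included"].mean() if len(panel) else 0.0
    print(f"Resident {resident_id}: {share:.0%} of eligible patients included to date")
    return panel.sort_values("included")  # not-yet-included patients listed first
```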

The response after the second round of reports was generally positive, both in the proportion of patients included in the programme (which increased sharply in the Red group) and in the feedback from residents. Some residents felt ashamed of their low proportion of patients included, but said that it motivated them. Residents felt that discrepancies between the reports and their current panel of patients were minor, despite the lists being generated from patients seen the year prior.

PDSA cycle 3

After the success of the individual reports in the Red group, we wanted to see whether the same intervention could work in a second group that had not participated in its development. Given the positive feedback from the Red group, we made no changes and delivered screening uptake reports to each resident in the Orange group. We gave the individual uptake reports in May and June 2016 and followed daily inclusions on the e-platform over the following weeks. The uptake reports were again well received.

Results

The expected eligible population (patients aged 50–69 years seen for primary care the year prior) for the screening programme was 993 patients, divided almost equally among the four groups: Red 232, Green 252, Orange 249 and Blue 260. A total of 168 patients (17% of those eligible) were included in the screening programme during the study period. The difference between the two groups where the intervention was implemented and the other two was significant and temporally related to the interventions in those groups (figure 1).

Figure 1

Cumulative proportion of eligible patients included in the cantonal programme. PDSA cycles 1 and 2 were with the Red group, PDSA cycle 3 with the Orange group. PDSA, Plan-Do-Study-Act.

Uptake in the first intervention group (Red) increased from 5 to 20 inclusions between December and the end of January, meaning that the proportion of eligible patients included from the Red group rose from 2.2% to 8.6% in the month following the intervention. At the end of April, 4 months later and after two rounds of individual uptake reports (end of December 2015 and January 2016), the Red group had included a total of 44 patients (19% of those eligible). After 8 months of follow-up, 6 months after the intervention, 58 patients had been included (25% of those eligible).

We began two rounds of individual report transmission to medical residents in the Orange group in May 2016, labelled as PDSA 3 (figure 1). Inclusions in the Orange group increased by 88% after the first round, from 17 (7% of those eligible) to 32 (13% of those eligible) 1 month later (figure 2).

Figure 2

Number of patients included each month between September 2015 and August 2016, stratified by group of residents. Timing of PDSA cycles is marked with red arrows. Interventions were in the Red (PDSA cycles 1 and 2) and Orange (PDSA cycle 3) groups. PDSA, Plan-Do-Study-Act.

As a comparison, we measured the number of inclusions in the two control groups (Blue and Green), where the intervention of giving feedback about individual uptake proportions was not implemented. After 1 year of follow-up, the screening uptake proportion was 13% in the Blue group and 10% in the Green group (figure 2). Comparing intervention with control groups using a χ2 test, 109 of 481 eligible patients were included in the intervention groups and 59 of 512 in the control groups (p<0.001).
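
The reported comparison can be reproduced from the published counts (109 of 481 in the intervention groups vs 59 of 512 in the control groups) with a standard χ2 test of independence, for example in Python with SciPy; the software actually used for the analysis is not stated in the report.

```python
from scipy.stats import chi2_contingency

table = [[109, 481 - 109],   # intervention groups: included, not included
         [59, 512 - 59]]     # control groups: included, not included
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p < 0.001, consistent with the report
```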

By type of inclusion, 46 patients were included for FIT and 73 for colonoscopy. A further 49 patients in the 50–69 years age range were entered in the programme as ‘not eligible’: 18 with definitive exclusion criteria (high risk of CRC) and 31 with a temporary exclusion (colonoscopy in the last 5 years, concomitant disease or refusal).

Lessons and limitations

We learnt two primary lessons from this project: how to track a quality indicator for routine prevention in our clinic, and that individual performance feedback increased the number of inclusions in our local screening programme. The use of billing reports to generate patient lists for each resident, together with the number of patients included in the cantonal programme, provided a performance indicator that could be easily updated monthly. We considered which interventions from the literature would be suitable in a Swiss university hospital context. Publicly ranking the performance of physicians, as has been reported internationally,7 is likely to be seen as too competitive in Switzerland, particularly among physicians in training. For this reason, we tested individual anonymised performance reports rather than public lists of best performers. Physician reminders in other settings have resulted in only 2%–5% increases in CRC screening completion,8 9 though a meta-analysis showed a 13% increase when looking at all preventive services.10 We observed an 11 percentage point increase in the proportion of eligible patients included in the programme. The intervention was well accepted by the medical residents. It would appear that there is a place for this type of intervention in our Swiss academic environment.

Limitations to our approach were that it required manual creation of performance reports and that our outcome variable measured physician inclusion using the e-platform and not actual screening participation. As our electronic health record does not allow for the generation of lists of patients needing preventive services, we were obliged to manually generate the performance reports. While this worked when we had a motivated resident participating in the project, it will be difficult to sustain going forward. Future projects could verify that the majority of patients included in the programme remain up to date with screening.

Conclusion

Our intervention of giving individual feedback to medical residents about their screening uptake resulted in 23% of eligible patients being included, as opposed to 12% in the control groups (p<0.001). We conclude that giving medical residents feedback in the form of individual uptake reports can be a good tool for self-awareness and can help increase uptake of a cancer screening programme. Future projects could include developing automated, computer-generated performance reports and tracking actual screening completion.

References

  1.
  2.
  3.
  4.
  5.
  6.
  7.
  8.
  9.
  10.
