Abstract
Introduction
Quality improvement collaboratives (QICs) are a common approach to facilitate practice change and improve care delivery. Attention to QIC implementation processes and outcomes can inform best practices for designing and delivering collaborative content. In partnership with a clinically integrated network, we evaluated implementation outcomes for a virtual QIC with independent primary care practices delivered during COVID-19.
Methods
We conducted a longitudinal case study evaluation of a virtual QIC in which practices participated in bimonthly online meetings and monthly tailored QI coaching sessions from July 2020 to June 2021. Implementation outcomes included: (1) level of engagement (meeting attendance and poll questions), (2) QI capacity (assessments completed by QI coaches), (3) use of QI tools (plan-do-check-act (PDCA) cycles started and completed) and (4) participant perceptions of acceptability (interviews and surveys).
Results
Seven clinics from five primary care practices participated in the virtual QIC. Of the seven sites, five were community health centres, three were in rural counties and clinic size ranged from 1 to 7 physicians. For engagement, all practices had at least one member attend all online QIC meetings and most (9/11 (82%)) poll respondents reported meeting with their QI coach at least once per month. For QI capacity, practice-level scores showed improvements in foundational, intermediate and advanced QI work. For QI tools used, 26 PDCA cycles were initiated with 9 completed. Most (10/11 (91%)) survey respondents were satisfied with their virtual QIC experience. Twelve interviews revealed additional themes such as challenges in obtaining real-time data and working with multiple electronic medical record systems.
Discussion
A virtual QIC conducted with independent primary care practices during COVID-19 resulted in high participation and satisfaction. QI capacity and use of QI tools increased over 1 year. These implementation outcomes suggest that virtual QICs may be an attractive alternative to engage independent practices in QI work.
- Collaborative, breakthrough groups
- PRIMARY CARE
- Implementation science
- Evaluation methodology
- Quality improvement
Data availability statement
Non-proprietary data are available on reasonable request.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
In the literature, quality improvement collaboratives (QICs) have been shown to be effective; however, less attention has been paid to implementation outcomes and virtual delivery in the COVID-19 era.
WHAT THIS STUDY ADDS
Our QIC evaluation was conducted within a clinically integrated network (CIN) using data routinely collected by the network. We examined engagement, QI capacity, use of QI tools and measures, and satisfaction with the web-based format.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
For research, our results contribute to the growing emphasis on evaluating implementation outcomes in addition to conventional quality of care and patient outcomes. For practice, our case study demonstrates that a virtual QIC format is an acceptable and useful alternative to an in-person QIC format. Adaptations can be made for independent practices within a CIN that result in a successful virtual QIC.
Introduction
Quality improvement collaboratives (QICs) facilitate practice change to improve care delivery in healthcare settings. QICs are structured learning initiatives with multiple practices where interdisciplinary QI teams provide: ongoing training and feedback; a model for improvement with measurable targets; guidance on small tests of change; structured activities; and opportunities for cross-site learning and communication.1 2 For QICs involving primary care practices, common performance metrics include chronic health conditions (eg, diabetes, depression)1–3 and preventive services (eg, immunisations, cancer screening).4–7 Although high-quality evidence (ie, from randomised clinical trials) for QIC effectiveness on care delivery and patient outcomes remains nascent,1 8 QICs remain a popular improvement approach.
Although virtual QIC models were explored before COVID-19,9–11 in-person QIC models have been the focus of most prior evaluations.1 8 12 QIC evaluations are expanding beyond quality-of-care measures to include explanations of ‘how’ and ‘why’ QICs achieve results. Systematic reviews of mechanisms of change13 have found that QIC activities can improve healthcare professionals’ capacity to develop improvement processes and modify clinical workflows. However, feedback mechanisms, leadership engagement and access to best practice examples are needed.1 8 12 Strategies addressing readiness, education, support, monitoring and communication have also been used in QICs.14 15 Rohweder et al16 used an implementation science-informed, mixed-methods approach to examine cancer screening outcomes in a QIC for community health centres. They documented QIC engagement and implementation of QI tools, along with higher rates of colorectal cancer screening,16 illustrating how concepts from implementation science can deepen the understanding of what makes QICs effective.
Due to the global pandemic, healthcare organisations that sponsor QICs pivoted to entirely web-based platforms—a rapid change in delivery mode that warrants examination. We assessed process and implementation outcomes of a virtual QIC with independent primary care practices in a clinically integrated network (CIN) conducted during COVID-19.
Methods
Recruitment of independent practices
The CIN, located in the southeastern USA, includes more than 7000 providers and uses an advanced payment model where practices are paid based on total cost of care and quality performance. The CIN engages in QI activities with both owned/managed and independent practices to achieve better care and lower costs. For this virtual QIC, independent primary care practices were recruited by the CIN based on past performance metrics and perceived ability to engage with the curriculum. These practices had a basic QI infrastructure and staff who were trained in QI.
Evaluation model
We used a longitudinal, single case-study design17—with the CIN as the main unit of analysis and participating primary care practices serving as embedded units in the case—to examine processes and changes in implementation outcomes for the virtual QIC over time. A case study design can answer questions such as ‘how was the programme successful?’ within a real-world context when the intervention cannot be manipulated. Strengths of case studies are prospective data collection using multiple methods and data sources to triangulate findings.17
Our evaluation framework was adapted from a published evaluation of an in-person QIC16 that was informed by implementation science18 and includes Inputs, Intervention and Implementation Outcomes (online supplemental appendix 1). The Inputs are the CIN’s clinical guidelines for target performance measures, a measures dashboard and the CIN’s QI infrastructure and personnel. The Intervention is the virtual QIC curriculum and tailored monthly coaching with a QI coach. Implementation Outcomes are engagement, capacity-building, use of QI tools/methods and participant perceptions of acceptability and satisfaction. In addition, we used the SQUIRE (Standards for QUality Improvement Reporting Excellence) reporting guidelines19 to describe our methods and findings (online supplemental appendix 10).
Inputs
Practices received the CIN’s clinical guidelines for target measures developed by insurers; areas included prevention, chronic disease management and cost/utilisation. The five performance goals are detailed in online supplemental appendix 2 (breast cancer and colorectal cancer screening, blood pressure and haemoglobin A1c (HbA1c) control, and emergency department (ED) visits). When independent practices join the CIN, they are instructed on how to collect and report data for health insurance reimbursement. The Value Care Dashboard (dashboard) is an interactive portal that displays progress on key performance metrics populated by claims data. Physician relations associates are assigned to each practice and serve as relationship managers and a single point of contact.
Virtual QIC (the ‘intervention’)
The first QIC with independent practices was scheduled to launch in-person in July 2020, but switched entirely to virtual due to COVID-19 and ran until June 2021. Primary care practices joining the QIC agreed to attend online group learning sessions every other month. The sessions used peer-to-peer learning about QI principles, recommended performance targets and how to meet them, and recognition for their QI work (online supplemental appendix 3).
Practices also committed to participate in virtual clinic-specific coaching sessions at least monthly for approximately 1–2 hours with a QI coach. QI coaches are employed by the CIN to work with practice representatives participating in the QIC such as practice managers, medical directors and QI personnel. QI coaches support practices in applying QI methods including Six Sigma Methodology, plan-do-check-act (PDCA) cycles and Total Quality Management. The QI tools emphasise minimising variation across clinic processes and measures and using the practice’s own data to make decisions. First, the QI coach assisted practices in selecting priority value care measures and extracting baseline data from the electronic medical record (EMR) to identify gaps. The QI coach then helped develop and conduct context-specific PDCA cycles at each practice.
Implementation outcomes
Implementation outcomes were organised into four domains: engagement, capacity-building, use of QI tools/methods and participant perceptions of acceptability and satisfaction. Data were collected at the provider, clinic and practice level at different time points. The range of data sources and varied data collection schedule reflect the real-life monitoring and evaluation activities typically undertaken by the CIN. Except for interviews, all data used in the evaluation were collected by CIN staff. Table 1 displays the four implementation outcomes and corresponding sources of data for measurement.
Data collection and analysis
Attendance at QIC meetings
Attendance at six meetings over 1 year was tracked by Webex software. We calculated the range and mean number of attendees across meetings.
Poll questions
Practice members completed poll questions administered quarterly. The research team assisted in developing brief multiple-choice survey items to capture real-time data on engagement, capacity-building and implementation (online supplemental appendix 4). We calculated the number of participants responding to each question and percentages for each response category.
Practice transformation assessment
The practice transformation assessment (PTA) is a CIN-developed instrument (survey) used to measure progress in capacity-building and QI activities. It is completed by the project champion at each practice. Surveys were administered at months 1, 6 and 12 of the QIC. In its original form, survey items were categorised as foundational, intermediate or advanced QI work (online supplemental appendix 5). To tailor the tool for our evaluation model, we used a consensus process to assign survey items to one of three implementation outcomes: engagement, QI capacity and implementing QI methods/tools. An additional category, practice improvements, was created to capture relevant items. A few cross-cutting items were assigned to more than one outcome. Each item received a 0 for not yet completed and 1 for completed. For one practice, two clinic sites reported as a single entity resulting in a total sample size of 6. If either clinic had finished the task, the PTA item was scored as completed. The percentage of clinics with a completed item at each time point was calculated. We averaged scores across the four domains. When domains were combined for an overall score, duplicate questions were omitted.
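The PTA scoring described above (binary items assigned to domains, per cent of clinics completing each item, domain averages, and duplicate cross-cutting items counted once in the overall score) can be sketched in a few lines. This is a minimal illustration only: the item names, domain assignments and responses below are hypothetical, not the CIN's actual instrument.

```python
from statistics import mean

# Hypothetical PTA data: each clinic's item responses at one time point,
# scored 0 (not yet completed) or 1 (completed), as described in the text.
domains = {
    "engagement": ["attends_meetings", "champion_named"],
    "qi_capacity": ["staff_trained", "champion_named"],  # cross-cutting item in two domains
    "qi_tools": ["pdca_started"],
    "practice_improvements": ["workflow_changed"],
}

pta_month12 = {
    "clinic_A": {"attends_meetings": 1, "champion_named": 1, "staff_trained": 1,
                 "pdca_started": 1, "workflow_changed": 0},
    "clinic_B": {"attends_meetings": 1, "champion_named": 0, "staff_trained": 1,
                 "pdca_started": 0, "workflow_changed": 1},
}

def domain_score(responses, items):
    """Per cent of clinics completing each item, averaged over the domain's items."""
    return mean(
        100 * sum(clinic[item] for clinic in responses.values()) / len(responses)
        for item in items
    )

def overall_score(responses, domains):
    """Overall score with duplicate (cross-cutting) items counted only once."""
    unique_items = {item for items in domains.values() for item in items}
    return domain_score(responses, sorted(unique_items))

scores = {name: domain_score(pta_month12, items) for name, items in domains.items()}
```

Comparing `scores` across the month 1, 6 and 12 administrations would yield the change-from-baseline percentages reported in the results.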
QI work product tracker and PDCA cycles
CIN staff had QI work product trackers to monitor use of QI tools. One example was a series of PDCA templates in which each practice described the steps taken to accomplish a specific project goal. For analysis, we counted the total number of PDCA cycles initiated and completed per clinic. We also determined the focus area of each PDCA. For one practice, two clinic sites reported as a single entity resulting in a sample size of 6.
Satisfaction survey
The CIN delivered a web-based, 10-item satisfaction survey with multiple choice and open-ended questions at the conclusion of the QIC (online supplemental appendix 6). One to two team members from each clinic site completed the survey. Percentages were calculated for each item and open-ended responses were summarised thematically. Deidentified results were presented and discussed with QIC participants at the final meeting.
Interviews
Interviews were used to complement the quantitative data on implementation outcomes and determine perceptions of acceptability and satisfaction by QIC practice participants. The research team conducted 12 in-depth interviews with 3 types of key informants: QI coaches, physician relation associates and clinic team members. We tailored interview guides for each respondent type (online supplemental appendix 7). Prior to the interview, participants received an information sheet describing the voluntary nature of the data collection and confidentiality procedures. Interviews were scheduled at the end of the collaborative and conducted virtually. Ten of 12 interviews were audiorecorded and transcribed verbatim; 2 interviewees declined recording but allowed notetaking.
Three trained research team members conducted a rapid qualitative analysis20–23 using MS Word for coding and Excel for creating matrices. Codebooks were created with a priori codes for the implementation outcomes specified in our evaluation framework (online supplemental appendix 8). The team practised coding two transcripts, resolved discrepancies and individually coded the remaining transcripts. Summaries for each code included the codebook definition, illustrative quotations and the number of interviewees who mentioned the code. Summary sentences were discussed among the team and themes were identified by consensus. Themes were shared with the rest of the research team and key CIN members for verification.
Patient and public involvement
It was not appropriate to involve patients or the public in the design, conduct, reporting or dissemination plans of our evaluation.
Results
Characteristics of participating practices and clinics
Eight practices expressed interest and five enrolled (63%) for a total of seven clinic sites. Of the seven sites, five (71%) were Federally Qualified Health Centers and three (43%) were located in rural counties. Clinic size ranged from 1 to 7 MDs (table 2).
Results by implementation outcome
Engagement
Engagement was evaluated by attendance at the QIC meetings, poll participation, PTA scores and interview results.
Quantitative results
All practices had at least one member attend all virtual QIC meetings. Polls were conducted during virtual meetings. Between 3 and 6 questions were asked during each poll with 5–7 respondents per poll. During the first poll, 57% (4/7) of respondents felt their clinic leadership supported participation in the QIC and the majority, 71% (5/7), felt the right clinic staff were attending the meetings.
Engaging in QI work and frequency of contact with a QI coach improved over time. Four months into the QIC, 33% (2/6) felt they did not have any capacity to dedicate to improvement work outside the meetings. Another 33% (2/6) felt they could set aside some time to devote to QI work, and the remaining 33% (2/6) were able to dedicate time each month and meet with their QI Coach. One respondent did not answer. Eight months into the QIC, 71% (5/7) of respondents reported communicating with their QI Coach at least 1–2 times per month, and 29% (2/7) reported communicating three or more times per month. Engagement scores as measured by the PTA increased by 20% from baseline to follow-up (online supplemental appendix 9).
Qualitative results
Interviews were conducted with 12 individuals: 6 members from the independent primary care clinics (1 respondent represented both clinics from 1 practice), 4 QI coaches and 2 physician relations associates. Clinic members had worked at their practice from 3 to 12 years. Most respondents (83%, 10/12) had previous QI experience. Clinic participants initially joined the QIC for various reasons, including not having another source of QI support, desiring more learning opportunities, wanting to better serve their patients and building on existing relationships with CIN personnel. Two-thirds of clinic respondents (4/6) and 33% of CIN respondents (2/6) reported that clinic staff and provider time were limited, creating difficulties in scheduling, attending meetings, documenting QI work and analysing data. However, most felt that meetings via webinar enabled clinic representatives to attend given their time limitations. One CIN interviewee felt that building relationships (ie, trust building) was more difficult in a virtual environment (table 3).
Capacity building
Capacity building was evaluated by results from poll questions, the PTA and interviews.
Quantitative results
During the second poll 6 months into the QIC, participants were asked about familiarity with current clinical guidelines and confidence in baseline EMR data related to the five QIC measures. All participants indicated they were familiar with the guidelines, with 60% (3/5) of respondents being ‘somewhat familiar’ and 40% (2/5) being ‘very familiar.’ Confidence around data accuracy varied more, with 60% (3/5) reporting their data were ‘somewhat’ or ‘very accurate,’ and 20% (1/5) reporting their data were ‘inaccurate’. One respondent did not answer. During the third poll 8 months into the QIC, all respondents felt the claims data provided by the CIN staff had been ‘somewhat’ (57%, 4/7) or ‘very’ (43%, 3/7) helpful in improving their QIC measures. Capacity building scores as measured by the PTA increased by 26% from baseline to follow-up (online supplemental appendix 9).
Qualitative results
In terms of pre-collaborative QI capacity, nearly all (5/6) respondents from clinics reported basic or moderate levels of QI work in their practice prior to joining the QIC. QI coaches and physician relations associates described the practices’ prior QI experience as ranging from very informal to very formal, independent of the QIC. Thus, QI coaches tailored recommendations for practices based on QI capacity. For clinics with lower QI capacity, the QI coach did more heavy lifting and became part of the internal QI team. For clinics with higher capacity, the coach acted as a consultant. One QI coach felt their role was to support clinic staff to recognise their own capabilities and improve their perceptions of self-efficacy. All clinic members (6/6) reported benefits from participating in the QIC, including those with prior QI experience. For example, three interviewees with significant QI expertise still felt the QIC provided a deeper understanding of QI concepts and resources.
Use of QI tools
Implementation of QI methods and tools was evaluated by poll participation, PTA scores, PDCA cycles, and interview results.
Quantitative results
Polls conducted later in the QIC indicated more consistent use of QI tools. Eight months into the QIC, most clinics had started implementing PDCA cycles (86%, 6/7). Other QI tools applied by clinics included key driver diagrams (1/7), standard work (2/7), A3 thinking (2/7) and visual improvement boards (2/7). Ten months into the QIC, 60% (4/6) indicated that PDCA cycles had resulted in at least 1–2 actual practice changes in their clinic (online supplemental appendix 9).
Monthly QI coaching sessions were attended by 1–3 providers/staff per clinic. QI coaches monitored each practice’s QI progress using PDCA trackers developed by the CIN. Across the 7 clinics, 26 PDCA cycles were initiated and 35% (9/26) were completed by the end of the QIC (table 2). The range of PDCA cycles initiated per clinic was 0–9, with one practice reporting on two clinics together. For topics, 46% (12/26) of PDCA cycles focused on diabetes and hypertension, and 15% (4/26) on cancer screening (table 2). Use of QI Tools as measured by the PTA increased from 31% at baseline to 79% at follow-up (online supplemental appendix 9).
Qualitative results
When asked about which QI tools were used, half of clinic respondents (3/6) and most (5/6) QI coaches and physician relations associates mentioned PDCA cycles. Other tools mentioned by clinic members included: gaps in a box (investigation of open care gaps per patient), Value Quality Measures Pocket Guide (measure definitions, criteria for closing gaps and EMR documentation), QI reference books, dashboards with summary data, fishbone diagrams and root-cause analysis. Tools mentioned by QI coaches and physician relations associates included: the PTA, driver diagrams, experiment planners, standard work documents, visual management boards and current vs target state process maps.
Five of the six clinic interviewees reported that they mainly used their practice’s internal EMR to generate data for QI, rather than the CIN’s dashboard. In contrast, QI coaches and physician relations associates reported relying on claims data in the dashboard because the practices all used different EMRs.
Half of the clinic interviewees and 4/6 CIN respondents mentioned that unavailability of real-time dashboard data made QI work difficult. Clinic members felt discouraged that gap closures were not reflected in the dashboard. One participant said the delay is likely due to insurance not processing measures quickly enough. Additionally, delays in receiving data made it difficult to use dashboard information to improve treatment for patients with care gaps.
All clinic interviewees and one QI coach discussed how only a subset of a practice’s patient population was reflected in the dashboard. In the CIN’s payment model, independent practices are paid for caring for patients ‘attributed’ to them through insurance claims. Clinic interviewees expressed a desire to use QI tools to improve care for all patients. One clinic respondent noted their clinic serves many uninsured patients (who are not ‘attributed’ to the CIN) but could benefit from the practice’s QI work.
Participant perceptions
Participant perceptions, including acceptability and satisfaction, were assessed using survey and interview results.
Quantitative results
Eleven individuals from the seven clinic sites responded to the post-QIC survey. Participants had both clinical and administrative roles and included three providers, two clinical support staff, two practice managers, two quality leaders and two other roles. Eighty-two per cent (9/11) of respondents reported meeting with their QI coach at least once per month. For the satisfaction items, 91% of participants (10/11) were satisfied or very satisfied with the overall QIC experience, the QIC bimonthly meetings and the support received. Eighty-two per cent (9/11) were satisfied or very satisfied with the monthly QI coaching sessions. Consistent with interview results, the dashboard received the lowest level of satisfaction, with 64% (7/11) of respondents being satisfied/very satisfied with the data.
Qualitative results
Acceptability and satisfaction with the virtual QIC environment
Two-thirds (4/6) of the clinic interviewees reported liking the virtual QIC. Meetings via webinar enabled them to attend given their time limitations. This sentiment was also reflected among half of the CIN interviewees (3/6). CIN participants shared that it was easier to collectively view documents on Webex than in person. Additionally, physical space varied across practices and sometimes there was not enough room to meet in person. CIN interviewees (n=6) reported that committing 1–2 hours per month for a virtual meeting was acceptable.
Misalignment of goals during COVID-19
Most clinic interviewees (4/6) noted that some performance measures were a mismatch for their patient population or did not take COVID-19 into account. For example, one respondent explained that their practice was more concerned with housing and COVID-19 testing and vaccinations than recording accurate blood pressure. Another clinic participant described their patients’ challenges related to social determinants of health, and the difficulty those patients had controlling their diabetes at the level recommended by the CIN.
Discussion
During COVID-19, a virtual QIC was conducted with five independent primary care practices. Representatives from seven clinic sites participated in bimonthly meetings for 12 months and received clinic-specific coaching on closing gaps in quality measures. Similar to other virtual QICs,15 24–26 participants found the web-based modality acceptable and useful.
Practices reported steady increases in engagement, QI capacity and use of QI tools. Improvements in QI capacity and use of tools is consistent with a systematic review showing that QIC activities improve healthcare professionals’ knowledge, problem-solving skills and teamwork towards developing processes for improvement.12
Results from surveys and interviews revealed high satisfaction and engagement with the virtual QIC, partly due to the CIN team making real-time changes to the structure and content based on input from participating practices. Six out of seven clinics elected to continue beyond the original 12-month commitment, suggesting perceived value of the QIC. Even with the last-minute pivot to virtual delivery, practices demonstrated improvements that were consistent with systematic reviews of in-person QIC evaluations.1 8 12 13 QI support from coaches and bimonthly online meetings with all practices were integral to the success of the virtual QIC with independent practices, consistent with in-person QIC findings.1 8 12
Primary care practices noted that an area for improvement was the dashboard which has performance measures for attributed patients. During the virtual QIC, the CIN team adapted an existing dashboard used with healthcare system-owned practices. Modifying it for different EMRs used by independent practices was challenging for the CIN, and practices expressed concerns about data quality and timeliness. Interview participants felt they had varying levels of confidence and skills in using EMR data for practice-level decision-making and wanted to focus on this area in future QICs. An additional issue was that only a small proportion of their patients were covered by value-based payment contracts reflected in the dashboard. The independent clinics tended to be smaller and served a greater proportion of uninsured and high-need patients. Clinic interviewees expressed a strong desire for the QIC to include strategies for tracking quality measures for all patients instead of only ‘attributed’ patients.
Limitations
Single case studies are not designed for generalisability, but for in-depth examination of a programme (ie, a virtual QIC).17 Consistent with this design, we used multiple sources of data collected prospectively, building on previous evaluations of in-person QICs. We combined data already being collected by the CIN with interviews that allowed participants to confidentially share their opinions.
Selection bias of participating practices is notable as the CIN recruited practices with capacity to participate during the early stages of the pandemic. This makes sense given that the CIN needed to pilot the virtual model with practices capable of engaging in the QIC and enacting improvements. However, there was variation in practice characteristics such as rurality and practice size. Future cohorts will include practices with more diverse QI capacity and practice characteristics. Developing more rigorous selection criteria is also a priority for subsequent virtual QICs. Finally, the PTA survey was developed by the CIN and could benefit from psychometric testing.
Given the case study approach, we were unable to examine whether confounding variables had an impact on findings. Additionally, we were unable to evaluate changes in value care measures because of the pandemic; comparisons to historical data would have questionable validity. Future work should examine potential confounding variables and impact on acceptability and practice and patient outcomes. Attention to both QIC implementation processes and outcomes may inform best practices for designing and delivering collaborative content.
Despite limitations, our evaluation demonstrates how a partnership between a CIN and implementation science researchers can be mutually beneficial. Researchers strengthened the evaluation approach for the QIC while the CIN shared context-specific details and enabled researchers to collect data prospectively. With the emergence of learning healthcare systems, there is greater opportunity for implementation science researchers to collaborate with quality improvement practitioners as active team members.27–29 Leeman et al describe the challenges of aligning implementation science and improvement practice due to different terminology and sources of knowledge, and limited evidence of effectiveness of specific QI tools and methods.27 To move beyond these challenges, collaborations such as ours will strengthen research and practice partnerships, foster local ownership of implementation, generate practice-based evidence, tailor implementation strategies to the local context and build practice-level capacity.
Conclusion
A virtual QIC conducted with independent primary care practices in a CIN during COVID-19 resulted in consistent participation, and improved QI capacity and implementation of QI methods and tools. Virtual QICs may be an attractive alternative beyond the pandemic as they require less time and fewer resources for participating practices. However, more effort may be needed on the part of CINs and coaches to create virtual QICs. This model has proven to be sustainable for the CIN, which has incorporated the improvement recommendations from participants and is continuing to deliver QICs virtually.
Ethics statements
Patient consent for publication
Ethics approval
This study involves human participants. The protocol (IRB #21-0766) was reviewed by the Office of Human Research Ethics, The University of North Carolina at Chapel Hill, and designated as Not Human Subjects Research. Participants gave informed consent to participate in the study before taking part.
Acknowledgments
The authors would like to thank Eileen Ciesco, Crystal Hoffman, Helen Rinaldi, Jordan Rapp and Dr. Amy Shaheen for collaborating with our implementation science team. They generously shared their time and expertise and allowed us to join them on their continuous improvement journey. We thank the interview participants for sharing their perspectives on the QIC. We also thank MaryBeth Grewe for her advisory role in the early stages of data collection. Lastly, we appreciate the contributions of Dr Jennifer Leeman to initiating the partnership and her ongoing support of our work with the CIN.
Footnotes
Contributors CLR, AS and CMS conceived and designed the analysis; CLR and AY collected primary data; RB and CR contributed secondary data; AM, KM, CLR, AS and LC analysed the data; CLR, AS, KM, CMS and LC wrote the paper; CLR is responsible for the overall content and serves as guarantor.
Funding The study was funded by National Center for Advancing Translational Sciences (UL1TR002489).
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.