HealthcarePapers 15(Special Issue) March 2016: 80–89. doi:10.12927/hcpap.2016.24505
Commentary

Evaluating a Chronic Disease Management Improvement Collaboration: Lessons in Design and Implementation Fundamentals

Kaye Phillips, Claudia Amar and Keesa Elicksen-Jensen

Abstract

For the Canadian Foundation for Healthcare Improvement (CFHI), the Atlantic Healthcare Collaboration (AHC) was a pivotal opportunity to build upon its experience and expertise in delivering regional change management training and to apply and refine its evaluation and performance measurement approach. This paper reports on its evaluation principles and approach, as well as the lessons learned as CFHI diligently coordinated and worked with improvement project (IP) teams and a network of stakeholders to design and undertake a suite of evaluative activities. The evaluation generated evidence and learnings about various elements of chronic disease prevention and management (CDPM) improvement processes, individual and team capacity building and the role and value of CFHI in facilitating tailored learning activities and networking among teams, coaches and other AHC stakeholders.

Evidence demonstrating the value and impact of quality improvement collaboratives (QICs) on healthcare improvement and patient outcomes is slowly emerging. While early critics suggested that QICs lack solid evidence (Mittman 2004; Øvretveit et al. 2002) and demonstrate modest effects (Homer et al. 2005; Schouten et al. 2008), recent evidence indicates that QICs have the potential to live up to their promises of enabling the implementation of evidence-based practices for improved healthcare delivery experiences and outcomes. Increasingly, the black box of QICs is being systematically opened and unpacked through mixed methods approaches (Broer et al. 2010) and evaluation designs, which are leading to a better understanding of both the motivations for and core effective components of QICs (Bibby 2014; Knight et al. 2014; Lin et al. 2005; Nadeem et al. 2013; Pearson et al. 2005). Emerging literature is providing greater, albeit still limited, insight into the outcomes and value of QICs (Schouten et al. 2008) as the organizations that are designing, leading, programming and participating in QICs build evaluation capacity and balance the demands of traditional, resource-intensive research evaluation methods with integrative and real-time data collection approaches.

Between January and March 2013, a comprehensive, two-level AHC evaluation plan (Phillips et al. 2013) was designed and validated by an evaluation advisory committee consisting of CFHI staff, AHC coaches and faculty and five external Canadian experts in evaluation and chronic disease management. The evaluation objectives were to assess the IPs and teams' CDPM improvement processes and outcomes as well as CFHI's collaborative programming approach. The evaluation plan was framed around principles of developmental evaluation (Patton 2011), the Chronic Care Model (CCM) (Wagner et al. 2001) and lessons learned from past comparable CFHI programs (CFHI 2013). Developmental evaluation informs and supports innovative and adaptive development in complex dynamic environments. It brings to innovation and adaptation the processes of asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, program, product and/or organizational development with timely feedback. The AHC evaluation plan was designed to ensure that it did the following:

  • Aligned with CFHI's Improvement Approach (CFHI 2014) and the AHC's three objectives (see Verma et al. 2016)
  • Integrated a mixed methods approach (combining program evaluation and improvement science methods with quantitative and qualitative data)
  • Responded to the unique evaluation context, stages, needs and priorities of IP teams and their respective organizations
  • Encouraged capacity building and sustainability (through performance measurement and evaluation embedded in the curriculum and reporting to ensure continuous learning and to generate measurable results)

The context of the AHC, including similarities and differences among the teams as well as the needs and uses of the evaluation for various stakeholders, had an important influence on the design of the AHC evaluation plan. Although teams participating in the AHC were aligned and grounded by a common set of objectives, cross-regional CDPM priorities and CCM strategies, there were a number of regional and contextual differences. For example, teams addressed different types of diseases (ranging from chronic obstructive pulmonary disease [COPD] to diabetes, mental health and comorbidities) based on their organizational priorities and identified population needs. The IP teams also designed and implemented their initiatives within different organizational settings and entered the AHC with different levels of readiness, capacity and resources to undertake and measure team-based CDPM quality improvement initiatives. Furthermore, participating teams and organizations had various needs and uses for performance measurement and evaluation, such as the need for data to inform plan-do-study-act cycles, continuous quality improvement, evidence of process and outcome impacts, and reporting.

In addition to the AHC IP teams and CFHI, multiple other stakeholders were interested in the evaluation outcomes, including the AHC's executive steering committee, provincial departments of health, regional health authorities, the CFHI board of directors and other collaborative participants. Designing an evaluation plan for such a multifaceted regional CDPM QIC demanded an approach that attended to the nuances of a complex and interconnected network of cross-regional and interprofessional teams, as well as differentiated processes and outcomes, and met the needs and priorities of a variety of stakeholders.

These nuances meant that one standardized, cookie-cutter evaluation approach would not be sufficient to capture all of the important interests. Rather, an approach was designed to capture two evaluation levels. Level-1 evaluation centred on teams' reporting on performance measures specific to their improvement aims and strategies and focused on their IP processes and outcomes. As per CFHI's Improvement Approach (2014), this level included teams' progress in designing and implementing their initiatives; measurable changes in professional and organizational practices, and/or changes in patient and family practices and experiences; measurable improvements in the quality and cost-effectiveness of care; feasibility of IP spread and scale-up; and CDPM improvements. Level-2 evaluation centred on program evaluation of the AHC and CFHI's approach to collaborative improvement. It focused on CFHI's collaborative design and delivery approach and the achievement of the collaborative's objectives of strengthening skills and competencies of AHC team members in leading improvement; building an interprofessional and cross-regional CDPM improvement network; and establishing a valuable mentorship and coaching program.

Multiple strategies were used to undertake and integrate this two-level evaluation plan, including: i) ongoing performance measurement planning, implementing and reporting; ii) building measurement capacity within teams; iii) coaching and support for evaluation and measurement; iv) ongoing surveying to facilitate continuous learning about, and improvements to, the collaborative; and v) complementary networking and costing analyses. Combined, these strategies allowed for ongoing data collection and analysis to inform the IPs' and the AHC's ongoing progress and final results (Verma et al. 2016). As discussed below, the strategies also prompted important reflections and lessons about designing and implementing evaluations for improvement collaboratives, whether focused on CDPM or other priority topics.

Ongoing Performance Measurement Planning and Reporting

As part of the AHC, teams were required to develop their own IP measurement plans using a CFHI template (provided at the outset of the collaborative) and to submit progress reports at two points – December 2013 (with 100% team completion rate) and September 2014 (with 88% team completion rate). CFHI staff and coaches reviewed these documents and provided teams with written and verbal feedback, which was incorporated in the cycle of coaching calls with teams to help them overcome the challenges they experienced. Challenges faced by the IP teams primarily concerned how to identify measures and tools that aligned with their needs and priorities, as well as how to develop evaluation expertise to undertake the collection, analysis and dissemination of data. AHC teams were also required to submit their final IP reports in September 2014 and participated in a 120-day follow-up collaborative webinar in January 2015 to share their IP progress and results.

In the design phase of the evaluation, consideration was given to whether the evaluation plan and team reporting template struck the appropriate balance between collecting and using the right amount of data and avoiding placing an undue burden on human and financial resources (both for the teams and CFHI) to obtain these data. In order to balance the burden of data collection with its utility, much of the information collected served multiple purposes. This approach allowed a thorough document review as part of the final (summative) evaluation, supplemented by key informant interviews and ongoing surveying. Although this approach facilitated final IP reporting at the end of the collaborative in September 2014 during the summative evaluation, some interviewees identified a heavy workload as a challenge during the collaborative. In future, alignment among the curriculum, tools and reporting requirements will be essential to ensure that the workload and human resources required are manageable for participating IP teams, coaches and other staff who are administering and delivering the IP.

Building Team Measurement Capacity

Supporting IP teams in the development of their measurement capacities was a critical component of the AHC's evaluation approach and training. Evaluation and performance measurement were built into the training curriculum through workshop and webinar sessions (November 2012, May 2013, October 2013 and November 2013), measurement plan development and exercises (January and June 2013), team-coach measurement teleconferences (ongoing) and, in some instances, in-person meetings. Training topics ranged from designing a CDPM measurement framework and identifying indicators and instruments to monitor implementation and evaluate outcomes, to data collection and analysis, to capturing the cost of doing improvement.

Many of the AHC teams were not equipped with a lead member responsible for IP measurement at the outset of the collaboration and lacked the skills and time to effectively identify, collect, analyze and share the results of their IP initiatives. As Verma and colleagues noted (2016), while many of the IPs in this collaborative aimed to deliver more appropriate and efficient care for patients with chronic conditions, none tracked before–after changes in unit costs. As reported by one respondent in the final AHC evaluation survey, "I learned [enough] about evaluation to know … that I didn't know much about evaluation. What I've learned from this is that you have to bring on the right people at the outset" (Champagne et al. 2015). While CFHI strived to build measurement capacity within IP teams through curriculum and coaching support, it will be crucial to ensure that future IP teams are designed to include a dedicated staff member to undertake measurement activities. Going forward, the requirement for team members to lead measurement activities, including their roles and the level of effort required, ought to be clearly articulated in participating teams' IP evaluation plans.

Across organizations, resources and capacities for evaluation and measurement vary. Recognizing these capacities at the outset of an IP and ensuring that resources are available for the spectrum of evaluation and measurement training needs of participating teams are important collaborative design considerations that help ensure teams are adequately skilled at the outset to carry out performance measurement and evaluation activities. In future, establishing a measurement maturity scale and risk grid for each IP, to make sure teams' specific needs are addressed, would be one step in the right direction.

Coaching and Support for Evaluation and Measurement

Important aspects of the AHC were facilitating team coaching and building a peer-learning network for the coaches and mentors. IP teams were offered two coaches – an academic mentor and a seasoned executive serving as an improvement coach. Calls with these coaches/mentors were held monthly, starting midway through the collaboration, and included discussions of evaluation and measurement design and progress. Results of a survey of AHC coaches indicated that these calls built coaches' and mentors' confidence in reviewing and providing feedback on IP measurement plans; choosing proper performance indicators, information and processes to support evaluation; and determining appropriate data collection methodologies and instruments for the teams.

The final AHC evaluation report (Champagne et al. 2015) revealed that the double coaching model and support of academic and practical expertise were lauded by all interviewees. Despite universal praise for the coaching, there were suggestions that it could be strengthened by ensuring that thematic content expertise and team self-selection of coaches match the foci of the IPs. Such a process would help ensure better responses to the teams' specific measurement and data questions.

A network of faculty, including a core set of curriculum advisors and guest speakers, served as an additional evaluation and measurement resource for the teams. A CFHI lead evaluation program staff member was also embedded in the IP teams to provide continuity and support. CFHI staff time was dedicated to scheduling and participating in measurement coaching calls with teams, staying up to date on team progress and challenges, ensuring that the curriculum aligned with the teams' stages of IP development, sharing regular progress updates with CFHI staff and providing hands-on coaching to teams. In addition, CFHI's senior director of evaluation, education and performance improvement, as well as an evaluation analyst and an external evaluation advisor, committed time and support to the AHC throughout its duration – for example, designing the evaluation plans, visiting IP teams, overseeing evaluative/measurement activities, developing curriculum, designing and analyzing AHC workshop and related surveys, coaching teams and supporting the design and analysis of IP reports.

Ongoing Surveying

Ongoing surveying to facilitate continuous learning and improvements to the AHC's design and delivery occurred throughout the collaborative. Survey information was gathered from participants after each of the four in-person workshops. This information included team members' perceived learning needs and expectations, experience with the curriculum/coaching, perceived relevance of the content, intended application of their learning post-event, and perceived value of the curriculum/workshop and specific aspects of the network. A CFHI staff member (dedicated to the AHC) also initiated formal and informal check-ins with teams as a feedback mechanism to ensure they were receiving the support they required. Feedback was used by CFHI staff, faculty and coaches/mentors to develop subsequent AHC training activities. As reported by Verma and colleagues (2016), the preliminary results of the final AHC evaluation report, which triangulated the survey data with additional stakeholder interviews and key document analysis, indicated some learning and IP improvement outcomes. Having multiple data sources during the AHC allowed CFHI to undertake analysis using corroborated evidence. In future, collaboratives' surveys and associated analyses should report on participants' experience and their specific knowledge and competency gains relative to each workshop's learning objectives. This will be essential to demonstrating trends in individual and IP team capacity gains in evidence-based CDPM strategies and change management over time.
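To illustrate the kind of reporting this recommendation points to, the sketch below computes mean pre/post competency gains against a workshop's learning objectives. It is a minimal, hypothetical example in Python: the objectives, the 5-point self-rating scale and all scores are invented for illustration and are not AHC survey data.

    # A minimal, hypothetical sketch of reporting pre/post competency gains against a
    # workshop's learning objectives. Objectives, scale and scores are invented.
    from statistics import mean

    # Self-rated competency (1 = low, 5 = high) before and after a workshop,
    # keyed by learning objective.
    responses = {
        "Design a CDPM measurement framework": {"pre": [2, 3, 2, 3], "post": [4, 4, 3, 5]},
        "Select indicators and data collection tools": {"pre": [3, 2, 2, 3], "post": [4, 3, 4, 4]},
    }

    for objective, scores in responses.items():
        gain = mean(scores["post"]) - mean(scores["pre"])
        print(f"{objective}: mean gain of {gain:.1f} points on a 5-point scale")

Reporting gains this way, objective by objective rather than as overall satisfaction, would let trends in capacity building be compared across workshops and over time.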

Additional Social Networking and Costing Analyses

Social Network Analysis (SNA) is a methodology that analyzes social relationships and connections among people and organizations. This technique was used to corroborate evidence about the value and strength of the regional network built through the AHC. An SNA survey was distributed to all collaborative participants (teams, coaches, faculty, mentors, CFHI staff) at three points during the collaborative to assess the extent to which the AHC was successful in developing an interdisciplinary and cross-regional CDPM improvement network (Survey 1, May 2013; Survey 2, November 2013; and Survey 3, June 2014). The main results from the AHC SNA, as reported by Verma and colleagues (2016), supported by workshop survey and participant interview data, suggested that CFHI was successful in beginning to build a network of regional and provincial teams that shared information and collaborated with each other. New connections were formed between different types of participants, as well as among regions, across the three points in time. As discussed in the final AHC evaluation report (Champagne et al. 2015), although exposed to networking opportunities at the face-to-face workshops, AHC participants felt they could have benefited from more of these types of experiences to establish deeper connections with each other.
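As a concrete illustration of what such an analysis involves, the sketch below builds a small network for each survey wave and reports its size, density and the number of new ties. It is a minimal, hypothetical example assuming the Python networkx library; the participants and ties listed are invented and do not reproduce AHC survey responses.

    # A minimal, hypothetical wave-by-wave network summary (assumes networkx).
    # Participants and ties are invented for illustration only.
    import networkx as nx

    # For each survey wave: pairs of participants who reported collaborating or
    # sharing information with each other.
    waves = {
        "Survey 1 (May 2013)": [("Team A", "Team B"), ("Team B", "Coach 1")],
        "Survey 2 (Nov 2013)": [("Team A", "Team B"), ("Team B", "Coach 1"),
                                ("Team C", "Team D"), ("Team A", "CFHI staff")],
        "Survey 3 (Jun 2014)": [("Team A", "Team B"), ("Team B", "Coach 1"),
                                ("Team C", "Team D"), ("Team A", "CFHI staff"),
                                ("Team D", "Coach 1"), ("Team C", "Team B")],
    }

    seen_ties = set()
    for wave, edges in waves.items():
        g = nx.Graph(edges)  # undirected network for this wave
        new_ties = {frozenset(e) for e in edges} - seen_ties
        print(f"{wave}: {g.number_of_nodes()} participants, {g.number_of_edges()} ties, "
              f"density = {nx.density(g):.2f}, {len(new_ties)} new ties since last wave")
        seen_ties |= {frozenset(e) for e in edges}

Tracking density and new ties across waves is one simple way to show whether a collaborative's network is growing and whether new connections span regions and participant types.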

In order to understand any economic impacts of the AHC, in 2015, CFHI plans to undertake a partial benefit-cost analysis (PBCA) to capture progressive outcomes and sustained impacts realized by selected IPs. CFHI has used PBCA successfully in the past to understand the impacts of its EXTRA Program for Healthcare Improvement IPs (KPMG 2014). By analyzing a sample of high-impact projects to help determine a program's value, PBCA enables investigation of a program's economic impact when there is reason to believe that impacts are unevenly distributed across projects within that program; it is especially useful where a small number of projects create the "lion's share" of impacts. PBCA is an intensive undertaking – it requires sufficient internal analytic resources and capacity, as well as IP teams' commitment to, and participation in, follow-up interviews and data submission – but it will provide supplementary downstream results for the AHC. Given that Atlantic Canada spends more on healthcare than the three largest provinces in Canada (CIHI 2014), and does so with a weaker fiscal and tax base, cost impacts are important. This reinforces the case for continuing to use the approach of "learning by doing" in Atlantic Canada.
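The arithmetic behind a PBCA can be shown with a simplified sketch: benefits are estimated only for a sample of high-impact projects, unsampled projects are conservatively assigned zero benefit, and the sampled benefits are compared against the full program cost. The figures below are invented for illustration and are not CFHI or AHC results.

    # A simplified, hypothetical partial benefit-cost calculation. Figures are invented.
    # Only a sample of high-impact IPs is analyzed; unsampled projects are conservatively
    # assigned zero benefit, which is why the resulting ratio is "partial" (a lower bound
    # under these assumptions).
    sampled_projects = {
        "IP A (COPD pathway)":    {"benefit": 450_000, "cost": 120_000},
        "IP B (diabetes clinic)": {"benefit": 300_000, "cost": 90_000},
    }
    total_program_cost = 600_000  # hypothetical cost of the full program, sampled and unsampled

    sampled_benefit = sum(p["benefit"] for p in sampled_projects.values())
    partial_bcr = sampled_benefit / total_program_cost
    print(f"Sampled benefits: ${sampled_benefit:,}; partial benefit-cost ratio: {partial_bcr:.2f}")

Because the denominator includes the whole program while the numerator counts only the sampled projects, any ratio above one indicates that the sampled projects alone justify the program's cost under these assumptions.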

Key Lessons and Conclusion

The design and implementation of the AHC evaluation between 2012 and 2014 have provided CFHI with rich learnings and options for performance measurement and evaluation approaches that are crucial for consideration in the design of future collaboratives. A collaborative focus with a well-defined scope is essential to the effectiveness of CFHI's programming (Nadeem et al. 2013). Although an initial environmental scan and meetings with senior Atlantic leaders identified CDPM as the agreed-upon cross-regional priority and IP focus (to give teams flexibility in developing IPs within their local contexts), the final AHC evaluation revealed that some team members perceived the broad CDPM focus as limiting in terms of targeted curriculum, support activities and transferable learning. When designing future collaboratives, CFHI must make strategic decisions regarding the benefits and limitations of broad versus targeted foci.

The two-year duration of the AHC proved to be too short for the IP teams to effectively undertake evidence-based CDPM design, implementation and evaluation. As reported by Verma and colleagues (2016), at the IP level, only four out of the eight teams who fully participated reached the implementation phase and provided lessons learned about improving CDPM, specifically regarding self-management support, community partnerships, decision support and factors that facilitate or hinder improvement processes. The notion that it takes time to realize change was also underscored in a regional evaluation of CFHI's EXTRA Program for Healthcare Improvement in Quebec (Dubois and Pomey 2013), which indicated that many IP outcomes are attained following participation in improvement programming. IP implementation and outcome barriers reported by AHC teams included the significant amount of time required to undertake IP work amid competing priorities, staff turnover and resource constraints (Champagne et al. 2015). While CFHI did not use team and organizational readiness assessments at the start of the AHC, assessment tools to facilitate early dialogue between teams and CFHI regarding the teams' capacities, readiness and resources to undertake improvement work, and to track its progress, will be used to ensure optimal design and delivery of CFHI's future collaborations.

At the level of the collaborative, CFHI has learned about its role in capacity-building through tailored learning, networking and improvement support via coaching and facilitation. As reported in the final AHC evaluation, there is consensus that these elements of the AHC were well balanced and quite useful (Champagne et al. 2015). Further work to unpack the organizational barriers and success factors to sustaining and spreading the AHC CDPM IPs will be required to understand the full and longer-term impacts of IPs. As CFHI designs future collaboratives, teams' readiness, capacity and resources will be assessed at the outset of the IPs. In addition, given the timelines within and beyond the collaboration, expectations related to IP progress and outcomes will be made more explicit to ensure that the objectives of the IPs and the collaborative as a whole are realistic and achievable.

Several of the AHC's achievements and learnings were important to CFHI's operations, including organizational processes and program resourcing. The AHC helped identify improved ways for CFHI to work and conduct program evaluation in a matrix organizational structure. In alignment with developmental evaluation (Patton 2011), CFHI established a new role for evaluation – embedding an evaluator within one of its programs (the AHC). Fifty percent of a full-time CFHI employee's time was dedicated to supporting IP measurement activities in the AHC, fostering relationships with the teams and ensuring that ongoing development and feedback were incorporated in the AHC's training and evaluative activities. Overall, staff learned a great deal about CFHI's own evaluation programming, functioning, approach and tools, which has been, and will be, invaluable to its future QICs.

The AHC's evaluation approach and experience have also contributed important insight into the design of CFHI's forthcoming organizational and collaborative evaluation programming. For instance, they have spurred the design of organization-wide evaluative questions (that each of CFHI's new collaboratives will endeavour to answer), including:

  • What is the value of CFHI's pan-Canadian collaborative improvement approach?
  • What changes to quality result from engaging patients, families and citizens?
  • What changes to provider and organizational capacity, practices, policies and culture result from CFHI's programs?
  • What changes to quality (better coordination of care, patient and family experience, value for money), processes and health outcomes result from CFHI's programs?
  • What are the effectiveness and estimated cost-benefit (or return on investment) of CFHI's programs?
  • What can we learn about the context and conditions for implementing, spreading and sustaining improvement? What can we learn from success and failure?

The design and implementation of the AHC evaluation approach offered a pivotal opportunity to further test and hone CFHI's strategies for facilitating evaluation and performance measurement and tracking progress toward improved care. Throughout these processes, CFHI has supported teams' performance measurement activities; developed teams' evaluation and measurement capacity and coaching connections; assessed learner-level knowledge and skill development against learning objectives; created a system to track improvement progress, outcome measures, achievements and lessons learned; and analyzed the magnitude and growth of the network that the AHC set out to create. These activities have given CFHI a strong foundation on which to build future evaluation and performance measurement design and implementation efforts.

Evaluating a Collaborative Project to Improve Chronic Disease Management: Lessons on Fundamental Principles of Design and Implementation

Commentary

Abstract

For the Canadian Foundation for Healthcare Improvement (CFHI), the Atlantic Healthcare Collaboration (AHC) was a decisive opportunity to build on its extensive expertise and experience in regional change management training and to refine its evaluation and performance measurement approach. This article covers the principles and approach of that evaluation, as well as the lessons CFHI learned as it worked diligently to coordinate the Collaboration's activities and to support the improvement project (IP) teams and the network of stakeholders involved, with the goal of designing and undertaking a suite of evaluative activities that would generate evidence and learnings about various elements of chronic disease prevention and management (CDPM) improvement processes and about individual and team capacity building. A further aim was to assess the role and value of CFHI as a facilitator of tailored learning activities and networking opportunities among teams, coaches and other AHC stakeholders.

About the Author(s)

Kaye Phillips, PhD, Senior Director, Canadian Foundation for Healthcare Improvement, Ottawa, ON

Claudia Amar, RN, BScN, MHA, Senior Improvement Lead, Canadian Foundation for Healthcare Improvement, Ottawa, ON

Keesa Elicksen-Jensen, MA, Improvement Analyst, Canadian Foundation for Healthcare Improvement, Ottawa, ON

References

Bibby, J. 2014. "Four Lessons for Running Impactful Collaboratives in Health Care." The Health Foundation. Retrieved February 10, 2015. <http://www.health.org.uk/blog/four-lessons-for-running-impactful-collaboratives-in-health-care/>.

Broer, T., A.P. Nieboer and R.A. Bal. 2010. "Opening the Black Box of Quality Improvement Collaboratives: An Actor-Network Theory Approach." BMC Health Services Research 10(265). doi:10.1186/1472-6963-10-265

Canadian Foundation for Healthcare Improvement (CFHI). 2013. "Making the Case for Change: Advancing the NWT Chronic Disease Management Strategy." Retrieved March 4, 2015. <http://www.cfhi-fcass.ca/WhatWeDo/Collaborations/NorthwestTerritories/NWTReport.aspx>.

Canadian Foundation for Healthcare Improvement (CFHI). 2014. "CFHI Improvement Approach." Retrieved March 4, 2015. <http://www.cfhi-fcass.ca/WhatWeDo/Collaborations/OurApproach.aspx>.

Canadian Institute for Health Information (CIHI). 2014. "National Health Expenditure Trends, 1975 to 2014." Retrieved March 4, 2015. <https://secure.cihi.ca/free_products/NHEXTrendsReport2014_ENweb.pdf>.

Champagne, F., P. Smits, C. Amar, K. Elicksen, J. Verma and K. Phillips. 2015. "Evaluation of the Atlantic Healthcare Collaboration." Ottawa, ON: CFHI.

Dubois, C.A. and M.P. Pomey. 2013. "Évaluation des projets d'intervention conduits dans le cadre du programme FORCES/EXTRA au Québec [Evaluation of Intervention Projects in the framework of the Executive Training for Research Application (EXTRA/FORCES) Program in Quebec]." Ottawa, ON: CFHI.

Homer, C.J., P. Forbes, L. Horvitz, L.E. Peterson, D. Wypij and P. Heinrich. 2005. "Impact of a Quality Improvement Program on Care and Outcomes for Children with Asthma." Archives of Pediatrics and Adolescent Medicine 159(5): 464–69. doi:10.1001/archpedi.159.5.464

Knight, A.W., C. Szucs, M. Dhillon, T. Lembke and C. Mitchell. 2014. "The eCollaborative: Using a Quality Improvement Collaborative to Implement the National eHealth Record System in Australian Primary Care Practices." International Journal for Quality in Health Care 26(4): 411–17. doi:10.1093/intqhc/mzu059

KPMG. 2014. "Five-Year Evaluation of the Canadian Foundation for Healthcare Improvement." Retrieved March 5, 2015. <http://www.cfhi-fcass.ca/sf-docs/default-source/reports/cfhi--five-year-evaluation-e.pdf?sfvrsn=2>.

Lin, M.K., J.A. Marsteller, S.M. Shortell, P. Mendel, M. Pearson, M. Rosen et al. 2005. "Motivation to Change Chronic Illness Care: Results From a National Evaluation of Quality Improvement Collaboratives." Health Care Management Review 30(2): 139–56.

Mittman, B. 2004. "Creating the Evidence Base for Quality Improvement Collaboratives." Annals of Internal Medicine 140(11): 897–901.

Nadeem, E., S.S. Olin, L.C. Hill, K.E. Hoagwood and S.M. Horwitz. 2013. "Understanding the Components of Quality Improvement Collaboratives: A Systematic Literature Review." Milbank Quarterly 91(2): 354–94. doi:10.1111/milq.12016

Øvretveit, J., P. Bate, P. Cleary, S. Cretin, D. Gustafson, K. McInnes et al. 2002. "Quality Collaboratives: Lessons From Research." Quality and Safety in Health Care 11(4): 345–51. doi:10.1136/qhc.11.4.345

Patton, M.Q. 2011. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York, NY: The Guilford Press.

Pearson, M.L., S. Wu, J. Schaefer, A.E. Bonomi, S.M. Shortell, P.J. Mendel et al. 2005. "Assessing the Implementation of the Chronic Care Model in Quality Improvement Collaboratives." Health Services Research 40(4): 978–96. doi:10.1111/j.1475-6773.2005.00397.x

Phillips, K., J. Verma, C. Amar, K. Elicksen and F. Champagne. 2013. "Atlantic Healthcare Collaboration: CFHI's Evaluation Plan." Available upon request.

Schouten, L.M., M.E. Hulscher, J.J. van Everdingen, R. Huijsman and R.P. Grol. 2008. "Evidence for the Impact of Quality Improvement Collaboratives: Systematic Review." British Medical Journal 336(7659): 1491–94. doi:10.1136/bmj.39570.749884.BE

Verma, J.Y., J.-L. Denis, S. Samis, F. Champagne and M. O'Neil. 2016. "A Collaborative Approach to a Chronic Care Problem." HealthcarePapers 15(Special Issue, January): 19–60. doi:10.12927/hcpap.2016.24503

Wagner, E.H., B.T. Austin, C. Davis, M. Hindmarsh, J. Schaefer and A. Bonomi. 2001. "Improving Chronic Illness Care: Translating Evidence Into Action." Health Affairs 20(6): 64–78. doi:10.1377/hlthaff.20.6.64
