Healthcare Policy 10(SP) September 2014: 25–35. doi:10.12927/hcpol.2014.23852
Research Paper

Acute Care Hospitals’ Accountability to Provincial Funders

Seija K. Kromm, G. Ross Baker, Walter P. Wodchis and Raisa B. Deber

Abstract

Ontario's acute care hospitals are subject to a number of tools, including legislation and performance measurement for fiscal accountability and accountability for quality. Examination of accountability documents used in Ontario at the government, regional and acute care hospital levels reveals three trends: (a) the number of performance measures being used in the acute care hospital sector has increased significantly; (b) the focus of the health system has expanded from accountability for funding and service volumes to include accountability for quality and patient safety; and (c) the accountability requirements are misaligned at the different levels. These trends may affect the success of the accountability approach currently being used.

Acute care hospitals in the province of Ontario are private organizations that receive most of their funding from public sources; traditionally, these funds have come from the Ministry of Health and Long-Term Care (MOHLTC). Although hospitals have long been held accountable to patients, the public and the government for their use of public funds, in 2001 it was recommended that their accountability to the government be expanded to include a broad range of performance measurement mechanisms, including hospital report cards. Formal accountability agreements between the MOHLTC and acute care hospitals were mandated by the 2004 Commitment to the Future of Medicare Act (CFMA) as a condition for funding. As noted in the Introduction to this Special Issue, the agreements included performance measures and targets for financial, service volume and select clinical indicators, employing a combination of expenditure, regulatory and information policy mechanisms (Deber 2014).

In 2006, Ontario moved towards a regional model, with provincial funds for selected services (including hospitals) now flowing through 14 geographically based local health integration networks (LHINs). Unlike some other Canadian provinces, Ontario's hospitals retained their independent corporate boards after LHINs were introduced. Each LHIN is a Crown corporation that operates at arm's length from the government. Accountability for the use of these funds employed a series of expenditure policy instruments. The 2006 Local Health System Integration Act (LHSIA), which created the LHINs, mandated ministry–LHIN performance agreements (MLPAs) between each LHIN and the MOHLTC to establish the flow of funds to, and set performance targets for, LHINs. The province holds LHINs to account for these targets. In 2007, LHINs were given the responsibility to allocate funds and to sign and monitor hospital service accountability agreements (H-SAAs) with the hospitals in their region, holding hospitals (via their independent corporate boards) accountable for meeting their H-SAA obligations as a requirement for funding. If targets are not met, future funding may be reduced (as outlined in the CFMA and accountability agreements) or, in extreme situations, the provincial government may appoint a supervisor to replace the hospital's CEO.

A third piece of legislation, the 2010 Excellent Care for All Act (ECFAA), employs the information policy instrument; it requires each acute care hospital to submit an annual Quality Improvement Plan (QIP) to Health Quality Ontario (HQO), an arm's-length government agency created under the CFMA. This policy mechanism assumes that hospitals will want to improve quality of care if they are given the information to do so (Veillard et al. 2005, 2010). Making hospital performance information publicly available has been found to lead to quality improvement (Fung et al. 2008).

Each of these agreements (H-SAAs, MLPAs and QIPs) contains a standard set of required (or recommended, in the case of the QIP) core performance indicators that are used province-wide, but the targets can differ, depending on the LHIN or acute care hospital.
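To make this structure concrete, here is a minimal sketch (hypothetical Python; the indicator name, definition and target values are invented for illustration and are not drawn from the actual agreements) of a province-wide indicator definition paired with organization-specific targets:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Indicator:
    """A core performance indicator: one province-wide definition, used in one or more agreements."""
    name: str
    definition: str
    agreements: frozenset  # subset of {"H-SAA", "MLPA", "QIP"}

@dataclass
class Agreement:
    """An accountability agreement holding one organization to its own targets."""
    organization: str                            # a LHIN or an acute care hospital
    targets: dict = field(default_factory=dict)  # indicator name -> target value

# The definition is standard across the province...
alc = Indicator(
    name="percentage_alc_days",
    definition="ALC days as a percentage of total inpatient days",
    agreements=frozenset({"H-SAA", "MLPA", "QIP"}),
)

# ...but each organization may carry a different target (values invented).
hospital_a = Agreement("Hospital A", targets={alc.name: 9.5})
hospital_b = Agreement("Hospital B", targets={alc.name: 12.0})
```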

Ontario's use of legislation and performance measurement follows the examples of the UK and the US (Veillard et al. 2012), as well as the Canadian province of British Columbia. Since the early 1980s, performance measures have been used for UK hospitals, initially focusing on activities and costs, then expanding in the 1990s to include clinical aspects of care (Smee 2002). The US also uses performance measurement for hospital quality improvement along with financial incentives and public reporting of performance information (Blumenthal and Jena 2013; Committee on Quality of Health Care in America, Institute of Medicine 2001; Jha et al. 2005; Lindenauer et al. 2007). British Columbia was the first Canadian province to use performance agreements between the government and acute care hospitals (Quigley and Scott 2004).

This paper examines the Ontario government's use of the policy instruments of legislation and performance measurement to hold LHINs and hospitals accountable for the use of public funds, quality of service or both. The examination reveals several issues that challenge hospitals and the success of this approach to accountability.

Methods

This study focuses on three accountability documents currently used in Ontario's acute care hospital sector: (a) ministry–LHIN performance agreements (MLPAs), (b) hospital service accountability agreements (H-SAAs) and (c) Quality Improvement Plans (QIPs). Links to these documents and specifications of indicator definitions are provided in Appendices A and B to this paper. For this analysis, we retrieved data on performance indicators from H-SAAs for the years 2005 to the present, MLPAs for the years 2006 to the present and QIPs for 2011 to the present. A performance indicator is defined as a measure of local health system (or acute care hospital) performance for which a specific target is set and for which each LHIN (or hospital) is held accountable. Table 1 lists performance indicators used in H-SAAs over time, noting indicators that align with those used in the MLPAs. Table 2 shows the indicators used in part B of the QIP, and notes those that are also found in the H-SAAs and MLPAs.


TABLE 1. Indicators used in accountability agreements, 2005–2014
PERFORMANCE INDICATORS (Definitions in Appendix B) 2005–2007 2007–2008 2008–2010 2010–2011 2011–2012 2012–2013 2013–2014
SERVICE/GLOBAL VOLUMES
Relative acute length of stay for select case-mix groups            
Relative total acute length of stay            
Ambulatory care visits (total outpatient minus emergency department visits)
Total acute activity (including in-patient and day surgery* weighted cases)
Complex continuing care Resource Utilization Group weighted patient days
In-patient mental health* weighted patient days
Elderly capital assistance program (ELDCAP) in-patient days
Rehabilitation* in-patient days (weighted cases)
Emergency department visits (weighted cases)
ACCOUNTABILITY INDICATORS
Organizational Health
Percentage of full-time nurses        
Financial indicators:
Current ratio
Total margin or balanced budget •• •• •• •• •• ••
Person Experience
Percentage of chronic patients with new stage 2 or greater skin ulcers          
Rate of readmission to own facility for select case-mix groups X X X X
90th percentile ED length of stay for:              
  Admitted patients       X •• •• ••
  Non-admitted complex (CTAS I–III) patients       X •• •• ••
  Non-admitted minor uncomplicated (CTAS IV–V) patients       X •• •• ••
90th percentile wait times for:*              
  Cancer surgery   X X X •• •• ••
  Cardiac bypass surgery   X X X •• •• ••
  Cataract surgery   X X X •• •• ••
  Hip joint replacement surgery   X X X •• •• ••
  Knee joint replacement surgery   X X X •• •• ••
  Diagnostic MRI scan   X X X •• •• ••
  Diagnostic CT scan   X X X •• •• ••
Hospital-acquired infections:*
Cases of ventilator-associated pneumonia (VAP)          
Central line infection rate          
Rates of Clostridium difficile          
Rates of vancomycin-resistant enterococcus          
Rates of methicillin-resistant Staphylococcus aureus          
System Perspective
Percentage of ALC days   X X X X •• ••
* Not all hospitals provide these services; if not, their targets = 0
• An indicator used in the H-SAA
•• An indicator used in both the H-SAA and MLPA
X An indicator in MLPA but not the H-SAA

Results

Ministry–LHIN performance agreements

MLPAs were first used in 2007 (known then as ministry–LHIN accountability agreements). They outlined performance obligations for LHINs, including financial management; reporting requirements; public accountability; and specific targets for financial, service-level and other performance indicators. Thirteen indicators were used in the 2007 version; 16 appear in the current version, nine of them carried over from 2007. As noted in Table 1, some indicators were dropped and others added over time. The only indicator that holds all LHINs to the same target regardless of their location, size or the services delivered by their health service providers is the annual balanced-budget requirement, under which total revenue must be greater than or equal to total expenses. Other indicators that have been retained include percentage of alternate level of care (ALC) days and specific 90th percentile wait time indicators that align with the priority areas identified in the federal government's National Wait Times Initiative (Health Canada 2004). The consistent use of these indicators emphasizes their continued importance and the health system's focus on financial performance and access.

Hospital service accountability agreements

The two main categories of performance indicators in the H-SAA are service volumes (including global volumes) and accountability indicators. As with the MLPA, the two financial indicators under the organizational health subcategory of accountability indicators require all acute care hospitals in the province to meet the same performance targets, regardless of their size, location or services provided. The province-wide target for "total margin" (see Appendix B for definition) is at least 0%, meaning that each acute care hospital must balance its budget while providing the service levels outlined in its H-SAA. The province-wide target for "current ratio" (see Appendix B) is 0.8 to 2.0. These two financial indicators and seven of the service volume indicators have been used consistently in the H-SAAs.
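As a worked illustration of these two province-wide targets, the sketch below computes a hospital's total margin and current ratio and checks them against the 0% floor and the 0.8–2.0 band. It assumes the conventional accounting definitions (the exact H-SAA formulas are given in Appendix B), and the dollar figures are invented:

```python
def total_margin(total_revenue: float, total_expenses: float) -> float:
    """Surplus (or deficit) as a fraction of revenue; the province-wide target is >= 0%."""
    return (total_revenue - total_expenses) / total_revenue

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Current assets over current liabilities; the province-wide target band is 0.8 to 2.0."""
    return current_assets / current_liabilities

# Illustrative figures only, not drawn from any actual hospital.
margin = total_margin(total_revenue=250_000_000, total_expenses=248_000_000)
ratio = current_ratio(current_assets=40_000_000, current_liabilities=32_000_000)

print(f"total margin  = {margin:.3%}  (target: >= 0%)")    # 0.800%
print(f"current ratio = {ratio:.2f}   (target: 0.8-2.0)")  # 1.25
print("meets both province-wide targets:", margin >= 0 and 0.8 <= ratio <= 2.0)
```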

Quality Improvement Plans

Accountability for quality is emphasized by the use of QIPs, which employ recommended quality indicators grouped according to five quality dimensions identified by HQO: safety, effectiveness, access, patient-centred and integrated. Hospitals are encouraged to choose at least one recommended indicator in each dimension; this information can then be used for province-wide comparisons. Recommended (rather than required) indicators are used because it is recognized that not all indicators apply to all hospitals: some hospitals may not provide the care associated with an indicator, and in small community hospitals, the volume of services provided may be too low to support strong statistical analyses. Accountability for quality was sought prior to the QIP; the MOHLTC had required all acute care hospitals to publicly report information on some quality indicators (e.g., hospital-acquired infections), but the Auditor General of Ontario found that not all hospitals were using the same indicator definitions (Office of the Auditor General of Ontario 2008). The QIP guidance document provides a standard definition for each recommended quality indicator. These standardized definitions are an improvement, making it possible to compare hospitals.
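A minimal sketch of the "at least one indicator per dimension" expectation (the indicator-to-dimension mapping below is hypothetical shorthand; the actual recommended list and definitions are in the QIP guidance document):

```python
HQO_DIMENSIONS = {"safety", "effectiveness", "access", "patient-centred", "integrated"}

# A hospital's chosen QIP indicators, each tagged with its HQO quality dimension.
chosen = {
    "CDI rate per 1,000 patient days": "safety",
    "Hospital standardized mortality ratio": "effectiveness",
    "90th percentile ED length of stay, admitted patients": "access",
    "Would you recommend this hospital to your friends and family?": "patient-centred",
    "Percentage ALC days": "integrated",
}

missing = HQO_DIMENSIONS - set(chosen.values())
print("all five dimensions covered" if not missing else f"dimensions not covered: {sorted(missing)}")
```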

Table 2 shows that all indicators used in the first year of the QIP have been carried over to the present time. The 2012–2013 QIP added two new patient-centred indicators, while the 2013–2014 version added two new safety indicators (including medication reconciliation). Six indicators currently used in the H-SAA, and one in the MLPA, align with indicators used in the QIP.


TABLE 2. Quality dimensions, objectives and indicators used in part B of the QIP
Quality Dimension Objective Measure/Indicator (see Appendix B for definitions) 2011–2012 2012–2013 2013–2014
Safety Reduce C. difficile infections (CDIs) and associated diseases CDI rate per 1,000 patient days
Reduce incidence of ventilator-associated pneumonia (VAP) VAP rate per 1,000 ventilator days
Improve provider hand hygiene compliance Hand hygiene compliance before patient contact
Reduce rate of central line bloodstream infections Rate of central line bloodstream infections per 1,000 central line days
Reduce incidence of new pressure ulcers Pressure ulcers (≥ stage 2)
Avoid patient falls % of complex continuing care residents who fell in the last 30 days
Reduce rates of deaths and complications associated with surgical care Surgical safety checklist
Rate of in-hospital mortality following major surgery    
Reduce use of physical restraints Physical restraints
Medication reconciliation at admission Medication reconciliation at admission
Effectiveness Reduce unnecessary deaths in hospitals Hospital standardized mortality ratio
Improve organizational financial health Total margin (consolidated)
Access Reduce wait times in the emergency department 90th percentile ED length of stay for admitted patients
Patient-centred Improve patient satisfaction "Would you recommend this hospital to your friends and family?"
"Overall, how would you rate the care and services you received at the hospital?"
In-house survey (if available): "Willingness of patients to recommend the hospital to friends or family"
Integrated Reduce unnecessary time spent in acute care Percentage ALC days
Reduce unnecessary hospital readmission Readmission within 30 days for selected CMGs to ANY facility *
* Only readmissions to own institution
Indicator only in QIP
Indicator in both the QIP and H-SAA
Indicator in both the QIP and MLPA

Issues in Hospital Accountability

Analysis of the indicator data in Tables 1 and 2 reveals three main issues in the evolution of accountability and performance measurement that can make accountability more difficult to achieve in Ontario's acute care hospital sector: (a) the number of performance measures being used in the acute care hospital sector has increased significantly; (b) the focus of the health system has expanded from accountability for funding and service volumes to include accountability for quality and patient safety; and (c) the indicators are not always aligned across agreements or within hospital control, and may be given different targets in different agreements. Each of these issues is discussed below.

Increased number of indicators

The number of indicators used in all accountability documents has increased over time. The most significant increase is at the acute care hospital level, from 13 in 2005/06 to 25 in the current H-SAA (Table 1). In the past three years, 16 performance indicators have been added to the H-SAA, while no indicators have been removed, increasing the requirements tied to funding. As well, the number of recommended indicators in the QIP has increased from 14 in its first year to 18 in the current year (six of which overlap with the H-SAA for a total incremental increase of 12 recommended indicators). This means 28 new indicators have been introduced for hospitals in the past three years. Hospitals that report on all recommended QIP indicators are now reporting up to 37 indicators for accountability purposes.
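The arithmetic in this paragraph can be made explicit; a quick check in Python, using only the counts stated above:

```python
# Counts taken from the paragraph above.
hsaa_current = 25            # indicators in the current H-SAA (up from 13 in 2005/06)
hsaa_added = 16              # H-SAA indicators added over the past three years
qip_current = 18             # recommended QIP indicators in the current year (up from 14)
qip_overlap_with_hsaa = 6    # current QIP indicators that duplicate H-SAA indicators

qip_incremental = qip_current - qip_overlap_with_hsaa  # 12 QIP indicators beyond the H-SAA
new_indicators = hsaa_added + qip_incremental          # 28 new indicators in three years
max_reported = hsaa_current + qip_incremental          # up to 37 indicators reported

assert (qip_incremental, new_indicators, max_reported) == (12, 28, 37)
```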

Expanding focus of accountability

The government uses the MLPA and H-SAA to monitor fiscal management and health services delivery through the LHINs. Table 1 shows that the H-SAA initially focused on financial performance (balanced budget, or total margin of 0%) and service volumes (Reeleder et al. 2008). These areas remain a focus, but accountability has expanded into areas related to integrating hospital performance with community-based providers. For example, the MLPA now has indicators for community care access centre (CCAC) in-home services wait times and for repeat unscheduled emergency department visits for mental health conditions and for substance abuse conditions. As well, the H-SAA added the percentage of ALC days indicator in 2012/13. The ECFAA continues the expansion into quality of care and patient safety, requiring hospitals to report on effectiveness of care, nosocomial infections, other areas of patient safety and even patient satisfaction (Government of Ontario 2010).

Alignment, controllability and duplication of accountability indicators

The H-SAA states that the Ontario government has recognized the need for alignment between levels of the healthcare system for accountability purposes. Tables 1 and 2 show that alignment has improved over time as indicators aligning with those used in the MLPA have been added to the H-SAA. For example, percentage of ALC days, emergency department length of stay and 90th percentile wait times for priority areas are now included in the H-SAA. Alignment is important for achieving accountability (Kramer et al. 2009), so this increased alignment is an improvement. LHINs can now hold acute care hospitals accountable for performance measures that align with MLPA indicators and that are tied to aspects of care provided in an acute care setting or otherwise within the control of the acute care hospital.

This increases the controllability of an indicator for LHINs. Controllability is the extent to which a health service provider or LHIN can control its performance on an indicator. Without controllability, organizations may be held accountable for performance targets they cannot directly influence, possibly leading to reduced funding as outlined in the CFMA and accountability agreements. The issue of controllability was recognized before 2008, when decision-makers chose not to include percentage of ALC days as an indicator in the H-SAA: they agreed that acute care hospitals should not be held accountable for system issues beyond their control (e.g., a lack of suitable discharge locations such as long-term care beds), and inconsistent definitions of ALC were then in use (Ontario Health Quality Council and Ontario Joint Policy and Planning Committee 2008). Standardizing the definition of ALC (see Appendix B) and including the indicator in both the MLPA and H-SAA increase controllability.

Duplication of indicators is also shown in Tables 1 and 2; hospitals may need to report on the same indicator to two different agents: HQO and the LHIN. This may seem inconsequential, but it can be problematic when, as is sometimes the case, the indicator carries different targets depending on the agent to which the hospital is reporting. (For further discussion of hospitals' reporting burdens, see Kraetschmer et al. 2014.)
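To illustrate the problem, a sketch (the indicator names and target values below are invented) that flags indicators a hospital reports under both its H-SAA and its QIP with inconsistent targets:

```python
# Hypothetical targets for one hospital under two agreements (values invented).
hsaa_targets = {"percentage_alc_days": 9.5, "ed_los_90th_pct_admitted_hours": 25.0}
qip_targets = {"percentage_alc_days": 12.0, "cdi_rate_per_1000_patient_days": 0.30}

# Any indicator present in both agreements but with different targets sends mixed signals.
for indicator in sorted(hsaa_targets.keys() & qip_targets.keys()):
    hsaa_t, qip_t = hsaa_targets[indicator], qip_targets[indicator]
    if hsaa_t != qip_t:
        print(f"{indicator}: H-SAA target {hsaa_t} vs QIP target {qip_t} -- conflicting expectations")
```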

Discussion

As defined in the Introduction to this Special Issue (Deber 2014), accountability means having to be answerable to someone for meeting defined objectives. These objectives are often defined in terms of targets for performance measures for financial, clinical and service volumes. Performance measurement for accountability can be beneficial by establishing key dimensions of hospital performance; encouraging the use of best practices through the sharing of information between hospitals on uniform indicators; managing system-wide organizational performance; and aligning organizational strategy with health system strategy (Veillard et al. 2005, 2010).

The present study is limited in scope by the information contained in the three main accountability documents used in Ontario's acute care hospital sector. These documents cannot explain why the number of performance indicators increased over time, why the indicators in use were changed and what drove those changes, how hospitals react to these changes, or what it costs hospitals to respond to accountability requirements. These are all areas for future research.

Even so, the three main accountability documents used in Ontario's acute care hospital sector, and their performance measures, reveal issues that can make accountability for the use of public funds more challenging to achieve. The situation appears manageable, however: although the number of indicators used for accountability in Ontario's acute care hospital sector has increased significantly, it remains modest compared with the reporting requirements faced by hospitals in the UK and the US. Still, the potential of these reporting tools to improve performance would be increased by greater alignment between indicators used at the LHIN and acute care hospital levels (Kramer et al. 2009).

The increased number of indicators also shows an expanding focus of the health system beyond finances and service volumes into important areas such as patient-centred care and quality of care, including integration with community providers. A benefit of this increase and expansion is that hospitals and LHINs are given guidance on where to focus their attention and improvement efforts. Even so, it is clear that the expansion and refinement of measures is an evolving process. For example, new measures are introduced for a time but then discontinued in favour of more commonly used measures, such as those for financial performance, access, nosocomial infections and readmissions (Snowdon et al. 2012). Other measures changed over time as their definitions were refined. Changing the measures used, or their definitions, is problematic because inter- or intra-hospital comparisons over time become more difficult or impossible.

While a number of accountability measures focus on efforts to coordinate care with community providers, they still fall short of capturing health system coordination and integration in other areas of healthcare provision, such as pharmacy services and primary care. As well, continued efforts to increase coordination of care mean that controllability is likely to remain an issue, particularly when hospitals are held accountable for performance measures that require collaboration with community-based providers (e.g., readmission rates).

Performance measurement is critical for performance improvement, but a problem arises when hospitals are forced to make trade-offs between measurement activities and attention to improvement. As new measures are developed and added to reporting requirements, hospitals must devote more resources (e.g., finances, time) to performance measurement and reporting. Some organizations may consider these resources better spent on providing more patient care or engaging in improvement activities (not just measurement of activities). Even so, without performance measurement it is not possible to determine whether additional care or improvement initiatives follow best practice guidelines or lead to actual improvements.

Are all performance data valuable and useful for accountability purposes, or is the system moving towards measurement for the sake of measuring what can be measured? Our results support the framework presented in the Introduction (Deber 2014), indicating that measures may be chosen because they capture elements of healthcare that are measurable, or based on their feasibility and the availability of data (Veillard et al. 2010).

Conclusion

This paper has focused on three legislated policies for performance measurement and reporting currently used in Ontario's acute care hospital sector for accountability, and has explored issues that may challenge the effectiveness of these arrangements. The performance indicators used over time for LHINs and acute care hospitals show that some indicators are used consistently, some are abandoned and many others are newly introduced. These changes show that the focus of accountability has expanded from finances and service volumes to include access, quality, patient safety and the patient experience, emphasizing the importance of these areas. The expansion of accountability has improved alignment between levels of accountability (LHINs and hospitals), increasing the likelihood that performance targets will be achieved as the system becomes more aligned. Even so, controllability will likely remain an issue as the focus on collaborative care between hospitals and community-based providers continues. The expansion of accountability has increased the focus on standardizing definitions, but it has also led to duplication of the measures being used. Hospitals are required to report similar data to multiple agents and/or to meet more than one target for the same measure; this can degrade data quality or lead to confusion in reporting, challenging the ability of accountability policies to improve performance while keeping public spending on healthcare in check. On balance, however, the availability of standardized data may help hospitals improve their performance, at least with respect to the indicators being captured. Standardization also provides an opportunity for future research to evaluate the effect of accountability on hospital performance.

Acute Care Hospitals' Accountability to Provincial Funders

Résumé

In Ontario, acute care hospitals are subject to a number of rules, including legislation and performance measurement for fiscal accountability, as well as accountability for quality. Examination of accountability documents at the government, regional and hospital levels reveals three trends: (a) the number of performance measures used in acute care hospitals has increased significantly; (b) the health system's efforts have expanded from accountability for finances and service volumes to include accountability for quality and patient safety; and (c) accountability requirements are not harmonized across the various administrative levels. These trends may affect the success of current approaches to accountability.

About the Author(s)

Seija K. Kromm, PhD, Postdoctoral Fellow, Health System Performance Research Network, University of Toronto, Toronto, ON

G. Ross Baker, PhD, Professor, Institute of Health Policy, Management & Evaluation, University of Toronto, Toronto, ON

Walter P. Wodchis, PhD, Associate Professor, Institute of Health Policy, Management & Evaluation, University of Toronto; Research Scientist, Toronto Rehabilitation Institute; Adjunct Scientist, Institute for Clinical Evaluative Sciences, Toronto, ON

Raisa B. Deber, PhD, Professor, Institute of Health Policy, Management & Evaluation, University of Toronto, Toronto, ON

Correspondence may be directed to: Seija K. Kromm, Postdoctoral Fellow, Institute of Health Policy, Management & Evaluation, University of Toronto, Health Sciences Building, 155 College St., Suite 425, Toronto, ON M5T 3M6; e-mail: seija.kromm@utoronto.ca.

Acknowledgment

This study was funded by a CIHR-PHSI grant (CIHR Grant Number PHE-101967) and an Alberta Innovates – Health Solutions Graduate Studentship.

References

Blumenthal, D. and A.B. Jena. 2013. "Hospital Value-Based Purchasing." Journal of Hospital Medicine 8(5): 271–77. doi: 10.1002/jhm.2045.

Committee on Quality of Health Care in America, Institute of Medicine. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press.

Deber, R.B. 2014. "Thinking about Accountability." Healthcare Policy 10(Special Issue): 12–24.

Fung, C.H., Y.-W. Lim, S. Mattke, C. Damberg and P.G. Shekelle. 2008. "Systematic Review: The Evidence that Publishing Patient Care Performance Data Improves Quality of Care." Annals of Internal Medicine 148(2): 111–23.

Government of Ontario. 2010. Excellent Care for All Act, 2010. SO 2010, C.14. Retrieved June 3, 2014. <http://www.e-laws.gov.on.ca/html/statutes/english/elaws_statutes_10e14_e.htm>.

Health Canada. 2004 (September 16). A 10-Year Plan to Strengthen Health Care. Retrieved June 3, 2014. <http://www.hc-sc.gc.ca/hcs-sss/delivery-prestation/fptcollab/2004-fmm-rpm/index-eng.php>.

Jha, A.K., Z. Li, J. Orav and A.M. Epstein. 2005. "Care in US Hospitals: The Hospital Quality Alliance Program." New England Journal of Medicine 353(3): 265–74. doi: 10.1056/NEJMsa051249.

Kraetschmer, N., J. Jass, C. Woodman, I. Koo, S.K. Kromm and R.B. Deber. 2014. "Hospitals' Internal Accountability." Healthcare Policy 10(Special Issue): 36–44.

Kramer, S., R. Solomon and C. Dingman. 2009. "Achieving Accountability." Healthcare Quarterly 12 (Special Issue): 22–27.

Lindenauer, P.K., D. Remus, S. Roman, M.B. Rothberg, E.M. Benjamin, A. Ma et al. 2007. "Public Reporting and Pay for Performance in Hospital Quality Improvement." New England Journal of Medicine 356(5): 486–96. doi: 10.1056/NEJMsa064964.

Office of the Auditor General of Ontario. 2008. Prevention and Control of Hospital-Acquired Infections. Special report. Retrieved June 3, 2014. <http://www.auditor.on.ca/en/reports_en/hai_en.pdf>.

Ontario Health Quality Council and Ontario Joint Policy and Planning Committee. 2008. Accountability Agreements in Ontario's Health System: How Can They Accelerate Quality Improvement and Enhance Public Reporting? Retrieved June 3, 2014. <http://www.ontla.on.ca/library/repository/mon/22000/285096.pdf>.

Quigley, M.A. and G.W. Scott. 2004. Hospital Governance and Accountability in Ontario. Toronto: Ontario Hospital Association. Retrieved June 3, 2014. <http://www.mcmillan.ca/Files/GScott_MQuigley_HospitalGovernance_OHA_0404.pdf>.

Reeleder, D., V. Goel, P.A. Singer and D.K. Martin. 2008. "Accountability Agreements in Ontario Hospitals: Are They Fair?" Journal of Public Administration Research and Theory 18(1): 161–75. doi: 10.1093/jopart/mul024.

Smee, C.H. 2002. "Improving Value for Money in the United Kingdom National Health Service: Performance Measurement and Improvement in a Centralised System." In P. Smith, ed., Measuring Up: Improving Health System Performance in OECD Countries (pp. 57–85). Paris: Organisation for Economic Co-operation and Development.

Snowdon, A., K. Schnarr, A. Hussein and C. Alessi. 2012. Measuring What Matters: The Cost vs. Values of Health Care. London, ON: University of Western Ontario. Retrieved June 3, 2014. <http://sites.ivey.ca/healthinnovation/thought-leadership/white-papers/measuring-what-matters-the-cost-vs-values-of-health-care-november-2012/>.

Veillard, J., F. Champagne, N. Klazinga, V. Kazandjian, O.A. Arah and A.L. Guisset. 2005. "A Performance Assessment Framework for Hospitals: The WHO Regional Office for Europe Path Project." International Journal for Quality in Health Care 17(6): 487–96. doi: 10.1093/intqhc/mzi072.

Veillard, J., T. Huynh, S. Ardal, S. Kadandale, N.S. Klazinga and A.D. Brown. 2010. "Making Health System Performance Measurement Useful to Policy Makers: Aligning Strategies, Measurement and Local Health System Accountability in Ontario." Healthcare Policy 5(3): 49–65.

Veillard, J., B. Tipper and N. Klazinga. 2012. "Quality Legislation: Lessons for Ontario from Abroad." Healthcare Quarterly 15 (Special Issue): 6. doi: 10.12927/hcq.2012.23155.
