Healthcare Quarterly 7(2) March 2004. doi:10.12927/hcq.2004.17238
Longwoods Review

Why Did the World Health Organization Rate Canada's Health System as 30th? Some Thoughts on League Tables

Raisa Deber

Abstract

Background: Canada's ranking of 30th in the World Health Organization's World Health Report 2000 has been widely cited as indicating the need for serious reform of how the healthcare system is organized and managed.

Method: The WHO 2000 report was reviewed to analyze how these rankings were derived.

Results: The measure of "overall health-system performance" derives from adjusting "goal attainment" for educational attainment. Although goal attainment is in theory based on five measures (level and distribution of health, level and distribution of "responsiveness," and "fairness of financial contribution"), the actual values assigned to most countries, including Canada, were never directly measured. The scores do not incorporate any information about the actual workings of the system, other than as reflected in life expectancy. The primary reason for Canada's relatively low standing rests on the relatively high educational level of its population, particularly as compared to France, rather than on any features of its health system.

Interpretation: The WHO 2000 rankings are not particularly helpful guides to measuring the performance of health systems.

Introduction

How good is Canada's healthcare system? How does it stack up internationally? Currently, heavy emphasis is being placed on "accountability" and "performance measurement," endorsed by the Romanow Commission (Commission on the Future of Healthcare in Canada 2002), the Kirby Committee (Standing Senate Committee on Social Affairs, Science and Technology 2002), and the First Ministers' accord (First Ministers 2003). One major element of such measurement is the ability to perform comparisons across jurisdictions.

WHO's World Health Report 2000 (WHO 2000) was an ambitious effort to compare 191 countries in terms of their ability to meet three main goals - "improving health, increasing responsiveness to the legitimate demands of the population, and ensuring that financial burdens are distributed fairly" (Evans, Tandon, Murray and Lauer 2001). Canada's ranking at only 30th in "overall health-system performance" (Buske 2001) has been characterized as "a further blow to an already shaken collective psyche" (Lewis, Donaldson, Mitton and Currie 2001). This ranking has been repeatedly cited as indicating serious problems in the quality, accessibility, cost-effectiveness, or responsiveness of Canadian healthcare (Bear 2000; McMahon and Zelder 2002). The chairman of the Royal Bank of Canada has claimed that the ranking placed us "30th in the world in terms of quality and accessibility of care" (Saint-Pierre 2002). Indeed, the Canadian Medical Association, in a letter to then-minister of health Allan Rock, wrote: "As undoubtedly you know, Canada was ranked 30th out of 191 countries with respect to overall performance of its healthcare system, and 35th of 191 with regard to performance on the health level. Given the resources available to us in Canada, including the people who are dedicated to excellence in health and healthcare, this report serves as a serious wake-up call to all of us" (Scully 2000).

There is only one problem - the WHO rating indicates no such thing. The exercise was a noteworthy beginning, but has been fiercely criticized on a number of grounds (Almeida et al. 2001; McKee 2001; Nord 2002; Pedersen 2002; Wagstaff 2002; Williams 2001). The WHO has reacted seriously to these critiques, even setting up a website on how to measure health systems performance (at www.who.int/health-systems-performance/) that links to many of the critiques and technical papers, and the WHO team's response. Rethinking of the exercise continues; it is noteworthy that subsequent WHO reports have not included similar league tables. However, the 2000 rankings continue to be highly cited.

Measuring performance is inherently complex. Attempting to summarize a variety of factors into a single dimension is clearly difficult, and the WHO deserves commendation for its attempt to deal with this thorny question. Yet, given the extensive use that has been made of this rating, it may be worth clarifying how it was derived, and some of the problems that may limit its utility in improving health-system performance.

How Were the Rankings Computed?

The report assessed all 191 member states to derive two separate performance measures: "overall health-system performance" and "performance on health level." Both attempt to judge "how efficiently health systems translate expenditure into health" (WHO 2000), and both follow a similar logic:

  1. Compute a measure of performance. The measure for "overall health-system performance" is based on another measure called "goal attainment." The measure for the less widely cited indicator "performance on health level" is based on disability-adjusted life expectancy (DALE).
  2. For each of those underlying measures (goal attainment or DALE), compute both the minimum value that would be expected "in the absence of a functioning modern health system, given the other non-health-system determinants that influence health, which are represented by education," and the maximum attainable value, given levels of health expenditures and education.1
  3. Compute each country's performance score as a ratio: the difference between the country's observed value on the underlying measure and the minimum value (numerator), divided by the difference between the maximum and minimum values (denominator), as sketched in the example below.
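
To make the arithmetic concrete, the following is a minimal sketch of that ratio in Python; the function name and the numbers are illustrative assumptions, not values from the report.

```python
def performance_score(observed: float, minimum: float, maximum: float) -> float:
    """Share of the feasible range that a country actually achieved.

    observed -- the country's value on the underlying measure
                (goal attainment or DALE), whether measured or imputed
    minimum  -- the predicted value in the absence of a modern health system
    maximum  -- the predicted ceiling given health spending and education
    """
    return (observed - minimum) / (maximum - minimum)

# Hypothetical illustration: a country observed at 75, with a predicted
# floor of 50 and ceiling of 90, achieves 62.5% of what the model deems
# attainable and so scores 0.625.
print(performance_score(75.0, 50.0, 90.0))  # 0.625
```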

The final results placed France atop the world, followed by Italy, San Marino, Andorra, Malta and Singapore.

What Went Into the Measurements?

The WHO based its measure of goal attainment on five component measures, weighted as follows (a worked sketch follows the list):

  1. Level of health (25%)
  2. Distribution of health (25%)
  3. Level of responsiveness (12.5%)
  4. Distribution of responsiveness (12.5%)
  5. Fairness of financial contribution (25%).
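
The weighting itself is a simple weighted sum. A minimal sketch, using the published weights but entirely hypothetical component scores (on a 0-100 scale), might look like this:

```python
# The five WHO components and their published weights (WHO 2000).
WEIGHTS = {
    "health_level": 0.25,
    "health_distribution": 0.25,
    "responsiveness_level": 0.125,
    "responsiveness_distribution": 0.125,
    "fairness_of_financing": 0.25,
}

def goal_attainment(scores: dict) -> float:
    """Weighted composite of the five component scores (each on 0-100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Hypothetical component scores, for illustration only.
example = {
    "health_level": 90.0,
    "health_distribution": 85.0,
    "responsiveness_level": 80.0,
    "responsiveness_distribution": 75.0,
    "fairness_of_financing": 95.0,
}
print(goal_attainment(example))  # 86.875
```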

The weights placed on each dimension were somewhat arbitrary; they were derived from a survey of more than 1,000 public-health practitioners - primarily WHO employees - from more than 100 countries. Although WHO attached "uncertainty intervals" to each of the five components, countries were then ranked into a "league table" from highest to lowest. In a number of cases, ties were reported (e.g., on the responsiveness dimension, the tables did not distinguish among the countries ranked from 3 to 38). Table 1 gives the measures, the highest-scoring country, and the ranks for Canada, the U.S., the U.K. and France (which attained the highest overall rank for health-system performance).


[Table 1: Measures, highest-scoring country, and ranks for Canada, the U.S., the U.K. and France]

How Were These Components Measured?

The level of health was, in theory, based on measures of disability-adjusted life expectancy (DALE). This measure is similar to the Quality-Adjusted Life Year approach used in health economics, in that it reflects the concept that years lived with illness should count as "less" than years in perfect health. To determine how much less, WHO surveyed international panels of health professionals. Results suggested, for example, that a year with a severe sore throat would count as 0.92 of a year, while a year with blindness would count as 0.38 (Nord 2002). The study design called for multiplying these adjusters by the rates of disability within each country to compute a revised measure of life expectancy. However, good data were rarely available on levels of disability within each country. The authors accordingly rescaled the data; when they were finished, the correlation between DALE and life expectancy at birth was 0.996 (McKee 2001), implying that their measure of DALE was really only a slightly adjusted measure of life expectancy.
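
The intended adjustment can be illustrated with a deliberately simplified sketch; this shows the concept only, not WHO's actual method. The two weights are the examples Nord (2002) reports; the prevalence figures are invented:

```python
def dale_sketch(life_expectancy: float,
                conditions: list) -> float:
    """Simplified disability adjustment of life expectancy.

    Each condition is a (prevalence, weight) pair: weight is the value of a
    year lived with the condition relative to a year in full health. Years
    lived with a condition count at that fraction; other years count fully.
    """
    adjusted = life_expectancy
    for prevalence, weight in conditions:
        # Subtract the discounted fraction of years lived with this condition.
        adjusted -= life_expectancy * prevalence * (1.0 - weight)
    return adjusted

# Weights from Nord (2002): severe sore throat = 0.92, blindness = 0.38.
# Prevalences (1% and 0.5%) are hypothetical, for illustration only.
print(dale_sketch(80.0, [(0.01, 0.92), (0.005, 0.38)]))  # 79.688
```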

The distribution of health deliberately avoided looking at identified vulnerable groups. Instead, it computed a summary measure based on the distribution of child mortality; Chile ranked first. In theory, this measure would be based on a combination of demographic and health survey data, and small-area vital registration data. In practice, it was measured for only 58 countries, meaning that results for 70% of countries (including Canada) were estimated using "indirect techniques and information on important covariates of health inequality, such as poverty, educational attainment and the level of child mortality" (WHO 2000, p. 147).

The responsiveness measures were based on a survey of 1,791 key informants in 35 countries, none from North America or western Europe.2 Data from five countries were omitted for unspecified reasons. All key informants were professionals; half were WHO staff. Although 42 questions were asked, only seven were used to create the index. These related to dignity, autonomy, confidentiality, prompt attention, quality of basic amenities, access to social support networks during care and choice of care provider. No justification was given for why only those seven items were used, although the report derived the weights assigned to each component of the index from a second, Internet-based survey of 1,006 participants, half from within WHO (WHO 2000). The elements were scored from 0 to 10 and then combined; all other countries received estimated scores using linear regressions. The variables used in these regressions, which yielded R-square values in the range of 0.2 to 0.6, evidently included % of the population over 65, average years of schooling, geographic access rate, health expenditure per capita, GDP per capita and % of private health expenditure (McKee 2001). Thus, the responsiveness scores for 84% of countries (including Canada) were based on extrapolated values rather than direct measurements. It is worth noting that almost none of the variables used in the regressions directly reflects the responsiveness of healthcare or how healthcare systems were organized or managed. Using these measures, the U.S. (whose score was also extrapolated rather than measured) scored highest.
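
The imputation step amounts to fitting a regression on the minority of countries that were actually surveyed and then predicting scores for everyone else from covariates. A minimal sketch of that logic follows; the covariate types echo McKee's list, but every number here is invented for illustration:

```python
import numpy as np

# Hypothetical covariates for four "surveyed" countries:
# [GDP per capita in $1,000s, average years of schooling]
X_surveyed = np.array([[5.0, 6.0], [12.0, 9.0], [3.0, 4.5], [20.0, 11.0]])
# Their directly "measured" responsiveness scores (0-10 scale).
y_surveyed = np.array([4.2, 6.8, 3.5, 7.9])

# Fit ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X_surveyed)), X_surveyed])
coefs, *_ = np.linalg.lstsq(A, y_surveyed, rcond=None)

# An unsurveyed country is "scored" by plugging its covariates into the model.
x_unsurveyed = np.array([1.0, 25.0, 12.0])  # intercept, GDP, schooling
print(x_unsurveyed @ coefs)  # the imputed responsiveness score
```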

Distribution of responsiveness was also based on responses from the same key informants in the remaining 30 countries, and derived from how frequently they mentioned particular groups (the poor, women, the elderly and certain ethnic groups) as being discriminated against in terms of responsiveness. Again, all other countries, including Canada and France, received estimated values from regressions based on such data as the % of the population below the poverty line and the % of the population living within one hour's travel of a health facility (McKee 2001). The United Arab Emirates ranked first. Notably, a comparison of these measures of responsiveness with the survey data about patient satisfaction collected by Blendon et al. for 17 countries revealed little association between the two (McKee 2001).

Fairness of financing was measured in a somewhat counterintuitive manner. Household financial contribution was defined as the "ratio of total household spending on health to its permanent income above subsistence" (WHO 2000, p. 148), which was estimated as income after expenditure for food. A fair financing system was defined as one in which different income groups spent the same percentage of their income on healthcare. Because progressive and regressive situations were treated symmetrically, systems that redistributed the burden of illness from the poor to the affluent were scored as less fair than those that imposed more uniform costs (Wagstaff 2002). Colombia received the highest score. Nonetheless, even these results were largely hypothetical, since "fairness" was measured in only 21 countries, meaning that 89% of measures (including those for Canada) were based on extrapolations from regression analysis.
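
A minimal sketch of the household measure and of the symmetry problem Wagstaff identifies appears below; the dispersion penalty is a simplification for illustration, not WHO's published index formula, and all the household figures are invented:

```python
def household_contribution(health_spending: float, income: float,
                           food_spending: float) -> float:
    """Household financial contribution: health spending as a share of
    'permanent income above subsistence', proxied (per WHO 2000) as
    income net of food expenditure."""
    return health_spending / (income - food_spending)

def fairness_sketch(contributions: list) -> float:
    """Illustrative fairness score: penalizes dispersion of household
    contributions around the mean. Deviations above and below the mean
    count identically, so a progressive system scores no better than an
    equally 'uneven' regressive one."""
    mean = sum(contributions) / len(contributions)
    dispersion = sum(abs(c - mean) for c in contributions) / len(contributions)
    return 1.0 - dispersion  # higher = more uniform = "fairer" on this logic

# Hypothetical households: (health spending, income, food spending).
households = [(500, 20_000, 5_000), (2_000, 60_000, 8_000), (100, 12_000, 4_000)]
ratios = [household_contribution(*h) for h in households]
print(fairness_sketch(ratios))  # ~0.99 for these fairly uniform shares
```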

Health expenditures, at least for the 29 OECD countries, were simply taken from the familiar OECD expenditure data (WHO 2000, p. 149); the U.S. had the highest rank in spending.


What Are the Problems with These Rankings?

Among the many issues that might be raised - including important but technical considerations about how to estimate particular functions or construct particular indicators (particularly given the rather low R-square values found in most of the regressions used to estimate the vast number of missing values) - we will concentrate on five points.

  1. This approach ignored all nonmedical determinants of health other than education. Critics have accordingly objected to the underlying political philosophy of the report; the arguments have been summarized by McKee (2001). GDP per capita was not explicitly considered, leaving a strong probability that the results reflected national wealth rather than healthcare systems per se (Pedersen 2002). The approach also heavily penalized countries with epidemic disease unrelated to a healthcare system. South Africa, for example, despite its well-developed infrastructure, ranked 182nd on performance on health level and 175th on system performance, in large part because of its epidemic of HIV/AIDS rather than because of any faults or merits in its healthcare system (Coyne and Hilsenrath 2002). Critics have noted that it was unfortunate that WHO, which has long placed such emphasis on the need to examine determinants of health that extend beyond the healthcare system, adopted such a narrow definition of health achievement in this report (McKee 2001).
  2. This approach did not look at how health systems were organized and managed. The measures did not incorporate measures of supply (e.g., number of health providers per capita), access, utilization or patient satisfaction. By looking primarily at life expectancy, minimal value was placed on care directed toward improving quality of life, which makes up a significant element of healthcare delivery, particularly in richer countries. Neither does this indicator highlight elements of a healthcare system that might be improved. Walt and Mills (2001) noted that the scores would not even pick up such obviously policy-relevant issues as the extent to which countries immunized against vaccine-preventable illness, except indirectly if such illnesses greatly affected life expectancy.
  3. Controlling for education strongly penalized countries with higher levels of education, and rewarded those with poorer levels. Because the precise data used by the WHO team to measure educational attainment were not available,3 Table 2 uses 1990 data (the most recent year available) for average schooling years in the total population (over age 25) from a widely employed dataset available from the World Bank (Barro and Lee 2003) to compare the WHO 2000 rankings for goal attainment and for health-system performance for selected countries. The table includes four of WHO's top six performers (San Marino and Andorra are not in the Barro-Lee database), plus the OECD countries showing the highest educational levels. The suspicion that better rankings on health-system performance reflected lower education rather than better healthcare appears supported; it is noteworthy that all four countries receiving the highest ranks for overall performance showed relatively low educational levels, which boosted their rank (by between 5 and 26 places) from what they would have obtained looking only at their performance on goal attainment. Conversely, those with higher educational levels had their rankings substantially depressed (by between 14 and 23 places), seemingly for that reason alone.
  4. Rankings vs. ratings. In horse races, rankings are all that matters - it does not matter whether the victor won by a nose or by several lengths. In measuring performance, however, one might argue that ratings are more important, particularly if one is interested in the absolute level of performance. Trivial differences between countries are arguably irrelevant, except for bragging rights. Ensuring improvement in such dimensions as quality and efficiency would seem to be more important. Table 3 presents the actual scores given to the highest- and lowest-ranked countries, plus the scores (and ranks) for Canada and for the top-scoring country, France. Extrapolated (imputed) scores are indicated. Scrutiny of the full WHO report makes it clear that, unsurprisingly, the major differences are between richer and poorer countries, with tiny differences among the top-ranked countries on most of these indicators.
  5. Finally, most data were not available for most countries; the WHO report relies heavily on extrapolation (Almeida et al. 2001; McKee 2001). The fact that the data needed to compute most of these indices were not available (Almeida et al. 2001) has been interpreted as comparing Fictional Denmark and Fictional USA (Williams 2001). Certainly, with only life expectancy and health expenditures directly measured, the Canadian scores reflect Fictional Canada.

[Table 2: Average schooling years (Barro-Lee, 1990) and WHO 2000 ranks for goal attainment and health-system performance, selected countries]

[Table 3: Scores (and ranks) on the WHO 2000 measures for the highest- and lowest-ranked countries, Canada and France]

What Does This Mean for Canada?

The WHO 2000 rankings do not look at access, utilization, quality, cost-effectiveness or most other dimensions of health systems. Although the rankings do attempt to include such potentially valuable dimensions as responsiveness and fair financing, the way they are measured in the report not only is of dubious validity, but does not in any event incorporate data about Real Canada. Differences in the scores for most developed countries are very small. Even so, Canada ranked seventh in "goal attainment"; the widely publicized value of 30th merely reflects the report's penalizing Canada's relatively high educational level. As Nord stressed, the report reflects a compression of "potentially useful primary measures in summary indices with unclear meaning, dubious validity, and little practical relevance" (Nord 2002). As such, these values are not particularly useful in evaluating the performance of any healthcare system, including that of Canada. It is past time that policy analysts retire this particular pseudo-statistic.

About the Author(s)

Raisa Deber, PhD
Department of Health Policy, Management and Evaluation,
University of Toronto

Address correspondence and reprint requests to: Raisa Deber, PhD McMurrich Building, 2nd Floor, 12 Queens Park Crescent West, Toronto, ON M5S 1A8, Canada
Phone: 416-978-8366, Fax: 416-978-7350, e-mail: raisa.deber@utoronto.ca

References

Almeida, C., P. Braveman, M.R. Gold, C.L. Szwarcwald, J.M. Ribeiro, A. Miglionico, J.S. Millar, S. Porto, N. do R. Costa, V.O. Rubio, M. Segall, B. Starfield, C. Travessos, A. Uga, J. Valente and F. Viacava. 2001. "Methodological Concerns and Recommendations on Policy Consequences of the World Health Report 2000." Lancet 357(9269): 1692-7.

Barro, R. and J.W. Lee. 2003. Barro-Lee Data Set: International Measures of Schooling Years and Schooling Quality. The World Bank Group, Economic Growth Research (Retrieved June 20, 2003. www.worldbank.org/research/growth/ddbarle2.htm).

Bear, R. 2000. "Can Medicare Be Saved? Reflections from Alberta." HealthcarePapers 1(3): 60-7.

Buske, L. 2001. "Does Canada Really Rank 30th in the World in Terms of Healthcare?" Canadian Medical Association Journal 164(1): 84.

Commission on the Future of Health Care in Canada (Roy J. Romanow, Commissioner). 2002. Building on Values: The Future of Health Care in Canada: Final Report. Ottawa: Queen's Printer.

Coyne, J. and P. Hilsenrath. 2002. "The World Health Report 2000: Can Healthcare Systems Be Compared Using a Single Measure of Performance?" American Journal of Public Health 92(1): 30, 32-3.

Evans, D.B., A. Tandon, C.J. Murray and J.A. Lauer. 2001. "Comparative Efficiency of National Health Systems: Cross-National Econometric Analysis." British Medical Journal 323(7308): 307-10.

First Ministers. 2003. 2003 First Ministers' Accord on Healthcare Renewal. February 5 (Retrieved April 2003. www.hc-sc.gc.ca/english/hca2003/accord.html).

Lewis, S., C. Donaldson, C. Mitton and G. Currie. 2001. "The Future of Health Care in Canada." British Medical Journal 323(7318): 926-9.

McKee, M. 2001. The World Health Report 2000: Advancing the Debate. Copenhagen: WHO Regional Office for Europe, European Regional Consultation on Health Systems Performance Assessment, September 3-4.

McMahon, F. and M. Zelder. 2002. "Making Health Spending Work (Executive Summary)." HealthcarePapers 2(4): 43-7.

Nord, E. 2002. "Measures of Goal Attainment and Performance in the World Health Report, 2000: A Brief, Critical Consumer Guide." Health Policy 59(3): 183-91.

Pedersen, K.M. 2002. "The World Health Report 2000: Dialogue of the Deaf?" Health Economics 11(2): 93-101.

Saint-Pierre, G. 2002. "We Are the State." Montreal: Board of Trade of Metropolitan Montreal, February 26. RBC Financial Group, Executive Speeches. www.rbc.com/newsroom/200226saint_pierre.html.

Scully, H.E. 2000. Physicians Committed to Partnerships to Revitalize Health Care. Ottawa: Canadian Medical Association. Letter.

Standing Senate Committee on Social Affairs, Science and Technology. 2002. The Health of Canadians: The Federal Role: Volume Six: Recommendations for Reform. Final report on the state of the healthcare system in Canada. October.

Wagstaff, A. 2002. "Reflections on and Alternatives to the World Health Organization's Fairness of Financial Contribution Index." Health Economics 11(2): 103-15.

Walt, G. and A. Mills. 2001. "World Health Report 2000: Responses to Murray and Frenk." Lancet 357(9269): 1702-3.

Williams, A. 2001. "Science or Marketing at WHO? A Commentary on 'World Health 2000'." Health Economics 10(4): 93-100.

World Health Organization. 2000. The World Health Report 2000 Health Systems: Improving Performance. Geneva, Switzerland: World Health Organization.

Footnotes

1. The regression equations used to predict the minimum value for health achievement were based on 1908 data for education and life expectancy for a set of 25 countries, on the contentious theory that 1908 predated the development of a modern healthcare system. The equations used to predict the maximum value used "a frontier production model relating overall health-system achievement to health expenditure and other non-health-system determinants represented by educational attainment." Both computations have been widely criticized.

2. The 35 countries surveyed were: Bangladesh, Bolivia, Botswana, Brazil, Bulgaria, Burkina Faso, Chile, China, Cyprus, Ecuador, Egypt, Georgia, Ghana, Guatemala, Hungary, India, Indonesia, Malaysia, Mexico, Mongolia, Nepal, Peru, Philippines, Poland, Republic of Korea, Senegal, Slovakia, South Africa, Sri Lanka, Thailand, Trinidad, Uganda, United Arab Emirates, Vietnam and Zimbabwe.

3. The report cites Working Paper 7, which alone appears to have been deleted from the list of available papers in that series. Indeed, a search of the WHO database for educational attainment or literacy does not yield any data.
