HealthcarePapers 6(2) November 2005: 35-39. doi:10.12927/hcpap..17751
Commentary

Why Performance Indicators Have a Place in Health Policy

Terri Jackson

Abstract

Despite the mixed results of Brown and colleagues' review of the evidence for the use of performance indicators in health policy, this paper argues that they have an important place. Healthcare organizations cannot rely on altruism alone to motivate improved performance. Berwick, supporting the use of performance indicators in healthcare, argues "threats to survival are necessary to build will for improvement." He argues that the job of managers is to create organizations where such threats are clearly perceived, but balanced against a culture of "safety" in which individuals can learn and improve the care provided. This step in the causal chain gets insufficient attention from the "KAB+" evaluation model that Brown and colleagues employ. The use of performance indicators in publicly funded healthcare systems goes beyond arming the "consumer" of healthcare with relevant information; it is a fundamental question of democratic accountability. Methods for evaluating the evidence base of public policy must take into account the contextual (economic, social and political) factors that support or impede the achievement of policy objectives.

Brown et al. provide an interesting review of what is known about the effectiveness of performance indicators (PIs) in changing the Knowledge, Attitudes and Behaviour (KAB) of consumers and providers of healthcare.

It makes for dismal reading: a growing body of published PIs that have unclear goals, are sometimes at odds with each other, are badly evaluated and overall demonstrate mixed effectiveness in changing the three parameters on which the authors have focused. Why on earth would anyone bother?

To clarify for myself why I persist in believing in and developing such indicators, I went back through my files to find the sources of evidence or argument that had formed my views. Indeed, when I reread two particular references, it seemed almost enough simply to cite them: they left me very little new to say. Had I not felt a certain obligation to the editors, I might have submitted only the two citations, with a couple of cautionary paragraphs on applying methods from evidence-based medicine to policy interventions.

But I know my own lazy habits when it comes to following up interesting references, so I'll briefly summarize the two sources that again persuaded me performance indicators have a place in healthcare policy. I urge readers to seek out the originals for their clarity and good sense.

The first is a short essay by Berwick (2002) called "Public Performance Reports and the Will for Change." In it he makes two important points, the first of which is that "human systems resist change" and it is only "discomfort with the status quo" that can motivate system improvement. This is not to "presuppose some type of agreement across [healthcare] providers to work against patient interests" (as Brown and colleagues suggest), but rather reflects a profound understanding that providers are human and that change is difficult and uncomfortable. Most of us would continue working in our comfort zone if we could. And healthcare organizations are particularly insulated from external threats by strong internal group norms of professional autonomy and the high public regard in which most are held. Berwick argues that, as in the commercial sector, threats to survival (or "burning platforms") are necessary to shift healthcare organizations out of this comfort zone in order to improve performance.

His second point is that the role of leadership in the healthcare sector (managers, funders, policymakers) is to find ways of balancing such threats with the need for "safety" within organizations, safety that enables individuals to contribute to organizational learning and improvement. This is a tough challenge to policy and management, and a role I think is overlooked in Brown and colleagues' review. The KAB framework adopted to judge effectiveness of PIs treats the processes of achieving change as a black box, or perhaps as a simple experiment in a behavioural psychology class: changed knowledge plus changed attitudes leads (magically? despite other sources of motivation? with no further enabling intervention?) to changes in behaviour and thence, to better performance.

But there is obviously skill and technique in achieving that balance between threat and safety which allows for performance improvement, and Shojania and Grimshaw (2005) have recently reviewed the growing literature on this intermediate step in the causal pathway. At best, publicly reported PIs can be only a motivator and diagnostic tool in this enterprise. We would not say a thermometer is ineffective because it does not fix the broken furnace.

The section of the paper on the relatively poor effectiveness of performance indicators in changing individual health consumers' behaviour led me to the second source of my thinking about performance indicators: Rice's The Economics of Health Reconsidered (1998). Chapter 3 of this book deals with what economists term the "demand side" of the healthcare market: consumer behaviour.

Here Rice lays out all the reasons for skepticism about the use of markets to allocate healthcare, and about patients behaving as consumers when "consuming" health services. Most necessities of life are provided through markets - food, clothing, shelter, etc. There are properties of markets that make them more or less efficient in distributing these goods (a much longer discussion …), but one thing we have pretty much learned is that they don't work very well in healthcare.

Patients as consumers have little ability to judge what care and how much of it they need in order to return to health. Thus, they have little choice but to rely on the advice of their doctors, who in this "market" also happen to be the predominant sellers of healthcare. If we don't get the outcome we expected, is it because our condition is more severe or because our doctors did a poor job? We don't know. Perhaps with the exception of treatment for chronic conditions, we never acquire enough experience of a particular type of care to develop our own set of expectations and standards for the "product." And often we're making choices when we're not at our best: few consumers will say "Fine, then, I'll take my inflamed appendix (or dodgy heart, or kidney stones) and go elsewhere!" when the choices on offer don't meet our expectations.

Moreover, we are often not the direct "purchasers" of medical services: insurers, both public and private, generally place limits on which practitioners we can see, over what time period or for what conditions, etc., and with what out-of-pocket costs.

As my experience with PIs is in a healthcare system that is largely publicly funded, it took a bit of time to recall the situation in the United States, where report cards are intended to allow consumers to base their health plan purchasing decisions on more than the price of the premiums. PIs are one approach to redressing "information asymmetry" in this market, but as Rice's review of the literature about how consumers actually do use this information shows, PIs have little effect on real purchasing decisions.

Understanding the use of PIs in the United States made clear why Brown and colleagues call for a single set of PIs across jurisdictions and insurers: if one is comparison shopping, it makes sense to have a common set of criteria. But for publicly funded healthcare systems such as Australia's and Canada's, I wonder whether this is sensible. Some systems have problems with access (or at least timely access), while others need to emphasize technical efficiency or patient safety. More complex teaching hospitals may need to focus on infection rates for advanced cardiac surgery, while smaller or longer-term-care institutions may need to focus on pressure ulcers (which the larger hospitals may well need to do something about as well!). Given the authors' cautions against pursuing too many, and potentially contradictory, performance improvement goals at once, it would be wise to focus on a limited number, but perhaps not a common set.

And understanding the weaknesses of markets for distributing healthcare also gives us insight into why PIs are important in a publicly funded system. In the end, public officials who determine funding and priorities for our healthcare systems must be held to account for the quality of the care provided, and they in turn must have tools by which to hold management accountable for achieving those priorities. Berwick is careful to say that "safety" is essential within organizations, but cannot be expected for the organization as a whole, which must expect to "tell those they serve how well they are doing their jobs." Brown and colleagues seem to view this accountability function as an incidental reason for publishing PIs, but in my view it is a fundamental democratic safeguard. When commercial firms produce bad products, they lose market share and are eventually driven to insolvency. But when there is no market, we need other mechanisms (beyond the altruism of all-too-human providers) to create the "burning platform."

Finally, a note on the systematic review methods used by Brown and colleagues. It is worth remembering that systematic reviews of the medical literature were originally championed on the basis that they could reduce uncertainty about the direction and magnitude of a physiological effect of treatment. Researchers quite properly assumed that response to treatment would be unaffected by ethnicity, culture, geographic location or political affiliation. Evaluating and summarizing the evidence was a way of increasing sample sizes and reducing bias.

By analogy, these methods have come to be adopted to evaluate evidence on a range of healthcare questions far beyond the biological individual. Research subjects range along a spectrum from the individual to families to organizations to whole populations and polities. With each step up the scale of size and complexity of "the subject," social and political variables (confounders) become increasingly important to account for and understand. This is not to say that the literature about, for example, management innovations or national health policies cannot be rigorously evaluated, but only that the way we do it should be adjusted to document and take account of these other factors.

There is a growing literature on how this can and should be done, including a recent report from the Canadian Health Services Research Foundation (Lomas et al. 2005), Rychetnik and colleagues' work in Australia on evidence-based public health interventions (see, for example, Rychetnik et al. 2002) and Lin and Gibson's book of case studies on the use of evidence in policy (2002).

Brown and colleagues are aware of the limitations of their method, and comment on many of the contextual factors that may have affected outcomes in specific studies. Regardless of how they reached their conclusions, their recommendations are sound: performance data (as, indeed, policy reviews) need to be sensitive to context; incentives for improved performance should have bite (and be linked to funding) but "never be the only criteria for funding"; clear system goals should be reflected in the PIs employed; and provider profiling is likely to do more harm than good (because, as Berwick notes, the data are "too squirrelly"). The last word also belongs to Berwick: those of us who design PIs "should accept the obligation to measure and continually reduce the costs and burden of such reporting on care providers even while increasing its accuracy and value."

About the Author(s)

Terri Jackson, PhD
School of Public Health, La Trobe University, Melbourne, Australia

References

Berwick, Donald M. 2002. "Public Performance Reports and the Will for Change." Journal of the American Medical Association 288: 1523-24.

Lin, Vivian, and Brendan Gibson. (Eds.) 2002. Evidence-Based Health Policy: Problems and Possibilities. Oxford: Oxford University Press.

Lomas, Jonathan, Tony Culyer, Chris McCutcheon, Laura McAuley and Susan Law. 2005. Conceptualizing and Combining Evidence for Health System Guidance: Final Report. Ottawa: Canadian Health Services Research Foundation.

Rice, Thomas. 1998. The Economics of Health Reconsidered. Chicago: Health Administration Press.

Rychetnik, Lucie, Michael Frommer, Penelope Hawe and Alan Shiell. 2002. "Criteria for Evaluating Evidence on Public Health Interventions." Journal of Epidemiology and Community Health 56(2): 119-27.

Shojania, Kaveh G., and Jeremy M. Grimshaw. 2005. "Evidence-Based Quality Improvement: The State of the Science." Health Affairs 24: 138-50.
