Indicator Madness: A Cautionary Reflection on the Use of Indicators in Healthcare
Abstract
Indicators are increasingly being used to monitor and evaluate health system performance. However, although indicators can provide valuable information, they also have limitations. The benefits of indicators are vitiated when they are seriously flawed (unreliable, invalid or easily "gamed"), selected before the right question has been posed or used to the exclusion of other sources of information. This critical assessment of the use and misuse of indicators employs practical examples from a Canadian health authority to illustrate common pitfalls. It concludes with some solutions to optimize the benefits of indicator use.
The past two decades have seen a growing interest in the use of healthcare indicators to monitor and evaluate health system performance (Lilley 2000; Wait and Nolte 2005). This trend is not unique to healthcare but parallels a resurgence of interest in social indicators and performance measurement in general (Morris 1998) as governments respond to pressure to cut costs, make evidence-based decisions and be more accountable to the public (Baker et al. 1998). An indicator is a summary statistic used to give an indication of a construct that cannot be measured directly. For example, we cannot directly measure the quality of care, but we can measure particular processes (e.g., adherence to best practice guidelines) or outcomes (e.g., number of falls) thought to be related to quality of care. Health Canada has affirmed the value of national indicator reports in promoting informed decision-making ("allow[ing] governments … to compare data, track changes, see progress and identify areas for improvement") and enhancing public accountability (Health Canada 2006).
Obviously indicators can provide valuable information. However, in our enthusiasm for quantifiable results, it is easy to overlook the limitations both of particular indicators and of indicators in general. As the Canadian Institute for Health Information (CIHI) begins to release data on hospital quality and safety, it is perhaps appropriate to stand back and consider where the emphasis on indicators is taking us. Our observations are based on our experience working with decision-makers within a large urban health authority's Research and Evaluation Unit.
Getting the Right Answers
Not all indicators are created equal
Data derived from an indicator are only as good as the indicator that produced them. As the Alberta Heritage Foundation for Medical Research (1998: 5) noted:
Indicators should actually measure what they are intended to (validity); they should provide the same answer if measured by different people in similar circumstances (reliability); they should be able to measure change (sensitivity); and, they should reflect changes only in the situation concerned (specificity). In reality, these criteria are difficult to achieve, and indicators, at best, are indirect or partial measures of a complex situation.
Mainz (2003) has delineated a rigorous process for developing evidence-based indicators. Unfortunately, such guidelines are not always followed in practice. Often an indicator may be used simply "because it is there," without consideration of its validity or robustness. In one provincial Community Health Assessment (CHA) planning process, participants identified over 200 indicators through a brainstorming activity, all of which were used - without the further step of applying the above criteria - in the next CHA.
An indicator's limitations may not be obvious
Even well-established indicators are sometimes revealed to have serious flaws. For example, risk-adjusted mortality rates (such as the Hospital Standardized Mortality Ratio, or HSMR) are widely used as an index of hospital safety. A systematic review of 18 relevant studies confirmed that, on average, hospitals with exceptionally high risk-adjusted mortality rates do provide poorer care than hospitals with exceptionally low rates (Thomas and Hofer 1998). However, it concluded that such rates are too unreliable to support conclusions about the quality of a particular hospital or the relative quality of two hospitals, as the calculations are heavily subject to both systematic and random error.
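To make this reliability concern concrete, the following sketch (a simplified illustration with invented numbers, not the CIHI methodology) simulates a hospital whose care is exactly "as expected" and shows how much an observed-to-expected mortality ratio can swing through chance alone; systematic error from an imperfect risk-adjustment model would only add to this.

```python
# A minimal sketch (not the CIHI methodology): even for a hospital whose true
# performance is exactly "as expected," an observed/expected mortality ratio
# fluctuates considerably from year to year through chance alone.
import random

random.seed(1)

def simulated_ratio(n_admissions, true_risk):
    """One simulated year: each patient dies with probability true_risk;
    expected deaths come from a (here, perfectly calibrated) risk model."""
    observed = sum(random.random() < true_risk for _ in range(n_admissions))
    expected = n_admissions * true_risk
    return 100 * observed / expected  # 100 means "exactly as expected"

# Ten simulated years for a mid-sized hospital (invented numbers).
print([round(simulated_ratio(2000, 0.05)) for _ in range(10)])
# Values typically range from roughly 85 to 115 -- spread that could easily be
# misread as real differences in quality, before any systematic error from an
# imperfect risk-adjustment model is even considered.
```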
Moreover, different indicators of quality may demonstrate no relationship with one another. Griffith et al. (2002) compared American hospitals on (a) the quality of various care processes, as assessed by the Joint Commission on Accreditation of Healthcare Organizations, and (b) several aggregate measures of care outcomes (e.g., adjusted mortality rate, complications). No significant correlations emerged among the different process measures, nor between the process and outcome measures. These results suggest that at least some of the most common measures of hospital quality are of dubious validity.
Indicators are often gameable
Another cause for concern is that many indicators are "gameable" - that is, staff can misrepresent the data. In a 2007 British Medical Association survey of emergency department staff, 31% of respondents reported that their department was manipulating data in order to meet wait time targets. Creative strategies included removing the wheels from trolleys in the ED to make them count as beds, and admitting inpatients via the ED to boost the proportion of patients seen in under four hours (Walley et al. 2006). Indicators that are perceived as unfair or inappropriate may not only encourage "gaming," but also decrease confidence in indicators in general.
Even when there is no intent to "game," changes in the way data are coded can produce illusory changes in the underlying construct. For example, Winnipeg's Health Sciences Centre achieved a 40% reduction in its HSMR by rigorously applying national guidelines for coding palliative care patients. However, although the numbers improved, the actual mortality rate did not. This incident underscores the need for caution in interpreting indicators.
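The arithmetic behind such illusory improvements can be sketched with invented figures (assuming, as in some HSMR methodologies, that admissions coded as palliative care are excluded from the calculation): recoding moves deaths out of the eligible cohort, so the reported ratio falls even though no patient outcome has changed.

```python
# Invented figures illustrating how a coding change alone can lower an
# HSMR-style ratio. Assumption: admissions coded as palliative care are
# excluded from the calculation (as in some HSMR methodologies).

def hsmr(observed_deaths, expected_deaths):
    return 100 * observed_deaths / expected_deaths

# Before the coding review: 300 in-scope deaths against 250 expected.
before = hsmr(300, 250)            # 120

# After the review, 100 of those deaths are recoded as palliative and drop out
# of the calculation, taking only 30 expected deaths with them (the risk model
# had predicted fewer deaths than actually occurred among these very ill patients).
after = hsmr(300 - 100, 250 - 30)  # about 91

print(round(before), round(after))
# The reported ratio falls by roughly a quarter, yet every death and every
# patient outcome in the hospital is exactly the same as before.
```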
A poor indicator may be worse than no indicator
Although researchers and decision-makers would be ill-advised to abandon indicators simply because they cannot be perfect, we must be mindful that incorrect information can be worse than no information at all. A poor indicator can identify a problem that is not there, or fail to identify a problem that is there, providing false reassurance. For example, breastfeeding initiation is often used as an indicator of child health because it is more easily measured than breastfeeding duration. However, lack of clear coding guidelines, combined with pressure on facilities to increase breastfeeding rates, appears to have produced a working definition of initiation as "the mother opened her gown and tried." Nurses have expressed concern that the resulting high initiation rates reported in many regions may serve as a barrier to needed action.
Asking the Right Questions
Evidence informed or data driven?
By focusing exclusively on indicators, decision-makers run the risk of being data driven rather than evidence informed (Bowen et al. 2007). It is very easy to respond to issues for which indicators are readily available, while ignoring potentially more important issues for which data are not available. This pitfall can privilege certain issues in the planning process. The tendency to focus on areas where data are most accessible calls to mind the Sufi fable of the man who lost a key in his house but searched for it under a nearby lamp post because there was more light there.
The tail wagging the dog
In some cases, decision-makers may consult indicators before they have a clear idea of what "key" they are looking for. Developing activities around "what existing data can tell us," while a reasonable course for researchers, can be a dangerous road for decision-makers, who may lose sight of the real questions facing the healthcare system. Like the scientists in Douglas Adams's novel The Hitchhiker's Guide to the Galaxy, whose supercomputer Deep Thought calculated the answer to life, the universe and everything as "42," they may need to recognize that knowing the answer is useful only when one knows the question.
In our observation, the phrase "we need a program evaluation" is often immediately followed by "we have these indicators," without consideration of exactly which question the indicators will answer. Such instances are not unique to healthcare. Evaluation expert Michael Patton (1997) has identified a widespread tendency for program staff to establish indicators before they know which underlying construct they wish to measure. Similarly, a report from Australia's Bureau of Rural Sciences criticized "most efforts to date that attempt to develop indicators first, often leading to an unstructured shopping list … . The indicator-driven approach 'puts the cart before the horse' and often fails" (Chesson 2002: 2).
Working in the Right Context
Using indicators may not be cost effective
Collecting and analyzing indicator data is not a neutral research exercise; on the contrary, it has significant organizational implications. Although the use of secondary data is commonly assumed to be a cost-effective quality monitoring strategy, this is not always the case. Responding to a poorly understood or inappropriate indicator can have major resource implications: it can cause neglect of areas that "look OK" (even when practitioners know there is a problem) and channel substantial resources to areas where indicators suggest there is a problem. Even the cost of investigating a misleading indicator can be enormous. Considerable regional resources were devoted to investigating and responding to a recent report on patient safety indicators. While a few safety issues were identified, many other "indicators" were shown (through audit and chart review of trigger cases) to reflect not safety problems but the effects of regionalization and some overzealous coding. Decision-makers may incur a significant opportunity cost when they devote scarce resources to the number-crunching of unhelpful indicators rather than to interventions that would directly improve patient safety.
Indicators may be misunderstood
The meaning and calculation of indicators are often not transparent to users. As Lemieux-Charles et al. (2003: 768) have noted, Canadian healthcare organizations "have tended to invest in information systems rather than in developing the analytic capability of their personnel." Thus, the people who need to apply the results may be unable to fully understand them, let alone critique them. Even decision-makers who have a gut sense that the data are "not right" may lack the epidemiological or statistical skills necessary to advance a critique.
Numbers are seductive
"Faith in numbers," bolstered by the bias towards quantitative methods in healthcare, may blind users to methodological flaws or poor-quality data. In one working group reviewing drafts of a report using indicators, participants (who were well informed on the issue under review) were initially highly sceptical of the numbers, pointing out serious issues of data quality and availability. Even so, as they began to review the document, they were drawn into making comparisons based on the same data they had appropriately identified as limited.
Promoting, or closing down, critical debate?
Indicators are often presented as the "gold standard," and providers who try to supplement the picture with contextual information are accused of being "in denial." We have had occasion to hear versions of Berwick's (2004) classic description of the stages of data-related denial misused to silence listeners' legitimate concerns and to close down further exploration of what the numbers actually meant.
It is of course true that providers sometimes react defensively to data that are in fact correct. However, the message that any information healthcare professionals can offer is of little relevance may foster an adversarial relationship between data suppliers and practitioners. Because the challenges facing the healthcare system are complex and require the participation of all stakeholders, every effort must be made to ensure that the insights and experiences of practitioners are incorporated when data are interpreted.
Conclusion
Indicators are not going away - but they are not neutral, and they can contribute to poor planning decisions as easily as to good ones. Researchers and decision-makers therefore have a responsibility to use them thoughtfully.
What are the solutions?
- First, determine what you want to know.
- In selecting indicators, evaluate them for validity, robustness and transferability before proposing them. Don't use an indicator just "because it's there."
- Understand what the indicator is really telling you - and what it isn't.
- Limit the number of indicators, focusing resources on the strongest ones.
- Choose indicators that cannot be easily "gamed."
- Make indicator selection, development and interpretation a collaborative exercise: include and value the important contextual information and expertise that providers can bring.
- Treat indicators as one useful source of data, not a gold standard against which other evidence is measured. Integrate interpretation of indicators with program evaluation and qualitative research activities.
- Investigate areas where there are discrepancies between data sources; this is where the greatest learning will occur.
- Most of all, remember that an indicator is just an indicator (Patton 1997: 159). It is meant to be a "tool, screen, or flag" (CCHSA 1996) to assist in decision-making, not a driver of decisions.
By following these suggestions, researchers and decision-makers may truly realize the benefits of collecting and analyzing indicator data.
About the Author(s)
Sarah Bowen, PhD
Director, Research and Evaluation Unit
Winnipeg Regional Health Authority
Winnipeg, MB
Sara A. Kreindler, DPhil
Researcher, Research and Evaluation Unit
Winnipeg Regional Health Authority
Winnipeg, MB
Correspondence may be directed to: Sarah Bowen, PhD, Director, Research and Evaluation Unit, Winnipeg Regional Health Authority, 1800-155 Carlton St., Winnipeg, MB R3C 4Y1; e-mail: sbowen@wrha.mb.ca.
References
Alberta Heritage Foundation for Medical Research. 1998. SEARCH. A Snapshot of the Level of Indicator Development in Alberta Health Authorities. Toward a Common Set of Health Indicators for Alberta (Phase One). Edmonton: Author.
Baker, G.R., N. Brooks, G. Anderson, A. Brown, I. McKillop, M. Murray and G. Pink. 1998. "Healthcare Performance Measurement in Canada: Who's Doing What?" Healthcare Quarterly 2(2): 22-26.
Berwick, D. 2004 (January 6). "Redesigning Care and Improving Health in Priority Areas." Presentation at the Crossing the Quality Chasm Summit, Washington, DC. Transcript retrieved June 28, 2007 from the Kaiser Family Foundation, < http://www.kaisernetwork.org >.
Bowen, S., T. Erickson, P. Martens and The Need to Know Team. 2007 (submitted for publication). "More Than 'Using Research': The Real Challenges in Promoting Evidence-Informed Decision-Making."
British Medical Association, Health Policy and Economic Research Unit. 2007 (January). Emergency Medicine: Report of National Survey of Emergency Medicine. London: Author. Retrieved May 19, 2008. < http://www.bma.org.uk/ap.nsf/Content/Emergencymedsurvey07 >.
Canadian Council on Health Services Accreditation (CCHSA). 1996. A Guide to the Development and Use of Performance Indicators. Ottawa: Author.
Chesson, J.C. 2002. "Sustainability Indicators: Measuring Our Progress." Science for Decision Makers 2: 1-7. Retrieved May 19, 2008. < http://www.acera.unimelb.edu.au/materials/brochures/SDM-SustainabilityIndicators.pdf >.
Griffith, J.R., S.R. Knutzen and J.A. Alexander. 2002. "Structural versus Outcomes Measures in Hospitals: A Comparison of Joint Commission and Medicare Outcomes Scores in Hospitals." Quality Management in Health Care 10(2): 29-38.
Health Canada. 2006 (December 19). Health Indicators. Retrieved May 19, 2008. < http://www.hc-sc.gc.ca/hcs-sss/indicat/index_e.html >.
Lemieux-Charles, L., W. McGuire, F. Champagne, J. Barnsley, D. Cole and C. Sicotte. 2003. "The Use of Multilevel Performance Indicators in Managing Performance in Health Care Organizations." Management Decision 41(8): 760-70.
Lilley, S. 2000 (March). "An Annotated Bibliography on Indicators for the Determinants of Health." Produced for the Health Promotion and Programs Branch, Atlantic Regional Office, Public Health Agency of Canada. Retrieved May 19, 2008. < http://www.phac-aspc.gc.ca/canada/regions/atlantic/pdf/annotated_bibliography_e.pdf >.
Mainz, J. 2003. "Developing Evidence-Based Clinical Indicators: A State-of-the-Art Methods Primer." International Journal for Quality in Health Care 15(Suppl. 1): i5-i11.
Morris, M. 1998. Harnessing the Numbers: Potential Use of Gender Equality Indicators for the Performance, Measurement and Promotion of Gender-Based Analysis of Public Policy. Background paper. Ottawa: Status of Women Canada. Retrieved May 19, 2008. < http://www.swc-cfc.gc.ca/pubs/pubspr/0662274180/199901_0662274180_2_e.pdf >.
Patton, M.Q. 1997. Utilization-Focused Evaluation (3rd ed.). Thousand Oaks, CA: Sage Publications.
Thomas, J.W. and T.P. Hofer. 1998. "Research Evidence on the Validity of Risk-Adjusted Mortality Rate as a Measure of Hospital Quality of Care." Medical Care Research and Review 55(2): 371-404.
Wait, S. and E. Nolte. 2005. "Benchmarking Health Systems: Trends, Conceptual Issues and Future Perspectives." Benchmarking: An International Journal 12: 436-48.
Walley, P., K. Silvester and R. Steyn. 2006. "Knowledge and Behaviour for a Sustainable Improvement Culture." Healthcare Papers 7(1): 26-33; discussion 74-77.