Healthcare Policy

Healthcare Policy 3(4) May 2008: 53-54. doi:10.12927/hcpol.2008.19920
Discussion and Debate

The Authors Respond

Sarah Bowen and Sara A. Kreindler


Adalsteinn Brown and Jeremy Veillard have done an excellent job of outlining the gains that may be achieved from performance measurement, and the context for increased focus on this area. We do not suggest that indicators should not be used, or cannot be useful. We believe, however, that it is important to differentiate between the efficacy of indicators (their potential in ideal situations) and their effectiveness (what we see happening in actuality). The observation that the Veterans Administration and the Institute for Healthcare Improvement are using indicators in appropriate and helpful ways does not imply that every local health authority or hospital is doing the same. There remains a need for caution - not about the fact that indicators are used, but about the way they are used.
If the potential benefits of indicators are to be realized, we must address certain issues.

First, we must provide respectful space, such as that afforded in this journal, for productive discussion and debate, ensuring that all perspectives are heard. Second, we must put as much effort into developing capacity for understanding and interpreting indicators as we put into generating them and promoting their use. (Some of our "caution" is rooted in our observation that those with the least awareness of how indicators are constructed often have the greatest faith in them.) We further agree that measurement must be linked to strategy; as we suggested, it is foolish to seek answers before one knows the questions.

Finally (and here we may diverge from Brown and Veillard's position), it is essential to dispel the misconception that performance measurement is the only (or even the best) way to bring evidence into decision-making. Performance measurement is essentially an accountability mechanism, not a means of gathering all the information needed to support complex decisions. It can track short-term outcomes, but cannot determine why those outcomes occurred - was it the intervention, the way it was implemented, some other event, or random chance (Blalock 1999)? Even though certain analytic techniques, such as statistical process control, can pinpoint when a change occurred, discovering why is often less straightforward, especially when results are unexpected (see Bailie et al. 2006). In contrast, the broader enterprise of evaluation employs additional methods, a controlled research design, or both, to describe and also explain observations (Blalock 1999).

The healthcare system desperately needs to invest in more evaluation in order to answer fundamental questions: Why are we seeing these results? Which interventions will result in improvement? How can we best implement evidence-informed changes? Indicators can be valuable in prompting questions; they cannot be relied on to provide answers. Unfortunately, the concepts of performance measurement and evaluation are often conflated: decision-makers may believe that, if indicators are being monitored, no further evaluation activities are needed. Instead, indicator use should be but one component of a meaningful, multi-method evaluation strategy. Other appropriate sources of evidence must be integrated with indicator data - otherwise, the decisions we make will be dismal indeed.


References

Bailie, R., G. Robinson, S.N. Kondalsamy-Chennakesavan, S. Halpin and Z. Wang. 2006. "Investigating the Sustainability of Outcomes in a Chronic Disease Treatment Programme." Social Science and Medicine 63: 1661-70.

Blalock, A.B. 1999. "Evaluation Research and the Performance Management Movement: From Estrangement to Useful Integration." Evaluation 5(2): 117-49.

