The Authors Respond
First, we must provide respectful space, such as that afforded in this journal, for productive discussion and debate, ensuring that all perspectives are heard. Also, we must put as much effort into developing capacity for understanding and interpreting indicators as we put into generating them and promoting their use. (Some of our "caution" is rooted in our observation that those with the least awareness of how indicators are constructed often have the greatest faith in them.) We further agree that measurement must be linked to strategy; as we suggested, it is foolish to seek answers before one knows the questions.
Finally (and here we may diverge from Brown and Veillard's position), it is essential to dispel the misconception that performance measurement is the only (or even the best) way to bring evidence into decision-making. Performance measurement is essentially an accountability mechanism, not a means of gathering all the information needed to support complex decisions. It can track short-term outcomes, but it cannot determine why those outcomes occurred: was it the intervention, the way it was implemented, some other event, or random chance (Blalock 1999)? Even though certain analytic techniques, such as statistical process control, can pinpoint when a change occurred, discovering why is often less straightforward, especially when results are unexpected (see Bailie et al. 2006). In contrast, the broader enterprise of evaluation employs additional methods, a controlled research design, or both, to both describe and explain observations (Blalock 1999).
The healthcare system desperately needs to invest in more evaluation in order to answer fundamental questions: Why are we seeing these results? Which interventions will result in improvement? How can we best implement evidence-informed changes? Indicators can be valuable in prompting questions; they cannot be relied on to provide answers. Unfortunately, the concepts of performance measurement and evaluation are often conflated: decision-makers may believe that, if indicators are being monitored, no further evaluation activities are needed. Instead, indicator use should be but one component of a meaningful, multi-method evaluation strategy. Other appropriate sources of evidence must be integrated with indicator data; otherwise, the decisions we make will be dismal indeed.
Bailie, R., G. Robinson, S.N. Kondalsamy-Chennakesavan, S. Halpin and Z. Wang. 2006. "Investigating the Sustainability of Outcomes in a Chronic Disease Treatment Programme." Social Science and Medicine 63: 1661-70.
Blalock, A.B. 1999. "Evaluation Research and the Performance Management Movement: From Estrangement to Useful Integration." Evaluation 5(2): 117-49.