HealthcarePapers 6(2) November 2005: 4-6. doi:10.12927/hcpap..17746
Editorial

Notes from the Editor-in-Chief

Peggy Leatt

Abstract

Over the past 10 years, there has been a proliferation of healthcare performance reports both nationally and internationally. This trend seems to be driven by the need for accountability to governments as well as the general public. The need for greater evidence-based care and management of health systems has pushed us to produce and examine data in ways that were not expected in the past. The question still remains open, however, on the extent to which such reporting influences the delivery of healthcare and/or the health of populations.
The lead paper, "Making Performance Reports Work," by Brown, Bhimani and MacLeod of the Ontario Ministry of Health and Long-Term Care, begins to address some of these questions. In the paper, the authors present the results of a systematic review of the societal impact of public reporting in healthcare. They use the concept of societal impact of public performance reporting to include how consumers and providers understand their healthcare system and how performance reporting affects the outcomes of services. To assist in their analysis they use the KAB (knowledge, attitudes, behaviour) model from health promotion, with the addition of performance as a dependent variable or outcome measure. Over 60 articles were included in the review, and the papers were classified in four ways: impact of performance reporting on individual consumers; impact on groups of consumers; impact on individual providers; and impact on groups of providers.

The conclusions from the review suggest a positive impact of public performance reporting on provider groups, but little or no impact on consumers and individual providers. The authors outline the limitations of their work, including that most of the literature is from the US and that the data reported in performance reports (often secondary data) may be flawed. They point out that performance reporting has to be viewed within the context in which the activity takes place. The direct effect of performance reporting is limited, and they speculate that this may be because the healthcare system in Canada is not market-driven. Also, performance reporting must be articulated with a clear set of goals and strategies for achieving them.

We are grateful to the many thoughtful individuals from Canada and elsewhere who provided comments on this important topic. All commend Brown et al. for addressing this very complex topic. Schumacher of the Canadian Medical Association reflects on his own personal experience with report cards and concludes with a preference for their use for internal learning rather than external dissemination. Chaudhuri, Brossart, Lewis and White from the Saskatchewan Health Quality Council explore the key elements driving performance measurement in the Veterans Affairs healthcare system in the US and the national primary-care trust in England. The authors highlight the experience in Saskatchewan of creating a culture for change based on performance reporting. We are fortunate to have two commentaries from Australia. First, Jackson from Latrobe University argues that performance reporting has an important place in achieving democratic accountability. Second, Collopy points up the importance of tailoring performance reporting to its context and intended audience. Boissonnault, from the US-based Niagara Health Quality Coalition, stresses the need for reporting in Canada even though there is no profit motive to drive competition, and argues that the emphasis should be on matching rewards to performance. Nicklin and Greco, representing the Canadian Council on Health Services Accreditation, discuss the critical importance of aligning accreditation programs with public reporting while also preserving their integrity. McMurtry at the University of Western Ontario clearly defines the criteria that can be used to select performance indicators. For him, "Reporting on performance in healthcare is not negotiable. Accountability cannot be achieved in its absence."

Guerriere of the Courtyard Group believes it is still too early to judge the impact of performance reporting on healthcare. Part of the problem, as he sees it, is that there is still too much reliance on paper reporting of data, which is slow and often outdated, and still too little use of electronic data. The system itself is frequently slow to respond and to implement changes that may be apparent from the performance reports. He suggests the development of composite measures of performance that paint a broader picture and would be easier to interpret, especially for the public.

Klazinga of the University of Amsterdam commends Brown et al. for their use of a conceptual framework for understanding the use and dissemination of performance reports. In his view this signals the increasing maturity of the field, moving from activities focused on the validation of measures to ones grounded in the social sciences. Although the model may be rather generic, in Klazinga's view this is frequently the case for models used to examine change in both organizational and individual behaviour. He concludes that the most important lesson from Brown et al.'s study is to improve the actionability of the reports: they must be considered an integral part of the Canadian health system's design and functioning.

So the debate goes on. Two recent articles in the New England Journal of Medicine continue to provide evidence that performance reporting is important, especially when carried out for specific patient populations. Scott Williams and his associates at the Joint Commission on Accreditation of Healthcare Organizations (2005) examined hospitals' performance on 18 standardized indicators of the quality of care for acute myocardial infarction, heart failure and pneumonia for patients in over 3,000 hospitals. Tracking the hospitals' performance over a two-year period showed significant improvements on 15 of the 18 measures. Ashish Jha and his colleagues (2005) used data collected by the Centers for Medicare and Medicaid Services on 10 indicators of the quality of care for acute myocardial infarction, congestive heart failure and pneumonia. A total of 3,558 hospitals reported data in 2004. Results showed variation among hospitals and across indicators, and the authors recommend that further studies include a broader range of clinical conditions.

In conclusion, while we may not be able to show that performance reporting is directly linked to changing knowledge, attitudes and behaviour, my hunch is that the links exist but, because they are complex, we have not yet found ways of measuring and understanding them. So let's continue to pursue the goal of accountability: to report on performance and to accept that transparency is a way of life for publicly funded organizations.

About the Author(s)

Peggy Leatt, PhD
Editor-in-Chief

References

Jha, A.K., Z. Li, E.J. Orav and A.M. Epstein. 2005. Care in US Hospitals: The Hospital Quality Alliance Program. New England Journal of Medicine 353(3): 265-274.

Williams, S.C., S.P. Schmaltz, D.J. Morton, M.A. Koss and J.M. Loeb. 2005. Quality of Care in US Hospitals as Reflected by Standardized Measures, 2002-2004. New England Journal of Medicine 353(3): 255-264.
