
Healthcare Quarterly 1(3) March 1998: 26-34. doi:10.12927/hcq.1998.16753

The Toronto Academic Health Science Council Management Practice Atlas

George H. Pink, Theodore J. Freedman and The Members of the Toronto Academic Health Science Council

Abstract

Ontario's acute-care general teaching hospitals demonstrate considerable variation in their average costs. In order to reduce or eliminate unacceptable sources of cost variation and achieve more effective care, hospital managers need better information on the sources and magnitude of variations in their practices.

Increasingly, hospitals are employing systems of performance comparison and "benchmarking" as a method of identifying opportunities for improving their practices. Academic health science centers, with their focus on research, have been particularly active in this regard. In 1995 the Toronto Academic Health Science Council (TAHSC), a joint venture of the University of Toronto and its affiliated teaching hospitals, agreed to focus its research on hospital management practice in the course of developing performance comparisons and suitable benchmark data. Since the achievement of truly comparable data and performance analysis is compromised by significant variations in data reporting, the TAHSC research focused on reducing data-reporting variations among the member hospitals.

The results of this analysis are summarized in the TAHSC Management Practice Atlas, along with relevant financial, statistical and clinical data for each hospital in the survey. The Management Practice Atlas is a significant effort in hospital management research and an important step in developing benchmark standards for Canada's teaching hospitals.

The Need for Comparative Information

Consider the likely five-year outlook facing teaching hospitals in Ontario:

  • Fewer academic resources: Diminished real university funding, reductions in the number of medical trainees, and increased competition for scarce research funding will make realization of the academic mission and goals more challenging.
  • Changing funding formulae: If Ontario moves toward Rate and Volume Equity Funding (Ladak and Pink, 1997), this may result in significant redistribution of funding among teaching hospitals, between teaching and community hospitals, and between hospitals and other health services.
  • Greater cost pressures: Diagnostic and therapeutic technology, higher patient acuity, new drugs, restructuring costs, information technology, year 2000 costs, occupational health and safety regulations, and increases in the quantity and costs of many other factor inputs will exert more pressure on health services to eliminate unnecessary current costs and carefully evaluate new expenditures.
  • Hospital restructuring: Decisions regarding hospital closures and the realignment of programs and services recommended by the Health Services Restructuring Commission will be implemented, including a reduction in teaching hospitals, reduction of acute and increase of nonacute inpatient capacity, and greatly expanded outpatient capacity. Many of these decisions have been based on models that use clinical benchmarks (e.g., length of stay, percent day surgery) and administrative benchmarks for realizing various efficiencies in hospital administrative, support, diagnostic, and therapeutic services.

Although this outlook would suggest a trend towards reduction in cost variation among health services, in fact considerable variation still exists. For example, despite several years of partial adjustment of acute hospital budgets for changes in case mix and volume, Table 1 shows that, in 1995-96, there were still large differences between expected and actual cost per weighted case among the acute general teaching hospitals in Ontario. Although comparable case-mix-adjusted cost data are not yet available for paediatric, oncology, chronic care, and rehabilitation hospitals, many managers believe that there is similarly wide variation in cost per patient day within these hospital groups. In summary, there are still large differences in hospital average costs that cannot be explained by differences in current measures of case mix.

Table 1: 1995/96 Expected and Actual Cost per Weighted Case, Ontario Acute General Teaching Hospitals

Hospital | 1995/96 Expected Cost per Weighted Case | 1995/96 Actual Cost per Weighted Case | Percent Difference
London Health Science Centre | $3,308 | $3,297 | -0.3
St. Joseph's, London | $2,997 | $2,915 | -2.8
Hamilton Civic Hospitals | $3,066 | $3,264 | +6.5
Chedoke McMaster | $3,218 | $3,270 | +1.6
St. Joseph's, Hamilton | $2,840 | $2,781 | -2.1
Mount Sinai, Toronto | $3,184 | $3,221 | +1.2
Sunnybrook, Toronto | $3,357 | $3,426 | +2.0
St. Michael's, Toronto | $3,653 | $3,368 | -7.8
The Toronto Hospital | $3,590 | $3,561 | -0.8
Wellesley Central, Toronto | $2,981 | $3,163 | +6.1
Women's College, Toronto | $3,054 | $3,034 | -0.7
Ottawa General | $3,091 | $3,305 | +6.9
Ottawa Civic | $3,146 | $3,247 | +3.2
Kingston General | $3,344 | $3,188 | -4.7
Kingston Hotel Dieu | $3,243 | $3,246 | +0.1
Source: Ontario Ministry of Health
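
To make the arithmetic behind Table 1 explicit, the following sketch (in Python) reproduces the percent-difference column for three of the hospitals. The dollar figures are taken directly from the table; the calculation shown is simply (actual minus expected) divided by expected, multiplied by 100, and is not the Ministry's underlying case-mix funding methodology.

    # Sketch: percent difference between actual and expected cost per weighted case.
    # Values are taken from Table 1; the Ministry's case-mix model is not reproduced here.

    def percent_difference(expected, actual):
        """Percent by which actual cost per weighted case differs from expected."""
        return (actual - expected) / expected * 100.0

    table_1_excerpt = {
        "London Health Science Centre": (3308, 3297),
        "St. Michael's, Toronto": (3653, 3368),
        "Ottawa General": (3091, 3305),
    }

    for hospital, (expected, actual) in table_1_excerpt.items():
        print(f"{hospital}: {percent_difference(expected, actual):+.1f}%")
    # Prints -0.3%, -7.8% and +6.9%, matching the table.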

A proportion of these cost differences is due to variation in clinical practice, an area of investigation of the Institute for Clinical Evaluative Sciences. However, a proportion is also due to variation in hospital management practice, an area of investigation that has received comparatively little attention. The existence of variation in hospital management practice is recognized; however, the specific sources, magnitude, and reasons for the variation are not well known. Most hospital managers would probably consider some of the reasons for variation to be acceptable (e.g., patient needs) and some to be unacceptable (e.g., historical practice). The bottom line is that, in an era of fewer academic resources, changing funding methods, increasing costs and restructuring, many managers are attempting to reduce or eliminate unacceptable sources of variation.

At the same time, trustees, government, the public and others are demanding more accountability for decisions made by hospital managers and more information about these decisions. For example, recent articles by Lisa Priest in the Toronto Star on the lack of consumer information about health care providers generated considerable discussion about the need for report cards. Many hospital managers view this increased accountability as overdue and embrace a new era in which the provision of comparative information to the public is routine.

Performance Comparisons and Benchmarks

One method of reducing variation that has recently attracted considerable attention is performance comparison and benchmarking. An operational definition of benchmarking, as developed at Xerox, is "finding and implementing best practices" (Camp and Tweet, 1994). More specifically, benchmarking uses measures of comparative performance to develop an understanding of what is possible and how others have achieved higher levels of performance. Hospitals measure each other's performance, identify the best performer in a group, and then seek to identify and understand the practices that can improve both clinical and administrative operations (Czarnecki, 1996). Performance comparisons and benchmarking provide a method of identifying and realizing opportunities for improvement (Lenz et al., 1994) and creating standards for restructuring (Hines, 1996), such as the methods being used by the Health Services Restructuring Commission in Ontario for decisions about hospital closures, mergers and reconfigurations.

Performance comparisons and benchmarking have been used extensively in the hospital industry, particularly at academic health science centers. The University of Cincinnati Hospital is one example of a hospital that has effectively used benchmarking for process improvement (Czarnecki, 1996). The University of California-Davis Medical Center has used benchmarking to reduce overall operational costs and costs for the clinical processes involved in treating patients with specific conditions (Hobde et al., 1997). The Foster G. McGaw Hospital of Loyola University reduced its operating costs by $33 million and put in place the structure for sustained progress in cost reduction (Cohen and Anderson-Miles, 1996). Recently, benchmarking has begun to move beyond "data-focused" areas of the hospital into more qualitative aspects of care delivery, such as patient satisfaction. Lund (1996) describes how patient satisfaction can be benchmarked. Some also believe that benchmarking may be one of the best ways of preparing for the growth of consumer information about health care (Bartlett, 1997).

Increasingly, hospitals are realizing the benefits of a collaborative approach to performance comparisons and benchmarking, such as that used by Voluntary Hospitals of America (Mosel and Gift, 1994). Similarly, 12 large children's hospitals established the Benchmarking Effort for Networking Children's Hospitals and for three years have been comparing data on cost and quality (Porter, 1995). The University HealthSystem Consortium, an alliance of 70 academic health centers, began a patient satisfaction benchmarking project in 1991 (Drachman, 1996). The feasibility of developing national benchmark questions on patient satisfaction with hospital care has recently been assessed in Australia (Draper and Hill, 1996). Nine nursing facilities in Mississippi participated in the American Health Care Association's Quality Indicator Index and Education (QUIX-Ed) project to apply quantitative performance measurements to long-term care (Fitzgerald et al., 1996). In Canada, the Canadian Council on Health Services Accreditation is pilot testing six utilization indicators.

Table 2: Members of the Toronto Academic Health Science Council

Hospital CEOs:

Ms. Pat Campbell, Women's College Hospital
Mr. Tom Closson, Sunnybrook Health Science Centre
Mr. Theodore Freedman, Mount Sinai Hospital
Mr. Stephen Herbert, Baycrest Centre for Geriatric Care
Dr. Alan Hudson, The Toronto Hospital (General, Western and Ontario Cancer Institute / Princess Margaret Hospital Divisions)
Dr. Sandra Jelenich, Wellesley Central Hospital
Dr. Perry Kendall, Addiction Research Foundation
Mr. Jeffrey Lozon, St. Michael's Hospital
Mr. Cliff Nordal, Rehabilitation Institute of Toronto
Ms. Jean Simpson for Dr. Paul Garfinkel, Clarke Institute of Psychiatry
Mr. Michael Strofolino, Hospital for Sick Children

The University of Toronto:

Dr. Arnold Aberman, Dean, Faculty of Medicine
Dr. Dorothy Pringle, Dean, Faculty of Nursing
Professor Robert Prichard, President

The TAHSC Project

The Toronto Academic Health Science Council (TAHSC) was established by the University of Toronto and its affiliated teaching hospitals in January 1992. Table 2 shows the members of TAHSC. The goals of the Council are to:

  • provide a forum for discussion, exchange of information, and development of shared policy directions on issues of concern to its members;
  • facilitate and support changes within and between TAHSC institutions, thereby moving TAHSC towards a common vision of the future; and
  • advocate for the special needs of TAHSC institutions, thereby enabling these institutions to fulfill their clinical and academic missions.

TAHSC members constitute one of the largest academic health science centers in the world. In 1995-96, TAHSC hospitals in aggregate had revenues that exceeded $1.8 billion, provided over 2.5 million ambulatory visits, and recorded over 1.0 million acute and 0.3 million nonacute inpatient days.

In June 1995, researchers from the University of Toronto and TAHSC agreed to investigate variation in hospital management practice and to produce performance comparisons and data that could be used for benchmarks. The goals of the project were:

  • to enhance understanding of the management of the patient care, research, and educational activities of academic health science centers;
  • to use appropriate theory and empirical methods to produce comparative data and rigorous and objective analysis;
  • to produce discussion documents that are valuable to TAHSC members and that can be disseminated to a broad audience of health management practitioners and researchers; and
  • to provide research opportunities for master's and doctoral students in Health Administration.

The approach used in the project was based on the following key principles:

  • Open access to hospital data: Data access, with proper regard for patient privacy and other confidentiality concerns, was necessary to ensure that a complete picture of each hospital was being drawn. TAHSC agreed that the researchers would be provided unfettered access to hospital data.
  • Academic freedom to investigate: A cherished principle held by most scholars is the notion of academic freedom, or the ability to conduct research without interference or fear of reprisal in the case of adverse results. TAHSC agreed to allow the researchers to "open any closet, look under any rock, or go down any alley."
  • Full disclosure of all findings: Selective disclosure of research findings can skew the conclusions one draws from analysis. TAHSC agreed that all findings would be disclosed, both positive and negative.
  • Commitment to peer comparison: The value of benchmarking is the ability to compare one's own performance with that of a peer or peers. A hospital can then follow up with a superior-performing hospital for the purpose of adopting similar practices. TAHSC agreed that all hospitals would be named in all analysis.
  • Focus on data quality improvement: Performance comparisons are only as good as the data upon which they are based. TAHSC agreed that a focus of the study should be improving data quality in order to increase the comparability of hospital performance.
  • Hospital and university collaboration: A study that requires data from multiple hospitals and that involves inter-organizational performance comparisons is not one that can easily be undertaken by a hospital or a university by itself. TAHSC agreed that the hospitals would provide the funding and data for the research and, in return, the university would provide objective and methodologically rigorous research.
  • Information for real-life decisions: Performance comparisons are more likely to be used if the information produced can be used for everyday decisions. TAHSC agreed to establish various subcommittees that would advise the researchers about the relevance of various avenues of investigation as well as methodological issues.
  • Disseminate findings to field: Research and publication are fundamental activities of any teaching hospital. TAHSC agreed that an important outcome of the project would be dissemination of research findings to the hospital management community at large.

The first project task was to establish a comprehensive database incorporating the Ontario Hospital Reporting System (OHRS) and Canadian Institute for Health Information inpatient and day-surgery abstract data for all of the TAHSC hospitals. Data quality problems were plentiful, because 1994-95 was the first year of OHRS data (which are based on the Guidelines for Management Information Systems in Canadian Health Care Facilities published by the Canadian Institute for Health Information). The lack of experience with the OHRS data resulted in many reporting variations, rendering many hospital comparisons difficult if not impossible. During the first year of the study, much researcher time was spent consulting with hospital finance departments about how to interpret the MIS Guidelines and attempting to reach agreement on account definitions and reporting practices.

When the researchers received and analysed the 1995-96 OHRS data, further reporting variations were discovered, despite one year's experience with the data. The researchers considered the data quality problems to be extensive and important enough to warrant systematic documentation of the reporting variations and a formal attempt to find ways to minimize them. This task was assigned to a committee of TAHSC finance managers and university researchers, who spent several meetings analysing the reporting variations and discussing accounting changes that would reduce and, in some cases, eliminate them. The committee produced a report that describes, and makes recommendations to address, seven types of reporting variation: the use of separate fund types, research revenues and expenses, marketed services, interdepartmental recoveries, depreciation expense, Ontario Health Insurance Plan professional billing, and outpatient activity. In the near future, this report will be submitted to a journal for publication.

The researchers have focused on reducing reporting variations among the hospitals in order to provide truly comparable data and analysis of the performance of the TAHSC hospitals. In addition, the projects have addressed the establishment of performance indicators related to the other important components of organizational performance, such as customer satisfaction, internal business processes, and organizational capacity for continual improvement, with the ultimate goal of tracking these indicators to augment the financial indicators of performance.

In July 1997, a compilation of these studies was produced and distributed to the TAHSC members. The TAHSC Management Practice Atlas provides a summary of the financial, statistical and clinical data for each hospital and identifies known reporting variations. In addition, the Atlas provides an array of performance comparisons and benchmark data. Three examples of these performance comparisons and benchmarks are provided below.

Examples of Performance Comparisons and Benchmarks

Nursing - One early priority for investigation was nursing. An advisory committee of TAHSC Chief Nursing Officers and University of Toronto researchers was established to advise on how to investigate inpatient nursing staffing and costs, as well as variations in the use of professional and nonprofessional staff, nurse practitioners and advanced practice nurses, and management. Furthermore, because of shared and overlapping responsibilities, it was considered impossible to look at nursing on inpatient units without looking at all of the other occupations assigned to those units.

The first challenge was the OHRS trial balance data, which include little detail about the types of staff who work on inpatient units, other than whether they are management or unit-producing personnel. The researchers therefore had to turn to data from the hospital payroll systems. Among the ten TAHSC hospitals, only five could provide payroll data in a format amenable to analysis. The researchers received the 1994-95 payroll data for all of the staff who worked on inpatient nursing units, regardless of occupation type. Some hospitals found this data request challenging to fulfill, and others found it relatively straightforward.

The hospital payroll data identified a total of 183 hospital-specific occupation code descriptions among the five hospitals. The researchers assigned each occupation code to one of 25 subgroups, which were, in turn, assigned to one of four groups: (1) Management - the managers of the unit; (2) Management support - those whose work supports the managers of the unit; (3) Patient care - direct caregivers; and (4) Patient care support - those whose work supports the direct caregivers. When this categorization was presented to the advisory committee, several changes to the assignment of specific occupations were suggested. More important, the committee advised the addition of two more categories: "Advanced practice nurse," which would include clinical nurse specialists, nurse clinicians, and nurse practitioners, and "Management and patient care," which would include nurse managers who were also direct caregivers. This breakdown was suggested to better depict hospital variations in the use of specialized nurses and to ensure that the categories consisted of relatively homogeneous groupings, an important requirement for the use of benchmarks (McKeon, 1996).
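
The grouping step can be illustrated with a small sketch. The occupation descriptions below are hypothetical, not the hospitals' actual payroll codes; the point is simply that each hospital-specific code was mapped to one of the six final categories, with unrecognized codes flagged for review rather than guessed at.

    # Illustrative sketch of the occupation-code grouping step.
    # Occupation descriptions are hypothetical, not actual hospital payroll codes.

    CATEGORY_BY_OCCUPATION = {
        "RN - Staff Nurse": "Patient care",
        "Registered Practical Nurse": "Patient care",
        "Health Care Aide": "Patient care support",
        "Clinical Nurse Specialist": "Advanced practice nurse",
        "Nurse Manager (working)": "Management and patient care",
        "Unit Manager": "Management",
        "Unit Clerk": "Management support",
    }

    def categorize(occupation_description):
        # Unrecognized codes are flagged for review rather than guessed at.
        return CATEGORY_BY_OCCUPATION.get(occupation_description, "Unclassified - review")

    print(categorize("Clinical Nurse Specialist"))   # Advanced practice nurse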

After the nursing staffing and cost analyses were revised using the six categories, the next step was to use the 1994-95 Canadian Institute for Health Information inpatient discharge abstract data to calculate the number of inpatient weighted cases, in order to adjust for variations in case mix among the hospitals. Weighted cases were considered an important denominator because comparison of a hospital's actual cost per weighted case with its expected cost per weighted case (as determined by a regression equation) has been used for several years by the Ontario Ministry of Health to revise hospital funding; for most hospitals, nursing is the single largest component of cost per weighted case.
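
As a simplified illustration of this case-mix adjustment, the sketch below treats total weighted cases as the sum of per-discharge case weights taken from the abstract data and divides nursing hours and compensation by that total. All values are hypothetical, and the Ministry's actual regression-based methodology is more involved than what is shown here.

    # Simplified illustration of case-mix adjustment (all values hypothetical).
    # Total weighted cases are taken as the sum of per-discharge case weights.

    discharge_case_weights = [0.8, 1.2, 3.5, 0.9, 1.6]   # hypothetical per-discharge weights
    total_nursing_hours = 340.0                          # hypothetical hours on the units
    total_nursing_compensation = 8000.0                  # hypothetical dollars

    weighted_cases = sum(discharge_case_weights)         # 8.0 weighted cases
    print(total_nursing_hours / weighted_cases)          # 42.5 nursing hours per weighted case
    print(total_nursing_compensation / weighted_cases)   # $1,000 compensation per weighted case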

Nursing hours per inpatient weighted case for 1994-95 are shown in Figure 1. At first glance, there appears to be little variation, as the hospitals averaged between 40 and 45 total hours of nursing care per weighted case. However, closer inspection reveals substantial variation in the mix of staff that make up the total nursing hours per weighted case. More specifically, the hospitals differ with respect to the number of patient care hours and patient care support hours per weighted case. Hospitals C and D employed nurses who were both managers and direct caregivers, but the other three hospitals either did not or employed very few. Although it is not apparent from the figure, there are also variations in the use of advanced practice nurses, but the numbers are small. Also not shown is substantial variation in the mix of professional and nonprofessional nurses that make up the patient care category.

The question arises: what are the reasons for these variations? For example, why did hospital B allocate substantially more patient care hours and fewer patient care support hours per weighted case than hospital E? The simple answer is that we do not know; this question is the focus of further analysis. Perhaps variations in patient acuity, philosophy of care, traditional staffing patterns, nursing ideology, or other factors will be identified.

Nursing compensation per inpatient weighted case for 1994-95 is shown in Figure 2. Interestingly, there is even less variation in this ratio - all hospitals but one incurred a cost of around $1,000 per weighted case. Although there are variations in the mix of nursing staff, these variations end up costing approximately the same amount. The exception is hospital E, which had the lowest number of nursing hours but the highest nursing compensation per inpatient weighted case, indicating that the hospital has relatively high compensation per hour. Again the question arises about the causes of this variation and, again, the answer is unknown. Perhaps variations in seniority, use of agency or part-time nurses, or other factors will be identified in further analysis.
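
The observation about hospital E can be made concrete with a short sketch: dividing compensation per weighted case by hours per weighted case yields an implied compensation per nursing hour. The figures below are illustrative only and are not the actual values for hospitals C and E.

    # Sketch of the implied compensation per nursing hour (illustrative figures only).

    hours_per_weighted_case = {"C": 44.0, "E": 40.0}
    compensation_per_weighted_case = {"C": 1000.0, "E": 1150.0}

    for hospital in ("C", "E"):
        implied_rate = compensation_per_weighted_case[hospital] / hours_per_weighted_case[hospital]
        print(f"Hospital {hospital}: ${implied_rate:.2f} per nursing hour")
    # Hospital C: $22.73 per hour; hospital E: $28.75 per hour, despite fewer hours.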

Nursing data for eight TAHSC hospitals for 1995-96 and 1996-97 are currently being analysed with a view to producing three years of comparable information about changes in nursing staff and costs.

Diagnostic imaging - Another high-cost hospital service selected for investigation was diagnostic imaging. Close inspection of the OHRS diagnostic imaging data for eight TAHSC hospitals revealed reporting variations. For example, medical compensation was not comparable, primarily because of different physician payment arrangements among the hospitals. For this reason, medical compensation was excluded and, after several other adjustments, staff hours were calculated using the two MIS categories of unit-producing and management/support personnel. These hours were converted to minutes and divided by the total Workload Measurement System (WMS) units for diagnostic imaging.
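
The resulting productivity ratio can be sketched as follows. The staff hours and workload units below are hypothetical; the calculation simply converts the two categories of staff hours to minutes and divides by total WMS units, as described above.

    # Sketch of earned minutes per WMS unit for diagnostic imaging
    # (medical compensation excluded; all figures hypothetical).

    unit_producing_hours = 52000.0      # hypothetical annual staff hours
    management_support_hours = 8000.0   # hypothetical annual staff hours
    total_wms_units = 1500000.0         # hypothetical annual workload units

    earned_minutes = (unit_producing_hours + management_support_hours) * 60.0
    print(round(earned_minutes / total_wms_units, 2))   # 2.4 earned minutes per WMS unit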

Diagnostic imaging earned minutes per workload unit for 1995-96 is shown in Figure 3. There is substantial variation among the eight hospitals, ranging between 1.71 and 3.23 earned minutes per WMS unit. Because the variation could have been due to different amounts or types of capital equipment or prices of factor inputs, total diagnostic imaging expense (including depreciation) per workload unit was also calculated and is shown in Figure 4. As can be seen, there is also substantial variation, ranging between $1.43 and $2.91 per WMS unit.

What are the reasons for the variations? Perhaps there are inconsistencies in the methods used to calculate WMS units or perhaps there are real efficiency differences. Anecdotally, when presented with these data, a radiologist commented that Hospital G is known to have an efficient diagnostic imaging department - it has good technical staff, new equipment, functional space, efficient processes, and competent clinical and administrative heads. He also stated that his hospital was in the process of benchmarking against Hospital G and that the Atlas data had confirmed that this was a good decision.

Materials management - A final example of the performance comparisons in the Management Practice Atlas was materials management. A standard accounting ratio is average days in inventory, defined as inventory / (supplies expense / 365). The ratio indicates how long, on average, supplies remain in inventory before being used. A value that is too low indicates that a hospital may be at risk of out-of-stock situations; a value that is too high may indicate too large an investment in inventory, tying up funds that could be put to better use elsewhere. This type of ratio lends itself to benchmarking - for example, the Health Services Restructuring Commission uses a benchmark of 18 days in inventory in its materials management costing model.
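
A brief sketch of this ratio, using hypothetical dollar figures, is shown below alongside the Health Services Restructuring Commission benchmark of 18 days.

    # Sketch of average days in inventory: inventory / (supplies expense / 365).
    # Dollar figures are hypothetical.

    inventory_on_hand = 1200000.0         # hypothetical inventory value ($)
    annual_supplies_expense = 20000000.0  # hypothetical supplies expense ($)
    HSRC_BENCHMARK_DAYS = 18.0

    days_in_inventory = inventory_on_hand / (annual_supplies_expense / 365.0)
    print(f"{days_in_inventory:.1f} days in inventory (benchmark: {HSRC_BENCHMARK_DAYS:.0f} days)")
    # 21.9 days, somewhat above the benchmark in this hypothetical case.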

Average days in inventory for 1995-96 is shown in Figure 5. The Health Services Restructuring Commission benchmark is depicted by the horizontal line across the figure. It can be seen that most of the 10 TAHSC hospitals are performing around the benchmark level, but some are not. However, this figure also illustrates the importance of data quality. When shown the figure, Hospital I identified a reporting variation that significantly distorted its ratio; its figure was therefore not comparable. The hospital was unaware of its atypical reporting practice until it saw the average days in inventory ratio depicted in the Atlas.

Lessons Learned

The TAHSC hospitals and the University of Toronto researchers have learned many lessons during the course of this study. Among the more important ones are the following:

  • Data quality is job one: During the course of the study, the most frequent feedback the researchers received was in relation to data quality - "You forgot this," "This isn't right," "How did you calculate this?" and "This is misleading," were some of the most common comments heard. In virtually all cases, these comments were prompted by data quality problems and not calculation errors. Data quality lessons can be summarized as follows: (1) use of performance comparisons and benchmark data is dependent upon valid, reliable and comparable data - managers and clinicians are quick to discount (and discard) data in which they have no confidence; (2) bad data are often hard to detect and sometimes not revealed until used in a performance comparison; (3) there are some data quality issues that will never be resolved or eliminated because of idiosyncratic institutional reporting. In such cases, either don't report the data or report it and identify the data quality issues.
  • The art and science of presenting data are as important as the data themselves: A table of numbers may contain exactly the same data as a graph, but well constructed visual displays communicate information faster and facilitate the identification of benchmarks, outliers and data problems.
  • Providing information generates action: In particular, presentation of performance comparisons often generates an immediate response. On one occasion, a performance comparison of the clinical laboratories was presented at an 8 am meeting of TAHSC. The comparison identified one clinical laboratory as having substantially higher unit costs than the other TAHSC laboratories. At 1:30 pm on the same day, the researchers received a telephone call from a pathologist at the outlier laboratory inquiring about the comparison. During the course of the discussion, the pathologist admitted that he knew his laboratory's costs were out of line, but did not know by how much until he saw the comparison with peer laboratories.
  • Comparisons are valuable: TAHSC hospital CEOs, clinicians, and managers have provided formal and informal feedback to the researchers that the Management Practice Atlas is useful, not only for performance comparison but also for planning and as a basic reference source about "who does what." Some hospitals have distributed the Atlas to their senior management, Board trustees, Foundation Board members, medical staff and others.
  • Consultation and continual feedback are critical: Planners, researchers and others who undertake analysis using hospital data without consultation with managers and clinicians do so at their peril. The advisory committees of practitioners helped the researchers to identify data quality problems and methodological issues that could not possibly have been identified by the researchers in isolation. Similarly, the researchers helped the practitioners to identify data quality issues that were unknown to the hospitals.
  • Information is political: The availability of comparative information generates a set of new and often unforeseen uses of the data. When such information is provided to trustees, medical staff, and others not in the direct employ of an institution, it is only a matter of time until the information is disseminated beyond the intended audience. Although this dissemination can be considered as progress towards new levels of accountability, it can also be destructive if data are not interpreted correctly or are used inappropriately. Hospitals need to consider carefully the potential consequences of information dissemination and be prepared to react in the event of inappropriate use.
  • Real variations exist: After adjusting for accounting practices, case-mix differences and other known explanatory factors, there remain real variations in the way that hospital managements allocate resources. An obvious example is the percentage of nursing staff who are professionally trained. The reasons for these variations are beginning to be revealed through reports such as the TAHSC Management Practice Atlas and the "Annual Benchmarking Comparison of Canadian Teaching Hospitals" study being undertaken by HayGroup for the Association of Canadian Teaching Hospitals, but much remains to be done before we have a comprehensive understanding of management practices.

Conclusions and Next Steps

Research is pervasive at the TAHSC hospitals, and management research is no exception. The Management Practice Atlas was developed and, more important, has been used by these hospitals because research is central to their mission and raison d'être. The primary conclusions to be drawn are that the research has: (1) increased accountability between the hospitals and their stakeholders by providing valid and comparable information where possible, and identifying data limitations where it is not; (2) created new approaches to performance comparison that will benefit the hospital management community in general through the publication of articles and other forms of dissemination; and (3) perhaps most important, improved management practices and outcomes.

In terms of next steps, the Management Practice Atlas has spawned a series of new projects. The first is to develop a balanced scorecard for Canadian teaching hospitals, a concept that is beginning to be implemented by some Canadian hospitals (Baker and Pink, 1995). The scorecard was developed by a committee of TAHSC hospital managers and researchers and it is currently being reviewed by the hospital leadership before implementation in late 1998. In turn, the scorecard generated a project to develop a report card designed to inform the public about the TAHSC hospitals and to provide a range of comparative information that consumers can use to evaluate them. At present, a TAHSC committee and the researchers are working to produce a draft report card that will be tested on patient focus groups in April and May 1998. After the report card is finalized, the next steps will be trustee and media education about the purpose, interpretation, and use of report cards, followed by their public dissemination.

The second version of the Management Practice Atlas will be issued in July 1998, and it will likely provide more and better information than the first, primarily because of data quality improvement. The Management Practice Atlas and related projects are tangible evidence of the ongoing commitment of the TAHSC hospitals to the development and dissemination of performance comparisons, benchmarks, and comparative information, and they reaffirm the importance of hospital management research in general.

About the Author(s)

George H. Pink, PhD, is Associate Professor in the Department of Health Administration, Faculty of Medicine, The University of Toronto.

Theodore J. Freedman, DHA, FHCHE, is Chair, Toronto Academic Health Science Council and President and CEO of Mount Sinai Hospital in Toronto.

Acknowledgment

The authors are grateful to Lina Johnson, Linda McGillis Hall, Ellen Schraa, Brenda Tipper and Jennifer Van Sickle for their assistance in preparing this article, to David Shedden and Lin Grist for their administrative assistance, and to the CEOs of the Toronto Academic Health Science Council hospitals who provided the data for this study. Please address all correspondence to George H. Pink, Toronto Academic Health Science Council, 555 University Avenue, Room 6432, Toronto, Ontario, M5G 1X8. Tel (416) 813-5289 and FAX (416) 813-5282.

References

N. Ladak and G.H. Pink. Funding Ontario Hospitals in the Year 2000: Implications for the JPPC Hospital Funding Committee, Joint Policy and Planning Committee Discussion Paper #DP 3-4, December 19, 1997.

R.C. Camp and A.G. Tweet. Benchmarking Applied to Health Care. Joint Commission Journal on Quality Improvement, 20(5):229-38 (1994).

M. Czarnecki. Benchmarking: A Data-Oriented Look at Improved Health Care Performance. Journal of Nursing Care Quality, 10(3):1-6 (1996).

S. Lenz, S. Myers, S. Nordlund, D. Sullivan and V. Vasista. Benchmarking: Finding Ways to Improve. Joint Commission Journal on Quality Improvement 20(5):250-9 (1994).

P.A. Hines. Using Benchmarking to Identify Standards for Restructuring. Recruitment, Retention and Restructuring Report 9(5):1-5 (1996).

M. Czarnecki. Using Benchmarking in the Hospital Environment: A Case Study. Best Practices and Benchmarking in Healthcare 1(4):221-4 (1996).

B.L. Hobde, P.B. Hoffman, P.K. Makens, M.B. Tecca. Pursuing Clinical and Operational Improvement in an Academic Medical Center. Joint Commission Journal on Quality Improvement 23(9):468-84 (1997).

E. Cohen and E. Anderson-Miles. Benchmarking: A Management Tool for Academic Medical Centers. Best Practices and Benchmarking in Healthcare 1(2):57-61 (1996).

C.M. Lund. Benchmarking Patient Satisfaction. Best Practices and Benchmarking in Healthcare 1(4):203-6 (1996).

D.R. Bartlett. Preparing for the Coming Revolution in Health Care. Journal of Health Care Finance 23(4):33-9 (1997).

D. Mosel and B. Gift. Collaborative Benchmarking in Health Care. Joint Commission Journal on Quality Improvement 20(5):239-49 (1994).

J.E. Porter. The Benchmarking Effort for Networking Children's Hospitals (BENCHmark). Joint Commission Journal on Quality Improvement 21(8):395-406 (1995).

D.A. Drachman. Benchmarking Patient Satisfaction at Academic Health Centers. Joint Commission Journal on Quality Improvement 22(5):359-67 (1996).

M. Draper and S. Hill. Feasibility of National Benchmarking of Patient Satisfaction with Australian Hospitals. International Journal for Quality in Health Care 8(5):457-66 (1996).

R.P. Fitzgerald, B.N. Shiverick, D. Zimmerman. Joint Commission Journal on Quality Improvement 22(7):505-17 (1996).

T. McKeon. Benchmarks and Performance Indicators: Two Tools for Evaluating Organizational Results and Continuous Quality Improvement Efforts. Journal of Nursing Care Quality 10(3):12-17 (1996).

G.R. Baker and G.H. Pink. A Balanced Scorecard for Canadian Hospitals. Healthcare Management Forum 8(4):7-13 (1995).
