Abstract

Introduction: Building on a survey of key decision-makers within the Provincial Health Services Authority (PHSA) of British Columbia, senior managers at the PHSA and researchers at the University of British Columbia developed and implemented a process for prioritizing new service options within the organization for the 2005/06 budget cycle.

Methods: A case study approach was taken in which development and implementation of the prioritization process was documented and feedback was obtained from decision-makers to evaluate the activity. Information from the literature was also used to identify areas for improvement.  

Results: The 13-member Executive Committee of the PHSA conducted the prioritization. Criteria were defined and weighted, and options for new funding were explicitly rated against them. Multi-attribute decision software was used to rank options based on an overall benefit score. Cost-benefit ratios were also derived, and program options were further ranked according to decision-makers' "gut-feel." Rankings across the three methods were comparable, and each method contributed to the final decisions by the Executive Committee regarding which programs would receive limited new funding.

Conclusion: Through a collaborative effort between decision-makers and researchers, the PHSA was able to shift from allocating new resources on the basis of politics and bargaining to an evidence-driven, transparent, defensible process. Lessons learned from the current activity will help inform future priority setting in the PHSA and should provide insight for decision-makers elsewhere.

As resources in healthcare are limited, some mechanism is required within health organizations to decide which services to fund and which not to fund (Madden et al. 1995; Ham and Coulter 2001). Recent evidence has shown that decision-makers often do not have an adequate understanding of tools available to assist in such activity (Lomas et al. 1997; Mitton and Donaldson 2002). As a result, they are likely to revert to historical allocation processes (Miller and Vale 2001; Mitton and Prout 2004) whereby a given year's expenditure is largely based on the previous year's allocation, with some adjustments for demographics or the political call of the day. The main criticism of this approach is that there is little chance of getting the most out of limited resources, as costs and benefits of services are not explicitly examined (Birch and Chambers 1993). This paper outlines a formal approach to prioritization applied in 2005 within the Provincial Health Services Authority (PHSA) of British Columbia.

Context

With its provincial, rather than regional, mandate, the PHSA is distinct from the five other health authorities in British Columbia (Cranston and Powell 2004). As a provider of specialized services, it coordinates the activities of eight provincial agencies: B.C. Cancer Agency, B.C. Centre for Disease Control, B.C. Children's Hospital and Sunny Hill Health Centre for Children, B.C. Provincial Renal Agency, B.C. Transplant Society, B.C. Women's Hospital & Health Centre, Forensic Psychiatric Services Commission, and Riverview Hospital. In addition, the PHSA is responsible for cardiac services and the provincial coordination of emergency and surgical services. In most other provinces, specialized services, such as tertiary children's hospitals, are housed within regional health authorities while disease-oriented institutions, such as those for cancer and mental health, function as stand-alone entities.

Leadership of the PHSA is provided through an Executive Committee comprising 13 individuals of both clinical and managerial backgrounds. The Executive Committee is responsible for overall planning for the PHSA, although each agency also has its own leadership team.  

In fall 2003, senior decision-makers in the PHSA decided to work towards a formal prioritization process. They contacted researchers from the University of British Columbia early in 2004 to collaborate on the development and implementation of an approach for prioritization of new investments for the 2005/06 budget cycle.  

The primary objective of this paper is to report on the most recent prioritization process for new services and programs undertaken within the PHSA. First, we outline several observations from a survey of PHSA decision-makers that was undertaken before the prioritization exercise.

Survey of Key Decision-Makers

To ensure a better understanding of the organizational context with respect to priority setting and to investigate the possibility of moving towards a more explicit process, a survey was conducted of 25 key PHSA decision-makers in spring 2004. Results were categorized into (a) current organizational practices; (b) strengths and weaknesses of priority-setting activity to date; (c) areas for improvement, particularly in relation to cultural change, stakeholder involvement and fairness of process; and (d) barriers and facilitators in moving forward with an explicit approach to priority setting.

Decision-makers indicated that up until the time of the survey, priority setting had been largely based on the adage, "the squeaky wheel gets the grease": resources tended to go to whoever "yelled the loudest." Prioritization was described as an ad hoc process, with resources allocated to satisfy the most people and incur the least opposition. Decision-makers stated that there had not been discussion of re-allocating resources across PHSA agencies owing to an agreement among the agencies when the PHSA was formed.

Decision-makers identified a number of strengths pertaining to priority setting. First, many respondents identified the creation of the Strategic Plan as an organizational strength. The planning process enabled stakeholders to come together and discuss the future directions of the PHSA. The aim of the strategic planning exercise was to establish a unified vision across the agencies. Decision-makers viewed the plan as the first step towards a more "fair, open and transparent" process. Another strength the decision-makers identified was the openness of the PHSA in moving towards an explicit, more formal process of priority setting.

One identified weakness was a lack of structural and cultural integration within the organization. This was attributed to the newness of the PHSA (created in 2002) and related to the challenge, noted above, of re-allocating resources across agencies. In addition, decision-makers said that an organizational "do-it-all" mentality prevailed rather than an acceptance of the necessity to make overt rationing decisions. A further perceived weakness was that decision-makers were unlikely to release resources from their own program budgets to fund investments elsewhere.

Survey participants noted that the main strategy for improvement was to develop a process that was more transparent and defensible. Further, they identified a need for developing a culture that supports explicit priority setting through education and training in economic principles and prioritization practices. While the PHSA has used stakeholder opinion in setting priorities, decision-makers believed that stakeholder involvement could be broadened. For example, members of the public could be asked for their insights into the criteria on which decision-makers base allocation decisions.

Despite the desire for greater transparency as a whole, decision-makers identified a number of barriers in moving to an explicit priority-setting process: lack of shared vision in the PHSA, lack of priority-setting skills among the management team and lack of decision-maker buy-in for such a move. Conversely, these decision-makers also highlighted several facilitators that could aid in the implementation of an explicit priority-setting process: a strong leadership team, commitment to explicit priority setting, consistent application of the process, demonstrated results and adequate resources for re-allocation across services.

Methods

Our research activity within the PHSA followed what Martin and Singer (2003) call a "describe-evaluate-improve" strategy for priority setting. The first stage is to describe the process using qualitative methods, as was done with the key decision-maker survey. Following implementation, the process can be evaluated against known frameworks; subsequently, areas for improvement can be identified. The current process was documented through participation in relevant Executive Committee meetings over a 10-month period (May 2004 through February 2005) and then evaluated through discussion with two sets of decision-makers. Information from the literature was also used to identify areas for improvement.  

Table 1 outlines the basic steps taken for this process, with the scope (i.e., stage 1) being to prioritize new services and programs. A work plan for the process was developed jointly by a senior decision-maker within the PHSA (JM) and a researcher at the University of British Columbia (CM). The plan was presented to the CEO and Executive Committee in June 2004. The Executive Committee agreed to serve as the advisory panel (see stage 3 in Table 1), while support would be provided by managers and clinicians within each agency.  

Table 1. Stages in PHSA prioritization process*

Stage 1. Determine the aim and scope of the priority-setting exercise.
PHSA action: Focus on prioritizing new program and service options for the 2005/06 budget cycle (corporate services and independent capital proposals not included).

Stage 2. Compile a map of current activity and expenditure.
PHSA action: An explicit review and compilation of a current activity and expenditure map was not conducted.

Stage 3. Form a multi-disciplinary advisory panel.
PHSA action: The 13-member Executive Committee of the PHSA, comprising both clinicians and managers.

Stage 4. Determine locally relevant decision-making criteria based on decision-maker, board or public input, including review of strategic documents.
PHSA action: An initial set of criteria was generated based on organizational values and standard criteria from the literature; these were revised by the Executive Committee and then weighted according to relative preference.

Stage 5. Advisory panel to identify options for service growth and rank those options with explicit consideration given to decision-making criteria.
PHSA action: Program and service options from each agency were presented to the Executive Committee; a business case was developed for each option, and a scoring sheet was used to rate options against the criteria.

Stage 6. Validity checks with additional stakeholders, final decisions to inform the budget planning process, and communication of decisions and rationales internally and externally.
PHSA action: Finance personnel reviewed business case cost projections; final decisions were made by the Executive Committee, and results were communicated back to the agencies through Executive Committee members.

Stage 7. Evaluate the process and make refinements for future years.
PHSA action: Informal feedback was obtained from two sets of decision-makers and insight drawn from relevant literature; more formal evaluation using qualitative and quantitative methods is to be carried out over time.

*Adapted from Mitton and Donaldson (2004)

An initial set of decision-making criteria was generated by JM and CM, based on (a) organizational values as evidenced through various internal planning and strategic documents and (b) knowledge of criteria found in the literature (e.g., Gibson et al. 2005a; Mitton et al. 2003). Definitions of each criterion were drafted and presented to the Executive Committee in June 2004. Through an iterative process involving extensive discussion at Executive Committee meetings over a three-month period, the definitions were formalized (stage 4). This process included ranking a series of mock proposals to gauge the validity of the criteria. Next, the criteria were weighted according to relative preference. Weighting involved each member of the Executive Committee assigning a total of 100 points across the criteria. Points were tallied and, excluding the high and low scores for each criterion, averages were calculated. From these averages, a relative percentage weighting of each criterion was derived.
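The arithmetic of the weighting step can be sketched in a few lines; the following Python is a minimal illustration under stated assumptions (a five-member panel and invented criteria and allocations), not the committee's actual worksheet:

```python
# Sketch of the weighting step (all names and numbers hypothetical):
# each committee member distributes 100 points across the criteria;
# for each criterion the single highest and lowest allocations are
# dropped, the remainder averaged, and the averages rescaled to
# percentage weights. A 5-member panel is shown for brevity (the
# actual committee had 13 members).

allocations = {
    # criterion: points from each member (each member's column sums to 100)
    "Effectiveness":       [30, 25, 35, 20, 30],
    "Equity of access":    [25, 30, 20, 35, 25],
    "Strategic alignment": [25, 25, 25, 25, 20],
    "Feasibility":         [20, 20, 20, 20, 25],
}

def trimmed_mean(points: list[int]) -> float:
    """Average after dropping one highest and one lowest score."""
    trimmed = sorted(points)[1:-1]
    return sum(trimmed) / len(trimmed)

means = {c: trimmed_mean(p) for c, p in allocations.items()}
total = sum(means.values())

# Relative percentage weight of each criterion.
weights = {c: 100 * m / total for c, m in means.items()}
for criterion, weight in weights.items():
    print(f"{criterion}: {weight:.1f}%")
```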

A simple, one-page scoring sheet (see Appendix 1) was developed by JM and CM so that each new program or service proposal could be rated against the criteria in a standardized format. The criteria in the scoring sheet were those initially generated as described above and formally agreed to by the Executive Committee following extensive discussion and iteration. The columns represent the score, from 1 up to a maximum of 6, against which each funding proposal was rated (stage 5). The number of levels for each criterion (i.e., 1 to 4, 1 to 5 or 1 to 6) was selected based on the natural intervals for each criterion, as agreed upon by the Executive Committee. In some cases, epidemiologists were consulted in deriving the final set of levels and corresponding definitions. The column farthest to the right indicates whether the assessment was based on evidence or expert opinion.
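To make the sheet's structure concrete, a minimal sketch of one row as a data structure follows; the field names, criterion and level definitions are hypothetical, since the actual sheet appears in Appendix 1:

```python
# Sketch of one scoring-sheet row as a data structure. Field names,
# the criterion and the level definitions are hypothetical; the
# right-most column of the sheet is modelled here as `basis`.

from dataclasses import dataclass

@dataclass
class ScoringSheetRow:
    criterion: str
    level_definitions: dict[int, str]  # 1-4, 1-5 or 1-6 levels per criterion
    rating: int                        # level assigned to the proposal
    basis: str                         # "evidence" or "expert opinion"

    def __post_init__(self) -> None:
        if self.rating not in self.level_definitions:
            raise ValueError("rating must correspond to a defined level")

row = ScoringSheetRow(
    criterion="Effectiveness",
    level_definitions={1: "no demonstrated effect", 2: "weak effect",
                       3: "moderate effect", 4: "strong effect"},
    rating=3,
    basis="evidence",
)
print(row.criterion, row.rating, row.basis)
```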

Based on the expected amount of new resources available from the government for the 2005/06 fiscal year, and to keep the process manageable, the Executive Committee decided that each agency could submit a maximum of two proposals for prioritization. The scoring sheet and corresponding business case were completed for each proposal by the program requesting the funding and submitted by the relevant agency's executive sponsor prior to a one-day decision-making retreat held in January 2005. The business cases included a summary description of the investment option; a clear statement of objectives and expected outcomes, including assessment of the proposal against each of the pre-defined criteria; a risk assessment; and a cost analysis. During the retreat, a manager or clinician from the particular program area made a 30-minute presentation on each proposal. The Executive Committee then vetted the scoring sheet ratings, and adjustments were made accordingly.

Three further steps were undertaken during the day-long decision-making retreat. First, the service proposal ratings, along with the pre-defined criteria weights, were entered into an off-the-shelf, multi-attribute decision-making software package (VISA 1995), which produced an overall benefit score for each proposal. The software calculated the benefit score using a simple linear function: the weight of each criterion was multiplied by the proposal's score against that criterion, and these weighted scores were then summed across all the criteria and entered into Excel for reporting.
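The linear function is simple enough to reproduce outside the software. The sketch below assumes each rating is first normalized to its criterion's scale, which yields scores on a 0-100 range like those in Table 2; the weights, scales and ratings shown are invented, and this illustrates the additive calculation rather than the VISA package itself:

```python
# Sketch of the additive linear value function (not the VISA package
# itself). Normalizing ratings to each criterion's scale is an
# assumption made here so that results land on a 0-100 scale like
# the scores in Table 2; all weights, scales and ratings are
# hypothetical.

weights = {"Effectiveness": 32, "Equity of access": 27,     # percent,
           "Strategic alignment": 24, "Feasibility": 17}    # sums to 100

scale_max = {"Effectiveness": 6, "Equity of access": 5,     # top level of
             "Strategic alignment": 4, "Feasibility": 5}    # each scale

ratings = {"Effectiveness": 5, "Equity of access": 4,       # one proposal's
           "Strategic alignment": 3, "Feasibility": 4}      # sheet ratings

# Normalize each rating to [0, 1] on its own scale, weight and sum.
benefit = sum(weights[c] * (ratings[c] - 1) / (scale_max[c] - 1)
              for c in weights)
print(f"Overall benefit score: {benefit:.0f}")  # ~75 on a 0-100 scale
```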

Second, the overall benefit score for each proposal was divided by the five-year projected capital plus operating cost to produce a cost-benefit ratio. These costs were available in the business case for each proposal. As the costs were cardinal and the benefit scores were ordinal, a skewed list of ratios was produced, with the most costly options receiving the lowest cost-benefit scores. Nonetheless, the intention was to produce some information on the relative "value" (i.e., expected benefits for resources spent) of the investment proposals.
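Using the figures later reported in Table 2, the published ratios are consistent with the benefit score divided by cost when the result is expressed per $1 million; a short check (the per-$1M scaling is our inference from the table, not stated in the text):

```python
# Check of the ratio calculation against three rows of Table 2:
# overall benefit score divided by five-year operating-plus-capital
# cost. Costs are reported in $000s, so scaling by 1,000 expresses
# the ratio as benefit per $1 million, which reproduces the
# published figures.

options = {  # name: (overall benefit score, total cost in $000s)
    "Alcohol and Drug Treatment": (85, 12_580),
    "Colorectal Cancer Screening": (82, 3_414),
    "Pediatric Oncology Network": (68, 1_353),
}

for name, (obs, cost_000s) in options.items():
    ratio = obs / cost_000s * 1000
    print(f"{name}: {ratio:.2f}")  # 6.76, 24.02, 50.26
```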

Third, a set of blank cards was handed out to each Executive Committee member, who ranked his or her top five proposals based on "gut-feel." The cards were hand-tallied to produce an additional score, with first-place proposals on each card receiving a "3," second-place proposals a "2," third-place proposals a "1" and fourth- or fifth-place proposals a "0." The multi-attribute benefit score, the derived cost-benefit ratio and the "gut-feel" score were presented to the Executive Committee at the end of the retreat.
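A minimal sketch of this tally, with two hypothetical ballots:

```python
# Sketch of the hand tally for the "gut-feel" cards: each member's
# top-five card awards 3 points to its first-place proposal, 2 to
# second, 1 to third and 0 to fourth and fifth. The ballots below
# are hypothetical.

from collections import Counter

POINTS = [3, 2, 1, 0, 0]  # points by position on each card

ballots = [  # one ranked top-five card per committee member
    ["Colorectal Cancer Screening", "Youth Substance Use Program",
     "Prenatal Genetics Screening", "HPV Co-testing",
     "Pediatric Nutritional Needs"],
    ["Colorectal Cancer Screening", "HPV Co-testing",
     "Alcohol and Drug Treatment", "Prenatal Genetics Screening",
     "Mental Health Networks"],
]

tally: Counter[str] = Counter()
for card in ballots:
    for position, option in enumerate(card):
        tally[option] += POINTS[position]

for option, score in tally.most_common():
    print(f"{option}: {score}")
```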

Two weeks following the retreat, at a regularly scheduled Executive Committee meeting, the prioritization process was reviewed and the three sets of rankings were re-presented. During the interval between the retreat and the final decision-making meeting, PHSA financial personnel reviewed the costing of each proposal to ensure consistency of reporting and projection of various costs (stage 6 in Table 1). Adjustments were made and incorporated into the final cost-benefit ranking. Prior to final decisions, the senior manager leading the process (JM) presented three key observations to the team:

  1. There is no "right" or "wrong" ranked list; of the three methods presented, each may ultimately provide information upon which the team might base its final decisions. Additional factors may also play a role in decision-making, such as providing services to highly specialized populations, or other criteria that were not fully captured in the scoring sheet might be applied. So long as evidence is provided and decisions are made in an open and transparent manner, it is in principle legitimate to produce a final ranking that differs from any of the three methods used in this process.  
  2. The "gut-feel" approach does not include process values that the Executive Committee had indicated were important at the outset of the exercise, including transparency, equity and accountability.  
  3. The overall benefit score on its own does not provide an indication of value for money spent. The cost-benefit ratio is driven largely by cost, as the variation between programs on the benefit score is limited.

Group discussion followed the presentation of results, and a final decision was made by consensus regarding the top three service options. We then embarked immediately on further discussion regarding improvement of the process for future years (stage 7 in Table 1). Additional insight was elicited in a separate meeting with a group of managers from two of the agencies who met regularly to strategize priority-setting activity within their agencies. Key points from these two feedback sessions were recorded. Main themes are reported below.

Table 2. Ranked results of program and service options by overall benefit score (OBS)

Program Option                 OBS   Total Cost ($000s,      Benefit/Cost*
                                     Operating + Capital)
Alcohol and Drug Treatment      85   12,580                   6.76
Colorectal Cancer Screening     82    3,414                  24.02
Prenatal Genetics Screening     81    5,120                  15.82
Youth Substance Use Program     80    3,071                  26.05
Environmental Health Centre     76   15,919                   4.77
Mental Health Networks          75    5,913                  12.68
Pediatric Nutritional Needs     74    4,685                  15.80
Pediatric Oncology Network      68    1,353                  50.26
HPV Co-testing                  62    7,119                   8.71

*Pediatric Oncology Network, Youth Substance Use Program, Colorectal Cancer Screening and Prenatal Genetics Screening are the top four ranked programs on the basis of the benefit-cost ratio.

Results

In total, nine proposals for funding were put forward by the agencies. Rank order by overall benefit score is presented in Table 2. This table includes the cost-benefit ratio for each program. The "gut-feel" scores are presented in Table 3, with the number in parentheses representing the total score received based on three points for each first-place vote, two points for second place and one point for third.  
Table 3. Program options ranked by gut-feel scores
1. Colorectal Cancer Screening (31)
2. Youth Substance Use Program (18)
3. Prenatal Genetics Screening (12)
3. HPV Co-testing (12)
5. Pediatric Nutritional Needs (6)
5. Alcohol and Drug Treatment (6)
7. Pediatric Oncology Network (0)
7. Mental Health Networks (0)
7. Environmental Health Centre (0)

In the end, the Executive Committee decided that the top three programs to receive funding, should new resources become available, were (1) Colorectal Cancer Screening, (2) Prenatal Genetics Screening and (3) Youth Substance Use Program. The remaining, unfunded programs were turned back to the individual agencies for internal re-allocation of resources, should they be deemed of higher priority than existing services within those agencies. Ultimately, the team made this decision by taking into account the "gut-feel" scores along with the cost of the proposals and the overall benefit score (but not considering the cost-benefit ratios explicitly).  

The final ranking corresponds closely with the "gut-feel" rank order scores and, except for the Alcohol and Drug Treatment program, also follows the overall benefit score rank order. The program option with the best "value" based on the cost-benefit ranking (i.e., the Pediatric Oncology Network) did not make the final three ranked options and received zero points on the "gut-feel" scoring.

Following these decisions, the Executive Committee raised a number of important points related to the process. Overall, the process was seen to be fair, and the team appreciated the due diligence undertaken by program managers and clinicians on the business cases and retreat presentations. The managers and clinicians on the Executive Committee felt that there was ample opportunity for dissenting voices to be heard and that the process was open and transparent among senior staff across the agencies. As well, despite some concerns over the criteria, on balance decisions were made with adequate information at hand, with the caveat that evidence and data will always be limited.

In terms of areas for improvement, the main concern was that the "benefit" of some proposals (e.g., the networks) was not fully captured through the defined criteria. However, the team also understood that results from each of the three methods were simply inputs into the decision-making process, and that no single rating of, for example, a 6 versus a 4 on an individual criterion would have swayed the final ranking decision. A related concern was that the team was unable to fully adjudicate the benefit scores presented by program managers at the retreat; thus, the vetting focused more on assessing individual managers' understanding of the criteria than on a genuine assessment of each program against the given criteria. The team also recognized challenges with the criteria that may have arisen, in part, from the vast and disparate array of PHSA program areas requiring assessment.

Further points arose in discussion with a small group of managers charged with leading priority-setting processes within two of the agencies. These managers clearly felt that the executive-level process should be continued, while noting a need for some refinement. Perhaps most important was the need for greater standardization of the process so that all areas of the organization (e.g., human resources, corporate services) would be vying for limited resources on an equal footing. As an indication of the cultural shift required, the managers pointed out that some parts of the organization had not used business cases for examining funding options in the past. A new way of thinking about priority setting, brought about through an explicit process with clearly defined criteria, was considered a major advance. It was also noted that project roll-out within the agencies was underway, and that there should be scope at this level not just for prioritizing new programs, but also for re-allocating existing resources between program areas.

Specific recommendations for moving forward included widening the scales for benefit scoring to capture greater variation between options, and outlining the principles of the process in a one-page document for broad circulation to bolster transparency. It was also suggested that the business cases be streamlined so that only information relevant to the decisions at hand would be included, and that a filter process be examined whereby a proposal would have to achieve a certain level on some criteria to qualify for further consideration. The group also indicated that the denominator in the cost-benefit ratio could be adjusted to reflect cost per case rather than overall program cost, and that a mechanism should be developed to deal with new developments, or new information on existing proposals, as the fiscal year progresses. Finally, while it was recognized that the Board supported the process undertaken by the Executive Committee, this group of middle managers wondered about the political will to stick with the prioritized ranking when the next crisis, and the corresponding media outcry, materializes.  

Discussion

Overall, the PHSA took an important cultural step in moving towards an explicit, transparent method for prioritizing programs competing for limited new resources. The collaboration between researchers and decision-makers led to a natural laboratory for knowledge exchange whereby the evidence base of the process could be fostered while working within the constraints of real-world complexity and timelines. While the decision-makers viewed the process described here as an improvement over previous processes (inherently political methods, e.g., "the squeaky wheel gets the grease"), improvement and further iteration should nonetheless continue.  

A review of key developments in the priority-setting literature provides insight for evaluating the PHSA process and, with the suggestions from the decision-makers reported above, helped identify areas for improvement and refinement. A recent paper by Gibson et al. (2005a), building on initial work by Daniels and Sabin (1998), outlines a number of case studies on priority setting conducted in Canada. Of central importance is the notion of process fairness, which can be assessed against five criteria: whether decisions are made on the basis of reasons (i.e., evidence, principles, values) that fair-minded people can agree upon; whether decisions and rationales are publicly accessible; whether there are opportunities to re-visit and challenge decisions; whether there is a regulatory mechanism to enforce the first three conditions; and whether efforts have been made to ensure broad participation and minimization of power differences. These criteria form a conceptual framework for the evaluation of priority-setting processes (Gibson et al. 2005b).

For example, while relevant criteria were used in the PHSA process and data were collected related to each criterion, the Executive Committee should have developed a clearer rationale in determining the final top three ranked programs. Also, in future years, the PHSA may wish to place more emphasis on effectively communicating the goals, criteria, processes and decisions to internal and external stakeholders, and should develop a consultation process with a broad set of stakeholders when formalizing the criteria. Further, the Executive Committee should develop a formal review process to resolve any disputes that may arise, along with an iterative mechanism for deciding how to address funding requests that arise during the budget cycle. Finally, efforts should be undertaken to evaluate subsequent prioritization processes on a continuing basis.

This work can also be set in the context of previous work that has taken an economic perspective to priority setting (Donaldson 1995; Peacock 1998; Astley 2001; Mitton et al. 2003; Donaldson et al. 2005). The major contribution from this body of work, in relation to the PHSA process, would be to consider the concept of re-allocating resources at the margin. The basic process would involve ranking options for disinvestment alongside proposals for service growth. Once new resources are exhausted, the question becomes whether additional benefit can be derived through shifting or re-allocating resources within the current mix of funding.  

In theory, such re-allocation should occur until the cost-benefit ratios across programs are equal (Mitton and Donaldson 2004). In practice, information limitations prevent full assessment of relative benefit, but the principle of marginal analysis should nonetheless be considered by all organizations endeavouring to set priorities. In resource-rich environments, there may never be an imperative to disinvest from existing programs, but with current fiscal challenges, the "wish list" of most organizations will likely far exceed available resources. Thus, in reality, decision-makers have a built-in incentive to release resources from within (i.e., it's the only way proposed programs will be funded).
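As a toy illustration of the principle, the sketch below (with hypothetical programs and ratios) flags a reallocation only when a growth option offers more benefit per dollar than the weakest existing program; a real marginal analysis would rest on actual incremental costs and benefits:

```python
# Toy illustration of reallocation at the margin (all program names
# and ratios hypothetical): a growth option is a candidate for
# funding through disinvestment only if its benefit-per-dollar ratio
# exceeds that of the weakest existing program; repeated swaps push
# the ratios toward equality at the margin.

existing = [("Program A", 30.0), ("Program B", 8.0)]  # (name, benefit per $000)
growth = [("Option X", 22.0), ("Option Y", 5.0)]

existing.sort(key=lambda p: p[1])  # weakest existing program first
weakest_name, weakest_ratio = existing[0]

for name, ratio in growth:
    if ratio > weakest_ratio:
        print(f"Candidate swap: release resources from {weakest_name} "
              f"(ratio {weakest_ratio}) to fund {name} (ratio {ratio})")
    else:
        print(f"Hold: {name} (ratio {ratio}) yields less benefit per "
              f"dollar than {weakest_name} (ratio {weakest_ratio})")
```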

A further extension of the activity undertaken by the PHSA, again drawing on work by health economists, would be to examine alternative methods of benefit scoring. In particular, economic techniques such as discrete choice experiments (DCEs) and willingness to pay have been used with some success to gauge program benefits (Olsen and Donaldson 1998; Farrar et al. 2000). Although academic expertise may be required to apply these methods, the incremental resources needed for a technique like a DCE are relatively modest. The first step in such a process is to identify and define the criteria and the levels within those criteria. A precedent for applying such an approach within an actual priority-setting process in Canada can be found in the Calgary Health Region (Ryan et al. 2005), although more research is needed to assess whether decision-makers will readily accept and utilize more technical benefit-scoring approaches. Further, commercially available software, based on hierarchical modelling, can help identify the optimal bundle of program choices for the resources available.

As the PHSA moves forward with prioritization of new services, and perhaps ultimately chooses to engage in setting priorities across agencies through re-allocating resources at the margin, several lessons can be highlighted. First, the sustainability of an explicit process will rest on the "credible commitment" of the organization to the process (Jan 2003), along with senior decision-makers' willingness to apply the process consistently and to stand up to political pressures once decisions are made. Second, a transparent, evidence-driven process that draws upon a well-defined and well-communicated set of values will in itself contribute to the perception of fairness and thereby foster buy-in across stakeholder groups (Gibson et al. 2005). Third, as decision-makers in the PHSA themselves have indicated, education with internal stakeholders is an important part of gaining cultural acceptance for explicit approaches to priority setting. A related issue is the need for agreement on the role of both the public and physicians (Mitton and Donaldson 2004).  

Further specific points can be highlighted from this exercise, such as the need for earlier integration of financial personnel to ensure accuracy in cost projections; due diligence in outlining the system-wide impact of program investments; and more detailed evaluation, both of the outcomes of investment decisions against the defined criteria and of the fairness and utility of the process itself. Both qualitative and quantitative methods, over time, can provide insight into these issues.  

As this case study reports on a prioritization process from a single health authority, its generalizability may be limited. As well, the evaluation insights provided by decision-makers were not formally analyzed, but rather simply documented and reported. Nonetheless, the underlying issue that this study addresses is familiar to every health authority in the country: given limited resources, decision-makers must make choices about which services to fund and which not to fund. The dilemma of resource scarcity is universal, and the organizational context will affect the decisions and processes utilized. Thus, the indicators employed in this study will likely be useful to others embarking on similar work elsewhere. Drawing insight from the literature also helps to round out the opinions presented by our decision-makers.

Finally, what can be made of the issue of having three different rankings from three different scoring methods? In actuality, the rankings were quite close, save two important differences. The Pediatric Oncology Network scored the highest on the cost-benefit ratio, with a relatively low overall benefit score and very low relative cost. If the Executive Committee wanted to maximize value for dollars spent, this would seem the best choice. However, the committee likely did not pursue this option vis-à-vis others, owing to the low benefit score. In fact, the Executive Committee did not use the cost-benefit scores for two reasons: (1) the committee held that prioritization decisions should be based on the benefit of the programs, not the costs, and (2) the cost-benefit ratios as presented were driven by the cost of the program, owing to the manner in which they were calculated. As discussed above, consideration of program size should alleviate this second challenge in future iterations.  

Further, the Alcohol and Drug Treatment program received the highest overall benefit score but ranked second lowest on cost-benefit and relatively low on the "gut-feel" score. The high cost of this program for the relative benefit achieved was an obvious factor in the decision to exclude it from the top three ranked options. The take-home message here is that the highest-cost and lowest-benefit options are unlikely to receive funding, a point that was elucidated through the scoring methods used in this exercise.

While some might argue that a formal process is unnecessary owing to the close match between the overall benefit scores and the "gut-feel" ranking, both this set of decision-makers (as evidenced in the survey results outlined above) and the literature on fair process cited above clearly indicate that a shift towards a more transparent, evidence-based process is warranted. Results from a more formal, evidence-based process can lend credibility to decisions that previously would have been arrived at through less rigorous activity.

Conclusion

The prioritization process undertaken by the Executive Committee of the PHSA for the 2005/06 planning cycle was by no means perfect. Specific concerns were raised over how well the criteria captured the concept of benefit across such a diverse set of programs, and those outside the process may well question a final top three ranking that so closely reflected the "gut-feel" scores. On the other hand, the process does reflect the real-world complexities faced by a group of senior managers trying to make sense of a wide range of information from multiple sources on a tight timeline.  

The literature would suggest greater attention to process fairness, and indeed, an economic perspective would indicate the need to release resources from one area to invest in another, thereby improving the use of resources against the given criteria. Importantly, however, the key is to take things one step at a time. Priority setting as a management process needs to adapt to the context, and change can thus be slow. The process described here is clearly more transparent and more evidence-based than activity carried out previously in the PHSA. In time, with proper evaluation, the PHSA can continue to improve its priority setting, thereby making strides towards getting the most out of the limited resources available.



L'établissement des priorités à la Provincial Health Services Authority : étude de cas pour le cycle de planification 2005-2006

Résumé

Introduction : À partir d'un sondage mené auprès des principaux décideurs de la Provincial Health Services Authority (PHSA) de Colombie-Britannique, des cadres supérieurs et des chercheurs de la University of British Columbia ont établi et mis en application une procédure d'établissement des priorités concernant les nouvelles options de service au sein même de l'organisme pour le cycle budgétaire 2005-2006.

Méthodologie : Une approche d'étude de cas a permis de consigner les détails de l'élaboration et de la mise en application de la procédure d'établissement des priorités, et la réaction des décideurs a été notée en vue de l'évaluation de l'activité. Des données tirées de la documentation disponible ont aussi servi à l'identification des domaines où des améliorations sont possibles.  

Résultats : L'exécutif de la PHSA, composé de 13 membres, a procédé à l'établissement des priorités. Les critères ont été définis et pondérés, puis explicitement évalués dans l'optique des nouvelles options de financement. Un logiciel de décision multi-attributs a servi à ordonner les options selon la mesure d'ensemble de leurs bienfaits. Des rapports coûts-avantages ont été établis, et les options de programme ont aussi été classées en fonction de la réaction instinctive des décideurs. Les diverses méthodes ont produit des classements comparables, et chaque méthode a contribué aux décisions finales de l'exécutif quant aux programmes auxquels accorder un nouveau financement limité.  

Conclusion : Grâce à un effort de collaboration entre décideurs et chercheurs, la PHSA a réussi à délaisser les méthodes d'allocation des nouvelles ressources fondées sur la politique et la négociation pour adopter une procédure transparente et défendable basée sur la preuve. Les leçons tirées de l'activité en cours pourront influencer l'établissement futur des priorités à la PHSA et offrir de nouvelles perspectives à d'autres décideurs.

About the Author

Craig Mitton, PhD
Centre for Healthcare Innovation and Improvement
B.C. Research Institute for Children's and Women's Health, Vancouver, BC
Faculty of Health and Social Development
University of British Columbia Okanagan, Kelowna, BC

Jennifer MacKenzie, MBA
Provincial Health Services Authority of British Columbia
Vancouver, BC

Lynda Cranston, MScN
Provincial Health Services Authority of British Columbia
Vancouver, BC

Flora Teng, MPH
Centre for Healthcare Innovation and Improvement,
B.C. Research Institute for Children's and Women's Health
Vancouver, BC

Correspondence may be directed to: Craig Mitton, PhD, Faculty of Health and Social Development, University of British Columbia Okanagan, 3333 University Way, Kelowna, BC V1V 1V7; tel.: 250-807-8707; fax: 250-805-8505; email: craig.mitton@ubc.ca.

Acknowledgment

At the time of this work, Craig Mitton held a New Investigator Award from the Canadian Priority Setting Research Network. The views expressed are those of the authors, not the Provincial Health Services Authority of British Columbia.

References

Astley, J. and W. Wake-Dyster. 2001. "Evidence-Based Priority Setting." Australian Health Review 24(2): 32-39.

Birch, S. and S. Chambers. 1993. "To Each According to Need: A Community-Based Approach to Allocating Health Care Resources." Canadian Medical Association Journal 149: 607-12.

Cranston, L. and W. Powell. 2004. Leveraging Strengths, Transforming Health Care: The PHSA Strategic Plan. Vancouver: Provincial Health Services Authority.

Daniels, N. and J. Sabin. 1998. "The Ethics of Accountability in Managed Care Reform." Health Affairs 17: 50-64.

Donaldson, C. 1995. "Economics, Public Health and Health Care Purchasing: Reinventing the Wheel?" Health Policy 33(2): 79-90.

Donaldson, C., A. Bate, C. Mitton, S. Peacock and D. Ruta. 2005. "Priority Setting in the Public Sector: Turning Economics into a Management Process." In J. Hartley et al., eds., Managing Improvement in Public Service Delivery: Progress and Challenges. London: Nuffield Trust.

Farrar, S., M. Ryan, D. Ross and A. Ludbrook. 2000. "Using Discrete Choice Modelling in Priority Setting: An Application to Clinical Service Developments." Social Science and Medicine 50: 63-75.

Gibson, J., D. Martin and P. Singer. 2005. "Evidence, Economics and Ethics: Resource Allocation in Health Services Organizations." Healthcare Quarterly 8(2): 50-59.

Gibson, J.L., C. Mitton, D.K. Martin, C. Donaldson and P.A. Singer. In press. "Ethics and Economics: Does Program Budgeting and Marginal Analysis Contribute to Fair Priority Setting?" Journal of Health Services Research and Policy.

Ham, C. and A. Coulter. 2001. "Explicit and Implicit Rationing: Taking Responsibility and Avoiding Blame for Health Care Choices." Journal of Health Services Research and Policy 6(3): 163-69.

Jan, S. 2003. "A Perspective on the Analysis of Credible Commitment and Myopia in Health Sector Decision Making." Health Policy 63(3): 269-78.

Lomas, J., G. Veenstra and J. Woods. 1997. "Devolving Authority for Health Care in Canada's Provinces: 2. Backgrounds, Resources and Activities of Board Members." Canadian Medical Association Journal 156(4): 513-20.

Madden, L., R. Hussey, G. Mooney and E. Church. 1995. "Public Health and Economics in Tandem: Programme Budgeting, Marginal Analysis and Priority Setting in Practice." Health Policy 33: 161-68.

Martin, D. and P. Singer. 2003. "A Strategy to Improve Priority Setting in Health Care Institutions." Health Care Analysis 11(1): 59-68.

Miller, P. and L. Vale. 2001. "Programme Approach to Managing Informed Commissioning." Health Services Management Research 14: 159-64.

Mitton, C. and C. Donaldson. 2002. "Setting Priorities in Canadian Regional Health Authorities: A Survey of Key Decision Makers." Health Policy 60(1): 39-58.

Mitton, C. and C. Donaldson. 2004. The Priority Setting Toolkit: A Guide to the Use of Economics in Health Care Decision Making. London: BMJ Books.

Mitton, C., S. Patten, H. Waldner and C. Donaldson. 2003. "Priority Setting in Health Authorities: A Novel Approach to a Historical Activity." Social Science and Medicine 57: 1653-63.

Mitton, C. and S. Prout. 2004. "Setting Priorities in the South West of Western Australia: Where Are We Now?" Australian Health Review 28(3): 301-10.

Olsen, J.A. and C. Donaldson. 1998. "Helicopters, Hearts and Hips: Using Willingness to Pay to Set Priorities for Public Sector Health Care Programmes." Social Science and Medicine 46: 1-12.

Peacock, S. 1998. An Evaluation of Program Budgeting and Marginal Analysis Applied in South Australian Hospitals. Melbourne: Centre for Health Program Evaluation, Monash University.

Ryan, M., S. Kromm and C. Mitton. 2005 (July 10-13). "Into the Margin: Using Discrete Choice Experiments to Develop the PBMA Framework." Presentation at the International Health Economics Association meeting, Barcelona.

Teng, F., C. Mitton and J. MacKenzie. Under review. "Priority Setting in the Provincial Health Services Authority: Survey of Key Decision Makers 2005."

VISA: Visual Interactive Sensitivity Analysis. 1995. Copyright © V. Belton, University of Strathclyde. Visual Thinking International, Ltd. Retrieved July 10, 2006. http://www.simul8.com/products/visa.htm.