Abstract

Objective: This research reviews a 10-year period (2004–2014) to understand the development and outlook for healthcare organization performance measurement in the Quebec healthcare system, in an attempt to objectify the relationships within the configuration of its principal institutional actors.

Methods: This is a qualitative study combining the analysis of official publications with fieldwork based on 13 semi-directed interviews, conducted in 2014, with informants in key performance measurement positions within the Quebec healthcare system.

Results: Performance measurement has generated tensions, both internally, between different branches of the Department of Health, and externally, with a strong coalition of external institutional actors defending a shared, homogeneous vision of performance. Four major types of political power plays, arising from power struggles around performance models and indicators, converged on the same implicit issue: the need to attain greater legitimacy in order to impose an authoritative frame of reference.

Introduction

For the past decade, the Quebec health system has been committed to achieving a "shift to performance." The responsibility to be transparent and accountable to the population, combined with more demanding efficiency requirements for its healthcare organizations, has made performance assessment a strategic priority. The new role assigned to Quebec healthcare organizations in 2005 encouraged the emergence of such an orientation. As the Department of Health (DOH) has gradually introduced reforms, healthcare organizations have been integrated into a more hierarchical architecture with comprehensive governance, under a "symbolic quest for coordination" (Dupuis and Farinas 2010). The publication, a few years later, of the Castonguay Report (Castonguay et al. 2008) reinforced the need to systematize facility performance assessments vis-à-vis health objectives in both clinical and economic terms. However, the quest to objectify performance measurements in a coherent manner has so far been in vain.

This article investigates how Quebec's principal institutional actors pursued this quest for coherence in performance measurement by examining the issues raised within what became a relatively fragmented process. Although considerable work has been done on designing performance measurement systems for healthcare organizations (e.g., Deber 2014; Kruk and Freedman 2008; Marchal et al. 2014), the underlying institutional issues remain relatively unexplored. In this sense, the analysis of public policy development offers a useful conceptual framework, particularly when one moves beyond the "statist vision" and recognizes the growing number of institutional actors involved in these processes. By focusing on the role played by specialists, the conflicts between actors and their resources of influence, we drew on currents of analysis such as the Advocacy Coalition Framework (Sabatier and Jenkins-Smith 1999) and the "epistemic community" (Haas 1992) to answer the following question: What role did institutional actors play in framing how performance is measured in Quebec healthcare organizations? With this in mind, we analyzed the tensions that this process has generated in Quebec's healthcare system over the last 10 years.

Methods

This work is based on the methodology typically used in surveys of public action, combining official publications of the public authorities with interviews of the key institutional actors concerned. Thirteen semi-directed interviews were conducted in 2014 by two authors (PF and CS) with a panel of the main institutional actors involved in performance assessment in Quebec's healthcare system. Participants were selected to meet two requirements (Belorgey 2012): the need to collect the personal accounts of the high-level actors involved, and the need to conduct a complete institutional review by integrating all four levels of governance:

  • Local: healthcare organizations that, over a given geographical area, are responsible for acute hospital care, extended and residential care and primary care and services.
  • Regional: healthcare agencies.
  • Provincial: DOH; Auditor General of Quebec; Commissioner of Health and Well-Being (CHWB), mandated by law to monitor the performance of the Quebec healthcare system; Quebec Association of Healthcare Organizations (QAHO); and Accreditation Québec.
  • National: Canadian Institute for Health Information (CIHI); Accreditation Canada.

The survey protocol adopted a diachronic perspective based on both the retrospective dimension of participants' accounts and our "informative and narrative" use of the discussions (Pinson 2007). The strategy for conducting and analyzing the interviews was based on the "life stories" methodology (Bertaux 1997) and on systematically cross-checking the various discussions and comparing them with the written information, to ensure that the researchers maintained the critical distance required from the subjective views of each interviewee (Friedberg 1997). The protocol dealt with the origins, development and outlook for performance measurement, in an attempt to objectify the relationships within this configuration of actors (Sabatier 1986).

Results

Piecemeal start to performance appraisal

The legitimacy of performance appraisal in the Quebec healthcare system would now appear to be broadly established. All the institutional actors agree that it is essential to improving the health system and the quality of service provided by healthcare organizations. These actors represent, first and foremost, a true "epistemic community," a network of professionals with recognized expertise and competencies in specific fields, who can articulate relevant knowledge on public policies in their fields (Haas 1992). Yet this consensus on the merits of performance appraisal was accompanied by a scattered series of initiatives. Performance appraisal was developed within numerous institutional frameworks that were relatively independent of each other, through isolated approaches and efforts. The DOH has undoubtedly played an ambiguous role over the last few years, maintaining a certain dissonance between rhetoric that was resolutely favourable to performance assessment and a relative lack of involvement in the field in terms of concrete initiatives to encourage such developments. In terms of policy timing, the DOH has been late in dealing with these issues. This has resulted in reprimands from the Auditor General of Quebec, which, in its 2010–2011 report, underscored the department's failure to monitor the performance of organizations in the health and social services system. The department was criticized for not exercising the necessary leadership and for lagging behind other public administrations (VGQ 2011). This public blame came with recommendations to clarify actors' roles and responsibilities in performance monitoring by setting up a structured program, including a definition of performance and a measurement model (VGQ 2011).

A historical review of departmental action on these issues attests to a process that several institutional partners considered abnormally slow. The first departmental task forces were established in 2008–2009 and led to the creation of a position of assistant director general of performance in 2010, followed by a commitment to performance assessment in DOH strategic planning and, finally, a 2012–2015 action plan (MSSS 2012). The plan was largely inspired by recommendations from the Auditor General, for whom the sine qua non condition of successful performance assessment was collaboration and coordination among all the institutional actors involved (VGQ 2011). However, it is important to note the substantial prior efforts made by external institutional actors to adopt formal performance assessment mechanisms. Indeed, in response to the priorities set by public authorities 15 years earlier (including in the Public Administration Act of 2000), three major institutional actors – CHWB, QAHO and one regional agency – had already adopted firm approaches and designed operational tools to measure the performance of healthcare organizations, undertaking their own initiatives and experiments in the absence of concrete DOH action. These efforts led to the publication of several widely disseminated analytical reports.

This prolonged history of performance assessment helped create an asymmetry of expertise in terms of the skills acquired in assessment methodology, indicator development and data interpretation. The result was a de facto climate of mistrust, shared by these various actors, who had developed strong expertise that was, in some ways, superior to that of the DOH. This mistrust arose in particular because the DOH clearly intended to play a role of leadership and control, which was perceived as an attempt to take unilateral control of the domain. Pointing to a loss of legibility in assessment approaches and the absence of a shared vision, the DOH sought to position itself, in its own words, as a "conductor." To convince the pioneers in this area, the DOH used its "nodality," meaning "the property of someone in the middle of a social network or an information network" (Baudot 2014; Hood 1986), through the manufacture and mastery of instruments, in this case, performance assessment indicators and models.

So even if this initiative was conducted with a reassuring "desire to work together," it was very poorly received owing to a lack of consultation and transparency. Several interviews revealed that many institutional actors regretted the failure to listen and the top-down approach adopted by the DOH, which appeared to want to operate in a vacuum, with no transparency, making a clean sweep of the past. They saw this as a way to take control of the situation, particularly in terms of design (by imposing a DOH performance model [ARSSSM 2013] that differed from the model adopted by most of the external institutional actors), metrology (by restarting from scratch the work of selecting, defining and calculating indicators) and access to source data (by seeking to impose exclusive control of the databanks needed to calculate indicators). Furthermore, economic factors related to budget cuts exacerbated uncertainties around the political use of this performance assessment expertise. As a result, the stakeholders who had developed the approach and instruments on the basis of the public discourse of the time feared that these would be misused politically and applied for the sole purposes of control and sanctions. These issues were especially apparent in the tense relations between the DOH and the CHWB, at the crossroads between rationales of power and expertise (Box 1).


BOX 1. The agile Commissioner of Health and Well-Being
The Commissioner of Health and Well-Being (CHWB) was created in 2006 out of the government's desire to have a strongly independent organization, removed from politics, to assess performance in the healthcare system. With a small team of 16 people, the CHWB published its initial performance report in 2008, adopting the EGIPSS performance model (an acronym for the comprehensive and integrated assessment of the performance of healthcare systems), developed by a team of researchers at the Department of Health Administration of the Université de Montréal (Marchal et al. 2014; Minvielle et al. 2008; Sicotte et al. 1998). One of the advantages of the model was that it related production indicators to quality indicators, allowing for inter-organizational benchmarking. Since then, the CHWB has proven to be extremely active and entrepreneurial, taking risks made possible by its independence, but also by its credibility, gained through the publication of an annual report on the performance of the entire Quebec system, as well as through work carried out in Quebec's regions. To this end, it established strong partnerships with other major institutional actors, such as the Quebec Association of Healthcare Organizations (QAHO) and the Canadian Institute for Health Information (CIHI), for discussion and the pooling of practices. Its relations with the Department of Health (DOH), on the other hand, were characterized by tensions and misunderstandings, largely fed by the DOH's strong will to impose the use of the department's own performance indicators and model rather than the EGIPSS model used by several other institutional actors. The CHWB regretted this top-down imposition and standardization of a performance appraisal approach; by contrast, it valued the plurality of approaches taken to measuring performance in Quebec. This diversity was seen as enriching the discussion and providing a source of creativity and innovation, as models were adapted and adjusted to the missions of each facility. This vision had direct links to the CHWB's institutional experience, as its survival strategy was largely based on its ability to innovate and stay ahead of other, larger institutions (such as the DOH), which, for structural reasons, were slower to break new ground. The CHWB was more productive with limited human and financial means, and it intended to defend this agility.

 

Stresses and strains in performance appraisal

The legitimation of the DOH's emerging governance has generated tensions, both internally, between different branches of the department, which presented a fragmented vision of performance appraisal, and externally, with a strong coalition of external institutional actors that defended a shared, homogeneous vision of performance.

First, there was an inherent paradox between the rhetoric of the DOH's performance branch, which spoke to external institutional actors about a need for consistency (in fact, about the need to rally around its own vision), and the department's structure, which divided the work among different branches. Structurally, three divisions (performance, quality and finance) were involved in performance appraisal, each working independently of the others and developing its own set of performance indicators. Two key thrusts of the DOH's policy were the need to achieve balanced budgets in healthcare facilities and to manage waiting lists for access to care, mainly in surgery and cancer treatment. These key dimensions of performance remained the responsibility of the finance division, which closely monitored budget adherence and signed performance contracts with healthcare facilities to ensure that service levels would reduce waiting lists. The quality and performance divisions were smaller and had been created more recently. The quality division was concerned with quality of care and patient safety, a broader definition of performance that was also manifest in other Canadian jurisdictions (e.g., Ontario: Kromm et al. 2014). The performance division was trying to carve out its own domain within performance, while, internally, the finance division already exercised considerable power through its control of budgets. The DOH was, therefore, notable for a fragmented approach to performance appraisal that lacked a coherent vision to formalize and organize it, while, externally, powerful institutional actors had already adopted a homogeneous approach by using the same comprehensive and integrated model of performance assessment.

There were thus two opposing visions of performance: the external institutional actors took a comprehensive approach, treating performance as an integrated phenomenon in which several types of indicators interact with each other, whereas the DOH operated under a fragmented vision of performance, parsimoniously developing limited sets of indicators for various sectors (e.g., balanced budgets, performance contracts, selected waiting times).

This fragmented approach to performance appraisal had a corollary in the problems experienced by healthcare organizations trying to operate under a consistent concept of performance. In the healthcare organizations, considerable effort was being made to find an optimal balance between the different sets of DOH indicators and the EGIPSS performance model (Marchal et al. 2014; Minvielle et al. 2008; Sicotte et al. 1998) supported by the main institutional actors with high visibility in performance appraisal (CHWB, QAHO, a regional agency). Once these various frameworks were superimposed, it was very difficult to create a coherent whole.

Performance appraisal in search of a scientific foundation

When the DOH entered the field of performance assessment, it took several courses of action. It refused to endorse the dominant model promoted by external institutional actors and began building its own indicators. This took more time than necessary because existing work was ignored. Simultaneously, there were attempts to take exclusive control of the data used to calculate indicators, i.e., the critical resources needed by external actors.

The DOH's positioning was based on the idea of a system "inundated with indicators"; it was attempting, in a sense, to step in and curb their uncontrolled proliferation. This was the recurring theme of "indicator chaos," readily brandished both quantitatively (too many indicators) and qualitatively (different measures for the same indicator). The DOH's interpretation was that this plethora of indicators was incompatible with management requirements, and that there was a need to take back control and restore order. The DOH employed a two-pronged strategy: it sought to distinguish its action from that of the other institutional actors while, at the same time, stating that it wanted to encourage more consistent performance appraisal. This strategy involved the selection of a model different from the one adopted by the three major institutional actors. The DOH model was created unilaterally, using internal resources and its own experts. This contrasted with the approach taken by the external actors, who collaborated with one another and drew on the expertise of academic researchers to develop and operationalize their performance indicators. The DOH argued that while the existing performance models could be used to make comparisons, its own approach was better suited to managing performance. But this argument was contradicted by the experience of two important institutional actors, who had specifically framed their approach as a path to improving performance (AQESSS 2013; Roy 2008) (Box 2).


BOX 2. The Montérégie regional agency and what was learned

This regional agency was highly innovative, taking inspiration from the latest currents of thought in performance management. The Montérégie regional agency carried out sustained work on the comprehensive and integrated analysis of performance under the EGIPSS model (in partnership with the Université de Montréal), testing various methodological approaches for measuring clinical continuums, sharing performance results and comparing facilities (AQESSS 2013; Roy 2008). The strategic focus was on counterbalancing an approach to performance control limited to the financial dimension. Particular care was taken to legitimize the performance assessment so as to maximize the odds that it would be approved and accepted. It was, therefore, developed with a focus on "self-assessment," giving the healthcare organizations control over which aspects of their activities would provide the basis for assessment, starting from their own questions about their performance level and then converting these questions into a series of indicators selected from the bank of indicators in the EGIPSS model. Beginning in 2004, the agency developed a structure for supporting the region's healthcare organizations in the performance improvement process, creating a separate performance improvement branch (in other regions, performance was often associated with, or even confused with, management agreements ["performance contracts"]). Having achieved this, the agency was able to adopt an approach focused on continuous performance improvement rather than simply on accountability. These experiments highlighted the conditions under which the performance appraisal approach can achieve its expected virtues: they encouraged the sharing of good practices in the system and, because priority had been given to collaboration and joint reflection, brought together actors who had previously not known each other. This galvanization of the regional system thereby operationalized the concept that had led to the creation of these healthcare organizations in 2005. It proved to be a positive approach, whose efficiency was based in part on how these new public policy instruments (assessment models, indicators) fostered learning effects and broke down silos as soon as actors became aware of them, in a manner akin to the "open method of coordination" (Kerber and Eckardt 2007).

The performance reports were conceived as a tool for the emulation, dissemination and transfer of good practices among healthcare organizations, but, above all, as a management tool rather than an accountability instrument. This rationale was in line with the idea of sharing lessons learned in different organizations, with some organizations serving as true laboratories for experiments. The Montérégie agency's strategy reveals a tension that arises where accountability and continuous improvement meet. These two paradigms for action were not reconciled within a single approach, neither in the healthcare organizations nor even in the public actions taken by the DOH. The ways in which performance assessment and management were appropriated depended, in effect, on the specific structural dimensions of each organization: the existence of a quality manager (in which case the approach was more easily accepted) or performance management through the finance branch (in which case there was more reluctance to accept external control). Similarly, depending on various factors such as the economic environment, the configuration of actors or the vision advocated by stakeholders, one paradigm took precedence over the other, producing variations. But such variations were not neutral in the eyes of external institutional actors: continuous improvement (through a comprehensive vision of population health) was generally perceived as a more inspiring model, while accountability was perceived as prone to producing more bureaucratic and coercive shifts. This oscillation between the two in the deployment of an appraisal approach was a source of ambivalence. Rather than being a linear process, moving assuredly through the development of instruments and methodologies, it was fragile, with a constant risk of encountering pitfalls. This fragility was apparent in the rhetoric of certain actors who had long been involved in the process, who often foresaw further changes and feared getting bogged down or, worse, losing ground.


 

Objectifying the inter-relationships between the various institutional actors allows us to see how they were characterized by several political power plays, arising from power struggles around performance instruments. Bringing together these power plays helps clarify the rhetoric, positioning and rivalries among the various institutional actors. Table 1 organizes these power plays into four major types converging around the same implicit issue: the need to attain greater legitimacy in order to impose an authoritative frame of reference. This issue was particularly important to the DOH's new performance division, which was having difficulty carving out a place for itself and establishing legitimacy in the performance field, where it competed with actors that had very well-established relations with healthcare organizations (the CHWB, through its legal mandate; the QAHO, as a lobbyist for healthcare organizations; and a regional agency, with its excellent reputation for innovation).


TABLE 1. Power plays in performance measurement

Type of power play: Choice of a performance measurement model
Concrete examples:
  • The fight over the selection of a legitimate performance measurement model.
  • The DOH developed its own in-house system, while several institutional actors were already using the same performance measurement model.
  • The main institutional actors selected the same model, developed out of research conducted by academics and published in scientific journals.

Type of power play: Selection of a series of indicators for performance appraisal
Concrete examples:
  • Tensions around the perimeter of the indicators.*
  • Issues around expanding the financial indicators.
  • Political use of the "chaos of indicators" slogan.
  • The DOH's parsimony in contrast with the more comprehensive lists of indicators selected by the external institutional actors.

Type of power play: Metrological construction of indicators§
Concrete examples:
  • Rhetoric around the "complexity of defining and measuring indicators."
  • Arguments over the "quality of the existing data" as a tactic to delay the current approach (the department's timing).
  • Exclusive use of the DOH's internal experts to the detriment of external experts (academic research).

Type of power play: Access to the data and the various information systems needed to calculate performance indicators
Concrete examples:
  • Issue of controlling and sharing the source databases. Project by the DOH to exclusively control the databases needed to calculate the indicators.
  • Many battles and rivalries around combining all the databases to analyze care paths. Potential discovery of perverse effects of management agreements.

*It should be noted that the institutional actors – external to the DOH – agreed on the quality and rigour of their respective approaches to performance. In addition, many of them met to share their expertise, often drawing inspiration from each other for more standardized performance appraisal. This collaboration went as far as sharing services (e.g., the Montérégie agency produced the QAHO indicators).
§Examples of methodological choices and indicator selection may be consulted online, including the CHWB ("Document méthodologique de l'analyse globale et intégrée de la performance," 2014, <https://www.csbe.gouv.qc.ca/publications.html>) and the QAHO ("Performance en ligne. Formation sur le rapport méthodologique du rapport performance," 2013, <www.aquesss.qc.ca>).

 

Conclusion

In both theoretical and political terms, the selection of the definitions and models that will serve as the basis for public action to assess performance is important. These choices position actors as "political entrepreneurs" (Baumgartner and Jones 2009) who, cognitively, play a role in framing or reframing governance.

Hence the existence of inter-institutional power games aimed at imposing legitimate definitions and models that can be authoritative for public action. In the specific case of performance appraisal, we have shown that these games stemmed from a conflict-laden implementation dynamic, in the sense that the DOH initiative was fragmented and multi-faceted, with highly contrasting timelines for change (Hill and Hupe 2009). Various institutional actors in the network underwent "instrumental learning" (May 1992) in their use of models and indicators, with independent capacities for appropriation. This allowed them to propose an alternative path to the DOH's attempt to reformulate policy design.

Similarly, the analysis of performance could lead to a reframing of the issues and problems at the core of health system governance, as it helps reveal and objectify certain mechanisms. It is from this perspective that we can best understand the DOH's position over the last few years – prudent but also characterized by a wait-and-see attitude – in the sense that the performance appraisal process may lead to the emergence of issues beyond its control. This is demonstrated by the openly multidimensional nature of the performance models used by external institutional actors, which introduced indicators of available resources. Such models may reveal cases of "non-performance" due to limited budget resources that could raise doubts about the department's role in resource allocation (for example, by determining that a healthcare organization was efficient but nevertheless in deficit due to a shortage of funds). Hence, the DOH's strategy – well illustrated at the structural level, where responsibilities were split among three divisions – appeared designed to exclude the budget granted to the facility (balanced budgets) from the definition of performance. In public policy terms, this reveals the problems encountered in trying to reconcile the stated intention of good public management practices, including performance appraisal, with highly pragmatic imperatives such as balanced budgets and restoring the health of public finances.

 


 

Pour une évaluation cohérente de la performance des organismes de santé. Le Québec est-il à la croisée des chemins?

Résumé

Objectif: Cette recherche se penche sur une période de 10 ans (2004–2014) afin de comprendre le développement et le panorama de l'évaluation de la performance des organismes dans le système de santé québécois, et ce, pour tenter d'objectiviser les relations dans la configuration des principaux acteurs institutionnels.

Méthodes: Il s'agit d'une étude qualitative qui combine l'utilisation de publications officielles et le travail de terrain, à l'aide de 13 entrevues semi-dirigées menées en 2014 auprès d'informateurs qui œuvrent dans des postes clés de l'évaluation de la performance au sein du système québécois de la santé.

Résultats: L'évaluation de la performance a créé des tensions tant à l'interne, entre diverses directions générales du ministère de la Santé, qu'à l'externe, face à une forte coalition d'acteurs institutionnels en faveur d'une vision homogène et commune de la performance. Quatre principaux types de jeux de pouvoir politique, causés par les luttes de pouvoir quant aux modèles et indicateurs de la performance, convergent vers le même enjeu implicite, soit l'atteinte d'une plus grande légitimité afin d'imposer un cadre de référence qui fasse autorité.

About the Author

Philippe Fache, PhD, ICD/Université Paris XIII, Paris, France

Claude Sicotte, PhD, Université de Montréal & École des hautes études en santé publique (EHESP), Montréal, QC

Étienne Minvielle, MD, PhD, École des hautes études en santé publique (EHESP), Paris, France

Correspondence may be directed to: Claude Sicotte, Professeur, Département d'administration de la santé, École de santé publique Université de Montréal, 7101, avenue du Parc, 3e étage, Montréal, QC H3N 1X9; e-mail: claude.sicotte@umontreal.ca

References

AQESSS. 2013. Le réseau en 4 questions, un regard ciblé sur la performance. Québec: Association québécoise d'établissements de santé et de services sociaux.

ARSSSM. 2013. Info Performance. № 1, Direction de la performance et des connaissances, Agence de la santé et des services sociaux de la Montérégie, Québec.

Baudot, P.Y. 2014. "Le temps des instruments. Pour une sociohistoire des instruments d'action publique." In C. Halpern, P. Lascoumes and P. Le Galès, eds., L'instrumentation de l'action publique (pp. 193–225). Paris: Presses de Sciences Po.

Baumgartner, F. and B. Jones. 2009. Agendas and Instability in American Politics. Chicago, IL: University of Chicago Press.

Belorgey, N. 2012. "De l'hôpital à l'Etat: le regard ethnographique au chevet de l'action publique." Gouvernement et action publique 2(avril–juin): 11–40.

Bertaux, D. 1997. Les récits de vie. Paris: Nathan.

Castonguay, C., J. Marcotte and M. Venne. 2008. En Avoir Pour Notre Argent – Rapport du Groupe de Travail sur le Financement du Système de Santé. Québec, PQ: Gouvernement du Québec.

Deber, R.B. 2014. "Thinking about Accountability." Healthcare Policy 10 (SP): 12–24. doi:10.12927/hcpol.2014.23932.

Dupuis, A. and L. Farinas. 2010. "La gouvernance des systèmes multi-organisationnels. L'exemple des services sanitaires et sociaux du Québec." Revue française d'administration publique 135(3): 549–65.

Friedberg, E. 1997. "L'entretien dans l'approche organisationnelle de l'action collective: le cas des universités et des politiques culturelles municipales." In Cohen S. eds., L'art d'interviewer des dirigeants (pp. 85–106). Paris: PUF.

Haas, P. 1992. "Introduction: Epistemic Communities and International Policy Coordination." International Organization 46(1): 1–35.

Hill, M. and P. Hupe. 2009. Implementing Public Policy. Thousand Oaks: Sage.

Hood, C. 1986. The Tools of Government. Chatham: Chatham House.

Kerber, W. and M. Eckardt. 2007. "Policy Learning in Europe: The Open Method of Co-ordination and Laboratory Federalism." Journal of European Public Policy 14(2): 227–47.

Kromm, S.K., G.R. Baker, W.P. Wodchis and R.B. Deber. 2014. "Acute Care Hospital's Accountability to Provincial Funders." Healthcare Policy 10(SP): 25–35. doi:10.12927/hcpol.2014.23852.

Kruk, M.E. and L.P. Freedman. 2008. "Assessing Health System Performance in Developing Countries: A Review of the Literature." Health Policy 85(3): 263–76.

Marchal, B., T. Hoerée, V. Campos da Silveira, S. Van Belle, N.S. Prashanth and G. Kegels. 2014. "Building on the EGIPSS Performance Assessment: The Multipolar Framework as a Heuristic to Tackle the Complexity of Performance of Public Service Oriented Health Care Organisations." BMC Public Health 14(1): 378.

May, P.J. 1992. "Policy Learning and Failure." Journal of Public Policy 12(4): 331–54.

Minvielle, É., C. Sicotte, F. Champagne, A.P. Contandriopoulos, M. Jeantet, N. Préaubert et al. 2008. "Hospital Performance: Competing or Shared Values?" Health Policy 87(1): 8–19.

MSSS. 2012. Plan d'action ministériel 2012–2015 en matière d'évaluation de la performance du système de santé et de services sociaux à des fins de gestion. Ministère de la santé et des services sociaux, Gouvernement du Québec.

Pinson, G. 2007. "Peut-on vraiment se passer de l'entretien en sociologie de l'action publique?" Revue française de science politique 57(5): 555–97.

Roy, D.A. 2008. Gouvernance et performance en Montérégie. Agence de la santé et des services sociaux de la Montérégie. Direction de la gestion de l'information et des connaissances. Longueuil.

Sabatier, P. 1986. "Top-Down and Bottom-Up Approaches to Implementation Research: A Critical Analysis and Suggested Synthesis." Journal of Public Policy 6(1): 21–48.

Sabatier, P. and H. Jenkins-Smith. 1999. "The Advocacy Coalition Framework: An Assessment." In P. Sabatier, ed., Theories of the Policy Process (pp. 117–66). Boulder, CO: Westview Press.

Sicotte, C., F. Champagne, A.P. Contandriopoulos, F. Béland, J.L. Denis, H. Bilodeau et al. 1998. "A Conceptual Framework for the Analysis of Health Care Organizations' Performance." Health Services Management Research 11(1): 24–48.

VGQ. 2011. Rapport du Vérificateur Général du Québec à l'Assemblée Nationale pour l'année 2010–2011, Tome II, Chapitre 7, Gouvernement du Québec.