Healthcare Policy

Healthcare Policy 5(2) November 2009: e141-e160. doi:10.12927/hcpol.2013.21178
Research Paper

Factors Affecting Physician Performance: Implications for Performance Improvement and Governance

Elizabeth F. Wenghofer, A. Paul Williams and Daniel J. Klass


Background: A physician's personal and professional characteristics constitute only one, and not necessarily the most important, determining factor of clinical performance. Our study assessed how physician, organizational and systemic factors affect family physicians' performance.

Method: Our study examined 532 family practitioners who were randomly selected for peer assessment by the College of Physicians and Surgeons of Ontario. A series of multivariate regression analyses examined the impact of physician factors (e.g., demographics, certification) on performance scores in five clinical areas: acute care, chronic conditions, continuity of care and referrals, well care and records. A second series of regressions examined the simultaneous effects of physician, organizational (e.g., practice volume, hours worked, solo practice) and systemic factors (e.g., northern practice location, community size, physician-to-population ratio).

Results: Our study had three key findings: (a) physician factors significantly influence performance but do not appear to be nearly as important as previously thought; (b) organizational and systemic factors have significant effects on performance after the effects of physician factors are controlled; and (c) physician, organizational and systemic factors have varying effects across different dimensions of clinical performance.

Conclusions: We discuss the implications of our results for performance improvement and physician governance insofar as both need to consider the broader environmental context of medical practice.

A growing literature suggests that a physician's ability to provide good patient care and avoid medical errors depends on multiple factors (Donabedian 1966, 1988; Skinner 2002; Caulford et al. 1994; Ely et al. 1995; Grol 2002; Becher and Chassin 2001; Berwick 2003; Barach and Moss 2001; Chen and Hou 2002) including, but not limited to, their personal and professional characteristics. For example, numerous studies have demonstrated that physician characteristics such as age, sex, education/training credentials and competence (i.e., knowledge, skills and attitudes) may all influence how well physicians perform (Caulford et al. 1994; Ely et al. 1995; Norton et al. 1994, 1997; McAuley et al. 1990; Norman et al. 1993; Jansen et al. 2000). However, it has also been noted that these physician characteristics account for a surprisingly small proportion of total variation observed in performance; other factors are also at play (Donabedian 2000).

For example, some studies have concluded that older physicians do not perform as well as their younger counterparts (Norton et al. 1997; McAuley et al. 1990), a finding that seems to suggest that older physicians are generally less competent. However, it has also been observed that, compared to their younger colleagues, older physicians tend to work in different practice types, such as solo practice, which may offer fewer supports for effective record keeping and workload management; with different patient populations, including older individuals with more complex continuing care needs; and in different geographic locations, which, particularly outside urban areas, may offer less access to required tests, treatments and specialist referrals (Tepper et al. 2005; Donabedian 1992). Thus, it is possible to imagine an older physician who is well trained and competent, but who nonetheless performs poorly according to standard measures because of organizational and systemic problems (Grol 2002; Kopelow et al. 1992; Rethans et al. 2002). Such different interpretations of the sources of poor performance have major implications for designing and targeting policies and interventions aimed at improving and ensuring performance.

In addition to physician characteristics, administrative and organizational structures (Caulford et al. 1994; Grol 2002; Norman et al. 1993; Donabedian 2000; Robinson 1994; Jones 2000; Ram et al. 1998; Long 2002) and financial incentives/disincentives (Robinson 1994; Safran et al. 2000; Morrow et al. 1995; Gillett et al. 2001; Goldfarb 1999; Hopkins 1999; Safran et al. 2002; Geneau et al. 2008), to name a few factors, can all have different effects on clinical performance and affect clinical behaviour. Yet, performance has traditionally been viewed as devoid of context (LaDuca 1994; LaDuca et al. 1984; Klass 2000, 2007a,b; Geneau et al. 2008), excluding both the context of the patient and the context of the organizational or systemic environments. A reason for this view may be the current lack of a comprehensive and unified conceptual framework of what individual physician performance entails (Klass 2000, 2007b). The concept needs to acknowledge the impact of the practice environment, including both the influence of organizational structures and the larger healthcare system as a whole, on the ability of physicians to perform adequately (Grol 2002; Robinson 1994; Klass 2007b; Long 2002).

In a previous paper (Wenghofer et al. 2006b), we explored the importance of the patient context in physician performance and demonstrated that performance is indeed a multidimensional construct rooted in the unique requirements of different types of physician–patient encounters. In this paper, we go on to explore how performance in these dimensions is influenced by physician factors and, additionally, by the broader organizational and systemic contexts to provide a conceptual framework within which physician performance can be studied. To do this, we analyze data from actual performance assessments of general/family practitioners (GP/FPs). We hypothesized that physicians' personal and professional characteristics constitute only one, and not necessarily the most important, determining factor of performance. We consider the implications of our findings on physician governance and performance improvement.

Data and Methods

Performance data

In 1980, the College of Physicians and Surgeons of Ontario (CPSO) initiated a peer assessment program that includes practice visits to a random sample of the province's approximately 28,000 physicians by trained physician assessors (peers). Approximately 2% to 3% of the total practising physician population of Ontario is assessed annually. In this study, we analyzed data from 532 GP/FPs randomly selected for peer assessments conducted between 1997 and 2000 by the CPSO. Since a detailed description of the CPSO's peer assessment process can be found in previously published studies (Norton et al. 1994, 1997, 1998, 2004; Norton and Faulkner 1999; McAuley and Henderson 1984; McAuley et al. 1990; Wenghofer et al. 2006a,b), we note here only that during a visit to a physician's practice, a single peer assessor typically reviews 20 to 30 complete patient records, discusses the findings with the physician and then fills out a 46-item protocol relating to records and quality of care. The inter-rater reliability between assessors has been shown to be excellent (kappa = 0.89) (unpublished internal studies from the CPSO). In our previous work (Wenghofer et al. 2006b), we described how we computed scores on multiple-item measures of performance from the assessment protocols for five dimensions of GP/FP performance (see Table A1 in the Appendix for detailed definitions):

  1. managing patients with acute conditions and new presentations (acute)
  2. managing patients with chronic conditions (chronic)
  3. providing patients with continuity of care and referrals (continuity)
  4. providing patients with well care and health maintenance (well care)
  5. managing patient records (records)

The calculated scores for each dimension range from a minimum score of 1.0, indicating poor performance, to a maximum score of 4.0, indicating excellent performance (Wenghofer et al. 2006b).
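As a hypothetical illustration only (the item wording and the aggregation rule below are our assumptions; the actual scoring procedure is detailed in Wenghofer et al. 2006b), a dimension score of this kind can be sketched as the mean of an assessor's 1-to-4 item ratings:

```python
def dimension_score(item_ratings):
    """Average a set of 1-4 assessor item ratings into one dimension score.

    Illustrative sketch: assumes each dimension score is a simple mean of
    its protocol items, which yields the 1.0 (poor) to 4.0 (excellent) range.
    """
    if not item_ratings:
        raise ValueError("at least one item rating is required")
    if any(r < 1 or r > 4 for r in item_ratings):
        raise ValueError("item ratings must lie between 1 and 4")
    return sum(item_ratings) / len(item_ratings)

# e.g., five hypothetical 'chronic conditions' items rated by a peer assessor
print(dimension_score([4, 3, 4, 4, 3]))  # 3.6
```

Under this assumed averaging rule, a physician rated at the floor on every item would score 1.0 and one rated at the ceiling would score 4.0, matching the possible range noted above.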

Factors affecting performance

In this paper, we focus on the extent to which variation in physicians' scores along each performance dimension is explained by physician, organizational and systemic factors.

  • Physician factors. We define physician factors as those attributes of the individual that have traditionally been the object of interest regarding physician performance and competence assessment. Physician factors specifically focus on those features that physicians "bring with them" to any practice setting or community. In our study these include age; sex; years in practice; medical school (North American vs. Other); College of Family Physicians of Canada (CFPC) certification; years practising in current setting (i.e., as a proxy indicator of experience with current patient population); and whether or not the physician had been previously peer assessed by the CPSO.
  • Organizational factors. We define organizational factors as representative of the characteristics of the immediate setting in which the physician works. These are features that may change if a physician moves from one setting to another. In this study, these include solo practice; episodic care practice/walk-in clinic (WIC); total number of clinical and administrative staff; hours worked per week in primary practice; number of patient visits per week in primary practice; active hospital appointment (yes/no); teaching (yes/no); and focused practice scope (yes/no). The effects of solo (Norman et al. 1993; Shine 2002) and WIC (Jones 2000, 2006; Brown et al. 2002) practice structures were specifically evaluated because both are often considered to have potentially negative effects on practice.
  • Systemic factors. The systemic factors we have selected are intended to provide a snapshot of several key features associated with the broader community in which a physician's practice is situated. These include access to 911 services at the time of assessment (yes/no); estimated minutes for access to emergency medical services (EMS); availability of four core diagnostic tests (expressed as a proportion); physician-per-1,000-population ratio; and northern practice location (yes/no).

Data for physician, organizational and systemic factors were either extracted from the CPSO registry (which is verified through documentation reviews and extensive credentialling processes) or self-reported by physicians in a pre-assessment questionnaire as a required component of the peer assessment process. The physician-per-1,000-population ratio was calculated by linking CPSO registry data for primary practice address to the 1996 Canadian Census data at the census subdivision level, which closely mirrors municipal divisions. Northern practice location was indicated, at a coarse geographic level, by the "forward sortation area" (FSA) of the primary practice postal code (FSA beginning with "P").
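The two geographic measures can be sketched as follows. The census subdivision codes, physician counts and populations below are invented for illustration; the actual linkage used CPSO registry addresses and 1996 Census files:

```python
# Invented example data: physician counts (from registry primary practice
# addresses) and census populations, both keyed by census subdivision code.
physicians_by_csd = {"3506008": 1450, "3553005": 95}
population_by_csd = {"3506008": 721136, "3553005": 43686}

def physicians_per_1000(csd):
    """Physician-per-1,000-population ratio for one census subdivision."""
    return 1000.0 * physicians_by_csd[csd] / population_by_csd[csd]

def is_northern(postal_code):
    """Northern Ontario flag: forward sortation area starts with 'P'."""
    return postal_code.strip().upper().startswith("P")

print(round(physicians_per_1000("3506008"), 2))  # 2.01
print(is_northern("P3E 5J1"))  # True; Ontario FSAs beginning with 'P' are northern
```

Note that this flag is deliberately coarse: it classifies an entire practice by the first letter of its postal code, which is why the text describes it as a high-level indicator.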

Analysis

Descriptive statistics were produced for each of the five dimensional scores. We conducted two series of multiple regressions using different models. In the first, independent regression model, each multiple-item measure of performance was regressed on the physician factors alone. The independent model thus estimates the effects of personal and professional characteristics without controlling for organizational or systemic factors. The second, full regression model examined the effects of the physician factors when the organizational and systemic factors were entered simultaneously into the regressions. The variance estimates generated by the full regression model indicate the marginal (or net) increase in the variance explained by each group of variables representing the physician, organizational and systemic factors. The variance estimates, regression coefficients and standard errors of the coefficients for each model are reported. Variance inflation factors (VIFs), tolerance and between-predictor correlations were evaluated to determine the level of collinearity in the models. In view of the large number of independent variables entered in our model, we did not explore the potentially large number of interaction effects, as we were concerned about overparameterizing the model given our sample size (Lewis 2007).
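The logic of the two model series can be sketched with simulated data (all numbers below are invented stand-ins, not the study's data). Fitting the physician-factor block alone and then all three blocks together yields the total and marginal (net) R² values described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 532  # matches the study's sample size; the data themselves are simulated

# Three simulated blocks of predictors (stand-ins for the actual factors)
phys = rng.normal(size=(n, 3))   # physician factors (age, sex, certification, ...)
org = rng.normal(size=(n, 3))    # organizational factors (visits/week, solo, ...)
sys_ = rng.normal(size=(n, 3))   # systemic factors (north, MD:population, ...)
score = 0.2 * phys[:, 0] + 0.3 * org[:, 0] + 0.3 * sys_[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_independent = r_squared(phys, score)                   # physician factors only
r2_full = r_squared(np.hstack([phys, org, sys_]), score)  # all three blocks
net_r2_context = r2_full - r2_independent  # marginal variance from org + system
```

Because the full model nests the independent model, r2_full can never fall below r2_independent; the difference is the net contribution of the organizational and systemic blocks, analogous to the net R² values reported in Table 4.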

Results

Physician and practice description

The average age of physicians in the sample was 51.0±9.91 years, with a median of 50. This is comparable to the 51.2-year average age of Ontario physicians (CPSO 2008a). The sex distribution of the sample was 88.9% male and 11.1% female, a higher proportion of males than in the CPSO registry database, which shows that 67.9% of physicians in Ontario are male and 32.1% female (CPSO 2008a).

The sample physicians worked an average of 29.8 hours and saw an average of 131 patients per week in their primary office setting. Half of the practices (i.e., at the median) employed two or more administrative and/or clinical staff members; this value did not differentiate between clinical and administrative staff, nor did it distinguish between part-time and full-time staff. In addition, 20.2% of sample physicians engaged in teaching, 5.4% had clinically focused practices and 64.7% had active hospital appointments. Solo and WIC practices were the primary practice settings for 42.1% and 7.9% of the sample physicians, respectively.

Descriptive statistics of dimensions of performance

The majority (78%) of assessed physicians had satisfactory practices; 14.1% required a reassessment and 7.9% required an interview because of care concerns. This finding is consistent with the typical distribution of assessment results since the inception of the CPSO peer assessment program. The descriptive statistics for the scores on the five performance dimensions were positively skewed, reflecting the propensity of most physicians to do well on assessment (Table 1). However, as reported in earlier studies, the variations present in the dimensional scores are sensitive to significant differences in assessment outcomes (Wenghofer et al. 2006b).


TABLE 1. Descriptive statistics of performance dimension scores from peer assessment
n=532 Acute Chronic Continuity Well care Records
Mean 3.52 3.66 3.85 3.29 3.59
Standard Deviation 0.49 0.41 0.34 0.61 0.34
Minimum Score* 1.63 1.71 1.60 1.33 1.92
Maximum Score* 4.00 4.00 4.00 4.00 4.00
* Note: Possible range on all dimensional scores is a minimum score of 1.0 and a maximum score of 4.0.

Independent regression model

Results from the independent regression model, in which only the physician factors were evaluated, are presented in Table 2. Collinearity diagnostics indicated that years in practice is highly correlated with physician age (r=0.94); thus, years in practice was removed from all regression models (Kleinbaum et al. 1988). As in previous studies of the peer assessment results (Norton et al. 1994, 1997; McAuley et al. 1990), our results confirmed that personal and professional characteristics, particularly sex and certification, and to a lesser degree age, significantly influenced performance, with the exception of continuity of care, for which the independent regression model was not significant. However, unlike in previous studies, the effects were found to vary across performance dimensions. For example, the regression results indicated that females perform better in acute care, well care and records management, but sex differences were not found in the other dimensions. Similar variation across performance dimensions was also found with age and CFPC certification. Increasing age was a significant predictor of declining performance in records only, while holding CFPC certification had a positive impact on performance in acute, chronic and well care as well as records. Attending a North American medical school, the number of years in the current practice setting and having been previously assessed did not significantly affect assessment performance in any of the dimensions.


TABLE 2. Independent regression model of multiple-item measure scores on physician factors
  Acute regression coefficient (std. error) Chronic regression coefficient (std. error) Continuity regression coefficient (std. error) Well care regression coefficient (std. error) Records regression coefficient (std. error)
Independent Model R2 0.074** 0.046** 0.023 0.079** 0.120**
Age –0.005
Males –0.174*
Attended North American School 0.052
Years in Current Practice at Time of Assessment
Holds CFPC Certification 0.107*
Has Been Previously Assessed 0.066
* Significant at p<0.05
** Significant at p<0.01

Full regression model

The results of the full regression model measuring the simultaneous impact of physician, organizational and systemic factors on performance (Table 3) revealed that the way in which physician factors influence performance changes when organizational and systemic factors are taken into account. For example, unlike in the independent model, in the full model age was not a significant predictor in any of the performance dimensions, and CFPC certification remained a significant predictor only in well care and records. In addition, years in current practice setting became significant for acute care in the full model. A similar pattern was also found with performance in the chronic and continuity of care dimensions, in that the physician characteristics were no longer significant once the effects of organizational and systemic factors were incorporated in the full model.


TABLE 3. Significant factors in the full regression model of multiple-item measure scores on physician, organizational and systemic factors
    Acute regression coefficient (std. error) Chronic regression coefficient (std. error) Continuity regression coefficient (std. error) Well care regression coefficient (std. error) Records regression coefficient (std. error)

Model R2 0.199** 0.142** 0.123** 0.193** 0.233**
Significant Physician Factors Males       –0.236*
Years in Current Practice –0.007*
Holds CFPC Certification       0.208**
Significant Organizational Factors WIC Practice   –0.166*
Number of Patient Visits per Week –0.002**
Holds Active Hospital Appointment         0.080*
Significant Systemic Factors Proportion of Basic Diagnostic Tests Available   0.350**
Physician to 1,000 Population Ratio 0.0328*
Northern Practice Location –0.345**
* Significant at p<0.05
** Significant at p<0.01
Note: Regression coefficients for variables that were included in the full model but were not significant are not listed owing to space constraints.

In the full regression model, several specific variables from the organizational factors had significant effects on performance. Practice type, patient visits per week and holding an active hospital appointment each had varying effects in several of the dimensions. For example, physicians working in WICs performed less well in the chronic care dimension. The most consistent organizational effects were found with patient visits per week, where performance in all five dimensions improved with declining numbers of patient visits per week.

Specific system variables were also significant in the full regression models. Physicians working in locations with low physician-to-population ratios performed more poorly in the acute, chronic and continuity care performance dimensions. Physicians with better availability of basic diagnostic tests performed better in the chronic, continuity and well care dimensions. Physicians with their primary practices in northern locations performed more poorly in acute care, well care and records than their southern counterparts, even after the effects of the physician-to-population ratio and number of patient visits per week had been taken into account.

The variance estimates from the full regression model are presented in Table 4. The physician factors were significant predictors, to varying degrees, for acute care (R2=0.058; p<0.01), well care (R2=0.067; p<0.01) and records (R2=0.087; p<0.01), but not for chronic conditions or continuity of care. In comparison, the organizational factors had a varying impact on all dimensions except continuity of care, where the systemic factors predominated (R2=0.057; p<0.01). The systemic factors significantly contributed to the variance in all five performance dimensions, but to varying degrees.


TABLE 4. R2 values for regression of multiple-item measure scores on blocks of physician, organizational and systemic factors

  Independent model (total R2) Physician factor (net R2) Organizational factor (net R2) Systemic factor (net R2) Full model (total R2)
Acute 0.074** 0.058** 0.071** 0.068** 0.199**
Chronic 0.046** 0.012     0.061** 0.045** 0.142**
Continuity 0.023     0.015     0.038     0.057** 0.123**
Well Care 0.079** 0.067** 0.054** 0.045*   0.193**
Records 0.120** 0.087** 0.052** 0.051** 0.233**
* Significant at p<0.05
** Significant at p<0.01

Tolerance, VIFs and between-predictor correlations did not indicate any concerning levels of collinearity. The maximum VIF and minimum tolerance in either the independent or the full model were 3.3 and 0.30, respectively. The highest correlation between predictors was found between number of patient visits per week and hours worked per week (r=0.73). As a precaution, hours worked per week was removed from the regressions because number of patient visits per week was thought to give a better indication of practice load than hours alone. All collinearity statistics were well below levels meriting concern (Kleinbaum et al. 1988), with the one exception of years in practice, which was noted earlier and addressed by modifying the regression models.
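The collinearity diagnostics described above can be sketched as follows (simulated data; the target correlation of roughly 0.73 mimics the hours-worked/patient-visits pair). The VIF for a predictor is 1/(1 − R²) from regressing that predictor on all the others, and tolerance is its reciprocal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 532

# Two deliberately correlated predictors (correlation about 0.73) plus one
# independent predictor, loosely mimicking the situation in the text.
x1 = rng.normal(size=n)
x2 = 0.73 * x1 + np.sqrt(1 - 0.73**2) * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """Variance inflation factor for column j of the design matrix X."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    X1 = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

tolerance = 1.0 / vif(X, 0)  # tolerance is simply the reciprocal of the VIF
```

With r ≈ 0.73, the VIF for either correlated predictor is roughly 1/(1 − 0.73²) ≈ 2.1, below the maximum of 3.3 observed in the study's models.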

Discussion

While strategies for improving and ensuring physician performance are increasingly seen as crucial considerations for improving outcomes for patients and the healthcare system, there remains a tendency to address them rather narrowly, as primarily or solely a function of the credentials, training and attributes of individual physicians (Klass 2007b). We suggest that this approach fails to take into account factors in the broader context of practice that are beyond physicians' direct control. We believe it has also led to a relatively negative view of the current strategies employed to improve performance, which place inordinate emphasis on the agency of individual physicians and, in the process, appear to blame them for shortcomings in the organizations or health systems in which they work. Indeed, our data, drawn from actual practice-based assessments of GP/FPs, suggest that in addition to the personal and professional characteristics of physicians, the characteristics of the organizations in which they work and the communities in which those organizations are located also have important and concurrent effects on their ability to provide appropriate care to their patients across a number of key performance dimensions.

Three key findings emerge from our analyses.

First, our findings challenge the assumption that assessment can, or should, be targeted on the basis of individual characteristics alone. Although the results of both the independent and full regression analyses support the findings of previous research that sex, age and certification do affect performance, these factors do not appear to be nearly as important as previously thought (Norton et al. 1994, 1997; McAuley et al. 1990). For example, our data indicate that while female physicians outperformed males on some dimensions, such as well care or acute care, there were no differences in others (e.g., chronic care) once organizational and systemic differences were taken into consideration. Similarly nuanced findings emerged with respect to CFPC certification. The results of previous studies that focused primarily on physician factors have led to several regulatory practices that may now need to be re-examined. For example, in Ontario a physician is selected for peer assessment at age 70 (CPSO 2008b). We are not suggesting that continuing age-related assessment is unimportant, but rather that organizational structures may have a greater influence than age alone; organization-related assessments might also be considered. Initiatives to improve performance that are targeted on the basis of personal attributes alone are likely to miss their mark more often than they hit it. Clearly, the broader practice context needs to be considered in regulatory and improvement policies.

Further support for this idea is illustrated in our second key finding: specific organizational and systemic factors have significant effects on performance even after controlling for physician factors. Of course, the idea that such external factors may influence physician behaviour is not new. For example, many studies have found evidence of small-area variation in patterns of health services and physician practice patterns (Jin et al. 2003; Brownell 2002; Brownell et al. 2002; Veugelers et al. 2003; Chaudhry et al. 2001; Hospital Report Research Collaborative 2004a,b,c; Chan 2002; CIHI 2007b; Konkin et al. 2004; CMA 2008), including those found in northern and rural locations (Norton et al. 1997; Tepper et al. 2005; Baldwin et al. 1999; Probst et al. 2002; Chan and Shultz 2005; May et al. 2007; CIHI 2007a). Our findings support these earlier studies, which suggest an impact of the broader practice environment on physician performance. For example, physicians who have better access to diagnostic tests and specialist consults more appropriately diagnose, treat and refer patients; and physicians in northern locations face practice challenges that are different from those of physicians in southern Ontario. Thus, we need to consider that working in different practice environments may require different skills and knowledge specific to the practice context.

A third key finding is that individual, organizational and systemic factors appear to have varying effects across different dimensions of performance, emphasizing the need to conceptualize and measure performance as multidimensional. As a result, the answer to the question, "What influences physician performance the most?" and its corollary, "Where should incentives and policies for improvement be placed?" is, "It depends on the specific dimension of performance under scrutiny." For example, our finding that the management of chronic conditions is poorer in walk-in clinics than in other settings, while acute condition management is not, suggests that certain organizational structures may be more supportive and effective for certain types of care than for others. As new practice structures are introduced and promoted as part of primary care reform initiatives, this finding may be particularly important for planning. It also suggests the importance of systematically monitoring organizational and systemic factors and linking changes in these factors, particularly during periods of health system restructuring, to variations in physician performance. For instance, Ontario has implemented two major reforms that affect physicians: a reform of primary care aimed at encouraging more GP/FPs to work in multidisciplinary teams (i.e., family health teams) with shared patient records and alternatives to fee-for-service such as capitation; and the regionalization of hospital, home care and long-term care services into local health integration networks (LHINs). Knowing more about how such reforms affect physician performance could go some considerable way towards identifying and redressing organizational and systemic problems that lead to poor performance, and equipping individual physicians to respond constructively and proactively to a changing environment.

Limitations and strengths

There are some limitations to consider when interpreting the results of these analyses. Most obviously, there is a considerable amount of residual variation that is not explained by the data; the sources of such variation remain to be understood. A likely possibility is that this is related to limitations in the data. While chart reviews are considered one of the standard methods of practice evaluation (Wakefield et al. 1995), charts alone have been shown to represent only a subset of the activities actually performed by physicians during a patient visit (Rethans et al. 1994). However, data gathered in the CPSO assessment protocols are augmented with additional information (Brook et al. 1996) from the physician-assessor interview, and unpublished CPSO internal quality control studies (e.g., inter-assessor rating and decision validation) have shown the methodology to be reliable. Further, the data representing physician, organizational and systemic factors are by no means exhaustive; nor are our categorizations of the variables into physician, organizational and systemic factors set in stone.

Finally, this study focuses on clinical dimensions of performance. There are other important aspects to performance, such as patient communication, patient outcomes and team performance, to name a few, that were not looked at in this study. Our future work will further investigate the impact of individual practitioner, organizational and systemic factors in these important areas to help complete the performance picture.

Despite these limitations, we think that this study has important implications for physician performance policies in two main areas: performance improvement and governance. We believe the strength of our study lies in understanding physician performance within the broader constructs of the practice environment and demonstrating the importance of collecting these data for future research. Better physician practice data concerning organizational structure and systemic resources will further improve our ability to investigate the impact of the practice environment on performance.

Implications of the study

A core purpose of performance evaluation is needs assessment for education and performance improvement. While continuing medical education (CME) and continuing professional development (CPD) initiatives have typically focused on refreshing the physician's clinical skills and knowledge, our findings suggest that such initiatives may be ineffective if they ignore the broader context in which clinical decision-making takes place, particularly where organizational and systemic factors may be a source of poor performance. While individual competence remains a crucial prerequisite for high performance, it does not follow that poor performance can simply be rectified through "upgrading." For example, on dimensions such as chronic care and continuity of care, the results suggest that quality improvement initiatives should also consider organizational and systemic factors because physician factors appear to have less impact on performance in these dimensions. Performance issues that are more heavily influenced by organizational and systemic factors will be more effectively addressed through organizational and systemic policies or programs (e.g., organizational performance incentives, systemic resource allocation, or professional governance) rather than an exclusive reliance on the CPD of individual practitioners as the panacea for performance improvement. This approach speaks to the need both to carefully target CME/CPD to performance issues that are more heavily influenced by individual-level factors, and more generally, for CME/CPD curricula to include content that will assist individual physicians in identifying and coping with external factors that affect their practices.

We feel that our findings have governance implications, particularly suggesting the need for remodelling regulatory and tort systems, which are designed, among other things, to apportion accountability in the health workplace. Such issues become increasingly salient in jurisdictions such as Ontario, where ongoing primary care reforms have resulted in the introduction of family health teams and the promotion of interdisciplinary care provision, producing increasingly complex practice environments that involve multiple regulated healthcare professions. This interdependence of competence is not easily accommodated in a system that has been designed to apportion accountability and responsibility only at the individual level. The determination of liability or professional accountability needs to reflect the reality of the complex interdependence of physicians in organizations within systems.

Operationalizing these concepts is not straightforward; physician migration offers an illustration. Ensuring the mobility of the physician workforce without compromising patient safety and standards of care has primarily been addressed by verifying equivalency of physician training, credentials and certifications across jurisdictions (HealthForceOntario 2007; Norcini and Mazmanian 2005). However, each time a physician relocates, the population needs, organizational structures and resource availability may differ from those in which the physician originally trained or gained practice experience. These differences may require physicians to develop new sets of competencies and performance skills to meet local needs and to provide care that may be considered specialized or outside their typical scope of practice (Baldwin et al. 1999; Probst et al. 2002; Tulloh et al. 2001; Breon et al. 2003). Yet these contextual aspects of performance are not currently taken into consideration when evaluating a physician's readiness to enter a new practice environment; the skills and knowledge required in one practice setting may not be sufficient for another. Differences in physician performance should therefore no longer be conceptualized simply as the outcome of credentials, training and personal attributes, but rather as the product of complex and concurrent effects of physician, organizational and systemic factors.


Our analysis has demonstrated that organizational and systemic factors, in addition to physician factors, can all significantly affect physician performance. Concepts of physician performance have for too long focused primarily or solely on the individual practitioner, with emphasis on attributional elements of competence rather than valid measures of performance. Employing a conceptual framework that considers physician performance within a broader environmental construct will allow us to develop better processes of performance evaluation, to design appropriate interventions and to support performance improvement and governance models for individuals, teams and systems.


Five dimensions of GP/FP performance


Managing Patients with Acute Conditions and New Presentations (ACUTE): Physician's performance in dealing with new patients, or with known patients presenting a new complaint or condition. Conditions are generally non-urgent and will often involve the formulation of a diagnosis, for either acute or chronic conditions, and recommendation(s) for treatment.

Managing Patients with Chronic Conditions (CHRONIC): Physician's performance in dealing with patients with chronic conditions. Conditions will usually require long-term monitoring and may present with or without co-morbidities.

Providing Patients with Continuity of Care and Referrals (Continuity Care): Physician's performance in dealing with patients who are referred to the care of other physicians for treatment, surgical procedures, diagnostic procedures or otherwise. Includes the appropriateness of referral (i.e., indications) and follow-up.

Providing Patients with Well Care and Health Maintenance (Well Care): Physician's performance in well care visits and preventive health maintenance, including patient visits for annual check-ups, screening, well baby visits, etc.

Managing Patient Records and Recording Skills (Records): Physician's performance in records management and recording skills. This reflects the mandatory elements of record format required by legislation and some additional features of the organization and recording tools used.


About the Author

Elizabeth F. Wenghofer, PHD, Assistant Professor, School of Rural and Northern Health, Laurentian University, Assistant Professor, Human Sciences Division, Northern Ontario School of Medicine, Sudbury, ON

A. Paul Williams, PHD, Professor, Department of Health Policy, Management and Evaluation, Faculty of Medicine, University of Toronto, Toronto, ON

Daniel J. Klass, MD, Associate Registrar and Senior Medical Officer, Quality Management Division, College of Physicians and Surgeons of Ontario, Adjunct Professor, Department of Medicine, University of Toronto, Toronto, ON

Correspondence may be directed to: Elizabeth F. Wenghofer, PhD, Assistant Professor, School of Rural and Northern Health, Laurentian University, 935 Ramsey Lake Rd., Sudbury, ON P3E 2C6; tel.: 705-675-1151, ext. 3925; fax: 705-671-6603; e-mail:


The College of Physicians and Surgeons of Ontario provided access to the data used for this study. The CPSO, as an organization, was not involved in the design, conduct of the study, management, analysis or interpretation of the data, or the preparation, review or approval of the manuscript.


Baldwin, L.M., R.A. Rosenblatt, R. Schneeweiss, D.M. Lishner and L.G. Hart. 1999. "Rural and Urban Physicians: Does the Content of Their Medicare Practices Differ?" Journal of Rural Health 15(2): 240–51.

Barach, P. and F. Moss. 2001. "Delivering Safe Health Care." British Medical Journal (Clinical Research Ed.) 323(7313): 585–86.

Becher, E.C. and M.R. Chassin. 2001. "Improving Quality, Minimizing Error: Making It Happen." Health Affairs 20(3): 68–81.

Berwick, D.M. 2003. "Improvement, Trust, and the Healthcare Workforce." Quality & Safety in Health Care 12(6): 448–52.

Breon, T.A., C.E. Scott-Conner and R.D. Tracy. 2003. "Spectrum of General Surgery in Rural Iowa." Current Surgery 60(1): 94–99.

Brook, R., E. McGlynn and P. Cleary. 1996. "Measuring Quality of Care. Part 2." New England Journal of Medicine 335: 966–69.

Brown, J.B., L.M. Bouck, T. Ostbye, J.M. Barnsley, M. Mathews and G. Ogilvie. 2002. "Walk-in Clinics in Ontario. An Atmosphere of Tension." Canadian Family Physician 48: 531–36.

Brownell, M. 2002 (Winter). "Tonsillectomy Rates for Manitoba Children: Temporal and Spatial Variations." Healthcare Management Forum/Canadian College of Health Service Executives: 21–26.

Brownell, M., A. Kozyrkyj, N. Roos, D. Friesen, T. Mayer and K. Sullivan. 2002. "Health Service Utilization by Manitoba Children." Canadian Journal of Public Health 93: S57–62.

Canadian Institute for Health Information (CIHI). 2007a. Distribution and Internal Migration of Canada's Physician Workforce. Ottawa.

Canadian Institute for Health Information (CIHI). 2007b. Health Indicators. Ottawa: Author.

Canadian Medical Association (CMA). 2008. National Physician Survey 2007: Response Rates. Retrieved October 17, 2009. <>.

Caulford, P.G., S.B. Lamb, T.B. Kaigas, E. Hanna, G.R. Norman and D.A. Davis. 1994. "Physician Incompetence: Specific Problems and Predictors." Academic Medicine 69(10): S16–18.

Chan, B.T. 2002. "The Declining Comprehensiveness of Primary Care." Canadian Medical Association Journal 166(4): 429–34.

Chan, B.T. and S.E. Shultz. 2005. Supply and Utilization of General Practitioner and Family Physicians Services in Ontario: ICES Investigative Report. Toronto: Institute for Clinical and Evaluative Sciences.

Chaudhry, R., V. Goel and C. Sawka. 2001. "Breast Cancer Survival by Teaching Status of the Initial Treating Hospital." Canadian Medical Association Journal 164(2): 183–88.

Chen, J. and F. Hou. 2002. "Unmet Needs for Health Care." Health Reports 13(2): 23–34.

College of Physicians and Surgeons of Ontario (CPSO). 2008a. Reaping the Rewards – Striving for Sustainability: 2007 Registration Statistics and Survey Findings. Retrieved October 17, 2009. <>.

College of Physicians and Surgeons of Ontario (CPSO). 2008b. Selection for Assessment. Retrieved October 17, 2009. <>.

Donabedian, A. 1966. "Evaluating the Quality of Medical Care." Milbank Memorial Fund Quarterly 44(3 Suppl.): 166–206.

Donabedian, A. 1988. "The Quality of Care. How Can It Be Assessed?" Journal of the American Medical Association 260(12): 1743–48.

Donabedian, A. 1992. "The Role of Outcomes in Quality Assessment and Assurance." Quality Review Bulletin 18: 356–60.

Donabedian, A. 2000. "Evaluating Physician Competence." Bulletin of the World Health Organization 78: 857–60.

Ely, J.W., W. Levinson, N.C. Elder, A.G. Mainous and D.C. Vinson. 1995. "Perceived Causes of Family Physicians' Errors." Journal of Family Practice 40(4): 337–44.

Geneau, R., P. Lehoux, R. Pineault and P. Lamarche. 2008. "Understanding the Work of General Practitioners: A Social Science Perspective on the Context of Medical Decision Making in Primary Care." BMC Family Practice 9: 12.

Gillett, J., B. Hutchison and S. Birch. 2001. "Capitation and Primary Care in Canada: Financial Incentives and the Evolution of Health Service Organizations." International Journal of Health Services 31(3): 583–603.

Goldfarb, S. 1999. "The Utility of Decision Support, Clinical Guidelines, and Financial Incentives as Tools to Achieve Improved Clinical Performance." Joint Commission Journal on Quality Improvement 25(3): 137–44.

Grol, R. 2002. "Changing Physicians' Competence and Performance: Finding the Balance between the Individual and the Organization." Journal of Continuing Education in the Health Professions 22(4): 244–51.

HealthForceOntario. 2007. Entry to Practice Requirements for Healthcare Professionals Outside Ontario. Retrieved October 17, 2009. <>.

Hopkins, J.R. 1999. "Financial Incentives for Ambulatory Care Performance Improvement." Joint Commission Journal on Quality Improvement 25(5): 223–38.

Hospital Report Research Collaborative. 2004a. Hospital Report 2003: Acute Care. Retrieved October 17, 2009. <>.

Hospital Report Research Collaborative. 2004b. Hospital Report 2003: Emergency Department Care. Retrieved October 17, 2009. <>.

Hospital Report Research Collaborative. 2004c. Hospital Report 2003: Complex Continuing Care. Retrieved October 17, 2009. <>.

Jansen, J.J., R.P. Grol, C.P. Van Der Vleuten, A.J. Scherpbier, H.F. Crebolder and J.J. Rethans. 2000. "Effect of a Short Skills Training Course on Competence and Performance in General Practice." Medical Education 34(1): 66–71.

Jin, Y., T.J. Marrie, K.C. Carriere, G. Predy, C. Houston, K. Ness and D.H. Johnson. 2003. "Variation in Management of Community-Acquired Pneumonia Requiring Admission to Alberta, Canada Hospitals." Epidemiology and Infection 130(1): 41–51.

Jones, D. 2006. "BC Walk-in Clinics Warned." Canadian Medical Association Journal 175(12): 1512.

Jones, M. 2000. "Walk-in Primary Medical Care Centres: Lessons from Canada." British Medical Journal (Clinical Research Ed.) 321(7266): 928–31.

Klass, D. 2000. "Reevaluation of Clinical Competency." American Journal of Physical Medicine & Rehabilitation 79(5): 481–86.

Klass, D. 2007a. "Assessing Doctors at Work: Progress and Challenges." New England Journal of Medicine 356(4): 414–15.

Klass, D. 2007b. "A Performance-Based Conception of Competence Is Changing the Regulation of Physicians' Professional Behavior." Academic Medicine 82(6): 529–35.

Kleinbaum, D.G., L.L. Kupper and G. Muller. 1988. "Regression Diagnostics." In D.G. Kleinbaum et al., eds., Applied Regression Analysis and Other Multivariable Methods (vol. 2). Belmont, CA: Duxbury Press.

Konkin, J., D. Howe, T.L. Soles and Society of Rural Physicians of Canada. 2004. "SRPC Policy Paper on Regionalization, Spring 2004." Canadian Journal of Rural Medicine 9(4): 257–59.

Kopelow, M., G.K. Schnabel, T.H. Hassard, D.J. Klass, G. Beazley, F. Hechter and M. Grott. 1992. "Assessing Practicing Physicians in Two Settings Using Standardized Patients." Academic Medicine 67(10): S19–21.

LaDuca, A. 1994. "Validation of Professional Licensure Examinations." Evaluation & the Health Professions 17(2): 178–97.

LaDuca, A., D.D. Taylor and I.K. Hill. 1984. "The Design of a New Physician Licensure Examination." Evaluation & the Health Professions 7(2): 115–40.

Lewis, S. 2007. "Regression Analysis." Practical Neurology 7(4): 259–64.

Long, M.J. 2002. "An Explanatory Model of Medical Practice Variation: A Physician Resource Demand Perspective." Journal of Evaluation in Clinical Practice 8(2): 167–74.

May, J., P.D. Jones, R.J. Cooper, M. Morrissey and G. Kershaw. 2007. "GP Perceptions of Workforce Shortage in a Rural Setting." Rural and Remote Health 7(3): 720.

McAuley, R.G. and H.W. Henderson. 1984. "Results of the Peer Assessment Program of the College of Physicians and Surgeons of Ontario." Canadian Medical Association Journal 131: 557–61.

McAuley, R.G., W.M. Paul, G.H. Morrison, R.F. Beckett and C.H. Goldsmith. 1990. "Five-Year Results of the Peer Assessment Program of the College of Physicians and Surgeons of Ontario." Canadian Medical Association Journal 143(11): 1193–99.

Morrow, R.W., A.D. Gooding and C. Clark. 1995. "Improving Physicians' Preventive Health Care Behavior through Peer Review and Financial Incentives." Archives of Family Medicine 4(2): 165–69.

Norcini, J.J. and P.E. Mazmanian. 2005. "Physician Migration, Education, and Health Care." Journal of Continuing Education in the Health Professions 25(1): 4–7.

Norman, G.R., D.A. Davis, S. Lamb, E. Hanna, P. Caulford and T. Kaigas. 1993. "Competency Assessment of Primary Care Physicians as Part of a Peer Review Program." Journal of the American Medical Association 270(9): 1046–51.

Norton, P.G., E.V. Dunn, R. Beckett and D. Faulkner. 1998. "Long-Term Follow-up in the Peer Assessment Program for Nonspecialist Physicians in Ontario, Canada." Joint Commission Journal on Quality Improvement 24(6): 334–41.

Norton, P.G., E.V. Dunn and L. Soberman. 1994. "Family Practice in Ontario: How Physician Demographics Affect Practice Patterns." Canadian Family Physician 40: 249–56.

Norton, P.G., E.V. Dunn and L. Soberman. 1997. "What Factors Affect Quality of Care? Using the Peer Assessment Program in Ontario Family Practices." Canadian Family Physician 43(10): 1739–44.

Norton, P.G. and D. Faulkner. 1999. "A Longitudinal Study of Performance of Physicians' Office Practices: Data from the Peer Assessment Program in Ontario, Canada." Joint Commission Journal on Quality Improvement 25(5): 252–58.

Norton, P.G., L.S. Ginsburg, E. Dunn, R. Beckett and D. Faulkner. 2004. "Educational Interventions to Improve Practice of Nonspecialty Physicians Who Are Identified in Need by Peer Review." Journal of Continuing Education in the Health Professions 24(4): 244–52.

Probst, J.C., C.G. Moore, E.G. Baxley and J.J. Lammie. 2002. "Rural–Urban Differences in Visits to Primary Care Physicians." Family Medicine 34(8): 609–15.

Ram, P., R. Grol, P. van den Hombergh, J.J. Rethans, C. van der Vleuten and K. Aretz. 1998. "Structure and Process: The Relationship between Practice Management and Actual Clinical Performance in General Practice." Family Practice 15(4): 354–62.

Rethans, J.J., E. Martin and J. Metsemakers. 1994. "To What Extent Do Clinical Notes by General Practitioners Reflect Actual Medical Performance? A Study Using Simulated Patients." British Journal of General Practice 44(381): 153–56.

Rethans, J.J., J. Norcini, M. Baron-Maldonado, D.E. Blackmore, D.M. Jolly, A. LaDuca, S.R. Lew, G.G. Page and L. Southgate. 2002. "The Relationship between Competence and Performance: Implications for Assessing Practice Performance." Medical Education 36: 901–9.

Robinson, M.B. 1994. "Evaluation of Medical Audit." Journal of Epidemiology and Community Health 48(5): 435–40.

Safran, D.G., W.H. Rogers, A.R. Tarlov, T. Inui, D.A. Taira, J.E. Montgomery, J.E. Ware and C.P. Slavin. 2000. "Organizational and Financial Characteristics of Health Plans: Are They Related to Primary Care Performance?" Archives of Internal Medicine 160(1): 69–76.

Safran, D.G., I.B. Wilson, W.H. Rogers, J.E. Montgomery and H. Chang. 2002. "Primary Care Quality in the Medicare Program: Comparing the Performance of Medicare Health Maintenance Organizations and Traditional Fee-for-Service Medicare." Archives of Internal Medicine 162(7): 757–65.

Shine, K.I. 2002. "Health Care Quality and How to Achieve it." Academic Medicine 77(1): 91–99.

Skinner, L. 2002. "Measuring Physician Performance." Journal of the Medical Association of Georgia 91(1): 38–99.

Tepper, J., S.E. Schultz, D.M. Rothwell and B.T. Chan. 2005. Physician Services in Rural and Northern Ontario: ICES Investigative Report. Toronto: Institute for Clinical Evaluative Sciences.

Tulloh, B., S. Clifforth and I. Miller. 2001. "Caseload in Rural General Surgical Practice and Implications for Training." ANZ Journal of Surgery 71(4): 215–17.

Veugelers, P.J., A.M. Yip and D.C. Elliott. 2003. "Geographic Variation in Health Services Use in Nova Scotia." Chronic Diseases in Canada 24(4): 116–23.

Wakefield, D.S., C.M. Helms and L. Helms. 1995. "The Peer Review Process: The Art of Judgment." Journal for Healthcare Quality 17(3): 11–15.

Wenghofer, E.F., D. Way, R.S. Moxam, H. Wu, D. Faulkner and D.J. Klass. 2006a. "Effectiveness of an Enhanced Peer Assessment Program: Introducing Education into Regulatory Assessment." Journal of Continuing Education in the Health Professions 26(3): 199–208.

Wenghofer, E.F., A.P. Williams, D.J. Klass and D. Faulkner. 2006b. "Physician–Patient Encounters: The Structure of Performance in Family and General Office Practice." Journal of Continuing Education in the Health Professions 26(4): 285–93.

