Evaluation of the Executive Training for Research Application (EXTRA) Program: Design and Early Findings
The authors of this paper describe the EXTRA Program, its intended outcomes, the approach they used to evaluate the program and some initial findings regarding the program's effects on the EXTRA fellows after the initial two-year period. The program's mission is to develop capacity and leadership to optimize the use of research-based evidence in Canadian healthcare organizations. Using Kirkpatrick's four-level model for evaluating training effectiveness, the authors conclude that after two years the program appears to be having the desired effects on the fellows. There is now a need to develop a richer understanding of the effects within the host organizations and to consider ways of transferring the new knowledge to other healthcare organizations outside the EXTRA Program umbrella.
In 2004, a new national training program was introduced for Canadian health services executives to enhance evidence-based decision-making in the healthcare system. Funding for the Executive Training for Research Application (EXTRA) Program is for up to 10 years and comes from the Canadian federal government. The specific populations targeted are health services professionals in senior management positions - nurse executives, physician executives and other health administration executives. Successful applicants - approximately 24 fellows each year - join the program for a two-year period, engaging in several residency training sessions as well as mentoring, project development and networking outside the sessions.
The partners who developed the program include:
- Canadian Health Services Research Foundation (CHSRF), where the program is operationally housed;
- Canadian College of Health Service Executives (CCHSE);
- Canadian Nurses Association (CNA);
- Canadian Medical Association (CMA); and
- a consortium of Quebec partners represented by the Agence d'évaluation des technologies et des modes d'intervention en santé (AETMIS).
The underlying assumption is that the model of learning provided to the fellows will lead to improved use of research evidence that will inform decision-making in the fellows' host organizations. Increased use of research will lead to improved ways of providing healthcare, which will improve health outcomes.
The purpose of this paper is threefold:
- To describe the EXTRA Program;
- To describe the evaluation design used to evaluate the program; and
- To present early findings of the program's effects on the first cohort of fellows.
The EXTRA Program1
The overall mission of the program is "to support evidence-informed decision-making in the organization, management and delivery of health services through funding research, building capacity and transferring knowledge" (CHSRF n.d.).
The program has three major expected outcomes:
- Fellows apply the skills learned and use research-based evidence to bring about organizational change;
- The skills needed for improved use of research in management are spread beyond those formally enrolled as fellows in EXTRA;
- Fellows improve their capacity to collaborate in evidence-based decision-making across professional streams.
|Table 1. Residency modules|
To facilitate these outcomes, four residency sessions are held over the fellowship period, during which six educational modules are covered (Table 1). An intervention project, designed to apply research-based evidence to effect change in the fellows' respective organizations, is ongoing. This project is complemented by networking and a mentoring component involving academic and decision-making mentors. Information technology and desktop support are provided to fellows by the Centre for Health Evidence in Alberta. At any given time after the program's first year, two cohorts are actively enrolled in the program, following a well-defined and detailed program timeline. The fellows receive continuing support from EXTRA following their two-year fellowship.
The expectation is that individual capacity building will lead to organizational change, whereby research evidence informs decision-making. The causal linkages among EXTRA program activities and expected outcomes are depicted in Figures 1 and 2.
Our overall design employs a longitudinal, multiple-method approach (Anderson et al. 2004). It is based on utilization-focused evaluation (Patton 1997), responsive evaluation (Stake 2004) and theory-driven evaluation (Donaldson 2003).
The framework we use is the model devised by Pettigrew and colleagues (1992), which situates change in an organization based on three core components: Context, Content and the Process of change. Any changes introduced into healthcare organizations will be understood in terms of the context in which they are introduced (i.e., internal and external environments), the content (the focus of the changes) and the process(es) by which the changes are introduced (Anderson 2006; Pettigrew et al. 1992).
A receptive organizational context for change is crucial for the effective transfer of knowledge, but this is a major challenge for organizations (Anderson 2006; Bate et al. 2002; Greenhalgh et al. 2004; Huy 1999; Iles and Cranfield 2004; Pettigrew et al. 1992). The social context in which any organizational intervention occurs will influence the intervention's effectiveness (Dopson and Fitzgerald 2005).
The overarching question guiding the evaluative research is: Does the EXTRA Program result in improved knowledge transfer and uptake of evidence-based decision-making by individuals and organizations?
There are four subquestions that also guide research:
- Do fellows acquire the necessary skills to use research-based evidence to bring about organizational change?
- Do fellows apply the skills learned and use research-based evidence to bring about organizational change?
- Are the skills needed for improved use of research in management spread beyond those formally enrolled as fellows in EXTRA?
- Do fellows improve their capacity to collaborate in evidence-based decision-making across professional streams?
There are some obvious parallels with Kirkpatrick's (1994, 1998) four-level model for evaluating training effectiveness. Briefly, Kirkpatrick identified four levels of evaluation: Reaction - responses of the participants (in this case, the fellows) to the training (did they like it, was it relevant and so on); Learning - assessment of the amount of learning gained; Transfer - assessment of how much of the new skills and knowledge is being applied by the participants; and Results - the outcomes.
The focus of the latter part of this paper is Question 1 and the first two levels of Kirkpatrick's model - Reaction and Learning.
Our multiple-method approach enables triangulation of the data to strengthen the validity of the findings. We employed predominantly qualitative methods, combined with administrative data review, content analysis of intervention project material and ongoing collection of survey data. The methods are summarized in Table 2.
The methods are currently focused on the fellows themselves. They are the conduits for the knowledge exchange - the change agents being trained in the application of research evidence to inform decision-making. We will also examine the host organizations to fully understand how context mediates the potential changes, and how, indeed, the process of implementation and knowledge transfer ensues.
Given fixed resources for the evaluation, we decided to focus first on examining the extent to which new skills and knowledge were acquired by fellows before committing to more in-depth investigations within the organizations to see the nature of the new knowledge and the extent to which it was transferred.
|Table 2. Multiple methods used in the EXTRA evaluation|
|Methods||Year 1||Year 2||Post-program|
|Surveys of fellows||x 2||x 2||x|
|Interviews with fellows||x||x|
|Focus groups with fellows||x|
|Content analysis of intervention projects||x||x|
|Ongoing review of program component data: use of IT desktop support, organizational liaison reports and training module, mentoring and regional mentoring centre evaluations||x||x|
|Case studies in host organizations||Currently being developed|
The first cohort
Eleven fellows were based in acute care hospital settings (many were also affiliated with academic health science centres). Six fellows were with regional health authorities, one was from public health, two were from community-based organizations and four were from long-term care or rehabilitation organizations. Nine were senior executives in their organizations, twelve were directors and a further three held managerial positions. Four fellows had been with their organization for over 20 years, seven others for between 11 and 19 years, and eight for between 4 and 10 years.
The fellows were highly educated. Nineteen of the 24 fellows held master's degrees; two of these also had doctoral degrees, and several held medical degrees. The regional distribution is shown in Table 3.
|Table 3. Regional distribution of Cohort 1 fellows across Canada|
|Region||Number||Per cent||Region||Number||Per cent|
|British Columbia||2||8.3||Prince Edward Island||1||4.2|
|New Brunswick||0||0||NWT, Yukon, Nunavut||0||0|
|Newfoundland and Labrador||1||4.2|
Findings from the first cohort
Each new cohort of fellows was surveyed four times during their fellowship (Table 2). The data presented here are from four rounds of surveys with the first cohort (n=24): August 2004, February 2005, August 2005 and February 2006. The survey items cover a range of topics related to various program components, with a number of items related to the acquisition of skills and knowledge repeated in each survey.
Paper-based surveys were given to the fellows during each of their four residency sessions. The response rate was almost 100% over the course of the four rounds.2 Data were entered into SPSS for analysis. It is beyond the scope of this paper to report all the findings collected from other sources, but it is worth noting that triangulation across those sources supports the survey results presented here.
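As an illustrative sketch only (the authors entered and analyzed the data in SPSS; the responses and function names below are hypothetical, not the study data), the "Excellent to Very good" percentages of the kind reported in Table 4 can be derived from raw Likert responses as follows:

```python
from collections import Counter

# Hypothetical Likert responses for one module item; the actual study
# data were entered into SPSS, and these values are illustrative only.
responses = [
    "Excellent", "Very good", "Very good", "Good", "Excellent",
    "Very good", "Average", "Excellent", "Very good", "Good",
]

def top_two_share(responses, top=("Excellent", "Very good")):
    """Per cent of respondents choosing the top two Likert categories."""
    counts = Counter(responses)
    n_top = sum(counts[c] for c in top)
    return round(100 * n_top / len(responses), 1)

print(top_two_share(responses))  # 7 of 10 responses -> 70.0
```

The same tabulation, applied per module and per survey round, yields the percentage tables shown here.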
The first level of Kirkpatrick's model is Reaction - in our context, what were the fellows' perceptions of the training? Table 4 identifies the survey responses given by the fellows regarding their training experience with the six modules of the program.
Overall, the fellows' assessment of the training was favourable. Four of the modules were rated Excellent to Very good by over 70% of the fellows, and there was general satisfaction with the modules' length. Networking opportunities received very high scores, as did the ability of the fellows to participate in their language of choice (French or English). Similar high scores were attained for the contact and engagement with faculty members and EXTRA staff.
|Table 4. Assessment of the training modules|
|Module 1||Module 2||Module 3||Module 4||Module 5||Module 6|
|Overall assessment of module|
|Excellent to Very good||92%||74%||59%||91%||44%||72%|
|Length of module|
|Neither too long nor short||50%||13%||90%||83%||50%||70%|
|Excellent to Very good||87%||100%||95%||96%||61%||80%|
|Could participate in all module activities in my official language of choice?|
|Strongly agree to Moderately agree||96%||100%||100%||96%||94%||100%|
|Easy to contact and get feedback from faculty and staff?|
|Strongly agree to Moderately agree||100%||91%||95%||100%||78%||90%|
|Note: Likert scales used were: Excellent, Very good, Good, Average, Barely acceptable, Poor and Very poor; Way too long, Too long, Long, Neither long nor short, Short, Too short and Way too short; and Strongly agree, Moderately agree, Slightly agree, Neither agree nor disagree, Slightly disagree, Moderately disagree and Strongly disagree.|
Interviews and focus groups conducted with the first cohort of fellows reaffirmed the above data and reinforced what was known anecdotally. Program staff were highly responsive to the fellows' concerns and suggestions regarding improvements to the program.
The first critical step in understanding whether involvement in the program led to organizational change (the underlying assumption) was to establish the efficacy of the training (i.e., Kirkpatrick's Learning level). We asked a number of questions repeatedly in the four rounds of surveys. The fellows' knowledge of research-based evidence increased between Round 1 (August 2004) and Round 4 (February 2006). While just 25% of fellows rated their knowledge as Very good or Excellent at the beginning of the program, this figure increased to 85% near the completion of the two-year fellowship (Figure 3).
The fellows' skill set for assessing the quality of evidence also increased. Twenty-five per cent of fellows felt their skill set was Very good or Excellent at the beginning of the program. This figure doubled to 50% near the completion of their fellowship (Figure 4). Similarly, while 37% felt their skill set was Poor or Fair at the beginning, this rating changed to just 10% near completion.
While only 37% of fellows rated their knowledge of change management as Very good or Excellent at the beginning of the program, this number increased to 95% near the completion (Figure 5).
There are also encouraging signs in the data that the fellows were able to improve their own organizations' context for informed decision-making based on research evidence. At the beginning, only 8% rated their ability to create a more evidence-based decision environment in their organization as Very good or Excellent. Near completion, however, this rating had increased to 65%. Similarly, while 51% felt their ability was Poor or Fair at the beginning, only one fellow felt this way near completion of the program (Figure 6).
The fellows increased their use of research evidence with other professionals in their own organization. While 42% of fellows reported that their collaboration in this regard was either Frequently, Most of the time or All the time at the beginning, this number increased to 80% near completion. Similarly, 46% of fellows initially noted that their use of evidence when collaborating with professionals in other organizations was Frequently or Most of the time; this figure increased near completion to 80%.
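The pre/post shifts reported above can be summarized as percentage-point changes. A minimal sketch using the Round 1 and Round 4 figures quoted in the text (the dictionary labels are our shorthand, not survey item wording):

```python
# Round 1 vs. Round 4 shares (per cent) of fellows giving the top
# ratings, taken directly from the percentages reported in the text.
shifts = {
    "Knowledge of research-based evidence": (25, 85),
    "Skills for assessing quality of evidence": (25, 50),
    "Knowledge of change management": (37, 95),
    "Creating an evidence-informed decision environment": (8, 65),
    "Using evidence with professionals in own organization": (42, 80),
}

for item, (round1, round4) in shifts.items():
    print(f"{item}: {round1}% -> {round4}% (+{round4 - round1} points)")
```

Every item shows a substantial positive shift over the two-year fellowship, the largest being knowledge of research-based evidence (+60 points).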
In the Round 1 survey, fellows were asked to list the three main objectives they hoped to achieve by participating in the two-year EXTRA Program. The clustering of the top 10 objectives is shown in Table 5.
|Table 5. Top 10 objectives identified by the Cohort 1 fellows for participating in the EXTRA Program (N=24)|
|Objectives||Number of times cited by fellows|
|1. Apply research evidence to their work environment||10|
|2. Apply skills learned to conduct a successful intervention project||10|
|3. Acquire new knowledge||9|
|4. Establish contacts and networking opportunities||8|
|5. Develop an evidence-supported decision culture in their organization||7|
|6. Improve leadership and management skills through utilizing evidence||5|
|7. Improve organizational outcomes and delivery of services||5|
|8. Share use of research evidence knowledge with colleagues||4|
|9. Enhance awareness and understanding of evidence-based healthcare||3|
|10. Be a role model and inspire colleagues in use of research evidence||3|
In the Round 2 survey, 92% of fellows (n=22) stated that the program was helping them in the way they had anticipated, while the remaining two said Yes, but only sometimes. All fellows stated they were able to assist colleagues in the use of research-based evidence.
It appears that the program has had the desired effect of improving the knowledge base of the Cohort 1 fellows. But there are layers of complexity to consider further. The ability to transfer knowledge gained from the program through structured residency sessions, mentoring and networking will always be mediated by the complexity of the various organizational contexts, the nature of the intervention projects and the diverse intuitive and responsive social actions of the fellows and their colleagues in the organization. Moreover, we need to consider the differences between transferring codified and tacit knowledge within organizations.
For these reasons, our evaluation has moved more deeply into the organizational context. We aim to learn more from the fellows' experiences by delving into their organizations to gain a comprehensive understanding of whether and how changes are occurring (and, if not, why not) and to identify the key attributes of organizations that are receptive to change. For example, what knowledge transfer strategies are successful and, importantly, why?
We need to use the knowledge of how certain approaches work in some organizations and not in others so that we can expand upon the effective approaches, or seriously rethink those that are less effective. This point is critical if we are to take the learnings - the experiences of the program fellows - and be able to apply these in other healthcare contexts.
There are challenges with the evaluative research. One challenge is to unpack the extreme heterogeneity of the variables that affect the nature and extent of change: the fellows' varied backgrounds and personalities, their external and internal organizational contexts, and the many ways in which knowledge may be transferred within and beyond their organizations; there is no prescribed method of enhancing the use of evidence that the fellows are expected to follow. A second challenge is the simple fact that the use of evidence to inform decision-making has become an increasingly popular strategy for organizations as they recognize that high-quality care demands the best evidence possible, balanced against the pragmatics of fiscal constraints. Evidence seems more essential now than ever before.
As currently designed, the ultimate success of the program will depend on factors over which program staff have little control - the organizational context. As evaluators, we need to tease out the causal connections between the fellows and the program, and subsequent changes occurring in the host organizations.
The purpose of this paper has been to describe the EXTRA Program and its evaluation design, and to present some early findings. Our ongoing research continues with the methods described, but we are also developing a deeper, richer understanding of the host organizational context. Our early findings are encouraging, for they suggest the program is achieving the positive effects that were anticipated.
About the Author(s)
Malcolm Anderson, PhD
Faculty of Health Sciences, Queen's University
Mélanie Lavoie-Tremblay, RN, PhD
School of Nursing, McGill University
Researcher, Research Centre Fernand-Seguin, Louis-H. Lafontaine Hospital
Correspondence may be directed to: Malcolm Anderson, PhD, Assistant Professor, Faculty of Health Sciences, Queen's University, Kingston, ON K7L 3N6; tel.: 613-533-6000, ext. 75126; email: email@example.com.
Anderson, M. 2006. The Evolution of Innovations in Health Care Organizations: Empirical Evidence from Innovations Funded by The Change Foundation. Research manuscript prepared for The Change Foundation, Toronto.
Anderson, M., L. Atack, S. Donaldson, D. Forbes, M. Lavoie-Tremblay, M. Lemonde, L. Romilly, L. Shulha, I. Sketris, R. Thornley and S. Tomblin. 2004. The EXTRA/FORCES Program Evaluation Design. Working document. Kingston, ON: Queen's University.
Bate, S.P., G. Robert and H. MacLeod. 2002. Report on the "Breakthrough" Collaborative Approach to Quality and Service Improvement in Four Regions of the NHS. A Research-Based Evaluation of the Orthopaedic Services Collaborative within the Eastern, South and West, South East, and Trent Regions. Birmingham, UK: Health Services Management Centre, University of Birmingham.
Canadian Health Services Research Foundation (CHSRF). n.d. "Statement of Institutional Purpose." Retrieved September 25, 2008. <http://www.chsrf.ca/about/do_statement_purpose_e.php>.
Donaldson, S.I. 2003. "Theory-Driven Program Evaluation in the New Millennium." In S.I. Donaldson and M. Scriven, eds., Evaluating Social Programs and Problems: Visions for the New Millennium (pp. 109-41). Mahwah, NJ: Lawrence Erlbaum Associates.
Dopson, S. and L. Fitzgerald. 2005. "The Active Role of Context." In S. Dopson and L. Fitzgerald, eds., Knowledge to Action? Evidence-Based Health Care in Context (pp. 79-103). Oxford: Oxford University Press.
Greenhalgh, T., G. Robert, P. Bate, O. Kyriakidou, F. Macfarlane and R. Peacock. 2004. How to Spread Good Ideas. A Systematic Review of the Literature on Diffusion, Dissemination and Sustainability of Innovations in Health Service Delivery and Organization. London, UK: National Coordinating Centre for NHS Service Delivery and Organisation Research and Development Programme.
Huy, Q.N. 1999. "Emotional Capability, Emotional Intelligence and Radical Change." Academy of Management Review 24: 325-45.
Iles, V. and S. Cranfield. 2004. Developing Change Management Skills. A Resource for Health Care Professionals and Managers. London, UK: NHS Service Delivery and Organisation Research and Development Programme.
Kirkpatrick, D.L. 1994. Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.
Kirkpatrick, D.L. 1998. Another Look at Evaluating Training Programs. Alexandria, VA: American Society for Training and Development.
Patton, M. 1997. Utilization-Focused Evaluation. Thousand Oaks, CA: Sage Publications.
Pettigrew, A.M., E. Ferlie and L. McKee. 1992. Shaping Strategic Change: Making Change in Large Organizations. The Case of the National Health Service. Thousand Oaks, CA: Sage Publications.
Stake, R.E. 2004. Standards-Based and Responsive Evaluation. Thousand Oaks, CA: Sage Publications.
1 The French acronym for EXTRA is FORCES - Formation en utilisation de la recherche pour cadres qui exercent dans la santé. For the purpose of this paper, we use just the English acronym.
2 In Round 4, 83.3% (n=20) of fellows completed the survey.