We undertook a case study in order to explore deliberative dialogue as a system-level knowledge translation and exchange (KTE) strategy and to describe the design features and intended effects of this dialogue. Our data included observations made during the dialogue, evaluations completed by dialogue participants and interviews. We placed these data in the context of our broader experience. We learned that (a) all the design elements we examined could be maintained in future dialogues, but organizers of dialogues that address similar issues and take place in similar contexts should consider the relative importance of these features and (b) the intended effects of a deliberative dialogue that addresses a low-priority policy issue are mainly apparent at the individual level among dialogue participants. Further research is required to explore the key features and intended effects of deliberative dialogues used to address other issues or in different contexts.
System-level knowledge translation and exchange (KTE) strategies aim to support the use of evidence in decision-making about problems or issues affecting the health system (CHSRF 2006). Deliberative dialogues are group processes that can help to integrate and interpret scientific and contextual data for the purpose of informing policy development (Culyer and Lomas 2006), among other objectives. Dialogue processes have been used and studied in the public policy arena for many years. For example, a "citizens' parliament" has been used and studied in Australia as a process for learning about and discussing issues of public importance and for making recommendations to public policy makers that reflect the views of the public (Dryzek 2009). Other research has focused on designing deliberative processes to help policy makers assess the strengths and weaknesses of a broad range of policy options and understand stakeholder values and opinions (Kerkhof 2006). More recently, deliberative dialogues have been used as a system-level KTE strategy by bringing together the various players involved in the policy-making process (e.g., policy makers, stakeholders such as professional and consumer groups, and researchers) to learn from one another and from the research evidence about a specific problem, options for addressing it and implementation considerations (Boyko et al. 2012; Lavis et al. 2009).
Although several key design features of deliberative processes have been identified as promising (e.g., skillful chairing, consultation with all parties affected by the outcome) (Kerkhof 2006; Lomas et al. 2005), Lavis (2008) suggests that it is very unlikely that a single approach to organizing a dialogue will work for all issues or in all contexts. An important step towards matching specific organizational elements to specific issues or contexts would be to seek input about key design features from dialogue participants. This knowledge can then be used to prepare for future research that examines the effectiveness of specific features that have been appropriately matched to issues or contexts (Lavis 2008).
We undertook a case study of a deliberative dialogue used as a system-level KTE strategy in order to explore the design features and intended effects of this dialogue. More specifically, our objectives were to explore how the participants in one particular deliberative dialogue (a) viewed and experienced specific design features and (b) used the deliberations to support their decision-making following the dialogue. Lessons learned from this study will contribute to our understanding about deliberative dialogues in general, and can inform the organization and evaluation of dialogues that address similar types of issues and that take place in similar contexts.
Sample and setting
Our sample comprised one case of a deliberative dialogue held to integrate and interpret scientific and contextual data for the purpose of informing policy development. The dialogue was convened by the McMaster Health Forum (www.mcmasterhealthforum.org) and included 14 individuals who were selected by a steering committee as part of the forum's standard planning process. The forum's deliberative dialogue approach generally includes preparatory consultations to help clarify the problem, its causes and possible ways to address it; preparation and circulation (before the event) of an evidence brief; convening a group of 18 to 22 policy makers, stakeholders and researchers for an off-the-record dialogue about the issue; preparation and circulation (after the event) of a dialogue summary; post-event briefings to dialogue partners; a year-long evidence service; and evaluation of the evidence brief and dialogue (McMaster Health Forum 2014). The dialogue participants were drawn from across Canada in order to capture a broad Canadian perspective and to ensure the dialogue engaged a broad array of those involved in the issue, including health system policy makers, managers, stakeholders and researchers. The issue addressed by the dialogue was strengthening chronic pain management in health systems across Canada. The evidence brief prepared by the McMaster Health Forum characterizes the issue in four key ways: (a) the burden of chronic pain that the healthcare system must prevent or manage is high; (b) effective chronic pain management programs, services and drugs are not always available or accessible to all Canadians; (c) current health system arrangements do not support chronic pain management for all Canadians; and (d) there is no "home" for the development, updating, implementation and monitoring of clinical practice guidelines for the management of chronic pain (Lavis and Boyko 2009).
The context within which our case study dialogue took place can be characterized by contrasting it with other dialogues convened by the McMaster Health Forum. First, the focus of dialogues convened by the forum has been local (e.g., Enhancing Patient Transitions from Treatment in Regional Cancer Centres), provincial (e.g., Coordinating the Use of Genetic Tests and Related Services in British Columbia), national (e.g., Supporting Chronic Pain Management across Provincial and Territorial Health Systems in Canada) and international (e.g., Engaging Civil Society in Supporting Research Use in Health Systems), with our case study having a national focus (McMaster Health Forum 2013). Second, previous dialogues convened by the forum included government policy makers, but our case study did not. The forum's steering committee was unable to identify public policy makers who include chronic pain as an issue within their portfolio of responsibilities (despite concerted effort to identify such representation by linking with partners and collaborators in the field of chronic pain). The lack of identifiable contact individuals in government who prioritize engaging with the issue suggests that chronic pain is not a governmental priority. Third, our case study took place within the broader context of a research project entitled Community Alliances for Health Research and Knowledge Translation on Pain (www.cahr-pain.ca). This is a salient difference between the dialogue we studied and others convened by the forum, because the dialogue participants were also part of a research study. An implication of this approach is that the deliberations were observed by a third party. Typical dialogues are "closed" in order to create an environment that enables participants to engage in off-the-record discussions.
We collected three types of data over a six-month period that started when our case study dialogue was held. Informed consent was obtained from all dialogue participants (the study protocol was approved by the Hamilton Health Sciences/Faculty of Health Sciences Research Ethics Board, Project #09-402). As part of the consent, dialogue participants agreed to (a) have their participation in the dialogue observed; (b) allow the evaluation data collected by the McMaster Health Forum as part of its standard evaluation procedures to be used for this study; (c) participate in a 30-minute telephone interview two weeks after the dialogue; and (d) participate in a 30-minute telephone interview six months after the dialogue. The first type of data comprised observations about the dialogue's design features in context. One researcher (JB) recorded observations during the day-long dialogue, as well as observations about the documents that were prepared by the forum and that participants received as part of their participation (e.g., invitation letter, agenda, evidence brief and evaluation forms).
The second type of data comprised interviews with dialogue participants at two different time intervals. We scheduled interviews with all willing participants two to three weeks after the dialogue. During this first round of interviews, we gathered information about participants' views and experiences in relation to design features and efforts made to address the featured policy issue, including what they had personally done. We invited participants to a follow-up interview six months later. During this second round of interviews, we gathered information about participants' views about and experiences with the dialogue overall, themes emerging from the data we had already collected, whether and how participants had used what they learned from the dialogue, and perceived barriers and facilitators to taking action. The interviews at both time intervals were semi-structured, with standard open-ended questions supplemented by probes. Audio recordings and notes were taken during all interviews. We arranged for each audio recording to be transcribed, which allowed us to revisit themes that were not captured through our interview notes. All interviews were conducted by telephone except for two that were held in person.
Finally, we used secondary data that were originally collected by the McMaster Health Forum as part of its ongoing evaluation of deliberative dialogues. The forum conducts formative and summative evaluations of all its dialogues. The questionnaire that the forum uses includes general questions about the dialogue, specific questions about design features, questions about the intentions of participants to use what they learned (Boyko et al. 2011) and questions about participants' roles and backgrounds. The evaluation questions require both ratings and written responses. Details about the forum's questionnaire and evaluation procedures may be obtained by contacting the McMaster Health Forum directly.
Data analysis and coding
Our analysis included both quantitative and qualitative approaches. We calculated simple descriptive statistics of the evaluation ratings provided to us by the McMaster Health Forum. We coded all the qualitative data (i.e., field notes, interview transcripts and written comments from the forum's evaluations) according to the key features of the forum's deliberative dialogues (Box 1). A matrix was devised in order to compare findings about the key features from across the data. The team met on several occasions to reflect upon, discuss and come to mutual agreement on how to interpret the data. We made every effort throughout to establish credibility and to ensure that our findings represented the deliberative dialogue we studied. For example, we used triangulation to compare findings from across data sources in order to strengthen interpretations and create a more meaningful description of our case. We considered the types of data we used in our study to be "equal" given that all our data were based on human judgments (i.e., field notes, interviews, survey comments and ratings). In order to demonstrate transferability of our findings to other similar dialogues, we maintained detailed notes that helped us to provide an account of our case study that we hoped would allow readers to determine whether the findings were applicable to their situation. NVivo 8 software was used to organize and keep track of all qualitative data. Our overall analysis was informed by our knowledge and experience of deliberative dialogues in general, as well as the specific approach used by the McMaster Health Forum.
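The quantitative side of this analysis can be sketched in a few lines. The ratings below are hypothetical placeholders (the forum's evaluation data are not public), but the summary mirrors the kind of simple descriptive statistics the analysis describes: means and ranges of 7-point helpfulness ratings per design feature.

```python
from statistics import mean

# Hypothetical evaluation ratings on the forum's 7-point scale
# (1 = very unhelpful, 7 = very helpful); the actual data are not public.
ratings = {
    "fair_representation": [5, 6, 5, 7, 6, 5, 6],
    "facilitator": [7, 7, 7, 6, 7, 7, 7],
}

def summarize(scores):
    """Simple descriptive statistics of the kind reported in the findings."""
    return {
        "n": len(scores),
        "mean": round(mean(scores), 1),
        "min": min(scores),
        "max": max(scores),
    }

summary = {feature: summarize(scores) for feature, scores in ratings.items()}
for feature, stats in summary.items():
    print(f"{feature}: mean {stats['mean']} "
          f"(n={stats['n']}, range {stats['min']}-{stats['max']})")
```

With these illustrative inputs, the two features would yield mean ratings of 5.7 and 6.9, matching the low and high ends reported in the findings below.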
|BOX 1: Key features of the McMaster Health Forum's deliberative dialogues|
1. Addressed a high-priority policy issue.
2. Opportunity to discuss different features of the problem, including (where possible) how it affects particular groups.
3. Opportunity to discuss options for addressing the problem.
4. Opportunity to discuss key implementation considerations.
5. Opportunity to discuss who might do what differently.
6. Informed by a pre-circulated evidence brief.
7. Informed by discussion about the full range of factors that can inform how to approach a problem, possible options for addressing it and key implementation considerations.
8. Brought together many parties who could be involved in or affected by future decisions related to the issue.
9. Fair representation among policy makers, stakeholders and researchers.
10. Facilitator to assist with the deliberations.
11. Allowed frank, off-the-record deliberations by following the Chatham House rule.
12. Did not aim for consensus.
Participants' views about and experiences with specific design features
A summary of our findings related to participants' views about and experiences with specific design features appears in the Appendix. Our quantitative data demonstrate that the mean evaluation ratings for all the features we examined were 5.7 or higher on a scale of 1 (very unhelpful) to 7 (very helpful), and several features had a mean rating of 6.7 or higher. The lowest mean rating was 5.7 (fair representation among policy makers, stakeholders and researchers) and the highest was 6.9 (facilitator to assist with the deliberations). Across all the questions pertaining to views about how the dialogue was designed, the most common response was 7 ("very helpful").
Our qualitative findings provide further insight into the views and experiences of the dialogue participants. We considered four design features particularly noteworthy because each included data from field notes, interviews and evaluation comments, and each included a range of observations (i.e., neither overwhelmingly positive nor negative). The first of these design features is that the dialogue provided an "opportunity to discuss different features of the problem, including (where possible) how it affects particular groups." Although the dialogue did not include a focus on how chronic pain affects different groups, two participants who were affiliated with consumer groups shared their personal experiences and challenges in terms of receiving healthcare. Interview comments reflected that participants liked the broad discussion (i.e., without segmentation into groups) given that the issue is so complex and a one-day event can only "scratch the surface" in terms of understanding the issue and how to address it. Written comments reflected a lack of discussion about how the problem relates to particular groups.
The second noteworthy design feature relates to how the dialogue provided an "opportunity to discuss options for addressing the problem." The evidence brief that each participant received prior to the dialogue included three options for addressing the issue of chronic pain management in health systems across Canada. These options were the starting point for deliberation about ways to address the problem. Although written comments supported the usefulness of this element, interview comments were mixed. Two participants very clearly indicated that they liked how the options focused the discussion and thinking of participants. Other participants found the options distracting and suggested alternative ways of focusing discussion about policy options, such as more actively soliciting participants' own views on what the policy options might be.
The third noteworthy design feature is that the dialogue was "informed by a pre-circulated evidence brief." During the dialogue, suggestions were made for improving the evidence brief, including incorporating more qualitative and cost-effectiveness data. During the interviews, one participant expressed concern about the transparency of who wrote the evidence brief and whether the authors were truly at "arm's length" from the body funding the initiative. Another interviewee found the evidence brief "hard to read" from a visual perspective. Written comments regarding the evidence brief, however, reflected enthusiasm for this design element.
Finally, the feature "fair representation among policy makers, stakeholders and researchers" is noteworthy. Several comments were made throughout the day-long event that reflected participants' disappointment with the lack of policy maker representation at the dialogue. Interview comments also reflected the need for more policy maker representation. One participant noted: "I see the problem more that we didn't have the policy people there and several of the people that are in leadership roles and related areas." Written comments also clearly reflected that participants perceived there was not fair representation of policy makers.
Participants' use of the deliberations to support their decision-making
We examined participants' use of the deliberations to support their decision-making by considering their intended and actual use of the full range of factors (including research evidence) affecting the problem, possible options for addressing it and key implementation considerations. Our main quantitative finding related to this objective comes from the forum's evaluation data. The mean of three questionnaire items that measured intention to use research evidence was 6.5 on a scale from 1 (strongly disagree) to 7 (strongly agree).
Our qualitative data related to this objective include written comments from the forum's evaluation data, as well as findings from the second round of interviews that we conducted. The written comments include responses to open-ended questions that aim to gather comments about future efforts to address the policy issue. The written comments generally reflect that participants did intend to use what they learned at the dialogue, and the findings from the interviews suggest that many actually did so. We asked participants during the second round of interviews to reflect on what they learned from participating in the dialogue and to identify an important action that they personally had done better or differently to address the featured policy issue. Several participants identified actions such as speaking with others in their organization about the policy options discussed at the dialogue, and sharing what they had learned with other stakeholders (consumers, colleagues) who were not involved in the dialogue. Some participants did not identify actions related to using research evidence, but rather skills or resources gained as a result of their participation. It is also important to note that although participants were able to identify ways in which their participation in the dialogue had affected them individually, most participants also found it difficult to attribute a specific action to the deliberative dialogue.
Our case study of a deliberative dialogue used as a system-level KTE strategy demonstrates two key lessons. First, our study highlights the importance of considering specific design elements of deliberative dialogues that depend on the nature of the issue being addressed and the policy context. All the design elements we examined in the current case study were perceived as useful and could be maintained in future dialogues. However, organizers of future dialogues that address similar issues and that take place in similar contexts should consider the relative importance of these features. For example, our study demonstrates that "fair representation among policy makers, stakeholders and researchers" is a challenge for deliberative dialogues that address low-priority policy issues (i.e., those that are not on the "radar" of government decision-makers). Lack of policy maker representation is a salient difference between our case study and other dialogues with which we have been involved (which for the most part have included policy makers) and reflects the fact that no one directly affiliated with government held policy related to the issue within their portfolio of responsibilities. Participants in our case study dialogue perceived this lack of representation by policy makers as a barrier to sparking policy change related to the issue at hand. Thus, it is important to consider the most appropriate geographical context within which to address a low-priority health system issue using a deliberative dialogue. If a dialogue cannot ensure appropriate representation from government decision-makers, then it may be important to address the issue at a local or provincial level in order to generate interest and action on a smaller geographic scale.
Second, our case study suggests that the intended effects of a deliberative dialogue addressing a low-priority policy issue should be expected at the individual level (as opposed to sparking policy change). Immediately following the case study dialogue, participants intended to use research evidence of the type that was discussed, and within six months of the dialogue some had done so. This finding suggests that the deliberative dialogue may have had some effect, but that evidence of this effect is visible only at the individual level, through actions that dialogue participants performed themselves. It follows that short-term (i.e., over six months) intended effects of deliberative dialogues that take place in similar contexts or address similar issues should be limited to measures of evidence use among dialogue participants.
Strengths and limitations
A central strength to our study is the contribution it makes to understanding the key features and intended effects of deliberative dialogues used to address health system issues. We used case study methodology to provide a rich description of the key design features and intended effects of our case study dialogue, which will be useful for future dialogues that address similar issues and that take place in similar contexts. Furthermore, we used our insight and experience from planning and facilitating other deliberative dialogues to highlight similarities and differences between our case study and other dialogues that we have been involved with, and to present a comprehensive picture of our findings.
It is also important to evaluate the findings and interpretations we have provided in consideration of certain limitations. First, because this study involved a single deliberative dialogue, the generalizability of its results is limited. Second, this study had a limited duration (i.e., six months). As a result, participants may not have had sufficient opportunity to use the knowledge they gained at the dialogue to address the policy issue. Third, only the perspective of dialogue participants was sought. Incorporating perspectives from others involved in the dialogue, such as staff, researchers and steering committee members, may have strengthened the credibility of the findings and contributed a richer understanding.
Through this study of a deliberative dialogue used as a KTE strategy to support action at the system level, we have explored what participants think about specific design features and how they used the deliberations to support evidence-informed decision-making and other actions. Our study has demonstrated that deliberative dialogues that aim to address a low-priority health system issue could maintain (with some modification) all of the key design features we explored and evaluate short-term intended effects at the individual level among dialogue participants. Future research is required that compares the effects of deliberative dialogues that are similar in terms of design but different in terms of the issues addressed or the contexts in which they take place.
About the Author
Jennifer A. Boyko, PhD, Postdoctoral Fellow, Faculty of Health Sciences, University of Western Ontario, London, ON
John N. Lavis, MD, PhD, Professor, Department of Clinical Epidemiology & Biostatistics, Director, McMaster Health Forum, McMaster University, Hamilton, ON
Maureen Dobbins, RN, PhD, Professor, School of Nursing, Scientific Director, National Collaborating Centre for Methods and Tools, McMaster University, Hamilton, ON
Correspondence may be directed to: Jennifer A. Boyko, PhD, Postdoctoral Fellow, Faculty of Health Sciences, University of Western Ontario, Labatt Health Sciences Building, Rm. 403, London, ON N6A 5B9; e-mail: email@example.com.
This study took place in the broader context of a research project entitled Community Alliances for Health Research and Knowledge Translation on Pain (CAHR-Pain), an initiative that brings together research projects focused on developing, disseminating and evaluating knowledge translation and exchange strategies aimed at improving health outcomes and quality of life for individuals living with chronic pain. The initiative was funded by the Canadian Institutes of Health Research. The authors would like to thank CAHR-Pain for allowing us to work with them to carry out this study.
Boyko, J.A., J.N. Lavis, J. Abelson, M. Dobbins and N. Carter. 2012. "Deliberative Dialogues as a Mechanism for Knowledge Translation and Exchange in Health Systems Decision-Making." Social Science and Medicine 75(11): 1938–45.
Boyko, J.A., J.N. Lavis, M. Dobbins and N.M. Souza. 2011. "Reliability of a Tool for Measuring Theory of Planned Behaviour Constructs for Use in Evaluating Research Use in Policymaking." Health Research Policy and Systems 9(1): 29.
Canadian Health Services Research Foundation (CHSRF). 2006. "Weighing Up the Evidence: Making Evidence-informed Guidance Accurate, Achievable, and Acceptable." Ottawa, ON: Author.
Culyer, A.J. and J. Lomas. 2006. "Deliberative Processes and Evidence-Informed Decision-Making in Health Care: Do They Work and How Might We Know?" Evidence and Policy 2(3): 357–71.
Dryzek, J. 2009. "The Australian Citizens' Parliament: A World First." Journal of Public Deliberation 5(1): 9.
Kerkhof, M. 2006. "Making a Difference: On the Constraints of Consensus Building and the Relevance of Deliberation in Stakeholder Dialogues." Policy Sciences 39(3): 279–99.
Lavis, J.N. 2008. "Research Proposal: Evaluating Knowledge Translation Platforms in Low and Middle Income Countries." Hamilton, ON: McMaster University, Department of Clinical Epidemiology and Biostatistics.
Lavis, J.N. and J. Boyko. 2009. "Supporting Chronic Pain Management in Provincial and Territorial Health Systems in Canada." Hamilton, ON: McMaster Health Forum.
Lavis, J.N., J.A. Boyko, A.D. Oxman, S. Lewin and A. Fretheim. 2009. "SUPPORT Tools for Evidence-Informed Health Policymaking (STP) 14: Organising and Using Policy Dialogues to Support Evidence-Informed Policymaking." Health Research Policy and Systems 7(1): S14.
Lomas, J., T. Culyer, C. McCutcheon, L. McAuley and S. Law. 2005 (May). Conceptualizing and Combining Evidence for Health System Guidance. Final Report. Ottawa: Canadian Health Services Research Foundation. Retrieved April 7, 2014. <http://www.cfhi-fcass.ca/migrated/pdf/insightAction/evidence_e.pdf>.
McMaster Health Forum. 2013. "Events." Retrieved April 7, 2014. <http://www.mcmasterhealthforum.org/about-us/our-work/events>.
McMaster Health Forum. 2014. "Evidence Briefs and Stakeholder Dialogues." Retrieved April 7, 2014. <http://mcmasterhealthforum.org/stakeholders/evidence-briefs-and-stakeholder-dialogues>.