
Essays | January 2015

Blurring the Lines between Research, Evaluation and Quality Improvement

Diane Finegood and Hugh MacLeod

I am joined on the “balcony of personal reflection” by president/CEO and researcher Diane Finegood. We begin by discussing the difference between research and evaluation. Diane shares her story: she was privileged to be at the heart of the transformation of our health research system as an inaugural Scientific Director of the Canadian Institutes of Health Research (CIHR), a role created to foster the discovery of new knowledge and its translation into improved health. She came into that role with the experience of a successful basic scientist and soon found herself asking, “What is the difference between research and evaluation?”

Between us, we’ve asked the question “What’s the difference?” countless times and received countless different answers, ranging from the arrogant “research is what you do when you plan up front, evaluation is what you do after the fact” to “research has no client, evaluation does” to “no difference at all.” The role of the “client” is an interesting one, and we will come back to it. But we have come to realize that the question needs to be expanded to include quality improvement, and asked in a more nuanced way: when would it be good to blur the lines, and when should they be kept clear?

The CIHR website acknowledges that “the similarities and differences between research and evaluation have long been the subject of intense debate” and provides some arguments for keeping research and evaluation distinct and some for blurring the lines. Many of the differences seem artificial and unhelpful, such as the notion that evaluators require a unique set of skills or the idea that theory is more important in research than evaluation. These may be truths that could be spoken by the “Ghost of Healthcare Past,” but they are not likely the words of the “Ghost of Healthcare Hope.”

Another argument that seems central to the “keep clear distinctions” camp is that research creates “generalizable new knowledge,” while program evaluation and quality improvement are about informing decisions and identifying improvements in internal processes and practices. In an era where we have come to see the wisdom of patient-centered, personalized healthcare, research that is only about “generalizable knowledge” will become increasingly less relevant to the transformations that are needed, and to the ones coming whether we like it or not. The growing complexity of life in our highly networked and technology-driven world can be at the heart of both “future despair” and “future hope.” But clear distinctions between research, evaluation and quality improvement mean reinforced silos, and reinforced silos make it difficult for the depth of expertise available in our academic institutions to have an impact on the strength of our health systems and populations.

Research, evaluation and quality improvement tend to handle extraneous variables differently. Most research today adheres to a reductionist paradigm in which we try to control or measure all extraneous variables thought to be relevant. In program evaluation, the culture is to use multiple lines of evidence to answer evaluation questions and minimize confounding results. In quality improvement, the tradition is to acknowledge extraneous variables but not to interfere with them. When the challenge is “wicked,” the reductionist approach is no longer helpful. Complete control is impossible, and even partial control of extraneous variables means the results may not be applicable in the real world. This notion of control is also linked to the notion that there is one “real truth,” such as the truths that come from the highest form of evidence in the framework of evidence-based medicine, the randomized controlled trial. But the trend toward personalized healthcare means we need a much deeper understanding of the “range of truths” and how to apply them to individuals.

The Ghost of Healthcare Hope appears and suggests:

“Healthcare is complex (not just complicated), and complex problems need solutions appropriate for complex problems. Solutions that focus on recognizing that individuals matter and on matching their capacity to the complexity of their role in the system. Solutions that focus on building networks and teams with authentic trust between the individuals and organizations involved. Solutions that recognize that complexity demands the distribution of decision-making, action and authority, and that benefit from moving away from “making things happen” or “letting things happen” toward “helping them happen.”

“Today, you continue to fit the world into tidy categories. Convention dictates that there is something called academic research, something called healthcare delivery, and something else again called policy-making. They happen in different silos, use different language, and depend on different types of expertise. You will never optimize progress in any of these domains unless you begin to think of them holistically. Taken together, these core elements of your healthcare system should be tightly integrated so that it is impossible to think of any one without necessarily involving the other two.”

What does this mean for the lines between research, evaluation and quality improvement? It suggests that all of these traditions need to focus on developing a new kind of evidence base that supports adaptation and learning. The world of evaluation is giving rise to the important new idea of developing shared value through shared measurement. Shared measurement requires a common platform for data capture and analysis. When participants define measures together, a shared measurement platform enables performance to be compared across them. And when participation is ongoing, such a platform can be used to create systems of influence and an environment of adaptive learning through collective efforts to measure, learn, coordinate and improve performance.

Academic researchers have been developing new science in domains with labels like “research on knowledge translation” and “dissemination and implementation science.” This has given rise to published research on topics like the scale-up and spread of programs and the evidence-based de-implementation of contradicted and unproven healthcare practices.

These new areas of study must acknowledge the role of so-called “extraneous variables” that can’t be controlled and find new ways to understand them and how they affect processes and outcomes. Monique Begin famously said, “Canada is a country of pilot projects.” Breaking down the silos between research, evaluation and quality improvement might help us get beyond this sad and seemingly everlasting truth.

Amongst all the arguments for and against blurring the lines, one in particular points to a domain that researchers and evaluators alike must pay close attention to. The CIHR website argues that evaluation is inherently political and, by inference, suggests that research is not. Most researchers know that academic research is also political, but its politics are different from the politics of an evaluation performed by a contractor. Evaluators tend to have a transactional relationship with their client, whereas for researchers the “client” is at a distance from the research; the “client” for research could be considered the taxpayer, the citizen or a member of a population that could benefit from the accumulation of knowledge about a particular disease. The knowledge user tends to be more defined and easier to identify in evaluation and quality improvement. In evaluation, the transactional nature of the relationship means the client does not have to make the results available to others, and negative results are often suppressed. In research, the paradigm is “publish or perish,” and what is made available is determined through the peer review system, where it is also difficult to publish negative results. Either way, we seem doomed to repeat our mistakes, even though systems that embrace mistakes and learn from them are more resilient. The growth of connectivity and the proliferation of ways to share information are changing this landscape rapidly.

The Ghost of Healthcare Hope returns:

“My hope is for a new conversation where the boundaries of research, healthcare delivery and policy-making intersect. By tackling wicked system problems together you can surface the assumptions you hold. If you remember that for wicked problems you need to consider assumptions, actions and choices at many different levels you will have the opportunity to see patterns and differences in your collective thought. These patterns and differences can be used to discover common ground, and/or to find creative alternatives for stubborn problems. Their value lies in their capacity to be provocative, to open up alternatives, to invite inquiry, and to surface the fundamental issues that need to be addressed to make improvement leaps. I hope you engage in a new dialogue that seeks interdependent solutions to research, policy and organizational challenges.”

When we were hunter-gatherers, a simple control structure with one clear leader for each organizational structure was quite adequate. As we passed from early civilizations through the industrial revolution, more complicated and hierarchical control structures were required. But with the advent of the internet and the current technological revolution, we need networked control structures that enable more local action and authority, since top-down control is no longer effective. Healthcare is struggling with this paradigm shift, and all hands (researchers, evaluators and system performance leaders) are needed on deck to transform our measurement systems into ones designed for the increasingly complex world we are living in. As noted above, in complex systems individuals matter, and we need to engage everyone in ways that allow us to make the decisions we need to make in our little corner of this complex world.

Join next week’s conversation about relationships and the soft side of healthcare meeting the hard side. 


About the Authors

Diane T. Finegood, President and CEO, Michael Smith Foundation for Health Research.

Hugh MacLeod, Concerned Citizen.

References

Begin, M., L. Eggertson and N. MacDonald. (2009). A Country of Perpetual Pilot Projects. CMAJ 180(12): 1185.

Fraser Health: Department of Evaluation and Research Services. (2011). Differentiation of Research, Quality Improvement and Program Evaluation. Retrieved from http://research.fraserhealth.ca/media/2011_09_12_Research_QI_Program_Evaluation_Differentiation.pdf

Government of Canada: Canadian Institutes of Health Research. (2012). A Guide to Evaluation in Health Research. Retrieved from http://www.cihr-irsc.gc.ca/e/documents/kt_lm_guide_evhr-en.pdf.

MacLeod, H. (2011). A Call for a New Connectivity. Longwoods Healthcare Papers 11(2).

Bar-Yam, Y. (1997). Complexity Rising: From Human Beings to Human Civilization, a Complexity Profile. NECSI Report.
