The June 2013 balcony conversation about the complexity and dynamics of a Stanley Cup hockey game, and the lessons it holds for healthcare managers, offers an important opportunity to discuss resilience engineering and its application to two key issues confronting healthcare: quality of care and patient safety. Resilience engineering has, in the last decade, become a key conceptual and practical approach to these issues. As noted in the Stanley Cup discussion, healthcare’s attempts to enhance quality and patient safety remain wedded largely to the rational and formational teleologies. What about new perspectives?
The Ghost of Healthcare Consciousness appears and with a heavy sigh intones:
“For a system that has seen many management fads come and go, endless restructuring, keeping staff busy so they don’t notice that nothing has really changed, perhaps we should all give up on resilience engineering too, go home and just let the healthcare system be.”
Well, that would be a good idea if the healthcare system didn’t continue to kill and harm people unnecessarily, the very people who come to it for help! Efforts to deal with this problem have forced us to reflect on the history of thinking about safety in risk-critical industries (healthcare being one). In the 19th and early 20th centuries, safety was almost entirely concerned with the problem of faulty technology. By the mid-20th century, it was realized that people actually ran the technology, and so human error became a focus of safety activities. By the late 20th century, it was recognized that humans work in large organizations, which through their policies and procedures may actually end up constraining humans from getting their work done, inadvertently creating significant trade-offs: Cheaper, Faster, Better – the problem being that everyday practice has shown that one can’t have all three.
Everyday work in healthcare settings thus became a focus of attention beyond considerations of professional training and competence. Resilience engineering acknowledges the dynamic nature of this work (much like the dynamics of a hockey game, though generally a bit less violent) and defines resilience as the intrinsic capacity of an organization, its component parts and its people to adjust what they do prior to, during and after disturbances, so as to maintain required operations under both expected and unexpected conditions.
Given the inevitability in any dynamic system for the unexpected (i.e., surprises), resilience thinking has identified four capabilities organizations require to maintain reasonable stability in the face of change: the capability to learn from past events; the capability to monitor current conditions and threats; the capability to respond to regular and irregular conditions flexibly; and the capability to anticipate long-term threats, challenges and opportunities.
Learning, and developing a “learning culture,” is not new in healthcare, but the key issue is: what, in fact, is being learned? The ascendance of “root cause analysis” since the IOM report, “To Err is Human,” inculcated a rather limited approach to learning, focused as it was on a single “root cause,” an idea completely at odds with what is now understood about the sources of failure in highly complex, dynamic, risk-critical organizations.
Resilience engineering scholars have coined the phrase “What you look for is what you find” (WYLFIWYF) to describe the primary result of root cause analysis. Sadly, such analyses have usually tended to emphasize the human element (in what is believed to be a perfectly normal and well-run organization), and this is very easy to do, since healthcare is necessarily provided by human beings, who are almost always in close proximity to a critical incident. But proximity is not a cause. It must therefore be recognized that “human error” is not an explanation but demands an explanation, and such an explanation must include a wider consideration of the context in which healthcare work is done. The limitations of root cause analysis have been recognized by experts who changed the language from root cause(s) to contributing factors and incident findings, and from root cause analysis to incident analysis. This change in language should shift the focus of analysis away from individuals, allowing broader consideration of the complexity of healthcare delivery. The Canadian Incident Analysis Framework provides a wealth of resources to support such systemic analyses, including, importantly, bringing the patient voice into the investigation.
Responding to events is also clearly not a new phenomenon, but the nature of the response, which often focuses on finding the culpable human in an otherwise well-ordered organization, suggests a severe poverty of understanding about the dynamics of healthcare (or any complex dynamic system) and reinforces the “blame and shame” culture. Only recently recognized as self-defeating, this culture nevertheless remains alive and well in many instances. The capabilities to monitor and anticipate, however, remain underdeveloped.
Resilience thinking emphasizes that far more can be learned from understanding how everyday work gets done under varying conditions than from learning limited to a consideration of critical events. Thus Safety I (learning from failure) evolved into Safety II (learning from the overwhelming success of everyday work), providing a much greater opportunity to learn, since there is so much more successful everyday work than failure. This insight seems counterintuitive in healthcare, in part because of the deeply held and popular view that healthcare organizations do the right thing by design (e.g., policies and procedures), professional expertise, and good governance and management. However, resilience thinking recognizes that safety is an emergent property of the work done; it is not the result of a fixed Newtonian design. Safety is made every minute of every day by people managing the trade-offs inherent in the work itself, and the surprises that seemingly crop up out of nowhere. As Richard Cook, a well-known anesthesiologist and patient safety expert, observes: “Safety is created by practitioners and has the half-life of epinephrine” (Richard Cook, personal communication). Safety is thus continually created and recreated by those doing the work, not because healthcare organizations are perfectly designed. They can’t be, if for no other reason than the highly dynamic nature of healthcare delivery, including the infinite variety of human biology and pathology that challenges them.
The resilience perspective is thus an exemplar of the transformational teleology described in the essay, “Lessons from the Stanley Cup Playoffs.” It focuses on the dynamic interactions among people doing the work, as well as between people and organizations. It recognizes that design is only the starting point for examining how work gets done in healthcare and, as a conceptual framework, it provides insights that point to a better understanding of how safety is created in complex adaptive systems. By arguing for an emphasis on how everyday work gets done, it leads to a deep understanding of what healthcare work actually is, and it gives us a much broader and deeper sense of how critical incidents happen. It thus also provides us with the means of learning how to manage complexity rather than trying to erase it. It poses questions that lead to a greater capacity to understand and appreciate variation.
Resilience thinking is thus perhaps more important to understand at the governance levels of healthcare than at the front line, as front line workers intuitively appreciate variation because they live with it every day. Senior management and boards, on the other hand, still feel compelled to believe that their designs are the answer to organizations' problems. They may attempt to reduce variation, which is often perceived and described as non-compliance at best, or deviance at worst. This approach has not worked in healthcare or other risk-critical industries.
Very few people think about how things go right or, alternatively, why things don’t go wrong more often (Mesman 2009). Thus Safety II could be a tough sell to busy people taking care of patients, who are trying to get through their day safely while completing necessary tasks. They might not fully appreciate why a Safety II perspective is useful.
- What do you think?
- Do you think there is practical value in shifting the focus from examining what goes wrong, to what goes right? Why or why not?
- If you think there is practical value in the Safety II perspective, how would you promote the idea with your colleagues?
Suchman (1995) observes that the way work gets done is not highly visible: “in the case of many forms of service work, we recognize that the better the work is done, the less visible it is to those who benefit from it.” Furthermore, she posits that the farther you are from the work, the harder it is to understand how it gets done. It can be extremely difficult to understand why things go wrong or right!
We invite you to reflect on the way you get your work done, and in particular think back on a moment of success amidst what, on all accounts, could have been a disaster … step back and consider:
- What is it that you and your colleagues (if team dynamics are relevant in your example) were doing?
- How did you manage to create a good outcome, despite the circumstances?
- Can you describe why you think things turned out well?
To paraphrase Einstein’s insight: “It is the height of folly to keep doing things that don’t work and expect a better outcome.”
We close with a passage from the essay titled, “Passive Following vs. Future Focused Learning”:
“Most challenges in human systems can’t be solved with a quick technical fix. Even so, healthcare people usually want such a thing. Leaders, eager to please, respond accordingly by taking the problem on their shoulders and developing solutions that might alleviate a symptom, but not the underlying problem. A major pitfall of leadership is assuming that you’re the one who must come up with the answers, instead of developing adaptive capacity, the capacity of people, to face root cause problems and take responsibility for them. Change requires more than identifying the problem and then a call to action. It requires looking beyond the problem and finding the source of trouble. The real problem is frequently located where we would least expect to find it – inside ourselves.”
Join next week’s conversation titled: “A Different Way of Thinking”
About the Author(s)
Dr. Sam Sheps, Professor, School of Population and Public Health, University of British Columbia
Karen Cardiff, Researcher, School of Population and Public Health, University of British Columbia
Hugh MacLeod, CEO Canadian Patient Safety Institute … Patient, Husband, Father, Brother, Grandparent … Concerned Citizen
MacLeod, H. and E. Meuser. 2013. “Lessons from the Stanley Cup Playoffs.” Longwoods Ghost Busting Essays.
MacLeod, H. and G. Dickson. 2013. “Passive Following vs Future Focused Leadership.” Longwoods Ghost Busting Essays.
Mesman, J. 2009. “The Geography of Patient Safety: A Topical Analysis of Sterility.” Social Science and Medicine 69(12): 1705–12.
Suchman, L. 1995. “Making Work Visible.” Communications of the ACM 38(9): 56–64.
Canadian Incident Analysis Framework. ISBN 978-1-926541-44-0. <www.patientsafetyinstitute.ca/toolsResources/teamworkCommunication>