Healthcare Quarterly
Clinicians as Designers and Leaders of Quality Improvement
No system has made substantial improvements in quality of care without the engagement and empowerment of clinicians to design and lead quality improvement efforts. In one of two interviews that speak to the role of physicians, Chris Carruthers (CC) interviews Ward Flemons (WF) – a professor of medicine at the University of Calgary and a leader in quality improvement – who talks about the critical role of creating and supporting physician leadership in quality improvement. He also discusses the importance of aligned expectations around quality and clear and strong accountabilities for quality.
Chris Carruthers, in conversation with Ward Flemons
CC: I noticed in your biography that you've obviously had a strong interest in quality and safety, and before the amalgamation you put some groups together that included physicians, to address the issues. Could you start by telling us how you got that going and how you got the physicians involved?
WF: Yes, I think a history lesson is always interesting. You learn from mistakes, and you learn from things that worked. It's an interesting history in Calgary. A lot of the work on quality and safety was in place before I took over, but it really came from the predecessor of Accreditation Canada. They surveyed the landscape and said, "Either Calgary doesn't have a quality plan at all, or it's very rudimentary." This was the first survey of the full region; before, we were all separate hospitals, like Ontario.
There was a really insightful and pretty powerful chief medical officer at the time (in Ontario the equivalent position might be chief of staff). The bottom line is, he said, "They're right; we don't do this very well and we have to do something better; this is a reason to make a major change in how we do business. We have a whole pile of analysts all working on the finance side, and yet that's not our business. Our business is healthcare." He was able to make the argument at the executive level to create a new entity within the Calgary Health Region that was focused on quality and had a physician leader. He got a lot of the analytical power in the region reassigned to report on the clinical side of the equation rather than on the financial side. He also got new funding for teams of physicians – they were called quality consultants at the time – at a departmental level, to lead quality in their department.
CC: One of the key issues was, there had to be additional resources. It wasn't voluntary on top of their existing clinical workload?
WF: Absolutely not voluntary; it was investment up front.
CC: Were they token stipends, or were they appropriate?
WF: They were appropriate. I was the first one in the Department of Medicine, and they paid me one third of my time.
CC: Based on income relative to clinical? If you're going to get physicians involved, you have to pay market value, don't you?
WF: Yes, I truly believe that. Now, it's a question of what physicians you pay and what you pay them for. I think you pay for leadership. I don't think you can afford to, nor would you want to, tell physicians, "You do your day job and then we'll pay you for quality on the side." I think the expectation should be that quality is one of the reasons we get paid – however we get paid, fee for service or whatever – so you appeal to the greater good to participate in the projects. But the person who's actually taking the 30%, or 50%, of their life to lead it, to come up with the plan and be the backbone, I think you have to pay those physicians.
CC: Was it difficult to recruit docs to these roles?
WF: By and large it wasn't too bad. Partly, it was the person who was recruiting; they got the former head of intensive care for the entire region. Like most critical care guys, he was visionary and very forceful, but he knew what he was doing. When he called you, or he called a department head and said, "We've got this new program; I need somebody out of your department to participate," people paid attention. They knew that the region had taken it seriously by putting money up front. They'd hired somebody on a full-time basis to do a lot of the lifting, and then they'd connected them with the data analysts. That's what got me interested – access to data in the region. I was an outcomes researcher; that's what I was interested in.
CC: Going back to the very beginning, after Accreditation Canada's report, was leveraging resources out of administration a challenge, or did they buy in up front, without balking at freeing up the resources for the physicians to do this?
WF: One thing you learn over the years is, often there's not a lot of unified vision at the very top in terms of how to move things ahead. Everybody's got their own idea, often a strongly held position. The docs, as represented by the chief medical officer, have a different perspective from the chief nursing officer and a different perspective from the chief operating officer. In this case, their very influential and visionary chief medical officer could convince his colleagues around the budget table that they needed to invest in this, and he used the Accreditation Canada report as the leverage to convince them. Once he was able to get that sign-off from his colleagues at the executive suite and from the CEO, he started building what he thought was the right way to go. But I think that as time went by, there was strong support and buy in from the whole organization. Initially, there were probably some challenging discussions to get the money put aside, but when they got it working, I think it was supported throughout the entire organization.
CC: Once they'd got some early outcomes and results, the investment seemed good?
WF: I think it's like anything; everybody's sitting back, asking "Who's involved; how likely is this to succeed; and have they made enough of an investment to be successful?" Once it's starting to look good, people want to know how they can join, as opposed to how they can ignore it. I think you do that partly with the leaders you put in place and what you signal by investing in it. Also, there's the model, and how successful you are at communicating the vision for why this is important and what it's going to accomplish.
CC: How big was the group that you put together at that time?
WF: They developed a portfolio called "Quality Improvement in Health Information"; it was about 80 people strong. They weren't all new positions. Many were data analysts pulled out of finance; others were pulled in from population health, so many were physicians. Then they developed six or seven new positions around quality improvement, plus all the new physician positions. They started with five departments, and we eventually got that up to ten. Most had a physician at one third of his or her time, plus a quality improvement consultant full-time. By the time they disbanded the portfolio, we had about 110 people; that was for a region with a budget of about $3 billion.
CC: Can you elaborate on why it was disbanded? Maybe it was under the reorganization?
WF: That was Alberta Health Services. It wasn't completely disbanded; it was just reorganized, and we were the best-funded unit in Alberta by a country mile and the most visionary. Unfortunately, before we all became one big happy region in Alberta, not many of the other regions had much in the way of resources, so they took what we had in Calgary and started using it for the entire province – which watered down the effect. As with many major reorganizations, the key leaders and visionaries left, so it hasn't been as successful on a province-wide scale. But it worked well on a regional scale, which was three big hospitals and about a $2 billion budget. It's a very complicated region, but the model worked.
CC: Once you had your physician leaders from the ten departments in place, how did they work, and can you give me any examples of some positive outcomes? How did they rally the others in their department?
WF: In one way, the physicians were a gift to the department, so they went to the department heads and said, "We've got new funding, so you get support for one of your physicians. If you can find a key person who's influential, then you're going to do well." I think there was a subtle hint that if you didn't do well, there were other departments that would probably like this. The suggestion was, "We don't have funding to support all the departments right now, and we may need to move funding around, depending on where we think we're going to get the best bang for the buck." By and large, department heads were motivated to find some of their senior, more influential people who had an interest in this area, and they started incorporating them as part of their executive. They started looking at it as one of the key outcomes that the department was interested in, in addition to education, clinical service and research. In a lot of departments, the quality improvement physician became part of their executive, and that physician's activities became part of the executive's monthly meetings and part of their agenda.
CC: Did they choose certain metrics depending upon the department? And did they adjust those on a regular basis? Or how did these quality and safety physician leaders pick their agenda?
WF: It was really variable. One of the downsides to doing it this way was that there wasn't a strong vision for how the work should get done. This was back in 2000, 2001, so the area was pretty new. Even though we had access to data analysts, the data itself was not fantastic. There was no formal way to train people in the key aspects of how you drive quality improvement in a healthcare department or a healthcare organization. Certainly, people wanted to see activity, and there was some reporting out, but it wasn't outcome based. The agenda was driven by the interests of the department, and primarily that meant the interest of the physician leading the quality improvement initiative, rather than the organization saying, "Here are our key strategies; we've got to meet these targets, and we've got to get you in line with what we want to accomplish." That was just a reflection of the organization itself. The leadership and the organization hadn't done this before and were pretty clueless in how you align this kind of investment with the key strategies of the region.
CC: Obviously, they've learned in Ontario from what's been done elsewhere, and the metrics flow up to the board and ultimately up to the Quality Council, so they're more part of the global strategy at the hospital.
What would you do today, from an educational point of view, if you took these physician leaders interested in quality improvement? What kind of education or advice would you give them now, and what kind do you provide now?
WF: We went through a journey, so most of us learned by going to Institute for Healthcare Improvement (IHI) conferences. We'd go to the quality forums in December, to their pre-conference workshops, and we picked up a lot that way. Learning was sort of haphazard. Along the way I met Brent James from Intermountain Healthcare, and I was impressed that although the IHI talks good theory, Brent was actually putting it into practice. Brent had a formal training program, he had buy in from his board, he had a vision for how this would work, and he had incredible overall buy in. We started sending people down to Brent's training course; he offered, so we probably sent 20 people.
My ultimate vision was to develop our own quality and safety course in Calgary, available to quality improvement docs and consultants – people trying to lead this for the organization – so they could get standard training in how to do this business, because there really isn't anything out there. We were part way down that path when the big reorganization happened, so the vision didn't materialize.
CC: So, that peer learning, going down to Intermountain and speaking with the docs and seeing how quality and safety were being applied at the front line was a key aspect in their enthusiasm and participation?
WF: I still remember the first conference we went to. You left saying, "Wow, this is really cool; there are some people doing some really neat stuff." It was enlightening and invigorating, and that got a lot of my colleagues turned on. They were saying, "Hey, we can do this stuff; we just have to learn a bit more, and we've been given this incredible opportunity and some protected time to do it." I thought it was a cheap way of getting people enthusiastic about making change, even though the conferences themselves weren't that cheap.
CC: Then what happened? Take us up to where things are now in that journey with the docs.
WF: We were very focused on the quality improvement side of the equation. Then in 2004 we had a disaster in the region: two patients died as a result of a mix-up in dialysis solution in our Intensive Care Units. That swung the pendulum pretty hard, not away from quality improvement, but toward adding in patient safety. We struggled to develop a complementary model around safety that would work with the quality improvement model we'd already built. But the quality improvement efforts got watered down because we got so focused on safety.
Having said that, we're managing both pretty well. We needed to get some doctors involved in safety who weren't the quality improvement doctors doing the process improvement work. That meant expanding our base, but we didn't have money to pay doctors for the safety work. We had created safety committees, so we went to the chairs of those departments and said, "We would like to work with a member of your department to chair a safety committee; this is what their terms of reference would be."
We were partly successful in engaging people, and they would be doing it for the greater good, without getting a funded position for doing it. Unfortunately, that infrastructure unravelled as well when Alberta Health Services happened.
I think a lot of people are prepared to buy in if they can see what you're trying to do and why, how it ties into the greater whole, and that they'll have influence in it. If they lose that, they quickly start looking around for better ways to spend their limited amounts of time and energy.
CC: Those are very good points. What about your champion leaders? Did they disappear? Was that another reason for the momentum falling off?
WF: Yes, they did. We started to get a better idea of how to align quality improvement initiatives in the departments with regional priorities. The best example was, we had decided in 2006 that Emergency Department wait times were terrible and that nothing short of a region-wide initiative was going to address it. We acknowledged that Emergency Department wait times were not caused by the Emergency Department but by problems elsewhere in the system. We just about killed ourselves engaging all the departments and realigning their priorities. They were used to setting their own priorities and doing what they wanted, thank you very much, so we had to slowly turn them around by saying, "You can do some of what you want, but some of what you have to do is aligned with these priorities of the region, and the top priority is the Emergency Department." We got people to the table to talk about and lead that; and the leaders swung their focus around. What finally killed it was, the region took their eye off the wait-time ball, and then Alberta Health Services happened. Everybody realized that nobody cared any more, because they were too worried about how to restructure this large organization; there was really nobody in charge. As soon as people believe that what they're doing doesn't matter to somebody higher up than them, they lose their ability to make decisions and to spend their limited budget on what they think is important. When you take away authority from them – the interest in trying to make change evaporates overnight.
CC: What about education now, in clinical quality and safety? There's IHI in the United States still, and Intermountain. Is anything available in Alberta?
WF: Well, when I decided to leave the clinical leadership role that I had, I came back to faculty full-time. Within a year I'd put together a course that the faculty now offers – a quality and safety course. We run it every other Tuesday evening for most of the year, about 13 or 14 sessions. We offer it to anybody in the healthcare system, to try and keep the thread of how do you improve healthcare alive and to give people practical advice and information they can use.
Alberta Health Services has a very small education department that is swamped just trying to get very basic information out to a large number of people. The Canadian Patient Safety Institute (CPSI) has done some work around offering training in various aspects of quality and safety, and the BC Patient Safety & Quality Council led by Doug Cochrane has a quality improvement training program for people in BC. I think each province is trying to address things in a slightly different way; there are no standards that any of us are being held accountable to. If somebody in the province will do it, good on them. We're trying to keep it alive until, at least within Alberta, the next vision comes forward to say how we are actually going to do this work.
CC: Quality and safety education: any thoughts of how far it should be pushed down to the members of a department or division?
WF: I've partnered with the Health Quality Council of Alberta on this, because it gives us the option of trying to get everybody across the province on the same page. It's been a bit challenging, but at least we have a model for it. Our focus is, what are the key things that people need to learn throughout the healthcare system – from the C-suites, the CEO and the board level, to directors, frontline providers, and people who aren't professionals but are the backbone of the system – the cleaners and the unit clerks. We've tried to outline the very basics of what everybody needs to learn. Our challenge, of course, is getting it launched, getting enough people who understand it and could teach it. We are trying to work with our continuing medical education (CME) office and with undergraduate and post-graduate medical education. We've just barely scratched the surface, but at least we have a plan.
CC: Does the Health Quality Council of Alberta have any direct relationship with the metrics that are coming out of Alberta Health Services, or even metrics from your local area – monitoring them to drive different results or a change in behaviour?
WF: They certainly have that mandate, and they have access to data. I think in the past their focus was a bit like Ontario's: "How do we get the health regions to play in the same sandbox and think about the same things, and we can be a facilitator and an advocate for them?" There were nine health regions in Alberta, but with the creation of Alberta Health Services, that role got wiped off the map. I think they're still trying to find that sweet spot of monitoring to push the system forward, knowing that it's such a funny model in Alberta as the government is overseeing a single health entity. It's not like this in Ontario, but in Alberta sometimes it seems like the health minister is actually the CEO running the healthcare system, so there's a very strange accountability going on. Although the Health Quality Council does have access to data, and they could hold people accountable, what they're holding accountable is the single entity that the government's created. By virtue of that, they're holding the government directly accountable, and that relationship is still in its infancy.
CC: Yes, there are huge political changes out there.
WF: As you can imagine, with the Excellent Care for All Act in Ontario, you have a Health Quality Council almost acting on behalf of the government to hold these different entities accountable for their quality plan, the metrics they're coming up with, and how they're doing on their metrics. That isn't the model at all in Alberta. I'm not sure that it will get there because it's almost the government asking the Quality Council to hold government accountable and governments like to have other people held accountable, not themselves.
CC: From what you know of the Ontario situation, do you think this is a good step forward?
WF: It's an interesting step. I don't know a lot about it, just the concept of forcing organizations to come up with a plan that they could be measured against. My only concern about that is, unless the data systems in Ontario are vastly better than the ones we have in Alberta, most of the data that you'd like to hold people accountable for isn't available, or it's not collected in a way that you can use. We hold people accountable for things that we can measure, not necessarily for things that we should hold them accountable for. Sometimes I think that prompts people to massage the data so that it looks better than it actually is. I'm always worried about how you put the incentives in place, and whether you've got them in the right order.
If you look at the model of Intermountain Healthcare, they first put in place very good data systems, so they could get the information they needed to run their business. Obviously they were motivated by the fee structure in the United States, but, nevertheless, they've got what appears to be believable data about things that matter, as opposed to data that you query about things that may matter less than people might think. I'm a bit worried about holding people accountable in a metric structure, if you haven't got the data system.
CC: That's a very good point. What's the momentum now? You did some fantastic work almost ten years ago, but where's the momentum in your knowledge and on the CPSI Board across Canada to address the quality and safety agenda and get physicians involved? Is it moving ahead well, is it variable between provinces; or is it stalled because of the financial crisis we're all in?
WF: I think it's stalled. Maybe it's mostly the financial problems, but I think it's stalled because of a lack of a unified vision. Everybody's off doing their own thing, doing it in different ways, with a different philosophy; and it changes so quickly that nobody could keep up with it if they tried; you'd need a program that got updated every month about who's doing what, where, and how they're doing it. The fact is, we don't have a single healthcare system in Canada – we have 13 – so everybody's doing something different. I think that makes it incredibly difficult for CPSI or a national organization to read the tea leaves and figure out how they can have the greatest impact across the country. The focus seems to shift from one topic to another, and we've migrated from safety issues to access issues, and other forms of improvement. I don't think any of it's wrong; it's just incredibly challenging when everybody's doing something different. It speaks to the idea that we don't have a unified vision for how to move forward. I think that's one of our challenges in this environment – working collaboratively with one another to reduce the amount of work by not reinventing the wheel in every province.
CC: When you're faced with apathy in a medical staff that's not engaged, do you have any thoughts on how to engage them?
WF: Yes, I think it's important to listen. The other critical thing is data. I was pleasantly astounded at the times I've been able to get believable data and put it in front of physician leaders and say, "Here's what we could be doing; here's what we're currently doing. We're not here to debate the data; the data is the data." If they can buy into the data, they sit back and say, "Wow, that's terrible."
The example I'd give is about when we started trying to get people to buy into the fact that Emergency Department wait times had a lot to do with how services were functioning within the hospital, not within the Emergency Room. We all provided a service to the Emergency Department, a service called consulting. I was able to show them data on the average time and 80th percentile time of how long it was taking individual services, from the point of being asked to see a patient, to the time they saw the patient, to the time they made a decision on whether the patient was being admitted. Then I took that data and put it in front of people who were actually accountable.
They looked at it and said, "You mean to tell me that our department is averaging four and a half hours to make a decision to admit a patient?" We go, "Yes, and on your bad days it's getting to be nine hours." They were astounded and sickened by the fact that it was so terrible. That was hugely motivating for them to go back and say, "We've got to do better." They didn't come to the table and say, "How much are you going to pay us to get better, or what do we get if we get better?" They just looked at it and said, "This is not acceptable."
It's about trying to get data for situations where people are accountable and feel accountable, and then feel they want to change. Then, you give them the ability to change, enough resources so they can change, and the data that lets them know if they're making an improvement. The natural leaders and early adopters in any department take hold and see what they can do with it. In the right environment, I think that's extremely helpful. There are places where it doesn't work, but to me, the key is getting believable data that people can't dispute and a level of accountability that no one can dismiss.
CC: There are some cynical people that say that doctors always challenge data, even good data. That's their opening line. Any thoughts on that?
WF: I'm reminded of the data journey that Don Berwick used to talk about. He said, "Whenever you show data to a group of physicians or a group of anybody, the first thing they'll tell you is, this isn't our data; you don't have it right; this isn't our data." He said then you move past that to, "Okay, maybe it's our data, but it's wrong; there are problems with the data." The next stage is, "Okay, maybe it's our data and maybe it's not that wrong, but there's nothing we can do about it." The last stage is, "Okay, it's our data; it's reasonably valid; it isn't very good. Now what are we going to do to change it?" You have to expect and anticipate the push back at each of those stages, because nobody wants to admit that as a collective group they're not performing as well as they'd like to. I think it's really important that you never show data on individual physicians; you've got to get people working together as a team and then tell the team, "This is how you're functioning," not "Gee, there's a really good doctor in your group and there's a really bad doctor in your group. How do we get the really bad doctor to buck up and the rest of you laggards to start performing like this really top-notch guy?"
CC: I agree with you, but if you put averages out there, how does an individual know that he or she is not meeting them or is doing a lot less? Do you give them their data personally, without sharing it with the whole group?
WF: Yes, I think that's what you do. But you can go about it in a couple of ways. Say you're comparing hospitals to hospitals; you show the hospital its own data and you show it the average across the rest of the group and say, "What do you think?" I'm not a huge fan of doing that. My view is that a benchmark like that is always a great way to achieve mediocrity. The better discussion begins with, "Here's what we're doing as a group, and here's what we think we could do or what other groups have done. There's a gap, so what are we going to do about the gap?" I'm a big fan of gap analysis and comparing people to what they think they could do or should be able to do.
CC: You went to Intermountain; Kaiser Permanente also has good programs. Any thoughts on collaboratives or benchmarking against hospitals in Canada, and working as partners to address this issue?
WF: Yes. People are always interested in how somebody else is doing. It's probably part of our competitive spirit, and nobody wants to think that they're in the bottom half. I think it can be motivating. But what's more motivating is if it's done in the spirit that we can all get better. One of the things I learned at IHI was the concept of running collaborative projects around common topics, where groups got together and were able to see their results plus the results of other teams participating in the collaborative. Everyone was trying to help each other improve, as opposed to, "How can we improve and look better than somebody else?" I think tapping into people's natural inclination to compete, if it's done in the right way, can be really helpful; if it's not done in the right way it's potentially damaging.
CC: When you set up your physicians at one third of their time, I guess you had specific expectations, position descriptions and a clear sense of what they were accountable for?
WF: Initially, not very well. I think the very first contract I signed as one of those physicians said, basically, "Just get out and start doing something. We want to start seeing some activity." They appealed to the people who were self-starters, who didn't need a lot of direction and didn't need a contract that said, "You'll accomplish this by this date or else." I think ultimately we were too unstructured when we started, and we needed better accountability and better position descriptions. At the end of the day the kind of people you want to attract are the self-starters who don't need specific marching orders. They need general direction; they need to be given a vision; they need to be given buy in to some expectations without being micromanaged. For this to be really successful, you want to attract the kind of people – and there are a lot of them in medicine – who get charged up with the idea that, "I can make this better. The healthcare system is not functioning as well as I think it could. They've given me the tools to make it better, and that's exciting. That's an opportunity I can't pass up."
CC: Very good. I don't think there's anything else. This has been excellent. Thank you very, very much.
WF: The only other thing I might add, Chris, from my perspective is having been involved in lots of quality improvement projects and watched organizations get better and then self-destruct, I truly believe that it all comes down to culture, and I wouldn't be the first person to say that.
At the end of the day, you look for leaders and you look for unity around a common vision that gets you to a change in the culture of the organization. I think the incentive program that Ontario is embarking on has the potential in some places to change the culture for the better, and I think it also has the potential for changing culture to the detriment of the organization. I'd be just a little nervous about the impact on the culture of how this is being structured; just because it helps some organizations doesn't mean that it isn't going to adversely affect a lot of others.
CC: Good point. They need to have champions throughout the organizations, who may or may not be there when you implement this, and there are consequences. The other thing is, changing a culture takes time.
WF: Absolutely. The big problem with governments is that they want results yesterday. If they don't get the results fast enough, they change everything. That's the absolute worst thing you can do. They've got the model all wrong. They've got to invest in the right model and then stick with it and have some constancy of purpose for several years in order to start seeing the returns on their investment. I quite liked Ross Baker's book, High Performing Healthcare Systems: Delivering Quality by Design, where he looked at seven high-performing healthcare systems in several parts of the world. He has a lot of valuable take-home lessons that hopefully people are still paying attention to in Ontario.
CC: One of the key issues is to have physician leaders. Do you think governments invest enough in developing or educating physician leaders to step up and address these issues on a global basis?
WF: Absolutely not. Now if governments were smart, they would combine their efforts and put together a quality training program – maybe not just for physicians, but primarily aimed at physicians – that really addresses what physicians need to do to be successful. We always talk about physicians being the lynchpin; they're an important part of it, but they're not the be-all and end-all. Answering the question – What are the key component parts that need to be in place in any system to allow it to improve, physicians being one of them? – if we could get a common understanding of that and then a common way to address it through education, plus expectations and systems to support it, we'd be a lot further ahead than by just putting in structures like, "Okay, we're going to start measuring now, and that's going to be the motive to get everybody to improve." It's part of the answer to the formula, but it ain't the whole formula, so you'd better go back and say, "What are the other component parts in that formula?" I'm pretty sure if you've got a zero in any one of those parts, you're going to get zero at the end of it.
CC: They've asked for these metrics, and most of the physicians don't have the knowledge or tools to affect the metrics. I think it's also key, as you mentioned, that the Province take some ownership of this challenge and invest in it, which I don't think they have yet.
WF: Well, a key metric that came out four years ago was the hospital standardized mortality ratio (HSMR). There's certainly a signal there, but there's a lot of noise. Say you went to a group and said, "Okay, we're in trouble here; our HSMRs are not good. We're being held to account and we need to change them within the next year." If you put that on a table in front of any physicians, they'd all look at it and say, "So how are we going to do that? We don't understand the metric you put in front of us to the point of being able to change it, because we don't know what drives it." If you don't give people the data that they can actually do something about, that they can see how to change, it's inappropriate to hold them accountable for it. HSMR is a classic example of that, so I'm interested in what metrics people are being held accountable for.