Abstract

[This article was originally published in Healthcare Quarterly, 13(Sp)]

In this article, we describe a framework that we have developed for improving the effectiveness of critical decision-making in selecting information systems. In our framework, we consider system selection in terms of the strength of evidence obtained from the testing of candidate systems, in order to reduce risk and increase the likelihood of selecting and implementing an effective and safe system. Two case studies, one from a major North American hospital and one from a major European hospital, are presented to illustrate how methods such as usability testing can be applied to improve system selection as well as customization (through early identification of system-organization mismatches and error-prone system features). It is argued that technology-organization fit and consideration of the potential for technology-induced error should be important selection criteria in the procurement process. Implications are discussed for the development of improved procurement processes that lead to safer healthcare systems.

The appropriate selection of health information technology (HIT; in particular, electronic health record [EHR] systems) is one of the most critical decisions in the journey toward streamlining healthcare and making it safer. Indeed, research has indicated that the selection of systems that match user and organizational needs and effectively support work practices can lead to decreased medical error and increased patient safety (Borycki and Kushniruk 2008). However, there is also a growing body of literature indicating that systems that do not match the purchasing organization's needs and work practices may lead to safety hazards. Furthermore, specific features of health information systems and user interfaces have been shown to be highly related to the occurrence of medical error (Kushniruk et al. 2005). Along these lines, the literature now contains numerous examples of purchased systems that failed to meet user needs and that ultimately became safety issues. For example, work by Koppel and colleagues (2005) showed that the implementation of a commercially available electronic health system resulted in a range of errors, related both to gaps in the interfacing of information and to human factors issues, that created healthcare safety hazards (e.g., access to the wrong records by physicians, missing information and error-prone user-computer sequences). A subsequent study by Han et al. (2005) of a commercially available system indicated that deaths actually increased in a hospital unit after the implementation of the system. Furthermore, Kushniruk and colleagues (2005) have experimentally shown that specific features of a system's usability (e.g., how information is displayed to a user of a medication administration system, the style of human-computer interaction sequences, etc.) are directly related to specific types of technology-induced error (e.g., errors in user interaction with a system that can lead to incorrect entry of patient medication information by physicians). With this growing body of evidence that the selection of the wrong system can lead to serious safety issues, the question remains: what can practically be done to decrease the risk of selecting a system that does not fit with user needs and organizational structures and that may ultimately become a safety issue? In this article, we explore the use of rigorous clinical scenarios and the usability testing of candidate information systems to improve decision-making in purchasing expensive HIT and to lead to safer and more effective system implementations. We describe two case studies of organizations that have applied some of these approaches to their choice of effective and safe healthcare systems.

Toward a Framework for Improved System Selection and Safety

The appropriate selection of systems such as hospital-wide EHR systems represents a critical decision-making task. However, despite the potentially huge expenditure of money in purchasing large systems, decision-makers involved in the process are often allowed only very limited access to candidate systems prior to the system purchase (Kushniruk et al. 2009). Furthermore, the standard processes for health system procurement are unlikely to provide the decision-makers selecting systems with detailed information about the potential for system safety issues and hazards prior to purchase. In this section, we propose a framework for considering possible system selection methods in terms of the ability to get hands-on access to candidate systems to apply realistic test scenarios (customized to the purchasing organization) as well as to apply methods emerging from the area of usability testing to ensure that appropriate decisions are made regarding system safety. In subsequent sections, we describe two case studies, one from a major North American hospital and one from a major European hospital, where rigorous testing of systems prior to purchase has been conducted.

The framework we propose considers possible system selection methods in terms of a continuum (Figure 1) that ranges from weak evidence (simply involving a demonstration by the vendor to the selection committee) to strong evidence (involving hands-on analyses of the usability and impact of the system on hospital workflow within realistic or real settings prior to selection) to support decision-making in choosing among candidate systems. The continuum was developed based on an analysis of the literature and our experiences in consulting with and advising healthcare organizations in the use of new approaches to procurement (e.g., the application of usability testing and the use of low-cost methods for testing candidate EHR systems in situ, which are described below). This process involved convening an expert panel consisting of PhD-prepared experts in human factors and medical errors; these experts classified reported procurements along the continuum from weak to strong evidence for supporting the choice of a "safe" health information system. Decision-makers can use this continuum to support organizational decision-making in selecting from candidate systems.
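
As a rough illustration only, the continuum's levels of evidence could be encoded as an ordered scale for classifying procurements. The labels below are our paraphrase of the categories discussed in this article, not the exact wording of Figure 1:

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """Ordered evidence scale paraphrasing Figure 1 (weakest to strongest)."""
    VENDOR_DEMO = 1                 # unscripted demonstration by the vendor
    SCRIPTED_DEMO_WITH_CLIPS = 2    # vendor demonstration driven by organization-supplied CLIPS
    HANDS_ON_CLIPS_TESTING = 3      # selection team tests candidate systems against CLIPS itself
    IN_SITU_USABILITY_TESTING = 4   # usability testing/inspection with real users in the real setting

# The ordering supports simple comparisons when classifying a reported procurement:
assert EvidenceLevel.IN_SITU_USABILITY_TESTING > EvidenceLevel.VENDOR_DEMO
```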


[Figure 1. Continuum of evidence for system selection, ranging from weak (vendor demonstration) to strong (hands-on, in situ usability testing)]

In Figure 1, CLIPS refers to clinical information processing scenarios, which represent clinical situations that could be expected to occur within the local healthcare environment (Lincoln 1996). CLIPS can be used to test systems to determine if they respond appropriately to the situations described, and they should focus on special needs and unusual situations in addition to normal activities. As Figure 1 indicates, vendor demonstrations of products that are not guided by a rigorous set of CLIPS provide only weak evidence of how the system will respond to situations that might be error prone or lead to safety issues.
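
To make the idea concrete, a CLIPS scenario could be captured as a structured test record. The sketch below is purely illustrative: the fields and the sample scenario are our own assumptions and are not drawn from Lincoln (1996):

```python
from dataclasses import dataclass

@dataclass
class ClipsScenario:
    """A hypothetical record for one clinical information processing scenario."""
    title: str
    setting: str                        # where in the local environment the situation arises
    narrative: str                      # the clinical situation the candidate system must handle
    expected_behaviour: list[str]       # what a safe, well-fitting system should do
    unusual_or_edge_case: bool = False  # CLIPS should cover special needs, not just routine work

example = ClipsScenario(
    title="Allergy reported after order entry",
    setting="Ambulatory primary care clinic",
    narrative=("A patient reports a penicillin allergy after amoxicillin has "
               "already been entered but not yet signed."),
    expected_behaviour=[
        "Adding the allergy triggers re-checking of the unsigned order",
        "An interaction alert is shown before the order can be signed",
    ],
    unusual_or_edge_case=True,
)
```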

It should be noted that most current procurement processes can be located on the left-hand side of the continuum, with only a few published examples of procurements involving the collection of evidence at the far right of the continuum. It should also be noted that methods for analysis that have emerged from the field of usability engineering are located toward the right of the continuum. The two most popular usability engineering methods are usability testing and heuristic evaluation. Usability testing refers to observing representative users interacting with a system (typically involving video and screen recording of these interactions) while carrying out representative tasks. For example, this may involve observing health professionals (e.g., physicians or nurses) interacting with a health information system to enter or retrieve patient data (Kushniruk and Patel 2004). In contrast, heuristic evaluation involves an analyst systematically "stepping through" a user interface or system (i.e., examining the main screens of the interface or system in sequence) to identify violations of principles (or heuristics) associated with good design and usability (Nielsen 1993). Recent work by Carvalho et al. (2009) has extended this approach through the development of a set of evidence-based heuristics that can be used by healthcare organizations to assess the safety of computerized physician order entry systems.
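
As a minimal sketch of how findings from a heuristic evaluation might be tallied, the fragment below uses heuristic names drawn from Nielsen (1993); the severity scale and the example findings themselves are hypothetical:

```python
from collections import Counter

# Findings recorded while stepping through a candidate system's main screens:
# (screen, heuristic violated, severity where 1 = cosmetic .. 4 = catastrophic).
findings = [
    ("medication entry", "error prevention", 4),
    ("medication entry", "consistency and standards", 2),
    ("results review", "visibility of system status", 3),
]

violations_per_heuristic = Counter(heuristic for _, heuristic, _ in findings)
worst_finding = max(findings, key=lambda f: f[2])

print(violations_per_heuristic)  # how often each heuristic was violated
print(worst_finding)             # the highest-severity problem, a candidate safety issue
```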

Case Study One: Procurement Involving Workflow-Based CLIPS Testing – Experiences at Mount Sinai Medical Center

The safety of healthcare information systems is directly related to their "fit" within the organization in which they are implemented (Borycki and Kushniruk 2008). This refers to the socio-technical aspects embodied in the system, such as how the system will respond to complex work sequences in the institution, how well the system responds to unusual or unique situations in the organization and how well the technical aspects of the system match and integrate seamlessly with the institution's technical infrastructure. In order to test candidate systems' fit with local practices in hospitals and ultimately their potential to be effective and safe systems, the development of realistic CLIPS is essential. To address this, Kannry and colleagues at Mount Sinai Medical Center in New York have worked to develop processes to create realistic CLIPS that can be used to test candidate systems not only on their basic functionality but also in terms of how well they respond to unusual situations and how well they integrate into the complex workflows and activities characteristic of large complex healthcare organizations.

In his previous work, Kannry has identified the unique challenge in HIT procurement – how to obtain user input in the procurement process (Kannry 2008; Kannry et al. 2006). Careful involvement of users during selection as well as implementation is critical and can be the difference between failure and success (Gray and Felkey 2004; Kannry 2007; McDowell et al. 2003). Yet, clinical users frequently have no prior education, training or experience to draw upon (Kannry 2007, 2008; Kannry et al. 2006). Users are frequently called upon to attend demonstrations as part of the selection process (McDowell et al. 2003) and asked to map the functionality demonstrated to their daily clinical needs. Many vendors prefer to demonstrate functionality and play to existing strengths while at the same time shying away from system and software weaknesses (Campbell et al. 1989; Einbinder et al. 1996). In addition, the workflow shown may not reflect that of the selection site as much as the workflow of the site at which the vendor developed the system. Vendor demonstrations are determined by the script, if any, that an institution supplies the vendor. Much like a film or television show, the script determines what is shown and in what order.

The approach taken at Mount Sinai Medical Center was to employ workflow-based scripting as opposed to functionality-based scripting (Kannry et al. 2006); workflow-based scripting follows the clinical provider through typical patient care scenarios, whereas functionality-based scripting asks whether the system can do x and y and tries to follow a checklist organized by section. The workflow-based approach to scripting has been shown to more accurately represent users' preferences (Einbinder et al. 1996; Laerum and Faxvaag 2004).

Extensive scripts were created by a selection team member who is also a practising physician and were then reviewed by practitioners in multiple specialties. The focus of the scripting was on primary care because it accounts for the largest number of visits in the hospital-based practices. The scripts also emphasized the numerous hand-offs that occur, especially in an academic setting. The script and the evaluation form included six required scenarios and four optional scenarios that were used depending on audience composition. For example, the cardiology-specific scenario was only used when members of the Cardiology Unit attended demonstrations. The Sinai selection team then derived questions from the scripted clinical scenarios for an evaluation form, and showed early versions of the form to potential attendees to determine whether it was realistic in terms of both its length and the time needed to complete it.

Every demonstration of candidate systems at Mount Sinai Medical Center was monitored to ensure that vendors followed the script and represented the functionality that was live at an existing site. At the end of each scenario, users were encouraged to grade the scenario on an evaluation form. The form was designed to carefully follow the scripted workflow scenarios and result in an evaluation of the scripted demonstration. On the evaluation form, each clinical scenario was organized into sections; clinical users did not have to deal with "mysterious" section headers that used information technology terminology such as interfaces, screen design and security layer. Scenario sections were labelled to reflect the workflow and employed headings such as physician begins patient care, physician sees new patients and physician sees patient. Users were encouraged to provide additional comments.

When the scoring was completed, the earlier mapping of core functionality to workflow was employed to analyze the user responses along core functionality lines as well as in terms of workflow. For example, the scores could be analyzed in terms of how users graded the workflow "view list of previous notes from multiple specialties/providers" and in terms of core functionality such as "data retrieval and clinical documentation."
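
A minimal sketch of this dual roll-up is shown below; the mapping and the scores are hypothetical, although the workflow and functionality labels echo the examples given above:

```python
from collections import defaultdict
from statistics import mean

# Each evaluation-form item carries both a workflow label and a core-functionality label,
# so user scores (1-5) can be rolled up along either dimension.
responses = [
    ("view list of previous notes from multiple specialties/providers",
     "data retrieval", [4, 3, 5]),
    ("physician begins patient care", "clinical documentation", [2, 3, 3]),
    ("physician sees new patients", "data retrieval", [4, 4, 5]),
]

by_workflow, by_functionality = defaultdict(list), defaultdict(list)
for workflow, functionality, scores in responses:
    by_workflow[workflow].extend(scores)
    by_functionality[functionality].extend(scores)

for functionality, scores in sorted(by_functionality.items()):
    print(f"{functionality}: mean score {mean(scores):.1f}")
```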

By applying the process described above, in conjunction with an analysis of published evidence on the safety of particular vendor products (described in Kannry et al. 2006), a single system was determined to best match the needs at Mount Sinai across all major categories and has since been implemented with considerable buy-in at all levels of the institution, from clinical staff to management.

This case study would be placed at the left to mid-point of the continuum shown in Figure 1: carefully crafted CLIPS were created (designed to tease out the impact of a system on workflow as well as to test system functionality); however, the scripts were given to the vendors prior to the product demonstrations.

Case Study Two: Procurement Involving Usability Testing and Usability Inspection – Experiences at Lille Regional University Hospital

As illustrated in Figure 1, one form of strong evidence for system choice involves usability testing of candidate systems. The approach has been described previously (Kushniruk and Patel 2004) and has typically been used to evaluate systems that are currently being designed or those that are about to be deployed (e.g., Borycki and Kushniruk 2005; Kushniruk et al. 2006) in order to determine if the system will lead to potential problems or safety issues. In addition, the approach can be applied within healthcare organizations at a low cost (see Kushniruk and Borycki 2006). The results of such studies are typically fed back into the redesign or customization of the system before its full release within the organization (e.g., a hospital). The same methods can have a potentially huge impact if applied much earlier, well before the design or deployment phases, in particular within the actual system selection process itself (during the comparison of candidate vendor systems).

There have been few reported applications of this type of usability-focused methodology for system selection (Graham and colleagues' work on the selection of infusion pumps is one exception; see Graham et al. [2004]) and fewer reported applications of usability testing inserted directly into the procurement process at a large hospital institution (see Beuscart-Zéphir et al. [2002]).

Lille Regional University Hospital in France is a large 3,000-bed hospital that has begun to integrate a range of usability engineering methods directly into system procurement processes, including usability testing and related methods of usability inspection (Beuscart-Zéphir et al. 2001, 2005). In order to support the choice and acquisition process for a clinical information system in anesthesiology, several forms of evidence were collected to inform the decision-making (Beuscart-Zéphir et al. 2005). This included assessing the following three dimensions of candidate systems: (1) quality management, (2) usability and (3) performance (which focused on assessing the quality and completeness of documentation, including the percentage of relevant information made available to the anesthetist and the number of alerts generated). Of particular interest to this article is the work that was conducted around the assessment of quality management and usability to ensure that the product selected would fit with the organizational workflow and be both effective and safe. The usability testing involved trained analysts observing and recording dialogues of users interacting with the candidate systems while these users carried out both simulated tasks (involving clinical information processing scenarios) and real tasks.

In this case, the usability tests included the study of actual end users (the anesthesiologists in the unit) and real patients, using a portable usability testing approach in which all the actions on the computer were video recorded to identify problems and issues during subsequent video review. The system testing took place in the real work environment where the selected system would ultimately be installed. By using this approach, software problems were identified and the impact of candidate systems on workflow could be compared directly in the real context of the hospital (Beuscart-Zéphir et al. 2005).
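
As an illustration of how problems logged during video review might be structured for head-to-head comparison of candidate systems, consider the hypothetical sketch below (the event fields and the example findings are our assumptions, not the actual coding scheme used at Lille):

```python
from dataclasses import dataclass

@dataclass
class UsabilityEvent:
    """One problem observed while reviewing a recorded test session."""
    candidate_system: str
    timestamp_s: float      # offset into the video recording, in seconds
    task: str               # simulated CLIPS task or real task being performed
    description: str
    safety_relevant: bool   # could this problem plausibly induce a clinical error?

events = [
    UsabilityEvent("Candidate A", 312.0, "record pre-operative assessment",
                   "user could not locate the allergy field", True),
    UsabilityEvent("Candidate B", 98.5, "record pre-operative assessment",
                   "field label displayed in a foreign language", True),
]

# Compare candidates on the number of safety-relevant problems observed.
for name in sorted({e.candidate_system for e in events}):
    count = sum(e.safety_relevant for e in events if e.candidate_system == name)
    print(name, count)
```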

These data were used in conjunction with the results of a heuristic evaluation, which involved usability analysts stepping through the candidate systems and comparing them against a set of usability heuristics (guidelines that reflect good design practices; see Kushniruk and Patel [2004]). This approach revealed that one of the two candidate systems had a low score for adaptability, consisted of two different subproducts that were not fully integrated at the time of the test and contained some labels in a foreign language (as well as other usability problems that could potentially lead to an unsafe system). Thus, the approach taken allowed for the assessment of vendor products regarding their potential to inadvertently cause technology-induced errors. Along these lines, recent work by Carvalho et al. (2009) has led to a set of heuristics to guide the usability inspection of commercial medication order entry systems; these heuristics can be used in the head-to-head comparison of commercial vendor-based HIT products.

A benefit of incorporating usability evaluation in the procurement process at Lille Hospital was that it allowed the hospital to select a usable and safe product (with the results of the analyses made by the usability analysts given to the vendor, who modified certain aspects of the product accordingly). This anesthesiology clinical information system is now installed and running routinely in all the anesthesiology departments of Lille Regional Hospital (109 operating rooms, 118 post-operative beds and 110 consultation sites). In addition, an internal quality study of the anesthesiology records has shown a major improvement in terms of accessibility and reliability of medical information.

There was also a positive commercial side effect for the company marketing the system. The good level of usability of this application, as demonstrated in the final round of usability evaluation during the procurement process, has been used by the vendor when responding to other calls for proposals. This argument, plus the company's successful implementation in a large hospital, has progressively led to additional market share for this particular vendor, which is now the market leader for information systems in this specific healthcare domain in France. (In 2007, it won 100% of the calls for proposals in French hospitals.) Although usability was not the only factor in this successful procurement process (i.e., other factors such as cost, vendor reputation, support, standardization and capability for interoperability with existing systems were critical as well), it was a key factor when considering how to select a "safe" system and avoid risky choices that might lead to technology-induced errors (Kushniruk et al. 2005).

In Figure 1, we can see that hands-on testing of candidate systems within the actual clinical setting of potential use (i.e., high-fidelity usability testing, as described in Kushniruk and Borycki [2006]) prior to purchase has the potential to provide a strong level of evidence regarding the effectiveness and safety of systems within that particular organizational context. In the example of the procurement process at Lille Regional University Hospital, this was taken to a further level by conducting both usability testing (involving real end users and patients "in situ," i.e., with the system installed in the real working environment) and usability inspection of candidate systems installed within the hospital prior to making the system selection choice (Beuscart-Zéphir et al. 2005). This case study from France lies at the far right of the continuum shown in Figure 1, as it involved both heuristic evaluation and in situ usability testing of candidate systems installed and running in the actual clinical environment.

Lessons Learned

Lessons learned from our analyses to date include the following:

  • It is not only possible but also practical to increase the level of evidence available to decision-makers regarding the fit of candidate systems within their organization (as well as to assess the potential safety of those systems prior to implementation).
  • The stronger the level of evidence obtained, the more confident the organization can be of a good system-organization fit.
  • Major issues regarding system usability or safety that need to be addressed can be identified prior to signing contracts with the vendors involved, thereby allowing for the possibility of improvements to systems prior to installation.
  • Some degree of knowledge of the practices and processes involved in applying the methods described in this article is needed to move to a stronger level of evidence.

Ultimately, the success of our investments in HIT (including the important aspect of ensuring system safety and effective healthcare) depends on how rigorous and accountable our system procurement practices are.

Conclusions

The case studies above describe approaches to the testing of candidate systems that involve CLIPS and varied levels of system testing regarding the match to organizational workflow. There are many examples of procurement that could be considered to have applied a weak level of evidence to inform decision-making. This includes the "conventional" approach of rating candidate systems by a selection panel that passively watches vendor representatives demonstrate system features and capabilities. (For example, the author [A.K.] was an observer on a recent procurement made by a large regional health authority in which the final choice of a region-wide EHR system was based on such demonstrations made by two short-listed vendors.) An approach based on a further level of evidence is that of Kannry and colleagues (described in this article), which proposes that "evidence-based" system selection should include an analysis of reported experience with candidate systems to predict how well a system responds to complex scenarios (Kannry et al. 2006). Current work to extend this further has involved usability testing methods (Beuscart-Zéphir et al. 2005) that allow for a stronger level of evidence than is typically collected, as exemplified by the case study of the system selection process at Lille Regional University Hospital. Usability testing applied during the procurement process ideally involves the installation of demonstration systems on site at an organization and observational analysis of representative users interacting with each system under test. This permits systems to be tested in situ by the selection team (rather than demonstrated by the vendor). Along these lines, it can be argued that CLIPS ideally should not be a prearranged set of questions given to potential vendors in advance, in order to ensure that the vendor does not modify the demonstration system to appear to contain the desired functionality.

We are currently using the framework described in this article to analyze current approaches to system testing in procurement and to assist in the development of new selection processes for use by hospitals, health authorities and regions in order to improve the chances of safe and successful HIT implementations.

About the Author

Andre Kushniruk is a member of the School of Health Information Science, University of Victoria, in Victoria, British Columbia. He can be contacted at andrek@uvic.ca.

Marie-Catherine Beuscart-Zéphir is a member of Université Lille Nord de France; Institut national de la santé et de la recherche médicale (INSERM) CIC-IT-CHU (Clinical Investigation Centre for Innovative Technology Network) Lille; and UDSL EA 2694; in Lille, France.

Alexis Grzes is a member of Université Lille Nord de France; INSERM CIC-IT-CHU Lille; and UDSL EA 2694.

Elizabeth Borycki is a member of the School of Health Information Science, University of Victoria.

Ludivine Watbled is a member of Université Lille Nord de France; INSERM CIC-IT-CHU Lille; and UDSL EA 2694.

Joseph Kannry is a member of Mount Sinai Medical Center, in New York, New York.

References

Beuscart-Zéphir, M., F. Anceaux, H. Menu, S. Guerlinger, L. Watbled and F. Evrard. 2005. "User-Centered, Multidimensional Assessment Method of Clinical Information Systems: A Case-Study in Anaesthesiology." International Journal of Medical Informatics 74: 179–89.

Beuscart-Zéphir, M., F. Anceaux, V. Crinquette and J. Renard. 2001. "Integrating Users' Activity Modeling in the Design and Assessment of Hospital Electronic Patient Records: The Example of Anesthesia." International Journal of Medical Informatics 64: 157–71.

Beuscart-Zéphir, M.C., L. Watbled, A.M. Carpentier, M. Degroisse and O. Alao. 2002. "A Rapid Usability Assessment Methodology to Support the Choice of Clinical Information Systems: A Case Study." Proceedings of the AMIA Fall Symposium 46–50.

Borycki, E. and A.W. Kushniruk. 2005. "Identifying and Preventing Technology-Induced Error Using Simulations: Application of Usability Engineering Techniques." Healthcare Quarterly 8: 99–105.

Borycki, E.M. and A.W. Kushniruk. 2008. "Where Do Technology-Induced Errors Come From? Towards a Model for Conceptualizing and Diagnosing Errors Caused by Technology." In A. W. Kushniruk and E.M. Borycki, eds., Human, Social, and Organizational Aspects of Health Information Systems. Hershey, PA: IGI Press.

Campbell, J.R., N. Givner, C.B. Seelig, A.L. Greer, K. Patil, R. Wigton et al. 1989. "Computerized Medical Records and Clinic Function." MD Computing 6(5): 282–87.

Carvalho, C.J., E.M. Borycki and A.W. Kushniruk. 2009. "Ensuring the Safety of Health Information Systems: Using Heuristics for Patient Safety." Healthcare Quarterly 12: 49–54.

Einbinder, L.H., J.B. Remz and D. Cochran. 1996. "Mapping Clinical Scenarios to Functional Requirements: A Tool for Evaluating Clinical Information Systems." Proceedings of the AMIA Annual Fall Symposium 747–51.

Graham, M., T. Kubose, D. Jordan, J. Zhang, T. Johnson and V. Patel. 2004. "Heuristic Evaluation of Infusion Pumps: Implications for Patient Safety in Intensive Care Units." International Journal of Medical Informatics 73(11): 771–79.

Gray, M.D. and B.G. Felkey. 2004. "Computerized Prescriber Order-Entry Systems: Evaluation, Selection, and Implementation." American Journal of Health-System Pharmacy 61(2): 190–97.

Han, Y.Y., J.A. Carcillo, S.T. Venkataraman, R.S. Clark, S. Watson, T. Nguyen et al. 2005. "Unexpected Increased Mortality after Implementation of a Commercially Sold Computerized Physician Order Entry System." Pediatrics 116(6): 1506–12.

Kannry, J. 2007. "Computerized Physician Order Entry and Patient Safety: Panacea or Pandora's Box?" In K. Ong, ed., Medical Informatics: An Executive Primer. Chicago, IL: Healthcare Information and Management Systems Society.

Kannry, J. 2008. "Operationalizing the Science: Integrating Clinical Informatics into the Daily Operations of the Medical Center." In A.W. Kushniruk and E. Borycki, eds., Human, Social and Organizational Aspects of Health Information Systems. Hershey, PA: IGI Global.

Kannry, J., S. Mukani and K. Myers. 2006. "Using an Evidence-Based Approach for System Selection at a Large Academic Medical Center: Lessons Learned in Selecting an Ambulatory EMR at Mount Sinai Hospital." Journal of Healthcare Information Management 20(2): 99.

Koppel, R., J.P. Metlay, A. Cohen, B. Abaluck, A.R. Localio, S.E. Kimmel et al. 2005. "Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors." Journal of the American Medical Association 293: 1197–203.

Kushniruk, A. and E. Borycki. 2006. "Low-Cost Rapid Usability Engineering." Healthcare Quarterly 9(4): 98–100.

Kushniruk, A., E. Borycki, S. Kuwata and J. Kannry. 2006. "Predicting Changes in Workflow Resulting from Healthcare Information Systems: Ensuring the Safety of Healthcare." Healthcare Quarterly 9(Special Issue): 114–18.

Kushniruk, A.W., E.M. Borycki, K. Myers and J. Kannry. 2009. "Selecting Electronic Health Record Systems: Development of a Framework for Testing Candidate Systems." In J.G. McDaniel, ed., Advances in Information Technology and Communication in Health (Vol. 143: Studies in Health Technology and Informatics). Fairfax, VA: IOS Press.

Kushniruk, A.W., M. Triola, E. Borycki, B. Stein and J. Kannry. 2005. "Technology Induced Error and Usability: The Relationship between Usability Problems and Prescription Errors When Using a Handheld Application." International Journal of Medical Informatics 74: 519–26.

Kushniruk, A.W. and V.L. Patel. 2004. "Cognitive and Usability Engineering Methods for the Evaluation of Clinical Information Systems." Journal of Biomedical Informatics 37(1): 56–76.

Laerum, H. and A. Faxvaag. 2004. "Task-Oriented Evaluation of Electronic Medical Records Systems: Development and Validation of a Questionnaire for Physicians." BMC Medical Informatics and Decision Making 4: 1.

Lincoln, T. 1996. "Clinical Information Processing Scenarios." In J. Anderson, ed., Evaluating Health Care Information Systems. London: SAGE.

McDowell, S.W., R. Wahl and J. Michelson. 2003. "Herding Cats: The Challenges of EMR Vendor Selection." Journal of Healthcare Information Management 17(3): 63–71.

Nielsen, J. 1993. Usability Engineering. London: Academic Press.