Nursing Leadership 24(2) June 2011: 38-57. doi:10.12927/cjnl.2011.22464
Knowledge Synthesis

Towards a National Report Card in Nursing: A Knowledge Synthesis

Diane Doran, Barbara Mildon and Sean Clarke

Abstract

This paper is an abridged version of a knowledge synthesis undertaken to inform the proceedings of a collaborative forum of nurse leaders convened under the auspices of Health Canada, the Academy of Canadian Executive Nurses, the Canadian Nurses Association and Canada Health Infoway to discuss the development of a nursing report card for Canada. The synthesis summarized the state of the science in the measurement of nursing-sensitive outcomes and the utilization of nursing report cards – information that informed forum participants' dialogue and planning. This condensed version focuses on initiatives related to outcomes and performance monitoring in nursing, including specific indicators and reporting systems, and on the development, implementation and utilization of nursing report cards.

Background

The need for standardized information on health outcomes for the evaluation of health services is coming into greater focus each year as healthcare reform and dialogue continue across the country. Doran and Pringle (2011) undertook a knowledge synthesis of the state of the science on nursing outcomes measurement and international nursing report card initiatives. This paper presents a condensed version of that knowledge synthesis.1

Methodology

Preparation of the knowledge synthesis was guided by two central research questions: (1) What is known about outcomes/performance monitoring initiatives in nursing, including specific indicators and reporting systems? and (2) What is known about the development, implementation and utilization of nursing report cards?

Data for the synthesis were collected through review and content extraction from Nursing Outcomes: State of the Science (Doran 2011), retrieval of literature suggested by experts in outcomes research, an Internet search and information from key informants.

Origins of outcomes/performance monitoring

Efforts to identify nursing's contribution to high-quality care and to conduct research into patient outcomes date back to Nightingale (Dossey 2000; Maas et al. 1996; Magnello 2010; Montalvo 2007). However, the systematic collection of data to assess outcomes did not gain widespread attention until the late 1970s, when concerns about quality of care prompted the development of the Universal Minimum Health Data Set, which was followed shortly thereafter by the Uniform Hospital Discharge Data Set (Kleib et al. 2011). These data sets facilitated consistency in data collection among healthcare organizations by prescribing the data elements to be gathered. The aggregated data informed the assessment of care quality in hospitals and provided discharge information about patients.

In Canada, Standards for Management Information Systems (MIS) were developed in the 1980s and were eventually used to collect and report financial and statistical data from health service organizations' daily operations (CIHI 2011). The Canadian Institute for Health Information later implemented a national Discharge Abstract Database (DAD), which became a key information resource. However, these data sets did not include information about nursing care delivered to patients in hospital (Kleib et al. 2011), revealing the need for a nursing minimum data set.

Overview of nursing minimum data sets

The definition of nursing minimum data set (NMDS) most often cited in the literature is "a minimum set of items of information with uniform definitions and categories concerning the specific dimension of nursing which meets the information needs of multiple data users in the health care system" (Werley et al. 1991: 422). As described by Kleib and colleagues (2011), the development of an NMDS involves four stages:

  1. Data collection. Identification of the data elements (variables), language and coding scheme, decisions about data collection frequency, definitions for sample size and design, determination of data collection format (e.g., paper, electronic, etc.) and development of data collection processes.
  2. Converting data to information. Establishment of data validity and reliability and design of a database for data storage and analysis.
  3. Applying the data. Employing the data for decision-making and other applications such as quality evaluation.
  4. Focused application. Using the data to examine, inform or address a specific problem.

The development of nursing minimum data sets

The Canadian Nurses Association convened the first Nursing Minimum Data Set Conference in the early 1990s (Haines 1993; CNA 2000). The resulting NMDS was named the Health Information: Nursing Components (HI:NC) data set and included recommendations concerning the following data elements:

  • Eight patient demographic items (already being gathered by MIS);
  • Medical care items, such as medical diagnosis, procedures, and whether the patient was alive at the time of classification (already being gathered by MIS);
  • Service elements, such as provincial/institutional chart number, doctor identifier, nurse identifier and principal nurse provider;
  • Episode items, such as admission date and hour, discharge date and hour and length of stay; and
  • Other related data, such as the institution, main point of service and payer.

Nursing leaders lobbied to have five additional elements included in the CIHI Discharge Abstract Database: (1) client status, (2) nursing interventions, (3) client outcome, (4) principal nurse provider (using a nurse identifier) and (5) nursing intensity/workload (Haines 1993; CNA 2000). Over time, coding has become available for client status, nursing interventions and client outcomes, and some nursing workload data are available using the MIS standards gathered at CIHI. Establishment of a national nurse identifier is still outstanding.

Nursing minimum data sets and classification systems

Nursing classification systems and standardized nursing languages or vocabularies contributed to NMDS development. Currently referred to as standardized clinical terminologies, their development and use became increasingly important as the introduction of electronic documentation and electronic health records into the healthcare system advanced (Rutherford 2008). Standardized nursing terminologies enable the data elements in an NMDS to be systematically organized and cross-mapped to other nursing or health clinical terminologies. Accordingly, they facilitate accurate communication regarding nursing care, measurement and coding (Rutherford 2008), and enable comparison of nursing information across practice settings, sectors and jurisdictions (CNA 2000, 2006). As computerization and informatics evolved, a variety of classification systems were developed, including the North American Nursing Diagnosis Association (NANDA), the Nursing Interventions Classification (NIC) and the Nursing Outcomes Classification (NOC). These classification systems informed the development of new nursing minimum data sets by providing a language for classifying the concepts/phenomena of interest.


A National Nursing Report Card

Susan VanDeVelde-Coke, PhD.
Executive Vice President, Chief Health Professions & Nursing Executive
Sunnybrook Health Sciences Centre
President, ACEN

In September 2010, the Academy of Canadian Executive Nurses (ACEN) made the decision to focus on the development of a national nursing report card. This report card would contain structure, process and outcome nursing-sensitive indicators that could be used by all healthcare sectors to measure nursing care. A collaboration with the Canadian Nurses Association (CNA), support from Health Canada and participation by Canada Health Infoway and the Canadian Institute for Health Information led to the February 13, 2011, Think Tank towards a National Report Card.

The Think Tank was attended by representatives from ACEN, provincial nursing officers, provincial nursing associations, the federation of nursing unions, universities, and national and international researchers who are experts in the field. Content shared at the session included the current status of research completed on nursing-sensitive outcomes. The objective of the Think Tank was to develop a shared vision and critical path for the national report card, and to build support and collaboration within the nursing community, with the intent of creating a draft by early 2012.

In preparation for the Think Tank, the planning committee engaged Diane Doran, RN, PhD, scientific director, Nursing Health Services Research Unit (Toronto site); Barb Mildon, RN, PhD; and Sean Clarke, RN, PhD, from the Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, to write a synthesis entitled Toward a National Report Card in Nursing: A Knowledge Synthesis. This synthesis identified "what is currently known about outcomes/performance monitoring initiatives in nursing, including specific indicators and reporting systems and what is known about the development, implementation and utilization of nursing report cards" (Doran et al. 2011). This synthesis has been condensed for publication in CJNL. The CNA/ACEN Steering Committee is confident you will find this synthesis to be comprehensive and a helpful background to the creation of national nursing-sensitive structure, process and outcome indicators.

The work of the CNA/ACEN Steering Committee to develop national nursing-sensitive indicators is on track to complete a draft for a pilot project by early 2012. The research team that developed the synthesis continues to collaborate on the project, as do Health Canada and Canada Health Infoway. We look forward to sharing the results of this important project over the next year.


 

International Classification of Nursing Practice

The International Classification of Nursing Practice (ICNP) was developed under the auspices of the International Council of Nurses as an overarching classification system. The ICNP enables organizations to use their preferred classification systems without interruption, while still achieving the benefits of broader data comparison and utilization through the cross-mapping of data to the ICNP, recognized in Canada and globally as the standard international reference terminology for nursing (CNA 2006).

History and evolution of nursing-sensitive outcomes

Building on experience in developing, gathering and using NMDS data, nursing outcome databases began to be created. These data repositories generally house clinical outcomes determined to be sensitive to nursing care (Doran and Pringle 2011; Kleib et al. 2011). Nursing-sensitive outcomes have been described as those that are "relevant, based on nurses' scope and domain of practice, and for which there is empirical evidence linking nursing inputs and interventions to the outcome" (Doran 2003: vii).

Nursing-sensitive indicators are the data elements that are collected and analyzed to identify nursing-sensitive outcomes. Reflecting Donabedian's (1966) organizing framework for factors that influence patient care quality, nursing-sensitive indicators are identified for the structure, process and outcomes of nursing care (Doran and Pringle 2011). Indicators of structures for nursing care (also known as inputs) encompass the supply, skill level and education/certification of nursing staff; process indicators include components of nursing care such as assessment and interventions; and nursing-sensitive patient outcomes are those that improve if more nursing care, or higher-quality nursing care, is provided (NDNQI 2010).

The identification of specific nursing-sensitive indicators was assisted by the development and testing of conceptual models such as the Quality Health Outcomes Model (Mitchell 2001, cited in Doran and Pringle 2011) and the Nursing Role Effectiveness Model (Irvine et al. 1998). However, as noted by Doran and Pringle (2011), the designation of outcomes as being nursing sensitive also depends on primary research. Such research is influenced by several factors, including theoretical explanations to link nursing inputs and processes to outcomes, access to and recruitment of large patient samples, and the availability and use of measurement tools/instruments that have demonstrated reliability and validity.

Initial efforts to examine nursing-sensitive outcomes focused on patient safety outcomes such as mortality, adverse events and complications during hospitalization (Doran and Pringle 2011). However, over time the perspective on relevant nursing-sensitive outcomes for measurement has broadened. The Kaiser Permanente Medical Care Program, Northern California Region was one of the first organizations to develop and measure positive patient outcomes, such as patients' engagement in healthcare, their functional status and their social and mental well-being (Doran and Pringle 2011). Kaiser's initiative was successful, and in 1996 it joined the Collaborative Alliance for Nursing Outcomes, described later in this paper. Measurement of positive client outcomes related to nursing care was also included in other key initiatives. For example, the Nursing Staff Mix Outcomes Study conducted in all 17 teaching hospitals in Ontario included outcomes for patient well-being and satisfaction (McGillis Hall 2003). Since those early beginnings, many initiatives related to nursing-sensitive indicators and outcomes have been reported or are underway, and several are described below.

Nursing-Sensitive Indicators and Related Initiatives

Health Outcomes for Better Information and Care (HOBIC) program

Overview

The HOBIC program originated with the Nursing and Health Outcomes Project, which was established in 1999 with funding from the Ontario Ministry of Health and Long-Term Care (Pringle and White 2002). Through successive phases of the program, a set of nursing-sensitive patient outcomes was identified. Thereafter, the feasibility of collecting the nursing-sensitive patient outcomes in hospital, home care, long-term care and complex continuing care was tested and demonstrated (HOBIC 2009). To date, collection of HOBIC outcomes has been successfully implemented in 187 institutions across four health sectors: acute care and complex continuing care (62), home care (4) and long-term care (121) (P. White, personal communication, January 28, 2011).

Methodology

HOBIC involves the collection of outcomes data by nurses at the time patients are admitted to healthcare services and at discharge, using standardized, reliable and valid measurement tools. Staff nurses are coached in how to collect the outcomes data using the standardized tools and how to record their assessments as part of routine documentation.

Nursing-sensitive indicators

HOBIC consists of a set of generic outcomes (Table 1) relevant for adult populations in acute care, home care, long-term care and complex continuing care settings.


Table 1. Comparative overview of nursing-sensitive outcome initiatives
Initiative Indicators

HOBIC (1999): Ontario

C-HOBIC: Manitoba, Saskatchewan

  • Functional health status
  • Therapeutic self-care
  • Safety
  • Falls
  • Pressure ulcers
  • Symptoms:
    • Pain
    • Dyspnea
    • Fatigue
    • Nausea
  • Patient satisfaction with nursing care

NDNQI (1998)

USA (National)

  • Nursing hours/pt-day
  • Turnover
  • Nosocomial infections
  • Pt falls
  • Pt falls with injury
  • Pressure ulcer rates
  • VAP
  • Pediatric pain assessment intervention & reassessment
  • Pediatric peripheral IV infiltration
  • Psychiatric assault – physical/sexual
  • RN education/verification
  • RN job satisfaction
  • RN practice environment
  • Restraints
  • Staff mix
  • % agency staff

CALNOC (1996) USA; Europe; Australia

Structure

  • Nursing hours/pt-day
  • Skill mix
  • Nurse–pt ratios
  • Contract staff use
  • Turnover
  • Workload (admin./DCs, transfers)
  • Sitter use
  • RN education/certification
  • RN experience

Process

  • Falls & HAPU
  • Risk assessment
  • Time since last risk assessment
  • Risk score (pressure ulcers)
  • Risk status (falls & HAPU)
  • Prevention protocols (falls & HAPU)
  • Medication administration – 6 safe practices
  • PICC practices
  • Restraint use prevalence

Outcome

  • Community PU prevalence
  • HAPU prevalence
  • Pt fall rate & injury rate
  • Restraint prevalence
  • CABSI rate in PICC lines
  • Medication administration errors

MilNOD (2001) USA: Military Hospitals

Structure

  • Nursing care hours
  • Staff mix
  • Staff category
  • Nurse education/experience

Outcome

Patients

  • Pressure ulcers
  • Restraints
  • Falls
  • Med. errors

Nursing

  • Job satisfaction
  • Needle-stick injuries

Contextual Indicators

  • Nursing work environment

Explanatory Indicators

  • Patient turnover
  • Patient acuity

Measurement tools used include:
  • Morse Falls Assessment Scale
  • Braden Scale

VANOD (2002) USA: Veterans Affairs Health Providers

  • Nursing hours/pt-day
  • Nursing hours & cost per outpatient encounter
  • Skill mix
  • RN education & certification
  • Nursing staff injuries
  • Nursing turnover
  • RN job satisfaction
  • Patient falls
  • Patient satisfaction
  • Pressure ulcer data

B-NMDS (1988) Belgium National (Hospitals)

  • Care r/t:
    • Hygiene
    • Mobility
    • Elimination
    • Feeding
  • Tube feeding
  • Mouth care
  • Pressure sore prevention
  • Assist to dress
  • Trach. care
  • Endotrach. tube care
  • Nursing admission
  • ADL training
  • Emotional support
  • Care of disoriented pt
  • Isolation care
  • Monitor V/S
  • Monitor clinical signs
  • Cast care
  • Blood samples
  • Med. management (IM, SC, IV)
  • Infusion therapy
  • Surgical wound care
  • Trauma wound care

United Kingdom (2009)

Collected nationally:

  • Falls
  • Pressure ulcers
  • Infections

Collected in ambulatory chemotherapy settings:

  • Safe med. administration
  • Sepsis
  • RN experience
  • RN education & communication
  • Well-being and function
  • Nausea and vomiting
  • Pain
  • Diarrhea
  • Fatigue
  • Oral mucositis
  • Nutrition

ADL = activities of daily living; CABSI = central line acquired bloodstream infection; DC = discharge; HAPU = hospital-acquired pressure ulcer; IM = intramuscular; IV = intravenous; PICC = peripherally inserted central catheter; pt = patient; r/t = related to; RN = registered nurse; SC = subcutaneous; UAP = unlicensed assistive personnel; VAP = ventilator-associated pneumonia; V/S = vital signs.


 

Uses/benefits

HOBIC information is abstracted into a database that provides nurse executives with outcome reports that can be linked to staffing and financial information. The reports can also be used to examine the quality of care in their organizations and the effectiveness of best practice guidelines. The aggregate information is abstracted into a central repository, which provides the capacity for decision support, healthcare planning and research. Nurses can access their patients' HOBIC outcome scores throughout the episode of care, compare their patients' scores with those of others of similar age, gender and diagnosis, determine when their patients are achieving their best outcomes and know when patients are sufficiently prepared to care for themselves after discharge (Hannah et al. 2009; Kleib et al. 2011; White et al. 2005).

Canadian Health Outcomes for Better Information and Care: C-HOBIC

Overview

The C-HOBIC project implemented the collection of standardized patient outcome data related to nursing care in Saskatchewan and Manitoba. C-HOBIC builds on the Ontario HOBIC program described above. It is implemented in Manitoba in long-term and home care and in Saskatchewan in long-term care. The objectives of the C-HOBIC project were to (1) standardize the language concepts used by HOBIC to the International Classification for Nursing Practice (ICNP); (2) capture patient outcome data related to nursing care across four sectors of the health system: acute care, complex continuing care, long-term care and home care; and (3) store the captured and standardized data in relevant, secure jurisdictional data repositories or databases in preparation for entry into provincial electronic health records (EHRs).

Methodology

C-HOBIC used the methodology developed through the Nursing and Health Outcomes Project and the HOBIC program to implement the collection of outcome data in Saskatchewan and Manitoba. Each of the outcomes (Table 1) has a concept definition, a valid and reliable measure, and empirical evidence linking it to nursing inputs or interventions.

Uses/benefits

In addition to providing real-time information to nurses about how patients are benefiting from care, the collection of nursing-related outcomes can provide valuable information to administrators in understanding their organization's performance related to patient outcomes, including how well it prepares patients for discharge. Furthermore, at an aggregate level, this information is useful to researchers and policy makers in examining how well the system is meeting people's healthcare needs. The project mapped the HOBIC concepts to the standardized clinical terminology ICNP and produced the C-HOBIC/ICNP catalogue for use by vendors and facilities in implementing EHRs. Work is now underway to map the C-HOBIC/ICNP catalogue to SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms), which is Canada Health Infoway's standardized clinical terminology of choice for the EHR.
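In practice, such a catalogue amounts to a machine-readable cross-map from locally collected concepts to a reference terminology, which can then be chained to a second terminology such as SNOMED CT. The sketch below is purely illustrative: the structure is an assumption, and every code is a placeholder rather than a real ICNP or SNOMED CT identifier.

```python
# Illustrative sketch only: a two-step cross-map from HOBIC concepts to a
# reference terminology (ICNP) and onward to SNOMED CT. All codes are
# placeholders, not actual ICNP or SNOMED CT identifiers.
HOBIC_TO_ICNP = {
    "therapeutic_self_care": "ICNP-PLACEHOLDER-001",
    "dyspnea": "ICNP-PLACEHOLDER-002",
    "fatigue": "ICNP-PLACEHOLDER-003",
}

ICNP_TO_SNOMED = {
    "ICNP-PLACEHOLDER-001": "SNOMED-PLACEHOLDER-101",
    "ICNP-PLACEHOLDER-002": "SNOMED-PLACEHOLDER-102",
    "ICNP-PLACEHOLDER-003": "SNOMED-PLACEHOLDER-103",
}

def to_snomed(hobic_concept: str) -> str:
    """Resolve a locally documented HOBIC concept to a SNOMED CT code via ICNP."""
    return ICNP_TO_SNOMED[HOBIC_TO_ICNP[hobic_concept]]

print(to_snomed("dyspnea"))  # -> SNOMED-PLACEHOLDER-102
```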

The National Database of Nursing Quality Indicators (NDNQI)

Overview

One of the earliest outcomes databases and the first national one in the United States, the NDNQI database was established in 1998 by the American Nurses Association (Montalvo 2007). It is a voluntary national nursing quality measurement program in which structure, process and outcome indicator data are collected at the nursing unit level for the purpose of evaluating nursing care. Currently, NDNQI has over 1,500 participating organizations. Participation in NDNQI meets requirements for the Magnet Recognition Program, and 20% of database members participate for that reason. The remaining 80% participate voluntarily to inform their efforts to evaluate and improve nursing care quality and outcomes (Montalvo 2007).

Methodology

Detailed guidelines for data collection, including definitions and decision guides, are provided by NDNQI (2010). Data are submitted electronically via the Web. Statistical methods such as hierarchical mixed models are used to examine the relationship between nursing workforce characteristics and outcomes (Montalvo 2007). Quarterly and annual reports of structure, process and outcome indicators (Table 1) are available six weeks after the close of the reporting period. The database is housed at the Midwest Research Institute (MRI) in Kansas City, Missouri and is managed by MRI in partnership with the University of Kansas School of Nursing (Alexander 2007).
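As a rough illustration of the kind of modelling described above, the sketch below fits a hierarchical (mixed) model in Python with statsmodels, treating hospitals as grouping units for unit-level observations. The file name, column names and model specification are hypothetical assumptions, not NDNQI's actual code or variable set.

```python
# A minimal sketch of a hierarchical mixed model relating unit-level nursing
# workforce characteristics to an outcome rate; hypothetical data layout.
import pandas as pd
import statsmodels.formula.api as smf

# One row per nursing unit per quarter (hypothetical extract)
df = pd.read_csv("unit_quarters.csv")

# Fixed effects: nursing hours per patient day and RN skill mix;
# a random intercept per hospital accounts for clustering of units.
model = smf.mixedlm(
    "fall_rate ~ nursing_hours_ppd + rn_skill_mix",
    data=df,
    groups="hospital_id",
)
result = model.fit()
print(result.summary())
```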

Uses/benefits

The data inform efforts to improve the safety and quality of patient care. Data may be trended over several quarters to examine progress and establish organizational goals. Comparison reports are benchmarked to state, national and regional results.

Collaborative Alliance for Nursing Outcomes California (CALNOC) database project

Overview

Originally launched in 1996 as the California Nursing Outcomes Coalition (Donaldson et al. 2005), a statewide nursing quality report card pilot project funded by the American Nurses Association, CALNOC was a joint venture of the American Nurses Association/California and the Association of California Nurse Leaders. As its membership grew nationally, CALNOC was renamed the Collaborative Alliance for Nursing Outcomes. Membership is voluntary and comprises approximately 300 hospitals from the United States, with pilot work in Sweden, England and Australia. CALNOC is a not-for-profit corporation; member hospitals pay a size-based annual data management fee to participate and to access its Web-based benchmarking reporting system.

Methodology

Hospitals submit, electronically, unit-level data on acute care nurse staffing, workforce characteristics and processes of care, as well as key National Quality Forum–endorsed nursing-sensitive outcome measures. In addition, the CALNOC database includes unique measures such as its Medication Administration Accuracy metric. CALNOC data are stratified by unit type and hospital characteristics, and reports can be aggregated to the division, hospital and system/group/geographic levels. Unit types currently included in the CALNOC data set are adult critical care, step-down and medical–surgical units; adult post-acute and rehabilitation units; and pediatric critical care, step-down and medical–surgical units. Table 1 lists the nursing-sensitive indicators, as extracted from Kleib and colleagues (2011) and CALNOC (2010).

Uses/benefits

CALNOC reports are used for performance trend analysis, variation analysis and assessment of staffing effectiveness. Data guide administrative and clinical decision-making, strategic performance improvement and public policy. The CALNOC database is also used for research purposes. For example, CALNOC investigators were the first to examine the impact of mandated staffing ratios on outcomes in California (Donaldson et al. 2005), and CALNOC investigators recently examined the impact of selected medical–surgical acute care nurse characteristics and practices on patient outcomes, including falls, pressure ulcers and medication administration accuracy (Donaldson and Aydin 2010). CALNOC member hospitals also use their CALNOC data in meeting requirements for application to the Magnet Recognition Program.

The Military Nursing Outcomes Database (MilNOD)

Overview

The MilNOD project began in 1997 with a pilot study in a single army hospital. Through three successive phases, a total of 13 military hospitals ultimately joined the database, which was modelled on the CALNOC measures and methods. Nurse staffing and patient outcome data were collected from all medical and surgical, step-down and critical care units of participating facilities (Patrician et al. 2010). MilNOD is currently inactive because its research funding has ended.

Methodology

Staffing data in MilNOD were collected prospectively each shift, in contrast to CALNOC, which collects aggregated retrospective staffing data. The reliability of staffing data elements entered into the database was high (82 to 99%), and inter-rater reliability was also monitored (Patrician et al. 2010). Other indicator data were collected at the shift, unit and hospital levels. Instruments used included the Morse Falls Assessment Scale and the Braden Scale for predicting pressure ulcer risk (Loan et al. 2011). Table 1 lists the indicators gathered for the MilNOD database.
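For readers unfamiliar with how inter-rater reliability of a categorical data element might be monitored, the sketch below computes simple percent agreement and Cohen's kappa on a pair of hypothetical rater records; it is an illustration only, not MilNOD's actual procedure.

```python
# Illustrative only: percent agreement and Cohen's kappa for two raters
# independently coding a shift-level staff-category element (hypothetical data).
from sklearn.metrics import cohen_kappa_score

rater_a = ["RN", "RN", "LPN", "UAP", "RN", "LPN", "RN", "UAP"]
rater_b = ["RN", "RN", "LPN", "RN", "RN", "LPN", "RN", "UAP"]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"Percent agreement: {agreement:.0%}")  # 88% for this toy example
print(f"Cohen's kappa: {kappa:.2f}")
```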

Uses/benefits

MilNOD data are used for decision-making and to compare performance to other hospitals within the military system and external to it. Data also enable the evaluation of staffing effectiveness and patient safety (Patrician et al. 2010).

The Veterans Affairs Nursing Outcome Database (VANOD)

Overview

The VANOD database project began in 2002 and was originally modelled after CALNOC. However, VANOD methods have increasingly focused on extraction of data from the Veterans Affairs electronic medical record as an alternative to hospital data collection efforts. An initiative of the Veterans Affairs Office of Nursing Services (ONS), VANOD is viewed as an integral part of the ONS for its utility in identifying trends and areas for nursing practice improvement and in supporting the evaluation of nurse staffing and practice environments in association with patient outcomes.

Methodology

Indicators (Table 1) are collected at the unit and hospital levels to facilitate evaluation of quality and enable benchmarking within and among Veterans Affairs facilities (Alexander 2007).

Uses/benefits

Data are reported as being useful for "quantifying the impact of patient care interventions; identifying successful nurse retention and recruitment strategies and health policy decision-making" (Haberfelde et al. 2005, cited by Kleib et al. 2011: 495). VANOD is also contributing to the development and introduction of a structured, standardized clinical language for use in all Veterans Affairs clinical documentation systems (Veterans Affairs ONS 2009).

The Belgian Nursing Minimum Data Set

Overview

The Belgian Nursing Minimum Data Set (B-NMDS) adopted a unique approach to data collection by focusing on tracking variation in practice patterns (nursing interventions) rather than changes in single data elements such as length of stay (Kleib et al. 2011). The B-NMDS was implemented in 1988 to address the absence of nursing information in the existing hospital discharge data set. It has since been revised and incorporates the Nursing Interventions Classification (NIC) system (Van den Heede et al. 2009a). Belgium has been identified as the country that has achieved the most widespread national use of NMDS data (MacNeela et al. 2006).

Methodology

The participation of all Belgian hospitals is mandated by the Ministry of Health. A snapshot of data is captured for five days selected at random within the first 15 days of March, June, September and December (Van den Heede et al. 2009a). In addition to the nursing care elements that are captured, patient demographic items and ICD-9-CM codes are gathered to enable alignment of the NMDS data with the hospital discharge data set. Service data are collected, including identifiers (hospital, ward), number of beds, episode of care descriptors (e.g., admission date, length of stay) and resources, including the number of nursing hours available and nursing staff mix (Kleib et al. 2011).

Indicators

A total of 23 nursing interventions and activities of daily living (Table 1) are captured in the B-NMDS using prescribed response sets, and these data are linked to the Belgian Hospital Discharge Dataset to capture patient mix and intensity of service use. For example, intensity of care (i.e., service use) relating to hygiene would be coded as "no assistance; supportive assistance, partial assistance or complete assistance" (Sermeus et al. 2008: 1013).
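A hedged sketch of what such a prescribed response set might look like in an electronic capture tool follows; the numeric codes and record layout are assumptions for illustration, not the actual B-NMDS coding scheme.

```python
# Illustrative only: representing the quoted hygiene-care response set as a
# coded category; numeric codes and field names are hypothetical.
from enum import IntEnum

class HygieneAssistance(IntEnum):
    NO_ASSISTANCE = 0
    SUPPORTIVE_ASSISTANCE = 1
    PARTIAL_ASSISTANCE = 2
    COMPLETE_ASSISTANCE = 3

record = {
    "ward_id": "W-12",        # hypothetical ward identifier
    "episode_id": "E-0345",   # hypothetical episode identifier
    "hygiene": HygieneAssistance.PARTIAL_ASSISTANCE,
}
print(record["hygiene"].name, int(record["hygiene"]))  # PARTIAL_ASSISTANCE 2
```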

Uses/benefits

Data from the B-NMDS are used to measure nursing care in acute hospitals and to inform decision-making related to nurse staffing and hospital financing (Sermeus et al. 2008). B-NMDS data also inform the examination of 10 nursing-sensitive indicators listed in Table 1 (Van den Heede et al. 2009a, 2009c).

United Kingdom initiatives

Overview

The United Kingdom's National Health Service recently published a national set of nursing-sensitive outcome indicators (Table 1). It focuses on quality improvement and reflects principles identified to support measurement of nursing's contribution to patient care and outcomes (Department of Health, United Kingdom 2010; Association of UK University Hospitals n.d.). Nursing-sensitive outcomes and indicators in the specialty of ambulatory chemotherapy have also been developed in the United Kingdom (Griffiths et al. 2009).

Methodology

The development of the United Kingdom's national indicators was informed by the work of CALNOC, VANOD and NDNQI. A detailed description of each indicator/measure has been published along with the rationale for implementing it and the expected outcomes. For the ambulatory chemotherapy nursing-sensitive indicators, the evidence to support a list of proposed indicators was evaluated through scoping reviews of the literature and consultation with clinical experts.

Uses/benefits

The data will enable comparison with outcomes nationally and will inform an understanding of the relationship between outcomes and staffing.

From Data to Information: Nursing Report Cards

With the availability of data from NMDS and outcomes measurement initiatives, the next step was to turn those data into information that could be effectively reported and shared. Kaplan and Norton (1992) advocated for a "balanced scorecard" approach to reporting in which data were collected from four perspectives: (1) How do customers see us? (customer perspective), (2) What must we excel at? (internal business perspective), (3) Can we continue to improve and create value? (innovation and learning perspective) and (4) How do we look to shareholders? (financial perspective).

In 1998, the balanced scorecard approach was used in the first of a series of hospital reports in Ontario, with the four quadrants named as (1) system integration and change, (2) clinical utilization and outcomes, (3) patient satisfaction and (4) financial performance and condition (McGillis Hall et al. 2003; Pink et al. 2007). The Hospital Report 2001 contained a comprehensive nursing report (McGillis Hall et al. 2001). The nursing report was the first step in the development of a balanced scorecard for nursing services in Ontario. It provided recommendations and supporting evidence for the inclusion of nursing data in each quadrant of the balanced scorecard. The specific indicators (Table 2) were selected based on outcomes of care and included those experienced by the patient, nurses, informal caregivers (e.g., family and friends) and hospital. Three key characteristics for the indicators of interest were identified: "(a) the indicators can be evaluated using efficient, valid and reliable methods; (b) they are relevant to the patient, informal or formal care provider, provider agency, consumer or government; and (c) they represent intended or unintended effects of hospital nursing care" (McGillis Hall et al. 2001: 15). In 2005 the core set of measures reported in the Hospital Report was rigorously reviewed and redeveloped to reflect "changes in the hospital industry, the data collected and performance criteria" (Pink et al. 2007: 87).


Table 2. Nursing indicators recommended for inclusion in Ontario's hospital reports
Quadrant Indicators
System Integration and Change
  1. Clinical information technology
  2. Clinical data
  3. Intensity of information use
  4. Nursing databases
  5. Development and use of clinical pathways
  6. Coordination of care
  7. Nursing–community integration
  8. Continuity of care
  9. Strategies for managing ALC patients
  10. Nursing health human resources

* The authors noted that collection of data on each of these indicators was feasible for measurement at the time of the report.

Clinical Utilization & Outcomes
  1. Functional status
  2. Self-care status
  3. Symptom control
  4. Patient falls
  5. Urinary tract infections
  6. Pneumonia
  7. Pressure ulcers
  8. Upper gastrointestinal bleeding
  9. Failure to rescue

* The authors noted that collection of data on patient falls, UTIs, pneumonia and pressure ulcers was feasible at the time of the report; however, data on functional status, self-care status and symptom control might not have been readily available.

Patient Satisfaction
  1. Information you were given
  2. Instructions
  3. Ease of getting information
  4. Information given by nurses
  5. Informing family or friends
  6. Involving family or friends in your care
  7. Concern and caring by nurses
  8. Attention of nurses to your condition
  9. Recognition of your opinion
  10. Consideration of your needs
  11. The daily routine of nurses
  12. Helpfulness
  13. Nursing staff response to your calls
  14. Skill and competence of nurses
  15. Coordination of care
  16. Restful atmosphere provided by nurses
  17. Privacy
  18. Discharge instructions
  19. Coordination of care after discharge
  20. Overall quality of care and services you received
  21. Overall quality of nursing care

* The authors noted that collection of data on each of the indicators was then feasible with the implementation of the Patient Judgment of Hospital Quality Survey.

Financial Performance & Conditions
  1. Inpatient nursing-earned hours per inpatient weighted case
  2. RN, RPN/LPN and non-professional staff-earned hours per inpatient weighted case
  3. RN staff-earned hours per inpatient weighted case
  4. Percentage of total inpatient nursing-earned hours utilized for direct nursing care
  5. Percentage of professional nursing staff hours utilized for RNs
  6. Percentage of direct nursing care-earned hours utilized for non-professional staff
  7. Percentage of nursing care-earned hours utilized for full-time RNs
  8. Percentage of nursing care-earned hours utilized for full-time RPNs/LPNs
  9. Percentage of nursing care-earned hours utilized for full-time non-professional staff
  10. Percentage of nursing care-earned hours utilized for part-time RNs
  11. Percentage of nursing care-earned hours utilized for part-time RPNs/LPNs
  12. Percentage of nursing care-earned hours utilized for part-time non-professional staff
  13. Percentage of nursing care-earned hours utilized for casual RNs
  14. Percentage of nursing care-earned hours utilized for casual RPNs/LPNs
  15. Percentage of nursing care-earned hours utilized for casual non-professional staff
  16. Percentage of nursing staff hours utilized for agency staff
  17. Percentage of nursing staff hours utilized for orientation
  18. Percentage of nursing staff hours utilized for absenteeism
  19. Percentage of nursing staff hours utilized for ongoing education
  20. Percentage of nursing staff hours utilized for overtime

* The authors noted that collection of data on the first 16 indicators was then feasible; however, data on the remaining indicators was not then readily available.

Source: McGillis Hall et al. 2001: 50–52.
ALC = alternate level of care; RN = registered nurse; RPN = registered practical nurse; LPN = licensed practical nurse; UTI = urinary tract infection.

 

National report card initiatives

The Canadian Hospital Reporting Project (CHRP) is a national initiative led by the Canadian Institute for Health Information. Launched in 2010, it is a quality improvement tool that gathers and analyzes performance indicators in the acute care sector. In its data collection related to patient safety, the CHRP includes nursing-sensitive adverse events for medical and surgical conditions. The CHRP also collects inpatient nursing productivity and RN-earned hours as a percentage of total inpatient nursing-earned hours as indicators to inform assessment of efficiency and financial performance. These data elements are drawn primarily from the Discharge Abstract Database (clinical indicators) and the Canadian MIS Database (financial indicators). The CHRP will enable hospitals to compare their performance with that of other hospitals regionally, provincially and nationally.

The quality connection

Donaldson and colleagues (2005) have described how participants in the CALNOC collaborative use the data to improve quality of care. For example, noting its high prevalence of hospital-acquired pressure ulcers, one organization implemented a performance improvement process that succeeded in reducing the incidence of pressure ulcers. Research has also found that nurse leaders valued the MilNOD information; for example, MilNOD provided the only available measures of staffing effectiveness (Loan et al. 2011). Moreover, MilNOD prompted the addition of risk assessments for falls and pressure ulcers to routine nursing care, and MilNOD data were used for safety initiatives and accreditation purposes.

Limitations and Recommendations

In order to generate good-quality data to inform patient care, evaluate organizational performance and support quality improvement, measurement systems must be valid, reliable and capable of being risk adjusted when comparisons are made between units or organizations and for research purposes. A limitation of systems such as NDNQI and CALNOC is that they cannot be fully adjusted for differences in patient characteristics across units and facilities, because the collection of outcomes data is not linked to other databases where data that could be used for risk adjustment reside. The system in Belgium is one exception: in the Belgian Nursing Minimum Data Set (B-NMDS), patient demographic items and ICD-9-CM codes are gathered to enable alignment of the NMDS data with the hospital discharge data set, thus enabling risk adjustment. A second limitation of many of the NMDS initiatives is their failure to capture nursing process data such as nursing interventions. Again, the B-NMDS is an exception because it was designed to capture nursing interventions using the Nursing Interventions Classification system (Van den Heede et al. 2009b).

Canada has many advantages that other countries do not have because of its national data sets. A nursing minimum data set could be linked to data that reside at the Canadian Institute for Health Information, such as the DAD, MIS, RAI-HC, RAI-LTC and RAI-MH. These data sets contain the types of information about patients and facilities that are essential for risk adjustment. However, to date it has not been possible to link such data to the nursing unit level. HOBIC is an exception because it has been designed to capture nursing-sensitive outcomes that are observed at the unit level and responsive to nursing care and conditions at that level. Measurement at the unit level would need to be an essential feature of an NMDS in order to support improvements in direct care.
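To make the linkage-and-risk-adjustment point concrete, the sketch below joins hypothetical unit-level outcome records to patient-level discharge abstracts and fits a logistic model that adjusts for patient characteristics. File names, column names and the linkage key are assumptions; neither CIHI nor HOBIC distributes data in this form.

```python
# Illustrative only: linking outcome records to discharge data and fitting a
# risk-adjusted model; all files, columns and keys are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

outcomes = pd.read_csv("unit_outcomes.csv")        # unit_id, episode_id, pressure_ulcer (0/1)
discharge = pd.read_csv("discharge_abstract.csv")  # episode_id, age, sex, comorbidity_index

linked = outcomes.merge(discharge, on="episode_id", how="inner")

# Adjust for patient characteristics; unit_id enters as a categorical term so
# that unit-level differences can be compared on a risk-adjusted basis.
model = smf.logit(
    "pressure_ulcer ~ age + C(sex) + comorbidity_index + C(unit_id)",
    data=linked,
).fit()
print(model.summary())
```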

Almost every initiative described in this synthesis collects data pertaining to nursing inputs/structures, such as nursing hours per patient day, staff mix, nurse–patient ratio and, in some cases, nurse education and experience. In Canada, it is feasible to collect information about nurse staffing in acute care hospitals through the MIS. Several NMDS initiatives have also included a work environment survey that enables examination of the impact of work environment change on nurse and patient outcomes. The majority of NMDS focus on a core set of patient safety outcomes, such as pressure ulcers, falls and nosocomial infections. HOBIC and C-HOBIC have taken a broader perspective, including such outcomes as functional status, symptoms and therapeutic self-care. The World Health Organization has defined quality as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge" (WHO 2009: 18). Patient safety is defined as a patient's freedom from unnecessary harm (WHO 2009). Accordingly, a balanced approach to outcomes measurement would seem to require both quality and safety indicators.

Conclusion

This knowledge synthesis has reported the existence of reliable and valid nursing-sensitive indicators and outcomes for both safety and quality outcomes for patients, and has affirmed the feasibility of collecting and reporting such data. A critical mass of participating organizations is required to enable nursing-sensitive outcomes to be included in national publications, and such participation could be enabled through a national report card framework. Moreover, there is evidence that the collection, reporting and benchmarking of nursing-sensitive data make a powerful contribution to the quality of care, which is a central focus in today's healthcare systems.

About the Author(s)

Diane Doran, RN, PhD, FCAHS, Scientific Director, Nursing Health Services Research Unit (Toronto Site), Professor, Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, Toronto, ON

Barbara Mildon, RN, PhD, CHE, CCHN(C), VP Professional Practice and Research, Chief Nurse Executive, Ontario Shores Centre for Mental Health Sciences, Whitby, ON

Sean Clarke, RN, PhD, FAAN, RBC Chair in Cardiovascular Nursing Research, University Health Network and University of Toronto, Deputy Director, Nursing Health Services Research Unit (Toronto Site), Associate Professor, Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, Toronto, ON

Correspondence may be directed to: Dr. Diane Doran (diane.doran@utoronto.ca).

Acknowledgment

The authors gratefully acknowledge the Office of Nursing Policy, Health Canada and the Academy of Canadian Executive Nurses for funding to develop this knowledge synthesis. The authors also extend sincere thanks to members of the Think Tank Planning Committee for their review of, and feedback on, the original synthesis: Dr. Karima Velji, Ms. Nora Hammell, Ms. Maureen Charlebois, Ms. Barb Foster, Dr. Kathryn Hannah, Ms. Patty O'Connor, Ms. Sandra McDonald Rencz, Dr. Susan VanDeVelde-Coke, Dr. Ann Tourangeau and Dr. Linda McGillis Hall.

References

Alexander, G.R. 2007. "Nursing Sensitive Databases: Their Existence, Challenges and Importance." Medical Care Research and Review 64(2): 44S–63S.

Association of UK University Hospitals. n.d. Patient Care Portfolio: AUKUH Nurse Sensitive Indicators: Implementation Resource Pack. London: Author. Retrieved May 10, 2011. <http://www.aukuh.org.uk/members/documents/4NurseSensitiveIndicatorsImplementationResourcePack.pdf>.

Canadian Institute for Health Information (CIHI). 2011. Frequently-Asked Questions about the MIS Standards. Ottawa: Author. Retrieved May 10, 2011. <http://www.cihi.ca/cihi-ext-portal/internet/en/document/standards+and+data+submission/standards/mis+standards/mis_faq>.

Canadian Nurses Association (CNA). 2000. "Collecting Data to Reflect Nursing Impact: A Discussion Paper." Ottawa: Author. Retrieved May 10, 2011. <http://www.cna-aiic.ca/CNA/documents/pdf/publications/collct_data_e.pdf>.

Canadian Nurses Association (CNA). 2006. "Nursing Information and Knowledge Management." Position statement. Ottawa: Author. Retrieved May 10, 2011. <http://www.cna-aiic.ca/CNA/documents/pdf/publications/PS87-Nursing-info-knowledge-e.pdf>.

Collaborative Alliance for Nursing Outcomes (CALNOC). 2010. Orientation to CALNOC. California: Author. Retrieved January 3, 2011. <https://www.calnoc.org/globalPages/mainpage.aspx>.

Department of Health, United Kingdom. 2010. Nurse Sensitive Outcome Indicators for NHS Commissioned Care. London: Author. Retrieved May 10, 2011. <http://www.ic.nhs.uk/webfiles/Services/Clinical%20Innovation%20Metrics/Nurse_Sensitive_Indicators_DH.pdf>.

Donabedian, A. 1966. "Evaluating the Quality of Medical Care." Milbank Memorial Fund Quarterly 44: 166–206.

Donaldson, N. and C. Aydin. 2010. "Impact of Medical Surgical Acute Care Microsystem Nurse Characteristics and Practices on Patient Outcomes." Unpublished report. San Francisco and Los Angeles: Authors.

Donaldson, N., D.S. Brown, C.E. Aydin, M.L.B. Bolton and D.N. Rutledge. 2005. "Leveraging Nurse-Related Dashboard Benchmarks to Expedite Performance Improvement and Document Excellence." Journal of Nursing Administration 35(4): 163–72.

Doran, D.M. 2003. "Preface." In D.M. Doran, ed., Nursing-Sensitive Outcomes: State of the Science (pp. vii–ix). Sudbury, MA: Jones and Bartlett.

Doran, D.M., ed. 2011. Nursing Outcomes: The State of the Science (2nd ed.). Sudbury, MA: Jones and Bartlett.

Doran, D.M. and D. Pringle. 2011. "Patient Outcomes as an Accountability." In D.M. Doran, ed., Nursing Outcomes: The State of the Science (2nd ed.) (pp. 1–28). Sudbury, MA: Jones and Bartlett.

Dossey, B.M. 2000. Florence Nightingale: Mystic, Visionary, Healer. Springhouse, PA: Springhouse.

Griffiths, P., A. Richardson and R. Blackwell. 2009. Nurse Sensitive Outcomes and Indicators in Ambulatory Chemotherapy. London: National Nursing Research Unit, King's College. Retrieved March 10, 2011. <http://www.nursingtimes.net/Journals/1/Files/2010/2/5/Nurse%20Sensitive%20Outcomes%20and%20Indicators%20in%20Ambulatory%20Chemotherapy%20-%20Report.pdf>.

Haines, J. 1993. Leading in a Time of Change: The Challenge for the Nursing Profession. A Discussion Paper. Ottawa: Author. Retrieved March 10, 2011. <http://www.cna-aiic.ca/CNA/documents/pdf/publications/leading_time_change_august_1993_e.pdf>.

Hannah, K.J., P.A. White, L.M. Nagle and D.M. Pringle. 2009. "Standardizing Nursing Information in Canada for Inclusion in Electronic Health Records: C-HOBIC." Journal of the American Medical Informatics Association 16(4): 525–30.

Health Outcomes for Better Information and Care (HOBIC). 2009. "HOBIC Newsletter: HOBIC Anniversary, 10 Years of Achievement." Toronto: Author. Retrieved March 10, 2011. <http://www.hinext.com/assets/HOBIC_Anniversary_Newsletter.pdf>.

Irvine, D., S. Sidani and L. McGillis Hall. 1998. "Linking Outcomes to Nurses' Role in Health Care." Nursing Economics 16(2): 58–64, 87.

Kaplan, R.S. and D.P. Norton. 1992 (January–February). "The Balanced Scorecard: Measures That Drive Performance." Harvard Business Review 71–79. Retrieved March 10, 2011. <http://www.srsdocs.com/bsc/bsc_ref/artigos/BSC_DrivePerf.pdf>.

Kleib, M., A. Sales, D.M. Doran, C. Mallette and D. White. 2011. "Nursing Minimum Data Sets." In D.M. Doran, ed., Nursing Outcomes: State of the Science (2nd ed.) (pp. 487–512). Sudbury, MA: Jones and Bartlett.

Loan, L.A., P.A. Patrician and M. McCarthy. 2011. "Participation in a National Nursing Outcomes Database: Monitoring Outcomes Over Time." Nursing Administration Quarterly 35(1): 72–81.

Maas, M.L., M. Johnson and S. Moorhead. 1996. "Classifying Nursing-Sensitive Patient Outcomes." IMAGE: Journal of Nursing Scholarship 28(4): 295–301.

MacNeela, P., P.A. Scott, M.P. Treacy and A. Hyde. 2006. "Nursing Minimum Data Sets: A Conceptual Analysis and Review." Nursing Inquiry 13(1): 44–51.

Magnello, M.E. 2010. "The Passionate Statistician." In S. Nelson and A.M. Rafferty, eds., Notes on Nightingale: The Influence and Legacy of a Nursing Icon (pp. 115–29). Ithaca, NY: Cornell University.

McGillis Hall, L. 2003. "Nursing Staff Mix Models and Outcomes." Journal of Advanced Nursing 44(2): 217–26.

McGillis Hall, L., D. Doran, H. Spence Laschinger, C. Mallette, L.L. O'Brien-Pallas and C. Pedersen. 2001. Hospital Report 2001: Preliminary Studies, Volume 2: Exploring Nursing, Women's Health, Population Health. Toronto: HSPRN. Retrieved May 10, 2011. <http://www.hsprn.ca/reports/2001/prelim_studies_v02.pdf>.

McGillis Hall, L., D. Doran, H. Spence Laschinger, C. Mallette, C. Pedersen and L. O'Brien-Pallas. 2003. "A Balanced Scorecard Approach for Nursing Report Card Development." Outcomes Management 7(1): 17–22.

Montalvo, I. 2007. "The National Database of Nursing Quality Indicators (NDNQI)." Online Journal of Issues in Nursing 12(3). Retrieved May 10, 2011. <http://www.nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/TableofContents/Volume122007/No3Sept07/NursingQualityIndicators.aspx>.

National Database of Nursing Quality Indicators (NDNQI). 2010 (May). ANA's NQF-Endorsed Measure Specifications: Guidelines for Data Collection on the American Nurses Association's National Quality Forum Endorsed Measures: Nursing Care Hours per Patient Day; Skill Mix; Falls and Falls with Injury. Retrieved May 10, 2011. <https://www.nursingquality.org/Default.aspx>.

Patrician, P.A., L. Loan, M. McCarthy, L.R. Brosch and K.S. Davey. 2010. "Towards Evidence-Based Management: Creating an Informative Database of Nursing-Sensitive Indicators." Journal of Nursing Scholarship 42(4): 358–66.

Pink, G.H., D. Imtiaz, L. McGillis Hall and I. McKillop. 2007. "Selection of Key Financial Indicators: A Literature, Panel and Survey Approach." Healthcare Quarterly 10(1): 87–98.

Pringle, D.M. and P. White. 2002. "Happenings. Nursing Matters: The Nursing and Health Outcomes Project of the Ontario Ministry of Health and Long-Term Care." Canadian Journal of Nursing Research 33: 115–21.

Rutherford, M. 2008. "Standardized Nursing Language: What Does It Mean for Nursing Practice?" Online Journal of Issues in Nursing 13(1). Retrieved May 10, 2011. <http://www.nursingworld.org/MainMenuCategories/ANAMarketplace/ANAPeriodicals/OJIN/TableofContents/vol132008/No1Jan08/ArticlePreviousTopic/StandardizedNursingLanguage.aspx>.

Sermeus, W., L. Delesie, K. Van den Heede, L. Diya and E. Lesaffre. 2008. "Measuring the Intensity of Nursing Care: Making Use of the Belgian Nursing Minimum Data Set." International Journal of Nursing Studies 45(7): 1011–21.

Van den Heede, K., E. Lesaffre, L. Diya, A. Vleugels, S.P. Clarke, L.H. Aiken and W. Sermeus. 2009a. "The Relationship between Inpatient Cardiac Surgery Mortality and Nurse Numbers and Educational Level: Analysis of Administrative Data." International Journal of Nursing Studies 46(6): 796–803.

Van den Heede, K., D. Michiels, O. Thonon and W. Sermeus. 2009b. "Using Nursing Interventions Classification as a Framework to Revise the Belgian Nursing Minimum Data Set." International Journal of Nursing Terminologies and Classifications 20(3): 122–31.

Van den Heede, K., W. Sermeus, L. Diya, S.P. Clarke, E. Lesaffre, A. Vleugels and L.H. Aiken. 2009c. "Nurse Staffing and Patient Outcomes in Belgian Acute Hospitals: Cross-Sectional Analysis of Administrative Data." International Journal of Nursing Studies 46(7): 928–39.

Veterans Affairs Office of Nursing Services (ONS). 2009. Annual Report 2009. VA Nursing: Connecting All the Pieces of the Puzzle to Transform Care for Veterans. Washington, DC: Author. Retrieved May 10, 2011. <http://www.va.gov/NURSING/Published_Reports.asp>.

Werley, H.H., E.C. Devine, C.R. Zorn, P. Ryan and B.L. Westra. 1991. "The Nursing Minimum Data Set: Abstraction Tool for Standardized, Comparable, Essential Data." American Journal of Public Health 81(4): 421–26.

White, P., D. Pringle, D. Doran and L.M. Hall. 2005. "The Nursing and Health Outcomes Project." Canadian Nurse 101(9): 14–18.

World Health Organization (WHO). 2009. Conceptual Framework for the International Classification for Patient Safety, Version 1.1: Final Technical Report. Geneva: Author. Retrieved May 10, 2011. <http://www.who.int/patientsafety/taxonomy/icps_full_report.pdf>.

Footnotes

1 This paper has been condensed for the Canadian Journal of Nursing Leadership. The full knowledge synthesis is available at http://www.nhsru.com/wp-content/uploads/Knowledge-Synthesis-Toward-a-National-Nursing-Report-Card-March_11-_2_.pdf.
