Healthcare Policy

Healthcare Policy 15(2) November 2019: 100–114. doi:10.12927/hcpol.2019.26068
Online Exclusive

Development and Validation of a Brief Hospital-Based Ambulatory Patient Experience Survey (HAPES) Tool

Shabnam Ziabakhsh, Arianne Albert and Edwina Houlihan

Abstract

Recognition of the value of the patient perspective on services has led healthcare organizations to measure patient care experiences. A brief, generic and psychometrically sound scale to measure patient experiences in ambulatory/outpatient settings in Canada would be useful and is currently lacking. The purpose of this study was to develop and validate an English-language hospital-based ambulatory patient experience survey tool in a Canadian context. Based on a review of more than 20 instruments measuring experiences predominantly in non-acute care settings, we initially selected 27 items for the questionnaire, addressing the quality dimensions of access, communication, continuity and coordination, shared decision-making, emotional support, trust/confidence, privacy, patient-reported impact and physical environment. The survey instrument was subsequently tested among 1,219 ambulatory patients, and its psychometric properties were assessed. A final 14-item questionnaire was produced, with two subscales emerging from a factor analysis: Patient–Provider Communication and Overall Quality of Experience. The items within the scale showed high construct validity, and the reliability of the instrument was excellent. The applicability of this tool in supporting quality improvement initiatives is discussed.

Introduction

In the past two decades, the focus of patient feedback tools has shifted from probing about "satisfaction" to inquiring about "experiences" (Cleary 1999; Doyle et al. 2013; LaVela and Gallan 2014). While satisfaction surveys measure attitudes about care, they say very little about the nature of the services received. Experience surveys, on the other hand, focus on whether processes or events occurred during the care encounter, providing more actionable insights (Jenkinson et al. 2002). Patient experience refers to any process perceptible by patients. This can include subjective experiences (e.g., felt supported), objective experiences (e.g., waited 15 minutes) and observable experiences (e.g., answered questions; Price et al. 2014). Although all experiences are likely filtered through a subjective lens (LaVela and Gallan 2014), research has shown that better patient care experiences are associated with higher levels of adherence to treatment, better clinical outcomes, better safety and lower care utilization (Boulding et al. 2011; Doyle et al. 2013; Glickman et al. 2010; Isaac et al. 2010; Price et al. 2014).

The importance of capturing the patient voice has led organizations to measure and monitor patient care experiences. Continuous monitoring of patient experiences using self-reported tools, combined with feedback mechanisms to managers and healthcare providers, can lead to service improvements and a culture of quality and patient engagement (Boyer et al. 2006; Jangland et al. 2012; Larsen 2011; Rogers and Smith 1999). Using patient feedback for improvement does demand a concerted effort, often requiring an existing culture of quality improvement to support potential changes (Davies and Cleary 2005; Luxford and Sutton 2014) and a robust feedback mechanism for service providers involving structured debriefing activities (Larsen 2011).

Having a validated and standardized tool to measure hospital ambulatory (outpatient) experiences is timely, especially in light of the standardization of patient experience measurement across acute settings in Canada (Canadian Institute for Health Information [CIHI] 2014). Although a number of patient experience tools measure care in primary and some ambulatory settings, to our knowledge, a brief, validated and generic instrument that measures hospital ambulatory experiences is currently lacking in Canada. The standardized tools that do exist are often lengthy, not generic (Benson and Potts 2014; Sjetne et al. 2011), may have a primary care focus and may lack important dimensions of patient experience (Sjetne et al. 2011; Wong and Haggerty 2013).

Wong and Haggerty (2013) identified a need for a standardized tool to measure patient experiences in the primary healthcare system in Canada. We argue that this need extends to ambulatory settings. Ambulatory/outpatient care is distinct from primary care in that ambulatory patients may receive care from a team of specialized care providers, patients may see different providers at each care encounter and care is often discontinuous, with the expectation that patients will return to their primary care provider for ongoing support.

Looking at existing non-acute survey tools, a few are notable. Picker's ambulatory surveys often have between 60 and 100 questions, depending on the patient population (NRC+Picker 2003; Picker Institute Europe 2015). Survey length is a major barrier to survey completion, often contributing to survey fatigue and low response rates (Benson and Potts 2014; Haggerty et al. 2011a; Hojat et al. 2011; Patwardhan and Spencer 2012; Sjetne et al. 2011). Shorter tools sometimes used in ambulatory settings have a predominantly primary care focus. The CAHPS Clinician and Group Survey, one of the most widely used surveys in the US and also used in ambulatory clinics, is a 31-item questionnaire with provider-specific questions, including questions regarding relational continuity (e.g., "Is this the provider you usually see if you need a checkup, want advice about a health problem or get sick or hurt?"; AHRQ 2015). The 34-item Massachusetts Ambulatory Care Experiences Survey, despite its name, also has a primary care focus, with most questions framed toward "your personal doctor" (Safran et al. 2006). The General Practice Assessment Questionnaire, widely used in the UK, consists of 46 items and, as the name implies, focuses on patients' primary care experiences. Benson and Potts' (2014) howRwe tool is a very brief (four-item) instrument that can be used in a variety of care contexts for continuous feedback but does not address all of the experience quality dimensions important to patients (Price et al. 2014).

Hence, there is a need for a validated, ambulatory-focused instrument that is brief, comprehensive and yet generic enough to be used across a wide range of ambulatory clinics. This article describes the development and validation of an English-language hospital-based ambulatory patient experience survey tool in one Canadian context.

Methods

Questionnaire development

Existing validated patient experience tools used in non-acute care settings were reviewed. This review was predominantly informed by the work of Wong and Haggerty (2013), who conducted a scoping review and identified 17 publicly available instruments from Canada, the UK and the US that measure patients' experiences in non-acute care settings, including the CAHPS Clinician and Group Survey, the Ambulatory Care Experiences Survey and the General Practice Assessment Questionnaire. In particular, we assessed the 87 questions that they selected as a result of their review and deemed important in capturing dimensions of patient experience. In addition to the instruments/questions identified by Wong and Haggerty, a number of publicly available tools were identified and reviewed, namely, the Ontario Primary Care Patient Experience Survey (Health Quality Ontario [HQO] 2015), the Australian Bureau of Statistics (2014) Survey, the Massachusetts Health Quality Partners (2009) Survey, the Communication Assessment Tool (Makoul et al. 2007) and the Patient Experience Questionnaire (Steine et al. 2001). The review also included the Canadian Institute for Health Information's (2014) Canadian Patient Experiences Survey. This tool, developed to support pan-Canadian comparisons of acute patient experiences, is currently used in many health jurisdictions across Canada. Given this tool's relevance in the Canadian context, it was important to explore its potential for adaptation to the ambulatory/outpatient setting. In fact, all of the survey instruments noted earlier were reviewed for questions applicable across a wide variety of ambulatory/outpatient environments.

Questions from the above-mentioned survey tools were compiled and organized by the following experience domains: access, communication, continuity and coordination, shared decision-making, emotional support, trust/confidence, privacy, patient-reported impact, physical environment and overall assessment/satisfaction. These domains are similar to the quality dimensions proposed in the literature, namely, the Picker Institute's dimensions of patient-centred care (Gerteis et al. 1993; Jenkinson et al. 2002; Kitson et al. 2012). Questions pertaining to in-patient care (e.g., response time to call bell) were omitted from this compilation.

A working group (n = 9) comprising BC Women's Hospital + Health Centre (BC Women's) managers and directors was formed. A few members of the working group had clinical backgrounds and had been involved in direct patient care in their previous roles. Some members were public health professionals. One manager was a quality and system improvement expert, and a few of the working group members were involved in research and had expertise in questionnaire development. Working group members reviewed each question and voted on its inclusion/exclusion via a modified Delphi process (Hagen et al. 2008). Voting was done in private; individuals selected the items they favoured keeping and sent their choices back to the first author. The instruction was to keep at least one item from each experience domain (e.g., communication). Results of the voting rounds were presented to the group, followed by discussion to reach consensus on which questions to retain and how best to modify/adapt them as necessary. Three rounds of voting and discussion resulted in the inclusion of 23 questions. Another four questions were added to the questionnaire related to the use of interpretive services, ease of wayfinding and the "Hello, My Name Is" campaign, an initiative to encourage providers to introduce themselves by name to establish rapport and show respect (National Health Service [NHS] 2013).

Many of these questions were selected and/or modified to address patient care and flow in ambulatory environments. For example, questions pertaining to "communication between team members," "coordination of appointments," "provider introducing himself/herself by name" and "wayfinding" are particularly relevant in hospital ambulatory care settings. Other questions were not selected because of their primary care focus (e.g., "How often were you taken care of by the same person?" "When you made an appointment for a checkup or routine care with this provider, how often did you get an appointment as soon as you needed?"). Some questions were also rephrased to ask about experience as opposed to satisfaction. Response scales were kept consistent across questions, as much as possible, for ease of completion. The response categories of "yes," "somewhat" and "no" were adopted wherever applicable (Jenkinson et al. 2002) because the survey mainly assessed experience rather than satisfaction (e.g., excellent, very good). Furthermore, the frequency of care (e.g., always, sometimes), often gauged in acute and primary care surveys (CIHI 2014; HQO 2015), was not evaluated because in ambulatory settings, patients' contact with healthcare providers and staff may be time-limited.

The resulting questionnaire included questions addressing the access, environment, continuity and coordination, communication, shared decision-making, emotional support, trust/confidence, privacy, self-reported impact and overall assessment dimensions, all deemed important for measuring patient experience (Gerteis et al. 1993; Price et al. 2014; Wong and Haggerty 2013).

The new 27-item questionnaire was pretested in English with 20 patients from various BC Women's ambulatory clinics. After survey completion, patients were asked about length, flow, clarity, simplicity and importance. For example, questions regarding importance included the following: "Were the questions included important to ask?" "Anything about your experience that we did not ask in the survey?" All of the patients provided favourable responses and viewed all questions as relevant. After the pretest, minor revisions (wording changes) were made to the survey, and no changes were made to the flow or order of the questions. Table 1 shows the final questionnaire before psychometric testing, including the instrument from which each question originated or was adapted and the service/care domain it represents.

Participants and procedures

The paper survey was distributed to all unique patients who visited BC Women's ambulatory clinics in the month of October 2016. BC Women's, with over 30 outpatient clinics and approximately 60,000 patient visits annually, provides diverse services, ranging from high-risk maternity care and diagnostics (e.g., Diabetes Clinic, Hematology Clinic, Internal Medicine) to gynecology, sexual and reproductive health (e.g., Recurrent Pregnancy Loss Clinic, Continence Clinic, Abortion and Counselling Services) and specialty services such as medical genetics, HIV care, a complex chronic diseases program, a heart health program and health services for new immigrants. The questionnaire was distributed at the time of check-in by clerical staff at each of the ambulatory clinics. Patients were instructed to complete the anonymous survey after their visit (on-site) and to place the completed survey in designated collection boxes. Patients who had already completed the survey during the survey month were not asked to complete it again. Staff were fully briefed and trained before the survey launch. In total, 1,411 surveys were completed, resulting in a 55% response rate.

Data preparation

Of the 1,411 returned surveys, 192 (14%) had a blank second page (Questions 15 to 27). Those surveys were excluded from further analysis, bringing the survey count to 1,219. Questions that had more than 10% missing or non-applicable responses, or that had categorical responses, were removed from the subsequent factor analysis (see Table 2 for a description). However, questions omitted from the analysis do not necessarily need to be removed from the survey (Floyd and Widaman 1995; van der Eijk et al. 2012). All remaining ordinal or binomial items were used in the factor analysis, except the overall assessment questions (Questions 26 and 27).
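
To make these screening rules concrete, the following is a minimal sketch in R (the language used for all analyses in this study); the data frame raw and the column names q1 to q27 are assumptions for illustration, not the study's actual code.

```r
# Illustrative sketch of the data-preparation rules described above.
# Assumes (hypothetically) a data frame `raw` with one row per survey and
# columns q1..q27, where NA codes both missing and "not applicable" answers.
page2 <- paste0("q", 15:27)

# Exclude surveys whose second page (Questions 15 to 27) was left blank.
dat <- raw[rowSums(!is.na(raw[, page2])) > 0, ]

# Set aside items with more than 10% missing/not-applicable responses;
# these are excluded from the factor analysis, not from the survey itself.
prop_missing <- colMeans(is.na(dat))
fa_items <- names(prop_missing)[prop_missing <= 0.10]
```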

Data analysis

Psychometric properties of the survey tool were assessed using exploratory factor analysis (EFA) to reveal underlying constructs, after which construct validity and internal reliability were evaluated. All analyses were carried out in R version 3.5.0 (R Core Team 2018). The EFA was based on polychoric correlations because the items were mostly ordinal with three or five options (Revelle 2016). The polychoric correlation matrix was used in a minimum residual factor analysis with oblique rotation. The number of factors to include was chosen using a combination of the very simple structure criterion (Revelle and Rocklin 1979) and the Velicer (1976) minimum average partial criterion. Any item that did not load at ≥ 0.3 on at least one factor was removed, and the factor analysis was rerun until all items had loadings ≥ 0.3 on a factor.
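
For readers wishing to reproduce this workflow, a minimal sketch using the psych package (Revelle 2016) follows; the data frame items (the retained ordinal items, e.g., dat[, fa_items] from the sketch above) is an assumption for illustration.

```r
# Minimal sketch of the EFA workflow described above (psych package).
library(psych)

rho <- polychoric(items)$rho  # polychoric correlation matrix

# Candidate numbers of factors: the very simple structure (VSS) criterion
# and Velicer's minimum average partial (MAP) are both reported by vss().
vss(rho, n = 4, fm = "minres", n.obs = nrow(items))

# Minimum residual factoring with an oblique (oblimin) rotation.
efa <- fa(rho, nfactors = 2, fm = "minres", rotate = "oblimin")

# Remove items that fail to load at >= 0.3 on any factor, then rerun.
keep <- apply(abs(unclass(efa$loadings)), 1, max) >= 0.3
efa <- fa(polychoric(items[, keep])$rho, nfactors = 2,
          fm = "minres", rotate = "oblimin")
print(efa$loadings, cutoff = 0.3)
```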

Once factors (scales) were identified, scale scores were constructed by summing the values of the items included in that scale. If a scale contained items with different numbers of possible responses, the response values were centred and scaled before summation. Validity was assessed by calculating Spearman's rank correlation between the scale scores and the overall experience rating (Question 26); a strong correlation would suggest that the scales measure experience in a meaningful way. Construct validity was also assessed by determining the strength of the relationship between each individual question and the overall experience score, allowing further assessment of the merit of the questions that were not pulled into any scale. Polyserial correlations were calculated assuming an ordinal, binary or categorical structure of the items, as appropriate. Question 24 (on courtesy and respect) was excluded because of lack of variance in the responses, with 1,179 (96.7%) of respondents reporting that they had been treated with courtesy and respect.
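
A sketch of the scoring and validity checks, under the same assumptions as above, might look as follows; the item sets mirror the two factors reported in the Results, and overall (the Question 26 rating) is an assumed variable.

```r
# Sketch of scale scoring and validity checks. `items` is as in the
# previous sketch; `overall` holds the Question 26 overall experience rating.
library(psych)

f1 <- paste0("q", c(13, 15, 16, 17, 18, 19, 21, 22, 23))  # communication items
f2 <- paste0("q", c(1, 2, 6, 8, 24))                      # quality-of-experience items

# Centre and scale responses before summing, for items whose response
# ranges differ, then sum into scale scores.
score1 <- rowSums(scale(items[, f1]))
score2 <- rowSums(scale(items[, f2]))

# Construct validity: Spearman's rank correlation with the overall rating.
cor.test(score1, overall, method = "spearman")
cor.test(score2, overall, method = "spearman")

# Item-level polyserial correlations with the overall experience score
# (the study excluded Question 24 here because of its lack of variance).
polyserial(as.matrix(as.numeric(overall)), items)
```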

Internal reliability was evaluated using ordinal alpha, as calculated from polychoric correlations (Zumbo et al. 2007), for the overall instrument and within each identified scale. An alpha value > 0.70 was considered an indication of adequate reliability.
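
A corresponding sketch of the ordinal alpha computation, reusing the assumed objects (items, f1, f2) from the sketches above:

```r
# Ordinal alpha (Zumbo et al. 2007): coefficient alpha computed from the
# polychoric rather than the Pearson correlation matrix.
library(psych)
alpha(polychoric(items)$rho)        # all retained items
alpha(polychoric(items[, f1])$rho)  # Patient-Provider Communication scale
alpha(polychoric(items[, f2])$rho)  # Overall Quality of Experience scale
```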

This study was conducted for quality improvement and monitoring and, therefore, did not fall under the scope of the Research Ethics Board, as per the University of British Columbia Guidance notes, Article 4.4.1 and Tri-Council Policy Statement 2 (TCPS2) Article 2.5. However, verbal consent was gathered by the clerical staff at the time of survey distribution. Data collection occurred in accordance with the agency's privacy laws.

Results

Factor structure

After two rounds of EFA, 14 items remained in the analysis. Two factors emerged as the best solution by the very simple structure criterion and one factor by the Velicer minimum average partial criterion; however, the two-factor solution made the most sense from a construct validity perspective. The first factor contains Items 13, 15, 16, 17, 18, 19, 21, 22 and 23 and relates to provider–patient communication. The second factor contains Items 1, 2, 6, 8 and 24 and relates to the quality of the experience in relation to operations and interactions with providers. Table 2 gives a summary of the item loadings on the factors.

Construct validity

There was a strong and significant negative correlation between the scale scores and the overall experience score (Question 26; Factor 1 Spearman's ρ = −0.38, p < 0.0001, and Factor 2 Spearman's ρ = −0.51, p < 0.0001). The correlations are negative because the scale scores are higher for those who had a worse experience.

In addition, polyserial correlations were calculated between all survey items and the overall experience score (Question 26; Table 2). Most of the correlations are negative because the items are scored such that increasing scores indicate poorer experiences; the two exceptions were Questions 5 and 25, which were reverse scored relative to the other items. The strongest item correlations with overall experience were for Question 21 ("Did you feel supported by the clinic team?") and Question 23 ("Did you have confidence in the healthcare provider(s) treating you at the clinic?"). Most items had fairly high correlations with overall experience (> 0.3), whereas only four items had correlations lower than 0.2 (Questions 5, 10, 11 and 12). These questions concern wayfinding, the need for interpretation and the introduction of the healthcare provider by name.

Internal reliability

Overall ordinal alpha, for all 14 items, was 0.91. For the first scale, the ordinal alpha was 0.90, and for the second scale, the ordinal alpha was 0.83 (Table 2), indicating very good internal reliability of these scales.

Discussion and Conclusion

Following a review of existing non-acute patient experience survey tools, a valid instrument to measure ambulatory patient experiences was developed. This 14-item tool, with its two subscales – Patient–Provider Communication and Overall Quality of Experience (the latter covering both provider and operational issues) – is brief and can be completed quickly (in five minutes) in waiting rooms. The items within the scale showed strong correlations with the overall experience score, suggesting that the scale has high construct validity, measuring some aspect of positive care experience. Reliability was also excellent for the instrument as a whole and within its subscales. Furthermore, the low proportion of missing or "not applicable" responses for the items retained in the scale indicates good acceptability and applicability of this tool across a wide range of health services – making it suitable as a generic tool (Sjetne et al. 2011).

The success of experience measurement tools lies in the extent to which they reflect what matters most to patients (LaVela and Gallan 2014). Both patient–provider communication and interaction are important components of experience (Dang et al. 2012) that were captured by this tool. The Patient–Provider Communication subscale measures the communication aspects of the clinical encounter, whereas the Overall Quality of Experience subscale includes items related to the quality of patient–provider/staff interaction, in terms of feeling respected and having a positive first contact. Results showed that the strongest item correlation with overall experience was for the question on "feeling supported." This finding concurs with other studies showing that patient–provider interaction far exceeds other components of experience in predicting positive patient experiences (Dang et al. 2012; Sjetne et al. 2011; Steine et al. 2001; Van de Ven 2014). The importance of patient–provider communication for promoting treatment adherence and improved health outcomes has also been well documented (Gordon et al. 2007; Street et al. 2009; Zolnierek and Dimatteo 2009). Hence, the fact that we are conceptually measuring what matters most to patients in their care experience lends weight to the relevance of this scale.

Implications for practice and policy

Decision-makers need to provide direction to support site- and agency-wide patient experience surveys and initiatives. Ideally, measurement systems should be consistent and used across organizations, have scientific rigour, be brief and generic enough to be accepted and applicable in a variety of settings, and should be translated into quality improvement plans that inform the delivery of patient-centred care. Given the move toward standardization of in-patient experience surveys in Canada (CIHI 2014), a validated hospital-based ambulatory survey tool becomes all the more timely.

The value of a validated patient experience survey tool lies not only in how well it is implemented (e.g., in terms of appropriate sampling and response rate) but also in the extent to which the findings are used in patient improvement initiatives (Patwardhan and Spencer 2012). A quality improvement culture is often a prerequisite to organizational change; otherwise, surveys may be used as accountability checks, without any meaningful improvement intentions behind them. Coulter and colleagues (2014) argued that "it is unethical to ask patients to comment on experiences if these comments are going to be ignored" (p. 3). They further argued that only a limited number of hospitals act on patient experience survey findings. Factors that increase an organization's likelihood of making changes as a result of patient feedback include commitment of leadership, clarity of objectives, identification of champions, patient and family engagement, skillfulness of staff, training and capacity, availability of resources and depth of understanding of patient perspectives (Luxford et al. 2011). Hence, it is not enough to have the right tool and use the right methods; there must also be a plan of action within a culture that supports patient-centred improvements.

Limitations and future research directions

This study has several limitations. The results are based on a single organization, which may limit generalizability, although the services at BC Women's are quite diverse, with over 30 clinics that serve both pregnant and non-pregnant women and their families. Nonetheless, the vast majority of patients at BC Women's are women, so the acceptability of this tool in other populations and ambulatory settings requires further investigation.

The survey was pretested with patients before the survey launch, but the initial questionnaire development phase could have been strengthened by the participation of patients in the survey instrument review process. Patients were not included in this process because of the level of time commitment it would have required. However, the working group members often adopted a patient perspective, and their review was informed by the patient/family feedback they had received as seasoned managers and leaders. Nonetheless, it can be argued that the working group members may have been more inclined to select survey questions pertaining to areas that they could impact and improve upon. Given that few organizations take action based on survey data (Coulter et al. 2014), question selection being shaped by actionability may not necessarily be a bad thing. Yet, a more balanced approach would be to include the patient voice early in the survey development process, as this is consistent with a more patient-centred approach (Stevenson 2002).

Pretesting of the survey with patients (once it was developed) yielded positive feedback, and patients deemed all of the selected questions very important. However, cognitive or "think-aloud" interviewing with patients during pretesting, to gain a more in-depth understanding of how they comprehend and respond to survey questions, would have benefited the pretesting and is highly recommended for any future survey development work (Willis and Artino 2013). The next iteration of this tool should ideally include both cognitive interviewing and patient engagement in question selection and prioritization.

Patients were instructed to complete the survey on-site immediately after their encounter; this method was easy to administer, was not resource intensive and yielded a reasonable response rate (55%). However, the timing of survey distribution has been shown to affect patient-reported experiences, with less favourable ratings ensuing as more time elapses after the care encounter (Bjertnaes 2012). Hence, survey mode and timing should be given due consideration before any agency-wide decision on survey distribution, and both should be standardized to avoid timing and survey mode acting as confounding variables (Bjertnaes 2012).

It was beyond the scope of this study to collect patient outcome data; thus, future studies can examine the predictive validity of this tool by exploring the relationship between scale scores and outcome indicators (e.g., treatment adherence). Discriminant validity can also be examined in future studies to determine differences in scores based on known operational or resource issues (e.g., wait time). Test–retest reliability can similarly be studied using repeated measures within a patient sample.

Furthermore, the validity of this tool was not examined across clinic types (e.g., maternity services, gynecology/sexual health services and specialized programs); rather, a set of generic questions was identified that would be applicable across all services. To develop a generic tool, many questions were not considered for inclusion (in the review process), and some of the included items that received a high percentage of "not applicable" or missing responses were subsequently excluded from the scale (in the validation process). The decision to use a generic tool versus lengthier contextualized measures ultimately depends on the purpose of the patient evaluation. In addition, the resulting 14-item tool can be used in conjunction with other clinic-specific outcome and patient-reported experience measures when deemed necessary (Kingsley and Patel 2017). Regardless, a brief set of standardized questions that can be applied across a wide variety of ambulatory services is highly valued and greatly needed.

Although not a limitation per se, it can be argued that questions omitted from the scale can potentially be used on a per-item basis, with their scores reported individually rather than pulled into a particular scale (van der Eijk et al. 2012). The questions that are highly correlated with the overall experience score (Questions 20, 9, 3, 14, 25 and 7) are likely the best candidates for such usage. Some of these items did not make the scales because of a high number of non-applicable or missing responses. Survey items should be applicable to as many respondents as possible, especially when developing a generic tool, because non-applicability can lead respondents to view the entire instrument as not relevant (Jenkinson et al. 2002; Sjetne et al. 2011). If the nature of the service makes these questions more applicable, then it may be worthwhile to include them in the questionnaire, treating them as single items rather than as part of the scale. Single items, however, normally require larger sample sizes to achieve reliable results (Streiner and Norman 2003).

Future studies showcasing how patient experience survey findings can promote organizational change and improvements are also needed. Such studies can highlight successful strategies for making survey data more actionable.

Finally, it should be noted that patient experience can be captured through a variety of means, other than self-reported surveys. In fact, qualitative methods may elicit a deeper understanding of the patient experience and can provide added insights if used in conjunction with experience surveys. Besides the standard qualitative approaches, such as interviews and focus groups, some new innovative methods have begun to emerge, including ethnographic approaches, photovoice and guided tours (LaVela and Gallan 2014).

Conclusions

The results support the reliability, validity and acceptability of an ambulatory patient experience questionnaire with emphasis on patient–provider communication and overall quality of care experience, with a focus on both provider and operational issues. The relevance of this tool in other ambulatory settings and populations requires further investigation.

About the Author(s)

Shabnam Ziabakhsh, PhD, Evaluation Specialist, BC Women's Hospital + Health Centre, Vancouver, BC

Arianne Albert, PhD, Biostatistician, Women's Health Research Institute, Vancouver, BC, Adjunct Associate Professor, University of British Columbia, Vancouver, BC

Edwina Houlihan, RN, BSCN, MBA, Senior Director, Patient Care Services, BC Women's Hospital + Health Centre, Vancouver, BC

Correspondence may be directed to: Shabnam Ziabakhsh, PhD, BC Women's Hospital + Health Centre, 4500 Oak Street, Vancouver, BC V6H 3N1; tel.: 604-875-2424, ext. 6486; e-mail: sziabakhsh@cw.bc.ca.

Acknowledgment

This study was funded by BC Women's Hospital + Health Centre (BC Women's), an agency of the Provincial Health Services Authority. We would like to thank the patients who completed the survey. A special thank you to the working group members who contributed to the questionnaire development, namely, Dr. Ann Pederson (who also provided insightful comments on an earlier draft), Esther Pang-Wong, Jill Pascoe, Caitlin Johnston, Edyta Kowalska, Anna Bloomfield and Bal Lashar. Thanks also go to Pavandeep Gill and all the clerical staff at BC Women's who participated in data collection.

References

Agency for Healthcare Research and Quality (AHRQ). 2015. CAHPS Clinician & Group Survey. Retrieved June 4, 2017. <https://www.ahrq.gov/cahps/surveys-guidance/cg/instructions/index.html>.

Australian Bureau of Statistics (ABS). 2014. The Measurement of Patient Experience in Non-GP Primary Health Care Settings. Retrieved February 22, 2017. <http://www.aihw.gov.au/WorkArea/DownloadAsset.aspx?id=60129547330>.

Benson, T. and H.W.W. Potts. 2014. A Short Generic Patient Experience Questionnaire: howRwe Development and Validation. BMC Health Services Research 14: 499. doi:10.1186/s12913-014-0499-z.

Bjertnaes, O.A. 2012. The Association between Survey Timing and Patient-Reported Experiences with Hospitals: Results of a National Postal Survey. BMC Medical Research Methodology 12: 13. doi:10.1186/1471-2288-12-13.

Boulding, W., S.W. Glickman, M.P. Manary, K.A. Schulman and R. Staelin. 2011. Relationship between Patient Satisfaction with Inpatient Care and Hospital Readmission Within 30 Days. American Journal of Managed Care 17(1): 41–48.

Boyer, L., P. Francois, E. Doutre, G. Weil and J. Labarere. 2006. Perception and Use of the Results of Patient Satisfaction Surveys by Care Providers in a French Teaching Hospital. International Journal for Quality in Health Care 18(5): 359–64. doi:10.1093/intqhc/mzl029.

Canadian Institute for Health Information (CIHI). 2014. Canadian Patient Experiences Survey – Inpatient Care Procedure Manual. Retrieved February 22, 2017. <https://www.cihi.ca/en/cpes_ic_procedure_20140501_en.pdf>.

Care Quality Commission. 2013. NHS Patient Surveys. Retrieved February 22, 2017. <http://www.nhssurveys.org/>.

Cleary, P.D. 1999. The Increasing Importance of Patient Surveys. British Medical Journal 319: 720–21. doi:10.1136/bmj.319.7212.720.

Coulter, A., L. Locock and S. Ziebland. 2014. Collecting Data on Patient Experience Is Not Enough: They Must Be Used to Improve Care. British Medical Journal 348: g2225. doi:10.1136/bmj.g2225.

Dang, B.N., R.A. Westbrook, M.C. Rodriguez-Barradas and T.P. Giordano. 2012. Identifying Drivers of Overall Satisfaction in Patients Receiving HIV Primary Care: A Cross-Sectional Study. PLoS ONE 7(8): e42980. doi:10.1371/journal.pone.0042980.

Davies, E. and P.D. Cleary. 2005. Hearing the Patient's Voice? Factors Affecting the Use of Patient Survey Data in Quality Improvement. Quality & Safety in Health Care 14: 428–32. doi:10.1136/qshc.2004.012955.

Doyle, C., L. Lennox and D. Bell. 2013. A Systematic Review of Evidence on the Links between Patient Experience and Clinical Safety and Effectiveness. BMJ Open 3: e001570. doi:10.1136/bmjopen-2012-001570.

Floyd, F. J. and K.F. Widaman. 1995. Factor Analysis in the Development and Refinement of Clinical Assessment Instruments. Psychological Assessment 7: 286–99. doi:10.1037/1040-3590.7.3.286.

General Practice Assessment Questionnaire Administration (GPAQ). 2012. Retrieved February 22, 2017. <http://www.phpc.cam.ac.uk/gpaq/home/downloads/>.

Gerteis, M., S. Edgman-Levitan, J. Daley and T.L. Delbanco. 1993. Through the Patient's Eyes: Understanding and Promoting Patient-Centred Care. San Francisco, CA: Jossey-Bass.

Glickman, S.W., W. Boulding, M. Manary, R. Staelin, M.T. Roe, R.J. Wolosin et al. 2010. Patient Satisfaction and its Relationship with Clinical Quality and Inpatient Mortality in Acute Myocardial Infarction. Circulation: Cardiovascular Quality and Outcomes 3(2): 188–95. doi:10.1161/CIRCOUTCOMES.109.900597.

Gordon, K., F. Smith and S. Dhillon. 2007. Effective Chronic Disease Management: Patients' Perspectives on Medication-Related Problems. Patient Education and Counseling 65: 407–15. doi:10.1016/j.pec.2006.09.012.

Hagen, N.A., C. Stiles, C. Nekolaichuk, P. Biondo, L.E. Carlson, K. Fisher et al. 2008. The Alberta Breakthrough Pain Assessment Tool for Cancer Patients: A Validation Study Using a Delphi Process and Patient Think-Aloud Interview. Journal of Pain and Symptom Management 35(2): 136–52. doi:10.1016/j.jpainsymman.2007.03.016.

Haggerty, J.L., C. Beaulieu, B. Lawson, D.A. Santor, M. Fournier and F. Burge. 2011a. What Patients Tell Us about Primary Healthcare Evaluation Instruments: Response Formats, Bad Questions and Missing Pieces. Healthcare Policy 7: 66–78. doi:10.12927/hcpol.2013.22693.

Haggerty, J.L., D. Roberge, G.K. Freeman, C. Beaulieu and M. Breton. 2011b. When Patients Encounter Several Providers: Validation of a Generic Measure of Continuity of Care. Annals of Family Medicine 10(5): 443–51. doi:10.1370/afm.1378.

Health Quality Ontario (HQO). 2015. Primary Care Patient Experience Survey: Support Guide. Retrieved February 22, 2017. <http://www.hqontario.ca/Portals/0/documents/qi/primary-care/primary-care-patient-experience-survey-support-guide-en.pdf>.

Hojat, M., D.Z. Louis, K. Maxwell, F.W. Markham, R.C. Wender and J.S. Gonnella. 2011. A Brief Instrument to Measure Patients' Overall Satisfaction with Primary Care Physician. Family Medicine 43(6): 412–17.

Isaac, T., A.M. Zaslavsky, P.D. Cleary and B.E. Landon. 2010. The Relationship between Patients' Perception of Care and Measures of Hospital Quality and Safety. Health Services Research 45: 1024–40. doi:10.1111/j.1475-6773.2010.01122.x.

Jangland, E., M. Carlsson, E. Lundgren and L. Gunningberg. 2012. The Impact of an Intervention to Improve Patient Participation in a Surgical Care Unit: A Quasi-Experimental Study. International Journal of Nursing Studies 49: 528–38. doi:10.1016/j.ijnurstu.2011.10.024.

Jenkinson, C., A. Coulter, S. Bruster, N. Richards and T. Chandola. 2002. Patients' Experiences and Satisfaction with Health Care: Results of a Questionnaire Study of Specific Aspects of Care. Quality & Safety in Health Care 11: 335–39. doi:10.1136/qhc.11.4.335.

Kingsley, C. and S. Patel. 2017. Patient-Reported Outcome Measures and Patient-Reported Experience Measures. BJA Education 17(4): 137–44. doi:10.1093/bjaed/mkw060.

Kitson, A., A. Marshall, K. Bassett and A. Zeitz. 2012. What Are the Core Elements of Patient-Centred Care? A Narrative Review and Synthesis of the Literature from Health Policy, Medicine and Nursing. Journal of Advanced Nursing 69(1): 4–15. doi:10.1111/j.1365-2648.2012.06064.x.

Larsen, D. 2011. Using Real Time Patient Feedback to Introduce Safety Changes. Nursing Management 18(6): 27–31. doi:10.7748/nm2011.10.18.6.27.c8718.

LaVela, S.L. and A.S. Gallan. 2014. Evaluation and Measurement of Patient Experience. Patient Experience Journal 1(1): 28–36. doi:10.35680/2372-0247.1003.

Luxford, K., D.G. Safran and T. Delbanco. 2011. Promoting Patient-Centred Care: A Qualitative Study of Facilitators and Barriers in Healthcare Organizations with a Reputation for Improving the Patient Experience. International Journal for Quality in Health Care 23: 510–15. doi:10.1093/intqhc/mzr024.

Luxford, K. and S. Sutton. 2014. How Does Patient Experience Fit into the Overall Healthcare Picture? Patient Experience Journal 1(1): 20–27. doi:10.35680/2372-0247.1002.

Makoul, G., E. Krupat and C.H. Chang. 2007. Measuring Patient Views of Physician Communication Skills: Development and Testing of the Communication Assessment Tool. Patient Education and Counseling 67(3): 333–42. doi:10.1016/j.pec.2007.05.005.

Massachusetts Health Quality Partners (MHQP). 2009. MHQP 2009 Patient Experience Survey Report – Adult Primary Care. Retrieved February 22, 2017. <http://www.massmed.org/advocacy/key-issues/tiering-and-pay-for-performance/sample-patient-experience-survey-report-(pdf)/>.

National Health Service (NHS). 2013. #hellomynameis. Retrieved February 22, 2017. <http://www.nhsemployers.org/campaigns/hello-my-name-is>.

NRC+Picker. 2003. Development and Validation of the Picker Ambulatory Oncology Survey Instrument in Canada. Markham, ON: NRC+Picker.

Patwardhan, A. and C.H. Spencer. 2012. Are Patient Surveys Valuable as a Service-Improvement Tool in Health Services? An Overview. Journal of Healthcare Leadership 4: 33–46. doi:10.2147/JHL.S23150.

Picker Institute Europe. 2015. Young Outpatient Survey: Final Report. Retrieved July 5, 2017.

Price, R.A., M.N. Elliott, P.D. Cleary, A.M. Zaslavsky and R.D. Hays. 2014. Should Health Care Providers be Accountable for Patients' Care Experiences? Journal of General Internal Medicine 30(2): 253–56. doi:10.1007/s11606-014-3111-7.

R Core Team. 2018. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.

Ramsay, J., J.L. Campbell, S. Schroter, J. Green and M. Roland. 2000. The General Practice Assessment Survey (GPAS): Tests of Data Quality and Measurement Properties. Family Practice 17(5): 372–79. doi:10.1093/fampra/17.5.372.

Revelle, W. 2016. Psych: Procedures for Psychological, Psychometric, and Personality Research. Evanston, IL: Northwestern University.

Revelle, W. and T. Rocklin. 1979. Very Simple Structure: An Alternative Procedure for Estimating the Optimal Number of Interpretable Factors. Multivariate Behavioral Research 14: 403–14. doi:10.1207/s15327906mbr1404_2.

Rogers, G. and D. Smith. 1999. Reporting Comparative Results from Hospital Patient Surveys. International Journal for Quality in Health Care 11: 251–59. doi:10.1093/intqhc/11.3.251.

Safran, D.G., M. Karp, K. Coltin, H. Chang, A. Li, J. Ogren et al. 2006. Measuring Patients' Experiences with Individual Primary Care Physicians: Results of a Statewide Demonstration Project. Journal of General Internal Medicine 21: 13–21. doi:10.1111/j.1525-1497.2005.00311.x.

Shi, L., B. Starfield and J. Xu. 2001. Validating the Adult Primary Care Assessment Tool. Journal of Family Practice 50(2): 161.

Sjetne, I.S., O.A. Bjertnaes, R.V. Olsen, H.H. Iversen and G. Bukholm. 2011. The Generic Short Patient Experiences Questionnaire (GS-PEQ): Identification of Core Items from a Survey in Norway. BMC Health Services Research 11: 88. doi:10.1186/1472-6963-11-88.

Steine, S., A. Finset and E. Laerum. 2001. A New, Brief Questionnaire (PEQ) Developed in Primary Health Care for Measuring Patients' Experience of Interaction, Emotion and Consultation Outcome. Family Practice 18(4): 410–18. doi:10.1093/fampra/18.4.410.

Stevenson, A.C.T. 2002. Compassion and Patient Centred Care. Australian Family Physician 31(12): 1103–06.

Street, R.L., G. Makoul, N.K. Arora and R.M. Epstein. 2009. How Does Communication Heal? Pathways Linking Clinician–Patient Communication to Health Outcomes. Patient Education and Counseling 74: 295–301.

Streiner, D.L. and G.R. Norman. 2003. Health Measurement Scales: A Practical Guide to Their Development and Use. Oxford, United Kingdom: Oxford University Press.

van der Eijk, M., M.J. Faber, I. Ummels, J.W. Aarts, M. Munneke and B.R. Bloem. 2012. Patient-Centeredness in PD Care: Development and Validation of a Patient Experience Questionnaire. Parkinsonism & Related Disorders 18: 1011–16. doi:10.1016/j.parkreldis.2012.05.017.

Van de Ven, A.H. 2014. What Matters Most to Patients? Participative Provider Care and Staff Courtesy. Patient Experience Journal 1(1): 131–39. doi:10.35680/2372-0247.1016.

Velicer, W. 1976. Determining the Number of Components from the Matrix of Partial Correlations. Psychometrika 41: 321–27. doi:10.1007/BF02293557.

Willis, G. and A.R. Artino. 2013. What Do our Respondents Think We're Asking? Using Cognitive Interviewing to Improve Medical Education Surveys. Journal of Graduate Medical Education 5(3): 353–56. doi:10.4300/JGME-D-13-00154.1.

Wong, S.T. and J. Haggerty. 2013. Measuring Patient Experiences in Primary Health Care: A Review and Classification of Items and Scales Used in Publicly-Available Questionnaires. Vancouver, BC: UBC Centre for Health Services and Policy Research.

Zolnierek, K.B. and M.R. Dimatteo. 2009. Physician Communication and Patient Adherence to Treatment: A Meta-Analysis. Medical Care 47(8): 826–34. doi:10.1097/MLR.0b013e31819a5acc.

Zumbo, B.D., A.M. Gadermann and C. Zeisser. 2007. Ordinal Versions of Coefficients Alpha and Theta for Likert Rating Scales. Journal of Modern Applied Statistical Methods 6(1): 21–29. doi:10.22237/jmasm/1177992180.
