- Research article
The health literate health care organization 10 item questionnaire (HLHO-10): development and validation
BMC Health Services Research, volume 15, Article number: 47 (2015)
Background

While research on individual health literacy is steadily increasing, less attention has been paid to the context of care, which may help patients navigate health care or compensate for their limited health literacy. In 2012, Brach et al. introduced the concept of health literate health care organizations (HLHOs) to describe this organizational context of care. This paper presents the development and validation of an HLHO instrument.
Methods

Ten items were developed to represent the ten attributes of an HLHO (HLHO-10) based on a literature review, an expert workshop, a focus group discussion, and qualitative interviews. The instrument was applied in a key informant survey in 51 German hospitals as part of a larger study on patient information and training needs (PIAT study). Item properties were analyzed and a confirmatory factor analysis (CFA) was conducted to test the instrument’s unidimensionality. To investigate the instrument’s predictive validity, a multilevel analysis was performed that used the HLHO-10 score to predict the adequacy of information provided to 1,224 newly diagnosed breast cancer patients treated at the sample hospitals.
Results

Cronbach’s α of the resulting scale was 0.89. The CFA verified the one-factor structure after allowing for the correlation of four pairs of error terms. In the multilevel model, HLHO-10 significantly predicted the adequacy of information as perceived by patients.
Conclusions

The instrument has satisfactory reliability and validity. It provides a useful tool to assess the degree to which health care organizations help patients to navigate, understand, and use information and services. Further validation should include participant observation in health care organizations and a sample that is not limited to breast cancer care.
Background

The issue of health literacy has gained increased attention ever since the U.S. Department of Education issued a report in 1993 showing that a significant portion of the country’s adult population may have literacy skills too limited to comprehend written information needed for daily activities, including health care. Early research on health literacy included the development of instruments to adequately assess individuals’ health literacy level [2,3], analyses of the effects of low health literacy on disease knowledge, health behaviors, and other health outcomes, and interventions to mitigate those adverse effects, e.g. [4,5]. By and large, research suggests that improving individual health literacy may not only produce better health outcomes [6,7] but also decrease health care costs [8-10], thereby freeing resources to improve health at the population level. More recently, efforts have shifted to developing short and group-specific instruments to assess health literacy [11-14] and to implementing measures that help individuals with specific health conditions or from especially vulnerable groups, e.g. [15-17].
In Europe, health literacy research has advanced rapidly through major projects such as the European Health Literacy Survey (HLS-EU), which involved eight EU member states, and through a research initiative by the German Federal Ministry of Health/National Cancer Plan. The HLS-EU study revealed that, as in the U.S., a large proportion of the population does not have adequate health literacy and that variation exists between the countries participating in the project. The HLS-EU project follows a broad definition of health literacy proposed by Sørensen and colleagues, which covers a wide range of aspects mentioned in earlier definitions. According to Sørensen et al.’s definition, health literacy is:
“linked to literacy and entails people’s knowledge, motivation and competences to access, understand, appraise, and apply health information in order to make judgments and take decisions in everyday life concerning healthcare, disease prevention and health promotion to maintain or improve quality of life during the life course [20: p.3].”
With the increase of research on individual health literacy, attention has also shifted to the specific context in which care is provided. The Healthy People 2010 health literacy action plan and the Institute of Medicine 2004 report [21,22] laid the foundation for the recent focus on care context. The U.S. National Action Plan to Improve Health Literacy advanced the perspective that context and individual skills both matter in shaping patient care. Specifically, Baker, Nutbeam, Rudd [26-29] and colleagues pointed out that individual health literacy is conditional and contextual: patients’ ability to understand medical information and navigate the care-seeking process depends on the demands that health delivery systems place on them, and the challenges each patient experiences in the care process can only be understood within the organizational context of care. Conversely, the specific organizational context in which care is provided, such as a physician practice or hospital, may help compensate for patients’ limited health literacy. Many health literacy-related interventions – such as using media other than written information to convey health information, designing plain-language information or consent forms, and equipping premises with easy-to-understand signposts and directories – are applicable in any health care organization [30,31]. However, such interventions require additional resources and have not yet been implemented in all settings. Besides helping patients navigate the health care organization, these measures themselves might contribute to increasing patients’ individual health literacy.
To advance research on care context and health literacy, Brach et al. proposed the concept of health literate health care organizations (HLHOs) to characterize and assess how health care organizations perform in dealing with patients’ health literacy issues. Referring to earlier work by the Institute of Medicine [22,33] and others (e.g. [24,27]; cf. [32] for an extensive list), they contended that health literacy research and interventions should take the demands of the health care system into account and endeavor to ‘tackle system-level factors’ [32: p.1]. They defined HLHOs as health care organizations that “make it easier for people to navigate, understand, and use information and services to take care of their health” [32: p.2]. Specifically, they proposed that HLHOs display the ten attributes listed below.
Ten attributes of a health literate health care organization [32: p.3]
1. Has leadership that makes health literacy integral to its mission, structure, and operations.
2. Integrates health literacy into planning, evaluation measures, patient safety, and quality improvement.
3. Prepares the workforce to be health literate and monitors progress.
4. Includes populations served in the design, implementation, and evaluation of health information and services.
5. Meets the needs of populations with a range of health literacy skills while avoiding stigmatization.
6. Uses health literacy strategies in interpersonal communications and confirms understanding at all points of contact.
7. Provides easy access to health information and services and navigation assistance.
8. Designs and distributes print, audiovisual, and social media content that is easy to understand and act on.
9. Addresses health literacy in high-risk situations, including care transitions and communications about medicines.
10. Communicates clearly what health plans cover and what individuals will have to pay for services.
To date, there is no instrument for measuring the degree to which these attributes are implemented in health care organizations. An adequate assessment of the attributes would need to reliably reflect the extent of their implementation and to demonstrate that the attributes are indeed linked to improved outcomes as perceived by patients.
This paper presents the development of an instrument for assessing the degree to which the 10 HLHO attributes are implemented in health care organizations. The instrument was psychometrically validated using data from surveys of key informants and patients in German breast care center hospitals. To qualify as a psychometrically valid scale, the 10 items we developed should reliably and consistently measure a common latent variable – i.e., HLHO (the property of unidimensionality). We assessed the reliability (internal consistency) and unidimensionality of the HLHO-10 measure and tested its criterion and predictive validity.
Methods

Data collection and samples
Data were collected as part of a larger study (‘Strengthening patient competence: Breast cancer patients’ information and training needs’ (“PIAT”)). The study was conducted in a sample of German breast cancer center hospitals. We included hospitals that were certified according to the criteria of the German Cancer Society and the German Society for Senology as of May 31, 2012. We excluded hospitals that took part in a mandatory patient survey conducted in one federal state (North Rhine-Westphalia) to avoid surveying patients twice. A total of 247 breast cancer center hospitals met these inclusion criteria. From these hospitals, we randomly selected 98 to participate in the study. Fifty-six (57%)^a of the 98 hospitals consented to participate voluntarily. The main reason for refusal was that the hospital was already participating in another patient survey.
We used self-administered questionnaires to collect data from hospital directors (or their proxies) – the key informant survey – and patients – the patient survey. The study protocol was approved by the Ethics Committee of the Medical Faculty of the University of Cologne.
Hospital key informant survey
The key informant survey was conducted between June 26 and August 31, 2013. We mailed the questionnaire to a representative (the director of the breast center or a designated proxy) of each of the 56 consenting hospitals and asked them to fill out and return the questionnaire. The questionnaire contained the items we designed to assess HLHO (described in detail in Section HLHO items) as well as questions regarding other structural and process characteristics of the hospital (e.g., size, teaching status, ownership). The survey was designed according to Dillman’s method – that is, we made three mail contact attempts, plus a final telephone reminder. Fifty-one of the 56 consenting hospitals returned the questionnaire (91%). These 51 hospitals make up the sample for analysis at the hospital level.
Patient survey

The patient survey was conducted in the 56 consenting hospitals between February 1 and August 31, 2013. Patients were included if they: (1) had undergone inpatient surgery for newly diagnosed breast cancer between February 1 and August 31, 2013; (2) had at least one malignancy; and (3) had at least one postoperative histological evaluation. Shortly after surgery and toward the end of their hospital stay, eligible patients were asked to give written consent to participate in the survey. Once they had consented, the questionnaire was handed to them to be filled out shortly before discharge. In addition to the survey responses given by patients, participating hospitals provided information on each surveyed patient’s disease and treatment characteristics (e.g., cancer stage, type of surgery). Patients were also followed with additional surveys at 10 and 40 weeks after discharge. For the purposes of this study, we used only data obtained from the first wave of the patient survey – the survey right before hospital discharge.
Of the 1,846 patients meeting the eligibility criteria, 1,543 consented to the study (83.6%). Of these, 1,359 returned the questionnaire before discharge (88.1%). Five responses were deemed unusable because they lacked the hospital identifier and could not be matched to the hospital data. Of the remaining respondents, 1,224 had been treated in one of the 51 hospitals responding to the key informant survey; these patients made up the patient sample for the study. Our analysis showed that patients in the study sample did not differ with regard to cancer stage, age, and education from patients treated in the five hospitals that did not return the key informant survey.
HLHO items

We employed a mixture of methods to develop items for assessing HLHO. First, we drafted a pool of provisional items based on the Brach et al. paper and earlier research, e.g. [21-29], a thorough review of the literature on health literacy and context as mentioned in the introduction, and a focus group in October 2012 with six representatives from different breast care center hospitals that discussed the role of hospitals in providing patients with adequate information and addressing poor individual literacy. Second, we held a workshop with employees of breast care center hospitals in January 2013 to discuss and select items that best reflected hospitals’ implementation of the 10 HLHO attributes defined in Brach et al. The workshop participants (N = 15) included quality managers (1), doctors (2), registered/specialist nurses (9), center coordinators (2), and self-help representatives (1). A consensus was reached that the final set of items should be parsimonious, easily understandable to respondents, and pertinent to the practices in German breast cancer center hospitals. The discussion at the workshop resulted in a draft of 10 items, with one item measuring each of the 10 HLHO attributes. Third, the 10 items were then reviewed by researchers from different disciplines (nursing, sociology, psychology, health economics), many of whom were trained in developing survey questions, to improve the wording and to ensure face validity.
Each of the final set of items (Table 1) was answered on a seven-point scale ranging from ‘not at all’ to ‘to a very large extent’. This scale format was chosen to reflect the continuous character of each of the 10 attributes and to avoid the agree/disagree format. Note that the order of items in Table 1 differs from the order of the HLHO attributes in Brach et al. (see list above). The third attribute (‘workforce’) is represented by the tenth item in Table 1: participants in the January 2013 workshop found the workforce question confusing in its original position, and the ensuing discussion suggested addressing the issue after the other HLHO attributes. Moreover, a short introduction was added, which included a brief (perhaps oversimplified) definition of health literacy to familiarize respondents with the concept and with the role of hospitals in promoting patient health literacy.
Other hospital characteristics
In order to identify patterns of HLHO implementation and to adjust for relevant hospital differences, the ‘conventional set’ of hospital structural characteristics was assessed in the hospital questionnaire, including teaching status (non-teaching vs. teaching), ownership status (public, charitable, for-profit), and patient volume (annual number of surgeries performed on newly diagnosed breast cancer patients).
Adequacy of information provided

To assess predictive validity, we used information from the patient survey to construct a variable measuring the adequacy of information provided by the breast care hospital. The variable was based on eight items asking patients about the perceived adequacy of the information they received from the hospital regarding: (1) breast cancer self-help groups; (2) psychological support programs; (3) rehabilitation possibilities; (4) the “patient guideline”, a brochure by the German Cancer Society and the German Cancer Aid; (5) obtaining a second opinion from another doctor; (6) dealing with side effects of treatment; (7) possible critical incidents that may occur at home; and (8) activities that should be avoided during treatment.
For each item, there were six possible answers (‘I received too little information’, ‘the information was exactly right for me’, ‘I received too much information’, ‘I was over-challenged with the information’, ‘I wasn’t offered information’, and ‘I didn’t want any information’). ‘The information was exactly right for me’ was coded 1; ‘I didn’t want any information’ was coded missing; and all other answers were coded 0. Exploratory factor analysis of the eight items (principal component extraction with varimax rotation) suggested two latent factors, whose initial eigenvalues were 3.4 and 1.2, respectively. Because the second factor had a small eigenvalue (barely larger than 1), we decided to use the average of responses to all eight items to represent the perceived adequacy of information provided by the hospital. The composite variable was constructed for patients with at least five valid (i.e. non-missing) answers. Cronbach’s alpha of the variable was 0.81, suggesting satisfactory internal reliability.
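The coding and aggregation rule above can be sketched as follows; the response labels and function names are illustrative, not the study’s actual code.

```python
import numpy as np

# Hypothetical recoding of one information-adequacy item:
# 'exactly right' -> 1, 'didn't want any information' -> missing,
# all other responses -> 0. Labels are illustrative only.
RECODE = {
    "exactly_right": 1.0,
    "too_little": 0.0,
    "too_much": 0.0,
    "over_challenged": 0.0,
    "not_offered": 0.0,
    "didnt_want": np.nan,  # treated as missing
}

def adequacy_score(answers):
    """Mean of the eight recoded items, computed only for patients
    with at least five valid (non-missing) answers."""
    coded = np.array([RECODE[a] for a in answers], dtype=float)
    if np.sum(~np.isnan(coded)) < 5:
        return np.nan
    return float(np.nanmean(coded))
```

Under this rule, a patient answering ‘exactly right’ on six items and ‘I didn’t want any information’ on two would receive a score of 1.0, since only valid answers enter the mean.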
Patient attributes may be related to different information demands. Our analysis accounted for patient age (categorized into younger than 40 years, 41 to 50, 51 to 60, 61 to 70, over 71), type of surgery (mastectomy vs. breast conserving treatment), UICC cancer stage (stages 0-I vs. stages II-IV) , and health literacy. Health literacy was assessed using the three ‘best performing’ items provided by Chew et al. [12,13], which have been widely used in surveys [40,41]. We used the mean value of the items for patients with at least two valid answers to represent health literacy (Cronbach’s α = .75).
Validity and reliability assessment
We imputed the missing values on the HLHO items using the expectation-maximization (EM) algorithm in the software NORM [42,43]. The EM algorithm estimates missing data using an iterative maximum-likelihood estimation procedure .
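As a rough illustration of what an EM-based imputation does, the following simplified sketch alternates between re-estimating the mean and covariance and filling each missing cell with its conditional mean given the observed values. It is a simplification (it omits the conditional-covariance adjustment of a full EM step) and is not the NORM implementation.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Simplified EM-style imputation under a multivariate-normal
    model: alternately re-estimate mean/covariance and replace
    missing cells with their conditional means."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    # Initialise missing cells with column means
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        # Small ridge keeps the observed block invertible
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m, o = miss[i], ~miss[i]
            if not m.any() or not o.any():
                continue
            # Conditional mean of missing entries given observed ones
            beta = cov[np.ix_(m, o)] @ np.linalg.inv(cov[np.ix_(o, o)])
            X[i, m] = mu[m] + beta @ (X[i, o] - mu[o])
    return X
```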
We employed classical measurement theory to validate the 10-item instrument. In classical measurement theory, the two key psychometric properties of an instrument are its reliability, defined as the extent to which the instrument produces consistent results, and its validity, the degree to which the instrument measures what it purports to measure. In assessing the psychometric properties of the instrument, we assumed the 10 items contributed to a total measure of the concept, Health Literate Healthcare Organization. On the basis of this assumption, the following four steps were taken to assess the reliability and validity of the instrument. First, we performed item analysis to examine the extent to which each item was correlated with the score of the total instrument. Each item’s relationship with the total score was assessed using the corrected item-total correlation. In addition, we calculated Cronbach’s alpha to examine the internal consistency of the items, or the degree to which hospital key informants answered the 10 items consistently.
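Both statistics from this first step can be computed directly from the response matrix; the sketch below (function names are our own) assumes an (n respondents × k items) array.

```python
import numpy as np

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

def cronbach_alpha(items):
    """Cronbach's alpha: internal consistency of the item set."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (
        1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1)
    )
```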
Second, we performed exploratory factor analysis to examine whether a dominant latent factor exists and confirmatory factor analysis to test the unidimensionality of the ten items. To assess the factor structure of the item set, a principal components analysis with varimax rotation was performed. The global fit of a one-factor model in confirmatory factor analysis was assessed using the following measures and criteria: a non-significant chi-square value (p > 0.05), a root mean square error of approximation (RMSEA) value of <0.08, and comparative fit index (CFI) and Tucker-Lewis Index (TLI) values of ≥0.90.
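The exploratory step boils down to inspecting the eigenvalues of the item correlation matrix: a first eigenvalue that dwarfs the rest points to one dominant factor. The sketch below illustrates only this eigenvalue check and is no substitute for the confirmatory model fitted in AMOS.

```python
import numpy as np

def component_eigenvalues(items):
    """Eigenvalues of the item correlation matrix, sorted in
    descending order (the basis of a principal components analysis)."""
    R = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]
```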
Third, we performed several bivariate tests to examine criterion validity and to identify patterns of HLHO implementation among German breast cancer center hospitals. We calculated Spearman’s rho to examine the correlation between HLHO-10 and hospital volume. After testing for normality (Shapiro-Wilk test) and homogeneity of variance (Levene statistic), we conducted a t-test and an ANOVA to examine differences in HLHO-10 by teaching and ownership status.
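Spearman’s rho is simply Pearson’s correlation applied to ranks. The minimal version below ignores tie correction (tied values would need midranks) and is meant only to illustrate the HLHO-10 versus patient-volume check.

```python
import numpy as np

def spearman_rho(x, y):
    """Rank correlation without tie handling (illustrative only)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return np.corrcoef(rank(x), rank(y))[0, 1]
```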
Finally, to assess predictive validity, we used a hierarchical linear model to test whether hospitals’ HLHO-10 score was positively associated with the perceived adequacy of information provided to patients. Both the HLHO-10 score and the adequacy of information variable were transformed into z-scores to facilitate interpretation. In performing the hierarchical linear analysis, we first fit the two-level model without predictors (null model) to calculate an intraclass correlation coefficient (ICC). The ICC of the null model represents the proportion of variance in the dependent variable attributable to the hospital level. Following this, both patient characteristics and HLHO-10 (grand-mean centered) were added to the model to test the association between HLHO-10 and the extent to which patients considered the information provided adequate. To account for hospital-level differences, hospital volume, teaching status, and ownership status were added to the model in a final step. Indicator variables for missing values on categorical variables were included in the models but omitted from the results presented below. Observations with missing values on continuous variables (i.e., health literacy, adequacy of information provided) were excluded from the analysis. IBM SPSS Statistics 22.0 was used for descriptive analysis, IBM SPSS AMOS 22.0 for structural equation modeling, and HLM 7 for the multilevel analysis.
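For the null model, the ICC is the between-hospital variance divided by the total variance. A one-way ANOVA estimator, sketched below, approximates what multilevel software estimates via maximum likelihood; it is an illustration, not the study’s HLM analysis.

```python
import numpy as np

def icc_null_model(y, groups):
    """ANOVA estimate of the intraclass correlation: share of the
    outcome variance attributable to the grouping (hospital) level."""
    y = np.asarray(y, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    k, n = len(labels), len(y)
    grand = y.mean()
    means = np.array([y[groups == g].mean() for g in labels])
    ss_between = np.sum(counts * (means - grand) ** 2)
    ss_within = np.sum((y - means[np.searchsorted(labels, groups)]) ** 2)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    n0 = (n - np.sum(counts ** 2) / n) / (k - 1)  # effective group size
    var_between = max((ms_between - ms_within) / n0, 0.0)
    return var_between / (var_between + ms_within)
```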
Results

Characteristics of the patients included in the analysis are presented in Table 2. The sample was split roughly evenly between patients with stage 0-I and stage II-IV cancer, and 78% received breast conservation treatment. Patients’ mean age was 59 years, with the largest group aged 51 to 60.
The items of the HLHO-10 instrument, their means, medians, skewness, minimum and maximum, corrected item-total correlation, and difficulty are displayed in Table 1. The ten HLHO items showed acceptable to good item-total correlations of between 0.44 and 0.83 and difficulties between 0.43 and 0.84. The Cronbach’s alpha of HLHO-10 was 0.89. Together, these findings suggest satisfactory internal reliability.
Exploratory factor analysis with varimax rotation revealed a two-factor structure. However, the first factor explained 51% of total variance, which was 4.25 times that of the variance explained by the second factor (eigenvalue = 1.2; 12% of total variance), suggesting the existence of a dominant latent factor. Confirmatory factor analysis of the one-factor model (Figure 1) showed an acceptable fit after allowing for the correlation of four pairs of error terms (‘leadership’/’integration’; ‘inclusion of the served’/’health literacy skills range’; ‘communication standards’/’provide access’; ‘inclusion of the served’/’media variety’), resulting in a model with χ2/df [39.477/31] = 1.27, RMSEA: .073; TLI: .952; CFI: .967 (Table 3). Taken together, these results verify the property of unidimensionality – that is, the 10 items contribute to a total measure of HLHO.
The Shapiro-Wilk test statistic was non-significant for the HLHO-10 score, suggesting normality (p = 0.638; df = 51). No difference in HLHO-10 was found by teaching status (t-test, p = 0.968) or ownership status (ANOVA, p = 0.512); neither was there a significant association between HLHO-10 and patient volume (Spearman’s rho, p = 0.361) (Table 4).
The ICC of the null model was 0.04. It decreased after adding HLHO-10 and the patient control variables (Table 5). HLHO-10 was significantly and positively associated with the dependent variable (p < 0.05), suggesting that greater implementation of the HLHO attributes was related to a better perception of information adequacy among breast cancer patients treated at the hospital. Among the patient characteristics, only individual health literacy was significantly associated with the perceived adequacy of the information provided. These associations persisted after the inclusion of control variables at the hospital level, none of which was statistically significant at p < 0.05. In additional analyses, further patient characteristics (including education, native language, partnership status, and type of health plan) were not significantly associated with the dependent variable.
Discussion

Overall, our analysis showed that the ten-item instrument we developed based on the Brach et al. framework had acceptable to good psychometric properties. Importantly, the multilevel model found a significant association between HLHO-10 and patients’ perceived adequacy of information provided by hospitals, adjusted for patient and hospital characteristics. In addition to verifying the instrument’s predictive validity, this result is indicative of the usefulness of the HLHO concept in explaining patient experience.
There are two basic ways to use HLHO-10. First, it can be used in research as a measurement tool to assess the extent to which hospitals, and possibly other healthcare organizations, are health-literate healthcare organizations and how well they deal with patients’ health literacy constraints. Second, healthcare organizations can use HLHO-10 as a self-assessment tool to identify areas that need improvement and to devise plans to improve their ability to address patients’ health literacy issues. Besides assessing patient perception of care, future research may try to assess the impact of organizational health literacy on objective changes in care delivery as well as on health outcomes.
A few caveats of the study need to be discussed. First, the Brach et al. paper describes the ten HLHO attributes in considerably more detail than they are operationalized here. Our decision to “simplify” the items was based on the goals of designing items that were relevant and understandable to hospital key informants and of selecting items that were actionable. Second, the items were developed within the setting of the German health care system and, more specifically, within the setting of breast cancer care. Breast cancer care in Germany is highly standardized, not only with regard to guideline adherence and staffing, but also with regard to the emphasis on patient-centeredness, which is acknowledged and emphasized in the certification processes [34,35]. This may explain why we did not find significant differences in HLHO by hospitals’ patient volume, ownership status, and teaching status. The fact that the items were developed in the specific context of the study also suggests that further work may be needed to test the utility of the instrument in other countries, other care domains, and other types of healthcare organizations. Third, the assessment of HLHO was conducted using data from a survey of hospital key informants – directors of breast cancer hospitals or their designated proxies. The assessment might differ from the viewpoints of other stakeholders. Further validation of the instrument would therefore be needed using, for example, participant observation or researchers assessing the HLHO attributes independently of hospital representatives. Fourth, the study sample consisted of 51 breast cancer hospitals. The small sample size may be another explanation for the lack of significant associations between HLHO and patient volume, hospital ownership, and teaching status. Further assessment of the instrument using larger and more diverse samples of healthcare organizations is warranted.
The differences in the extent to which the attributes are already implemented call for an analysis of facilitators of and barriers to change at the organizational level. A recent value-oriented investigation showed only marginal associations with culture change measures in U.S. nursing homes, and earlier attempts to track or systematically influence organizational change did not yield step-by-step guidelines. The conceptual development of HLHOs has so far resulted in a well-received discussion paper that launched a number of research initiatives. The ten attributes stated in the paper were meant to exemplify HLHOs, not to be exhaustive. When revising, adding, or prioritizing the attributes, it is preferable to emphasize the function of compensating for impaired individual health literacy – that is, to direct effort toward those most in need of health literacy-related support. The work we present in this paper represents one of the few current attempts to relate individual health literacy issues to efforts at the organizational level.
Conclusions

The HLHO-10 instrument provides a useful tool to assess the degree to which healthcare organizations focus on dealing with patients’ health literacy issues. Further validation should include alternative data collection methods (e.g., participant observation in healthcare organizations) and larger and more diverse samples.
^a Breast care centers may consist of one to four hospitals, with the majority of breast centers consisting of one. Analyses reported here are done at the hospital level, not the breast care center level.
References

U.S. Department of Education. Adult literacy in America: A first look at the findings of the national adult literacy survey. Washington, DC: NCES 93275; 1993.
Davies TC, Crouch MA, Long SW, Jackson RH, Bates P, George RB, et al. Rapid assessment of literacy levels of adult primary care patients. Fam Med. 1991;23:433–5.
Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: A new instrument for measuring patients’ literacy skills. J Gen Intern Med. 1995;10:537–41.
DeWalt DA, Berkman ND, Sheridan S, Lohr KN, Pignone MP. Literacy and health outcomes: A systematic review of the literature. J Gen Intern Med. 2004;19:1228–39.
Davis TC, Fredrickson DD, Arnold C, Murphy PW, Herbst M, Bocchini JA. A polio immunization pamphlet with increased appeal and simplified language does not improve comprehension to an acceptable level. Patient Educ Couns. 1998;33:25–37.
Cho YI, Lee S-YD, Arozullah AM, Crittenden KS. Effects of health literacy on health status and health service utilization amongst the elderly. Soc Sci Med. 2008;66:1809–16.
Peterson PN, Shetterly SM, Clarke CL, Bekelman DB, Chan PS, Allen LA, et al. Health literacy and outcomes among patients with heart failure. JAMA. 2011;305:1695–701.
Hardie NA, Kyanko K, Busch S, Losasso AT, Levin RA. Health literacy and health care spending and utilization in a consumer-driven health plan. J Health Commun. 2011;16:308–21.
Koh HK, Berwick DM, Clancy CM, Baur C, Brach C, Harris LM, et al. New federal policy initiatives to boost health literacy can help the nation move beyond the cycle of costly ‘crisis care’. Health Aff. 2012;31:434–43.
The Center for Health Care Strategies and National Academy on an Aging Society: Low Health Literacy Skills Increase Annual Health Care Expenditures by $73 Billion [http://www.agingsociety.org/agingsociety/publications/fact/fact_low.html]
Baker DW, Williams MV, Parker RM, Gazmararian JC, Nurss J. Development of a brief test to measure functional health literacy. Patient Educ Couns. 1999;38:33–42.
Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36:588–94.
Chew LD, Griffin JM, Partin MR, Noorbaloochi S, Grill JP, Snyder A, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23:561–6.
Powers BJ, Trinh JV, Bosworth HB. Can this patient read and understand written health information? JAMA. 2010;304:76–84.
Kandula NR, Nsiah-Kumi PA, Makoul G, Sager J, Zei CP, Glass S, et al. The relationship between health literacy and knowledge improvement after a multimedia type 2 diabetes education program. Patient Educ Couns. 2009;75:321–7.
Ratanawongsa N, Handley M, Quan J, Sarkar U, Pfeifer K, Soria C, et al. Quasi-experimental trial of diabetes self-management automated and real-time telephonic support (SMARTSteps) in a Medicaid managed care plan: Study protocol. BMC Health Serv Res. 2012;12:22.
Schillinger D, Piette J, Grumbach K, Wang F, Wilson C, Daher C, et al. Closing the loop: Physician communication with diabetic patients who have low health literacy. Arch Intern Med. 2003;163:83–90.
HLS-EU Consortium: Comparative Report of Health Literacy in Eight EU Member States, The European Health Literacy Survey HLS-EU [http://www.healthliteracy.ie/wp-content/uploads/2012/09/HLS-EU_report_Final_April_2012.pdf]
BMG (German Federal Ministry of Health): Overview of the funded research projects in the National Cancer Plan; 2013 [http://www.bmg.bund.de/fileadmin/dateien/Downloads/N/Nationaler_Krebsplan/Uebersicht_ueber_die_gefoerderten_Forschungsprojekte_im_NKP_korr.pdf]
Sørensen K, Van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, et al. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. 2012;12:80.
U.S. Department of Health and Human Services: Communicating health: Priorities and strategies for progress; 2003 [http://odphp.osophs.dhhs.gov/projects/HealthComm/]
Institute of Medicine. Health literacy: A prescription to end confusion. Washington, DC: The National Academies Press; 2004.
Office of Disease Prevention and Health Promotion: National action plan to improve health literacy; 2010 [http://www.health.gov/communication/HLActionPlan]
Baker DW. The meaning and the measure of health literacy. J Gen Intern Med. 2006;21:878–83.
Nutbeam D. The evolving concept of health literacy. Soc Sci Med. 2008;67:2072–8.
Rudd RE. Navigating hospitals: Literacy barriers. Literacy Harvest. 2004;11:19–24.
Rudd RE. Improving Americans’ Health Literacy. N Engl J Med. 2010;363:2283–5.
Rudd R, Anderson J: The health literacy environment of hospitals and health centers; 2006 [http://www.hsph.harvard.edu/healthliteracy/practice/environmental-barriers/]
Rudd RE, Renzulli D, Pereira A, Daltory L. Literacy demands in health care settings: The patient perspective. In: Schwartzberg JG, Van Geest JB, Wang CC, editors. Understanding health literacy: Implications for medicine and public health. Chicago: AMA Press; 2005.
Delp C, Jones J. Communicating information to patients: The use of cartoon illustrations to improve comprehension of instructions. Acad Emerg Med. 1996;3:264–70.
DeWalt DA, Callahan LF, Hawk VH, Broucksou KA, Hink A, Rudd R, Brach C: Health literacy universal precautions toolkit; 2010 [http://www.ahrq.gov/qual/literacy/healthliteracytoolkit.pdf]
Brach C, Keller D, Hernandez LM, Baur C, Parker R, Dreyer B, Schyve P, Lemerise AJ, Schillinger D: Ten attributes of health literate health care organizations. Discussion paper. Washington, DC: Institute of Medicine; 2012.
Institute of Medicine. Crossing the quality chasm. Washington, DC, USA: National Academy Press; 2001.
Deutsche Krebsgesellschaft & Deutsche Gesellschaft für Senologie: Fachliche Anforderungen für die Zertifizierung von Brustzentren durch die Deutsche Krebsgesellschaft und die Deutsche Gesellschaft für Senologie [Professional requirements for the certification of breast centers by the German Cancer Society and the German Society of Senology] [http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=8&ved=0CFwQFjAH&url=http%3A%2F%2Fwww.dggg.de%2Fdownloads%2F%3FeID%3Ddam_frontend_push%26docID%3D127&ei=Fj0rUIy-IKmJ0AXYy4CgDQ&usg=AFQjCNET6Og0Kwrk9VdH_FNQid324MrP6A]
ÄKZERT (2014): Verfahren zur Zertifizierung von Brustzentren in NRW, Version 2014 [Procedure for the certification of breast centers in North Rhine-Westphalia, version 2014] [http://www.mgepa.nrw.de/mediapool/pdf/gesundheit/Anforderungskatalog_Brustzentren_NRW_2014.pdf]
Dillman DA. Mail and telephone surveys: The total design method. New York, USA: Wiley & Sons; 1978.
Saris WE, Revilla M, Krosnick JA, Schaeffer EM. Comparing questions with agree/disagree response options to questions with item-specific response options. Surv Res Methods. 2010;4:61–79.
Hearld LR, Alexander JA, Fraser I, Jiang HJ. How do hospital organizational structure and processes affect quality of care? A critical review of research methods. Med Care Res Rev. 2008;65:259–99.
Sobin LH, Wittekind C. UICC TNM Classification of malignant tumors. 6th ed. New York, USA: John Wiley & Sons; 2002.
Douma KFL, Koning CCE, Zandbelt LC, De Haes HCJM, Smets EMA. Do patients’ information needs decrease over the course of radiotherapy? Support Care Cancer. 2012;20:2167–76.
Hawley ST, Janz NK, Lillie SE, Friese CR, Griggs JJ, Graff JJ, et al. Perceptions of care coordination in a population-based sample of diverse breast cancer patients. Patient Educ Couns. 2010;81:S34–40.
NORM: Multiple imputation of incomplete multivariate data under a normal model (Version 2) [Software] (1999). University Park: The Methodology Center, Penn State. [http://methodology.psu.edu/pubs/books/missing]
Schafer JL, Graham JW. Missing data: Our view of the state of the art. Psychol Methods. 2002;7:147–77.
Graham JW, Cumsille PE, Elek-Fisk E. Methods for handling missing data. In: Schinka JA, Velicer WF, editors. Research Methods in Psychology. New York: Wiley; 2003. p. 87–114.
Hair JF. Multivariate data analysis. 6th ed. Upper Saddle River, New Jersey: Pearson/Prentice Hall; 2006.
Hox JJ. Multilevel Analysis: techniques and applications. New York: Routledge; 2010.
Smith ER. Research Design. In: Reis HT, Judd CM, editors. Handbook of Research Methods in Social and Personality Psychology. Cambridge: Cambridge University Press; 2000. p. 17–39.
Banaszak-Holl J, Castle NG, Lin M, Spreitzer G. An assessment of cultural values and resident-centered culture change in U.S. nursing facilities. Health Care Manage Rev. 2013;38:295–305.
Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4:67.
We would like to thank the patients who participated in the survey and the breast cancer centers supporting this study, as well as the two reviewers for their valuable input. We confirm that all patient identifiers have been removed or disguised so that the patients described are not identifiable. Thanks to Axel Deters, Isabelle Schellartz, and Markus Alich, care research lab manager at the IMVR, who assisted in patient and key informant data collection. The study was funded by the German Federal Ministry of Health and by KölnFortune. The hospitals provided patients’ addresses and clinical information, as reported. Neither the funding institutions nor the hospitals were involved in the analysis and interpretation of results or in the preparation of this manuscript. The writing was done during the first author’s stay at the University of Michigan, which was financed by the German Research Foundation (grant number Ko 3856/2).
The authors declare that they have no competing interests.
CK and SYDL planned the data analysis and drafted the manuscript. CK, SYDL, and MAW analyzed the data. SW, NE, CK, AS, and HP developed the study framework and the patient and key informant surveys. NE and AS conducted the pretests. AS additionally coordinated data collection and recruited hospitals and interviewees. All authors read and approved the final manuscript.