- Research article
- Open Access
The Health Literacy Questionnaire (HLQ) at the patient-clinician interface: a qualitative study of what patients and clinicians mean by their HLQ scores
BMC Health Services Research volume 17, Article number: 309 (2017)
The Health Literacy Questionnaire (HLQ) has nine scales that each measure an aspect of the multidimensional construct of health literacy. All scales have good psychometric properties. However, it is the interpretations of data within contexts that must be proven valid, not just the psychometric properties of a measurement instrument. The purpose of this study was to establish the extent of concordance and discordance between individual patient and clinician interpretations of HLQ data in the context of complex case management.
Sixteen patients with complex needs completed the HLQ and were interviewed to discuss the reasons for their answers. Also, the clinicians of each of these patients completed the HLQ about their patient, and were interviewed to discuss the reasons for their answers. Thematic analysis of HLQ scores and interview data determined the extent of concordance between patient and clinician HLQ responses, and the reasons for discordance.
Highest concordance (80%) between patient and clinician item-response pairs was seen in Scale 1 and highest discordance (56%) was seen in Scale 6. Four themes were identified to explain discordance: 1) Technical or literal meaning of specific words; 2) Patients’ changing or evolving circumstances; 3) Different expectations and criteria for assigning HLQ scores; and 4) Different perspectives about a patient’s reliance on healthcare providers.
This study shows that the HLQ can act as an adjunct to clinical practice to help clinicians understand a patient’s health literacy challenges and strengths early in a clinical encounter. Importantly, clinicians can use the HLQ to detect differences between their own perspectives about a patient’s health literacy and the patient’s perspective, and to initiate discussion to explore this. Provision of training to better detect these differences may assist clinicians to provide improved care.
The outcomes of this study contribute to the growing body of international validation evidence about the use of the HLQ in different contexts. More specifically, this study has shown that the HLQ has measurement veracity at the patient and clinician level and may support clinicians to understand patients’ health literacy and enable a deeper engagement with healthcare services.
Data derived from patient-reported outcome measures (PROMs) affect care decisions for individual patients through to decisions about nationwide health plans. Data are used to justify, endorse or exclude treatments, interventions and policies. Such responsibility requires the measurement tool and its data to be valid for the purpose [1, 2]. Meaning ascribed to data must be representative of the constructs the tool purports to measure, and the consequences of that interpretation must be valid for the intended purpose [2,3,4,5,6,7]. This means that validation of the data generated by a measurement tool is required for each new context in which it is used [2, 8].
In addition to rigorous psychometric testing during the construction and initial validation of a questionnaire, it is incumbent on researchers and decision-makers to demonstrate that the inferences made from questionnaire data are an acceptable representation of respondents’ real world situations within particular contexts. Whether measurement is to occur at the population level or at the individual level, or both, it is critical that the items measure what they intend to measure in all settings in which the questionnaire is applied. Construct validity relies on a questionnaire measuring what it purports to measure in all relevant contexts and, consequently, that the measurement of a particular construct can occur systematically among groups and settings [9,10,11].
Adamson and Gooberman-Hill explored the meanings and interpretations behind people’s responses to commonly applied questions and questionnaires by eliciting narrative data from participants as they completed a questionnaire or set of questions. The narrative data revealed definitions and meanings of words and phrases in the questions that were different from the intention of the items. The study demonstrated that items are often not clear, precise and brief, and that double or ambiguous meanings can be embedded within an item. Validity relies on respondents’ collective understanding of items and the associated response options, and consequently that respondents with similar characteristics, in relation to the construct being measured, will systematically respond to items in the same way. This also suggests that there will be idiosyncratic variations in the interpretation of individual questions by individual people and that, even when aggregate data can be safely interpreted for a population, considerations must be applied when interpreting and making decisions based on scores from individuals. Many PROMs, including recent health literacy PROMs, have been designed and tested for use at the population level, and have not been tested for use with individual patients [13, 14].
Measurement of health literacy has proved complex because it is a multidimensional concept and definitions for it have evolved over many years [15, 16]. The World Health Organization definition of health literacy was used for the development of the Health Literacy Questionnaire (HLQ): health literacy ‘…is the cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand and use information in ways which promote and maintain good health’. While the purpose of this definition is to convey the broad meaning of the concept to researchers, practitioners, policymakers and others, it is not a concept that is easy to capture and measure at the individual person level. Consequently, development of the HLQ used a validity-driven approach [1, 14] with extensive patient engagement, including during the conceptual development of constructs and items, and for the cognitive testing of items.
The HLQ was designed using a grounded, validity-driven approach, was initially tested in diverse samples of individuals in Australian communities, and has been shown to have strong construct validity, reliability and acceptability to clients and clinicians [14, 18, 19]. The HLQ measures nine independent domains of health literacy to capture the lived experiences of people attempting to understand, access and use health information and health services. The scales generate profiles of individuals, groups and populations. Importantly, the data also reflect the quality of health and social service provision. Service providers can use the profiles to better understand the needs of communities, and to assist with planning, designing and evaluating interventions. The HLQ was designed for self-administration using pen and paper, and can also be interviewer-administered to ensure inclusion of people who cannot read or have other difficulties with self-administration.
The HLQ is used in many countries and in many settings, including for population health surveys, development of interventions, and evaluation of health programs [21, 22]. Validation of the interpretation of data for an intended purpose is recommended for each new setting [1, 2]. Osborne et al. support a validity-driven approach to the validation of the data derived from measurement tools, stating that the HLQ is ‘now ready for further testing and validation of the interpretations of each scale’s data in the intended application settings; that is, applications in specific demographic groups, within health promotion, public health and clinical interventions, and in population health surveys’ (p.13). If individual patient HLQ data are to be interpreted and used by clinicians to make decisions about treatment for those patients, then validation of patient and clinician interpretations of HLQ data must be undertaken.
The purpose of this study was to establish the extent of concordance and discordance between patient and case manager (clinician) HLQ scores and the corresponding interview narratives (interpretations of those scores) across the nine independent HLQ scales, and to identify the reasons for discordance. To do this, the study examined interpretations of HLQ item scores in a setting with individual patients who had chronic and complex health conditions, who were participating in intensive case management, and whom their clinician thought likely to have low health literacy. Both the patient and their clinician completed the HLQ and were interviewed, and the data were compared. If some systematic discordance exists between patient and clinician interpretations of HLQ scores, and this is known, then clinicians will be able to use HLQ data in a more informed way in support of clinical practice.
The study sought to answer the following research questions:
What do patients really mean by their HLQ scores? That is, how well do patients’ HLQ scores match their interview narrative data?
What is the extent of concordance between patients’ HLQ scores and narratives and their clinician’s HLQ scores and narratives about the patients, and what are the reasons for discordance?
The first of these questions directly addresses validation of HLQ data for individual patients (not populations) within a chronic and complex care context, and contributes to the ongoing development of the web of evidence about the HLQ and its clinical and public health utility. The second question addresses the concordance of patients’ perspectives with their clinicians’ perspectives to determine the utility of the HLQ as a tool to inform clinicians about their patients’ health literacy needs, and to facilitate discussions with patients when HLQ scores differ from clinicians’ expectations.
A qualitative design using HLQ scores and semi-structured interviews was employed so that interview narratives revealed patient and clinician experiences and reasons behind why they chose particular HLQ scores. Patient and clinician data were assessed for match between HLQ scores and corresponding interview narratives, and then for concordance and discordance between patient and clinician score/narrative responses. Patient and clinician data were analysed thematically across HLQ scales to determine the extent of concordance between patient and clinician HLQ responses (scores and narratives), and the reasons for discordance.
The study was conducted at a large regional Australian public health service, Barwon Health, which comprises a range of community care services and a major teaching hospital. Staff and patients were recruited from the organisation’s Hospital Admission Risk Program (HARP), an intensive case management service to support people who have complex and chronic conditions and/or frequently attend emergency departments. In this service, clinicians come to know their patients very well, including their personal and domestic situations, through home visits and attending medical appointments with them.
A priority for this study was to include individuals who might have low health literacy, a group often overlooked in research projects, usually because they are difficult to engage. This is often the case for clients assigned to the HARP service and was the reason this site was chosen for recruitment. People with higher health literacy are more likely to be well educated and competent in accessing health care and in answering questionnaires, and are likely to more strongly endorse the items of the HLQ (i.e., answer Strongly Agree and Very Easy). To maximise the likelihood that this study would rigorously explore the depth and breadth of the HLQ constructs – and therefore test the validity of the HLQ data in this individual patient context – all existing patients of the participating HARP clinicians who met the criteria were recruited to the study. A high response rate from this group of patients was not expected so, as HLQs were returned, all who met the criteria were included.
HARP clinicians were specifically requested, based on their extensive knowledge of their clients, to deliberately include clients who they thought may have health literacy difficulties. Inclusion criteria for participants included engagement for four or more months in HARP case management and care coordination, a comprehensive HARP assessment, and at least six contacts with the HARP clinician. These criteria maximise the opportunity for the clinician to get to know the patient well and, as such, to respond to HLQ items about a patient in a way that reflects that patient’s health context. This was a way of confirming patients’ HLQ responses in the absence of external data about the patients’ actual lived experiences. Patients were invited to participate in the study by their HARP clinician. The professions of the clinicians included nursing, social work and dietetics.
The project was approved by the Human Research Ethics Committees of Barwon Health (ID: 11/85) and Deakin University (ID: 2011-077).
Consenting patients either self-completed the HLQ or were assisted by a friend, relative or carer (but not their HARP clinician). Demographic and health data were also collected from the patients. Clinicians were asked to complete the HLQ about their patient in two ways: first, from their own perceptions of the patient’s health literacy status and, second, how they think their patient would respond to the items. This paper reports only on the comparison of patient scores with the first set of scores from each clinician’s perspective of their patient’s health literacy.
Semi-structured telephone interviews were conducted by authors MH or SG. Most interviews were conducted between 3 and 8 weeks after an HLQ was completed by the patient. Interviews consisted of reading HLQ questions to patients and clinicians, reminding them of the answer they had given to that item, then prompting with questions such as ‘Can you tell me why you chose that answer?’ and ‘What were you thinking about when you selected that answer?’. The interviewers did not inform clinicians of their patient’s scores during the clinician interviews.
Development and validation of the HLQ is described elsewhere. The development and validation study showed the HLQ has strong construct validity, reliability and acceptability to clients and clinicians [14, 18, 19]. The original scale reliability estimates ranged from 0.77 to 0.90, and were reproduced in a more diverse replication sample with estimates ranging from 0.80 to 0.89. The a priori 9-factor structure was confirmed in both the original development study and the replication study. Detailed analysis of the relationships between the health literacy scales and socioeconomic position in vulnerable groups demonstrated expected small to large associations with key demographic factors. Table 1 displays the high and low descriptors and psychometric properties for each of the nine HLQ scales. Each of the nine scales comprises between 4 and 6 items (44 items in total). Each item has a corresponding description of the meaning and intent of the item, which supports the purpose and positioning of the item within the scale. Items are scored from 1 to 4 in the first 5 scales (Strongly Disagree, Disagree, Agree, Strongly Agree), and from 1 to 5 in scales 6-9 (Cannot Do, Very Difficult, Quite Difficult, Easy, Very Easy). HLQ validation and testing included extensive cognitive testing to confirm the items were understood as intended.
In this study, a ‘patient-clinician dyad’ refers to a patient and that patient’s clinician. To reduce interviews to a manageable length, dyads were administered subsets of the nine scales. Dyads were alternately assigned to each group as completed HLQs were received. Group 1 consisted of Scales 1, 2, 3 (Disagree/ Agree response options), 6 and 7 (Difficult/ Easy response options). Group 2 consisted of Scales 4, 5 (Disagree/ Agree response options), 8 and 9 (Difficult/ Easy response options). Data were collected from 9 dyads for Group 1 and 7 dyads for Group 2. A ‘patient-clinician item-response pair’ refers to a patient’s HLQ score and interview narrative paired with the corresponding clinician’s HLQ score and interview narrative for one HLQ item. For reporting purposes, patients and clinicians are identified with a P or C, respectively, and their study number (for example, P101 and C101).
Data analysis was two-fold: 1) determine if interview narrative data were consistent with patients’ and clinicians’ HLQ scores (and if the narrative reflected the intent of the items); and 2) determine the extent of concordance (and discordance) between patient HLQ scores and narratives and clinician HLQ scores and narratives (that is, the extent of concordance within patient-clinician item-response pairs).
In the first step, patient and clinician data were examined separately. To assist researchers’ understanding of items and scales, and to guide the linguistic and cultural adaption of items to other languages and cultures, a short description of each item has been written to explain what the item intends to convey (and not to convey). These item intents are part of the HLQ support documentation. In the current study, the first step for both patient and clinician data was to compare an HLQ score (e.g., Agree or Always Easy to do) with the corresponding narrative to assess if the narrative made sense in light of the score (i.e., if it matched the score) and the item intent. For example, if a score was that a task was ‘Always Easy’ then the narrative was examined for confirmation that the respondent agreed with this score and/or a description of how or why it was always easy to do. A score and narrative were considered a match if the narrative indicated that the respondent agreed with the score they had assigned to an item, and the interview narrative matched the intent of the item. Accordingly, a score and narrative did not match if the narrative did not provide a statement that clearly demonstrated support for the score. Although this analysis was conducted on both patient and clinician data, only patient data from this step were required to answer the first research question. Clinician data were examined only to confirm match for the purposes of answering the second research question.
For the second step, patient HLQ scores and interview narratives were compared with their clinician’s HLQ scores and interview narratives (for each item) to determine the extent of concordance within patient-clinician item-response pairs across items within each HLQ scale. There were three ways that these data were categorised: 1) concordant, 2) discordant, or 3) unclear (that is, concordance or discordance could not be assigned to a patient-clinician pair because the patient or the clinician narrative did not match their corresponding score, or the patient or clinician changed their score during interview). Descriptions of the requirements for these categories are in Table 2.
Each HLQ scale comprised between 4 and 6 items with data collected for 7 or 9 dyads per scale (i.e., from 35 to 63 patient-clinician item-response pairs across the 9 scales), such that there was a total of 408 item-response pair interactions. Two researchers (MH and SG) independently examined all HLQ scores and corresponding narrative data and then sought consensus, including specific reasons for concordance, discordance, and unclear responses. Data were then reanalysed to confirm boundaries and categories for concordance, discordance, and unclear pairs. Analysis of interview narratives included initial coding of narratives for match with corresponding HLQ scores and for reasons why a score was chosen; categorisation of narratives to determine common reasons for choice of scores within scales; and then thematic analysis of these categories across patient-clinician item-response pairs for common themes for discordance across scales [25, 26].
Patient and clinician HLQ scores located on the same side of the response option scale (e.g., Cannot Do and Quite Difficult, or Agree and Strongly Agree) were classified as concordant, whereas score pairs located at opposing ends of the response option scale (e.g., Disagree and Agree) were classified as discordant.
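The side-of-scale rule described above can be sketched in code. This is a minimal illustration only, not part of the study’s analysis; the placement of the middle option of the 5-point scales (‘Quite Difficult’) on the ‘difficult’ side is assumed from the paper’s example of ‘Cannot Do’ and ‘Quite Difficult’ being classified as concordant.

```python
# Sketch of the concordance classification rule for patient-clinician
# item-response pairs. Scales 1-5 use a 4-point agreement scale; scales
# 6-9 use a 5-point difficulty scale (see the scoring description above).

AGREE_SIDE = {1: "disagree", 2: "disagree", 3: "agree", 4: "agree"}  # scales 1-5
EASE_SIDE = {1: "difficult", 2: "difficult", 3: "difficult",         # scales 6-9
             4: "easy", 5: "easy"}                                   # midpoint grouping assumed


def side(score: int, scale_number: int) -> str:
    """Return which side of the response-option scale a score falls on."""
    mapping = AGREE_SIDE if scale_number <= 5 else EASE_SIDE
    return mapping[score]


def classify_pair(patient_score: int, clinician_score: int, scale_number: int) -> str:
    """Classify a score pair as concordant or discordant.

    The study's third category, 'unclear', depends on whether the interview
    narrative matched the score, so it cannot be derived from scores alone.
    """
    if side(patient_score, scale_number) == side(clinician_score, scale_number):
        return "concordant"
    return "discordant"


# Example from the text: 'Cannot Do' (1) and 'Quite Difficult' (3) on scale 6
print(classify_pair(1, 3, 6))  # concordant
# 'Disagree' (2) vs 'Agree' (3) on scale 2
print(classify_pair(2, 3, 2))  # discordant
```

A narrative-matching step, as in the study, would then reassign a pair to ‘unclear’ whenever either respondent’s narrative failed to support their score or a score was changed during interview.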
Forty-five HLQs were distributed to HARP patients, of which 22 were returned, and full consent was received from 20 of those. Interviews were conducted with 18 patients because 2 were subsequently unable to be contacted. Two patients were particularly difficult to contact and were interviewed 12 weeks (P114) and 21 weeks (P104) after returning their HLQs. HARP clinicians needed to facilitate the contact between these patients and the researchers, with one patient preferring to be interviewed face-to-face. Nine clinicians were interviewed, each of whom was responsible for between 1 and 4 patients. Overall, both HLQ scores and narrative data were collected for 16 patient-clinician dyads.
Demographic characteristics for patients are shown in Table 3. The median age of the 16 patients was 43 years (range 18-77; SD 18) with 11 people under 55 years. There were 10 females, 7 participants did not complete high school, 13 lived alone, 15 spoke English at home, 13 were born in Australia, and 6 had four or more chronic conditions.
The majority of the 38 unclear patient-clinician item-response pairs were because clinicians changed their scores during the interview (13 changes across 6 clinicians) with this followed closely by patient narratives that did not support the HLQ scores (12 non-matches across 6 patients). There were 9 instances (also across 6 patients) when patients changed their scores during the interview (only 1 of these patients had also provided a narrative that did not match the score). There were 4 instances (across 3 clinicians) when a clinician’s narrative did not support the HLQ (2 of these clinicians also changed a score during interview).
Given that some clinicians completed HLQs and were interviewed about more than one patient, it was possible that the data may have revealed clinician response patterns. However, systematic assessment of the data from each clinician could find no evidence of response patterns for any one clinician.
1. What do patients really mean by their HLQ scores? That is, how well do patients’ HLQ scores match their narrative data? (Patient data only)
Overall and across scales, patient interview narratives gave clear reasons to support the chosen response options, and these reasons reflected the intention of the HLQ items. Table 4 shows the match between patient scores and narratives for items across the nine HLQ scales.
Two patients exhibited some difficulty with some items. P114 had several co-morbidities, exhibited confusion during the interview, and had difficulty concentrating on items and providing answers. P115 changed her responses for 4 of the 5 items in scale ‘8. Ability to find good health information’ from the ‘Difficult’ end of the response options scale to the ‘Easy’ end. She seemed unsure as to why she had originally answered that these tasks were difficult. These two participants contributed to scale ‘8. Ability to find good health information’ having the lowest match between patient scores and narratives (30 of the 35 responses [7 patients x 5 items] for that scale), but still high at 86%.
For scales ‘4. Social support for health’ and ‘6. Ability to actively engage with healthcare providers’, all patient narratives clearly supported the corresponding HLQ scores. There were no unclear narratives, no opposing narratives, and no patients changed their answers during the interviews.
2. To what extent are patients’ HLQ scores concordant with those provided by their clinician, and what are the reasons for discordance? (Patient and clinician data)
The number of concordant, discordant and unclear patient-clinician item-response pairs across HLQ scales is shown in Table 4.
Highest concordance between patient and clinician item-response pairs was seen in ‘1. Feeling understood and supported by healthcare providers’ (80%). Highest discordance (56%) was seen in ‘6. Ability to actively engage with healthcare providers’. Lowest concordance (given the unclear category) was 40% for ‘9. Understand health information well enough to know what to do’, closely followed by ‘6. Ability to actively engage with healthcare providers’ (42%) and ‘8. Ability to find good health information’ (43%). Three scales had 8 unclear patient-clinician item-response pairs: ‘7. Navigating the healthcare system’, ‘8. Ability to find good health information’ and ‘9. Understand health information well enough to know what to do’.
Concordance means that both patients and clinicians perceived that the patient had or did not have resources or skills (e.g., was able to form relationships), or could or could not do certain tasks (e.g., fill in medical forms). That is, both respondents scored (with narratives supporting this score) on the same side of the response options scale. In the following example, both patient and clinician scored Agree in response to an item about her relationships with healthcare providers, and their narratives support their scores. P108 (HLQ response option selected = Agree) said my GP for instance has phoned me at home and followed up on a couple of things and actually saved my life once by doing so, so I trust her. Her clinician C108 (HLQ response option selected = Agree) said I’ve been to the GP with this client and she has a long relationship with the GP and a fond relationship with the GP. See Table 5 for more examples of concordance.
Four main themes were identified for discordance between patient and clinician data across HLQ items.
Technical or literal meaning of specific words
Patients’ changing or evolving circumstances
Different expectations and criteria for assigning HLQ scores
Different perspectives about a patient’s reliance on healthcare providers
Some examples of these themes are presented in the results. See Table 6 for further examples.
Theme 1. Technical or literal meaning of specific words
In some cases, discordance related to specific words such as ‘sure’, ‘all’ and ‘plenty’. Patients did not comment on these words specifically. Clinicians, however, when thinking about a patient, sometimes read these words in a literal sense. While patient P103 (Agree) said he had all the information he needed (‘2. Having sufficient information to manage my health’), clinician C103 rated the item as Disagree explaining: I guess it was in regards to the wording of ‘being sure’; it’s an absolute sort of word and so I think that is why I’ve done that again because of being 100% sure about something. I’m not sure that he might have all the information he needs. A second example shows how clinicians noticed the qualifier words and adjusted their responses accordingly. P113 disagreed with an item that asked about having plenty of people to rely on (‘4. Social support for health’), but his clinician (C113, Agree) stated I wouldn’t say ‘plenty’ but the ones he’s got would be very reliable if he needs help.
Theme 2. Patients’ changing or evolving circumstances
Theme 2 is about patients who are learning to trust new healthcare providers and learning, over a period of time, to understand their own health conditions. This theme was categorised separately from themes 3 and 4 because of the specific context of patients’ relationships and understanding about their health being in a state of flux. Themes 3 and 4 are related to more stable and ongoing health contexts, and established and ongoing relationships with and reliance on healthcare providers.
In ‘1. Feeling understood and supported by healthcare providers’, patient-clinician perspectives differed around trusting healthcare providers when relationships with healthcare providers were new, evolving or changing. P112 described how she was recently establishing new relationships with healthcare providers, was learning to trust them and discuss her health with them: I’ve only over the last year got certain, I suppose you could say ‘go-to people’ for my healthcare needs…I don't have anybody to discuss specific issues with…I'm finding people that I can trust with my health issues as well, because I've had a lot issues with that in the past, finding people that I can trust to deal with my health issues (P112, Disagree). C112 scored Agree and, referring to these recently forming relationships with healthcare providers, explained: Yes, she does have a healthcare person that she can speak with; whether she does or not is another matter.
Some patients reported that their knowledge and understanding about their health was evolving (often because of previous lack of access to health information and care) and that they did not yet know all they would eventually know. In ‘2. Having sufficient information to manage my health’, P101 (Disagree) stated: I don’t think I’ve got enough information at all. C101 (Agree) said the patient had the information but because of ambivalence and some medication issues she didn’t deal with it well.
Theme 3. Different expectations and criteria for assigning HLQ scores
This theme encompasses four overlapping sub-themes that reflect differences between patients and clinicians when it comes to assigning scores to the way patients respond to the provision of health information and services or health support: a) Action is a more important criterion for clinicians than for patients; b) Patients don’t always know what they don’t know; c) There are different points of comparison (providers compare across patients, patients compare across providers); and d) There are different expectations for support when ill.
Sub-theme 3a) Action is a more important criterion for clinicians than for patients
Clinicians tended to expect to see patients take action to improve their health and often applied this criterion when answering the HLQ items. For example, although patients may have information about their health, clinicians sometimes determined that they didn’t always have the capacity to understand, retain or, in particular, use or act on the information they received. In ‘2. Having sufficient information to manage my health’, P103 (Strongly Agree) felt he had good information about his health because he could talk with his GP, ask questions and get the answers, and check books and the Internet. His clinician’s perspective (Disagree) was that although he had access to good information, he doesn’t take it on board, he doesn’t act on it, which indicated that she felt he only had good information if he used it to improve his health.
Discordance in ‘9. Understand health information well enough to know what to do’ was due to different expectations about patients’ abilities to understand and, from the clinicians’ perspectives, comply with health instructions and information (P115). In ‘3. Actively managing my health’, discordance was about the extent to which setting a goal, or making plans to be healthy, was seen by patients as actively managing their health, yet clinicians wanted to see patients actively carrying out the goal or plan (P105).
Sub-theme 3b) Patients don’t always know what they don’t know
Discordance in ‘6. Ability to actively engage with healthcare providers’ emerged when clinicians expected patients to manage interactions with healthcare providers differently from the way they often did. The clinicians sometimes attended medical appointments with their patients, and reported that their patients didn’t always know about gaps in their knowledge and so they didn’t know what to ask healthcare providers. In response to an item about asking healthcare providers questions to get information, P110 (Very Easy) said that she asks healthcare providers to explain information in plain language until she understands it and that this sometimes takes time. Her clinician C110 (Quite Difficult) said she wouldn’t be able to instigate the questioning because she doesn’t know what she doesn’t know. Although patients tended to say it was easy to discuss things with their doctors, the clinicians said that although patients might have a friendly chat with their doctors, they did not ask questions (P112, P116), did not always understand their health issues and did not leave the consultation with useful information about their health (P103, P113).
Sub-theme 3c) There are different points of comparison (providers compare across patients, patients compare across providers)
In ‘7. Navigating the healthcare system’, when asked about finding the ‘right’ or ‘best’ care, P114 (Quite Difficult) compared having many healthcare professionals with his preference to have one who got to know him well: I get a dozen of them [health professionals] in the week and they are all different and not the same one all the time and it is very hard to understand them. If it was the same person coming all the time then you get to know them and understand everything and they would understand the situation. The clinical perspective of C114 (Quite Easy) was that, compared with other patients, this patient’s complex healthcare needs required a range of healthcare professionals to attend him: He has the right healthcare because of the severity of his healthcare needs. He doesn’t fall through the gaps. He just has to get to his appointments.
Sub-theme 3d) There are different expectations for support when ill
Discordance in ‘4. Social support for health’ revealed different expectations between patients and clinicians for the level of social support and understanding thought reasonable to expect and that could be considered good support (P102, P107, P111). P113 (Strongly Disagree) said Yeah. It’s hard for family members or anybody to understand that unless they are really in the same situation or have really studied the illness…I find it very, very hard for anybody else to understand the same thing that I’m going through. C113 (Agree) could see that the family tried to understand his situation: I think so. I think they try. His family. I’ve only seen him really, really sick a couple of times and they have been very supportive.
Theme 4. Different perspectives about a patient’s reliance on healthcare providers
Discordance in this theme was centred on patients and clinicians knowing that patients relied on their healthcare providers to provide and explain information and treatment to them. Patients considered this as knowing where to get health information, being able to appraise health information, and knowing what to do and where to go. The clinicians’ perspective was that patients could not do any of this without the help of a healthcare provider.
In ‘7. Navigating the healthcare system’, patients relied on healthcare providers to tell them what to do and which services to use, and their clinicians knew this (P116). In ‘8. Ability to find good health information’, discordance was due to patients seeking or relying on receiving health information from their known and trusted healthcare providers, with clinicians knowing that they would not search further afield (P104).
Patient narratives in ‘5. Appraisal of health information’ usually explained that they accepted what their healthcare providers told them about health information or that, if they had questions, they asked their trusted healthcare providers. The clinicians’ responses were mostly that the patients could not appraise information by themselves and that they either didn’t do it or needed help to do it. P111 (Strongly Agree) said I just believe in my GP and specialist. I’m not sure if the information is correct or not. I don’t have a way to look up if the information is correct or not. C111 (Disagree) said that this patient wouldn’t know how to check if information was right for her or not.
It is incumbent on researchers to demonstrate that the measurement tools they create and use are accurate and fit for their intended purpose [1, 2]. In this study, we worked with people who were disadvantaged and living with complex medical and social situations, many with low education. When asked what they meant by the way they scored the HLQ questions, patients’ narratives matched the intent of items in the majority of cases. These data have important implications for health workers applying the questionnaire in settings where respondents have low socioeconomic position and/or high comorbidity of disease and possibly low health literacy. Alongside robust psychometric studies, these qualitative data provide further evidence that the HLQ items and constructs are understood as intended. With this face and construct validity confirmed, researchers, policymakers and funders can have confidence in decisions about projects and programs generated from HLQ data collected at the group and population levels.
This study has also generated new information about the HLQ at the individual patient level by comparing how patients view their health literacy with how clinicians view their patients’ health literacy. A key finding was that clinicians read the words of HLQ items more literally than patients (perhaps because of their technical training and because they have the breadth of experience of the situations of many patients). In addition, patients and their clinicians sometimes have different perspectives about patients’ evolving circumstances; have different expectations for and apply different criteria to assigning scores to some aspects of a patient’s health literacy; and, in terms of health literacy, have different interpretations of patients’ reliance on healthcare providers. These findings have important implications for the use of data derived from a PROM that is used to make assertions about the health literacy status of individual patients.
The data from this study revealed that a clinician can have a perspective about a patient’s health literacy status that differs from the patient’s perspective. This is of clinical importance because, in a small number of instances, if a clinician took the patient’s HLQ score at face value (that is, interpreting it through their own view of the patient’s health context), then opportunities for social and clinical support could be lost. If a patient’s HLQ scores differ from those that a clinician might expect, this can prompt discussion with the patient. As one set of rich information about a patient’s health literacy status, HLQ data should be triangulated with other data such as patient history, direct observation and clinician intuition.
Some HLQ scales appear to show strong similarities between patient and clinician perspectives (concordance). The clinicians engaged in this study were specifically selected because, as case managers, they were deeply connected with their patients (e.g., consultations in the home, attending clinical appointments with the patients) and they had a good understanding of their patients’ health and health contexts. In other clinical and social settings, clinicians do not have the opportunity to acquire this depth of knowledge – at least not over relatively short periods (i.e., months) – and so their perspectives may, in fact, be even less similar. The findings indicate that the HLQ has the potential to be a powerful adjunct to clinical practice. The provision of patients’ HLQ scores to clinicians early in the patient-clinician relationship may hasten the clinician’s knowledge and understanding of patients’ struggles and capacities, particularly when used to facilitate clinical discussions to uncover barriers to patient self-care and to enable a deeper patient engagement with healthcare services.
Discordance between patient and clinician views was most often observed in scales ‘6. Ability to actively engage with healthcare providers’, ‘4. Social support for health’, ‘2. Having sufficient information to manage my health’, ‘9. Understand health information well enough to know what to do’, and ‘8. Ability to find good health information’. At times, patients rated themselves as being able to easily talk with healthcare providers, having the social support they needed, having sufficient information and understanding of information to manage their health, and knowing how to find the information they needed. However, clinicians often described these patients’ community assets or functional capacity in these areas as weak, reporting that some patients had little social support or limited ability to engage with health information or health providers. Some patients admitted that they unquestioningly accepted or relied on information from their clinicians (and so felt they had the information they needed), but clinicians reported that these patients had little ability to independently understand information. Even if a patient’s HLQ scores indicate that they have sufficient information about their health, it is important that clinicians do not assume that the patient has a good understanding of that information. Conversely, if a patient’s scale score indicates they do not have sufficient information, it may be that they do not understand the information they have. Reliance on the patient’s perspective in a self-report questionnaire could exclude important opportunities to instigate health-literacy-related interventions early in a patient’s care.
A key component of this study was that the clinicians knew the patients well. This factor allowed for detailed information to emerge about the everyday things that patients do for their health, and also, importantly, the things they do not do for or do not know about their health. The data showed that a few patients felt that their intention to do something active for their health was as indicative of managing their health as actually doing it. Consequently, patient HLQ scores that indicate the patients actively manage their health may reflect something other than what the clinician may expect (i.e., a difference between patient and clinician expectations for scale ‘3. Actively managing my health’). Discordance usually indicated situations where the clinician was expressing their need for perceptible outcomes (e.g., behaviour change after the patient has been given information), or they wanted to see concrete goal setting to help patients achieve that behaviour change. This was a difference between patient responses that reflected patients’ expressed intentions and clinician responses that were looking for (but not seeing) action from the patient. It is important to note that this paper does not report on the second set of scores from clinicians (how they think their patients would respond to the HLQ items), which, in some cases, may be the same as the patient’s score, even if the score from their clinical perspective differs. However, these other data answer a different research question about the difference between clinicians’ perspectives of their patients and what they think their patients’ perspectives would be. This is likely to be a valuable future research direction.
The primary technical reason for discordance in this study was that clinicians applied a literal reading to three words within items: sure, all and plenty. These words were designed to contribute to item difficulty within a scale. Part of the challenge of writing psychometric questionnaire items is to generate items that are easy to endorse (i.e., even people with low levels of the trait can easily respond Strongly Agree or Very Easy) through to items that are harder to endorse (i.e., it is difficult to respond Strongly Agree or Very Easy even with a high level of the trait). Each item within a scale earns its place by measuring a different and defined aspect of the scale. The HLQ wording was derived using a grounded approach, which means the items were derived from a wide range of responses to open questions about engagement in health and health services. This conversational style was deliberately used in item construction because community members rarely read these words as absolute.
Given their in-depth knowledge of a patient, and informed by their knowledge of potentially thousands of other patients, a clinician can, at times, assign a level of health literacy to a patient that differs from the patient’s own assessment. The presence of discordance in HLQ scores does not necessarily mean that the patient’s perspective is wrong, nor that the clinician’s perspective is wrong. Rather, their answers may come from different reference points and they may be using different appraisal criteria. To advance the field, provision of training to better detect these differences may assist clinicians to provide improved care.
Limitations and strengths of this study
The length of time between respondents completing the HLQs and being interviewed was mostly between 3 and 8 weeks. However, two patients were interviewed 12 weeks (P114) and 21 weeks (P104) after completing their HLQs. These delays occurred because interviews were sometimes difficult to schedule: some patients were difficult to contact, which is consistent with our intent to engage participants who would usually be overlooked for research because they are difficult to access. The clinicians explained that some patients had trust issues and would not answer their phones if they did not recognise the incoming number. In some cases, the clinician facilitated contact between a patient and a researcher. P104 was interviewed face-to-face at a Barwon Health site because the patient’s attention span for a telephone interview was limited, and because the patient’s trusted clinician introduced the patient to the researcher (SG), which lessened the patient’s concern about not knowing the researcher. Despite the sometimes long period between HLQ completion and interview, the patients’ narratives indicated that their recall of the scores, and of why they chose them, was strong. In fact, as an incidental finding, some respondents were able to describe change between the scores they chose when they completed the HLQ and what they would score at the time of the interview, which indicates that the HLQ may be able to detect change in health literacy over time.
This study did not obtain data about non-responders, which may be seen as a limitation. However, a response rate of 18 of the 45 patients (40%) who were asked to complete an HLQ is exceptional for this group of people, who required extensive assistance from a case manager to cope with their chronic and complex health conditions. This study was the first to examine the use of the HLQ at the individual patient level and to assess this as a possible use of the instrument. Use of the HLQ in other clinical contexts with individual patients will require validation of score interpretation for each context. The outcomes of this study contribute to the growing body of international validation evidence about the use of the HLQ in different contexts.
Another limitation of this study is that the interview schedule grouped HLQ items within their scales, which is not the order in which respondents completed them on the HLQ. This may have caused participants to respond in the interview in a way that was different from how they may have responded if the interview questions had followed the order in which the items appear on the HLQ. However, each participant was asked questions from only a selection of scales, so many of the HLQ items would have been omitted from the interview schedule anyway, which would have led to items appearing in a random way, matching neither the HLQ sequencing of items nor the items within the scales. To maintain consistent organisation of items and ensure all items were covered, it was deemed best to conduct the interviews using sets of items within the scales.
An important strength of this study is that it accessed a group of people who are often missed by research; that is, people who are rarely invited to participate in research because of how difficult it is to engage them. These are often people with low or very low health literacy. A further strength is that the study was conducted in a real-world clinical setting using a psychometrically robust PROM.
This research lays the groundwork for further work (already being undertaken by the authors about validation of the interpretations of PROM data) because it is an initial exploration into qualitative validation methods that go beyond the cognitive interviews that are used to support validation of the psychometric properties of PROMs for aggregated population data. These studies also put into practice long-held theories of validity that it is the inferences derived from data that are determined to be valid for each new context, not the properties of the tool itself [6,7,8].
The HLQ and the field of health literacy have been identified by global organisations such as the United Nations (UN) and the World Health Organization (WHO) as having the potential to make substantive contributions to public health and health equality [28,29,30,31]. Health literacy is now seen as an opportunity to understand and intervene in social inequalities in health. However, much of the recent research in the field is at the group and population levels. Our research demonstrates that the HLQ has measurement veracity at the patient and clinician level. It also indicates the important implications for the depth and quality of care that a patient might receive if clinicians can detect when they perceive a patient’s health literacy to be different from the way the patient sees it. A primary recommendation of this paper is the use of the HLQ to highlight areas of discordance between clinician and patient perspectives. Awareness of these differences in perspectives can pave the way for clinicians to engage in conversation with patients to better understand their health context, and plan well-founded treatment and care solutions that reflect a patient’s individual health literacy challenges and strengths. This study, in line with the validity driven approach, is part of the ongoing development of the web of quantitative and qualitative evidence about the clinical and public health utility of the HLQ.
Abbreviations
HLQ: Health Literacy Questionnaire
WHO: World Health Organization
PROM: Patient-reported outcome measure
Buchbinder R, Batterham R, Elsworth G, Dionne CE, Irvin E, Osborne RH. A validity-driven approach to the understanding of the personal and societal burden of low back pain: development of a conceptual and measurement model. Arthritis Res Ther. 2011;13(5):R152.
American Educational Research Association, American Psychological Association, National Council on Measurement in Education, Joint Committee on Standards for Educational and Psychological Testing (U.S.). Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 1999.
Cronbach LJ. Test Validation. In: Thorndike RL, Angoff WH, Lindquist EF, editors. Educational measurement. Washington: American Council on Education; 1971. p. 483–507.
Stenner AJ, Smith III M, Burdick DS. Toward a theory of construct definition. J Educ Meas. 1983;20:305–16.
Pedhazur E, Schmelkin LP. Measurement, design, and analysis: an integrated approach. Hillsdale: Erlbaum; 1991.
Messick S. Foundations of validity: Meaning and consequences in psychological assessment. ETS Res Rep Ser. 1993;1993(2):i–18.
Moss PA. Shifting conceptions of validity in educational measurement: implications for performance assessment. Rev Educ Res. 1992;62(3):229–58.
Kane MT. An argument-based approach to validity. Psychol Bull. 1992;112(3):527.
Elsworth GR, Nolte S, Osborne RH. Factor structure and measurement invariance of the health education impact questionnaire: Does the subjectivity of the response perspective threaten the contextual validity of inferences? SAGE Open Med. 2015;3:2050312115585041.
Cronbach LJ, Meehl PE. Construct validity in psychological tests. Psychol Bull. 1955;52(4):281.
Nunnally J, Bernstein I. Psychometric theory. 3rd ed. New York: McGraw-Hill; 1994.
Adamson J, Gooberman-Hill R, Woolhead G, Donovan J. ‘Questerviews’: using questionnaires in qualitative interviews as a method of integrating qualitative and quantitative health services research. J Health Serv Res Policy. 2004;9(3):139–45.
Sørensen K, Pelikan JM, Röthlin F, Ganahl K, Slonska Z, Doyle G, et al. Health literacy in Europe: comparative results of the European health literacy survey (HLS-EU). Eur J Public Health. 2015;25(6):1053–8.
Osborne RH, Batterham RW, Elsworth GR, Hawkins M, Buchbinder R. The grounded psychometric development and initial validation of the Health Literacy Questionnaire (HLQ). BMC Public Health. 2013;13:658.
Nutbeam D. The evolving concept of health literacy. Soc Sci Med. 2008;67(12):2072–8.
Sorensen K, Van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, et al. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. 2012;12:80.
Nutbeam D. Health promotion glossary. Health Promot Int. 1998;13(4):349–64.
Maindal HT, Kayser L, Norgaard O, Bo A, Elsworth GR, Osborne RH. Cultural adaptation and validation of the Health Literacy Questionnaire (HLQ): robust nine-dimension Danish language confirmatory factor model. SpringerPlus. 2016;5(1):1232.
Batterham RW, Buchbinder R, Beauchamp A, Dodson S, Elsworth GR, Osborne RH. The OPtimising HEalth LIterAcy (Ophelia) process: study protocol for using health literacy profiling and community engagement to create and implement health reform. BMC Public Health. 2014;14(1):694.
Bo A, Friis K, Osborne RH, Maindal HT. National indicators of health literacy: ability to understand health information and to engage actively with healthcare providers-a population-based survey among Danish adults. BMC Public Health. 2014;14(1):1095.
Livingston PM, Osborne RH, Botti M, Mihalopoulos C, McGuigan S, Heckel L, et al. Efficacy and cost-effectiveness of an outcall program to reduce carer burden and depression among carers of cancer patients [PROTECT]: rationale and design of a randomized controlled trial. BMC Health Serv Res. 2014;14(1):1.
Faruqi N, Stocks N, Spooner C, el Haddad N, Harris MF. Research protocol: management of obesity in patients with low health literacy in primary health care. BMC Obesity. 2015;2(1):1.
Elsworth GR, Beauchamp A, Osborne RH. Measuring health literacy in community agencies: a Bayesian study of the factor structure and measurement invariance of the health literacy questionnaire (HLQ). BMC Health Serv Res. 2016;16(1):508.
Beauchamp A, Buchbinder R, Dodson S, Batterham RW, Elsworth GR, McPhee C, et al. Distribution of health literacy strengths and weaknesses across socio-demographic groups: a cross-sectional survey using the Health Literacy Questionnaire (HLQ). BMC Public Health. 2015;15:678.
Green J, Willis K, Hughes E, Small R, Welch N, Gibbs L, et al. Generating best evidence from qualitative research: the role of data analysis. Aust N Z J Public Health. 2007;31(6):545–50.
Saldaña J. The coding manual for qualitative researchers. London: Sage Publications Ltd; 2015.
Schwartz CE, Rapkin BD. Reconsidering the psychometrics of quality of life assessment in light of response shift and appraisal. Health Qual Life Outcomes. 2004;2:1–11.
Greenhalgh T. Health literacy: towards system level solutions. BMJ. 2015;350:h1026.
Dodson S, Good S, Osborne RH. Health literacy toolkit for low- and middle-income countries: a series of information sheets to empower communities and strengthen health systems. New Delhi: WHO Regional Office for South-East Asia; 2015 [cited 12 Feb 2015]. Available from: http://www.searo.who.int/entity/healthpromotion/documents/hl_tookit/en/
Australian Commission on Safety and Quality in Health Care. Health literacy: taking action to improve safety and quality. Sydney: ACSQHC; 2014.
United Nations Economic Social Council. Health literacy and the Millennium Development Goals: United Nations Economic and Social Council (ECOSOC) regional meeting background paper (abstracted). J Health Commun. 2010;15(S2):211–23.
The authors wish to acknowledge Jan Byrnes, Team Leader for the Barwon Health Hospital Admission Risk Program (HARP), for her support with this project, and also the HARP clinicians who provided their time to participate in the study.
MH was funded in part by a small internal Deakin University grant.
RHO is funded in part through a National Health and Medical Research Council (NHMRC) Senior Research Fellowship #APP1059122.
Availability of data and materials
Data are available from the authors upon request. The raw data include the HLQ items, which are copyright.
MH and RHO conceived the study, and RB, GE and SG contributed to the design. MH and SG undertook data collection and analysis. All authors contributed to the data synthesis. MH and RHO led the development of the initial draft. All other authors then contributed to subsequent drafts, and approved the final draft.
The authors declare that they have no competing interests.
Consent for publication
All authors have consented for this manuscript to be submitted for publication.
Ethics approval and consent to participate
The project was approved by the Human Research Ethics Committees of Barwon Health (ID: 11/85) and Deakin University (ID: 2011-077).
All participants gave informed consent to participate in this research.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Hawkins, M., Gill, S.D., Batterham, R. et al. The Health Literacy Questionnaire (HLQ) at the patient-clinician interface: a qualitative study of what patients and clinicians mean by their HLQ scores. BMC Health Serv Res 17, 309 (2017). https://doi.org/10.1186/s12913-017-2254-8
- Health Literacy Questionnaire
- Patient centred care
- Patient reported outcomes