It takes patience and persistence to get negative feedback about patients’ experiences: a secondary analysis of national inpatient survey data
© Barron et al.; licensee BioMed Central Ltd. 2014
Received: 25 March 2013
Accepted: 28 March 2014
Published: 4 April 2014
Patient experience surveys are increasingly used to gain information about the quality of healthcare. This paper investigates whether patients who respond before and after reminders to a large national survey of inpatient experience differ in systematic ways in how they evaluate the care they received.
The English national inpatient survey of 2009 obtained data from just under 70,000 patients. We used ordinal logistic regression to analyse their evaluations of the quality of their care in relation to whether or not they had received a reminder before they responded.
33% of patients responded after the first questionnaire, a further 9% after the first reminder, and a further 10% after the second reminder. Evaluations were less positive among people who responded only after a reminder and lower still among those who needed a second reminder.
Quality improvement efforts depend on having accurate data and negative evaluations of care received in healthcare settings are particularly valuable. This study shows that there is a relationship between the time taken to respond and patients’ evaluations of the care they received, with early responders being more likely to give positive evaluations. This suggests that bias towards positive evaluations could be introduced if the time allowed for patients to respond is truncated or if reminders are omitted.
Keywords: Patient satisfaction/statistics and numerical data; Hospitals/standards; Health care surveys/methods; Bias (epidemiology); Questionnaires
Concerns about quality of healthcare have led to a proliferation of patient experience surveys. The national patient survey programme for England was first proposed in The New NHS: Modern, Dependable as a way of assessing patients’ experiences of care and how they change over time. The surveys were part of a more general commitment to make the National Health Service (NHS) more responsive to patients. The reasoning was, and still is, that if hospitals are given information about how patients evaluate the quality of the care they received, managers and clinicians will be able to respond to any identified shortcomings, leading to improvements in the quality of care. The surveys are a potentially important resource for NHS Trusts as they provide detailed information on experiences of care from probability samples of recent patients. However, their usefulness depends on the representativeness of those who respond.
The first hospital-based national survey of adult inpatients was reported in 2002 and the survey has been repeated almost every year since then under a programme centrally monitored by the Care Quality Commission (CQC). Each NHS Trust in England is asked to conduct a postal survey of 850 consecutively-discharged recent inpatients. They may conduct the survey themselves or use a CQC-approved survey contractor, but all Trusts are required to adopt a standard methodology that attempts to maximise response rates by making up to three attempts to contact patients. The initial questionnaire is sent, followed by a reminder letter to non-responders around 21 days later, and then a second reminder with a duplicate questionnaire is sent to those who have still not responded after a further 21 days. Postage-free return envelopes are included with the first mailing and second reminder. Each year, approximately 160 English acute NHS Trusts participate in the national inpatient survey, and the resulting question scores are published by the CQC. Currently, each Trust’s survey scores are re-weighted to adjust for differences among Trusts in responders’ age, sex and route of admission (planned or emergency). Each responder’s weight is calculated by dividing the proportion of respondents in the national data set for that year in their age/sex/admission route group by the Trust’s proportion. An upper limit for the weight is set at 5.
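This weighting rule is simple enough to illustrate. The following is a minimal sketch in Python, assuming one row per responder; the column names (trust_id, age_band, sex, admission_route) are our own placeholders and are not taken from the CQC's implementation:

```python
import pandas as pd

def cqc_style_weights(df: pd.DataFrame, cap: float = 5.0) -> pd.Series:
    """Weight each responder by the national proportion of responders in
    their age/sex/admission-route group divided by their Trust's own
    proportion, capped at 5. Column names are assumed for illustration."""
    group = ["age_band", "sex", "admission_route"]

    # National proportion of responders in each group, aligned per row.
    nat_prop = df.groupby(group)["trust_id"].transform("size") / len(df)

    # Each Trust's proportion of its own responders in each group.
    trust_n = df.groupby("trust_id")["trust_id"].transform("size")
    group_n = df.groupby(["trust_id"] + group)["trust_id"].transform("size")
    trust_prop = group_n / trust_n

    # Ratio of proportions, with the upper limit of 5 applied.
    return (nat_prop / trust_prop).clip(upper=cap)
```

Under this rule, a Trust whose responder profile exactly matched the national profile would receive weights of 1 throughout, and the cap of 5 prevents very rare groups within a Trust from dominating its scores.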
In 2009, the response rate for the annual inpatient survey was 52%. Some 41% of patients responded before the second reminder and its duplicate questionnaire were dispatched, and a further 11% of responses were received after that point. This raises the important question of whether the 11% of patients who responded after reminders differed in some systematic way from those who responded earlier. If there are systematic differences, this suggests that closing the survey too soon after the first questionnaire and/or failing to send out reminders would have led to bias in measurements of patients’ experiences in hospitals in the NHS.
A systematic review of methods of increasing response rates to health surveys, citing studies going back to 1921, found that second and third mailings typically attracted further responses from 12% and 10% of the original sample, respectively, although these averages masked considerable variability. More recently, Nakash et al. found that “more intense follow-up” increased response rates, although the different methods used by the studies (including telephone calls in one case) reduced the generalisability of this finding. A larger review of randomised controlled trials by Edwards et al. found evidence for the effectiveness of follow-up contact, with an odds ratio for response after follow-up of 1.35 (95% CI 1.18 to 1.55).
These studies suggest, then, that the use of repeat mailings, and sending a second copy of the questionnaire, are likely to increase response rates. However, that in itself does not demonstrate that response bias is reduced: questionnaire responses received after follow-up may not be systematically different from those received in response to the initial mailing. Mazor et al.  found a positive correlation between response rates to a survey about patient satisfaction with individual physicians and the physicians’ patient satisfaction scores—that is, more satisfied patients were more likely to respond. In a simulation study, they then showed that non-response bias would most likely lead to patient satisfaction being overestimated. Further, as they were dealing with data about patient satisfaction with individual physicians, they were able to conclude that the scores for the physicians with whom the patients were least satisfied would have the greatest magnitude of error.
Evidence of systematic differences between responders and non-responders is difficult to obtain, but some studies have shown that early and late responders to mail surveys sometimes differ. For example, one study of responders to a US patient satisfaction survey that involved nearly 20,000 patients in 76 hospitals found significant differences between the first 30% of responders and the remainder on nine out of thirteen scales. Similarly, Perneger et al. showed that early responders reported significantly fewer problems with the healthcare they received than late responders or non-responders. In Norway, Bjertnaes conducted a national study of 10,912 recently-discharged patients based on a survey with important similarities to the NHS national inpatient survey that is the focus of this paper. He found that satisfaction on five of the six reported patient satisfaction scales decreased as response time (the time it took patients to return questionnaires after they had been received) increased. More recently, Hutchings et al. compared early and late responders to a large (n ≈ 80,000) UK survey of patient-reported outcomes after four surgical procedures. After controlling for a range of variables previously found to be associated with non-response, including age, ethnicity, deprivation and health status, they found that late responders were slightly more likely to report poorer outcomes. These results are consistent with a number of other studies that have found an association between late response and patients’ tendency to report poorer clinical outcomes [9, 13–16]. In summary, then, the balance of evidence suggests that there is a difference between early and late responders, with the latter being less satisfied with their care or with the clinical outcomes of their treatment.
The evidence for differences between initial and post-reminder responders is less clear. Yessis and Rathert have suggested that reminders are important, as they found that patients responding to a reminder were significantly less satisfied than initial respondents. However, other researchers have found no significant difference between initial respondents and those who required several reminders and follow-ups to obtain a response. It is therefore important to investigate whether there is indeed a difference between initial and post-follow-up responders in this survey.
It is also possible to go beyond viewing repeated mailings simply as a way of increasing response rates. Some authors have suggested that people who respond later to mail surveys be treated as proxies for people who do not respond at all. Halbesleben and Whitman explain that “[t]he logic behind this approach is based on a process called the continuum of resistance, which suggests that each subsequent wave of participants demonstrates greater resistance in completing the survey. By this logic, one could use the last people to respond (thus, the most difficult to obtain) as proxies for nonrespondents, as they are closest to nonrespondents on the continuum of resistance. Thus, we can compare the last group to respond with the others in the survey to examine potential differences that might approximate nonresponse bias” (p. 11).
This study seeks to add to the available evidence by using a large sample, a large number of hospitals and a single mode of data collection. The key question is whether later respondents, and in particular those who respond to reminders, differ systematically from those who respond quickly. In this paper, we test whether there are significant differences between early and late responders, examine the relationship to reminders, and explore the possibility of using data from late responders as a proxy for non-responders. Specifically, we ask three questions:
Is there an association between whether people are early or late respondents to the survey and their evaluations of the quality of the care they received?
Is the use of reminders an effective way of reducing non-response bias in survey-based estimates of patients’ evaluation of the quality of their care?
Can data from late responders be used as a way of estimating the effect of non-response bias?
National inpatient survey data
This study uses the data from the Care Quality Commission’s (CQC’s) 2009 English national inpatient survey. Annually, these data are archived in the UK Data Archive, but do not include questionnaire return dates. In addition, Picker Institute Europe, who collate and clean the data for the CQC, supplied the questionnaire return dates for the purposes of conducting this study, as agreed by the CQC. Further details of the sampling and survey methods have been described elsewhere . For the 2009 survey, questionnaires were sent to a total of 137,360 recently discharged inpatients, of whom 69,348 returned usable responses. Excluding 1,831 undelivered questionnaires and 2,069 deceased patients, this corresponds to a response rate of 52%.
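The quoted response rate follows directly from these counts: 69,348 usable responses divided by an eligible sample of 137,360 − 1,831 − 2,069 = 133,460 gives 69,348 / 133,460 ≈ 0.52, i.e. 52%.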
We analysed responses to five evaluation questions:
“Overall, how would you rate the care you received?” Responses are “Excellent”, “Very good”, and a combined category of “Good, Fair or Poor”.
“In your opinion, how clean was the hospital room or ward that you were in?” Responses are “Very clean”, “Fairly clean”, and a combined category of “Not very clean or Not at all clean”.
“Did you have confidence and trust in the doctors treating you?” Responses are “Yes, always”, “Yes, sometimes”, and “No”.
“Did you have confidence and trust in the nurses treating you?” Responses are “Yes, always”, “Yes, sometimes”, and “No”.
“How many minutes after you used the call button did it usually take before you got the help you needed?” Responses are “0 minutes/right away”, “1-2 minutes”, “3-5 minutes”, “More than 5 minutes”, and “I never got help when I used the call button”.
[Table 1: Responses to the care satisfaction questions, giving the distribution of answers to each of the five questions listed above.]
The main explanatory variable was whether a response was received without a reminder, following the first reminder, or following the second reminder. We also controlled for other factors that previous research has suggested may be associated with satisfaction with care. A systematic review of all the published research outputs produced using the patient survey data showed that several patient characteristics are associated with their evaluation of care. In this study, therefore, we control for these factors, including age, sex, length of stay in hospital, and whether or not the person was admitted as an emergency. Analysis was performed using ordinal logistic regression. We used Stata 12 to perform the analysis, obtaining robust standard errors that control for the clustering of observations within Trusts.
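The published analysis was run in Stata 12 with Trust-clustered robust standard errors. Purely as a non-authoritative illustration of the model structure, the sketch below fits the same kind of ordinal logit in Python with statsmodels on synthetic data; all variable names and effect sizes are invented, and the Trust-level clustering of standard errors used in the paper is omitted:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 5000

# Synthetic stand-in for the survey frame; names and effects are invented.
df = pd.DataFrame({
    "wave": rng.integers(0, 3, n),  # 0 = no reminder, 1 = first, 2 = second
    "age": rng.integers(18, 95, n).astype(float),
    "female": rng.integers(0, 2, n).astype(float),
    "log_stay": np.log(rng.integers(1, 30, n)),
    "emergency": rng.integers(0, 2, n).astype(float),
})

# Ordered outcome (worst to best), built so that later waves rate care lower.
latent = -0.2 * df["wave"] + 0.01 * df["age"] + rng.logistic(size=n)
df["rating"] = pd.cut(
    latent, bins=[-np.inf, 0.0, 1.5, np.inf],
    labels=["Good, Fair or Poor", "Very good", "Excellent"],
)

# Dummy-code the reminder wave; OrderedModel takes no constant term.
exog = pd.concat(
    [pd.get_dummies(df["wave"], prefix="wave", drop_first=True).astype(float),
     df[["age", "female", "log_stay", "emergency"]]],
    axis=1,
)

res = OrderedModel(df["rating"], exog, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```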
[Table 2: Distribution of responses across the three mailings. For responses received after the first, second and third mailings, and for the cumulative response rate, the table gives the distribution of each evaluation: overall care (Excellent; Very good; Good, Fair or Poor), cleanliness (Very clean; Fairly clean; Not very clean or Not at all clean), confidence in doctors and in nurses (Always; Sometimes; No), and call-button response time (0 minutes; 1-2 minutes; 3-5 minutes; Over 5 minutes; Never).]
[Table 3: Ordinal logistic regression estimates with robust standard errors, including coefficients for the reminder indicators and covariates such as log(length of stay), with Wald chi-squared statistics.]
[Table 4: Predicted probabilities of responses to the satisfaction questions: a) overall rating of care (Excellent; Very good; Good, Fair or Poor); b) cleanliness of ward (Very clean; Fairly clean; Not very clean or Not at all clean); c) confidence and trust in doctors; d) confidence and trust in nurses; e) time to respond to call button (up to More than 5 minutes; Never).]
The largest difference in predicted probabilities between those who respond without a reminder and those who respond after the second reminder is in panel a) of Table 4, the analysis of responses to the overall rating of care question. The predicted probability of rating care as ‘Excellent’ declines from 0.41 to 0.33, a relative decline of 19%. The largest part of this is accounted for by an increase in the predicted probability of rating care as less than ‘Very good’ from 0.22 to 0.28, a relative increase of 27%.
[Table 5: Observed and predicted frequencies of response to the overall rating question (Excellent; Very good; Good, Fair or Poor).]
Differences in the other predicted probabilities shown in Table 4, while still noticeable, are not as large. For example, the predicted probability of always having trust in doctors drops from 0.71 to 0.65, a decline of eight per cent, while the equivalent decline for nurses is six per cent.
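Continuing the illustrative sketch from the Methods section (synthetic data, not the paper's Stata estimates), predicted probabilities of this kind can be obtained by scoring a hypothetical reference patient at each response wave with the fitted model:

```python
import numpy as np
import pandas as pd

# Hypothetical reference patient, repeated for the three response waves;
# column order must match the exog used to fit the model above.
ref = pd.DataFrame({
    "wave_1": [0.0, 1.0, 0.0],  # responded after the first reminder
    "wave_2": [0.0, 0.0, 1.0],  # responded after the second reminder
    "age": 60.0,
    "female": 1.0,
    "log_stay": np.log(3),
    "emergency": 0.0,
})

# One row per wave; one column per outcome category, worst to best.
print(res.predict(ref).round(2))
```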
The necessity for repeat mailings may be questioned on economic grounds, or out of concern not to harass patients. This issue is sometimes raised, for example, in discussions with NHS hospitals contracting for the annual survey (personal communication with an authorised contractor for the NHS inpatient survey), or by ethics committees when reviewing a research proposal that includes the patient experience survey as a data collection instrument. However, this paper shows that there is a relationship between a patient’s overall evaluation of their care and whether they are responding to the initial mailing or to a reminder. Less satisfied patients are less likely to respond to the initial mailing, but significant numbers of them do respond to reminders. This demonstrates that repeat mailings reduce response bias in patient surveys. Without the repeat mailings, the proportion of people reporting that their care was ‘Excellent’ or ‘Very good’ would be significantly higher. This study suggests that both patience (giving patients time to respond) and persistence (sending reminders) are required to ensure that the survey data do not exclude patients who have had a more negative experience of care.

The wider implication of this paper is that bias could be introduced through small changes to the survey protocol. As health care systems become increasingly dependent on patients’ evaluations of their care, it is essential that we work to produce data that give a true picture of patients’ experiences, rather than data that are misleading. In a paper titled “25 Years of Health Surveys: Does more data mean better data?”, Berk, Schur and Feldman reflected that, in the US, “…survey designers are the victims of their own success; as policy makers understand the value of survey data in assessing policy changes, growing demands for data force agency budgets to emphasize short-term efforts while postponing longer term investments in data quality”. One of their main recommendations is that more be invested in research on survey methods.
We might ask whether response rates could be increased further by sending more reminders and/or by extending the data collection period. We have already alluded to the potential ethical concerns that would arise from sending more reminders, to which we would have to add the fact that still more time would have elapsed between the actual inpatient experience and the completion of a questionnaire. In the case of the NHS survey of inpatient experience, this currently means that most of the surveyed patients are discharged around June, and data collection ends the following January. Many of the final reminders in this survey are dispatched relatively late in the data collection period, effectively giving respondents little more than a month to respond. Although the majority of people who intend to respond will have done so in this time period, about 20 per cent of people who responded after receipt of the second reminder took more than a month to do so. On balance, it would seem preferable to ensure that there is a period of two months from dispatch of the final reminder before the close of data collection, but further extension would probably not result in a great increase in responses: Figure 1 shows that the rate of responses declines markedly after three weeks.
Non-response bias is not the only potential problem that we face in obtaining valid estimates of patient satisfaction. For example, post-discharge mail surveys may be superior to methods that involve questioning patients in hospital, in that the more impersonal, anonymous nature of the data collection method may encourage more negative feedback. They may also be felt to be less intrusive by patients than methods involving face-to-face or telephone contact with researchers. On the other hand, it is possible that mail survey questionnaires are completed by someone other than the actual patient, and such responses may differ from those that would have been given by the patients themselves.
One possible area for future research would be the extent to which the most reluctant responders to these questionnaires could be used as proxies for non-responders. Further information about their similarities and differences could lead to the development of non-response weights. We have shown that if we were to assume that non-responders were indeed similar to patients responding to the final reminder, then the change in estimated levels of satisfaction would be noticeable, but not substantial. However, it is conceivable that levels of satisfaction among non-responders are much lower than even those of the late responders, which would seriously undermine the validity of the data. The fact that we found a consistent relationship, with satisfaction declining with the number of reminders, suggests that in this case the assumption that non-responders are similar to late responders may not be unreasonable, but further research in this area would be very useful, particularly given the importance of this survey in monitoring standards in the NHS. If it is shown that non-responders are similar to late responders, then we can more confidently claim that the method by which this survey is currently conducted is an effective way of obtaining reasonable estimates of patient satisfaction with care.
We set out to investigate the importance of reminders in relation to the national (England) inpatient survey and found that late responders and those whose questionnaires were received after reminders had been sent were significantly less satisfied than those who responded to the initial mailing. We conclude that reminders have a significant and important effect, and that the current practice for the national surveys of sending two reminders to non-responders is appropriate and proportionate to the benefits of reducing non-response bias.
The authors would like to acknowledge the research assistance of Anna DeCourcy, who assisted in the acquisition of the data and with the literature search. They would also like to thank the Picker Institute, which supplied the data, for their helpful responses to our requests for further information once the data had been received.
Sources of funding
EW and DB are full-time staff at the University of Greenwich and RR is part-time (0.2 FTE). DNB is a full-time member of staff of the University of Oxford. Funding for manuscript preparation and publication costs came from RAE Formulaic Funding to the Nursing Research Group.
- Department of Health: The New NHS: Modern, Dependable. 1997, London: HMSO.
- Bullen N, Reeves R: Acute Inpatient Survey: National Overview 2001-02. 2003, London: Department of Health. http://www.nhssurveys.org/Filestore/Inpatient_2002/2002_Final_Report_--_from_web.archive.org.pdf Accessed 02/04/2012.
- The Co-ordination Centre for the NHS Patient Survey Programme: Guidance Manual for the NHS Annual Inpatient Survey 2012. 2013, Oxford: The Co-ordination Centre for the NHS Patient Survey Programme.
- Heberlein T, Baumgartner R: Factors affecting response rates to mailed questionnaires: a quantitative analysis of the published literature. Am Sociol Rev. 1978, 43: 447-462. doi:10.2307/2094771.
- Nakash R, Hutton J, Jørstad-Stein E, Gates S, Lamb S: Maximising response to postal questionnaires: a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006, 6: 5. doi:10.1186/1471-2288-6-5.
- Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I: Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009, 8: 1-12.
- Mazor K, Clauser B, Field T, Yood R, Gurwitz J: A demonstration of the impact of response bias on the results of patient satisfaction surveys. Health Serv Res. 2002, 37(5): 1403-1417. doi:10.1111/1475-6773.11194.
- Barkley W, Furse D: Changing priorities for improvement: the impact of low response rates in patient satisfaction. J Qual Improv. 1996, 22: 427-433.
- Perneger TV, Chamot E, Bovier PA: Nonresponse bias in a survey of patient perceptions of hospital care. Med Care. 2005, 43(4): 374-380. doi:10.1097/01.mlr.0000156856.36901.40.
- Bjertnaes OA: The association between survey timing and patient-reported experiences with hospitals: results of a national postal survey. BMC Med Res Methodol. 2012, 12: 13. doi:10.1186/1471-2288-12-13.
- Hutchings A, Grosse Frie K, Neuburger J, van der Meulen J, Black N: Late response to patient-reported outcome questionnaires after surgery was associated with worse outcome. J Clin Epidemiol. 2013, 66: 218-225. doi:10.1016/j.jclinepi.2012.09.001.
- Hutchings A, Neuburger J, Grosse Frie K, Black N, van der Meulen J: Factors associated with non-response in routine use of patient reported outcome measures after elective surgery in England. Health Qual Life Outcomes. 2012, 10: 34. doi:10.1186/1477-7525-10-34.
- Sheikh K, Mattingly S: Investigating non-response bias in mail surveys. J Epidemiol Community Health. 1981, 35: 293-296. doi:10.1136/jech.35.4.293.
- Emberton M, Black N: Impact of non-response and of late-response by patients in a multi-centre surgical outcome audit. Int J Qual Health Care. 1995, 7(1): 47-55. doi:10.1093/intqhc/7.1.47.
- Gasquet I, Falissard B, Ravaud P: Impact of reminders and method of questionnaire distribution on patient response to mail-back satisfaction survey. J Clin Epidemiol. 2001, 54(11): 1174-1180. doi:10.1016/S0895-4356(01)00387-0.
- Kim J, Lonner J, Nelson C, Lotke P: Response bias: effect on outcomes evaluation by mail surveys after total knee arthroplasty. J Bone Joint Surg. 2004, 86: 15-21.
- Yessis J, Rathert C: Initial versus prompted responders to patient satisfaction surveys: implications for interpretation and patient feedback. JAME. 2006, 11(4): 49-64.
- Davern M, McAlpine D, Beebe T, Ziegenfuss J, Rockwood T, Call K: Are lower response rates hazardous to your health survey? An analysis of three state telephone health surveys. Health Serv Res. 2010, 45: 1324-1344. doi:10.1111/j.1475-6773.2010.01128.x.
- Halbesleben JRB, Whitman MV: Evaluating survey quality in health services research: a decision framework for assessing nonresponse bias. Health Serv Res. 2012.
- Reeves R, West E, Barron D: Facilitated patient experience feedback can improve nursing care: a pilot study for a phase III cluster randomised controlled trial. BMC Health Serv Res. 2013, 13(1): 259. doi:10.1186/1472-6963-13-259.
- Bruster S, Jarman B, Bosanquet N, Weston D, Erens R, Delbanco T: National survey of hospital patients. Br Med J. 1994, 309(6968): 1542-1546. doi:10.1136/bmj.309.6968.1542.
- DeCourcey A, West E, Barron D: National adult inpatient survey conducted in the English national health service from 2002-2009: how have the data been used and what do we know as a result? BMC Health Serv Res. 2012, 12: 71. doi:10.1186/1472-6963-12-71.
- Agresti A: Categorical Data Analysis. 2002, Hoboken, NJ: Wiley.
- StataCorp: Stata Statistical Software: Release 12. 2011, College Station, TX: StataCorp LP.
- Berk ML, Schur CL, Feldman J: Twenty-five years of health surveys: does more data mean better data? Health Aff. 2006, 26(6): 1403-1417.
- Bjertnaes O: Patient-reported experiences with hospitals: comparison of proxy and patient scores using propensity-score matching. Int J Qual Health Care. 2014, 26(1): 34-40. doi:10.1093/intqhc/mzt088.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/14/153/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.