- Research article
- Open Access
Increasing response to a postal survey of sedentary patients – a randomised controlled trial [ISRCTN45665423]
BMC Health Services Research volume 4, Article number: 31 (2004)
A systematic review identified a range of methods that can influence response rates. However, analysis specific to a healthcare setting, and in particular involving people expected to be poor responders, was missing. We examined the effect of pre-warning letters on response rates to a postal survey of sedentary patients from whom we expected a low rate of response.
Participants were randomised to receive a pre-warning letter or no pre-warning letter, seven days before sending the main questionnaire. The main questionnaire included a covering letter and pre-paid return envelope. After seven days, non-responders were sent a reminder letter and seven days later, another reminder letter with a further copy of the questionnaire and return envelope.
627 adults, with a mean age of 48 years (SD 13, range 18 to 78), of whom 69.2% (434/627) were women, were randomised. 49.0% (307/627) of patients were allocated to receive a pre-warning letter and 51.0% (320/627) no pre-warning letter, seven days in advance of posting the main questionnaire. The final response rate to the main questionnaire was 30.0% (92/307) amongst those sent a pre-warning letter and 20.9% (67/320) amongst those not sent a pre-warning letter, with an adjusted odds ratio of 1.60 (95% CI 1.11, 2.30).
The relatively low cost method of sending a pre-warning letter had a modest impact on increasing response rates to a postal questionnaire sent to a group of patients for whom a low response rate was anticipated. Investigators should consider incorporating this simple intervention when conducting postal surveys, to reduce the potential for nonresponse bias and to increase the study power. However, methods other than postal surveys may be needed when a low response rate to postal surveys is likely.
Postal surveys are routinely used to obtain information from patients and groups within the general population, over a range of topics. They are a cost-efficient method compared with intensive methods such as face-to-face interviews, and are capable of systematically obtaining information on many thousands of people. A key quality component of postal surveys relates to the number of people sampled, and the proportion returning a completed, useable questionnaire. Lower response rates can reduce the statistical power of the study, and mask statistically significant relationships which 'truly' exist within the population studied. Responders may also differ from non-responders. This can introduce bias into the survey findings if the decision to respond (or not) relates to the outcome being analysed within the survey, thereby reducing generalisability to the initial reference population.
Many studies conclude that non-responders in surveys and other epidemiological studies can differ from responders with respect to a range of specific health, lifestyle and social variables. Non-responders have been found to differ with respect to their sex, age, race, social class, home circumstances, education, and healthy lifestyle behaviours [3–7]. They can also differ in terms of existing health and healthcare utilisation [8–11], with differences extending through to higher rates of mortality compared with responders. However, it can be difficult to draw clear conclusions about the characteristics of non-responders in surveys and other types of studies, as factors such as the purpose of the study and the way in which it was carried out will no doubt have some effect. Furthermore, differences have not always been found between responders and non-responders, at least in terms of what factors were assessed, and nonresponse will not always affect estimates of prevalence [13–15]. Nevertheless, nonresponse bias should always be considered a possibility [16, 17]. Moreover, there is evidence that, in general, response rates to postal questionnaires are falling, making this topic worthy of continued investigation. There is, however, no agreed level of acceptable response in postal surveys [19–21].
A systematic review found that a number of factors associated with postal questionnaires can influence the likelihood of response. These included incentives; questionnaire length and appearance; method of delivery; method of return; whether any pre-warning/contact was given; the content and layout of the questions; and the origin/sponsor of the questionnaire and how it was communicated. The review found that monetary incentives, recorded delivery and using an 'interesting' questionnaire were the three strongest influences on response rates. Statistical heterogeneity was found for all of these factors, limiting the extent to which pooling of results was viable. Moreover, only a third of studies were from medical/epidemiological/healthcare journals, no distinction was made between studies of patients and studies of staff, and subgroup analyses were absent. As such, it is difficult to generalise the findings from the review to particular groups of patients in a healthcare setting.
In the current study, we carried out a randomised controlled trial to examine the effect of a pre-warning letter on response rates to a postal questionnaire. The questionnaire was sent to patients who had previously been referred to a community based exercise referral scheme because of a sedentary lifestyle, and sought information on the quality of the service offered. An earlier study of a similar population suggested poor response rates could be a problem. With a limited budget, we were unable to offer financial incentives, as suggested in the review by Edwards et al, but wanted to explore the suggestion that pre-warning letters might increase final response.
The sample consisted of patients who had been referred to a community based exercise referral scheme during the past 12 months, identified from the service register. Patients were referred by a primary care practitioner because of concerns about their sedentary behaviour and its impact on their health. The questionnaire formed part of a project examining the relationship between patient service-expectations and service outcomes.
We examined the effect of a pre-warning letter, posted to patients seven days before sending the main questionnaire, compared with no pre-warning letter. The pre-warning letter was printed on one side of letter-headed paper, and informed the respondent that a survey would be sent to them within the next few days. It explained the purpose of the survey and the importance of its being completed and returned.
The main questionnaire was sent with a covering letter and a pre-paid, business-franked addressed envelope for its return. A standard reminder letter was sent to all non-responders seven days after posting the questionnaire, and after a further seven days, persistent non-responders were sent a further copy of the questionnaire, with a standard letter and return envelope. Randomisation was done using computer generated random numbers, stratifying by age and sex. Participants remained unaware of their group allocation.
The primary outcome was the final response rate to postal questionnaires after sending all reminder letters. This was calculated at least 6 weeks after sending the initial questionnaire, to allow for late responders. Differences in proportions between groups were examined using Pearson's chi-square test, with logistic regression used to adjust for age and sex; results are reported with 95% confidence intervals (95% CI). Detecting a difference of at least 10% between trial arms with 80% power required 752 participants, based on a return rate of 60% in the control group. Approval for the study was received in advance from the local research ethics committee and the research governance committee.
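The stated requirement of 752 participants is consistent with the standard sample-size formula for comparing two proportions (60% vs 70%, two-sided α = 0.05, 80% power) with Fleiss' continuity correction; the paper does not say which formula was used, so the following stdlib Python sketch assumes that one:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sided test of two independent proportions,
    using the normal approximation plus Fleiss' continuity correction."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    d = abs(p1 - p2)
    # Uncorrected per-group sample size
    n = (z_a * sqrt(2 * p_bar * (1 - p_bar))
         + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    # Fleiss' continuity correction
    n_cc = (n / 4) * (1 + sqrt(1 + 4 / (n * d))) ** 2
    return ceil(n_cc)

per_group = sample_size_two_proportions(0.60, 0.70)
print(per_group, 2 * per_group)  # 376 752
```

With these assumptions the calculation returns 376 per arm, i.e. 752 in total, matching the figure quoted in the text.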
The number of patients referred to the exercise referral scheme in the past year with complete name and address information was 627. Their mean age was 48 years (SD 13, range 18 to 78) and 69.2% (434/627) were women. Randomisation allocated 49.0% (307/627) of patients to receive a pre-warning letter and 51.0% (320/627) no pre-warning letter (Figure 1). The two groups were balanced with respect to sex (66.8% female in the pre-warning group compared with 71.6% in the control group) and age (mean 48.7 years, SD 13.3, in the pre-warning group compared with mean 47.6 years, SD 13.9 in the control group).
The final response rate to the postal survey, after completing two stages of follow-up, was 25.4% (159/627). In the pre-warning group, the response rate was 30.0% (92/307) compared with 20.9% (67/320) in the control group (χ2 6.75, p = 0.009), giving a difference between the two groups of 9.1% (95% CI for risk difference, 2.2% to 15.8%) (Figure 2). In a logistic regression model, the pre-warning letter increased the odds of returning the questionnaire by 1.61 (95% confidence interval 1.12, 2.32), and this was not altered after adjusting for age (in years) or sex (ORadj age sex 1.60, 95% CI 1.11, 2.30).
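The reported test statistic and intervals can be reproduced from the raw counts alone. The stdlib Python sketch below uses Wald-type intervals; the crude cross-product odds ratio works out at 1.62, so the 1.61 in the text presumably comes from the (unadjusted) logistic regression model's rounding:

```python
from math import sqrt, log, exp
from statistics import NormalDist

# Observed counts from the trial: responders / group size
a, n1 = 92, 307   # pre-warning letter group
c, n2 = 67, 320   # control group
b, d = n1 - a, n2 - c  # non-responders in each group

# Risk difference with a 95% Wald confidence interval
p1, p2 = a / n1, c / n2
rd = p1 - p2
se_rd = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
rd_ci = (rd - 1.96 * se_rd, rd + 1.96 * se_rd)

# Pearson chi-square on the 2x2 table (no continuity correction), 1 df
n = n1 + n2
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# For 1 df, chi2 is the square of a standard normal deviate
p_value = 2 * (1 - NormalDist().cdf(sqrt(chi2)))

# Odds ratio with a 95% Wald CI on the log scale
or_ = (a * d) / (b * c)
se_ln_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
or_ci = (exp(log(or_) - 1.96 * se_ln_or), exp(log(or_) + 1.96 * se_ln_or))

print(f"RD {rd:.3f} (95% CI {rd_ci[0]:.3f}, {rd_ci[1]:.3f})")  # RD 0.090 (95% CI 0.022, 0.158)
print(f"chi2 {chi2:.2f}, p = {p_value:.3f}")                   # chi2 6.75, p = 0.009
print(f"OR {or_:.2f} (95% CI {or_ci[0]:.2f}, {or_ci[1]:.2f})") # OR 1.62 (95% CI 1.12, 2.32)
```

These match the figures in the text to the stated precision: a 9.0–9.1% risk difference (2.2% to 15.8%), χ2 = 6.75 with p = 0.009, and an odds ratio of about 1.62 (1.12, 2.32).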
Sending a pre-warning letter seven days in advance of mailing out a postal questionnaire had a modest impact, increasing final response rates by almost 10%, a relative 43% increase compared with sending no pre-warning letter. Patients in our study were selected on the basis of a previous referral to a community based exercise referral scheme, and the questionnaire sent to them sought information about their perceptions of the service quality. An earlier trial examining the impact of this service on a similar group of patients, in terms of increasing physical activity, had achieved average response rates of 60%. The much lower than expected response rate in the current study could have been influenced by the topic and purpose of the questionnaire, along with its layout.
Our findings are consistent with the evidence from a systematic review that pre-contact can increase the odds of response by as much as 50%. Our study confirms that this benefit extends to groups of sedentary patients known in advance to be reluctant to reply. Pre-warning letters are simple to administer and relatively low cost compared with more labour intensive methods of increasing response rates, such as telephone reminders or face-to-face visits. Moreover, this intervention does not require any additional information or administrative systems over and above those required for sending the main questionnaire. Given that the response rate in the intervention group still remained low, at 30%, we may need to consider alternative approaches to postal questionnaires to obtain information from this group of patients, particularly if non-response could be associated with the outcomes examined within the survey – in this case, the patient experience of the exercise referral service.
Abramson JH: Survey Methods in Community Medicine. 1990, Edinburgh: Churchill Livingstone, 4
Armstrong BK, White E, Saracci R: Principles of exposure measurement in epidemiology. (Monographs in epidemiology and biostatistics, vol 21). 1992, New York : Oxford University Press, 294-321.
Bostrum G, Hallquist J, Haglund BJ, Romelsjö A, Svanström L, Diderichsen F: Socioeconomic differences in smoking in an urban Swedish population. The bias introduced by non-participation in a mailed questionnaire. Scand J Social Med. 1993, 21: 77-82.
Hill A, Roberts J, Ewings P, Gunnell D: Non-response bias in a lifestyle survey. J Public Health Med. 1997, 19: 203-207.
Jackson R, Chambless LE, Yang K, Byrne T, Watson R, Folsom A, Shahar E, Kalsbeek W, for the Atherosclerosis Risk in Communities (ARIC) Study Investigators: Differences between respondents and nonrespondents in a multicenter community-based study vary by gender and ethnicity. J Clin Epidemiol. 1996, 49: 1441-1446. 10.1016/0895-4356(95)00047-X.
Lahaut VMHCJ, Jansen HAM, Mheen DVD, Garretsen HFL: Non-response bias in a sample survey on alcohol consumption. Alcohol Alcoholism. 2002, 37: 256-260. 10.1093/alcalc/37.3.256.
Launer LJ, Wind AW, Deeg DJH: Nonresponse patterns and bias in a community-based cross-sectional study of cognitive functioning among the elderly. Am J Epidemiol. 1994, 139: 803-812.
Etter JF, Perneger TV: Analysis of non-response bias in a mailed health survey. J Clin Epidemiol. 1997, 50: 1123-1128. 10.1016/S0895-4356(97)00166-2.
Rönmark E, Lundqvist A, Lundbäck B, Nyström L: Non-responders to a postal questionnaire on respiratory symptoms and diseases. Eur J Epidemiol. 1999, 15: 293-299. 10.1023/A:1007582518922.
Rupp I, Triemstra M, Boshuizen HC, Jacobi CE, Dinant HJ, Van Den Bos GAM: Selection bias due to non-response in a health survey among patients with rheumatoid arthritis. Eur J Public Health. 2002, 12: 131-135. 10.1093/eurpub/12.2.131.
Van Amelsvoort LGPM, Beurskens AJHM, Kant I, Swaen GMH: The effect of non-random loss to follow-up on group mean estimates in a longitudinal study. Eur J Epidemiol. 2004, 19: 15-23. 10.1023/B:EJEP.0000013401.81078.84.
Barchielli A, Balzi D: Nine-year follow-up of a survey on smoking habits in Florence (Italy): higher mortality among non-responders. Int J Epidemiol. 2002, 31: 1038-1042. 10.1093/ije/31.5.1038.
Adams MME, Scherr PA, Branch LG, Hebert LE, Cook NR, Lane AM, Brock DB, Evans DA, Taylor JO: A comparison of elderly participants in a community survey with nonparticipants. Public Health Rep. 1990, 105: 617-622.
Bakke P, Gulsvik A, Lilleng P, Overå O, Hanoa R, Eide GE: Postal survey on airborne occupational exposure and respiratory disorders in Norway: causes and consequences of non-response. J Epidemiol Community Health. 1990, 44: 316-320.
Shahar E, Jackson R, for the Atherosclerosis risk in communities (ARIC) study investigators: The effect of nonresponse on prevalence estimates for a referent population: Insights from a population-based cohort study. Ann Epidemiol. 1996, 6: 498-506. 10.1016/S1047-2797(96)00104-4.
Barriball KL, While AE: Non-response in survey research: a methodological discussion and development of an explanatory model. J Adv Nurs. 1999, 30: 677-686. 10.1046/j.1365-2648.1999.01117.x.
Kessler RC, Little RJA, Groves RM: Advances in strategies for minimizing and adjusting for survey nonresponse. Epidemiol Rev. 1995, 17: 192-204.
Office for National Statistics: 60 years of Social Survey 1941–2001. 2004, Office for National Statistics
Birnbaum D, Hoch J: Answers to previous questions. J Health Serv Res Policy. 2004, 9: 127.
Hoch J: Answers to previous questions. J Health Serv Res Policy. 2004, 9: 126. 10.1258/135581904322987571.
Spicker P, Hoch J: Answers to previous questions. J Health Serv Res Policy. 2004, 9: 127.
Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, Kwan I: Increasing response rates to postal surveys: systematic review. BMJ. 2002, 324: 1183-1185. 10.1136/bmj.324.7347.1183.
Harrison RA, Roberts C, Elton PJ: Does primary care referral to an exercise programme increase physical activity one year later? A randomised controlled trial. J Public Health.
Harrison RA, Holt D, Elton PJ: Do postage-stamps increase response rates to postal surveys? A randomized controlled trial. Int J Epidemiol. 2002, 31: 872-874. 10.1093/ije/31.4.872.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/4/31/prepub
The authors declare that they have no competing interests.
RAH conceived the study design, drafted the main protocol, carried out randomisation, analysed the results and wrote the first draft of the paper.
DC was responsible for completing the study, for data entry, for assisting with data analysis and significant comments on the manuscript.
About this article
Cite this article
Harrison, R.A., Cock, D. Increasing response to a postal survey of sedentary patients – a randomised controlled trial [ISRCTN45665423]. BMC Health Serv Res 4, 31 (2004). https://doi.org/10.1186/1472-6963-4-31
- Sedentary Behaviour
- Postal Questionnaire
- Postal Survey
- Nonresponse Bias
- Increase Response Rate