Satisfaction of patients hospitalised in psychiatric hospitals: a randomised comparison of two psychiatric-specific and one generic satisfaction questionnaires
BMC Health Services Research, volume 6, Article number: 108 (2006)
While there is interest in measuring the satisfaction of patients discharged from psychiatric hospitals, it remains unclear whether such surveys should employ generic or psychiatry-specific instruments. The aim of this study was to compare two psychiatric-specific questionnaires and one generic questionnaire assessing patients' satisfaction after hospitalisation in a psychiatric hospital.
We randomised adult patients discharged from two Swiss psychiatric university hospitals between April and September 2004 to receive one of three instruments: the Saphora-Psy questionnaire, the Perceptions of Care survey or the Picker Institute questionnaire for acute care hospitals. In addition to comparing response rates, completion times, mean numbers of missing items and mean ceiling effects, we centred the comparison on the patients themselves and asked them to answer ten evaluation questions about the questionnaire they had just completed.
728 out of 1550 eligible patients (47%) participated in the study. Across questionnaires, response rates were similar (Saphora-Psy: 48.5%, Perceptions of Care: 49.9%, Picker: 43.4%; P = 0.08), average completion time was lowest for the Perceptions of Care questionnaire (minutes: Saphora-Psy: 17.7, Perceptions of Care: 13.7, Picker: 17.5; P = 0.005), the Saphora-Psy questionnaire had the largest mean proportion of missing responses (Saphora-Psy: 7.1%, Perceptions of Care: 2.8%, Picker: 4.0%; P < 0.001) and the Perceptions of Care questionnaire showed the highest ceiling effect (Saphora-Psy: 17.1%, Perceptions of Care: 41.9%, Picker: 36.3%; P < 0.001). There were no differences in the patients' evaluation of the questionnaires.
Despite differences in the intended target population, content, lay-out and length of questionnaires, none appeared to be obviously better based on our comparison. All three presented advantages and drawbacks and could be used for the satisfaction evaluation of psychiatric inpatients. However, if comparison across medical services or hospitals is desired, using a generic questionnaire might be advantageous.
Continuous quality improvement, comparison of quality across hospitals, and demands for accountability are some of the reasons that drive hospitals to measure patient satisfaction. An unresolved issue is whether surveys of psychiatric patients should employ generic instruments or questionnaires specifically designed for this healthcare setting. Generic instruments allow comparisons across settings but may lack content validity compared with condition-specific instruments. Several condition-specific instruments for psychiatric inpatients are available, such as the Perceptions of Care questionnaire and the Saphora-Psy questionnaire in France.
The choice of the most suitable questionnaire is often left to professionals, and patients are rarely consulted. Studies conducted to determine whether one questionnaire is more suitable than another are rare, and none has addressed this issue for psychiatric patients. Several authors have, however, analysed psychiatric inpatient satisfaction [4–10], or developed and validated instruments designed specifically to assess inpatient psychiatric services [1, 11–13].
In this study, we compared two psychiatric-specific questionnaires and one generic questionnaire aimed at evaluating the opinion and satisfaction of psychiatric patients regarding the care they received during hospitalisation. We assessed their characteristics and asked the patients to rate the questionnaire they had just completed.
Study design and setting
Between April and September 2004, we conducted a survey of three satisfaction questionnaires administered at random (computer-generated randomisation) to all eligible patients discharged from psychiatric hospitals of Vaud and Geneva university centres, Switzerland, which are spread over three sites in Vaud and one in Geneva. The randomisation was stratified per site. A random number was assigned to each patient on the monthly discharge list; this list was then sorted by the random number, and consecutive thirds of the patient list received a different questionnaire. On two of the three sites in Vaud (which are smaller hospitals), only two instruments were distributed: either the Perceptions of Care and Picker, or Saphora-Psy and Picker questionnaires. This was motivated by the primary goal of the survey which was to obtain a measure of patient satisfaction in all sites using the Picker Institute questionnaire, already in use there. In Geneva and in the remaining centre in Vaud, all three questionnaires were administered.
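The allocation scheme described above (a random number per patient, a sort on that number, and consecutive thirds of the sorted list receiving different questionnaires) can be sketched in a few lines. This is a minimal illustration with assumed function and variable names, not the code actually used in the survey:

```python
import random
from collections import Counter

def allocate(discharge_list, instruments, seed=None):
    """Assign a random key to each patient, sort on it, and give each
    consecutive block of the sorted list a different questionnaire.
    Run once per site to reproduce the per-site stratification."""
    rng = random.Random(seed)
    keyed = sorted(discharge_list, key=lambda _patient: rng.random())
    block = -(-len(keyed) // len(instruments))  # ceiling division: block size
    return {patient: instruments[i // block] for i, patient in enumerate(keyed)}

# Hypothetical monthly discharge list for one site:
allocation = allocate([f"patient-{i}" for i in range(9)],
                      ["Saphora-Psy", "Perceptions of Care", "Picker"], seed=42)
print(Counter(allocation.values()))  # three groups of 3 patients each
```

Sorting on a fresh random key per patient is equivalent to a uniform shuffle, so the consecutive blocks form randomised groups of near-equal size within each stratum.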
All psychiatric inpatients aged 18 years and over who had been hospitalised for more than 24 hours were included in the study. Patients treated in psycho-geriatric wards, staying in prison, residing outside Switzerland, or who died during hospitalisation or were transferred to another hospital during that stay were excluded. Patients hospitalised multiple times during the survey period were contacted once only. Secondary exclusions, defined a priori, were carried out during data collection (patients who considered themselves or were considered by their proxies to be too sick to complete a questionnaire, who had died after discharge, who did not understand French, or whose address was invalid). Based on the number of patients hospitalised in the psychiatric wards of Geneva and Vaud during April 2003, and on an anticipated participation rate of 50%, we estimated that about 200 patients per questionnaire could reasonably be included in this study.
As with all patient satisfaction surveys and other quality improvement activities, this project was exempted from review by the research ethics committees of the Lausanne and Geneva University Hospitals.
The survey was conducted by mail. The first survey package was sent out 4 to 8 weeks after discharge. A postcard reminder was sent one week after the initial mailing and a full survey package was sent to non-respondents two weeks later, and again approximately 4 to 6 weeks from the initial mailing if no reply was received in-between. The survey package included a cover letter, the randomly assigned questionnaire, a set of evaluation and patients' characteristics items, and a stamped return envelope. The cover letter presented the survey and indicated that participation was voluntary. Confidentiality and anonymity issues were emphasized. In addition, a telephone hotline was set up 3 days a week to answer any questions raised by this survey. A screening question identifying patients too sick to respond, who did not understand French or declined participation, was incorporated on the front page of the questionnaire.
We compared three previously developed satisfaction questionnaires intended to evaluate hospital stays. We selected candidate instruments that had been validated, were available in French (Picker Institute and Saphora-Psy questionnaires) and/or had already been used for surveys of psychiatric patients in Switzerland (Perceptions of Care and Picker Institute questionnaires). Two were specific to adult psychiatric inpatients (Perceptions of Care survey, Saphora-Psy), while the third was designed for adult acute care inpatients (Picker Institute questionnaire for acute inpatient somatic care) [14–16]. Both the Perceptions of Care and the Picker Institute questionnaires were based on reports of patients' experiences, whereas the Saphora-Psy was mainly a satisfaction questionnaire per se. While the Picker Institute and Saphora-Psy questionnaires were developed using qualitative methods aimed at exploring patients' needs and concerns, the Perceptions of Care questionnaire was developed by selecting items related to specific domains from previous work and a literature review. In addition, the Saphora-Psy questionnaire was developed in French, and a French translation was available for the Picker Institute questionnaire. Our group prepared a French translation of the Perceptions of Care questionnaire: we made six parallel translations into French, reached a consensus version after discussion, and had it back-translated into English by an independent native English speaker. The questionnaires are described in detail in Table 1.
At the end of each satisfaction questionnaire, patients were asked to record the time needed to complete it and to reply to ten questions evaluating the questionnaire. A few patient characteristics were also recorded, while age and sex were retrieved from administrative records.
Main outcome measures
The three satisfaction questionnaires were compared in terms of four characteristics and patient evaluation ratings. Characteristics included the response rate, the completion time, the mean proportion of missing items and the mean proportion of responses which gave the highest available rating (ceiling effect). These outcomes were selected because they, respectively, reflect: the acceptability and relevance to patients of the questionnaire as a whole, the respondent's burden, the adequacy of the items and ease of finding a suitable answer, and the ability to discriminate a high level of performance.
The ten evaluation ratings assessed the patient's opinion about the questionnaire just completed: difficulty of filling in the questionnaire, clarity of question formulation, importance of the questions addressed, ease of assessment of patients' concerns, whether important questions were missing, ease of finding suitable answers, length of the questionnaire, lay-out, appropriateness of wording, and general opinion about the questionnaire.
We used chi-squared tests for comparisons of categorical or dichotomous variables, one-way ANOVA for comparisons of means of continuous variables, and Kruskal-Wallis tests for non-normally distributed variables.
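As an illustration of the second of these tests, the F statistic underlying a one-way ANOVA can be computed directly as the ratio of between-group to within-group variance. This is a generic sketch with made-up numbers, not the study's actual analysis; in practice a statistics package also supplies the corresponding p-value:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k independent groups:
    mean square between groups over mean square within groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical continuous outcomes for three randomised groups:
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]]))  # 13.0
```

A large F indicates that the group means differ by more than within-group variability alone would suggest.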
For the calculation of (i) the mean proportion of missing values and (ii) the mean proportion of responses with the highest available rating (ceiling effect), the denominator corresponded to the total number of valid questions. For (i), all questions except open-ended questions were considered valid; for (ii), questions with yes/no answer options were additionally excluded. Thus, the mean proportion of missing values was calculated on 34 of 36 questions for the Saphora-Psy questionnaire, 18 of 21 for the Perceptions of Care questionnaire and all 50 for the Picker questionnaire. The proportion of responses at the highest available rating was computed on 31 of 36 questions for the Saphora-Psy questionnaire, 13 of 21 for the Perceptions of Care questionnaire and 35 of 50 for the Picker questionnaire.
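The two per-respondent proportions can be sketched as follows. The item sets and responses here are hypothetical, and the choice of valid items as the denominator follows the definitions above:

```python
def item_proportions(answers, missing_items, ceiling_items, top_rating):
    """answers: question id -> response, with None for a blank answer.
    missing_items: items counted for the missing-value proportion
                   (all items except open-ended questions).
    ceiling_items: rated items counted for the ceiling effect
                   (yes/no questions additionally excluded).
    top_rating:    the highest rating available on the response scale."""
    p_missing = sum(answers.get(q) is None for q in missing_items) / len(missing_items)
    p_ceiling = sum(answers.get(q) == top_rating for q in ceiling_items) / len(ceiling_items)
    return p_missing, p_ceiling

# One hypothetical respondent on a 5-point scale:
answers = {"q1": 5, "q2": None, "q3": 5, "q4": 3}
print(item_proportions(answers, ["q1", "q2", "q3", "q4"], ["q1", "q3", "q4"], 5))
```

Averaging these per-respondent proportions within each randomised group yields the mean missing-value proportion and mean ceiling effect reported for each questionnaire.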
A total of 1764 survey questionnaires were mailed to patients. After three complete mailings, 214 patients were found to be ineligible because of an invalid address, death, being too sick to respond, or inability to understand French. Of the remaining 1550 eligible patients, 728 answered the questionnaire, resulting in an overall participation rate of 47%.
The mean age of respondents was 40 years (SD 12.5); 59% were women, 71% were Swiss and the majority received compulsory or vocational training (63%). Except for depressive symptoms, the three randomised groups of patients were not significantly different (Table 2).
Characteristics of the questionnaires
Table 3 shows a comparison of the characteristics of the questionnaires. The response rates varied from 43% to 50%. Mean completion time was significantly shorter for the Perceptions of Care questionnaire than for the other two. The mean proportion of missing values was highest for the Saphora-Psy questionnaire, which, however, showed the lowest mean ceiling effect. The Perceptions of Care questionnaire had the lowest mean proportion of missing values but the highest ceiling effect. The Picker Institute questionnaire showed intermediate results.
Although the three instruments differed and were not all primarily targeted at psychiatric inpatients, patients rated them similarly (Table 4). The ten evaluation questions were, however, often statistically related in bivariate analyses and did not appear to be completely independent of each other (median Spearman correlation coefficient 0.256, range -0.013 to 0.502). In addition, 4 of the 21 Perceptions of Care questions were indicated as unclear or difficult to understand by more than 10 respondents.
Our findings show that, despite differences in the design, purpose, content, lay-out, length and initial target population of the three questionnaires, none appeared to be obviously better than the others, either when examining their characteristics or when considering the patients' assessments. All three appeared to have advantages and drawbacks.
Response rates did not differ across questionnaires, regardless of their lengths. In previous studies, higher response rates have not been consistently associated with shorter questionnaires [3, 17, 18]. It is possible that the patients' motivation and interest in the topic may be more important than the actual length of the questionnaire.
The high proportion of missing responses in the Saphora-Psy questionnaire could be at least partly explained by the presence of numerous questions starting with a conditional clause, such as "If you have had ...", which offer no answer option for patients who do not meet the initial criterion. In addition, Saphora-Psy is intended to be completed at the end of the hospitalisation, not after discharge. This is, however, also true for the Perceptions of Care questionnaire, which showed the lowest mean proportion of missing answers. The way questions and answer options are organised in the Saphora-Psy questionnaire may thus not be optimal. Developed for acute care inpatients, the Picker instrument includes a set of questions related to pain and surgery. Interestingly, these items did not have a high proportion of missing data, although pain may not have the same meaning for acute care and psychiatric patients, and surgery is rarely performed during psychiatric hospitalisation. This may reflect the appropriateness of the skip patterns.
Not surprisingly, the Perceptions of Care questionnaire, which has the lowest mean number of response categories per question, showed the highest ceiling effects. This questionnaire may therefore be less sensitive to changes and less able to discriminate at the high end of the satisfaction spectrum.
Because disease-specific health status or quality-of-life questionnaires often perform differently from generic instruments [19, 20], we were surprised not to detect differences between the psychiatry-specific and generic questionnaires. Examination of their contents suggests that the two psychiatry-specific questionnaires did not differ much from the generic instrument. An alternative hypothesis is that patients hospitalised in mental health facilities have the same basic needs and expectations as any other patients. Another possibility is that we did not consider all the discriminative characteristics that would allow detection of true differences between these three questionnaires.
The randomised allocation of questionnaires, the use of three methodically developed satisfaction instruments and the relatively large number of psychiatric inpatients included were important strengths of this study. However, our study does have limitations. First, the participation rate was relatively low, but similar to rates reported in other satisfaction studies of psychiatric patients, who are less likely to respond to questionnaires than other patients [8, 22]. The effect of selection bias on the measures of interest is unpredictable; however, because the participation rate was similar for all three instruments, the comparison of the questionnaires is likely to be internally valid. Second, the sample size could have prevented the detection of true differences between questionnaires. Third, patients did not make comparative judgements on the three questionnaires, which might have yielded different evaluation results; they may also not be the best judges to assess questionnaires objectively. In addition, the ten evaluation questions assessing the patient's opinion about the questionnaires were correlated, with a moderate average between-item correlation. Even though this statistical association could suggest that these outcome measures should not be considered separately, we analysed each question separately because we were interested in each outcome variable as such. Moreover, given the lack of differences among the three questionnaires, it is unlikely that a composite index would show differences. Fourth, we centred our study on patients and did not assess the opinions of other healthcare stakeholders; for quality improvement purposes, their opinion about the usefulness of the selected satisfaction questionnaire might be of interest. Finally, we evaluated only three of the many available satisfaction questionnaires.
As we did not address all possible aspects of selecting a satisfaction questionnaire, further research would be needed to assess, for example, the opinion of other healthcare stakeholders, whether questionnaires perform equally well with all types of patients hospitalised in psychiatric wards, and which questionnaire would be best for a specific situation.
Our results suggest that all three satisfaction questionnaires, one generic and two psychiatry-specific, presented advantages and drawbacks, and obtained similar patient evaluation ratings for the features we examined, despite differences in their design and purpose, intended target population, content, lay-out and length. Accordingly, all could be used to evaluate the satisfaction of patients admitted to psychiatric hospitals. However, other criteria might be considered when deciding which questionnaire is most suitable in a given context. For instance, if comparison across medical services or hospitals is the main objective, at least for a set of items, and practical constraints have to be taken into account, using a single generic questionnaire might be advantageous.
Eisen SV, Wilcox M, Idiculla T, Speredelozzi A, Dickey B: Assessing consumer perceptions of inpatient psychiatric treatment: the perceptions of care survey. Jt Comm J Qual Improv. 2002, 28: 510-526.
Manuel Saphora-Psy. [http://www.ccecqa.asso.fr]
Perneger TV, Kossovsky MP, Cathieni F, di Florio V, Burnand B: A randomized trial of four patient satisfaction questionnaires. Med Care. 2003, 41: 1343-1352. 10.1097/01.MLR.0000100580.94559.AD.
Langle G, Baum W, Wollinger A, Renner G, U'Ren R, Schwarzler F, Eschweiler GW: Indicators of quality of in-patient psychiatric treatment: the patients' view. Int J Qual Health Care. 2003, 15: 213-221. 10.1093/intqhc/mzg032.
Greenwood N, Key A, Burns T, Bristow M, Sedgwick P: Satisfaction with in-patient psychiatric services. Relationship to patient and treatment factors. Br J Psychiatry. 1999, 174: 159-163.
Holcomb WR, Parker JC, Leong GB, Thiele J, Higdon J: Customer satisfaction and self-reported treatment outcomes among psychiatric inpatients. Psychiatr Serv. 1998, 49: 929-934.
Rosenheck R, Wilson NJ, Meterko M: Influence of patient and hospital factors on consumer satisfaction with inpatient mental health treatment. Psychiatr Serv. 1997, 48: 1553-1561.
Hoff RA, Rosenheck RA, Meterko M, Wilson NJ: Mental illness as a predictor of satisfaction with inpatient care at Veterans Affairs hospitals. Psychiatr Serv. 1999, 50: 680-685.
Hermann RC, Ettner SL, Dorwart RA: The influence of psychiatric disorders on patients' ratings of satisfaction with health care. Med Care. 1998, 36: 720-727. 10.1097/00005650-199805000-00011.
Kelstrup A, Lund K, Lauritsen B, Bech P: Satisfaction with care reported by psychiatric inpatients. Relationship to diagnosis and medical treatment. Acta Psychiatr Scand. 1993, 87: 374-379.
Gigantesco A, Morosini R, Bazzoni A: Quality of psychiatric care: validation of an instrument for measuring inpatient opinion. Int J Qual Health Care. 2003, 15: 73-78. 10.1093/intqhc/15.1.73.
Kolb SJ, Race KE, Seibert JH: Psychometric evaluation of an inpatient psychiatric care consumer satisfaction survey. J Behav Health Serv Res. 2000, 27: 75-86. 10.1007/BF02287805.
Barker DA, Orrell MW: The Psychiatric Care Satisfaction Questionnaire: a reliability and validity study. Soc Psychiatry Psychiatr Epidemiol. 1999, 34: 111-116. 10.1007/s001270050120.
Jenkinson C, Coulter A, Bruster S: The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002, 14: 353-358. 10.1093/intqhc/14.5.353.
Jenkinson C, Coulter A, Bruster S, Richards N, Chandola T: Patients' experiences and satisfaction with health care: results of a questionnaire study of specific aspects of care. Qual Saf Health Care. 2002, 11: 335-339. 10.1136/qhc.11.4.335.
Cleary PD, Edgman-Levitan S, Roberts M, Moloney TW, McMullen W, Walker JD, Delbanco TL: Patients evaluate their hospital care: a national survey. Health Aff (Millwood). 1991, 10: 254-267. 10.1377/hlthaff.10.4.254.
Subar AF, Ziegler RG, Thompson FE, Johnson CC, Weissfeld JL, Reding D, Kavounis KH, Hayes RB, Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial Investigators: Is shorter always better? Relative importance of questionnaire length and cognitive ease on response rates and data quality for two dietary questionnaires. Am J Epidemiol. 2001, 153: 404-409. 10.1093/aje/153.4.404.
Hoffman SC, Burke AE, Helzlsouer KJ, Comstock GW: Controlled trial of the effect of length, incentives, and follow-up techniques on response to a mailed questionnaire. Am J Epidemiol. 1998, 148: 1007-1011.
Bessette L, Sangha O, Kuntz KM, Keller RB, Lew RA, Fossel AH, Katz JN: Comparative responsiveness of generic versus disease-specific and weighted versus unweighted health status measures in carpal tunnel syndrome. Med Care. 1998, 36: 491-502. 10.1097/00005650-199804000-00005.
Patrick DL, Deyo RA: Generic and disease-specific measures in assessing health status and quality of life. Med Care. 1989, 27: S217-S32.
Sitzia J, Wood N: Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care. 1998, 10: 311-317. 10.1093/intqhc/10.4.311.
Perneger TV, Chamot E, Bovier PA: Nonresponse bias in a survey of patient perceptions of hospital care. Med Care. 2005, 43: 374-380. 10.1097/01.mlr.0000156856.36901.40.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/6/108/prepub
We are grateful to the following persons, who also participated in the organization of the survey and in the translation process of the Perceptions of Care survey: Philippe Budry, Didier Camus, Agathe Charvet, Tania Larequi Lauber, Maliza Nzabonimpa, Tamara Olsen, Christian Perler.
The authors were supported by their institutions, and the quality processes and programmes at both institutions supported the study. The funding body did not intervene in the design, analysis or interpretation of the data, in the writing of the manuscript, or in the decision to submit the manuscript.
The author(s) declare that they have no competing interests.
IPB analysed and interpreted the data, and drafted the manuscript. FC, FS, LP, and VK participated in the design of the study, helped in the acquisition and analysis of the data, and to draft the manuscript. CB participated in the conception and design of the study and helped to draft the manuscript. AC (deceased) participated in the design of the study and helped in the acquisition of the data. BB and TVP conceived the study, participated in its design and coordination, helped in the analysis of the data and critically revised the manuscript.
All authors read and approved the final version of the article.