Respondents
The Dutch version of the HSOPS was distributed in eight hospitals in the Netherlands in June 2005. The hospitals differed by teaching status: four general hospitals, three teaching hospitals and one university hospital. Their capacity varied from 530 to 1120 beds, and the participating hospitals were located across the Netherlands. Within the eight hospitals, 23 units participated (two to five units per hospital): six internal medicine units, five intensive care units, three surgical units, three emergency departments, two pediatrics units, two neurology units and one psychiatry unit. Units and hospitals were not randomly selected: the participating units were about to introduce an incident reporting system and wanted to assess their patient safety culture prior to the implementation of the new system. In each unit, a random sample of about 30 healthcare providers was drawn, depending on unit size. When a unit had fewer than 30 staff members, all healthcare providers of the unit were asked to participate.
The questionnaire was disseminated on paper through the mail sorting boxes of all selected healthcare providers at the unit; a research coordinator in the hospital took care of the distribution. To ensure confidentiality, respondents could send the questionnaire directly to the researchers outside the hospital in a postage-paid return envelope. The management board and medical board of each participating hospital formally consented to participation in the study. Under Dutch law, formal ethical approval was not required for this study.
A total of 583 respondents completed the questionnaire. Most respondents worked as registered nurses (59.8%). Other respondents worked as medical consultants (6.8%), resident physicians (6.0%), administrative staff (4.3%), nurses in training (2.6%) or in management (2.4%). These percentages reasonably reflect the actual distribution of disciplines at the units.
Questionnaire
Background variables
Work-related information, e.g. the respondent's primary department in the hospital, how long he/she had been working at this unit, how many hours a week he/she worked there and in which function.
Items on patient safety culture
Most items on patient safety culture are answered on a five-point agreement scale, from 'strongly disagree' (1) to 'strongly agree' (5), with a neutral category 'neither' (3). Other items are answered on a five-point frequency scale from 'never' (1) to 'always' (5). In addition, there are two single-item outcome variables: 1) Patient safety grade, measured on a five-point scale from 'excellent' (1) to 'failing' (5), and 2) Number of events reported, i.e. how often the respondent has submitted an event report in the past 12 months (answer categories: 'none', '1–2 event reports', '3–5 event reports', '6–10 event reports' and '11–20 event reports').
The original items were validated by the Agency for Healthcare Research and Quality (AHRQ) for the US hospital setting [15]. Factor analysis resulted in 12 factors (dimensions). The codes in brackets after each dimension refer to the questionnaire sections and question numbers; the same mapping is captured as a lookup table in the sketch following the list.
F1 Teamwork across hospital units (F2, F4, F6, F10)
F2 Teamwork within units (A1, A3, A4, A11)
F3 Hospital handoffs and transitions (F3, F5, F7, F11)
F4 Frequency of event reporting (D1, D2, D3)
F5 Nonpunitive response to error (A8, A12, A16)
F6 Communication openness (C2, C4, C6)
F7 Feedback and communication about error (C1, C3, C5)
F8 Organisational learning – continuous improvement (A6, A9, A13)
F9 Supervisor/manager expectations and actions promoting patient safety (B1, B2, B3, B4)
F10 Hospital management support for patient safety (F1, F8, F9)
F11 Staffing (A2, A5, A7, A14)
F12 Overall perceptions of safety (A10, A15, A17, A18)
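For use in the analysis sketches further on, this dimension-to-item mapping can be captured in a simple lookup table. The item codes (e.g. 'A1', 'F2') are assumed here to double as the column names of the survey data set; that naming convention is an illustrative choice, not part of the original questionnaire.

```python
# Mapping of the 12 HSOPS dimensions to questionnaire item codes, taken
# directly from the list above. The codes are assumed to double as column
# names in the survey data set (an illustrative convention).
HSOPS_DIMENSIONS = {
    "Teamwork across hospital units": ["F2", "F4", "F6", "F10"],
    "Teamwork within units": ["A1", "A3", "A4", "A11"],
    "Hospital handoffs and transitions": ["F3", "F5", "F7", "F11"],
    "Frequency of event reporting": ["D1", "D2", "D3"],
    "Nonpunitive response to error": ["A8", "A12", "A16"],
    "Organisational learning - continuous improvement": ["A6", "A9", "A13"],
    "Communication openness": ["C2", "C4", "C6"],
    "Feedback and communication about error": ["C1", "C3", "C5"],
    "Supervisor/manager expectations and actions promoting patient safety": ["B1", "B2", "B3", "B4"],
    "Hospital management support for patient safety": ["F1", "F8", "F9"],
    "Staffing": ["A2", "A5", "A7", "A14"],
    "Overall perceptions of safety": ["A10", "A15", "A17", "A18"],
}
```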
Data screening and pre-analyses
Completeness of the data was checked. Five respondents were excluded from the analyses because they had completed less than half of all items. When a respondent had chosen two or more options for one item, that item was marked as missing; this rarely occurred. Missing values were replaced by the mean score of the respondents on that item. The highest numbers of missing values were found in part D (Frequency of event reporting): 3.8% to 4.5% of the responses to these items were missing. No items were excluded on the basis of the percentage of missing values. The distribution of only one variable was skewed, i.e. Number of events reported. There were no variables with 80% or more of the answers in one category.
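A minimal sketch of this screening and imputation step, assuming the item responses are held in a pandas DataFrame named responses with one row per respondent and the item codes above as columns (both the name and the layout are assumptions):

```python
import pandas as pd

def screen_and_impute(responses: pd.DataFrame) -> pd.DataFrame:
    """Exclude respondents who completed less than half of the items, report
    per-item missingness, and impute remaining gaps with the item mean."""
    n_items = responses.shape[1]

    # Drop respondents who completed less than half of all items.
    completed = responses.notna().sum(axis=1)
    kept = responses.loc[completed >= n_items / 2].copy()

    # Percentage of missing values per item (e.g. to inspect part D).
    missing_pct = kept.isna().mean() * 100
    print(missing_pct.sort_values(ascending=False).head())

    # Item-mean imputation: replace each missing value with the mean score
    # of all remaining respondents on that item.
    return kept.fillna(kept.mean())

# Usage (with `responses` the raw survey DataFrame of item scores):
# items = screen_and_impute(responses)
```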
We checked whether the inter-item correlations were sufficient by examining the correlation matrix. Questions belonging to the same underlying dimension should correlate, as they measure the same aspect of patient safety culture. Items that do not correlate, or that correlate with only a few other items, are not suited for factor analysis [20]. Bartlett's test demonstrated that the inter-item correlations were sufficient: χ2 = 6456.8; df = 861; p < 0.001.
We also checked whether the opposite occurred: too much correlation between the items. Ideally, every aspect of patient safety culture contributes uniquely to the concept of patient safety culture. A high correlation between two items means that the patient safety culture aspects overlap to a large extent; at a correlation of 0.7, the overlap in answer patterns is about 50% [20]. No inter-item correlation exceeded this threshold.
In addition, the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) was determined. This value can range from 0 to 1; a value close to 1 indicates that the pattern of correlations is compact, so that factor analysis should yield distinct and reliable dimensions [20]. The KMO score was 0.9, far above Kaiser's criterion of 0.5. These pre-analyses demonstrated that the data were suitable for factor analysis.
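The three pre-analyses (the check for excessive inter-item correlations, Bartlett's test and the KMO measure) could be reproduced along the following lines. The sketch assumes the imputed item scores (items) from the previous sketch and uses the third-party factor_analyzer package as a stand-in for SPSS.

```python
import numpy as np
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def pre_analyses(items):
    """Check correlation bounds, Bartlett's sphericity and the KMO measure."""
    # Item pairs that correlate too strongly (r > 0.7, i.e. roughly 50% shared variance).
    corr = items.corr()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1)).stack()
    print("Item pairs with r > 0.7:", upper[upper > 0.7].to_dict())

    # Bartlett's test of sphericity: are the inter-item correlations sufficient?
    chi2, p_value = calculate_bartlett_sphericity(items)
    print(f"Bartlett: chi2 = {chi2:.1f}, p = {p_value:.4f}")

    # Kaiser-Meyer-Olkin measure of sampling adequacy (should exceed 0.5).
    _, kmo_total = calculate_kmo(items)
    print(f"KMO = {kmo_total:.2f}")
```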
Statistical analyses
Factor analysis identifies which items are closely linked and jointly refer to an underlying dimension (or factor). The items can thus be reduced to the smallest possible number of concepts that still explain the largest possible part of the variance [20]. A confirmatory factor analysis was performed (principal component analysis with Varimax rotation) to investigate whether the factor structure of the American questionnaire can be used with Dutch data. The data were also studied with exploratory factor analysis (principal component analysis with Varimax rotation) to check whether the items form different factors in the Dutch situation. When establishing the number of factors, the eigenvalue (eigenvalue > 1: Kaiser's criterion) was taken into account, besides the proportion of explained variance, the shape of the scree plot and the interpretability of the factors. Kaiser's criterion is reliable in samples of more than 250 respondents and when the average communality is at least 0.6; the shape of the scree plot gives reliable information when the sample is larger than 200 respondents [20]. The data satisfied these conditions.
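Both runs could be sketched as follows, again on the imputed item scores and with the factor_analyzer package standing in for SPSS; its principal extraction with Varimax rotation approximates, but need not exactly reproduce, the SPSS procedure used in the study.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

def forced_twelve_factor_solution(items):
    """'Confirmatory' run: force the 12-factor structure of the US questionnaire."""
    fa = FactorAnalyzer(n_factors=12, method="principal", rotation="varimax")
    fa.fit(items)
    return fa.loadings_  # item-by-factor loading matrix after rotation

def kaiser_criterion(items):
    """Exploratory aid: eigenvalues of the correlation matrix, the proportion of
    variance they explain, and the number of factors with eigenvalue > 1."""
    eigenvalues = np.sort(np.linalg.eigvalsh(items.corr().to_numpy()))[::-1]
    explained = eigenvalues / len(eigenvalues)  # proportion of total variance per component
    n_factors = int((eigenvalues > 1).sum())    # Kaiser's criterion
    return eigenvalues, explained, n_factors
```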
The internal consistency of the factors was calculated with Cronbach's alpha (α), a value between 0 and 1. If different items are supposed to measure the same concept, the internal consistency (reliability) should be at least 0.6 [20]. Because the questionnaire contains both positively and negatively worded items, the negatively worded items were first reverse-coded, so that a higher score always indicates a more positive response.
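Reverse coding and the internal-consistency check follow directly from the standard Cronbach's alpha formula; the sketch assumes the five-point scale described earlier and takes the list of negatively worded item codes as an argument rather than reproducing it here.

```python
def reverse_code(items, negative_items):
    """Recode negatively worded five-point items (1..5) so that a higher
    score always means a more positive response."""
    recoded = items.copy()
    recoded[negative_items] = 6 - recoded[negative_items]
    return recoded

def cronbach_alpha(scale_items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum)."""
    k = scale_items.shape[1]
    item_variances = scale_items.var(axis=0, ddof=1)
    total_variance = scale_items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Usage: alpha per dimension, with HSOPS_DIMENSIONS as defined earlier.
# recoded = reverse_code(items, negative_items=[...])  # list of reverse-worded item codes
# alphas = {name: cronbach_alpha(recoded[cols]) for name, cols in HSOPS_DIMENSIONS.items()}
```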
Construct validity was studied by calculating scale scores for every factor (after any necessary reverse coding) and subsequently calculating Pearson correlation coefficients between the scale scores. The construct validity of each factor is reflected in scale scores that are moderately related. High correlations (r > 0.7), however, would indicate that factors measure the same concept; such factors may be combined and/or some of their items could be removed. In addition, correlations of the scale scores with the outcome variable Patient safety grade were calculated. No correlations were calculated with the other outcome variable, Number of events reported, because of the lack of variability and the skewed nature of this item (40% of the respondents indicated that they had not reported any events during the past 12 months and 41% had reported only one or two events). All statistical analyses were performed using SPSS 12.0.
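The construct validity step could be examined roughly as follows; 'PatientSafetyGrade' is a hypothetical column name for the single-item outcome, and recoded refers to the reverse-coded item scores from the previous sketch.

```python
import pandas as pd

def construct_validity(recoded, dimensions, safety_grade):
    """Scale scores per dimension (respondent mean over the dimension's items),
    their Pearson inter-correlations, and their correlation with the
    Patient safety grade outcome."""
    scale_scores = pd.DataFrame(
        {name: recoded[cols].mean(axis=1) for name, cols in dimensions.items()}
    )

    # Moderate correlations are expected; r > 0.7 would signal overlapping factors.
    between_scales = scale_scores.corr(method="pearson")

    # Correlation of each scale score with the single-item outcome variable.
    with_outcome = scale_scores.corrwith(safety_grade, method="pearson")
    return between_scales, with_outcome

# Usage (the outcome column name is an assumption about the data set):
# construct_validity(recoded, HSOPS_DIMENSIONS, responses["PatientSafetyGrade"])
```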