Open Access

Effects of auditing patient safety in hospital care: design of a mixed-method evaluation

  • Mirelle Hanskamp-Sebregts1,
  • Marieke Zegers2,
  • Wilma Boeijen1,
  • Gert P Westert2,
  • Petra J van Gurp1 and
  • Hub Wollersheim2
BMC Health Services Research 2013, 13:226

DOI: 10.1186/1472-6963-13-226

Received: 21 February 2013

Accepted: 18 June 2013

Published: 22 June 2013



Background

Auditing of patient safety aims at the early detection of risks of adverse events and is intended to encourage the continuous improvement of patient safety. Auditing should be an independent, objective assurance and consulting system: it helps an organisation accomplish its objectives by bringing a systematic, disciplined approach to evaluating and improving the effectiveness of risk management, control, and governance. Audits are widely conducted in hospitals, but little is known about their effects on the behaviour of healthcare professionals and on patient safety outcomes. This study was initiated to evaluate the effects of patient safety auditing in hospital care and to explore the processes and mechanisms underlying these effects.

Methods and design

Our study aims to evaluate an audit system to monitor and improve patient safety in a hospital setting. We are using a mixed-method evaluation with a before-and-after study design in eight departments of one university hospital in the period October 2011–July 2014. We measure several outcomes 3 months before the audit and 15 months after the audit. The primary outcomes are adverse events and complications. The secondary outcomes are experiences of patients, the standardised mortality ratio, prolonged hospital stay, patient safety culture, and team climate. We use medical record reviews, questionnaires, hospital administrative data, and observations to assess the outcomes. A process evaluation will be used to find out which components of internal auditing determine the effects.


Discussion

We report a study protocol of an effect and process evaluation to determine whether auditing improves patient safety in hospital care. Because auditing is a complex intervention targeted at several levels, we are using a combination of methods to collect qualitative and quantitative data about patient safety at the patient, professional, and department levels. This study is relevant for hospitals that want to detect unsafe care early and improve patient safety continuously.

Trial registration

Netherlands Trial Register (NTR): NTR3343


Keywords: Hospital, Patient safety, Safety management, Risk management, Complications, Management system audit, Clinical governance, Professional practice, Adverse events, Auditing


Background

Many patients experience adverse events during their hospital stay. The occurrence of adverse events varies from 3% to 17% of all hospital admissions worldwide. A significant proportion of these adverse events result in death (5–21%), of which half could have been prevented [1–7]. To obtain insight into safe hospital care, reliable data about the occurrence, causes, and preventability of adverse events have to be collected and made available. Commonly used methods for analysing unsafe hospital care and improving patient safety include accreditation, external peer reviews, internal audits, patient safety systems, and performance indicators [8, 9].

An internal audit should be an independent, objective assurance and consulting system for detecting patients’ risks of adverse events early, and it should encourage the continuous improvement of patient safety. An internal audit helps an organisation accomplish its objectives by providing a systematic, disciplined approach to evaluating and improving the effectiveness of risk management, control, and governance processes [10]. “Internal” means that trained employees of the hospital’s own organisation conduct the audits; auditors work in one department but audit another, which guarantees a degree of independent judgement. A major advantage of auditing is that, unlike the registration of hospital data and mortality rates, it may also reveal the underlying causes of safety problems and give clues as to which improvements should be made to prevent adverse events. Auditing also ensures the involvement of healthcare professionals in a peer-to-peer evaluation approach [11]. This bottom-up approach engages healthcare professionals at an early stage of the plan-do-check-act (PDCA) quality-improvement cycle.

Systematic literature reviews demonstrate that the effects of auditing and feedback on the behaviour of healthcare professionals and on patient outcomes range from none to substantial, with a maximum of a 70% increase in compliance with desired professional practice [12, 13]. Audits in these reviews focused on improving professional practice and guideline adherence within the group of professionals responsible for patient care. Little is known about the effects of audits organised at the hospital level and directed at several levels of patient care, including policy, patient safety culture, guideline adherence of professionals, and outcomes at the patient level [12, 13].

In order to be accredited, Dutch hospitals are required to have an internal audit system in place. All Dutch hospitals make efforts to control the quality and safety of care by means of some kind of auditing. These audits in the context of accreditation focus more on organisational preconditions and less on the behaviour of healthcare professionals and patient outcomes. However, whether these audits lead to early detection of risks of unsafe hospital care and, as a result, to safer healthcare, is unknown. Therefore, our study aims to evaluate the effects of auditing on patient safety outcomes and the performance of healthcare providers. Our main research questions are:
  1. Does auditing improve patient safety outcomes and professional practice in hospitals?

  2. What are the underlying processes and mechanisms of the effects of patient safety auditing?


The aim of this study protocol is to describe the study design. The effect of the audit system on various outcome measures and the process evaluation will be published in a separate manuscript.


Study design and setting

The study is a mixed-method evaluation study with a before-and-after study design.

The study is taking place in a 953-bed university hospital in the Netherlands. Eight hospital departments have been included: general surgery, neurosurgery, obstetrics and gynaecology, orthopaedics, pulmonary medicine, general internal medicine, cardiology, and paediatrics. Audits of these departments took place or are planned between October 2011 and April 2013. The departments were selected because of their estimated high risk of preventable adverse events, and together they reasonably represent medical practice in Dutch hospitals. Outpatient care and one-day hospital stays are excluded.

Auditing of patient safety

We are evaluating the audit system of the Radboud University Nijmegen Medical Centre (RUNMC) because this hospital optimised their audit system (Table 1) after a disaster in 2006. In 2006 it became known that, in the previous 2 years, the RUNMC had a significantly higher mortality rate than the national average (6.7% versus 2.7%) for adult cardiothoracic surgery patients with cardiac failure [14]. The Dutch Health Care Inspectorate and the Dutch Safety Board reported that the factors of unsafe care consisted of the lack of the following: leadership, standards and protocols to deliver high-quality patient care, well-structured handovers, discussions about the mortality and complications rate, integrated care, and successful evaluation systems for patient safety [14, 15].
Table 1

The development of an audit system


Steps for developing the audit system in the Radboud University Nijmegen Medical Centre


Introduction of an audit system for a test accreditation of hospital care from the Dutch Institute for Accreditation in Healthcare (NIAZ) → formal test of preconditions for good hospital care.


The first accreditation from the NIAZ was achieved.


The second accreditation from the NIAZ was achieved. However, despite the second accreditation, the Radboud case occurred. Afterwards, more focus on professional practice, leadership, teamwork, and patient safety outcomes was incorporated into the audit system, and valid and reliable instruments were selected to measure these aspects.


An independent Institute for Quality Assurance and Patient Safety to monitor patient safety and quality of care was established.

The audit process was professionalised:

• The audit team must report to the Board of the Institute for Quality Assurance and Patient Safety instead of to the Board of Directors of the hospital.

• The audit team was expanded with carefully selected physicians, nurses, and allied healthcare workers.

• Extensive training was set up for internal auditors to increase inter-rater agreement.

• The use of a reference framework made the audits more normative.

• Follow-up: revisiting was implemented to examine the progress of patient safety.


The audit system was optimised with:

• Structural audit analyses.


• Standard evaluation of experiences with auditing.

The RUNMC audit system is embedded in an organisational structure that measures, gives departmental feedback, and provides follow-up (Figure 1). This audit system combines professional activities and applied instruments to measure and analyse patient safety. The outcomes are compared to the legal, national, and professional standards of healthcare [16]. The methods used in the audit system are document studies, interviews, observations, surveys, medical record reviews, and appraisal and assessments (Table 2).
Figure 1

Audit components.

Table 2

Methods and instruments used within the audit system

Method and instrument

• Studying policy and quality indicators. Instrument: online self-assessment tool based on legal, national, and professional practice standards [16]

• Semi-structured interviews of healthcare providers. Instrument: standardised interview forms [17]

• Systematic observations (e.g. physicians’ discussions of complications and patient handovers). Instrument: standardised observation forms [17]

• Questionnaire about team functioning of healthcare providers. Instrument: Team Climate Inventory [18]

• Feedback of audit findings by presentation and report

• Patient record review to measure adverse events. Instrument: standardised record review form based on a protocol originally developed by the Harvard Medical Practice Study [19]

• Assessment of the quality of medical and nursing patient records. Instrument: standardised assessment forms [19, 20]

• Appraisal of document management (e.g. protocols and procedures) and guideline adherence. Instrument: standardised assessment forms partly based on the AGREE instrument [21]

• Follow-up: revisiting 15 months after the audit to monitor improvements

• Appraisal and assessment of the quality of consultation and collaboration by main internal and external partners. Instrument: standardised appraisal and assessment questionnaire [22]

All of the 40 hospital departments that deliver or facilitate patient care are audited once every 4 years according to a fixed procedure and scheme. The auditing (Table 3) consists of nine steps (Figure 2). An independent Institute for Quality Assurance and Safety organises the audit procedures to ensure a degree of independence; to underline that independence, the institute is headed by a board of six representatives of the hospital’s physicians, nurses, allied healthcare workers, and healthcare researchers. The audit team consists of five auditors, including at least one physician and one nurse, none of whom have a direct relationship with the audited department. The chairman of the audit team is the medical head of an adjacent specialty. Audit teams report their findings to the board of the Institute for Quality Assurance and Safety, which prioritises the findings for the head of the audited department. The department head establishes and implements an improvement plan on the basis of the audit report and the prioritisation. The Board of Directors of the hospital uses the audit information to control the risk of unsafe care and monitors the progress of the improvement plans.
Table 3




Internal auditing

An independent, objective assurance and consulting activity designed to add value and improve an organisation's operations. It helps an organisation accomplish its objectives by bringing a systematic, disciplined approach to evaluating and improving the effectiveness of risk management, control, and governance [10].

Audit process

A set of established methods for conducting the audit of a department. It describes the activities of auditors needed to achieve the audit objectives [23].

Audit team

A group of experienced, trained, and knowledgeable individuals selected to perform an internal audit. The audit team is responsible for auditing selected departments in its own hospital [23].

Auditee

The department or employee of the department being audited [23].
Figure 2

Audit steps and activities.

Conceptual framework

For this evaluation study, we developed a conceptual framework (Figure 3) on the basis of theories from the field of quality improvement [24, 25], implementation science [26], and Kirkpatrick’s learning model [27]. The conceptual model helps to explain the relationship between auditing and the possible effects on patient safety outcomes.
Figure 3

Conceptual framework.

The PDCA cycle is a well-known model for continuous process improvement [28]. It can be operationalised as:
  1. Plan: the department makes an improvement plan on the basis of the audit findings.

  2. Do: the department implements the improvement plan.

  3. Check: the audit team revisits the department to assess the progress of the improvement actions, as ordered by the Board of Directors.

  4. Act: the department takes action on the basis of the revisit findings. If the improvements cannot guarantee patient safety, the improvement plan should be adjusted.


In this study, we assume that internal audits affect Kirkpatrick’s four learning levels for healthcare providers and management [27]. The four levels of evaluation consist of:

  •  Level 1. Reaction – experiences of healthcare providers and management about auditing

  •  Level 2. Learning – the increase in knowledge and skills and the change in attitudes

  •  Level 3. Behaviour – change of behaviour as a result of learning

  •  Level 4. Results – the results in terms of a reduced level of adverse events after auditing.


We measure several outcomes at the patient, professional, and department levels (Table 4). The primary outcomes are adverse events and complications. The secondary outcomes are patient experiences, the standardised mortality ratio (SMR), prolonged hospital stay, team climate, and patient safety culture. These outcomes are assessed using medical record reviews, questionnaires, routine hospital administrative data, and observations. We collect data 3 months before the audit and will collect data again 15 months afterwards. We also measure patient experiences 9 months after auditing; the SMR and prolonged hospital stay are generated monthly from the routine administrative hospital data.
Table 4

Methods and instruments for measuring the effects of auditing

Primary outcome

• Adverse events and complications. Data source: retrospective patient record review based on a protocol originally developed by the Harvard Medical Practice Study [19]. Measurements: 2 (before-and-after), n = 400 per measurement. Moments: −3 months and +15 months.

Secondary outcomes

• Patient experiences. Data source: Consumer Quality Index questionnaire [29] based on the Consumer Assessment of Healthcare Providers and Systems [30]. Measurements: 3 (before-and-after), n = 800 per measurement. Moments: −3 months, +9 months, and +15 months.

• Standardised mortality ratio. Data source: routine administrative data of the hospital. Measurements: continuous (time series), n = 233*.

• Prolonged hospital stay. Data source: routine administrative data of the hospital. Measurements: continuous (time series), n = 3268**.

• Team climate. Data source: Team Climate Inventory [18]. Measurements: 2 (before-and-after), n = 132***. Moments: −3 months and +15 months. Unit of analysis: professional or team.

• Patient safety culture. Data sources: Hospital Survey on Patient Safety Culture [31, 32] (n = 132***) and safety walk arounds [33, 34] (n = 8), each measured twice (before-and-after). Moments: −3 months and +15 months.

*Number of patients in the eight departments who died in 2012.

**Number of patients with a prolonged stay in 2012 in the eight departments.

***Average number of clinical healthcare providers per department.

Patient level

We define an adverse event as an unintended injury that results in temporary or permanent disability, death, or prolonged hospital stay and that is caused by healthcare management rather than by the patient’s underlying disease [35, 36]. A complication is an unintended and unwanted event or state during or following medical specialist treatment that has such an unfavourable effect on the patient’s health that the medical treatment must be adjusted or irreparable harm has occurred [37]. We use a structured method of patient record review, based on a protocol originally developed by the Harvard Medical Practice Study [19], to measure the incidence of adverse events and complications. Five trained physicians with a minimum of 10 years of clinical experience retrospectively and independently review the patient records of the sampled admissions that are positive for one or more of the screening criteria [6]. They assess the occurrence, cause, preventability, and the responsibility of the specialty for the adverse events. Complications are registered according to the classification of surgical complications [37]. To assess the inter-rater agreement between pairs of physicians, 10% of the patient records are independently reviewed a second time. In the case of disagreement about the presence or absence of complications and adverse events, both reviewers discuss and reconsider their reviews to reach consensus [38]. If they fail to reach agreement, the first review is decisive.

We measure patients’ experiences of hospital care with the Consumer Quality (CQ) index [29]. The CQ index, based on the Consumer Assessment of Healthcare Providers and Systems, is a standardised and validated methodology for measuring, analysing, and reporting patient experiences in healthcare [30]. One dimension of the survey is specific to patient safety; the other 13 dimensions are related to patient safety to varying degrees. The survey is posted to a random sample of discharged patients, and non-respondents receive one reminder.

We will measure two patient safety indicators: the SMR and prolonged hospital stay. The SMR is a method for comparing mortality ratios over time or between subpopulations, taking into account differences in population structure [39]. It is the ratio of observed to expected deaths, conventionally multiplied by 100. A prolonged hospital stay is an actual hospital stay of a clinically admitted patient that is more than 50% longer than expected [40]; it reflects the fact that patient stays tend to become prolonged after complications [41]. The SMR and hospital stay data are generated from the routine administrative data of the hospital.
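As a concrete illustration, both indicators can be computed directly from counts in the administrative data. The sketch below (Python, with invented numbers; the hospital's actual expected deaths and expected lengths of stay come from case-mix models) shows the conventional definitions used above:

```python
def standardised_mortality_ratio(observed_deaths, expected_deaths):
    """Ratio of observed to expected deaths, conventionally multiplied by 100.
    A value above 100 means more deaths occurred than the case-mix model expects."""
    return observed_deaths / expected_deaths * 100

def is_prolonged_stay(actual_days, expected_days):
    """A stay counts as prolonged when it exceeds the expected stay by more than 50%."""
    return actual_days > 1.5 * expected_days

# Illustrative numbers only (not study data):
print(standardised_mortality_ratio(120, 100))  # 120.0
print(is_prolonged_stay(9, 5))                 # True: 9 days > 1.5 x 5 days
```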

Professional and team level

We use the Team Climate Inventory (TCI) to measure the climate of the various teams of healthcare providers. A good team climate is an important characteristic of successful healthcare teams in hospitals, and working in teams is essential for providing proper and safe care. The TCI is a valid, reliable, and discriminating self-report measure of the climate of a hospital team [17]. It is based on a four-factor theory of team climate for innovation and assesses the factors vision, participative safety, task orientation, and support for innovation in 13 subscales.

To measure the patient safety culture at the department level, we use the Hospital Survey on Patient Safety Culture (HSOPS) [31, 32]. The HSOPS is a valid and reliable survey for assessing the patient safety culture in hospitals. The HSOPS measures 12 dimensions of patient safety culture (Table 5), on the premise that unsafely designed care processes or systems increase the likelihood of adverse events.
Table 5

Twelve dimensions of the Hospital Survey on Patient Safety Culture

Dimensions of patient safety culture [31],[32]


Teamwork across hospital departments


Teamwork within departments


Hospital handovers and transitions


Frequency of event reporting


Non-punitive response to error


Openness of communication


Feedback and communication about error


Organisational learning – continuous improvement


Supervisor/manager expectations and actions promoting patient safety


Hospital management support for patient safety


Staffing


Overall perceptions of safety

Department level

One-hour safety walk arounds are used to observe the patient safety culture [33, 34]. A safety walk around consists of literally walking around the ward with a standardised observation list (Table 6), paying specific attention to patient safety. During the walk around, the participants talk with employees about any risk situations in order to correctly score the items on the checklist as ‘safe’ or ‘unsafe’. The participants are two healthcare providers of the ward who are responsible for quality assurance tasks and one internal auditor.
Table 6

Items of patient safety culture checked during the safety walks

Topic [33],[34]

Some of the 65 items

Medication safety

Double check before administration of the drug (the right drugs and right doses to the right patient at the right time)

Keep medication inaccessible to unauthorized persons

Infection prevention

Wash hands before and after treatment of the patient

Not wearing hand or wrist jewellery


Fall prevention

Reduce risks of patients falling

Make leaflets easily accessible to patients

Test whether alarm systems work

Protocols and procedures of care

Ensure that only up-to-date instructions for the protocols and procedures of care are available

Ensure that protocols and procedures are accessible

Information security

Keep medical and nursing patient record inaccessible to unauthorized persons

Keep conversations between healthcare providers confidential

Sterile medical aids

Keep packaging of sterile materials closed

Sterile materials for which the expiration date has passed must always be removed

Medical devices

Monitor maintenance periodically

Provide training before use

Patient identification

Ensure that patients wear an identification bracelet

Ensure that demonstrable checking takes place before blood products are given

Food safety

Check that the nutrition assistant ensures fluid and/or nutritional balance is complete

Ascertain the temperature of the hot meal before serving

Reserved procedures

Determine that nurses have been trained and examined for risky medical procedures

Overall safety

The department must be clean and tidy


Clean desk policy must be maintained in the reception room

Sample size and precision calculation

With a sample of 50 medical records of patients from each of the eight hospital departments, we can estimate the difference in preventable adverse events and complications before and after auditing with a precision of 7%. The precision calculation is based on Zegers et al. [42] and accounts for clustering in the before-and-after measurements: since the patient records are drawn from the same departments, there will be some similarity, i.e. a correlation, between the before and after groups.
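The order of magnitude of this precision can be checked with the standard half-width of a 95% confidence interval for a proportion, inflated by a design effect for clustering (a simplified sketch in Python; the protocol's actual calculation follows Zegers et al. [42], and the design effect of 2 used below is an assumption for illustration only):

```python
import math

def ci_half_width(p, n, design_effect=1.0):
    """Half-width of a 95% CI for a proportion p estimated from n records,
    inflated by a design effect to account for clustering within departments."""
    return 1.96 * math.sqrt(design_effect * p * (1 - p) / n)

# 8 departments x 50 records = 400; worst case p = 0.5; assumed design effect of 2
print(round(ci_half_width(0.5, 400, design_effect=2.0), 3))  # 0.069, i.e. about 7%
```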

Statistical analyses

During the data collection, data are checked on a regular basis to identify out-of-range answers, inconsistent responses, and missing data. We will use SPSS version 20.0 for data analysis. Descriptive statistics will be used to describe baseline characteristics of patients, professionals, and departments.

Patient level

A logistic regression analysis will be used to analyse changes in the rates of patient admissions with adverse events and complications between the before-and-after measurements, correcting for clustering at the hospital department level. The intra-class correlation, i.e. the ratio of the between-group variance to the total variance, will be calculated. Since the patient records will be different in the before-and-after measurements, the effect of timing (i.e. auditing) will be accounted for with a fixed effect for time (a dummy variable for the second measurement). The incidence rates of adverse events and complications will be calculated with 95% confidence intervals (CIs). The inter-rater agreement within pairs of physicians will be expressed as a kappa statistic with 95% CIs and as the percentage of records for which the reviewers agree about the presence or absence of complications and adverse events.
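For the inter-rater agreement, Cohen's kappa corrects the raw percentage agreement for the agreement expected by chance. A minimal sketch (Python, with hypothetical reviewer judgements, not study data):

```python
def cohens_kappa(reviewer_a, reviewer_b):
    """Cohen's kappa for two reviewers' binary judgements
    (1 = adverse event present, 0 = absent)."""
    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    pa, pb = sum(reviewer_a) / n, sum(reviewer_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)  # chance agreement on "yes" and "no"
    return (observed - expected) / (1 - expected)

# Hypothetical double review of 10 records:
a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
b = [1, 0, 0, 0, 1, 0, 1, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))  # 0.58: moderate agreement beyond chance
```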

A linear mixed model will be used to analyse the before-and-after outcomes of the quality of hospital care according to the patients. To account for the possibility that changes over time will be influenced by changes in patient mix, terms will be added to the model for age, sex, education, and self-reported general and mental health status. Since the patients will be different in the before-and-after measurements, the effect of the timing of the auditing will be accounted for with a fixed effect for time (dummy variable for the second and third measurements).

We will use a generalised linear mixed model with a Poisson distribution for the SMR and prolonged hospital stay to compare the rates from baseline to the end of the 18-month evaluation period (3 months before and 15 months after auditing). We will adjust this model for patient and departmental characteristics.

Professional and team level

To account for the influence of department type on team climate and patient safety culture, we will analyse the before-and-after outcomes using a linear mixed model with the department type as a random effect. Since the professionals are almost the same in the before-and-after measurements, the difference scores in team climate and patient safety culture will be considered as dependent variables.

Departmental level

A paired t-test (by department) will be applied to the percentage of observed patient safety items collected during the safety walk arounds to measure differences in patient safety in the departments before and after auditing.
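The paired t statistic compares, department by department, the percentage of items scored as 'safe' before and after auditing. A self-contained sketch with invented percentages for the eight departments (Python; the study itself will use SPSS):

```python
import math
import statistics

def paired_t(before, after):
    """t statistic for paired samples; here each pair is one department's
    percentage of 'safe' items before and after auditing (df = n - 1)."""
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Invented percentages of 'safe' items for the eight departments (not study data):
before = [60, 55, 70, 65, 50, 58, 62, 68]
after = [66, 59, 76, 72, 54, 63, 66, 75]
print(round(paired_t(before, after), 2))  # 11.67, with 7 degrees of freedom
```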

Process evaluation

We will carry out the process evaluation to explain the effects: we will determine which components of the internal auditing, or which factors other than auditing, could have influenced patient safety. We will document the different audit steps and the level of participation of the healthcare providers (physicians, nurses, and allied healthcare workers) and department leaders. We will also measure the experience of these healthcare providers and department leaders with the auditing (level 1 of Kirkpatrick’s theory [27]) in an online survey, which has been tested and reviewed by experts for face validity. The survey will be e-mailed to the audit participants within 2 weeks after the audit; non-respondents will receive one reminder.

According to Kirkpatrick’s theory, the degree of exposure corresponds to the extent to which healthcare providers and leaders are familiar with auditing, what they have learned during auditing (level 2), and how they have adapted their behaviour to improve patient safety (level 3). Semi-structured interviews will be used to measure the level of learning, behaviour change, and the related facilitators and barriers. Purposive sampling will be used to select a varied group of healthcare providers and department leaders [43]. We expect that 24 interviews will be enough to reach theoretical saturation; if not, additional interviews will take place until no new information about experiences with the audit system emerges. An interview guide will be developed to facilitate these interviews. The interviews will be audio-taped and transcribed according to a standardised format, and Atlas.ti 6 will be used for the qualitative analysis.

Finally, we will study any changes in safety policy and the number and type of safety interventions implemented after the audits by analysing audit reports, improvement plans, and revisit reports.

Ethical approval and principles

The Ethical Committee of the Radboud University Nijmegen Medical Centre (RUNMC) approved the study on 11 July 2011. The study conforms to Dutch law and privacy regulations and was judged not to involve human-subject research. The study was registered on 12 March 2012 in the Dutch Trial Register for clinical trials under record number NTR3343.

The participation of hospital departments in the study is voluntary. The researcher gave an oral presentation and written information to each department head about the consequences of participating in the study. Written consent has been obtained from all participating departments of the hospital. The data will be separated from the names of the participants (departments, healthcare providers, and patients) and published anonymously. Each participant will be identified in the database with a number and an identity code. These codes are available only to the researchers and the research assistant.


Discussion

We evaluate the effects of patient safety auditing in hospital care. This study is relevant for hospitals that aim to detect unsafe care early and to improve patient safety continuously. Our study provides a better understanding of how auditing can help improve patient safety.

This study has several strengths and limitations. Most patient safety interventions, including auditing, are complex interventions because they often consist of multiple parts (multicomponent) and aim at several levels of the health pyramid (multilevel) and various actors (multitargeted) [44]. Therefore, we set up a mixed-method evaluation study in which we combine qualitative and quantitative measurement instruments to measure several outcomes at different levels: the patient, professional, and departmental levels. We use widely applied and thoroughly studied instruments to measure adverse events, patient experiences, team work, and patient safety culture among healthcare providers [5, 18, 32, 45].

The causal relationship between internal auditing and change in adverse event rates is difficult to identify because of the presence of other hospital initiatives to improve patient safety [44, 46]. Therefore, we will carry out a process evaluation to analyse the mechanism and processes responsible for the outcomes of the effect evaluation. We use questionnaires and interviews to study the factors that facilitate or hinder the effectiveness of auditing. This is also important for giving recommendations to other hospitals that want to implement some or all of our audit system.

We will analyse time series data of SMRs and prolonged hospital stay for better evidence of cause-and-effect relationships. Possible underlying secular trends will be detected, and perhaps they will help explain some of the effects [47].

To improve the internal validity by reducing bias, the following actions are taken. To reduce selection bias, patients were randomly selected from the hospital database. All the care personnel in the department who are working in clinical care are invited to fill in the TCI and HSOPS. To prevent participant recall bias, we survey patients 3 months after hospital admission at the latest. The experiences of healthcare personnel and department leaders with the audit system are measured within 2 weeks after the auditing (survey) and 1 month after the revisiting (interviews).

To decrease response bias, non-respondents receive a reminder, and missing values in the medical record review are checked. Inter-rater agreement of the retrospective assessments is measured and fed back to the physician reviewers to improve the reliability of the assessment of adverse events and complications.
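Inter-rater agreement for a dichotomous judgement such as "adverse event: yes/no" is commonly summarised with Cohen's kappa, which corrects the raw agreement for agreement expected by chance. The plain-Python sketch below, with made-up ratings, shows the computation; it is not the study's own reliability analysis.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters who judged the same set of records."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of records on which the raters concur.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

By convention, kappa values below roughly 0.4 are read as poor-to-fair agreement, which is the kind of signal that would prompt feedback to the reviewers.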

Information bias is reduced by having a research assistant enter the research data. The data will be separated from the names of the participants (departments, healthcare providers, and patients). Each participant will be identified in the database with a number and an identity code. These codes will be available only to the research assistant and researchers. An independent researcher will check the data entries. An independent statistician will check the analyses.

To prevent confounding bias, we will study potentially confounding determinants of the relationship between auditing and patient safety by means of regression and time series analyses.
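Regression adjustment for a measured confounder can be sketched by entering a post-audit indicator and the confounder as covariates in one model and reading the audit effect from the indicator's coefficient. This is a toy linear-model sketch with hypothetical variable names (e.g. a case-mix score), not the protocol's actual model.

```python
import numpy as np

def adjusted_audit_effect(outcome, post_audit, confounder):
    """OLS of an outcome on a post-audit indicator plus one confounder.

    Returns the coefficient on post_audit, i.e. the before/after difference
    adjusted for the confounder (for example, a case-mix score).
    """
    X = np.column_stack([
        np.ones(len(outcome)),                # intercept
        np.asarray(post_audit, dtype=float),  # 1 = measured after the audit
        np.asarray(confounder, dtype=float),  # measured confounder
    ])
    coef, *_ = np.linalg.lstsq(X, np.asarray(outcome, dtype=float), rcond=None)
    return coef[1]
```

The design choice is standard: if case mix shifts between the two measurement periods, the crude before/after difference mixes the audit effect with the case-mix effect, whereas the adjusted coefficient holds the confounder constant.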

To prevent publication bias, the study has been registered in the Dutch register for clinical trials. Additionally, we have appointed an external Advisory Board tasked with checking the integrity of the study group and providing methodological feedback.

The generalisability of the study will be low because it is carried out in a single hospital with a before-and-after design without concurrent controls. Ideally, the effects of auditing on hospital care would be studied in a large, multicentre trial that randomises hospitals to internal auditing or no auditing [46, 47]. Randomised controlled trials are the gold standard for evaluating healthcare interventions, but such a trial is not possible in this case. All Dutch hospitals have a legal obligation to systematically monitor, control, and improve the quality of care, so each already has some form of auditing, and it would be both unethical and illegal for a hospital to temporarily refrain from systematic quality assurance. Additionally, each hospital is unique in its management structure and in the way internal auditing is performed, so a comparable control hospital with exactly the same management structure, organisation, and patient safety culture would be virtually impossible to find. We therefore decided to limit the evaluation of the effects of auditing on patient outcomes and professional practice to the one hospital that the other Dutch university hospitals regard as having the best practice for internal auditing [48]. Nevertheless, before-and-after designs are useful for studies that are part of local quality improvement projects such as audits, PDCA cycles, or action research [49]. In our circumstances, an evaluation of a local initiative in a before-and-after design is the only feasible option. Evaluating patient safety auditing with several measures and methods at different levels will provide the scientific evidence needed to improve patient safety in hospitals.


Plan–Do–Check–Act (PDCA): associated with the US statistician, educator, and consultant W. Edwards Deming (1900–1993).



This study is funded from the general resources of the Radboud University Nijmegen Medical Centre. The study has been peer reviewed by the research committee of the Nijmegen Centre for Evidence-Based Practice (NCEBP). We thank everyone who is contributing to the data collection and the departments of the RUNMC for participating in this study.

Authors’ Affiliations

1. Institute of Quality Assurance and Patient Safety, Radboud University Nijmegen Medical Centre
2. Scientific Institute for Quality of Healthcare (IQ healthcare), Radboud University Nijmegen Medical Centre


  1. Aranaz-Andres JM, Aibar-Remon C, Vitaller-Burillo J, Requena-Puche J, Terol-Garcia E, Kelley E, Gea-Velazquez de Castro MT, group Ew: Impact and preventability of adverse events in Spanish public hospitals: results of the Spanish National Study of Adverse Events (ENEAS). Int J Qual Health Care. 2009, 21 (6): 408-414. 10.1093/intqhc/mzp047.
  2. Aranaz-Andres JM, Aibar-Remon C, Vitaller-Murillo J, Ruiz-Lopez P, Limon-Ramirez R, Terol-Garcia E, group Ew: Incidence of adverse events related to health care in Spain: results of the Spanish National Study of Adverse Events. J Epidemiol Commun H. 2008, 62 (12): 1022-1029. 10.1136/jech.2007.065227.
  3. Aranaz-Andres JM, Limon R, Mira JJ, Aibar C, Gea MT, Agra Y, Group EW: What makes hospitalized patients more vulnerable and increases their risk of experiencing an adverse event?. Int J Qual Health Care. 2011, 23 (6): 705-712. 10.1093/intqhc/mzr059.
  4. Smits M, Zegers M, Groenewegen PP, Timmermans DR, Zwaan L, van der Wal G, Wagner C: Exploring the causes of adverse events in hospitals and potential prevention strategies. Qual Saf Health Care. 2010, 19 (5): e5. 10.1136/qshc.2008.030726.
  5. Zegers M, de Bruijne MC, Wagner C, Hoonhout LH, Waaijman R, Smits M, Hout FA, Zwaan L, Christiaans-Dingelhoff I, Timmermans DR, et al: Adverse events and potentially preventable deaths in Dutch hospitals: results of a retrospective patient record review study. Qual Saf Health Care. 2009, 18 (4): 297-302. 10.1136/qshc.2007.025924.
  6. Zegers M, de Bruijne MC, Wagner C, Groenewegen PP, Waaijman R, van der Wal G: Design of a retrospective patient record study on the occurrence of adverse events among patients in Dutch hospitals. BMC Health Serv Res. 2007, 7: 27. 10.1186/1472-6963-7-27.
  7. Zegers M, De Bruijne MC, Spreeuwenberg P, Wagner C, Van Der Wal G, Groenewegen PP: Variation in the rates of adverse events between hospitals and hospital departments. Int J Qual Health Care. 2011, 23 (2): 126-133. 10.1093/intqhc/mzq086.
  8. Shaw CD: External quality mechanisms for health care: summary of the ExPeRT project on visitatie, accreditation, EFQM and ISO assessment in European Union countries. External Peer Review Techniques. European Foundation for Quality Management. International Organization for Standardization. Int J Qual Health Care. 2000, 12 (3): 169-175. 10.1093/intqhc/12.3.169.
  9. Groene O, Klazinga N, Walshe K, Cucic C, Shaw CD, Sunol R: Learning from MARQuIS: future direction of quality and safety in hospital care in the European Union. Qual Saf Health Care. 2009, 18 (Suppl 1): i69-i74. 10.1136/qshc.2008.029447.
  10. The Institute of Internal Auditors.
  11. Godlee F: How can we make audit sexy?. BMJ. 2010, 340: c2324.
  12. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, O'Brien MA, Johansen M, Grimshaw J, Oxman AD: Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012, 6: CD000259.
  13. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD: Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Qual Saf Health Care. 2006, 15 (6): 433-436. 10.1136/qshc.2006.018549.
  14. Externe Onderzoekscommissie (EOC): Een tekortschietend zorgproces. Een onderzoek naar de kwaliteit en veiligheid van de cardiochirurgische zorgketen voor volwassenen in het UMC St Radboud te Nijmegen. 2006, Utrecht, 24 April.
  15. Onderzoeksraad voor veiligheid: Een onvolledig bestuurlijk proces: hartchirurgie in UMC St Radboud. Onderzoek naar aanleiding van berichtgeving op 28 September 2005 over te hoge mortaliteit. 2008, Den Haag.
  16. Derksen P, Ouwens M: Integraal audit systeem UMC St Radboud. Model en Achtergrond. 2008, Nijmegen.
  17. Ouwens M, Roumen-Klappe E, Keyser A, van Gurp P, Kokke D, Wollersheim H, Braat DM, Samsom M: Eigen vinger aan de pols. Intern auditsysteem om veiligheid en kwaliteit van de zorg te bewaken. Medisch Contact. 2008, 38: 1554-1557.
  18. Ouwens M, Hulscher M, Akkermans R, Hermens R, Grol R, Wollersheim H: The Team Climate Inventory: application in hospital teams and methodological considerations. Qual Saf Health Care. 2008, 17 (4): 275-280. 10.1136/qshc.2006.021543.
  19. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, Newhouse JP, Weiler PC, Hiatt HH, Harvard Medical Practice Study I: Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. 1991. Qual Saf Health Care. 2004, 13 (2): 145-151, discussion 151-152. 10.1136/qshc.2002.003822.
  20. Verpleegkundigen & Verzorgenden Nederland (V&VN): Richtlijn Verpleegkundige en verzorgende verslaglegging. 2011, Utrecht, 8 September.
  21. AGREE Collaboration: Development and validation of an international appraisal instrument for assessing the quality of clinical practice guidelines: the AGREE project. Qual Saf Health Care. 2003, 12 (1): 18-23.
  22. Overeem K, Wollersheim HC, Arah OA, Cruijsberg JK, Grol RP, Lombarts KM: Factors predicting doctors' reporting of performance change in response to multisource feedback. BMC Med Educ. 2012, 12: 52. 10.1186/1472-6920-12-52.
  23. International Standard ISO 19011: Guidelines for auditing management systems. ISO 19011:2011(E). ISO, 2011, Second edition.
  24. Blumenthal D, Kilo CM: A report card on continuous quality improvement. Milbank Q. 1998, 76 (4): 625-648. 10.1111/1468-0009.00108.
  25. Shortell SM, Bennett CL, Byck GR: Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998, 76 (4): 593-624. 10.1111/1468-0009.00107.
  26. Grol R, Wensing M, Eccles M: Improving Patient Care: The Implementation of Change in Clinical Practice. 2005, Oxford: Elsevier.
  27. Kirkpatrick D, Kirkpatrick J: Evaluating Training Programs: The Four Levels. 2006, San Francisco: Berrett-Koehler.
  28. Langley GL, Nolan KM, Nolan TW, Norman CL, Provost LP: The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2009, San Francisco: Jossey-Bass, 2nd edition.
  29. Sixma H, Spreeuwenberg P, Zuidgeest M, Rademakers J: CQ-index Ziekenhuisopname: meetinstrumentontwikkeling. 2009, Utrecht: NIVEL.
  30. Arah OA, ten Asbroek A, Delnoij DM, de Koning JS, Stam PJ, Poll AH, Vriens B, Schmidt PF, Klazinga NS: Psychometric Properties of the Dutch Version of the Hospital-Level Consumer Assessment of Health Plans Survey® Instrument. Health Serv Res. 2006, 41 (1): 284-301. 10.1111/j.1475-6773.2005.00462.x.
  31. Jones KJ, Skinner A, Xu L, Sun J, Mueller K: The AHRQ Hospital Survey on Patient Safety Culture: A Tool to Plan and Evaluate Patient Safety Programs. Advances in Patient Safety: New Directions and Alternative Approaches (Vol. 2: Culture and Redesign). 2008, Rockville, MD: Agency for Healthcare Research and Quality (US).
  32. Smits M, Wagner C, Spreeuwenberg P, van der Wal G, Groenewegen PP: Measuring patient safety culture: an assessment of the clustering of responses at unit level and hospital level. Qual Saf Health Care. 2009, 18 (4): 292-296. 10.1136/qshc.2007.025965.
  33. Campbell DA, Thompson M: Patient safety rounds: description of an inexpensive but important strategy to improve the safety culture. Am J Med Qual. 2007, 22 (1): 26-33. 10.1177/1062860606295619.
  34. Thomas EJ, Sexton JB, Neilands TB, Frankel A, Helmreich RL: The effect of executive walk rounds on nurse safety climate attitudes: a randomized trial of clinical units. BMC Health Serv Res. 2005, 5: 46. 10.1186/1472-6963-5-46.
  35. Baker GR, Norton PG, Flintoft V, Blais R, Brown A, Cox J, Etchells E, Ghali WA, Hebert P, Majumdar SR, et al: The Canadian Adverse Events Study: the incidence of adverse events among hospital patients in Canada. CMAJ. 2004, 170 (11): 1678-1686.
  36. Davis P, Lay-Yee R, Briant R, Ali W, Scott A, Schug S: Adverse events in New Zealand public hospitals II: preventability and clinical context. N Z Med J. 2003, 116 (1183): U624.
  37. Dutch Society of Medical Specialists (in Dutch: Orde van Medisch Specialisten).
  38. Zegers M, de Bruijne MC, Wagner C, Groenewegen PP, van der Wal G, de Vet HC: The inter-rater agreement of retrospective assessments of adverse events does not improve with two reviewers per patient record. J Clin Epidemiol. 2010, 63 (1): 94-102. 10.1016/j.jclinepi.2009.03.004.
  39. Bottle A, Jarman B, Aylin P: Hospital standardized mortality ratios: sensitivity analyses on the impact of coding. Health Serv Res. 2011, 46 (6pt1): 1741-1761. 10.1111/j.1475-6773.2011.01295.x.
  40. Borghans I: Reducing hospital length of stay by improving quality and safety of care?. PhD thesis. 2012, Nijmegen: Radboud University Medical Centre Nijmegen.
  41. Silber JH, Rosenbaum PR, Koziol LF, Sutaria N, Marsh RR, Even-Shoshan O: Conditional length of stay. Health Serv Res. 1999, 34 (1 Pt 2): 349-363.
  42. Lubberding S, Merten H, Wagner C: Patiëntveiligheid en Complexe Zorg bij oudere patiënten met een heupfractuur. 2010, Utrecht: NIVEL en EMGO Institute for Health and Care Research.
  43. Patton MQ: Qualitative Evaluation and Research Methods. 1990, Newbury Park, CA: Sage, 2nd edition.
  44. Zegers M, Wollersheim H: [Dutch national safety programme will not lead to a fifty per cent reduction in avoidable mortality in hospitals]. Ned Tijdschr Geneeskd. 2012, 156 (33): A4768.
  45. Delnoij D: Measuring patient experiences in Europe: what can we learn from the experiences in the USA and England?. Eur J Public Health. 2009, 19 (4): 354-356. 10.1093/eurpub/ckp105.
  46. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ: An epistemology of patient safety research: a framework for study design and interpretation. Part 4. One size does not fit all. Qual Saf Health Care. 2008, 17 (3): 178-181. 10.1136/qshc.2007.023663.
  47. Eccles M, Grimshaw J, Campbell M, Ramsay C: Research designs for studies evaluating the effectiveness of change and improvement strategies. Qual Saf Health Care. 2003, 12 (1): 47-52. 10.1136/qhc.12.1.47.
  48. Nederlandse Federatie van Universitair Medische Centra (NFU).
  49. Brown C, Lilford R: Evaluating service delivery interventions to enhance patient safety. BMJ (Clinical research ed). 2008, 337: a2764. 10.1136/bmj.a2764.
Pre-publication history

The pre-publication history for this paper can be accessed here:


© Hanskamp-Sebregts et al.; licensee BioMed Central Ltd. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.