
Measuring progress with clinical governance development in New Zealand: perceptions of senior doctors in 2010 and 2012

Abstract

Background

Clinical governance has become a core component of health policy and services management in many countries in recent years. Yet tools for measuring its development are limited. We therefore created the Clinical Governance Development Index (CGDI), designed to measure the implementation of expressed government policy in New Zealand.

Methods

We developed a survey which was distributed in 2010 and again in 2012 to senior doctors employed in New Zealand public hospitals. Responses to six survey items were weighted and combined to form the CGDI. Final scores for each of New Zealand's District Health Boards (DHBs) were calculated, allowing performance to be compared between DHBs as well as over time between the two surveys.

Results

New Zealand's overall performance in developing clinical governance improved from 46% in 2010 to 54% in 2012, with marked differences between DHBs. Statistically significant shifts in performance were evident on all but one CGDI item.

Conclusions

The CGDI is a simple yet effective instrument that probes organisational commitment to clinical governance, respondent participation in organisational design, quality improvement, and teamwork. It could be adapted for use in other health systems.


Background

The concept and use of the term 'clinical governance' emerged in the late 1990s in the United Kingdom and has since become central to health policy in a range of countries [1]-[3]. Built around an arguably indistinct set of ideas, clinical governance has been defined in various ways, leaving both managers and clinicians uncertain about what they should be aiming for by way of structures, processes and outcomes [4]. In a general sense, and following the classic Scally and Donaldson definition [5], clinical governance involves health care professionals leading quality improvement efforts, ensuring practices are evidence-based, and working to build team-based and systematised service delivery processes. Central to clinical governance is the idea that clinicians are best placed to encourage performance improvement amongst their peers [6]. Clinical governance may therefore be expressed in terms of health professionals having two roles: improving the care delivery system as well as providing care. It follows that clinical governance should be visible in both organisational structures and processes, with an expectation that improved outcomes will result.

The growing body of clinical governance research is largely focused on the United Kingdom National Health Service (NHS) and based on case study approaches. For the most part, this research reveals challenges in developing leadership models that feature genuine health professional involvement, and often limited opportunities and management support for clinical governance per se [7]-[9]. These studies also find difficulties in getting health professionals to see governance and leadership as a 'step up' and an 'important calling' above and beyond clinical service delivery. In contrast with many other health services and management fields, there is a dearth of clinical governance research aimed at assessing the development of structures and processes or tracking development over time. A 2010 study presented a method for tracking clinical governance development across a disparate range of activities, providing something of a conceptual framework for evaluation [10]; however, as far as we are aware, this has not been put to practical use. Others have sought to measure levels of 'medical engagement', developing proprietary measures for this [11],[12]. There are also various tools produced by government agencies for health care providers to use for self-assessment. Examples include a Western Australian tool which asks organisational leaders to rate their performance on eight dimensions of clinical governance development and provide evidence to support their assessment [13]; the Irish Health Service Executive has taken a similar approach [14]. Such tools can play an important role in helping providers reflect on their efforts and performance, and in highlighting for policy makers areas in which additional resources or incentives may be required. However, they are mostly applied at the organisational level, can be time-consuming to complete, and may not capture the perspectives of front-line health professionals. There is, therefore, a need for tools that can be used consistently over time to compare performance amongst health care providers and indicate commitment to clinical governance and its development. Such tools need to be independently developed, easy to use, and to incorporate the perspectives of front-line professionals. There is also a need to test and validate tools through practical application.

With this in mind, in an earlier project we created the Clinical Governance Development Index (CGDI), designed to track clinical governance development with a focus on structures and processes and on measuring the implementation of government policy [15]. The setting was the government-dominated health system of New Zealand, which caters to a population of 4.4 million and is around 80% government-funded from general taxes, with the remainder from private sources. In 2013, total health expenditure was 10.3% of GDP. The institutional arrangements that underpin New Zealand's health system are not dissimilar to those of other government-funded jurisdictions [16],[17], although being a small country with a centralised political system means that change can be rapid [18]. The public sector dominates hospital care, which is provided via 20 District Health Boards (DHBs). Public hospitals are free of patient charges; private hospitals offer only elective services without government subsidy, and there are no private emergency services. Primary medical care, by contrast, is largely privately provided, albeit with considerable government subsidies to offset patient co-payments.

In early 2009, drawing heavily on material and experiences from the NHS (particularly the Leadership Qualities Framework [19]), a government-commissioned working party made several recommendations for clinical governance development [20]. These included that: DHBs must establish governance structures ensuring a partnership of clinical and corporate management; DHBs and their Chief Executives must enable strong clinical leadership and decision making and promote this throughout the organisation; clinical governance must cover the whole patient journey, with decision making devolved to the appropriate level; and management must identify clinical leaders and support their development. The Minister of Health announced that the working party's recommendations were to become government policy, with implementation by DHBs an immediate priority. In 2010, we therefore sought to measure the extent to which clinical governance was being implemented by surveying public hospital medical specialists, a group finely attuned to the nuances of clinical governance and likely to be aware of changes in leadership and organisational structures. From this project, we developed the CGDI. In 2012, we conducted a follow-up study to investigate progress two years on. This article outlines the methods for the two surveys, the results, the derivation of the index and the implications for utilising it.

Methods

For the 2010 study, we developed a fixed-response 11-item survey, with an additional eight background questions and a comments box. Survey items related directly to the key policy statements in the clinical governance working party's report [20]. Respondents were asked to rate their familiarity with clinical governance concepts and policy. In a series of related questions, they rated the extent to which they perceived their employer organisation was working to develop and support clinical governance and to partner with clinicians, from the level of the board and senior management through to front-line services. The survey was peer-reviewed throughout its development by six researchers and medical professionals, along with the 10 members of the National Executive of the Association of Salaried Medical Specialists (ASMS), the national public hospital specialists' union. A draft was then piloted among two groups of hospital specialists (22 in total), with further adjustments following feedback. Through the scrutiny of these groups, the survey questions and design fulfilled the standards of both 'face' and 'construct' validity [21].

All New Zealand public hospital specialists are salaried employees of one of the 20 DHBs. Over 90% of these specialists are members of the ASMS. The self-completed survey was distributed in paper form by the ASMS to its 3402 members in June 2010, with two follow-up reminders. In August 2010, a web-based survey sought participation, with an email invitation sent to those who had not responded to the paper version, followed by two email reminders. Data from paper surveys were converted into electronic form and merged with data from the web survey, with no significant differences in response patterns found.

The CGDI was formed by combining responses to seven survey items representing related aspects of the processes involved in developing clinical governance. The index demonstrated good internal consistency, with a Cronbach's alpha of 0.80.
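For readers wishing to reproduce this kind of reliability check, the following is a minimal sketch in base R (the system used for all analyses in this study, as noted below). It implements the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score). The function name and the simulated data are illustrative assumptions, not material from the study.

```r
# Cronbach's alpha for a set of index items: a minimal base-R sketch.
# `items` is a data frame or matrix with one column per survey item and
# one row per respondent (complete cases only); all names are hypothetical.
cronbach_alpha <- function(items) {
  items <- as.matrix(items)
  k <- ncol(items)                   # number of items in the index
  item_vars <- apply(items, 2, var)  # variance of each item
  total_var <- var(rowSums(items))   # variance of the summed score
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}

# Illustrative use with simulated 5-point ratings for seven items:
set.seed(1)
sim <- as.data.frame(replicate(7, sample(1:5, 100, replace = TRUE)))
cronbach_alpha(sim)
```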

The 2012 survey was modified, and its validity enhanced, through a consultative process involving over 200 health professionals employed by DHBs, government policy makers and DHB managers. The 2012 study had a broader scope in that it included all health professionals. Thus, a total of 41,030 professionals, including doctors, nurses, midwives and allied service providers, were invited by their DHB human resources department to participate in an online survey. Inclusion criteria were that invitees must be registered health professionals, in on-going full- or part-time employment with their DHB, and with an official DHB email address (in theory, any such DHB employee has one). Invitations, containing a link to the survey website, were sent to employee email addresses. Three follow-up emails were sent at weekly intervals. A national communications campaign ensured that all 19 participating DHBs distributed standard instructions (Canterbury DHB did not participate due to the earthquake recovery process).

The 2012 survey did not include one of the items that contributed to the CGDI in the 2010 survey, and minor changes in wording were made to some items to improve their relevance to an inter-professional audience and to reduce any perceived biases associated with the ASMS involvement in the earlier survey. An amended CGDI was therefore created for the 2012 survey using the six remaining common CGDI survey items. Table 1 lists the items from each version of the CGDI, as well as how the items are scored. An individual's CGDI was computed by summing their scores for each item; these raw scores were converted to percentages for final presentation. 'Don't know' responses were treated as missing values, and a CGDI was only calculated for individuals with no missing values. The overall CGDI for a DHB was the mean CGDI of all individuals from that DHB. For the analyses in this article, only data from medical specialists in the 2012 survey were included in the comparisons (since the 2010 survey only included these professionals). All analyses were performed using the R statistical system version 2.12 [22].

Table 1 Survey items from the 2010 and 2012 versions of the CGDI
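The scoring rules just described translate directly into a short computation. The sketch below, again in base R, follows the stated steps: treat 'Don't know' responses as missing, retain only respondents with complete data, sum item scores, convert to a percentage of the maximum, and average within each DHB. All object names, the 0-4 item scoring and the maximum total of 24 are hypothetical assumptions for illustration; Table 1 gives the actual items and scoring.

```r
# Sketch of the CGDI scoring rules as described in the text; all object
# names and the item maximum are hypothetical, not taken from the study.
compute_cgdi <- function(items, max_total) {
  complete <- complete.cases(items)                # "Don't know" coded as NA upstream
  raw <- rowSums(items[complete, , drop = FALSE])  # sum scores across items
  scores <- 100 * raw / max_total                  # convert raw sums to percentages
  list(scores = scores, keep = complete)
}

# Simulated example: six items each scored 0-4 (maximum raw total 24),
# with NA standing in for "Don't know" and a DHB identifier per respondent.
set.seed(42)
n <- 200
survey <- data.frame(dhb = sample(c("A", "B", "C"), n, replace = TRUE))
for (i in 1:6) {
  survey[[paste0("item", i)]] <- sample(c(0:4, NA), n, replace = TRUE)
}

res <- compute_cgdi(survey[, paste0("item", 1:6)], max_total = 24)
dhb_cgdi <- tapply(res$scores, survey$dhb[res$keep], mean)  # mean CGDI per DHB
dhb_cgdi
```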

The study protocol and survey tool were reviewed, including for ethical considerations, and approved by the National Executive of the Association of Salaried Medical Specialists, the National Executive of the National Health Board, the Board and executive team of the Health Quality and Safety Commission, and the CEOs and leadership teams of the participating DHBs.

Results

The 2010 survey achieved a response rate of 52%; some 32% of senior medical officers responded to the 2012 survey. Response rates differed by DHB, as may be expected of a large multi-regional study of this nature. In 2010, the response rate ranged from 39% to 70% between individual DHBs (mean = 51%, SD = 7%); in 2012, the range was 10% to 73% (mean = 39%, SD = 16%). Respondent characteristics in both surveys were compared with DHB health workforce data and found to be close to those of all potential respondents.

After applying the exclusions outlined in the previous section, 1487 respondents from the 2010 survey and 1313 from the 2012 survey were included in the analysis. Of these, 944 (63%) and 856 (65%) respectively had complete data and could have a CGDI score computed. The results are presented in Table 2. As reflected in the increase in the mean score from 46% to 54%, the results show considerable progress across the DHBs in the two years between the surveys. Several DHBs demonstrated substantial improvement, with, in one case, an increase from 44% to 64% in the index score. One DHB's score decreased; however, this is one of the smallest DHBs, and its small number of respondents to the 2012 survey meant the score was highly sensitive to small changes in response patterns. There was substantial variation in the CGDI scores of different DHBs in both surveys. In 2010, the range was 38-55%; in 2012, the variation broadened to 36-65%.

Table 2 Comparison of CGDI scores for the 2010 and 2012 surveys

Table 3 contains, for each CGDI item, the mean of the DHB mean scores for 2010 and 2012. The fourth column lists the change in these means between the two surveys, showing significant improvement on all items except item 3, with the most marked improvement on items 5 and 6.

Table 3 Change in mean DHB ratings (i.e. mean of the DHB means) for each of the CGDI items across the two surveys, with higher scores indicating more positive mean responses
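As an illustration of how such a per-item comparison can be run, the sketch below simulates ratings from the two survey years and tests the shift in means. The article does not specify the significance test used for Table 3, so a Welch two-sample t-test (R's default) is assumed here purely for illustration; all data are simulated.

```r
# Illustrative per-item comparison between the 2010 and 2012 surveys.
# Simulated 5-point item responses, with the 2012 distribution shifted upward.
set.seed(7)
item_2010 <- sample(1:5, 900, replace = TRUE, prob = c(0.25, 0.25, 0.20, 0.20, 0.10))
item_2012 <- sample(1:5, 850, replace = TRUE, prob = c(0.15, 0.20, 0.20, 0.25, 0.20))

mean(item_2012) - mean(item_2010)  # change in mean rating between surveys
t.test(item_2012, item_2010)       # Welch two-sample t-test (assumed test)
```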

Discussion

With our 2010 study, we developed the CGDI and, following feedback from New Zealand health care professionals, their employers and policy makers, modified the original survey tool for our 2012 follow-up. Given the input of a wide range of individuals into the 2012 instrument design, we believe we have enhanced its validity.

We argue that the CGDI offers a straightforward and efficient means of measuring the extent to which a health care organisation is working to facilitate clinical governance. There are various reasons for this, including that the CGDI draws its measures from the perspective of health professionals. While professionals have an obligation to take up opportunities to get involved in clinical governance activities, robust clinical governance demands managerial facilitation. Professionals should therefore be able to gauge the extent to which managerial and organisational activities are focused on building clinical governance, as well as the level of emphasis on clinical governance components such as quality and safety improvement. Importantly, the CGDI mixes items that probe both management and clinician activities; in our study, these items sought to measure the implementation of stated policy. The CGDI can also be used as a yardstick for assessing performance on clinical governance over time as well as between organisations, which can act as a stimulus for improvement. Finally, in contrast with other tools for measuring clinical governance [10], the CGDI is focused on a defined set of important goals and processes, including relationships between managers and health professionals, involvement in organisational design, and quality improvement, which are central to the original Scally and Donaldson definition of clinical governance. This involved developing:

... a system through which health organisations are accountable for continuously improving the quality of their services and safeguarding high standards of care by creating an environment in which excellence in clinical care will flourish [5].

Our use of the CGDI suggests positive progress in clinical governance development across New Zealand's DHBs, at least from the perspective of senior doctors. In this regard, it indicates traction with the policy implementation process, with some DHBs making considerably more progress than others. The magnitude of change in several DHBs raises questions of how they achieved this, and why others did not. In some cases, DHBs had better scores in the initial 2010 survey and so started from a much stronger position. In other cases, the DHB may have had a more concerted strategy for implementing clinical governance in place, in turn leading to survey respondent perceptions of better performance. In 2012, we conducted in-depth case studies of each DHB in parallel with the survey study, through a mix of DHB self-reviews, document analysis and key informant interviews. Our aim was to analyse each DHB's clinical governance strategy and the structures implemented to facilitate clinical governance and leadership, and to learn about the challenges the DHB faced in the clinical governance development process [23]. As a general rule, the 2012 CGDI scores reported in this article closely matched the levels of development in each DHB and interviewee perceptions of performance. Thus, perhaps the most compelling explanation for the CGDI changes lies in where on the trajectory of change each DHB actually was: some were clearly further along in their developmental journey, had made considerably more progress in a short time than others, or had started to see the payoff from an early commitment to clinical governance. In some DHBs, there may simply have been more acceptance and knowledge of key concepts and terminology. In 2010 and 2012, we asked the question: 'Clinical leadership is described as "...a new obligation to step up, work with other leaders, both clinical and managerial, and change the system where it would benefit patients". How familiar are you with this concept?' Some 51% of responding specialists expressed familiarity in 2010 [15], compared with 58% in 2012 [23]. Beyond the individual DHB setting, there were no substantial changes to national policy settings between 2010 and 2012, so explanations for the CGDI changes would appear to lie in activities at the individual DHB level rather than the broader health system level, in contrast with explanations often provided in other policy studies [24],[25].

While there was an overall CGDI score increase between 2010 and 2012, analysis of changes in the scores for individual index items is revealing, with five of the six showing marked improvement. Survey respondents are evidently seeing the development of partnership governance and decision-making structures, increased attention to quality and safety in terms of both organisational resourcing and care delivery, and increased clinical team responsibility for decision making. The item probing respondent participation in organisational design processes, on which improvement was not statistically significant, indicates an area on which the DHBs perhaps need to focus more.

While our follow-up study does imply positive improvement, it also suggests on-going effort is required if New Zealand is to develop robust clinical governance across all DHBs. Some are clearly making more progress than others; some have made limited, if any, progress. Indeed, it might be suggested that the 2012 performance as a whole remains mediocre at 54%, with the top-scoring DHB achieving only 64%. The individual CGDI items, which link into both New Zealand government policy and the literature on health system improvement, collectively provide pointers for where efforts might be directed. As noted in the background, and probed in these survey items, studies variously show that a strong focus on clinical leadership, quality and safety, and teamwork can improve health system performance and patient outcomes while reducing costs [26]-[28]. Thus, the DHBs would do well to intensify their efforts in these areas.

Yet New Zealand, with its perhaps middling performance on clinical governance, is not alone. Elsewhere, studies show a continuing struggle to bridge the managerial-clinical divide and build entities with strong clinical leadership focused on quality and safety [2],[29],[30]. The reasons for this may again lie in leadership capacity, as well as in the context-dependent nature of any management and organisational development project, which makes it difficult to simply transplant a well-functioning model of clinical governance from one organisation into another. Organisational re-development also takes considerable time, meaning higher CGDI scores could be envisaged as the developmental and implementation processes mature. New Zealand's middling performance may also reflect health professionals, particularly senior doctors, lacking the time in their already busy schedules to dedicate to clinical governance and leadership activities. They may also have limited interest in the 'higher calling' of leadership, especially if they have received little training in the field and perhaps see it as a non-core duty. While our survey did not probe this area, there is a strong argument that leadership training should feature early in health professional education and be promoted more widely [31]-[33]. This was a common theme in the site visits and interviews with health professionals which we conducted in association with the 2012 survey, reported on elsewhere [23].

The research reported in this article has some limitations. First, the self-completed survey method relies on individual perceptions and has associated biases. However, response patterns were consistent across returned surveys from the different districts in both the 2010 and 2012 studies. Second, response rates above the 52% and 32% achieved in the two surveys would have been desirable, but these rates were around the level obtained in comparable studies, and there is growing recognition that higher response rates are increasingly difficult to attain [34]. A review of research in a related field concluded that 55% is considered 'one of the best response rates' [35]. The low response rates were compounded by missing responses on the CGDI items, meaning that the 2012 results were ultimately based on responses from 21% of senior doctors. Importantly, as noted, our respondent characteristics were similar to those of the broader health workforce, suggesting that response bias would be limited. While we do not know whether non-responders had different perspectives from those who did respond, feedback received at presentations to New Zealand medical specialist conferences gives us no reason to believe a higher participation rate would have substantially changed the findings. That said, it is possible that the non-responders included a larger proportion of professionals who felt un-engaged; if so, our results could indicate a higher level of clinical governance development than the reality. Third, our analysis included only medical specialists. A follow-up study will permit tracking progress with clinical governance development from the perspective of the full range of New Zealand health professionals, given their participation in the 2012 survey. Fourth, the simple survey method meant we were unable to probe why some DHBs performed better than others, or why some had achieved substantial changes between the two surveys. Answers to these questions might be explored through in-depth research methods such as reviews of policy documents and interviews with key informants. Finally, it would be useful to examine whether CGDI performance is predictive of performance on other measures in areas such as finance, organisational efficiency and patient safety.

Conclusions

The CGDI and its application in the New Zealand context provide a baseline for further research. This might involve additional work on validating the questionnaire and the CGDI, and adapting it for use in other health systems. It could also continue to be used for monitoring progress with clinical governance development in New Zealand. Of course, a key question that arises is whether the New Zealand case is substantially different from that of other countries and, in turn, how applicable the CGDI is for use elsewhere. As noted in the background, New Zealand's clinical governance policy drew heavily from the United Kingdom's NHS, and New Zealand's health system is not dissimilar from other government-funded systems. Where New Zealand's experience does differ from some countries that have pursued clinical governance is in the relatively directive nature of policy emanating from central government, and in the system's small size. In North America, for example, clinical governance is mostly a hospital-specific policy [36],[37]. Yet this does not mean the CGDI could not be used in quite different health systems. In our study, we measured both local and national progress with clinical governance development, predominantly through the lens of specialists employed in individual hospitals. This implies that the index could be used in individual hospitals elsewhere, within a differently-structured health system, or to measure and compare performance more broadly in a system with characteristics closer to New Zealand's.

Authors’ contributions

Both authors contributed equally to the conception and design of the study. Both authors participated in drafting the manuscript. Both authors read and approved the final manuscript.

Abbreviations

ASMS: Association of Salaried Medical Specialists

CGDI: Clinical Governance Development Index

DHB: District Health Board

NHS: National Health Service

References

  1. Bohmer RMJ: Fixing health care on the front lines. Harv Bus Rev. 2010, April: 62-69.

  2. Freeman T, Walshe K: Achieving progress through clinical governance? A national study of health care managers' perceptions in the NHS in England. Qual Health Care. 2004, 13: 335-343. doi:10.1136/qshc.2002.005108.

  3. Gauld R: The New Health Policy. 2009, Open University Press, Maidenhead.

  4. Brennan N, Flynn M: Differentiating clinical governance, clinical management and clinical practice. Clin Govern Int J. 2013, 18 (2): 114-131. doi:10.1108/14777271311317909.

  5. Scally G, Donaldson L: Clinical governance and the drive for quality improvement in the new NHS in England. Br Med J. 1998, 317 (7150): 61-65. doi:10.1136/bmj.317.7150.61.

  6. Dorgan S, Layton D, Bloom N, Homkes R, Sadun R, Van Reenen J: Management in Healthcare: Why Good Practice Really Matters. 2010, McKinsey and Company/London School of Economics, London.

  7. Hogan H, Basnett I, McKee M: Consultants' attitudes to clinical governance: barriers and incentives to engagement. Public Health. 2007, 121: 614-622. doi:10.1016/j.puhe.2006.12.013.

  8. Som CV: Making sense of clinical governance at different levels in NHS hospital trusts. Clin Govern Int J. 2009, 14 (2): 98-112. doi:10.1108/14777270910952252.

  9. Staniland K: A sociological ethnographic study of clinical governance implementation in one NHS hospital trust. Clin Govern Int J. 2009, 14 (4): 271-280. doi:10.1108/14777270911007782.

  10. Specchia ML, La Torre G, Siliquini R, Capizzi S, Valerio L, Nardella P, Campana A, Ricciardi W: OPTIGOV - a new methodology for evaluating clinical governance implementation by health providers. BMC Health Serv Res. 2010, 10 (174): 1-15.

  11. Spurgeon P, Mazelan P, Barwell F: Medical engagement: a crucial underpinning to organizational performance. Health Serv Manag Res. 2011, 24: 114-120. doi:10.1258/hsmr.2011.011006.

  12. Spurgeon P, Barwell F, Mazelan P: Developing a medical engagement scale (MES). Int J Clin Leader. 2008, 16: 213-223.

  13. Western Australian Clinical Governance Framework. 2005, Department of Health Western Australia, Perth.

  14. Clinical Governance Development... An Assurance Check for Health Service Providers. 2012, Quality and Patient Safety Directorate, Health Service Executive, Dublin.

  15. Gauld R, Horsburgh S, Brown J: The clinical governance development index: results from a New Zealand study. BMJ Qual Saf. 2011, 20: 947-953. doi:10.1136/bmjqs.2011.051482.

  16. Six Countries, Six Reform Models: The Healthcare Reform Experience of Israel, the Netherlands, New Zealand, Singapore, Switzerland and Taiwan. 2010, World Scientific Publishers, Singapore.

  17. Tenbensel T, Eagle S, Ashton T: Comparing health policy agendas across eleven high income countries: islands of difference in a sea of similarity. Health Pol. 2012, 106: 29-36. doi:10.1016/j.healthpol.2012.04.011.

  18. Gauld R: Big country, small country: how the United States debated health reform while New Zealand just got on with it. Int J Clin Pract. 2010, 64 (10): 1334-1336. doi:10.1111/j.1742-1241.2010.02477.x.

  19. NHS Leadership Qualities Framework. 2005, National Health Service Institute for Innovation and Improvement, London.

  20. In Good Hands: Transforming Clinical Governance in New Zealand. 2009, Ministerial Task Group on Clinical Leadership, Wellington.

  21. Bowling A: Research Methods in Health: Investigating Health and Health Services. 1997, Open University Press, Buckingham.

  22. R: A Language and Environment for Statistical Computing. 2012, R Foundation for Statistical Computing, Vienna.

  23. Gauld R, Horsburgh S: Clinical Governance Assessment Project: Final Report on a National Health Professional Survey and Site Visits to 19 New Zealand DHBs. 2012, Centre for Health Systems, University of Otago, Dunedin.

  24. Comparative Studies and the Politics of Modern Medical Care. 2009, Yale University Press, New Haven.

  25. Tuohy CH: Accidental Logics: The Dynamics of Change in the Health Care Arena in the United States, Britain and Canada. 1999, Oxford University Press, New York.

  26. Goodall AH: Physician-leaders and hospital performance: is there an association? Soc Sci Med. 2011, 73 (4): 535-539. doi:10.1016/j.socscimed.2011.06.025.

  27. Thomas EJ: Improving teamwork in healthcare: current approaches and the path forward. BMJ Qual Saf. 2011, 20: 647-650. doi:10.1136/bmjqs-2011-000117.

  28. Shekelle P, Pronovost P, Wachter R, McDonald K, Schoelles K, Dy S, Shojania K, Reston J, Adams A, Angood P, Bates DW, Bickman L, Carayon P, Donaldson L: The top patient safety strategies that can be encouraged for adoption now. Ann Intern Med. 2013, 158: 365-368. doi:10.7326/0003-4819-158-5-201303051-00001.

  29. Benson LA, Boyd A, Walshe K: Learning from regulatory interventions in healthcare: the Commission for Health Improvement and its clinical governance review process. Clin Govern Int J. 2006, 11 (3): 213-224. doi:10.1108/14777270610683146.

  30. Rittenhouse DR, Casilino L, Gilles R, Shortell SM, Lau B: Measuring the medical home infrastructure in large medical groups. Health Aff. 2008, 27 (5): 1246-1258. doi:10.1377/hlthaff.27.5.1246.

  31. Pathak S, Holzmueller CG, Haller KB, Pronovost PJ: A mile in their shoes: interdisciplinary education at the Johns Hopkins University School of Medicine. Am J Med Qual. 2010, 25 (6): 462-467. doi:10.1177/1062860610366591.

  32. Leape L, Berwick D, Clancy CM, Conway J, Gluck P, Guest J, Lawrence D, Morath J, O'Leary D, O'Neill P, Pinakiewicz D, Isaac T: Transforming healthcare: a safety imperative. Qual Health Care. 2009, 18: 424-428. doi:10.1136/qshc.2009.036954.

  33. Clinical Leadership: Bridging the Divide. 2009, Quay Books, London.

  34. Morton S, Bandara D, Robinson E, Atatoa Carr P: In the 21st century, what is an acceptable response rate? Aust N Z J Public Health. 2012, 36 (2): 106-108. doi:10.1111/j.1753-6405.2012.00854.x.

  35. Scott T, Mannion R, Davies H, Marshall M: The quantitative measurement of organizational culture in health care: a review of the available instruments. Health Serv Res. 2003, 38 (3): 923-945. doi:10.1111/1475-6773.00154.

  36. Paulus RA, Davis K, Steele GD: Continuous innovation in health care: implications of the Geisinger experience. Health Aff. 2008, 27 (5): 1235-1245. doi:10.1377/hlthaff.27.5.1235.

  37. James BC, Savitz LA: How Intermountain trimmed health care costs through robust quality improvement efforts. Health Aff. 2011, 30 (6): 1185-1191. doi:10.1377/hlthaff.2011.0358.


Acknowledgements

The authors acknowledge funding and support from the Association of Salaried Medical Specialists, the National Health Board, the Health Quality and Safety Commission, New Zealand's 20 DHBs and the University of Otago. The funding agreement ensured the authors' independence in designing, conducting and reporting on the study.

Author information


Correspondence to Robin Gauld.


Competing interests

The study reported on in this article was funded by the Association of Salaried Medical Specialists, the National Health Board, the Health Quality and Safety Commission, New Zealand's 20 DHBs and the University of Otago. The authors were commissioned to undertake an independent study, albeit in collaboration with the aforementioned agencies. The authors assert that they had no conflict of interest in designing the study and reporting the results.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


Cite this article

Gauld, R., Horsburgh, S. Measuring progress with clinical governance development in New Zealand: perceptions of senior doctors in 2010 and 2012. BMC Health Serv Res 14, 547 (2014). https://doi.org/10.1186/s12913-014-0547-8
