
The use of external change agents to promote quality improvement and organizational change in healthcare organizations: a systematic review

Abstract

Background

External change agents can play an essential role in healthcare organizational change efforts. This systematic review examines the role that external change agents have played within the context of multifaceted interventions designed to promote organizational change in healthcare—specifically, in primary care settings.

Methods

We searched PubMed, CINAHL, Cochrane, Web of Science, and Academic Search Premier Databases in July 2016 for randomized trials published (in English) between January 1, 2005 and June 30, 2016 in which external agents were part of multifaceted organizational change strategies. The review was conducted according to PRISMA guidelines. A total of 477 abstracts were identified and screened by 2 authors. Full text articles of 113 studies were reviewed. Twenty-one of these studies were selected for inclusion.

Results

Academic detailing (AD) was the most prevalent organizational change strategy employed as part of multi-component implementation strategies. Most of the 21 included studies integrated some form of audit and feedback into their interventions. Eleven studies included practice facilitation in their intervention and reported significant effects in one or more primary outcomes.

Conclusions

Our results demonstrate that practice facilitation with regular, tailored follow up is a powerful component of a successful organizational change strategy. Academic detailing, alone or combined with audit and feedback, is ineffective without intensive follow up. Provision of educational materials and use of audit and feedback are often integral components of multifaceted implementation strategies. However, we did not find examples where those relatively limited strategies were effective as standalone interventions. System-level support through technology (such as automated reminders or alerts) is potentially helpful, but must be carefully tailored to clinic needs.

Background

Change agents play an essential role in healthcare organizational change efforts. In his influential book Diffusion of Innovations [1], Everett Rogers introduces the concept of change agents as people who “introduce innovations into a client system that they expect will have consequences that will be desirable, direct, and anticipated.” Change agents can be internal to a client organization (for example, organizational leaders who support change; project champions who actively promote change; or organizational “opinion leaders” who, through their endorsement, promote change implicitly). This paper focuses on the role of external change agents. The Consolidated Framework for Implementation Research, a compendium of terms and constructs used in implementation research, defines external change agents as: “Individuals who are affiliated with an outside entity who formally influence or facilitate intervention decisions in a desirable direction. They usually have professional training in a technical field related to organizational change science or in the technology being introduced into the organization. This role includes outside researchers who may be implementing a multisite intervention study and other formally appointed individuals from an external entity (related or unrelated to the organization); e.g., a facilitator from a corporate or regional office or a hired consultant.” [2] In the research literature on organizational change in healthcare, external change agents go by various names, including (but not limited to) facilitator, coach, preceptor, consultant, and mentor. A compilation of discrete implementation strategies lists tactics such as “use an improvement/implementation advisor,” “Conduct educational outreach visits,” “Conduct ongoing training,” and “External facilitation,” all of which are examples of implementation strategies where external change agents play an essential role [3].

External change agents are fundamental to many implementation strategies. Often, the external change agent is the individual responsible for delivering the implementation strategy, and is thus largely responsible for fidelity to the intended strategy and its ultimate success in achieving desired organizational changes. This systematic review is designed to examine the role that external change agents have played in promoting organizational change in healthcare. We have chosen to focus our analysis on organizational change efforts within primary care clinics, which are often relatively small, community-based clinics that generally lack the kind of internal quality improvement resources that are relatively common in larger healthcare organizations (including designated, internal change agents). However, we did not exclude any study solely based on the size of the practice. Ultimately, this systematic review seeks to provide useful information to health services and implementation researchers in designing effective implementation strategies that involve external change agents to promote change in primary care settings and community-based clinics.

Methods

Review design

The protocol for this systematic review was designed in accordance with PRISMA guidelines. We engaged a research librarian (M.H.) to assist in developing the search strategy. The librarian ran a number of preliminary searches to fine-tune the search terms and eliminate off-topic results. This stage consisted of an iterative review and revision process between the first author and the librarian to refine the list of abstracts. The resulting list of search results included 477 abstracts.

Data sources and search strategy

With the assistance of the librarian, we searched PubMed, CINAHL, Cochrane, Web of Science, and Academic Search Premier databases in July 2016 for studies published between January 1, 2005 and June 30, 2016. We incorporated MeSH terms and CINAHL headings to refine our search results. The index terms and text words we used in our search consisted of combinations of the following: quality improvement OR change OR process improvement OR (organizational improvement OR organizational change) AND (outpatient clinics OR primary care OR drug use OR drug abuse OR opioid OR substance abuse OR addiction) AND (coaching OR facilitation OR mentoring OR precept* OR consulting OR academic detail*). Originally, we planned to include addiction treatment clinics, but we later excluded them because they operate very differently from primary care clinics.
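For readers who want to approximate the PubMed arm of this search programmatically, the sketch below submits the Boolean logic and date window through the NCBI E-utilities esearch endpoint. This is a minimal, hypothetical reconstruction: the exact term grouping, MeSH expansion, retmax cap, and use of the requests library are our assumptions and do not reproduce the authors' full five-database strategy.

```python
# Minimal sketch (not the authors' exact strategy): querying PubMed via the
# NCBI E-utilities esearch endpoint with the Boolean logic and date window
# described above. MeSH expansion and the other four databases are omitted.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

term = (
    "(quality improvement OR change OR process improvement OR "
    "organizational improvement OR organizational change) AND "
    "(outpatient clinics OR primary care OR drug use OR drug abuse OR "
    "opioid OR substance abuse OR addiction) AND "
    "(coaching OR facilitation OR mentoring OR precept* OR consulting OR "
    "academic detail*)"
)

params = {
    "db": "pubmed",
    "term": term,
    "datetype": "pdat",        # filter on publication date
    "mindate": "2005/01/01",
    "maxdate": "2016/06/30",
    "retmode": "json",
    "retmax": 500,             # arbitrary cap for this illustration
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]
print("PubMed hits:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```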

We applied filters for study type, omitting articles classified as case reports, clinical conferences, comments, editorials, letters, lectures, meta-analyses, opinion pieces, or reviews.

We also consulted field experts about studies they would suggest including in our review. The suggestions and reference lists forwarded by the experts were reviewed by the first author against the search criteria and included in the final list of 339 abstracts.

Inclusion and exclusion criteria

We included randomized controlled trials that investigate process improvement activities in healthcare organizations (general practices, primary care practices, private practices, and outpatient clinics) through external change agents. Because external change agents are referred to by a number of different titles, we included studies in which an external change agent provided facilitation, coaching, mentoring, precepting, or academic detailing to the organizations’ staff members (physicians, nurses, administrators, etc.). We focused our review on peer-reviewed clinical trials published since 2005 that measure the effects of 1 or more interventions.

We excluded studies describing patient coaching, as we wanted to focus on organizational change rather than individual health behavior change. We also excluded non-randomized studies, studies without a control group, and studies without the full text available in English, as well as reviews, conference proceedings, qualitative studies, and opinion pieces. In general, the initiatives targeted larger, clinic-wide changes rather than changes at the level of the individual practitioner.

Study selection and data extraction

We selected articles in 2 phases. In phase 1, after removing duplicates, two of the authors (EA and MC) independently screened the titles and abstracts of the search results against the inclusion and exclusion criteria. We obtained full articles for the abstracts that we identified as relevant. After a second round of elimination, which included reviewing full-text articles against the inclusion criteria, the final set of studies was identified. A number of clinical trials were excluded because they did not report on the effectiveness of the interventions. For phase 2, we independently reviewed the final list of full-text articles for the following variables: (1) intervention type, (2) setting, (3) study design, (4) intervention components, (5) background of change agents, (6) background of settings, (7) intervention duration and frequency of contact, (8) outcome measures, and (9) significance. A final review of the search results was conducted by the senior author (AQ) to finalize the list of included studies and the set of study variables and outcomes used to summarize them. Table 1 summarizes these variables.
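To make the phase-2 extraction template concrete, the sketch below shows one way the nine variables could be captured as a structured record. The field names and types are our own shorthand for illustration, not the authors' actual extraction form.

```python
# Hypothetical sketch of a phase-2 extraction record covering the nine
# variables listed above; names and types are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    study_id: str
    intervention_type: str                    # 1. e.g., "AD + audit and feedback"
    setting: str                              # 2. e.g., "primary care practices, USA"
    study_design: str                         # 3. e.g., "cluster RCT"
    intervention_components: List[str] = field(default_factory=list)   # 4.
    change_agent_background: str = ""         # 5. e.g., "pharmacist", "QI expert"
    setting_background: str = ""              # 6.
    duration_and_contact: str = ""            # 7. e.g., "12 months, monthly visits"
    outcome_measures: List[str] = field(default_factory=list)          # 8.
    significant_primary_effect: bool = False  # 9. significance of primary outcome(s)
```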

Table 1 Summary of studies included in the review

Data analysis

Because the research domains, study methods, and measures were disparate, we could not perform the data pooling necessary for a meta-analysis. Instead, we performed a qualitative synthesis to categorize how study interventions (specifically, the methods of change implementation) affected study outcomes. Based on this synthesis, we analyzed the multifaceted interventions employed in each study under 5 separate categories.

Definitions of intervention types

1. Academic detailing. Academic detailing is delivered by a trained professional in a face-to-face meeting at varying intervals. These informative sessions aim to improve a provider’s knowledge of a specific subject and usually seek to influence prescribing patterns. Detailers do not instruct clinicians to practice in a certain way or provide hands-on facilitation of organizational change.

2. Audit and feedback. Audit and feedback is a summary of clinic and/or provider performance delivered to healthcare professionals to improve their practices. Although audit and feedback is often delivered in face-to-face sessions, some studies also send reports via regular or electronic mail. Benchmarking, which compares a clinician’s practice against a standard, is included under this component.

3. Provision of educational materials. This component involves educational materials delivered by mail, electronically, or in person. Provision of educational materials is often included as a component in interventions that feature external change agents.

4. Practice facilitation (or coaching). Practice facilitation involves an external change agent who visits sites on a regular basis to assist with implementing changes and answer change-related questions. Unlike an academic detailer, the change agent follows up with the clinic on a regular basis to provide individualized feedback.

5. System support. If an intervention included an aspect of technical assistance (e.g., EHR systems, billing systems, IT support), we identified that component as system support.

Authors’ note: the terms “academic detailing,” “facilitation,” and “coaching” are neither well defined nor rigorously applied in the research literature. In this review, ‘practice facilitation’ includes facilitating, coaching, and mentoring. We have supplied general terms for reference here. In the study arms column of Table 1, we specify the frequency of contact the external change agent had with clinics.

Quality assessment

We also assessed the quality of the clinical trials included in this study. The lead author (EA) assessed risk of bias for the randomized clinical trials using the Cochrane Collaboration’s risk-of-bias tool, as described in the Cochrane Handbook for Systematic Reviews of Interventions [38]. The senior author (AQ) randomly selected 4 of the articles to rate for risk of bias. Studies were rated on five elements affecting risk of bias, and each study received an overall quality rating of high, low, or unclear risk of bias. For the 4 randomly selected articles, the co-authors agreed on 85% (17/20) of their ratings across the five subdomains, indicating high concordance in ratings of risk of bias. Of note, the search criteria we employed led us to review only articles with rigorous study designs, meaning those with randomization to treatment and control groups. Hence, no studies were excluded on the basis of risk of bias.
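The reported concordance works out to 4 articles rated on 5 risk-of-bias domains each, or 20 paired ratings, of which 17 matched. A minimal sketch of that calculation follows; the individual domain ratings are hypothetical, and only the 17/20 = 85% figure comes from the text.

```python
# Hypothetical sketch: percent agreement between two raters across
# 4 articles x 5 risk-of-bias domains = 20 paired ratings.
def percent_agreement(rater_a, rater_b):
    """Share of paired categorical ratings on which two raters agree."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches, matches / len(rater_a)

# Example ratings (hypothetical values chosen to reproduce 17/20 agreement)
rater_a = ["low"] * 12 + ["high"] * 4 + ["unclear"] * 4
rater_b = ["low"] * 12 + ["high"] * 4 + ["unclear"] * 1 + ["low"] * 3

matches, agreement = percent_agreement(rater_a, rater_b)
print(f"Agreement: {matches}/{len(rater_a)} = {agreement:.0%}")  # 17/20 = 85%
```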

Results

Search outcome

The initial search of 5 databases identified 477 articles published from January 1, 2005 to June 30, 2016 (Fig. 1). Three additional titles were identified through other sources (i.e., expert feedback). One hundred forty-one of these articles were duplicates. We screened the remaining 339 titles and abstracts for relevance. Of these, 226 articles were excluded because they did not meet the inclusion criteria. Full-text articles of 113 studies were reviewed, and 34 of these studies were initially selected for inclusion. After this selection, we examined the studies in more detail and identified a number of articles that were not eligible. The reasons for exclusion are as follows. The first excluded article did not report on the effectiveness of the interventions but reported on parts of a larger clinical trial; we searched for the larger trial, but it was published before 2005 [4]. The second study investigated the acceptability of the intervention and did not report on effectiveness [5]. The third study was eliminated because it reported the cost analysis of a randomized controlled trial (RCT) conducted in 2005 [6]. We excluded a group of studies that were conducted in disparate healthcare and community settings outside of primary care, including intensive care units, dental care, and addiction treatment clinics [7,8,9,10,11,12,13]. After reading through the articles, we also decided to exclude two studies conducted in developing nations [14, 15] to ensure the results we report are comparable in terms of context. Finally, we excluded one study that, upon further examination, proved not to be an RCT [16]. We extracted data from a final set of 21 studies.

Fig. 1 Flowchart
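As a cross-check of the flow described above and shown in Fig. 1, the screening counts reconcile as follows. This is a minimal arithmetic sketch: the numbers come from the text, while the variable names and grouping are ours.

```python
# Arithmetic check of the study-flow counts reported in the text / Fig. 1.
database_records = 477   # 5 databases, Jan 1, 2005 - Jun 30, 2016
other_sources = 3        # expert feedback
duplicates = 141

screened = database_records + other_sources - duplicates
assert screened == 339   # titles/abstracts screened

excluded_at_screening = 226
full_text_reviewed = screened - excluded_at_screening
assert full_text_reviewed == 113

initially_selected = 34
late_exclusions = (
    1     # part of a pre-2005 trial, no effectiveness data [4]
    + 1   # acceptability only [5]
    + 1   # cost analysis of a 2005 RCT [6]
    + 7   # settings outside primary care [7-13]
    + 2   # developing-nation settings [14, 15]
    + 1   # not an RCT on closer review [16]
)
final_included = initially_selected - late_exclusions
assert final_included == 21
print(screened, full_text_reviewed, final_included)  # 339 113 21
```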

Study, clinic, and intervention characteristics

Table 1 summarizes the study characteristics of the 21 studies included in this review. The studies represented 9 countries, with most studies occurring in the United States. Most of the included studies were cluster RCTs in which clinics were randomized. All included studies focused on primary care practices and general practices; hence, the target clients were mostly physicians and nurses.

The studies reported on a number of interventions combining the five components described previously. Table 1 lists the details of the interventions for each study. Most of the studies focused on changing policies regarding care for chronic conditions. Seventeen studies reported patient outcomes, while 4 studies reported on measures developed by the investigators (such as surveys, self-defined objectives, etc.) [17,18,19,20]. The studies reporting on patient outcomes pulled the data from an external electronic database (such as an electronic health record). The majority of studies measured the effects of the intervention components combined, as a single intervention.

Quality of included studies and risk for bias

The methodologic quality of the included randomized clinical trials is summarized in Table 2. Most of the studies had low risk of selection and performance bias. The participants and staff were unblinded to the interventions in all cases. However, we judged that the lack of blinding would have a minor impact on study outcomes because most studies analyzed the intervention effects on patient outcomes while randomizing at the clinic level. A number of unblinded studies utilized self-assessments (either clinician or patient) as outcome measurements; those cases are reported as high risk. Overall, the majority of the clinical trials in this review demonstrated low risk of bias.

Table 2 Quality of Included Randomized Controlled Trials (low, high, unclear)

Measured intervention strategies

All studies except one, which focused solely on the effects of academic detailing [21], investigated the impact of multifaceted interventions on practice change in healthcare practices. Nine studies investigated the effects of two-component interventions [10,11,12,13, 16, 18, 19, 21,22,23,24,25,26]. Others investigated three or more components. Sixteen of the 21 included studies used academic detailing (AD) as part of a multi-component intervention strategy [10, 13, 17, 21, 23,24,25,26,27,28,29,30,31,32,33,34]. Thirteen studies had a form of audit and feedback integrated into their intervention [17,18,19, 22,23,24,25, 28, 29, 32,33,34,35]. Eleven studies employed a type of practice facilitation or coaching during their interventions [17,18,19,20, 25, 27, 32,33,34,35,36]. All studies that included practice facilitation reported significant effects in one or more study outcomes. Apart from two studies [9, 36] that showed significant change only in secondary outcomes, none of the studies that demonstrated ‘no effect’ in primary outcomes employed practice facilitation as a component of their intervention.

Five studies reported having a form of information technology (IT) or system support [17, 26, 29, 30, 35]. Bertoni et al. [26], Clyne et al. [30], Feldstein et al. [29], and Mold et al. [17] utilized automated reminders that alerted clinicians when there was an error in the system. Smidth et al. [35] provided online forms or informational websites to participants. Only 2 studies utilized regular phone calls with sites [27, 37]; both reported this strategy as not significantly effective. Two studies mailed educational materials to patients [30, 31]; neither demonstrated a significant effect.

Background of academic detailers and external change agents

Most of the studies employed pharmacists and pharmacologists to deliver the academic detailing [9,10,11,12, 21, 24, 30]. Physicians and nurses were also engaged as academic detailers [8, 13, 24, 26, 27, 29, 33, 34, 36]. In some studies, the investigators employed quality improvement experts who were trained in organizational change implementation [10, 17, 18, 20, 25, 32, 37]. These studies usually focused on change implementation and targeted practice facilitation as part of their intervention rather than delivering only academic detailing. All of the studies that included practice facilitation through external change agents demonstrated a positive effect regardless of the change agent’s specific background [17,18,19,20, 25, 27, 32,33,34,35,36].

Discussion

Effectiveness of multifaceted interventions

Because the context of each study varied and there was no uniform reporting measure of effect size, we assessed the effectiveness of an intervention based on the p values of change between study arms reported in each study. Thirteen of the 21 studies reported a significant positive change (p < .05) in their primary outcomes in the intervention arm [17,18,19,20, 22, 23, 25, 27,28,29, 32, 34, 35]. All of these studies investigated the effects of multi-faceted interventions that included at least two of the intervention categories described under the data analysis section.

Four studies reported mixed results: there was a significant improvement in some outcomes but no improvement in others. Bertoni et al. [26] reported that two of their four outcome measures showed significant improvements; this study tested the effectiveness of academic detailing coupled with system support. Clyne et al. [30] reported a significant decrease in one of the primary outcomes, whereas the improvement in the second outcome was not significant. Mold et al. [17] reported a significant positive impact in one of three primary outcomes. Ornstein et al. [33] reported positive change only in a specific age group.

Importance of individualized follow up

Follow-up individualized to the clinic stands out as an integral component of multi-faceted interventions for organizational change. A majority of the studies that reported significant improvement in their results included individualized follow up in their interventions. The studies used a number of titles to describe a change agent, including practice facilitator, practice enhancement assistant, and outreach visitor. Facilitation was always conducted by an expert change agent trained in quality improvement tools and methods. The change agents performed the follow-up meetings in person and on a regular basis, averaging at least once a month. The changes implemented in the clinics were always tailored to clinic needs. Only one study that included change agents demonstrated no change [36].

Overall, six of the studies reported that their intervention did not demonstrate any significant improvement in the primary outcomes [13, 21, 24, 28, 31, 36]. One thing these studies had in common was that all of them except one [36] tested academic detailing coupled with audit and feedback with little to no follow up, which underscores the importance of facilitation in organizational change. A number of reasons were listed as possible causes for the lack of effect. Hennessy et al. [31] emphasized that lack of intensive follow up may have caused the unfavorable outcomes. Varonen et al. [13] pointed out that some of the practices were already implementing the targeted policies of the intervention at baseline, limiting their outcomes. Two studies [21, 24] that showed no significant difference between study arms reported that both intervention and control groups demonstrated positive impact, which may be attributed to the effects of larger-scale campaigns taking place simultaneously with the study.

This review should be considered in the context of some limitations. Although our search results include a number of studies that reported no significant effect, it is possible that other clinical trials that did not demonstrate significant effects have not been published, and hence not included in this review; publication bias may have decreased the number of studies with negative findings. A second limitation is that variation in the types of labels used to identify the components of the interventions can act as a barrier to identifying relevant studies. For instance, practice facilitation can be labeled as coaching, consulting, or mentoring, whereas academic detailing can also be referred to using a wide set of terms, including learning collaboratives or problem-based learning. A third limitation is that our systematic review was rather tightly circumscribed through the application of our search criteria (we limited our search to randomized controlled trials featuring the use of external change agents in primary care settings in developed countries). We sought to minimize heterogeneity in terms of settings, study designs, and interventions in order to clearly isolate the effect of external change agents. This decision does not imply that there is not significant knowledge to be gained by studying the role of external change agents in developing countries, in healthcare settings besides primary care, and in otherwise eligible primary care studies that have used non-RCT study designs. Examination of the role of external agents across different contexts should be the subject of future research. Finally, the type, duration, and intensity of follow up (in-person or over the phone) provided through the practice facilitation is not always clearly reported, which may have an effect on the reported outcomes. The studies also vary in terms of how the educational materials are delivered (face to face vs online or mailed). This information was not always explicit in the reports.

Conclusions

This systematic review outlines the characteristics and effectiveness of implementation strategies led by external change agents to promote improvement in healthcare organizations. Our review highlights important findings and points out critical gaps in knowledge that require further investigation. As an implementation strategy, simply informing clinics of opportunities to improve (via audit and feedback) or advising them on what they should be doing (via educational materials or system-level supports) appears generally insufficient to change clinical practice. In-person education (or persuasion) delivered via academic detailing is also insufficient to change clinical practice in the absence of frequent, individualized follow up with clinics. Overall, our results suggest that a multi-faceted implementation strategy featuring regular, tailored follow up via practice facilitation is most likely to promote successful organizational change. Provision of educational materials, audit and feedback, and system support are often integral components of such a strategy, but those components cannot function as implementation strategies on their own. Our findings, consistent with theories on organizational change, suggest that a more comprehensive strategy is required to change clinical practice, involving the thoughtful selection and deployment of external change agents who work intimately with clinic sites as part of a multifaceted implementation strategy.

References

  1. Rogers E. Diffusion of innovations. New York: The Free Press; 1995.

  2. Damschroder LJ, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

  3. Powell BJ, et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci. 2015;10(1):21.

  4. Polinski JM, et al. Educational outreach (academic detailing) regarding osteoporosis in primary care. Pharmacoepidemiol Drug Saf. 2005;14(12):843–50.

  5. Lawson G, et al. Acceptability of physician directed academic detailing to increase colorectal cancer screening: an application of the RESPECT approach. Health Promot Perspect. 2015;5(3):169.

  6. Simon SR, et al. Economic analysis of a randomized trial of academic detailing interventions to improve use of antihypertensive medications. J Clin Hypertens. 2007;9(1):15–20.

  7. Curtis JR, et al. Effect of a quality-improvement intervention on end-of-life care in the intensive care unit: a randomized trial. Am J Respir Crit Care Med. 2011;183(3):348–55.

  8. Silva JM, et al. Academic detailing and adherence to guidelines for group B streptococci prenatal screening: a randomized controlled trial. BMC Pregnancy Childbirth. 2013;13(1):1.

  9. Jones J, et al. Effectiveness of an intervention to facilitate the implementation of healthy eating and physical activity policies and practices in childcare services: a randomised controlled trial. Implement Sci. 2015;10(1):1.

  10. Pearson FS, et al. Efficacy of a process improvement intervention on delivery of HIV services to offenders: a multisite trial. Am J Public Health. 2014;104(12):2385–91.

  11. Seager JM, et al. A randomised controlled trial of clinical outreach education to rationalise antibiotic prescribing for acute dental pain in the primary care setting. Br Dent J. 2006;201(4):217–22.

  12. Tjia J, et al. Dissemination of evidence-based antipsychotic prescribing guidelines to nursing homes: a cluster randomized trial. J Am Geriatr Soc. 2015;63(7):1289–98.

  13. Varonen H, et al. Implementing guidelines on acute maxillary sinusitis in general practice—a randomized controlled trial. Fam Pract. 2007;24(2):201–6.

  14. Awad A, Eltayeb I, Baraka O. Changing antibiotics prescribing practices in health centers of Khartoum state, Sudan. Eur J Clin Pharmacol. 2006;62(2):135–42.

  15. Khanal S, et al. Evaluation of academic detailing programme on childhood diarrhoea management by primary healthcare providers in Banke District of Nepal. J Health Popul Nutr. 2013;31(2):231–42.

  16. Schuster R, Tasosa J, Terwoord N. Translational research—implementation of NHLBI obesity guidelines in a primary care community setting: the physician obesity awareness project. J Nutr Health Aging. 2008;12(10):764–9.

  17. Mold JW, Aspy CA, Nagykaldi Z. Implementation of evidence-based preventive services delivery processes in primary care: an Oklahoma physicians resource/research network (OKPRN) study. J Am Board Fam Med. 2008;21(4):334–44.

  18. Dickinson WP, et al. Practice facilitation to improve diabetes care in primary care: a report from the EPIC randomized clinical trial. Ann Fam Med. 2014;12(1):8–16.

  19. Engels Y, et al. The effects of a team-based continuous quality improvement intervention on the management of primary care: a randomised controlled trial. Br J Gen Pract. 2006;56(531):781–7.

  20. Parchman ML, et al. A randomized trial of practice facilitation to improve the delivery of chronic illness care in primary care: initial and sustained effects. Implement Sci. 2013;8(1):1.

  21. Naughton C, Feely J, Bennett K. A RCT evaluating the effectiveness and cost-effectiveness of academic detailing versus postal prescribing feedback in changing GP antibiotic prescribing. J Eval Clin Pract. 2009;15(5):807–12.

  22. Lowrie R, et al. A cluster randomised controlled trial of a pharmacist-led collaborative intervention to improve statin prescribing and attainment of cholesterol targets in primary care. PLoS One. 2014;9(11):e113370.

  23. Magrini N, et al. Long term effectiveness on prescribing of two multifaceted educational interventions: results of two large scale randomized cluster trials. PLoS One. 2014;9(10):e109915.

  24. Rognstad S, et al. Prescription peer academic detailing to reduce inappropriate prescribing for older patients: a cluster randomised controlled trial. Br J Gen Pract. 2013;63(613):e554–62.

  25. Aspy CB, et al. Improving mammography screening using best practices and practice enhancement assistants: an Oklahoma physicians resource/research network (OKPRN) study. J Am Board Fam Med. 2008;21(4):326–33.

  26. Bertoni AG, et al. Impact of a multifaceted intervention on cholesterol management in primary care practices: guideline adherence for heart health randomized trial. Arch Intern Med. 2009;169(7):678–86.

  27. Sheffer MA, et al. Fax referrals, academic detailing, and tobacco Quitline use: a randomized trial. Am J Prev Med. 2012;42(1):21–8.

  28. Dignan M, et al. Effectiveness of a primary care practice intervention for increasing colorectal cancer screening in Appalachian Kentucky. Prev Med. 2014;58:70–4.

  29. Feldstein AC, et al. Reducing warfarin medication interactions: an interrupted time series evaluation. Arch Intern Med. 2006;166(9):1009–15.

  30. Clyne B, et al. Effectiveness of a multifaceted intervention for potentially inappropriate prescribing in older patients in primary care: a cluster-randomized controlled trial (OPTI-SCRIPT study). Ann Fam Med. 2015;13(6):545–53.

  31. Hennessy S, et al. Effectiveness of a two-part educational intervention to improve hypertension control: a cluster-randomized trial. Pharmacotherapy. 2006;26(9):1342–7.

  32. Mold JW, et al. Implementing asthma guidelines using practice facilitation and local learning collaboratives: a randomized controlled trial. Ann Fam Med. 2014;12(3):233–40.

  33. Ornstein S, et al. Colorectal cancer screening in primary care: translating research into practice. Med Care. 2010;48(10):900.

  34. Ornstein SM, et al. Integration and sustainability of alcohol screening, brief intervention, and pharmacotherapy in primary care settings. J Stud Alcohol Drugs. 2013;74(4):598–604.

  35. Smidth M, et al. The effect of an active implementation of a disease management programme for chronic obstructive pulmonary disease on healthcare utilization-a cluster-randomised controlled trial. BMC Health Serv Res. 2013;13(1):1.

  36. Hogg W, et al. Improving prevention in primary care: evaluating the effectiveness of outreach facilitation. Fam Pract. 2008;25(1):40–8.

  37. Gustafson DH, et al. Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addiction. 2013;108(6):1145–57.

  38. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Acknowledgements

The authors acknowledge the contribution of Roberta Johnson from UW Family Medicine & Community Health for assisting in editing of the manuscript.

Funding

This study was funded by the National Institute on Drug Abuse (NIDA).

Availability of data and materials

Not applicable.

Author information

Contributions

EA and AQ conceived and designed the study. The literature review was conducted by EA, MYC, and MH. EA and AQ wrote the manuscript. RB reviewed and revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Esra Alagoz.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Alagoz, E., Chih, MY., Hitchcock, M. et al. The use of external change agents to promote quality improvement and organizational change in healthcare organizations: a systematic review. BMC Health Serv Res 18, 42 (2018). https://doi.org/10.1186/s12913-018-2856-9
