
Perspectives on program mis-implementation among U.S. local public health departments

Abstract

Background

Public health resources are limited and best used for effective programs. This study explores associations of mis-implementation in public health (ending effective programs or continuing ineffective programs) with organizational supports for evidence-based decision making among U.S. local health departments.

Methods

The national U.S. sample for this cross-sectional study was stratified by local health department jurisdiction population size. One person was invited from each randomly selected local health department: the leader in chronic disease, or the director. Of 600 selected, 579 had valid email addresses; 376 completed the survey (64.9% response). Survey items assessed frequency of and reasons for mis-implementation. Participants indicated agreement with statements on organizational supports for evidence-based decision making (7-point Likert).

Results

Thirty percent (30.0%) reported programs often or always ended that should have continued (inappropriate termination); organizational supports for evidence-based decision making were not associated with the frequency of programs ending. The main reason given for inappropriate termination was grant funding ended (86.0%). Fewer (16.4%) reported programs often or always continued that should have ended (inappropriate continuation). Higher perceived organizational supports for evidence-based decision making were associated with less frequent inappropriate continuation (odds ratio = 0.86, 95% confidence interval 0.79, 0.94). All organizational support factors were negatively associated with inappropriate continuation. Top reasons were sustained funding (55.6%) and support from policymakers (34.0%).

Conclusions

Organizational supports for evidence-based decision making may help local health departments avoid continuing programs that should end. Creative mechanisms of support are needed to avoid inappropriate termination. Understanding what influences mis-implementation can help identify supports for de-implementation of ineffective programs so resources can go towards evidence-based programs.

Background

Mis-implementation of public health programs, policies, and services can occur in two ways: ending effective programs that should continue (inappropriate termination), or continuing ineffective programs that should end (inappropriate continuation) [1,2,3]. Here the term program refers to public health policies, environmental or system changes, educational and media activities, and services such as immunizations or screening for disease detection. De-implementation refers to ending ineffective or low-value programs, and is studied more often in medicine than in public health [1, 4,5,6,7,8,9,10,11,12]. The international Choosing Wisely initiative has recommended numerous medical procedures for de-implementation [13, 14]. McKay and colleagues (2018) recently outlined several public health and social service initiatives that have been discontinued or warrant de-implementation because they are harmful (prone infant sleeping position), ineffective (D.A.R.E. school-based drug prevention program), low value (routine HIV counseling with HIV testing), or the issue dissipated (Ebola) [15]. Evidence suggests these phenomena could have negative impacts on our public health systems [15].

Both types of mis-implementation need to be addressed in public health. Governmental public health departments in the US have experienced budget cuts and high staff turnover over the past decade [16, 17]. Finding ongoing funding is often challenging [15, 18,19,20,21]. For example, pressure to deliver programs within funders’ deadlines despite lack of funding for staff led to insufficient planning and incomplete statewide implementation of an evidence-based arthritis program [22]. External politics also influence funding and implementation decisions [21, 23]. Additional aspects influencing sustainment likely vary by program type and include implementation monitoring to improve program adaptation and delivery, partnerships, planning, and communications [18, 24,25,26]. It is therefore important to assess and address both types of mis-implementation in public health practice.

The impact of organizational supports for evidence-based decision making (EBDM) on mis-implementation of public health programs is not yet understood, though organizational structures and processes have been found to affect implementation of medical procedures and mental health services [7, 8, 20, 27, 28]. EBDM fosters implementation of effective programs and prevents mis-implementation through use of the best available surveillance data and intervention evidence in setting priorities and selecting programs, application of systematic prioritization and program planning methods, community engagement, and evaluation to inform adaptation and implementation [29,30,31]. Capacity building for EBDM includes training to increase skills of individual staff members and management practices to enhance organizational supports for EBDM.

A literature review identified five domains of organizational supports that are associated with agency performance: leadership, workforce development, organizational climate and culture, relationships and partnerships, and financial practices [32]. Leadership support for EBDM includes leadership skills, active modeling of EBDM processes, communication of expectations for use of EBDM processes, and participatory decision-making [32,33,34]. Workforce development includes in-service training and access to technical assistance. An organizational climate and culture supportive of EBDM has a free flow of information, values evidence and continued learning, and supports methods that may be new to the organization, such as specific prioritization or quality improvement processes [32, 33]. Relationships and partnerships with organizations from different sectors that align their missions and build EBDM capacity are essential, as no public health agency can accomplish complex multi-level interventions in isolation [32]. Supportive financial practices are transparent; incorporate outcomes-based contracting; allocate funds for quality improvement, EBDM, information access, and staff training; and diversify funding sources [32, 35, 36]. While these management practices support EBDM, little is known about direct relationships of these organizational supports with mis-implementation frequency.

To support use of EBDM and prevent mis-implementation, the Centers for Disease Control and Prevention (CDC) and other federal funders in the US increasingly require use of evidence-based interventions (EBIs) by grantees [37, 38]. In chronic disease prevention, CDC includes evidence-based practice requirements in funding state health departments that then pass on funding to local public health agencies. State health departments vary in the extent to which they in turn require local grantees to implement evidence-based strategies [33], even though they rely on local health departments (LHDs) to lead population health efforts to prevent chronic conditions [29]. Delivery of chronic disease prevention programs was not part of the historical regulatory responsibilities of LHDs and remains highly dependent on flow-through funds from federal and state agencies and private foundations. The Public Health Accreditation Board emphasizes workforce development for and documentation of evidence-based practice in its requirements for accreditation of state and local public health departments [39]. Public health workforce competencies include skills needed to plan and implement effective programs [40].

The purposes of the present study are to: 1) describe self-reported LHD frequency of and reasons for mis-implementation among a national sample of LHD chronic disease directors and 2) explore associations between perceived organizational supports for evidence-based processes and mis-implementation.

Methods

Study design

This national cross-sectional study was part of a larger study that used a stratified random sampling design to invite one individual from each included US LHD to complete an online survey [41]. Eligible LHDs were those that screened for diabetes or body mass index, or conducted or contracted for population-based nutrition and physical activity efforts, according to the 2016 National Association of County and City Health Officials (NACCHO) National Profile survey. Of the 1677 eligible LHDs, a total of 600 were randomly selected, 200 in each of three jurisdiction population size strata: small (< 50,000), medium (50,000-199,999), and large (≥ 200,000). The Washington University in St. Louis Institutional Review Board approved the study.
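
As an illustration only, the stratified draw described above could be set up along the following lines. This is a minimal sketch, not the study's actual sampling code; the file name, column names, and random seed are hypothetical.

```python
import pandas as pd

# Hypothetical frame of the 1,677 eligible LHDs from the 2016 NACCHO Profile;
# the file and column names are illustrative, not the study's actual data.
lhd_frame = pd.read_csv("eligible_lhds_2016.csv")

# Assign each LHD to a jurisdiction population size stratum:
# small (< 50,000), medium (50,000-199,999), large (>= 200,000).
bins = [0, 50_000, 200_000, float("inf")]
labels = ["small", "medium", "large"]
lhd_frame["stratum"] = pd.cut(lhd_frame["population"], bins=bins,
                              labels=labels, right=False)

# Randomly select 200 LHDs per stratum (600 total); fixed seed for reproducibility.
sample = (lhd_frame.groupby("stratum", group_keys=False, observed=True)
          .apply(lambda g: g.sample(n=200, random_state=2017)))
print(sample["stratum"].value_counts())
```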

Participants and data collection

The person responsible for making decisions about chronic disease prevention and control in each selected LHD was invited by email to complete the online Qualtrics survey. In some LHDs this was the LHD director, while in others it was a division director or program manager, according to lists provided by NACCHO. Invitees with invalid email addresses were deemed ineligible. Email invitations included materials for informed consent, as did the cover letter of the online survey. All participants gave online consent to the survey. Invitees could decline by email, phone, or online. Participants were offered a $20 Amazon.com gift card. To increase response rates, up to three reminder emails were sent and two phone calls were made to non-respondents. Data collection took place in August–September 2017.

Survey development

As described in detail elsewhere, the study team drew from measures developed and tested in its previous national health department studies and from other existing instruments it had identified [32, 41,42,43]. The survey included sections on mis-implementation (frequency and reasons), LHD and participant characteristics, skills for EBDM, and organizational supports. After three rounds of study team input and cognitive response testing with 10 chronic disease prevention practitioners, the survey demonstrated acceptable test-retest reliability [41].

Measures

The four mis-implementation survey items covered frequency of and reasons for each type of mis-implementation. Frequency was assessed with two items: “In your opinion, how often do programs end that should have continued (i.e., end without being warranted)?” and “In your opinion, how often do programs continue that should have ended (i.e., continue without being warranted)?” Response options were “never, rarely, sometimes, often, always, I do not know.” To understand reasons for mis-implementation, participants were asked, “When you think about public health programs that have ended when they should have continued, what are the most common reasons for programs ending?” and “When you think about public health programs that continued when they should have ended, what are the most common reasons for their continuation?” Response options included pre-listed reasons, “other,” and “I do not know.” The study team listed reasons found in the literature or commonly selected in pilot studies [2, 3]. In a pilot study with a purposive sample of chronic disease practitioners from LHDs, state health departments, and partnering agencies in multiple states, frequency of mis-implementation had 79% test-retest agreement for programs ending that should have continued and 80% agreement for programs continuing that should have ended [2]. LHD characteristics included items from the 2016 NACCHO National Profile Survey and items within the current study’s survey. Participant demographic items were from the study team’s previous surveys. The supplemental file contains the complete survey instrument.
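
The 79% and 80% figures cited above are simple percent agreement between two administrations of the same item. A minimal sketch of that calculation follows, with toy values standing in for the pilot responses; the data are illustrative only.

```python
import pandas as pd

# Toy paired responses from the same participants at two administrations,
# on the "never ... always / I do not know" scale (illustrative values only).
time1 = pd.Series(["often", "sometimes", "always", "rarely", "often"])
time2 = pd.Series(["often", "sometimes", "often", "rarely", "often"])

# Percent agreement: share of participants giving the identical response both times.
agreement = (time1 == time2).mean() * 100
print(f"Test-retest percent agreement: {agreement:.0f}%")  # 80% for this toy data
```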

Survey items also assessed organizational supports for evidence-based decision making (EBDM) on a 7-point Likert agreement scale (1 = strongly disagree, 7 = strongly agree). Confirmatory factor analyses in Mplus supported the study’s conceptual framework of six organizational support factors [41, 44]. Factor scores for each participant were calculated in Mplus. Table 1 lists the factors and items, with exact item wording.

Table 1 Evidence-based decision making (EBDM) support factors and items
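
Because Mplus is proprietary, the study’s CFA-based factor scores are not reproduced here. Purely as a simplified stand-in (not the study’s method), per-factor means of the 1–7 Likert items could be computed as sketched below; the item-to-factor mapping shown is placeholder shorthand, and the actual items are listed in Table 1.

```python
import pandas as pd

# Placeholder item-to-factor mapping; the real assignments appear in Table 1.
factor_items = {
    "factor_1": ["q1", "q2", "q3"],
    "factor_2": ["q4", "q5", "q6"],
    "factor_3": ["q7", "q8"],
    # ... remaining factors omitted in this sketch
}

def mean_factor_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Average the 1-7 Likert items within each factor for every participant.
    A simplified proxy only; the study used CFA factor scores estimated in Mplus."""
    return pd.DataFrame({factor: responses[items].mean(axis=1)
                         for factor, items in factor_items.items()})
```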

Statistical analysis

Data management, variable recoding, and descriptive and bivariate analyses were conducted in SPSS (IBM SPSS Statistics for Windows, Version 24.0) in 2018. Multivariate logistic regression modeling was conducted in SAS software (SAS Institute Inc., Version 9.4) in 2018. The two dependent variables, frequency of programs ending when they should have continued and frequency of programs continuing when they should have ended, were each dichotomized into 1 = often or always and 0 = never, rarely, or sometimes, after excluding “I do not know” and blank responses. There were too few responses of “never” and “rarely” to analyze these as a separate category. The n’s varied slightly because different numbers of participants answered “I do not know” for each item. Separate models were fit for each mis-implementation dependent variable and each organizational support for EBDM (the independent variable of interest in each model). All models were adjusted for LHD jurisdiction population size and state. Because the study design included only one participant per LHD and intra-cluster correlation (ICC) statistics for the six EBDM support factors by state were low (ranging from 0.005 to 0.012), mixed modeling was not conducted. Models were additionally adjusted for LHD or participant characteristics associated with both the mis-implementation frequency (dependent variable) and the organizational support for EBDM (independent variable of interest).
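
The modeling described above was done in SAS; the sketch below shows the same type of model (a single adjusted logistic regression for one EBDM support factor) in Python with statsmodels, purely as an illustration. The file and column names are hypothetical, not the study’s variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset: one row per participant/LHD, with an EBDM
# support factor score, jurisdiction population stratum, state, and the
# reported frequency of programs continuing that should have ended.
df = pd.read_csv("lhd_survey_2017.csv")

# Drop "I do not know" and blank responses, then dichotomize the outcome:
# 1 = often/always, 0 = never/rarely/sometimes.
valid = ["never", "rarely", "sometimes", "often", "always"]
df = df[df["continue_freq"].isin(valid)].copy()
df["continue_bin"] = df["continue_freq"].isin(["often", "always"]).astype(int)

# One logistic model per EBDM support factor, adjusted for jurisdiction
# population size stratum and state, mirroring the description above.
model = smf.logit(
    "continue_bin ~ support_score + C(pop_stratum) + C(state)", data=df
).fit()

# Adjusted odds ratio and 95% confidence interval for the support factor.
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(or_table.loc["support_score"])
```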

Results

Participants

Of the 579 eligible LHD chronic disease leads with valid email addresses, 376 (64.9%) completed the online survey. Per the study design, there was only one participant from each LHD. Table 2 shows participant and LHD characteristics of the sample. Most participants were women (83.2%) and had worked in public health 10 or more years (71.4%). Participants had been in their positions an average of 6.5 years (standard deviation 6.5 years; median 4 years). The majority (58.2%) had a graduate degree; nearly a third (31.8%) had a graduate degree specifically in public health; and 29.1% had a nursing background. As designed, the stratified sample was split roughly in thirds by local health department jurisdiction population size. The sample included LHDs from 44 of 51 jurisdictions (the 50 states and the District of Columbia).

Table 2 Participant and local health department characteristics, by perceived mis-implementation, 2017 national survey

Frequency of mis-implementation

Thirty percent (30.0%) of participants reported inappropriate termination, defined here as reporting programs often or always end that should have continued (Table 2). While frequency of inappropriate termination was not associated with any participant characteristics, those working in health departments with a jurisdiction population size < 50,000 were more likely to report inappropriate termination than participants in larger jurisdictions (p = .05). In addition, those in health departments governed by a local board of health were also more likely to report inappropriate termination compared to those with other forms of governance (p = .04). Only 16.4% reported inappropriate continuation, defined here as reporting programs often or always continue when they should have ended (Table 2). Frequency of inappropriate continuation did not differ by characteristics of participants or LHDs.

Table 3 shows adjusted odds ratios of reporting mis-implementation in separate models for each organizational support for EBDM. Organizational supports were not significantly associated with frequency of inappropriate termination in either an unadjusted model or a model adjusted for jurisdiction population size, state, having a local board of health, graduate degree in any field, nursing background, and years worked in public health. All six organizational support factors were negatively associated with inappropriate continuation after adjusting for LHD jurisdiction population size, state, public health graduate degree, and age group. That is, participants who reported a higher presence of organizational supports for EBDM reported less frequent continuation of programs that should have ended.

Table 3 Adjusted odds ratios of reporting mis-implementation by organizational supports, in separate multivariate logistic regression modelsa
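
As a reading aid only: the odds ratios in Table 3 apply per one-unit increase in a factor score, so larger differences in perceived support compound multiplicatively. The arithmetic below uses the abstract’s OR of 0.86 purely for illustration.

```python
# Illustrative arithmetic only: with a per-unit adjusted OR of 0.86, an LHD
# scoring three units higher on a support factor would have odds of
# inappropriate continuation lower by a factor of roughly 0.86 ** 3.
or_per_unit = 0.86
print(round(or_per_unit ** 3, 2))  # 0.64, i.e. roughly 36% lower odds
```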

Reasons for mis-implementation

Figure 1 shows percentages of participants who selected pre-listed potential reasons for inappropriate termination. The most commonly chosen reasons related to funding: grant funding ended (86.0%) or funding was diverted to a higher-priority program (45.7%). Fewer than 25% of participants selected each of the remaining pre-listed reasons. Twelve participants (3.4%) reported “other” reasons, which included lack of staff (n = 4), other funding issues (n = 4), low program participation (n = 2), and single mentions of other reasons.

Fig. 1 Reasons for ENDING programs that should have continued in local health departments, n = 350. Total percent does not equal 100% as participants could select more than one option

As shown in Fig. 2, the most frequently selected reasons for inappropriate continuation were sustained funding (55.6%) and sustained support from policymakers (34.0%). Sustained support from agency leaders (27.7%), ease of maintaining the program (28.6%), lack of program evaluation (28.3%), and absence of alternative program options (27.7%) were each cited by more than 25% of participants. The 17 (5.2%) “other” responses included resistance to change (n = 8) and continuation being “required,” “mandated,” “requested,” or “supported by others” (n = 6). Resistance to change included “it’s what we’ve always done,” “inertia,” “tradition,” “staff ingrained,” “fear of change,” and “resistance to stopping.” Reasons did not vary by reported frequency of inappropriate continuation (data not shown).

Fig. 2 Reasons for CONTINUING programs that should have ended in local health departments, n = 329. Total percent does not equal 100% as participants could select more than one option

Discussion

Our study shines light on two phenomena in mis-implementation of public health programs in local public health practice: inappropriate termination and inappropriate continuation. For programs ending when they should have continued, the most commonly reported reason was the ending of funding, and organizational support for EBDM within the LHD was not associated with frequency. For programs continuing when they should have ended, the most common reason was that funding was sustained, and there appeared to be an association between lower organizational support for EBDM and more frequent inappropriate continuation.

Reported frequency of mis-implementation in this study was lower than that found in the study team’s earlier pilot work [2], but still concerning. In pilot data from 2013 to 2014 with identical item wording, 42.0% of LHD directors and program managers reported programs often or always ended that should have continued and 29.4% reported programs often or always continued that should have ended [2], compared to 30.0 and 16.4% respectively in the present study. Sampling methods differed, so findings may not be fully comparable. Nonetheless, the lower reported frequencies in the present study may reflect funders’ increased requirements since the previous study for LHDs to demonstrate use of EBIs in chronic disease prevention. Given current national emphasis on EBIs, there may also have been reluctance to report inappropriate continuation (social desirability bias).

It is encouraging that higher perceived organizational supports for EBDM were associated with less frequent inappropriate continuation of programs, but it is puzzling that several organizational support factors trended toward positive, non-significant associations with inappropriate termination. We can only surmise that managers in LHDs with higher evaluation capacity may be more aware of inappropriate termination. As shown in Fig. 1, only 8% reported lack of evaluation, and only 12% reported lack of program impact, as top reasons for inappropriate termination.

In the present study, organizational supports were insufficient to ensure program sustainment, whereas other studies have found that multiple internal and external factors affect sustainment. Reviews found that organizational climate, leadership support, staff commitment and skills, adequate staffing and low staff turnover, organizational resources, and partnerships affect EBI sustainability [18, 45, 46]. The review by Hodge and Turner (2016) found that engagement with community leaders and individual implementers was key to community-based program sustainment in low-resource settings [18]. Engagement involves building relationships with community policy makers and implementers [18, 46]. An important aspect is making decisions together to ensure programs are aligned with community context, cultures, and priorities [18, 46]. Collaborative partnerships across organizations and coalitions are also key to program sustainment [18, 45, 46]. High-functioning organizational partnerships that leverage the capacity of each collaborating organization are more likely to sustain programs [18, 45, 46]. Policy and legislation are associated with sustainment of programs in community, clinical, and social service settings [45]. Engaging community leaders and other policy makers throughout programmatic decision-making can increase the likelihood of program sustainment [18].

Qualitative studies emphasize the importance of leadership support, planning, partnerships, and communication in capacity to sustain public health EBIs [26, 47]. Reassignment of staff, changes in staff workloads, and changes in leadership led to discontinuation of an evidence-based trauma intervention in a large urban school district [48]. Lack of organizational policy to support staff time to attend training led to partial implementation of after school physical activity efforts [49]. But in the present study, ending of funding was by far the most commonly reported reason for inappropriate termination, as found in a recent review [50], and organizational supports were not protective. This reflects lack of ongoing funding sources for EBIs, and points out the need for strong community and policy maker engagement, inter-organizational partnerships, and alternate and diversified funding sources. There is a need for better communication with policy makers and other program decision makers on the importance of and evidence for population-based chronic disease prevention. Communicating evidence to policy makers remains one of the top skill gaps among health department staff [51].

Public health systems are working to scale up EBIs [22, 45, 49], but little is known about strategies to address inappropriate continuation of ineffective approaches. Here public health can learn from medical studies, even though organizational structures and funding sources differ. Healthcare systems are acknowledging that ending low-value care is difficult, that it requires different processes than implementation, and that the best strategies are not yet known [9]. Supportive organizational climates and training are two organizational supports that may help. A recent review by Colla and colleagues found that healthcare systems providing clinician education and feedback decreased use of low-value procedures [7], but other authors viewed provider education and feedback as insufficient [9, 10, 52]. For example, clinician awareness of guidelines to avoid use of low-value tumor markers did not lead to ending such use except in healthcare systems with cultures that emphasized collective decision making in line with guidelines [52].

The present study has several limitations. The cross-sectional survey limits temporal interpretations of the findings. We do not know how public health practitioners come to perceive a program as one that should continue or end; however, these questions were asked after a detailed definition of EBDM and after sections on EBIs and supports for EBDM. Participants may have different interpretations of the terms “evidence-based” and “effectiveness.” We did not define the term “warranted” in the questions “In your opinion, how often do programs end that should have continued (i.e., end without being warranted)?” and “In your opinion, how often do programs continue that should have ended (i.e., continue without being warranted)?” It is therefore unknown whether participants interpreted “warranted” as evidence-based, which was the context of the survey, or whether they also considered other factors such as champion or policy maker preferences or lack of partner support. We also did not specify a time frame for perceived frequency of inappropriate termination or continuation. There was not space in the survey to ask participants to define what they meant by inappropriate program termination or continuation, which would have furthered understanding and interpretation of survey responses. There could be social desirability bias to under-report continuation of programs that should end, given the national emphasis on EBIs for chronic disease prevention. Still, the present study provides a glimpse into mis-implementation in local public health. A future study will address many of these limitations [3].

In addition to gaining a deeper understanding of organizational and external influences on mis-implementation, future solutions need to be developed for how best to fund public health practice so that effective programs can be sustained. With high staff turnover and a significant portion of the public health workforce retiring [16, 17], succession planning and ongoing EBDM capacity building efforts are needed.

Conclusions

While improvements have occurred since early pilot data were collected in 2013 [2], the results of this study show that both inappropriate termination and inappropriate continuation of programs persist, mainly due to funding-related issues. Loss of funding was the main reported reason for inappropriate termination, and organizational supports were not protective. Policy maker engagement, strong organizational partnerships, and creative mechanisms of support are needed to avoid inappropriate termination. This study shows organizational supports for EBDM may help LHDs avoid inappropriate continuation, though these may be secondary to funding considerations. Public health agency leaders can cultivate organizational climates in which EBDM is valued and supported; ensure staff are trained in skills needed for EBDM, including program implementation and evaluation; provide access to evidence; and stimulate strong partner networks. Further understanding of the local and national influences on mis-implementation among LHDs can help identify financial and other supports so resources can be directed towards programs with the greatest promise of improving health and well-being.

Availability of data and materials

Materials related to this study are available on the website of the Prevention Research Center in St. Louis at Washington University in St. Louis (https://prcstl.wustl.edu/). The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CDC: Centers for Disease Control and Prevention

D.A.R.E.: Drug Abuse Resistance Education program

EBDM: Evidence-based decision making

HIV: Human immunodeficiency virus

ICC: Intra-cluster correlation

LHD: Local health department

NACCHO: National Association of County and City Health Officials

References

  1. Brownson RC, Colditz GA, Proctor EK. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018.

  2. Brownson RC, Allen P, Jacob RR, Harris JK, Duggan K, Hipp PR, et al. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48(5):543–51.

  3. Padek M, Allen P, Erwin PC, Franco M, Hammond RA, Heuberger B, et al. Toward optimal implementation of cancer prevention and control programs in public health: a study protocol on mis-implementation. Implement Sci. 2018;13(1):49.

  4. Norton WE, Kennedy AE, Chambers DA. Studying de-implementation in health: an analysis of funded research grants. Implement Sci. 2017;12(1):144.

  5. Niven DJ, Mrklas KJ, Holodinsky JK, Straus SE, Hemmelgarn BR, Jeffs LP, et al. Towards understanding the de-adoption of low-value clinical practices: a scoping review. BMC Med. 2015;13:255.

  6. Gnjidic D, Elshaug A. De-adoption and its 43 related terms: harmonizing low-value care terminology. BMC Med. 2015;13(1):273.

  7. Colla CH, Mainor AJ, Hargreaves C, Sequist T, Morden N. Interventions aimed at reducing use of low-value health services: a systematic review. Med Care Res Rev. 2017;74(5):507–50.

  8. Montini T, Graham I. Entrenched practices and other biases: unpacking the historical, economic, professional, and social resistance to de-implementation. Implement Sci. 2015;10(1):24.

  9. Voorn VMA, Marang-van de Mheen PJ, van der Hout A, Hofstede SN, So-Osman C, van den Akker-van Marle ME, et al. The effectiveness of a de-implementation strategy to reduce low-value blood management techniques in primary hip and knee arthroplasty: a pragmatic cluster-randomized controlled trial. Implement Sci. 2017;12(1):72.

  10. Voorn VMA, van Bodegom-Vos L, So-Osman C. Towards a systematic approach for (de)implementation of patient blood management strategies. Transfus Med. 2018;28(2):158–67.

  11. Parsons Leigh J, Niven DJ, Boyd JM, Stelfox HT. Developing a framework to guide the de-adoption of low-value clinical practices in acute care medicine: a study protocol. BMC Health Serv Res. 2017;17:1–9.

  12. Harris C, Allen K, Ramsey W, King R, Green S. Sustainability in health care by allocating resources effectively (SHARE) 11: reporting outcomes of an evidence-driven approach to disinvestment in a local healthcare setting. BMC Health Serv Res. 2018;18(1):386.

  13. Rosenberg A, Agiro A, Gottlieb M, Barron J, Brady P, Liu Y, et al. Early trends among seven recommendations from the choosing wisely campaign. JAMA Intern Med. 2015;175(12):1913–20.

  14. Chalmers K, Badgery-Parker T, Pearson SA, Brett J, Scott IA, Elshaug AG. Developing indicators for measuring low-value care: mapping choosing wisely recommendations to hospital data. BMC Res Notes. 2018;11(1):163.

  15. McKay VR, Morshed AB, Brownson RC, Proctor EK, Prusaczyk B. Letting Go: conceptualizing intervention de-implementation in public health and social service settings. Am J Community Psychol. 2018;62:189–202.

  16. Leider JP, Coronado F, Beck AJ, Harper E. Reconciling supply and demand for state and local public health staff in an era of retiring baby boomers. Am J Prev Med. 2018;54(3):334–40.

  17. Leider JP, Harper E, Shon JW, Sellers K, Castrucci BC. Job satisfaction and expected turnover among federal, state, and local public health practitioners. Am J Public Health. 2016;106(10):1782–8.

  18. Hodge LM, Turner KM. Sustained implementation of evidence-based programs in disadvantaged communities: a conceptual framework of supporting factors. Am J Community Psychol. 2016;58(1–2):192–210.

  19. Freedman AM, Kuester SA, Jernigan J. Evaluating public health resources: what happens when funding disappears? Prev Chronic Dis. 2013;10:E190.

  20. Massatti RR, Sweeney HA, Panzano PC, Roth D. The de-adoption of innovative mental health practices (IMHP): why organizations choose not to sustain an IMHP. Admin Pol Ment Health. 2008;35(1–2):50–65.

  21. Furtado KS, Budd EL, Ying X, de Ruyter AJ, Armstrong RL, Pettman TL, et al. Exploring political influences on evidence-based non-communicable disease prevention across four countries. Health Educ Res. 2018;33(2):89–103.

  22. Conte KP, Marie Harvey S, Turner GR. “During early implementation you just muddle through”: factors that impacted a statewide arthritis program’s implementation. Transl Behav Med. 2017;7(4):804–15.

  23. Johns DM, Bayer R, Fairchild AL. Evidence and the politics of deimplementation: the rise and decline of the “counseling and testing” paradigm for HIV prevention at the US Centers for Disease Control and Prevention. Milbank Q. 2016;94(1):126–62.

  24. Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, et al. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8:15.

  25. Scheirer MA. Linking sustainability research to intervention types. Am J Public Health. 2013;103(4):e73–80.

  26. Tabak RG, Duggan K, Smith C, Aisaka K, Moreland-Russell S, Brownson RC. Assessing capacity for sustainability of effective programs and policies in local health departments. J Public Health Manage Pract. 2016;22(2):129–37.

  27. Panzano PC, Sweeney HA, Seffrin B, Massatti R, Knudsen KJ. The assimilation of evidence-based healthcare innovations: a management-based perspective. J Behav Health Serv Res. 2012;39(4):397–416.

  28. Gold R, Bunce AE, Cohen DJ, Hollombe C, Nelson CA, Proctor EK, et al. Reporting on the strategies needed to implement proven interventions: an example from a “real-world” cross-setting implementation study. Mayo Clin Proc. 2016;91(8):1074–83.

  29. Brownson RC, Baker EA, Deshpande AD, Gillespie KN. Evidence-based public health. 3rd ed. New York: Oxford University Press; 2018.

  30. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:3.1–3.27.

  31. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med. 2004;27(5):417–21.

  32. Brownson RC, Allen P, Duggan K, Stamatakis KA, Erwin PC. Fostering more-effective public health by identifying administrative evidence-based practices: a review of the literature. Am J Prev Med. 2012;43(3):309–19.

  33. Allen P, Jacob RR, Lakshman M, Best LA, Bass K, Brownson RC. Lessons learned in promoting evidence-based public health: perspectives from managers in state public health departments. J Community Health. 2018;43(5):856–63.

  34. Aarons GA, Ehrhart MG, Farahnak LR, Sklar M. Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annu Rev Public Health. 2014;35:255–74.

  35. Honore PA, Clarke RL, Mead DM, Menditto SM. Creating financial transparency in public health: examining best practices of system partners. J Public Health Manage Pract. 2007;13(2):121–9.

  36. Honore PA, Simoes EJ, Moonesinghe R, Kirbey HC, Renner M. Applying principles for outcomes-based contracting in a public health program. J Public Health Manage Pract. 2004;10(5):451–7.

  37. Steele CB, Rose JM, Townsend JS, Fonseka J, Richardson LC, Chovnick G. Comprehensive cancer control partners’ use of and attitudes about evidence-based practices. Prev Chronic Dis. 2015;12:E113.

  38. DeGroff A, Carter A, Kenney K, Myles Z, Melillo S, Royalty J, et al. Using evidence-based interventions to improve cancer screening in the National Breast and cervical cancer early detection program. J Public Health Manage Pract. 2016;22(5):442–9.

  39. Public Health Accreditation Board. Guide to National Public Health Department Initial Accreditation. Alexandria, VA; 2015. Available from: http://www.phaboard.org/accreditation-process/.

  40. Public Health Foundation. Modified version of the core competencies for public health professionals. 2017 [cited 2018 01/16/18]. Available from: http://www.phf.org/resourcestools/Pages/Modified_Core_Competencies_for_Public_Health_Professionals.aspx.

  41. Parks RG, Tabak RG, Allen P, Baker EA, Stamatakis KA, Poehler AR, et al. Enhancing evidence-based diabetes and chronic disease control among local health departments: a multi-phase dissemination study with a stepped-wedge cluster randomized trial component. Implement Sci. 2017;12(1):122.

  42. Reis RS, Duggan K, Allen P, Stamatakis KA, Erwin PC, Brownson RC. Developing a tool to assess administrative evidence-based practices in local public health departments. Front Public Health Serv Syst Res. 2014;3(3):Article 2.

  43. Allen P, Sequeira S, Jacob RR, Hino AA, Stamatakis KA, Harris JK, et al. Promoting state health department evidence-based cancer and chronic disease prevention: a multi-phase dissemination study with a cluster randomized trial component. Implement Sci. 2013;8:141.

  44. Mazzucca S, Parks RG, Tabak RG, Allen P, Dobbins M, Stamatakis KA, Brownson RC. Assessing organizational supports for evidence-based decision making in local public health departments in the United States: development and psychometric properties of a new measure. J Public Health Manag Pract. 2019;25(5):454–63.

  45. Shelton RC, Cooper BR, Stirman SW. The sustainability of evidence-based interventions and practices in public health and health care. Annu Rev Public Health. 2018;39:55–76.

  46. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17.

  47. Dreisinger ML, Boland EM, Filler CD, Baker EA, Hessel AS, Brownson RC. Contextual factors influencing readiness for dissemination of obesity prevention programs and policies. Health Educ Res. 2012;27(2):292–306.

  48. Nadeem E, Ringle V. De-adoption of an evidence-based trauma intervention in schools: a retrospective report from an urban school district. Sch Ment Heal. 2016;8(1):132–43.

  49. Beets MW, Glenn Weaver R, Turner-McGrievy G, Saunders RP, Webster CA, Moore JB, et al. Evaluation of a statewide dissemination and implementation of physical activity intervention in afterschool programs: a nonrandomized trial. Transl Behav Med. 2017;7(4):690–701.

  50. Hailemariam M, Bustos T, Montgomery B, Barajas R, Evans LB, Drahota A. Evidence-based intervention sustainability strategies: a systematic review. Implement Sci. 2019;14(1):57.

  51. Jacob RR, Baker EA, Allen P, Dodson EA, Duggan K, Fields R, et al. Training needs and supports for evidence-based decision making among the public health workforce in the United States. BMC Health Serv Res. 2014;14:564.

  52. Hahn EE, Munoz-Plaza C, Wang J, Garcia Delgadillo J, Schottinger JE, Mittman BS, et al. Anxiety, culture, and expectations: oncologist-perceived factors associated with use of nonrecommended serum tumor marker tests for surveillance of early-stage breast cancer. J Oncol Pract. 2017;13(1):e77–90.

Acknowledgements

We acknowledge the data collection and reporting help of Allison Poehler, and administrative support of Linda Dix, Mary Adams, and Cheryl Valko at the Prevention Research Center in St. Louis, Brown School, Washington University in St. Louis. We also acknowledge The Centers for Disease Control and Prevention and the Robert Wood Johnson Foundation, who provided funding for the 2016 National Profile study and the National Association of County and City Health Officials (NACCHO).

Funding

This study is funded by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health (5R01DK109913, 2P30DK092949, and P30DK092950), the National Cancer Institute of the National Institutes of Health (1P50CA244431), and the Centers for Disease Control and Prevention (U48DP006395). The study sponsor was not involved in study design, data collection, analyses, or reporting of findings. The findings and conclusions in this article are those of the authors and do not necessarily represent the official positions of the National Institutes of Health or Centers for Disease Control and Prevention.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization and design: RCB, RGP, MD; Measures development and testing: RCB, RGP, PA, SM; Analyses: HH, MR, SM, PA, RRJ, RCB; Writing: PA, RRJ, RCB, RGP; Manuscript content revisions: RCB, RRJ, SM, PA, RGP, MP, MR, MD, DD; All authors read and approved the final manuscript.

Corresponding author

Correspondence to Peg Allen.

Ethics declarations

Ethics approval and consent to participate

The Human Research Protections Office at Washington University in St. Louis provided institutional review approval (protocol #201603157). All participants gave online consent to the survey.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Allen, P., Jacob, R.R., Parks, R.G. et al. Perspectives on program mis-implementation among U.S. local public health departments. BMC Health Serv Res 20, 258 (2020). https://doi.org/10.1186/s12913-020-05141-5
