Can systematic implementation support improve programme fidelity by improving care providers’ perceptions of implementation factors? A cluster randomized trial
BMC Health Services Research volume 22, Article number: 808 (2022)
Investigations of implementation factors (e.g., collegial support and sense of coherence) are recommended to better understand and address inadequate implementation outcomes. Little is known about the relationship between implementation factors and outcomes, especially in later phases of an implementation effort. The aims of this study were to assess the association between implementation success (measured by programme fidelity) and care providers’ perceptions of implementation factors during an implementation process and to investigate whether these perceptions are affected by systematic implementation support.
Using a cluster-randomized design, mental health clinics were allocated to receive implementation support for one (intervention) but not for another (control) of four evidence-based practices. Programme fidelity and care providers’ perceptions (Implementation Process Assessment Tool questionnaire) were scored for both the intervention and control groups at baseline and at 6, 12 and 18 months. Associations and group differences were tested by means of descriptive statistics (mean, standard deviation and confidence interval) and linear mixed effects analysis.
Across the 33 included mental health centres and wards, we found care providers’ perceptions of a set of implementation factors to be associated with fidelity, although not at baseline. After 18 months of implementation effort, fidelity and care providers’ perceptions were strongly correlated (B (95% CI) = .7 (.2, 1.1), p = .004). Care providers perceived implementation factors more positively when implementation support was provided than when it was not (t(140) = 2.22, p = .028).
Implementation support can facilitate positive perceptions among care providers, which are associated with higher programme fidelity. To improve implementation success, we should pay more attention to care providers’ ongoing perceptions of implementation factors throughout all phases of the implementation effort. Further research is needed to investigate the validity of our findings in other settings and to improve our understanding of ongoing decision-making among care providers, i.e., the mechanisms for sustaining high fidelity of recommended practices.
Implementing new knowledge into existing practices has been a challenging task for health care services for many years. To improve implementation success, we need more guidance on how implementation is accomplished and sustained [2, 3]. Most implementation efforts (i.e., a clinical unit’s activities to implement a new practice in everyday practice) make use of data on implementation outcomes, such as fidelity scores, to monitor the progression and achievement of implementation. Fidelity can be defined as the degree to which an intervention was implemented as described in the original protocol. Fidelity scores thus provide information on the degree to which the new practice is implemented as designed. However, these data cannot tell us why implementation progression is incomplete or how we should address these issues to improve implementation outcomes. The literature suggests several implementation factors, i.e., interrelated, coexisting moderators of implementation outcomes, to explain implementation, such as collegial support and readiness [2, 5]. In mental health services, active leadership to redesign the workflow, measurement and feedback have been found to be important implementation factors. However, the evidence is limited, and there is currently a lack of reliable techniques to effectively address deficits in these factors once they are identified. Evidence on implementation factors that are both associated with implementation outcomes and affected by systematic implementation support is needed to foresee challenges and intervene effectively based on these factors.
Few of the existing, theoretically founded implementation factors have demonstrated reliable associations with implementation outcomes [2, 7,8,9]. There may be several methodological reasons for this apparent lack of association. First, many of the constructs are investigated individually rather than as a set of interrelated factors. The interdependency between factors implies that implementation success is determined by a set of potentially shifting factors. Implementation should therefore be investigated as a complex intervention, i.e., as a set of interacting factors [10, 11]. Second, many investigations are based on the assumption that sufficient preparedness prior to implementation implies success. Examples are organizational readiness for change [8, 12] and clinicians’ attitudes towards evidence-based practice. However, stages-of-change theory, as described by Prochaska, Rogers and others, outlines stages of orientation, insight and acceptance for making and maintaining a change, and these stages are iterative [14,15,16]. Care providers constantly assess pros and cons and may reconsider their involvement in the change. A more process-based approach to readiness, rather than defining it as a pre-state, is recommended. Investigations of implementation factors conducted solely before entering the active implementation phases do not consider the continuous exposure to contextual factors under which care providers decide whether to continue implementation. Third, many factors are investigated through external observation. Examples are the model for understanding success in quality (MUSIQ) and stages of implementation completion (SIC). External observation is not in line with theories that define alteration of behaviour as a social construct, i.e., an experience of those making the change. Examples are Argyris’ double-loop learning and Rogers’ diffusion of innovations.
Both emphasize the change agents’ sense of support, gains and obstacles, rather than objective observation, when explaining change in behaviour [14, 21]. Finally, there are methodological limitations to existing research on implementation factors. Most studies employ retrospective, uncontrolled designs or qualitative investigations [2, 22]. To reveal the relevance of implementation factors for fidelity and other implementation outcomes, and whether systematic implementation support can affect these factors, we need a randomized controlled design with longitudinal data collection during implementation efforts. Randomized controlled designs have been employed to investigate the impact of implementation strategies on patient outcomes, such as the implementation of crisis resolution teams in mental health care. However, we lack systematic investigations of the associations among implementation strategies, facilitating and hindering contextual factors, and implementation outcomes.
We aimed to assess the associations between fidelity and care providers’ perceptions of a set of implementation factors and to investigate whether these perceptions can be affected by systematic implementation support. We investigated these associations and effects at various time points during an implementation effort.
Using a cluster-randomized controlled trial design, specialist mental health clinics were invited to implement recommended practices for the treatment of psychosis. Each clinic chose two of four predefined practices they intended to implement and was randomized to implementation support for one practice (intervention arm) and not for the other (control arm). We prospectively measured health professionals’ perceptions of the implementation effort and the clinics’ degree of implementation every six months over an 18-month period. This was measured both for the practice for which the clinic received implementation support and for the control practice.
Sample and randomization
This study was conducted within specialized mental health care in six Norwegian public health trusts (local health authorities) serving 38% of the country’s population in both rural and urban areas. Thirty-nine specialist mental health care clinics, i.e., 21 community mental health centres and 13 mental health hospital departments (e.g., forensic wards or specialist units for the treatment of psychosis), participated. They were invited in 2016 by the research group to take part in a study to improve compliance with the national clinical guidelines for the treatment of psychosis. Four evidence-based practices described in the guideline were chosen by the project group after a survey among the included mental health clinics. These were practices or programmes for antipsychotic medication management (MED), physical health care (PHYS), family psychoeducation (FAM), and illness management and recovery (IMR). Each clinic chose two of these four practices for implementation. Among the 39 clinics invited, 17 chose antipsychotic medication, 26 chose physical health care, 14 chose family psychoeducation and 21 chose illness management and recovery. Four unique fidelity scales were developed and employed by audit teams to score the degree of compliance with the core components of the practices the clinics had chosen. (The scales are further described under “Measures”.)
By using pairwise randomization, we balanced the number of sites assigned to the intervention or control arms within each practice and ensured that each clinic was represented in both the intervention group and the control group. In the US National Evidence-Based Practice Project, the mean EBP fidelity increased from 2.28 (SD 0.95) at baseline to 3.76 (SD 0.78) at 12 months (see Ruud et al.). We initially assumed a similar mean increase in fidelity over 18 months for the experimental practices and no increase for the control practices. Based on a two-tailed significance level of 5% and 90% power, we estimated that the overall hypothesis would be adequately powered with a minimum of eight sites in each arm for each practice.
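The sample-size reasoning above can be sketched with a standard normal-approximation formula. This is only an illustration, not the trial's actual calculation: the standardized effect below is roughly derived from the cited fidelity figures (increase 3.76 − 2.28 over an assumed pooled SD of about 0.87), and cluster-level design effects are ignored.

```python
from statistics import NormalDist

def sites_per_arm(effect_size, alpha=0.05, power=0.90):
    """Normal-approximation sample size for a two-tailed two-sample comparison."""
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2

# Rough standardized effect from the cited EBP Project figures
# (assumed fidelity increase over an assumed pooled SD of ~0.87).
d = (3.76 - 2.28) / 0.87
print(f"approximate sites per arm: {sites_per_arm(d):.1f}")
```

Under these assumptions the crude approximation lands in the vicinity of the eight sites per arm reported; the trial's actual calculation may have included further corrections.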
To represent each clinic, a purposive sample of care providers, selected by their manager as particularly important for the clinical practice of psychosis treatment, constituted the sample of Implementation Process Assessment Tool (IPAT) responders. A sample of 10–12 care providers per clinic was suggested by the project, but the managers decided how many of their staff met the inclusion criteria, resulting in five to 32 being invited per clinic. We set a cut-off of three or more responders per clinical unit as an inclusion criterion for this study, balancing generalizability to the unit against representation of a variety of clinical units. At the time of the first data collection (baseline), the clinics had recently been informed about which practice they would receive implementation support for, to enable preparations such as deciding who should attend the clinical training.
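The responder cut-off can be expressed as a small filter; the function name and the per-unit data structure below are hypothetical, chosen only to illustrate the rule.

```python
def eligible_units(responses_by_unit, min_responders=3):
    """Keep only clinical units meeting the inclusion cut-off of >= 3 responders."""
    return {unit: responses
            for unit, responses in responses_by_unit.items()
            if len(responses) >= min_responders}

# Hypothetical data: lists of individual IPAT responses per unit
units = {"A": [4.1, 3.8, 3.5], "B": [4.4, 3.9], "C": [3.2, 3.6, 4.0, 3.1]}
print(sorted(eligible_units(units)))  # → ['A', 'C']
```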
The intervention (implementation support)
The intervention bundle was designed to affect care providers’ readiness and engagement and included tools and measures described in the literature [25, 27] as well as more innovative measures. It consisted of clinical training, implementation facilitation, and assessment and feedback every six months on the level of compliance with recommended care and health professionals’ anonymous reports on how they perceived the implementation effort. The clinical training included workshops with lectures and meetings with colleagues from across the country, lasting one day for the medication and physical health practices and 2 × 2 days for the IMR and family psychoeducation practices. For the IMR and family psychoeducation practices, the clinicians were offered additional supervision by a clinical expert on the practice. For IMR, this was offered every week for the first six months and every second week for the next six months; for family psychoeducation, it was offered every other week for six months and then monthly for the next six months. A website with further educational material and tools was made available for the four practices. The implementation facilitation included support and training to enhance positive perceptions of the implementation effort, i.e., planning and using tools and methods such as flow charts, process measurement and team activities to improve shared understanding and accountability. The clinics were offered one meeting with trained implementation facilitators every second week for the first six months and then once a month for a year. The implementation facilitators were trained by implementation experts in the project to recognize and interpret care providers’ perceptions of the factors facilitating and hindering implementation, as measured by the IPAT questionnaire, and to suggest tools and methods to improve these perceptions and thereby make implementation teams successful.
They had relevant clinical experience and training but were not clinical experts on any of the four practices to be implemented. The intervention included assessments of and feedback on the clinic’s degree of compliance with the practice it had chosen and received support for, measured by the fidelity scale. Additionally, feedback on implementation factors, i.e., clinicians’ perceptions of the implementation effort as measured by the IPAT questionnaire, was provided. Fidelity assessments and the IPAT were delivered every six months over the 18-month intervention period. The feedback on IPAT scores was provided to the manager, the implementation team and the implementation facilitator at each clinic shortly after each measurement, together with a short report to help interpret the scores and suggest interventions to improve low ones. Please see Fig. 1. This intervention bundle to support implementation was shown in an earlier paper from this trial to have a positive impact on fidelity scores: the difference between experimental and control sites in mean increase in fidelity score (within a range of 1–5) over 18 months was 0.86 (95% CI 0.21–1.50, p = 0.009), with a corresponding effect size of 0.89 (95% CI 0.43–1.35).
Implementation was assessed using a specific fidelity scale for each of the four evidence-based practices: the Physical Health Care Fidelity Scale, the Antipsychotic Medication Management Fidelity Scale, the Family Psychoeducation Fidelity Scale, and the Illness Management and Recovery Fidelity Scale. Each fidelity scale contains 13–17 items for rating core components of the practice, with detailed criteria for rating each item from 1 (lack of adherence to the evidence-based model) to 5 (full adherence to the model). The total fidelity score is calculated as the unweighted mean of the item scores. The fidelity scales and their psychometric properties are further described in earlier papers [28,29,30,31]. An assessment team of two clinicians trained to use the fidelity scales conducted the scoring. At each clinic, they interviewed clinicians and managers, reviewed documents (e.g., procedures and data on performance) and audited ten randomly selected records of patients receiving psychosis treatment. The unit of analysis for the fidelity score is the clinic.
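Scoring a fidelity scale reduces to an unweighted mean over the item ratings. The minimal sketch below only encodes the 13–17-item, 1–5-rating constraints described above; the example ratings are invented.

```python
def fidelity_score(item_ratings):
    """Total fidelity score: unweighted mean of the core-component item ratings."""
    if not 13 <= len(item_ratings) <= 17:
        raise ValueError("each fidelity scale has 13-17 items")
    if any(not 1 <= r <= 5 for r in item_ratings):
        raise ValueError("items are rated 1 (no adherence) to 5 (full adherence)")
    return sum(item_ratings) / len(item_ratings)

# Invented 14-item assessment for one clinic
ratings = [3, 4, 2, 5, 3, 3, 4, 2, 1, 5, 4, 3, 2, 4]
print(f"clinic fidelity: {fidelity_score(ratings):.2f}")
```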
Health professionals’ perceptions of the implementation factors were measured by the IPAT. The IPAT questionnaire is theoretically based on constructs defined in the Consolidated Framework for Implementation Research, including readiness for change and stages of change [14, 15, 35]. In the theoretical model for the IPAT construct, outer- and inner-setting factors, intervention characteristics and the characteristics of individuals are perceived and interpreted at the individual and team levels. The implementers react to their perceptions by proceeding (or resisting and reversing) on the stages-of-change continuum from insight to acceptance and sustainment. This progression, in turn, affects how the implementers perceive the implementation effort. The 27 items measure progression through the stages of change, individual activities and perceived support, collective readiness and support, and individual perceptions of the intervention. Examples of items are “I have discussed with colleagues how this new practice will work in our unit”, “I believe the patient will benefit from the improvement” and “We who work here agree that we have a potential for improvement in [new practice]”. The questionnaire was distributed to a purposive sample of health professionals central to the provision of the care in question. The Likert scale from 1 (disagree/not true) to 6 (agree/true) for each of the 27 items revealed variation between respondents and between clinics. The internal consistency for the total scale and for each of the four underlying constructs was found to be high (Cronbach’s alpha > .83). The IPAT’s unit of analysis is the clinical unit, represented by the mean score of the responders from the clinic.
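The unit-level IPAT score and the reported internal consistency can be computed as follows. The alpha formula is the standard Cronbach's alpha; the response matrix is invented and much smaller than the real 27-item questionnaire.

```python
from statistics import mean, pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents, each a list of k item scores."""
    k = len(responses[0])
    item_columns = list(zip(*responses))                 # per-item score columns
    item_variance = sum(pvariance(col) for col in item_columns)
    total_variance = pvariance([sum(r) for r in responses])
    return k / (k - 1) * (1 - item_variance / total_variance)

def unit_ipat_score(responses):
    """Unit of analysis is the clinical unit: mean of the responders' item means."""
    return mean(mean(r) for r in responses)

# Invented Likert responses (1-6) from four responders on five items
resp = [[4, 5, 4, 3, 4], [3, 4, 3, 3, 3], [5, 6, 5, 4, 5], [4, 4, 4, 3, 4]]
print(f"unit IPAT: {unit_ipat_score(resp):.2f}, alpha: {cronbach_alpha(resp):.2f}")
```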
Fidelity and IPAT were scored for the practice in which implementation support was provided and for the control practice in each clinic at baseline, 6 months, 12 months and 18 months.
Descriptive methods (mean, percentage and standard deviation) and confidence intervals were used to describe the IPAT responders and the IPAT and fidelity scores. The associations between care providers’ perceptions (IPAT scores) and fidelity scores at each time point were explored with linear mixed effects models for fidelity scores depending on the IPAT, adjusted by group (support/no support) and random intercepts for practice and unit. These models were estimated both with and without the interaction between IPAT and group, i.e., enabling or disabling separate slopes for the groups. The effect of support on the IPAT was assessed using a linear mixed effects model for the IPAT depending on group, time and the interaction between group and time, adjusted by random intercepts for case, practice and unit, with simple contrasts in the time domain. The interaction term in these models estimates the change in differences between groups from baseline. For all analyses, the significance level was set to 0.05. Since the primary analysis considered only the effect of support on the IPAT, no adjustment for multiple testing was necessary.
Statistical analyses were conducted using SPSS 26.0 (IBM Corp., Armonk, NY) and R 4.0, and MATLAB 2020b (MathWorks Inc., Natick, MA) was used for graphics.
Sample, Implementation Process Assessment Tool (IPAT)-scores and fidelity scores
Thirty-three of 39 clinical units were included. Following the inclusion criteria of three or more IPAT responders per unit, we excluded six units at baseline, eight at 6 months, sixteen at 12 months and eighteen at 18 months. Please see Fig. 2.
As shown in Table 1, the number of responses in the total sample decreased from 696 (response rate: 59%) at baseline to 326 (response rate: 30%) at eighteen months. After excluding responses with missing data and those from units with fewer than three respondents, the valid sample consisted of 642 responses representing 33 units in the intervention arm and 32 in the control arm at baseline, decreasing to 298 responses representing 21 clinical units in each arm at eighteen months. The distribution of the IPAT respondents’ gender and profession reflects mental health care units in the present setting. In accordance with the inclusion criterion of “essential clinicians”, most respondents were trained clinical mental health care experts.
The mean IPAT scores per unit at baseline were higher for the intervention group (mean (CI) = 3.66 (3.45, 3.87)) than for the control group (mean (CI) = 3.31 (3.12, 3.49)). The fidelity scores were low for both the intervention arm (mean (CI) = 1.77 (1.49, 2.05)) and the control arm (mean (CI) = 1.82 (1.56, 2.07)). The IPAT and fidelity scores improved during the study period for both groups, yet a significantly larger improvement was seen where implementation support was provided than in the control arm (see Table 2).
Association between IPAT and fidelity
As shown in Fig. 3, health care providers’ perceptions of the implementation effort (IPAT scores) and achieved implementation (fidelity scores) were significantly correlated at 12 and 18 months. The linear mixed effects model, adjusting for unit and practice, revealed a strong association at 18 months (B (CI) = .7 (0.2, 1.1), p = .004). At 12 months, the association was moderate (B (CI) = 0.4 (0.0, 0.7), p = .045). The analysis indicated no correlation, or even a negative one, at baseline; however, this was not statistically significant.
Implementation support’s impact on health professionals’ perceptions of the implementation effort
The health care providers reported higher scores on how they perceived the set of implementation factors when implementation support was provided than when it was not. Figure 4 displays the results of the linear mixed effects analysis, adjusting for time and type of practice implemented, confirming the positive association between implementation support (intervention) and the IPAT score at 18 months (t(140) = 2.22, p = .028). The confidence intervals for the means indicate significantly higher IPAT scores in the intervention groups than in the control groups at all measurement times (see Table 2). Importantly, a difference in IPAT scores was already present at baseline, when the clinical units knew which practice they would receive support for.
In the present study, we found fidelity to be positively correlated with care providers’ perceptions of the implementation and found that these perceptions can be affected by systematic implementation support. The correlations between fidelity and IPAT scores were strongest late in the implementation process and were not present at baseline. The care providers perceived the implementation of the practice for which they received structured implementation support more positively than that of the control practice.
A relationship between how care providers perceive the implementation effort and the implementation outcome, as we found, is in line with well-known theories and frameworks such as readiness for change, stages of change and the CFIR [16, 19, 33, 37, 38]. However, we found the implementation factors and outcomes to be associated later in the implementation process and not at the onset. Our findings contrast with the literature emphasizing the investigation of facilitating factors only prior to an implementation effort [8, 19, 39]. They indicate that a more process-based approach to the readiness concept, as suggested by Stevens, among others, can be beneficial for understanding implementation. The changing IPAT scores during the 18-month period also suggest that health professionals assess the situation during all phases of the implementation to decide whether they will implement and sustain the new practice. This is in line with Nilsen, who argues that the progression of the implementation process, health professionals’ perceptions and their actions mutually affect one another in a context [18, 40]. According to Stevens, the assumption that it is sufficient to establish readiness at baseline fails to recognize the influence of context over time on individuals’ cognitive and affective evaluations and responses. Furthermore, our results are in line with complex intervention theory, which highlights mutual interactions between intervention and context factors, and with evidence for repeated feedback’s impact on maintaining engagement during the implementation process. Our results suggest that care providers’ perceptions and implementation outcomes are mutually affected, creating a positive or negative spiral of improvement success or failure, as described by Øvretveit.
The implementation literature states that systematically developed intervention bundles can affect health professionals’ engagement and implementation outcomes [33, 43]. The intervention in the present study was designed to affect the same construct of care providers’ individual and collective readiness and engagement as measured by the IPAT questionnaire. It combines several recommended interventions, such as continuous measurement and provision of feedback, clinical training, implementation training and team support [1, 43,44,45]. Our finding of a significantly higher score on the IPAT scale when implementation support was provided than when it was not was therefore expected. What we believe to be innovative is the measurement of, and feedback on, how the implementers experience the implementation effort. Responding to the questionnaire may prompt reflection on effective implementation among the responders, and the manager can gain insight into facilitation needs. We combined this with guidance from implementation facilitators trained to support team activities to meet the needs revealed by the IPAT scores. Tailoring implementation support to the present needs of the implementers is emphasized in the literature; however, we cannot state which part of the intervention was the most effective. Interestingly, we also found the IPAT score to be higher for the practices the clinics knew they would receive support for, even before the onset of systematic implementation support. We believe this is best understood through readiness theories recognizing the importance of self-induced initial preparation and collective tuning in to the forthcoming implementation effort.
Few implementation factors have been tested for their associations with implementation outcomes. The review by Chaudoir et al. found only one study that investigated explanations for variation in fidelity. In the present study, we employed a questionnaire based on a combination of theories. Given the complexity of implementation, we believe that the extraction of prominent factors from a set of theories is more likely to explain implementation than scales based on single theories. The significant correlation we found between the IPAT score and fidelity indicates that the questionnaire is well suited for investigating the “why” in implementation.
The present study used a cluster-randomized longitudinal design including a large sample of clinical units followed for 18 months. We gathered data four times on implementation factors and outcomes for both the intervention and control arms. Most investigations of the implementation process have used a retrospective design and/or qualitative methods, implying a risk of bias related to the informants’ and researchers’ recollection of, and insight into, the implementation process [2, 47]. The repeated measurements we conducted every six months enabled an investigation of associations between perceptions and actual change to practice in various phases of the process in a way that qualitative or pre-post designs cannot. The design, in which each clinical unit was assigned to both the intervention and control arms for the two practices it had chosen to implement, reduced the risk of bias. However, there is a risk of spillover effects from the intervention to the control arm. The intervention included quality improvement training and methods expected to be useful in most implementation efforts. The improved fidelity scores in the control arm may indicate that the clinics were to some degree able to adapt these methods for the control practice. If so, it should be seen as a strength of the tested intervention that it can provide clinics with implementation competence that is transferable to other implementation efforts. For research purposes, however, it implies that we may have underestimated the effect of the intervention.
Concerns regarding response rates, statistical power and potential confounding factors should be considered when interpreting the results of the present study. The response rates decreased during the 18-month period. We do not know why the response rates decreased or to what degree the mean score per clinic represents the invited sample, but the distribution of represented professions lends support to its representativeness. Implementing four practices measured by four unique fidelity scales reduces the statistical power compared to investigating only one or two practices, implying a risk of type II error (failing to detect an existing association). The IPAT scores were higher in the intervention group than in the control group at baseline. This may indicate that the clinics gave priority to the practice they knew they would receive support for and were able to take self-induced steps to improve care providers’ readiness before the onset of the defined intervention. The impact of preparatory steps is of great interest for understanding implementation, but it was not defined as part of the intervention in this study and was therefore adjusted for in the analysis. In retrospect, we underestimated the impact of the minor initial preparatory steps we performed. No measures were taken to limit engagement in the control practice, which may have reduced the differences between the intervention and control arms. Further, we did not systematically investigate how each clinic employed the various elements of the implementation support. Given the strong evidence for the clinical practices that were implemented, we believe implementation success (programme fidelity) can constitute a valid intermediate outcome for patient outcomes; however, this was not investigated in the present study.
Each clinic chose two of four practices. Logically, we expect the clinics to have chosen practices they needed to implement; the generally low initial scores indicate that this was the case. Our results and conclusions should therefore be interpreted as applying to the implementation of a practice that the clinical unit believes to be beneficial. We do not know the degree to which our results are valid for situations in which clinical units are obliged to implement a practice they have not chosen themselves. Nor have we investigated the degree to which our results hold for each of the four subgroups of practices. Potential differences between the practices, such as different levels of clinical supervision, were not investigated.
The study was conducted in Norwegian specialist mental health care and concerns the implementation of evidence-based practices for patients suffering from psychosis. The sample represents typical multiprofessional clinical units from urban and rural areas of Norway. The IPAT is based on internationally acknowledged literature and is expected to be valid for similar Western health services. It was developed for health care services in general but has been tested only within mental health care. The generalizability of the present study’s results and conclusions to health services for other patient groups or to other countries is unknown.
The present study indicates that implementation can be facilitated by improving how care providers perceive the implementation situation and that these perceptions can be improved by systematic implementation support. The results suggest placing less emphasis on preparedness prior to implementation and more emphasis on care providers’ continuous assessment of the pros and cons of implementation and on tailoring the support to meet these changing needs. The stronger association between implementation factors and implementation outcomes, together with the larger differences between the intervention and control groups, at 18 months after the onset of implementation support highlights the importance of long-term planning and facilitation to stabilize the new practice. However, further research is needed to understand how we can provide efficient implementation support. New studies should investigate whether some implementation factors are more important than others at various phases of the implementation process and whether our results can be generalized to the implementation of other practices and to other health services.
Availability of data and materials
The datasets generated and analysed during the current study are not publicly available due to ongoing investigations using the same dataset but are available from the corresponding author on reasonable request.
Abbreviations
IPAT: Implementation process assessment tool (questionnaire)
Illness management and recovery (practice)
Family psychoeducation (practice)
Anti-psychotic medication (practice)
Physical health care (practice)
References
Albers B, Shlonsky A, Mildon R, editors. Implementation Science 3.0. Cham: Springer International Publishing; 2020. https://doi.org/10.1007/978-3-030-03874-8.
Nilsen P, Birken SA. Handbook on Implementation Science: Edward Elgar publishing; 2020. https://doi.org/10.4337/9781788975995.
Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci. 2007;2:40. https://doi.org/10.1186/1748-5908-2-40.
Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. https://doi.org/10.1007/s10488-010-0319-7.
Kelly P, Hegarty J, Barry J, Dyer KR, Horgan A. A systematic review of the relationship between staff perceptions of organizational readiness to change and the process of innovation adoption in substance misuse treatment programs. J Subst Abuse Treat. 2017;80:6–25. https://doi.org/10.1016/j.jsat.2017.06.001. Epub 2017 Jun 1.
Torrey WC, Bond GR, McHugo GJ, et al. Evidence-Based Practice Implementation in Community Mental Health Settings: The Relative Importance of Key Domains of Implementation Activity. Admin Pol Ment Health. 2012;39:353–64. https://doi.org/10.1007/s10488-011-0357-9.
Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8:22. Published 2013 Feb 17. https://doi.org/10.1186/1748-5908-8-22.
Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4:38. Published 2009 Jul 14. https://doi.org/10.1186/1748-5908-4-38.
Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, Boynton MH, Halko H. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108. https://doi.org/10.1186/s13012-017-0635-3.
Richards DA, Rahm Hallberg I. Complex Interventions in Health: An overview of research methods. 1st ed. London: Routledge; 2015. https://doi.org/10.4324/9780203794982.
Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655. Published 2008 Sep 29. https://doi.org/10.1136/bmj.a1655.
Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implement Sci. 2014;9:7. Published 2014 Jan 10. https://doi.org/10.1186/1748-5908-9-7.
Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale (EBPAS). Ment Health Serv Res. 2004;6(2):61–74. https://doi.org/10.1023/b:mhsr.0000024351.12294.65.
Rogers EM. Diffusion of innovations. 4th ed. New York: Free Press; 1995.
Grol R, Wensing M. What drives change? Barriers to and incentives for achieving evidence-based practice. Med J Aust. 2004;180(S6):S57–60. https://doi.org/10.5694/j.1326-5377.2004.tb05948.x.
Prochaska JO, DiClemente CC. Transtheoretical therapy: Toward a more integrative model of change. Psychotherapy: Theory, Research & Practice. 1982;19(3):276–88. https://doi.org/10.1037/h0088437.
Stevens GW. Toward a process-based approach of conceptualizing change readiness. J Appl Behav Sci. 2013;49(3):333–60. https://doi.org/10.1177/0021886313475479.
Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189. Published 2019 Mar 25. https://doi.org/10.1186/s12913-019-4015-3.
Kaplan HC, Provost LP, Froehle CM, Margolis PA. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf. 2012;21(1):13–20. https://doi.org/10.1136/bmjqs-2011-000010. Epub 2011 Aug 10.
Chamberlain P, Brown CH, Saldana L. Observational measure of implementation progress in community based settings: the Stages of Implementation Completion (SIC). Implement Sci. 2011;6:116. https://doi.org/10.1186/1748-5908-6-116.
Argyris C. Double loop learning in organizations. Harv Bus Rev. 1977;55(5):115–25.
Pereira VC, Silva SN, Carvalho VKS, Zanghelini F, Barreto JOM. Strategies for the implementation of clinical practice guidelines in public health: an overview of systematic reviews. Health Res Policy Syst. 2022;20(1):13. https://doi.org/10.1186/s12961-022-00815-4.
Lloyd-Evans B, Osborn D, Marston L, et al. The CORE service improvement programme for mental health crisis resolution teams: Results from a cluster-randomised trial. Br J Psychiatry. 2020;216(6):314–22. https://doi.org/10.1192/bjp.2019.21.
Williams NJ, Beidas RS. Annual Research Review: The state of implementation science in child psychology and psychiatry: a review and suggestions to advance the field. J Child Psychol Psychiatry. 2019;60:430–50. https://doi.org/10.1111/jcpp.12960.
McHugo GJ, Drake RE, Whitley R, Bond GR, Campbell K, Rapp CA, Goldman HH, Lutz WJ, Finnerty MT. Fidelity outcomes in the National Implementing Evidence-Based Practices Project. Psychiatr Serv. 2007;58(10):1279–84. https://doi.org/10.1176/ps.2007.58.10.1279.
Ruud T, Drake RE, Šaltytė Benth J, et al. The Effect of Intensive Implementation Support on Fidelity for Four Evidence-Based Psychosis Treatments: A Cluster Randomized Trial. Adm Policy Ment Health. 2021;48(5):909–20. https://doi.org/10.1007/s10488-021-01136-4.
Hempel S, O'Hanlon C, Lim YW, Danz M, Larkin J, Rubenstein L. Spread tools: a systematic review of components, uptake, and effectiveness of quality improvement toolkits. Implement Sci. 2019;14(1):83. https://doi.org/10.1186/s13012-019-0929-8.
Ruud T, Høifødt TS, Hendrick DC, Drake RE, Høye A, Landers M, Heiervang KS, Bond GR. The Physical Health Care Fidelity Scale: Psychometric Properties. Adm Policy Ment Health. 2020;47(6):901–10. https://doi.org/10.1007/s10488-020-01019-0.
Ruud T, Drivenes K, Drake RE, Haaland VØ, Landers M, Stensrud B, Heiervang KS, Tanum L, Bond GR. The Antipsychotic Medication Management Fidelity Scale: Psychometric properties. Adm Policy Ment Health. 2020;47(6):911–9. https://doi.org/10.1007/s10488-020-01018-1.
Joa I, Johannessen JO, Heiervang KS, Sviland AA, Nordin HA, Landers M, Ruud T, Drake RE, Bond GR. The Family Psychoeducation Fidelity Scale: Psychometric Properties. Adm Policy Ment Health. 2020;47(6):894–900. https://doi.org/10.1007/s10488-020-01040-3.
Egeland KM, Heiervang KS, Landers M, Ruud T, Drake RE, Bond GR. Psychometric Properties of a Fidelity Scale for Illness Management and Recovery. Adm Policy Ment Health. 2020;47(6):885–93. https://doi.org/10.1007/s10488-019-00992-5.
Hartveit M, Hovlid E, Nordin MHA, et al. Measuring implementation: development of the implementation process assessment tool (IPAT). BMC Health Serv Res. 2019;19(1):721. Published 2019 Oct 21. https://doi.org/10.1186/s12913-019-4496-0.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. https://doi.org/10.1186/1748-5908-4-50.
Weiner BJ, Amick H, Lee SY. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;65(4):379–436. https://doi.org/10.1177/1077558708317802. Epub 2008 May 29.
Grol R, Wensing M, Eccles M, Davis D. Improving patient care: the implementation of change in health care. 2nd ed. Oxford: John Wiley & Sons; 2013.
R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2013.
Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: a synthesis of the literature. Tampa: University of South Florida; 2005.
Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4:67. https://doi.org/10.1186/1748-5908-4-67.
Egeland KM, Ruud T, Ogden T, Lindstrøm JC, Heiervang KS. Psychometric properties of the Norwegian version of the Evidence-Based Practice Attitude Scale (EBPAS): to measure implementation readiness. Health Res Policy Syst. 2016;14(1):47. https://doi.org/10.1186/s12961-016-0114-3.
Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53. Published 2015 Apr 21. https://doi.org/10.1186/s13012-015-0242-0.
Ovretveit J, Mittman B, Rubenstein L, Ganz DA. Using implementation tools to design and conduct quality improvement projects for faster and more effective improvement. Int J Health Care Qual Assur. 2017;30(8):755–68. https://doi.org/10.1108/IJHCQA-01-2017-0019.
Øvretveit J. Quality health services. Uxbridge: Brunel Institute of Organisation and Social Studies, Brunel University; 1990.
Greenhalgh T. How to implement evidence-based healthcare: John Wiley & Sons; 2017.
Borgert MJ, Goossens A, Dongelmans DA. What are effective strategies for the implementation of care bundles on ICUs: a systematic review. Implement Sci. 2015;10:119. Published 2015 Aug 15. https://doi.org/10.1186/s13012-015-0306-1.
Ritchie MJ, Kirchner JE, Parker LE, et al. Evaluation of an implementation facilitation strategy for settings that experience significant implementation barriers. Implement Sci. 2015;10(Suppl 1):A46. Published 2015 Aug 20. https://doi.org/10.1186/1748-5908-10-S1-A46.
Holt DT, Vardaman JM. Toward a comprehensive understanding of readiness for change: The case for an expanded conceptualization. J Change Manag. 2013;13(1):9–18. https://doi.org/10.1080/14697017.2013.768426.
Brownson RC, Colditz GA, Proctor EK. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018.
Acknowledgements
We thank the clinical units involved, and especially the implementation facilitators, for providing the data and input that enabled this study.
Funding
Open access funding provided by University of Bergen. The study was funded by the South-Eastern Regional Health Authority (Helse Sør-Øst HF) in Norway (Grant No. 2015106).
Ethics approval and consent to participate
The study was approved by The Regional Committee for Medical and Health Research Ethics in Southeastern Norway (Reg. No. REK 2015/2169, first approval 17.12.2015) and by the data protection officer of each health trust. We followed the principles of the Declaration of Helsinki. The study adhered to the CONSORT extension for cluster randomized trials and the StaRI checklist for implementation studies; both completed checklists are submitted together with the manuscript. Written informed consent was obtained from all participants: they received written information about the study and consented by responding to the questionnaire.
Consent for publication
Not applicable.
Competing interests
Torleif Ruud is an Editorial Board Member for BMC Health Services Research. The authors declare that they have no other competing interests. The IPAT questionnaire can be used free of charge.
Cite this article
Hartveit, M., Hovlid, E., Øvretveit, J. et al. Can systematic implementation support improve programme fidelity by improving care providers’ perceptions of implementation factors? A cluster randomized trial. BMC Health Serv Res 22, 808 (2022). https://doi.org/10.1186/s12913-022-08168-y
Keywords
- Mental health
- Implementation process
- Implementation outcomes