
Building implementation capacity (BIC): a longitudinal mixed methods evaluation of a team intervention



Managers and professionals in health and social care are required to implement evidence-based methods. Despite this, they generally lack training in implementation. In clinical settings, implementation is often a team effort, so it calls for team training. The aim of this study was to evaluate the effects of the Building Implementation Capacity (BIC) intervention that targets teams of professionals, including their managers.


A non-randomized design was used, with two intervention cases (each consisting of two groups). The longitudinal, mixed-methods evaluation included pre–post and workshop-evaluation questionnaires, and interviews following Kirkpatrick’s four-level evaluation framework. The intervention was delivered in five workshops, using a systematic implementation method with exercises and practical working materials. To improve transfer of training, the teams’ managers were included. Practical experiences were combined with theoretical knowledge, social interactions, reflections, and peer support.


Overall, the participants were satisfied with the intervention (first level), and all groups increased their self-rated implementation knowledge (second level). The qualitative results indicated that most participants applied what they had learned by enacting new implementation behaviors (third level). However, they only partially applied the implementation method, as they did not use the planned systematic approach. A few changes in organizational results occurred (fourth level).


The intervention had positive effects with regard to the first two levels of the evaluation model; that is, the participants were satisfied with the intervention and improved their knowledge and skills. Some positive changes also occurred on the third level (behaviors) and fourth level (organizational results), but these were not as clear as the results for the first two levels. This highlights that further optimization is needed to improve transfer of training when building teams’ implementation capacity. In addition to the design of such interventions, the organizational context and the participants’ characteristics may also need to be considered to maximize the chances that the learned skills will be successfully transferred into behaviors.



Implementation science has made great contributions to the knowledge about which strategies increase the use of evidence-based methods in health care [1,2,3,4]. For instance, having common goals, understanding the hindrances and facilitators, and continuously measuring process and outcomes are all important activities in implementation. However, these activities require specific knowledge that lies beyond health care managers’ and professionals’ training; consequently, they often lack the skills to implement evidence-based methods [5,6,7]. As a result, implementation often occurs without careful planning or a structured approach, which leads to the use of strategies that do not match organizational needs or that do not include evaluations of the implementation [7, 8]. This emphasizes the need to train managers and professionals in the skills they need to implement more effectively and thus improve care quality and service efficiency.

Most implementation trainings have targeted researchers or doctoral and master’s-level students [9,10,11,12,13,14,15]. Some trainings have targeted specific groups (e.g., health care managers [16,17,18]), whereas others have targeted individual professionals through university courses, webinars [11], or a combination of workshops and webinars [15]. However, the implementation of evidence-based methods typically involves several individuals [19] who depend on the same immediate manager [20,21,22]. Hence, working together as a team is important to ensure that all members understand their roles in the implementation [8]. Consequently, implementation training should target teams rather than individuals [8]. Targeting teams allows for consideration of each team member’s unique role and supports the team’s pursuit of implementation as a collective effort (in addition to individual skill training). Furthermore, team training can create common implementation goals and work processes [8]. Team training can also help team members to efficiently identify local hindrances through their unique knowledge about the local context [23, 24]. In fact, team training has proven more effective than individual training in technology implementation [25]. However, to the authors’ knowledge, there are no prior evaluations of team trainings on the implementation of evidence-based methods.

For a team training to be effective, understanding how teams function and learn is crucial. Team learning is a social process [19] in which members acquire, share, and combine knowledge through shared experiences [26]. Adults tend to learn by reflecting on concrete experiences from everyday practice [27, 28], and adult learning happens in cycles [27]. For instance, after encountering a new experience (e.g., an attempt to implement a new routine), a learner creates meaning for that experience. Through reflection, the experience-specific learning becomes more general, which can result in ideas for new approaches that the learner then applies in practice, leading to new reflections and learning [27]. When applying this cycle to team learning, group discussions can accelerate learning by facilitating the reflection phase [19, 29]. Group discussions can also help members to contribute their unique perspectives on the phenomena at hand [19]. This is particularly true for diverse groups such as multiprofessional groups. Taken together, this evidence suggests that team reflections and discussions regarding concrete, practical workplace experiences are important in team training.

A challenge in any training is the risk that participants will not use the learned skills in their work [30,31,32]. The literature on training transfer suggests that the use of learned skills in practice is influenced by three factors: the participants’ characteristics, their organizational context, and the training intervention design [30, 32]. The individuals’ characteristics include their motivations, abilities, personalities, and existing skills. The post-training organizational context includes the organizational climate, management support and feedback, and the opportunities to put the skills into practice. The intervention design consists of the training’s objectives, content, and pedagogical methods. For those who design trainings, the participants’ individual characteristics are difficult to manipulate, particularly when there is no way to influence who will participate. Instead, to improve the chances of training transfer, intervention developers can focus on the intervention design and, to some extent, on the organizational context [30]. One way to make the organizational context more receptive to the transfer of learned skills into practical behaviors is to include managers in the training. Managers have unique opportunities to create prerequisites for their subordinates to use the acquired skills in the workplace. Another option is to train an entire organization so as to develop a common mental model of implementation and to secure the support of significant organizational stakeholders such as senior managers [33, 34]. With regard to the intervention design, training developers can choose the most suitable pedagogical activities for the objectives. For instance, when the learning goal is to perform an activity (rather than to simply learn how to do it), a suitable pedagogical technique is practice (e.g., role play). 
Other pedagogical methods—for instance, group work (e.g., group reflections), peer learning (e.g., exchange experiences), and testing changes in the workplace—can also be effective [30, 35].

Altogether, the evidence suggests that managers and professionals in health and social care need training to enable effective implementation. Team training may be an effective way to improve managers’ and professionals’ skills and to optimize the transfer of the learned skills into practice. Therefore, this study’s aim is to evaluate the effects of the Building Implementation Capacity (BIC) intervention that targets teams of professionals, including their managers.


A non-randomized intervention study with two intervention cases (each consisting of two groups) was conducted in 2016 and 2017. The longitudinal, mixed-methods evaluation included pre–post and workshop evaluation questionnaires, as well as interviews that were performed following Kirkpatrick’s four-level evaluation framework [36]. The study was carried out in the Stockholm region of Sweden, one of Sweden’s largest health care providers, which serves a population of two million and is responsible for all health care provided by the regional authority, including primary care, acute hospital care, and psychiatric rehabilitation. The study was a collaboration between academic researchers, a local R&D unit, and local health care and social care organizations. The researchers’ role was to design the intervention and the evaluation, and the R&D unit, which employed some of the researchers, was responsible for conducting the BIC intervention.


The target population comprised teams from health care and social care work units in the region. Each team consisted of professionals who provide direct services to patients and clients, together with their immediate manager. The recruitment process differed somewhat between Case 1 and Case 2.

Case 1: In 2014, a local organization that provides elderly care and disability care approached the R&D unit to request help in building its professionals’ implementation capacity. The last author met with this organization’s senior managers on six occasions during 2014 and 2015 to discuss their needs and objectives. The senior managers decided that all work units should participate in the intervention, and an offer to participate was e-mailed to all 26 unit managers. In total, 20 units decided to participate, each with one team. For logistical reasons, the teams were divided into two groups that would participate in spring 2016 (Intervention group 1) or in autumn 2016 (Intervention group 2). The allocation was inspired by a stepped-wedge design in which teams were randomly allocated to Intervention group 1 or Intervention group 2, with the ambition that each team could act as its own control. However, ten of the participating teams expressed specific wishes regarding when to participate in the intervention (e.g., a preference for the spring due to organizational changes occurring in the autumn). These wishes were accommodated for pragmatic reasons; as a result, ten teams selected when to participate, whereas the other teams were randomized according to each unit’s scope of practice. Two teams also wished to change intervention group after the randomization was performed. Thus, the stepped-wedge approach was not fully adhered to when allocating the teams to intervention groups.

Case 2: An invitation to participate in the intervention was distributed to health care and social care organizations in the region through emails to the division managers and to the managers and other professionals on the R&D unit’s mailing list (approximately 600 individuals), as well as a post on the unit’s web site. The recruitment was conducted between September and November 2016 for Intervention group 3 and between May and August 2016 for Intervention group 4. The exact number of organizations reached by the invitation cannot be determined. In Case 2, 19 units (i.e., teams of professionals and their immediate manager) signed up to participate in the intervention. All units that signed up were included, as they met the inclusion criteria described below. However, four units withdrew before the intervention commenced, resulting in 15 units, comprising teams both from within the same organization and from different organizations.

All participating units, regardless of case, were asked to provide 1) a specific implementation to actively work on during the intervention and 2) the team, i.e., the manager and key professionals, who would participate in the intervention.

Intervention cases

Based on the recruitment process, the teams in the two cases differed with regard to the intervention’s embeddedness in the organizational context and the workplace type. In Case 1 (Intervention groups 1 and 2), the intervention was part of the organizational processes and structures, as the organization’s senior managers had initiated it. The organization was also involved in the overall intervention planning, and its representatives participated in the intervention workshops. Thus, the senior managers and other representatives learned the implementation methodology, received information about the implementation’s progress, and had an opportunity to support the teams. In Case 2 (Intervention groups 3 and 4), the teams participated without any such organizational embeddedness.

All participating units were tax-funded public organizations. Case 1 related to elderly care and disability care, and Case 2 related to health care and social care more broadly (e.g., primary care, rehabilitation, and public health, as well as elderly care and disability care). The staff members in elderly and disability care generally had lower education levels than those in health care (e.g., nursing assistants in Case 1 vs. registered nurses and physicians in Case 2). Elderly and disability care workers also often have limited Swedish-language skills, whereas in other parts of health care, good knowledge of Swedish is a legal requirement. Table 1 summarizes Case 1 and Case 2.

Table 1 Description of the intervention cases including the intervention groups

Intervention development

The intervention development started in autumn 2013. A systematic process using both scientific and practical knowledge was applied. First, a literature review was conducted on the use of training initiatives in implementation science. This search provided information regarding the types of implementation approaches with scientific support, including strategies that were tailored to an organization’s barriers [23], behavioral approaches such as the Behavior Change Wheel framework, and models for the stages of implementation (from exploration to sustainment) [37, 38]. To further understand which components and key competencies are important in facilitating behavioral change in the workplace, the literature search included PsycINFO, PubMed, and Web of Science. The resulting knowledge was combined to form the content and desired outcomes of the intervention. The literature search also provided information about scientifically supported training designs such as applying practical work (experience-based learning), providing opportunities for social interactions, and involving both managers and other professionals [27]. Thus, the intervention delivery and pedagogy were based on the theory of experiential learning [27], the research on training transfer [30], and team learning [8, 19].

Second, this scientific knowledge was supplemented with the views of stakeholders from local health care organizations, as collected through nine interviews in September and October 2013. These interviews were then analyzed using content analysis. The results revealed information about the organizations’ needs, the desired training outcomes in terms of competences, the preferred learning activities, and the contextual circumstances (e.g., the practical factors that influence opportunities to participate in training).

Third, an intervention prototype was developed based on the scientific literature and the interviews. Thereafter, as part of a workshop, national experts (researchers, consultants, and practitioners) in implementation, change management, and health care and social care provided feedback on the prototype.

The intervention was pilot-tested twice (in 2014 and 2015) on a total of 24 teams. Each workshop included systematic evaluations involving questionnaires (296 responses in all). A focus-group interview was also performed with selected participants. The materials were continuously revised, mainly to clarify and simplify them.

Intervention content

The intervention was delivered in workshops (see Table 2 for the specific content of each workshop). A six-step systematic implementation method (Fig. 1), with exercises and practical work materials, was used. The steps were as follows: 1) describe the current problems and define the overall goal of the implementation; 2) list all behaviors that could accomplish that goal, then prioritize and specify the key (target) behaviors; 3) systematically analyze barriers to implementation, i.e., barriers to performing the target behaviors; 4) choose implementation activities that fit the needs identified above; 5) test the chosen activities in the workplace; and 6) follow up on the implementation’s results to see if the chosen activities have had the desired effect. Thereafter, depending on the results, the process can continue with revised activities, new target behaviors, or a follow-up on the current activities.

Table 2 Content of each workshop (for Case 1, the content of workshop 1 was delivered in two workshops)
Fig. 1

The six-step implementation methodology in the BIC intervention
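For illustration only, the six-step cycle described above can be sketched as data. The step names are paraphrased from the text, and the loop-back rule is a deliberate simplification of the described branching (revised activities, new target behaviors, or continued follow-up); neither is part of the intervention materials.

```python
# Paraphrased from the six-step method described in the text; the step
# names and the simple loop-back rule are illustrative only.
STEPS = {
    1: "Describe the current problems and define the overall goal",
    2: "List, prioritize, and specify the key (target) behaviors",
    3: "Analyze barriers to performing the target behaviors",
    4: "Choose implementation activities that fit the identified needs",
    5: "Test the chosen activities in the workplace",
    6: "Follow up on results against the desired effect",
}

def next_step(current: int, goal_met: bool = False):
    """Steps 1-5 proceed linearly; after step 6 the cycle restarts
    (here, simplistically, at step 4 with revised activities) unless
    the follow-up shows the goal has been met."""
    if current < 6:
        return current + 1
    return None if goal_met else 4
```

The returned `None` marks the end of the cycle once follow-up confirms the desired effect.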

The workshop format combined short lectures, the units’ practical work, peer support (both within and across teams), collaboration between managers and other professionals, between-workshop assignments, feedback from the workshop leaders, individual and group reflections, and the workshop leaders’ boosting activities (emails, phone calls, and workplace visits). All participating managers were also invited to a separate workshop to clarify their role as implementation leaders, following the iLead intervention outline [16, 39].

The delivery format differed somewhat across the cases. In Case 1 (Intervention groups 1 and 2), the first workshop was divided into two workshops, as suggested by the organization’s senior managers, to match that organization’s specific conditions (e.g., many staff members with low education levels and limited fluency in Swedish). Consequently, Case 1 included five half-day workshops, whereas Case 2 (Intervention groups 3 and 4) included four half-day workshops.


Kirkpatrick’s training-evaluation framework guided the quantitative and qualitative data collection and analysis [36]. This framework differentiates potential effects into four levels: 1) reactions to the training, 2) learning, 3) behavioral changes (i.e., to what degree participants apply what they have learned in practice), and 4) organizational results (Table 3).

Table 3 Evaluation levels (Kirkpatrick, 1996 [36]), measured outcomes, data collection methods and time points

Quantitative and qualitative data were collected to account for the various training effects. The rationale for using mixed methods was to let the methods complement each other and to allow validation of the findings through triangulation. All intervention participants in each team received an evaluation questionnaire after each workshop (Level 1: reactions), as well as a questionnaire given at baseline and after the intervention (Levels 1 and 2: reactions and learning). All participants were also invited to an interview to evaluate their learning (Level 2), their behavioral changes (Level 3), and the organizational results (Level 4). For Case 1, all members of the participating units (including professionals and managers who were not participating in the workshops) received web questionnaires at baseline and after the intervention to capture effects at Level 4. The senior managers involved in Case 1 approved the questionnaire; it was not possible to conduct this step for Case 2. Written informed consent was obtained from all participants.

The interviews were conducted in January and February 2018; thus, the length of follow-up differed across the four groups. All intervention participants received interview invitations by email and/or phone. Forty-three individuals expressed interest in participating, and all of them were selected for the study. Seven interviews were canceled because the participants were unable to attend, resulting in 35 completed interviews with participants from all four intervention groups. The interviewer (the seventh author) recorded the face-to-face interviews and took detailed notes. One informant declined to be recorded, and three others were interviewed as a group (at their request). The interviews lasted between 15 and 40 min and were performed in Swedish. All recorded interviews were then transcribed verbatim. Further information on the data collection can be found in Table 4.

Table 4 Data collection methods and number of respondents for each intervention group


Level 1. Reactions

For each workshop, satisfaction with the intervention was measured with two items addressing relevance and quality [16]. On 10-point Likert scales, participants rated the workshop topics from not at all relevant (1) to very relevant (10) and from very low quality (1) to very high quality (10).

Outcome expectancy was measured with two items [41]. A sample item: “I believe that my participation in the intervention has a positive impact on my work.” The response alternatives ranged from strongly disagree (1) to strongly agree (10) on a Likert scale. The groups’ Cronbach’s alpha values ranged from α = .76 to .96.

Level 2. Learning

A six-item scale measuring the participants’ implementation knowledge was developed based on a learning scale from another intervention [42]. An example item: “I have enough knowledge to formulate and conduct appropriate activities to support implementation work.” The response alternatives ranged from strongly disagree (1) to strongly agree (10) on a Likert scale. The groups’ Cronbach’s alpha values ranged from α = .91 to .93 at baseline and from α = .89 to .94 after the intervention.
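For readers less familiar with the internal-consistency statistic reported above, Cronbach’s alpha can be computed from a respondents-by-items score matrix as follows. This is an illustrative sketch, not the authors’ analysis code (the study’s analyses were run in SPSS), and any data fed to it here would be synthetic.

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]  # number of items (six for the knowledge scale)
    # Sum of the individual item variances (sample variance, ddof=1).
    item_variances = scores.var(axis=0, ddof=1).sum()
    # Variance of each respondent's total scale score.
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

When all items are perfectly correlated the statistic equals 1; values around .9, as reported for this scale, indicate high internal consistency.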

The participants’ perceptions of their learning were also captured in the interviews via questions about their knowledge regarding the implementation method. (See Additional file 1 for the interview guide.)

Level 3. Behavior

The participants’ application of the learned implementation steps was used as a measure of behavioral change and was evaluated through interview questions about their behaviors after the intervention and the impact that the intervention had on those behaviors.

Level 4. Results

Interview questions regarding how the implementation was performed and the impact that it had on work practices were used to evaluate effects at this level. In addition, for Case 1, the work units’ change commitment and change efficacy were measured before and after the intervention via a nine-item scale on organizational readiness for implementing change [42]. The wording was altered to make the items easier to understand (e.g., replacing “implementation” with “improvement”). A sample item: “In my work group, we are committed to work with improvements.” The response alternatives ranged from strongly disagree (1) to strongly agree (10) on a Likert scale. The Cronbach’s alpha values ranged from α = .92 to .95 at baseline and from α = .92 to .94 after the intervention.

Data analyses

Independent-samples t tests were performed to analyze potential baseline differences between the cases and the intervention groups in terms of reactions (Level 1) and learning (Level 2). Paired-samples t tests were performed to analyze whether the intervention groups’ implementation knowledge (Level 2) changed relative to the baseline. To evaluate the organizational results (Level 4) for Intervention groups 1 and 2, paired-samples t tests were performed to analyze whether the work units’ readiness for implementation changed relative to the baseline. All analyses were conducted using SPSS 24.
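As an illustration of the pre–post comparison described above, a paired-samples t statistic can be computed directly from difference scores. This is a minimal sketch with hypothetical 1–10 ratings, not the authors’ SPSS analysis.

```python
import numpy as np

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for pre-post scores."""
    d = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    n = d.size
    # t = mean difference divided by its standard error.
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical baseline and post-intervention knowledge ratings
# for four participants (not data from the study).
pre = [3, 4, 5, 6]
post = [4, 6, 5, 8]
t, df = paired_t(pre, post)
```

The resulting statistic is then compared against the t distribution with `df` degrees of freedom to obtain the two-tailed p value, which SPSS reports directly.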

For the qualitative data, the seventh author used Kirkpatrick’s training-evaluation framework [36] to conduct a deductive thematic analysis [43]. The author read the transcripts iteratively, focusing on the four-level concepts (i.e., reaction, learning, behavior, and organizational results) to code the initial data and to sort the codes into the four levels. The operationalization of the four levels had been defined in a discussion involving all the authors. The third and fourth authors also read the transcripts and discussed the coding with the seventh author.


Level 1: reactions

All intervention groups were satisfied with the intervention and had high expectations, believing that it would have a positive impact on their work (Table 5).

Table 5 Mean values (standard deviations) for groups in case 1 and 2 for reactions to the intervention (level 1)

Level 2: learning

The participants’ implementation knowledge increased relative to the baseline in both cases (intervention groups 1–4) (Table 6).

Table 6 Mean values (standard deviation) for groups in case 1 and 2 for implementation knowledge (level 2) at baseline and post-intervention

The interviews revealed that the participants in both cases learned about the complexity of implementation and became more aware that implementation is time-consuming, as one participant noted:

“It is important not to rush. In thinking about how to work from now on, maybe it is good to let it [the implementation process] take time so that you don’t sit for just half an hour and scribble down a plan ( …) but instead look at it a few times to feel content with the [implementation] plan.” (Case 2)

Participants also stated that the intervention had provided them with a new mind-set regarding implementation. They described how they had learned a structured method for implementation and how specific parts of the method (e.g., considering implementation in terms of behavioral change) were valuable.

“But I have already gotten into this mind-set. I know that I have to start with certain steps; otherwise, I will not reach the end goal.” (Case 1)

Level 3: behavior

The participants in both cases reported that they had changed their behaviors by using one or more of the steps that they had learned during the intervention. They did not necessarily use all of the steps; rather, they selected those that they perceived as most useful in their situations. Few participants reported not using any of the steps, and none reported using all six steps (i.e., the whole implementation process). Some respondents also stated that they planned to use all the steps but had not yet reached the appropriate point in the implementation process to do so.

The first and second steps (“Describe the overall goal” and “Specify a target behavior,” respectively) were the most frequently used steps in both cases. The participants reported that the first step was very useful because it allowed the respondents to think about why the new routine should be implemented and to consider various types of goals when changing a practice, instead of solely focusing on the result of a specific implementation (e.g., putting a guideline into use). Participants reported using the first step as follows:

“I think, in the start, when the problem and goal [are] formulated ( …) is very good and obvious in some way. But I think it’s something we often miss doing; we start thinking in another end. Many times, we start with “What do we want to do?” and then try to motivate why (…) and thus are often too eager to aim for the solution.” (Case 2)

Participants used the second step (“Specify a target behavior”) to obtain clear and tangible behaviors, thereby making the implementation practical and not merely theoretical. They also used this step to establish an agenda (i.e., who does what) and to include all members of the work unit in the implementation. Participants described this step as useful for avoiding a return to past behaviors:

“To specifically concretize—that’s what you do through this method. You concretize exactly what you are going to do: What behavior are we going to change, and what is the overall goal (and to make this clear)? That’s what’s good and maybe even unique about working this way. Otherwise, it often becomes a lot of talk but no action. This [the method] makes it happen.” (Case 2)

The participants in both cases used the third and fourth steps (“Analyze barriers” and “Choose activities,” respectively). They frequently mentioned the motivational function of the implementation strategies. The participants stated that members of their units need to understand why the implementation is necessary and also emphasized the importance of trust and tutorials, as one of the participants noted:

“Maybe to concretize and create some sort of familiarity with why—not why we make changes, but what the purpose of these changes are. That it’s not someone at the municipal town hall that has just decided something that is to be carried out. But that there actually is a purpose.” (Case 2)

Only the participants in Case 1 described using the fifth and sixth steps (“Test activities” and “Follow-up behaviors,” respectively). They explained that, before the intervention, they had had trouble maintaining new behaviors and would often fall back into previous routines. After the intervention, most participants stated that they needed to maintain new routines more actively, as it took some time before an implementation could be sustained in practice. They highlighted the value of continuous follow-up:

“And the repetition, to go back and follow up, follow up, and follow up. ( …) I personally follow up and ensure that there are checklists that are easy to follow up. Have you signed it, have you not, and why? And to keep reminding about following up.” (Case 1)

Furthermore, in Case 1, only those in authority positions (e.g., the unit managers and others in supervisory roles) applied the steps. The subordinates (e.g., the nursing assistants) usually were not involved in the implementation planning and thus were not in a position to directly use the steps. Instead, they reported changes in their behaviors when their supervisors introduced new routines. By contrast, Case 2 included examples of both managers and other professionals applying the steps.

Level 4: organizational results

Work units’ readiness for implementation as measured in Case 1 increased relative to the baseline for Intervention group 1, whereas no differences were observed for units in Intervention group 2 (Table 7).

Table 7 Mean values (standard deviation) for group 1 and 2 in Case 1 for readiness for implementation (level 4) at baseline and post-intervention

Some of the interview respondents in Case 1 reported changing their everyday routines as a result of the learned skills. These changes could have positive effects on both the professionals and their clients. For example, in one unit, improving its clients’ meals led to more enjoyment and less distress among the clients. This improvement to the meals even spread to other units in the organization. The participants in Case 2 did not report any examples of learned behaviors influencing their practice. They stated that it was too early for such examples, as their implementation cases from the intervention were still in the planning phase.


Positive effects were found for the first two levels of Kirkpatrick’s evaluation model [36]; that is, the participants were satisfied with the intervention and showed improved knowledge and skills relative to the baseline. Some positive changes also occurred at Level 3 (behaviors), even though none of the participating units had systematically applied all six implementation steps. Regarding Level 4 (organizational results), in Case 1, the work units’ readiness to implement significantly increased. The interviews revealed positive organizational results (i.e., changes in work practices) for Case 1, but not for Case 2.

The training intervention’s goal was to create good opportunities for the transfer of learned skills into work practice. One factor that influences training transfer—intervention design [30, 32]—was carefully considered in the intervention’s development. The content and format followed the recommendations from the literature on experience-based learning and training transfer [30]. Training was provided to teams rather than to individuals; the teams’ managers participated; and the participants’ practical experiences were combined with their theoretical knowledge, social interactions, reflections, and peer support [30,31,32]. Despite these efforts, challenges remained with regard to transferring the learned skills into applied behaviors. This suggests a need to more carefully consider the other two factors that influence training transfer: the organizational context and the participants’ characteristics.

However, influencing the organizational context can be difficult, as it is primarily shaped by an organization’s own processes and structures rather than by researchers or intervention developers. Impacting this context requires partnerships between interventionists and organizational stakeholders, such as providing clear indications of the impacts of contextual changes and the actions that the organization can take. Case 1 included some characteristics of this type of partnership, and some contextual factors were considered in the intervention design (e.g., how a participant’s limited language skills would be handled). In a similar manner, this intervention involved changes in the organizational context to improve transfer (e.g., senior managers following up on the units’ implementation processes). Furthermore, in Case 1, the intervention also affected the organizational context by increasing readiness for implementation. Taken together, the results still suggest that the measures taken to consider the organizational context were not sufficient. The robust evidence regarding the importance of local context to implementation suggests that the organizations themselves need to make sure that the context is receptive to transfer of training [40, 44, 45]. The irony is that the parts of the current intervention in which the units were specifically asked to consider their local context (and make necessary changes) were the parts that the participating units did not undertake. This illustrates how difficult it is to establish changes in an organizational context within a time-limited training intervention and suggests that, to obtain results, researchers need to spend more time and effort understanding the crucial impact of context.

Training interventionists have seldom considered the third component of training transfer—the participants’ characteristics (e.g., their motivations and skills). In this intervention, the managers chose the professionals who were involved in the implementation case, but those professionals’ personal features were not specified. Some authors [19] have suggested that individual features should be considered and that the training participants should be carefully selected. Based on the results of this study, we concur with those recommendations. For instance, in Case 1, some participants were actually not expected to directly apply the skills that they had learned in their work practices. To understand this, consider that the selection of participants in a training is commonly based on the general assumptions that competence development is good for all staff members and that everyone should receive an equal amount of training [46]. The practical implications are 1) that the individuals who are best suited to a specific implementation case should be selected to participate and 2) that the organizational routines for how these individuals will use their acquired knowledge at work should be defined before the training.

A theme that is related to the individual characteristics is the functioning of the selected team members during the training. During training, work teams’ existing group dynamics (e.g., conflicts and power structures) are often present. Conflict related to these dynamics can have a detrimental impact on the team’s learning. In the current intervention, the teams rated the quality of their teamwork after each workshop. The goal was to identify which teams would need additional support during the intervention to establish effective work routines. To improve the design of training interventions, group dynamics should be assessed before selecting the individuals on the team (i.e., before the intervention starts). Some teams could benefit from adding a supplementary intervention [39] to the main intervention so as to improve the effects of the implementation training.

Methodological considerations

One strength of this study was that the participating units represented distinct settings and worked with diverse implementation cases. This variety strengthens the external validity of the findings. The internal validity was strengthened by the mixed-methods evaluation design, which allowed for data triangulation. Nevertheless, this study’s limitations should be considered when interpreting its results. First, the evaluation was based on self-reported data. Participants’ behavioral changes were evaluated through interviews, which implies a risk that the participants adapted their responses to ensure social desirability [47]. To mitigate this risk, the interviewer was not involved in the intervention. The respondents provided examples of the problems that they experienced when applying what they had learned in the intervention, which suggests that they felt comfortable expressing more than just socially desired answers. Nevertheless, observation would have been a more objective and valid way of evaluating behavioral changes; however, given the many participating units and this method’s time and cost requirements, observation could not be used in this study. Second, this study lacked a control group, which made it difficult to determine the magnitude of the changes or to identify whether the changes were solely due to the intervention. To overcome this, we aimed to use a stepped-wedge design; however, for practical reasons, this design was not fully adhered to. Furthermore, as is always the case in evaluations, the chosen time points may have influenced the results, as effects occur at different points in time. The interviews occurred at the same time point even though the groups and cases participated in the intervention at different time points. This may explain why Case 2 showed no evidence of organizational results (Level 4), which take the longest time to occur.
Another explanation may relate to the different methods used to assess organizational results in the two cases. In Case 1, questionnaire data from employees other than the intervention participants showed positive organizational results after the intervention. It was not possible to collect such data from employees in Case 2. Thus, some organizational results may have gone undetected.
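The stepped-wedge design mentioned above provides control conditions by staggering when each group crosses over into the intervention. A schematic of such a rollout (with illustrative group and period counts, not the study's actual schedule) can be generated as:

```python
def stepped_wedge(n_groups, n_periods):
    """Rollout matrix: rows are groups, columns are measurement periods;
    0 = control, 1 = intervention. Group g crosses over at period g + 1,
    so every group is measured at least once under each condition."""
    return [[1 if period >= group + 1 else 0 for period in range(n_periods)]
            for group in range(n_groups)]

schedule = stepped_wedge(3, 4)
# schedule[0] -> [0, 1, 1, 1]; schedule[2] -> [0, 0, 0, 1]
```

Because groups that have not yet crossed over serve as concurrent controls for those that have, a fully adhered-to schedule of this kind would have mitigated the no-control-group limitation noted above.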


Conclusions

Although the intervention showed positive effects in terms of the participants’ satisfaction and knowledge, it showed mixed effects on implementation behaviors and organizational results. These findings suggest that, when designing an intervention to build teams’ implementation capacity, researchers should consider not only the design of the intervention but also the organizational context and the participants’ characteristics, as this maximizes the chances of successfully transferring learned skills into behaviors.



Keywords: Building implementation capacity, Research and development


  1. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.

  2. Glasgow RE, Vinson C, Chambers D, et al. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102:1274–81.

  3. Sales A, Smith J, Curran G, et al. Models, strategies, and tools. Theory in implementing evidence-based findings into health care practice. J Gen Intern Med. 2006;21(Suppl 2):S43–9.

  4. Flottorp SA, Oxman AD, Krause J, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:35.

  5. Batalden M, Batalden P, Margolis P, et al. Coproduction of healthcare service. BMJ Qual Saf. 2015; Epub ahead of print.

  6. Ubbink DT, Vermeulen H, Knops AM, et al. Implementation of evidence-based practice: outside the box, throughout the hospital. Neth J Med. 2011;69:87–94.

  7. Mosson R, Hasson H, Wallin L, et al. Exploring the role of line managers in implementing evidence-based practice in social services and older people care. Br J Soc Work. 2017;47:542–60.

  8. Gifford W, Davies B, Tourangeau A, et al. Developing team leadership to facilitate guideline utilization: planning and evaluating a 3-month intervention strategy. J Nurs Manag. 2011;19:121–32.

  9. Straus SE, Sales A, Wensing M, et al. Education and training for implementation science: our interest in manuscripts describing education and training materials. Implement Sci. 2015;10:136.

  10. Meissner HI, Glasgow RE, Vinson CA, et al. The U.S. training institute for dissemination and implementation research in health. Implement Sci. 2013;8:12.

  11. Chambers DA, Proctor EK, Brownson RC, et al. Mapping training needs for dissemination and implementation research: lessons from a synthesis of existing D&I research training programs. Transl Behav Med. 2017;7:593–601.

  12. Tabak RG, Padek MM, Kerner JF, et al. Dissemination and implementation science training needs: insights from practitioners and researchers. Am J Prev Med. 2017;52:S322–9.

  13. Carlfjord S, Roback K, Nilsen P. Five years’ experience of an annual course on implementation science: an evaluation among course participants. Implement Sci. 2017;12:101.

  14. Proctor EK, Landsverk J, Baumann AA, et al. The implementation research institute: training mental health implementation researchers in the United States. Implement Sci. 2013;8:105.

  15. Moore JE, Rashid S, Park JS, et al. Longitudinal evaluation of a course to build core competencies in implementation practice. Implement Sci. 2018;13:106.

  16. Richter A, von Thiele Schwarz U, Lornudd C, et al. iLead—a transformational leadership intervention to train healthcare managers’ implementation leadership. Implement Sci. 2016;11:1–13.

  17. Aarons GA, Ehrhart MG, Farahnak LR, et al. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. 2015;10:11.

  18. Champagne F, Lemieux-Charles L, Duranceau M-F, et al. Organizational impact of evidence-informed decision making training initiatives: a case study comparison of two approaches. Implement Sci. 2014;9:53.

  19. Greenhalgh T. How to implement evidence-based healthcare. Oxford: Wiley Blackwell; 2018.

  20. Gifford W, Davies B, Edwards N, et al. Managerial leadership for nurses’ use of research evidence: an integrative review of the literature. Worldviews Evid-Based Nurs. 2007;4:126–45.

  21. Sandström B, Borglin G, Nilsson R, et al. Promoting the implementation of evidence-based practice: a literature review focusing on the role of nursing leadership. Worldviews Evid-Based Nurs. 2011;8:212–23.

  22. Ovretveit J. Improvement leaders: what do they and should they do? A summary of a review of research. Qual Saf Health Care. 2010;19:490–2.

  23. Grimshaw JM, Eccles MP, Lavis JN, et al. Knowledge translation of research findings. Implement Sci. 2012;7:50.

  24. Shaw B, Cheater F, Baker R, et al. Tailored interventions to overcome identified barriers to change: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2005;(3):CD005470.

  25. Edmondson AC, Bohmer RM, Pisano GP. Disrupted routines: team learning and new technology implementation in hospitals. Adm Sci Q. 2001;46:685–716.

  26. Argote L, Gruenfeld D, Naquin C. Group learning in organizations. In: Turner ME, editor. Groups at work: advances in theory and research. Mahwah: Lawrence Erlbaum Associates; 2001. p. 369–411.

  27. Kolb DA. Experiential learning: experience as the source of learning and development. Englewood Cliffs: Prentice Hall; 1984.

  28. Schön D. The reflective practitioner: how professionals think in action. New York: Basic Books; 1984.

  29. Elwyn G, Greenhalgh T, Macfarlane F. Groups: a guide to small group work in healthcare, management, education and research. Boca Raton: Radcliffe Publishing; 2001.

  30. Blume BD, Ford JK, Baldwin TT, et al. Transfer of training: a meta-analytic review. J Manage. 2010;36:1065–105.

  31. Grossman R, Salas E. The transfer of training: what really matters. Int J Train Dev. 2011;15:103–20.

  32. Baldwin TT, Ford JK. Transfer of training: a review and directions for future research. Pers Psychol. 1988;41:63–105.

  33. Yukl G. Leading organizational learning: reflections on theory and research. Leadersh Q. 2009;20:49–53.

  34. Salas E, Almeida SA, Salisbury M, et al. What are the critical success factors for team training in health care? Jt Comm J Qual Patient Saf. 2009;35:398–405.

  35. Salas E, Tannenbaum SI, Kraiger K, et al. The science of training and development in organizations: what matters in practice. Psychol Sci Public Interes. 2012;13:74–101.

  36. Kirkpatrick D. Great ideas revisited. Train Dev. 1996;50:54–60.

  37. Michie S, Atkins L, West R. The behaviour change wheel: a guide to designing interventions. London: Silverback Publishing; 2014.

  38. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38:4–23.

  39. Hasson H, Lornudd C, von Thiele Schwarz U, Richter A. Intervention to support senior management teams in organizational interventions. In: Nielsen K, Noblet A, editors. Implementing and evaluating organizational interventions. New York: Taylor and Francis; 2018.

  40. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  41. Fridrich A, Jenny GJ, Bauer GF. Outcome expectancy as a process indicator in comprehensive worksite stress management interventions. Int J Stress Manag. 2016;23:1–22.

  42. Augustsson H, Richter A, Hasson H, et al. The need for dual openness to change. J Appl Behav Sci. 2017;53(3):349–68.

  43. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

  44. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629.

  45. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41:327–50.

  46. Augustsson H, Tornquist A, Hasson H. Challenges in transferring individual learning to organizational learning in the residential care of older people. J Health Organ Manag. 2013;27:390–408.

  47. Crowne D, Marlowe D. The approval motive: studies in evaluative dependence. New York: Wiley; 1964.



Acknowledgements

The authors would like to thank all the respondents.


Funding

This work was financially supported by Stockholm County Council (ALF project). The funder had no role in the study design, data collection, analysis, interpretation of data, or writing of the manuscript.

Availability of data and materials

The datasets used in the current study are available from the corresponding author on reasonable request.

Author information




Contributions

RM, HH, AR, HA and UvTS designed the project. HH secured funding for the project and was responsible for the ethical application. RM drafted the first version of the manuscript with input from all authors and conducted the statistical analyses. MG performed the interviews and the analyses together with AB and MÅ. All authors discussed the draft on several occasions, revised it, and approved the final manuscript.

Corresponding author

Correspondence to Henna Hasson.

Ethics declarations

Ethics approval and consent to participate

The project was reviewed by the Regional Ethical Review Board in Stockholm (Ref no. 2017/2211–31/5) and found not to require ethical approval. Nevertheless, all participants were treated in accordance with ethical guidelines. Written informed consent was obtained from all study participants. Individuals who declined to participate were not included in the data set used for the analyses.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

Interview guide. The interview guide used in the study (DOCX 18 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Mosson, R., Augustsson, H., Bäck, A. et al. Building implementation capacity (BIC): a longitudinal mixed methods evaluation of a team intervention. BMC Health Serv Res 19, 287 (2019).
