
Qualitative approaches to use of the RE-AIM framework: rationale and methods

Abstract

Background

There have been over 430 publications using the RE-AIM model for planning and evaluation of health programs and policies, as well as numerous applications of the model in grant proposals and national programs. Full use of the model includes use of qualitative methods to understand why and how results were obtained on different RE-AIM dimensions; however, recent reviews have revealed that qualitative methods have been used infrequently. Having quantitative and qualitative methods and results iteratively inform each other should enhance understanding and lessons learned.

Methods

Because there have been few published examples of qualitative approaches and methods using RE-AIM for planning or assessment and no guidance on how qualitative approaches can inform these processes, we provide guidance on qualitative methods to address the RE-AIM model and its various dimensions. The intended audience is researchers interested in applying RE-AIM or similar implementation models, but the methods discussed should also be relevant to those in community or clinical settings.

Results

We present directions for, examples of, and guidance on how qualitative methods can be used to address each of the five RE-AIM dimensions. Formative qualitative methods can be helpful in planning interventions and designing for dissemination. Summative qualitative methods are useful when used in an iterative, mixed methods approach for understanding how and why different patterns of results occur.

Conclusions

In summary, qualitative and mixed methods approaches to RE-AIM help researchers understand complex situations and results, why and how outcomes were obtained, and contextual factors not easily assessed with quantitative measures.


Background

The RE-AIM model was developed in 1999 in response to the need for a framework to evaluate the potential or actual public health and population impact of interventions. [1] RE-AIM includes five dimensions to call attention to the importance of measuring not only a traditional clinical outcome (i.e., effectiveness), but also implementation outcomes that are less frequently assessed yet critical to producing broad impact. The RE-AIM dimensions (reach, effectiveness, adoption, implementation, and maintenance) are outlined in Table 1. Since its development, there has been significant uptake of RE-AIM as a planning and evaluation framework. [2] It has been used in over 430 publications [3, 4] that include both funded research and local and national programs. [5,6,7] There have been many more uses in proposed studies, although these are difficult to track. A recent review found that between 2000 and 2016, RE-AIM was the implementation science model used most frequently in grant applications to the NIH and the CDC. [8]

Table 1 RE-AIM Dimensions and Related Questions for Clinicians and Community Leaders

RE-AIM has been used for both planning [2] and evaluation. [3, 9] Although the model is somewhat intuitive, full use of it requires in-depth information and understanding of multi-level and contextual factors. [10] RE-AIM is one way to approach the “ultimate use” question: which intervention (program or policy) components, conducted under what conditions, in what settings, by which agents, and for which populations (and subgroups), are most effective in producing which outcomes, at what cost, and under what circumstances? [11]

Although not always explicitly stated, full use of the RE-AIM model [10] necessitates both qualitative and quantitative methods to understand the reasons for results on each dimension. However, the published literature shows that most studies rely heavily on quantitative methods and lack qualitative contributions. [3, 10] In the review of published empirical studies on RE-AIM by Gaglio and colleagues, among publications that reported on a given RE-AIM dimension, only 3.5–15.6% of articles (median of 7%) included a qualitative measure. [3] A separate, more recent review by Harden and colleagues of behavioral intervention studies using RE-AIM revealed use of qualitative methods in 6–24% of studies across dimensions (median of 15%). [12]

Qualitative measures are of value in RE-AIM (and other planning and evaluation approaches) for several reasons. First, some questions simply cannot be answered with quantitative data: pulling data from an electronic medical record (EMR), analyzing a survey scale, or counting events does not answer them, or the data are too expensive to collect feasibly. Second, qualitative data provide answers to not just what happened, but why and how. They can illuminate patterns of results and why and how results were obtained for various outcomes, including unintended effects. Third, they provide diverse and multiple assessment methods that lend convergent validity to quantitative results. Fourth, they can engage participants in a collaborative manner and incorporate their input into a program or policy in a way that quantitative approaches do not. Finally, as with many research questions and evaluation approaches, having both quantitative and qualitative methods for RE-AIM dimensions allows these methods to iteratively inform each other. This should enhance understanding and lessons learned, ultimately leading to better dissemination of evidence-based approaches into practice.

The purpose of this article is to summarize and recommend qualitative approaches to address the RE-AIM model and its various dimensions. We provide guidance for researchers and community groups that wish to use qualitative methods in their RE-AIM applications.

Methods

Applying Qualitative Methods to RE-AIM Dimensions

Qualitative research provides meaning and understanding and is used in both exploratory and explanatory research, in contrast to quantitative methods, which rely on numbers and address statistical outcomes. In general, qualitative methods help explain how and why results occur on individual RE-AIM dimensions, or why patterns of results across dimensions (e.g., high reach but low effectiveness) emerge. A wide variety of qualitative techniques and approaches can be used to address RE-AIM issues. As the focus of this paper is not to provide a comprehensive description of qualitative data collection and analysis methods, we refer the reader to several excellent texts. [13,14,15,16] Rather than relying on one strategy, the methods selected should be tailored to the setting, research questions, and resources available. Table 1 provides simple translational questions that can be used to inquire about RE-AIM issues by and with clinicians and community members. [17] In summary, a variety of methods are conducive to qualitative exploration of RE-AIM dimensions, including interviews, observations, focus groups, photovoice, digital storytelling, and ethnography. Analysis methods are also varied and depend on the research or evaluation issue and question; choices include grounded theory, thematic analysis, matrix analysis, and immersion/crystallization. Below we describe how qualitative methods can be used to address each RE-AIM dimension and the key issues involved. Table 2 provides examples of questions and possible qualitative methods for each RE-AIM dimension.

Table 2 RE-AIM Elements and Qualitative Data Questions and Examples
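
As one concrete illustration of the matrix-style analysis mentioned above, the minimal sketch below tallies coded interview excerpts by theme and RE-AIM dimension. The excerpts and code labels are hypothetical; real projects would typically export codes from qualitative analysis software, so this shows only the idea of a theme-by-dimension matrix, not a prescribed workflow.

```python
from collections import Counter

# Hypothetical coded excerpts: (RE-AIM dimension, theme assigned by the coding team).
coded_excerpts = [
    ("reach", "transportation barriers"),
    ("reach", "trust in providers"),
    ("adoption", "fit with clinic priorities"),
    ("implementation", "staff workload"),
    ("implementation", "staff workload"),
]

# Build the theme-by-dimension matrix as counts of excerpts.
matrix = Counter(coded_excerpts)
for (dimension, theme), n in sorted(matrix.items()):
    print(f"{dimension:>15} | {theme:<28} | {n} excerpt(s)")
```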

Reach

The standard means of assessing reach is to describe the number and percentage of eligible individuals who participate in a desired initiative. From a qualitative perspective, the key issues concerning reach are understanding why people accept or decline participation and describing characteristics of participants versus non-participants that are not available from quantitative data or records. For example, if the desired goal is to reach all patients with diabetes and a hemoglobin A1c level over 8, the quantitative measure of reach would be the number or percentage participating in the initiative out of the total eligible. Knowing that 25% of patients are participating indicates what degree of penetration the initiative achieved, but it does not help to understand the circumstances and characteristics of the reached population that distinguish it from non-participants. Often, quantitative approaches have been used to describe reach in terms of the demographics of the reached versus non-reached population. For example, perhaps the reach was 25%, but three quarters of the participants were female, Caucasian, and privately insured; the program thus largely misses Medicaid-insured patients of both genders. These data represent identifiable characteristics of participants that provide a more comprehensive picture of who is missing. However, there are often characteristics that influence participation versus non-participation that are not routinely collected, are not readily available from EMRs or other databases, or are not easy to quantify. Perhaps reach is limited by factors such as lack of trust in health care providers, disinterest in taking medication, or social-determinant barriers faced by non-participants such as lack of transportation or family support. These factors are difficult to ascertain without qualitative inquiry. To thoroughly understand reach, it is often necessary to conduct more in-depth, qualitative work to identify the root causes of suboptimal reach.
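
The quantitative starting point for this example can be computed in a few lines. The sketch below uses a small, hypothetical patient registry; the column names and values are assumptions for illustration, not a standard EMR schema.

```python
import pandas as pd

# Hypothetical registry of patients with diabetes.
patients = pd.DataFrame({
    "a1c":       [8.4, 9.1, 8.7, 10.2, 8.9, 9.5, 8.2, 9.8],
    "enrolled":  [True, False, True, False, False, False, False, False],
    "sex":       ["F", "M", "F", "M", "F", "F", "M", "F"],
    "insurance": ["private", "medicaid", "private", "medicaid",
                  "medicaid", "private", "medicaid", "medicaid"],
})

# Denominator: everyone meeting the eligibility criterion (A1c over 8).
eligible = patients[patients["a1c"] > 8.0]
print(f"Reach: {eligible['enrolled'].mean():.0%} of eligible patients")

# Who is being missed? Compare the insurance mix of participants vs. non-participants.
print(eligible.groupby("enrolled")["insurance"].value_counts(normalize=True))
```

As the text notes, such counts and breakdowns end where the more interesting questions (trust, transportation, family support) begin.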

Effectiveness

Effectiveness focuses on important clinical or behavioral outcomes of interest and is most frequently summarized quantitatively. Key qualitative issues relevant to effectiveness are understanding whether various stakeholders find the effectiveness findings meaningful, why interventions produce different patterns of results across different RE-AIM dimensions, reasons for differences in results across subgroups, and why unanticipated negative results are observed. Continuing the example above, with effectiveness defined as lowering hemoglobin A1c for patients with diabetes, assume that 50% were able to lower their A1c to under 8 and that the mean A1c reduction from the intervention was 0.8%. However, this is just the beginning of an answer to the question: “was intervention X effective?”
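
For concreteness, the two effectiveness summaries in this running example can be computed as below from paired baseline and follow-up values; the numbers are hypothetical and chosen only to reproduce the figures in the text, not study data.

```python
import pandas as pd

# Hypothetical paired A1c measurements for enrolled participants.
outcomes = pd.DataFrame({
    "a1c_baseline": [8.4, 9.1, 8.7, 10.2],
    "a1c_followup": [7.6, 8.5, 7.9, 9.2],
})

reduction = outcomes["a1c_baseline"] - outcomes["a1c_followup"]
at_goal = (outcomes["a1c_followup"] < 8.0).mean()

print(f"Mean A1c reduction: {reduction.mean():.1f} points")   # 0.8
print(f"Reached A1c < 8: {at_goal:.0%} of participants")      # 50%
```

These summaries match the example (a 0.8-point mean reduction; half of participants at goal) but, as the following paragraphs argue, say nothing about whether stakeholders find the change meaningful.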

Qualitative methods can contribute to assessing effectiveness in several ways. The first is understanding whether quantitative effectiveness findings are meaningful to various stakeholders (e.g., clinicians, patients). By meaningful we mean two things: first, whether the measured outcome is valuable to the stakeholder (i.e., it provides information that helps them make decisions and/or achieve their respective goals); and second, whether the actual quantitative change is large enough to make the intervention worthwhile. In other words, which outcomes are of value to which stakeholders, and how did the intervention fare on these factors? In our scenario, the question might be: was an average A1c reduction of 0.8% meaningful for clinicians and participants?

Corollary qualitative questions include: “Is reduction in A1c levels an appropriate indicator of an effective intervention for managing diabetes for the clinician and the patient? Does this provide them with information to decide what to do next and whether they are approaching or have achieved their personal (i.e., patient) or practice (i.e., clinician) goals?” Second, is the amount of change (a 0.8% A1c reduction) sufficient to make the intervention worthwhile for routine use? Does this change lead to sufficient improvement in the participants’ everyday life or quality of life to make participation worthwhile? Although surveys can be used to answer some of these questions, they often lack the depth of response needed to be insightful.

Qualitative measures add understanding to differential or heterogeneous results. Quantitative subgroup analyses might identify subgroups of participants who were more successful in achieving A1c reductions, but qualitative methods are best suited to answering why and how these groups differ in their level of success. Quantitative analyses alone are unlikely to identify the more nuanced sociocultural or practical features that are major contributors to program effectiveness. In our example, we might find that participants who perceived the intervention more favorably achieved better results than those who had more initial reservations toward the diabetes initiative.

With any intervention, in addition to planned, intended effects, there may also be unintended effects or consequences, either positive or negative. These are highly relevant results, and it is important to understand the total pattern of results, both intended and unintended. Qualitative methods can help identify unintended outcomes that might not have been measured quantitatively but that emerge in qualitative reports from clinicians and participants or are identified through observation. In our example, observations conducted by the research team during implementation might reveal that, as a result of participating in the diabetes intervention, patients spend less time with their physicians discussing other medical concerns, and/or physicians are less likely to initiate discussion about the emotional and mental health status of the patient (i.e., a shift in priorities during the visit).

Adoption

Adoption is quantitatively operationalized as the number or proportion of settings and implementing staff who agree to participate in the intervention. The key qualitative issues in adoption parallel those of reach, but at the levels of settings and staff/implementers. It is important to understand why different organizations, and staff members within these organizations, choose to participate or not, and to understand complex or subtle differences among those organizations and staff members in terms of underlying dynamics and processes. For example, compatibility with mission and current priorities, external factors, and changing context (e.g., policy changes, new regulations, competing demands) often influence whether organizations and key agents within them choose to participate. Quantitative methods can be used to identify standard organizational characteristics associated with participation (e.g., size, prior experience with related innovations, employee turnover rates), but cannot provide a full or detailed understanding of key, usually unmeasured issues. Often, empirical data are not available on key organizational factors (e.g., leadership, reasons for trying a new program).
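
Quantitatively, adoption parallels reach but with settings and staff as the units of analysis, as in this minimal sketch with hypothetical counts:

```python
# Hypothetical counts at the two levels of adoption.
settings_approached, settings_participating = 12, 9
staff_eligible, staff_delivering = 25, 14

print(f"Setting-level adoption: {settings_participating / settings_approached:.0%}")
print(f"Staff-level adoption: {staff_delivering / staff_eligible:.0%}")
```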

Qualitative methods are extremely instructive for understanding reasons for adoption or lack of adoption across targeted staff and their settings. To identify a staff member’s rationale for participating or not in an initiative, semi-structured interviews can be extremely illuminating. Questions can range from straightforward interview prompts such as “Please tell me about your thoughts on participating in initiative X. Why did you not participate in initiative X?” to more in-depth probing with specialized interview techniques that get, deeply and in detail, at specific factors related to uptake of the intervention.

For example, cognitive task analysis [18] is a collection of methods that allow much greater understanding of organizational representatives in terms of their thinking about an issue, including how they make decisions as a group. Central is the concept of a mental model: how one conceptualizes what something is and how it will work. [19] Such issues are critical to understanding decision making around participation and commitment to participation. A key aspect of interviews for understanding adoption is to purposefully select key informants who speak from different perspectives, ranging from individuals “in the trenches” with little authority to organizational leaders, and including those fulfilling different tasks, to provide triangulation across roles for a broader, deeper understanding.

Beyond interviewing, observation can often prove insightful in understanding the forces underpinning adoption. Observation may include a tour of the physical site to see the layout, structure, and space; it may include participant observation and/or role shadowing, in which the observer interacts with participants to explain what is happening and why. A formal ethnographic approach may or may not be used. Observation paired with interviews, where possible, is likely to be highly valuable because it may reveal inconsistencies between participants’ responses to interview questions and what they actually do in practice.

In our example of the diabetes intervention, perhaps the intervention was taken up by the three physicians but not the two physician assistants. Interviews and observation could reveal that the physician assistants in this setting only provide care for patients in acute situations and thus do not have the opportunity to refer patients to a diabetes management program. Examining adoption qualitatively allows for greater understanding of the factors influencing adoption at both organizational and staff levels.

Implementation

Implementation is quantitatively measured through indicators that include fidelity to the intervention protocol, adaptations made to the original intervention or implementation strategies, the cost (especially replication cost) [20, 21] to deliver the program, and the percent of key strategies that are delivered. There are many sub-issues in investigating implementation that lend themselves to qualitative inquiry. In fact, implementation is the RE-AIM dimension where qualitative understanding is most needed and often more meaningful than quantitative information. These issues include understanding the conditions under which consistency and inconsistency occur across staff, settings, time, and different components of program or policy delivery.

The traditional view of understanding implementation is that of fidelity. [22, 23] Knowing the extent to which fidelity (e.g., delivery of key components of a program) is achieved is an important aspect of understanding the contribution of the intervention to observed outcomes: if an effective intervention is not implemented well, its effects are likely diminished. Fidelity is usually measured by having delivery staff or observers complete checklists noting which intervention core components are delivered. While such checklists are useful, additional qualitative inquiry is often necessary to understand the how, why, and to-what-extent questions regarding implementation of an intervention.
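
The checklist summary itself is simple to compute. The sketch below derives a percent-of-core-components-delivered figure per staff member from hypothetical checklist records; the component names and delivery data are assumptions for illustration.

```python
import pandas as pd

# Core components the protocol expects every educator to deliver.
CORE_COMPONENTS = ["goal setting", "self-monitoring",
                   "medication review", "follow-up call"]

# Hypothetical checklist records: one row per component actually delivered.
checklist = pd.DataFrame({
    "educator":  ["A", "A", "A", "B", "B"],
    "component": ["goal setting", "self-monitoring", "follow-up call",
                  "goal setting", "medication review"],
})

# Fidelity per educator: distinct core components delivered / components expected.
fidelity = checklist.groupby("educator")["component"].nunique() / len(CORE_COMPONENTS)
print(fidelity.map("{:.0%}".format))  # A: 75%, B: 50%
```

Such a table flags where fidelity is low; the qualitative work described next explains why.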

Implementation can be understood at a deeper level using specific interview techniques in which a participant walks through, step by step, a recent encounter with an actual patient and then answers questions in multiple passes about the people involved, the communication involved, the tools and resources needed, and other aspects. Implementation may also be better understood through observation and shadowing with extensive field notes, watching what people do as they go through their day. In our diabetes example, perhaps a research assistant shadows the diabetes educator and discovers that many patients are getting only two sessions instead of the recommended four. Interviewing reveals that the diabetes educator is overwhelmed with too many patients and has coped by reducing the number of contacts per patient.

In understanding nuances in implementation, [23] the importance of documenting and understanding adaptations is increasingly recognized. [24, 25] In studies of scale-up or replication, interventions are almost never delivered and integrated precisely the way they were in prior efficacy studies or intervention guides. Increased understanding is needed of how and why programs are altered over time, by whom, for what reasons, and with what results. Qualitative methods, together with quantitative data in an iterative, mixed methods approach, can be essential to understanding adaptations. [26] Adaptations are often not negative; in many cases, adaptations made to improve the fit between the intervention and the local context improve the intervention’s outcomes in that setting. [27, 28] Understanding not just what was adapted, but when, why, and by whom, provides far greater information about the contributions to outcomes, as well as guidance for future scale-up efforts. Finally, understanding decision makers’ perspectives on the types of costs, resources, and burden involved in delivering a program, the person’s or organization’s values, and how they construe “return on investment” is important when implementing, adapting, or discontinuing (de-implementing) programs, and can often best be illuminated through qualitative methods. [29]

Implementation issues are ideal for qualitative methods. If resources permit, an iterative combination of written survey responses to standard questions about the intervention and its implementation, interviews of key informants, and observations (such as role shadowing) can be triangulated to form a thorough picture of not just how well an intervention is implemented, but why and how. This information is extremely useful in informing how the intervention may be translated to other settings and how barriers and difficulties can be overcome.

Maintenance

Understanding program sustainability, including a) why individual benefits continue or fade and b) why the organization delivering the intervention decides to continue or discontinue it, is important for future program design and scale-up. Maintenance is often not assessed because grant funding runs out and sustainability suffers. [3] Therefore, planning for sustainability beyond grant funding is important to address both in the initial intervention design and planned implementation strategies, and after the formal evaluation of a new intervention is over. This is increasingly required, and especially important, in pragmatic studies. [30] Qualitative methods, coupled with early and ongoing stakeholder engagement throughout a study, can help illuminate sustainability problems early and allow implementers to plan for and address them as needed. Brief interviews with those who continue, discontinue, or adapt an intervention can also be very informative. In our diabetes example, we might use interviews with stakeholders to identify existing infrastructure that could support ongoing use of the intervention and embed the intervention into that infrastructure. Case studies describing interventions that were successfully sustained in their context may provide lessons learned for others on how to plan for sustainability.

Complex Issues and Challenges Involving the Use of Qualitative Approaches with RE-AIM

Above we have summarized basic application of qualitative methods to the various RE-AIM dimensions. Below we discuss more complex issues and challenges involving the use of qualitative approaches with RE-AIM.

Understanding patterns of results across RE-AIM dimensions

One of the most challenging issues in applying RE-AIM, or other evaluation models, is understanding patterns of results. Programs or policies often produce different patterns of results across RE-AIM dimensions. For example, a program may be high in reach but low in effectiveness, or be widely adopted but poorly implemented. Qualitative methods can help “look under the hood” to understand the relations among, and reasons for, differential outcomes on RE-AIM dimensions across programs or organizations. This understanding can then be used to address low-performing dimensions through different or modified implementation strategies. For example, we might see that an intervention has high reach and adoption rates but scores low on the implementation dimension. The reasons behind this variation can be manifold: the intervention might resonate with providers and participants initially but prove more complex than anticipated once implementation starts, or organizational support and resources might be lacking, hindering full implementation.
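
A simple per-site RE-AIM profile can make such divergent patterns visible and flag where qualitative follow-up is needed. In the minimal sketch below, the sites, the dimensions scored, and the "low" threshold are all hypothetical choices for illustration.

```python
# Hypothetical proportion-style scores per site on three RE-AIM dimensions.
profiles = {
    "clinic_1": {"reach": 0.62, "adoption": 0.80, "implementation": 0.35},
    "clinic_2": {"reach": 0.58, "adoption": 0.75, "implementation": 0.70},
}

THRESHOLD = 0.50  # arbitrary cut-off for calling a dimension "low"

for site, dims in profiles.items():
    low = [d for d, score in dims.items() if score < THRESHOLD]
    if low:
        print(f"{site}: probe qualitatively why {', '.join(low)} lags")
```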

Assuring completeness of inquiry and understanding through use of relevant theories and frameworks

It is often helpful to apply relevant theories or frameworks to guide qualitative assessment questions and to ensure that important issues are not left out. Theories provide a possible explanation for the order and magnitude of influence of factors that may effect change, whereas frameworks often provide the components for consideration but not how they interact. [31] Many consider the differences between theories, models, and frameworks a moot point; however, their importance lies in the structure they provide for examining factors useful in exploring implementation and sustainment. For example, the Consolidated Framework for Implementation Research (CFIR) [32] is a meta-framework synthesizing prior theories that can guide exploration of different categories of issues. It provides “a menu of constructs that have been associated with effective implementation,” [33] such as the inner and outer settings when considering contextual factors. Including qualitative methods such as observation and interviews, along with the associated analysis, adds insight into these groupings of issues in a way that other methods do not. For example, understanding low reach into a particular population may involve not just understanding that population (characteristics of individuals), but also how the organization planned, promoted, and implemented the intervention, along with the context (inner and outer setting). Such process or active-ingredient theories can be used in combination with RE-AIM to more fully understand the processes behind results on RE-AIM dimensions, or how RE-AIM measures relate to clinical outcomes such as weight loss and reductions in hemoglobin A1c, blood pressure, or lipid levels. In particular, understanding context at different levels and through the lens of different theories and models [34, 35] can be very useful.

There are many relevant theories regarding what helps an intervention be successfully adopted, implemented, or sustained that can be useful in framing qualitative assessments. For example, Normalization Process Theory [36] examines factors such as coherence (sense-making work), cognitive participation (relational work), collective action (operational work), and reflexive monitoring (appraisal work) that help an intervention become “normalized,” or in other words routine. [37] Examining the main domains of such theories can guide an inquiry to make sure relevant factors for maintenance are not overlooked. For example, it may be that an intervention is not maintained because the leaders in the organization are not actively learning from their implementation and adapting to make needed improvements. [38] Normalization Process Theory could help make these deficits clear.

There are many theories and models about what components and conditions are necessary for successful implementation and dissemination. [39, 40] Some are general and apply across areas, such as the study of macrocognition, or how groups make decisions and work together in real-world environments. [19] There are context-specific models, such as the Chronic Care Model, [41] that inform what it takes to implement chronic care interventions and approaches in health care settings. Often, organizational-level theories such as Donabedian’s Quality of Care Model [42] or Bodenheimer’s Building Blocks of Primary Care [43] are useful in understanding adoption and maintenance. Models and theories can be used to generate questions and to collect and analyze data in both quantitative and qualitative ways, but the richness of how the elements of these models or theories come together in each situation is often best captured through qualitative methods.

How and when to conduct qualitative inquiries

Qualitative methods should be used whenever possible, at multiple points in the course of program or policy delivery. At the pre-implementation stage, they are useful for exploratory work in selecting or developing an appropriate intervention and implementation strategy(ies). It is helpful to consider each of the RE-AIM dimensions and how they may be affected by a specified intervention. [2, 17] Qualitative methods are also helpful for baseline assessment related to the RE-AIM dimensions, to highlight differences or changes that may occur during delivery and maintenance.

It can be useful to use RE-AIM self-tests about estimated impact to follow up on RE-AIM planning profiles and help with decision-making. [44] Qualitative assessments of RE-AIM issues can be especially useful in helping organizations design programs and policies. [45] Clinic or community leaders often must decide among alternative evidence-based programs or policies. The most widely used (and often oversimplified or poorly conducted) qualitative approach during program planning is an initial focus group to help inform a program design or implementation strategies. This is less expensive and time-consuming than alternative approaches, but it often fails to reveal root-cause issues and may not truly engage patients or stakeholders as partners. We encourage readers to consider alternatives such as photovoice, direct observation, individual interviews with probes, user-centered design methods, or other approaches outlined in Table 2. An innovative approach to engaging stakeholders as true partners uses the framework of co-creation or co-production. [46]

Such assessments relate to the emerging focus on stakeholder engagement, which involves, but should not be limited to, patients and families. From a RE-AIM perspective, it is important to conduct assessments and identify key factors at multiple levels, such as patients, families or community members, implementation staff (who may change over the course of an intervention), organization leaders, and external agents such as community leaders or health care plans. [47] With the Patient-Centered Outcomes Research Institute (http://www.pcori.org), great interest and attention have been placed on engaging patients in research and making research methods and approaches patient-centered. Qualitative approaches can be ideal for patient and stakeholder engagement: patients and stakeholders can be involved in forming the qualitative research questions, identifying appropriate interview or observation topics, and helping to interpret results. A plethora of literature is available in the area of community-based participatory research that can guide these efforts. [48]

Qualitative inquiry during intervention implementation is sometimes controversial. There is concern that asking questions and being involved during program delivery may cause a “reactive” intervention effect. Although an extensive discussion of this matter is beyond this paper, consideration of the data collection method and timing is warranted. Strategies such as sampling a subset of participants that is analyzed separately, as with Critical Incident Analysis, [49] or non-invasive methods such as shadowing/observation, are likely to produce an abundance of understanding at little risk of contamination. Additionally, innovative and less frequently used qualitative approaches can help address challenges that arise during implementation. For example, rapid response and analysis techniques can be very useful relatively early on to guide needed adjustments or to inform intervention implementation in a positive way. One practical approach is to collect qualitative data on a small scale and incorporate it into mini learning opportunities, such as quality improvement PDSA (plan, do, study, act) cycles (see http://www.ihi.org/resources/pages/tools/plandostudyactworksheet.aspx).

Finally, there is qualitative assessment for explanatory purposes, which can occur during or after program implementation (i.e., the maintenance phase of a study). The focus of this inquiry is to understand what came before and why. Deciding which, or how many, qualitative methods to use in a given project depends on many factors, several of which are practical: the purpose of the evaluation, the state of the science, budget and time constraints, and available expertise.

Cost and time considerations with qualitative research

As with other research methods, collecting and analyzing qualitative data is not free from the burdens of time and expense. We posit that qualitative research is a necessary component of a thorough mixed methods evaluation that explores not only the results of a RE-AIM evaluation but the why, when, and how. However, cost and time considerations may prevail. In that case, some qualitative information is better than none. Focus groups may be more efficient for the research team than lengthy individual interviews. Sampling of roles, rather than observing all roles, may be necessary. Multiple methods (i.e., both individual interviews and observations) may not be possible. Thematic analysis may be less time-consuming than true grounded theory. More rapid qualitative approaches are being developed, [50, 51] and if greatly curtailed qualitative methods are all that can be used, some (valid) qualitative measures are better than none, provided they are appropriate to the research or program evaluation questions at hand.

Discussion

There is an important need for increased use of qualitative approaches with RE-AIM (and most other dissemination and implementation models). Indeed, there is an increasing appetite for studies that use qualitative and mixed methods in health services delivery research, [50, 52,53,54,55,56] yet there is a lack of guidance in the literature on how to use qualitative approaches for dissemination and implementation. [52] Inclusion of qualitative approaches is necessary for ‘full use’ of the RE-AIM model. [3, 10] Qualitative approaches are often selected because some issues cannot be quantified well, and because they offer their own unique value. Most frequently, a combination of qualitative and quantitative methods best addresses a given issue. Qualitative methods can be of particular help in situations that are very complex or in which rigorous or unbiased quantitative data are not available or feasible. Even when strong quantitative data and analyses are available, qualitative methods enrich understanding and conclusions via triangulation. Qualitative approaches may not represent the entire population, but they add depth and meaning to facilitate understanding. Such methods can protect against the false conclusion that a program or approach does not work when the problem was truly an implementation failure, [22, 57] and can identify when, where, and what types of adaptation are beneficial. [25, 26] Finally, qualitative methods can offer insight into what it will take to overcome implementation failures in the future.

Qualitative methods can both enhance and advance the usefulness of RE-AIM. Possibly the greatest contribution of qualitative methods to RE-AIM, especially when they are based on theory, is their value in explaining why various RE-AIM results were obtained and how they came about. A potential limitation is that quality use of qualitative approaches can require expertise, resources, and time that may not be available in low-resource or non-research community settings. This is also true of quantitative approaches, however. There are decisions to be made about the depth of inquiry and the amount and type of analysis, and the trade-offs with the time and money it takes to conduct and thoroughly report qualitative results. In many non-academic applications, there is a continuum trading off precision against timeliness. We hope program implementers are not dissuaded by the perceived need to complete a particular length or type of analysis, because even modest contributions are likely to be more valuable than none. Even in such cases, selected qualitative approaches can be part of a pragmatic application of RE-AIM. [17, 58]

Conclusion

In conclusion, there is a clear need for more examples of the use and reporting of qualitative approaches within RE-AIM applications. [3, 10] We hope that in the near future there will be a sufficient number of such uses of RE-AIM to conduct reviews and make recommendations on how best to use and integrate qualitative strategies, identify lessons learned, and create more specific and authoritative guidelines for their use.

Abbreviations

CFIR:

Consolidated framework for implementation research

DIS:

Dissemination and Implementation Science

EMR:

Electronic medical record

PDSA:

Plan, do, study, act

RE-AIM:

Reach, effectiveness, adoption, implementation and maintenance

References

  1. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7. PMID: 10474547.

  2. Klesges LM, Estabrooks PA, Dzewaltowski DA, et al. Beginning with the application in mind: designing and planning health behavior change interventions to enhance dissemination. Ann Behav Med. 2005;29(Suppl):66–75. PMID: 15921491.

  3. Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103(6):e38–46. https://doi.org/10.2105/AJPH.2013.301299. PMID: 23597377.

  4. RE-AIM framework. http://www.re-aim.org/.

  5. Prescription for Health. http://www.prescriptionforhealth.org/.

  6. Active for Life Program. http://www.activeforlife.info/.

  7. Move Program. https://www.move.va.gov/.

  8. Vinson CA, Stamatakis KA, Kerner JF. Dissemination and implementation research in community and public health settings. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York, NY: Oxford University Press; 2017. p. 355–70.

  9. Shoup JA, Gaglio B, Varda D, Glasgow RE. Network analysis of RE-AIM framework: chronology of the field and the connectivity of its contributors. Transl Behav Med. 2015;5(2):216–32. https://doi.org/10.1007/s13142-014-0300-1. PMID: 26029284.

  10. Kessler RS, Purcell EP, Glasgow RE, et al. What does it mean to "employ" the RE-AIM model? Eval Health Prof. 2013;36(1):44–66. https://doi.org/10.1177/0163278712446066. PMID: 22615498.

  11. Glasgow RE. What does it mean to be pragmatic? Pragmatic methods, measures, and models to facilitate research translation. Health Educ Behav. 2013;40(3):257–65. https://doi.org/10.1177/1090198113486805. PMID: 23709579.

  12. Harden SM, Gaglio B, Shoup JA, et al. Fidelity to and comparative results across behavioral interventions evaluated through the RE-AIM framework: a systematic review. Syst Rev. 2015;4:155. https://doi.org/10.1186/s13643-015-0141-0. PMID: 26547687.

  13. Crabtree BF, Miller WL. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1999.

  14. Green J, Thorogood N. Qualitative methods for health research. Thousand Oaks, CA: Sage Publications; 2014.

  15. Creswell J. Qualitative inquiry and research design: choosing among five approaches. 3rd ed. Thousand Oaks, CA: Sage Publications; 2013.

  16. Silverman D. Qualitative research. 3rd ed. Thousand Oaks, CA: Sage Publications; 2010.

  17. Glasgow RE, Estabrooks PE. Pragmatic applications of RE-AIM for health care initiatives in community and clinical settings. Prev Chronic Dis. 2018;15:E02. https://doi.org/10.5888/pcd15.170271.

  18. Potworowski G, Green LA. Cognitive task analysis: methods to improve patient-centered medical home models by understanding and leveraging its knowledge work. Rockville, MD: Agency for Healthcare Research and Quality; 2013.

  19. Crandall B, Klein G, Hoffman RR. Working minds: a practitioner's guide to cognitive task analysis. Cambridge, MA: MIT Press; 2006.

  20. Ritzwoller DP, Glasgow RE, Sukhanova AY, et al. Economic analyses of the Be Fit Be Well program: a weight loss program for community health centers. J Gen Intern Med. 2013;28(12):1581–8. PMID: 23733374.

  21. Ritzwoller DP, Sukhanova A, Gaglio B, et al. Costing behavioral interventions: a practical guide to enhance translation. Ann Behav Med. 2009;37(2):218–27. https://doi.org/10.1007/s12160-009-9088-5. PMID: 19291342.

  22. Bellg AJ, Borrelli B, Resnick B, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443–51. PMID: 15367063.

  23. Allen J, Linnan LA, Emmons KM. Fidelity and its relationship to implementation effectiveness, adaptations, and dissemination. In: Brownson R, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health. New York, NY: Oxford University Press; 2012.

  24. Chambers DA, Norton WE. The Adaptome: advancing the science of intervention adaptation. Am J Prev Med. 2016;51(4 Suppl 2):S124–31. https://doi.org/10.1016/j.amepre.2016.05.011. PMID: 27371105.

  25. Stirman SW, Miller CJ, Toder K, et al. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implement Sci. 2013;8:65. https://doi.org/10.1186/1748-5908-8-65. PMID: 23758995.

  26. Hall T, Holtrop JS, Dickinson LM, et al. Understanding adaptations to patient-centered medical home activities: the PCMH adaptations model. Transl Behav Med. 2017. https://doi.org/10.1007/s13142-017-0511-3. [Epub ahead of print].

  27. Castro FG, Barrera M Jr, Martinez CR Jr. The cultural adaptation of prevention interventions: resolving tensions between fidelity and fit. Prev Sci. 2004;5(1):41–5. PMID: 15058911.

  28. Bauman AE, Cabassa LJ, Wiltsey Stirman S. Adaptation in dissemination and implementation science. In: Brownson RC, Colditz G, Proctor EK, editors. Dissemination and implementation research in health. New York, NY: Oxford University Press; 2017.

  29. Rhodes W, Ritzwoller DP, Glasgow RE. Stakeholder perspectives on costs and resource expenditures: addressing economic issues most relevant to patients, providers and clinics. 2018. In press.

  30. Loudon K, Zwarenstein M, Sullivan F, et al. Making clinical trials more relevant: improving and validating the PRECIS tool for matching trial design decisions to trial purpose. Trials. 2013;14:115. https://doi.org/10.1186/1745-6215-14-115. PMID: 23782862.

  31. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53. https://doi.org/10.1186/s13012-015-0242-0. PMID: 25895742.

  32. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. https://doi.org/10.1186/1748-5908-4-50. PMID: 19664226.

  33. CFIR Technical Assistance Website. http://cfirguide.org/.

  34. Feldstein A, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. 2008;34(4):228–43. PMID: 18468362.

  35. Tomoaia-Cotisel A, Scammon DL, Waitzman NJ, et al. Context matters: the experience of 14 research teams in systematically reporting contextual factors important for practice change. Ann Fam Med. 2013;11(Suppl 1):S115–23. https://doi.org/10.1370/afm.1549. PMID: 23690380.

  36. Normalization process theory on-line users’ manual, toolkit and NoMAD instrument. http://normalizationprocess.org/.

  37. May CR, Mair F, Finch T, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci. 2009;4:29. https://doi.org/10.1186/1748-5908-4-29. PMID: 19460163.

  38. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117. https://doi.org/10.1186/1748-5908-8-117. PMID: 24088228.

  39. Tabak RG, Khoong EC, Chambers DA, et al. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50. https://doi.org/10.1016/j.amepre.2012.05.024. PMID: 22898128.

  40. Dissemination and implementation research models. http://www.dissemination-implementation.org/.

  41. Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q. 1996;74(4):511–44. PMID: 8941260.

  42. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(3):166–206. PMID: 16279964.

  43. Bodenheimer T, Ghorob A, Willard-Grace R, et al. The 10 building blocks of high-performing primary care. Ann Fam Med. 2014;12(2):166–71. https://doi.org/10.1370/afm.1616. PMID: 24615313.

  44. Planning and evaluation questions for initiatives intended to produce public health impact. http://re-aim.org/resources-and-tools/self-rating-quiz/. Accessed 7 Jan 2018.

  45. Ory M, Altpeter M, Belza B, et al. Perceived utility of the RE-AIM framework for health promotion/disease prevention initiatives for older adults: a case study from the U.S. evidence-based disease prevention initiative. Front Public Health. 2015;2:143. https://doi.org/10.3389/fpubh.2014.00143. PMID: 25964897.

  46. Voorberg WH, Bekkers VJJM, Tummers LG. A systematic review of co-creation and co-production: embarking on the social innovation journey. Public Manage Rev. 2014:1333–57. https://doi.org/10.1080/14719037.2014.930505.

  47. Wozniak L, Soprovich A, Rees S, et al. Contextualizing the effectiveness of a collaborative care model for primary care patients with diabetes and depression (TeamCare): a qualitative assessment using RE-AIM. Can J Diabetes. 2015;39(Suppl 3):S83–91. https://doi.org/10.1016/j.jcjd.2015.05.004. PMID: 26227866.

  48. Minkler M, Salvatore AL. Participatory approaches for study design and analysis in dissemination and implementation research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science into practice. New York, NY: Oxford University Press; 2012.

  49. Flanagan J. The critical incident technique. Psychol Bull. 1954;51(4):327–58.

  50. QUALRIS workgroup: qualitative research in implementation science. Unpublished report, 2017. https://researchtoreality.cancer.gov/learning-communities/qualris.

  51. Hamilton AB. Qualitative methods in rapid turn-around health services research. December 11, 2013. https://www.hsrd.research.va.gov/for_researchers/cyber_seminars/archives/780-notes.pdf.

  52. Creswell JW, Klassen AC, Plano Clark VL, et al. Best practices for mixed methods research in the health sciences. Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health; 2011.

  53. Curry LA, Krumholz HM, O’Cathain A, et al. Mixed methods in biomedical and health services research. Circ Cardiovasc Qual Outcomes. 2013;6(1):119–23. https://doi.org/10.1161/CIRCOUTCOMES.112.967885. PMID: 23322807.

  54. Plano Clark VL. The adoption and practice of mixed methods: U.S. trends in federally funded health-related research. Qual Inq. 2010;16:428–40.

  55. Miller WL, Crabtree BF, Harrison MI, et al. Integrating mixed methods in health services and delivery system research. Health Serv Res. 2013;48(6 Pt 2):2125–33. https://doi.org/10.1111/1475-6773.12123. PMID: 24279834.

  56. Palinkas LA, Cooper BR. Mixed methods evaluation in dissemination and implementation science. In: Brownson RC, Colditz G, Proctor EK, editors. Dissemination and implementation research in health. New York, NY: Oxford University Press; 2017.

  57. Barrera M Jr, Castro FG, Strycker LA, et al. Cultural adaptations of behavioral health interventions: a progress report. J Consult Clin Psychol. 2013;81(2):196–205. https://doi.org/10.1037/a0027085. PMID: 22289132.

  58. Estabrooks P, You W, Hedrick V, et al. A pragmatic examination of active and passive recruitment methods to improve the reach of community lifestyle programs: the Talking Health trial. Int J Behav Nutr Phys Act. 2017;14(1):7. https://doi.org/10.1186/s12966-017-0462-6. PMID: 28103935.


Acknowledgements

None

Competing interests

All authors declare that they have no competing interests.

Funding

Dr. Glasgow’s time on this manuscript was partially funded by grant K12 HL137862, IMPlementation to Achieve Clinical Transformation (IMPACT): The Colorado Training Program, from the NIH.

Availability of data and materials

Not applicable

Authors’ contributions

Each author contributed sufficiently to be deemed a co-author. Authors met to describe the overall plan for the manuscript and divided up writing sections, which were completed and shared among the co-authors until all were satisfied. All authors read and approved the final manuscript.

Authors’ information

Dr. Russell Glasgow is the original author of the RE-AIM framework and a co-author of over 450 publications, including many on the use of RE-AIM. Drs. Holtrop and Rabin work in close collaboration with Dr. Glasgow in the Dissemination and Implementation Research Core at the Adult and Child Consortium for Health Outcomes Research and Delivery Science (ACCORDS), a health services delivery research center at the University of Colorado Denver.

Consent for publication

Not applicable

Ethics approval and consent to participate

As this work did not involve the use of human subjects, no ethics approval was needed nor sought.


Corresponding author

Correspondence to Jodi Summers Holtrop.


Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.



Cite this article

Holtrop, J.S., Rabin, B.A. & Glasgow, R.E. Qualitative approaches to use of the RE-AIM framework: rationale and methods. BMC Health Serv Res 18, 177 (2018). https://doi.org/10.1186/s12913-018-2938-8
