
Assessing the use of constructs from the consolidated framework for implementation research in U.S. rural cancer screening promotion programs: a systematic search and scoping review



Cancer screening is suboptimal in rural areas, and interventions are needed to improve uptake. The Consolidated Framework for Implementation Research (CFIR) is a widely used implementation science framework to optimize planning and delivery of evidence-based interventions, which may be particularly useful for screening promotion in rural areas. We examined the discussion of CFIR-defined domains and constructs in programs to improve cancer screening in rural areas.


We conducted a systematic search of research databases (e.g., Medline, CINAHL) to identify studies (published through November 2022) of cancer screening promotion programs delivered in rural areas in the United States. We identified 166 records, and 15 studies were included. Next, two reviewers used a standardized abstraction tool to conduct a critical scoping review of CFIR constructs in rural cancer screening promotion programs.


Each study reported at least some CFIR domains and constructs, but studies varied in which constructs they reported and how. Broadly, constructs from the domains of Process, Intervention, and Outer setting were commonly reported, but constructs from the domains of Inner setting and Individuals were less commonly reported. The most common construct was planning (reported by 100% of studies), followed by adaptability, cosmopolitanism, and reflecting and evaluating (86.7% each). No studies reported tension for change, self-efficacy, or opinion leader.


Leveraging CFIR in the planning and delivery of cancer screening promotion programs in rural areas can improve program implementation. Additional studies are needed to evaluate the impact of underutilized CFIR domains, i.e., Inner setting and Individuals, on cancer screening programs.


Cancer incidence and mortality rates are generally higher in rural areas than in urban areas [1, 2]. Contributing to the elevated mortality rates is the less common uptake of routine screening tests for breast, cervical, and colorectal cancers in rural areas [3,4,5,6,7]; across cancer types, the prevalence of screening is as much as 10% lower in rural areas. Differences in cancer screening compound the elevated burden of select cancer risk factors in rural areas, e.g., higher rates of tobacco use [8] and lower levels of physical activity [9]. Certain barriers to cancer screening, such as travel distance, are more pertinent among rural than urban populations [10,11,12,13,14]. As a result, interventions to promote cancer screening need to be responsive to the specific needs of rural populations, including their context and health systems [15]. Failure to develop or adapt interventions in this way could exacerbate urban/rural differences in cancer screening and outcomes.

The Consolidated Framework for Implementation Research (CFIR) [16, 17] outlines implementation domains and constructs that influence the success of interventions, particularly when implementing these interventions in new contexts. The five CFIR domains are Intervention (i.e., the components and characteristics of the program itself), Inner setting (i.e., immediate context in which the intervention will be implemented), Outer setting (i.e., the larger social context surrounding the implementation setting), Individuals (i.e., those actors who implement the intervention), and Process (i.e., the anticipated process for achieving change); each domain includes 4–14 constructs and sub-constructs [16]. For example, the domain of Inner setting includes the constructs of structural characteristics, networks and communication, culture, implementation climate, and readiness for implementation. Targeting these domains and constructs could improve the quality of program implementation by identifying multilevel barriers and facilitators to evidence-based interventions, e.g., cancer screening promotion programs in rural settings. However, the extent to which rural cancer screening promotion programs assess and address constructs from implementation science, including those from CFIR, is unknown.

This study involves a systematic search and scoping review of published studies on rural cancer screening promotion programs to characterize how interventions considered barriers and facilitators to implementation that map onto CFIR constructs. Specifically, we evaluated (a) whether programs explicitly addressed these constructs and (b) how the constructs informed program planning and delivery. These findings can illustrate the progress and gaps in applying implementation science to rural cancer screening promotion.

Although multiple implementation science frameworks and models exist [18], we chose to structure our work around CFIR because it is well-established among cancer control researchers and practitioners, and it has well-defined, distinct constructs relevant to our review. The findings will highlight the extent to which implementation science has been used in rural cancer screening promotion and identify areas in which additional data are needed. More broadly, the findings will help inform research and quality improvement efforts for increasing cancer screening in rural areas.


Methods

Procedures for study identification

We used standardized procedures for conducting a systematic literature search and scoping review of studies published through November 2022 [19]. This approach involved (a) a comprehensive search of the published research literature and (b) a critical, narrative review and summary. The review focused specifically on the use of CFIR domains and constructs.

First, we developed a list of inclusion/exclusion criteria and search terms (Supplemental Table S1), which we refined with the assistance of a health science librarian. Briefly, eligible studies were those that reported on programs conducted in the United States with cancer screening as a primary study outcome. We included published protocol papers and outcomes papers, but excluded reviews, meta-analyses, and commentaries. All available literature was included, regardless of publication date; although some studies predated the initial publication of CFIR in 2009 [16], this framework described constructs that already existed in the field of implementation science. Search terms included constructs related to (a) evaluation of interventions (e.g., “process evaluation,” “evidence-based practice”), (b) rural settings (e.g., “rural,” “non-metropolitan”), and (c) cancer screening (e.g., “screen*,” “early detection of cancer”). Studies that self-identified their settings as rural were included; that is, we did not apply any additional restrictions on what qualified as a rural setting. To be eligible, studies had to intervene in at least one rural site, including studies with rural-only site(s) and studies with rural and non-rural sites. Then, we conducted a comprehensive search of the scientific literature archived in Medline, CINAHL, Scopus, Cochrane, Web of Science, and Embase.

We used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines to inform the search (Fig. 1). In the systematic search, we identified 166 records for review; 39 of these were duplicate publications that we removed. Two study team members (JLM and KCS) reviewed titles and abstracts for the remaining 127 records, achieving a 98% (125/127) agreement rate. A third team member (WAC) reviewed the titles and abstracts for the records in conflict to resolve the disagreements. Forty-four studies were included in the full-text review, assessed by two independent reviewers (JLM and KCS). Twenty-nine studies were excluded from further analysis, most often (20/29, or 69%) because they did not describe an intervention, e.g., they evaluated pre-existing cohort data or described theoretical/conceptual work on rural cancer screening. Ultimately, 15 studies describing rural cancer screening promotion programs were included in the critical review (Table 1) [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34].
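The screening flow above is internally consistent; a minimal sketch, using only the counts reported in this section, makes the arithmetic explicit:

```python
# Counts from the systematic search, as reported in the text.
records_identified = 166
duplicates_removed = 39

screened = records_identified - duplicates_removed   # titles/abstracts reviewed
agreement = 125 / screened                           # title/abstract agreement rate

full_text_reviewed = 44
excluded_at_full_text = 29
included = full_text_reviewed - excluded_at_full_text

print(screened)                # 127
print(round(agreement * 100))  # 98
print(included)                # 15
```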

Fig. 1

Results of a systematic search on the planning and delivery of studies evaluating rural cancer screening promotion programs in the United States, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram

Table 1 Characteristics of included studies evaluating rural cancer screening promotion programs in the United States

Data extraction and synthesis

To organize data abstraction, we developed a review tool following best practices [35]. The tool included fields focused on bibliographic data; intervention structure (e.g., geographic scope, study design, target sample); intervention dose (e.g., number of visits); intervention mode (e.g., type of materials, language of delivery); and intervention effects, if available. The remainder of the tool was devoted to closed- and open-ended items to assess the 39 CFIR constructs and sub-constructs. The tool included definitions [16] of each construct that reviewers could reference while evaluating a study.

We abstracted data from the included studies to investigate the use of CFIR domains and constructs through two complementary approaches. First, we evaluated the presence or absence of CFIR constructs in each study, creating quantitative, binary indicators for each construct. Two reviewers (JLM and KCS) independently evaluated the presence of each CFIR construct for every study and then met to generate consensus. Importantly, studies may have referred to CFIR constructs explicitly or implicitly, so we determined that a construct was ‘present’ if the study included information that mapped on to the construct definition/meaning even if the study did not use the construct label.

We pilot-tested the data abstraction tool and process by independently reviewing one study and meeting to debrief and resolve questions. Revisions to the tool were made to clarify questions that arose during the pilot-testing process (e.g., emphasizing the difference between an opinion leader and a champion). The reviewers then independently reviewed ~5 studies at a time, meeting every 1–2 weeks to debrief and protect against ‘drift’ in data abstraction. The reviewers achieved an 81% overall agreement rate for the presence/absence of each CFIR construct (agreement by CFIR domain: Intervention: 77%; Inner setting: 89%; Outer setting: 77%; Individuals: 90%; Process: 80%). If necessary, disagreements were resolved through discussion with a third reviewer (WAC).
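Percent agreement of this kind is simply the share of matching binary codes between the two reviewers. A minimal sketch (the codes below are hypothetical illustrations, not data from the review):

```python
def percent_agreement(codes_a, codes_b):
    """Share of constructs on which two reviewers' presence/absence codes match."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical presence (1) / absence (0) codes for eight constructs in one study
reviewer_1 = [1, 0, 1, 1, 0, 1, 0, 1]
reviewer_2 = [1, 0, 0, 1, 0, 1, 0, 1]

print(percent_agreement(reviewer_1, reviewer_2))  # 0.875
```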

Second, we extracted summary information about how studies described each reported construct in program development and/or implementation. We then summarized the prevalence of CFIR constructs and how they informed program development and/or implementation for the included studies of rural cancer screening promotion.


Results

We reviewed a total of 15 studies published between 1997 and 2021 (Table 1). As a result of the inclusion criteria for our review, all studies described interventions with some prospective, longitudinal data collection. All programs targeted adults (ages 18–75+), but age eligibility varied across programs. Some programs focused recruitment on participants with certain characteristics, such as a specific race or ethnicity [20, 21, 24, 33] or low income [22, 23, 27]. Studies examined mammography for breast cancer [20, 22,23,24, 26, 33], Pap smear for cervical cancer [21, 22], low-dose computed tomography (CT) for lung cancer [28, 29], colorectal cancer screening tests (fecal immunochemical test (FIT) and colonoscopy) [30,31,32], or skin cancer checks [22]. On average, studies described 17 CFIR constructs (range: 10–26) (Table 2).
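The construct prevalences cited in the sections that follow (e.g., 86.7% for adaptability) are simple proportions of the 15 included studies; a minimal sketch:

```python
N_STUDIES = 15  # studies included in the critical review

def prevalence(n_reporting, total=N_STUDIES):
    """Percentage of included studies reporting a given construct."""
    return round(100 * n_reporting / total, 1)

print(prevalence(15))  # planning: 100.0
print(prevalence(13))  # adaptability: 86.7
```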

Table 2 Domains, constructs, and sub-constructs from the Consolidated Framework for Implementation Research [16] present in included studies evaluating rural cancer screening promotion programs in the United States

CFIR domain: intervention

Within the Intervention domain, the least commonly discussed construct was trialability (27%), and the most commonly discussed construct was adaptability (87%) (Table 2).

Two programs were developed from an external intervention source, such as the American Cancer Society [23, 27]. More programs were developed from an internal intervention source based on collaborations between community and academic partners [24,25,26, 29, 30] and through formative research such as focus groups and pilot interventions [21, 30]. To demonstrate evidence strength and quality of the program, studies reported on systematic reviews [26], expert advice and education [20, 25], and evidence-based research [23, 24, 29, 34]. Studies indicated the relative advantage of a given program in two ways: (a) emphasizing the importance of selecting evidence-based programs over other options, and (b) asserting that programs that were culturally-appropriate for the target population would be more effective at changing behavior [23,24,25,26,27, 30].

A majority of studies described the adaptability of a program by summarizing refinements to the program’s core components to ensure it would resonate with the target population, including cultural relevance [21, 28, 32, 33] and local knowledge [25, 26, 30]. Some studies discussed adaptability in relation to the ‘adaptable periphery,’ [16] such as program elements [22, 23, 27] and systems related to the program [20, 24, 29]. For example, Sharp et al. described their work to promote cervical cancer screening among American Indian women in rural North Carolina, which they adapted from a previous program targeted at African American women in urban North Carolina [21]. Before program implementation, they introduced adaptations to (a) make the approach more individualized rather than community-based, and (b) increase the relevance to rural, Southern, and American Indian culture. They made additional adaptations during implementation, as well, including changing protocols to allow for more flexible scheduling and revising the format of the individualized risk assessment results. Thus, reported adaptations were multilevel and ongoing throughout this and other studies.

Trialability, or testing the program in a smaller group within the target population, was infrequently discussed [24, 29, 33, 34]. In terms of complexity of program implementation, several studies described difficulties associated with perceptions about (and reality of) the burden of staff trainings and responsibilities [20, 21, 24, 33], additional workload [22, 23, 25, 27, 29, 34], and partners’ time and resource commitment [26].

Design quality and packaging, focusing on design elements, literacy level, and program messaging, was evaluated by focus group participants or community members [22, 25, 26, 28, 34]; usually, this feedback was solicited before implementation [22, 25, 28, 34], but one study gathered this information after implementation [26]. A majority of studies discussed program costs, focusing on program implementation (such as staff training, travel, and management [20,21,22,23, 26, 29, 31, 34]) rather than specific program components (such as free cancer screening tests [30, 32] and advertisements [28]).

CFIR domain: outer setting

Within the Outer setting domain, the least commonly discussed construct was peer pressure (13%), and the most commonly discussed construct was cosmopolitanism (87%) (Table 2).

In describing patient needs and resources, studies included (a) epidemiological data about cancer burden [21, 23, 25,26,27,28, 30,31,32,33], cancer risk factors, or risk factors for forgoing cancer screening [23, 26,27,28, 30, 31, 33, 34] or (b) qualitative or formative research about the specific needs of the community members [20, 21, 25,26,27,28, 33]. Most studies reported on this construct when describing the motivation for locating a program in a particular community, although CFIR suggests that patient needs and resources should be addressed in each stage of program development and implementation. Notably, none of these studies described resources, assets, or strengths; instead, they focused on the needs, deficits, and weaknesses experienced by patients in the local community.

Cosmopolitanism, or the extent to which program settings were networked with other, external organizations, varied considerably across studies in terms of the number and types of organizations in the network. Studies reported collaborations that involved leaders of community organizations [21, 24, 27], public health programs/departments [22, 23, 25, 27, 29, 30], healthcare clinics or providers [23, 25, 28, 31, 32], and universities [24, 26, 28], among others [23, 26, 32, 33]. While most studies described partnerships with 1–5 external organizations, some studies were part of quite large networks; for example, Norman et al. [25] developed a colorectal cancer screening program through the High Plains Research Network, which comprises “16 community hospitals, 55 practices, 120 primary care clinicians, 20 nursing homes, several public health departments, and about 145,000 residents.”

Peer pressure [26, 34] and external policy and incentives [20, 23,24,25, 28, 29, 31] were discussed less frequently. Most often, external policy referred to federal policies that mandated funding for specific public health or clinical programs.

CFIR domain: inner setting

Within the Inner setting domain, the least commonly discussed construct was tension for change (0%), and the most commonly discussed constructs were networks and communications and available resources (both 73%) (Table 2).

Descriptions of structural characteristics of the program settings highlighted their history [23, 26, 30, 31, 34], physical resources [25, 26], and staff support [24, 25, 27]. Networks and communications within program settings were often described in detail [20, 21, 23,24,25,26,27, 29, 31, 32, 34]. Many studies outlined the systematized communication among departments or team members for the duration of program implementation [21, 29, 31, 32, 34]; notably, these systems were almost invariably reported as facilitators of implementation without discussion of how challenges emerged or were addressed [27]. Reported elements of culture of a program setting focused on how a given program aligned with institutional priorities [27], particularly around serving the community [21, 24].

Reports of the implementation climate were relatively infrequent in the studies reviewed, except in descriptions of how team members worked together collaboratively in support of the intervention [24, 27, 34]. Program teams prepared for, and adapted during, implementation by working to maximize compatibility between the program and clinic/organization systems [26, 27, 29, 31, 33, 34], implementation teams [20,21,22], and patient expectations of the program setting [21, 25, 33]. Many of these efforts focused on established pathways for integrating screening tests from a research study into the clinical records. Several studies reported low relative priority of focusing on a particular topic, i.e., that staff may have perceived other health issues as higher priorities than cancer screening [24, 27, 29], but none specified how they attempted to modify this perception. Notably, none of the studies described the tension for change within an organization (beyond describing the data indicating a need for change in the community; see CFIR domain: Outer setting).

The construct of readiness for implementation consists of “specific tangible and immediate indicators of organizational commitment to its decision to implement an intervention” [16], and it includes the sub-constructs of leadership engagement, available resources, and access to knowledge and information. Leadership engagement often involved increasing the commitment and involvement of leaders from the program setting by connecting them with a research team [22, 24, 27, 34] or local community leaders (e.g., elected officials, hospital administrators) [21, 25, 27, 29, 32, 33]. Available resources in program settings included tangible assets [21, 23,24,25, 28, 31, 34] (such as materials or physical space) and intangible resources [20, 21, 34] (such as training/education). Three studies reported on the lack of available resources: (a) Breslau et al. [27] reported that insufficient funds challenged implementation of cancer screening promotion; (b) Cunningham et al. [22] reported that, compared to a better-resourced health department, a health department with fewer resources was able to adopt program innovations more easily because staff did not have to integrate the new procedures with existing practices; and (c) Elliott et al. [34] reported that the COVID-19 pandemic reduced their capacity to recruit and intervene with patients.

CFIR domain: individuals

Descriptions of the Individuals (that is, individuals within the program setting, not the participants in the research projects) were infrequent. Within this domain, the least commonly discussed construct was self-efficacy (0%), and the most commonly discussed construct was knowledge and beliefs about the intervention (40%) (Table 2).

Knowledge and beliefs about the intervention [25, 26, 29, 31, 32, 34] was primarily addressed through use of educational sessions and training [25, 31, 34], knowledge assessments [32], and providing research evidence to program staff in lay language [26]. One study documented staff ambivalence to changes in clinical practice due to skepticism about the program, but this was mitigated through additional peer-led educational sessions [29]. The role of individual stage of change was integrated into one program that leveraged cross-discipline, collaborative communication to move intervention staff through stages of change during program implementation [29].

Individual identification with an organization was discussed by one study that utilized a community advisory council in program implementation [25]; this approach allowed all participants to have the opportunity to contribute ideas and fostered an environment for success. Of the three studies that discussed other personal attributes, Teal et al. [24] described the knowledge gap between community and academic partners that may have impeded program implementation; Lee-Lin et al. [26] designed program materials that could be used by inexperienced yet capable health educators; and Sharp et al. [21] emphasized to community health workers that they needed to stay in compliance with their own medical care in order to effectively encourage community participants to engage with cancer screening.

CFIR domain: process

Within the Process domain, the least commonly discussed construct was opinion leader (0%), and the most commonly discussed construct was planning (100%) (Table 2).

Planning was a crucial component in the development and reporting of all of the studies, involving formative (often qualitative) research [20, 22, 25, 26, 28, 32, 33], pilot studies [21, 27, 30], and time dedicated to training program staff (ranging from 9 to 12 months) [20, 23, 24, 29, 34]. Occasionally, planning included preplanned interim evaluation and adjustment throughout the program period [22, 31], e.g., several cycles of “plan, do, study, act” [31].

The sub-constructs within engaging outline four different roles for representatives from the implementation setting and other organizations (e.g., staff, volunteers, researchers) to be included in the implementation process [16]. Beyond individuals with a formal leadership role in the projects, none of the studies identified individuals as opinion leaders. However, formally appointed internal implementation leaders (primarily, staff embedded in the program setting [20, 22, 24, 25, 29, 31, 34]) and champions (primarily, staff with exceptionally high levels of motivation to go “above and beyond” their specified program responsibilities [20, 25,26,27, 29, 31]) were more common. For example, Lee-Lin et al. [26] reported that having a “passionate local health educator was a critical success factor” for their rural cancer screening promotion program.

Studies reported successes [20, 26, 30, 32], failures [21, 23, 27,28,29], and adaptations [21, 22, 26, 30, 34] in executing the program according to plan. Methods for tracking execution of the plan included extensive monitoring tools to ensure fidelity to the implementation plan [20,21,22, 29, 30, 34]. Reflecting and evaluating – that is, multidirectional, quantitative and qualitative communication about implementation efforts among team members – was described in great detail, leveraging multiple sources of information, numerous modes of communication, and creative methods for tracking implementation outcomes. Two examples that illustrate the breadth and depth of data collection for reflecting and evaluating are (a) Cunningham et al. [22], who described their routine communication, supplemented with presentations from the evaluation team to the program setting, weekly phone calls, quarterly in-person monitoring visits, and quarterly training workshops, and (b) Tolma et al. [33], who collected implementation data through activity logs, one-on-one interviews, phone calls, forms, checklists, surveys, and focus groups. Some studies reported gaps in their reflecting and evaluating [20, 26], noting that they did not have the information needed to evaluate the implementation of specific program components.


Discussion

Among 15 studies of cancer screening promotion programs delivered in rural U.S. areas, all described at least some elements of CFIR that the authors leveraged during the development or implementation of the program. Most commonly, these studies described constructs from the Process, Intervention, and Outer setting domains, including planning, adaptability, and cosmopolitanism. However, constructs from the Inner setting and Individuals domains were described less frequently. While the focus of implementation science frameworks, such as CFIR, is on improving the quality and success of program implementation, they may also improve the effectiveness of behavioral programs [36,37,38]. As such, leveraging a framework such as CFIR when developing and implementing rural cancer screening programs could increase screening uptake and potentially improve cancer outcomes. Recent publications from authors at the U.S. National Cancer Institute and Centers for Disease Control and Prevention emphasize the role of implementation science in developing and delivering better interventions to improve cancer screening and reduce disparities, including guiding the development of programs that are culturally tailored [39], community-based [39, 40], multilevel [39, 40], and scalable [39,40,41]. Changes to health policy and healthcare delivery stemming from the COVID-19 pandemic make it even more important to understand how best to deliver cancer screening promotion programs to the communities and patients that need them most. Additional research is needed on how to improve the use of implementation science to promote cancer screening in rural areas, particularly in terms of optimizing programmatic elements such as internal dynamics (i.e., Inner setting) and staff/stakeholders (i.e., Individuals).

The people in the programs: the role of networks in rural cancer screening promotion programs

Some studies emphasized that people and organizations in their networks played multifaceted roles in the planning and delivery of these programs, such that these roles spanned several CFIR domains: Outer setting (e.g., cosmopolitanism), Inner setting (e.g., networks and communication), Individuals (e.g., knowledge and beliefs), and Process (e.g., engaging). Engagement of community stakeholders is a critical component of effective implementation [27, 39, 40, 42, 43], although community-engaged research approaches are underutilized [44, 45]. Collaboration with community members and community organizations may be particularly important for rural cancer control given the tight relationships and small networks in rural communities [46]. Although half of the studies described internal implementation leaders and champions (i.e., roles internal to the study team), only two studies described external change agents, and none described opinion leaders (see engaging construct under the Process domain).

External change agents often have training in a technical field germane to the program (e.g., paid consultants) [16]. External change agents may be hard for community programs to identify and pay, thereby reducing sustainability of cancer screening programs; as such, the contributions of external change agents (if leveraged) must be measured and reported to increase transparency.

Opinion leaders are “individuals in an organization who have formal or informal influence on the attitudes and beliefs of their colleagues with respect to implementing the intervention” [16] and may be experts or peers, e.g., tribal leaders, religious leaders, or other local community organization leaders. A review of randomized controlled trials found that leveraging opinion leaders can change the behaviors of healthcare professionals by −15% to +72%, i.e., they can have positive or negative effects [47]. Understanding the most advantageous ways to involve opinion leaders is a pressing need for program implementation.

In addition, many programs reported collaborations with external organizations, some of which were quite large (see cosmopolitanism construct under the Outer setting domain); these relationships can be crucial for successful implementation [48]. Interestingly, communication among people and organizations during program implementation was always described positively in these studies (see networks and communications construct under the Inner setting domain), with little discussion of challenges and potential solutions. Approaches such as network analysis could provide more insight into the density and functionality of networks of healthcare and community-based organizations, particularly in rural areas where opportunities for collaboration may be constrained by geography [49, 50]. In addition, some authors have called for greater attention to communication theory in implementation science research [51, 52]; these paradigms could provide insight into how network nodes co-create priorities, information, and resources related to public health and cancer screening. An overarching approach to supporting these networks would be to employ greater team science [15] to ensure that team formation, launch, and maturation are all adequately supported throughout the program lifespan.

Understudied domains and constructs in reviewed studies

Among the included studies, there was limited discussion of certain constructs within the Individuals or Inner setting domains. Two out of the three constructs that were not discussed by any study came from these domains: tension for change (domain: Inner setting) and self-efficacy (domain: Individuals).

Tension for change can be understood as stakeholders’ perceptions of a discrepancy between current practice and what (better) practice is possible [53]. This discrepancy can be driven by patients’ needs (e.g., unacceptably low levels of cancer screening) or by expectations from leadership [53,54,55]. If stakeholders do not feel that change is necessary or even possible, they will have low motivation to support the program, but if stakeholders view the status quo as intolerable, they are more likely to support the program [56, 57]. Gustafson et al. [58] suggest that interventionists will find tension for change difficult to impact, and time burden (and burnout) among healthcare staff can preclude their ability to engage meaningfully in program activities even if they do have tension for change [16, 59, 60]. However, tension for change may wax and wane organically over time [55], and if opinion leaders within an organization have high tension for change, this sentiment can spread throughout the network of stakeholders [58]. These issues highlight the interconnected constructs of tension for change and engagement; additional research is needed to understand the interactions among these variables in predicting program success.

In terms of self-efficacy, CFIR proposes that stakeholders’ self-efficacy for implementation influences their confidence in recognizing barriers and overcoming challenges, and makes them more likely to support the program, even as challenges arise [16, 61]. While decades of research have linked participants’ self-efficacy to health behavior change [62], stakeholders’ self-efficacy is less well-studied. Approaches to improve self-efficacy could focus on simplifying or systematizing interventions or building capacity through training or technical assistance [61]. Efforts to provide training to stakeholders in rural areas can be challenged by time and travel concerns [63], but telementoring programs such as Project ECHO have been successful in improving the self-efficacy of rural healthcare providers to enact cancer prevention and control initiatives in their clinics [64, 65]. Future research should investigate how to train stakeholders in rural areas to implement cancer screening programs, and what impact this training has on intervention outcomes.

Several other constructs were discussed in fewer than 25% of studies: peer pressure (domain: Outer setting); culture and organizational incentives and rewards (domain: Inner setting); individual stage of change, individual identification with organization, and other personal attributes (domain: Individuals); and external change agents (domain: Process). A recent systematic review concluded that behavioral health researchers should use flexible strategies for assessing Outer setting constructs, which are understudied [66]. Efforts to develop reliable and valid scales to assess a range of CFIR constructs specific to cancer control are underway; the availability of psychometrically robust measures may stimulate measurement of these constructs in future studies [67, 68]. Actively measuring and monitoring constructs from these domains could improve intervention outcomes; future studies implementing cancer screening programs in rural settings should therefore better leverage the Individuals and Inner setting domains to improve intervention implementation and outcomes.

Challenges to implementation of rural cancer screening promotion programs

In general, studies highlighted certain CFIR constructs as challenges to successful implementation of cancer screening promotion programs. Program complexity and cost (constructs from the Intervention domain) were commonly discussed, often hand-in-hand, because the time, effort, and resources needed to plan and deliver these programs were often high and potentially prohibitive outside of a grant-funded context. Other studies have highlighted the importance of these two constructs, which are barriers to long-term sustainability [69, 70]. Despite the scientific complexity that may motivate evidence-based cancer screening promotion programs, it is clear that the practical application of these programs must be as simple as possible to ensure widespread dissemination and sustainability, particularly for programs focused on low-resource communities and reducing health disparities [71].

Another challenge was relative priority (from the Inner setting domain), which is perhaps not surprising given the finite resources (tangible and human) available for public health and clinical practice. Gesthalter and colleagues reported their efforts to address knowledge and beliefs about their program among skeptical staff [29], emphasizing the importance of engaging both the “hearts” (i.e., relative priority) and “minds” (i.e., knowledge/beliefs) of program implementers [72]. Yet challenges from program complexity and cost may make efforts to secure buy-in from implementers infeasible. For example, the program implemented by Gesthalter and colleagues added educational sessions with program staff, but these sessions increased the time and costs associated with program implementation. Efforts to engage implementers efficiently are a crucial next step for implementation science research focused on cancer screening promotion programs.

Strengths and limitations

Strengths of this review include the focus on CFIR [16], a comprehensive framework commonly used in cancer-related implementation studies, with domains and constructs applicable to the broad scope of cancer screening promotion programs. We also reported on a largely understudied topic area, identifying key gaps in the collection and reporting of implementation measures in the context of rural cancer screening promotion. This information is necessary to support better implementation of programs in rural settings and help reduce urban/rural differences in cancer screening and outcomes. Another strength was our search of six major databases to capture relevant articles.

Due to large variability in study designs, reporting, and outcomes, a main limitation of this review was our inability to assess associations between program implementation and effect sizes. We did not assess study quality in this review, which likely also influences intervention effect sizes. Future studies should evaluate the quality of implementation, the quality of the intervention, and intervention outcomes. Although reported as a strength, our exclusive focus on CFIR constructs was also a limitation. Tabak and colleagues [18] have reported at least 61 frameworks and models for implementation science research. We may have excluded some studies reporting other implementation measures not present in CFIR or conceptualized in different ways. In addition, we identified some studies reporting on measures that predate CFIR; it is possible that some authors, pre-CFIR, did not use implementation science terminology or report on these measures even if they were collected. Overall, it is possible that studies reporting the development and implementation of rural cancer screening programs were excluded due to these issues, our search strategy, and/or delays in program implementation and reporting as a result of the COVID-19 pandemic. Recently, best practices for reporting implementation studies have been developed, and authors should adhere to such guidelines to the extent possible [73]. Urban/rural comparisons were not possible given the predominant focus on rural settings.


In conclusion, implementation science, including CFIR, holds promise for improving the development and implementation of programs to promote cancer screening in rural communities, yet it has been largely underutilized in published studies. Stronger integration of CFIR domains and constructs could improve intervention delivery and reduce urban/rural disparities in cancer outcomes [1, 2]. Published studies of rural cancer screening promotion programs often leveraged aspects of CFIR’s Process, Intervention, and Outer setting domains, highlighting the importance of supportive networks in the planning and delivery of these programs. These studies less frequently described constructs from the Individuals and Inner setting domains, which reveals avenues for additional research and evaluation. These findings can inform future programs to increase cancer screening in rural communities, particularly if they further examine the role of Individuals and Inner setting constructs that may influence implementation and effectiveness.

Availability of data and materials

All data analyzed during this study are included in the article.



Abbreviations

CFIR: Consolidated Framework for Implementation Research

CT: Computed tomography

FIT: Fecal immunochemical test

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

References


  1. Blake KD, Moss JL, Gaysynsky A, Srinivasan S, Croyle RT. Making the case for investment in rural cancer control: an analysis of rural cancer incidence, mortality, and funding trends. Cancer Epidemiol Biomarkers Prev. 2017;26(7):992–7.

  2. Henley SJ, Anderson RN, Thomas CC, Massetti GM, Peaker B, Richardson LC. Invasive Cancer Incidence, 2004-2013, and Deaths, 2006-2015, in Nonmetropolitan and Metropolitan Counties - United States. MMWR Surveill Summ. 2017;66(14):1–13.

  3. Moss JL, Liu B, Feuer EJ. Urban/Rural Differences in Breast and Cervical Cancer Incidence: The Mediating Roles of Socioeconomic Status and Provider Density. Womens Health Issues. 2017;27(6):683–91.

  4. Belasco EJ, Gong G, Pence B, Wilkes E. The impact of rural health care accessibility on cancer-related behaviors and outcomes. Appl Health Econ Health Policy. 2014;12(4):461–70.

  5. Cole AM, Jackson JE, Doescher M. Urban–rural disparities in colorectal cancer screening: cross-sectional analysis of 1998–2005 data from the Centers for Disease Control’s Behavioral Risk Factor Surveillance Study. Cancer Med. 2012;1(3):350–6.

  6. Coughlin SS, Thompson TD, Hall HI, Logan P, Uhler RJ. Breast and cervical carcinoma screening practices among women in rural and nonrural areas of the United States, 1998–1999. Cancer. 2002;94(11):2801–12.

  7. Doescher MP, Jackson JE. Trends in cervical and breast cancer screening practices among women in rural and urban areas of the United States. J Public Health Manag Pract. 2009;15(3):200–9.

  8. Roberts ME, Doogan NJ, Kurti AN, Redner R, Gaalema DE, Stanton CA, et al. Rural tobacco use across the United States: how rural and urban areas differ, broken down by census regions and divisions. Health Place. 2016;39:153–9.

  9. Mama SK, Bhuiyan N, Foo W, Segel JE, Bluethmann SM, Winkels RM, et al. Rural-urban differences in meeting physical activity recommendations and health status in cancer survivors in central Pennsylvania. Support Care Cancer. 2020;28(10):5013–22.

  10. Greiner KA, Engelman KK, Hall MA, Ellerbeck EF. Barriers to colorectal cancer screening in rural primary care. Prev Med. 2004;38(3):269–75.

  11. Moss JL, Ehrenkranz R, Perez LG, Hair BY, Julian AK. Geographic disparities in cancer screening and fatalism among a nationally representative sample of US adults. J Epidemiol Community Health. 2019;73(12):1128–35.

  12. Studts CR, Tarasenko YN, Schoenberg NE. Barriers to cervical cancer screening among middle-aged and older rural Appalachian women. J Community Health. 2013;38(3):500–12.

  13. Wang H, Roy S, Kim J, Farazi PA, Siahpush M, Su D. Barriers of colorectal cancer screening in rural USA: A systematic review. Rural Remote Health. 2019;19(3):5181.

  14. Yabroff KR, Lawrence WF, King JC, Mangan P, Washington KS, Yi B, et al. Geographic disparities in cervical cancer mortality: what are the roles of risk factor prevalence, screening, and use of recommended treatment? J Rural Health. 2005;21(2):149–57.

  15. Glasgow RE, Vinson C, Chambers D, Khoury MJ, Kaplan RM, Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102(7):1274–81.

  16. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

  17. Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the Consolidated Framework for Implementation Research. Implement Sci. 2016;11(1):72.

  18. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50.

  19. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009;26(2):91–108.

  20. Viadro CI, Earp JAL, Altpeter M. Designing a process evaluation for a comprehensive breast cancer screening intervention: Challenges and opportunities. Eval Program Plann. 1997;20(3):237–49.

  21. Sharp PC, Dignan MB, Blinson K, Konen JC, McQuellon R, Michielutte R, et al. Working with lay health educators in a rural cancer-prevention program. Am J Health Behav. 1998;22(1):18–27.

  22. Cunningham LE, Michielutte R, Dignan M, Sharp P, Boxley J. The value of process evaluation in a community-based cancer control program. Eval Program Plann. 2000;23(1):13–25.

  23. Bencivenga M, DeRubis S, Leach P, Lotito L, Shoemaker C, Lengerich EJ. Community partnerships, food pantries, and an evidence-based intervention to increase mammography among rural women. J Rural Health. 2008;24(1):91–5.

  24. Teal R, Moore AA, Long DG, Vines AI, Leeman J. A community-academic partnership to plan and implement an evidence-based lay health advisor program for promoting breast cancer screening. J Health Care Poor Underserved. 2012;23(2 Suppl):109–20.

  25. Norman N, Bennett C, Cowart S, Felzien M, Flores M, Flores R, et al. Boot camp translation: a method for building a community of solution. J Am Board Fam Med. 2013;26(3):254–63.

  26. Lee-Lin F, Domenico LJ, Ogden LA, Fromwiller V, Magathan N, Vail S, et al. Academic-community partnership development lessons learned: evidence-based interventions to increase screening mammography in rural communities. J Nurs Care Qual. 2014;29(4):379–85.

  27. Breslau ES, Weiss ES, Williams A, Burness A, Kepka D. The implementation road: engaging community partnerships in evidence-based cancer control interventions. Health Promot Pract. 2015;16(1):46–54.

  28. Cardarelli R, Reese D, Roper KL, Cardarelli K, Feltner FJ, Studts JL, et al. Terminate lung cancer (TLC) study-A mixed-methods population approach to increase lung cancer screening awareness and low-dose computed tomography in Eastern Kentucky. Cancer Epidemiol. 2017;46:1–8.

  29. Gesthalter YB, Koppelman E, Bolton R, Slatore CG, Yoon SH, Cain HC, et al. Evaluations of implementation at early-adopting lung cancer screening programs: lessons learned. Chest. 2017;152(1):70–80.

  30. Katz ML, Young GS, Reiter PL, Pennell ML, Plascak JJ, Zimmermann BJ, et al. Process evaluation of cancer prevention media campaigns in Appalachian Ohio. Health Promot Pract. 2017;18(2):201–10.

  31. Schlauderaff P, Baldino T, Graham KC, Hackney K, Hendryx R, Nelson J, et al. Colorectal cancer screening in a rural US population. Int J Health Governance. 2017;22(4):283–91.

  32. Woodall M, DeLetter M. Colorectal Cancer A collaborative approach to improve education and screening in a rural population. Clin J Oncol Nurs. 2018;22(1):69–75.

  33. Tolma EL, Stoner JA, Thomas C, Engelman K, Li J, Dichkov A, et al. Conducting a formative evaluation of an intervention promoting mammography screening in an American Indian community: the native women’s health project. Am J Health Educ. 2019;50(1):52–65.

  34. Elliott TE, O’Connor PJ, Asche SE, Saman DM, Dehmer SP, Ekstrom HL, et al. Design and rationale of an intervention to improve cancer prevention using clinical decision support and shared decision making: a clinic-randomized trial. Contemp Clin Trials. 2021;102:106271.

  35. Polanin JR, Pigott TD, Espelage DL, Grotpeter JK. Best practice guidelines for abstract screening large-evidence systematic reviews and meta-analyses. Res Synth Methods. 2019;10(3):330–42. Epub 2019 Jun 24.

  36. Hall KL, Oh A, Perez LG, Rice EL, Patel M, Czajkowski S, et al. The ecology of multilevel intervention research. Transl Behav Med. 2018;8(6):968–78.

  37. Lobb R, Colditz GA. Implementation science and its application to population health. Annu Rev Public Health. 2013;34:235–51.

  38. Brouwers MC, De Vito C, Bahirathan L, Carol A, Carroll JC, Cotterchio M, et al. Effective interventions to facilitate the uptake of breast, cervical and colorectal cancer screening: an implementation guideline. Implement Sci. 2011;6(1):112.

  39. Kennedy AE, Vanderpool RC, Croyle RT, Srinivasan S. An overview of the national cancer institute’s initiatives to accelerate rural cancer control research. Cancer Epidemiol Biomarkers Prev. 2018;27(11):1240–4.

  40. White A, Sabatino SA, Vinson C, Chambers D, White MC. The Cancer Prevention and Control Research Network (CPCRN): advancing public health and implementation science. Prev Med. 2019;129S:105824.

  41. Weaver SJ, Blake KD, Vanderpool RC, Gardner B, Croyle RT, Srinivasan S. Advancing rural cancer control research: national cancer institute efforts to identify gaps and opportunities. Cancer Epidemiol Biomarkers Prev. 2020;29(8):1515–8.

  42. Means AR, Kemp CG, Gwayi-Chore MC, Gimbel S, Soi C, Sherr K, et al. Evaluating and optimizing the consolidated framework for implementation research (CFIR) for use in low- and middle-income countries: a systematic review. Implement Sci. 2020;15(1):17.

  43. Young HM, Miyamoto S, Henderson S, Dharmar M, Hitchcock M, Fazio S, et al. Meaningful Engagement of Patient Advisors in Research: Towards Mutually Beneficial Relationships. West J Nurs Res. 2020;43(10):905–14.

  44. Key KD, Furr-Holden D, Lewis EY, Cunningham R, Zimmerman MA, Johnson-Lawrence V, et al. The continuum of community engagement in research: a roadmap for understanding and assessing progress. Prog Community Health Partnersh. 2019;13(4):427–34.

  45. Miller WL, Rubinstein EB, Howard J, Crabtree BF. Shifting implementation science theory to empower primary care practices. Ann Fam Med. 2019;17(3):250–6.

  46. Allen P, Walsh-Bailey C, Hunleth J, Carothers BJ, Brownson RC. Facilitators of multisector collaboration for delivering cancer control interventions in rural communities: a descriptive qualitative study. Prev Chronic Dis. 2022;19:E48.

  47. Flodgren G, Parmelli E, Doumit G, Gattellari M, O’Brien MA, Grimshaw J, et al. Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2011;8:CD000125.

  48. Leeman J, Baquero B, Bender M, Choy-Brown M, Ko LK, Nilsen P, et al. Advancing the use of organization theory in implementation science. Prev Med. 2019;129S:105832.

  49. Prusaczyk B, Maki J, Luke DA, Lobb R. Rural health networks: how network analysis can inform patient care and organizational collaboration in a rural breast cancer screening network. J Rural Health. 2019;35(2):222–8.

  50. Cunningham FC, Ranmuthugala G, Plumb J, Georgiou A, Westbrook JI, Braithwaite J. Health professional networks as a vector for improving healthcare quality and safety: a systematic review. BMJ Qual Saf. 2012;21(3):239–49.

  51. Gainforth HL, Latimer-Cheung AE, Athanasopoulos P, Moore S, Ginis KA. The role of interpersonal communication in the process of knowledge mobilization within a community-based organization: a network analysis. Implement Sci. 2014;9:59.

  52. Manojlovich M, Squires JE, Davies B, Graham ID. Hiding in plain sight: communication theory in implementation science. Implement Sci. 2015;10:58.

  53. Holt DT, Helfrich CD, Hall CG, Weiner BJ. Are you ready? How health professionals can comprehensively conceptualize readiness for change. J Gen Int Med. 2010;25 Suppl 1(Suppl 1):50–5.

  54. Miake-Lye IM, Delevan DM, Ganz DA, Mittman BS, Finley EP. Unpacking organizational readiness for change: an updated systematic review and content analysis of assessments. BMC Health Serv Res. 2020;20(1):106.

  55. King DK, Shoup JA, Raebel MA, Anderson CB, Wagner NM, Ritzwoller DP, et al. Planning for implementation success using RE-AIM and CFIR frameworks: a qualitative study. Front Public Health. 2020;8:59.

  56. Liang S, Kegler MC, Cotter M, Emily P, Beasley D, Hermstad A, et al. Integrating evidence-based practices for increasing cancer screenings in safety net health systems: a multiple case study using the consolidated framework for implementation research. Implement Sci. 2016;11:109.

  57. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

  58. Gustafson DH, Sainfort F, Eichler M, Adams L, Bisognano M, Steudel H. Developing and testing a model to predict outcomes of organizational change. Health Serv Res. 2003;38(2):751–76.

  59. Harry ML, Truitt AR, Saman DM, Henzler-Buckingham HA, Allen CI, Walton KM, et al. Barriers and facilitators to implementing cancer prevention clinical decision support in primary care: a qualitative study. BMC Health Serv Res. 2019;19(1):534.

  60. Cropanzano R, Rupp DE, Byrne ZS. The relationship of emotional exhaustion to work attitudes, job performance, and organizational citizenship behaviors. J Appl Psychol. 2003;88(1):160–9.

  61. Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12(1):125.

  62. Strecher VJ, DeVellis BM, Becker MH, Rosenstock IM. The role of self-efficacy in achieving health behavior change. Health Educ Q. 1986;13(1):73–92.

  63. Charlton M, Schlichting J, Chioreso C, Ward M, Vikas P. Challenges of Rural Cancer Care in the United States. Oncology (Williston Park). 2015;29(9):633–40.

  64. Varon ML, Baker E, Byers E, Cirolia L, Bogler O, Bouchonville M, et al. Project ECHO cancer initiative: a tool to improve care and increase capacity along the continuum of cancer care. J Cancer Educ. 2021;36(Suppl 1):25–38.

  65. Lopez MS, Baker ES, Milbourne AM, Gowen RM, Rodriguez AM, Lorenzoni C, et al. Project ECHO: a telementoring program for cervical cancer prevention and treatment in low-resource settings. J Glob Oncol. 2017;3(5):658–65.

  66. McHugh S, Dorsey CN, Mettert K, Purtle J, Bruns E, Lewis CC. Measures of outer setting constructs for implementation research: a systematic review and analysis of psychometric quality. Implement Res Pract. 2020;1:2633489520940022.

  67. Fernandez ME, Walker TJ, Weiner BJ, Calo WA, Liang S, Risendal B, et al. Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research. Implement Sci. 2018;13(1):52.

  68. Kegler MC, Liang S, Weiner BJ, Tu SP, Friedman DB, Glenn BA, et al. Measuring constructs of the consolidated framework for implementation research in the context of increasing colorectal cancer screening in federally qualified health center. Health Serv Res. 2018;53(6):4178–203.

  69. Atkins DL, Wagner AD, Zhang J, Njuguna IN, Neary J, Omondi VO, et al. Brief Report: Use of the Consolidated Framework for Implementation Research (CFIR) to Characterize Health Care Workers’ Perspectives on Financial Incentives to Increase Pediatric HIV Testing. J Acquir Immune Defic Syndr. 2020;84(1):e1–6.

  70. Ware P, Ross HJ, Cafazzo JA, Laporte A, Gordon K, Seto E. Evaluating the implementation of a mobile phone-based telemonitoring program: longitudinal study guided by the consolidated framework for implementation research. JMIR Mhealth Uhealth. 2018;6(7):e10768.

  71. Baumann AA, Cabassa LJ. Reframing implementation science to address inequities in healthcare delivery. BMC Health Serv Res. 2020;20(1):190.

  72. Blase KA, Fixsen D, Sims BJ, Ward CS. Implementation science: Changing hearts, minds, behavior, and systems to improve educational outcomes. Oakland: The Wing Institute; 2015.

  73. Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017;356:i6795.

Acknowledgements


The authors acknowledge Amy Knehans, MLIS, D-AHIP, Associate Librarian at Harrell Health Sciences Library at Penn State College of Medicine, for her assistance with the systematic search.

Funding


Funding for this project came from the National Institutes of Health (K22 CA225705; PI: Moss) and the Junior Faculty Development Program from the Penn State College of Medicine Office of Faculty and Professional Development. The funders played no role in the study design, interpretation of results, drafting/reviewing the manuscript, or the decision to pursue publication.

Author information

Authors and Affiliations



JLM designed the study, conducted data extraction and analysis, interpreted the findings, and drafted and revised the manuscript. KCS conducted data extraction and analysis, interpreted the findings, and drafted and revised the manuscript. MLP contributed to data extraction and analysis, developed the tables, and drafted and revised the manuscript. WAC contributed to the design of the study, resolved conflicts in data extraction and analysis, interpreted the findings, and drafted and revised the manuscript. JLK contributed to the design of the study, interpreted the findings, and drafted and revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jennifer L. Moss.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Moss, J.L., Stoltzfus, K.C., Popalis, M.L. et al. Assessing the use of constructs from the consolidated framework for implementation research in U.S. rural cancer screening promotion programs: a systematic search and scoping review. BMC Health Serv Res 23, 48 (2023).



  • Consolidated Framework for Implementation Research (CFIR)
  • Cancer screening
  • Rural
  • Non-metropolitan
  • Program planning
  • Implementation science