Using cognitive interviews to improve a measure of organizational readiness for implementation

Abstract

Background

Organizational readiness is a key factor for successful implementation of evidence-based interventions (EBIs), but a valid and reliable measure to assess readiness across contexts and settings is needed. The R = MC2 heuristic posits that organizational readiness stems from an organization’s motivation, innovation-specific capacity, and general capacity. This paper describes a process used to examine the face and content validity of items in a readiness survey developed to assess organizational readiness (based on R = MC2) among federally qualified health centers (FQHCs) implementing colorectal cancer screening (CRCS) EBIs.

Methods

We conducted 20 cognitive interviews with FQHC staff (clinical and non-clinical) in South Carolina and Texas. Participants were provided a subset of items from the readiness survey to review. A semi-structured interview guide was developed to elicit feedback from participants using “think aloud” and probing techniques. Participants were recruited using a purposive sampling approach and interviews were conducted virtually using Zoom and WebEx. Participants were asked 1) about the relevancy of items, 2) how they interpreted the meaning of items or specific terms, 3) to identify items that were difficult to understand, and 4) how items could be improved. Interviews were transcribed verbatim and coded in ATLAS.ti. Findings were used to revise the readiness survey.

Results

Key recommendations included reducing the survey length and removing redundant or difficult-to-understand items. Additionally, participants recommended using consistent terms throughout the survey (e.g., other units/teams vs. departments) and changing pronouns (e.g., people, we) to be more specific (e.g., leadership, staff). Moreover, participants recommended specifying ambiguous terms (e.g., define what “better” means).

Conclusion

Use of cognitive interviews allowed for an engaged process to refine an existing measure of readiness. The improved and finalized readiness survey can be used to support and improve implementation of CRCS EBIs in the clinic setting and thus reduce the cancer burden and cancer-related health disparities.

Contributions to the literature

  • This study helps to advance the field of implementation science by developing a valid and reliable measure that aligns with the R = MC2 heuristic to increase implementation success.

  • This study provides an example of how to use cognitive interviewing to improve measurement tools.

  • This study confirms some of the common pitfalls known in survey design and measurement tools.

  • This paper demonstrates how an improved readiness survey can contribute to implementation of evidence-based interventions for cancer prevention and control.

Background

Colorectal cancer (CRC) is the second most common cause of cancer death in the United States (US) [1, 2]. The US Preventive Services Task Force recommends CRC screening (CRCS) for adults aged 45-75 who are at average risk [3]. While CRCS rates increased in the years prior to the COVID-19 pandemic, they still lag behind national goals, and the pandemic caused additional delays or halts in screening [4]. For example, recent estimates suggest 65.2% of adults were screened, while the US Department of Health and Human Services (DHHS) Healthy People 2030 target is 74.4% [5] and the National Colorectal Cancer Roundtable’s goal is to achieve 80% CRCS rates in every community [6]. Moreover, CRCS rates differ across racial and ethnic groups, and disparities in screening uptake persist [7]. For example, CRCS uptake is highest among Whites and lowest among Hispanics [8].

Federally qualified health centers (FQHCs) provide affordable healthcare for many Americans, many of whom live at or below the federal poverty level and come from underserved communities with lower CRCS rates [9]. Despite this reach, CRCS rates among FQHCs (40.1% in 2020) remain below the national average (65.2%) [10, 11]. CRCS is also a Uniform Data System clinical quality measure for health centers. To help increase CRCS rates, FQHCs utilize evidence-based interventions (EBIs), such as provider assessment and feedback, provider reminders, client reminders, and reducing structural barriers [12, 13]. EBIs provide guidance on strategies to implement and promote use of CRCS [14]. Additionally, the Guide to Community Preventive Services (the Community Guide) [15] disseminates recommended EBIs. Despite having these EBIs available, implementation remains a challenge; Hannon et al. and Adams et al. found FQHCs often discontinue an EBI because of capacity issues [16, 17]. Thus, there is a gap in the motivation and capacity to effectively implement and sustain EBIs to improve CRCS. For example, when electronic health records cannot support integration of provider reminder systems or provider assessment and feedback reports, uptake, implementation success, and sustainability of the EBI are compromised. Additionally, provider-related EBIs require strategic partnerships that take time to build, showing readiness can be an ongoing and shifting process [17]. Moreover, CRCS is often a lower priority for providers, especially for patients with multiple chronic conditions or complex medical histories [18]. Several initiatives exist to increase implementation of EBIs to promote CRCS in FQHCs, including the Centers for Disease Control and Prevention’s (CDC) Colorectal Cancer Control Program [19], the Cancer Prevention and Control Research Network (CPCRN) [16], the American Cancer Society’s (ACS) Community Health Advocates Implementing Nationwide Grants for Empowerment and Equity (CHANGE) grant program [20], and the Evidence-Based Cancer Control Programs (EBCCP) [21].

In the health care setting, understanding and attending to organizational-level barriers and organizational readiness have been associated with implementation success [22,23,24]. Readiness represents a central construct in several implementation science frameworks, including the Interactive Systems Framework for Dissemination and Implementation (ISF) [25], the Consolidated Framework for Implementation Research (CFIR) [26], Getting To Outcomes [27], and Context and Capabilities for Integrating Care [28]. Organizational readiness plays a role during all phases of program implementation [22] and reflects the organization’s commitment, motivation, and capacity for change over time [24]. This idea of readiness emerged from the ISF [22, 25]. Informed by the ISF [22, 25] and past research identifying the importance of organizational capacity [29] and motivation [23, 30], Scaccia et al. [22] developed a heuristic for organizational readiness known as R = MC2. The R = MC2 heuristic proposes that readiness is made up of three distinct components: the organization’s motivation to implement an innovation, general organizational capacities, and innovation-specific capacities.
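Written out in words, the R = MC2 shorthand can be rendered as follows. This is a schematic expression of the heuristic as described above, not a scoring formula taken from the original source:

Readiness (R) = Motivation (M) × Innovation-Specific Capacity (C) × General Capacity (C)

The multiplicative form is meant to convey that weakness in any one component undermines overall readiness, regardless of how strong the other components are.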

Organizational readiness is critical to successful implementation, yet there is a need for a valid and reliable measure that aligns with the R = MC2 heuristic to increase implementation success [22, 23, 31,32,33,34]. A readiness survey was originally developed based on the R = MC2 framework to assess and monitor readiness for implementing a health improvement process among community coalitions and has since been used in other settings [35, 36]. For example, one study applied the readiness survey in a mixed methods approach among primary care and specialty clinics, pharmacies within health systems, and community pharmacies. That study found engaging in the readiness work was associated with many benefits, including increased awareness of readiness challenges, ensuring alignment of priorities, and making sure the intervention was a good fit [37]. Another study adapted the readiness survey to assess organizational readiness for integrated care and developed a Readiness for Integrated Care Questionnaire (RICQ). The tool was then piloted with 11 health care practices that serve vulnerable, underprivileged populations [36]. The readiness survey has further been applied to operationalize readiness building in a variety of settings. Administering the readiness survey is the first of three stages (assessment; feedback and prioritization; strategizing) used to develop and test practical strategies for supporting implementation in real-world settings [38]. While used before, the readiness survey had not been rigorously evaluated in terms of its psychometric properties or used in FQHC settings to assess readiness for implementation of cancer control interventions. This study represents part of a rigorous process of adaptation, validation, and testing of the readiness survey, which is ultimately intended to be used in multiple settings for a variety of implementation efforts.

To be a well-established measure, the readiness survey must demonstrate adequate levels of reliability and validity [39]. The measure development process can include initial item review by respondents and obtaining feedback to improve measures; however, there are few examples in the research literature of having members of the intended response community review items for interpretability and clarity. Cognitive interviewing is a widely used method for improving understanding of question validity and reducing response error [40]. Cognitive interviewing (sometimes called learner verification) is a process by which participants verbalize their thought processes while responding to written text, such as a survey. Cognitive interviews may be used to examine the clarity of words and phrases, the cognitive processes used to arrive at an answer, problems with the measure’s instructions, and the optimal order and context of information presented to the interviewee [41, 42].

The work described in this article was part of a larger study to further develop, refine, and test the previously developed readiness survey [24]. The larger study consists of multiple phases that include both qualitative and quantitative analyses [24]. Results presented in this paper represent one of the qualitative phases of this larger, lengthier measure development process [43]. The overall goal of the larger study is to adapt, further develop, and evaluate the validity and reliability of the existing readiness survey so that it can be used across settings and topic areas to assess readiness and inform implementation strategy development or other efforts to improve implementation of evidence-based interventions. The purpose of this paper is to describe the qualitative process used to improve the existing measure of readiness.

Methods

This study used a narrative research approach to guide our qualitative work. The narrative research approach helped us understand individuals’ experiences with the readiness survey [44]. The research team that completed this study is diverse, with a range of qualifications, experiences, and familiarity with FQHCs. This allowed for the integration of diverse perspectives in the research design, analysis, and interpretation of findings. We used the Standards for Reporting Qualitative Research (SRQR) checklist when compiling this manuscript. The Committee for Protection of Human Subjects at the institutions associated with this study reviewed and approved all procedures and protocols.

Setting and recruitment

We recruited participants from 11 FQHC systems in South Carolina (SC) and Texas (TX). We used a purposive sampling approach to recruit participants [45] by leveraging existing relationships with FQHCs. In TX, we worked with a clinic contact (e.g., nurse manager) who provided a list of possible participants and their email addresses so we could reach out about interview participation. In SC, we directly emailed FQHC contacts with whom the research team had previous relationships from other projects related to implementing CRCS initiatives. We collected data between December 2020 and February 2021. All cognitive interviews were conducted using a virtual online video platform (Zoom or WebEx) and were approximately one hour in length. Interviews were conducted virtually because of the COVID-19 pandemic. Interviews were audio-recorded and professionally transcribed verbatim. Interview participants were compensated with $75 e-gift cards for their time.

Interviews

This study used a qualitative, semi-structured cognitive interview approach to gather feedback on the existing items included on the readiness survey. The research team developed a cognitive interview guide that included a general debriefing discussion about first impressions of the readiness survey and specific questions about each item consistent with the cognitive interview methods described by Beatty and Willis [41, 46]. The readiness survey was shared with participants prior to the interview.

Three research team members (DC, ED, MM) trained in cognitive interviewing techniques completed the interviews. We asked interview participants to think aloud, or describe their thought process out loud, as they answered questions. We then went through items on the readiness survey one at a time with the participant. Given the length of the readiness survey and because the interviews were being conducted virtually, the research team divided the readiness survey into subsets, and each participant was asked about only a portion of the readiness survey, thus keeping the interviews to approximately one hour.

Interviews focused on the following topics: 1) what the participant was thinking about when they read a question, 2) how easy/difficult questions were to answer, 3) how the questions could be improved, and 4) how they interpreted specific terms in the questions. Table 1 describes the purpose and examples of interview questions. We asked participants a series of questions regarding the readiness survey instructions including: “What are your general impressions of the Readiness Survey instructions?” “What in the instructions did you think was unclear?” and “How would you improve this?” Data were collected from at least two interview participants per item on the readiness survey. In addition, we administered a brief descriptive questionnaire before the interview to ascertain sociodemographic information from participants. The reporting of our qualitative study is guided by the Standards for Reporting Qualitative Research (SRQR) checklist [47].

Table 1 Purpose and examples of cognitive interview questions

Data analysis

We used an inductive approach to analyze transcripts, allowing themes to emerge from the data [48]. We coded and analyzed the transcripts using qualitative analysis software (ATLAS.ti 9 for Windows). Two researchers (ED, MM) used open coding to review interview data independently. Both researchers then discussed the coded transcripts to share any new codes and develop consensus. A working codebook was developed and updated as new codes emerged. We reviewed the data until saturation was reached (no new themes or ideas emerged from the data) and recurring themes were identified [49].

We then mapped the transcripts back to each survey item in the readiness survey. A summary of participants’ recommended changes was constructed for each survey item and organized in Microsoft Excel. The Excel document’s column headings included: Current Readiness Survey Item, Participant Response (Interview 1), Participant Response (Interview 2), Summary of Participants’ Recommendations, Analysis Team’s Thoughts, Refinement Level (minor, major, no change), Proposed New Item, and What Change Was Made (a brief explanation of the change) (Additional file 1). This comprehensive Excel document was used to summarize changes to all subsets of the readiness survey, including the two interviews per subset of items.

This Excel document was then shared with the rest of our analysis team, which consisted of three more team members (LW, AL, TW). Our analysis team went through interview data in the Excel document and discussed each readiness survey item. Proposed changes for any items were also recorded in the Excel document. Proposed new items were color coded for major and minor edits. Finally, a third expert review team consisting of three readiness survey experts (MEF, BW, AW) reviewed suggestions and proposed final modifications to the readiness survey items based on the interview data. All steps leading up to the proposed change were recorded in a table (Additional file 2) that included: Current Readiness Survey Item, Interview Responses, Summary of Participants’ Recommendation, Analysis Team Thoughts, and Proposed New Readiness Survey Item.
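As an illustration of how such a tracking table can be assembled, the short sketch below builds one row of the item-level change log described above. This is a hypothetical example rather than the authors' actual workbook: the column names mirror those listed in this section, and the sample row paraphrases the "our clinic is innovative" change reported in the Results.

import csv

# Column headings mirroring the tracking document described in the Methods.
COLUMNS = [
    "Current Readiness Survey Item",
    "Participant Response (Interview 1)",
    "Participant Response (Interview 2)",
    "Summary of Participants' Recommendations",
    "Analysis Team's Thoughts",
    "Refinement Level",  # minor, major, or no change
    "Proposed New Item",
    "What Change Was Made",
]

# One illustrative row, paraphrasing an example change reported in the Results.
rows = [
    {
        "Current Readiness Survey Item": "Our clinic is innovative.",
        "Participant Response (Interview 1)": "Unsure what 'innovative' means here.",
        "Participant Response (Interview 2)": "Term may not be familiar to all job types.",
        "Summary of Participants' Recommendations": "Replace 'innovative' with plainer wording.",
        "Analysis Team's Thoughts": "Agree; reword for clarity.",
        "Refinement Level": "major",
        "Proposed New Item": "Our clinic is open to changes in the way we do things.",
        "What Change Was Made": "Replaced jargon with concrete phrasing.",
    },
]

# Write the tracking table so it can be opened and extended in Excel.
with open("readiness_item_tracking.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)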

Results

Participants

A total of 20 individuals from FQHC clinics across SC (n = 9) and TX (n = 11) participated in the cognitive interviews. Participants represented 11 different FQHC systems across SC and TX, in addition to the SC Primary Health Care Association (SCPHCA). The SCPHCA was included because of their close relationships with FQHC staff and their unique perspective as the unifying organization for SC FQHCs. Participants represented a variety of roles or job types (e.g., quality improvement directors, nurses, medical assistants). This allowed for multiple perspectives, given that staff members representing a variety of roles would be taking the survey. Participants were mostly female (75%), Black/African American (40%), and aged 35-44 years (45%). Characteristics of all interview participants are shown in Table 2.

Table 2 Characteristics of cognitive interview participants (n = 20)

Overall readiness survey feedback

Interview participants provided constructive feedback and suggested recommendations to improve the readiness survey:

  • The Survey “Needs to be Shortened.” Overall, participants felt that the readiness survey was too long and needed to be shortened. A participant explained how long surveys are less likely to be completed: “The shorter a survey is, the more likely the people will complete it.” One participant explained how she will close out of surveys if they are too long: “When you send a survey…and I get them all the time, I usually click on it. I try to gauge on the first page how long the survey by the percentage. But when I answer the first couple of questions, If I’m at 8% completed, I’m going to close... I don’t have time to sit there for 30 minutes and answer your 40 questions. If I’m 20 or 25%, which tells me it’s only four or five questions, I’m more likely to finish it.”

  • The Introduction is “Clear and Concise.” Participants expressed that the introduction language for the survey was clear. A participant explained that “it’s kind of just standard evaluation language and nothing that stood out as confusing.”

  • A 7-point Likert Scale is Challenging. One participant suggested that using a 7-point Likert scale would be more challenging to answer than a 5-point Likert scale. She described, “I can see after a while people may be get inconsistent with, do I pick a two or three... I’ve not done that much detail in a Likert scale, so that may be a little bit of a concern.”

Item redundancy

A key recommendation that participants had was to remove items they interpreted to be duplicative. For example, multiple participants stated that they felt that the following three items were too similar: “people can safely tell their coworkers about any mistakes they make,” “people feel comfortable telling their coworkers about any mistakes they make,” and “people feel it is safe to admit any mistakes they make to a coworker”. Participants went further to explain that they liked “people feel comfortable telling their coworkers about any mistakes they make” the best out of the three items because it was the most straightforward.

Clarifying terminology

Intervention vs. innovation

Participants noted the importance of using terminology that is common in the clinic setting. For example, in their clinics, they are more familiar with the term intervention instead of innovation (as used in the readiness survey). Intervention is used in the well-known term “evidence-based intervention” (EBI). Participants recommended switching out innovation for intervention throughout the entire survey. For example, one participant explained, “When I think of innovation, to me, innovations is something that you’re leading in, almost like something you designed... and they haven’t necessarily designed this… innovation isn’t a word that I think that I would use.”

Pronouns and vague terminology

An additional recommendation included clarifying vague items. For example, the item “we communicate well with each other” was interpreted as vague and open to interpretation. Participants recommended clarifying who “we” is. For example, “we feel confident in our ability to implement this innovation” left participants unsure whether the research team was referring to a particular clinic, staff, or the entire FQHC system. Participants flagged vague words throughout the cognitive interviews. They discussed that certain terms like “our organization”, “people”, and “we” could mean different things in different items. For example, “our organization” and “people” were both interpreted by participants as meaning leadership/board members or staff. Participants recommended clarifying these vague terms to specify either leadership or all clinic staff. In another example, the item “our clinic is among the first to try new ways of doing things” was also interpreted as vague. A participant explained how “things” is a vague term: “community outreach things? I think that “things” would need to defined.” Additionally, some participants recommended specifying ambiguous terms like “better.” When reading the following items: 1) “this innovation is better than other innovations we have used before in our clinic” and 2) “the innovation meets our clinic needs better than what we have been doing,” several participants asked: “how is it better than before?” and “what do you mean by better?” In the item “the people in our clinic value others’ unique skills and talents,” participants indicated that they were not sure what “others” meant. They suggested this term could mean other organizations or other colleagues.

Words with more than one meaning

Participants also highlighted terms that could have more than one meaning. Additional terms noted by participants as needing clarification included “minority,” “take the time,” and “others.” Minority was defined by participants as “people not in leadership” or “the minority among job titles in the clinical area.” However, one participant noted that this could also be defined by race/ethnicity and suggested clarifying. Participants also recommended defining “take the time” because this could mean different things to different people. For example, one participant explained how people’s interpretations could differ: “some people sat around to discuss how it worked, but I would think that that would be, you know, to run a report to see how it worked and then look at the- look at the data.”

Phrases that are difficult to understand

In general, participants felt most of the items on the readiness survey were easy to understand. However, participants recommended removing items that were difficult to understand, such as the item “people are not afraid to be themselves at work.” A participant explained, “I don’t know how to interpret that, I know I’m not afraid to be myself at work… every new institution has their policies and procedures and work ethic, you know, but I don’t know if, they’re showing themselves in work, Um, it’s, a little bit in a difficult, or in a to understand, ideally for me.”

Changes made to the readiness survey

Examples of item changes are illustrated in Table 3. For example, participants felt the item “people in this clinic generally reflect on how things are going” used a vague term (“reflect”). Therefore, the research team changed the item to “people in this clinic generally talk about how things are going.” This wording was suggested by an interview participant. Another example involves the item “our clinic is innovative.” A participant explained how different roles and job types at the clinic might not know what we mean by innovative. Therefore, the review team decided to change the item to say, “our clinic is open to changes in the way we do things.” A final example concerns the item “intra-organizational relationships are important for implementation.” Participants explained that intra-organizational was a confusing and “ambiguous” term. Therefore, after discussion, the review team decided it was important to clarify what is meant by intra-organizational relationships and changed the item to “relationships between staff members across departments are important for implementing this innovation.”

Table 3 Examples of item changes

Discussion

Readiness assessments can be used to support and improve implementation of EBIs for cancer prevention and control and thus improve CRC outcomes. This paper described a process for collecting user-focused data to improve a comprehensive readiness measure based on the R = MC2 heuristic, assessing how items were understood, their relevance to the healthcare setting/context, and general interpretations of the survey structure. A better measure of organizational readiness is an essential step towards informing strategies to improve implementation. This paper also provides an example of a rapid process to engage the intended response community in improving measurement tools. Despite challenges related to conducting this study during the COVID-19 pandemic, we were able to gather opinions from a diverse range of voices (including job types), which strengthens the readiness survey. It is critical to ensure that tools are tailored to and representative of the intended audience.

Although the use of qualitative methods in implementation science is well established, few published studies describe the use of cognitive interviewing to develop and refine measures that assess contextual factors influencing implementation. This research provides an opportunity to better understand the complexity of the implementation context, as well as to incorporate a diverse range of perspectives to improve our measure of readiness [50, 51]. Qualitative approaches explore the complexity of human behavior (feelings, perceptions, experiences, and thoughts) and generate a deeper understanding of participants’ experiences in certain settings. Incorporating qualitative data into this study helps tailor the readiness survey to the setting it is designed for [52, 53]. Furthermore, using a comprehensive Excel document to summarize changes to all subsets of the readiness survey was a good strategy because it helped organize a vast amount of information into an easily accessible format that multiple team members could use to make decisions about refining the readiness survey.

Within measure development approaches, there are common issues to avoid when developing items [54,55,56,57]. Our interview participants identified some key examples within our measure (e.g., jargon, vague terms, and words with more than one meaning). It can be difficult to identify these issues if we rely only on the measure developers or “expert” reviewers. Thus, our study adds an important example of the advantages of this user-testing stage in the measure development process. Our study also demonstrates how we processed the information so others can follow this approach.

A potential limitation of our study is that interviews were conducted using online conferencing platforms (e.g., Zoom, WebEx) instead of in person. The ideal format for cognitive interviews is in person so the interviewer can observe body language [58]. However, we were unable to conduct the interviews in person because of the COVID-19 pandemic, and this format facilitated reaching more participants in both South Carolina and Texas during a critical time when FQHC clinics were balancing many responsibilities. A second limitation of our study is that we were only able to show interview participants a subset of items. We broke the readiness survey into subsets because we wanted the virtual interviews to last no longer than one hour. A third limitation is that three participants identified as non-native English speakers, which may have influenced the way in which they interpreted and/or responded to the items. A fourth limitation is that data were collected from only two to three interview participants per item on the readiness survey. Because we wanted feedback on a large number of items, we broke up the item sets so the interviews were more manageable for participants. Recruitment was also a challenge because COVID-19 was overwhelming health centers at the time of the interviews. Therefore, each item was reviewed by only two to three participants.

There is a need for a comprehensive measure of readiness. Overall, the goal of this study was to improve the readiness survey, a measurement tool based on the R = MC2 framework. Readiness is critical for successful implementation. This paper describes the use of cognitive interviews as part of a larger study [43] to validate the readiness survey. The next phase (developmental phase) of our study includes distributing the readiness survey to a large sample of FQHC clinics across the U.S. for continued testing and development. Quantitative data collected from FQHC clinics that complete the readiness survey will be analyzed and integrated with the cognitive interview findings to develop a final version of the readiness survey. From there, the readiness survey will be distributed again to a larger, national set of FQHC clinics for survey validation (validation phase). This novel mixed methods approach allows for comprehensive development and validation of a measurement tool.

Conclusion

Key recommendations included removing items interpreted as asking about the same concept and items that were difficult to understand. Additionally, participants recommended keeping terms consistent throughout the survey and changing pronouns (e.g., people, we) to be more specific (e.g., leadership). Moreover, participants recommended specifying ambiguous terms (e.g., define what “better” means).

By improving the readiness survey, the goal is to develop a theoretically informed, pragmatic, reliable, and valid measure of organizational readiness that can be used across settings and topic areas, by researchers and practitioners alike, to increase and enhance implementation of cancer control interventions. The finalized readiness survey will be used to support and improve implementation of EBIs for cancer prevention and thus reduce the cancer burden and cancer-related health disparities.

Availability of data and materials

The interview data generated and analyzed during the current study are not publicly available to protect the privacy of those interviewed. However, the de-identified summarized tables represented in Additional files 1 and 2 may be available from the corresponding author on reasonable request.

Abbreviations

EBIs: Evidence-based interventions

FQHC: Federally qualified health centers

CRCS: Colorectal cancer screening

ISF: Interactive Systems Framework for Dissemination and Implementation

SC: South Carolina

TX: Texas

References

  1. Siegel RL, Miller KD, Fuchs HE, Jemal A. Cancer statistics, 2021. CA Cancer J Clin. 2021;71(1):7–33.

  2. Siegel RL, Miller KD, Goding Sauer A, Fedewa SA, Butterly LF, Anderson JC, et al. Colorectal cancer statistics, 2020. CA Cancer J Clin. 2020;70(3):145–64.

  3. U.S. Preventive Services Task Force. Final recommendation statement: colorectal cancer: screening. 2021. Available from: https://www.uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening.

  4. Harber I, Zeidan D, Aslam MN. Colorectal cancer screening: impact of COVID-19 pandemic and possible consequences. Life (Basel). 2021;11(12):1297.

  5. Office of Disease Prevention and Health Promotion, Office of the Secretary, U.S. Department of Health and Human Services. Increase the proportion of adults who get screened for colorectal cancer — C-07. Available from: https://health.gov/healthypeople/objectives-and-data/browse-objectives/cancer/increase-proportion-adults-who-get-screened-colorectal-cancer-c-07. Accessed 22 Dec 2022.

  6. American Cancer Society. Achieving 80% colorectal cancer screening rates in every community. Available from: https://nccrt.org/80-in-every-community/#:~:text=80%25%20in%20Every%20Community%20is,colorectal%20cancer%20screening%20rates%20nationally. Accessed 22 Dec 2022.

  7. Joseph DA, King JB, Dowling NF, Thomas CC, Richardson LC. Vital signs: colorectal cancer screening test use — United States, 2018. MMWR Morb Mortal Wkly Rep. 2020;69(10):253.

  8. May FP, Yano EM, Provenzale D, Brunner J, Yu C, Phan J, et al. Barriers to follow-up colonoscopies for patients with positive results from fecal immunochemical tests during colorectal cancer screening. Clin Gastroenterol Hepatol. 2019;17(3):469–76.

  9. Hébert JR, Adams SA, Ureda JR, Young VM, Brandt HM, Heiney SP, et al. Accelerating research collaborations between academia and federally qualified health centers: suggestions shaped by history. Public Health Rep. 2018;133(1):22–8.

  10. American Cancer Society. Colorectal cancer screening rates reach 44.1% in FQHCs in 2018. Available from: https://nccrt.org/colorectal-cancer-screening-rates-reach-44-1-in-fqhcs-in-2018/.

  11. National Colorectal Cancer Roundtable. CRC news: August 12, 2021. 2021. Available from: https://nccrt.org/crc-news-august-12-2021/.

  12. Hannon PA, Maxwell AE, Escoffery C, Vu T, Kohn M, Leeman J, et al. Colorectal cancer control program grantees' use of evidence-based interventions. Am J Prev Med. 2013;45(5):644–8.

  13. Joseph DA. Use of evidence-based interventions to address disparities in colorectal cancer screening. MMWR Suppl. 2016;65:21–8.

  14. The Community Preventive Services Task Force. Guide to Community Preventive Services. Cancer screening: multicomponent interventions—colorectal cancer. Available from: https://www.thecommunityguide.org/findings/cancer-screening-multicomponent-interventions-colorectal-cancer. [cited 22 Dec 2021].

  15. Community Preventive Services Task Force. The Community Guide. 2022. Available from: https://www.thecommunityguide.org/.

  16. Adams SA, Rohweder CL, Leeman J, Friedman DB, Gizlice Z, Vanderpool RC, et al. Use of evidence-based interventions and implementation strategies to increase colorectal cancer screening in federally qualified health centers. J Community Health. 2018;43(6):1044–52.

  17. Hannon PA, Maxwell AE, Escoffery C, Vu T, Kohn MJ, Gressard L, et al. Adoption and implementation of evidence-based colorectal cancer screening interventions among cancer control program grantees, 2009-2015. Prev Chronic Dis. 2019;16:E139.

  18. Kim J, Wang H, Young L, Michaud TL, Siahpush M, Farazi PA, et al. An examination of multilevel factors influencing colorectal cancer screening in primary care accountable care organization settings: a mixed-methods study. J Public Health Manag Pract. 2019;25(6):562–70.

  19. Joseph DA, DeGroff A. The CDC colorectal cancer control program, 2009-2015. Prev Chronic Dis. 2019;16:E159.

  20. Riehman KS, Stephens RL, Henry-Tanner J, Brooks D. Evaluation of colorectal cancer screening in federally qualified health centers. Am J Prev Med. 2018;54(2):190–6.

  21. Evidence-Based Cancer Control Programs (EBCCP). 2020. Available from: https://ebccp.cancercontrol.cancer.gov/index.do.

  22. Scaccia JP, Cook BS, Lamont A, Wandersman A, Castellow J, Katz J, et al. A practical implementation science heuristic for organizational readiness: R = MC2. J Community Psychol. 2015;43(4):484–501.

  23. Weiner BJ, Amick H, Lee SY. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;65(4):379–436.

  24. Walker TJ, Brandt HM, Wandersman A, Scaccia J, Lamont A, Workman L, et al. Development of a comprehensive measure of organizational readiness (motivation × capacity) for implementation: a study protocol. Implement Sci Commun. 2020;1(1):103.

  25. Wandersman A, Duffy J, Flaspohler P, Noonan R, Lubell K, Stillman L, et al. Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation. Am J Community Psychol. 2008;41(3-4):171–81.

  26. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  27. Chinman M, Hunter SB, Ebener P, Paddock SM, Stillman L, Imm P, et al. The getting to outcomes demonstration and evaluation: an illustration of the prevention support system. Am J Community Psychol. 2008;41(3-4):206–24.

  28. Evans JM, Grudniewicz A, Baker GR, Wodchis WP. Organizational context and capabilities for integrating care: a framework for improvement. Int J Integr Care. 2016;16(3):15.

  29. Flaspohler P, Duffy J, Wandersman A, Stillman L, Maras MA. Unpacking prevention capacity: an intersection of research-to-practice models and community-centered models. Am J Community Psychol. 2008;41(3-4):182–96.

  30. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.

  31. Drzensky F, Egold N, van Dick R. Ready for a change? A longitudinal study of antecedents, consequences and contingencies of readiness for change. J Chang Manag. 2012;12(1):95–111.

  32. Holt DT, Vardaman JM. Toward a comprehensive understanding of readiness for change: the case for an expanded conceptualization. J Chang Manag. 2013;13(1):9–18.

  33. Khan S, Timmings C, Moore JE, Marquez C, Pyka K, Gheihman G, et al. The development of an online decision support tool for organizational readiness for change. Implement Sci. 2014;9(1):56.

  34. Storkholm MH, Mazzocato P, Tessma MK, Savage C. Assessing the reliability and validity of the Danish version of organizational readiness for implementing change (ORIC). Implement Sci. 2018;13(1):78.

  35. Scott VC, Gold SB, Kenworthy T, Snapper L, Gilchrist EC, Kirchner S, et al. Assessing cross-sector stakeholder readiness to advance and sustain statewide behavioral integration beyond a state innovation model (SIM) initiative. Transl Behav Med. 2021;11(7):1420–9.

  36. Scott VC, Kenworthy T, Godly-Reynolds E, Bastien G, Scaccia J, McMickens C, et al. The readiness for integrated care questionnaire (RICQ): an instrument to assess readiness to integrate behavioral health and primary care. Am J Orthopsychiatry. 2017;87(5):520–30.

  37. Livet M, Yannayon M, Richard C, Sorge L, Scanlon P. Ready, set, go!: exploring use of a readiness process to implement pharmacy services. Implement Sci Commun. 2020;1(1):52.

  38. Domlyn AM, Scott V, Livet M, Lamont A, Watson A, Kenworthy T, et al. R = MC2 readiness building process: a practical approach to support implementation in local, state, and national settings. J Community Psychol. 2021;49(5):1228–48.

  39. Holmbeck GN, Devine KA. Editorial: an author's checklist for measure development and validation manuscripts. J Pediatr Psychol. 2009;34(7):691–6.

  40. Centers for Disease Control and Prevention. Cognitive interviewing. 2014. Available from: https://www.cdc.gov/nchs/ccqder/evaluation/CognitiveInterviewing.htm.

  41. Willis GB. Cognitive interviewing: a tool for improving questionnaire design. SAGE Publications; 2004.

  42. Tourangeau R, Bradburn NM. The psychology of survey response. In: Handbook of survey research. 2nd ed.; 2010. p. 315–46.

  43. Walker TJ, Brandt HM, Wandersman A, Scaccia J, Lamont A, Workman L, et al. Development of a comprehensive measure of organizational readiness (motivation × capacity) for implementation: a study protocol. Implement Sci Commun. 2020;1(1):103.

  44. Creswell JW, Poth CN. Qualitative inquiry and research design: choosing among five approaches. California: SAGE Publications; 2016.

  45. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Admin Pol Ment Health. 2015;42(5):533–44.

  46. Beatty PC, Willis GB. Research synthesis: the practice of cognitive interviewing. Public Opin Q. 2007;71(2):287–311.

  47. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

  48. Miles MB, Huberman AM, Saldana J. Qualitative data analysis. California: SAGE Publications; 2014.

  49. Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608.

  50. Hamilton AB, Finley EP. Qualitative methods in implementation research: an introduction. Psychiatry Res. 2019;280:112516.

  51. Ramanadhan S, Revette AC, Lee RM, Aveling EL. Pragmatic approaches to analyzing qualitative data for implementation science: an introduction. Implement Sci Commun. 2021;2(1):70.

  52. Johnson R, Waterfield J. Making words count: the value of qualitative research. Physiother Res Int. 2004;9(3):121–31.

  53. Ivey J. The value of qualitative research methods. Pediatr Nurs. 2012;38:319.

  54. Sullivan GM, Artino AR Jr. How to create a bad survey instrument. J Grad Med Educ. 2017;9(4):411–5.

  55. Bandalos DL. Measurement theory and applications for the social sciences; 2018.

  56. DeVellis RF, Thorpe CT. Scale development: theory and applications. California: SAGE Publications; 2021.

  57. Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. New York: Oxford University Press; 2014.

  58. Shepperd JA, Pogge G, Hunleth JM, Ruiz S, Waters EA. Guidelines for conducting virtual cognitive interviews during a pandemic. J Med Internet Res. 2021;23(3):e25173.

Acknowledgments

Emanuelle Dias is supported by the University of Texas Health Science Center at Houston School of Public Health Cancer Education and Career Development Program – National Cancer Institute/NIH Grant T32/CA057712. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health. Research reported in this publication was supported by the American Lebanese and Syrian Associated Charities (ALSAC) of St. Jude Children’s Research Hospital.

Funding

Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under award number 1R01CA228527-01A1, PI: Maria E. Fernandez.

Author information

Contributions

MM, LW, ED, TW, and DC contributed towards all steps of the study including interview recruitment, data analysis, and writing the manuscript. MM, ED, and DC conducted the interviews. All contributing authors (MM, LW, ED, TW, HB, DC, RG, AL, BW, AW, MF) assisted in data analysis steps (refinement of readiness survey). HB, BW, AW, and MF reviewed and provided feedback on the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Maria McClam.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Institutional Review Board at the University of Texas and the University of South Carolina. All methods were carried out in accordance with relevant guidelines and regulations to ensure protection of participants, including gathering informed consent for each individual. Informed consent was obtained from all participants for the study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Supplementary Information

Additional file 1.

A Snapshot Example of the Excel Document Used for Data Analysis.

Additional file 2.

A Snapshot Example of the Full Table Used to Keep Track of Steps Leading to Changes in the Readiness Survey.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

McClam, M., Workman, L., Dias, E.M. et al. Using cognitive interviews to improve a measure of organizational readiness for implementation. BMC Health Serv Res 23, 93 (2023). https://doi.org/10.1186/s12913-022-09005-y

Keywords