
Development and validation of the Readiness to Train Assessment Tool (RTAT)

Abstract

Background

In recent years, health centers in the United States have embraced the opportunity to train the next generation of health professionals. The uniqueness of the health centers as teaching settings emphasizes the need to determine if health professions training programs align with health center priorities and the nature of any adjustments that would be needed to successfully implement a training program. We sought to address this need by developing and validating a new survey that measures organizational readiness constructs important for the implementation of health professions training programs at health centers where the primary role of the organizations and individuals is healthcare delivery.

Methods

The study incorporated several methodological steps for developing and validating a measure for assessing health center readiness to engage with health professions programs. A conceptual framework was developed based on literature review and later validated by 20 experts in two focus groups. A survey-item pool was generated and mapped to the conceptual framework and further refined and validated by 13 experts in three modified Delphi rounds. The survey items were pilot-tested with 212 health center employees. The final survey structure was derived through exploratory factor analysis. The internal consistency reliability of the scale and subscales was evaluated using Cronbach’s alpha.

Results

The exploratory factor analysis revealed a 41-item, 7-subscale solution for the survey structure, with 72% of total variance explained. Cronbach’s alphas (.79–.97) indicated high internal consistency reliability. The survey measures: readiness to engage, evidence strength and quality of the health professions training program, relative advantage of the program, financial resources, additional resources, implementation team, and implementation plan.

Conclusions

The final survey, the Readiness to Train Assessment Tool (RTAT), is theoretically-based, valid and reliable. It provides an opportunity to evaluate health centers’ readiness to implement health professions programs. When followed with appropriate change strategies, the readiness evaluations could make the implementation of health professions training programs, and their spread across the United States, more efficient and cost-effective. While developed specifically for health centers, the survey may be useful to other healthcare organizations willing to assess their readiness to implement education and training programs.

Background

Health centers are community-based primary health care organizations that provide comprehensive, culturally competent, high-quality services to medically underserved and vulnerable patients in the United States. In 2019, there were nearly 1400 health centers with over 252,000 full-time healthcare providers and staff serving 29.8 million patients [1]. Nevertheless, health centers continue to struggle with workforce recruitment and retention [2]. For example, approximately 13% of the clinical staff positions at health centers across the country remain vacant [3]. Furthermore, the United States is projected to experience a shortage of up to 121,900 physicians by 2032 [1]. In anticipation of this primary care workforce shortage, the Health Resources and Services Administration (HRSA), an agency of the U.S. Department of Health and Human Services, increased funding for health professions training (HPT) and is looking for approaches to solve this issue in a sustainable, long-term manner [4,5,6].

The creation of an education and workforce pipeline for the health center workforce is one such approach. In recent years, health centers have invested significant effort in the development and implementation of health professions training (HPT) programs [7]. The Teaching Health Center program for training primary care physicians, authorized under the Affordable Care Act [8]; the model of postgraduate nurse practitioner residency and fellowship training developed by Community Health Center, Inc. (CHCI) [9], together with recent federal funding for such programs [10]; the expansion of health center based training programs in dentistry and behavioral health [11, 12]; and the development of innovative approaches to training medical assistants [13], registered nurses [14], community health workers [15], and others all point to health center interest in this area. In addition to addressing their workforce shortages, health centers’ increased interest in HPT programs can be attributed to several other factors. Among them are the movement towards innovative and integrated clinical and value-based care models, embracing the opportunity to train the next generation of health professionals in a high-performance model of care, and providing them with the tools and skills to address and remove systemic barriers to optimal care for health center patients.

While there is widespread interest in training health professionals across all educational levels and disciplines, many questions and concerns remain regarding capacity, resources, organizational abilities, leadership support, and the potential impact on the core and extended members of primary care teams and the organization as a whole. It is unclear how ready health centers are to implement or engage with HPT programs. Because the implementation of new programs is highly contextual and inherently complex, the extent to which health center engagement with HPT programs will be successful will differ between health centers. Many factors may pose significant challenges to health centers in launching HPT programs. Specific barriers that health centers face in implementing HPT include limitations in structural capacity to engage with students [16]; finding capable preceptors/supervisors; administrative complexity [17, 18]; and leadership and financial considerations [19, 20].

There is a clear need to determine whether HPT programs align with health center priorities and what adjustments would be needed to successfully implement and adopt a training program. The extent to which a health center is ready to engage in HPT programs should be assessed by employing a measure of organizational readiness to change or adopt new behaviors. Such an evaluation of organizational readiness would allow for early identification and mitigation of many barriers to engaging in HPT programs. Furthermore, the potential scale and implications of implementing the programs justify an in-depth analysis of health center readiness; however, the relevant literature in this field is scarce. In the change management literature, the concept of organizational readiness to change is common [21] and is usually used to describe the extent to which an organization is able and prepared to embrace change and employ innovations [22]. Organizational readiness for change can be viewed as a continuum that starts with the process of exploring a new program and ends with its full implementation [23].

The concept of readiness has been previously examined across various disciplines and settings, including hospitals and physician practices [24, 25]. Importantly, research has linked high levels of readiness for change to more effective implementation and concluded that organizational readiness to change is a critical antecedent of successful implementation of new programs [24, 26, 27]. However, despite these conclusions, assessing readiness to change in healthcare settings is often overlooked [24, 28].

A range of general frameworks and models have been proposed to guide implementation efforts [26, 29,30,31], though the available literature does not sufficiently address the questions about how to assess health center readiness to engage with and implement an HPT program. In healthcare settings, the main focus has been on the effects of implementing changes in service delivery and care practices. Less is known about what factors determine the successful implementation of education programs or medical curricula changes [32,33,34] and in particular, what factors influence implementation of HPT programs in health centers.

Measurement of organizational readiness is a field that remains underdeveloped and fragmented [35], with no gold standard [36]. There are several general measurement instruments described in the readiness for change literature [24, 37], and few valid and reliable tools developed specifically for healthcare settings [28, 38]. Such tools include the Texas Christian University-ORC [39], the ORIC [24], the ARCHO [40], and instruments developed specifically for undergraduate medical education [41,42,43,44]. However, these instruments have been designed to assess readiness for the implementation of new policies, guidelines, practices, and changes in curricula rather than HPT programs. Furthermore, the uniqueness of health centers as teaching settings calls for a survey instrument tailored to their specific problems, needs, and requirements. There is a need to develop an instrument that specifically measures constructs important for the implementation of education and training programs at health centers, where the primary role of the organizations and individuals is healthcare delivery.

To summarize, there is a gap in the literature on how to assess health center readiness to engage with and implement HPT programs. In this study we sought to address this gap by developing and validating a new survey instrument to assess the core dimensions of health center readiness to implement HPT programs.

Methods

The Weitzman Institute of CHCI conducted the study from July 2018 to June 2019.

Our objectives were to develop and validate a tool to measure and assess health center readiness to engage with and implement HPT programs that is based on organizational readiness theory and experts’ judgement of the most important factors influencing successful HPT program implementation. The tool, named the Readiness to Train Assessment Tool (RTAT), had to be pragmatic and relevant to a wide range of HPT programs and types of health centers regardless of size, scope, location, etc.

To achieve these objectives, we undertook the following methodological steps: 1) development and validation of a conceptual framework; 2) generation of the initial survey item pool; 3) refinement and validation of the survey items; 4) pilot testing of the survey; and 5) psychometric and structural evaluation. To establish pragmatic strength, we followed recommendations from the literature on “pragmatic measurement” [45]. We ensured that RTAT has survey qualities such as relevance to health centers, applicability to a range of settings, feasibility, benchmarking, and actionability.

The study was approved by the CHCI Institutional Review Board (Protocol ID: 1133).

Conceptual framework

During the first phase of the study, we reviewed relevant dissemination and implementation science principles and constructs as well as measures containing individual survey items (surveys, questionnaires, scales, instruments, and tools). Based on this review, we decided to utilize the Organizational Readiness to Change theory (ORC) and the Consolidated Framework for Implementation Research (CFIR) [26, 31] to guide our work.

For the purposes of the study, we conceptualized the health center workforce needs broadly, at the system level, rather than at the organizational level. Identifying and meeting the needs of the health center workforce at a system level might involve health centers forming partnerships to agree on priorities and actions regarding health professions training. Health centers need to recognize and prioritize their shared goals to meet the workforce needs despite their organizational differences. They need to take a system-wide view of what the workforce needs are and how these needs can be met through HPT implementation.

We broadly defined ‘health professions training’ as any formal organized education or training undertaken for the purposes of gaining knowledge and skills necessary to practice a specific health profession or role in a healthcare setting. Health centers may provide HPT at any educational level (certificate, undergraduate, graduate, professional and/or postgraduate) and in any clinical discipline. For the purposes of this study, the following were considered examples of types of HPT programs:

  • Established affiliation agreements with academic institutions to host students or trainees

  • Formal agreements with individual students

  • Directly sponsoring accredited or accreditation-eligible training programs (across all disciplines and education levels).

Additionally, we defined ‘organizational readiness for change’ as the degree to which health centers are motivated and capable of engaging with and implementing HPT programs. We kept this definition general and in line with the main constructs of the ORC theory (change commitment and change efficacy) [26], mainly because there is no consensus around a definition of organizational readiness for change [36]. Consistent with the ORC theory, while building capacity is a required aspect of successfully getting a health center ready to implement an HPT program, an overall collective motivation, or commitment to engage with HPT, is equally and critically important [26, 46].

Since organizational readiness for change is a multi-faceted and multi-leveled construct [26, 47], to be able to measure it, we defined the domains of organizational readiness to implement an HPT program as the broad characteristics or determinants of organizational readiness which the individual survey items should represent. We utilized the CFIR’s constructs [31] as domains to adapt for our survey and include in the study’s conceptual framework.

While organizational readiness is emphasized as important in most implementation frameworks and theories [23, 48], we chose CFIR for two main reasons. First, it brings together numerous theories developed to guide the planning of implementation research. CFIR is a comprehensive framework of 37 constructs within 5 key dimensions (the intervention, the individuals, the inner setting, the outer setting, and the implementation process) that are considered important for the successful implementation of an intervention [31]. Second, we chose the CFIR constructs because they can be easily customized to diverse settings and scenarios [36] and thus have been used in dozens of studies [49, 50]. This was an important consideration because, when creating a survey, it is important to clearly define and communicate the definitions of the domains to be measured to everyone involved in the survey design. Because CFIR provides the constructs’ terms and definitions to adapt and apply to varied implementations [36], we were able to reliably describe the domains of organizational readiness to implement HPT programs at health centers in the conceptual framework we developed for the study.

We included the following five domains in the study’s conceptual framework:

(1) characteristics of the HPT program; (2) external context (external to the health center factors that influence implementation); (3) organizational characteristics (health center characteristics such as leadership, culture, and learning climate); (4) characteristics of individuals (actions and behaviors of health center staff); and (5) process (the systems and pathways supporting the implementation of a new HPT program within the health center). See Additional file 1 for the detailed conceptual framework.

During the second phase of the study, this framework was validated by 20 subject matter experts with experience in HPT at health centers. We obtained their opinions through two online focus groups conducted in November 2018. The experts were asked to draw on every facet of their own implementation experience to indicate whether the domains/subdomains of the proposed conceptual framework were directly related to factors facilitating or hindering implementation of HPT programs. Experts’ comments were also invited on the practical utility of measuring each of the domains/subdomains before and during the implementation of a new HPT program. Based on the experts’ recommendations, we eliminated one of the subdomains in the initially proposed conceptual framework (Peer Pressure). The validated framework was in line with the CFIR conceptualizations [31].

Survey item pool

Our initial survey item pool consisted of 306 items found through review of existing surveys in the organizational readiness literature and deemed as relevant [24, 39, 42,43,44, 51,52,53,54,55,56]. There was evidence of overall reliability and validity provided for most of these surveys. Although data on the psychometric properties remain to be published for three of the surveys [52,53,54], they contain items broadly similar in content and wording to items from already validated instruments.

We reviewed all items and deleted the redundant ones. Since the items had to reflect the construct to be measured as closely as possible [57] and to fit the purposes of this study, we reworded the remaining items while mapping them to a relevant domain and subdomain in the conceptual framework. During the rewording process, additional changes were intentionally made so that the survey items could be applied to any HPT program and any health center. Rewording and mapping decisions were made by a consensus decision-making process among the members of the research team. After this process, 182 items remained in the item pool for possible inclusion in the survey. This was in line with recommendations for developing an initial item pool that is two to five times greater than the final measure [58, 59], which provides a sufficient margin for selecting the best and most robust combination of items for the final version of the survey.

Content validation

During the third phase of the study, we used a modified, web-based Delphi procedure to further refine and validate the survey item pool. The Delphi approach is a well-established research method for reaching consensus on a specific subject matter among a panel of experts through iterative feedback of information over several rounds, usually 2–4 [60,61,62].

Thirteen subject matter experts were recruited and agreed to serve as panelists in all three Delphi rounds in February–April 2019. For each Delphi round, the experts received an email with instructions and a link to the online survey, which was administered via Qualtrics Research Core software (see Note 1).

The Delphi panelists assessed each survey item based on their level of agreement with the item’s appropriateness and ability to measure the relevant domain/subdomain in the conceptual framework (rounds 1 and 2; a 5-point Likert scale) and the item’s importance for organizational readiness to engage with HPT programs (round 3; 9-point Likert scale). In addition, the panelists had the opportunity to suggest rewording of items as well as new items for the survey. For each round, they had at least 1 week to review and assess the items and to propose changes. The Delphi procedure was closed after round three. At this time, the experts were invited to provide their feedback regarding the design of the final survey.

After each round, all experts received an individual report with their own scores for each item compared against the average group scores. After reviewing the reports, the experts had the opportunity to change and re-submit scores. Based on the level of agreement reached among the panelists in that round, items were either accepted into the survey, eliminated, modified, or added back for re-evaluation. Only survey items that reached the required level of consensus among more than 80% of the panelists were retained in the final survey. In rounds 1 and 2, these levels were the highest two points of the ‘appropriateness’ and ‘ability to measure’ scales. In round 3, only items scored as ‘critically important’ (highest 3 points) were retained.
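
This retention rule is simple to operationalize. The sketch below is our own illustration (the study collected ratings through Qualtrics, not code), and the rating values shown are hypothetical:

```python
import pandas as pd

def item_retained(ratings: pd.Series, top_points: tuple, threshold: float = 0.80) -> bool:
    """True if more than `threshold` of panelists rated the item within `top_points`."""
    return ratings.isin(top_points).mean() > threshold

# Rounds 1 and 2: consensus = the highest two points of the 5-point scales
round1 = pd.Series([5, 5, 4, 5, 4, 3, 5, 5, 4, 5, 5, 4, 5])   # 13 panelists
print(item_retained(round1, top_points=(4, 5)))                # True: 12/13 = 92% > 80%

# Round 3: consensus = 'critically important', the highest 3 points of the 9-point scale
round3 = pd.Series([9, 8, 7, 9, 9, 8, 6, 9, 8, 9, 7, 8, 5])
print(item_retained(round3, top_points=(7, 8, 9)))             # True: 11/13 = 85% > 80%
```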

Over the three Delphi rounds, there was a reduction in the number of survey items and subdomains. The final content-validated survey comprised 65 survey items (statements) over 5 domains and 22 subdomains.

Pilot testing of the survey

During the final phase of the study, we pilot-tested the survey with staff from health centers across the United States. We used the collected responses in the psychometric and structural validation of the survey.

In the instructions, the pilot test participants were asked to answer questions related to their health center’s overall readiness and future plans to engage with HPT programs. They had to indicate the extent to which they agreed with the 65 survey statements as they pertained to their health center’s readiness to engage with HPT program(s). If the health center was considering more than one HPT program, participants were encouraged to think about only one of them for these questions and to respond openly and honestly, based only on their own judgment, regardless of what others expect or what is socially acceptable at their health center.

A five-point Likert scale was chosen for the 65 survey statements because it approximates an interval-level measurement and has been shown to create the necessary variance for examining the relationships among survey items and to yield acceptable internal consistency reliability estimates [63]. Items were positively stated with one exception; the negatively stated item was reverse coded during the analysis but eliminated during the statistical testing of the survey. The distributed questionnaire also contained questions to collect data on the demographic and professional characteristics of the individual respondents and the characteristics of their health centers (e.g., number of patients served, HPT efforts). Additional questions were also added for testing the convergent, discriminant, and predictive validities of the survey.
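
Reverse coding on a five-point scale is a single arithmetic step; the brief sketch below uses hypothetical item names:

```python
import pandas as pd

# Hypothetical responses: items scored 1 (strongly disagree) to 5 (strongly agree)
responses = pd.DataFrame({"item_01": [4, 5, 3],
                          "item_02_neg": [2, 1, 4]})  # the negatively stated item

# On a 1-5 scale, reverse coding maps 1<->5 and 2<->4 (3 stays 3)
responses["item_02"] = 6 - responses["item_02_neg"]
print(responses["item_02"].tolist())  # [4, 5, 2]
```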

Before the distribution of the questionnaire, the survey items and the instructions to the respondents were uploaded into Qualtrics and field-tested for ease of comprehension by two staff members at CHCI/Weitzman Institute who also tested and confirmed the question routing and response choices. Their suggestions contributed to the final version of the distributed questionnaire.

To recruit pilot-test participants, we used both convenience and purposive sampling methods and leveraged the CHCI network. An email invitation with a link to the survey was distributed in four waves to 9209 potentially eligible individuals. In addition, three email reminders were sent over the period May–June 2019.

We screened respondents for eligibility for the survey. To be included in the study, respondents had to be current employees of a health center in the United States. They also had to feel sufficiently informed to answer questions related to current or future engagement with HPT program(s) at their health center. Respondents were provided study information and consented by proceeding to complete the survey. Completing the survey took between 15 and 25 minutes.

Of all 386 respondents, 301 (78%) screened as eligible for participation and engaged with the survey. A final sample of 212 health center employees who completed all survey items was retained for psychometric and structural validation. The pilot test sample included in the analysis represented a wide range of demographic and professional characteristics, as well as roles and experience with HPT programs. Seventy-one percent of the participants were female and 61.5% identified as White/Non-Hispanic. Forty-nine percent had a graduate degree, while 27.7% identified as having a clinical doctorate. The respondents’ health centers were located in 41 US states and had different characteristics and levels of HPT engagement. More details about the sample characteristics can be seen in Table 1, while the characteristics of the participants’ health centers are presented in Table 2. In addition, Table 3 shows the states where the health centers are located.

Table 1 Pilot-test respondent characteristics
Table 2 Characteristics of the health centers of the pilot test participants
Table 3 Pilot test respondents: state where health center is located

Psychometric and structural validation

Exploratory factor analysis (EFA) was used to explore the internal structure and validity of the survey [64, 65]. This method reduces the number of survey items while the reduced survey provides nearly the same amount of information as the original set of items. First-order EFA was carried out by means of principal axis factoring and rotated using the promax procedure with Kaiser’s Normalization to an oblique solution to generate a factor solution for the survey. To assess compliance with the distribution requirements, Bartlett’s test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy were used. To estimate the number of significant item factors, Kaiser’s criterion [66] and Cattell’s scree-plot [67] were used. Cronbach’s alpha coefficients, as well as the average correlations between the items of each factor, were calculated to examine the internal consistency reliability and unidimensionality of the retained factors of the survey [68, 69]. Convergent, discriminant, and predictive validity were also examined.

Data were analyzed using IBM SPSS Statistics 20 Software Package [70].
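
The analysis was run in SPSS, but the same sequence of checks and extractions can be reproduced with open-source tools. The sketch below uses the Python factor_analyzer package; the input file name is hypothetical, and the 'principal' extraction method is used here as an approximation of SPSS’s principal axis factoring:

```python
# pip install factor_analyzer pandas
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

# Hypothetical file: one row per respondent, one column per item, values 1-5
items = pd.read_csv("rtat_pilot_items.csv")

# Suitability of the correlation matrix for factoring
chi_square, p_value = calculate_bartlett_sphericity(items)  # Bartlett's test of sphericity
_, kmo_total = calculate_kmo(items)                         # KMO sampling adequacy
print(f"Bartlett chi2={chi_square:.1f} (p={p_value:.4f}), KMO={kmo_total:.3f}")

# Eigenvalues for Kaiser's criterion (eigenvalue > 1.0) and a scree plot
fa = FactorAnalyzer(rotation=None)
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())

# Extraction rotated to an oblique solution with promax
fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="promax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
communalities = pd.Series(fa.get_communalities(), index=items.columns)
```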

Results

An exploratory factor analysis was used to reduce the set of survey items to a more parsimonious set and to identify those items that would load as predicted. The principal-components method of analysis was used because it is the most common factoring method and because it accounts for common, specific, and random error variances [71, 72].

To determine the suitability of the data for factor analysis, the sample size and the relationship between the responses to the items were examined. The sample size was deemed adequate to appropriately conduct the analyses. The number of subjects with usable data (212) was larger than three times the number of variables (65). In most cases, as long as item intercorrelations are reasonably strong, a sample size of 150 cases is sufficient to produce an accurate solution in exploratory factor analysis [73].

The intercorrelation matrix revealed that underlying structures do exist. Both Bartlett’s test of sphericity (11,198.358; p < .001) and the KMO measure of sample adequacy (.934) confirmed that the properties of the correlation matrix of the item scores were suitable for factor analysis, in terms of the guidelines recommended by Hair et al., 2014 [74]. In the first round of EFA, the responses on the 65 items of the survey were inter-correlated and rotated by means of the promax procedure to an oblique solution. The eigenvalues and the scree plot obtained from this initial extraction were examined, which provided a preliminary indication of the number of factors represented by the data. Based on Kaiser’s [66] criterion (eigenvalues greater than 1.0), 11 factors were extracted. The 11 factors explained 70.8% of the variance in the factor space of the data.

Next, the items included in the extracted 11 factors were scrutinized. All items with factor loadings below .50, or with cross-loadings differing by less than .20 from their primary loading, were removed. To ensure that each factor was well measured, factors with fewer than 3 items were also removed.
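
A sketch of this pruning step follows. We read the cross-loading rule as requiring a gap of at least .20 between an item’s primary and secondary loadings; that reading, like the function itself, is our own illustration rather than the study’s code:

```python
import pandas as pd

def prune_items(loadings: pd.DataFrame, min_loading: float = 0.50,
                min_gap: float = 0.20, min_items: int = 3) -> pd.DataFrame:
    """loadings: rotated factor loadings, items as rows and factors as columns."""
    abs_load = loadings.abs()
    primary = abs_load.max(axis=1)
    # largest cross-loading = second-highest absolute loading in each row
    secondary = abs_load.apply(lambda row: row.nlargest(2).iloc[-1], axis=1)
    kept = loadings[(primary >= min_loading) & (primary - secondary >= min_gap)]
    # drop factors that end up measured by fewer than `min_items` items
    factor_of = kept.abs().idxmax(axis=1)
    counts = factor_of.value_counts()
    return kept[factor_of.isin(counts[counts >= min_items].index)]
```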

Only 41 items loading on the first 7 factors were retained, and these were subjected to a second round of EFA with promax rotation. The KMO measure of sample adequacy (.943) and the significant Bartlett’s test of sphericity (7299.149; p < .001) indicated that the properties of the correlation matrices of the 41 item scores were likely to factor well. The expected seven factors with eigenvalues greater than one were extracted. Inspection of the eigenvalues and the scree-plot confirmed that seven factors had been properly determined. These factors explained 72.64% of the total variance in the data. The communality of every item was .50 or more; the average communality of all items was .72. This extraction, together with the study’s conceptual framework, was used to determine the ultimate number of underlying factors and items to be retained. The factor loadings for the survey items in the finalized factor structure are provided in Table 4.

Table 4 Factor loadings for the survey items in the finalized factor structure

The survey items that were eliminated after factor analysis are presented in Additional file 2. The bivariate Pearson correlations of all items were calculated to provide an overview of their relations and to check for multicollinearity. No multicollinearity was detected; there were no correlation coefficients of .8 or higher.
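
The multicollinearity screen amounts to scanning the upper triangle of the item correlation matrix for large coefficients; a minimal sketch (our illustration):

```python
import numpy as np
import pandas as pd

def flag_collinear(items: pd.DataFrame, cutoff: float = 0.80) -> pd.Series:
    """Return item pairs whose |Pearson r| reaches `cutoff`; empty = no multicollinearity."""
    corr = items.corr()                                    # bivariate Pearson correlations
    upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)  # each pair once, diagonal excluded
    pairs = corr.where(upper).stack()                      # MultiIndex (item_i, item_j) -> r
    return pairs[pairs.abs() >= cutoff]
```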

Thus, in order to measure health center readiness to engage with and implement HPT programs, a new 41-item scale for health center readiness was developed with seven subscales corresponding to the seven factors identified through factor analysis. The subscale variables were created by taking the average of the survey items belonging to a subscale (component) as shown by the factor analyses.

The new scale was constructed as a reflective measure. Reflective measures are expected to have high intercorrelations and to “reflect” the latent construct, in this case the health center readiness to engage with and implement HPT programs. Cronbach’s alpha for the overall scale was excellent (.969). The high reliability of the overall scale indicates strong item covariance or homogeneity, meaning that the survey items measure the same construct well [75]. Further reliability analysis showed that the factors or subscales also had good to excellent internal consistency (.787 to .970).
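
Cronbach’s alpha is computed directly from the item-level data; a minimal sketch of the standard formula, with toy scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores for one (sub)scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 4 respondents answering a 3-item subscale on the 1-5 scale
scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]])
print(round(cronbach_alpha(scores), 3))
```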

The seven subscales that emerged from the data analysis represent seven areas of readiness: readiness to engage (8 items), evidence strength and quality of the HPT program (4 items), relative advantage of the HPT program (4 items), financial resources (3 items), additional resources (3 items), implementation team (4 items), and implementation plan (15 items). See Table 5 for details about the subscales.

Table 5 Descriptions, number of survey items, and Cronbach’s alphas of the seven subscales a

During the reliability analysis, none of the survey items was deleted: the Cronbach’s alphas were already higher than .80 for six of the subscales, and the different survey items contributed to the stability and completeness of the subscales they belonged to. Alpha-if-item-deleted analyses did not show significant improvements.

Within the overall readiness scale, health centers can evaluate their readiness for a specific HPT program using these seven subscales. The survey allows for three levels of assessment and scoring: at the survey item, subscale, and overall scale levels by obtaining their mean (average) scores. Each survey item can and should have its response analyzed and reviewed separately.

Mean (average) scores may range from 1 to 5, with 5 indicating the highest readiness to engage with and implement a specific program. To ease interpretation, these means can then be used to assign one of three levels of readiness, for each survey item, subscale, and the overall scale: developing readiness (mean scores 1.0–2.9); approaching readiness (mean scores 3.0–3.9); and full readiness (mean scores 4.0–5.0).
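
Put together, scoring reduces to averaging and banding. The sketch below uses a hypothetical two-subscale item map for brevity; the real 41-item assignment follows Table 4:

```python
import pandas as pd

# Hypothetical item-to-subscale map (the real assignment comes from Table 4)
SUBSCALES = {"readiness_to_engage": ["item_01", "item_02", "item_03"],
             "financial_resources": ["item_04", "item_05", "item_06"]}

def readiness_level(mean_score: float) -> str:
    """Map a 1-5 mean score to the three readiness bands defined above."""
    if mean_score < 3.0:
        return "developing readiness"    # 1.0-2.9
    if mean_score < 4.0:
        return "approaching readiness"   # 3.0-3.9
    return "full readiness"              # 4.0-5.0

def score_rtat(responses: pd.DataFrame) -> pd.DataFrame:
    """responses: one row per respondent, items already reverse-coded, scored 1-5."""
    out = {}
    for name, item_ids in SUBSCALES.items():
        mean = responses[item_ids].mean(axis=1).mean()   # per-respondent mean, then averaged
        out[name] = {"mean": round(mean, 2), "level": readiness_level(mean)}
    all_items = [i for ids in SUBSCALES.values() for i in ids]
    overall = responses[all_items].mean(axis=1).mean()
    out["overall scale"] = {"mean": round(overall, 2), "level": readiness_level(overall)}
    return pd.DataFrame(out).T
```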

Further, we confirmed the convergent validity of RTAT through sizeable correlations with other measures. RTAT was expected to correlate with two questions: the first rating the health center’s overall readiness to engage with HPT (agreement, on a 1–5 scale), and the second rating the health center’s readiness to engage in quality improvement activities (from poor = 1 to excellent = 5). We tested the discriminant validity of RTAT by confirming that no relationships existed with respondents’ gender and ethnicity, concepts unrelated to the organizational readiness of the health center. Lastly, we tested and confirmed the predictive validity of RTAT with a question rating the overall quality of the HPT provided at the employee’s health center (from poor = 1 to excellent = 5).

Discussion

In this paper, we presented the process we used to develop and validate a survey to measure health center readiness to engage with and implement HPT programs. We used survey development methods that are closely aligned with current recommendations and included the involvement of subject matter experts in the development and validation process; the use of consensus methods and theory to develop the survey items; field-testing; and pilot-testing of the developed survey. The result of this stepwise process, the Readiness to Train Assessment Tool (RTAT), is a theoretically-based, valid and reliable 41-item measure with 7 subscales. It was designed to assess readiness from a health center staff perspective and to be applicable to a wide range of HPT programs and types of health centers.

RTAT addresses the difficult task of measuring health center readiness in a field with few valid assessment tools. Like other measures based on CFIR, the survey subscales assess factors such as perceived complexity, relative advantage, and knowledge and beliefs about the intervention being implemented; available resources; implementation climate; implementation team; and implementation plan [76].

RTAT’s seven subscales, or core elements, suggest that successful implementation depends on the health center being receptive to the change; on agreement among staff members that the evidence about the HPT program is robust; on efficient and effective implementation processes being in place; and on adequate resources being available for implementation, including dedicated staff, money, and time. Since organizational readiness involves collective action from many staff members, the factors reflected in the subscales are illustrative of the collaborative nature of the HPT engagement and implementation work. The survey items outline the multiple steps and considerations of an implementation process that is closely aligned with the context of a health center. As an example, many implementation team members also provide patient care and need release (protected) time for HPT program implementation.

Although specific HPT programs and health centers may require very specific organizational changes, there is a range of key requirements or determinants of implementation success which remain similar across the HPT programs and across all health centers. Throughout the development and validation process, our focus was on these common key requirements to ensure the generalizability and applicability of the survey to various health centers and HPT programs. For these reasons, we performed the survey’s psychometric evaluation with data collected in diverse contexts (health centers in 41 US states with various levels of HPT engagement), from staff with various demographic and professional characteristics (roles/experience with HPT programs), and from the standpoint of different HPT programs. We believe this variability is one of the strengths of our development and validation process. However, we also need to explore the possibility that different groups of employees respond differently to RTAT questions. The extent of different perceptions relating to organizational readiness, any underlying reasons for such differences (e.g., demographic and professional characteristics, geographic location, and different access to information), and their impact on the results from the survey should be examined as a next step.

Furthermore, the pilot test data showed that our respondents were from health centers at different stages of implementation of HPT programs, which could have an impact on the quality of respondents’ responses. In our study, we believe that having responses from health centers that are at different stages of implementation adds needed variability to the data. When creating a scale, considerable variability is desirable because it ensures the ability of items in the scale to measure the variability of the phenomena [57]. Lack of variability makes the scaling efforts challenging and if an item lacks variability it is usually deleted during the analysis.

However, there is recent research that suggests that there are important differences in the relative significance of the broader readiness components (motivation, general capacity, and intervention-specific capacity) at different stages of implementation [77]. While there should be more research into this problem, there are some indications that different readiness issues should be considered at different stages of implementation.

It should also be noted that because of our strict inclusion criteria, the respondents in our pilot test sample might overrepresent health center employees who are interested in HPT program implementation, thus limiting generalizability in this context. However, respondents represented all geographic areas in the United States, had a variety of roles, and probably were more actively involved in the implementation of HPT programs at their health centers. We believe that this specific group of respondents was better placed to accurately report on their health centers’ readiness to engage with HPT programs and the specific factors measured by the survey; their sufficient knowledge of the health centers’ plans adds credibility to their perceptions. The exclusion of employees who did not feel sufficiently informed about their organization’s current or future engagement with HPT might have influenced the data and the results. However, we think this was a necessary and important limitation of the study to ensure that the survey instrument was built using the most reliable data.

As a survey instrument, RTAT demonstrates both psychometric [45, 78, 79] and pragmatic [45] strength. In establishing RTAT’s validity, we considered two important aspects: the survey measure is theory-based and its variance reflects the variance in the actual construct being measured [80]. We used exploratory factor analysis which is the most commonly used statistical method to establish construct validity [64, 65]. We reported 72% of total variance explained by the final EFA model; excellent Cronbach’s alpha of .97 for the overall scale, and Cronbach’s alphas of at least .79 for the seven subscales.

Although we attempted to achieve high standards with respect to the psychometric properties of the RTAT, we believe that additional work assessing the RTAT dimensionality should be considered. Confirmatory factor analysis (CFA) should be used in a larger study to further verify the adequacy of the conceptual structure of the instrument (the scale/subscales determined through the EFA) and to investigate whether the instrument has the same structure across different groups/samples. In our study, while the size of the sample was adequate to meet the requirements for EFA, it was not large enough to allow the simultaneous cross-validation of the instrument with CFA, to assess the extent to which the factors proposed by the EFA fit the data and to confirm the stability of the structure.

Additionally, we acknowledge that not having a specific construct on general capacity in the final survey structure might be a potential limitation of the instrument. Our study had an exploratory approach, and no clear formal hypothesis on the expected dimensionality or factor structure for the instrument was produced based on theory. CFIR provided a general idea about what to expect and key information for selecting the initial set of items, but we had no clear predetermined structure in mind for our instrument and did not seek to confirm the survey structure based on the CFIR domains. Rather, we used expert opinion and factor analytic procedures to explore all possible interrelationships between the set of items and were open to accepting the grouping of items that presented the best structure that was both theoretically and statistically sound. Based on this methodology, general capacity items did not receive much support in the final structure of our instrument. While the initial set of possible survey items included items on general capacity, most of these items were eliminated during the Delphi procedure by our subject matter experts.

As a pragmatic measure, RTAT is expected to have a low burden of administration and to provide maximum value for the efforts for administering it. Its use for benchmarking would allow for measuring the impact of any changes after the start of the program [39], therefore, it can be used during the initiation and the implementation phases [81].

Additionally, since implementation is a complex process that requires the joint and coordinated efforts of the organization as a whole [82], RTAT was designed to measure readiness for change at the individual health center level. Therefore, the wording of the survey items and the instructions to the survey respondents were critical to ensure that all items can be answered by all health center employees; are relevant to all implementation staff roles; and are applicable at any stage of implementation of any HPT program. Additionally, the Likert-type rating scales we chose for the survey statements allow for answers that are not compromised by forced completion. Since the performance of a scale, in general, is not affected by the approach used to calculate the scores (weighted vs unweighted) [59], we decided to use unweighted scoring (means). Therefore, each individual survey item contributes equally to the subscale and overall scale score.

In defining the domains and developing the items to measure them, we followed a “best practice” approach [59] that combines two types of methods, deductive and inductive [83]. The use of the deductive method involved exploring the literature for existing theories and measures in order to define organizational readiness for change and to identify relevant domains and survey items. We extended the CFIR and organizational readiness conceptualizations into survey items by judging and mapping all items against domain and subdomain definitions. This ensured that RTAT is linked and consistent with a larger theoretical framework, which is one of the strengths of our development process.

The use of the inductive method involved conducting focus groups and Delphi rounds with subject matter experts. Their contributions to the development and validation of the survey ensured its strength in measuring the organizational readiness determinants most important and relevant to health centers. By using both deductive and inductive methods (both literature review and expert opinion), we established not only the theoretical underpinning but also the best possible face and content validity for RTAT. Furthermore, we hope that because of its strong face and content validity, health centers will view the survey as appropriate and acceptable for their use [79].

RTAT’s seven subscales identify critical components of organizational readiness that health centers need to address as soon as they start considering engagement with a specific HPT program. Implementation is a complex and multi-factorial process; therefore, it is important to have a better understanding of the influencing factors and mediators of successful implementation, especially during the early planning stages. RTAT will not only help health centers better understand these critical components but will also allow for early identification and mitigation of barriers for engagement. Based on the information from the survey, health centers will be able to not only identify specific issues and areas where additional focus may be needed, but to also select the most relevant and effective strategies for implementation. Implementation strategies are considered essential tools for supporting the implementation process [84] and should be linked to specific determinants of implementation success. There are several lists and taxonomies, including the Expert Recommendations for Implementing Change (ERIC) [85] and the Behavior Change Technique [86], that can be used as a starting point for developing and selecting strategies based on the RTAT results. The developed strategies should be incorporated accordingly into the implementation of HPT programs nationwide.

Conclusions

In conclusion, we developed a parsimonious and practical survey that assesses health center readiness to engage with and implement HPT programs as perceived by health center employees. The final survey, the Readiness to Train Assessment Tool, is a 41-item, 7-subscale organizational readiness scale that is both valid and reliable. It represents determinants of readiness judged as ‘critical to evaluate’ by subject matter experts. The survey measures domains such as readiness to engage, evidence strength and quality of the HPT program, relative advantage of the HPT program, financial resources, additional resources, implementation team, and implementation plan. The advantage of the RTAT is that it covers organizational readiness dimensions that are relevant to a wide range of HPT programs and types of health centers. While developed specifically for health centers, the survey may be useful to other healthcare organizations willing to assess their readiness to implement an HPT program.

Most importantly, the RTAT meets a significant need at the national level. As a next step, it provides an opportunity to formally assess the readiness of all health centers in the United States to train all types of health professionals at any education/training level and across all disciplines [87]. RTAT results from this assessment can address concerns regarding capacity, resources, and organizational abilities when launching any HPT program(s). This assessment can inform HRSA on policies, programs, and funding. It can also assist in the continued collaboration between National Training and Technical Assistance Partners and other HRSA partners (e.g., Primary Care Associations and Health Center Controlled Networks) by informing the development of targeted workforce technical and training assistance for the nation’s health centers [87]. Thus, when followed with appropriate change strategies, the readiness assessments could make the engagement and implementation of HPT programs, and their spread across the United States, more efficient and cost-effective.

Because readiness is a critical predictor of implementation success and because a validated measure of health center readiness to implement HPT programs is needed for both research and practice, the newly developed measure has the potential for high impact.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Notes

  1. Copyright© 2019 Qualtrics. Qualtrics and all other Qualtrics product or service names are registered trademarks or trademarks of Qualtrics, Provo, UT, USA. https://www.qualtrics.com

Abbreviations

HPT:

Health Professions Training

RTAT:

Readiness to Train Assessment Tool

HRSA:

Health Resources and Services Administration

CHCI:

Community Health Center, Inc.

ORC:

Organizational Readiness to Change

CFIR:

Consolidated Framework for Implementation Research

EFA:

Exploratory factor analysis

KMO:

Kaiser-Meyer-Olkin

References

  1. Health Resources & Services Administration. What is a health center? 2020. https://bphc.hrsa.gov/about/what-is-a-health-center/index.html. Accessed 5 Nov 2020.

  2. Markus A, Sharac J, Tolbert J, Rosenbaum S, Zur J. Community health centers’ experiences in a more mature ACA market. 2018; August. http://files.kff.org/attachment/Issue-Brief-Community-Health-Centers-Experiences-in-a-More-Mature-ACA-Market.

  3. NACHC. Staffing the safety net: building the primary care workforce at America’s health centers. 2016. http://www.nachc.org/wp-content/uploads/2015/10/NACHC_Workforce_Report_2016.pdf. Accessed 3 Aug 2018.

  4. Jennings AA, Foley T, Mchugh S, Browne JP, Bradley CP. “Working away in that Grey Area..” A qualitative exploration of the challenges general practitioners experience when managing behavioural and psychological symptoms of dementia. Age Ageing. 2018;47:295–303.

  5. Health Resources and Services Administration. HRSA awards more than $94 million to strengthen and grow the health care workforce. 2017. https://www.hrsa.gov/about/news/press-releases/2015-07-14-health-workforce.html. Accessed 8 May 2020.

  6. U.S. Department of Health & Human Services. HHS awards $20 million to 27 organizations to increase the rural workforce through the creation of new rural residency programs. 2019. https://www.hhs.gov/about/news/2019/07/18/hhs-awards-20-million-to-27-organizations-to-increase-rural-workforce.html%0A. Accessed 8 May 2020.

  7. Knight K, Miller C, Talley R, Yastic M, Mccolgan K, Proser M, et al. Health centers’ contributions to training tomorrow’s physicians. Case Studies of FQHC-Based Residency Programs and Policy Recommendations for the Implementation of the Teaching Health Centers Program. 2010. http://www.nachc.org/wp-content/uploads/2015/06/THCReport.pdf. Accessed 7 Sep 2018.

  8. Chen C, Chen F, Mullan F. Teaching health centers: a new paradigm in graduate medical education. Acad Med. 2012;87(12):1752–6. https://doi.org/10.1097/ACM.0b013e3182720f4d.

  9. Flinter M, Bamrick K. Training the next generation: residency and fellowship programs for nurse practitioners in community health centers. Middletown: Community Health Center, Incorporated; 2017. https://www.weitzmaninstitute.org/NPResidencyBook

  10. Health Resources & Services Administration. Advanced nursing education nurse practitioner residency (ANE-NPR) program: award table. 2019. https://bhw.hrsa.gov/funding-opportunities/ane-npr. Accessed 8 May 2020.

  11. Garvin J. Bill would extend funding for dental public health programs: American Dental Association; 2019. https://www.ada.org/en/publications/ada-news/2019-archive/february/bill-would-extend-funding-for-dental-public-health-programs. Accessed 8 May 2020

  12. Substance Abuse and Mental Health Services Administration. Certified community behavioral health clinic expansion grants. 2020. https://www.samhsa.gov/grants/grant-announcements/sm-20-012. Accessed 8 May 2020.

  13. Chapman SA, Blash LK. New roles for medical assistants in innovative primary care practices. Health Serv Res. 2017;52(Suppl 1):383–406. https://doi.org/10.1111/1475-6773.12602.

  14. Thomas T, Seifert P, Joyner JC. Registered nurses leading innovative changes. Online J Issues Nurs. 2016;21. https://doi.org/10.3912/OJIN.VOL21NO03MAN03.

  15. Hartzler AL, Tuzzio L, Hsu C, Wagner EH. Roles and functions of community health workers in primary care. Ann Fam Med. 2018;16(3):240–5. https://doi.org/10.1370/AFM.2208.

  16. Langelier M, Surdu S, Rodat C. Survey of federally qualified health centers to understand participation with dental residency programs and student externship rotations. Rensselaer; 2016. http://www.oralhealthworkforce.org/wp-content/uploads/2016/12/OWHRC_FQHCs_Dental_Residency_Programs_and_Student_Externship_Rotations_2016.pdf. Accessed 22 Apr 2021.

  17. Christner JG, Dallaghan GB, Briscoe G, Casey P, Fincher RME, Manfred LM, et al. The community preceptor crisis: recruiting and retaining community-based faculty to teach medical students—a shared perspective from the alliance for clinical education. Teach Learn Med. 2016;28(3):329–36. https://doi.org/10.1080/10401334.2016.1152899.

  18. Morris CG, Chen FM. Training residents in community health centers: facilitators and barriers. Ann Fam Med. 2009;7(6):488–94. https://doi.org/10.1370/afm.1041.

  19. Glassman P, Subar P. The California community clinic oral health capacity study. Report to the California Endowment; 2005.

  20. Sunshine JE, Morris CG, Keen M, Andrilla CH, Chen FM. Barriers to training family medicine residents in community health centers. Fam Med. 2010;42:248–54 http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=20373167.

  21. Rafferty AE, Jimmieson NL, Armenakis AA. Change readiness. J Manage. 2013;39(1):110–35. https://doi.org/10.1177/0149206312457417.

  22. Hameed MA, Counsell S, Swift S. A meta-analysis of relationships between organizational characteristics and IT innovation adoption in organizations. Inf Manag. 2012;49(5):218–32. https://doi.org/10.1016/J.IM.2012.05.002.

  23. Fixsen D, Naoom S, Blase K, Friedman R, Wallace F. Implementation research: a synthesis of the literature. 2005. https://nirn.fpg.unc.edu/sites/nirn.fpg.unc.edu/files/resources/NIRN-MonographFull-01-2005.pdf.

  24. Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implement Sci. 2014;9(1):7. https://doi.org/10.1186/1748-5908-9-7.

  25. Allen JD, Towne SJD, Maxwell AE, Dimartino L, Leyva B, Bowen DJ, et al. Measures of organizational characteristics associated with adoption and/or implementation of innovations: a systematic review. BMC Health Serv Res. 2017;17(1):591. https://doi.org/10.1186/s12913-017-2459-x.

  26. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4:1–9.

  27. Holt DT, Helfrich CD, Hall CG, Weiner BJ. Are you ready? How health professionals can comprehensively conceptualize readiness for change. J Gen Intern Med. 2010;25(SUPPL. 1):50–5. https://doi.org/10.1007/s11606-009-1112-8.

  28. Weiner BJ, Amick H, Lee S-YD. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;65(4):379–436. https://doi.org/10.1177/1077558708317802.

  29. Rycroft-Malone J. The PARIHS framework--a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304. https://doi.org/10.1097/00001786-200410000-00002.

  30. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Heal Prof. 2006;26(1):13–24. https://doi.org/10.1002/chp.47.

  31. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50. https://doi.org/10.1186/1748-5908-4-50.

  32. Lillevang G, Bugge L, Beck H, Joost-Rethans J, Ringsted C. Evaluation of a national process of reforming curricula in postgraduate medical education. Med Teach. 2009;31(6):e260–6. https://doi.org/10.1080/01421590802637966.

  33. Jippes E, Van Luijk SJ, Pols J, Achterkamp MC, Brand PLP, Van Engelen JML. Facilitators and barriers to a nationwide implementation of competency-based postgraduate medical curricula: a qualitative study. Med Teach. 2012;34(8):e589–602. https://doi.org/10.3109/0142159X.2012.670325.

  34. Bank L, Jippes M, Van Rossum TR, Den Rooyen C, Scherpbier AJJA, Scheele F. How clinical teaching teams deal with educational change: “We just do it”. BMC Med Educ. 2019;19:1–8.

  35. Drzensky F, Egold N, van Dick R. Ready for a change? A longitudinal study of antecedents, consequences and contingencies of readiness for change. J Chang Manag. 2012;12(1):95–111. https://doi.org/10.1080/14697017.2011.652377.

  36. Miake-Lye IM, Delevan DM, Ganz DA, Mittman BS, Finley EP. Unpacking organizational readiness for change: an updated systematic review and content analysis of assessments. BMC Health Serv Res. 2020;20(1):106. https://doi.org/10.1186/s12913-020-4926-z.

  37. Helfrich CD, Li Y-F, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the promoting action on research in health services (PARIHS) framework. Implement Sci. 2009;4(1):38. https://doi.org/10.1186/1748-5908-4-38.

  38. Gagnon MP, Attieh R, Ghandour EK, Légaré F, Ouimet M, Estabrooks CA, et al. A systematic review of instruments to assess organizational readiness for knowledge translation in health care. PLoS One. 2014;9(12):e114338. https://doi.org/10.1371/journal.pone.0114338.

  39. Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. J Subst Abuse Treat. 2002;22(4):197–209. https://doi.org/10.1016/S0740-5472(02)00233-7.

  40. Nuño-Solinís R, Fernández-Cano P, Mira-Solves JJ, Toro-Polanco N, Carlos Contel J, Guilabert Mora M, et al. Development of an instrument for the assessment of chronic care models. Gac Sanit. 2013;27(2):128–34. https://doi.org/10.1016/j.gaceta.2012.05.012.

  41. Malau-Aduli BS, Zimitat C, Malau-Aduli AEO. Quality assured assessment processes: evaluating staff response to change. High Educ Manag Policy. 2011;23(1):1–24. https://doi.org/10.1787/hemp-23-5kgglbdlm4zw.

  42. Jippes M, Driessen EW, Broers NJ, Majoor GD, Gijselaers WH, van der Vleuten CPM. A medical school’s organizational readiness for curriculum change (MORC): development and validation of a questionnaire. Acad Med. 2013;88(9):1346–56. https://doi.org/10.1097/ACM.0b013e31829f0869.

  43. Bank L, Jippes M, van Luijk S, den Rooyen C, Scherpbier A, Scheele F. Specialty training’s organizational readiness for curriculum change (STORC): development of a questionnaire in a Delphi study. BMC Med Educ. 2015;15(1):127. https://doi.org/10.1186/s12909-015-0408-0.

  44. Bank L, Jippes M, Leppink J, Scherpbier AJ, den Rooyen C, van Luijk SJ, et al. Specialty training’s organizational readiness for curriculum change (STORC): validation of a questionnaire. Adv Med Educ Pract. 2018;9:75–83. https://doi.org/10.2147/AMEP.S146018.

  45. Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prev Med. 2013;45(2):237–43. https://doi.org/10.1016/j.amepre.2013.03.010.

  46. Weiner BJ, Amick H, Lee S-YD. Review: conceptualization and measurement of organizational readiness for change. Med Care Res Rev. 2008;65(4):379–436. https://doi.org/10.1177/1077558708317802.

  47. Pettigrew AM, Woodman RW, Cameron KS. Studying organizational change and development: challenges for future research. Acad Manag J. 2001;44(4):697–713. https://doi.org/10.2307/3069411.

  48. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50. https://doi.org/10.1016/j.amepre.2012.05.024.

  49. Kirk MA, Kelley C, Yankey N, Birken SA, Abadie B, Damschroder L. A systematic review of the use of the consolidated framework for implementation research. Implement Sci. 2016;11(1):72. https://doi.org/10.1186/s13012-016-0437-z.

  50. Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283:112461. https://doi.org/10.1016/j.psychres.2019.06.036.

  51. Sanders KA, Wolcott MD, McLaughlin JE, D’Ostroph A, Shea CM, Pinelli NR. Organizational readiness for change: preceptor perceptions regarding early immersion of student pharmacists in health-system practice. Res Soc Adm Pharm. 2017;13(5):1028–35. https://doi.org/10.1016/j.sapharm.2017.03.004.

  52. Blackman D, O’Flynn J, Ugyel L. A diagnostic tool for assessing organisational readiness for complex change. In: Australian and New Zealand Academy of Management (ANZAM) Conference; 2013. https://doi.org/10.4225/50/55821631B89F5.

  53. Austin MJ, Claassen J. Implementing evidence-based practice in human service organizations: preliminary lessons from the frontlines. J Evid Based Soc Work. 2008;5(1-2):271–93. https://doi.org/10.1300/J394v05n01_10.

  54. Serhal E, Arena A, Sockalingam S, Mohri L, Crawford A. Adapting the consolidated framework for implementation research to create organizational readiness and implementation tools for Project ECHO. J Contin Educ Health Prof. 2018;38(2):145–51. https://doi.org/10.1097/CEH.0000000000000195.

  55. Lee CP, Shim JP. An exploratory study of radio frequency identification (RFID) adoption in the healthcare industry. Eur J Inf Syst. 2007;16(6):712–24. https://doi.org/10.1057/palgrave.ejis.3000716.

  56. Stamatakis KA, McQueen A, Filler C, Boland E, Dreisinger M, Brownson RC, et al. Measurement properties of a novel survey to assess stages of organizational readiness for evidence-based interventions in community chronic disease prevention settings. Implement Sci. 2012;7(1):65. https://doi.org/10.1186/1748-5908-7-65.

  57. DeVellis RF. Scale development: theory and applications. Thousand Oaks: SAGE Publications; 2011.

  58. Kline P. Handbook of psychological testing. 2nd ed; 2013. https://doi.org/10.4324/9781315812274.

  59. Boateng GO, Neilands TB, Frongillo EA, Melgar-Quiñonez HR, Young SL. Best practices for developing and validating scales for health, social, and behavioral research: a primer. Front Public Health. 2018;6:149. https://doi.org/10.3389/fpubh.2018.00149.

  60. Hsu C, Sandford B. The Delphi technique: making sense of consensus. Pract Assess Res Eval. 2007;12(4):1–8.

  61. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud. 2001;38(2):195–200. https://doi.org/10.1016/S0020-7489(00)00044-4.

  62. Hasson F, Keeney S. Enhancing rigour in the Delphi technique research. Technol Forecast Soc Change. 2011;78(9):1695–704. https://doi.org/10.1016/j.techfore.2011.04.005.

  63. Lissitz RW, Green SB. Effect of the number of scale points on reliability: a Monte Carlo approach. J Appl Psychol. 1975;60(1):10–3. https://doi.org/10.1037/h0076268.

  64. Urbina S. Essentials of psychological testing. 2nd ed; 2014.

  65. Dancey CP, Reidy J. Statistics without maths for psychology: using SPSS for Windows; 2014.

  66. Kaiser HF. A note on Guttman’s lower bound for the number of common factors. Br J Stat Psychol. 1961;14(1):1–2. https://doi.org/10.1111/j.2044-8317.1961.tb00061.x.

  67. Cattell RB. The scree test for the number of factors. Multivariate Behav Res. 1966;1(2):245–76. https://doi.org/10.1207/s15327906mbr0102_10.

  68. Cortina JM. What is coefficient alpha? An examination of theory and applications. J Appl Psychol. 1993;78(1):98–104. https://doi.org/10.1037/0021-9010.78.1.98.

  69. Clark LA, Watson D. Constructing validity: basic issues in objective scale development. Psychol Assess. 1995;7(3):309–19. https://doi.org/10.1037/1040-3590.7.3.309.

  70. IBM Corp. IBM SPSS Statistics for Windows, version 20.0. Armonk: IBM Corp.; 2011.

  71. Rummel RJ. Applied factor analysis; 1970.

  72. Ford JK, MacCallum RC, Tait M. The application of exploratory factor analysis in applied psychology: a critical review and analysis. Pers Psychol. 1986;39(2):291–314. https://doi.org/10.1111/j.1744-6570.1986.tb00583.x.

  73. Guadagnoli E, Velicer WF. Relation of sample size to the stability of component patterns. Psychol Bull. 1988;103(2):265–75. https://doi.org/10.1037/0033-2909.103.2.265.

  74. Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis. 7th ed; 2014.

  75. Price JL. Handbook of organizational measurement. Int J Manpow. 1997;18(4/5/6):305–558. https://doi.org/10.1108/01437729710182260.

  76. Clinton-McHarg T, Yoong SL, Tzelepis F, Regan T, Fielding A, Skelton E, et al. Psychometric properties of implementation measures for public health and community settings and mapping of constructs against the consolidated framework for implementation research: a systematic review. Implement Sci. 2016;11(1):148. https://doi.org/10.1186/s13012-016-0512-5.

  77. Domlyn AM, Wandersman A. Community coalition readiness for implementing something new: using a Delphi methodology. J Community Psychol. 2019;47(4):882–97. https://doi.org/10.1002/jcop.22161.

  78. Lewis CC, Stanick CF, Martinez RG, Weiner BJ, Kim M, Barwick M, et al. The Society for Implementation Research Collaboration instrument review project: a methodology to promote rigorous evaluation. Implement Sci. 2015;10(1):2. https://doi.org/10.1186/s13012-014-0193-x.

  79. Rabin BA, Purcell P, Naveed S, Moser RP, Henton MD, Proctor EK, et al. Advancing the application, quality and harmonization of implementation science measures. Implement Sci. 2012;7(1):119. https://doi.org/10.1186/1748-5908-7-119.

  80. Westen D, Rosenthal R. Quantifying construct validity: two simple measures. J Pers Soc Psychol. 2003;84(3):608–18. https://doi.org/10.1037/0022-3514.84.3.608.

  81. Bouckenooghe D, Devos G, Van Den Broeck H. Organizational change questionnaire-climate of change, processes, and readiness: development of a new instrument. J Psychol Interdiscip Appl. 2009;143(6):559–99. https://doi.org/10.1080/00223980903218216.

  82. Holt DT, Armenakis AA, Harris SG, Feild HS. Research in organizational change and development. Res Organ Chang Dev. 2010;16:ii. https://doi.org/10.1108/S0897-3016(2010)0000018015.

  83. Hinkin TR. A review of scale development practices in the study of organizations. J Manage. 1995;21(5):967–88. https://doi.org/10.1177/014920639502100509.

  84. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):139. https://doi.org/10.1186/1748-5908-8-139.

  85. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci. 2015;10(1):21. https://doi.org/10.1186/s13012-015-0209-1.

  86. Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, et al. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med. 2013;46(1):81–95. https://doi.org/10.1007/s12160-013-9486-6.

  87. Health Resources and Services Administration. State and Regional Primary Care Association Cooperative Agreements Workforce Funding Overview. 2020. https://bphc.hrsa.gov/program-opportunities/pca/workforce-funding-overview. Accessed 15 Oct 2020.

Acknowledgements

The authors thank all experts and participants for their valuable contributions to the successful completion of this study.

Funding

This project was funded by the Health Resources and Services Administration (HRSA), an agency of the U.S. Department of Health and Human Services. The views presented here are those of the authors and not necessarily those of HRSA, its directors, officers, or staff.

Author information

Contributions

IZ and MF conceived the study. IZ wrote the study protocol and selected the theories and frameworks to guide the development of the survey. IZ, AS, and NK were responsible for all aspects of the study including literature reviews, definitions, focus groups, initial survey item pool, modified Delphi rounds, and pilot-testing of the survey. KB and MF reviewed the study protocol and the focus group and Delphi questionnaires. IZ conducted the statistical analyses, wrote the first draft, and finalized the manuscript. All authors reviewed the draft for important intellectual content, approved the manuscript, and agreed to be accountable for all aspects of the work.

Corresponding author

Correspondence to Ianita Zlateva.

Ethics declarations

Ethics approval and consent to participate

This research was performed in accordance with the Declaration of Helsinki. Research ethics approval was obtained from the Institutional Review Board at the Community Health Center, Inc. (Protocol ID: 1133). Written informed consent was obtained from all participants in the focus groups and the Delphi rounds. For participants in the pilot-testing, informed consent was implied by completion of the online survey after reading the study information sheet.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1.

Study’s conceptual framework.

Additional file 2.

Survey items that were eliminated after factor analysis.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Zlateva, I., Schiessl, A., Khalid, N. et al. Development and validation of the Readiness to Train Assessment Tool (RTAT). BMC Health Serv Res 21, 396 (2021). https://doi.org/10.1186/s12913-021-06406-3

Keywords