
Ranked determinants of telemedicine diabetic retinopathy screening performance in the United States primary care safety-net setting: an exploratory CART analysis



Background

Diabetic retinopathy (DR) is a leading cause of blindness worldwide, despite easy detection and effective treatment. Annual screening rates in the USA remain low, especially among disadvantaged populations; telemedicine-based DR screening (TDRS) during routine primary care has been shown to improve them. Screening rates from such programs have varied, however, pointing to inconsistent implementation and unaddressed barriers. This work seeks to identify and prioritize modifiable barriers for targeted intervention.


Methods

In this final phase of an exploratory mixed-methods study, we developed, validated, and administered a 62-item survey to multilevel stakeholders involved with TDRS in primary care safety-net clinics. Survey items were aligned with previously identified determinants of clinic-level screening and mapped to the Consolidated Framework for Implementation Research (CFIR). Classification and Regression Tree (CART) analyses were used to identify and rank independent variables predictive of individual-level TDRS screening performance.


Results

Overall, 133 of the 341 invited professionals responded (39%), representing 20 safety-net clinics across 6 clinical systems. Respondents were predominantly non-Hispanic White (77%), female (94%), and between 31 and 65 years of age (79%). Satisfaction with TDRS was high despite low self-reported screening rates. The most important screening determinants were: provider reinforcement of TDRS importance; explicit instructions by providers to staff; effective reminders; standing orders; high relative priority among routine diabetic measures; established TDRS workflows; performance feedback; effective TDRS champions; and leadership support.


Conclusions

In this survey of stakeholders involved with TDRS in safety-net clinics, screening was low despite high satisfaction with the intervention. The best predictors of screening performance mapped to the CFIR constructs Leadership Engagement, Compatibility, Goals & Feedback, Relative Priority, Champions, and Available Resources. These findings facilitate the prioritization of implementation strategies targeting determinants of TDRS performance, potentially increasing its public health impact.



Background

Diabetic retinopathy (DR) remains the leading cause of blindness among working-age adults in the USA [1], despite its easy detection and the widespread availability of effective treatment. The American Diabetes Association recommends annual DR screening for all diabetics — a service traditionally delivered through in-person specialist exam — but screening rates remain low [2], especially among disadvantaged populations disproportionately served by safety-net clinics such as Federally Qualified Health Centers (FQHC) [3, 4]. While single-purpose specialist visits for screening are rife with known barriers to access, most persons with diagnosed diabetes visit their primary care provider at least once per year [5]. Telemedicine-based DR screening (TDRS) embedded in the primary care setting and delivered as an important part of routine diabetes care — a modality proven to increase DR screening rates [6, 7] — can remove the known barriers to compliance [8,9,10,11] and increase early detection of vision-threatening pathologies [12], all while providing cost savings, especially for low-income populations and rural patients with high transportation costs [13].

While the extent of TDRS adoption among FQHCs in the US is unknown, the implementation, screening performance, and sustainability of primary care-based TDRS programs so far published have been mixed [14,15,16,17,18,19]. Yet few studies have investigated the determinants of program screening rates, fewer have correlated perceived barriers with measures of effectiveness, and fewer still have rigorously investigated how to systematically improve TDRS implementation.

Implementation strategies are “methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice,” like TDRS [20]. The knowledge base for implementation strategies is growing and suggests that multilevel, multicomponent implementation strategies that target context-specific barriers and facilitators [21, 22] to intervention adoption, delivery, and sustainment may have the greatest impact on implementation success [23, 24].

Our previous work described a large FQHC-based TDRS network’s creation, policies, screening performance, and sustainment [6], and, using key informant interviews, reported the barriers and facilitators of program implementation perceived by multi-level professionals engaged in TDRS delivery, emphasizing those that distinguish higher- from lower-screening programs [25]. However, both our work and the literature so far lack a quantitative assessment of the relative importance of specific TDRS determinants in the safety-net setting.

Through this final phase of our sequential exploratory mixed-methods research approach, we sought (1) to quantify personnel and program characteristics, perceptions of TDRS delivery, and expectations of potential implementation strategies among multilevel stakeholders in the primary care safety-net setting; (2) to reconcile these findings with the implementation determinants identified in earlier phases of our research; (3) to organize our findings within an actionable theoretical framework; and (4) to prioritize them as a foundation for future implementation mapping [26]. This work is therefore a valuable contribution to our understanding of the interplay among real-world conditions, intervention characteristics, and implementation strategies for TDRS delivery in the primary care safety-net setting.


Methods

The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist guided this report (Additional file 1). Because this manuscript reports only the quantitative results of a sequential exploratory study, we believe it is better served by a reporting guideline for observational studies, such as STROBE, than by one intended for mixed methods reports. For example, though not explicitly discussed in the manuscript, we did consider Creamer’s criteria of interpretive comprehensiveness critical to the synthesis of multiphase findings in the Discussion section. Likewise, we found assessments of “the quality of mixed methods studies in health services research”, covering quantitative components, integration, and insights, to be helpful.

We developed a novel survey instrument to quantify perceptions regarding program barriers and facilitators at multiple professional strata within our TDRS network (for more information on the network’s implementation and characteristics, please see our earlier work [6]). Based on results from key informant interviews obtained during our study’s qualitative phase [25] and a survey of relevant literature, items tapping specific characteristics of the intervention, practice setting, and population of interest were generated to measure stakeholder perceptions of TDRS program implementation, as well as expectations regarding the potential effects of proposed implementation strategies.

To assess face and content validity prior to instrument distribution, cognitive interviews were performed with an ophthalmologist, an ophthalmic nurse, a telemedicine specialist, two implementation researchers, and three lay reviewers whose feedback helped refine response options; the relevance and quality of each item; and the overall clarity, organization, and scope of the instrument. Conventional pretesting, that is, rehearsal piloting in the manner and mode intended for the final survey administration, was also used to identify technical defects, frequency distributions, average time to completion, and other aspects of the survey’s administration and reporting.

The final survey instrument included 62 items addressing contextual factors serving as potential barriers and facilitators to the implementation of TDRS, personal experience with and opinions regarding TDRS, and finally, demographic and clinical practice setting characteristics using a combination of multiple choice, Likert scales, and multimodal scale response options (Additional file 2). The target time to completion for respondents was 10 min.

The importance of coherence among determinants, interventions, and theory is well established [27]. In their extensive review of models used to study mHealth adoption, Jacob et al. [28] noted that the Consolidated Framework for Implementation Research (CFIR) — a contextual framework of theoretical domains and constructs associated with effective implementation of clinical innovations [29] — was a more comprehensive tool than the more widely used Technology Acceptance Model (TAM), the diffusion of innovation theory (DOI), and the unified theory of acceptance and use of technology (UTAUT) models, particularly in the areas of monetary factors, user experience, organizational factors, workflow characteristics, and policy and regulation factors. Citing an emerging consensus, they go on to endorse the use of frameworks that, like the CFIR, can accommodate the impact of barriers to implementation at the levels of individual behavior, the complexity of health care institutions and practices, and the policy and regulatory environments in which healthcare is delivered.

To increase construct validity and to orient future selection of implementation strategies, each survey item was mapped to one or more relevant constructs of the meta-theoretical CFIR. Two implementation researchers (ABC and SLW) independently mapped instrument items to CFIR constructs based on the descriptions and rationale provided by the CFIR Research Team at the Center for Clinical Management Research. Mappings were reconciled through an iterative process of discussion and expert consultation (CRS), resulting in a final consensus crosswalk between survey items and the CFIR.

For example, the CFIR Inner Setting construct Goals & Feedback — which reflects “The degree to which goals are clearly communicated, acted upon, and fed back to staff, and alignment of that feedback with goals” — was related to two questionnaire items, both of which posed hypotheticals of whether respondents believed they would be more likely to order or perform TDRS if they were given more data as feedback. A full description of the CFIR crosswalk is included in the Supplemental Methods section of Additional file 3.

After receiving ethical approval from the university’s Institutional Review Board (IRB#44107) and participatory commitment from the leadership of network clinics, we built the validated survey instrument using QualtricsXM (Qualtrics, Provo, UT). The sampling frame for this survey included all of our network’s clinical employees directly involved with their clinic’s TDRS program. Participation was voluntary, and completion was incentivized (10.00 USD). A disclosure statement prefaced the instrument, followed by screening items to identify respondents’ profession and clinical role and to confirm TDRS involvement. All respondents were presented with a core set of 41 items; the 21 remaining items varied based on the respondent’s reported clinical role: provider or staff.

In October 2019, survey links were distributed by clinic- or system-level administrators to all providers and staff involved in their TDRS program. Eligible staff were primarily medical assistants performing tasks such as eligibility checks, charting, and the performance of the TDRS exam itself. Up to three reminders were sent by clinic directors over the four-week collection window, which closed in November 2019.

Statistical analysis

Analyses were first conducted to describe the distribution of responses. Bivariate associations with professional strata were investigated using chi-square tests of independence.
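The chi-square test of independence used for these bivariate comparisons can be sketched in a few lines of pure Python; the 2x2 counts below are illustrative only, not the study's data.

```python
# Chi-square test of independence for an r x c contingency table.
# Illustrative counts (NOT the study's data): rows could represent
# professional strata, columns endorsement of a survey item.

def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

observed = [[20, 10],
            [10, 20]]
stat = chi_square_statistic(observed)
print(round(stat, 3))  # 6.667 on (2-1)*(2-1) = 1 degree of freedom
```

The statistic is then compared against the chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom to obtain a p-value; in practice this is done with a statistical package rather than by hand.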

Using screening performance (i.e., “Of all your telemedicine diabetic eye screening-eligible patients during the past 12 months, what percentage did you screen?”) as the categorical outcome of interest, we looked for bivariate associations with independent variables by dividing respondents into two groups: lower screening (those selecting “0-25%” from the response options) and higher screening (those selecting “25-50%”, “51-75%”, or “more than 75%”). “Unsure” responses (18) were treated as missing and excluded from the analysis. This lower versus higher dichotomy was chosen based on the network’s low overall screening rates, which were well below the national average and recognized targets [2]. Logistic regression was performed to identify associations between independent variables (e.g., professional strata, the presence of an established TDRS workflow, etc.) and membership in the higher screening group, and reported as estimated odds ratios (with 95% confidence intervals). Statistical significance was determined as p < .05 (two-tailed) for all tests.
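As a concrete illustration of the reported odds ratios, the estimate and Wald 95% confidence interval for a single binary predictor can be computed directly from a 2x2 table; the counts below are hypothetical, not taken from the study.

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = higher screeners with the factor,  b = higher screeners without,
        c = lower screeners with the factor,   d = lower screeners without.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for a predictor such as "established TDRS workflow":
or_, lo, hi = odds_ratio_wald(a=30, b=10, c=15, d=25)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 5.00 (95% CI 1.91-13.06)
```

A full logistic regression generalizes this to adjust for several predictors at once, but for a single dichotomous variable the two approaches give the same unadjusted estimate.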

Targeting implementation strategies in limited-resource settings requires the identification of determinants involved and their prioritization by degree of influence on program performance [30]. To explore and visualize the variable interactions associated with screening performance, we performed exploratory Classification and Regression Tree (CART) analyses. CART analysis is an atheoretical non-parametric exploratory technique. Through a process of recursive partitioning, CART analysis accounts for higher-order interactions among independent variables; accommodates small sample sizes [31], multicollinearity [32], and incomplete datasets [33]; and produces both classification trees and variable importance rankings useful for prioritizing targets of intervention without implying causal relations [34]. The generated classification tree consists of parent and binary child nodes iteratively and recursively split upon the independent variable that best reduces the variability in the dependent variable. Each subsequent split beyond the root node reflects higher-order interactions. CART analysis also produces a variable importance ranking (VIR) that reflects the relative importance of each independent variable to the construction of the final tree (calculated as the change in model-predicted values per change in the independent variable’s value), regardless of whether the variable is used to split a parent node. The VIR is therefore a powerful tool for measuring and comparing the overall influence of predictor variables on the outcome of interest, and provides a more complete picture than the decision tree alone can convey [35] by accounting for variable masking [36]. In the development of implementation strategies, such a ranking of determinants by strength of association with intervention performance may be of greater value than the final decision tree itself [37, 38].
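The node-splitting step at the heart of recursive partitioning can be sketched as follows. This toy implementation, with made-up binary predictors and outcome labels, illustrates how a Gini-based split is chosen; it is a conceptual sketch, not the SPSS procedure actually used in the study.

```python
def gini(labels):
    """Gini impurity of a node: 1 - sum over classes of p_k^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(X, y, feature_names):
    """Pick the binary feature whose split most reduces weighted impurity."""
    parent = gini(y)
    best = (None, 0.0)  # (feature name, impurity decrease)
    for j, name in enumerate(feature_names):
        left  = [yi for xi, yi in zip(X, y) if xi[j] == 1]
        right = [yi for xi, yi in zip(X, y) if xi[j] == 0]
        if not left or not right:
            continue  # degenerate split: all cases on one side
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if parent - weighted > best[1]:
            best = (name, parent - weighted)
    return best

# Toy data: columns = (established workflow, standing orders);
# outcome = 1 for a "higher screener".
X = [(1, 1), (1, 1), (1, 0), (1, 0), (0, 1), (0, 0), (0, 0), (0, 0)]
y = [ 1,      1,      1,      0,      0,      0,      0,      0]
feature, decrease = best_split(X, y, ["workflow", "standing_orders"])
print(feature, round(decrease, 3))  # workflow 0.281
```

A full CART repeats this search recursively within each child node, and the importance value of a variable accumulates its impurity reductions (including as a surrogate) across the whole tree.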

To better visualize the multifaceted relationships among variables representing modifiable determinants with the potential to influence TDRS performance (Table S1), CART analysis was employed as an exploratory method [39]. Although CARTs are typically used to probe large data sets, the application here provides a preliminary strategy to (1) visualize the variable interactions associated with screening performance, and to (2) assess the relative importance of each variable to screening performance. The CART analyses were restricted to those variables considered to represent modifiable determinants, i.e., those amenable to change by targeted implementation strategies. The primary CART analysis was limited to only those items delivered to both providers and staff. Secondary CART analysis forced the first breakpoint by professional stratum, and included variables unique to each professional role (Table S1). We utilized the Gini impurity function to determine optimal splits, and, because this exploratory method sought to “rule in” variables, trees were pruned according to the maximum difference in risk, defined as 0 standard errors. Respondents with missing values (n = 8) were included in the CART using surrogate variables [40]. CART analyses were performed using SPSS (IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp).


Results

The survey link was sent to 341 employees of 20 clinics representing 6 safety-net clinical systems, of whom 133 (39%) submitted responses — 36 providers and 97 staff. Respondents were predominantly non-Hispanic White (77%), female (94%), and between 31 and 65 years of age (79%). Staff were distinguished from providers by differences in gender (p = .002) and ethnicity (p = .038); by length of time involved with TDRS (p = .030); and by practical knowledge of their clinic’s TDRS program (operationalized by identifying the type of camera used, p = .004; Table 1).

Table 1 Respondent demographic, professional, and intervention-specific characteristics by professional stratum

When asked to estimate their personal screening performance over the preceding 12 months, more than one third of respondents (39%) reported screening ≤25% of their eligible patients, and the majority (57%) screened fewer than half of those eligible. While the majority of respondents ordered/performed TDRS at least once per week (59%), nearly a quarter ordered/performed less than one screening per month (23%). Paradoxically, 95% of respondents reported overall satisfaction with TDRS (Table 2).

Table 2 Responses to key measures of TDRS utilization by professional stratum

Further, when comparing respondents who reported a screening rate (i.e., did not select “unsure” from the item response options), providers and staff differed significantly (p = .008). The majority of providers (53%) reported screening > 50% of eligible patients, while only 27% of staff reported doing so. Similarly, when considering the broader question of whether their patients were being screened at all (through in-clinic TDRS, or by outside eyecare specialists), providers and staff again had significantly different perceptions (p = .003). Most providers (69%) believed that the majority (> 50%) of their patients were being screened, compared to 35% of staff believing so.

Figure S1 shows survey response distributions and survey item alignment with the CFIR domains Process, Intervention Characteristics, and Characteristics of Individuals. Figures. S2 and S3 show the survey response distributions and survey item alignments for constructs in the Inner Setting domain (see Additional file 4).

Staff were generally comfortable performing the intervention, viewed its characteristics and time requirements favorably, were satisfied with the provided training, and reported high levels of leadership direction to perform the intervention.

Providers were more likely to be white and male and to have worked with TDRS longer, but were less likely to be familiar with the TDRS-specific equipment. Providers also perceived gradeability less favorably than other intervention characteristics.

Both professional strata perceived intervention champions as effective catalysts for TDRS, though they were reportedly present in only a minority of clinics. Reminders were appreciated when present, and, along with established workflows and increased staffing, were considered very likely to improve future screening performance if implemented.

Associations with screening performance, the categorical outcome of interest, were assessed for all survey items common to both providers and staff, as well as those specific to providers or staff. The following items were associated with greater odds of screening: more than 2 years of experience with TDRS; at least monthly use; patient objection as the primary reason not to screen; standing orders; explicit positive instructions (staff only); staff autonomy to perform the intervention (staff only); and effective reminders, workflows, and champions. Running behind (providers only) and perception of low patient adherence to screening recommendations were associated with lower odds of screening (Table 3).

Table 3 Variables significantly associated with screening performance

The primary CART analysis produced a VIR for independent predictors of individual-level TDRS performance. Eight variables were considered important to the model (importance value ≥.01): effective alerts (.058); standing orders (.057); established workflows (.047); access to performance data (.040); effective champions (.039); access to comparative performance data (.030); encouragement from leadership (.020); and failure to screen due to running behind (.019). The corresponding classification tree (Fig. 1) identified five predictors whose interactions best distinguished lower from higher screeners: (1) established workflow, (2) running behind, (3) standing orders, (4) effective alerts, and (5) expected effect of performance feedback. “Established workflow” best split the full sample (114 cases) — those with an established TDRS workflow being more likely to screen. Those without an established workflow were best split by “running behind” as the reason for not screening, and then by the presence of effective alerts. For those with an established workflow, the presence of standing orders best distinguished higher screeners. In the absence of standing orders for TDRS, lower screeners were more likely to value the proposition of increased performance data. The pruned primary CART model’s accuracy for predicting a respondent’s screening performance was 77.2% (r = .228, SE = .039), 66.7% for those screening ≤25% of eligible patients, and 85.7% for those designated higher screeners (> 25% of eligible patients). Based on its risk estimate, we consider the model a good fit for the data.

Fig. 1
figure 1

Primary classification tree for modifiable determinants of TDRS performance. Abbreviations: TDRS and TS, telemedicine diabetic retinopathy screening

Secondary CART analysis, which included profession-specific variables and forced the first tree break by professional stratum, identified variables involving interprofessional communication as the most important predictors for both providers (.151) and staff (.076). Other variables important to the secondary model included providers’ perceptions of TDRS priority (.051), the effectiveness of champions (.034), the presence of alerts (.024), and providers’ explicit instructions not to perform TDRS (.011). The secondary model’s overall accuracy was lower than the primary model’s (69.3%; r = .307, SE = .043), better predicting higher screeners than lower (85.7% vs 49.0%, respectively).


Discussion

In this final phase of a sequential exploratory mixed-methods study, we developed, validated, and delivered a 62-item survey to multilevel stakeholders involved with TDRS delivery in primary care safety-net clinics. Survey items were aligned with implementation determinants and were mapped to the CFIR for construct validity, to enable cross-study comparisons, and to inform future implementation mapping. Logistic regression and exploratory CART analyses were used to identify the variables most strongly associated with individual-level screening performance.

While most TDRS studies have focused on patient satisfaction, we found that overall satisfaction of professionals involved with TDRS was high, despite low performance rates. Though acceptability is the most commonly assessed implementation outcome [41], the discrepancy noted here suggests that post-implementation acceptability of the intervention is insufficient to drive and sustain consistent use, i.e., penetration and sustainability [42]. This is consistent with our hypothesis of multiple interacting implementation determinants, and reinforces the importance of comprehensive multi-level program assessment [43].

Patient objection was the most cited (50%) reason for not screening eligible patients. Patients may object to TDRS for many reasons (lack of time, lack of trust, competing health problems, lack of symptoms, recent but undocumented screening, etc. [8, 9, 25, 44]), and because this variable was not further refined within the instrument, it was not included in CART analyses. From its correlation with higher screening performance, we interpreted the selection of “patient objection” to indicate the relative absence of other barriers. Other reasons cited, such as “short staffed” and “running behind”, confirmed our earlier findings and agreed with the conclusions of Ogunyemi et al., who cite staff shortages, disruptions, and diversions [45], and the qualitative results described by Liu et al., who cite time and resource constraints [46]. “Running behind” was also a critical predictor of lower screening performance in this study among those without an established workflow.

Synthesis of multiphase findings

Evidence for direct correlations between perceived barriers and intervention performance is critical to implementation planning [47] and strategy selection [26], yet lacking in the literature for TDRS. Addressing this gap, the previous qualitative phase of this study found associations between clinic-level TDRS performance and the six CFIR constructs Available Resources, Relative Priority, Leadership Engagement, Goals & Feedback, Engaging, and Champions [25].

Building on those findings, our current work corroborates, expands, and preliminarily prioritizes the list of candidate barriers and facilitators of TDRS performance in the safety-net setting. By using exploratory CART analyses to rank-order modifiable determinants, we have taken a significant step towards the development and prioritization of targeted implementation strategies [48] aimed at maximizing impact in a safety-net setting defined by its resource constraints. This is a novel approach for dissemination and implementation research, which we intend to further explore and develop through future studies. Figure 2 illustrates the synthesis of our qualitative and quantitative findings.

Fig. 2
figure 2

Convergence of determinants associated with TDRS performance upon aligned CFIR constructs. Abbreviations: CFIR, Consolidated Framework for Implementation Research; TDRS, telemedicine diabetic retinopathy screening; CART, Classification and Regression Tree; VIR, variable importance ranking. The Phase III variable ranking was determined from each variable’s VIR value in the primary or secondary CART model

In CART analyses, variables with high importance values are the drivers of the outcome measure. Since screening performance is the outcome of complex, interacting, multi-level factors, we should not expect to find a single variable with a predominant importance value in our dataset. VIR provides a ranking of ratio values based on the contribution of each predictor to the model. In our results, the importance values for several variables were very similar, indicating that these variables equally contributed to the model and that no single variable was the obvious driver of the categorical outcome measure.

The construct Leadership Engagement, which was strongly associated with clinic-level TDRS performance in our qualitative interview data [25], was reflected in some of the most important predictors of individual-level screening performance in the current study. In secondary CART analysis, provider-initiated interprofessional communication best predicted screening performance among providers and among staff, suggesting that, given their roles as clinical decision-makers and influencers, provider buy-in and reinforcement is a first-order priority in TDRS implementation and in improvement and sustainment plans. In a quality improvement study, Liu and colleagues used the NIATx framework to engage clinical leaders in participatory and iterative TDRS program improvement, which resulted in a sustained increase in DR screening rates [49]. Similarly, Ogunyemi et al. noted that, “support from high-level administration and leadership [ …] was instrumental in the most successful clinic implementations,” adding that such leadership engagement increased the likelihood of both initiating and troubleshooting TDRS among personnel involved [45].

Significantly, three of the six most important determinants of individual-level screening performance in our CART models (which were also significant in logistic regression) mapped to the CFIR construct Compatibility (effective reminders, standing orders, and established workflows). A fourth determinant that mapped to Compatibility, staff autonomy (i.e., “Are you allowed to perform TDRS in eligible patients without a verbal request from the provider or an order put in by the provider?”) — while not prominent in CART analyses — was significant in logistic regression. This cluster of determinants is an important addition to our understanding of TDRS program implementation and performance, as Compatibility was not emphasized by our prior qualitative data. The AAO underlines the importance of workflow adaptation and workflow metrics by which to monitor and improve TDRS integration [50], points echoed by Bouskill and colleagues in describing the “squeeze approach” for TDRS when implemented without adequate workflow redesign [51]. They identified several critical vulnerabilities within TDRS workflows, the first being breakdowns in the processes of identifying, recruiting, and handing off patients for screening, findings congruent with our Compatibility-mapped predictors. Liu et al., based on qualitative stakeholder interviews, noted similar barriers relating to Compatibility, and proposed strategies to streamline TDRS workflow processes, such as the adoption of effective electronic health record reminders [46]. To our knowledge, standing orders and staff autonomy have not elsewhere been elucidated as important factors in TDRS success, yet may be critical buoys of screening performance in the absence of consistent, explicit positive orders by providers.

Aspects of the construct Goals & Feedback were found to be predictors of clinic- and individual-level TDRS performance, especially relating to performance feedback and access to comparative TDRS performance data. This is an insight not apparent from the raw survey results (since neither of these survey items garnered more than 50% endorsement of expected positive effect by respondents) nor from the tests of independence, thus highlighting the value of CART techniques to identify higher-order interactions among independent variables (e.g., lower screeners with established workflows but without standing orders were more likely to respond that the provision of performance feedback would improve their screening performance).

Tracing back to the importance of the communication behavior of opinion leaders in Rogers’ DOI [52], the importance of intervention champions to successful evidence-based intervention (EBI) implementation has been well established [53]. Champions have been proven critical change agents for the primary care setting [54], and at least one study has demonstrated champions’ importance to telehealth services implementation and sustainment [55]. Our work is the first to establish evidence of champion effectiveness for teleophthalmology screening. Champions were qualitatively associated with intervention promotion, timely resource mobilization, and increased communication among professionals in our key informant interview data [25], and their importance to individual-level screening performance was further demonstrated here.

The CFIR constructs Relative Priority and Available Resources were significant in both our prior qualitative and current quantitative datasets. For providers, variables representing Relative Priority and Leadership Engagement were interactive (i.e., among providers who inconsistently or infrequently communicated the importance of TDRS to staff, those who valued DR screening equal to HbA1c measures were more likely to be higher screeners than those who valued DR screening less than HbA1c measures).

While our prior key informant interviews identified several determinants aligned with the construct Available Resources and associated with clinic-level screening performance, many of which were subsequently included in the survey instrument, only one was predictive of individual-level screening performance in our survey: "running behind". Similarly, Mamillapalli et al. found that the most commonly cited limitation of TDRS performance in a private practice setting was "availability of staff […] and extra time consumed to perform the eye exams" [56]. Indeed, limited resources have featured prominently in most reports on barriers to TDRS. Running behind, which mapped to both Available Resources and Relative Priority, reflects the natural consequence of what Bouskill et al. [51] described as "new burdens on already-strapped safety-net clinics." From their qualitative study of staff workarounds for TDRS in the safety-net setting, they concluded that "the additional needs identified by new screening processes, when not met through additional follow-up resources, leave frontline staff in the uncomfortable position of having to witness inequality and resource constraints without the ability to systematically address them." This critical conclusion connects downstream integration and performance barriers to upstream failures during the implementation process [48]. It also highlights the precarious circumstances into which EBIs like TDRS must be implemented, and to which they must be adapted through careful pre-implementation planning and resource allocation, if their potential patient benefits are to be realized.

Though associated with clinic-level TDRS performance in our earlier qualitative data (e.g., professional education, detailing, and awareness), the CFIR construct Engaging did not emerge from our survey as a significant predictor of individual-level screening performance. It is possible that Engaging was underrepresented in the survey.


Limitations

Because the response rate was limited and we lacked access to demographic information on nonrespondents, we were unable to assess the potential impacts of selection and participation biases. Because the study was cross-sectional and exploratory, we also could not determine causal relationships. Our reliance on self-reported screening performance as the dependent variable is a limitation, though the dimensional expansion from a clinic-level to an individual-level screening performance measure, coupled with the parsimonious convergence with our prior qualitative findings, bolsters our confidence. Residual confounding due to unmeasured variables is another potential limitation, though our sequential mixed-methods design, which allowed exploratory qualitative data to inform the survey's composition, and the mapping of the instrument to the CFIR mitigated this risk as far as possible. Finally, our dataset lacked sufficient power for model validation by confirmatory machine learning; while the CART models were stable in sensitivity analysis, the findings here must be considered exploratory. Despite these limitations, we have identified, contextualized, and preliminarily ranked by importance the modifiable determinants of TDRS performance in the primary care safety-net setting, which, upon confirmatory testing, can inform the development of targeted, evidence-based implementation strategies to increase screening rates.
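As an illustration of what a tree-stability check can look like in practice (a generic bootstrap sketch on invented data, not the authors' actual sensitivity analysis): refit the tree on resampled rows and tally how often each variable tops the importance ranking.

```python
# Generic bootstrap stability check for a CART-style variable importance
# ranking; simulated data, NOT the study's dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n, p = 300, 5
X = rng.integers(0, 2, size=(n, p)).astype(float)

# True signal is an interaction of features 0 and 1, plus 10% label noise;
# features 2-4 are pure noise.
signal = ((X[:, 0] == 1) & (X[:, 1] == 0)).astype(int)
flip = rng.random(n) < 0.1
y = np.where(flip, 1 - signal, signal)

# Refit on bootstrap resamples and record which feature ranks most important
top_counts = np.zeros(p)
for _ in range(200):
    idx = rng.integers(0, n, n)  # resample rows with replacement
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])
    top_counts[np.argmax(tree.feature_importances_)] += 1

# Fraction of resamples in which each feature was ranked most important;
# a stable model concentrates this mass on the true signal features.
print(top_counts / 200)
```

If the top of the ranking drifts across resamples, the importance ordering should be treated as tentative, which is the spirit of reporting these CART results as exploratory.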


Conclusions

In this survey of multilevel stakeholders involved with TDRS in safety-net clinics, post-implementation acceptability, measured as satisfaction with the intervention, was high even as overall screening performance lagged. Several variables were associated with higher TDRS performance, substantiating and expanding our prior insights. Together, our triangulated multiphase mixed-methods results emphasize barriers and facilitators aligned with the CFIR constructs Leadership Engagement, Compatibility, Goals & Feedback, Champions, Engaging, Relative Priority, and Available Resources as the key determinants of TDRS program screening performance.

Availability of data and materials

The dataset analyzed during the current study is available from the corresponding author on reasonable request.




Abbreviations

DR: Diabetic retinopathy
FQHC: Federally qualified health centers
TDRS: Telemedicine diabetic retinopathy screening
CFIR: Consolidated Framework for Implementation Research
CART: Classification and Regression Tree
VIR: Variable importance ranking
EBI: Evidence-based intervention
STROBE: Strengthening the Reporting of Observational Studies in Epidemiology
TAM: Technology Acceptance Model
DOI: Diffusion of innovation theory
UTAUT: Unified theory of acceptance and use of technology
IRB: Institutional review board

References

  1. Lee R, Wong TY, Sabanayagam C. Epidemiology of diabetic retinopathy, diabetic macular edema and related vision loss. Eye Vis. 2015;2:17.

  2. Comprehensive Diabetes Care. NCQA. Accessed 18 Mar 2020.

  3. Fathy C, Patel S, Sternberg P, Kohanim S. Disparities in adherence to screening guidelines for diabetic retinopathy in the United States: a comprehensive review and guide for future directions. Semin Ophthalmol. 2016;31:364–77.

  4. Lu Y, Serpas L, Genter P, Mehranbod C, Campa D, Ipp E. Disparities in diabetic retinopathy screening rates within minority populations: differences in reported screening rates among African American and Hispanic patients. Diabetes Care. 2016;39:e31–2.

  5. Gibson DM. Estimates of the percentage of US adults with diabetes who could be screened for diabetic retinopathy in primary care settings. JAMA Ophthalmol. 2019;137:440–4.

  6. de Carvalho AB, Ware SL, Lei F, Bush HM, Sprang R, Higgins EB. Implementation and sustainment of a statewide telemedicine diabetic retinopathy screening network for federally designated safety-net clinics. PLoS One. 2020;15:e0241767.

  7. Daskivich LP, Vasquez C, Martinez C, Tseng CH, Mangione CM. Implementation and evaluation of a large-scale teleretinal diabetic retinopathy screening program in the Los Angeles County Department of Health Services. JAMA Intern Med. 2017;177:642–9.

  8. Graham-Rowe E, Lorencatto F, Lawrenson JG, Burr JM, Grimshaw JM, Ivers NM, et al. Barriers to and enablers of diabetic retinopathy screening attendance: a systematic review of published and grey literature. Diabet Med. 2018;35:1308–19.

  9. Ramchandran RS, Yilmaz S, Greaux E, Dozier A. Patient perceived value of teleophthalmology in an urban, low income US population with diabetes. PLoS One. 2020;15:e0225300.

  10. Lu Y, Serpas L, Genter P, Anderson B, Campa D, Ipp E. Divergent perceptions of barriers to diabetic retinopathy screening among patients and care providers, Los Angeles, California, 2014-2015. Prev Chronic Dis. 2016;13:E140.

  11. Gu D, Agron S, May LN, Mirza RG, Bryar PJ. Nonmydriatic retinal diabetic screening in the primary care setting: assessing degree of retinopathy and incidence of nondiabetic ocular diagnoses. Telemed J E Health. 2020.

  12. Zimmer-Galler IE, Kimura AE, Gupta S. Diabetic retinopathy screening and the use of telemedicine. Curr Opin Ophthalmol. 2015;26:167–72.

  13. Avidor D, Loewenstein A, Waisbourd M, Nutman A. Cost-effectiveness of diabetic retinopathy screening programs using telemedicine: a systematic review. Cost Eff Resour Alloc. 2020;18:16.

  14. Mansberger SL, Gleitsmann K, Gardiner S, Sheppler C, Demirel S, Wooten K, et al. Comparing the effectiveness of telemedicine and traditional surveillance in providing diabetic retinopathy screening examinations: a randomized controlled trial. Telemed J E Health. 2013;19:942–8.

  15. Wilson C, Horton M, Cavallerano J, Aiello LM. Addition of primary care–based retinal imaging technology to an existing eye care professional referral program increased the rate of surveillance and treatment of diabetic retinopathy. Diabetes Care. 2005;28:318–22.

  16. Taylor CR, Merin LM, Salunga AM, Hepworth JT, Crutcher TD, O'Day DM, et al. Improving diabetic retinopathy screening ratios using telemedicine-based digital retinal imaging technology: the Vine Hill study. Diabetes Care. 2007;30:574–8.

  17. Conlin PR, Fisch BM, Cavallerano AA, Cavallerano JD, Bursell SE, Aiello LM. Nonmydriatic teleretinal imaging improves adherence to annual eye examinations in patients with diabetes. J Rehabil Res Dev. 2006;43:733–40.

  18. Davis RM, Fowler S, Bellis K, Pockl J, Pakalnis V, Woldorf A. Telemedicine improves eye examination rates in individuals with diabetes: a model for eye-care delivery in underserved communities. Diabetes Care. 2003;26:2476.

  19. Hatef E, Alexander M, Vanderver BG, Fagan P, Albert M. Assessment of annual diabetic eye examination using telemedicine technology among underserved patients in primary care setting. Middle East Afr J Ophthalmol. 2017;24:207–12.

  20. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

  21. Estabrooks PA, Brownson RC, Pronk NP. Dissemination and implementation science for public health professionals: an overview and call to action. Prev Chronic Dis. 2018;15:E162.

  22. Balasubramanian BA, Heurtin-Roberts S, Krasny S, Rohweder C, Fair K, Olmos T, et al. Contextual factors related to implementation and reach of a pragmatic multisite trial: the My Own Health Report (MOHR) study. J Am Board Fam Med. 2017;30:337–49.

  23. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci. 2015;10:21.

  24. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3.

  25. Bastos de Carvalho A, Lee Ware S, Belcher T, Mehmeti F, Higgins EB, Sprang R, et al. Evaluation of multi-level barriers and facilitators in a large diabetic retinopathy screening program in federally qualified health centers: a qualitative study. Implement Sci Commun. 2021;2:54.

  26. Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, et al. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7:158.

  27. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

  28. Jacob C, Sanchez-Vazquez A, Ivory C. Understanding clinicians' adoption of mobile health tools: a qualitative review of the most used frameworks. JMIR Mhealth Uhealth. 2020;8:e18072.

  29. Keith RE, Crosson JC, O'Malley AS, Cromp D, Taylor EF. Using the consolidated framework for implementation research (CFIR) to produce actionable findings: a rapid-cycle evaluation approach to improving implementation. Implement Sci. 2017;12:15.

  30. Ikram M, Sroufe R, Zhang Q. Prioritizing and overcoming barriers to integrated management system (IMS) implementation using AHP and G-TOPSIS. J Clean Prod. 2020;254:120121.

  31. van der Ploeg T, Austin PC, Steyerberg EW. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints. BMC Med Res Methodol. 2014;14:137.

  32. Merkle EC, Shaffer VA. Binary recursive partitioning: background, methods, and application to psychology. Br J Math Stat Psychol. 2011;64(Pt 1):161–81.

  33. Feldesman MR. Classification trees as an alternative to linear discriminant analysis. Am J Phys Anthropol. 2002;119:257–75.

  34. Leclerc BS, Bégin C, Cadieux É, Goulet L, Allaire J-F, Meloche J, et al. A classification and regression tree for predicting recurrent falling among community-dwelling seniors using home-care services. Can J Public Health. 2009;100:263–7.

  35. Protopopoff N, Bortel WV, Speybroeck N, Geertruyden J-PV, Baza D, D'Alessandro U, et al. Ranking malaria risk factors to guide malaria control efforts in African highlands. PLoS One. 2009;4:e8022.

  36. Therneau T, Atkinson E. An introduction to recursive partitioning using the RPART routines; 2015.

  37. Taylor SL, Dy S, Foy R, Hempel S, McDonald KM, Ovretveit J, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual Saf. 2011;20:611–7.

  38. Lau R, Stevenson F, Ong BN, Dziedzic K, Treweek S, Eldridge S, et al. Achieving change in primary care—causes of the evidence to practice gap: systematic reviews of reviews. Implement Sci. 2016;11:40.

  39. Kuhn L, Page K, Ward J, Worrall-Carter L. The process and utility of classification and regression tree methodology in nursing research. J Adv Nurs. 2014;70:1276–86.

  40. Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and regression trees. Boca Raton: Routledge; 2017.

  41. Khadjesari Z, Boufkhed S, Vitoratou S, Schatte L, Ziemann A, Daskalopoulou C, et al. Implementation outcome instruments for use in physical healthcare settings: a systematic review. Implement Sci. 2020;15:66.

  42. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.

  43. Glasgow RE, Battaglia C, McCreight M, Ayele RA, Rabin BA. Making implementation science more rapid: use of the RE-AIM framework for mid-course adaptations across five health services research projects in the Veterans Health Administration. Front Public Health. 2020;8:194.

  44. Valikodath NG, Leveque TK, Wang SY, Lee PP, Newman-Casey PA, Hansen SO, et al. Patient attitudes toward telemedicine for diabetic retinopathy. Telemed J E Health. 2017;23:205–12.

  45. Ogunyemi O, George S, Patty L, Teklehaimanot S, Baker R. Teleretinal screening for diabetic retinopathy in six Los Angeles urban safety-net clinics: final study results. AMIA Annu Symp Proc. 2013;2013:1082–8.

  46. Liu Y, Zupan NJ, Swearingen R, Jacobson N, Carlson JN, Mahoney JE, et al. Identification of barriers, facilitators and system-based implementation strategies to increase teleophthalmology use for diabetic eye screening in a rural US primary care clinic: a qualitative study. BMJ Open. 2019;9:e022594.

  47. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14:42.

  48. Leeman J, Birken SA, Powell BJ, Rohweder C, Shea CM. Beyond "implementation strategies": classifying the full range of strategies used in implementation science and practice. Implement Sci. 2017;12(1):125.

  49. Liu Y, Carlson JN, Torres Diaz A, Lock LJ, Zupan NJ, Molfenter TD, et al. Sustaining gains in diabetic eye screening: outcomes from a stakeholder-based implementation program for teleophthalmology in primary care. Telemed J E Health. 2020.

  50. Telemedicine for Ophthalmology Information Statement - 2018. American Academy of Ophthalmology; 2018. Accessed 6 May 2021.

  51. Bouskill K, Smith-Morris C, Bresnick G, Cuadros J, Pedersen ER. Blind spots in telemedicine: a qualitative study of staff workarounds to resolve gaps in diabetes management. BMC Health Serv Res. 2018;18:617.

  52. Rogers EM. Diffusion of innovations. 4th ed. New York: Free Press; 2010.

  53. Miech EJ, Rattray NA, Flanagan ME, Damschroder L, Schmid AA, Damush TM. Inside help: an integrative review of champions in healthcare-related implementation. SAGE Open Med. 2018;6:2050312118773261.

  54. Shaw EK, Howard J, West DR, Crabtree BF, Nease DE Jr, Tutt B, et al. The role of the champion in primary care change efforts. J Am Board Fam Med. 2013;25:676–85.

  55. Wade V, Eliott J. The role of the champion in telehealth service development: a qualitative analysis. J Telemed Telecare. 2012;18:490–2.

  56. Mamillapalli CK, Prentice JR, Garg AK, Hampsey SL, Bhandari R. Implementation and challenges unique to teleretinal diabetic retinal screening (TDRS) in a private practice setting in the United States. J Clin Transl Endocrinol. 2020;19:100214.



Acknowledgements

We thank the participating FQHCs for their collaboration and the participating health care professionals who dedicated their time to our surveys. We thank Mr. Rob Sprang and Dr. Hayden Bosworth for their insightful suggestions on study design and survey validation.


Funding

Ana Bastos de Carvalho receives research support from NCATS UL1TR001998, the University of Kentucky College of Medicine Institutional Physician Scientist Career Development Program; the Diabetes Research Center at Washington University in St. Louis of the National Institutes of Health under award number P30DK020579; and the Cincinnati Eye Institute Foundation Ignite Award (2019 Ed). Heather Bush receives research support from NCATS UL1TR001998. The content is solely the responsibility of the authors and does not necessarily represent the official views of any funders.

Author information

Contributions

ABC, SLW, JLS, and CRS conceived the study. All authors contributed to the study design and oversight. ABC and SLW conducted the data collection. ABC, SLW, FL, and HB conducted the analyses. SLW drafted and critically revised the manuscript in collaboration with ABC, CRS, and HB. All authors contributed to the manuscript refinement and are responsible for its content. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ana Bastos de Carvalho.

Ethics declarations

Ethics approval and consent to participate

All study procedures were reviewed and approved by the University of Kentucky’s Institutional Review Board (IRB number 44107) and were in accordance with the Declaration of Helsinki. Documentation of informed consent was waived by the University of Kentucky IRB because the research presented no more than minimal risk and involved no procedures for which written consent is normally required outside the research context.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Identifies locations of key information included in the manuscript.

Additional file 2.

Includes the complete survey instrument utilized in this study.

Additional file 3. Supplementary Methods.

Describes mapping of survey items to the Consolidated Framework for Implementation Research domains and constructs.

Additional file 4. Supplementary Figures and Tables.

Includes supplementary figures for survey response distributions organized by domain and construct.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Ware, S.L., Studts, C.R., Lei, F. et al. Ranked determinants of telemedicine diabetic retinopathy screening performance in the United States primary care safety-net setting: an exploratory CART analysis. BMC Health Serv Res 22, 507 (2022).



Keywords

  • Diabetic retinopathy
  • Primary care
  • Telemedicine
  • Screening
  • Barriers and facilitators
  • Determinants
  • Classification and regression tree (CART)
  • Consolidated framework for implementation research (CFIR)
  • Mixed methods
  • Underserved