Adapting and testing measures of organizational context in primary care clinics in KwaZulu-Natal, South Africa

Abstract

Background

Implementation science frameworks situate intervention implementation and sustainment within the context of the implementing organization and system. Aspects of organizational context such as leadership have been defined and measured largely within US health care settings characterized by decentralization and individual autonomy. The relevance of these constructs in other settings may be limited by differences like collectivist orientation, resource constraints, and hierarchical power structures. We aimed to adapt measures of organizational context in South African primary care clinics.

Methods

We convened a panel of South African experts in social science and HIV care delivery and presented implementation domains informed by existing frameworks and prior work in South Africa. Based on panel input, we selected contextual domains and adapted candidate items. We conducted cognitive interviews with 25 providers in KwaZulu-Natal Province to refine measures. We then conducted a cross-sectional survey of 16 clinics with 5–20 providers per clinic (N = 186). We assessed reliability using Cronbach’s alpha and calculated interrater agreement (awg) and intraclass correlation coefficient (ICC) at the clinic level. Within clinics with moderate agreement, we calculated correlation of clinic-level measures with each other and with hypothesized predictors – staff continuity and infrastructure – and a clinical outcome, patient retention on antiretroviral therapy.

Results

Panelists emphasized contextual factors; we therefore focused on elements of clinic leadership, stress, cohesion, and collective problem solving (critical consciousness). Cognitive interviews confirmed salience of the domains and improved item clarity. After excluding items related to leaders’ coordination abilities due to missingness and low agreement, all other scales demonstrated individual-level reliability and at least moderate interrater agreement in most facilities. ICC was low for most leadership measures and moderate for others. Measures tended to correlate within facility, and higher stress was significantly correlated with lower staff continuity. Organizational context was generally more positively rated in facilities that showed consistent agreement.

Conclusions

As theorized, organizational context is important in understanding program implementation within the South African health system. Most adapted measures show good reliability at individual and clinic levels. Additional revision of existing frameworks to suit this context and further testing in high and low performing clinics is warranted.

Background

Despite the large investment in research to identify clinical and behavioral interventions to improve HIV prevention and care, many efficacious programs never get incorporated into policy or scaled into clinical settings; others fail when put into practice [1, 2]. In contexts such as South Africa, with 7.7 million people living with HIV (PLHIV), 4.8 million on antiretroviral therapy (ART) [3], and an aging population of PLHIV who have increasingly complex care needs [4, 5], scaling interventions that ensure effective, evidence-based care is a priority [6]. To this end, the field of implementation science has begun to shed light on why some efficacious interventions have not translated into programmatic successes, noting factors that must be addressed within the clinical environment to improve implementation and sustainment [1, 7, 8].

Implementation science frameworks situate interventions within the organizational context of a health care setting. The Exploration, Preparation, Implementation, Sustainment (EPIS) conceptual framework includes absorptive capacity, culture, climate, and leadership as elements of the context that shape exploration of interventions [9, 10], while the Consolidated Framework for Implementation Research (CFIR) identifies domains such as culture, implementation climate, and readiness for implementation as key factors at the organizational or team level [11, 12]. Recent updates to CFIR have focused on clarifying these domains as antecedents on the pathway to implementation outcomes [12]. The theory of Organizational Readiness for Change similarly identifies contextual factors such as the culture and climate of the organization that help to shape readiness for a specific change, which in turn affects implementation effectiveness [13]. Researchers have drawn on these definitions in efforts to better measure organizational characteristics: a 2017 systematic review found 76 articles attempting to measure organizational context, a majority of which were based in the United States; the authors recommended greater efforts to use mixed-methods research to develop and test measures in a range of settings [14].

Measures developed within the US health care system reflect its decentralized structure, national culture of individualism, and high levels of clinical autonomy, features that distinguish it from many other nations with more hierarchical, top-down power structures. The lack of validated measures of organizational context in centralized health systems, particularly in low-resource countries where primary care clinics are overextended, contributes to a clear gap in understanding which contextual factors impact successful program implementation and how these factors can be addressed [15, 16]. Research in South Africa from our team and others has found that program implementation can be heavily influenced by clinic leadership, particularly leaders’ problem-solving skills, in addition to provider teamwork and clinic environment such as material and human resources [17,18,19,20]. Qualitative assessment across multiple levels of the health system in KwaZulu-Natal Province identified perceived benefits of a particular program as well as broader resource availability and clear communication as factors shaping integration of HIV programming into general care [21]. Recent research on implementation of maternal health quality improvement underscored the importance of leadership, teamwork, and provider motivation in maintaining consistent implementation of interventions, particularly in the face of external factors such as the COVID-19 pandemic, budget cuts, and labor actions [20].

In this study, we aimed to adapt implementation science frameworks to the context of primary care in South Africa, to develop and test measures of organizational context based on the adapted framework, and to assess if the resulting measures demonstrated associations with hypothesized determinants and outcomes of organizational context.

Methods

We report this study, which included formative qualitative work and a cross-sectional survey, based on recommendations for scale development and testing studies [22] and following STROBE guidelines for observational research (Additional file 1).

Study setting

This study took place in uMgungundlovu District in KwaZulu-Natal Province, South Africa. The district includes the capital city of Pietermaritzburg but is otherwise largely rural. Adult HIV prevalence is estimated at 30%, and the 57 Department of Health (DOH) facilities provide ART for approximately 140,000 individuals [23, 24]. The U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) supports the national HIV response in this district by funding implementing partner organizations that second staff to DOH facilities in support of HIV care and that provide specific services like clinical mentoring or client tracing following disengagement from care.

Domain and item generation

We followed an iterative approach to define priority domains and identify potential items. We first synthesized implementation science literature and existing research in South Africa into a conceptual model delineating organizational context as an overall determinant of program-specific domains and ultimately organizational readiness for change (Fig. 1). We defined key elements of organizational context as service readiness (the resources available for service provision), stress and workload, leadership (including leadership practice, communication, and direction on roles and responsibilities), learning climate as a space for shared trial and evaluation, and team cohesion (trust and shared values). Primary care clinics in this area typically operate with an Operational Manager (OM) who is also a professional nurse, a deputy manager, and a small number of nurses who rotate through clinical services; we initially conceptualized the full clinic as the organizational unit, the OM and deputy as leaders, and the nursing and support staff as the relevant team. We used literature searches, investigator knowledge and networks, and the Community Advisory Board of the Human Sciences Research Council to identify participants for an Expert Panel composed of health care providers, program managers, social scientists, and DOH representatives all familiar with the provision of HIV care in the province. We convened the panel in a hybrid format and presented the initial conceptual framework for their review. 
The Expert Panel affirmed the primacy of organizational context in shaping program implementation in this setting, and noted major aspects to consider: that program implementation and sustainment is driven by top-down directives, that the context of overburdened facilities and resource constraints shapes program uptake, that leadership communication and management of complex relationships within and beyond facilities is critical, and that providers face high demands and shifting roles that can make teamwork particularly important. Panelists noted that the concept of a learning climate informed by a quality improvement feedback loop was not present within primary facilities; they highlighted leadership’s role in monitoring performance as more salient, complemented by providers’ active interest in solving implementation problems. Panelists concurred with conceptualizing clinics as an organizational unit with distinct leaders and a single team of providers. Following the Panel discussion, we returned to the literature to clarify candidate constructs and identify potential items. We also mapped service readiness to specific human and material resources that inform organizational context for the full clinic.

Fig. 1 Original conceptual model of factors shaping program implementation

We identified 7 latent constructs for measurement, including 4 related to leadership; Table 1 defines each construct and the accompanying measure. Scales on leadership engagement, feedback and monitoring, and stress addressed our revised conceptual framework, and each was based on an existing measure validated in other contexts [25, 26]. To measure teamwork, we adapted a measure of cohesion we had previously validated in community settings in South Africa [27]. In place of directly assessing “learning climate”, we adapted a measure on critical consciousness, which we had originally developed to capture community empowerment and learning culture [27, 28], to address problem solving within the facility as described by the Expert Panel. Two scales were not direct adaptations. Based on other research in South African clinics [17] and Expert Panel input on the importance of leaders in optimizing implementation in light of resource constraints, we drew from and expanded on existing items on change efficacy [29, 30] to create a leadership-focused scale on resource mobilization and problem solving. To capture coordination in this setting, we developed new items addressing the specifics of coordinating facility staff, implementing partners, and community leaders. The Expert Panel reviewed and revised items for clarity; we reconvened a subset of the panel to finalize items collectively. Items were translated into isiZulu and back translated to English by co-author MM.

Table 1 Organizational context measures

We conducted cognitive interviews with 25 participants in 2 stages from June to September 2021 for content validation of the proposed items, allowing time to revise between stages. We sampled 19 providers (primarily nurses) and 6 operational managers (OMs, clinic leaders) from 5 facilities and tested 3 to 4 scales per interview. We selected nurses and OMs for cognitive interviews based on their central responsibility for delivering care and hence capacity to answer all proposed items, including items on clinical care delivery not included in this analysis. We used the think-aloud method and probed respondents on clarity of the item and response choices, thought processes leading to their response, and ease of answering. We asked respondents to identify overlapping or redundant items. Interviews were conducted in English or isiZulu. We recorded, translated, and transcribed interviews for analysis. We conducted rapid analysis to assess key terms such as “clinic leaders” and “clinic staff,” and we iteratively revised items and scales for clarity and efficiency. For instance, we revised an item on whether “leaders make use of all available staff to implement clinic programs” to whether “leaders make the most of the staff available” based on responses that not all staff are needed or appropriate for a given program. Interviewees provided consistent responses in conceptualizing their clinics as a single unit, identifying the OM and deputy as the relevant leaders, and defining nursing and support staff personnel as a team. The final instrument included 4 or 5 items per scale with 4 response options for each statement (see Additional file 2, Table S1 for all items).
Greater agreement indicated respondents perceived more of that construct within the clinic; all constructs except for stress were positive aspects of organizational culture and most items on these constructs had positive stems; items with negative stems were maintained as written for inter-item assessment and reverse-scored for subsequent analyses.
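The reverse-scoring step described above can be sketched as follows; this is a minimal illustration on the 1–4 agreement scale, and the example responses are hypothetical, not study data:

```python
def reverse_score(response: int, scale_min: int = 1, scale_max: int = 4) -> int:
    """Flip a Likert response so higher values always indicate more of the
    construct: on a 1-4 scale, 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1."""
    return scale_max + scale_min - response

# Hypothetical responses to a negatively worded item
responses = [4, 3, 1, 2]
print([reverse_score(r) for r in responses])  # [1, 2, 4, 3]
```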

Data collection

To test the proposed measures, we calculated a minimum sample size of 12 providers each within 14 facilities (168 respondents) to provide > 80% power to detect a correlation of at least 0.57 with alpha of 0.05, one-sided. To ensure this sample size while including facilities with fewer than 12 providers total, we sampled 16 facilities using random selection among facilities with at least 100 patients on ART based on provincial TIER.net data, stratified by ART patient population size (< 2000, > 2000) to account for possible differences in smaller clinics and larger clinics with potentially more complex structures. Facilities participating in cognitive interviews were ineligible. Within facility, we selected all OMs and used stratified random sampling to select up to 8 higher-level nurses (Professional Nurses, Certified Nursing Professionals, Registered Nurses) and up to 8 auxiliary nurses and other patient-facing providers engaged in HIV care (Enrolled Nurses or Enrolled Nursing Assistants, Nutritionists, Pharmacists, Nutrition or Pharmacy Assistants, Lay Counselors), as the Expert Panel and cognitive interviews confirmed that these personnel were considered part of the provider team. Selection was conducted by ordering providers at random within strata to provide replacement respondents when possible in case selected providers were not available. We administered surveys via Research Electronic Data Capture (REDCap) to capture basic demographics on providers and their roles, the organizational context measures, and additional measures on conduct of specific programs analyzed elsewhere. Data collection took place from October 2021 – March 2022. 
National restrictions related to the COVID-19 pandemic, including limitations on clinic scheduling and staff meetings, were lowered to alert level 1 (the lowest level) on October 1, 2021 and remained at that level throughout data collection [31]; routine clinical practices continued to be affected during the study period by considerations such as diverting staff for vaccination campaigns.
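A sample size calculation of the kind described above can be approximated using Fisher's z transformation for correlation coefficients. The sketch below is an illustrative Python reconstruction under that standard approximation, not the study's actual computation:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_corr_onesided(r: float, n: int, z_alpha: float = 1.645) -> float:
    """Approximate power to detect a correlation of at least r with n units,
    one-sided test at alpha = 0.05, via Fisher's z transformation."""
    z_r = math.atanh(r)              # Fisher z of the target correlation
    se = 1.0 / math.sqrt(n - 3)      # standard error of Fisher z
    return norm_cdf(z_r / se - z_alpha)
```

Power rises with both the target correlation and the number of units analyzed; the study's stated calculation may have used different software or a different formulation than this simplified sketch.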

We conducted a concurrent facility audit of human and material resources using direct observation and a survey with the OM or a proxy if there was no OM available. From the audit, we calculated a summary score based on the presence of key infrastructure; indicators were adapted from the World Health Organization Service Availability and Readiness Assessment and are listed in Additional file 2, Table S2 [32]. We calculated staff continuity based on the number of clinical staff and number of clinical positions with turnover in the past year. We also extracted routine program data on HIV patient outcomes from the district and national reporting systems. We calculated aggregate retention on ART per facility as the number of patients remaining on ART as of March 2022 out of the number reported on ART in March 2021 or newly starting ART between March 2021 and February 2022.
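The aggregate retention calculation is a simple proportion; a sketch with hypothetical counts (not study data):

```python
def retention_on_art(still_on_art: int, baseline_on_art: int, new_starts: int) -> float:
    """Patients on ART at the end of the period, out of the baseline cohort
    plus new initiations during the period."""
    return still_on_art / (baseline_on_art + new_starts)

# Hypothetical facility: 1,800 on ART in March 2021, 200 new starts,
# 1,900 remaining on ART in March 2022
print(round(retention_on_art(1900, 1800, 200), 2))  # 0.95
```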

Data analysis

Analysis proceeded in four stages. First, we conducted descriptive analysis of facilities, providers, and items. We used proportions and medians to summarize measures from the facility audit and provider surveys. We assessed incomplete responses and straightline (invariant) responses by provider and measure. Second, we conducted agreement and reliability checks at individual and facility levels. We quantified inter-item agreement with Cronbach’s alpha among complete responses. We calculated the awg(j) statistic as a measure of agreement within facility; this calculation assumes a set of parallel items answered by multiple raters and can be interpreted similarly to Cohen’s kappa. We report mean awg(j) across facilities and the number of facilities achieving at least moderate agreement (awg(j) ≥ 0.50) per measure [33]. We calculated mean respondent score per measure and used the intraclass correlation coefficient (ICC) to quantify facility-level agreement and reliability among all participants and again limited to professional nurses. Third, we conducted convergent validation by calculating the Spearman rank correlation coefficient of all measures within facility and testing correlation with human and material resources hypothesized to shape organizational context – infrastructure and staff continuity – as well as with retention of patients on ART as a clinical outcome. All validity analyses were limited to facilities demonstrating moderate agreement on the relevant measure. We assessed correlations against a pre-specified threshold of rho = 0.30 (moderate effects [34]) and reported statistical significance. Fourth, as an exploratory analysis, we compared respondents’ average scale scores between facilities with awg(j) ≥ 0.50 (moderate agreement) on all measures versus facilities where moderate agreement was obtained on fewer measures; we used linear generalized estimating equation (GEE) models accounting for clustering within facility.
All analyses were conducted in Stata version 17.
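Two of the reliability statistics above, Cronbach's alpha and a one-way ICC, can be sketched in Python. This is an illustrative reimplementation, not the study's Stata code; the awg(j) statistic is omitted, and icc1 assumes balanced groups (equal respondents per facility):

```python
import statistics as st

def cronbach_alpha(items: list[list[float]]) -> float:
    """items: one list of respondent scores per scale item (complete cases).
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    item_vars = sum(st.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_vars / st.variance(totals))

def icc1(groups: list[list[float]]) -> float:
    """One-way ICC from a balanced design: groups are facilities, values are
    respondents' mean scale scores. ICC(1) = (MSB - MSW) / (MSB + (k-1)*MSW)."""
    n, k = len(groups), len(groups[0])
    grand = st.mean([x for g in groups for x in g])
    msb = k * sum((st.mean(g) - grand) ** 2 for g in groups) / (n - 1)
    msw = sum((x - st.mean(g)) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```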

Ethics approvals

This study was approved by the Institutional Review Board at the University of California, San Francisco (20–31802), the Research Ethics Committee at the Human Sciences Research Council (REC 1/19/08/20), and the uMgungundlovu Health District; all methods were carried out in accordance with relevant guidelines and regulations. All facilities provided consent for inclusion and each participant provided written informed consent to participate.

Results

A representative from each of the 16 facilities consented to participation and assisted in completion of the facility audit; data from all facilities were extracted in full from district reporting. The median facility had 14 full-time clinical staff; only 3 of 16 facilities had all positions filled with permanent personnel (no vacancies, no interim posts) at the time of assessment (Table 2A). Most facilities demonstrated some gaps in core infrastructure: median score was 61%. Routine district data suggested retention on ART was high in all facilities, with a median of 95% of patients retained between March 2021 and March 2022.

Table 2 Characteristics of sampled facilities and providers

Of 194 providers approached, 186 consented to participate (95.9%); those declining cited insufficient time. One respondent who worked as a data capturer rather than in a patient-facing role was excluded from analysis. Consistent with health care providers in South Africa generally, most respondents were female (87.5%, Table 2B). Due to extensive turnover in leadership of some clinics, surveys were completed by OMs at 14 of 16 facilities, including one interim OM. Just over one third of respondents were non-OM nurses. Respondents reported a median of 8 years of professional experience and 6 years at their current facility.

At the individual level, respondents tended to agree with most items: average scores for the proposed measures clustered near 3 = “Agree” out of the possible range 1 – 4 (Table 3). Straightline responses were common, ranging from 34% of respondents on the measure of stress to 54% for critical consciousness; nearly all such responses were all “Agree” except for the measure on stress, where straightline responses were split between “Agree” and “Disagree” (data not shown). Leadership coordination had the highest missingness, with 59 participants responding “Don’t know” or skipping at least one item, primarily two items related to the external clinic committee (whether the committee met regularly and if leaders acted on its input). Cronbach’s alpha indicated moderate to strong inter-item agreement for all measures except coordination. We excluded coordination from subsequent analysis given the high degree of missingness and inadequate inter-item agreement.

Table 3 Scale performance and measurement properties

Facilities accounted for up to 22% (critical consciousness) and 23% (stress) of total variance in mean scores. ICC exceeded a minimum threshold of 0.05 for feedback and monitoring, stress, cohesion, and critical consciousness (Table 3B); near zero ICC for leadership engagement and resource mobilization suggested these measures could not reliably distinguish between facilities. ICC was higher when limited to professional nurses for most measures, suggesting that for measures other than leadership engagement, professional nurses responded more consistently within facilities than other providers. The distribution of facility means underscores the homogeneity of scales like leadership engagement across all facilities (Additional file 2, Figure S1).

Item responses demonstrated moderate to strong agreement within facilities, with awg(j) ranging from 0.57 for stress (12 of 16 facilities with at least moderate agreement) to 0.78 for critical consciousness (all facilities with at least moderate agreement). Facilities with awg(j) < 0.50 demonstrated agreement too inconsistent for a summary statistic to be considered representative of the facility as a whole.

Limiting analysis to facilities with at least moderate agreement on a given measure, we found that the 6 measures of organizational context showed substantial correlation within facility: absolute correlation exceeded the predetermined threshold of 0.30 in all cases and achieved statistical significance at p < 0.05 for multiple assessments despite the small number of facilities (Table 4A). When compared with predicted inputs and outcomes of facility climate (Table 4B), correlation with staff continuity was moderate (rho > 0.30) for feedback and monitoring, resource mobilization, and stress, with only stress showing a statistically significant correlation (-0.68). Higher scores on feedback and monitoring were correlated with lower facility infrastructure (rho = -0.53), contrary to expectation. Cohesion was correlated with higher retention on ART (rho = 0.49, p = 0.09).

Table 4 Correlation of facility-level measures within facilities with moderate agreement (awg(j) ≥ 0.50)

In our exploratory analysis to understand potential differences in context in facilities where staff were largely in agreement on their scoring, we found that respondents in the 9 facilities with moderate agreement on all scales reported more positive organizational context than the other 7 facilities, with statistically significant differences in leadership engagement, resource mobilization, and stress (Table 5). The largest difference was in reported stress: average scores on the stress scale were 0.41 points lower (less stress) in facilities with agreement on all scales.

Table 5 Participant responses within facilities with high overall agreement

Discussion

In this study, we developed and adapted measures for 7 domains of organizational context based on implementation science frameworks and expertise within primary care clinics in South Africa. The measures demonstrated reasonable individual-level consistency with our study population, except for the coordination scale created de novo; the remaining measures showed moderate to strong agreement and low to moderate reliability within facility. Variance between facilities was modest, possibly reflecting the shared context of a rural setting in a single district. Measures generally correlated with each other at the facility level, though we found limited evidence of relationships between the facility scores and hypothesized predictors and outcomes in validation analyses. Facilities with stronger agreement among respondents also tended to have a more positive context.

The Expert Panel concurred with existing literature and implementation science frameworks that facility leadership was critical to program implementation and sustainment. In this setting of relatively small clinics and distribution of responsibilities across staff, they prioritized overall leadership above leadership specific to implementation of one program, which has been more commonly measured in US-based implementation science research [35]. We adapted or developed measures for four aspects of overall leadership hypothesized to improve program implementation: engagement, feedback and monitoring, resource mobilization, and coordination. The newly created items on coordination with external partners and clinic committees proved difficult for some respondents to answer and showed limited agreement even within complete responses. Further efforts to capture this important construct, potentially as an index rather than a scale, are warranted. The other leadership measures demonstrated good item agreement and moderate agreement within facilities in our sample; as measures developed for use at an organizational level, adequate agreement across raters is critical. The finding that ICCs for leadership measures were generally higher among professional nurses, typically the most trained professional cadre in primary care facilities, indicates that these providers responded more consistently within facilities, relative to variation across facilities, than the respondent pool as a whole, potentially due to greater exposure to clinic leaders or to differing interpretations of who qualifies as a ‘leader’ between professional nurses and other personnel. Our cognitive interviews demonstrated consistent understanding of ‘leader’ among the professional nurses and OMs we interviewed; including additional cadres could be useful to extend this evidence.
The findings to date support use of these leadership measures within higher cadres such as professional nurses in similar settings, particularly for clinical interventions.

Beyond leadership, we tested measures of stress, cohesion, and critical consciousness hypothesized to shape uptake of new programs. These scales similarly demonstrated good item agreement and moderate inter-rater agreement within facilities; ICCs (0.23, 0.14, and 0.22, respectively) well exceeded the minimum threshold of 0.05 among all participants, suggesting greater consistency among respondents across cadres within facility than for the leadership measures.

The six scales demonstrating sound measurement properties also tended to correlate with one another within facilities, potentially reflecting less random variation in these facilities and/or that respondents provided similar responses across scales and between raters within these facilities. Three scales demonstrated some correlation with inputs and outcomes in accordance with predictions: resource mobilization, stress, and cohesion. Better resource mobilization was correlated with higher staff continuity. The scale for stress, the only construct indicating a negative climate and where agreement indicated worse performance, had less homogeneity than other scales, surfacing potential to further refine the measure to better distinguish between clinic contexts. Higher stress also showed a correlation with lower staff continuity. The scale for cohesion – teamwork among providers – demonstrated moderate heterogeneity between individuals and between facilities, and was correlated with retention on ART. Given the importance of provider burnout before and especially during the COVID-19 pandemic [36,37,38], better assessment of stress and cohesion can help to identify the best performing clinics and to target facilities most in need of management or individual interventions fostering teamwork and coping with stress.

The remaining scales—on leadership engagement, feedback and monitoring, and critical consciousness—demonstrated two drawbacks. The first was high levels of straightline responses, with approximately half of respondents indicating the same answer—typically “Agree”—to all items. These response patterns could be explained in several ways: 1) truly uniform conditions across and within these facilities within a single district, 2) insufficient distinction between items to capture indications of very low or very high levels of each construct, 3) social desirability within a hierarchical work setting, 4) lack of strong opinion, particularly given the strain providers face to deliver care amidst constrained resources, and/or 5) respondent inattention or fatigue. In the absence of a neutral response option (which we did not provide to avoid respondents defaulting to the median option), one or more of these explanations could have resulted in repeatedly agreeing. This degree of invariance can inflate apparent agreement within individuals and facilities, but it undermines the utility of the measures in distinguishing between facilities, should such distinctions exist.

The second drawback was inconsistent evidence of correlation with hypothesized predictors (staff continuity and infrastructure) and with patient outcomes (retention on ART) in the validation analysis. Careful consideration is required to understand these findings. It is possible that these initial efforts to adapt the constructs of organizational context did not fully capture the dynamics that most strongly shape performance in these facilities. This may be particularly salient given the time of the assessment following the upheaval of the COVID-19 pandemic, which shaped staff continuity, organizational context, and ART retention. An additional limitation is the relative insensitivity of the outcome measure: ART programs are longstanding, and our measure of patient retention based on (imperfect) aggregate data demonstrated little variability across the sampled facilities. Organizational context at the time of assessment may have had little influence on patient retention even had it been measured perfectly. Measures reflecting implementation or sustainment of more recently introduced programs would provide an indicator more sensitive to variation in organizational context.

Our study has multiple strengths, including use of implementation science frameworks and organizational readiness theory to propose measures of organizational context in primary care facilities in South Africa. This work expands on the qualitative work attesting to the importance of organizational context in implementing and maintaining interventions in this setting [17, 20, 21, 39]. We relied on a majority South African Expert Panel to prioritize constructs and items for measurement, and we conducted detailed cognitive interviews to revise and clarify items. Limitations include difficulty in reaching providers – particularly clinic managers – amidst regular turnover and COVID-19 challenges (including the rapid changes in clinical responsibilities and locations, provider illnesses and deaths, and restrictions on routine activities) and the reliance on aggregate patient outcome data that were both imperfect and potentially insensitive to organizational context. Soliciting perspectives on leadership and organizational context in hierarchical settings is inherently fraught; it is difficult to disentangle social desirability from true agreement. Thresholds for agreement measures are imposed on a continuous metric and may not distinguish truly different performance levels [33].

Conclusions

This study was an initial effort to adapt and test measures of organizational context to better understand program implementation in primary care within the South African health system. The work confirms the importance of organizational context from the perspective of those working within primary care clinics and supports standing calls for further efforts to develop and test theories, frameworks, and measures that capture the dynamics of health care delivery in resource-constrained settings. This initial adaptation of theory and measurement to the context of South African clinics produced scales with sound measurement properties, several of which (notably resource management, stress, and cohesion) show promise for differentiating facilities; further work is needed to identify the domains of organizational context that most strongly shape patient outcomes. From there, further efforts to refine constructs and measures are warranted, including positive deviance assessments to ensure a sample of facilities with strongly divergent performance and inclusion of implementation outcomes more closely tied to organizational context.

Availability of data and materials

The datasets used during the current study are available from the corresponding author on reasonable request.

Abbreviations

ART:

Antiretroviral therapy

awg:

Agreement within group

CFIR:

Consolidated Framework for Implementation Research

COVID-19:

Coronavirus disease 2019

DOH:

Department of Health

EPIS:

Exploration, Preparation, Implementation, Sustainment

GEE:

Generalized estimating equation

HIV:

Human immunodeficiency virus

ICC:

Intraclass correlation coefficient

OM:

Operational manager

PEPFAR:

President’s Emergency Plan for AIDS Relief

REDCap:

Research Electronic Data Capture

References

  1. Nutbeam D. Achieving ‘best practice’ in health promotion: improving the fit between research and practice. Health Educ Res. 1996;11(3):317–26.

  2. Dionne KY. Doomed Interventions: The Failure of Global Responses to AIDS in Africa. Cambridge: Cambridge University Press; 2017. p. 214.

  3. UNAIDS. South Africa [Internet]. Joint United Nations Programme on HIV/AIDS; 2018 [cited 2019 Dec 11]. Available from: https://www.unaids.org/en/regionscountries/countries/southafrica

  4. Gouda HN, Charlson F, Sorsdahl K, Ahmadzada S, Ferrari AJ, Erskine H, et al. Burden of non-communicable diseases in sub-Saharan Africa, 1990–2017: results from the Global Burden of Disease Study 2017. Lancet Glob Health. 2019;7(10):e1375–87.

  5. Sharman M, Bachmann M. Prevalence and health effects of communicable and non-communicable disease comorbidity in rural KwaZulu-Natal, South Africa. Trop Med Int Health. 2019;24:1198–207.

  6. Croce D, Mueller D, Rizzardini G, Restelli U. Organising HIV ageing-patient care in South Africa: an implementation science approach. S Afr J Public Health. 2018;2(3):59–62.

  7. Yamey G. What are the barriers to scaling up health interventions in low and middle income countries? A qualitative study of academic leaders in implementation science. Glob Health. 2012;8(1):11.

  8. Geng EH, Peiris D, Kruk ME. Implementation science: Relevance in the real world without sacrificing rigor. PLoS Med. 2017;14(4).

  9. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.

  10. Aarons GA, Green AE, Trott E, Willging CE, Torres EM, Ehrhart MG, et al. The roles of system and organizational leadership in system-wide evidence-based intervention sustainment: a mixed-method study. Adm Policy Ment Health. 2016;43(6):991–1008.

  11. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

  12. Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17(1):75.

  13. Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4(1):67.

  14. Allen JD, Towne SD, Maxwell AE, DiMartino L, Leyva B, Bowen DJ, et al. Measures of organizational characteristics associated with adoption and/or implementation of innovations: a systematic review. BMC Health Serv Res. 2017;17(1):591.

  15. Daivadanam M, Ingram M, Annerstedt KS, Parker G, Bobrow K, Dolovich L, et al. The role of context in implementation research for non-communicable diseases: answering the ‘how-to’ dilemma. PLoS ONE. 2019;14(4).

  16. Alonge O, Rodriguez DC, Brandes N, Geng E, Reveiz L, Peters DH. How is implementation research applied to advance health in low-income and middle-income countries? BMJ Glob Health. 2019;4.

  17. Gilson L, Ellokor S, Lehmann U, Brady L. Organizational change and everyday health system resilience: lessons from Cape Town, South Africa. Soc Sci Med. 2020;266:113407.

  18. Julien A, Anthierens S, Van Rie A, West R, Maritze M, Twine R, et al. Health care providers’ challenges to high-quality HIV care and antiretroviral treatment retention in rural South Africa. Qual Health Res. 2021;31(4):722–35.

  19. Leslie HH, West R, Twine R, Masilela N, Steward WT, Kahn K, et al. Measuring Organizational Readiness for Implementing Change in Primary Care Facilities in Rural Bushbuckridge, South Africa. Int J Health Policy Manag. 2020;11:912–8.

  20. Odendaal W, Chetty T, Goga A, Tomlinson M, Singh Y, Marshall C, et al. From purists to pragmatists: a qualitative evaluation of how implementation processes and contexts shaped the uptake and methodological adaptations of a maternal and neonatal quality improvement programme in South Africa prior to, and during COVID-19. BMC Health Serv Res. 2023;23(1):819.

  21. van Heerden A, Ntinga X, Lippman SA, Leslie HH, Steward WT. Understanding the factors that impact effective uptake and maintenance of HIV care programs in South African primary health care clinics. Arch Public Health. 2022;80(1):221.

  22. Streiner DL, Kottner J. Recommendations for reporting the results of studies of instrument and scale development and testing. J Adv Nurs. 2014;70(9):1970–9.

  23. Department of Health. Province of KwaZulu-Natal Annual Performance Plan 2018/19 - 2020/21 [Internet]. KwaZulu-Natal, South Africa: Department of Health, Republic of South Africa; [cited 2023 Feb 6]. Available from: http://www.kznhealth.gov.za/app/APP-2018-19.pdf

  24. Dwyer-Lindgren L, Cork MA, Sligar A, Steuben KM, Wilson KF, Provost NR, et al. Mapping HIV prevalence in sub-Saharan Africa between 2000 and 2017. Nature. 2019;570(7760):189–93.

  25. Fernandez ME, Walker TJ, Weiner BJ, Calo WA, Liang S, Risendal B, et al. Developing measures to assess constructs from the Inner Setting domain of the Consolidated Framework for Implementation Research. Implement Sci. 2018;13(1):52.

  26. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational Readiness to Change Assessment (ORCA): Development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4(1):38.

  27. Lippman SA, Neilands TB, Leslie HH, Maman S, MacPhail C, Twine R, et al. Development, validation, and performance of a scale to measure community mobilization. Soc Sci Med. 2016;157:127–37.

  28. Lippman SA, Maman S, MacPhail C, Twine R, Peacock D, Kahn K, et al. Conceptualizing community mobilization for HIV prevention: implications for HIV prevention programming in the African context. PLoS ONE. 2013;8(10):e78208.

  29. Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implement Sci. 2014;10(9):7.

  30. Zullig LL, Muiruri C, Abernethy A, Weiner BJ, Bartlett J, Oneko O, et al. Cancer registration needs assessment at a tertiary medical center in Kilimanjaro, Tanzania. World Health Popul. 2013;14(2):12–23.

  31. COVID-19 / Coronavirus | South African Government [Internet]. [cited 2023 Mar 20]. Available from: https://www.gov.za/Coronavirus

  32. World Health Organization. Service Availability and Readiness Assessment (SARA) reference manual. Geneva, Switzerland: World Health Organization; 2013.

  33. LeBreton JM, Senter JL. Answers to 20 Questions about interrater reliability and interrater agreement. Organ Res Methods. 2008;11(4):815–52.

  34. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. New York: Routledge; 1988. p. 567.

  35. Aarons GA, Ehrhart MG, Farahnak LR. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implementation Sci. 2014;9(1):45.

  36. Khamisa N, Oldenburg B, Peltzer K, Ilic D. Work related stress, burnout, job satisfaction and general health of nurses. Int J Environ Res Public Health. 2015;12(1):652–66.

  37. van der Colff JJ, Rothmann S. Occupational stress, sense of coherence, coping, burnout and work engagement of registered nurses in South Africa. SA J Ind Psychol. 2009;35(1):1–10.

  38. McKnight J, Nzinga J, Jepkosgei J, English M. Collective strategies to cope with work related stress among nurses in resource constrained settings: An ethnography of neonatal nursing in Kenya. Soc Sci Med. 2020;245.

  39. Gilson L, Barasa E, Nxumalo N, Cleary S, Goudge J, Molyneux S, et al. Everyday resilience in district health systems: emerging insights from the front lines in Kenya and South Africa. BMJ Glob Health. 2017;2:e000224.

Acknowledgements

The authors are grateful to Anna Leddy for her contributions to survey design and data collection, to the data collection team, the interview and survey respondents who provided their time and insights, and to the Expert Panel for insights and guidance throughout the project. Expert Panelists were: Lungile Mshengu, Nomusa Mtshali, Paul Nijas, Fiona Scorgie, Jonathan Stadler, Michéle Torlutte, Joslyn Walker, Bryan Weiner, and Petra Zama.

Funding

This work was supported by the National Institute of Mental Health R21MH123389 (Lippman & Steward). The funder had no role in the preparation of this manuscript.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization: HHL, WTS, BJW, AVH, SAL. Methodology: HHL, WTS, BJW, MNM, AVH, SAL. Formal analysis: HHL. Investigation: HHL, WTS, MNM, PJ, AVH, SAL. Writing, original draft: HHL. Writing, review and editing: All. Supervision: WTS, SAL. Project administration: WTS, MNM, PJ, AVH, SAL. Funding acquisition: WTS, SAL, AVH, HHL, BJW.

Corresponding author

Correspondence to Hannah H. Leslie.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Institutional Review Board at the University of California, San Francisco (20–31802), the Research Ethics Committee at the Human Sciences Research Council (REC 1/19/08/20), and the uMgungundlovu Health District; all methods were carried out in accordance with relevant guidelines and regulations. All facilities provided consent for inclusion and each participant provided written informed consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Leslie, H.H., Lippman, S.A., van Heerden, A. et al. Adapting and testing measures of organizational context in primary care clinics in KwaZulu-Natal, South Africa. BMC Health Serv Res 24, 744 (2024). https://doi.org/10.1186/s12913-024-11184-9

  • DOI: https://doi.org/10.1186/s12913-024-11184-9

Keywords