Measuring provider well-being: initial validation of a brief engagement survey

Abstract

Background

Measurement is one of the critical ingredients in addressing the well-being of health care professionals. However, administering an organization-wide well-being survey can be challenging due to constraints such as survey fatigue, financial limitations, and other system priorities. One way to address these issues is to embed well-being items into existing assessment tools that are administered on a regular basis, such as an employee engagement survey. The objective of this study was to assess the utility of a brief engagement survey that included a small subset of well-being items among health care providers working in an academic medical center.

Methods

In this cross-sectional study, health care providers, including physicians and advanced clinical practitioners, employed at an academic medical center completed a brief, digital engagement survey consisting of 11 quantitative items and 1 qualitative item administered by Dialogue™. The emphasis of this study was on the quantitative responses. Item responses were compared by sex and degree, domains were identified via exploratory factor analysis (EFA), and internal consistency of item responses was assessed via McDonald’s omega. Sample burnout was compared against national burnout.

Results

Of the 791 respondents, 158 (20.0%) were Advanced Practice Clinicians (APCs) and 633 (80.0%) were Medical Doctors (MDs). The 11-item engagement survey had high internal consistency, with omega estimates ranging from 0.80 to 0.93, and was shown, via EFA, to have three domains: communication, well-being, and engagement. Significant differences by sex and degree were found in the odds of agreement for some of the 11 items. In this study, 31.5% reported experiencing burnout, which was significantly lower than the national average of 38.2%.

Conclusion

Our findings indicate initial reliability, validity, and utility of a brief, digital engagement survey among health care professionals. This may be particularly useful for medical groups or health care organizations that are unable to administer a discrete well-being survey to employees.

Background

The association between the well-being of health care professionals (HCPs) and metrics related to quality, safety and the overall performance of health care systems is well-documented [1]. Burnout among HCPs is strongly correlated with lower patient satisfaction and treatment compliance, and increased rates of medical error, hospital infections, malpractice litigation, work-related interpersonal conflict, and staff turnover [2,3,4,5,6]. HCPs experiencing burnout are more likely to be dissatisfied with their work and are at an increased risk for mood and anxiety disorders, substance misuse and substance use disorders, and suicide [7,8,9,10,11,12]. This dire combination of system and individual level consequences stresses the critical importance of assessing the well-being of HCPs, ideally on a regular basis and along with other standard practices of institutional performance measurement. Understanding how well-being influences other institutional performance metrics, such as financial performance and patient satisfaction, can facilitate the implementation of better tailored and more effective improvement interventions that will have a sustainable and lasting impact on the health care system.

There are many ways to measure the well-being of HCPs across an organization. Commonly used instruments range from comprehensive assessments to single-item burnout measures. In a National Academy of Medicine discussion paper, Dyrbye and colleagues highlight common considerations for system-wide well-being measurement and specifically recommend that the data being collected be important to stakeholders, be widely applicable and actionable, detect and reflect changes in the institution, place limited burden on respondents and the organization, and be founded on items that exhibit a sufficient level of construct validity [13]. These insights and the availability of multiple assessment options have helped further well-being measurement across health care systems; however, survey administration remains challenging due to coordination with other surveys, potential survey fatigue and low response rates, limited finances, time, and personnel for data collection and interpretation, or the perceived need to focus on other matters.

One way to address these competing demands is to include well-being measures on employee engagement surveys, which are administered across health care systems on a consistent basis. Previous research indicates that incorporating well-being items into employee engagement surveys creates a more informative data set and adds richness to data interpretation, such as understanding how leadership communication influences employee burnout and satisfaction or analyzing how the combination of employee resilience and burnout influences patient experience [14, 15]. More needs to be done to understand the different ways well-being and engagement can be measured and how the resulting data can be interpreted and acted upon. The purpose of this study was to evaluate the utility of an emerging, brief, digital engagement survey that includes a small subset of well-being items. Our efforts were guided by three research questions: 1) What is the internal consistency of item responses? 2) What is the construct validity of the assessment instrument? 3) Do respondents answer the engagement survey differently based on provider role?

Methods

Participants

University of Utah Health (U of U Health) annually surveys all academic faculty and staff employed within the health sciences campus to measure employee engagement. This survey is administered by Dialogue™, formerly known as Waggl™, a digital, organizational engagement survey company [16]. U of U Health contracts with Dialogue™ for administration of the survey, use of their questionnaire items, access to their reporting software, and use of their bank of qualitative assessment responses. The present cross-sectional study focuses on the 791 physicians and advanced practice clinicians who completed the survey in either January or April 2019. Participation was voluntary, and all data were confidential. Although Dialogue™ tracks responses by employee identification number, identifying information was not available to any U of U Health employee. Ethical approval and informed consent to participate were waived by the U of U Health Institutional Review Board (IRB# 00124369). All methods were carried out in accordance with relevant guidelines and regulations.

Measurement

The engagement survey consisted of 11 quantitative items and 1 qualitative item to measure employee satisfaction, opportunities for professional development and advancement, job-related resources, workplace communication, and well-being. The complete list of items is in Supplemental Table 1. Eight of the survey items were from the Dialogue™ question bank. These items were derived from the field of employee engagement research, selected through expert consensus, and have been used broadly throughout the health care industry [17]. The remaining three items, specifically those measuring work control, workplace stress, and burnout, were adapted and modified from the Mini-Z worklife survey to fit the direction of the agreement scale of the instrument. The Mini-Z survey has demonstrated moderate reliability, with a Cronbach’s alpha of 0.8 for the complete measure, and good internal validity [18]. Additionally, the single item measuring burnout is highly correlated with the emotional exhaustion scale of the Maslach Burnout Inventory [19]. The 5-point Likert scale of the Dialogue™ instrument had the following anchors: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. Responses for each quantitative item were dichotomized by agreement, with “strongly agree” and “agree” recategorized as a “Yes” response and “neutral,” “disagree,” and “strongly disagree” recategorized as a “No” response. Participants who responded to the quantitative items were included in all analyses, regardless of whether they completed the survey in its entirety.
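To make this recoding concrete, the following is a minimal Python sketch (not the authors' SAS code); the item column names and values are hypothetical.

```python
# Minimal sketch (hypothetical data, not the study dataset): recoding
# 5-point Likert responses into the dichotomous agreement variable
# described above.
import pandas as pd

# Each quantitative item is assumed to be stored as an integer from
# 1 (strongly disagree) to 5 (strongly agree).
responses = pd.DataFrame({
    "burnout_not_a_problem": [5, 4, 3, 2, 1],
    "stress_is_manageable":  [4, 4, 2, 5, 3],
})

# 4-5 ("agree"/"strongly agree") -> "Yes"; 1-3 ("neutral" or lower) -> "No".
agreement = responses.apply(lambda col: (col >= 4).map({True: "Yes", False: "No"}))
print(agreement)
```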

Table 1 Characteristics of participants (overall and stratified by degree)

Demographic information was populated from human resources records when participants completed the survey. Available demographic variables included age, sex, race and ethnicity, faculty appointment type (research or clinical), and academic degree. Data from the sole qualitative item, “What would make you feel more appreciated at work? How would this be impactful?”, are not included in the present study.

Statistical analyses

Descriptive demographic information was summarized overall with counts and percentages because all variables were categorical. With provider type (i.e., physician vs. advanced practice clinician) being the primary exposure variable of interest, demographics were also stratified by this variable, and characteristic comparisons between physicians and advanced practice clinicians were made using a Chi-Square test for sufficiently large sample sizes and Fisher’s Exact test for small sample sizes. For analysis of the 11 dichotomized items, item agreement percentages were presented overall and stratified by demographics of interest (i.e., sex and degree status) and compared with a Chi-Square test. Additionally, odds ratios with 95% confidence intervals (CIs) were presented to assess the odds of agreement for each item between the stratified groups.
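As a rough illustration of these item-level comparisons, the sketch below uses Python with SciPy rather than the SAS procedures actually used, with made-up 2×2 counts, to run a chi-square test, Fisher's exact test, and an odds ratio with a 95% Wald confidence interval.

```python
# Illustrative sketch (assumed 2x2 counts, not the study data): comparing
# item agreement between two groups.
import numpy as np
from scipy import stats

# Rows: groups being compared (e.g., MD vs. APC); columns: agreement (Yes, No).
table = np.array([[420, 213],
                  [ 69,  89]])

chi2, p, dof, expected = stats.chi2_contingency(table)

# Fisher's exact test, preferred when expected cell counts are small.
odds_fisher, p_fisher = stats.fisher_exact(table)

# Odds ratio with a 95% Wald confidence interval on the log-odds scale.
a, b, c, d = table.ravel()
or_hat = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(or_hat) + np.array([-1.96, 1.96]) * se_log_or)
print(f"chi-square p = {p:.3f}, OR = {or_hat:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```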

While individual items alone provided useful insights, of greater interest were underlying trends seen across combinations of these items. As such, an exploratory factor analysis (EFA), with iterated principal axis factor extraction and squared multiple correlations on the diagonal of the correlation matrix, was conducted on the 11 quantitative items on their original scale. This was done to confirm the validity of grouping certain items into domains. Orthogonal varimax and oblique promax rotations were examined; given the high correlations between factors and the more nearly simple structure obtained (see Supplemental Figs. 1a-1b), the promax rotation was used for the final results. Rotated factor pattern loadings (correlations between items and factors) were provided to determine the item domains. Item communalities (the proportion of each item’s variance contributed by the factors) as well as inter-factor correlations were also provided. Diagnostics were assessed to confirm an optimal number of factors and overall factor solution. As a sensitivity analysis, the EFA was repeated while changing the extraction method to maximum likelihood and to minimum residual to confirm stable loadings of the domains.
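For readers who want to reproduce a comparable analysis outside SAS, here is a minimal sketch using the Python factor_analyzer package; item_df is a hypothetical data frame of the 11 items on their original 1–5 scale, and the returned objects correspond to the pattern loadings, communalities, and inter-factor correlations described above.

```python
# Illustrative sketch (not the authors' SAS code): EFA with an oblique
# promax rotation on hypothetical item-level data.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(item_df: pd.DataFrame, n_factors: int = 3):
    """item_df is assumed to hold the 11 items as columns on their 1-5 scale."""
    # Principal-axis style extraction with an oblique promax rotation.
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="promax")
    fa.fit(item_df)

    loadings = pd.DataFrame(fa.loadings_, index=item_df.columns)      # rotated pattern loadings
    communalities = pd.Series(fa.get_communalities(), index=item_df.columns)
    factor_corr = pd.DataFrame(fa.phi_)  # inter-factor correlations (oblique rotations only)
    return loadings, communalities, factor_corr
```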

To measure the internal consistency of item responses, McDonald’s omega was used [20]. This was preferred over the commonly used Cronbach’s alpha because omega can handle multiple latent dimensions in the item responses (as opposed to the single dimension, or “unidimensionality,” assumed by alpha), correlated errors (alpha assumes independent errors), and violations of tau-equivalence (items may have different factor loadings, whereas alpha assumes they are all equal), and because omega outperforms alpha under such conditions [21,22,23,24,25,26,27,28,29]. Thus, with factors underlying the items, as well as correlations between those factors, total omega and hierarchical omega were employed to account for these phenomena. In addition, the algebraic greatest lower bound (GLBa) was used as a companion estimate, which has been shown to be reliable in the presence of non-normal or skewed data [28, 30,31,32,33].
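For reference, under one common parameterization (a bifactor-style model with general-factor loadings λg,i, group-factor loadings λf,i, and item uniquenesses ψi; this is a standard textbook formulation rather than a formula reported by the authors), total and hierarchical omega can be written as follows.

```latex
% Total omega: reliable variance from the general factor and all group
% factors, relative to total scale variance. Hierarchical omega: reliable
% variance attributable to the general factor alone.
\[
\omega_t = \frac{\bigl(\sum_i \lambda_{g,i}\bigr)^{2} + \sum_{f}\bigl(\sum_{i\in f} \lambda_{f,i}\bigr)^{2}}
                {\bigl(\sum_i \lambda_{g,i}\bigr)^{2} + \sum_{f}\bigl(\sum_{i\in f} \lambda_{f,i}\bigr)^{2} + \sum_i \psi_i},
\qquad
\omega_h = \frac{\bigl(\sum_i \lambda_{g,i}\bigr)^{2}}
                {\bigl(\sum_i \lambda_{g,i}\bigr)^{2} + \sum_{f}\bigl(\sum_{i\in f} \lambda_{f,i}\bigr)^{2} + \sum_i \psi_i}.
\]
```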

An additional analysis consisted of comparing the sample percentage of burnout (those who answered “strongly disagree” or “disagree” to the item “Burnout is not a problem for me”) to the national percentage using a one-sample z hypothesis test for proportions and a 5% significance level.
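A minimal sketch of this comparison in Python (statsmodels rather than the SAS procedure actually used), with the sample size and percentages as reported in the Results:

```python
# Minimal sketch: one-sample z-test for a proportion, comparing the sample
# burnout rate against the national benchmark of 38.2%.
from statsmodels.stats.proportion import proportions_ztest

n = 791                        # survey respondents
burned_out = round(0.315 * n)  # 31.5% of the sample reported burnout
z_stat, p_value = proportions_ztest(count=burned_out, nobs=n, value=0.382)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```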

As a sub-analysis, domains from the EFA were converted into weighted factor scores. Because the domains were shown to be correlated, all demographic predictors were fit simultaneously in a multivariate linear regression with all domains as outcomes; that is, a model was fit for each outcome using the same set of predictors, with coefficients allowed to covary across models. To confirm the selection of predictors, a multivariate analysis of variance (MANOVA) Pillai test was conducted to determine which predictors jointly contributed significantly to all outcomes. Adjusted regression coefficients (\(\widehat{\beta}_{ADJ}\)) were calculated for the predictors, representing the average change in the domain outcome for each one-unit increase in the predictor. Significance of predictors was reported with p-values. Model diagnostics were assessed for predictor/outcome sets to ensure optimal fit. To capture uncertainty in the estimates, given that multiple sets of covarying coefficients were present, 95% confidence ellipses were plotted in two dimensions (two outcomes were plotted at a time, and all combinations were assessed). Each ellipse captures the area within which one can be 95% confident that the true joint domain outcome is contained. With the predictors of gender and degree being of interest, all comparisons considered these predictors while holding all others constant at their mean levels.
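The sketch below illustrates the joint-outcome testing step with Python statsmodels (not the authors' SAS code); the outcome and predictor column names are hypothetical.

```python
# Minimal sketch (hypothetical column names, not the study data): MANOVA
# with Pillai's trace for demographic predictors of the three factor-score
# outcomes described above.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def pillai_test(df: pd.DataFrame):
    """df is assumed to hold factor scores (communication, wellbeing,
    engagement) and demographic predictors (sex, degree, age_group, ...)."""
    mv = MANOVA.from_formula(
        "communication + wellbeing + engagement ~ sex + degree + age_group + race + appointment",
        data=df,
    )
    # mv_test() reports Pillai's trace (among other statistics) per predictor.
    return mv.mv_test()
```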

All other hypothesis tests (besides the one sample z-test) were two-sided with a significance level of 5%. All analyses were performed in SAS, version 9.4 (SAS Institute Inc).

Results

Sample characteristics

Of the 1,447 providers invited to participate in the survey, 791 (54.7%) completed it and had responses eligible for analysis. The respondents accurately reflected the demographics of providers within U of U Health regarding sex, race/ethnicity, age, professional degree, and role within the institution. More specifically, the total U of U Health provider population was mostly male (51.8%), aged 30–49 years (68.1%), white (78.3%), physicians (83.0%), and primarily clinical (80.4%). Survey respondents showed a similar profile: mostly male (54.7%), aged 30–49 years (64.5%), white (78.3%), physicians (80.0%), and primarily clinical (89.6%; Table 1). Compared to APCs, physicians had a significantly higher percentage of males (62.4% vs. 22.4%, P < 0.001) and a lower percentage of mostly clinical work but a higher percentage of mostly research work (clinical: 87.5% vs. 98.1%; research: 7.3% vs. 0.0%; P < 0.001). No significant differences were found in age or race/ethnicity between APCs and physicians (P = 0.44 and 0.51, respectively; Table 1). These comparisons were similar to comparisons between the total populations of APCs and physicians working at U of U Health. More specifically, total U of U Health APCs had a higher percentage of females (77.0%) than U of U Health physicians (43.0%). APCs and physicians were both primarily white (84% vs. 76.9%) and aged 30–49 years (75.6% vs. 67.9%). These demographic comparisons are also similar to comparisons between the populations of APCs and physicians working in the state of Utah [34,35,36].

Item comparisons by sex and degree

The item agreement responses are displayed in Tables 2 and 3. Overall, agreement ranged from a high of 95% for the statement “I am motivated to do my best” to a low of 41% for “Burnout is not a problem for me.” Compared to females, males reported significantly higher agreement percentages for the statements “I have adequate opportunities to advance my career at U of U Health” (71.1% vs. 61.2%; P = 0.004), “I have control over workload” (50.8% vs. 37.6%; P < 0.001), “My work-related stress is manageable” (73.9% vs. 61.2%; P < 0.001), and “Burnout is not a problem for me” (48.7% vs. 31.2%; P < 0.001) (Table 2).

Table 2 Agreement responses by sex
Table 3 Agreement responses by degree

By degree, physicians also reported a significantly higher agreement percentage than APCs for the statement “I have adequate opportunities to advance my career” (71.7% vs. 43.7%; P < 0.001). Whereas the odds of agreeing with this statement were 1.56 times higher for males than for females, they were 3.27 times higher for physicians than for APCs. Similar to the pattern by sex, physicians reported significantly higher percentages of agreement for the items on workload control, manageable work-related stress, and lack of burnout (with the burnout p-value on the boundary of significance: 0.06). Physicians also reported 65.6% agreement with the statement “My input is sought and considered,” whereas APCs reported only 53.8% (P = 0.01; Table 3).

Exploratory factor analysis

A final count of three factors was determined by various methods, including content expertise on the number of expected domains, a scree plot (Supplemental Fig. 2) with an elbow in eigenvalues at three factors, discontinuation of high loadings beyond three factors, and negligible change in the root mean square of the residuals (RMSR) beyond three factors (RMSR = 0.02). Finally, the Tucker-Lewis Index of factoring reliability was 0.979, again indicating an optimal factor solution. Tables 4 and 5 and Fig. 1 show the results of the EFA. The factors were identified by the highest loadings and were characterized as: (1) communication (“My supervisor keeps me informed,” “I can express opinions without fear of retribution,” “My input is sought, heard, and considered”); (2) well-being (“I have control over my workload,” “My work-related stress is manageable,” “Burnout is not a problem for me”); and (3) engagement (“I would recommend UUH as a great place to work,” “I see myself working at UUH in two years”). These three factors accounted for 59% of the variability in all the survey items. In addition, the factors exhibited significantly positive correlations with each other: communication had a correlation of 0.63 with well-being and 0.77 with engagement, and well-being had a correlation of 0.70 with engagement. All correlation significance tests revealed P < 0.001. Sensitivity analyses revealed that the loading patterns remained consistent across all extraction adjustments to the EFA (Supplemental Figs. 3-4 and Supplemental Tables 1a-b, 2a-b). The responses indicated a high internal consistency with a total omega of 0.93, hierarchical omega of 0.80, and GLBa of 0.94.

Table 4 Rotated factor pattern loadings
Table 5 Inter-factor correlations
Fig. 1 Rotated factor pattern loadings (iterated principal axis factor extraction with promax rotation)

Assessing the state of burnout

The sample percentage of burnout in our study, 31.5%, was significantly lower than the national percentage of 38.2% (221/579), taken from national data for 579 clinicians in the ACLGIM Worklife and Wellness Project [18] (P < 0.001; Table 6).

Table 6 Burnout proportion (n = 791)

Sub-analysis: multivariate linear regression on factor domains

The MANOVA Pillai test indicated the utility of including all predictors as joint predictors in multivariate modeling (all P < 0.001 or on the boundary of significance [i.e., 0.05 < P < 0.10]). Compared to those aged 30–39, all other age groups had lower outcomes for communication, well-being, and engagement on average. Males had higher outcomes than females, and all other races (compared to white) had lower outcomes. Those doing mostly research, or other work, had higher outcomes than those primarily focused on clinical work. Those with an MD reported higher outcomes than APCs (Supplemental Table 3). Supplemental Figs. 5a-c show gender/degree predictions for the joint outcomes as well as 95% confidence ellipses. When considering outcomes jointly, and across all gender and degree comparisons, all outcomes were positively correlated (increases in one outcome were associated with increases in the other outcomes). Females and APCs had the lowest outcomes, whereas males and MDs had the highest outcomes.

Discussion

These findings demonstrate the initial internal-consistency reliability (via omega), construct validity (via EFA), and utility of a brief engagement survey among HCPs. Three primary domains were identified within the measurement tool: engagement, communication, and well-being. While these factors do not encompass an exhaustive assessment of provider well-being, they do provide guidance and considerations for future organizational measurement and research. For instance, the identified well-being domain consists of three items that were developed and adapted to fit the direction of the Likert scale of the brief instrument. This was originally viewed as a potential downside of using this survey tool. Yet the burnout rate generated from one of these new items was consistent with a previous assessment of burnout among our providers, where we used a measure with well-established reliability and validity, and it was similar to the current national provider burnout rate [18, 37]. This finding contributes to the national conversation about how burnout might be asked about and measured in a variety of ways, which allows more room for innovation and flexibility when including well-being items in system-wide assessment.

The remaining two domains also reinforced previous trends related to HCP well-being. Regarding engagement, respondents reported being motivated to do their best almost every day despite approximately one-third of the sample endorsing struggling with burnout and another one-third reporting a neutral response toward experiencing burnout. This observation highlights how the internal drive of HCPs to provide excellent care for patients and be successful at work remains present even during significant exhaustion and possible despair. This finding also calls forth a cruel and costly irony: HCPs experiencing burnout still present to work motivated to do their best despite being at increased risk for causing patient harm [6]. Further understanding of this phenomenon is critical in creating a culture of medicine that supports self-care, boundary setting, and a sustainable, healthy work environment [38]. In addition, the identified communication domain may have implications for understanding psychological safety, an emerging important construct in understanding and addressing group dynamics in healthcare [15, 39].

Between-group comparisons with demographics also generated notable results. The similarities and discrepancies in responses found between physicians and APCs were consistent with previous comparisons between these groups [40,41,42]. Engagement and burnout rates tend to be similar between these roles; however, there is currently more understanding of, and research conducted on, physician burnout. The APCs in our study endorsed higher work-related stress, lower work-related control, fewer opportunities for career advancement, and a lower sense that their input is sought, heard, and considered in comparison to their physician counterparts. This combination of high stress and multiple perceived limitations could lead many APCs to feel trapped in their profession. These findings highlight the need for further investigation of the specific needs of APCs, how they compare to other HCP roles, and how to address these needs in different health care settings.

There were three primary differences between male and female participants in our study. Male providers reported higher perceived well-being, work-related control, and opportunities for career advancement in comparison to female providers. These findings are consistent with previous research on gender discrepancies between HCPs [43, 44]. Female physicians are more likely to experience underrepresentation in leadership positions, slower academic promotion, fewer professional awards, fewer opportunities to present at grand rounds or national lectures, and an increased likelihood of harassment, impostor syndrome, and burnout in comparison to male physicians [45]. Our results highlight a continued need to better understand how the risk and protective factors for burnout may differ between male and female providers and how interventions such as established career pipeline programs for burgeoning leaders, effective mentoring programs for female providers, and addressing implicit bias in the workplace may reduce these disparities.

There are limitations to our study. The results are difficult to generalize because the sample came from one institution and the response rate was moderate. The study design was cross-sectional, so no causality can be attributed to any identified relationships in our findings. The demographic information in this study was drawn from already populated human resources records. It is possible that this information was inaccurate, depending on how participants completed the demographic section of the human resources paperwork when they applied to work at our institution. This limited our ability to make between-group comparisons, especially among racial/ethnic groups. The brief nature of this survey may have been convenient, but it was not exhaustive. Other drivers of burnout, such as the impact of the electronic medical record, workplace efficiency, staffing, salary, and support following a workplace trauma, were not measured [46]. This limitation creates an incomplete picture of workplace well-being at our institution. In addition, the items utilized in this survey have limited validity: the Dialogue™ items have solely exhibited face validity, and the well-being items used in this assessment were not previously validated. It is also important to note that some of the Dialogue™ items can read as double-barreled and need further review and analysis for clarity and usability. Regarding internal consistency, there was a disparity between the total omega (0.93) and the hierarchical omega (0.80). Total omega captures information across all factors without specifying the variance contributions of sub-factors, whereas hierarchical omega does take these specific sub-contributions into account [29]. Although the appropriate cutoff for optimal internal consistency can be debated and should also depend on content expertise, all our reported estimates were no lower than 0.80 and generally coincide with high internal consistency.

Conclusions

Assessing HCP well-being is an important aspect of acting on the quadruple aim in healthcare settings. Findings from this study suggest opportunities for next steps including further assessing the utility and validity of the burnout item of this measure, further examining the communication domain as a simple way to measure psychological safety in the workplace, and continued investigation of how well-being measurement can be incorporated into already existing organizational assessment practices, like engagement and climate surveys. Understanding how well-being influences other institutional performance metrics, such as financial performance and patient satisfaction, can facilitate the implementation of better tailored and more effective improvement interventions that will have a sustainable and lasting impact on the health care system.

Availability of data and materials

The datasets generated and/ or analyzed during the current study are not publicly available due to 1) the sensitivity of the data (e.g., individual information on provider well-being) and 2) license restrictions from Dialogue™. The data are available from the corresponding author upon reasonable request and with permission of Dialogue™.

Abbreviations

APC: Advanced practice clinician
CI: Confidence interval
HCP: Health care professional

References

  1. Shanafelt TD, West CP, Sinsky C, et al. Changes in burnout and satisfaction with work-life integration in physicians and the general US working population between 2011 and 2017. Mayo Clin Proc. 2019;94(9):1681–94. https://doi.org/10.1016/j.mayocp.2018.10.023.

  2. Salyers MP, Bonfils KA, Luther L, et al. The relationship between professional burnout and quality and safety in Healthcare: a Meta-analysis. J Gen Intern Med. 2017;32(4):475–82. https://doi.org/10.1007/s11606-016-3886-9.

  3. Wallace JE, Lemaire JB, Ghali WA. Physician wellness: a missing quality indicator. Lancet Lond Engl. 2009;374(9702):1714–21. https://doi.org/10.1016/S0140-6736(09)61424-0.

  4. Shanafelt TD, Balch CM, Bechamps G, et al. Burnout and medical errors among American surgeons. Ann Surg. 2010;251(6):995–1000. https://doi.org/10.1097/SLA.0b013e3181bfdab3.

  5. Han S, Shanafelt TD, Sinsky CA, et al. Estimating the attributable cost of physician burnout in the United States. Ann Intern Med. 2019;170(11):784. https://doi.org/10.7326/M18-1422.

  6. Panagioti M, Geraghty K, Johnson J, et al. Association between physician burnout and patient safety, professionalism, and patient satisfaction: a systematic review and meta-analysis. JAMA Intern Med. 2018;178(10):1317. https://doi.org/10.1001/jamainternmed.2018.3713.

  7. Dyrbye LN, Thomas MR, Massie FS, et al. Burnout and suicidal ideation among U.S. medical students. Ann Intern Med. 2008;149(5):334–341. https://doi.org/10.7326/0003-4819-149-5-200809020-00008.

  8. Jackson ER, Shanafelt TD, Hasan O, Satele DV, Dyrbye LN. Burnout and alcohol abuse/dependence among U.S. medical students. Acad Med J Assoc Am Med Coll. 2016;91(9):1251–1256. https://doi.org/10.1097/ACM.0000000000001138.

  9. Oreskovich MR, Kaups KL, Balch CM, et al. Prevalence of alcohol use disorders among American surgeons. Arch Surg Chic Ill 1960. 2012;147(2):168–174. https://doi.org/10.1001/archsurg.2011.1481.

  10. Mata DA, Ramos MA, Bansal N, et al. Prevalence of depression and depressive symptoms among resident physicians a systematic review and meta-analysis. JAMA. 2015;314(22):2373–83. https://doi.org/10.1001/jama.2015.15845.

  11. Schernhammer E. Taking their own lives – the high rate of physician suicide. N Engl J Med. 2005;352(24):2473–6. https://doi.org/10.1056/NEJMp058014.

  12. Fahrenkopf AM, Sectish TC, Barger LK, et al. Rates of medication errors among depressed and burnt out residents: prospective cohort study. BMJ. 2008;336(7642):488–91. https://doi.org/10.1136/bmj.39469.763218.BE.

  13. Dyrbye LN, Meyers D, Ripp J, Dalal N, Bird SB, Sen S. A pragmatic approach for organizations to measure health care professional well-being. NAM Perspect. Published online October 1, 2018. https://doi.org/10.31478/201810b.

  14. Howell TG, Mylod DE, Lee TH, Shanafelt T, Prissel P. Physician Burnout, Resilience, and Patient Experience in a Community Practice: Correlations and the Central Role of Activation. J Patient Exp. Published online December 15, 2019:2374373519888343. https://doi.org/10.1177/2374373519888343.

  15. Shanafelt TD, Gorringe G, Menaker R, et al. Impact of organizational leadership on physician burnout and satisfaction. Mayo Clin Proc. 2015;90(4):432–40. https://doi.org/10.1016/j.mayocp.2015.01.012.

  16. WAGGL. Employee voice in healthcare: empowering and engaging workers to drive high reliability. Published online 2019. Accessed 15 Jan 2020.  https://www.waggl.com/white-papers/employee-voice-in-healthcare-ebook/.

  17. Perceptyx. The healthcare employee experience in 2022: a data-driven report.   Accessed 23 Feb 2023.  https://go.perceptyx.com/research-the-healthcare-employee-experience-in-2022-a-data-driven-perspective.

  18. Linzer M, Poplau S, Babbott S, et al. Worklife and wellness in academic general internal medicine: results from a national survey. J Gen Intern Med. 2016;31(9):1004–10. https://doi.org/10.1007/s11606-016-3720-4.

  19. Rohland BM, Kruse GR, Rohrer JE. Validation of a single-item measure of burnout against the Maslach Burnout Inventory among physicians. Stress Health. 2004;20(2):75–9. https://doi.org/10.1002/smi.1002.

  20. McDonald RP. Test theory: a unified treatment. New York: Psychology Press; 1999.https://doi.org/10.4324/9781410601087.

  21. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16(3):297–334.

  22. Tarkkonen L, Vehkalahti K. Measurement errors in multivariate measurement scales. J Multivar Anal. 2005;96(1):172–89. https://doi.org/10.1016/j.jmva.2004.09.007.

  23. Zinbarg RE, Revelle W, Yovel I, Li W. Cronbach’s α, Revelle’s β, and Mcdonald’s ωH: their relations with each other and two alternative conceptualizations of reliability. Psychometrika. 2005;70(1):123–33. https://doi.org/10.1007/s11336-003-0974-7.

  24. Zinbarg RE, Yovel I, Revelle W, McDonald RP. Estimating generalizability to a latent variable common to all of a scale’s indicators: a comparison of estimators for ωh. Appl Psychol Meas. 2006;30(2):121–44. https://doi.org/10.1177/0146621605278814.

  25. Revelle W, Zinbarg RE. Coefficients Alpha, Beta, Omega, and the glb: comments on Sijtsma. Psychometrika. 2008;74(1):145. https://doi.org/10.1007/s11336-008-9102-z.

  26. Dunn TJ, Baguley T, Brunsden V. From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. Br J Psychol. 2014;105(3):399–412. https://doi.org/10.1111/bjop.12046.

  27. Revelle W, Condon DM. Reliability. In: The Wiley handbook of psychometric testing: a multidisciplinary reference on survey, scale and test development. Hoboken, NJ: John Wiley & Sons, Ltd; 2018.

  28. Trizano-Hermosilla I, Alvarado JM. Best alternatives to Cronbach’s alpha reliability in realistic conditions: congeneric and asymmetrical measurements. Front Psychol. 2016;7. https://doi.org/10.3389/fpsyg.2016.00769. Accessed 22 Nov 2022.

  29. Trizano-Hermosilla I, Gálvez-Nieto JL, Alvarado JM, Saiz JL, Salvo-Garrido S. Reliability estimation in multidimensional scales: comparing the bias of six estimators in measures with a Bifactor structure. Front Psychol. 2021;12. https://doi.org/10.3389/fpsyg.2021.508287. Accessed 22 Nov 2022.

  30. Sijtsma K. On the use, the misuse, and the very limited usefulness of Cronbach’s Alpha. Psychometrika. 2008;74(1):107. https://doi.org/10.1007/s11336-008-9101-0.

  31. Woodhouse B, Jackson PH. Lower bounds for the reliability of the total score on a test composed of non-homogeneous items: II: A search procedure to locate the greatest lower bound. Psychometrika. 1977;42(4):579–91. https://doi.org/10.1007/BF02295980.

  32. Ten Berge JMF, Sočan G. The greatest lower bound to the reliability of a test and the hypothesis of unidimensionality. Psychometrika. 2004;69(4):613–25. https://doi.org/10.1007/BF02289858.

  33. Moltner A, Revelle W. Find the greatest lower bound to reliability. Published online 2015. Accessed 15 Jan 2020. http://personality-project.org/r/psych/help/glb.algebraic.html.

  34. Utah Medical Education Council. Utah’s advanced practice registered nurse workforce, 2017: a study of the supply and distribution of APRNs in Utah. Published online 2017. Accessed 19 Apr 2023. https://umec-nursing.utah.gov/wp-content/uploads/APRN_2017_FINAL.pdf.

  35. Utah Medical Education Council. Utah’s physician assistant workforce, 2019: a study of the supply and distribution of physician assistants in Utah. Published online 2019. Accessed 19 Apr 2023. https://umec.utah.gov/wp-content/uploads/2019-PAReport-Final-2019.10.24.pdf.

  36. Utah Medical Education Council. Utah’s Physician Workforce. 2020. Published online 2020. Accessed 19 Apr 2023. https://umec.utah.gov/wp-content/uploads/2020-Physician-Workforce-Report-final.pdf.

  37. Morrow E, Call M, Marcus R, Locke A. Focus on the quadruple aim: development of a resiliency center to promote faculty and staff wellness initiatives. Jt Comm J Qual Patient Saf. 2018;44(5):293–8.

  38. Shanafelt TD, Schein E, Minor LB, Trockel M, Schein P, Kirch D. Healing the professional culture of medicine. Mayo Clin Proc. 2019;94(8):1556–66. https://doi.org/10.1016/j.mayocp.2019.03.026.

  39. Nembhard IM, Edmondson AC. Making it safe: the effects of leader inclusiveness and professional status on psychological safety and improvement efforts in health care teams. J Organ Behav. 2006;27(7):941–66. https://doi.org/10.1002/job.413.

  40. Tetzlaff ED, Hylton HM, DeMora L, Ruth K, Wong YN. National study of burnout and career satisfaction among physician assistants in oncology: implications for team-based care. J Oncol Pract. 2018;14(1):e11–22. https://doi.org/10.1200/JOP.2017.025544.

  41. Freeborn DK, Hooker RS, Pope CR. Satisfaction and well-being of primary care providers in managed care. Eval Health Prof. 2002;25(2):239–54. https://doi.org/10.1177/01678702025002008.

  42. Hoff T, Carabetta S, Collinson GE. Satisfaction, burnout, and turnover among nurse practitioners and physician assistants: a review of the empirical literature. Med Care Res Rev. 2019;76(1):3–31. https://doi.org/10.1177/1077558717730157.

  43. Linzer M, Harwood E. Gendered expectations: do they contribute to high burnout among female physicians? J Gen Intern Med. 2018;33(6):963–5. https://doi.org/10.1007/s11606-018-4330-0.

  44. Templeton K, Bernstein CA, Sukhera J, et al. Gender-based differences in burnout: issues faced by women physicians. NAM Perspect. Published online May 30, 2019. https://doi.org/10.31478/201905a.

  45. Newman C, Templeton K, Chin EL. Inequity and women physicians: time to change millennia of societal beliefs. Perm J. 2020;24:1–6. https://doi.org/10.7812/TPP/20.024.

  46. Brady KJS, Kazis LE, Sheldrick RC, Ni P, Trockel MT. Selecting physician well-being measures to assess health system performance and screen for distress: Conceptual and methodological considerations. Curr Probl Pediatr Adolesc Health Care. 2019;49(12):100662. https://doi.org/10.1016/j.cppeds.2019.100662.

Acknowledgements

The authors would like to recognize Rick Smith, Senior Human Resources Director of U of U Health, who invited us to collaborate with his team on the employee engagement survey.

Funding

Not applicable.

Author information

Contributions

MC and FQ conceptualized the research question; MC, FQ and BT acquired the data; FQ and BT analyzed the data; MC, FQ, BT and AL interpreted the data; MC, FQ and BT drafted the manuscript; and EM, DW, BH, AL revised it. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Megan Call.

Ethics declarations

Ethics approval and consent to participate

The ethical approval for this study was exempted and informed consent to participate was waived by the University of Utah Health Institutional Review Board (IRB# 00124369). All methods were carried out in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Call, M., Qeadan, F., Tingey, B. et al. Measuring provider well-being: initial validation of a brief engagement survey. BMC Health Serv Res 23, 432 (2023). https://doi.org/10.1186/s12913-023-09449-w
