Research article | Open access

Increasing health service access by expanding disease coverage and adding patient navigation: challenges for patient satisfaction

Abstract

Background

Cancer control programs have added patient navigation to improve effectiveness in underserved populations, but research has yielded mixed results about its impact on patient satisfaction. This study addresses three related research questions about a U.S. state cancer screening program before and after a redesign that added patient navigators and services for chronic illness: Did patient diversity increase? Did satisfaction levels improve? Did socioeconomic characteristics or perceived barriers explain any improvement in satisfaction?

Methods

Representative statewide patient samples were surveyed by phone both before and after the program redesign. Measures included satisfaction with overall health care and with specific services, experience of eleven barriers to accessing health care, and self-reported health and sociodemographic characteristics. Multiple regression analysis was used to identify independent effects.

Results

After the program redesign, the percentages of Hispanic and African American patients each increased by more than 200% and satisfaction with overall health care quality rose significantly, but satisfaction with the program and with primary program staff declined. Sociodemographic characteristics explained the apparent program effects on overall satisfaction, but perceived barriers did not. Further analysis indicated that patients being seen for cancer risk were more satisfied if they had a patient navigator.

Conclusions

Health care access can be improved and patient diversity increased in public health programs by adding patient navigators and delivering more holistic care. Effects on patient satisfaction vary with patient health needs, with those being seen for chronic illness likely to be less satisfied with their health care than those being seen for cancer risk. It is important to use appropriate comparison groups when evaluating the effect of program changes on patient satisfaction and to consider establishing appropriate satisfaction benchmarks for patients being seen for chronic illness.


Background

Patient satisfaction is an important goal for health care services, as both an indicator of service quality and as an influence on subsequent service utilization [1,2,3,4,5]. Increasing patient satisfaction with care delivery may thus provide a means for improving engagement in and outcomes of care delivery [6] and was designated by the American Cancer Society’s Patient Navigation Leadership Summit as one of the key metrics for evaluating patient navigation programs [7].

Despite some positive evidence and the clear potential of patient navigation programs to improve care delivery to underserved populations, however, research has not consistently linked patient navigation to increased satisfaction [8,9,10,11]. A systematic review of satisfaction outcomes from nine patient navigation studies with a comparison group reported that three studies found positive effects of patient navigation on overall patient satisfaction; in a pooled meta-analysis of data from five of the studies, only one found that patient navigation increased patient satisfaction with cancer care (that study was also the only randomized controlled trial among the five) [12]. The multisite Patient Navigation Research Program (PNRP) found that patient navigation improved satisfaction only among socially disadvantaged patients at two sites serving patients with recent breast or colorectal cancer diagnoses [13, 14].

One reason for inconsistent effects of patient navigation on satisfaction with care may be that studies have not taken into account the financial, practical, and cultural barriers that patients may confront and the ability of patient navigators to overcome them [15,16,17]. Reducing some barriers may increase patient satisfaction, but this may not occur with all barriers [11]. In addition, more complex health care problems that are more difficult to treat may be less likely to have satisfying outcomes [18]. Some of the variation in prior research findings may also reflect differences in influences on satisfaction with health care overall compared to satisfaction with specific health care experiences [19, 20].

Sociodemographic characteristics may also influence patient satisfaction and so must be taken into account. Prior research identifies age as a consistent predictor of greater satisfaction, in spite of its association with declining health [19, 21, 22]. Satisfaction also has been associated in prior research with better self-reported health [21, 23], fewer symptoms of depression [23, 24], and English proficiency [25]. African Americans and immigrants have faced particular barriers to health care that have resulted in lower levels of satisfaction [26,27,28]. In addition, higher levels of education have been associated consistently with lower levels of satisfaction [20, 21].

The broader research literature on patient satisfaction thus suggests that inconsistent findings about the effect of patient navigation programs on patient satisfaction may be due in part to insufficient controls for other influences that may explain or alter these effects. The addition of patient navigators to an established cancer detection program created an opportunity to test these possibilities.

The program redesign

The National Breast and Cervical Cancer Early Detection Program (NBCCEDP) initiated by Congress in 1990—and the case management component added in 1998—improved access to breast and cervical cancer screening for low income, underserved women by funding programs in every state, territory, and tribe in the United States [29, 30]. In many states, NBCCEDP nurse case managers followed up with women identified as at high risk for breast or cervical cancer and encouraged them to seek conclusive diagnostic testing and, when warranted, referral to treatment. However, outcome evaluations revealed that screening disparities related to socioeconomic status remained [31,32,33].

After a state-wide program evaluation led by the authors identified lower rates of screening and referral among eligible women with immigrant backgrounds, an Expert Panel recommended two major program changes: (1) Community-based patient navigators were added to improve outreach at each health care delivery site; (2) The patient navigators were also expected to assist with management of the chronic illnesses that both increased cancer risk and created barriers to screening and treatment. The CDC granted a waiver of the original program requirements to allow the inclusion of chronic illness as well as cancer risk; financial eligibility continued to be limited to under- and uninsured persons at or below 250% of the federal poverty level (corresponding to $27,925 for one person in 2012 in U.S. dollars).

The case managers at each site continued to focus on diagnostic screening and referral to treatment for patients at high risk of breast or cervical cancer, while the patient navigators worked with program patients to improve their management of such chronic diseases as hypertension, asthma, and diabetes. The patient navigators were hired from the most underserved communities; as a result, two-thirds were Hispanic and nine in ten were bilingual. Although the patient navigators were not expected to have any formal medical education, a 50-hour patient navigation training program was designed for them and delivered at a central location. A program director supervised both patient navigators and case managers at each service delivery site.

Study aims

This study examined multiple satisfaction measures in the program before and after the redesign. Patient characteristics were controlled as possible confounders, perceived barriers were examined as potential mediators, and four measures of different aspects of satisfaction were included as outcomes. Specifically, the analysis is designed to answer three interrelated research questions: (1) Did patient diversity increase after the program redesign? (2) Did any aspects of health care satisfaction increase among those from ethnic or immigrant backgrounds, as intended by the program redesign? (3) Did sociodemographic characteristics or perceived barriers explain improvements in satisfaction levels?

Methods

Surveys of representative samples of program patients conducted by the University’s Center for Survey Research before and after the program change created a repeated cross-sectional design (“trend study”) [34] that allowed before-after comparisons of patient characteristics, perceived barriers, and average satisfaction levels.

Study participants

Both samples were selected randomly from the public health agency’s database of program service recipients, allowing study eligibility to be determined prior to patient contact. The eligible population for the “pre” survey in 2005 consisted of the 3178 women who had received any case management services from the program in the preceding 2 years (2003–2004) at any of its 26 sites, had been screened for breast or cervical cancer, and spoke English, Spanish, or Portuguese. Of 385 patients sampled randomly for interviews, 76% (293) could be located and 71% of these were interviewed, for a final sample size of 207 (54% of the initial sample). The eligible population for the “post” survey in 2012 consisted of the patient population with either case management or patient navigation services in the preceding year. By that time, program enrollment had expanded as a result of the program changes to 10,627 (but with the number of program delivery sites statewide reduced to 17). Of 427 patients randomly selected from those who met eligibility criteria (could be located and had received health care at the selected site in the preceding year and could speak English, Spanish, or Portuguese), 383 (90%) completed interviews (33% of the initial random sample prior to checking for eligibility).
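To make the reported rates easier to follow, here is a minimal arithmetic sketch (in Python) using only the counts given above; the printed percentages match the rounded figures reported in the text.

```python
# Response-rate arithmetic for the two surveys, using counts reported in the text.
# Minor rounding differences reflect the rounded percentages in the original.

# "Pre" survey (2005)
pre_sampled = 385      # patients sampled randomly for interviews
pre_located = 293      # sampled patients who could be located
pre_completed = 207    # final number interviewed

print(f"Pre: located {pre_located / pre_sampled:.0%}, "
      f"interviewed {pre_completed / pre_located:.0%} of those located, "
      f"overall {pre_completed / pre_sampled:.0%} of the initial sample")
# -> located 76%, interviewed 71% of those located, overall 54% of the initial sample

# "Post" survey (2012)
post_selected = 427    # eligible patients randomly selected
post_completed = 383   # completed interviews
print(f"Post: completed {post_completed / post_selected:.0%} of those selected")
# -> completed 90% of those selected
```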

All procedures were approved by the University’s Institutional Review Board for the Protection of Human Subjects. Enrolled patients received a letter prior to the interviews informing them that participation in the phone survey was completely voluntary and confidential and would not affect their program services. These points were repeated in the introduction on the phone and the interview did not proceed unless patients consented verbally.

Questionnaires were translated into Spanish and Portuguese using forward and back translation followed by reconciliation of differences; in both surveys, interviews were conducted in English, Spanish, or Portuguese according to patient preference (see supplementary files 1 and 2). There were two differences in the selection of patients in the before and after surveys. (1) Prior to the program redesign, women received active case management only when follow-up testing arranged by the case manager yielded an abnormal finding indicating high risk of breast or cervical cancer. Since the focus of this first survey was on experience with case managers, the sample was stratified by risk and high-risk cases were sampled disproportionately (335 of the 385 in the intended sample, with 50 in the low-risk group). Response rates were similar for the high- and low-risk groups, and after weighting to adjust for the disproportionate selection of high-risk cases (a weighting sketch follows the second point below), the obtained sample was similar in ethnicity and primary language to the program population, but somewhat more educated (89% of the obtained sample had completed high school, compared to 72% of the program population). Only respondents who knew of the program and answered questions about their satisfaction with it are included in the comparative analysis (157 of 207).

(2) After eligibility criteria were expanded in the program redesign, many more men became eligible for program participation; they comprised 16% (n = 61) of the “post” sample, but since men were not included in the first survey they were omitted from this analysis (resulting in a sample size of 320). Although survey respondents could have differed in multiple ways from the program population, there were no statistically significant demographic or location differences in comparison to the total program population, other than underrepresentation of patients at two sites and overrepresentation at one.
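For readers unfamiliar with weighting for disproportionate stratified sampling (point 1 above), the sketch below illustrates standard inverse-probability design weights. The stratum population counts are hypothetical placeholders, since the text reports only the total eligible population (3178) and the intended sample allocation (335 high-risk, 50 low-risk).

```python
# Minimal sketch of design weights for the disproportionate stratified "pre" sample.
# Stratum population counts (N_h) are HYPOTHETICAL; sampled counts (n_h) are from the text.

strata = {
    # stratum: (population count N_h [hypothetical], sampled count n_h [from text])
    "high_risk": (800, 335),
    "low_risk": (2378, 50),
}

# Base design weight: inverse of the within-stratum selection probability.
weights = {s: N_h / n_h for s, (N_h, n_h) in strata.items()}

# Normalize so weights average to 1 across the intended sample, a common convention
# that keeps the weighted sample size equal to the nominal sample size (385 here).
n_total = sum(n_h for _, n_h in strata.values())
mean_w = sum(w * strata[s][1] for s, w in weights.items()) / n_total
weights = {s: w / mean_w for s, w in weights.items()}

print(weights)  # low-risk respondents receive much larger weights than high-risk ones
```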

Measures

Health care satisfaction was operationalized with four indicators that ranged in focus from more general to more specific (see Table 1 for details) [6, 23]: (1) satisfaction with health care overall; (2) satisfaction with the program (first survey) or with the specific health clinic where program services had been received (second survey, because program services were by then less distinct within the clinics); (3) satisfaction with the program staff with whom the respondent had had contact. The measures of satisfaction with program staff were then used to construct two measures of change: satisfaction with the licensed case manager at both times, for those who had a case manager in the revised program, and satisfaction with the licensed case manager in the initial program and with the patient navigator in the revised program, for all respondents [35, 36].

Table 1 Measures

Indexes corresponding to three types of barriers to health care were constructed from responses to a set of questions about “How much of a problem has ___ been for you in getting health care that you required?” [37,38,39]. An exploratory factor analysis of patient responses about 11 specific barriers in both samples identified three factors (economic, fear, and practical), each represented with an index computed as the simple average of its items [40] (a construction sketch follows this paragraph). One question, about immigration status as a barrier, was asked only of respondents who were immigrants.
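As a concrete illustration of the index construction, the sketch below computes simple-average indexes from grouped items; the item names and their assignment to factors are hypothetical stand-ins for the survey's actual 11 barrier items and their observed loadings.

```python
# Minimal sketch of constructing the three barrier indexes as simple averages of the
# items that loaded on each factor. Column names and factor assignments are hypothetical.
import pandas as pd

# Each item asks "How much of a problem has ___ been for you in getting health care
# that you required?", coded on an ordinal scale (e.g., 1 = not a problem ... 4 = big problem).
barrier_items = {
    "economic": ["cost_of_care", "insurance_coverage", "cost_of_medications"],
    "fear": ["fear_of_results", "embarrassment"],
    "practical": ["transportation", "getting_time_off", "child_care"],
}

def build_indexes(df: pd.DataFrame) -> pd.DataFrame:
    """Return one simple-average index per barrier type, ignoring missing items."""
    out = pd.DataFrame(index=df.index)
    for index_name, items in barrier_items.items():
        out[f"{index_name}_barriers"] = df[items].mean(axis=1, skipna=True)
    return out

# Usage: indexes = build_indexes(survey_df), where survey_df holds the item responses.
```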

Health care access was indicated with questions about sources of health insurance and the ability to afford needed medical care [41, 42]. Physical health status was measured with the standard self-report rating [43, 44], while mental health was operationalized with a two-question index of depressive symptoms [45]. Age and education were obtained from program records. Ethnic and racial identities were ascertained with standard questions. A preference for being interviewed in Spanish or Portuguese was used to indicate first-generation immigrant status, as it was in the second survey, in which a specific question about immigration status was also added [26, 46].

Three indicators of program status that would have been affected by the program change were distinguished to clarify the role of the change itself. Cancer risk, which would have been less common after the redesign because of the expanded enrollment criteria, was identified with one question. Recognition of the patient navigator’s name in 2012 was captured with a dichotomous indicator and used to control for the extent of contact. All respondents had a case manager in 2005 and a patient navigator in 2012, but those being seen only for chronic illness in 2012 did not have a case manager, so having a case manager in 2012 in addition to a patient navigator was distinguished with a separate dichotomous indicator.

Statistical analyses

Bivariate tests (t-tests and χ2 tests) were used to identify changes in the composition of the program’s patients, including in race and ethnicity, marital and employment status, and health and health care access. U.S. Census data for the state were used to indicate trends in the state population that might explain changes in ethnic diversity within the program. T-tests were also used to identify differences in levels of each of the satisfaction indicators before and after the program change, and to compare satisfaction with the case manager to satisfaction with the patient navigator for patients who had both after the program change.
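The bivariate comparisons described above can be illustrated with a brief sketch; the variable names are hypothetical, and the data frame is assumed to hold one row per respondent with a 0/1 pre/post indicator.

```python
# Illustrative sketch of the bivariate tests; column names are hypothetical.
import pandas as pd
from scipy import stats

def pre_post_tests(df: pd.DataFrame) -> None:
    # Chi-square test of a change in composition (e.g., race/ethnicity) before vs. after.
    crosstab = pd.crosstab(df["post_redesign"], df["race_ethnicity"])
    chi2, p_chi2, dof, _ = stats.chi2_contingency(crosstab)

    # Independent-samples t-test of a satisfaction indicator before vs. after.
    pre = df.loc[df["post_redesign"] == 0, "overall_satisfaction"].dropna()
    post = df.loc[df["post_redesign"] == 1, "overall_satisfaction"].dropna()
    t, p_t = stats.ttest_ind(post, pre)

    print(f"chi2={chi2:.1f} (df={dof}), p={p_chi2:.3f};  t={t:.2f}, p={p_t:.3f}")
```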

Multiple regression analysis (OLS) was used to determine the independence of any pre-post program change differences in satisfaction levels from program status indicators, and the patient characteristics and access barriers identified in the literature as potential predictors. Tests of interaction effects (using product terms) indicated whether differences in satisfaction levels after the program change occurred only for the ethnic and racial groups that had previously been less satisfied [47].
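A minimal sketch of such a model, using the statsmodels formula interface with product (interaction) terms, is shown below; the variable names are hypothetical stand-ins for the measures described above.

```python
# Sketch of an OLS model with race/ethnicity-by-program interaction terms.
# Variable names are hypothetical stand-ins for the measures described in the text.
import statsmodels.formula.api as smf

def fit_interaction_model(df):
    formula = (
        "overall_satisfaction ~ post_redesign * C(race_ethnicity) "
        "+ age + education + self_rated_health + depressive_symptoms "
        "+ economic_barriers + fear_barriers + practical_barriers"
    )
    # 'a * b' expands to a + b + a:b, so main effects and product terms are included.
    return smf.ols(formula, data=df).fit()

# Usage: print(fit_interaction_model(survey_df).summary())
# The post_redesign:C(race_ethnicity)[T.x] coefficients test whether pre/post
# differences in satisfaction vary across racial/ethnic groups.
```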

Standardized coefficients (β) are reported for the regression analyses, first using only the program status indicators (Model 1) and then including all potential influences on satisfaction (Model 2). The coefficient of determination (R2) was used to assess and compare the fit of each model, while the change in R2 (ΔR2) was used to identify the contribution of each block of factors potentially associated with satisfaction. Separate multiple regression analyses were conducted without the terms representing the interaction of race/ethnicity and program, but since the coefficients for the other terms changed little after the interaction terms were added, only the final models including both main effects and interaction terms are presented (in Model 2).
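The block-wise comparison of nested models can be sketched as follows; again the column names are hypothetical, and z-scoring the outcome and continuous predictors before estimation is one common way to obtain standardized (β) coefficients.

```python
# Sketch of the R2 change between nested models (Model 1: program status indicators
# only; Model 2: all predictors). Column names are hypothetical.
import statsmodels.formula.api as smf

def r2_change(df, outcome, block1, block2):
    """Fit nested OLS models and report R2 for each plus the increment from block2."""
    m1 = smf.ols(f"{outcome} ~ {' + '.join(block1)}", data=df).fit()
    m2 = smf.ols(f"{outcome} ~ {' + '.join(block1 + block2)}", data=df).fit()
    return m1.rsquared, m2.rsquared, m2.rsquared - m1.rsquared

# Usage (hypothetical column names); z-score continuous columns first if standardized
# coefficients are desired, e.g. df[cols] = df[cols].apply(lambda s: (s - s.mean()) / s.std())
# r2_1, r2_2, delta = r2_change(survey_df, "overall_satisfaction",
#                               ["post_redesign", "cancer_risk", "has_case_manager"],
#                               ["age", "education", "practical_barriers"])
```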

Results

After the program redesign, enrollment data showed that patients were older, less educated, more likely to be Hispanic or African American and to have been interviewed in Spanish or Portuguese, and more likely to be living alone and to not be working (see Table 2). The proportion of the program’s patients who were Hispanic and African American increased dramatically, by 206 and 295%, respectively.

Table 2 Characteristics of Program Patients Before and After Program Redesign

Access to health care had also improved dramatically by the second survey (Table 2). Patients in the redesigned program were much more likely to report having health insurance (with almost universal coverage), to report having a regular place and a regular doctor for health care, and not to have been deterred from getting needed medical care by its cost. However, as expected after the shift in emphasis to chronic illness, levels of self-reported overall health, physical health problems, and symptoms of depression each indicated poorer average health in the post-program change sample.

Patient satisfaction was higher with the quality of health care overall after the program redesign, but lower with respect to the clinic from which they received program services and lower with respect to the patient navigator after the change compared to the case manager prior to the change (see Table 3). However, there was no difference in average satisfaction with the program case manager between those in the original program and the subset of respondents in the redesigned program who had a case manager as well as a patient navigator.

Table 3 Satisfaction Indicators by Program

By contrast, satisfaction with the patient navigator was higher than satisfaction with the case manager for patients who had both a patient navigator and a case manager after the program change (in other words, for those being seen for cancer risk rather than only for chronic illness) (t = 2.01, df = 47, p ≤ .05).
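This within-person comparison corresponds to a paired t-test, sketched below with hypothetical column names.

```python
# Sketch of the paired comparison reported above: satisfaction with the patient
# navigator vs. satisfaction with the case manager, among post-redesign patients
# who had both. Column names are hypothetical.
from scipy import stats

def compare_pn_cm(df):
    both = df.dropna(subset=["satisfaction_pn", "satisfaction_cm"])
    t, p = stats.ttest_rel(both["satisfaction_pn"], both["satisfaction_cm"])
    return t, p, len(both) - 1   # paired t-test degrees of freedom = n - 1

# Usage: t, p, dof = compare_pn_cm(post_survey_df)
```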

The multiple regression analyses of satisfaction in relation to program status identified the same apparent effects of the program change (Model 1 in Table 4), but suggested different bases for these effects with the different satisfaction indicators (Model 2 in Table 4). Controlling for patient background entirely explained the improved satisfaction with health care overall in the redesigned program. However, the controls did not explain the decline in satisfaction with the program or clinic or with the care provider (CM/PN) between respondents in the redesigned program and those in the original program. Patients who had a case manager in addition to a patient navigator were more satisfied with the program/clinic and with program staff than those who only had a patient navigator (although the effect on program/clinic satisfaction only emerged after other variables were controlled), and this difference was not explained by the controls (Model 2 in Table 4). There was no statistically significant difference related to the program status indicators in satisfaction with the program case manager for the subset who also had a case manager in the redesigned program.

Table 4 Regression Analysis of Satisfaction Indicators

The independent effects of patient characteristics and barriers also differed between the four satisfaction indicators. Patients who were interviewed in Spanish or Portuguese were more satisfied with their health care overall, as were those who were older. Satisfaction with the program or clinic was also higher among Portuguese speakers, and was associated positively with education, self-rated health, and the lack of a cost deterrent, but was unrelated to age. Staff satisfaction and case manager satisfaction were both positively associated with age, while case manager satisfaction was lower among those who were not working. All four satisfaction indicators were diminished by more perceived practical barriers to health care, while program/clinic satisfaction and case manager satisfaction were also diminished by more perceived economic barriers. Perceived barriers due to fear and immigration status were not associated with satisfaction. Among post-change patients only, African American patients were less satisfied with health care overall than were others, but there were no other statistically significant interactions with ethnicity or language. The regression analyses explained about one-fifth of the variance in each satisfaction indicator (somewhat more for the smaller sample available for the regression analysis of satisfaction with the case manager).

Discussion

After the redesign, as intended, the program population was more diverse, with a much higher proportion of Hispanic (and Spanish-speaking) and African American patients than would have resulted only from the growth of these groups in the state’s population during the same period (29% and 2.5%, respectively) [48, 49]. The program’s patients also tended to be less educated, less likely to be employed, in poorer physical and mental health, and more likely to be living alone after the program redesign. It is not possible to link these changes specifically to the use of patient navigators, or even to the program’s expansion to include patients with chronic illnesses, but the changed demographics do indicate that the program’s social context differed markedly after the program change, as intended by those who designed it.

Given these challenges, the patients’ greater satisfaction with their overall health care after the program change is a very positive result, but this higher level of satisfaction was explained entirely by the associated differences in patient characteristics. In contrast, satisfaction declined with the clinic and the care provider, except for those who were similar in their health care needs to those in the program before its redesign—those who were eligible for case management services due to heightened cancer risk.

The higher levels of satisfaction with health care overall among the apparent first-generation immigrants were generally consistent with research that identifies particularly positive attitudes and behaviors in the first generation [50, 51], although the unique effect on clinic-specific satisfaction for Portuguese speakers may also indicate particularly positive features of the few health care sites that served most Portuguese speakers. The lower level of satisfaction among African American patients with their overall health care after the program redesign did not occur with the program-specific indicators, so it may indicate more severe chronic health problems or more general health care access problems rather than poorer program services per se.

The diminished program-related average satisfaction level after the redesign may reflect a change from the focus on successful resolution of cancer risk for most of the pre-change patients, as compared to the ongoing problems due to chronic illness of the patients who predominated after the program change. It is unlikely that many patients after the eligibility expansion found that their chronic health problems had been resolved by the program or its staff—or more generally by the health clinic from which they received their program services. This interpretation is strengthened by the finding that satisfaction with the patient navigator was higher than it was with the case manager for those patients who had both—in other words, patients enrolled after the program change who were being seen for cancer risk. This parallels the finding from the Patient Navigation Research Program of improved satisfaction due to patient navigation among socially disadvantaged patients with recent breast or colorectal cancer diagnoses [13, 14, 52].

Satisfaction with the program or clinic was uniquely influenced by health care barriers—specifically by more perceived economic as well as practical problems with obtaining care—as well as by less education, poorer self-rated health, and reports that high costs had deterred care in the past. In general, these results confirm Post et al.’s [11] finding of the selectivity of effects of perceived barriers on satisfaction. The lack of association between recognition of the patient navigator’s name and the satisfaction indicators suggests that there is room for strengthening the patient navigation program, perhaps by creating more opportunities for in-person contact [53, 54].

Older age and more perceived practical barriers were the only factors associated with satisfaction across its multiple dimensions (older age with greater satisfaction, practical barriers with less). The other factors associated with the different satisfaction indicators suggest that satisfaction with different aspects of health care is differentially susceptible to influence by patient background and current health problems. This underscores the importance, for research on health care satisfaction, of using satisfaction indicators with different levels of specificity, as emphasized first by Roberts, Pascoe, & Attkisson [20].

Our design had several limitations. First, a panel design was not possible because of the turnover in the original program’s patients as their cancer risks were identified. In addition, the rating of health care overall in the original program survey focused on health care services within the past 2 years, while the comparable question in the post-transition survey was narrowed to the past year because of the new program’s more limited history. The focus of the rating of satisfaction with the primary health care provider also had to shift from the case manager to the patient navigator, although our separate pre-post analysis of satisfaction just with the case manager for those who had a case manager after the program redesign—and our comparison of these patients’ satisfaction with their patient navigator to their satisfaction with their case manager—helps to clarify the effect of this shift in primary program delivery staff. In addition, only two questions about program satisfaction were repeated in the two surveys, resulting in a less robust index. It is also important to note that none of the apparent program effects on satisfaction were altered significantly by inclusion of the barriers indicators in the model, suggesting less tangible mediators of program effect that we did not capture [22].

The absence of a special overall satisfaction benefit of the program redesign for the more disadvantaged ethnic and linguistic groups [13, 14] may be a consequence of the limitation of our entire sample to patients in a program serving only socioeconomically disadvantaged patients. In fact, as already noted, the lower level of overall health care satisfaction after the program redesign among African American patients may reflect elevated chronic health problems in this disadvantaged group within the program after eligibility was expanded. By contrast, patient navigators did elicit higher satisfaction ratings than did case managers, when only those patients being seen for cancer risk were considered. Thus, even though we cannot be sure that the addition of patient navigators to the program workforce was responsible for the increased patient diversity, it is clear that the patient navigators were making a worthwhile contribution to serving the program’s patients.

The shift in focus from the original survey questions about satisfaction with the program itself to questions about satisfaction with the clinic or health center where the program was implemented in the post-transition survey was necessitated by the less distinct program services delivered by patient navigators compared to those delivered by the case managers. As a result, the post-change patients could have been expressing satisfaction with a broader range of health care experiences at that location rather than only with those delivered by program staff. Finally, although the difficulties our interviewers encountered in locating program patients are characteristic of attempts to survey lower-SES groups, the obtained sample may as a result not represent the experience of all program patients. Thus, while our findings pertain to a range of issues that recur in the health care system, they must be applied to different contexts with caution.

Conclusions

The addition of patient navigators to the case management program was a key element in organizational changes designed to lower access barriers so as to reach more of the underserved population. At the same time, the more holistic focus on managing chronic illness created greater challenges for delivery of satisfying health services. While in the original program, the large majority of patients received a definitive diagnosis of the absence of cancer and so had this health concern relieved, those with chronic illnesses seen after the program redesign would have experienced no such definitive resolution of a health concern; instead, they would face ongoing needs for care, perhaps accompanied by a need for behavioral change.

Recognizing the importance of patient satisfaction must be balanced by understanding its complexity. Our analysis of the apparent effects of the redesign of the Massachusetts NBCCEDP program on satisfaction makes it clear that a single-minded focus on maximizing patient satisfaction can both obscure the effects of an intervention and deter more ambitious efforts to improve delivery of health care. As expected, the patient navigators seemed to increase satisfaction with the program itself among patients like those served in the original program, and for these same patients the patient navigators received higher satisfaction scores than did case managers in the previous program. However, these effects were obscured until the patients being seen for cancer risk were distinguished from those being seen for chronic illness. In addition, an apparent positive effect of the redesigned program on overall health care satisfaction actually reflected a change in the orientations of new patients from different backgrounds.

We do not conclude that patient navigators should only be used to enhance cancer care, but rather that their role and expectations for their performance need to be tailored to the type of health care program in which they work. The cancer care program we studied was extended because physicians on our Expert Panel had identified chronic illness as lowering rates of diagnostic testing; the finding that this broader focus was associated with lower satisfaction ratings indicates only how much more challenging it is. Our results also suggest that more attention is needed to the mechanism by which patient navigators may improve patient satisfaction. While the apparent patient navigator effect we identified was not explained by a reduction of any perceived barriers, we suggest that the interpersonal connection between patient navigators and patients may be what is most important.

The program redesign to serve a needier patient population can be judged a success on its own terms, even though it resulted in lower average levels of satisfaction with the program and its primary staff. Our research thus highlights the potential of and challenges for health care delivery systems seeking to broaden program participation and set more ambitious goals for health improvement.

Availability of data and materials

The datasets generated and analysed during the study are not publicly available due to the terms of the state agency contract under which the data were collected as part of a state program evaluation, but are available from the corresponding author on reasonable request.

Abbreviations

CCP: Care Coordination Program (Massachusetts)
CDC: Centers for Disease Control and Prevention
CM: Case Manager
DPH: Department of Public Health (Massachusetts)
IRB: Institutional Review Board (University of Massachusetts Boston)
NBCCEDP: National Breast and Cervical Cancer Early Detection Program
OLS: Ordinary Least Squares (regression analysis)
PN: Patient Navigator
R2: Coefficient of Determination
WHN: Women’s Health Network (Massachusetts)

References

  1. Cleary PD, McNeil BJ. Patient satisfaction as an indicator of quality care. Inquiry. 1988;25:25–36.

  2. Foxall MJ, Barron CR, Houfek J. Women’s satisfaction with breast and gynecological cancer screening. Women Health. 2003;38:21–36.

  3. Jackson J, Chamberlin J, Kroenke K. Predictors of patient satisfaction. Soc Sci Med. 2001;52:609–20.

  4. Mathews M, Ryan D, Bulman D. What does satisfaction with wait times mean to cancer patients? BMC Cancer. 2015;15:1017.

  5. Somkin CP, McPhee SJ, Nguyen T, et al. The effect of access and satisfaction on regular mammogram and Papanicolaou test screening in a multiethnic population. Med Care. 2004;42:914–26.

  6. Jean-Pierre P, Fiscella K, Winters PC, et al. Cross-cultural validation of a patient satisfaction with interpersonal relationship with navigator measure: a multi-site patient navigation research study. Psycho-oncology. 2012;21:1309–15.

  7. Fiscella K, Random S, Jean-Pierre P, et al. Patient-reported outcome measures suitable to assessment of patient navigation. Cancer. 2011;117(15 Suppl):3603–17.

  8. Cavanagh MF, Lan DS, Messina CR, et al. Clinical case management and navigation for colonoscopy screening in an academic medical center. Cancer. 2013;119(Suppl 15):2894–904.

  9. Guadagnolo BA, Cina K, Koop D, et al. A pre-post survey analysis of satisfaction with health care and medical mistrust after patient navigation for American Indian cancer patients. J Health Care Poor Underserved. 2011;23:1331–343.

  10. Koh C, Nelson JM, Cook PF. Evaluation of a patient navigation program. Clin J Oncol Nurs. 2011;15:41–58.

  11. Post DM, McAlearney AS, Young GS, et al. Effects of patient navigation on patient satisfaction outcomes. J Cancer Educ. 2015;30:728–35.

  12. Wells KJ, Campbell K, Kumar A, et al. Effects of patient navigation on satisfaction with cancer care: a systematic review and meta-analysis. Support Care Cancer. 2018;26:1369–82.

  13. Wells KJ, Winters PD, Jean-Pierre P, et al. Effect of patient navigation on satisfaction with cancer-related care. Support Care Cancer. 2016;24:1729–53.

  14. Fiscella K, Whitley ES, Hendren P, et al. Patient navigation for breast and colorectal cancer treatment: a randomized trial. Cancer Epidemiol Biomarkers Prev. 2012;21:1673–681.

  15. Guidry JJ, Matthews-Juarez P, Copeland VA. Barriers to breast cancer control for African-American women: The interdependence of culture and psychosocial issues. Cancer. 2003;97(1 Suppl):318–23.

  16. Katz ML, Young GS, Reiter PL, et al. Barriers reported among patients with breast and cervical abnormalities in the patient navigation rsearch program: Impact on timely care. Womens Health Issues. 2014;24:e155-e162.

  17. Krok-Schoen JL, Brewer BM, Young GS, et al. Participants’ barriers to diagnostic resolution and factors associated with needing patient navigation. Cancer. 2015;121:2756–764.

  18. Kuipers SJ, Cramm JM, Nieboer AP. The importance of patient-centered care and co-creation of care for satisfaction with care and physical and social well-being of patients with multi-morbidity in the primary care setting. BMC Health Serv Res. 2019;19:13.

  19. Cohen G. Age and health status in a patient satisfaction survey. Soc Sci Med. 1996;42:1085–93.

  20. Roberts RE, Pascoe GC, Attkisson CC. Relationship of service satisfaction to life satisfaction and perceived well-being. Eval Program Plann. 1983;6:373–83.

  21. Hekkert KD, Cihangir S, Kleefstra SM, et al. Patient satisfaction revisited: a multilevel approach. Soc Sci Med. 2009;69:68–75.

  22. Jean-Pierre P, Cheng Y, Wells KJ, et al. Satisfaction with cancer care among underserved racial-ethnic minorities and lower-income patients receiving patient navigation. Cancer. 2016;122:1060–7.

  23. Bair MJ, Kroenke K, Sutherland JM, et al. Effects of depression and pain severity on satisfaction in medical outpatients: Analysis of the Medical Outcomes Study. J Rehabil Res Dev. 2007;44:143–52.

  24. Marshall GN, Hays RD, Maze R. Health status and satisfaction with health care: results from the medical outcomes study. J Consult Clin Psychol. 1996;64:680–90.

  25. Morales LS, Cunningham WE, Brown JA, et al. Are Latinos less satisfied with communication by health care providers? J Gen Intern Med. 1999;14:409–17.

  26. Schutt RK, Mejía C. Health care satisfaction: effects of immigration, acculturation, language. J Immigr Minor Health. 2016, Online First. https://doi.org/10.1007/s10903-016-0409-z.

  27. Williams DR, Jackson PB. Social sources of racial disparities in health. Health Aff. 2005;24:325–34.

  28. Clemans-Cope L, Kenney G. Low income parents' reports of communication problems with health care providers: Effects of language and insurance. Public Health Reports. 2007;122:206–16.

  29. Centers for Disease Control and Prevention (CDC). National Breast and Cervical Cancer Early Detection Program (NBCCEDP). 2016 https://www.cdc.gov/cancer/nbccedp/about.htm, (Accessed 30 May 2016).

  30. Lee NC, Faye L, Wong PM, et al. Implementation of the National Breast and cervical Cancer early detection program: the beginning. Cancer Causes Control. 2014;120(Suppl 16):2540–8.

  31. Howard DH, Tangka FKL, Royalty J, et al. Breast cancer screening of underserved women in the USA: results from the National Breast and cervical Cancer early detection program, 1998–2012. Cancer Causes Control. 2015;26:657–68.

  32. Lantz PM, Mullen J. The National Breast and Cervical Cancer Early Detection Program: 25 years of public health service to low-income women. Cancer Causes Control. 2015;26:653–6.

  33. Tangka FKL, Howard DJ, Royalty J, et al. Cervical cancer screening of underserved women in the United States: results from the National Breast and cervical Cancer early detection program, 1997–2012. Cancer Causes Control. 2015;15:671–86.

  34. Schutt RK. Investigating the social world: the process and practice of research. 8th ed. Thousand Oaks: SAGE; 2015.

  35. Bergenmar M, Nylen U, Lidbring E, et al. Improvements in patient satisfaction in an outpatient clinic for patients with breast cancer. Acta Oncol. 2006;45:550–8.

  36. Schutt RK, Cruz ER, Woodford ML. Client satisfaction in a breast and cervical Cancer early detection program: the influence of ethnicity and language, health, resources and barriers. Women Health. 2008;48:283–302.

  37. Carter-Pokras O, O'Neill MJ, Cheanvechai V, et al. Providing linguistically appropriate services to persons with limited English proficiency: A needs and resources investigation. Am J Manag Care. 2004;SP29-36.

  38. Freund KM, Battaglia TA, Calhoun E, et al. National cancer institute patient navigation research program: Methods, protocol, and measures. Cancer. 2008;113:3391–9.

  39. Ramachandran A, Freund KM, Bak SM, et al. Multiple barriers delay care among women with abnormal cancer screening despite patient navigation. J Women's Health. 2015;24:30–6.

  40. Kahn L. Identifying barriers and facilitating factors to improve screening mammography rates in women diagnosed with mental illness and substance abuse disorders. Women Health. 2005;42:111–26.

  41. Hogan DR, Danaei G, Ezzati J, et al. Estimating the potential impact of insurance expansion on undiagnosed and uncontrolled chronic conditions. Health Aff. 2015;34:1554–62.

  42. Wallace SP, Torres JM, Nobari TZ, et al. Undocumented and uninsured: barriers to affordable care for immigrant populations. Los Angeles: Report, UCLA Center for Health Policy Research; 2013.

  43. Kimbro RT, Gorman BK, Schachter A. Acculturation and self-rated health among Latino and Asian immigrants to the United States. Soc Problems. 2012;59:341–63.

  44. Torres JM, Wallace SP. Migration circumstances, psychological distress, and self-rated physical health for Latino immigrants in the United States. Am J Public Health. 2013;103:1619–27.

  45. Haggman S, Maher CG, Refshauge KM. Screening for symptoms of depression by physical therapists managing low back pain. Phys Ther. 2004;84:1157–66.

  46. Jacobs EA, Karavolos K, Rathouz PJ, et al. Limited English proficiency and breast and cervical cancer screening in a multiethnic population. Am J Public Health. 2005;95:1410–6.

  47. Warner RM. Applied statistics: from bivariate through multivariate techniques. Thousand Oaks: SAGE; 2008.

  48. Census Bureau US. Annual estimates of the resident population by sex, race, and Hispanic origins for the United States, states, and counties: April 1, 2010 to July 1, 2012. Washington, D.C.: U.S. Census Bureau, Population Division; 2013. http://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?src=bkmk, (Accessed 9 Oct 2016).

  49. Census Bureau US. Annual estimates of the resident population by sex, race, and Hispanic or Latino origin for state, April 1, 2000 to July 1, 2005 (SC-EST2005-03-25). Washington, D.C: U.S. Census Bureau, Population Division; 2006. http://www.census.gov/popest/data/historical/2000s/vintage_2005/index.html (Accessed 9 Oct 2016).

  50. Acevedo-Garcia D, Bates LM. Latino health paradoxes: empirical evidence, explanations, future research, and implications. In: Rodriguez H, Saenz R, Menjivar C, editors. Latinas/Os in the United States: changing the face of America. New York: Springer; 2008. p. 101–13.

  51. Antecol H, Bedard K. Unhealthy assimilation: Why do immigrants converge to American health status levels? Demography. 2006;43:337–60.

  52. Ko N, Darnell JS, Calhoun E, et al. Can patient navigation improve receipt of recommended breast cancer care? Evidence from the National Patient Navigation Research Program. J Clin Oncol. 2014;32:2758–64.

  53. Carroll JK, Humiston SG, Meldrum SC, et al. Patients' experiences with navigation for cancer care. Patient Educ Couns. 2010;80:241–7.

  54. Peikes DA, Schore CJ, Brown R. Effects of care coordination on hospitalization, quality of care, and health care expenditures among Medicare beneficiaries: 15 randomized trials. JAMA. 2009;301:603–18.

Acknowledgments

Successive program directors Mary Lou Woodford, RN, Heather Nelson, PhD, and Anita Christie, RN collaborated in the research. We are grateful for the comments of Paul Benson, PhD, Mary L. Fennell, PhD, and Nancy L. Keating, MD, MPH, and for the assistance of Jacqueline Fawcett, PhD, RN, FAAN, and of Camila Mejía, MA, Xavier Lazcano, MA, Lisa-Marie Guzman, MA, Angela Cody, MA, Jessica Callea, MA, Anthony Roman, MA, Dragana Bolcic-Jankovic, MA, and Rumel Mahmood, MA. An earlier version of this paper was presented at the 2016 Annual Meeting of the American Sociological Association, Seattle, Washington.

Funding

Massachusetts Department of Public Health (INTF3406HH2706811045).

Author information

Contributions

RKS and MLW conceived and designed the manuscript and made substantial contributions to data acquisition. RKS analyzed and interpreted the data and drafted the manuscript. RKS and MLW substantively revised the manuscript. RKS and MLW read and approved the submitted and revised manuscript and agree to be personally accountable for their respective contributions and to ensure that questions related to the accuracy or integrity of any part of the work are appropriately investigated, resolved, and documented. The Authors read and approved the final manuscript.

Authors’ information

Russell K. Schutt, PhD, Professor of Sociology, University of Massachusetts Boston and Lecturer (Part-time), Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School; Mary Lou Woodford, RN, President, Owner, MLW Associates, LLC (former director, Women’s Health Network and Care Coordination Program, Department of Public Health).

Corresponding author

Correspondence to Russell K. Schutt.

Ethics declarations

Ethics approval and consent to participate

All procedures, including procedures for obtaining informed consent were approved by the University of Massachusetts Boston Institutional Review Board for the Protection of Human Subjects (FWA00004634, IRB Reg # 00000690, FWA Exp: 8/24/21). Sampled patients received a letter from Principal Investigator at the University of Massachusetts Boston about 1 week prior to the phone survey informing them that participation was completely voluntary and confidential and would not affect their program services. These points as well as the assurance that respondents could decline to answer any question and could terminate the interview at any point were repeated in the introduction on the phone and the interview did not proceed unless patients consented verbally. The IRB judged this standard procedure for obtaining consent in phone surveys to be sufficient to ensure that participation was both informed and voluntary.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests. Both surveys were funded by the Massachusetts Department of Public Health (DPH) and DPH staff for the programs surveyed provided feedback on the questions proposed for the survey and contact information for the participants in the programs. DPH had no other role in the design of the study or the collection, analysis, or interpretation of the data. Although the initial director of both programs co-authored this manuscript, she had retired from DPH 10 years before this manuscript was prepared.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Women’s Health Network Survey. Questionnaire for survey of Women’s Health Network patients (pre-survey).

Additional file 2.

Care Coordination Program Survey. Questionnaire for survey of Care Coordination Program patients (post-survey).

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Schutt, R.K., Woodford, M.L. Increasing health service access by expanding disease coverage and adding patient navigation: challenges for patient satisfaction. BMC Health Serv Res 20, 175 (2020). https://doi.org/10.1186/s12913-020-5009-x
