Social determinants of health and the prediction of missed breast imaging appointments



Predictive models utilizing social determinants of health (SDH), demographic data, and local weather data were trained to predict missed imaging appointments (MIA) among breast imaging patients at Boston Medical Center (BMC). Patients were characterized by variables spanning social needs, demographics, imaging utilization, appointment features, and weather conditions on the date of the appointment.


This HIPAA compliant retrospective cohort study was IRB approved. Informed consent was waived. After data preprocessing, the dataset contained 9,970 patients and 36,606 appointments from 1/1/2015 to 12/31/2019. We identified 57 potentially impactful variables for the initial prediction model and assessed each patient for MIA. We then developed a parsimonious model via recursive feature elimination, which identified the 25 most predictive variables. We utilized linear and non-linear models including support vector machines (SVM), logistic regression (LR), and random forest (RF) to predict MIA and compared their performance.


The highest-performing full model was the nonlinear RF, achieving the highest Area Under the ROC Curve (AUC) of 76% and an average F1 score of 85%. Models limited to the most predictive variables attained AUC and F1 scores comparable to models with all variables included. The variables most predictive of missed appointments included appointment timing, prior appointment history, referral department of origin, and socioeconomic factors such as household income and access to caregiving services.


Prediction of MIA with the data available is inherently limited by the complex, multifactorial nature of MIA. However, the algorithms presented achieved acceptable performance and demonstrated that socioeconomic factors were useful predictors of MIA. In contrast with non-modifiable demographic factors, we can address SDH to decrease the incidence of MIA.



Missed imaging appointments (MIA) are a challenge for patients and providers, causing delayed care and inefficiencies. Variables including race, age, sex, housing, and health insurance may be associated with MIA [1, 2]. Screening and diagnostic breast imaging are essential to early detection of breast cancer, which significantly impacts survival [3]. MIA are also inefficient and costly: an average academic medical center in the US loses $1 million annually in potential revenue from missed appointments [4]. Additionally, patient care may suffer due to missed appointments [5,6,7].

Outside of radiology, several machine learning algorithms have been used for predicting MIA, including the naive Bayes classifier [8], decision tree models [7, 9], artificial neural networks [8], time-frequency analysis models [10], metaheuristic-based models [11], and the most widely used, logistic regression models (LR) [8, 12,13,14]. In radiology, Harvey et al. [15] developed descriptive statistics and LR models to predict failure to attend a scheduled radiology appointment, achieving 75.3% AUC. Daye et al. [2] assessed the relationship between wait days (WDs) and missed outpatient MRI appointments with multivariate LR and linear regression. In that study, WDs were defined as the time interval between the date that the appointment was made and the actual date of the appointment. They concluded that increased WDs for MRI appointments substantially increase the probability of missed appointments. The pre-appointment likelihood of no-show was predicted by Mieloszyk et al. [16] using LR models, achieving 77% AUC. Lead time was the most impactful feature of their model.

Social determinants of health (SDH) are social and demographic patient characteristics which influence health outcomes [17, 18]. Clearly, many other factors affect health; however, their impact is outweighed by that of social and economic factors [19]. Several studies have identified risk factors and variables associated with no-shows and cancellations [11, 20,21,22,23]. Thus, SDH should be included in models that attempt to predict imaging appointment outcomes. Boston Medical Center (BMC) has implemented THRIVE, a novel screening and referral program that seeks to understand patients’ social needs. THRIVE is a custom SDH screening program that surveys patients on their unmet social needs in eight domains: transportation insecurity, difficulty arranging child/elder care, ability to pay for utilities, difficulty accessing education resources, food insecurity, housing insecurity, employment insecurity, and difficulty paying for medication [24]. THRIVE, which is not an acronym, allows clinicians to understand and address patients’ unmet social needs.

Machine learning techniques with simple demographic factors can be useful to better understand no-show appointments, both in radiology and other specialties. We aimed to build upon that foundation by including modifiable and potentially more impactful SDH in our analysis. The goal of our study was to create models utilizing SDH, demographics, and environmental data and employ them to predict MIA among breast imaging patients at BMC.


Data description

This HIPAA compliant retrospective study was IRB approved. Informed consent was waived. Our dataset contains 11,296 patients and 53,326 appointments from 1/1/2015 to 12/31/2019. The dataset was de-identified for analysis. Following the removal of duplicate appointments and patients without SDH data, our data set included 36,606 appointments and 9,970 unique patients. Study data includes appointment records, demographic data, and SDH responses obtained from BMC’s electronic health records (EHR) at the research clinical data warehouse.

Data pre-processing and variable preparation

Demographic variables included age, gender, race/ethnicity, birthplace, primary language, education level, marital status, primary insurance, and estimated address (ZIP Code). Appointment records included reservation date, appointment date, department name, and type of order (e.g., diagnostic mammography, MRI breast biopsy). Using information in appointment records, we created new variables to consider in our analysis. The complete lists of categorical and numerical variables and their characteristics are presented in Tables 1, 2 and 3. Please note that we used all 36,606 appointments to calculate the values in Tables 2 and 3. In these tables, N (%) refers to the percentage of appointments within each category. Attended (%) and Missed (%) denote the percentages of attended and missed appointments for each category. In Table 3, Mean (\(\underline{X}\)), Q1, Median, and Q3 denote the average, first quartile, second quartile (i.e., median), and third quartile of the corresponding variable, respectively. For each variable, \(\underline{X}_{U}\ \left(\underline{X}_{L}\right)\) refers to the set of appointments for which the given variable is above (below) its respective mean (\(\underline{X}\)).

Table 1 Generated variables and their descriptions
Table 2 Descriptive statistics of the study sample (categorical variables).
Table 3 Descriptive statistics of the study sample (numerical variables)

As will be discussed later, we ultimately use the last appointment of each patient (i.e., 9,970 samples) to develop our predictive models. We have three types of appointments (i.e., "Order Name" in Table 2), namely diagnostic, biopsy, and screening. Approximately 96% of appointments are diagnostic or screening. All 9,970 patients have at least one screening appointment. In our work, the term “imaging appointments” refers to both diagnostic and screening appointments. Some categorical variables, such as time, type of order, and primary insurance, were collapsed into fewer categories to simplify analysis. As for the SDH variables, we created an indicator variable for each of the eight domains. The value of a domain is encoded as ‘1’ if a patient reports the corresponding social need; otherwise, if the answer is ‘No need’ or the answer is missing, the value is set to ‘0’. The SDH information was obtained using THRIVE, which was integrated into BMC's EHR starting in 2017. THRIVE is based on a short questionnaire, voluntarily completed by patients at each clinic visit. The THRIVE questionnaire and a summary of the THRIVE data pre-processing steps are provided in the Supplement (see Additional file 1).

For SDH variables we provide the percentages of those who answered Yes/No to the question of having the specific need and the corresponding percentages of attended/missed appointments for each cohort.
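The indicator encoding of SDH domains described above can be sketched as follows; the domain names and answer strings below are illustrative stand-ins, not the exact THRIVE field values.

```python
# Illustrative THRIVE-style responses for one patient record; the domain
# names and answer strings are stand-ins, not the exact EHR field values.
responses = {"transportation": "Yes", "caregiving": "No need",
             "housing": None, "food": "Yes"}

# Encode each domain as 1 if a need is reported, else 0
# ('No need' and missing answers both map to 0).
indicators = {domain: int(answer == "Yes") for domain, answer in responses.items()}
```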

All features in our data set are complete except “Marital Status”, “Education Level”, “Hispanic Indicator”, and “Primary Race”. We used the mode of each of these features to impute its missing values. Table 4 presents the number of missing values for each feature and the mode used for imputation.
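Mode imputation of this kind can be sketched with pandas; the column names and values below are illustrative, not the study data.

```python
import pandas as pd

# Toy records standing in for the study data; values are illustrative.
df = pd.DataFrame({
    "Marital Status": ["Single", None, "Single", "Married"],
    "Education Level": ["High School", "College", None, "High School"],
})

# Replace each column's missing entries with that column's mode.
for col in ["Marital Status", "Education Level"]:
    df[col] = df[col].fillna(df[col].mode()[0])
```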

Table 4 Missing values in the data set

Continuous variables were scaled to lie between zero and one. To mitigate the effect of outliers, we replaced values of each variable above its 99th percentile (or below its 1st percentile) with the 99th (respectively, 1st) percentile value. Categorical variables such as primary race and education level were converted to numerical form by ‘one-hot’ encoding: each categorical variable was encoded as an indicator variable per category, yielding 57 variables for each patient.
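A minimal sketch of this preprocessing, assuming pandas is available; the column names and values are illustrative.

```python
import pandas as pd

def winsorize_and_scale(x: pd.Series) -> pd.Series:
    """Clip to the 1st/99th percentiles, then min-max scale to [0, 1]."""
    x = x.clip(x.quantile(0.01), x.quantile(0.99))
    return (x - x.min()) / (x.max() - x.min())

# Illustrative data: one numeric and one categorical column.
df = pd.DataFrame({
    "distance_miles": [1.0, 2.0, 3.0, 100.0],
    "education": ["High School", "College", "High School", "Graduate"],
})
df["distance_miles"] = winsorize_and_scale(df["distance_miles"])
df = pd.get_dummies(df, columns=["education"])  # one-hot encoding
```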

To reduce the dimensionality of the data and find the most informative features for our model, we used an l1-norm regularized Support Vector Machine algorithm (SVM-L1) for recursive feature elimination. Several studies [25, 26] demonstrated the effectiveness of feature selection via regularization in biomedical applications.

Linear models like SVM penalized with the l1 norm induce sparse solutions. In SVM, the parameter C (a.k.a. the soft margin constant or misclassification penalty) controls the sparsity of the model: the smaller C is, the fewer features are selected, since decreasing C increases the relative importance of the l1-norm regularizer. We used recursive feature elimination as follows: starting with all features, we progressively dropped the least informative feature (i.e., the feature with the smallest absolute coefficient) while decreasing the parameter C [27]. The Supplement includes additional details (see Additional file 1). This method selected 20 features for our final model; the complete list of these features is presented below. Three SDH variables (i.e., housing, transportation, and utilities) were among these 20 features. Since we wanted to examine the effect of the SDH variables on MIA, we manually added the other five SDH variables to our final selected features. Thus, we used 25 features in our parsimonious model.
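A sketch of this SVM-L1 recursive elimination on synthetic data, assuming scikit-learn; the stopping size and the C schedule are illustrative choices, not the study's exact settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic stand-in for the patient feature matrix.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
features = list(range(X.shape[1]))

C = 1.0
while len(features) > 10:  # stop at the desired model size
    svm = LinearSVC(penalty="l1", dual=False, C=C, max_iter=5000)
    svm.fit(X[:, features], y)
    # Drop the feature with the smallest absolute coefficient.
    features.pop(int(np.argmin(np.abs(svm.coef_[0]))))
    C *= 0.9  # shrink C to strengthen the l1 penalty
```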

Classification methods

We employed non-linear and linear classifiers including random forest (RF) [28], XGBoost [29], and the regularized versions of support vector machine (SVM) [30] and logistic regression (LR) [31] using l1 or l2-norm regularization.

The random forest (RF) is an ensemble algorithm that combines the prediction of multiple decision tree classifiers [28]. RF trains multiple decision trees in parallel using a random subset of the training set and features. The trained classifiers are used to classify a test sample and all classifiers are combined by majority voting. Combining multiple decision trees and RF randomness prevents overfitting and reduces model variance.

XGBoost is an ensemble tree algorithm [29]; it generates a large number of decision trees in sequential order so the training samples misclassified by the previous tree receive a higher weight. This process repeats until the number of trees reaches a predetermined number. Eventually, all trained trees are weighted together to produce a final decision. Shrinkage and column subsampling in XGBoost prevent overfitting [32].

RF and XGBoost are non-linear algorithms that are difficult to interpret (often involving hundreds of decision trees) but are useful because they may indicate the best classification performance one could obtain. We also employed linear classifiers, the support vector machine (SVM) [30] and logistic regression (LR) [31], which can yield interpretable models. SVM constructs a hyperplane that separates the two classes so as to maximize the margin between samples while minimizing misclassification errors. We used the linear SVM, but the method can be extended to allow for non-linear decision surfaces.

LR is a regression algorithm for predicting a dichotomous dependent variable. It uses a linear regression model to approximate the logarithm of the odds of the dependent variable (outcome) [33]. The regularized versions of LR and SVM (using l1 or l2-norm regularization) were considered to improve the robustness of these algorithms in the presence of noise and outliers [31]. We used open-source python packages (i.e., Scikit-learn [34] and Statsmodels [35]) to implement our predictive models.
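As a sketch, several of these classifier families can be fit side by side with scikit-learn (XGBoost is omitted to keep the example self-contained); the data are synthetic with roughly 14% positives, mimicking the MIA rate, and all hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic data with roughly 14% positives, similar to the MIA rate.
X, y = make_classification(n_samples=1000, n_features=25, weights=[0.86],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "LR-L2": LogisticRegression(penalty="l2", max_iter=1000),
    "SVM-L2": LinearSVC(penalty="l2", max_iter=5000),
}
for model in models.values():
    model.fit(X_tr, y_tr)
preds = {name: m.predict(X_te) for name, m in models.items()}
```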

Performance metrics

The predictive models were assessed using three performance metrics: the Area Under the Curve (AUC, a.k.a. C-statistic) of the Receiver Operating Characteristic (ROC), the Micro-F1 score, and the Weighted-F1 score. The ROC plots sensitivity (recall) against one minus specificity. AUC values lie between 0 and 1, with a higher value indicating better predictive capability of the model. The F1 score is defined as the harmonic mean of recall and precision, where precision is the fraction of appointments predicted to be in the positive class (e.g., predicted missed) that truly belong to it. The Micro-averaged F1 score aggregates the contributions of both classes to compute the harmonic mean, while the Weighted-F1 score weights the F1 score of each class by the number of appointments in that class.
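These metrics can be computed with scikit-learn; the labels and scores below are toy values, not study results.

```python
from sklearn.metrics import roc_auc_score, f1_score

# Toy labels and scores; in the study these come from the held-out test set.
y_true  = [0, 0, 0, 1, 1, 0, 1, 0]
y_score = [0.1, 0.2, 0.3, 0.8, 0.7, 0.4, 0.2, 0.1]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

auc      = roc_auc_score(y_true, y_score)                # C-statistic
micro_f1 = f1_score(y_true, y_pred, average="micro")     # equals accuracy here
wtd_f1   = f1_score(y_true, y_pred, average="weighted")  # class-size weighted
```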

Outcome and experimental settings

Our primary outcome (class 1) is MIA, defined as any scheduled imaging appointment that was not performed and was not canceled or rescheduled before the scheduled time. Since the later appointments of a patient encode past information (e.g., total cancellations, total appointments, and so on), we cannot assume appointments for the same patient are independent of each other. Since independence is needed for training predictive models, we used only the last appointment of each patient in our predictive models. Thus, in total, we used 9,970 appointments for model development and validation.

The data were split into a training (80%) and a test set (20%). Algorithm parameters were optimized on the training (derivation) set using five-fold cross-validation. Performance metrics were computed on the test set. This process was repeated five times, each time with a random split into training/testing sets to ensure the robustness of our results. Please note that we optimized algorithm parameters to maximize AUC. The average and standard deviation of performance metrics on the test set over the 5 random splits are presented.
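The evaluation protocol can be sketched as follows, assuming scikit-learn; the model, parameter grid, and data are illustrative stand-ins for the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

aucs = []
for seed in range(5):  # five random 80/20 splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    # Tune hyperparameters on the training set via 5-fold CV, maximizing AUC.
    search = GridSearchCV(RandomForestClassifier(random_state=0),
                          {"n_estimators": [50, 100]},
                          scoring="roc_auc", cv=5)
    search.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, search.predict_proba(X_te)[:, 1]))

mean_auc, sd_auc = float(np.mean(aucs)), float(np.std(aucs))
```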

We considered three sets of features for these models. All 57 features were used to develop “full models.” Using the feature selection procedure outlined above (i.e., the SVM-L1 feature selection method), we developed parsimonious models with the most impactful variables. To show the impact of the SVM-L1 feature selection method, we also report the results of a Univariate Feature Selection (UFS) method, which used a chi-squared test to select the twenty-five best features. In feature selection, the goal is to select features that are highly dependent on the output. When a feature and the output are independent, the observed counts are close to the expected counts, which yields a small chi-squared value. We therefore select the features with the highest chi-squared values, as these are the most dependent on the output. Interestingly, twenty of the 25 UFS-selected features coincide with those selected by our SVM-L1 method. Specifically, UFS selected “Order name – diagnostic,” “Time - 8 A.M. to 10 A.M.,” “Weekday – Monday,” “Days Since Last Appointment,” and “Primary Insurance - Medicare” instead of “Median Household Income,” “Distance,” “Primary Insurance – Private,” “SDH – Caregiving,” and “Temperature”. The other 20 features are the same.
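Chi-squared univariate selection of this kind is available in scikit-learn; the synthetic matrix below stands in for the 57 encoded patient features.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the 57 one-hot encoded patient features.
X, y = make_classification(n_samples=300, n_features=57, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative features

# Keep the 25 features with the highest chi-squared statistic.
selector = SelectKBest(chi2, k=25).fit(X, y)
selected = selector.get_support(indices=True)
```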

Here, the coefficients of the LR parsimonious model are of great importance. After standardizing the variables, a larger absolute coefficient suggests that the likelihood is more sensitive to this specific variable [31]. The sign of the coefficient indicates the direction of correlation with MIA. We also used the odds ratio (OR) and marginal effect (ME) [36] in our analysis. The odds ratio (OR) represents the odds that MIA will occur (probability p it will occur over 1 − p) given a particular binary variable divided by the odds of MIA in the absence of that variable. For continuous variables, the odds ratio corresponds to the ratio of the odds induced by a unit increase in the respective variable. An OR greater than one implies that a variable increases the odds of MIA. Marginal effect can be described as the change in the predicted probability of a binary outcome as the risk factor changes by 1 unit holding all other variables in the model constant [36]. Marginal effect provides a simpler way to compare the relative importance of various features in the model and quantify the incremental risk associated with each factor. In logistic regression, we cannot define a single marginal effect for all samples. The most common way is to report the average marginal effect across all samples in the data set [36].


Model performance

From the 9,970 appointments included in the study, there were 1,381 MIA and 8,589 non-MIA. Therefore, the overall proportion of MIA was 13.8%, considerably higher than the 6.5% no-show rate reported by Harvey et al. [15], whose study included many different radiology modalities. Note that 13.8% is the percentage of missed appointments when considering only the last appointment of each patient (i.e., 9,970 appointments). Here, we present the results of two types of models for MIA prediction, namely linear (SVM and LR) and non-linear (RF and XGBoost) models. Moreover, we consider three sets of features for these models. All 57 features were used to develop “full models.” These models utilize many variables, making them challenging to interpret. We then developed two parsimonious versions of these models using the feature selection techniques SVM-L1 and UFS. The parsimonious models have 25 features, including all 8 SDH variables. Models with fewer features have several advantages: they (i) are easier to interpret, (ii) require less training time, (iii) have less data redundancy, and (iv) are easier to implement in a clinical setting.

We utilized linear and non-linear models to predict MIA among scheduled radiology appointments at BMC. The highest-performing model (using all 57 features) was the RF, which achieved an average AUC of 76% and an average F1 score of 85%. However, similar performance was achieved with a parsimonious model utilizing just 25 variables, as well as with linear models. In particular, RF was the best parsimonious model with SVM-L1 features, with an AUC of 75.7%, nearly as high as the full 57-feature model, and an average F1 score of 84.2%, slightly below the full model. Moreover, parsimonious models using SVM-L1 features outperformed those trained on UFS features, which clearly shows the advantages of sophisticated feature selection methods like SVM-L1.

Table 5 presents the predictive model performance of the full and parsimonious models. Table 5 also lists the 25 variables in the LR-L2 parsimonious model, the LR coefficients of each variable (Coef), the correlation of the variable with the outcome (Ycorr), the mean of the variable (\(\underline{X}_{1}\)) for MIA, and the mean of the variable (\(\underline{X}_{0}\)) for attended appointments. ORs are reported with their 95% confidence intervals (OR 95% CI). Table 5 also presents Marginal Effects (ME) and p-values. The values inside the parentheses refer to the standard deviation of the corresponding metric. SVM-L2 and LR-L2 refer to the l2-norm regularized SVM and LR models. Note that the coefficients listed for each variable are from the LR-L2 model. We also presented the feature importance of XGBoost in the Supplement (see Additional file 1).

Table 5 MIA prediction models: Performance metrics of full and parsimonious models.

Overall, the most impactful variables (i.e., with higher absolute coefficient values) on missed appointments were characteristics of the appointment, such as the timing of the appointment and source of referral, the patient’s appointment history, and socioeconomic features such as median household income and the SDH ‘Caregiving’ variable, indicating having trouble providing care for children, family members, or friends.

Appointment features

Appointments before 8 am (OR=2.312, ME=0.112) were more likely to be missed. Patients referred to imaging from Community Health Centers (CHCs) were less likely to miss their appointments (OR=0.146, ME=-0.115), as were patients referred from the Radiology department (OR=0.257, ME=-0.144). In contrast, patients referred from the Primary Care Department at BMC were more likely to miss their appointments (OR=1.476, ME=0.042). Patients with more completed past appointments had fewer MIA (OR=0.951, ME=-0.006). Additionally, longer time intervals following previously cancelled appointments were associated with more MIA, with the odds of MIA multiplied by approximately 1.001 for each day elapsed. Appointments occurring in the spring and winter were less likely to be missed (OR=0.744 and OR=0.710, respectively). Finally, patients who scheduled a diagnostic appointment were less likely to miss it (OR=0.851, ME=-0.018).


Patients whose primary language is Spanish had fewer MIA (OR=0.794, ME=-0.024). Patients with private insurance were less likely to miss appointments than those with Medicare or Medicaid (OR=0.824, ME=-0.021). Married individuals were less likely to miss appointments (OR=0.833, ME=-0.019).

Socioeconomic factors

Three of the SDH studied were observed to have a statistically significant impact on MIA, namely housing insecurity, difficulty paying utility bills, and caregiving. Patients who are at risk of becoming homeless (OR=1.364, ME=0.036) or have trouble paying utility bills (OR=1.282, ME=0.028) were more likely to miss their appointments. On the other hand, patients who had trouble taking care of a child, family member, or friend had fewer missed appointments (OR=0.512, ME=-0.059). Moreover, higher income was associated with fewer MIA (OR=0.999, ME=-0.0000004). Furthermore, inadequate access to transportation (OR=1.235, ME=0.024) approached a statistically significant association with more missed appointments.


Prediction of MIA helps clinicians introduce targeted interventions to efficiently utilize limited imaging capacity, improve radiology scheduling systems, and ultimately increase access to care. Given their complex and multifactorial etiologies, predicting MIA with high accuracy may not be practical. However, the algorithms presented here achieved acceptable performance and elucidated useful information about the variables most predictive of MIA.

Notably, appointment timing was shown to be a good indicator of MIA. MIA before 8 am may occur due to unexpected life events interfering (e.g., dropping children off at school), whereas patients may have an easier time attending appointments in the late afternoon after work. Morning rush-hour traffic could also be a contributing factor. Additionally, orders originating in the Primary Care Department at BMC were more likely missed than those referred from Community Health Centers (CHCs) and Radiology. These clinics serve as primary care, referring patients to BMC for breast imaging. Variety in patient populations is less likely to explain this finding, as SDH and demographics were controlled for. It is possible that patients who typically access care locally at CHCs plan carefully for travel to BMC and may schedule appointments with support of navigators onsite at CHCs, leading to fewer MIA. With this information, we can target patients coming from Primary Care with reminders and offer them enhanced patient navigation.

Diagnostic appointments were less likely to be missed. The importance of diagnostic appointments, evaluation of indeterminate clinical exam findings or of inconclusive findings on screening mammogram, may serve as a motivator for patients not to miss their appointment. Patients who kept past appointments tend to remain reliable, which is expected if other factors in their lives remained constant over time. The odds of patients missing an appointment increased as time elapsed from their last appointment, whether or not they attended that appointment. Patients may lose touch with the health system when appointments are spaced over a longer time interval.

The finding of Spanish-speaking patients being observed to have fewer MIA is challenging to explain, as there is no obvious connection between primary language and missed appointments. However, this is consistent with findings that patients requiring interpreters were less likely to miss appointments [8]. This finding remains largely unexplained, as current clinical practice is to call interpreter services on an iPad upon patient arrival.

Patients with private insurance may miss fewer appointments, as they often have greater financial resources than those on public insurance, such as Medicaid. With greater financial resources, it is easier to keep appointments despite unexpected life events. This finding is consistent with findings by Daye et al. [2] and Harvey et al. [15] that patients with noncommercial and Medicaid insurance, respectively, had more missed appointments. However, in contrast to these studies, we did not find associations between patient race and MIA. The patient demographics in our study population differ considerably from those in the two other studies, both performed at another academic medical center in our city. Married people may miss fewer appointments due to social and/or financial support from partners, increasing their ability to attend appointments. In addition, partners may provide assistance with transportation beyond detection by the THRIVE screener for unmet transportation needs.

It is well-documented that socioeconomic factors can negatively impact health outcomes [8, 37]. Housing insecurity and inability to afford utility bills are emblematic of financial strain, which could make MIA more likely given that they could result in uncovered cost-sharing expenses for patients. Patients who care for a dependent family member or friend may plan ahead for travel to medical appointments and may need to hire a caregiver themselves, possibly leading to fewer MIA. Patients with higher median household incomes have greater financial capability that makes it easier to keep appointments. In contrast with non-modifiable demographic factors, we can address SDH to decrease the incidence of MIA. For example, identifying transportation as an impactful SDH would justify providing ride-share vouchers to selected patients.

Prediction of MIA with the data available is inherently limited by the complex, multifactorial nature of MIA, which also are influenced by variables not included in this study. Since the variables associated with MIA cannot be studied in a randomized, controlled trial, it is not appropriate to make causal inferences from these data. It is possible that other unidentified variables are truly responsible for the relationships observed, which may limit the generalizability of our results. Moreover, generalizability of our findings to other settings may be limited due to the unique patient support mechanisms at our institution and the diversity of population, including 54% black patients and over 50% with public insurance. While missed appointments could be prevented by addressing the variables associated with MIA identified by the prediction model, these variables may vary by institution based on patient demographics and insurance payor mix.

The most impactful variables on missed appointments in our breast imaging patient population included appointment timing, prior appointment history, referral department of origin, and socioeconomic factors such as household income and access to caregiving services. With more complete data on appointment characteristics as well as patient SDH and demographics, it may be possible to achieve better predictive performance. Since this study includes only approximately 15 months of THRIVE data, many patients had not been screened for unmet social needs. Additional data collected in the future may provide more focused insights into how these variables are associated with MIA and potentially strengthen these predictive models.

Availability of data and materials

The data that support the findings of this study are available from Boston Medical Center in Massachusetts but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of Boston Medical Center. The codes for preprocessing, modeling, and evaluation can be found in the following GitHub repository:



Abbreviations

SDH: Social Determinants of Health
MIA: Missed Imaging Appointments
LR: Logistic Regression
RF: Random Forest
SVM: Support Vector Machines
AUC: Area Under the ROC Curve
CHC: Community Health Centers
OR: Odds Ratio
ME: Marginal Effects
BMC: Boston Medical Center




Acknowledgements

Not applicable.


Funding

The research was partially supported by the NSF under grants CCF-2200052, DMS-1664644, and IIS-1914792, by the NIH under grants R01 GM135930 and UL54 TR004130, and by the Boston University Kilachand Fund for Integrated Life Science and Engineering.

Author information

Authors and Affiliations



Contributions

SS analyzed and prepared the data, developed the models, and obtained the results. SS, AA, IP, and MF analyzed the results and drafted the manuscript. IP, MF, CL, and AR contributed to data acquisition and performed the critical revision of the manuscript. IP and MF designed and led the study. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ioannis Ch. Paschalidis.

Ethics declarations

Ethics approval and consent to participate

The protocol of the retrospective HIPAA-compliant cohort study was approved by the Boston University School of Medicine Institutional Review Board (# H-41315), and written informed consent from the patient was waived. All methods were carried out in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Sotudian, S., Afran, A., LeBedis, C.A. et al. Social determinants of health and the prediction of missed breast imaging appointments. BMC Health Serv Res 22, 1454 (2022).
