Current prognostic models factor in patient- and disease-specific variables but do not consider the cumulative risk of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day and day of the week. Model performance was evaluated alone and in combination with simple disease-specific models.
Patients admitted between 2000 and 2006 to 501 public and private hospitals in NSW, Australia were used for training, and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and times of the day was modeled by separating hospitalization risk into 21 time periods (day, evening and night across each day of the week). Three models were developed to predict death up to 7 days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); (3) disease-specific models (Charlson comorbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and Akaike and Bayesian information criteria.
There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, alongside the well-known 'weekend effect' in which mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79 respectively). The combined model, which included time-varying risk, however yielded an average AUC of 0.92. This model performed best for stays of up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94).
Risks of hospitalization vary not just with the day of the week but also with the time of day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to result in robust predictive performance. Such risk exposure models should find utility both in enhancing standard prognostic models and in estimating the risk of continued hospitalization.
Keywords: Weekend effect; Risk exposure model; Dynamic risk prediction; Quality of hospital service
Prognostic models help clinicians identify those patients most at risk of negative outcomes, and tailor therapy to manage that risk [1, 2]. Such models are typically developed for specific conditions or diseases, combining clinical indicators (such as patient or disease characteristics) and test results that stratify populations by risk. Performance varies, and even the best models often have good but not excellent classification accuracy. Models are most often developed using data from specific periods of time, patient cohorts or geographies, and when evaluated against new populations may not perform as well as in the original population.
Variation in the predictive accuracy of models must in part be due to variations in disease patterns across populations, but is also likely due to local variations in clinical service and practice. If prognostic models specifically incorporated information about health service delivery, the result might be more accurate, generalizable clinical tools. There are, for example, well-known risks associated with hospitalization that can lead to patient harm or death. These risks vary with diagnosis [5, 6], hospital [7, 8] and route of admission. Risks are also known to vary significantly with the time at which hospitalization occurs, with weekend admissions carrying greater risk than admissions on other days [10–17].
Modeling the risk of exposure to hospitalization should allow for more informed admission and discharge decisions. Indeed, whilst it is standard to develop risk–benefit models for new interventions such as medicines or procedures, we do not routinely apply the same logic to the decision to admit to or discharge from hospital, one of the most universal of all clinical interventions. In the same way that radiation exposure models help determine when a patient has exceeded a safe radiation dose, it should be feasible to develop models that determine when patients have exceeded a safe 'dose' of hospitalization.
Some work has explored the risk of harm to a patient following exposure to an average day in hospitalization [10, 18]. Combining models which predict time-varying risks associated with hospitalization with traditional disease-specific prediction rules should theoretically result in much more accurate predictive tools which can be used to update risk estimates as an admission progresses over time. In this paper we report on the development and evaluation of one such family of models, using only standard administrative data.
All admissions to 501 public and private hospitals in New South Wales (NSW), Australia between 1 July 2000 and 30 June 2007 were extracted from the NSW Admitted Patient Data Collection (APDC). Admissions were coded using the International Classification of Diseases 10th revision Australian modification (ICD-10-AM) and Australian refined diagnosis related groups (DRG). Records with an invalid or missing admission date, date of death, principal diagnosis, DRG, patient age or gender were excluded. After exclusion, a total of 11,732,260 admissions remained, with 201,647 deaths either during hospitalization (177,828) or within 7 days of discharge (23,819). A Charlson comorbidity index was calculated for each admission using ICD-10-AM codes.
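The Charlson index scoring step can be sketched as follows. This is a hedged illustration, not the study's code: each comorbidity category carries a fixed weight, and an admission's index is the sum of weights for the distinct categories present among its diagnosis codes. Only a tiny illustrative subset of the full ICD-10-AM category mapping is shown; a real implementation would use a complete published mapping.

```python
# Hedged sketch of Charlson comorbidity scoring from ICD-10 codes.
# CHARLSON_SUBSET is an illustrative fragment, NOT the full mapping.
CHARLSON_SUBSET = {
    "I50": ("congestive heart failure", 1),
    "C78": ("metastatic solid tumour", 6),
    "B24": ("HIV/AIDS", 6),
}

def charlson_index(diagnosis_codes):
    """Sum weights over distinct comorbidity categories found in the codes."""
    seen = set()
    for code in diagnosis_codes:
        prefix = code[:3]  # ICD-10 category, e.g. "I50" from "I50.0"
        if prefix in CHARLSON_SUBSET:
            seen.add(prefix)
    return sum(CHARLSON_SUBSET[p][1] for p in seen)

# CHF (1) + metastatic tumour (6) = 7; pneumonia (J18.9) carries no weight here
assert charlson_index(["I50.0", "C78.7", "J18.9"]) == 7
```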
Hospital mortality rates are prone to “discharge bias”, underestimating the true impact of hospitalization on death rates because some deaths occur post-discharge . To capture such post-discharge deaths, admission records were linked to the death registry [22, 23].
Inspection of the data set revealed that the probability of death up to seven days post discharge varied by day of admission, with weekend admissions having a higher risk of mortality [13–17]. The data also showed that the risk of mortality varied by the time of day of admission, peaking in the early evening and lowest in the morning. Combining these two revealed a fine-grained sinusoidal pattern to the risk of mortality with both daily and weekly periodicities (Figure 1).
Logistic regression models
Logistic regression models were developed with the dependent variable being the probability of dying up to seven days post discharge from hospital. Three groups of models were developed using an array of independent variables: (1) a simple background risk model using only age and gender; (2) a group of time-varying risk models that estimated the risk associated with exposure to hospitalization (time of admission and a counter for the number of days currently in hospital); and (3) a group of disease-specific models which characterize specific risks associated with disease state (Charlson comorbidity index), with subgroup analyses for 5 DRGs known to display significant day-to-day variation in risk of mortality, as well as route of admission (Emergency Department (ED) or non-ED).
Within the time-varying risk model set, the differential impact of admission at nights and weekends was explored by developing three separate models that (1) treated all days as having equal risk, (2) distinguished the risk of weekdays from weekends, or (3) modeled both daily and weekly periodicities in the risk of death following admission. When sampling a sinusoidal function, to avoid aliasing or undersampling, the sampling rate must exceed the Nyquist rate, i.e. be greater than double the frequency of the function being sampled. As our mortality data have a daily periodicity of one cycle, our model needed to sample the distribution more than twice a day. We thus developed a model that segmented each day of the week into three sample periods: "daytime" (08:00 to 16:59 hours), "evening" (17:00 to 23:59), and "night" (00:00 to 07:59), creating 21 unique sample periods every week.
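The segmentation above can be sketched as a small helper that maps an admission timestamp to one of the 21 weekly sample periods. This is an illustrative sketch, not the authors' code; the function and period names are our own, with boundaries taken from the text.

```python
from datetime import datetime

def time_band(hour: int) -> int:
    """Return 0 (night), 1 (daytime) or 2 (evening) for an hour of the day."""
    if hour < 8:
        return 0   # night: 00:00-07:59
    if hour < 17:
        return 1   # daytime: 08:00-16:59
    return 2       # evening: 17:00-23:59

def sample_period(ts: datetime) -> int:
    """Index (0-20) of the weekly sample period containing ts.
    Monday night = 0, Monday daytime = 1, ..., Sunday evening = 20."""
    return ts.weekday() * 3 + time_band(ts.hour)

# A Sunday 18:30 admission falls in the 21st (last) period
assert sample_period(datetime(2007, 1, 7, 18, 30)) == 20
```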
Finally, a full logistic model was assembled from these three separate models, estimating the risk of exposure during hospitalization as the sum of background, time-varying and disease-specific factors.
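A sketch of such a combined logistic model is given below; the symbol names are illustrative assumptions rather than the paper's own notation.

```latex
\log\frac{p}{1-p} =
\underbrace{\beta_0 + \beta_{\mathrm{age}}\,\mathrm{age} + \beta_{\mathrm{sex}}\,\mathrm{sex}}_{\text{background}}
\;+\;
\underbrace{\sum_{k=1}^{21} \gamma_k\, d_k + \gamma_{\mathrm{adm}}\, t_{\mathrm{adm}}}_{\text{time-varying exposure}}
\;+\;
\underbrace{\delta_{\mathrm{cci}}\,\mathrm{CCI} + \delta_{\mathrm{drg}}\,\mathrm{DRG}}_{\text{disease specific}}
```

where $p$ is the probability of death up to seven days post-discharge, $d_k$ counts the days of the current stay falling in weekly sample period $k$, $t_{\mathrm{adm}}$ encodes the admission time, and CCI is the Charlson comorbidity index.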
Model training and testing
APDC data from 2001 to 2006 were used to train the logistic models (10,745,181 admissions), and 2007 data were set aside to prospectively test model performance (987,079 admissions). Models were developed using admission data and tested on their ability to correctly identify whether a patient was alive or dead at 7 days post-discharge.
Model performance was assessed by the area under the receiver operating characteristic curve (AUC), also known as the C-statistic. Models with an AUC greater than 0.8 are generally considered good classifiers, with an AUC of 1 indicating perfect performance. In addition, we estimated the 'goodness' of each model using the Akaike information criterion (AIC) and Bayesian information criterion (BIC), which are penalized model selection criteria that track model performance as the number of model parameters increases. Both criteria help minimize over-fitting of models to data, and amongst similarly performing models the one with the smallest value is preferred.
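The two penalized criteria follow standard definitions, sketched here for reference (this is not the study's code): both trade off fit (the maximized log-likelihood) against model size, with BIC's penalty growing with sample size.

```python
import math

def aic(log_likelihood: float, k: int) -> float:
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood: float, k: int, n: int) -> float:
    """Bayesian information criterion: k ln(n) - 2 ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

# With ~10.7 million admissions, BIC's log(n) ~ 16.2 penalty per parameter
# dwarfs AIC's constant 2, so BIC punishes extra parameters more heavily.
assert bic(-1000.0, 10, 10_745_181) > aic(-1000.0, 10)
```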
AUC requires calculation of model sensitivity and specificity at different values of the ratio of patients who die to those who survive (the cut-off value). We selected the optimal cut-off as the point on the ROC curve where the sum of sensitivity and specificity is maximal, and report sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy at this point. The study was approved by the NSW Population and Health Services Research Ethics Committee and the UNSW Human Research Ethics Committee.
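The AUC and optimal cut-off selection can be sketched in a few lines. This is an illustrative implementation under our own naming, not the study's code: AUC is computed rank-wise (the probability that a random death outranks a random survivor), and the cut-off maximizes sensitivity + specificity (Youden's J), as described above.

```python
def auc(scores, labels):
    """Rank-based AUC: P(random positive score > random negative score)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    ties = sum(1 for p in pos for n in neg if p == n)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def optimal_cutoff(scores, labels):
    """Threshold at which sensitivity + specificity is maximal."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t) / n_pos
        spec = sum(1 for s, y in zip(scores, labels) if y == 0 and s < t) / n_neg
        if sens + spec > best_j:
            best_t, best_j = t, sens + spec
    return best_t

# Toy example: 3 deaths (label 1), 3 survivors (label 0)
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0]
assert abs(auc(scores, labels) - 8 / 9) < 1e-9
```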
The demographic distribution of patients in the training (July 2000 – December 2006) and test (2007) data sets is summarized in Table 1. Age and comorbidity patterns were similar in both groups. Mean length of stay dropped from 3.21 days in the seven training years to 2.75 in the final test year, and the death rate (the ratio of deaths during hospitalization and deaths within 7 days of discharge to all admissions) also dropped from 1.8% to 1.3%.
Demographic characteristics of patients in model training and testing data sets
When tested alone, individual model components for background risk, time-varying risk, and disease-specific risk had similar but modest power to identify risk of death up to 7 days post-discharge, with AUCs of 0.71, 0.79 and 0.79 respectively (Table 2). Amongst the three time-varying risk models, treating each day as having the same risk produced an AUC of 0.71, distinguishing weekdays from weekends yielded 0.69, and the fine-grained temporal model improved performance to 0.79. This best temporal exposure model also had greater accuracy (74.4%) than the background risk (66.2%) and illness severity (66.6%) models alone.
Model performance in predicting death up to 7 days post-discharge, as measured by AUC, AIC and BIC. Models 6 are model 5(c) trained and tested on a single DRG subpopulation; models 7 are model 5(c) trained and tested only on the ED or non-ED subpopulation.

| Model | AUC (95% CI) | AIC | BIC |
| --- | --- | --- | --- |
| 1: Background risk | 0.711 (0.708–0.715) | 1.0710e+06 | 1.0710e+06 |
| 2(a): Time exposure: length of stay (LOS) | 0.710 (0.705–0.714) | 1.3155e+06 | 1.3155e+06 |
| 2(b): Time exposure: LOS (weekday, weekend), admission time | 0.692 (0.687–0.697) | 1.1960e+06 | 1.1961e+06 |
| 2(c): Time exposure: LOS (morning, evening or night for each of seven days), admission time | 0.786 (0.782–0.790) | 1.3464e+06 | 1.3467e+06 |
| 3: Disease model: Charlson comorbidity index | 0.786 (0.783–0.790) | 1.0826e+06 | 1.0826e+06 |
| 4(a): Background plus time exposure: 1 + 2(a) | 0.730 (0.726–0.739) | 0.9408e+06 | 0.94097e+06 |
| 4(b): Background plus time exposure: 1 + 2(b) | 0.743 (0.739–0.747) | 1.1519e+06 | 1.1520e+06 |
| 4(c): Background plus time exposure: 1 + 2(c) | 0.883 (0.880–0.886) | 1.4713e+06 | 1.4716e+06 |
| 5(a): Full model: 1 + 2(a) + 3 | 0.891 (0.888–0.894) | 1.5143e+06 | 1.5143e+06 |
| 5(b): Full model: 1 + 2(b) + 3 | 0.893 (0.890–0.896) | 1.5118e+06 | 1.5119e+06 |
| 5(c): Full model: 1 + 2(c) + 3 | 0.923 (0.921–0.926) | 1.6769e+06 | 1.6772e+06 |
| 6 (R61): Lymphoma and Non-Acute Leukemia | 0.900 (0.878–0.922) | 4.7669e+03 | 4.9223e+03 |
| 6 (E02): Other Respiratory System OR Procedures | 0.940 (0.911–0.969) | 2.6019e+03 | 2.7281e+03 |
| 6 (F70): Major Arrhythmia and Cardiac Arrest | 0.762 (0.736–0.802) | 1.1044e+03 | – |
| 6 (E64): Pulmonary Oedema and Respiratory Failure | 0.850 (0.807–0.894) | – | – |
| 6 (J62): Malignant Breast Disorders | 0.903 (0.874–0.933) | – | – |
| 7: ED admissions only | 0.909 (0.905–0.912) | 3.0017e+05 | 3.0044e+05 |
| 7: Non-ED admissions only | 0.925 (0.921–0.928) | 1.4626e+06 | 1.4629e+06 |

The optimal cut-off value is selected as the point on the receiver operating characteristic curve where the sum of sensitivity and specificity is maximal. Sensitivity, specificity, model accuracy, positive predictive value (PPV) and negative predictive value (NPV) are reported at this optimal ratio of patient deaths to survivals.
When the individual models were combined to create the full model, AUC rose to 0.92 for the whole population. When the model was trained and tested only on patients admitted via the Emergency Department (ED) the AUC was 0.91; training and testing on non-ED admissions achieved an AUC of 0.92. Performance dropped to 0.62 when the population-trained model was tested only on patients with a primary diagnosis of a major arrhythmia or cardiac arrest, 0.75 for Lymphoma and Non-Acute Leukaemia and 0.87 for laryngoscopic, mediastinal and other chest procedures. When the full model was trained using only the subgroup of patients from these DRGs, performance improved, but variably. Patients with major arrhythmia or cardiac arrest had minimal improvement (AUC 0.76). Lymphoma and Non-Acute Leukaemia patients (AUC 0.90), and malignant breast disorders (AUC 0.90) improved substantially, and patients with laryngoscopic, mediastinal and other chest procedures surpassed the general population benchmark (AUC 0.94).
The combined or full model's performance varied with length of stay (Figure 2). The model performed best for patients with hospital stays of seven days or less, covering some 93% of admissions, and peaked at days 3 to 5 with an AUC of 0.94. After seven days, model sensitivity remained surprisingly steady, but specificity dropped, indicating that model performance was deteriorating because of an increased number of false positives. Performance also became increasingly erratic, possibly reflecting smaller patient numbers in both training and test sets, as well as the increasing influence of unmodelled disease- and service-specific factors.
AIC and BIC measures closely tracked each other for all models, and increased with increasing model complexity, as expected. Most models had AIC values in the range 1.07e+06 to 1.68e+06, except for the more specialised models based upon a single DRG (e.g. R61: Lymphoma and Non-Acute Leukemia, AIC = 4.7669e+03).
Modeling the time-accumulated risk associated with hospital stay has allowed the development of a class of predictive models that, when combined with simple disease data, appear to substantially outperform many disease-specific models alone. This is despite the disease-specific information in our model being represented by a relatively simple linear combination of administrative information, including the Charlson index and DRG categories. One would anticipate even better performance if an exposure model were combined with a clinically based disease prognostic model relying, for example, on patient record or disease registry data, as well as better modeling of the relationship of variables to mortality (e.g. age and death are better represented by a log-linear rather than a linear relationship).
A striking feature of our data set is the strong daily rhythm in the risk of death, which echoes the well-known weekly variation in the risk of death that peaks with weekend admission (Figure 1). Our data show that the risk of mortality peaks in the early evening and is at its lowest in the morning (Figure 3). Daytime admission did not appear to increase the risk of death, but this attenuated at weekends – the weekend effect. Compared to admission during the Monday daytime period, the odds ratios for risk of death were worse for evenings and nights, and Sunday daytime carried a 1.7 times greater risk than Monday daytime.
It is hard to impute causation to such observational data, and we can only speculate why the diurnal rhythm exists. There are two main causal readings to be explored. Firstly, it may be that such risks of death are a feature of the health service, where patients experience increased risk because of reduced availability or quality of clinical services. The push for health services to provide uniform "24/7" care is a response to this concern. The second causal reading is equally plausible: some patient groups are associated with greater risk of death at different times. This may be due to a selection bias, where sicker patients present to hospital at given times, or in some instances may have a biological cause. Our recent analysis of the weekend effect identified that both causal readings are plausible, and that different patient groups demonstrate either care or illness risk patterns.
We show that the performance of models using patient subgroups based upon DRGs is also variable. For example, performance was not strong for patients with major arrhythmia or cardiac arrest. As 8,254 admissions with major arrhythmia or cardiac arrest were present in the training data set, with an average LOS of 2.69 days, it seems unlikely that sample size or LOS influenced performance. In contrast, model performance was strong for DRGs associated with leukemia or lymphoma. In our recent analysis of weekend effects, increased weekend deaths for acute cardiovascular events appeared to be related to service quality (e.g. lack of availability of imaging or stenting services out of hours). For oncological patients, however, the cause seems to be that a sicker cohort of patients presents at the weekend. This suggests that the models developed here may be better at capturing disease-specific rather than service-specific factors in their exposure risk profiles, and warrants further investigation. Where there are time-varying risks associated with a health service for a particular patient group, specific modeling of that independent risk may be needed.
Modeling the route of admission (Emergency vs. non-Emergency Department) did not appear to confer any advantage, despite it being shown by others to be a major predictor of mortality. One possible explanation is that information about route of admission is already captured implicitly in the exposure model. For example, patients admitted out of hours may be most likely to present via emergency, and so the out-of-hours risk already models route of admission as a hidden covariate. Further work is needed to understand the nexus between time-varying risk and route of admission.
Adding new variables to the basic model improved model accuracy at the expense of increasing AIC and BIC values (Table 2). However, the AIC/BIC increments were modest and the improvement in model performance substantial. Further, interpreting AIC and BIC values is hard with large data sets; they are typically more valuable when models are developed on small data sets, and with large data sets AIC and BIC may end up preferring over-fitted models.
Most current clinical predictive tools provide snapshot predictions of risk, independent of time. Yet clearly risks should vary with time. In this study predictive performance was best in the first week of admission, and performance early in the first week was exceptionally strong. One would expect that, as admission time lengthens, many additional factors come into play to modify risk, and therefore long-run prediction becomes increasingly difficult – a situation well known in forecasting. It is also the case that patient mix changes with time: patients with long stays represent a different cohort with a different risk profile, and may need to be modeled separately. Sample sizes available for such model development also diminish as length of stay increases, making it harder to develop generalizable models.
Models such as those presented here could be used to forecast risks such as death or specific clinical events, and to update those forecasts as additional information accumulates. Using the full model developed here, it is clear that the risk profile for patients varies with both time and day of admission (Figure 4). Given that predictive accuracy changes with LOS, it would also be appropriate to provide clinicians with estimates of the accuracy of each prediction and emphasize for example that confidence is stronger in the short rather than long run. The next stage in developing the models presented here would be to evaluate their capacity to forecast future risk of death for patients given their current exposure to hospitalization.
One challenge for creators of clinical prediction models is that they often fail to generalize to settings beyond those from which they were created, because they model a point in time, local disease patterns or service provision patterns. The approach developed here is likely to be highly generalizable, as it relies on no location-specific information except for the weightings associated with different model variables. These weightings could be calibrated using the best available data for a given location. Further, the evaluation here was on patients from a large geography encompassing urban, rural and remote settings, and hospitals ranged from public to private, and from small facilities to large teaching and specialist referral centers. Addition of location-specific variables and disease-specific elements is likely to further enhance the general model reported here.
Risks of death associated with hospitalization vary not just with the day of the week but also with the time of day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Enhancing disease-specific prognostic models with estimates of the cumulative risk of exposure to hospitalization appears to greatly enhance predictive performance. Risk exposure models should find utility both in enhancing standard prognostic models and in estimating the risk of continued hospitalization.
AIC: Akaike information criterion
APDC: Admitted Patient Data Collection
AUC: Area under the receiver operating characteristic curve
BIC: Bayesian information criterion
DRG: Diagnosis related group
ICD-10-AM: International Classification of Diseases, 10th revision, Australian modification
LOS: Length of stay
NPV: Negative predictive value
NSW: New South Wales
PPV: Positive predictive value
Data for this study were provided by the New South Wales Ministry of Health, and data linkage was performed by the Centre for Health Record Linkage. The study was supported by funding from NHMRC Program Grant 568612 and the NHMRC Centre for Research Excellence in E-health. The Australian Institute of Health Innovation is supported by a Capacity Building Infrastructure Grant provided by the NSW Ministry of Health.
Centre for Health Informatics, Australian Institute for Health Innovation, University of New South Wales
The School of Psychology, Social Work & Social Policy, University of South Australia
Australian Patient Safety Foundation
Altman DG, Vergouwe Y, Royston P, Moons KG: Prognosis and prognostic research: validating a prognostic model. BMJ. 2009, 338:b605.
Donze J, Aujesky D, Williams D, Schnipper JL: Potentially avoidable 30-day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013, 173(8):632–638.
Toll DB, Janssen KJM, Vergouwe Y, Moons KGM: Validation, updating and impact of clinical prediction rules: a review. J Clin Epidemiol. 2008, 61(11):1085–1094.
Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ: Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010, 363(22):2124–2134.
Saposnik G, Baibergenova A, Bayer NH, Hachinski V: Weekends: a dangerous time for having a stroke? Neurology. 2007, 68(12):A228.
Kostis WJ, Demissie K, Marcella SW, Shao YH, Wilson AC, Moreyra AE: Weekend versus weekday admission and mortality from myocardial infarction. N Engl J Med. 2007, 356(11):1099–1109.
Hauck K, Zhao XY, Jackson T: Adverse event rates as measures of hospital performance. Health Policy. 2012, 104(2):146–154.
Cram P, Hillis SL, Barnett M, Rosenthal GE: Effects of weekend admission and hospital teaching status on in-hospital mortality. Am J Med. 2004, 117(3):151–157.
Campbell MJ, Jacques RM, Fotheringham J, Maheswaran R, Nicholl J: Developing a summary hospital mortality index: retrospective analysis in English hospitals over five years. BMJ. 2012, 344:e1001.
Freemantle N, Richardson M, Wood J, Ray D, Khosla S, Shahian D, Roche WR, Stephens I, Keogh B, Pagano D: Weekend hospitalization and additional risk of death: an analysis of inpatient data. J R Soc Med. 2012, 105(2):74–84.
Becker D: Weekend hospitalisation and mortality: a critical review. Expert Rev Pharmacoecon Outcomes Res. 2008, 8(1):23–26.
Schmulewitz L, Proudfoot A, Bell D: The impact of weekends on outcome for emergency patients. Clin Med. 2005, 5(6):621–625.
Mohammed MA, Sidhu KS, Rudge G, Stevens AJ: Weekend admission to hospital has a higher risk of death in the elective setting than in the emergency setting: a retrospective database study of National Health Service hospitals in England. BMC Health Serv Res. 2012, 12:87.
Barnett MJ, Kaboli PJ, Sirio CA, Rosenthal GE: Day of the week of intensive care admission and patient outcomes: a multisite regional evaluation. Med Care. 2002, 40(6):530–539.
Bhonagiri D, Pilcher DV, Bailey MJ: Increased mortality associated with after-hours and weekend admission to the intensive care unit: a retrospective analysis. Med J Aust. 2011, 194(6):287–292.
Miyata H, Hashimoto H, Horiguchi H, Matsuda S, Motomura N, Takamoto S: Performance of in-hospital mortality prediction models for acute hospitalization: hospital standardized mortality ratio in Japan. BMC Health Serv Res. 2008, 8:229.
Hauck K, Zhao XY: How dangerous is a day in hospital? A model of adverse events and length of stay for medical inpatients. Med Care. 2011, 49(12):1068–1075.
Roberts RF, Innes KC, Walker SM: Introducing ICD-10-AM in Australian hospitals. Med J Aust. 1998, 169:S32–S35.
Charlson ME, Pompei P, Ales KL, MacKenzie CR: A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987, 40(5):373–383.
Pouw ME, Peelen LM, Moons KG, Kalkman CJ, Lingsma HF: Including post-discharge mortality in calculation of hospital standardised mortality ratios: retrospective analysis of hospital episode statistics. BMJ. 2013, 347:f5913.
Lawrence G, Dinh I, Taylor L: The Centre for Health Record Linkage: a new resource for health services research and evaluation. Health Inf Manag J. 2008, 37(2):60–62.
Borthwick A, Buechi M, Goldberg A: Key concepts in the ChoiceMaker 2 record matching system. In: Proc. KDD-2003 Workshop on Data Cleaning, Record Linkage, and Object Consolidation. Washington, DC; 2003:28–30.
Hosmer DW, Lemeshow S: Applied Logistic Regression. 2nd edition. New York: Wiley; 2000.
Concha OP, Gallego B, Hillman K, Delaney GP, Coiera E: Do variations in hospital mortality patterns after weekend admission reflect reduced quality of care or different patient cohorts? A population-based study. BMJ Qual Saf. 2014, 23:215–222.
Horne JH, Baliunas SL: A prescription for period analysis of unevenly sampled time series. Astrophys J. 1986, 302(2):757–763.
Burnham KP, Anderson DR: Multimodel inference: understanding AIC and BIC in model selection. Sociol Methods Res. 2004, 33(2):261–304.
Laird N, Olivier D: Covariance analysis of censored survival data using log-linear analysis techniques. J Am Stat Assoc. 1981, 76(374):231–240.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.