
Performance of in-hospital mortality prediction models for acute hospitalization: Hospital Standardized Mortality Ratio in Japan

Abstract

Objective

In-hospital mortality is an important performance measure for quality improvement, although it requires proper risk adjustment. We set out to develop in-hospital mortality prediction models for acute hospitalization using a nation-wide electronic administrative record system in Japan.

Methods

Administrative records of 224,207 patients discharged from 82 hospitals in Japan between July 1, 2002 and October 31, 2002 were randomly split into development (179,156 records) and test (45,051 records) groups. Study variables included Major Diagnostic Category, age, gender, ambulance use, admission status, length of hospital stay, comorbidity, and in-hospital mortality. ICD-10 codes were converted into comorbidity scores based on Quan's methodology. Multivariate logistic regression analysis was then performed with in-hospital mortality as the dependent variable. Model performance was evaluated with c-indices and by comparing observed with predicted mortality across risk groups.

Results

In-hospital mortality rates were 2.68% and 2.76% for the development and test datasets, respectively. C-index values were 0.869 for the model that excluded length of stay and 0.841 for the model that included length of stay.

Conclusion

Risk models developed in this study included a set of variables easily accessible from administrative data, and still successfully exhibited a high degree of prediction accuracy. These models can be used to estimate in-hospital mortality rates of various diagnoses and procedures.

Background

Numerous studies have shown that the quality of healthcare is variable and often inadequate [1–3]. Initiatives to measure healthcare quality are an important focus for policymakers who believe that such measurements can drive quality-improvement programs [4]. The measurement of healthcare quality includes process and outcome measures [5]. Outcome evaluation, including in-hospital mortality, requires adequate risk adjustment for different patient mixes to allow appropriate evaluation of healthcare performance [6]. Because outcomes and the patient conditions that influence them are clearly defined, disease-specific risk adjustment models have been developed in several specialties (e.g. cardiovascular disease) and are available for various quality improvement activities [7–10].

Although disease-specific risk adjustment may be useful for improving the quality of a specific type of care, more generic case-mix risk-standardized outcomes are required for generalized quality evaluation across specialties [11]. In the United States, several generic case-mix measures are available from commercial as well as non-commercial sources (e.g. APACHE, MedisGroup, Adjusted Clinical Group, Diagnostic Cost Groups, and the RxRisk model) [12–14] and have been applied to categorizing patients according to resource needs. However, many of these systems require detailed clinical and/or administrative data that involve extensive data collection. Furthermore, most of these case-mix measures target healthcare costs rather than clinical outcomes.

To alleviate the burden of data collection, risk prediction models for in-hospital mortality using administrative data have been proposed [15, 27–29]. One study used a modified version of the Charlson Index [16] as a summary score of co-existing diagnoses. A recent international comparative study [17] demonstrated that the estimated comorbidity index could predict the chance of in-hospital death with relatively high precision (c-index of approximately 0.80), although the accuracy was suboptimal when Japanese data were analyzed. In this study, we developed new prediction models for in-hospital mortality using the same nationally standardized electronic dataset used in the aforementioned study. By including patient demographics and multiple administrative variables, we exceeded the previously reported predictive precision. Our study demonstrates a potential use of the developed prediction models for benchmarking the quality of healthcare across performance units with the national database.

Methods

Data source

We used a dataset provided by the Ministry of Health, Labor, and Welfare that was originally compiled to evaluate a patient classification system introduced for reimbursement in 2003 at 80 university-affiliated hospitals and 2 national center hospitals. The classification system, called the Diagnosis Procedure Combination (DPC), includes information on up to two major diagnoses and up to six co-existing diagnoses. The 2003 version of the DPC patient classification system comprises 16 major diagnostic categories (MDC) and 575 disease subcategories, which are coded in ICD-10 format. The dataset also included additional information on patient demographics, use and types of surgical procedures, emergency/elective hospitalization, length of stay (LOS), and discharge status including in-hospital death [18–20]. The dataset was derived from administrative and clinical information provided by participating hospitals to the Ministry research group, then anonymized and fed back to the hospitals for benchmarking purposes. Records for 282,064 patients who were discharged from 82 hospitals between July 1, 2002 and October 31, 2002 were distributed and made available for public use as of June 2008. Following the inclusion criteria of previous studies on the Hospital Standardized Mortality Ratio (HSMR) [21, 22], we excluded MDC categories with mortality rates of less than 0.5% from our analysis. The remaining data (n = 224,207) were then randomly split 80/20 into two subsets, one for model development and the other for validation tests. The development dataset included 179,156 records and the validation dataset included 45,051 records. The datasets were anonymized and prepared by the government sector for public use; thus, data use was officially approved and the protection of confidential information was ensured.
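As a concrete illustration of this data preparation, the MDC exclusion and the 80/20 split could be reproduced along the following lines. This is a minimal sketch in Python/pandas, not the procedure actually used; the file name and the column names (`mdc`, `died`) are hypothetical, since the DPC file layout is not specified here.

```python
import pandas as pd

# Hypothetical file and column names; one row per discharge record.
df = pd.read_csv("dpc_discharges_2002.csv")

# Exclude MDCs whose in-hospital mortality rate is below 0.5%.
mortality_by_mdc = df.groupby("mdc")["died"].mean()
keep = mortality_by_mdc[mortality_by_mdc >= 0.005].index
df = df[df["mdc"].isin(keep)]

# Random 80/20 split into development and validation subsets.
dev = df.sample(frac=0.8, random_state=0)
val = df.drop(dev.index)
```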

Model building

We started by constructing a prediction model with reference to the Canadian HSMR model mentioned earlier [21, 22]. The model includes age as an ordinal variable (under 60, 60–69, 70–79, 80–89, and 90 and over), gender, use of an ambulance at admission, admission status (emergency/elective), LOS, MDC, and comorbidities (model 1). We also tested another prediction model that omitted LOS (model 2). The rationale is that the model without LOS should be a "pure" prediction model, since LOS can itself be regarded as an outcome affected by patient characteristics and hospital care quality. Several diagnosis-specific models also treat the duration of hospitalization as part of the outcome and do not include it as a predictor variable [23, 24]. Based on Quan's methodology [15], the ICD-10 code of each co-existing diagnosis was converted into a score, and scores were summed for each patient to calculate a Charlson Comorbidity Index score. Summed scores were then classified into five categories: 0, 1–2, 3–6, 7–12, and 13 and over.
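For illustration, the comorbidity scoring could be implemented roughly as follows. This is a sketch only: the dictionary shows just two of the 17 condition groups in Quan's ICD-10 mapping [15] with their standard Charlson weights, and the full code lists and deduplication of condition groups are omitted.

```python
# Abbreviated, illustrative subset of Quan's ICD-10 Charlson mapping [15].
CHARLSON_WEIGHTS = {
    ("I21", "I22", "I252"): 1,          # myocardial infarction (weight 1)
    ("C77", "C78", "C79", "C80"): 6,    # metastatic solid tumour (weight 6)
    # ... remaining condition groups per reference [15]
}

def charlson_score(icd10_codes):
    """Sum Charlson weights over a patient's co-existing diagnosis codes."""
    score = 0
    for code in icd10_codes:
        for prefixes, weight in CHARLSON_WEIGHTS.items():
            if code.startswith(prefixes):   # str.startswith accepts a tuple
                score += weight
                break
    return score

def charlson_category(score):
    """Collapse the summed score into the five categories used in the models."""
    if score == 0:
        return "0"
    if score <= 2:
        return "1-2"
    if score <= 6:
        return "3-6"
    if score <= 12:
        return "7-12"
    return "13 and over"
```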

We did not include surgical treatment status as a risk parameter because the decision of whether or not to operate on a patient with a given condition varies with the clinical judgment of each hospital team. In addition, surgery is not a treatment option in certain areas of medicine.

Analytical Methods

A multivariate logistic regression analysis was performed on the development dataset to predict in-hospital mortality. Tests of model performance and fit were conducted using the test dataset. The prediction accuracy of the logistic models was assessed using the c-index [25], and the c-indices of the full models (models 1 and 2) and of partial models were compared. A c-index value of 0.5 indicates that a model is no better than chance at predicting death, while a value of 1.0 indicates perfect discrimination. Calibration was assessed by plotting observed versus predicted deaths across risk groups. All analyses were conducted with SPSS version 15.0J (SPSS Japan, Inc).
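The original analysis was run in SPSS; the sketch below shows a rough Python equivalent, continuing from the hypothetical `dev`/`val` split above and assuming the categorical predictors (age band, LOS band, MDC, Charlson category, etc.) have already been dummy-coded into numeric columns, with `died` as the 0/1 outcome.

```python
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

predictors = [c for c in dev.columns if c != "died"]

# Fit the logistic model on the development dataset.
X_dev = sm.add_constant(dev[predictors])
fit = sm.Logit(dev["died"], X_dev).fit()
print(fit.summary())                  # odds ratios are exp(coefficients)

# Discrimination on the held-out test dataset: for a binary outcome
# the c-index equals the area under the ROC curve.
X_val = sm.add_constant(val[predictors], has_constant="add")
pred = fit.predict(X_val)
print("c-index:", roc_auc_score(val["died"], pred))
```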

Results

Patient Demographics in the Models

Table 1 shows in-hospital mortality by MDC in the original full dataset. We excluded 6 of the 15 diagnostic categories because of low mortality rates (< 0.5%). The 9 remaining diagnostic categories (n = 224,207) accounted for almost 99% of in-hospital deaths among all acute hospitalization cases. We further grouped the 4 MDCs with the lowest mortality into one, resulting in 6 MDCs for the subsequent analysis.

Table 1 Discharge mortality rate in each Major Diagnostic Category (n = 282,064)

Of the 179,156 patients included in the development dataset, 53.2% were male, 35.9% had emergency status at admission, and 8.9% used an ambulance (Table 2). Nearly half (46.6%) of the patients were under 60 years of age at admission, and 9.2% were 80 years or older. The MDC for the digestive system, hepatobiliary system, and pancreas made up the largest share (22%), followed by the respiratory system (13.5%), circulatory system (13.1%), and nervous system (7.2%). The majority of patients (68.6%) had a Charlson Comorbidity Index score of 0, and only 2.5% of patients had a score higher than 6.

Table 2 Characteristics of patients in the development and test datasets (n = 224,207)

Prediction Models (development dataset; n = 179,156)

Table 3 shows the in-hospital mortality prediction model with LOS as a predictor (model 1). With a LOS under 10 days as the reference, the odds ratio of in-hospital death increased with longer LOS, reaching 4.35 (4.01–4.72) for patients with LOS ≥ 30 days. With the neurological MDC as the reference, the MDCs for respiratory, digestive, hepatology, and hematology diseases showed significantly higher odds ratios for in-hospital death, whereas the cardiology MDC showed a significantly lower odds ratio. Older age, gender, use of an ambulance at admission, and emergency admission status also showed significantly higher odds ratios. Finally, odds ratios increased steadily across Charlson Index score categories.

Table 3 Model 1: hospital mortality prediction model with length of stay (n = 179,156)

Table 4 shows the prediction model without LOS (model 2). The pattern of statistical significance of the odds ratios was identical to that of model 1, although the magnitudes were somewhat smaller for the MDCs and larger for the Charlson Index categories.

Table 4 Model 2: hospital mortality prediction model without length of stay (n = 179,156)

Model Performance (test dataset; n = 45,051)

Table 2 compares patient characteristics in the test dataset (n = 45,051 patients) to those of the development dataset. The two datasets were almost identical in the distribution of patient characteristics and case mix. In-hospital mortality rates were 2.68% and 2.76% for the development and test datasets, respectively.

Table 5 shows the c-indices for models 1 and 2 and for models using partial sets of predictors. C-index values were fairly high for both full models (0.841 and 0.869 for models 1 and 2, respectively). A partial model that included only patient characteristics had a c-index of 0.727; adding MDC increased the c-index to 0.786, and further adding the comorbidity index increased it to 0.841. The model that included more information on comorbidities thus showed a higher c-index. Figures 1 and 2 demonstrate the goodness of fit of the models, that is, how well the predicted mortality rates match the observed mortality rates among patient risk subgroups. Close agreement between predicted and observed mortality rates was seen across the risk subgroups analyzed.

Table 5 Hospital mortality prediction model performance metrics
Figure 1

Model 1 hospital mortality prediction model calibration (n = 45,051). Figure 1 shows the goodness of fit of model 1 on the test dataset.

Figure 2

Model 2 hospital mortality prediction model calibration (n = 45,051). Figure 2 shows the goodness of fit of model 2 on the test dataset.
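The calibration shown in Figures 1 and 2 amounts to comparing mean predicted and mean observed mortality within strata of predicted risk. A minimal sketch of such a comparison, continuing from the predicted probabilities `pred` above and using risk deciles as an assumed grouping:

```python
import pandas as pd
import matplotlib.pyplot as plt

calib = (
    val.assign(pred=pred)
       .assign(risk_group=lambda d: pd.qcut(d["pred"], 10, labels=False, duplicates="drop"))
       .groupby("risk_group")
       .agg(predicted=("pred", "mean"), observed=("died", "mean"))
)

ax = calib.plot.scatter(x="predicted", y="observed")
lim = calib.to_numpy().max()
ax.plot([0, lim], [0, lim], linestyle="--")   # line of perfect calibration
plt.show()
```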

Discussion

The in-hospital mortality prediction models developed in this study are fairly consistent with observed mortality. The results also suggest that inclusion of both comorbidity and other demographic/clinical patient characteristics accounts for the better performance of our models compared with a previously described model [17]. When administrative data are used in clinical outcomes research, coding algorithms are essential for defining comorbidities. Charlson comorbidity measurement tools [16] are widely used with administrative data to determine the burden of disease or case mix. Past studies suggest that the original Charlson Index based on chart review and its adaptations for administrative databases discriminate mortality similarly [15, 17]. The database used in this study assigns each patient one to six diagnostic codes. Counting multiple comorbidities markedly enhanced accuracy compared with counting comorbidity based on a single ICD-10 code. In addition to comorbidities based on ICD-10 codes, MDCs were incorporated into our models. By including MDCs, our models could better reflect the characteristics of the major patient condition among all co-existing diagnoses. This may also help to explain the improved performance of our models compared with earlier prediction models (c-index: 0.69–0.71) that incorporated only the Charlson Index in the analysis of Japanese data [17].

Recent studies in the U.S. introduced new risk prediction models that extend administrative data with laboratory test results [30, 31]. Although the inclusion of detailed clinical data may further improve prediction performance, it requires a sophisticated, standardized information system on a nationwide scale. Our prediction model exhibited a comparable level of precision using variables easily accessible in conventional administrative electronic record systems. As we demonstrated, including patient demographics, conditions at admission, and the category of major diagnosis together with a summary comorbidity score may be a useful and efficient way to improve model performance.

In the present study, we developed two models, one including and one excluding LOS. A hospital could lower its in-hospital mortality by promoting premature discharge, and adjusting for LOS would guard against this and allow a fairer comparison of hospital performance. However, the duration of hospitalization also reflects various factors other than in-hospital mortality risk, such as the quality of hospital management and socio-economic conditions that facilitate earlier discharge (e.g. availability of informal care at home). Since no major difference in accuracy was observed between the two models, we believe that model 2, which excludes LOS, is more suitable because it adjusts for the likelihood of in-hospital death due purely to patient conditions.
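In a benchmarking application, a hospital's expected number of deaths would be obtained by summing the model's predicted probabilities over its patients, and the HSMR is conventionally reported as observed deaths divided by expected deaths, multiplied by 100 [21, 22]. A hedged sketch of this aggregation, assuming a `hospital_id` column and the predicted probabilities from the sketch above:

```python
hsmr = (
    val.assign(expected=pred)
       .groupby("hospital_id")
       .agg(observed=("died", "sum"), expected=("expected", "sum"))
)
hsmr["hsmr"] = 100 * hsmr["observed"] / hsmr["expected"]
print(hsmr.sort_values("hsmr"))
```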

In contrast to the risk factor of age, gender did not have a pronounced impact on mortality in our study. Previous studies on cardiovascular surgery in Japan have also shown that the impact of gender on in-hospital mortality is negligible even in risk prediction models with detailed clinical variables [9]. The odds ratio of the circulatory system category was unexpectedly low and may require some explanation. The average risk of cardiovascular hospitalization may have been relatively low in this study because many patients are hospitalized for cardiac catheterization as a post-intervention evaluation in Japan. Thus, an alternative model that categorizes hospitalization for evaluation separately may increase performance in Japanese cases and deserves further consideration in future studies.

A number of limitations of this study are worth noting. Exclusion of the 6 low-mortality MDCs might bias the performance of our models; however, given that the c-index for model 2 on the full dataset including these MDCs (n = 282,064) was 0.854, we believe that our model can be useful for hospital mortality analysis across all types of disease. Nevertheless, the prediction model would need to be updated periodically, given that the relative importance of factors contributing to mortality may change with future innovations in diagnosis and therapy.

Conclusion

This study is one of the few Japanese studies to verify and demonstrate the accuracy of in-hospital mortality prediction models that take all diseases into account. As standardized hospital mortality rates could be used as indicators of quality of care and in setting national standards, risk adjustment of in-hospital mortality is thought to be useful in implementing hospital-based efforts aimed at improving the quality of medical treatment [26]. The risk models described in this study demonstrate a good degree of discrimination and calibration. Beyond their statistical evaluation, it is important that the models can be readily used for risk prediction by clinicians in the field. A major task for the future is to consider how to improve these models in order to make them more detailed, their analytical qualities more convincing, and their use more compelling.

References

  1. Jencks SF, Huff ED, Cuerdon T: Change in the quality of care delivered to Medicare beneficiaries, 1998–1999 to 2000–2001. JAMA. 2003, 289: 305-12. 10.1001/jama.289.3.305.

  2. McGlynn EA, Asch SM, Adams J, et al: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348: 2635-45. 10.1056/NEJMsa022615.

  3. Institute of Medicine: Crossing the quality chasm: a new health system for the 21st century. 2001, Washington, D.C.: National Academies Press

  4. Galvin R, Milstein A: Large employers' new strategies in health care. N Engl J Med. 2002, 347: 939-42. 10.1056/NEJMsb012850.

  5. Donabedian A: Explorations in Quality Assessment and Monitoring. Volume 1: The Definition of Quality and Approaches to Its Assessment. 1980, Ann Arbor, MI: Health Administration Press.

  6. Iezzoni LI: Risk adjustment for measuring health care outcomes. 2003, Chicago, IL: Health Administration Press

  7. Bradley EH, Herrin J, Elbel B, et al: Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006, 296 (1): 72-78. 10.1001/jama.296.1.72.

  8. Krumholz HM, Wang Y, Mattera JA, et al: An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006, 113 (13): 1683-1692. 10.1161/CIRCULATIONAHA.105.611186.

  9. Motomura N, Miyata H, Tsukihara H, Takamoto S: Risk Model of Thoracic Aortic Surgery in 4707 Cases from a Nationwide Single-race Population, via a Web-based Data Entry System: The First Report of 30-day and 30-day Operative Outcome Risk Models for Thoracic Aortic Surgery. Circulation. 2008, 118: S153-S159. 10.1161/CIRCULATIONAHA.107.756684.

  10. Miyata H, Motomura N, Ueda U, Matsuda H, Takamoto S: Effect of procedural volume on outcome of CABG surgery in Japan: Implication toward minimal volume standards and public reporting. Journal of Thoracic and Cardiovascular Surgery. 2008, 135: 1306-12. 10.1016/j.jtcvs.2007.10.079.

  11. Rosen AK, Loveland S, Anderson JJ, Rothendler JA, Hankin CS, Rakovski CC, Moskowitz MA, Berlowitz DR: Evaluating diagnosis-based case-mix measures: how well do they apply to the VA population?. Medical Care. 2001, 39 (7): 692-704. 10.1097/00005650-200107000-00006.

  12. Starfield B, Weiner J, Mumford L, et al: Ambulatory Care Groups: a categorization of diagnoses for research and management. Health Serv Res. 1991, 26: 53-74.

  13. Ellis RP, Ash A: Refinements to the diagnostic cost group (DCG) model. Inquiry. 1995, 32: 418-429.

  14. Fishman PA, Goodman MJ, Hornbrook MC, Meenan RT, Bachman DJ, Rosetti MCO: Risk adjustment using automated ambulatory pharmacy data: The RxRisk Model. Medical Care. 2003, 41 (1): 84-99. 10.1097/00005650-200301000-00011.

  15. Quan H, Sundararajan V, Halfon P, Fong A, Burnand B, Luthi JC, Saunders LD, Beck CA, Feasby TE, Ghali WA: Coding algorithms for defining comorbidities in ICD-9-CM and ICD-10 administrative data. Medical Care. 2005, 43: 1130-39. 10.1097/01.mlr.0000182534.19832.83.

  16. Charlson ME, Pompei P, Ales KL, et al: A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987, 40: 373-383. 10.1016/0021-9681(87)90171-8.

  17. Sundararajan V, Quan H, Halfon P, Fushimi K, Luthi JC, Burnand B, Ghali W: Cross-national comparative performance of three versions of the ICD-10 Charlson Index. Med Care. 2007, 45: 1210-1215.

  18. Matsuda S, Fushimi K, Hashimoto H, Kuwabara K, Imanaka Y, Horiguchi H, Ishikawa KB, Anan M, Ueda K: The Japanese Case-Mix Project: Diagnosis Procedure Combination (DPC). Proceedings of the 19th International Case Mix Conference PCS/E; 8–11 October 2003; Washington, DC. 2003, 121-124.

  19. Okamura S, Kobayashi R, Sakamaki T: Case-mix payment in Japanese medical care. Health Policy. 2005, 74: 282-286. 10.1016/j.healthpol.2005.01.009.

  20. Fushimi K, Hashimoto H, Imanaka Y, Kuwabara K, Horiguchi H, Ishikawa KB, Matsuda S: Functional mapping of hospitals by diagnosis-dominant case-mix analysis. BMC Health Serv Res. 2007, 7: 50.

  21. Canadian Institute for Health Information: HSMR: A New Approach for Measuring Hospital Mortality Trends in Canada. 2007, Ottawa: CIHI

  22. Heijink R, Koolman X, Pieter D, Veen A, Jarman B, Westert G: Measuring and explaining mortality in Dutch hospitals; The hospital standardized mortality rate between 2003 and 2005. BMC Health Services Research. 2008, 8: 73-10.1186/1472-6963-8-73.

  23. Anderson HV, Shaw ES, Brindis RG, McKay CR, Klein LW, Krone RJ, Ho KKL, Rumsfeld JS, Smith SC, Weintraub WS: Risk-adjusted mortality analysis of percutaneous coronary interventions by American College of Cardiology/American Heart Association guidelines recommendations. Am J Cardiol. 2007, 99: 189-196. 10.1016/j.amjcard.2006.07.083.

  24. Shroyer AL, Coombs LP, Peterson ED, et al: The Society of Thoracic Surgeons: 30-day operative mortality and morbidity risk models. Ann Thorac Surg. 2003, 75 (6): 1856-1864. 10.1016/S0003-4975(03)00179-6.

  25. Ash A, Schwartz M: Evaluating the performance of risk-adjustment methods: dichotomous variables. Risk Adjustment for Measuring Health Care Outcomes. Edited by: Iezzoni L. 1994, Ann Arbor, MI: Health Administration Press, 313-46.

  26. Jarman B, Bottle A, Aylin P, Browne M: Monitoring changes in hospital standardised mortality ratios. BMJ. 2005, 330: 329-10.1136/bmj.330.7487.329.

  27. Jarman B, Gault S, Alves B, Hider A, Dolan S, Cook A, Hurwitz B, Iezzoni L: Explaining differences in English hospital death rates using routinely collected data. BMJ. 1999, 318: 1515-20.

  28. Lakhani A, Coles J, Eayres D, Spence C, Rachet B: Creative use of existing clinical and health outcomes data to assess NHS performance in England: Part 1 – performance indicators closely linked to clinical care. BMJ. 2005, 330: 1426-1431. 10.1136/bmj.330.7505.1426.

  29. Aylin P, Bottle A, Majeed A: Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ. 2007, 334: 1044-10.1136/bmj.39168.496366.55.

  30. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P: Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Medical Care. 2008, 46: 232-39.

  31. Tabak YP, Johannes RS, Silber JH: Using automated clinical data for risk adjustment: Development and validation of six disease-specific mortality predictive models for pay-for-performance. Medical Care. 2008, 45: 789-805. 10.1097/MLR.0b013e31803d3b41.

Acknowledgements

The authors express their thanks to the following researchers in the Study Group on Diagnosis Procedure Combination, who made the DPC public data available.

Makoto Anan, Kiyohide Fushimi, Yuichi Imanaka, Koichi B. Ishikawa, Kenji Hayashida, and Kazuaki Kuwabara

Funding Sources

None

Author information

Corresponding author

Correspondence to Hiroaki Miyata.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

HM conceived of the study and designed the protocol. HM and HH1 wrote the paper. HH2 managed data collection and data cleaning. All authors have read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Miyata, H., Hashimoto, H., Horiguchi, H. et al. Performance of in-hospital mortality prediction models for acute hospitalization: Hospital Standardized Mortality Ratio in Japan. BMC Health Serv Res 8, 229 (2008). https://doi.org/10.1186/1472-6963-8-229
