- Research article
Did hospital mortality in England change from 2005 to 2010? A retrospective cohort analysis
BMC Health Services Research, volume 13, Article number: 216 (2013)
There is some evidence that hospital performance in England, measured by the Dr Foster Hospital Standardised Mortality Ratio (HSMR), has improved substantially over the last 10 years. This study explores in-hospital mortality and mortality up to 30 days post-discharge over a five and a half year period to determine whether there have been improvements in case-mix adjusted mortality, to examine whether any changes are due to changes in case-mix adjustment variables such as age, sex, method of admission and comorbidity, and to compare changes between hospital trusts.
Using Hospital Episode Statistics linked to mortality data from the Office for National Statistics, the Summary Hospital-Level Mortality Index (SHMI) was calculated for all patients who were discharged or died in general acute hospital trusts in England for the period 01/04/2005 to 30/09/2010.
During this five and a half year period the number of admissions rose by 8% but the number of deaths fell by 5%. The SHMI fell by 24%, from 112 to 85, over the period, partly due to fewer observed deaths and partly due to an increasing number of deaths predicted by the SHMI model. Excluding comorbidities from the model, the SHMI fell by 18%, from 108 to 89, over this period. The reduction was similar in emergency and elective admissions and in all other sub-groups examined. The average quarterly change in SHMI varied considerably between trusts (range: −4.4 to −0.2).
As measured by the SHMI, there has been a 24% improvement in mortality in acute general trusts in England over a period of five and a half years. Part of this improvement is an artificial effect caused by changes in the depth of coding of comorbidities, with other effects due to changes in case-mix or non-constant risk.
There is some evidence that hospital performance in England has improved substantially over the last 10 years. The Dr Foster Hospital Standardised Mortality Ratio (HSMR) for admissions to all English hospitals taken together fell by 42%, from 115 in 2002 to 67 in 2011, and there was a 21% reduction in an age, sex and diagnosis adjusted HSMR for emergency admissions between 2004/5 and 2008/9. This could be interpreted as the result of large gains in the quality of care in recent years. However, there are doubts about whether HSMRs are closely related to quality of care, or can ever be used to reliably measure quality of care. Empirical studies have also demonstrated that different methods of calculating standardised mortality measures rank the performance of hospitals differently [5, 6], raising doubts about their reliability. The influence of comorbidity on risk of death has been shown to vary across hospitals, including a greater burden of comorbidity actually being protective. Although there is evidence of the constant risk fallacy, exclusion of comorbidity has been shown to change the HSMR by more than 5 points in only 16% of trusts. Local organisation of care and disease severity remain uncaptured. Nevertheless, a fall of 30% in hospital mortality, even if it is only true in part, would point to important medical advances and/or better hospital care. It would help justify the 34% increase in NHS hospital expenditure over the same period, and might suggest that some of the many initiatives affecting practice, policy, and the organisation of the NHS had had some benefits.
Because the Dr Foster HSMR focuses only on in-hospital deaths for patients admitted with a subset of conditions, and in response to methodological concerns and transparency issues, a new measure, the Summary Hospital-Level Mortality Index (SHMI), was developed. The SHMI includes all admissions and all deaths up to 30 days post-discharge, in order to avoid potential biases in using in-hospital deaths alone. Using the SHMI, we have therefore sought to validate the Dr Foster finding, investigated whether any changes in the national SHMI are due to changes in case-mix adjustment variables such as age, sex, method of admission and comorbidity, and compared changes between hospital trusts.
We were supplied with a dataset by the NHS Information Centre for the purpose of statistical modelling of the new SHMI indicator, including the impact of case-mix adjustment variables and the variability of the measure over time. The dataset comprised all admissions to English hospitals from the Hospital Episode Statistics (HES) data warehouse for spells which ended between 01/04/2005 and 30/09/2010. Date of death data supplied by the Office for National Statistics (ONS) were linked to the hospital episode dataset, and deaths within 30 days of discharge were identified. All patient data were anonymised prior to receipt by the authors.
We followed the previously described methodology for processing the linked hospital episode data before calculating the SHMI [11, 12]. Briefly, this involved excluding maternity admissions, day case admissions, and admissions to private and community hospitals. We also excluded admissions to 72 specialist trusts. As there was no formal definition of general/specialist status, we took the definition of general trusts from lists reported by other mortality indicator providers.
Categories were created for all variables. Age was split into 5 year age bands, except for infants aged 0–1 and preschool children aged 1–4. A comorbidity score was derived by converting secondary diagnosis codes into the 19 clinical conditions identified in the Charlson comorbidity index, with contemporary weights determining the contribution of each condition to the overall score. The Index of Multiple Deprivation rank (an area level deprivation measure derived from the patient’s postcode) was reported by HES and grouped into fifths. Type of admission was grouped into emergency and elective.
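This kind of comorbidity scoring can be sketched in a few lines. In the sketch below, the ICD-10 prefixes, condition names and weights are illustrative placeholders only, not the exact Charlson mapping or the contemporary weights used in the SHMI model:

```python
# Illustrative Charlson-style comorbidity scoring from secondary diagnoses.
# The prefixes and weights below are placeholders, NOT the actual mapping
# or contemporary weights used in the SHMI model.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes": 1,
    "renal_disease": 2,
    "metastatic_cancer": 6,
}

ICD10_PREFIXES = {
    "I21": "myocardial_infarction",
    "I50": "congestive_heart_failure",
    "E11": "diabetes",
    "N18": "renal_disease",
    "C78": "metastatic_cancer",
}

def charlson_score(secondary_diagnoses):
    """Sum the weight of each distinct condition found among a
    patient's secondary ICD-10 diagnosis codes (each condition
    counts once, however many codes map to it)."""
    conditions = set()
    for code in secondary_diagnoses:
        for prefix, condition in ICD10_PREFIXES.items():
            if code.startswith(prefix):
                conditions.add(condition)
    return sum(CHARLSON_WEIGHTS[c] for c in conditions)
```

Note that a condition contributes its weight only once per admission, even if several secondary codes map to it.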
The reason for admission was identified from the ICD10 code in the first diagnosis field, and collapsed into the diagnostic groups given by the Agency for Healthcare Research and Quality. Diagnosis groups were then combined into the 138 groups used in the calculation of the SHMI. It has previously been reported that the mean c statistic over all diagnosis groups in the SHMI model was 0.830 (range 0.534 – 0.970) and that the coefficient of determination R2 showed the SHMI model accounted for 81% of the total variability.
We estimated the probability of death in hospital or within 30 days of discharge for all completed admissions for the period 01/04/2005 to 30/09/2010 by fitting logistic regression models using the SHMI covariates (age, sex, method of admission and comorbidity) within diagnosis group. We accounted for the effect of seasonal variation in hospital admissions by including an extra categorical variable for month of admission in each of the logistic regression models. We then summed these probabilities predicted by the model over all diagnosis groups and for each trust for each consecutive 3 month period to obtain the expected number of deaths per trust for each quarter. The ratio of the observed number in each quarter to the expected is equivalent to indirect standardisation . Fitting one model to the data from all five years combined means that we can make valid comparisons over time. This is because we calculate one set of case-mix weights for all time periods instead of the weights changing over time (which would be the case if separate models were fitted for each year or quarter).
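The expected-deaths calculation described above amounts to summing model-predicted probabilities over admissions and dividing observed deaths by that sum. A minimal sketch, with an illustrative logistic model standing in for the models fitted within each diagnosis group:

```python
import math
from collections import defaultdict

def predict_death_probability(coefs, intercept, features):
    """Logistic model probability 1 / (1 + exp(-(b0 + b.x))).
    The coefficients here stand in for those fitted within each
    SHMI diagnosis group; values are illustrative only."""
    z = intercept + sum(coefs[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def shmi_by_quarter(admissions):
    """Indirect standardisation: ratio of observed deaths to the
    sum of model-predicted probabilities, per (trust, quarter)."""
    observed = defaultdict(float)
    expected = defaultdict(float)
    for adm in admissions:
        key = (adm["trust"], adm["quarter"])
        observed[key] += adm["died"]      # 0 or 1 per admission
        expected[key] += adm["p_death"]   # model-predicted probability
    return {k: observed[k] / expected[k] for k in observed}
```

Because one model (one set of case-mix weights) is fitted to all five years combined, the same `p_death` function applies in every quarter, which is what makes the quarterly ratios comparable over time.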
We plotted the quarterly values of the SHMI, expected number of deaths and observed number of deaths in all hospitals against time for the five year period. Coding levels of the comorbidity variable have changed over this time period, so we examined the effect of removing comorbidity from the model so that we could be sure that any trends identified were not a result of these changes. Further analyses examined the quarterly changes in SHMI in subgroups of age, sex, admission method, index of deprivation and comorbidity. As the SHMI model adjusts for age, sex, admission method and comorbidity, we would not expect to see differences in the overall SHMI between the subgroups. However, trend is not adjusted for in the SHMI model, so we can investigate any differences between subgroups in terms of their time trend.
We estimated the linear trend in individual hospital SHMIs by ordinary least squares regression of the 22 quarterly SHMIs on time. The regression coefficients were plotted on a funnel plot with control lines calculated in a conventional manner [17, 18], Winsorising the 20% most extreme values, to examine whether there were any extremes in the rate of change.
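The trend estimation and Winsorising steps can be sketched as follows. This is a simplified illustration; the published control limits additionally involve an over-dispersion adjustment [17, 18] not reproduced here:

```python
import statistics

def ols_slope(y):
    """Ordinary least squares slope of quarterly SHMI values on
    time, with quarters indexed 0, 1, 2, ..."""
    n = len(y)
    x_mean = (n - 1) / 2
    y_mean = statistics.fmean(y)
    num = sum((xi - x_mean) * (yi - y_mean) for xi, yi in enumerate(y))
    den = sum((xi - x_mean) ** 2 for xi in range(n))
    return num / den

def winsorise(values, fraction=0.2):
    """Pull the most extreme `fraction` of values (split between the
    two tails) in to the nearest retained value, as is done before
    estimating the spread used for funnel-plot control limits."""
    k = int(len(values) * fraction / 2)
    ordered = sorted(values)
    lo, hi = ordered[k], ordered[-k - 1]
    return [min(max(v, lo), hi) for v in values]
```

Each trust contributes one slope (its average quarterly change in SHMI), and the funnel plot then shows these slopes against a measure of precision.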
Over the five and a half year period there has been an increase of 8% in the total number of admissions per year meeting our inclusion criteria, but a fall of 5% in the number of deaths in-hospital or up to 30 days post discharge (Table 1). Adjusting for changing case-mix, the SHMI fell by 24% from 112 in the first quarter of 2005/06 to 85 in the first quarter of 2010/11 (Figure 1a). This reduction occurred both as a result of a fall in the observed death rate and an increase in the expected death rate (Figure 1b). Excluding comorbidity from the model the SHMI fell by 18% from 108 in the first quarter of 2005/06 to 89 in the first quarter of 2010/11 (Figure 1a). This reduction also occurred as a result of a fall in the observed death rate and an increase in the expected death rate (Figure 1c).
The reduction in the standardised mortality rate was similar in all age groups (Figure 2), sexes (Figure 3), three groups with different levels of recorded comorbidity (Figure 4), elective and emergency admissions (Figure 5), and patients from areas of different levels of deprivation (Figure 6).
Figure 5 shows that the SHMI for elective admissions is much more variable from quarter to quarter than that of emergency admissions. This variation appears to be seasonal, with a reduction in the SHMI for elective admissions in the fourth quarter of each reporting year, that is January to March. Figure 6 shows the subgroups of fifths of the index of multiple deprivation; the trend is similar for each fifth. The expected probabilities from the SHMI model are not adjusted for deprivation as this was not found to significantly improve the discrimination between hospitals [11, 12].
The estimated quarterly reduction in the SHMI varied considerably between hospitals from a maximum reduction of −4.4 points per quarter to a minimum of −0.2 points per quarter. Plotting these on a funnel plot shows that all trusts are within the 99.9% limits with the exception of the Mid Staffordshire Trust which is outlying with a large average quarterly decrease (Figure 7).
Do the changes really indicate improving quality of care?
We have found a decline in the SHMI of 24% over the five and a half year period. We have previously suggested that effects like this should be put to a number of tests before they are accepted as indicating real changes in performance. These tests include:
Is any change in the SHMI the result of a change in the observed death rate or the expected death rate?
Is a difference in the SHMI sensitive to the methods used? For example, is it sensitive to how the standardisation is carried out or the weightings used?
Is there any corroborating evidence from related quality of care indicators?
Determining all of the individual factors that have influenced the change in SHMI would be extremely challenging. More broadly, we have looked at changes in the observed death rate and found that deaths up to 30 days post-discharge have fallen by 15%, from 4.7 to 4.0 per 100 admissions, over this period. Explanations include improved clinical care, more deaths occurring in the community without patients accessing secondary care services, and improving population health.
The number of expected deaths has increased by 15%, from 3.9 per 100 admissions in Q2 2005/06 to 4.5 per 100 admissions in Q2 2010/11. Changes in SHMI variables that drive the increase in expected deaths include a small increase in the average age of patients (50.5 to 51.6), an increase in the proportion admitted as emergencies (75% to 78%), and a large increase in the proportion of patients recorded with comorbidities (26% to 35%) (see Table 1), all of which are assigned greater risk of death in the SHMI model. The 15% fall in the observed death rate is amplified by the increasing age of patients and the increase in the proportion of patients admitted as emergencies, patient groups more likely to die than their younger, elective counterparts. Whilst the changes in age and method of admission may reflect the characteristics of the population or admission policy/thresholds, the change in comorbidities may simply reflect a change in coding practice.
Increasing population age should only increase the expected number of deaths if the age-specific risk is constant over time. Indirect standardisation models used to produce standardised mortality ratios (SMRs) such as the SHMI assume that the risk associated with a risk factor such as age is constant between places and over time. So, for example, the model assumes that the risk associated with a particular age is the same at the beginning of the five year period as at the end. Population mortality rates have improved by 10% over the same period, suggesting that, due to improving population health, the risk at a particular age is declining, and this will result in a fall in the SHMI.
An increase in the number of admissions coded as emergency over this period has been reported elsewhere as a result of a growth in admissions lasting a day or less, predominantly in people aged 25 to 60 years. A likely explanation is that some emergencies previously managed out of hospital are being admitted, leading to the growth of short length of stay admissions. It is possible therefore that the reduction in the SHMI is due to an increase in less severe cases who are more likely to survive. However, this, along with the concurrent decrease in elective admission mortality and improvements in all bands of comorbidity (Figures 4 and 5), suggests that a difference in admission case-mix is not responsible for improvements in the SHMI.
Our finding that the model without comorbidities estimated an annual change in the SHMI of −3.6, compared with −4.9 for the model with comorbidities, indicates that changes in the coding of comorbidities do not explain the majority of the reduction in the SHMI over this period. The change in comorbidity over this period may reflect a genuine increase in underlying comorbidity in admitted patients, but it more likely reflects an improvement in hospitals’ capacity to record underlying comorbidity.
It looks therefore as if part of the improvement in the SHMI is due to a reduction in the numbers and rate of death brought about by improvements in care; part is an artificial effect caused by changes in the coded comorbidities over time; and the remainder may be due to other real or artificial effects due to changes in case-mix or non-constant risk.
There is some corroborating evidence of real improvements in care from more detailed audits of outcomes in specific clinical conditions such as acute myocardial infarction and stroke, chronic obstructive pulmonary disease, head injury, and hip fracture, which have found a fall in in-hospital mortality during this period. These reductions have been ascribed to improvements in care brought about for many reasons, such as advances in medical technologies and the introduction and implementation of evidence-based guidelines. It should also be remembered that during this period NHS net expenditure in England increased by 34% to £99.8bn per annum, increased competition between hospitals was created, payment by results was introduced, and a number of programmes focusing specifically on the quality and safety of hospital care were introduced, which have resulted, for example, in a 64% reduction in C. difficile and a 78% reduction in MRSA reported infections in hospitals over this period.
A more direct comparison with a mortality measure such as the Dr Foster HSMR was not performed, as publicly available data are recalibrated annually and would mask changes in the expected death rate over time. Theoretically the SHMI should be more robust than the HSMR to changes in discharge and community care policy, as it incorporates deaths up to 30 days from discharge.
Variation between hospitals
We have also examined variation between hospitals in this trend. The results show that improvements have been widespread, but there are some hospitals where almost no improvement has been seen and others where large improvements have been recorded. One hospital has shown an exceptional improvement: the Mid Staffordshire Trust, which reduced its SHMI by about 4.4 points each quarter and was well outside the 99.9% control limit. Whilst the SHMI is described as being used by the DH to monitor hospital performance, in reality, because the weights are recalculated every quarter and the expected values change, it is actually only being used to compare hospital performance. We think that the Department of Health should monitor trends in order to identify any hospitals where the SHMI is going in the wrong direction, or which are changing their coding practice in ways that make hospital comparisons unreliable. We do not think this requires analysis over a five year period as we have done here. A sensible approach would be a rolling analysis comparing two consecutive years using funnel plots to see year on year differences between hospitals.
There has been a 24% improvement in mortality in acute general trusts in England over a period of five and a half years as measured by the SHMI. This improvement is due both to a decrease in the number of observed deaths and to an increase in the number of expected deaths. The reduction in the number of observed deaths is in part due to falling mortality rates in the general population, but may also be in part due to improvements in hospital care. The increase in the expected number of deaths is partially an artificial effect due to changes in the depth of coding of comorbidities; the remainder may be due to other changes in case-mix or non-constant risk. However, there is some evidence that hospital mortality has improved over this period.
HES: Hospital Episode Statistics
HSMR: Hospital Standardised Mortality Ratio
ICD: International Classification of Diseases
NHS: National Health Service
ONS: Office for National Statistics
SHMI: Summary Hospital-Level Mortality Index
Inside your hospital, Dr Foster Hospital Guide 2001–2011. 2011, Dr Foster Intelligence, http://drfosterintelligence.co.uk/wp-content/uploads/2011/11/Hospital_Guide_2011.pdf.
Blunt I, Bardsley M, Dixon J: Trends in emergency admissions in England 2004–2009: is greater efficiency breeding inefficiency?. 2010, The Nuffield Trust, http://www.nuffieldtrust.org.uk/publications/trends-emergency-admissions-england-2004-2009.
Pitches D, Mohammed M, Lilford R: What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res. 2007, 7 (1): 91-10.1186/1472-6963-7-91.
Lilford R, Pronovost P: Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010, 340: c2016-10.1136/bmj.c2016.
Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand S-LT: Variability in the measurement of hospital-wide mortality rates. N Engl J Med. 2010, 363 (26): 2530-2539. 10.1056/NEJMsa1006396.
Bottle A, Jarman B, Aylin P: Hospital standardized mortality ratios: sensitivity analyses on the impact of coding. Health Serv Res. 2011, 46 (6pt1): 1741-1761. 10.1111/j.1475-6773.2011.01295.x.
Mohammed MA, Deeks JJ, Girling A, Rudge G, Carmalt M, Stevens AJ, Lilford RJ: Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ. 2009, 338: b780.
Harker R: NHS funding and expenditure. 2011, House of Commons Library
Whalley L: Report from the steering group for the national review of the hospital standardised mortality ratios. 2010, NHS Information Centre for Health and Social Care, http://www.hscic.gov.uk/media/9717/Report-from-DH-of-SHMI-ratio/pdf/Report_from_DH_SMHI.pdf.
Drye EE, Normand SLT, Wang Y, Ross JS, Schreiner GC, Han L, Rapp M, Krumholz HM: Comparison of hospital risk-standardized mortality rates calculated by using in-hospital and 30-day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012, 156 (1): 19-26.
Campbell MJ, Jacques RM, Fotheringham J, Pearson T, Maheswaran R, Nicholl J: School of Health and Related Research. An evaluation of the summary hospital mortality index: final report. 2011, http://www.sheffield.ac.uk/polopoly_fs/1.51777!/file/SHMI_Final_Report.pdf.
Campbell MJ, Jacques RM, Fotheringham J, Maheswaran R, Nicholl J: Developing a summary hospital mortality index: retrospective analysis in English hospitals over five years. BMJ. 2012, 344: e1001-10.1136/bmj.e1001.
Charlson ME, Pompei P, Ales KL, MacKenzie CR: A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987, 40 (5): 373-383. 10.1016/0021-9681(87)90171-8.
Understanding HSMRs: A Toolkit on Hospital Standardised Mortality Ratios. 2012, Dr Foster Intelligence, http://drfosterhealth.co.uk/docs/HSMR_Toolkit_Version_7.pdf.
Healthcare Cost and Utilization Project. Clinical classifications software (CCS) for ICD10. http://www.hcup-us.ahrq.gov/toolssoftware/icd_10/ccs_icd_10.jsp.
Roalfe A, Holder R, Wilson S: Standardisation of rates using logistic regression: a comparison with the direct method. BMC Health Serv Res. 2008, 8 (1): 275-10.1186/1472-6963-8-275.
Spiegelhalter DJ: Funnel plots for comparing institutional performance. Stat Med. 2005, 24 (8): 1185-1202. 10.1002/sim.1970.
Spiegelhalter DJ: Handling over-dispersion of performance indicators. Qual Saf Health Care. 2005, 14 (5): 347-351. 10.1136/qshc.2005.013755.
Nicholl J: Case-mix adjustment in non-randomised observational evaluations: the constant risk fallacy. J Epidemiol Community Health. 2007, 61 (11): 1010-1013. 10.1136/jech.2007.061747.
Mortality in the United Kingdom, 2010: Office for National Statistics. 2012, http://www.ons.gov.uk/ons/rel/mortality-ageing/mortality-in-the-united-kingdom/mortality-in-the-united-kingdom--2010/index.html.
Vamos EP, Millett C, Parsons C, Aylin P, Majeed A, Bottle A: Nationwide study on trends in hospital admissions for major cardiovascular events and procedures among people with and without diabetes in England, 2004–2009. Diabetes Care. 2012, 35 (2): 265-272. 10.2337/dc11-1682.
George PM, Stone RA, Buckingham RJ, Pursey NA, Lowe D, Roberts CM: Changes in NHS organization of care and management of hospital admissions with COPD exacerbations between the national COPD audits of 2003 and 2008. QJM. 2011, 104 (10): 859-866. 10.1093/qjmed/hcr083.
Fuller G, Bouamra O, Woodford M, Jenks T, Patel H, Coats TJ, Oakley P, Mendelow AD, Pigott T, Hutchinson PJ, et al: Temporal trends in head injury outcomes from 2003 to 2009 in England and Wales. Br J Neurosurg. 2011, 25 (3): 414-421. 10.3109/02688697.2011.570882.
Wu TY, Jen MH, Bottle A, Liaw CK, Aylin P, Majeed A: Admission rates and in-hospital mortality for hip fractures in England 1998 to 2009: time trends study. J Public Health. 2011, 33 (2): 284-291. 10.1093/pubmed/fdq074.
Quarterly analyses: Mandatory MRSA, MSSA and E. coli bacteraemia and CDI in England (up to January–March 2012). 2012, London: Health Protection Agency
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/13/216/prepub
JF is funded by Kidney Research UK to develop performance indicators in renal services.
The authors declare that they have no competing interests.
All authors designed the study. RMJ and JF carried out the statistical analysis. JN wrote the first draft of the manuscript. All authors were involved in editing consecutive drafts of the manuscript, interpreted the findings, and approved the final draft.