Variations and inter-relationship in outcome from emergency admissions in England: a retrospective analysis of Hospital Episode Statistics from 2005–2010

Background: The quality of care delivered and the clinical outcomes of care are of paramount importance. Wide variations in the outcome of emergency care have been suggested, but the scale of variation, and the way in which outcomes are inter-related, are poorly defined and are critical to understanding how best to improve services. This study quantifies the scale of variation in three outcomes for a contemporary cohort of patients undergoing emergency medical and surgical admissions, and investigates how the outcomes of different diagnoses relate to each other.
Methods: A retrospective study using the English Hospital Episode Statistics 2005–2010, with one-year follow-up, for all patients with one of 20 of the commonest and highest-risk emergency medical or surgical conditions. The primary outcome was the in-hospital all-cause risk-standardised mortality rate (in-RSMR). Secondary outcomes were the 1-year all-cause risk-standardised mortality rate (1 yr-RSMR) and the 28-day all-cause emergency readmission rate (RSRR).
Results: 2,406,709 adult patients underwent emergency medical or surgical admissions in the groups of interest. Clinically and statistically significant variations in outcome were observed between providers for all three outcomes (p < 0.001). For some diagnoses, including heart failure, acute myocardial infarction, stroke and fractured neck of femur, more than 20% of hospitals lay above the upper 95% control limit and were statistical outliers. The risk-standardised outcomes within a given hospital for an individual diagnostic group were significantly associated with the aggregated outcome of the other clinical groups.
Conclusions: Hospital-level risk-standardised outcomes for emergency admissions across a range of specialties vary considerably and cross traditional speciality boundaries. This suggests that global institutional infrastructure and processes of care influence outcomes. The implications are far reaching, both in terms of investigating performance at individual hospitals and in understanding how hospitals can learn from the best performers to improve outcomes.


Background
Wide variations in clinical outcomes between hospitals have been described for a number of conditions. In addition, a number of high-profile failures of healthcare have been reported [1,2]. One common theme has been a failure to deliver safe and high quality care, with subsequent poor clinical outcomes. In response to such variations and failures, there have been suggestions that rating systems for hospitals could be developed in tandem with stringent hospital inspections. The aim of such a system would be to inform key stakeholders on the quality of care delivered by providers and to provide a method to prevent problems from developing, detect failure before harm is done and to facilitate a timely response.
The feasibility of a single rating being able to achieve all these aims has been questioned by commentators, including a report by the Nuffield Trust [3,4]. In particular, a rating system can only be useful if significant variations in outcome exist between hospitals, and can only be of value if the outcomes of different diagnoses are inter-related within individual hospitals [5]. Whether ratings should be limited to specific conditions, or whether they can be hospital-wide, requires evaluation [6]. Therefore, it is important to define whether hospitals perform at a similar level of outcome across a range of conditions, or whether individual providers encompass a range of performance levels dependent on specialty [7-9].
The objectives of this study were to determine whether the risk-standardised clinical outcomes for emergency admissions varied significantly between providers, and whether outcomes were inter-related between different diagnoses at a hospital level.

Methods
The data source for this retrospective study was the national administrative dataset, the Hospital Episode Statistics (HES), for the period 1 April 2005 to 31 March 2010. The HES detail every admission to the NHS and allow patients to be tracked between NHS hospitals and across years through the use of a unique pseudo-anonymised identifier. The dataset can be considered an inclusive record of National Health Service (NHS) hospital activity in England, as there is a requirement for every hospital to submit a "minimum dataset" to the Department of Health. Each hospital submits data from its own Patient Administration System (PAS) to the centralised HES data warehouse, which is then validated and cleaned by the NHS Information Centre according to a pre-specified list of rules. HES contains publicly available admitted patient care data from 1989 onwards, with more than 12 million new records added each year. Each HES record contains a significant amount of information about the associated admission, including patient demographics, the treating hospital, diagnostic and procedural coding, length of stay, discharge status and destination. Emergency and elective admissions can be differentiated by specific combinations of codes relating to type of admission and diagnostic/procedural code sub-groups. As a result, it is possible to identify patient cohorts with specific characteristics with a high degree of accuracy [10]. A recent report based on HES data describes the approximate number of emergency admissions in England as more than 4.5 million per annum [11].
A five-year study period was chosen to achieve a pragmatic balance between sample size and the cohort reflecting contemporary practice. The HES were linked with the Office for National Statistics (ONS) registry data to provide longer-term and out-of-hospital mortality data. Clinical coding in the HES uses the OPCS-4 procedural coding system and ICD-10 diagnostic codes [12].
Twenty patient groups were used that covered the breadth of hospital emergency admissions in adults (>17 years of age) to acute NHS (public) hospitals. The groups were chosen a priori by quantifying the commonest emergency admission diagnoses with the highest numbers of deaths across all acute NHS hospitals using a sample of HES data (2010/2011) (Additional file 1: Tables S1 and S2). ICD-10, OPCS-4 and HRG codes were used to create clinically meaningful groups. Medical conditions were defined by ICD-10 codes whilst surgical groups were defined by OPCS-4 codes.
The primary outcome measure was in-hospital all-cause risk-standardised mortality rate (in-RSMR) and secondary outcomes were one-year all-cause risk-standardised mortality rate (1yr-RSMR) and 28-day all-cause emergency readmission rate (RSRR) [13]. In-hospital mortality was defined as death occurring after admission to an index hospital and before discharge either from the index hospital or any subsequent receiving hospital in cases where the patient was transferred from the index hospital (a definition equivalent to a "super-spell" whereby a "spell" is HES-specific terminology denoting one continuous admission at a hospital) [12,14]. RSRRs were quantified using only patients discharged alive from hospital.
Established methods were used to extract and clean the data, and entries with missing key fields, such as operation dates, were excluded from analyses (representing 1.15% of the dataset) [15-17]. Each patient was included only once, using the diagnostic or procedural code in the primary field of the first occurrence for allocation to a clinical group. This prevented a single patient from being included multiple times due to complications of care or multiple surgical procedures.
The confounding effect of inter-hospital transfers on in-RSMR and RSRR was accounted for as patients can be tracked between hospitals within the HES. Previous studies have shown that linking concurrent admissions together and assigning the ultimate outcome to the index hospital (super-spells) provides the most accurate reflection of in-hospital death rates [12].
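The super-spell linkage described above can be sketched as follows. This is a minimal illustration, not the exact HES implementation: the record fields, toy dates and the one-day transfer window are all assumptions made for the example.

```python
from datetime import date

# Toy admission records (hypothetical fields): each row is one hospital spell.
spells = [
    {"patient": "A", "provider": "H1", "admit": date(2008, 3, 1), "discharge": date(2008, 3, 4), "died": False},
    {"patient": "A", "provider": "H2", "admit": date(2008, 3, 4), "discharge": date(2008, 3, 9), "died": True},
    {"patient": "B", "provider": "H1", "admit": date(2008, 5, 2), "discharge": date(2008, 5, 6), "died": False},
]

def link_super_spells(spells, max_gap_days=1):
    """Link consecutive spells for the same patient into 'super-spells',
    attributing the final outcome to the index (first) provider."""
    by_patient = {}
    for s in sorted(spells, key=lambda s: (s["patient"], s["admit"])):
        by_patient.setdefault(s["patient"], []).append(s)
    super_spells = []
    for patient, rows in by_patient.items():
        current = dict(rows[0])
        for nxt in rows[1:]:
            gap = (nxt["admit"] - current["discharge"]).days
            if gap <= max_gap_days:      # transfer: extend the super-spell
                current["discharge"] = nxt["discharge"]
                current["died"] = current["died"] or nxt["died"]
            else:                        # genuinely new admission
                super_spells.append(current)
                current = dict(nxt)
        super_spells.append(current)
    return super_spells

linked = link_super_spells(spells)
# Patient A's in-hospital death is attributed to index provider H1, not H2.
```

The key design point is that the transferred admission is collapsed into the index spell, so the receiving hospital is not penalised for a death that followed a transfer in.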

Statistical analyses
The analyses had two distinct stages: first, to quantify the extent of variations in outcome within each clinical group; and second, to determine whether the outcomes of a specific clinical group were associated with the collective outcomes of the other clinical groups within the same hospital. Statistical analysis was performed using SAS version 9.2 (SAS Institute, USA).
Risk-standardisation was performed using hierarchical logistic regression models. Patient age, sex, RCS (Royal College of Surgeons) Charlson score, social deprivation index and year of admission formed first-level predictors [18]. Co-morbidity scores for cases were derived from the HES data using published methodology [19]. The method relies on identification of pre-existing ICD-10 diagnostic codes to denote co-morbidity. The score is created by counting the number of co-morbidity categories present in any admission episode in the preceding 365 days, or in the index episode, for any one case of interest. A case scoring in any one category (regardless of the number of times) achieves a score of 1, scoring in any two different categories achieves a score of 2, and a score in three or more categories achieves a score of 3. Cases without any flagged co-morbidities receive a score of 0. The scoring system is a modification (the Royal College of Surgeons' [RCS] modification) of the original score derived by Charlson [20,21]. The Index of Multiple Deprivation (IMD) overall ranking is made by combining seven weighted domains [22]. A score of 1 indicates the most deprived and 32482 the least deprived. This numerical range is sub-divided into fifths and cases are allocated to one of the five quintiles according to their score (i.e. quintile 1 represents the most deprived cases and quintile 5 represents the least deprived cases).
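As an illustration of the scoring arithmetic, the sketch below uses a deliberately tiny, hypothetical ICD-10 category lookup; the published RCS methodology uses a much larger mapping, and the quintile cut is simplified to a direct rank calculation.

```python
import math

# Hypothetical mapping from 3-character ICD-10 prefixes to RCS Charlson
# comorbidity categories (illustrative only; the real lookup is far larger).
RCS_CATEGORIES = {
    "I50": "heart failure",
    "E11": "diabetes",
    "N18": "renal disease",
    "C18": "malignancy",
}

def rcs_charlson_score(icd10_codes):
    """Score = number of distinct comorbidity categories flagged, capped at 3;
    repeat codes within one category count only once."""
    cats = {RCS_CATEGORIES[c[:3]] for c in icd10_codes if c[:3] in RCS_CATEGORIES}
    return min(len(cats), 3)

def imd_quintile(rank, n_ranks=32482):
    """Map an IMD rank (1 = most deprived) onto deprivation quintiles 1-5."""
    return min(5, math.ceil(5 * rank / n_ranks))
```

For example, a patient coded with two heart-failure codes and one diabetes code scores 2 (the repeated heart-failure category counts once), and IMD rank 1 falls in quintile 1.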
The second level of the RSMR and RSRR models permitted hospital-level random intercepts to vary in order to identify hospital-specific random effects and account for the clustering of patients within the same hospital [23]. This allowed within- and between-hospital variation to be separated, within the limits of the data, after adjusting for patient-level characteristics. Analysis was performed at NHS trust level (i.e. potentially including more than one physical site) and the term "hospital" is used synonymously with "trust".

Variation in outcome
Variations in outcome for each metric were assessed for each clinical group. Risk estimates from individual patient data were used to calculate the expected number of deaths for each condition at each hospital. For each clinical group, the discrepancy between the observed and expected mortality in each hospital was quantified. A statistically significant divergence was reported when it exceeded the 95% confidence interval of the Poisson distribution. Visually these were represented with risk-standardised funnel plots.
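The outlier test above can be illustrated with a simple Poisson approximation to the control limits on the observed-to-expected ratio. This is a sketch of the general funnel-plot technique under that assumption; the exact limit calculation used in the study may differ.

```python
import math

def funnel_limits(expected, z=1.96):
    """Approximate 95% control limits for the observed/expected (O/E) mortality
    ratio, assuming observed deaths are Poisson with mean `expected`.
    SD of O/E under the null is sqrt(expected) / expected."""
    half_width = z * math.sqrt(expected) / expected
    return 1 - half_width, 1 + half_width

def is_outlier(observed, expected, z=1.96):
    """Flag a hospital whose O/E ratio falls outside the control limits."""
    lower, upper = funnel_limits(expected, z)
    ratio = observed / expected
    return ratio > upper or ratio < lower

# A hospital expecting 50 deaths on case mix but observing 70 has
# O/E = 1.4, above the upper limit 1 + 1.96/sqrt(50) ≈ 1.28.
```

The funnel shape arises because the limits narrow as the expected count grows: small units need a much larger relative excess before they are flagged.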

Outcome inter-relationship analysis
The risk-standardised outcome for each of the twenty clinical groups was compared against the risk-standardised aggregate outcome for the other clinical groups excluding the procedure/condition of interest (e.g. acute myocardial infarction vs. an aggregate outcome of the other nineteen groups [i.e. excluding acute myocardial infarction]) using published methodology [8,14].
Hospitals were placed into quintiles containing equal numbers of patients based on their aggregate event rates. This was achieved by ranking hospitals according to their aggregate "other" event rate into five groups (quintiles), such that hospitals with the highest observed event rate relative to expectation (calculated by dividing the difference between the observed and expected numbers by a measure of random variability) were in the highest-numbered group (quintile 5) and those with the lowest observed event rate relative to expected were in quintile 1. Hospitals with very high event rates relative to expected would not necessarily be allocated to the highest quintile if the case load was small. Aggregation was performed using the provider's mean Studentised residual aggregated across the remaining patient groups, such that conditions with high event rates did not dominate the aggregation. Hospitals were assigned such that there were roughly equal numbers of patients in each quintile. Having assigned quintiles of aggregate "other" event rates, the combined event rate for the procedure/condition of interest was calculated for each quintile (by multiplying the observed-to-expected ratio by the crude total death rate for the procedure/condition of interest). Results were expressed as a bar plot. The relationship between the outcome for each clinical group and the amalgamated outcome quintiles was tested using logistic regression.
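The quintile-assignment step can be sketched with toy figures. The hospital data, the single collapsed residual per hospital, and the cut-point rule are illustrative simplifications of the published aggregation, which uses the mean Studentised residual across the 19 "other" groups.

```python
import math

# Hypothetical per-hospital aggregates for the "other" groups: observed and
# expected deaths plus patient volume (toy numbers for illustration).
hospitals = {
    "H1": {"observed": 40, "expected": 50, "patients": 900},
    "H2": {"observed": 55, "expected": 50, "patients": 1100},
    "H3": {"observed": 70, "expected": 50, "patients": 1000},
    "H4": {"observed": 50, "expected": 50, "patients": 950},
    "H5": {"observed": 45, "expected": 50, "patients": 1050},
}

def assign_quintiles(hospitals, n_groups=5):
    """Rank hospitals by standardised residual (O - E) / sqrt(E), then cut the
    ranking into groups holding roughly equal numbers of patients."""
    ranked = sorted(
        hospitals.items(),
        key=lambda kv: (kv[1]["observed"] - kv[1]["expected"]) / math.sqrt(kv[1]["expected"]),
    )
    total = sum(h["patients"] for h in hospitals.values())
    quintile, running, out = 1, 0, {}
    for name, h in ranked:
        out[name] = quintile
        running += h["patients"]
        # Advance once this quintile's share of patients has been filled.
        while quintile < n_groups and running >= quintile * total / n_groups:
            quintile += 1
    return out

quintiles = assign_quintiles(hospitals)
```

Note that dividing the excess by sqrt(E) rather than using the raw event rate is what stops small hospitals with noisy high rates from dominating the top quintile.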
Reporting of the study complied with the STROBE guidelines [24].

Ethics committee approval
The Wandsworth research ethics committee (WanREC) confirmed that no ethical approval was required for this study.

Results
In total, 2,406,709 admissions were included across the 20 emergency groups in the five-year period. Summary demographic and outcome data for each group are provided (Table 1).

Variations in outcomes
Clinically important, and statistically significant, variations in outcome were seen across the range of medical and surgical conditions for in-RSMR, 1 yr-RSMR and RSRR (Table 2).

In-hospital all-cause risk-standardised mortality rate (in-RSMR)
For in-RSMR, the clinical groups with the greatest proportion of hospitals lying outside statistical control limits were medical diagnoses, including acute myocardial infarction, heart failure, stroke, pneumonia and sepsis. For each of these, more than 20% of hospitals lay above the 95% confidence limit. This was in addition to wide variations in the actual in-RSMR for each diagnosis, with many diagnoses demonstrating a more than twofold difference in mortality rates between hospitals. An illustrative funnel plot for fractured neck of femur is given in Figure 1.
Within surgical diagnoses, significant variations in mortality were seen for fractured neck of femur, emergency colorectal laparotomy, ruptured abdominal aortic aneurysm repair, and emergency urological interventions. Of particular note, for fractured neck of femur, 19.4% of hospitals lay above the upper 95% confidence limit for in-RSMR.
One-year all-cause risk-standardised mortality rate (1 yr-RSMR)
A number of medical conditions displayed wide variations in 1 yr-RSMR. For acute myocardial infarction, heart failure, stroke, pneumonia and urinary tract infection, more than 15% of hospitals lay above the upper 95% confidence limit. An illustrative funnel plot for acute myocardial infarction is given in Figure 2.
Mortality variations after emergency surgery were less pronounced for 1 yr-RSMR than for in-RSMR. Significant variations were still observed, most notably, for fractured neck of femur where 14.4% of hospitals lay above the upper 95% confidence limit for 1 yr-RSMR.

28-day all-cause risk-standardised emergency readmission rate (RSRR)
The widest variations in RSRR were for appendicectomy, fractured neck of femur, urinary tract infections, pneumonia, stroke and acute myocardial infarction, with over 10% of hospitals lying above the upper 95% confidence limit. An illustrative funnel plot for pneumonia is given in Figure 3.

Within provider inter-relationship in outcome between clinical groups
First, significant differences were observed between aggregated outcome quintiles when considering all 20 groups together for in-RSMR, 1 yr-RSMR and RSRR (p < 0.0001 for each). This confirmed hospital-wide variations in the overall outcomes delivered by individual hospitals for acute medical and surgical admissions. Second, the outcomes for specific patient groups within a hospital were strongly associated with the aggregate outcome of the other clinical groups within the same hospital (Additional file 1: Tables S3-S5 and Figures S1-S3).

Figure 1 Example of a funnel plot exploring the variability in all-cause in-hospital mortality rate for fractured neck of femur (NOF) between providers (excludes trusts with <10 expected events). 37.5% of hospitals lay outside the 95% confidence intervals (green lines) and 18.1% of hospitals lay outside the 99.8% confidence intervals (red lines); further analysis confirmed that the number of hospitals outside each set of confidence intervals was statistically significant (both p < 0.0001).

Figure 2 Example of a funnel plot exploring the variability in all-cause 1-year mortality rate for acute myocardial infarction (AMI) between providers (excludes trusts with <10 expected events). 27.7% of hospitals lay outside the 95% confidence intervals (green lines) and 11.4% of hospitals lay outside the 99.8% confidence intervals (red lines); further analysis confirmed that the number of hospitals outside each set of confidence intervals was statistically significant (both p < 0.0001).

For those hospitals with the lowest aggregate in-RSMR (quintile 1), the in-RSMR in most cases was also lowest for each clinical group being tested. As a specific example, for acute heart failure, hospitals in aggregate quintile 1 had an in-RSMR of 15.0% whereas hospitals in aggregate quintile 5 had an in-RSMR of 19.8% (Figure 4). The mortality rate increased sequentially from quintile 1 to 5, demonstrating a significant relationship between the aggregate in-RSMR and the heart failure-specific outcome (p < 0.001). This result was maintained for 1 yr-RSMR, with 40.1% mortality in quintile 1 and 44.6% in quintile 5 and a progressive increase between quintiles (p < 0.001; Figure 5).
This was also seen in surgical procedures, such as fractured neck of femur, for which quintile 1 had an in-RSMR of 7.85% and quintile 5 had an in-RSMR of 11.3%, a 43.9% relative increase in risk across quintiles. The aggregate outcome was strongly associated with the fractured neck of femur-specific outcome (p < 0.001). At one year, the 1 yr-RSMR was 27.0% for quintile 1, increasing across quintiles to 31.0% for quintile 5 (p < 0.001).
Diagnoses such as sepsis showed similar results, with the best outcomes in quintile 1 (in-RSMR 24.3%) rising to 32.4% in quintile 5. This 33.3% relative increase in risk was incremental across quintiles, and the aggregate outcome demonstrated a strong association with the sepsis-specific outcome (p < 0.001). Similarly, the 1 yr-RSMR for sepsis was 41.1% for quintile 1, rising to 48.7% for quintile 5 (p < 0.001).
When considering RSRR, the aggregated readmission rate was associated with the diagnosis-specific readmission rates in many cases. For example, the RSRR after fractured neck of femur was 9.80% in quintile 1, rising to 13.0% for quintile 5 (Figure 6; p < 0.001).

Discussion
This study analysed nearly 2.5 million emergency admissions to acute English NHS Trusts over a contemporary five-year period, with one-year follow-up. The risk-standardised analyses covered the commonest acute medical and acute surgical diagnoses and procedures using granular and clinically meaningful subgroups.
The key finding of this study was the demonstration of significant variations between hospitals in the in-hospital mortality rates, one-year mortality rates and 28-day emergency readmission rates that followed non-elective medical or surgical admissions. Furthermore, highly relevant within-provider inter-relationships were seen, such that the outcome of a particular condition in one hospital was strongly associated with the aggregated outcome of the other conditions in that hospital.

Figure 3 Example of a funnel plot exploring the variability in all-cause 28-day emergency readmission rate for pneumonia (LRTI) between providers (excludes trusts with <10 expected events). 22.3% of hospitals lay outside the 95% confidence intervals (green lines) and 8.9% of hospitals lay outside the 99.8% confidence intervals (red lines); further analysis confirmed that the number of hospitals outside each set of confidence intervals was statistically significant (both p < 0.0001).
These data are suggestive of the presence of hospital-level factors that determine the outcomes of a variety of disparate emergency conditions [25,26]. In some cases, this translates into the outcomes from the same diagnosis being significantly better in one hospital than in another. Importantly, in addition to this inter-hospital variation, the outcomes of apparently disparate diagnoses within individual providers are inter-related, such that systemic structural and process factors are implicated as underlying clinical outcomes. These factors remain incompletely defined, but appear to contribute to a system-wide phenomenon that persisted in analyses of 28-day emergency readmission rates and one-year mortality. Whilst it is likely that some of these influences are definable structure or process factors, they might also represent more abstract concepts such as institutional attitudes that stimulate excellence [27,28].

Figure 4 Example of the inter-relationship analysis for heart failure (all-cause in-hospital mortality rate); quintile of "aggregate" in-hospital risk-adjusted mortality (1 = lowest, 5 = highest).

The scale of variation between hospital outcomes was concerning, and often exceeded both statistical and clinical boundaries of acceptability. At a time when the quality of care delivered is being closely examined, such variations require further investigation. Modifiable technical, organisational or hospital-related factors play an important role in patient care, and merit further study in order to optimise service delivery and to improve care. Hospital-level factors which have thus far been demonstrated to influence outcomes for certain defined clinical groups include the availability of high-dependency beds with adequate medical staffing levels, co-localisation of ancillary specialties, nurse-to-patient ratios, teaching status and hospital size [29-33]. Process factors are more heterogeneous and consequently more difficult to identify, but have nonetheless also been shown to influence outcomes [34-36]. This supposition is further supported by the findings of others that the association between hospital procedural volume and post-operative mortality lacks specificity [9]. Good governance practices, such as complete and accurate data collection and submission, internal audit, transparent publication of results and benchmarking against defined quality standards, perpetuate good performance.
Gauging improvement against national Quality Improvement Frameworks requires objective appraisal of outcomes and must be accompanied by a willingness to change sub-standard practices and to embrace service remodelling if necessary. Robust local monitoring and mandatory submission of data are needed to ensure an early reaction to divergent performance, and will be of interest to commissioners who may use evidence of such practice to shape service delivery [37]. Transparent reporting of results, with clear elaboration of the statistical methodology used to risk-adjust data and identify outliers, and peer-review of outcomes in an environment seeking to improve quality of care rather than to ostracise individuals or units, is essential [14]. Interestingly, the current prevailing opinion is that supposed patient empowerment through reporting of clinical outcomes in their current form is unlikely to drive service improvement through market forces, as patients tend to rely more on subjective information or patient experience measures (PEMs) such as hospital cleanliness [38].
On the basis of these results, the development of an outcomes rating system could be cautiously supported, as this work has shown that there is a need for improved outcomes and reduced variations in outcome in emergency care [39-41]. However, in the first instance, any composite rating system must be based on clinically meaningful subgroups, with definable endpoints. A single pan-provider rating is unlikely to be the correct model [42-44]. These results suggest that acute conditions require detailed appraisal at the speciality level at least. Furthermore, a single rating system that purports to cover both hospitals' clinical and non-clinical performance will likely be misleading [3].

Figure 6 Example of the inter-relationship analysis for fractured neck of femur (all-cause 28-day emergency readmission); quintile of "aggregate" 28-day risk-adjusted emergency readmission (1 = lowest, 5 = highest).

Strengths and limitations
Clinical outcomes in this study were limited to validated hard endpoints. As a contemporary five-year cohort was used, concerns about the quality of coding in administrative data are mitigated: systematic reviews have found acceptable coding accuracy rates within the HES data, with data quality improving in more recent years [45,46]. Uniquely, HES data allow for linkage to out-of-hospital death, a powerful feature which adds robustness to the findings through the use of 1-year mortality rates. In addition, recognised and published techniques were used to risk-standardise the data, and to determine the presence and strength of inter-relationships [8,17].
Another strength of the study lies in case selection that focussed on a group of commonly encountered medical and surgical pathologies and ensured a plausible link between mortality and quality by including patients with conditions amenable to salvage [47]. Taking colorectal cancer as an example, as the COLOLAP group included only patients who underwent surgery, the majority of patients with disseminated disease requiring only palliation would have been excluded. Similarly, the AAA group included only patients who underwent surgery and thus would have excluded moribund patients judged too unfit for surgery. This supports findings that restricting the calculation of standardised mortality ratios to certain conditions reduces over-dispersion of the data and may yield a more useful comparative statistic [48]. This clinically meaningful case selection addresses the criticism of risk-standardised outcomes that there is a poor correlation between the quality of care provided and the probability of death [42,49]. Until complete and accurate disease-staged outcome data are available on the same scale as current administrative data, risk-standardised outcomes remain the best measure of clinical outcomes for national studies. Whilst data from clinical databases and registries may seem more attractive than HES data, their utility remains hampered by the lack of mandatory submission from all providers [39,50].
Finally, the use of a super-spell definition of in-hospital mortality, and the inclusion of 1-year mortality and 28-day emergency readmissions as secondary outcome measures, demonstrated that the findings were consistent and mitigated criticisms of in-hospital mortality as an outcome measure, namely that it can be confounded by institutional, social and financial factors favouring rapid discharge [51,52]. Despite these criticisms, studies indicate that the value of in-hospital mortality for internal benchmarking is comparable to that of other mortality outcomes, and others have demonstrated that different mortality metrics are themselves highly correlated at hospital level [7,53]. With particular regard to health outcomes research using HES data, in-hospital mortality remains the most commonly used metric [12].
Limitations of this study include the possibility of inter-hospital coding variability inherent to the use of retrospective administrative data. It is also acknowledged that there remains no universal consensus on risk-adjustment methodology, with the consequence that different models can yield conflicting results [43,47]. Whilst it is not possible to state definitively that the hospital-to-hospital variations observed are not at least partially a result of inadequate adjustment for case mix rather than genuine differences in quality of care, it should be noted that, with regard to HES data, the covariates used for risk adjustment have been shown to produce regression models of comparable discriminatory power to those derived from clinical databases for colorectal and vascular surgery [7,18,54,55]. Additionally, geographic variations in primary care and social care facilities are not included in the model, and as such there may be unmeasured confounding given the effect that these factors can have on mortality and readmission rates. Interestingly, however, none of the standardised funnel plots showed evidence of over-dispersion, suggesting that the influence of uncontrolled factors in the risk-adjustment process was not significant [14,56].
The RCS Charlson score has been specifically modified and validated against the HES data [19]. Nonetheless, criticism has been directed towards the use of the Charlson co-morbidity index for case mix adjustment as the index is itself a function of coding depth and accuracy and consequently displays non-constant risk relationships amongst hospitals [43]. In the present study, a post-hoc sensitivity analysis performed by excluding the RCS Charlson score as a regression covariate did not alter the present findings (data not shown). This finding is consistent with research suggesting that exclusion of Charlson scores when calculating RSMRs has only a modest effect and that co-morbidity recording in HES data is not associated with widespread bias [48]. The use of the RCS version of the Charlson score in the analyses acknowledges the finding that modifications of the original Charlson co-morbidity index (i.e. that originally described for US administrative data) to take account of local coding practices can improve the discriminatory power of logistic regression models [57].
The use of mortality as a metric for assessing the quality of care has been criticised although it remains a commonly used measure in health outcomes research and is particularly relevant in the context of emergency admissions [12,14,42].
It is acknowledged that the study does not address which critical structural and process measures account for the observed inter-relationships and that the results are not necessarily generalisable to paediatric populations, elective care admissions or to healthcare systems outside the United Kingdom. Determination of the underlying structures and processes of care remains the focus of ongoing research.

Conclusions
For emergency medical and surgical admissions in England, wide variations in outcome exist between providers. In addition, strong associations in outcome were found between disparate clinical groups within individual providers, suggesting the presence of underlying global structure and process factors underpinning clinical outcomes. These results have implications for the way in which care is delivered and provide potential targets for global quality improvement.

Additional file
Additional file 1: Table S1. Clinical group descriptors by diagnostic and procedural codes. Table S2. Highest numbers of deaths by OPCS-4, ICD-10 and HRG codes providing the basis for the selection of the 20 emergency groups. Table S3. Results from the inter-relationship quintile analysis for in-hospital all-cause risk-stratified mortality. Table S4. Results from the inter-relationship quintile analysis for 1-year all-cause risk-stratified mortality. Table S5. Results from the inter-relationship quintile analysis for all-cause risk-stratified 28-day emergency readmissions. Figure S1. Figures from the inter-relationship quintile analysis for in-hospital all-cause risk-stratified mortality. Figure S2. Figures from the inter-relationship quintile analysis for 1-year all-cause risk-stratified mortality. Figure S3. Figures from the inter-relationship quintile analysis for all-cause risk-stratified 28-day emergency readmissions.