
Identifying patients at risk of nursing home admission: The Leeds Elderly Assessment Dependency Screening tool (LEADS)

Abstract

Background

Discharge from hospital to a nursing home represents a major event in the life of an older person and should only follow a comprehensive functional and medical assessment. A previous study identified three dependency scales able to discriminate across outcomes for older people admitted to an acute setting. We wished to determine whether a single dependency scale could be derived from these three scales and, in addition, whether this new scale, combined with other predictors, could be used as a comprehensive tool to identify patients at risk of nursing home admission.

Methods

Items from the three scales were combined and analysed using Rasch analysis. Sensitivity and specificity analysis and ROC curves were used to identify the most appropriate cut score. Binary logistic regression using this cut-off, together with other predictive variables, was used to create a predictive algorithm score. Sensitivity, specificity and likelihood ratios of the algorithm scores were used to identify the best predictive score for risk of nursing home placement.

Results

A 17-item scale (LEADS) was derived which, together with four other indicators, had a sensitivity of 88% and a specificity of 85% for identifying patients at risk of nursing home placement within 2 weeks of admission.

Conclusion

A short combined 17-item scale of dependency, plus other predictive variables, can assess the risk of nursing home placement for older people in an acute care setting within 2 weeks of admission. This gives an opportunity for either early discharge planning or therapeutic intervention to offset the risk of placement.

Background

The National Service Framework for older people within the UK highlighted the need for a single assessment process to determine the most appropriate setting for ongoing care [1]. To date, professionals are often faced with a large assortment of scales to choose from, and identifying measures to aid them in this decision was seen as a priority. As the setting of care is largely determined by the extent of dependency and, for example, nursing needs, measures of dependency will clearly be important in this process. A previous study examined the use of 7 outcome scales and other predictive factors (e.g. presence of pressure sores) in order to identify which scales were predictive of outcome when the patient had recently entered an acute hospital setting [2]. Of the 7 scales examined, only four, the Modified Barthel Index (MBI), the Abbreviated Mental Test Score (AMTS), the Northwick Park Dependency Scale (NPDS), and the Caregiver Strain Index (CSI), were found to discriminate across outcomes as defined by placement, for example to a nursing home [3–7].

Mindful of the fact that using several different scales can be time-consuming for staff and stressful for the patient, we wished to determine whether a screening tool identifying dependency consistent with a nursing home placement could be created from three of these health status scales (MBI, AMTS, NPDS). The CSI was not included because not all patients have a carer, and the three other scales were, in theory, measuring an underlying construct of dependency. If some of the original items could be combined with other key predictive variables into a new scale and an algorithm to act as a screen for such risk, this might provide an economical way of assessing the likely need for a nursing home admission.

Methods

Objective

To assess whether a screening tool (the Leeds Elderly Assessment Dependency Screening tool, LEADS) could be developed from three previously identified scales that discriminate for nursing home placement [2], and to test whether this scale, along with other key predictive variables identified in the previous study, would be sensitive and specific in predicting the need for a nursing home placement within two weeks of admission to acute Care of the Elderly wards.

Participants

Patients were recruited on admission to the Care of the Elderly wards in a multi-site acute hospital trust. A random sample of every fourth patient admitted to three such wards, together with all patients requiring a comprehensive assessment, was included in the study. Full details of recruitment and patient characteristics are given elsewhere [2].

Outcome

Outcome was defined in terms of discharge destination, i.e. whether the person was discharged to home, home with family/carer, sheltered housing, residential care or nursing home care (in the initial study). In this paper we are primarily concerned with differentiating between nursing home placement and the 'other' placements, as nursing home placement has considerable impact on patients, their families, and on service provision.

Statistical Methods

A three-stage approach was used to develop the screening tool (Figure 1).

Figure 1. Statistical Methodology Analysis Pathway.

Stage 1: Rasch analysis, developing the LEADS

The three scales were combined and the development of a single shorter scale was explored using Rasch analysis [8]. The Rasch model is the current standard for the development of unidimensional scales (e.g. of impairment or dependency) delivering metric quality outcomes in health care [9]. Briefly, data collected from scales completed by clinical staff, which are intended to be summated into an overall score, are tested against the expectations of the Rasch measurement model. The model defines how responses to items should behave if measurement at the metric level is to be achieved. This was considered the most appropriate approach for identifying items that could measure the underlying latent trait (level of disability) in a short assessment tool, as it is the only measurement model delivering a metric transformation of ordinal scales [10, 11]. A previous study constructed short-form scales using this method without any loss of validity, and found it to be a better method for item reduction than more classical approaches such as principal components analysis [12]. Wolfe discussed different mechanisms for combining scales [11]; the method chosen for the current analysis was common person equating, in which the same patient is assessed at the same time and assigned values (by clinicians) on each item of the three scales. In looking for potential items to discard, items identified as redundant by the Rasch analysis (those with high negative fit residuals, equating to high item-total correlations in classical analysis) were discarded, as were items that failed to follow the expected probabilistic relationship of a valid scale (misfitting items). Finally, items that showed bias for external factors such as age and gender (Differential Item Functioning) were also removed [13].
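To make the item-screening step concrete, the sketch below (Python, with simulated data) shows the dichotomous form of the Rasch model and the standardised residuals from which fit statistics are derived; items with large misfit, or with strongly negative fit residuals suggesting redundancy, are candidates for removal. This is a simplification for illustration only: the actual analysis used RUMM2020 with the original rating-scale items, and all names and data below are invented.

```python
import numpy as np

def rasch_prob(theta, beta):
    """P(X = 1) under the dichotomous Rasch model for person location theta
    and item location beta (both in logits)."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

def standardised_residuals(responses, theta, beta):
    """(observed - expected) / sqrt(variance) for every person-item pair."""
    expected = rasch_prob(theta[:, None], beta[None, :])
    variance = expected * (1.0 - expected)
    return (responses - expected) / np.sqrt(variance)

# Illustrative data: 100 simulated persons responding to 10 items.
rng = np.random.default_rng(0)
theta = rng.normal(size=100)                      # person locations (logits)
beta = np.linspace(-2.0, 2.0, 10)                 # item locations (logits)
responses = (rng.random((100, 10)) < rasch_prob(theta[:, None], beta[None, :])).astype(float)

z = standardised_residuals(responses, theta, beta)
item_fit = (z ** 2).mean(axis=0)                  # crude per-item fit index, about 1 when data fit
print(np.round(item_fit, 2))
# Items well above 1 misfit the expected probabilistic pattern; items well
# below 1 are over-predictable and behave as redundant items.
```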

The most appropriate cut-off point was determined using sensitivity, specificity, likelihood ratios and ROC curves [14]. Deeks and Altman suggest that likelihood ratios have more powerful properties, making them more appropriate in clinical use than sensitivity and specificity alone [15]. The positive likelihood ratio is the ratio of the probability of a positive screen (at a defined cut-off score) in those who will need nursing home placement to the probability of a positive screen in those who will not. For comparison, ROC curves are also presented [16].
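A minimal sketch of how candidate cut scores can be compared on these criteria is given below (Python; the names and data are illustrative, not the study data). Lower LEADS scores indicate greater dependency, so scores at or below a candidate cut are treated as screen-positive.

```python
import numpy as np

def screen_performance(scores, placed_in_nursing_home, cut):
    """Sensitivity, specificity and positive likelihood ratio when a score
    at or below `cut` flags a patient as at risk of nursing home placement."""
    flagged = scores <= cut
    tp = np.sum(flagged & placed_in_nursing_home)
    fn = np.sum(~flagged & placed_in_nursing_home)
    tn = np.sum(~flagged & ~placed_in_nursing_home)
    fp = np.sum(flagged & ~placed_in_nursing_home)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1.0 - specificity) if specificity < 1.0 else np.inf
    return sensitivity, specificity, lr_positive

# Illustrative use: sweep candidate cut points and inspect the trade-offs.
scores = np.array([12, 14, 15, 17, 18, 20, 22, 25, 28, 33])
placed = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
for cut in range(14, 26, 2):
    print(cut, screen_performance(scores, placed, cut))
```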

Stage 2: Binary Logistic Regression

Following the construction of the LEADS, a binary logistic regression analysis was used to combine the screening cut-off score derived from the LEADS with the other variables [17]. As the numbers going into a nursing home were expected to be relatively small, this was expected to cause some problems in interpretation. King and Zeng identified the difficulties of rare events in logistic regression and advocate using all the rare-event cases together with a random sample of the remainder [18]. Consequently this strategy was adopted here, with repeated random samples (in the event, of 50 cases each) taken from the remainder (the non-nursing home cases), giving a one-to-two ratio of nursing home to non-nursing home cases. The Exp(B) values from the binary logistic regression for the indicator variables, together with the LEADS cut-off score, were used in an algorithm to create an overall algorithm score for the risk of nursing home placement [19] (see Appendix 1).
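The sketch below illustrates this sampling-and-refitting strategy. The column names, and the use of Python with statsmodels, are our assumptions for illustration (the original analysis was run in SPSS): all nursing home cases are retained, a fresh 1:2 random sample of 'other' cases is drawn for each fit, and the exponentiated coefficients (odds ratios) are compared across repeats.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["leads_below_cut", "respite_on_admission",
              "communication_difficulties", "family_patient_wish",
              "pressure_sore_grade1_plus"]          # hypothetical column names

def fit_rare_event_sample(df, seed):
    """Fit one unpenalised logistic model on all nursing home cases plus a
    1:2 random sample of the remaining cases; return Exp(B) for each predictor."""
    cases = df[df["nursing_home"] == 1]
    controls = df[df["nursing_home"] == 0].sample(n=2 * len(cases), random_state=seed)
    sample = pd.concat([cases, controls])
    X = sm.add_constant(sample[PREDICTORS])
    fit = sm.Logit(sample["nursing_home"], X).fit(disp=0)
    return np.exp(fit.params)                       # odds ratios, including the constant

def repeated_odds_ratios(df, n_repeats=5):
    """Exp(B) values over several random samples, to check they are stable."""
    return pd.DataFrame([fit_rare_event_sample(df, seed) for seed in range(n_repeats)])
```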

Stage 3: Sensitivity, specificity and likelihood ratios

The predictive score for nursing home placement identified by the algorithm was then examined for sensitivity and specificity to identify the best predictive cut-off score for the total algorithm. The aim was to maximise sensitivity and minimise false positives; again, for comparison, ROC curves are presented [20].

Ethics and Consent

Ethical committee approval was obtained from the Leeds Teaching Hospitals NHS Trust. Patients were asked to sign a written consent form. If they were unable to give consent due to cognitive, visual or communication problems, a relative or carer was asked to consent on their behalf.

Software

Statistical analysis was undertaken with SPSS version 11.5 and RUMM 2020 [21, 22].

Results

Five hundred and forty-nine patients were recruited into the full study, of whom 258 were assessed on each of the three scales analysed as potential contributors to the screening tool and were discharged to various destinations [2]. The mean age of these 258 patients was 83.8 years (SD 5.5) and their mean length of stay was 31.2 days (SD 31.6). Seven out of ten (70.2%) were female.

Stage 1: Reducing the item set and producing a single scale using Rasch analysis

Items from the 3 scales were merged and examined with the Rasch model. Items that showed misfit or interdependency were discarded, as were those showing bias for age and gender. Reducing the item set in this way gave a 17-item scale with a unidimensional construct of dependency that incorporated both cognitive and functional ability items. The new scale, called the Leeds Elderly Assessment Dependency Screening tool (LEADS), included 7 items from the MBI, 3 from the AMTS and 7 from the NPDS (Table 1). The overall fit to the Rasch model of this common 17-item scale was good, with a mean item fit residual of -0.414 (SD 1.013), a mean person fit residual of -0.241 (SD 0.559) and a non-significant item-trait interaction (p = 0.004 against a Bonferroni-adjusted significance level of 0.0006) [23].

Table 1 Items and their location in the LEADS scale

Enteral feeding (NPDS 8.3) was the item with the highest negative location (-5.799 logits), meaning that the majority of people did not require enteral feeding. In contrast, the stairs item (MBI 3) had the highest positive location (+5.031 logits), suggesting that the majority of people found stairs difficult and that independence in this activity was hard for this group to achieve (Table 1).

The distribution of people and items for the 17-item scale was good, as shown by the person separation index of 0.944, indicating that the scale is able to discriminate between several different groups of patients [24]. A principal components analysis of the residuals, with a non-significant Bartlett's test, showed that no patterns remained in the residuals, supporting the unidimensionality of the 17-item scale. The distribution of people and item thresholds can be seen in Figure 2. Each person is located on the logit metric scale, and thus the raw score from the LEADS can be transformed into a linear metric number [25]. The metric logit location scores from the LEADS (ranging from approximately -6 to +7 logits) were converted to the range 0–39 (i.e. the same as the original raw score range from the contributing items) to facilitate ease of interpretation.
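Assuming the simple linear rescaling implied by this description (our reading; in practice the conversion table produced by the Rasch software would be used), the transformation from the logit metric to the 0–39 reporting range looks like this:

```python
def logit_to_leads(person_logit, lo=-6.0, hi=7.0, max_score=39):
    """Rescale a Rasch person location (in logits) onto the 0-39 LEADS range.

    lo and hi are the approximate extremes of the logit metric reported for
    this scale; values outside the range are clipped before rescaling.
    """
    clipped = min(max(person_logit, lo), hi)
    return round((clipped - lo) / (hi - lo) * max_score)

# Example: a person located at -1.0 logits maps to 15 on the 0-39 range.
print(logit_to_leads(-1.0))
```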

Figure 2. Person item threshold map showing distribution of people and items on the LEADS.

The mean score for those discharged to a nursing home was 15.07 (SD 4.33), while that for those discharged home without a carer was 23.69 (SD 8.17) (Table 2). There was a significant difference in LEADS score by discharge destination (Kruskal-Wallis, p < 0.001; the data were bimodal), as shown in Figure 3. As scores for destinations other than nursing home were similar (and had overlapping confidence intervals) we grouped all these destinations into an overall 'other' category. The ROC curve and the sensitivity and specificity of different cut points identified a score of 19 as maximising sensitivity (88%) while retaining a specificity of 61%. With an area under the curve of 0.81 (SE 0.036, p < 0.001, CI 0.738–0.881), the LEADS score performs well as a test for nursing home admission (Figure 4).

Table 2 Mean LEADS score by outcome with confidence intervals
Figure 3. Mean and CI of LEADS score by outcome.

Figure 4. ROC curve of the LEADS showing cut score.

Stage 2: Predicting the need for nursing home placement: Binary logistic regression

Using the nursing home group and the combined 'other' group as the dependent variable, binary logistic regression was used to identify predictors for patients at risk of a nursing home placement. There was a disproportionate number of cases between these groups (233:25), and thus five random samples were selected from the 'other' group and added to the nursing home group to create repeated samples for analysis. The results were consistent across all samples; those presented are from the sample that gave the best predictive model.

The best model to predict nursing home placement included the LEADS (with its cut score at 19) together with respite care on admission, communication difficulties on admission, family or patient wishes for placement, and pressure sores (grade 1 or above). The -2 log likelihood statistic (27.227) is analogous to the error sum of squares in multiple regression and indicates how much unexplained information remains after the model is fitted (Table 3). The main model (including all of the variables) explains approximately 75% of the variance in predicting nursing home need (chi-square, p < 0.01); this is significantly better than the best model from the five samples that did not use these predictors.

Table 3 Binary logistic regression fit statistics

The coefficients derived from this analysis were then used as the basis of the full algorithm for predicting nursing home placement (Table 4). Exp(B) indicates the change in odds resulting from a unit change in the predictor; for example, having a LEADS score of less than 19 increases the odds of needing a nursing home admission sixty-five-fold. Each of these values was included in the algorithm (see Appendix 1).

Table 4 Minimum Exp(B) values for the variables from the logistic regression and their range over the 5 samples
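As a worked illustration of what an Exp(B) of this size means, the short sketch below multiplies an invented baseline odds by the reported odds ratio; the baseline value is ours for illustration only and is not taken from the fitted model.

```python
# Odds multiply by Exp(B) for each unit change in a predictor.  With an
# illustrative baseline odds of 0.05, an Exp(B) of 65 for 'LEADS < 19' gives:
baseline_odds = 0.05                      # hypothetical odds without the risk factor
odds_ratio_leads = 65                     # reported Exp(B) for a LEADS score below 19
new_odds = baseline_odds * odds_ratio_leads           # 3.25
new_probability = new_odds / (1 + new_odds)           # about 0.76
print(round(new_odds, 2), round(new_probability, 2))
```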

Stage 3: Sensitivity, specificity of the final algorithm

Initially the random sample that gave the best result for the LEADS was used to determine the best cut score for the full algorithm score, based upon the exponentiated values of the logistic regression and ranging from 0 to 358. This gave a cut score of 244, with a sensitivity of 0.88, a specificity of 0.96, and a positive predictive value of 0.92. The likelihood ratio (22.44) was maximised at this value. In addition, the ROC curve (Figure 5) showed an area under the curve of 0.972 (SE 0.016, p < 0.001, CI 0.940 to 1.004).
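The positive likelihood ratio follows directly from these figures as sensitivity divided by one minus specificity; the rounded values below reproduce the reported ratio to within rounding.

```python
sensitivity, specificity = 0.88, 0.96
lr_positive = sensitivity / (1 - specificity)
print(round(lr_positive, 1))   # 22.0, close to the reported 22.44 from unrounded proportions
```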

Figure 5. ROC curve showing sensitivity and 1-specificity of cut-point algorithm scores in the sample.

The algorithm was then re-tested on the original full data set of those discharged home or to sheltered accommodation, residential care, or nursing home care; thus 258 patients were used to test its sensitivity, specificity and predictive power. Examination of the ROC curve (Figure 6) identified 244 as the score that maximised sensitivity (88%) while retaining a specificity of 85% (Table 5). The area under the curve for the algorithm score was 0.921 (SE 0.019, p < 0.001, CI 0.883 to 0.959). The algorithm score is therefore an excellent predictive test, and the likelihood ratio of 6.04 indicates that, given an algorithm score of less than 244, there is a moderate increase in the likelihood that the patient will require nursing home admission. The algorithm identified 34 patients as needing a nursing home who subsequently went elsewhere, and identified 3 as 'other' who eventually went to a nursing home. Of the 34 who were predicted as needing a nursing home but went elsewhere, 22 (64.7%) returned home, 3 (8.8%) returned home with a carer, 7 (20.6%) went to residential accommodation and 2 (5.9%) went to sheltered housing.

Table 5 Specificity and sensitivity of the cut scores from the LEADS algorithm (full cohort)

Appendix 1 - Algorithm

The algorithm for predicting nursing home placement, expressed in SPSS syntax and based on the minimum Exp(B) values:

Compute predscrn = 358

if (LEADS score le 19) predscrn = predscrn -65

if (familypat wish = 1 (no)) predscrn = predscrn -34

if (communication difficulties = 1 (yes)) predscrn = predscrn -112

if (grade 1+ pressure sore = 1 (yes)) predscrn = predscrn -18

if (respite care on admission = 1 (yes)) predscrn = predscrn -128
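For readers without SPSS, a Python sketch of the same calculation is given below; the argument names are ours, the subtracted values are the rounded minimum Exp(B) figures above, and a final score below the cut of 244 indicates risk of nursing home placement.

```python
def algorithm_score(leads_score, family_patient_wish_no, communication_difficulties,
                    pressure_sore_grade1_plus, respite_care_on_admission):
    """Risk algorithm score; scores below the cut of 244 indicate risk of
    nursing home placement. The boolean arguments mirror the SPSS flags above."""
    score = 358
    if leads_score <= 19:
        score -= 65
    if family_patient_wish_no:           # family/patient wish coded 1 = no
        score -= 34
    if communication_difficulties:       # coded 1 = yes
        score -= 112
    if pressure_sore_grade1_plus:        # coded 1 = yes
        score -= 18
    if respite_care_on_admission:        # coded 1 = yes
        score -= 128
    return score

# Example: LEADS of 15 with communication difficulties gives 358 - 65 - 112 = 181 (< 244, at risk).
print(algorithm_score(15, False, True, False, False))
```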

An Excel spreadsheet with the algorithm and scoring for the LEADS is available from:

a.slade@leedsmet.ac.uk or a.tennant@compuserve.com

Figure 6. ROC curve showing sensitivity and 1-specificity of the algorithm score in the full cohort.

Discussion

Currently, clinicians are faced with a bewildering variety of potential measures for assessing outcome in an acute elderly setting. Having previously identified three scales that discriminated between people going to a nursing home and those with other outcomes, we have now shown that it is possible, through Rasch analysis, to extract items that work well together and measure the underlying dependency trait. Clinicians may still wish to use the original scales for clinical purposes, but in terms of measurement the 17-item LEADS scale and its associated algorithm have been shown to be a powerful tool for identifying patients at risk of nursing home placement and those likely to go to other types of care or home.

The false positive rate in the final analysis may be viewed as a major weakness of the approach, since some patients improved to the point where they could go home or move into other institutional settings. Indeed, the majority of the mismatch between the indicated and final placement was for patients who went home. It is important to remember that these data were collected within two weeks of admission. Thus the algorithm, as well as providing a common and equitable means of assessment, can act as an early warning system for risk of institutional care. Early identification of patients at risk enables interventions to be instigated early in the admission, potentially reducing the risk of nursing home placement. Given the parsimony of the scale, there is nothing to prevent repeated measurements during the patient's stay in hospital, providing a monitoring system for the continuing risk of institutional placement.

There are a number of weaknesses to the study. The low number of patients subsequently entering a nursing home was always going to be a cause for concern. However, we accommodated this as far as possible by sampling from the 'other' group and comparing the results. The assessment was also only undertaken once, within two weeks of admission. Additional work using repeated assessments is needed to examine changes in sensitivity and specificity over time, in order to determine whether there is an optimum time at which to maximise these parameters. Because of the low numbers entering a nursing home, we had to use this group in the development of the algorithm cut point as well as in its validation. This is likely to overestimate its predictive value, although we tried to offset this as far as possible by validating the algorithm on the full data set rather than the developmental sample. Finally, as with all models developed on a particular set of data, these results need replicating in other elderly acute samples to support conclusions about the predictive validity of the screening tool.

Conclusion

Using selected items from three separate scales previously shown to discriminate for nursing home placement, together with other key indicators, enables those working in an acute setting to identify patients at risk of needing nursing home placement within two weeks of admission, with a sensitivity of 88% and a specificity of 85%. The resulting LEADS scale and four indicator variables can easily be administered by any health care professional, and the risk algorithm lends itself to a simple spreadsheet calculation.

References

  1. Department of Health: The single assessment process for older people. Health Service Circular 2002/001. The NHS plan (Single assessment process). [http://www.dh.gov.uk/PolicyAndGuidance/HealthAndSocialCareTopics/SocialCare/SingleAssessmentProcess/fs/en]

  2. Slade A, Fear J, Tennant A: Predicting outcome for older people in a hospital setting: which scales are appropriate? International Journal of Therapy and Rehabilitation. 2004, 11: 25-30.

  3. Mahoney F, Barthel D: Functional Evaluation: The Barthel Index. Md State Med J. 1965, 14: 61-65.

  4. Shah S, Vanclay F, Cooper B: Improving the sensitivity of the Barthel Index for stroke rehabilitation. J Clin Epidemiol. 1989, 42: 703-709. 10.1016/0895-4356(89)90065-6.

  5. Hodkinson HM: Mental impairment in the elderly. J R Coll Physicians Lond. 1973, 7: 305-317.

  6. Turner-Stokes L, Tonge P, Nyein K, Hunter M, Nielson S, Robinson I: The Northwick Park Dependency Score (NPDS): a measure of nursing dependency in rehabilitation. Clin Rehabil. 1998, 12: 304-318. 10.1191/026921598669173600.

  7. Robinson BC: Validation of a Caregiver Strain Index. J Gerontol. 1983, 38: 344-348.

  8. Rasch G: Probabilistic Models for Some Intelligence and Attainment Tests. 1960, Copenhagen: Danish Institute for Educational Research (expanded edition 1980, Chicago: University of Chicago Press).

  9. Tennant A, McKenna SP, Hagell P: Application of Rasch analysis in the development and application of quality of life instruments. Value Health. 2004, 7 (Suppl 1): S22-S26. 10.1111/j.1524-4733.2004.7s106.x.

  10. Bond TG, Fox CM: Applying the Rasch Model: Fundamental Measurement in the Human Sciences. New Jersey: Lawrence Erlbaum Associates.

  11. Wolfe E: Equating and item banking with the Rasch model. J Appl Meas. 2000, 1: 409-434.

  12. Luquet C: A method for shortening instruments using the Rasch model: validation on a hand functional measure. Rev Epidemiol Sante Publique. 2001, 49: 273-286.

  13. Tennant A, Penta M, Tesio L, Grimby G, Thonnard JL, Slade A, Lawton G, Simone A, Carter J, Lundgren-Nilsson A, Tripolski M, Ring H, Biering-Sørensen F, Marincek C, Burger H, Phillips S: Assessing and adjusting for cross-cultural validity of impairment and activity limitation scales through Differential Item Functioning within the framework of the Rasch model: the PRO-ESOR project. Med Care. 2004, 42 (Suppl 1): 37-48.

  14. Ebell MH: Likelihood ratios. Introduction to Information Mastery. Michigan State University. [http://www.poems.msu.edu/InfoMastery/Diagnosis/likelihood_ratios.htm]

  15. Deeks JJ, Altman DG: Diagnostic tests 4: likelihood ratios. BMJ. 2004, 329: 168-169. 10.1136/bmj.329.7458.168.

  16. Altman DG, Bland JM: Statistics Notes: Diagnostic tests 3: receiver operating characteristic plots. BMJ. 1994, 309: 188.

  17. Field A: Logistic regression. Discovering Statistics Using SPSS for Windows. 2000, London: Sage Publications, 163-205.

  18. King G, Zeng LC: Explaining rare events in international relations. International Organization. 2001, 55: 697-715.

  19. Hosmer DW, Lemeshow S: Applied Logistic Regression. 2000, New York: Wiley, 2nd edition.

  20. Loong T: Understanding sensitivity and specificity with the right side of the brain. BMJ. 2003, 327: 716-719. 10.1136/bmj.327.7417.716.

  21. SPSS Version 11.5. 2003, Chicago: SPSS Inc.

  22. Andrich D, Lyne A, Sheridan B, Luo G: RUMM2020. Perth: RUMM Laboratory. [http://www.rummlab.com.au/]

  23. Bland JM, Altman DG: Multiple significance tests: the Bonferroni method. BMJ. 1995, 310: 170.

  24. Fisher WP: Reliability statistics. Rasch Measurement Transactions. 1992, 6: 238.

  25. Smith EV: Metric development and score reporting in Rasch measurement. J Appl Meas. 2000, 1: 303-326.


Acknowledgements

We would like to acknowledge and thank the Northern and Yorkshire Region NHS Executive for funding the project.

We would like to thank the staff and patients from Leeds Teaching Hospitals NHS Trust involved in the original study for their help and co-operation.

Author information

Correspondence to Anita Slade.

Additional information

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

AS, AT and JF conceived the study question and design.

AS was the principal data analyst.

AS wrote the provisional drafts of the manuscript with AT and JF reviewing the manuscripts.

AS was responsible for data collection and fieldwork.

All authors contributed to the critical evaluation of the methods, analysis and writing.

AT is the guarantor of the study.

All authors read the final manuscript.

Rights and permissions

Open Access: This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Slade, A., Fear, J. & Tennant, A. Identifying patients at risk of nursing home admission: The Leeds Elderly Assessment Dependency Screening tool (LEADS). BMC Health Serv Res 6, 31 (2006). https://doi.org/10.1186/1472-6963-6-31
