  • Research article
  • Open access

The acceptability and feasibility of using the Adult Social Care Outcomes Toolkit (ASCOT) to inform practice in care homes



The Adult Social Care Outcomes Toolkit (ASCOT) measures social care related quality of life (SCRQoL) and can be used to measure outcomes and demonstrate impact across different social care settings. This exploratory study built on previous work by collecting new inter-rater reliability data on the mixed-methods version of the toolkit and exploring how it might be used to inform practice in four case study homes.


We worked with two care home providers to agree an in-depth study collecting SCRQoL data in four case-study homes. Data were collected on residents’ age, ethnicity, cognitive impairment, ability to perform activities of daily living and SCRQoL in the four homes. Feedback sessions with staff and managers were held in the homes two weeks after baseline, and follow-up data were collected three months later. Interviews with managers explored their views of the feedback and recorded any changes that had been made because of it.


Participant recruitment was challenging, despite working in partnership with the homes. Resident response rates ranged from 23 to 54 %, with 58 residents from four care homes taking part in the research; 53 % lacked capacity to consent. Inter-rater reliability for the ASCOT ratings of SCRQoL was good at time one (IRR = 0.72) and excellent at time two (IRR = 0.76). During the study, residents’ ability to perform activities of daily living declined significantly (z = -2.67, p < .01), as did their expected SCRQoL in the absence of services (z = -2.41, p < .05). Despite these rapid declines in functioning, residents’ current SCRQoL declined only slightly and not significantly (z = -1.49, p = .14). Staff responded positively to the feedback given and managers reported implementing changes in practice because of it.


This exploratory study faced many challenges in the recruitment of residents, many of whom were cognitively impaired. Nevertheless, without a mixed-methods approach many of the residents living in the care homes would have been excluded from the research altogether or had their views represented only by a representative or proxy. The value of the mixed-methods toolkit and its potential for use by providers is discussed.



In many countries, central governments are responding to the challenges of an ageing population. With the proportion of people requiring long-term care expected to increase, policy makers are keen to deliver health and social care services efficiently and effectively. In the UK, the importance of measuring people’s outcomes, wellbeing and quality of life to support service evaluation and planning has been emphasised by researchers and accepted by policymakers and service providers for some time. Work in this area has developed considerably, particularly in terms of the development of measures for research and economic evaluation [1]. In England, national outcomes frameworks have been developed for adult social care (the Adult Social Care Outcomes Framework) [2] and the Care Act [3] has placed a statutory responsibility on local government to place well-being at the heart of care and support [4]. Care homes will increasingly be expected to demonstrate the impact and quality of the care and support they provide, as the regulator asks whether services are safe, effective, caring, responsive and well-led [5].

The Adult Social Care Outcomes Toolkit (ASCOT) was derived through a series of studies and to date is the only measure focusing specifically on the areas of quality of life that can reasonably be attributed to social care services [6]. ASCOT is a preference-weighted measure with eight conceptually distinct domains of social care related quality of life (SCRQoL), outlined in Table 1. The domains cover the basic (personal cleanliness and comfort, accommodation cleanliness and comfort, food and drink, and feeling safe) and higher order (social participation, occupation, and control over daily life) aspects of SCRQoL. The final domain, dignity, differs from the other domains, reflecting the impact of the care process on how people feel about themselves [7].

Table 1 The ASCOT domains

The ASCOT domains were identified through expert review with professional stakeholders to ensure its sensitivity to outcomes of interest to policymakers and its relevance to the evaluation of social care interventions [6]. This was complemented by a literature review exploring service users’ understanding of social care outcomes and cognitive interviews to check social care service users’ understanding of terms and to clarify the wording of the items [6]. Although ASCOT is not a measure of health outcomes, previous research has found a reasonable relationship with the EQ-5D (r = 0.4) [7], whilst also confirming that ASCOT is more sensitive to the impact of social care interventions, such as care provided in people’s own homes to help them get up, washed, dressed, eat meals and keep their home clean and comfortable [7–9].

Although developed in the UK, since its release in 2012 ASCOT has attracted considerable international attention and has been used by researchers in studies examining the impact of various forms of long-term care in Australia [9, 10], Finland [11], the Netherlands [8, 12], Austria [13], Denmark [14] and Italy [15]; it is also currently being translated into Japanese. In Australia, ASCOT is also being piloted as a potential quality indicator for aged care services [16]. Thus, its use as a tool for measuring the outcomes of long-term care and informing policy and practice is well established.

A number of different measures can be derived from the ASCOT toolkit [17], making it possible to estimate the impact a service is having on a person’s SCRQoL. The first, current SCRQoL, reflects the person’s currently experienced SCRQoL (with services in place). The second, expected SCRQoL, is an innovative method for estimating the counter-factual without the necessity for a control group [18] and reflects the SCRQoL that would be expected in the absence of services. Although this approach requires further validation and testing, early findings lend support for use of the counter-factual estimation approach in general [18] and more specifically, as a measure of expected SCRQoL in ASCOT [6]. Furthermore, in a previous care homes study, criterion validity of the expected scores was supported by the finding that nursing homes had significantly lower scores than residential care homes [19].

By subtracting expected SCRQoL from current SCRQoL, we can calculate the SCRQoL gain, which reflects the total benefit of the intervention or service [6].

$$ \text{current SCRQoL} - \text{expected SCRQoL} = \text{SCRQoL gain} $$
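In code, this gain calculation is a simple difference. The sketch below uses illustrative values (the study's T1 sample means) rather than real resident data:

```python
# Sketch of the SCRQoL gain calculation above; the scores here are
# illustrative (the study's T1 sample means), not a real resident.

def scrqol_gain(current: float, expected: float) -> float:
    """Gain = current SCRQoL (with services) minus the expected
    counterfactual SCRQoL (without services)."""
    return current - expected

# A home whose residents average 0.71 with services in place, but would
# be expected to average 0.13 without them, delivers a gain of 0.58.
print(round(scrqol_gain(0.71, 0.13), 2))  # 0.58
```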

Like most quality of life measures, ASCOT was originally designed as a self-completion tool but there are well-documented difficulties of using self-completion questionnaires and even structured interviews with the most impaired populations [20–22], such as those living in long-term care. To overcome these issues, we developed a multi-method approach to evaluating the outcomes of care home residents [23, 24]. The care homes toolkit (CH3) collects data through structured observations and interviews, which then form the basis for ratings of residents’ SCRQoL [25]. In each domain, one rating is made to reflect whether people have no, some or high unmet needs in that aspect of their life. These are defined in Table 2. ‘Some’ needs have a negative impact on the person’s quality of life, whereas ‘high’ needs have consequences for the person’s physical or mental health. For example, in the case of food and drink, people who do not have meals at times they would like or choice over what to eat would have some needs; those who were getting an inadequate diet or insufficient liquids would have high needs.

Table 2 Outcome states for each ASCOT domain for current SCRQoL

CH3 was developed and tested in a study involving 366 residents living in 82 English care homes and showed acceptable properties [19]. Local fieldworkers were recruited and trained for the original study and inter-rater reliability showed acceptable percentage agreement (77 % for current and 81 % for expected SCRQoL) but kappa statistics suggested some room for improvement (0.47 for current and 0.57 for expected SCRQoL) [19]. Since its development, CH3 has been refined and the training and guidance improved, with a view to improving inter-rater reliability [26]. There has also been support from providers that ASCOT has the potential to have a positive impact on how services assess and meet residents’ health and social care needs [27, 28]. Unlike the self-completion and interview instruments, the observational element of CH3 provides context/evidence for the overall scores. Ratings are supported with real life examples taken from observations and interviews and there is scope to use these to help staff understand how they might improve the SCRQoL of residents [29].

However, we know from previous research that the process of improving practice is not straightforward [30, 31] and observations of practice can be perceived as threatening by staff [31, 32]. It is imperative that staff feel supported by management [33] and that any feedback about ratings of residents’ SCRQoL, and the context behind those ratings, is delivered sensitively [31]. As well as the relevance and quality of the feedback intervention itself, there are several home-level factors that will influence its impact on practice. These include: the level of staff engagement with the research [34] and corresponding attendance at the feedback sessions [35, 36]; the care home culture [37]; and the skills and commitment of the management team [38]. Drawing on the key messages from research examining the impact of staff training on practice improvement [39], there is some evidence that training works best when it is tailored to the issues identified in particular settings, forms part of a wider commitment to quality improvement [33] and is given to supervisory staff and management as well [40]. Although this intervention was considered feedback, not training, it was important to consider these key messages in this research. As such, following consultations with key stakeholders (academics, a national care home provider and a representative from a UK-wide initiative to promote change and improve quality of life in care homes), this study worked with four care homes to design a feedback-intervention based on the measures included in the ASCOT. We examined whether the feedback was considered relevant and helpful by staff and also whether it led to any reported changes in practice and/or measurable improvements in the SCRQoL of residents.

Aims and objectives

The aims of this study were to:

  1. Design a feedback-intervention based on the evidence collected using the CH3 toolkit (observational notes and interviews) and pilot it in a small sample of care homes in England.

  2. Examine the acceptability of this feedback to care home staff and explore whether there were any reported changes in staff practice and/or measurable changes in residents’ SCRQoL after the feedback had been delivered.

  3. Examine and report new inter-rater reliability analysis on the CH3 approach.


This study was funded by the School for Social Care Research and given a favourable ethical review from the Social Care Research Ethics Committee (12/IEC08/0051) who confirmed it complied with the requirements of the Mental Capacity Act [41].

Homes and participants

As care homes and care home residents are known to be very difficult to recruit [24], with high attrition rates [42], we worked closely with four case study homes for older adults in one local authority in England. There are two main types of care home in the UK: residential care homes and nursing homes. All homes provide care and support throughout the day and night and have staff who provide help with washing, dressing, meal times and using the toilet. However, nursing homes also provide 24-h medical care from a qualified nurse [43]. In order to explore the feasibility of the intervention working in different types of homes, we purposively recruited two nursing homes and two residential homes. We also recruited both a large national chain and a small independent provider. Ideally each provider would have volunteered a home with and a home without nursing, but our final sample included two nursing homes owned by a national care home provider and two residential homes run by a small independent provider [44]. All homes accepted people living with dementia. Homes varied in size between 29 and 64 beds. The two residential care homes taking part in the study were unusual in that they only accepted female residents.

All staff were invited and encouraged to take part in the research. As the feedback was aimed at staff, we wanted as many staff as possible to be present. The feedback was relevant to all staff; including administrative, catering, domestic and estate. Family members were invited to take part in focus groups (to be reported elsewhere) and were asked their opinions of their relative’s SCRQoL, if they consented to be interviewed.

All permanent residents were invited to take part in the research, including people with dementia, other cognitive impairments and communication difficulties. The only exclusions were residents receiving respite/short-term care and those currently in hospital. In accordance with the Mental Capacity Act [41], residents assessed as lacking the capacity to consent to take part in the research were recruited via the advice of a personal consultee. The Act defines a personal consultee as an unpaid carer or someone interested in the person’s welfare (such as a friend or relative), who is willing to be consulted [41]. We asked home managers for advice on this and, where they felt consultees ought to be involved, they forwarded the appropriate information sheets and consent forms to consultees on our behalf. Alongside this, researchers spent time in each home talking to residents, explaining the study and assessing their capacity to consent. Throughout the study, consent was treated as a continuous process: researchers repeatedly checked residents’ willingness to remain involved (see [45]).

Data collection

For the purposes of examining inter-rater reliability, two researchers collected the data for 84 % of the participating residents in the care homes at time one (T1) and again for 93 % of the residents at time two (T2). One was the main rater (R1) and provided the ratings on which all the analysis is based. The other was the second rater for reliability purposes (R2) and was also the main researcher preparing and administering the feedback. R1 did not prepare or administer the feedback, so whilst she was not blind to the T1 ratings, she was less likely to be influenced by what had been discussed during the feedback session when making the T2 ratings. R1 was trained by the lead author and R2, who are both ASCOT trainers. Checking reliability again at T2 allowed us to examine whether being part of the feedback biased R2 towards more positive ratings at T2 and whether R1, who was previously new to the ASCOT toolkit, agreed more or less with R2 after gaining experience during wave 1.

Following the same approach used in Netten, Trukeschitz et al. [19], staff provided information about residents’ functional abilities and their level of cognitive impairment through the completion of user characteristic questionnaires. At T1 and T2, researchers spent up to 5 days in each home using the CH3 toolkit. For each participant, the following data were collected using the mixed-methods toolkit: structured and general observations in communal areas, including during a meal time; conversations or interviews with the residents themselves (depending on their cognitive ability); and structured interviews with care staff asking them to respond to the ASCOT questions on behalf of the resident (proxy interviews). These data, together with detailed guidance, were drawn on in order to rate residents’ SCRQoL using ASCOT.

To reflect the impact of the care provided by the home, CH3 includes both ‘current’ SCRQoL (i.e. experienced/achieved quality of life now) and ‘expected’ SCRQoL (i.e. expected quality of life in the absence of the care and support they receive in the home, holding all other factors constant). In care homes for older adults, where physical and cognitive decline is highly likely [46], we would expect to see a decline in residents’ expected SCRQoL if measured over sufficient time, as their ability to meet their own needs worsens. Thus, all other things being equal, if current SCRQoL remains constant, despite declines in expected, the resident is gaining more from the service over time; the service is increasingly compensating for their loss of functionality (ability to care for themselves). SCRQoL gain is a measure of the impact of care defined as the difference between current SCRQoL and expected SCRQoL.


The main outcome measure is current SCRQoL, as measured by ASCOT. However, in order to understand these scores, we are also interested in trends in the expected SCRQoL scores and overall gain at T1 and T2. Rather than a simple summed score assuming each domain is of equal importance, the current and expected ASCOT ratings were weighted to reflect English population preferences [6]. Possible scores range from -0.23 to 1.0. The ASCOT score for each case is calculated by anchoring the score to the best possible or ‘ideal’ state and to the equivalent of ‘being dead’ state. This means that whilst a score of 1.00 would represent optimum or ‘ideal’ SCRQoL, a score of 0.00 would indicate a state that is equivalent, according to the preferences exhibited by the general population, to being dead. The score can also drop below zero into negative values: a negative score represents SCRQoL that is so bad it is considered worse than being dead [6, 17, 47].
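To make the anchoring idea concrete, the following sketch computes a preference-weighted score from domain-level ratings. All weights here are invented placeholders, not the published ASCOT English population preference values, and the domain labels are abbreviated; the sketch only illustrates how weighted domain ratings sum to a score anchored at 1.0 for the ‘ideal’ state.

```python
# Sketch of a preference-weighted SCRQoL score of the kind described
# above. All weights are invented placeholders, NOT the published ASCOT
# population preference values; they only illustrate the anchoring idea.

# Placeholder utility increments per domain and outcome state
# (no needs > some needs > high needs).
WEIGHTS = {
    "control over daily life": {"no": 0.20, "some": 0.08, "high": -0.05},
    "occupation":              {"no": 0.15, "some": 0.06, "high": -0.04},
    "social participation":    {"no": 0.15, "some": 0.06, "high": -0.04},
    "personal cleanliness":    {"no": 0.10, "some": 0.04, "high": -0.03},
    "food and drink":          {"no": 0.10, "some": 0.04, "high": -0.03},
    "safety":                  {"no": 0.10, "some": 0.04, "high": -0.03},
    "accommodation":           {"no": 0.10, "some": 0.04, "high": -0.03},
    "dignity":                 {"no": 0.10, "some": 0.04, "high": -0.03},
}

def weighted_scrqol(ratings: dict) -> float:
    """Sum per-domain utilities; the 'ideal' state (no unmet needs in
    every domain) anchors to 1.0 under these placeholder weights."""
    return sum(WEIGHTS[d][ratings[d]] for d in WEIGHTS)

ideal = {d: "no" for d in WEIGHTS}
print(round(weighted_scrqol(ideal), 2))  # 1.0 with these placeholders
```

Unmet needs in any domain pull the score below 1.0, and sufficiently widespread high needs could push it negative, mirroring the ‘worse than being dead’ region of the real measure.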

Inter-rater reliability was assessed using a two-way random, absolute agreement, single-measures Intraclass Correlation Coefficient (ICC) [48] to assess the degree to which coders provided consistent ratings of SCRQoL across residents. The resulting ICC for current SCRQoL was good (ICC(2,2) = .72) [49] at T1 and excellent (ICC(2,2) = .76) at T2. Similarly, the resulting ICC for expected SCRQoL at T1 was good (ICC(2,2) = .71) and excellent at T2 (ICC(2,2) = .81). These ICCs indicate that coders had a good degree of agreement and that R1, who had not previously used the ASCOT toolkit, agreed more closely with R2 (an ASCOT trainer) after gaining experience of using the toolkit in the first wave.
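For readers reproducing this kind of analysis outside SPSS, a two-way random, absolute-agreement, single-measures ICC (ICC(2,1) in Shrout and Fleiss's notation) can be computed from first principles. This is a minimal sketch under that formulation, not the exact procedure used in the study:

```python
# Minimal sketch of a two-way random, absolute-agreement, single-measures
# intraclass correlation (ICC(2,1), Shrout & Fleiss notation). This is an
# illustration, not the exact SPSS procedure used in the study.
# Data layout: one row of ratings per resident, one column per rater.

def icc_2_1(data: list[list[float]]) -> float:
    n, k = len(data), len(data[0])            # residents, raters
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)  # between residents
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)  # between raters
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters yields an ICC of 1.0.
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

A constant offset between raters (for example, one rater always scoring one point higher) lowers the absolute-agreement ICC even though the ratings are perfectly correlated, which is why this form is appropriate for checking that two coders produce interchangeable scores.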

We also examined inter-rater reliability for subgroups of interest within our sample using the time 2 data, in which we had a very high level of agreement overall. These are reported in Table 3. Notably, for current SCRQoL, there was very little difference in agreement according to type of home, but less agreement for residents lacking the capacity to consent (ICC(2,2) = .62), although this is still considered a ‘good’ level of agreement [49]. For expected SCRQoL, we found the opposite pattern, with slightly less agreement in residential care homes compared to nursing homes; however, the difference was very small and all ICCs were excellent (above .75) [49]. Following the approach outlined by Stolarova et al. [50], differences between ICCs for different subgroups were evaluated by examining their confidence intervals (CIs), reported in Figs. 1, 2, 3 and 4. Overlapping CIs indicate that the ICCs do not differ significantly from each other.

Table 3 Comparing inter-rater reliability at time two for subgroups of interest
Fig. 1 Comparison of inter-rater reliability for current SCRQoL at time 2 by type of home

Fig. 2 Comparison of inter-rater reliability for current SCRQoL at time 2 by the capacity of residents to consent

Fig. 3 Comparison of inter-rater reliability for expected SCRQoL at time 2 by type of home

Fig. 4 Comparison of inter-rater reliability for expected SCRQoL at time 2 by the capacity of residents to consent

These levels of ICC suggest that only a small amount of measurement error was introduced by the independent coders [51], and therefore statistical power for subsequent analyses is not substantially reduced. Both the current and the expected SCRQoL ratings were therefore deemed suitable for use in the analysis. There was no evidence of bias in R2’s ratings at T2, despite his having delivered the feedback in the homes.

Feedback intervention

Feedback was based on the SCRQoL of participating residents. Scores for each domain were presented at an aggregate level to protect the anonymity of specific residents. Rather than focusing on the summed, preference-weighted scores, homes were given feedback at the domain level, supported by examples of how the researchers came to those ratings. The feedback sessions were held two weeks after T1 data collection. Sessions began by describing how many residents took part in the research and reminding staff that part of the study was to explore whether the information was accurate and helpful. Each session focused first on what the home was doing well: the domains of quality of life where most residents had no needs and no residents had high needs. We backed these ratings up with examples from fieldwork observations. Afterwards, we discussed the domains where larger numbers of residents had some or high needs and gave examples to support these ratings too. See Table 4 for an example.

Table 4 Current SCRQoL ratings for ‘occupation’ in one case study home and an example of the feedback given to staff about this domain during the feedback sessions
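The strengths-first logic of the sessions can be sketched as a simple classification over aggregated domain ratings. The simple-majority threshold for "most residents" and the ratings below are our assumptions for illustration, not study data:

```python
# Sketch of the domain-level aggregation used to structure the feedback:
# a domain counts as a strength when most residents have no unmet needs
# and no resident has high needs. The simple-majority threshold and the
# example ratings are illustrative assumptions, not study data.

from collections import Counter

def classify_domain(ratings: list[str]) -> str:
    counts = Counter(ratings)
    if counts["no_needs"] > len(ratings) / 2 and counts["high_needs"] == 0:
        return "strength"
    return "area for improvement"

print(classify_domain(["no_needs"] * 9 + ["some_needs"] * 3))   # strength
print(classify_domain(["no_needs"] * 4 + ["some_needs"] * 6
                      + ["high_needs"] * 2))  # area for improvement
```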

The researchers prompted group discussion by asking whether the feedback was representative of life in the home and how they felt about it. Throughout, staff were encouraged to think of ways to improve ratings and overcome difficulties in the domains of quality of life requiring improvement. To give all staff the opportunity to take part in the feedback we ran multiple feedback sessions throughout the day, as required by the homes. Feedback sessions were tape recorded and transcribed for analysis of the acceptability and face-validity of the feedback.


Data were analysed using a variety of non-parametric techniques appropriate for variables that do not have a normal distribution (e.g. ordinal variables). For comparisons between participants in homes with and without nursing, the Mann-Whitney U-test was used. When controlling for covariates of type of home, such as dependency and cognitive ability, a General Linear Model was used instead. Chi-squared (χ2) tests of association were used to explore relationships between capacity to consent and setting. For comparisons between T1 and T2, the Wilcoxon signed-rank test was used. To explore relationships between background variables and outcome variables, non-parametric tests of correlation were employed (Spearman’s rho). The statistical analyses were undertaken using SPSS Statistics, version 20 [52].
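As an illustration of the first of these tests, the Mann-Whitney U statistic can be computed directly by counting pairwise wins between the two groups (the study itself used SPSS; in Python, scipy.stats.mannwhitneyu would be the usual choice and also returns a p-value). The scores below are illustrative, not study data:

```python
# Pure-Python sketch of the Mann-Whitney U statistic for two independent
# groups, as used for the between-home comparisons. Scores below are
# illustrative, not study data.

def mann_whitney_u(a: list[float], b: list[float]) -> float:
    """U for group a: one point per pair (x, y) with x > y, half for ties."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in a for y in b)

# Example: every score in the first group beats every score in the
# second, so U equals len(a) * len(b) = 9.
print(mann_whitney_u([0.74, 0.80, 0.71], [0.65, 0.60, 0.70]))  # 9.0
```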


Residents’ characteristics

Fifty-eight residents across four homes were recruited to the research. Response rates ranged from 23 % in one of the nursing homes to 54 % in one of the residential care homes. This is consistent with previous research involving care homes for older adults in the UK [53]. Nobody withdrew from the study; however, we had to exclude nursing home 2 from the T2 data analysis of SCRQoL because the home was taken over by another provider and residents were being moved to other homes. Allowing for this, our attrition rate was 16 %.

The proportion of residents in our sample lacking capacity to consent was quite high (mean 53 %) but not surprising given that in excess of 80 % of care home residents in the UK have dementia or significant memory problems [54]. In one home, which was for older adults with nursing needs and dementia, the manager requested we involve personal consultees for all residents. Significantly more of the nursing home residents lacked the capacity to consent to the study (χ2(1,58) = 14.70, p < .001) compared with those living in homes without nursing. The impact of recruiting participants via consultees is explored in the discussion section. Nobody with capacity at T1 was found to have lost the capacity to consent at T2 (12 weeks later).

Of our total sample, 85 % were female, which is higher than the proportion found in the population of older people living in care homes [55]. As shown in Table 5 and outlined above, this was because two of the homes in our sample were exclusively for female residents. Residents ranged in age from 73 to 97 years old, with a mean age of 86 years. Age did not vary significantly by type of home (p = 0.34). Of our sample, 60 % were self-funding their care, 10 % were part publicly funded and 19 % were completely publicly funded; information was missing for the remaining 11 %. All of our sample were white and 97 % were White British/Irish, which is in line with 2011 census data on the over-65s in the South East of England, reporting over 97 % of the population in this area as white [56].

Table 5 Characteristics of homes and residents

The Barthel Index of Activities of Daily Living [57] was calculated (see Table 5). Scores range from 0 to 20, with higher scores indicating greater independence and lower scores indicating the need for more help. Nursing home residents had significantly lower T1 scores (U = 271.50, p < .05) and T2 scores (U = 265, p < .05) than those in homes without nursing. The overall sample mean was 8.67, which is lower than in previous research, indicating greater levels of dependency in our sample [58, 59]. The mean Barthel score for the whole sample declined significantly between T1 and T2 (z = -2.67, p < .01).

Cognitive impairment was measured by the Minimum Data Set Cognitive Performance Scale (MDS-CPS) [60], with scores ranging from zero (intact) to 6 (very severe impairment). The mean score for the whole sample was 3.4 at T1, which is higher than in previous research [58, 59] and may reflect the increasing incidence of cognitive impairment in care home residents and/or the ability of our methodology to include them in the sample (Baumker et al., 2011 and Darton et al., 2012, excluded residents lacking capacity to consent). There was no change in mean cognitive impairment score between T1 and T2 (z = -1.63, p = .10). Nursing home residents were significantly more cognitively impaired (median = 4.00) than residents in homes without nursing (median = 3.00) (U = 276.00, p < .05).

Social care-related quality of life

Mean scores for current SCRQoL in our sample were 0.71 at T1 and 0.67 at T2 (see Table 6). This difference is not significant (z = -1.49, p = .14). The decline in residents’ mean expected SCRQoL between T1 (.13) and T2 (.06) was significant (z = -2.41, p < .05), indicating that, at T2, they were less able to meet their own needs without help from services. This is in line with the reported decline in residents’ abilities to perform activities of daily living; indeed, the two scores are significantly correlated at both T1 (rs = .69, p < .001) and T2 (rs = .58, p < .001). However, despite residents requiring more help at T2 to maintain similar levels of SCRQoL, the overall gain from services remained the same (z = -.29, p = .77). This is because current SCRQoL also dropped slightly, albeit not significantly. Neither current SCRQoL (rs = .13, p > .05) nor expected SCRQoL (rs = -.11, p > .05) was even marginally related to residents’ age in our sample. We were unable to reliably test for gender differences, given the very small number of men in our sample (N = 10).

Table 6 SCRQoL scores (current, expected and gain) for the homes in our sample

Using the T1 data for all four homes, residents living in homes without nursing had significantly higher SCRQoL scores (median = .74) than those living in homes with nursing (median = .65) (U = 258.50, p < .05). However, after controlling for the differences in residents’ needs and characteristics related to setting (Barthel and MDS-CPS), this difference in SCRQoL no longer held (F(2,58) = 3.60, p = .06).

Acceptability of the feedback intervention and reported changes to practice

The acceptability of the feedback intervention was explored in both the feedback sessions with staff and in interviews with home managers after T2 data collection. What emerged was generally a positive view from staff and managers on the data collection process and feedback intervention:

“There was no disruption to the home at all, [The fieldworkers] just went off, found their residents that they needed to observe, and just basically just took hold of it all and got on with it. It didn’t cause any disruption to us whatsoever.” (Manager Nursing Home National Chain)

The interviews with staff about residents’ SCRQoL were deemed a strain on staff time, although staff were not uncomfortable with researchers being present to observe:

“The staff were actually fine because the staff are used to people coming in and out…. everybody seemed to be very discreet. I mean, you know, so if they were aware they forgot that you were there.” (Manager Care Home Independent)

Any apprehension staff may have felt at the start of the research disappeared as staff realised that fieldworkers were not there to scrutinise or criticise them and their working practices. During the feedback sessions, staff often expressed support for our findings and in one case a desire that the research team ensure that management were made aware of our findings. Some staff were also happy to think about what the findings meant for residents and how they could address the issues our work had raised:

Interviewer: “I just wondered how useful you found this feedback?…”

Staff 4: “I think it is actually ‘cause we.. where we are, so I’m like constantly from one job to the next job,… sometimes it takes outside eyes … to see that” (feedback session Nursing home national chain)

Staff and managers agreed with the feedback they were given and felt it accurately reflected the areas of quality of life they do well at (personal cleanliness and comfort, accommodation cleanliness and comfort, safety and dignity) but also identified areas they struggle to make time for (choice over food, control over daily life, social participation and occupation):

Staff 1: “Well everything you said about the activities is completely right, it’s not enough” (feedback session nursing home, national chain).

All sessions led to interesting conversations about care workers’ desires to meet these needs but the challenges they faced within their organisational culture to do so. In the nursing homes in particular, which were both owned by a large national provider, staff frequently talked about insufficient staff-resident ratios. They felt they could meet the basic health and social care needs but did not have time for anything else:

Staff 4: “Unfortunately, you know, unless or until [provider] ups their [staffing] levels where--, ‘cause at the minute--, I mean as it stands at the moment we are five residents per one member of staff”

Further quotes representing the issues that arose are summarised in Additional file 1: Table S7, with reference to the domains being discussed in each case.

Staff from the nursing home for people with advanced dementia were particularly positive about the focus on quality of life because they felt their emphasis was usually on meeting health/nursing needs and that the organisation they worked for did not employ enough staff to do more than get people up, washed, dressed and fed:

Staff 2: “They always say, don’t they, when you go to a care home you always see residents in the lounge asleep? It’s not because they’re tired, it’s ‘cause they’re bored, it’s boredom I think a lot of the time.”

Staff 1: “There’s nothing keeping their mind going.”

Staff 2: “Exactly, what do you do? You sleep when you’re bored.” (feedback session, nursing home, national chain)

There was a sense that the feedback gave them an opportunity to raise issues around staffing structures and the limitations this placed on their ability to spend time with residents, socialising and supporting their independence:

Staff 2: “Will it go past regional management?”

Interviewer: “I don’t know.”

Staff 2: “I hope so, I hope so.”

Indeed, it was in this home that the feedback had the biggest impact in terms of reported changes to practice:

“I completely changed the whole setup of the working day. So I looked at smaller groups of residents, because the staff were coming back to me and saying, ’We haven’t got time to complete all of our tasks with so many residents.’.... They now have more time to spend with the residents in terms of social care; the little things, painting nails, and so on and so forth, and the lipstick and it’s all very, very important. So that took the onus off of a task-orientated workload.” (Care Home Manager Nursing National Chain)

Interestingly, despite these changes, the manager was not certain it had impacted upon the residents:

Interviewer: “Have you noticed any changes in residents from those changes?”

Manager: “It’s difficult to say with the residents. I mean there are a few that are happier now that they have got their time set for them in the morning.” (Care Home Manager Nursing National Chain)

This perhaps reflects the challenges of achieving measurable or observable improvements in quality of life in long-term care settings, particularly nursing homes, where many people have multiple comorbidities and complex needs.

Nevertheless, all managers felt that they had been able to use our feedback to put in place changes in the home that they hoped would improve quality of life for the residents. For example, feedback about low levels of occupation/engagement led to the residential care home provider employing an organisation specialising in creating activities for older adults with dementia:

“they’re doing some training here. It is interesting. It’s broken up into different--, a whole series of different modules from kind of meet and greet, icebreaker type things to physical activities, to singing, to storytelling, erm, and it’s all themed and it kind of allows the--, it allows the residents to sort of take things off in a particular direction.” (manager, residential care, independent provider)

Managers also reflected upon the acceptability of the whole feedback intervention process to residents and their families and friends. Again, the view was broadly positive, with no notable differences between the nursing homes and the residential care homes. One manager suggested that the subtle and discreet observational techniques used by fieldworkers meant that we did not affect either residents’ “day to day routine” or “their relationships with anybody else that’s in the environment” (residential care home manager, independent). However, the manager of one of the nursing homes noted that some residents on the more severe dementia floor did display “some extra agitation” whilst we were there. This was not noted by the other three homes (including the other nursing home) or by the researchers themselves during the visits. However, there was a higher degree of cognitive impairment and disability in this particular home because the research was carried out on the floor for people with advanced dementia and nursing needs. It will be important to explore the impact of researchers being present in such settings in future research.

In terms of acceptability to relatives and friends of residents, managers felt that they were overwhelmingly supportive of the research project:

Interviewer: “How about relatives and residents? Did they have any comments to make about it [the research]?”

Manager: “… from the very onset, once they had their letters explaining to them what was going to happen, they were quite enthralled by it and they were looking forward to actually having an outside person come and look at what it is that we do here at [the nursing home]. So they were on our side from start to end.” (Nursing Home Manager National Chain)

Recruitment levels were low, although not unusually so, and this raised concerns about those who did not take part in the research directly. One manager addressed this in the interview, saying that some relatives of residents who lacked capacity did not want their relative to take part directly in the research for fear that it might cause them stress. Nonetheless, the manager felt there was still general support for the project from these relatives, even though they did not want the relative living in the home to participate directly:

“I mean although there was a tiny minority of relatives that didn’t want their relatives being in the research there was no concern about you being there and they actually felt the research itself was of value… they just didn’t want their own--, they didn’t want you questioning or asking their relatives ‘cause they thought it might cause them distress, but there was no concern about you being there to do it” (Care Home Manager, Independent)


Discussion

This study sought to design a feedback intervention based on the observational and interview evidence collected through the mixed-methods approach to measuring outcomes in care homes using the Adult Social Care Outcomes Toolkit (ASCOT). We worked in partnership with care home providers to design the intervention and agreed its implementation in four case-study homes. The pilot study aimed to collect revised estimates of inter-rater reliability for the CH3 toolkit and to explore the acceptability and feasibility of the intervention to care home staff.

Recruitment of residents was a significant challenge throughout this research, which worked with two nursing and two residential homes, all of which cared for at least some people living with dementia. For example, in one nursing home for people with dementia, all residents lacked the capacity to consent and had to be recruited to the study via the advice of personal consultees. This placed a considerable administrative burden on the homes, which, for data protection reasons, had to act as gatekeepers to those consultees, sending information sheets to family members and representatives on our behalf. This effectively means that the pathway between the researcher and the resident involves at least four steps (researcher-home-consultee-researcher-resident), making the recruitment process very long and slow. We also experienced difficulties with one of the nursing homes, which unexpectedly transferred ownership to the NHS after the feedback stage of the project and had to withdraw from the research. This home also experienced a complete change of senior management during the life-cycle of the project, which affected the support the project received in contacting consultees and recruiting residents in that home.

Nevertheless, once we had consent to collect data in the homes, we found staff and residents welcoming and cooperative. In line with previous research [19], resident outcomes were higher for the basic quality of life domains and most of the feedback sessions focussed on the higher order domains: control over daily life, social participation and occupation, with one home also having some needs around choice of food and drink. Staff responded well to the feedback in all homes and felt it accurately reflected what they did well, as well as the areas they found more challenging to address. In the nursing homes in particular, staff felt that they were restricted by low staff numbers and a task-focused approach to caring. In all homes, staff engaged well with the feedback sessions and came up with ideas to improve outcomes for residents.

However, despite staff finding the feedback helpful and valid, and managers saying they had implemented changes because of it, we did not find a measurable improvement in SCRQoL between T1 and T2. It seems likely that our follow-up period of 12 weeks was not long enough to elicit improvements in residents’ SCRQoL. This period was agreed following consultations with senior management within the homes, who felt it should be enough time for changes in practice. However, previous work on the impact of interventions in residential care has noted that three months might not be long enough to see improvements in such things as unmet need, quality of life or depression [61, 62]. Moreover, the procedures for using the well-established Dementia Care Mapping tool (DCM) [63, 64] in studies of care practice tend to suggest gaps of one year between periods of data collection [65, 66]. Indeed, in one of the homes in this study, a new approach to engaging residents in activities, trialled in response to our feedback, had only just begun on the last day of T2 data collection. It had, therefore, had no opportunity to affect the lives of residents, but might have done had we been able to return at a later date.

Whilst this may explain why we did not see an improvement in current SCRQoL between T1 and T2, it does not account for the slight decline in residents’ current SCRQoL. To understand this, it helps to look at the expected SCRQoL scores. Across all homes, expected SCRQoL decreased between T1 and T2, matched by a significant decline in Barthel scores, indicating that those taking part in the study became increasingly frail between T1 and T2. Despite significant declines in health and expected SCRQoL, current SCRQoL declined only slightly, because the homes had compensated (or at least partially compensated) for this by adapting and increasing the care and support provided. This has implications for the measurement of SCRQoL and how we judge social care interventions. Most people using social care services have conditions that involve a permanent (and often declining) loss of functional ability. In these situations, the primary aim of social care interventions is to compensate a person for their lost functional ability, rather than try to restore it [1]. Whilst we expect good services to meet residents’ needs despite these challenges, in care homes the decline is often rapid [46], leading to frequent fluctuations in health and social care-related quality of life. During this study, researchers often rescheduled interviews and observations with individual residents because of poor health, and noted that residents have ‘good and bad days’. If observing on a bad day, ratings might indicate a lower-than-average outcome for that individual; if observing on a good day, the opposite might be true. Methodologically, this is a limitation of measures relying on ‘snapshots’ of information about residents’ lives.
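The paired T1-versus-T2 comparisons in this study were run in SPSS [52] and reported as Wilcoxon signed-rank z-statistics. Purely as an illustrative sketch (not the authors' code), and using invented Barthel-style scores, the normal-approximation z can be computed as follows:

```python
import math

def wilcoxon_z(t1, t2):
    """z-statistic of the Wilcoxon signed-rank test for paired scores,
    using the normal approximation without tie correction; zero
    differences are dropped, as in the standard procedure."""
    diffs = [a - b for a, b in zip(t1, t2) if a != b]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks across ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # mean of the tied 1-based ranks
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)  # positive-rank sum
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean) / sd

# Hypothetical scores for five residents at T1 and T2: every resident
# declines, so the positive-rank sum is maximal and z is large.
print(round(wilcoxon_z([18, 16, 20, 17, 15], [15, 14, 17, 15, 12]), 2))  # prints 2.02
```

With samples this small the exact signed-rank distribution would normally be preferred over the normal approximation; the sketch only illustrates where a reported z value comes from.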

One alternative approach would be to integrate outcome measurement into care planning, so that variation in health and social care needs can be accounted for and outcomes improved in a targeted, person-centred way. Arguably, had staff collected the data and made their own ratings of residents’ lives using ASCOT, it might have had more impact on care practice than a feedback intervention, and it would also have been sustainable beyond the life of the study, providing potential for ongoing benefits for residents and staff. Although subject to several drawbacks from a research perspective, not least the loss of an external, independent evaluation of residents’ lives on which to base the ratings and feedback, the model is attractive to care providers. Since conducting this research, one national health and social care provider in England has integrated ASCOT into its care planning processes with a view to improving the quality of life of service users [28], and another provider in New South Wales is piloting the same approach. Future research will aim to examine the reliability of staff ratings of residents’ SCRQoL compared with judgements made by external researchers/fieldworkers. In this study, inter-rater reliability was good at baseline and excellent at follow-up, emphasising the value of practice and experience in making these ratings, particularly for people who lack the capacity to consent. Furthermore, comparing SCRQoL in homes adopting this outcomes-focused approach to care planning with matched homes following usual care planning will help us evaluate the impact of the approach on residents’ quality of life.
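This excerpt does not specify which intraclass correlation coefficient variant underlies the reliability figures. As a hedged sketch, the following computes ICC(A,1) in the McGraw and Wong [48] taxonomy (two-way random effects, absolute agreement, single measures), a common choice when two fieldworkers independently rate the same residents; the ratings below are invented for illustration:

```python
def icc_a1(ratings):
    """ICC(A,1): two-way random effects, absolute agreement, single
    measures (McGraw & Wong, 1996). `ratings` is a list of rows, one
    per subject, each row holding one score per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    # Mean squares from a two-way ANOVA without replication.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # raters
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k / n * (msc - mse))

# Hypothetical ratings: five subjects, two raters who agree closely.
print(round(icc_a1([[1, 2], [2, 2], [3, 4], [4, 4], [5, 6]]), 2))  # prints 0.89
```

On Cicchetti’s guidelines [49], values of 0.60 to 0.74 count as good and 0.75 or above as excellent, which is consistent with the study’s reported coefficients of 0.72 and 0.76.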


Conclusions

The older participants in our study declined significantly in terms of their health and social care needs during the three-month period between giving the feedback and collecting the follow-up data. Despite this, their overall SCRQoL remained largely the same: homes maintained residents’ quality of life but did not improve it. As this was a small feasibility study without a control group, we cannot draw any conclusions about whether the feedback had a role to play in this. A further limitation is that our results are based on a very small sample, reflecting the difficulties we had recruiting and retaining homes in the research. However, the ASCOT feedback was well received and considered valid by staff, and managers reported changes in practice because of it. Furthermore, since conducting this research, one national and one international provider have begun integrating ASCOT into their care planning and review activities, indicating growing support for this approach within the sector and its relevance to an international audience. We have identified a variety of options for tackling the problems raised and aim to explore these as part of future research with these providers, evaluating the use of ASCOT in care planning.



Abbreviations

ASCOT: Adult Social Care Outcomes Toolkit

SCRQoL: Social care-related quality of life


References

  1. Netten A. Overview of outcome measurement for adults using social care services and support. London: NIHR School for Social Care Research; 2011.


  2. Department of Health. Transparency in outcomes: a framework for quality in adult social care. London: Department of Health; 2011.


  3. Care Act 2014, Chapter 23, London: The Stationery Office. Available at:

  4. Department of Health. Care and Support Statutory Guidance. Issued under the Care Act 2014. London: Department of Health; 2014.


  5. Care Quality Commission. A new start. Responses to our consultation on changes to the way CQC regulates, inspects and monitors care services. October 2013. London: Care Quality Commission; 2013.


  6. Netten A, Burge P, Malley J, Potoglou D, Towers A, Brazier J, Flynn T, Forder J, Wall B. Outcomes of social care for adults: developing a preference-weighted measure. Health Technol Assess. 2012;16(00):1–166.


  7. Malley J, Towers A, Netten A, Brazier J, Forder J, Flynn T. An assessment of the construct validity of the ASCOT measure of social care-related quality of life with older people. Health Qual Life Outcomes. 2012;10(1):21.


  8. van Leeuwen KM, Jansen APD, Muntinga ME, Bosmans JE, Westerman MJ, van Tulder MW, van der Horst HE. Exploration of the content validity and feasibility of the EQ-5D-3L, ICECAP-O and ASCOT in older adults. BMC Health Serv Res. 2015;15(1):1–10.


  9. Kaambwa B, Gill L, McCaffrey N, Lancsar E, Cameron ID, Crotty M, Gray L, Ratcliffe J. An empirical comparison of the OPQoL-Brief, EQ-5D-3L and ASCOT in a community dwelling population of older people. Health Qual Life Outcomes. 2015;13(1):1–17.


  10. Milte CM, Walker R, Luszcz MA, Lancsar E, Kaambwa B, Ratcliffe J. How important is health status in defining quality of life for older people? An exploratory study of the views of older South Australians. Appl Health Econ Health Policy. 2014;12(1):73–84.


  11. Steffansson M, Pulliainen M, Kettunen A, Linnosmaa I, Halonen M. The Association between Freedom of Choice and Effectiveness of Home Care Services. Int J Integr Care. 2016;16(1):5.


  12. van Leeuwen KM, Bosmans JE, Jansen AP, Rand SE, Towers A-M, Smith N, Razik K, Trukeschitz B, van Tulder MW, van der Horst HE, et al. Dutch translation and cross-cultural validation of the Adult Social Care Outcomes Toolkit (ASCOT). Health Qual Life Outcomes. 2015;13(1):1–13.


  13. Trukeschitz B. Worauf es letztlich ankommt. Ergebnisqualität in der Langzeitpflege und -betreuung. Kurswechsel. 2011;4:22–35.


  14. Rostgaard T, Højmose M, Clement S, Rasmussen A. Omsorgsbetinget livskvalitet og hjemmehjælp - en ASCOT-undersøgelse blandt hjemmehjælpsmodtagere. Aalborg: Institut for Statskundskab, Aalborg Universitet; 2013.


  15. Cetrano G. Quality assessment of mental health and social care services. Findings from the European REFINEMENT project and an Italian multicentre study. Verona: University of Verona, Department of Public Health and Community Medicine; 2015.


  16. Ageing and Aged Care Quality Indicators [].

  17. Netten A, Beadle-Brown J, Caiels J, Forder J, Malley J, Smith N, Towers A-M, Trukeschitz B, Welch E, Windle K. ASCOT Adult Social Care Outcomes Toolkit: Main Guidance v2. 1 PSSRU Discussion Paper 2716/3. Kent: University of Kent; 2011.


  18. Mueller CE, Gaus H, Rech J. The counterfactual self-estimation of program participants: impact assessment without control groups or pretests. Am J Eval. 2014;35(1):8–25.


  19. Netten A, Trukeschitz B, Beadle-Brown J, Forder J, Towers A, Welch E. Quality of life outcomes for residents and quality ratings of care homes: is there a relationship? Age Ageing. 2012;41(4):512–7.


  20. McKee K, Houston D, Barnes S. Methods for assessing quality of life and well-being in frail older people. Psychol Health. 2002;17(6):737–51.


  21. Hellstrom I, Nolan M, Nordenfelt L, Lundh U. Ethical and methodological issues in interviewing persons with dementia. Nurs Ethics. 2007;14(5):608–19.


  22. Hubbard G, Downs MG, Tester S. Including older people with dementia in research: challenges and strategies. Aging Ment Health. 2003;7(5):351.


  23. Netten A, Burge P, Malley J, Potoglou D, Brazier J, Flynn T, Forder J. Outcomes of Social Care for Adults (OSCA) Interim Findings. PSSRU Discussion Paper 2648/2. Kent: Canterbury Personal Social Services Research Unit, University of Kent; 2009.


  24. Netten A, Beadle-Brown J, Trukeschitz B, Towers A, Welch E, Forder J, Smith J, Alden E. Measuring outcomes for public service users. In: Volume Discussion paper 2696/2. Kent: PSSRU; 2010.


  25. Beadle-Brown J, Towers A-M, Netten A, Smith N, Trukeschitz B, Welch E. ASCOT Adult Social Care Outcomes Toolkit: Additional Care Home Guidance v2. 1 PSSRU Discussion Paper 2716/2_1. Kent: University of Kent; 2011.


  26. Beadle-Brown J, Towers A, Netten A, Smith N, Trukeschitz B, Welch E. Adult Social Care Outcomes Toolkit: Additional Care Home Guidance v2.1. Canterbury: Personal Social Services Research Unit, University of Kent; 2011.


  27. Evaluating ‘the life I want’ strategy using ASCOT [].

  28. Community Integrated Care's use of ASCOT quality of life outcomes in our care and support plan documents [].

  29. Smith N. Using ASCOT to improve care practice and monitor quality in residential care. In: ASCOT users workshop 2014. London: PSSRU; 2014.


  30. Szczepura A, Nelson S, Wild D. Improving care in residential care homes: a literature review. 2008.


  31. Lechner C. Internal versus External Evaluation. In: Innes A, McCabe L, editors. Evaluation in Dementia Care. London: Jessica Kingsley Publishers; 2007. p. 46–65.


  32. Lawrence V, Banerjee S. Improving care in care homes: a qualitative evaluation of the Croydon care home support team. Aging Ment Health. 2010;14(4):416–24.


  33. Moriarty J, Kam M, Coomber C, Rutter D, Turner M. Communication training for care home workers: outcomes for older people, staff, families and friends. In: Research Briefing 34. London: Social Care Institute of Excellence; 2010.


  34. Nolan MR, Keady J. Training in long-term care: the road to better quality. Rev Clin Gerontol. 1996;6:333–42.


  35. Allen S, O'Connor M, Chapman Y, Francis K. The implications of policy on delivering a palliative approach in residential aged care: Rhetoric or reality? Contemp Nurse. 2008;29(2):174–83.


  36. Moniz‐Cook E, Agar S, Silver M, Woods R, Wang M, Elston C, Win T. Can staff training reduce behavioural problems in residential care for the elderly mentally ill? Int J Geriatr Psychiatry. 1998;13(3):149–58.


  37. Reed J, Payton VR. Understanding the dynamics of life in care homes for older people: implications for de-institutionalizing practice. Health Soc Care Community. 1997;5(4):261–8.


  38. Kaasalainen S. Staff development and long-term care of patients with dementia. J Gerontol Nurs. 2002;28(7):39–46. quiz 54-35.


  39. Help the Aged. My home life: quality of life in care homes. UK: Help the Aged; 2006.


  40. Smyer M, Walls C, Fisher C, Lerner R. Design and evaluation of interventions in nursing homes. In: Applied Developmental Psychology. 1994. p. 475–501.


  41. Mental Capacity Act 2005. London: The Stationery Office; 2005.


  42. Chatfield MD, Brayne CE, Matthews FE. A systematic literature review of attrition between waves in longitudinal studies in the elderly shows a consistent pattern of dropout between differing studies. J Clin Epidemiol. 2005;58(1):13–9.


  43. Care Homes [].

  44. Laing & Buisson. Care of Elderly People UK Market Survey 2011/12. 24th ed. London: Laing & Buisson; 2012.


  45. Ramcharan P, Cutcliffe JR. Judging the ethics of qualitative research: considering the ‘ethics as process’ model. Health Soc Care Community. 2001;9(6):358–66.


  46. Bebbington A, Darton R, Netten A. Care Homes for Older people: Volume 2 admissions, needs and outcomes. The 1995/96 National Longitudinal Survey of Publicly-Funded Admissions. Kent: Personal Social Services Research Unit, University of Kent; 2001.


  47. Netten A, Forder J, Malley J, Smith N, Towers A-M. ASCOT Adult Social Care Outcomes Toolkit: additional guidance scoring ASCOT v2.1. In: PSSRU Discussion paper 2818. Canterbury: PSSRU, The University of Kent; 2011.


  48. McGraw KO, Wong S. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1(1):30.


  49. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol Assess. 1994;6(4):284.


  50. Stolarova M, Wolf C, Rinker T, Brielmann A. How to assess and compare inter-rater reliability, agreement and correlation of ratings: an exemplary analysis of mother-father and parent-teacher expressive vocabulary rating pairs. Frontiers Psychol. 2014;5:509.


  51. Hallgren KA. Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials Quan Methods Psychol. 2012;8(1):23.


  52. IBM Corp. IBM SPSS Statistics for Windows, Version 20.0. Armonk: IBM Corp; 2011.


  53. Netten A, Bebbington A, Darton R, Forder J. Care Homes for Older People: Vol 1 Facilities, Residents and Costs. Canterbury: Personal Social Services Research Unit (PSSRU) at the University of Kent; 2001.


  54. Alzheimer's Society. Low expectations. Attitudes on choice, care and community for people with dementia in care homes. London: Alzheimer's Society; 2013.


  55. Bajekal M. Characteristics of care homes and their residents. London: The Stationery Office; 2002.


  56. 2011 Census: Aggregate data (England and Wales) [computer file] [].

  57. Mahoney FI, Barthel DW. Functional evaluation: the Barthel Index. Md State Med J. 1965;14:61–5.


  58. Bäumker T, Netten A, Darton R, Callaghan L. Evaluating extra care housing for older people in england: a comparative cost and outcome analysis with residential care. J Service Sci Manag. 2011;4(4):523–39.


  59. Darton R, Bäumker T, Callaghan L, Holder J, Netten A, Towers A-M. The characteristics of residents in extra care housing and care homes in England. Health Soc Care Community. 2012;20(1):87–96.


  60. Morris J, Fries B, Mehr D, Hawes C, Phillips C, Mor V, Lipsitz L. The MDS Cognitive Performance Scale. J Gerontol. 1994;49(4):174–82.


  61. Orrell M, Hancock G, Hoe J, Woods B, Livingston G, Challis D. A cluster randomised controlled trial to reduce the unmet needs of people with dementia living in residential care. Int J Geriatr Psychiatry. 2007;22(11):1127–34.


  62. Ames D. Depression among elderly residents of local-authority residential homes. Its nature and the efficacy of intervention. Br J Psychiatry. 1990;156(5):667–75.


  63. Kitwood T. Dementia reconsidered: The person comes first. Buckingham: Open University Press; 1997.


  64. Bradford Dementia Group. Dementia Care Mapping (DCM). 8th ed. Bradford: University of Bradford, Bradford Dementia Group; 2005.


  65. Brooker D, Foster N, Banner A, Payne M, Jackson L. The efficacy of Dementia Care Mapping as an audit tool: Report of a 3-year British NHS evaluation. 1998.


  66. Martin G, Younger D. Person‐centred care for people with dementia: a quality audit approach. J Psychiatr Ment Health Nurs. 2001;8(5):443–8.




Acknowledgements

Thank you to the members of the project advisory group and to those involved in the consultation phase for their advice and guidance throughout the project; thank you also to the residents, relatives, staff and management who gave up their time to take part in this research.


Funding

The research on which this paper is based was funded by the NIHR School for Social Care Research. The views expressed in this paper are those of the authors and not necessarily those of the NIHR School for Social Care Research or the Department of Health/NIHR.

Availability of data and materials

The datasets generated and/or analysed during the current study are not available in any repositories and cannot be shared by the authors because ethical approval for data sharing was not given by the Social Care Research Ethics Committee and study participants did not consent to data sharing. The Adult Social Care Outcomes Toolkit is available to view and use, subject to registration and licence conditions. The remaining study materials are available from the corresponding author on reasonable request, with some scales (EQ-5D) being restricted, as they were used under licence in the current research.

Authors’ contributions

AT – conceived of the study, was the principal investigator and grant holder, assisted with fieldwork, conducted the analysis and drafted the paper. NS – was a co-applicant on the grant, assisted with study design, collected the fieldwork data, contributed to the planning of this paper and commented on drafts. SR – recruited homes, conducted the fieldwork, contributed to the planning of this paper and commented on drafts. EW – recruited homes, assisted with fieldwork and commented on drafts of the paper. AN – was a co-applicant on the grant, assisted with study design, provided advice and guidance throughout and commented on drafts of the paper. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Consent to publish data collected as part of this study was given by all participants and consultees.

Ethics approval and consent to participate

This study was funded by the NIHR School for Social Care Research and given a favourable ethical review by the Social Care Research Ethics Committee (12/IEC08/0051), which confirmed that it complied with the requirements of the Mental Capacity Act [41]. Participants gave their informed consent to participate in the research or were recruited through personal consultees (family members) acting in their best interests.

Author information



Corresponding author

Correspondence to Ann-Marie Towers.

Additional file

Additional file 1: Table S7.

Summary of staff responses and any reported changes in practice. (DOCX 15 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Towers, AM., Smith, N., Palmer, S. et al. The acceptability and feasibility of using the Adult Social Care Outcomes Toolkit (ASCOT) to inform practice in care homes. BMC Health Serv Res 16, 523 (2016).

