Participants across all sites reported that a designated NCA lead, typically a consultant, received a copy of the publicly available annual report produced by NCA suppliers. The NCA lead then disseminated a summary of the report to Trust groups with a remit for monitoring care quality. However, the report and summary reportedly received little response outside the clinical service. For example, after the publication of a PICANet report, a Paediatrician commented:
‘I'm not invited to the Board meeting to discuss [the PICANet annual report], [ … ] there's never any discussion with me directly about it and what it means.’ (Paediatrician, Site 1).
Similarly, a Cardiologist (Site 4) noted that in response to their recommendations for change: ‘absolutely nothing, nothing changes. Why collect the data?’ By way of explanation, the Cardiologist recalled that when MINAP was initiated ‘Cardiology was very high on the political agenda […] and so you could get people to enact the changes and the recommendations’. However, they believed that the priority accorded to MINAP diminished once the original changes it was designed to promote had been adopted widely across Trusts. This shift in priorities appeared to constrain access to resources when quality improvement opportunities were identified, as another Cardiologist commented:
‘If you present them [Trust Board] with a problem I think they just think let’s not look at that because it might cost us some money.’ (Cardiologist, Site 4).
In addition, participants explained that data presented in NCA feedback did not always support recommendations for change, as two Information Managers (Site 1) discussed:
Senior Information Manager: ‘There are some national audits where we’ll get a report that’s published within 2017, ‘18, that they call their 2016 audit, that is reporting you the data from 2015/16 patients and if we’re getting that this year, it’s already two years out of date, you know.’
Information Manager: ‘We’ve already changed our practice by then.’
In other words, data in the public reports could be up to two years old and was not, therefore, perceived as a reliable basis for practice change. Across services, the timeliness of data was reported as a significant attribute for its use in quality improvement. In theory, more timely data could be accessed via supplier websites; for example, PICANet and MINAP have quarterly data upload targets. Even so, some sites were not resourced to meet those targets, as an Information Analyst (Site 3) noted: ‘We’re a long way off being resourced to do that [quarterly uploads] as it stands.’ Even better-resourced services experienced challenges in accessing timely data, as an audit clerk in one such service commented:
‘You can actually wait quite a long time for data to be sent to you [by the NCA supplier], by which time you’re like, well, actually I needed it yesterday.’ (PICANet audit clerk, Site 1).
Equally, clinicians reported constraints on their time to log on to supplier websites and review the standardised data reports available: ‘the reality of our lives in the NHS, is that we don’t have time to do that’ (Urological surgeon, Site 4). Therefore, in some services the primary mode of NCA feedback could be limited to the public report, in which data quality (accuracy and completeness), alongside timeliness, could also constrain its perceived usefulness for stimulating quality improvement. For example, a Cardiologist (Site 5) described how, in a previous year, inappropriate coding of data had resulted in too many cases of myocardial infarction being entered into the database, which:
‘had a knock-on effect in that some of the other variables, like the number of times the patient was on the heart ward or et cetera were low.’
Inaccurate coding compromised the validity of the measures reported and constrained the perceived usefulness of the feedback as a basis for practice change. Data quality was often linked with the data collector’s skills and/or experience, which varied between audits and sites. For example, a surgeon participating in the BAUS audits explained that they trusted the accuracy of their data because: ‘I know that I’ve written every bit of that data myself’ (Urological surgeon, Site 4). In comparison to BAUS, data collection for NCAs that reported at service level, rather than at the level of specific procedures, was supported by a mix of clinical and non-clinical staff. In these contexts, the importance of clinical support for data validation was highlighted. However, even where services were satisfied with the accuracy of their own data, there was apprehension about the quality of other organisations’ data against which their performance was compared.
In summary, participants across Trusts reported interacting with feedback produced by NCA suppliers. However, the resources allocated to support audit participation (which affected data quality and timeliness), and access to resources to enact change in response to this feedback, were reported to constrain its use as a tool for stimulating quality improvement. Despite these constraints, participants provided examples of how and why (the mechanisms) different groups within Trusts interacted with and acted on feedback produced by NCA suppliers, and of the circumstances (the contexts) that influenced these interactions; these are discussed below.
Protecting Trust reputation
Trust Boards and their sub-committees monitor the performance of services across their organisations. As NCAs report on specific clinical specialities and patient groups, members of these governing groups explained that their interactions with NCA feedback were typically limited to particular circumstances, as described by this Trust Board member:
‘The division of medicine would oversee quality of the stroke service, so there would be the SSNAP audits, the stroke performance, stroke mortality etc., etc. through there, and feed up areas of concern through to the appropriate committee then through to quality committee and if necessary to Trust Board.’ (Trust Board Member, Site 2)
The participant explained that clinical specialities are responsible for monitoring and improving the quality of their service, but escalate areas of concern to Trust management groups. In these circumstances, participants reported that Trust Boards were more likely to respond to NCA feedback when the audited clinical service was due to appear as an outlier in the public report, as described by this Trust Board member:
‘If we’re going to be an exception for any measure, the national audit team will get in touch with us, and say, you’re going to be an outlier for this particular metric. And, you usually get a short period of time to check your data, and also then, if the data’s correct, for you to write a comment, which they don’t necessarily always commit to publish, but they say they might put a comment on their website for example.’ (Trust Board Member, Site 1)
Here, rather than the feedback itself, notification from the audit supplier prior to public reporting prompted a response from the Trust Board. A Paediatrician (Site 1) commented that Trust Boards were particularly responsive to performance measures considered publicly sensitive in nature: ‘they [the Trust Board] want to know that we’re not killing more patients because that’s the first thing that the press pick up on’. Put simply, Trust Boards appeared keen to maintain public confidence in their hospitals’ reputations for providing safe and effective care, and acted on NCA feedback where this was brought into question. Where services were identified as outliers in this way, Trust Boards or their sub-committees continued to monitor them until satisfied that appropriate standards had been met, as described by a Quality Manager (Site 1):
‘They [the Trust Board] want to see the on-going reporting into that group until the point where they say, “okay, we’re assured about this now”.’
In two examples of such incidents provided by clinicians, thorough subsequent investigations by the clinical service established that data quality and/or patient case mix explained the outlier status. The outcome of these interrogations, therefore, was assurance that the service was performing within expected parameters. Furthermore, in response to being flagged as an outlier, a clinician within an affected service reported monitoring the standardised reports available via the supplier website more frequently: ‘We had a lot of deaths all together, which started to flag us up […]. It just flagged us up once and that was it, we were on it like a car bonnet, we were just looking at it all the time.’ (Consultant, Site 1).
Improving performance
In contrast to Trust Boards acting on notification from audit suppliers, a surgeon who participated in the BAUS audits explained how the audit report (produced by the NCA supplier) had stimulated quality improvement. In a previous year, the report had highlighted their service as an outlier, in comparison to peer organisations, in relation to patients’ length of hospital stay. The audit lead discussed the report findings in a clinical governance meeting (as part of standard practice) and, in response, a number of initiatives were introduced to improve service performance. The surgeon commented that the service changes:
‘meant that we could get patients out over the weekend, rather than everyone having to wait until the Monday, […] So, yes, it [the NCA report] does have, certainly in our own department, if you are looking like an outlier, we would try hard to address that’. (Urological surgeon, Site 4)
This surgeon reported that they did not routinely review national comparator data via the NCA supplier website owing to constraints on their time. However, as noted previously, they trusted the accuracy of the data reported, and further commented that they could make judgements about the quality of other organisations’ data based on what was presented. In this context, therefore, the NCA report was trusted as a reliable measure of service performance; national comparison enabled the service to assess where improvement could be made and, where resources allowed, to act to improve performance.
Attracting patient referrals
Competition and patient choice have been used as strategies to deliver improvements in the NHS, e.g. by offering patients a wider choice of provider and introducing funding systems such as Payment by Results [23]. Therefore, alongside identifying areas for improvement, national comparator data (available in NCA supplier feedback) were also useful to clinical services that wanted to attract patient referrals, as described by a Cardiologist (Site 3):
‘There are certain centres that are in the capture area of several tertiary centres [centres that offer specialist services] and so there is obviously a competition, if you can call it that, for the tertiary centres to capture those patients and so looking at our data and how well we’re performing and how quickly we’re able to offer this service is quite important. Because you want to be able to say to these centres, look, if you refer to us your average wait is a day. If you refer to centre Y your average wait’s a week.’
In this context of competition, the primary use of NCA feedback was to attract patient referrals to the service; used in this way, however, feedback may also stimulate practice change if the service is not performing well in comparison to its peers.