Forty Diabetes Centres (17 metropolitan and 23 regional) were invited to participate in the study. A total of 14 interviews (with 17 participants) were conducted between November 26, 2019 and January 14, 2020, after which no new themes emerged and saturation was considered to have been reached. Interviews lasted between 30 and 75 min (mean duration: 50 min, 54 s). Participants represented a diverse range of healthcare professionals from Diabetes Centres involved in receiving and using ANDA feedback. Six participating centres were metropolitan and eight were regional.
A summary of the participant demographics and centre details is shown in Table 1.
Aim 1. To elicit user perceptions regarding the utility and acceptability of the feedback currently provided as part of ANDA
Several belief statements regarding the utility and acceptability of current ANDA feedback were identified. These beliefs mapped to the CFIR domain of ‘Intervention characteristics’ and the associated constructs of ‘Design Quality and Packaging’, ‘Complexity’, ‘Evidence Strength and Quality’, ‘Cost’, and ‘Relative Advantage’. For clarity and consistency, CFIR domains and constructs/sub-constructs are presented in bold; belief statements are presented within inverted commas (‘ ’); and direct quotes are presented in italics with quotation marks (“ ”). Tables with full data are shown in Additional File 2.
Design Quality and Packaging was encapsulated in the belief statement ‘The current data presentation is difficult to understand’, which was reported by more participants from regional sites than metropolitan sites: “You almost have to be a scientist to understand half those graphs. Not just anyone could look at those graphs and decipher, very quickly, how they relate to this service. You want all the data but just presented in a meaningful way.” (Participant 2, regional).
Of the participants who did not find the presentation difficult to understand, some held the belief that ‘The current data presentation is useful but could be summarised for easier orientation’: “I don’t think it’s too hard to read with the graphs and you can see at a glance how your institution is ranked. So, I think that’s useful. I don’t know how you do it differently. I suppose…some infographic in that you are in the top quartile for these indicators, bottom quartile for these indicators in some summary. That might make it easier to just work out which pages you want to look at more carefully.” (Participant 9, metropolitan).
Complexity was raised by most participants from both regional and metropolitan sites, who believed that ‘The current report is too long to be useful’: “If there was sort of a 10-page document, a few graphs, and something that was more achievable to read, I think we’d read the whole lot. [Our last report was] 150 pages, I was sort of skimming and just sort of floating through it. I haven’t even – to be honest, I haven’t read it.” (Participant 4, regional).
Evidence Strength and Quality was reflected in the belief statement ‘The current benchmarking isn’t comparing like with like’, expressed by a similar proportion of regional and metropolitan participants: “The populations in my observation are actually quite different and heterogeneous so benchmarking against others is probably one of the limitations of the ANDA reports because you’re not necessarily comparing like with like. But I certainly find it useful to benchmark with ourselves from year to year.” (Participant 7, metropolitan).
Cost was reflected in the participants’ belief that ‘The data collection period is burdensome’. This was more common among metropolitan participants but was also raised by participants from regional centres: “… it is quite time consuming, if you want to do it properly that is, and it costs money. We’re lucky that we’ve got some funding that we can pay someone to do that… The last two years we’ve funded her to do, I think, about 15 days of work just completing audits” (Participant 2, regional).
Relative Advantage was reflected in the belief statement ‘We value the ANDA report as a measure of our clinical practice’. This belief was more prevalent among participants from metropolitan sites: “In our current state the ANDA survey is our only way of collecting clinical data…So to us, ANDA is critical at this stage, to support a process of collecting some data around what we’re doing in our clinics” (Participant 14, metropolitan).
Aim 2. To elicit user perceptions regarding preferred options for redesign of feedback format
Perceptions were sought regarding options for redesigned feedback, including shorter reports, different data visualisation techniques and infographics, a clinical dashboard with a traffic-light style interface, a report card style summary, and a pre-populated PowerPoint summary of the feedback. Belief statements about these techniques mapped to the CFIR domain of Intervention characteristics and the construct of Design Quality and Packaging. A full list of belief statements with illustrative quotes is shown in Additional File 2.
Reflecting the belief identified under Aim 1 regarding the need for a summary, most participants believed ‘A shorter report would be more engaging’. Some participants, particularly those in metropolitan centres, reported that maintaining access to the full report would also be helpful, but all agreed that a shorter report would likely generate more team engagement. Having the full data available as an electronic supplement to summary data was an attractive option: “I think that’s sensible and I think it makes people—I mean it’ll make people read the document more if it’s 20 pages but then when you need the additional data to make a statement, you can just print out the other 60 pages or whatever as well. I think it is useful to separate the two.” (Participant 5, metropolitan).
A range of views was expressed regarding data visualisation techniques, but there were clear preferences. The belief statement ‘Infographics are a helpful data visualisation technique’ was shared by almost all participants, particularly for clearly expressing summary data and for disseminating audit feedback to a wider audience: “Boom. You can see it, in summary. Bang.” (Participant 4, regional).
The majority of participants were also positive about the use of a clinical dashboard (‘A dashboard is a helpful data visualisation technique’). While participants had varying views on the utility of traffic-light dashboards, most felt that a dashboard interface would be useful for prioritising discrete areas of clinical care or identifying trends over time: “It’s very visual and I think if you were—if we were trying to explain where the data was to people that need to get that message across clearly, whether we’re pushing how well we’re doing and saying where all the green bits are or if we’re pushing an agenda to say we need to do better or we need more help and pushing the red parts of the dashboard, that would be helpful. I mean it is really, really easy to understand so I think that would be very helpful.” (Participant 5, metropolitan).
Commonly, participants expressed doubt about the utility of a report card style summary (belief statement ‘A report card may not be helpful in illustrating actionable clinical improvements’), as there were concerns about the appropriateness of the messaging (e.g. ‘needs improvement’), staff engagement and whether this format would provide actionable items: “It, again, just seems like, “Oh yeah, they’re just telling us where we need to be headed.” I don’t think the format of that – because they are just statements. It might be okay as the front page of a report, as a summary in something, but in terms of individual staff, in a service, taking notice of it, I don’t think they’d look at it” (Participant 12, regional).
All participants shared the belief statement ‘A pre-populated PowerPoint deck would be helpful for disseminating the data’. The succinct nature of this option was particularly favoured: “I think a PowerPoint presentation would be great because we could do that as a team and like I said we don’t want a detailed report that details every item of the audit, we just want the key points basically.” (Participant 8, metropolitan).
Aim 3. To elicit user perceptions regarding the barriers to implementing feedback in its current format
Participants identified a range of barriers to implementing the feedback in its current form, which mapped to the CFIR domain of Inner setting, including the constructs of Implementation climate and Readiness for implementation. Full details of the barriers and their relevant constructs and sub-constructs are included in Additional File 2.
Compatibility, a sub-construct of Implementation climate, was relevant to participants’ commonly voiced concerns that the demands of the clinical environment impede clinical staff from engaging with the audit and feedback. We summarise these many pressures in the belief statement ‘There’s a lack of engagement with the audit because of clinical pressure’, as illustrated in the following quote: “I think people, possibly, aren’t as invested in it as we would like them to be. Because, the way the audits occur we’re funding a nurse, who doesn’t normally work with those clients [to collect] the data, so they’re [clinical staff] not really invested right from the start… they know the audits are happening, they know it’s their clients that are being audited, but they’re not really involved in an active way, from the beginning. So, to them, I think, they just possibly see it as, “Oh, another thing I’ve got to add to my list of jobs to do.”” (Participant 2, regional).
Another sub-construct of Implementation climate is relative priority. The majority of participants believed that ‘Reviewing ANDA feedback and utilising it for developing QI activities is a low priority’. Although this was more commonly expressed in centres with smaller teams, similar ideas were generated by metropolitan and regional sites: “Well we would say that we technically probably haven’t sat down as a team and gone through it as such… One, because a time factor, and two, because we’ve never worked out how to utilise the document to – I don’t know – use to our advantage so to speak.” (Participant 4, regional).
Mapping to the construct of Readiness for implementation, and more specifically the sub-construct of Access to knowledge and information, some participants expressed the belief statement ‘Not everyone knows how to use the feedback to inform QI activities’: “I think the report in itself, we know what’s going wrong. It’s actually having the means to address it and time is often a big issue, but sometimes it’s just the know-how” (Participant 10, regional).
Mapped to the sub-construct of Available resources, the majority of participants (metropolitan and regional) believed ‘We have limited resources for implementation’: “The biggest one is time to be honest and I guess along with that is the resources. In any public hospital the focus is on service provision, the delivery of clinical care. Although QI obviously has a flow on effect for that if you’re out on the ward all the time or in the clinic all the time or don’t have the admin [administration] support or the IT [information technology] support that is an obstacle to QI.” (Participant 7, metropolitan).
Another barrier to implementation, aligned with the sub-construct of Leadership engagement, was the belief that ‘Lack of leadership engagement is a barrier to implementation’: “…it could be useful if we had the right platform, and I mean like a committee or something that was dedicated to actually analysing the data and influencing some quality improvement activity. I have to take time to wade through the information, currently, and then present it in a form that management are, actually, going to take notice of.” (Participant 2, regional).
Aim 4. To elicit user perceptions regarding the enablers to more effective use of feedback
The participants identified a range of enablers to more effective use of audit feedback. These enablers related to several CFIR domains, including Inner setting (construct: Readiness for implementation; sub-construct: Leadership engagement), Implementation process (construct: Engaging; sub-constructs: Champions and External change agents) and Outer setting (construct: External Policy & Incentives), as shown in Additional File 2.
Some participants identified that they were able to use the ANDA data to leverage leadership engagement and facilitate more effective use of audit feedback, helping to expand services to meet the needs identified in the audit. These perspectives contribute to the belief statement ‘We can use the data in ANDA feedback to engage leadership’, as demonstrated in the following quote: “I think it’s actually because it is benchmarked that, really, time managers in our organisation take note. I would hate to see that removed from the audit and the report, because that’s really speaking executive management language, and it’s incredibly powerful, so just from a, I guess, purely a political point of view, I would just hate to not have that benchmarking. Because we really use that as leverage if…we have the lowest uptake of something, that’s incredibly important for us.” (Participant 5, metropolitan).
Some participants believed that success stories from clinical champions were, or could be, enablers in the implementation process (‘Success stories and the experiences of other centres are enablers’). Participants from a range of regional and metropolitan sites reported that hearing such stories from clinical champions who have successfully made change in their own centres inspires the building of QI interventions at a local level: “But also having perhaps one of the organisations that are doing it well, or having a call out to those that are meeting the needs and how are they addressing it, how are they achieving that result?… there are little things like just being aware of where the services and how to access those services, especially in an ever-changing population, there are things that I think we can learn from other places that are doing it well.” (Participant 10, regional).
A similar proportion held the belief that ‘Mentoring from other centres would be an effective enabler of implementation’. This was particularly true for regional centres: “I’m sure there’s not only one centre that has the same issues. It would be good to work together and if there’s not a solution, work together and try and address it in the same way so that we can see those outcomes are working or – I hate this reinventing the wheel all the time, that round Australia five centres might have the same issue but we’re all trying to fix it with our own little committees and things rather than if someone’s got the answer, why not share it?” (Participant 10, regional).
This belief extended to the value of peer-to-peer support (‘Peer to peer support is an enabler’). Participants talked about their desire for peer support and its value: “I went along and networking with other services and seeing all of these amazing QIs and amazing things that other centres were doing with less or more resources or less or more staff and also sharing some of the challenges that we all face, that was probably the first motivating factor.” (Participant 7, metropolitan).
Several participants felt that ‘External policy and incentives can be effective enablers’. Examples such as NADC accreditation or Professional Development programs run by professional societies were seen as effective in engaging staff in the process of audit and feedback: “When we last had our…accreditation…the fact that we’d participated in ANDA was an incredibly important element of our quality improvement. We had a lot of evidence to show that we’d done good quality improvement.” (Participant 11, regional).
Aim 5. To elicit user perceptions regarding co-intervention that is likely to support implementation of feedback and development of QI activities
A number of options for additional interventions to support feedback and facilitate quality improvement activities were explored with participants, with a focus on their feasibility and acceptability. These ideas did not contribute to belief statements but explored the potential not only of improving the format of the feedback, but also of supplementing it with a supporting co-intervention. Participants from both regional and metropolitan sites had suggestions for co-interventions.
Coaching: There were concerns regarding the logistics of providing coaching: “… we are very time poor, but it could be good if someone could go this is where you’re doing well, this is what you need to improve on, I think that would be useful, but it would need to be straight to the point.” (Participant 8, metropolitan). Many participants suggested that, with improved feedback, coaching should not be necessary: “I would hope that it was presented in a way that you could, easily – like you wouldn’t need coaching if it was clear how to interpret it.” (Participant 2, regional).
Change champions: Webinars or online delivery of stories from change champions working in exemplary sites were considered: “So, if something’s been successful, share it. I know people do that at conferences and things, but not everyone can get to those, and it’s not specifically about this data. So, if there was some sort of showcase facility, on the website, that specifically related to quality improvement activities that can come from such a report, that would be useful.” (Participant 12, regional).
Educational activity: Instructional webinars to help sites with data collection or with understanding the feedback were suggested: “I think possibly if there [were] more webinars around it [ANDA] where people could access around—again maybe a webinar for first timers using it, a webinar around some of the tricks of getting the best data and what have you, understanding the reports. I think if there [were] webinars that were available to look at any time for members participating, I think that would be incredibly helpful.” (Participant 5, metropolitan).
Community of practice: Reflecting the diversity of sites, participants wanted examples of QI interventions from sites similar to theirs. Commonly, this was an issue for participants from regional sites, but it was also relevant for metropolitan sites: “I’d like to get some ideas about perhaps how other institutions perhaps similar to mine with similar funding issues, similar problems, have tried to resolve the issues that they had… it would be probably something similar to a website, I suppose, where people can post something, this was the problem, this is how we solved it or tried to solve it and we actually found usefulness from that. So, I think, yeah, it could be useful for us to get some ideas.” (Participant 9, metropolitan).