The effectiveness of clinical networks in improving quality of care and patient outcomes: a systematic review of quantitative and qualitative studies

Background Reorganisation of healthcare services into networks of clinical experts is an increasingly common strategy to promote the uptake of evidence-based practice and to improve patient care, reflected in significant financial investment in clinical networks. However, it remains unclear whether clinical networks are effective vehicles for quality improvement. The aim of this systematic review was to ascertain the effectiveness of clinical networks and to identify how successful networks improve quality of care and patient outcomes.

Methods A systematic search was undertaken in accordance with the PRISMA approach in Medline, Embase, CINAHL and PubMed for relevant papers published between 1 January 1996 and 30 September 2014. Established protocols were used to examine and assess the evidence from quantitative and qualitative primary studies separately, and then to integrate the findings.

Results A total of 22 eligible studies (9 quantitative; 13 qualitative) were included. Of the quantitative studies, seven focused on improving quality of care and two on improving patient outcomes. The quantitative studies were limited by a lack of rigorous experimental design. The evidence indicates that clinical networks can be effective vehicles for quality improvement in service delivery and patient outcomes across a range of clinical disciplines. However, networks varied in their ability to make meaningful network- or system-wide change in more complex processes, such as those requiring intensive professional education or more comprehensive redesign of care pathways. Findings from the qualitative studies indicated that networks with a positive impact on quality of care and patient outcomes were those with adequate resources, credible leadership and efficient management, coupled with effective communication strategies and collaborative, trusting relationships.
Conclusions There is evidence that clinical networks can improve the delivery of healthcare, though there are few high quality quantitative studies of their effectiveness. Our findings can provide policymakers with some insight into how to successfully plan and implement clinical networks by ensuring strong clinical leadership, an inclusive organisational culture, adequate resourcing and localised decision-making authority. Electronic supplementary material The online version of this article (doi:10.1186/s12913-016-1615-z) contains supplementary material, which is available to authorized users.


Supplementary File 1 - Detailed description of systematic review methodology

Overall Approach
This systematic review was conducted in accordance with the PRISMA approach to ensure the transparent and complete reporting of our sensitive searching, systematic screening and independent quality assessment [1]. The concepts and overarching methods for systematic reviews [2] have been adapted to be applicable for a mixed methods systematic review [3,4].

Eligibility - inclusion and exclusion criteria
Articles were eligible for inclusion in this review if:
i) The primary focus of the paper was on clinical networks in any healthcare setting (e.g. acute, primary, community, vertical integration);
ii) The networks corresponded with the category of network that would be included - that is, a managed or non-managed clinical network;
iii) The paper reported an outcome related to improvement of quality of care or patient outcomes (based on objective measures).

Quality assessment and risk of bias
The risk of bias and quality assessment of the quantitative studies and qualitative studies were assessed separately [2,5].

Quantitative Studies
The quantitative study designs were assessed on the basis of whether they would meet a study design acceptable for a Cochrane Effective Practice and Organisation of Care Group (EPOC) review, namely: a) patient or cluster randomised controlled trials; b) non-randomised cluster controlled trials; c) controlled before and after studies; and d) interrupted time series [6,7]. Given the lack of high quality study designs among the included articles, study designs were coded into the following grades of evidence used previously for a communities of practice review [
• Statistical analysis - were the methods appropriate and was the reporting adequate? (yes/no)
• Was there a declaration of funding or sponsorship? (yes/no)
• Was the study free from other risks of bias? (yes/no)

The studies were grouped into three categories on the basis of the quality of their methods and reporting [11]:
• High quality - the design and conduct of the study address risk of bias; appropriate measurement of outcomes; appropriate statistical and analytical methods; low dropout rates; adequate reporting;
• Moderate quality - does not meet all criteria for a rating of high quality, but no flaw is likely to cause major bias; some missing information;
• Low quality - significant biases, including inappropriate design, conduct, analysis or reporting; large amounts of missing information; discrepancies in reporting.
Two authors (BB, CP) independently assessed each quantitative study against the criteria above. There was initial agreement on 50% of articles (5/10); after discussion, agreement reached 90% (9/10), with final ratings given to 8 articles (see Table 1). A third author (MH) resolved one instance of disagreement and two instances where additional input was sought. The authors agreed that observational articles would not be given a "high" quality rating, even when bias was minimised in the study, due to the inherent flaws of an observational study design. At this stage, one article in question was deemed ineligible and excluded from this review. There was 100% agreement between the three authors on the quality assessment ratings of the nine included articles.

Qualitative Studies
There is a lack of consensus about how to assess risk of bias for qualitative studies [12]. For this review, we considered that assessing the validity of the methods and the quality of the reporting was the most appropriate approach. To do this, we used nine criteria recently developed by Harden and colleagues [4] to assess the quality of qualitative studies, and two criteria on the extent to which the 'participant voice' [15] was elucidated, using a definition suggested by Mays and Pope [13] (see Box 2).
Box 2 - Criteria used to assess the quality of the qualitative studies

1. Were the aims and objectives clearly reported?
2. Was there an adequate description of the context in which the research was carried out?
3. Was there an adequate description of the network and the methods by which the sample was identified and recruited?
4. Was there an adequate description of the methods used to collect data?
5. Was there an adequate description of the methods used to analyse data?

Use of strategies to increase reliability and validity [4]
6. Were there attempts to establish the reliability of the data collection tools (for example, by use of interview topic guides)?
7. Were there attempts to establish the validity of the data collection tools (for example, with pilot interviews)?
8. Were there attempts to establish the reliability of the data analysis methods (for example, by use of independent coders)?
9. Were there attempts to establish the validity of the data analysis methods (for example, by searching for negative cases)?

Quality of the application of the methods [13]
10. The extent to which the studies are grounded in and reflect study participants' perspectives and experiences (as evidenced by the use of supporting quotes)
11. Whether the studies also produce rich or 'thick' descriptions of the investigation and explanatory insights, rather than 'thin' descriptions or flat summaries of the findings.
We grouped these studies into three categories on the basis of quality, in accordance with the approach used by Harden and colleagues [4] and the Cochrane qualitative research methods group [16]. Arbitrary cut-offs were selected as:
• High quality - studies meeting 8 or more criteria
• Medium quality - studies meeting between 5 and 7 criteria
• Low quality - studies meeting fewer than 5 criteria
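The cut-off rule above amounts to a simple tally of the criteria met by each study. As an illustrative sketch only (not part of the review protocol; the function name and labels are our own), the categorisation could be expressed as:

```python
def quality_category(criteria_met: int) -> str:
    """Map the number of quality criteria met (0-11) to a rating,
    using the arbitrary cut-offs described above."""
    if criteria_met >= 8:
        return "High quality"
    if criteria_met >= 5:
        return "Medium quality"
    return "Low quality"
```

For example, a study meeting 7 of the 11 criteria would be rated medium quality under these cut-offs.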

Data extraction and synthesis
Given the lack of high quality evidence from randomised controlled trial data, we adopted a pragmatic approach of examining all available evidence from primary observational studies and assessing study quality within this lower level of the evidence hierarchy. Studies were first categorised as either qualitative or quantitative. Quantitative papers were then further categorised, according to the focus of the study linked to the review objectives, into two categories:
1. Improving quality of care: These papers examined whether clinical networks were successful in improving the delivery of health care.
2. Improving patient outcomes: These papers examined whether reorganisation into clinical networks or interventions implemented by networks were effective in improving patient outcomes.
Qualitative methods were used to thematically analyse and synthesise textual data extracted from the qualitative studies [17]. Two authors (BB and CP) independently identified the focus of the qualitative papers and categorised them into four themes. As several papers could have been classified under more than one theme, articles were categorised on the basis of their most prominent theme. The four themes were:
2. Network implementation: These articles described the process of implementing a clinical network and the key lessons learned from the implementation process.
3. Organisational structure: These articles examined how networks were structured and how their structure affected the way the network worked (namely, the network's ability to achieve its desired outcomes).
4. Organisational learning and knowledge: These articles examined the organisational learning and education role of clinical networks.
Due to the heterogeneity of the included studies, data were extracted directly into a data extraction table. Information was extracted on: i) country; ii) description of the network studied; iii) description of the sample and its size in terms of networks and participants; iv) study aim; v) intervention (quantitative studies); vi) design; vii) data collection method; viii) outcomes assessed; ix) results. One author (BB) extracted all the information from the initial search on the basis of what was available in the publications, and a second author (CP) checked all the extracted information. There was majority agreement between the reviewers on the data extracted, and queries were resolved through consensus. For the updated search, two authors (BB, CP) extracted information from the articles and agreed on the data extracted through consensus. The main findings of the quantitative and qualitative studies were first examined separately, and then integrated to identify recurrent themes and findings to enable conclusions to be drawn.
Due to the heterogeneity of the included quantitative studies and their outcomes, results were reported narratively. Key outcomes demonstrating the effectiveness of clinical networks were reported. Qualitative methods were used to synthesise textual data extracted from the qualitative studies. Results from the quantitative narrative analysis were then integrated with the qualitative synthesis in the discussion to identify recurrent themes and findings to enable conclusions to be drawn. Details on the findings of each of the included articles can be found in Additional File 2.