Volume, nature and frequency
A total of 960 titles and abstracts were retrieved from the electronic database searches and screened for inclusion. Of these, 54 papers were judged potentially relevant to the current review’s criteria and were subjected to full review. Twelve of these were deemed to meet the inclusion criteria. A list of papers excluded from the review at this stage can be found in Additional file 2.
Five duplicate papers were removed, and the reference lists of the remaining papers were scrutinised for sources that may have been missed in the original searches. Eight additional papers were identified through this process, giving a total of 15 papers retained for review (Figure 1).
All of the included papers were research based: eight were quantitative in design, four qualitative and three used mixed methods. Three of the studies were conducted in the UK and Ireland, three in Canada, two in the US, one with a mixed sample from the US and Canada, two in Australia, two in New Zealand, one in Israel, and one in the Netherlands. More papers were published in 2008 (n = 5; 33%) than in any other year (Figure 2). This spike was not linked to country of origin, as the five studies concerned were conducted in different countries and across continents.
Scope and quality
Of the 15 included papers, nine used a sample of physiotherapists, two used a mixed sample of physiotherapists and occupational therapists, two used a sample of occupational therapists, one used a sample of speech and language therapists, and one used a mixed sample of occupational therapists and speech and language therapists. Five papers investigated allied health professional managers or directors, seven investigated a general staff sample and two investigated a mixed manager/staff sample; one paper did not specify whether its sample was staff or managerial. In total, 2161 allied health professionals participated in the reviewed studies, the majority of whom (n = 1450) were physiotherapists; see Additional file 3. Based on the quality appraisal, the papers were generally found to be adequately conducted, although several had limitations (see Additional file 3). In general, the qualitative and mixed-methods papers were found to be more methodologically rigorous, whereas the quantitative papers were of mixed quality. Several of the quantitative papers did not justify their sample sizes [21, 25–28], and a power analysis was reported in only one [17]. Many of the quantitative papers relied on simple descriptive analyses.
Themes and concepts identified
A range of barriers and facilitators to routine outcome measurement by allied health professionals in practice was identified. The majority of the papers included in this review identified barriers to routine outcome measurement. Only one paper [21] framed its wording positively and consequently elicited more detail about facilitators. Five papers [21, 22, 29–31] reported facilitators and barriers as a continuum. Four higher-level themes were identified: knowledge, education, and perceived value in outcome measurement; support and priority for outcome measure use; practical considerations; and patient considerations.
Knowledge, education, and perceived value in outcome measurement
This theme comprises factors relating to outcome measure use at the individual clinician level. Eleven papers identified issues relating to clinicians’ knowledge as influencing routine outcome measurement. Eight papers [22, 23, 25–27, 30, 32, 33] identified clinicians’ lack of knowledge about outcome measures’ reliability and validity as a barrier to their use, whilst three papers [21, 29, 31] suggested that greater knowledge, understanding and familiarity with outcome measures increased the likelihood that they would be used in practice. One paper found that the use of outcome measures was viewed more positively by those with a Masters level qualification [12], and another found that those with a clinical specialty were twice as likely as those without to use outcome measures in practice [22]. Clinicians’ perceived value of outcome measurement was discussed in seven papers [21, 22, 26, 29–32], four of which recognised that this factor was bi-directional [21, 29–31]: a lack of perceived value led to a decreased likelihood of outcome measures being used in practice, whilst greater perceived value appeared to increase uptake.
Support and priority for outcome measure use
This theme relates predominantly to the influence of organisational factors on routine outcome measurement in practice. Low organisational priority and support for outcome measurement were identified as barriers in six papers [22, 24, 26–28, 32]. Two papers indicated that a high level of organisational commitment and support for routine outcome measurement facilitated its use [23, 32]. Co-operation of colleagues [21] and the support of management [32] were also recognised as facilitating routine outcome measurement. Concerns about a lack of management support [32], the inappropriate use of outcome data by managers to reproach staff [21, 32], and the imposition of measurement tools by management [30] were all cited as barriers to use in practice. At an individual level, clinicians appear to be more positive about outcome measurement when they have a choice over the selection of outcome measures, so that they can use those they consider most useful or meaningful to their practice [23].
Practical considerations
This theme relates to practical issues associated with routine outcome measurement in practice. Time was identified as an important influencing factor in ten papers. The barriers associated with lack of time involved not only the time required for patients and clinicians to complete an outcome measure [7, 22, 25, 28], but also the number of patients seen by a clinician [24, 31] and institutional restrictions which may limit the time available to spend with patients [24]. Time was not considered in isolation, but in association with clinicians’ assessments of a measure’s suitability for their context and the number of measures required [21, 29, 31]. Eleven papers [7, 9, 22–28, 30, 33] identified a lack of appropriate or available outcome measures as a barrier to their use. An outcome measure that was appropriate to the context, could be practically applied and did not require too much time to document [31] was more likely to be used in practice [30, 31]. A lack of funding or excessive cost of outcome measures [21, 22, 24, 28–30] was clearly recognised as a barrier to their use.
Patient considerations
This theme relates to clinicians’ concerns about using outcome measures with and for their patients. The relevance of outcome measures to practice is a clear concern for clinicians. Six papers discussed how outcome measures that did not inform practice were a barrier to their use [7, 22, 23, 30, 32, 33]. Clinicians reported that the information provided by outcome measures was too subjective or not useful to their practice [30, 31, 33], and that it did not help to inform or direct patient care [22, 32]. Conversely, the view that outcome measures could support patients’ understanding and facilitate discharge planning, communication and treatment management [32], and the view that they enable comparative clinical assessments [21], were likely to increase their use. This ‘fit’ of outcome measurement to routine practice was highlighted in five papers [7, 21, 24, 29, 32]. Three of these [21, 24, 32] identified that where the ‘fit’ was poor, barriers arose at both individual and organisational levels.
Two papers highlighted clinicians’ philosophical concerns about the relevance of standardised outcomes [30, 31]; such concerns, however, were not found to be statistically related to outcome measure use [22]. Five papers reported clinicians’ concerns about their patients’ ability to complete outcome measures [21, 22, 24, 30, 33]. These included beliefs that measures: could be too complicated to complete independently [22, 24]; were confusing [22]; required too high a reading level [22]; presented language barriers for patients not fluent in English [22]; raised ethnic and cultural sensitivity issues [22, 30]; and that patients might become disheartened if the outcome measurement findings showed their progress to be slow [22]. Such perceptions were reported to decrease clinicians’ likelihood of using outcome measures in practice. Two papers, however, reported facilitating factors relating to patient considerations: outcome measures were viewed more favourably if they were easy for patients to understand [22] and if patients did not find them too time consuming [21].