
Understanding clinician attitudes towards implementation of guided self-help cognitive behaviour therapy for those who hear distressing voices: using factor analysis to test normalisation process theory

BMC Health Services Research 2017, 17:507

https://doi.org/10.1186/s12913-017-2449-z

Received: 4 November 2016

Accepted: 16 July 2017

Published: 24 July 2017

Abstract

Background

The Normalisation Process Theory (NPT) has been used to understand the implementation of physical health care interventions. The current study aims to apply the NPT model to a secondary mental health context, and test the model using exploratory factor analysis. This study will consider the implementation of a brief cognitive behaviour therapy for psychosis (CBTp) intervention.

Methods

Mental health clinicians were asked to complete a NPT-based questionnaire on the implementation of a brief CBTp intervention. All clinicians had experience of either working with the target client group or were able to deliver psychological therapies. In total, 201 clinicians completed the questionnaire.

Results

The results of the exploratory factor analysis found partial support for the NPT model, as three of the NPT factors were extracted: (1) coherence, (2) cognitive participation, and (3) reflexive monitoring. We did not find support for the fourth NPT factor (collective action). All scales showed strong internal consistency. Secondary analysis of these factors showed clinicians to generally support the implementation of the brief CBTp intervention.

Conclusions

This study provides strong evidence for the validity of the three NPT factors extracted. Further research is needed to determine whether participants’ level of seniority moderates factor extraction, whether this factor structure can be generalised to other healthcare settings, and whether pre-implementation attitudes predict actual implementation outcomes.

Keywords

Normalisation process theory Implementation Questionnaire CBTp CBTv Psychosis Hearing voices Auditory hallucinations Self-help IAPT

Background

Cognitive behaviour therapy (CBT), delivered over a minimum of 16 sessions, is the only individual psychological therapy recommended for the treatment of psychosis in a number of countries [1–4]. However, access to CBT for psychosis (CBTp) is poor, with recent figures from the UK suggesting only 10% of people with a psychosis diagnosis are offered CBTp [5]. The poor implementation of CBTp is not limited to the UK, but is an international problem reported in the United States [6–9], Canada [10], and Australia [11]. CBTp access rates are not available for many countries, but it is likely that if these more affluent countries are unable to facilitate access, then countries with fewer economic resources would also experience implementation challenges.

One of the most commonly cited reasons for the poor access to CBTp is a lack of resources, including a lack of protected time and staff shortages [12]. One possible approach to increasing access to CBTp is to develop interventions that can be delivered using comparatively fewer resources. A recent meta-analysis found that briefer forms of CBTp (i.e. fewer than the recommended 16 therapy sessions) led to a significant reduction in psychosis symptoms compared to control conditions [13]. These brief CBTp interventions typically targeted a specific symptom associated with psychosis (e.g. delusions or voices). Consequently, there is potential for brief forms of symptom-specific CBTp to be offered in the first instance. Drawing on a stepped care approach [14], more resource-intensive forms of CBTp could then be delivered only to those still in need. However, based on the broader CBTp literature, we know that demonstrating effectiveness does not necessarily lead to widespread implementation. For example, a recent audit of CBTp implementation within an NHS healthcare trust found that only 6.9% of people with psychosis were offered CBT, despite NICE [2] recommending that everyone with psychosis should be offered CBTp [15]. Therefore, in addition to investigating the effects of brief CBTp, we need to consider the potential challenges and facilitators to implementing this novel intervention.

This problem of implementation does not just apply to CBTp. It is common for healthcare services to experience delays in implementing new treatments more broadly [16]. The difficulties associated with implementation have led to the development of numerous theoretical models that aim to understand and simplify this process [17]. A review by Tabak, Khoong, Chambers and Brownson [18] identified 12 separate models of implementation; however, only two of these (Conceptual Model of Implementation Research, [19]; Normalisation Process Theory, [20]) consider implementation at multiple levels, including the individual and system levels.

The Conceptual Model of Implementation Research [19] synthesises previous theories of implementation to create a model that suggests different ways that implementation can be conceptualised (i.e. systems environment, organisational, learning, supervision, individual providers) and measured (i.e. feasibility, fidelity, penetration, acceptability, sustainability, uptake, and costs). The main purpose of this model is to explain the different outcomes that can be used to assess implementation, and the relationships between these outcomes [21]. Although this model is useful, it is not appropriate for the present study, as its purpose does not align with our aims. This study aims to explore the barriers and facilitators to implementing a brief CBTp intervention prospectively, whereas the Conceptual Model of Implementation Research considers implementation retrospectively and does not include a framework for exploring what these barriers and facilitators could be.

Conversely, the flexibility of the Normalisation Process Theory (NPT; [20]) means it can be appropriately applied to the present research study. NPT provides a theoretical framework to guide the implementation process. The theory specifies four factors that may enhance the likelihood of successfully implementing a new idea into an existing service: (1) coherence: the attitude of staff towards the new idea; (2) cognitive participation: the willingness of staff to be involved in implementation; (3) collective action: the service-level pragmatics involved in implementation; and (4) reflexive monitoring: how the implementation process should be evaluated. This model can be used to consider the implementation of a brief CBTp intervention prospectively, and the NPT factors provide a theoretical basis from which barriers and facilitators can be explored.

NPT [20] has been applied to many different healthcare interventions and contexts, including physical health, service infrastructure, and mental health [22]. Looking specifically at mental health related research, NPT [20] has been used to explore the implementation of stepped care [23], depression interventions [24], collaborative care [25], primary mental health care [26], bipolar treatment guidelines [27], and problem-solving therapies [28]. Some of these studies applied the NPT model [20] retrospectively as a means of reviewing a previous implementation process (e.g. [26]), and some utilised NPT [20] prospectively to develop an implementation plan (e.g. [25]). All of these studies used qualitative research methods to understand implementation within the NPT framework, and all concluded that NPT was a useful and comprehensive model to guide the implementation process in mental health service settings. However, no studies have yet tested the validity of the NPT model using a quantitative design in a mental health context.

An NPT questionnaire has recently been developed (NoMAD; [29]); the psychometric properties of this measure are currently under assessment [30]. Similar to the Conceptual Model of Implementation Research [19], its items are phrased to look at implementation retrospectively. Furthermore, while the NoMAD is worded broadly enough to enable its use across multiple settings, this generality limits its practical use in certain contexts. For example, one item on the NoMAD asks whether ‘sufficient resources are available to support the intervention’. In the context of our brief CBTp intervention, resources could mean the number of clinicians, clinicians’ time, training, or information [12]. Consequently, we have developed our own questionnaire, based on the NPT model [20], specifically designed to investigate the prospective implementation of a brief CBTp intervention.

We plan to implement a brief CBTp intervention for distressing voices (CBTv) into National Health Service (NHS) mental health services in the UK [31]. The intervention is designed for adults who are distressed by hearing voices and who are currently receiving mental health care in an NHS service. Services would most typically be either secondary care community teams or early intervention for psychosis services. Accredited therapists generally have a positive attitude towards, and frequently use, self-help materials in their clinical work [32]. In contrast, mental health nurses have reported feeling sceptical about the value, and even the appropriateness, of talking to people about their voice hearing experiences [33]. These differing attitudes towards aspects of guided self-help CBTv suggest that mental health practitioners, as a workforce, may not be a homogeneous population. This has implications for our study, as the findings from our NPT questionnaire could be moderated by the sample characteristics.

In light of this research, our study aims to: (1) test the validity of the four factor NPT model within our questionnaire using factor analysis; and based on the established factor structure, (2) identify mental health practitioner views on the implementation of guided self-help CBTv, and what sample characteristics may moderate these views. To meet our second study aim we will explore the following research questions: (a) Do attitudes differ between those who do and do not have accreditation to deliver therapy? (b) Do attitudes differ depending on the participants’ level of experience working with people who hear voices?

Methods

Design

This was a cross-sectional study using self-report questionnaires to seek clinicians’ views about guided self-help CBTv.

Ethics, consent and permissions

The study received ethical approval from the Sciences and Technology C-REC at the University of Sussex, UK (Reference: ER/CH283/4). NHS Research Governance approval was granted by the Sussex Partnership NHS Foundation Trust. Participants gave informed consent for their participation in this study.

Participants

The study inclusion criteria required that participants were clinicians working in an NHS mental health service who had experience of delivering psychological interventions and/or experience of working with clients who hear voices. To clarify the terminology used in this paper: (1) Psychological Wellbeing Practitioners (PWPs) are clinicians with a year-long training in guiding CBT-based approaches for anxiety and depressive disorders, but who are not trained to the level of a CBT therapist; (2) Psychological Therapists are clinicians with a formal psychological therapy training, including CBT therapists, Clinical Psychologists, and Counselling Psychologists; they may or may not have experience of working with people who hear voices; (3) Mental Health Professionals are clinicians with a core profession (e.g. nurses, psychiatrists, and occupational therapists) but without a formal psychological therapy training; they would be expected to have experience of working with people who hear voices; and (4) Support Workers are clinicians with no formal mental health qualification and no formal therapy training, but who would be expected to have worked with people who hear voices in a support capacity. There were no exclusion criteria.

A total of 201 mental health clinicians, working in an NHS mental health trust in the South of England, participated in the survey. See Table 1 for information on the participant characteristics.
Table 1

Participant Characteristics: N = 201

Age (years), M (SD)                            42.68 (10.58)
Gender (%)
 Male                                          25.9
 Female                                        73.6
 Prefer not to say                             0.5
Team (%)
 Primary Care                                  5.0
 Secondary Care                                86.5
 Early Intervention in Psychosis               8.5
Profession (%)
 Psychological Therapist                       27.9
 Psychological Wellbeing Practitioner (PWP)    1.5
 Mental Health Professional                    56.6
 Support Worker                                14.0
Duration in Profession (years), M (SD)         13.71 (10.40)

Materials

We developed a questionnaire to assess clinicians’ attitudes towards guided self-help CBTv in relation to each of the four NPT factors [20]. The NPT questionnaire was developed in three phases, in line with the questionnaire development guidelines by Finch et al. [34]: (1) use of empirical knowledge, (2) use of expert opinion, and (3) use of theory.

Phase One (Use of Empirical Knowledge): Items were informed by the findings of a meta-analysis of briefer forms of CBTp (i.e. <16 sessions) [13]. Notably, the meta-analysis supported continued research on brief CBTp, as well as the adoption of the symptom-specific approach. Items were also informed by the implementation literature for CBTp more broadly, and included all of the known barriers to CBTp e.g. lack of time, high workloads, inadequate training [35, 36].

Phase Two (Use of Expert Opinion): Members of the research team included two experts in CBTp and an expert in self-help CBT approaches. They drew on their expertise to generate items for the self-report questionnaire.

Phase Three (Use of Theory): As mentioned previously, this questionnaire was based upon the NPT model [20]. Items were developed to address each of the NPT factors: (1) coherence (12 items) e.g. “I would be happy to refer a client who hears distressing voices to receive guided self-help CBT for distressing voices”; (2) collective action (9 items) e.g. “The resources needed to trial guided self-help CBT for distressing voices are available”, (3) cognitive participation (10 items) e.g. “I would like to be involved in research that is trialling guided self-help CBT for distressing voices”; and (4) reflexive monitoring (9 items) e.g. “Measures of symptom severity e.g. psychosis measures, are a good way to evaluate the effectiveness of guided self-help CBT for distressing voices”.

The research team discussed the resultant items, editing and removing items as required until consensus was reached. The subsequent questionnaire included 40 items (see Additional file 1). In line with questionnaire design best practice [37], eight items were negatively worded. The questionnaire also included four free text boxes to give participants the opportunity to elaborate on their responses. The questionnaire used a 7-point Likert scale from strongly agree to strongly disagree. A score of 7 reflects a strong negative attitude, 4 reflects a neutral response, and 1 reflects a strong positive attitude.
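As a concrete illustration of the scoring direction: on a 1-to-7 Likert scale, a negatively worded item is reversed by subtracting the raw rating from 8 (scale maximum plus one). The helper below is a hypothetical sketch, not the study's actual scoring code.

```python
def reverse_score(raw, scale_max=7):
    """Reverse a Likert rating so negatively worded items point the
    same way as the rest of the scale: 1 <-> 7, 2 <-> 6, and so on."""
    return (scale_max + 1) - raw

# A rating of 2 ("agree") on a negatively worded item such as
# "Guided self-help CBT ... would be unsafe" becomes 6 after reversal,
# correctly registering as a fairly negative attitude.
print(reverse_score(2))  # -> 6
```

Note that the neutral midpoint (4) is unchanged by reversal, so reversed and non-reversed items can be averaged together.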

Procedure

Mental health clinicians meeting the inclusion criteria were invited to complete the questionnaire either online (using Bristol Online Survey) or on a hard copy. Participants were informed that consent was to be assumed if they returned their completed questionnaire. All of the items and free text boxes were optional questions so that participants could decline to answer any of the items. The questionnaires were completed anonymously to allow clinicians the freedom to express negative views.

Planned analysis

Participants’ responses to the NPT questionnaire were transferred to SPSS version 22. All items that were negatively worded were reverse-scored at this point. Items were initially screened using the criteria suggested by Field [38] to determine whether they met criteria for factor analysis: the standard deviations, skew and kurtosis of the items were screened to ensure that none were outliers or significantly non-normal. The inter-item correlations and multicollinearity statistics were also checked to ensure that items were related and additive.

To address our first aim, an exploratory factor analysis (EFA) with a principal axis factoring method was used to establish a factor structure. An oblique rotation (specifically Direct Oblimin) was used as the factors were expected to be related; the number of iterations for the rotation was set to 50. Exploratory factor analysis was chosen in favour of confirmatory factor analysis (CFA) as this is the first empirical assessment of the NPT model, and EFA makes no assumptions about the model that will emerge. EFA therefore provides a more rigorous test of the NPT model than CFA: it allows the model with the best fit to the data to emerge without any theoretical constraints. If the resultant factor structure supports the NPT model, this would provide strong evidence for the model (as any model was free to emerge). Factor extraction was initially based upon eigenvalues greater than one, as per the recommendations of Kaiser [39]. This rule can only be relied upon if Kaiser's [39] criteria are met (i.e. there are fewer than 30 items and all communalities after extraction are greater than 0.7). The second of Kaiser's [39] criteria does not apply to this analysis, as the sample size is less than 250 (n = 201). If Kaiser's [39] criteria were not met, then, as the sample contained more than 200 participants, factor extraction could be conducted using a Scree plot. Factor loadings of less than .4 were suppressed. Where items loaded onto more than one factor at .4 or greater, the item was assigned to the factor with which it made most conceptual sense. The reliability of the factors was assessed using Cronbach's alpha, interpreted following the guidance of Tavakol and Dennick [40]: a scale was considered reliable if Cronbach's alpha was between .70 and .95.
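The eigenvalue-greater-than-one rule can be sketched as follows. This is a simplified illustration based on the eigenvalues of the item correlation matrix; the study itself ran principal axis factoring with Direct Oblimin rotation in SPSS, which this sketch does not reproduce.

```python
import numpy as np


def kaiser_count(data):
    """Number of factors suggested by the eigenvalue > 1 rule.

    data: (n_participants, n_items) matrix of item scores.
    Returns the count and the eigenvalues of the item correlation
    matrix in descending order (the values a Scree plot would display).
    """
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return int(np.sum(eigenvalues > 1.0)), eigenvalues
```

A Scree plot simply charts these eigenvalues against factor number; the number of factors above the point of inflexion is retained when Kaiser's criteria do not hold.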

In order to address the second aim, mixed-design ANOVAs were planned to explore group-level differences. The following research questions were explored: (a) Do attitudes differ between those who do and do not have accreditation to deliver therapy? (b) Do attitudes differ depending on the participants’ level of experience working with people who hear voices? Post hoc tests with Bonferroni corrections were used where significant main effects were found. The post hoc test used for between-group analyses was chosen in line with recommendations from Field [38], based on whether group sizes and variances were equal or not.

Results

Aim 1: Establishing the factor structure

Items were initially removed if more than 20% of the inter-item correlations were non-significant at the p < .05 level; this resulted in 7 items being removed (J1, K1, N1, P1, A2, H2 and K2). Secondly, items were removed if either the item standard deviation was greater than 2.5, or both skew and kurtosis Z scores were significantly different from normal at the p < .001 level; 7 more items were removed (C1, M1, D2, E2, J2, M2 and O2). None of the inter-item correlations suggested multicollinearity (all rs < .80). The remaining 26 items were included in the subsequent principal axis factor analysis.

Kaiser’s [39] criteria were not met, as communalities were below 0.7 after extraction (lowest 0.26); the factor structure extracted on the basis of eigenvalues is therefore not reliable. Consequently, we used a Scree plot to determine the number of factors to extract. The point of inflexion on the Scree plot suggested a three-factor solution, and the EFA was re-run forcing three factors to be extracted. Six more items were removed at this stage because their factor loadings were below .4 (B2, C2, G2, I2, L2 and P2), leaving 20 items in the final EFA. Sampling adequacy was ‘meritorious’ (KMO = .88), and Bartlett’s test of sphericity was significant (χ²(190) = 1940.18, p < .001) [41].

The EFA required five iterations to converge. The three-factor structure explained 50.14% of the total variance. Table 2 shows the final factor structure; none of the items cross-loaded. The first factor includes 8 items that all relate to attitudes towards the concept of guided self-help CBTv and conceptually fits the NPT construct of ‘coherence’. The 6 items within the second factor all enquire about willingness to be involved in different aspects of the intervention and conceptually fit the NPT construct of ‘cognitive participation’. The final factor has 6 items that ask about the different ways the intervention could be evaluated to examine whether it has been effective; this factor conceptually fits the NPT construct of ‘reflexive monitoring’. It is noteworthy that three of the four proposed NPT factors emerged from the EFA, with one NPT factor (‘collective action’) not being represented.
Table 2

Final factor structure of staff questionnaire on guided self-help CBT intervention for voices (factor loadings in parentheses)

1 Coherence (M = 2.63, SD = 0.92, α = .89)
 Guided self-help CBT for distressing voices is an appropriate treatment option (0.86)
 I would be happy to refer a client who hears distressing voices to receive guided self-help CBT (0.80)
 I would be willing to refer a client who hears distressing voices to receive guided self-help CBT as part of a research project (0.69)
 Guided self-help CBT for distressing voices would be effective for those with long standing symptoms (0.68)
 Guided self-help CBT for those who hear distressing voices would be unsafeᵃ (0.67)
 It is a waste of resources to trial guided self-help CBT for those who hear distressing voicesᵃ (0.64)
 Guided self-help CBT for those who hear distressing voices will be very effective (0.61)
 People who hear distressing voices would not be able to engage in guided self-help CBTᵃ (0.61)

2 Cognitive Participation (M = 2.76, SD = 1.11, α = .86)
 I would be willing to have training to be able to deliver guided self-help CBT for distressing voices (0.86)
 I would be willing to deliver guided self-help CBT for distressing voices as part of my job (0.79)
 It would be possible to find the time to attend a two-day training course on how to deliver guided self-help CBT for distressing voices (0.66)
 I would be willing to be involved in the development of guided self-help CBT for those with distressing voices (0.62)
 I would be willing to be involved in research that is trialling guided self-help CBT for distressing voices (0.57)
 I would not be prepared to receive training to deliver guided self-help CBT for distressing voicesᵃ (0.55)

3 Reflexive Monitoring (M = 2.28, SD = 0.74, α = .79)
 Research is a good method of testing a new intervention (0.72)
 Following clients up after a period of several months to administer clinical measures is a good way to evaluate the effectiveness of guided self-help CBT for distressing voices (0.68)
 Measures of the distress experienced from hearing voices are a good way to evaluate the effectiveness of guided self-help CBT for distressing voices (0.68)
 Measures of symptom severity (e.g. psychosis measures) are a good way to evaluate the effectiveness of guided self-help CBT for distressing voices (0.60)
 Randomised controlled trials (e.g. comparing the treatment to a control group) are a good way to evaluate the effectiveness of guided self-help CBT for distressing voices (0.52)
 Measures of other clinical symptoms (e.g. anxiety and depression) are a good way to evaluate the effectiveness of guided self-help CBT for distressing voices (0.49)

α = Cronbach’s alpha; ᵃitems have been reverse scored; scale scores range from 1 (positive attitude) to 7 (negative attitude)

All of the scales had good internal consistency (lowest Cronbach’s α = .79; see Table 2). The reliability of the scales could not be improved by removing any of the items. Furthermore, all of the items correlated moderately well with their associated scale total (Coherence: lowest r = .57; Cognitive Participation: lowest r = .54; Reflexive Monitoring: lowest r = .52).
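For reference, Cronbach's alpha for a scale can be computed directly from the item-score matrix. The sketch below is a minimal illustration of the standard formula, not the SPSS reliability procedure used in the study.

```python
import numpy as np


def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)
```

Perfectly parallel items yield an alpha of 1, while unrelated items drive it towards 0; the .70 to .95 band used in this study flags scales that are consistent without being redundant.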

Aim 2: Clinicians’ attitudes towards guided self-help CBT for voices

The attitude scores, for all of the factors, were significantly lower than 4 (representing a neutral attitude) (Coherence: t(200) = −21.03, p < .001; Cognitive Participation: t(199) = −15.76, p < .001; Reflexive Monitoring: t(200) = −33.04, p < .001). This finding suggests that attitudes were generally positive in relation to each of the factors; see Tables 2 and 3 for the descriptive statistics.
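These comparisons are one-sample t-tests of the mean factor score against the scale midpoint of 4. The sketch below uses illustrative scores, not the study data.

```python
import numpy as np
from scipy import stats

# Illustrative factor scores (NOT the study data): values below 4 lie
# on the positive side of the neutral midpoint on this questionnaire.
scores = np.array([2.1, 2.8, 3.0, 2.5, 1.9, 3.2, 2.7, 2.4, 2.9, 2.6])

t, p = stats.ttest_1samp(scores, popmean=4.0)
# A large negative t with a small p indicates a mean reliably below
# neutral, i.e. attitudes that are positive on average.
print(f"t({len(scores) - 1}) = {t:.2f}, p = {p:.4g}")
```

The same logic applies to each factor in turn, with degrees of freedom equal to the number of respondents minus one.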
Table 3

Descriptive statistics for all factors across participant characteristics

                                     Coherence          Cognitive Participation  Reflexive Monitoring
                               n     M     SD    d      M     SD    d            M     SD    d
Accredited Therapist                             0.47               0.31                       0.03
 Yes                           59    2.94  0.90         3.00  1.20               2.30  0.80
 No                            140   2.52  0.90         2.66  1.07               2.28  0.71
Experience working with
people who hear voices                           −0.21              0                          −0.12
 A lot to moderate             166   2.61  0.90         2.76  1.12               2.27  0.71
 Little to none                34    2.80  1.00         2.76  1.13               2.36  0.87

d = Cohen’s d

Mauchly’s test indicated that the assumption of sphericity had been violated (W = .95; χ²(2) = 10.35, p = .006); therefore a Greenhouse-Geisser correction was used for all ANOVAs involving repeated measures (as ε > .75).

Are attitudes different across the factors between those who do and do not have accreditation to deliver therapy?

There was a significant main effect of therapist qualification on the ratings given across factors (F(1, 197) = 5.01, p = .03). Those qualified to deliver psychological therapy (EMM = 2.75; SE = 0.10) gave generally higher (less favourable) ratings than non-therapists (EMM = 2.49; SE = 0.06). There was a significant interaction between the factors and whether participants were qualified to deliver therapy or not (F(1.89, 371.27) = 3.79, p = .03). The only factor where therapists and non-therapists differed significantly was on the Coherence factor, where therapists gave significantly less favourable ratings compared to non-therapists (Therapists: M = 2.94, 95% CI [2.71, 3.17]; Non-therapists: M = 2.52, 95% CI [2.37, 2.67]), representing a medium sized effect (Cohen’s d = 0.47) [42].
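The reported effect size can be reproduced from the Table 3 summary statistics using the pooled-SD form of Cohen's d; the sketch below is illustrative.

```python
import numpy as np


def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d from group summary statistics, using the pooled SD."""
    pooled_sd = np.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                        / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd


# Coherence ratings: therapists (M = 2.94, SD = 0.90, n = 59)
# versus non-therapists (M = 2.52, SD = 0.90, n = 140).
print(round(cohens_d(2.94, 0.90, 59, 2.52, 0.90, 140), 2))  # -> 0.47
```

Because both groups here share the same SD, the pooled SD equals 0.90 and d is simply the mean difference (0.42) divided by 0.90, matching the medium-sized effect reported above.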

Do attitudes differ across the factors depending on the level of experience the participant has working with people who hear voices?

There was no significant main effect of experience working with people who hear voices on subscale scores (F(1, 198) = 0.47, p = .49). The interaction between the factors and level of experience with clients who hear voices was also non-significant (F(1.90, 377.01) = 0.52, p = .58). These findings suggest that subscale ratings were not affected by participants’ level of experience of working with people who hear voices.

Discussion

The first aim of our study was to test the proposed factor structure of Normalisation Process Theory (NPT) [20] using exploratory factor analysis. We found partial support for the NPT model as the three factors extracted were akin to three of the NPT factors: (1) coherence, (2) cognitive participation, and (3) reflexive monitoring. This study seems to be the first test of the NPT model using factor analysis, although the findings of the NoMAD factor analysis are imminent [30]. Our findings suggest that coherence, cognitive participation, and reflexive monitoring are important facets of implementation that should be considered prior to the dissemination of brief CBTp interventions. This result is particularly compelling as the use of EFA meant that any factor structure could have emerged. It is possible that these three factors would also emerge as important when understanding the implementation of healthcare interventions more generally. However, as our study asked participants about a specific intervention (guided self-help CBTv) further research using factor analysis is required to determine the generalizability of the NPT model.

We failed to find support for the fourth NPT factor, collective action, which speaks to the feasibility of implementing the new intervention into the existing service. This study recruited clinicians currently working in mental health services, rather than staff in more senior positions such as service managers. Clinicians generally do not have the power or responsibility to make service-level decisions. As the collective action factor describes a facet of implementation that occurs at the service level, it could be argued to be inconsequential to our sample, which may explain why it did not emerge in our analysis. If participants in more senior positions had been recruited to the study, it is possible that we would have found support for the collective action factor.

Akin to our study, most NPT studies in mental health settings recruited practitioners [24, 25, 28]. However, the NPT studies by Gask et al. [26] and Franx et al. [23] did include service leads and managers to investigate the implementation of mental health care and stepped care into primary care services, respectively. Both of these studies found qualitative support for the collective action factor, as having coherent and consistent leadership across the services was associated with successful implementation. However, neither study explored the moderating effect of profession, nor did they validate the NPT model quantitatively. Consequently, we suggest that future tests of the NPT model should include participants of varying levels of seniority. Such studies would benefit from the use of quantitative moderation analysis to explore the effects of seniority, addressing this limitation of our study and of previous research.

The second aim of our study was to examine clinicians’ attitudes towards guided self-help CBTv, and whether these differed as a function of therapy training (therapist versus non-therapist) and experience working with clients who hear voices. We found that clinicians’ attitudes were favourable across all three factors (all Ms < 3; see Table 2). With respect to each of the factors extracted, these ratings can be interpreted to mean clinicians are, on average, supportive of the concept of guided self-help CBTv (coherence), are willing to be involved in the implementation (cognitive participation), and agree with the proposed means of evaluating the implementation (reflexive monitoring). These are encouraging findings as they suggest that most clinicians working in NHS mental health services in the UK have a positive attitude about guided self-help CBTv and would be willing to support its implementation and evaluation. This suggests that clinicians’ attitudes and willingness to be involved would not be barriers to implementation of guided self-help CBTv in the NHS.

Only therapist training significantly moderated clinicians’ attitudes, with qualified therapists reporting significantly less favourable attitudes on the coherence subscale compared to non-therapists. In practical terms, however, this difference may be negligible, as the mean difference between the groups was 0.42 on the seven-point Likert scale. In addition, both therapists and non-therapists had mean ratings in the favourable range (<3), suggesting that whilst therapists were somewhat more sceptical, they were still, on average, positive in their attitudes towards the intervention. However, there is some evidence to suggest that attitudes towards psychological therapy can vary as a function of clinicians’ profession and training. The majority of the literature suggests therapists view psychosocial interventions for psychosis with greater optimism compared to other mental health professionals [42]. However, there is some evidence that concurs with our findings, as therapists seem to be more pessimistic than mental health nurses about their ability to ‘treat’ psychosis [43]. Therapists also report that delivering brief CBTp interventions can be problematic owing to the limited number of sessions involved and the complex nature of many patients’ presenting problems [44]. Whether the therapists’ reservations are realised in practice requires further research.

Overall, it seems clinicians show support for implementing guided self-help CBTv, which is encouraging. This finding contrasts with previous research suggesting mental health clinicians may not be supportive of interventions that invite people to talk about the voices they experience [33]. Perhaps the recent growth of emancipatory approaches to voices, such as the Hearing Voices Movement, has helped to demonstrate the therapeutic value of openly discussing voices [45]. Whether clinicians’ positive pre-implementation attitudes will aid the actual implementation process remains to be seen. There is evidence to suggest that negative clinician attitudes are associated with poorer intervention outcomes [46]. The favourable attitudes of clinicians in the present study are therefore welcome, as they will help to create an optimal service environment in which to explore the effectiveness of this intervention.

Limitations

It is possible that the reason we did not find support for the fourth NPT factor is because the items developed to target this factor were poor representations of the collective action factor. That is to say, our items may not have sufficiently examined implementation at the service-level. Before accepting this as a study limitation, it is important to first explore whether the sample characteristics may have contributed to the factor structure extracted. As mentioned previously, future studies should aim to include participants in more senior-level positions to see whether this causes the emergence of the collective action factor.

Our questionnaire was designed to investigate implementation prospectively. Our findings are therefore indicative of mental health practitioners’ anticipated barriers and facilitators to implementation – our study cannot determine whether these will translate into practice. This limitation highlights the benefits of longitudinal implementation research. Such a design would help to determine whether positive pre-implementation attitudes translate into a successful initial implementation and the sustained use of the intervention. To our knowledge, there are currently no quantitative studies that have examined the NPT model’s ability to predict implementation outcomes; however, this is one of the aims of the NoMAD psychometric assessment [30], which is currently underway.

The measure we have developed appears to have factorial validity. However, we did not assess other forms of validity, such as construct and divergent validity. This will be the focus of future research evaluating the psychometric properties of the measure. Furthermore, factors were extracted using a combination of Kaiser’s [39] criterion (retaining factors with eigenvalues greater than one) and the scree plot. Although this method of factor extraction is arguably the most widely used and well-established in scale construction, other methods such as parallel analysis are gaining support [47].
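To illustrate the alternative extraction method mentioned above, the following sketch implements Horn’s parallel analysis: observed eigenvalues are retained only while they exceed the corresponding percentile of eigenvalues obtained from random data of the same dimensions. This is an illustrative example only (the function name and parameters are our own, and this code formed no part of the study’s analysis):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis (illustrative sketch).

    Retains factors whose observed correlation-matrix eigenvalues
    exceed the given percentile of eigenvalues from random normal
    data of the same shape (n observations x p variables).
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape

    # Eigenvalues of the observed correlation matrix, descending
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    # Eigenvalues from n_iter random datasets of the same dimensions
    rand_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(rand_eigs, percentile, axis=0)

    # Count leading observed eigenvalues above the random-data threshold
    below = obs_eig <= threshold
    n_factors = int(np.argmax(below)) if below.any() else p
    return n_factors, obs_eig, threshold
```

Unlike the eigenvalues-greater-than-one rule, the retention threshold here adapts to the sample size and number of items, which is why parallel analysis tends to retain fewer spurious factors in small samples.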

Research implications

Our study has identified a number of areas for future research. For example, future studies assessing the validity of the NPT model [20] should aim to recruit participants of varying levels of seniority, and determine whether pre-implementation attitudes correspond to the subsequent ease of implementation. As our study seems to be the first to use factor analysis to test the NPT model, more factor analysis studies are needed to see whether the NPT factor structure can be generalised to the implementation of other interventions in both mental health and physical health contexts.

Conclusions

The present study used exploratory factor analysis to test the application of the NPT model [20] in a mental health setting. We found support for three of the NPT factors when considering the implementation of a brief CBTp intervention. The fourth factor, collective action, was not extracted. Clinicians generally supported the implementation of the intervention. The feedback from clinicians can be used to inform both research and intervention protocols. In time, we will be able to assess whether clinicians’ pre-implementation attitudes impact upon the subsequent implementation process.

Abbreviations

CBT: Cognitive behaviour therapy

CBTp: Cognitive behaviour therapy for psychosis

CBTv: Cognitive behaviour therapy for voices

EFA: Exploratory factor analysis

NHS: National Health Service

NPT: Normalisation process theory

UK: United Kingdom

US: United States

Declarations

Acknowledgements

Thank you to all of the Sussex Partnership NHS Foundation Trust mental health clinicians who took the time to be part of this study. Thanks also to Sussex Partnership NHS Foundation Trust and the Economic and Social Research Council (ESRC) for funding this research.

Funding

This research was funded via a PhD studentship co-funded by Sussex Partnership NHS Foundation Trust and the Economic and Social Research Council (ESRC) (grant number: ES/J500173/1).

Availability of data and materials

The dataset supporting the conclusions of this article is available on request from the corresponding author.

Authors’ contributions

CH analysed the data and produced the initial draft of the paper. KC, CS and MH supervised the analysis of data and made editorial contributions to the paper. All authors read and approved the final manuscript.

Ethics approval and consent to participate

This study received ethical approval from the Sciences and Technology C-REC at the University of Sussex (Reference: ER/CH283/4). Research Governance approval was granted by the Sussex Partnership NHS Foundation Trust.

Consent for publication

Not applicable.

Competing interests

Two of the authors of this article (CS and MH) are also authors of the self-help book that will be used within the guided self-help CBTv intervention.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Authors’ Affiliations

(1)
School of Psychology, University of Sussex
(2)
R&D Department, Sussex Partnership NHS Foundation Trust, Sussex Education Centre

References

  1. Gaebel W, Weinmann S, Sartorius N, Rutz W, McIntyre JS. Schizophrenia practice guidelines: international survey and comparison. Br J Psychiatry. 2005;187(3):248–55.
  2. NICE. Psychosis and schizophrenia in adults: prevention and management. London: National Institute for Health and Care Excellence (NICE); 2014.
  3. Royal Australian and New Zealand College of Psychiatrists Clinical Practice Guidelines Team for the Treatment of Schizophrenia and Related Disorders. Royal Australian and New Zealand College of Psychiatrists clinical practice guidelines for the treatment of schizophrenia and related disorders. Aust N Z J Psychiatry. 2005;39(1–2):1–30.
  4. American Psychiatric Association. Practice guidelines for the treatment of psychiatric disorders. 2nd ed. Arlington, Virginia: American Psychiatric Association; 2006.
  5. Schizophrenia Commission. The abandoned illness: a report from the Schizophrenia Commission. London; 2012.
  6. Burns BJ, Phillips SD, Wagner HR, Barth RP, Kolko DJ, Campbell Y, et al. Mental health need and access to mental health services by youths involved with child welfare: a national survey. J Am Acad Child Adolesc Psychiatry. 2004;43(8):960–70.
  7. Mueser KT, Noordsy DL. Cognitive behavior therapy for psychosis: a call to action. Clin Psychol Sci Pract. 2005;12:68–71.
  8. Eisenberg D, Golberstein E, Gollust SE. Help-seeking and access to mental health care in a university student population. Med Care. 2007;45(7):594–601.
  9. Kataoka SH, Zhang L, Wells KB. Unmet need for mental health care among U.S. children: variation by ethnicity and insurance status. Am J Psychiatry. 2002;159(9):1548–55.
  10. Myhr G, Payne K. Cost-effectiveness of cognitive-behavioural therapy for mental disorders: implications for public health care funding policy in Canada. Can J Psychiatr. 2006;51(10):662–70.
  11. Morley B, Pirkis J, Naccarella L, Kohn F, Blashki G, Burgess P. Improving access to and outcomes from mental health care in rural Australia. Aust J Rural Health. 2007;15(5):304–12.
  12. Ince P, Haddock G, Tai S. A systematic review of the implementation of recommended psychological interventions for schizophrenia: rates, barriers, and improvement strategies. Psychol Psychother Theory Res Pract. 2015.
  13. Hazell CM, Hayward M, Cavanagh K, Strauss C. A systematic review and meta-analysis of low intensity CBT for psychosis. Clin Psychol Rev. 2016;45:183–92.
  14. Bower P, Gilbody S. Stepped care in psychological therapies: access, effectiveness and efficiency. Br J Psychiatry. 2005;186:11–7.
  15. Haddock G, Eisner E, Boone C, Davies G, Coogan C, Barrowclough C. An investigation of the implementation of NICE-recommended CBT interventions for people with schizophrenia. J Ment Health. 2014;(4):1–4.
  16. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care. 2001;39(8):46–54.
  17. May C. Towards a general theory of implementation. Implement Sci. 2013;8:18.
  18. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50.
  19. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36(1):24–34.
  20. May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci. 2009;4:29.
  21. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76.
  22. McEvoy R, Ballini L, Maltoni S, O’Donnell CA, Mair FS, Macfarlane A. A qualitative systematic review of studies using the normalization process theory to research implementation processes. Implement Sci. 2014;9:2.
  23. Franx G, Oud M, de Lange J, Wensing M, Grol R. Implementing a stepped-care approach in primary care: results of a qualitative study. Implement Sci. 2012;7(1):8.
  24. Gunn JM, Palmer VJ, Dowrick CF, Herrman HE, Griffiths FE, Kokanovic R, et al. Embedding effective depression care: using theory for primary care organisational and systems change. Implement Sci. 2010;5(1):62.
  25. Gask L, Bower P, Lovell K, Escott D, Archer J, Gilbody S, et al. What work has to be done to implement collaborative care for depression? Process evaluation of a trial utilizing the normalization process model. Implement Sci. 2010;5(1):15.
  26. Gask L, Rogers A, Campbell S, Sheaff R. Beyond the limits of clinical governance? The case of mental health in English primary care. BMC Health Serv Res. 2008;8(1):369–84.
  27. Morriss R. Implementing clinical guidelines for bipolar disorder. Psychol Psychother Theory Res Pract. 2008;81(4):437–58.
  28. May CR, Mair FS, Dowrick CF, Finch TL, Hacker J, Tanenbaum S, et al. Process evaluation for complex interventions in primary care: understanding trials using the normalization process model. BMC Fam Pract. 2007;8(1):42.
  29. Finch TL, Girling M, May CR, Mair FS, Murray E, Treweek S, et al. NoMAD: implementation measure based on normalization process theory [Internet]. 2015. Available from: http://www.normalizationprocess.org/nomad-study/
  30. Finch TL, Rapley T, Girling M, Mair FS, Murray E, Treweek S, et al. Improving the normalization of complex interventions: measure development based on normalisation process theory (NoMAD): study protocol. Implement Sci. 2013;8:43.
  31. Hazell CM, Hayward M, Cavanagh K, Jones A-M, Strauss C. Guided self-help cognitive behavioral intervention for VoicEs (GiVE): study protocol for a pilot randomized controlled trial. Trials. 2016;17:351.
  32. MacLeod M, Martinez R, Williams C. Cognitive behaviour therapy self-help: who does it help and what are its drawbacks? Behav Cogn Psychother. 2009;37(1):61–72.
  33. Coffey M, Hewitt J. “You don’t talk about the voices”: voice hearers and community mental health nurses talk about responding to voice hearing experiences. J Clin Nurs. 2008;17(12):1591–600.
  34. Finch TL, Mair FS, O’Donnell C, Murray E, May CR. From theory to “measurement” in complex interventions: methodological lessons from the development of an e-health normalisation instrument. BMC Med Res Methodol. 2012;12(1):69.
  35. Berry K, Haddock G. The implementation of the NICE guidelines for schizophrenia: barriers to the implementation of psychological interventions and recommendations for the future. Psychol Psychother Theory Res Pract. 2008;81:419–36.
  36. Prytys M, Garety PA, Jolley S, Onwumere J, Craig T. Implementing the NICE guideline for schizophrenia recommendations for psychological therapies: a qualitative analysis of the attitudes of CMHT staff. Clin Psychol Psychother. 2011;18(1):48–59.
  37. Lindell MK, Whitney DJ. Accounting for common method variance in cross-sectional research designs. J Appl Psychol. 2001;86(1):114–21.
  38. Field A. Discovering statistics using IBM SPSS statistics. 4th ed. London: Sage; 2013.
  39. Kaiser HF. An index of factorial simplicity. Psychometrika. 1974;39(1):31–6.
  40. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. Int J Med Educ. 2011;2:53–5.
  41. Hutcheson GD, Sofroniou N. The multivariate social scientist. London: Sage; 1999.
  42. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20:37–46.
  43. Caldwell TM, Jorm AF. Mental health nurses’ beliefs about likely outcomes for people with schizophrenia or depression: a comparison with the public and other healthcare professionals. Aust N Z J Ment Health Nurs. 2001;10(1):42–54.
  44. Hugo M. Mental health professionals’ attitudes towards people who have experienced a mental health disorder. J Psychiatr Ment Health Nurs. 2001;8(5):419–25.
  45. Waller H, Garety P, Jolley S, Fornells-Ambrojo M, Kuipers E, Onwumere J, et al. Training frontline mental health staff to deliver “low intensity” psychological therapy for psychosis: a qualitative analysis of therapist and service user views on the therapy and its future implementation. Behav Cogn Psychother. 2013;43(3):298–313.
  46. Oakland L, Berry K. “Lifting the veil”: a qualitative analysis of experiences in hearing voices network groups. Psychosis. 2015;7(2):119–29.
  47. Harper Romeo K, Meyer PS, Johnson D, Penn DL. An investigation of the relationship between therapist characteristics and alliance in group therapy for individuals with treatment-resistant auditory hallucinations. J Ment Heal. 2014;23(4):166–70.

Copyright

© The Author(s). 2017
