The response rate to the questionnaire was low (19%, N=86). The first administration of the questionnaire achieved an initial response rate of 7% (N=32). The first, electronic, reminder yielded a further 18 responses (21% of those who received it), and the second, paper-based, reminder (sent with a paper copy of the questionnaire) yielded a further 36 responses (24% of those who received it). Completion rates also varied across questionnaire sections: the ‘system antecedents’ section was highest (19%, N=85), followed by the ‘adopters’ section (15%, N=69) and the ‘communication and influence’ section (13%, N=58). Due to the low response rate, the data were unsuitable for the planned two-stage analysis. Instead, a series of exploratory bivariate analyses was run, focussing on effect sizes to aid interpretation of the meaningfulness of the findings, given the reduced power of the analyses. The qualitative and quantitative findings were then synthesised to arrive at a final set of factors considered to be influential upon health professionals’ referrals for women diagnosed with mild to moderate postnatal depression. The analyses, a summary of the findings, and an overview of the data synthesis are available in Additional file 2 for reference. The focus of the rest of this paper, however, is upon the challenges and complexities encountered in conducting the diagnostic analysis, the key issues arising from this, and recommendations for future implementation studies.
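To illustrate the style of the exploratory bivariate analyses summarised in Additional file 2, the following sketch shows an association test reported with an effect size (Cramér's V) alongside the p-value, so that the magnitude of an association can be judged despite low statistical power. The contingency table and variable labels below are invented for demonstration; this is not the study's actual data or code.

```python
# Illustrative sketch: a bivariate chi-square test with Cramér's V reported
# as an effect size, to support interpretation when power is low.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Return (Cramér's V, p-value) for a contingency table."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1  # degrees of freedom for V
    return np.sqrt(chi2 / (n * k)), p

# Hypothetical cross-tabulation: perceived evidence base (weak/strong)
# against intention to refer (low/high).
table = [[20, 10],
         [8, 25]]
v, p = cramers_v(table)
print(f"Cramér's V = {v:.2f}, p = {p:.3f}")
```

Reporting V (here a moderate association) alongside p keeps the focus on the size of the effect rather than on significance alone.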
The low response rate clearly illustrates the challenge of eliciting the views of health professionals to inform implementation research, particularly where research uses questionnaires to explore systematically and rigorously the factors influencing innovation adoption. Any initial lack of engagement by health professionals in the developmental stage of an implementation study has significant implications, reducing the robustness and representativeness of the diagnostic analysis. This is critical because the subsequent behaviour-change intervention is developed to target the barriers identified from the diagnostic analysis. Low response rates when exploring the local context therefore leave an implementation study vulnerable to overlooking important barriers and to capturing only the perceptions of those health professionals who are especially interested in the clinical topic. Barriers identified from such health professionals may differ from those experienced by non-responders; for example, the latter may hold more negative perceptions of the recommendation. This has potential implications for the implementation strategy, which may target only those barriers that apply to the more motivated health professionals.
We attempted to maximise responses to the questionnaire by following recommendations in the literature, including the use of two reminders (the second accompanied by a paper copy of the questionnaire), follow-up phone calls, and an incentive prize draw [25, 26]. The second, paper-based, reminder had a greater impact than the first, electronic, reminder, suggesting that some of the health professionals preferred a paper-based questionnaire to an electronic one. Given the strategies used to bolster response rates, we suggest two main explanations for why the response rate remained low. First, the topic of postnatal depression may not have been salient to the majority of the health professionals, and topic salience is a key variable influencing response rates. At the start of the research, two separate activities were undertaken to try to ensure that we selected a recommendation that was prioritised locally by stakeholders and health professionals, to encourage local buy-in and engagement with the diagnostic analysis. First, a series of meetings was held with stakeholders to identify local priority areas. In addition, a questionnaire was administered to local health professionals to explore which characteristics of hypothetical recommendations were most influential upon their prioritisation decisions, to guide us in selecting a recommendation that met those criteria. Combined, it was envisaged that these two processes would ensure that we selected a recommendation that, whilst not well adopted at baseline, would have a greater chance of success through PCT- and adopter-level support. That questionnaire found that health professionals prioritise in particular those recommendations that have a higher impact on patient care and a stronger supportive evidence base, but attach less importance to the costs associated with a recommendation. However, when it came to selecting the recommendation from a shortlist developed through the meetings with local stakeholders, none of the recommendations identified as a local priority exactly met these challenging criteria. Referrals for psychological treatment for women diagnosed with mild to moderate postnatal depression were rated by the team, having reviewed NICE documentation, as having a ‘moderate’ evidence base and a ‘moderate’ impact on patient care, rather than a ‘significant’ impact and a ‘strong’ evidence base.
This highlights the challenge of marrying what the health professionals (the adopters) consider to be priorities with what stakeholders, such as commissioners, regard as priorities. Engaging health professionals at the start of an implementation study when they may not be strongly motivated by a ‘moderate’ impact and a ‘moderate’ evidence base is likely to be a hurdle for other implementation studies. Even if a recommendation does meet such criteria, the challenge remains if health professionals do not perceive it to do so: whilst the behaviour-change intervention can target negative perceptions of a recommendation to try to encourage uptake, gaining health professionals’ engagement in the initial diagnostic analysis in such instances is still difficult. However, studies need to continue to target innovations and recommendations that have low adoption levels at the outset, to make best use of resources and avoid potential “ceiling” effects in adoption.
The second explanation for the low response rate relates to the design of the questionnaire, particularly its length and the ordering of its sections. The questionnaire was long, comprising four separate sections spread over 14 pages in the paper version. Research on the effect of longer versus shorter questionnaires on response rates is equivocal, but suggests that questionnaires on highly salient topics can be longer than those on less salient topics. Given our suggestion that the topic may not have been salient to the health professionals, questionnaire length was likely to be particularly influential upon the response rate. To keep the questionnaire as short as possible, we opted to use the shortened rather than the long version of the Team Climate Inventory, and we presented this before the other two sections, which required more detailed and considered responses that we anticipated could deter potential responders if presented first. It is possible, however, that placing this more abstract set of questions, exploring perceptions of team climate, at the start of the questionnaire negatively affected the response rate, as it may have lacked face validity and, again, salience for the health professionals.
The finding that responses were lowest for the communication and influence section (the second rather than the final section) suggests that the content of that particular section also affected the response rate. Asking the health professionals to name colleagues, and to specify whether they give advice to or seek advice from them and the modalities used to do so, may have seemed unusual, and they may have been uncomfortable naming individuals they work with. Unfortunately, this is an essential prerequisite for using social network analysis to identify channels of communication and local opinion leaders. In a study exploring the feasibility of involving opinion leaders in implementation efforts, health professionals who completed a sociometric measure to identify opinion leaders (as used in this study) reported in interviews afterwards that they had struggled with the concept of opinion leaders and found the questionnaire rather abstract. The same study also found that opinion leaders varied across clinical topics, and that only 32% of respondents cited the identified opinion leaders. This suggests that the same population would need to be re-surveyed for every new implementation topic to identify opinion leaders, and that the persuasive powers of opinion leaders may extend to only a small proportion of the target population when it comes to developing the behaviour-change intervention. Given recommendations for future research to harness the potential of social networks to bring about change in practice, attention needs to be given to exploring different techniques for mapping social networks in healthcare settings, to overcome the response rate challenge and the reluctance to name colleagues.
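As a simple illustration of the sociometric step, advice-seeking nominations can be tallied so that individuals who receive many nominations emerge as candidate opinion leaders. The sketch below uses invented names and an arbitrary nomination threshold; it indicates the general form of such an analysis, not the procedure used in this study.

```python
# Illustrative sketch: identifying candidate opinion leaders from
# sociometric nominations by counting in-degree (nominations received).
from collections import Counter

# respondent -> colleagues they named as advice sources (hypothetical data)
nominations = {
    "HV1": ["GP2", "HV3"],
    "HV3": ["GP2"],
    "GP1": ["GP2", "HV3"],
    "GP2": ["HV3"],
}

in_degree = Counter(name for named in nominations.values() for name in named)

# Candidate opinion leaders: those nominated by at least half of the
# respondents (the threshold is an arbitrary illustration).
threshold = len(nominations) / 2
leaders = sorted(n for n, d in in_degree.items() if d >= threshold)
print(leaders)  # ['GP2', 'HV3']
```

The low citation rates reported above suggest that, in practice, few individuals would clear any such threshold, which is part of the challenge this section describes.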
Piloting the questionnaire with a small sample of health professionals elicited few comments regarding the communication and influence section as a whole, or regarding the decision to place the system antecedents section at the start. This highlights the importance of conducting more detailed assessments of how targeted health professionals feel about the content (rather than just the wording) of implementation questionnaires, and of exploring their likelihood of responding to such measures. Techniques such as cognitive interviewing can help illuminate issues with questionnaires and encourage deeper, more systematic piloting than sending a questionnaire out to a sample and obtaining general feedback on the wording of items and the layout.
The original plan for data analysis had been to run a multilevel model on the questionnaire data, with separate analysis of the interview data, delineating the effects of characteristics of the health professionals from team characteristics. This would have provided an understanding of the influence these two levels had upon health professionals’ intention, and of whether it might be more effective for the implementation strategy to target teams or individual attitudes. Instead, the focus of our analyses was upon exploring patterns and associations between factors and health professionals’ intentions to refer. Missing the opportunity to simultaneously explore the relative importance of team- versus adopter-level variables using multilevel modelling was disappointing. One study obtained a significantly higher response rate for a lengthy implementation questionnaire exploring factors at the adopter, team, and organisational levels. However, that study sampled ninety-nine general practices registered on the Medical Research Council General Practice Research Framework: as the authors noted, these practices are research orientated and can receive funding to support their participation in research studies. The authors reported involving these practices because of their previous experience of low response rates to implementation questionnaires. The MRC GP research practices database is now discontinued; in any case, this sort of approach to recruitment is arguably better suited to large-scale studies across multiple sites than to collaborative implementation studies between single organisations and academic institutions. Given that implementation research occurs in busy healthcare settings, such as hospital wards or GP practices, and needs to develop an understanding of local contextual factors influencing innovation adoption, recruiting enough health professionals in such units to achieve a sample size sufficient for more complex statistical techniques, such as multilevel modelling, will always be challenging, regardless of topic salience or questionnaire content or length. Adopting a case study approach instead, pulling together multiple strands of evidence from different sources (in our case, a questionnaire and a set of interviews), may represent a more realistic, pragmatic, and feasible approach.
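For concreteness, the kind of two-level analysis we had planned can be sketched as a random-intercept model with health professionals (level 1) nested in teams (level 2) and intention as the outcome. The data below are simulated and the variable names are illustrative, not those of the study; the sketch assumes the statsmodels library simply as one readily available implementation.

```python
# Illustrative sketch: random-intercept multilevel model, professionals
# nested in teams, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teams, per_team = 10, 8
teams = np.repeat(np.arange(n_teams), per_team)
team_effect = rng.normal(0, 0.5, n_teams)[teams]   # team-level variation
attitude = rng.normal(0, 1, n_teams * per_team)    # adopter-level predictor
intention = 2.0 + 0.6 * attitude + team_effect + rng.normal(0, 1, n_teams * per_team)

df = pd.DataFrame({"team": teams, "attitude": attitude, "intention": intention})
model = smf.mixedlm("intention ~ attitude", df, groups=df["team"]).fit()
print(model.params["attitude"])  # fixed-effect estimate, near 0.6 by construction
```

Partitioning the variance between the team intercepts and the adopter-level predictor is what would have indicated whether to target teams or individual attitudes; the sample size problem discussed above is precisely that such models need many respondents per team.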
This case study experienced a low response rate to the questionnaire, despite early-stage engagement with local health professionals and stakeholders, piloting of the questionnaire, and the use of evidence-based strategies to bolster response rates. As such, the robustness, representativeness, and generalisability of the findings from the questionnaire, and the subsequent recommendations for the behaviour-change intervention, are limited. The framework underpinning our diagnostic analysis also presented a challenge: with so many factors proposed to influence innovation adoption, no formal definitions provided, and no specification of cause and effect amongst the factors, a number of decisions had to be made about which factors to operationalise (given the response burden for busy health professionals if all were operationalised) and how to operationalise them. Another team of researchers recently undertook a systematic literature review to identify existing measures of the same factors from the framework and, where none existed, to devise their own. This approach differed from ours, which used the framework as an ‘aide-mémoire’ rather than attempting to measure every factor. Different interpretations of the framework across these two studies are apparent, including our conceptualisation of ‘motivation’ in the ‘adopters’ category as ‘intention to adopt the recommendation’ compared with their ‘readiness to change’: both interpretations come from the psychological literature, and both are justifiable. Whilst the work of Cook et al. is valuable as a first step towards operationalising the framework and developing it into a testable theory, the number of factors and the competing ways of operationalising them will likely always be a challenge for those wishing to use it.
Future research should explore other techniques for conducting a diagnostic analysis, to safeguard implementation studies against low initial engagement from health professionals. The consensus that a diagnostic analysis of the local context is important, the likelihood that different factors will matter across different recommendations and settings, and the high number of recommendations targeted at health professionals together make the feasibility and sustainability of survey-based approaches to implementation in health services questionable, particularly for topics of low salience to health professionals. The ambition to make behaviour change a scientific endeavour, adopting a rigorous approach, may struggle to be realised if repeated surveying of health professionals renders their continued use less feasible and increasingly unsustainable. Whilst this approach may be successful for individual research studies, it may be less realistic as a technique for future roll-out across the health services to increase implementation of recommendations on an ongoing basis. Future research should seek to explore and experiment with different methods for exploring the local context, marrying the need for rigour with approaches that are feasible in the long as well as the short term.