Researching Complex Interventions in Health: The State of the Art

Table of contents

KEYNOTE PRESENTATIONS
K1 Researching complex interventions: the need for robust approaches – Peter Craig
K2 Complex intervention studies: an important step in developing knowledge for practice – Ingalill Rahm-Hallberg
K3 Public and patient involvement in research: what, why and how? – Nicky Britten
K4 Mixed methods in health service research – where do we go from here? – Gunilla Borglin

SPEAKER PRESENTATIONS
S1 Exploring complexity in systematic reviews of complex interventions – Gabriele Meyer, Sascha Köpke, Jane Noyes, Jackie Chandler
S2 Can complex health interventions be optimised before moving to a definitive RCT? Strategies and methods currently in use – Sara Levati
S3 A systematic approach to develop theory based implementation interventions – Anne Sales
S4 Pilot studies and feasibility studies for complex interventions: an introduction – Lehana Thabane, Lora Giangregorio
S5 What can be done to pilot complex interventions? – Nancy Feeley, Sylvie Cossette
S6 Using feasibility and pilot trials to test alternative methodologies and methodological procedures prior to full scale trials – Rod Taylor
S7 A mixed methods feasibility study in practice – Jacqueline Hill, David A Richards, Willem Kuyken
S8 Non-standard experimental designs and preference designs – Louise von Essen
S9 Evaluation gone wild: using natural experimental approaches to evaluate complex interventions – Andrew Williams
S10 The stepped wedge cluster randomised trial: an opportunity to increase the quality of evaluations of service delivery and public policy interventions – Karla Hemming, Richard Lilford, Alan Girling, Monica Taljaard
S11 Adaptive designs in confirmatory clinical trials: opportunities in investigating complex interventions – Munyaradzi Dimairo
S12 Processes, contexts and outcomes in complex interventions, and the implications for evaluation – Mark Petticrew
S13 Processes, contexts and outcomes in complex interventions, and the implications for evaluation – Janis Baird, Graham Moore
S14 Qualitative evaluation alongside RCTs: what to consider to get relevant and valuable results – Willem Odendaal, Salla Atkins, Elizabeth Lutge, Natalie Leon, Simon Lewin
S15 Using economic evaluations to understand the value of complex interventions: when maximising health status is not sufficient – Katherine Payne
S16 How to arrive at an implementation plan – Theo van Achterberg
S17 Modelling process and outcomes in complex interventions – Walter Sermeus
S18 Systems modelling for improving health care – Martin Pitt, Thomas Monks

KEYNOTE PRESENTATIONS

K2
Complex intervention studies: an important step in developing knowledge for practice Ingalill Rahm-Hallberg

This presentation will provide the history of the "Complex Interventions in Health" book. The development of the book began at the European Academy of Nursing Science (EANS) summer school. For more than 12 years the EANS has gathered doctoral students from all over Europe in a three-year programme. The content developed over time, inspired by the critique that nursing research was heavily dominated by descriptive, cross-sectional or qualitative research that did not really inform practice. This led to building a curriculum based on the MRC guidance on complex interventions, and the teaching programme became truly successful among students. The road to developing the content for a book was short. Professor Richards and I played around on the blackboard, and after that it has been a success story, with so many authors delivering on time! However, we have rightly been told that research in health is more than researching complex interventions. I could not agree more: it is only one step, but a very important one. Health research, like any research with aspirations to inform practice, needs to be carried out systematically and programmatically, using a variety of designs and methods. It is helpful to think of knowledge development as a stepwise process: starting with discovery, moving into evaluation once possible, and, once solid knowledge has been obtained, implementation in practice.

K3
Public and patient involvement in research: what, why and how? Nicky Britten

In this presentation I will explain what public and patient involvement (PPI) is, what it isn't, and how it differs from engagement, as there is some misunderstanding. There is a range of both ethical and pragmatic reasons for involving members of the public and patients in research, depending on the different values and perspectives of those involved.
I will provide practical examples of PPI at different stages of the research cycle: identifying and prioritising research questions; developing research proposals; doing research; analysing research data; disseminating and implementing findings; as well as PPI in systematic reviewing and operational research. Drawing on my own research, I will use the particular example of the Diabetes Intervention for Agenda Trial (DIAT) to illustrate PPI in a trial from start to finish. I will present a theoretical framework for characterising PPI which, paradoxically, is of practical value in reflecting on PPI practice. Finally, I will present the Public Involvement Impact Assessment Framework (the PiiAF) for helping to evaluate the various intended and unintended impacts of PPI.

K4
Mixed methods in health service research – where do we go from here? Gunilla Borglin

In the 21st century, it is more obvious than ever before that health services research and its practitioners exist in an extremely complex contextual environment. Consequently, at the heart of understanding how to deliver an evidence base for safe and effective care in a setting characterised by multifaceted health care demands is the realisation that no one research method in isolation will suffice. This realisation, together with the REWARD group's contention that 85% of health service research conducted – and, most importantly, mainly funded by public money – is 'waste', is one of the reasons why frameworks such as the MRC guidance on complex interventions, and mixed methods designs, are subject to intensified attention. This plenary address aims to discuss the contribution of mixed methods to researching complex interventions within the MRC framework and to consider its place in contemporary health service research. The MRC framework for researching complex interventions has highlighted the importance of research designs including both qualitative and quantitative approaches.
However, when reviewing the literature prior to writing my chapter in our recent textbook, it was clear that designing and conducting truly mixed methods research presents health service researchers with a number of challenges. Predominant amongst these are the selection of designs from suggested typologies, methodological reporting, critical evaluation and, most importantly, ensuring that the core feature of true mixed methods (as opposed to multi-methods) designs, namely analytical and methodological integration, is implemented. To be able to conduct mixed methods approaches that are rigorously designed, logically executed and transparently reported, we need to move to a position where funders, researchers, journal editors and research consumers demand methodological integration of methods and data from the study outset, rather than as a mere afterthought in the discussion section of research reports.

SPEAKER PRESENTATIONS

S1
Exploring complexity in systematic reviews of complex interventions Gabriele Meyer, Sascha Köpke, Jane Noyes, Jackie Chandler

There is ongoing discussion about the most appropriate approaches to performing systematic reviews and meta-analyses of complex interventions. Reviewers need to broaden their view and look in detail at all available evidence across the whole process of developing and evaluating complex interventions. Decisions about approaches should acknowledge the intervention's aims. Decisions about the scope and methods of the systematic review depend on the available information, which should be assessed in view of clinicians' and patients' needs. Important decisions include the type of evidence to be searched for and the choice of methods to describe or synthesise it. This will usually require a combination of different study types, including quantitative and qualitative data. Transparent reporting is a key issue in allowing readers to apply the findings to specific contexts.

S2
Can complex health interventions be optimised before moving to a definitive RCT? Strategies and methods currently in use Sara Levati (Sara.Levati@gcu.ac.uk) NMAHP Research Unit, Glasgow Caledonian University, UK BMC Health Services Research 2016, 16(Suppl 1):S2

RCTs are recognised as the 'gold standard' methodology for the evaluation of complex interventions; however, they often fail to detect whether a lack of intervention effect is due to suboptimal design, implementation failure or genuine ineffectiveness. Given the current financial constraints upon health services research, it is increasingly important to define pre-trial optimisation methods that can indicate how the intervention works and help maximise the chances that the intervention will be effective. This scoping review provides a map of the health literature on pre-trial strategies aimed at optimising complex interventions before an RCT, and a snapshot of the methods currently used. Scholarly literature was searched using MEDLINE, CINAHL, AMED, PsycINFO and ProQuest Nursing & Allied Health Source for papers published between January 2000 and March 2015 and available in English. The literature search identified 3940 unique references, 27 of which met the inclusion criteria. Optimisation strategies explored the feasibility and acceptability of the intervention to patients and healthcare professionals, estimated the effectiveness and cost-effectiveness of different combinations of components, and identified potential barriers to implementation. Large variations in the methods adopted were noted across studies, including interviews and focus groups with intervention providers and receivers, expert consultations, economic modelling, small uncontrolled pilot studies and evaluation questionnaires.
Overall, optimisation strategies have the potential to detect, in a cost-effective way, those interventions and components that are likely to fail or show little effect if implemented in a full-scale RCT.

S3
A systematic approach to develop theory based implementation interventions Anne Sales

In this presentation, I will briefly review the large landscape of frameworks and theories in implementation research, and suggest an approach to designing implementation interventions that builds on current widely cited frameworks to develop theory-based implementation interventions.

S4
Pilot studies and feasibility studies for complex interventions: an introduction Lehana Thabane, Lora Giangregorio

Pilot studies for phase III trials – which are comparative randomized trials designed to provide preliminary evidence on the clinical efficacy of an intervention – are routinely performed in many clinical areas. Also commonly known as "feasibility", "dress rehearsal" or "vanguard" studies, they are designed to assess recruitment potential; to assess the feasibility of international collaboration or coordination for multicentre trials; and to increase clinical experience with the study intervention before the phase III trial. They are the best way to assess the feasibility of a large, expensive full-scale study, and in fact are an almost essential prerequisite (Thabane et al., BMC Medical Research Methodology 2010, 10:1). Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. The presentation will cover some key aspects of pilot studies for phase III trials of complex interventions, including: 1) reasons for conducting a pilot study; 2) misconceptions about pilot studies; 3) criteria for evaluating the success of a pilot study; 4) differences and similarities between pilot and feasibility studies; and 5) a brief update on the development of the CONSORT extension for pilot trials.

S5
What can be done to pilot complex interventions? Nancy Feeley, Sylvie Cossette

… challenges anticipated and the questions that remain at the end of intervention development.
This presentation will consider which intervention features might be examined in a pilot study, and describe examples of studies that have done so. In addition, special issues such as contamination and co-intervention will be discussed.

S6
Using feasibility and pilot trials to test alternative methodologies and methodological procedures prior to full scale trials Rod Taylor

Feasibility and pilot studies play a key role in health research, providing information for the planning of full-scale randomised controlled trials. Medical Research Council guidance for the development and evaluation of complex interventions states that pilot and feasibility studies are essential in the development and testing of an intervention prior to a large-scale evaluation. Before committing investment to costly and time-consuming full-scale trials, funding bodies increasingly demand that investigators provide evidence from pilot/feasibility studies addressing the question: "Can this full-scale trial be done?" This presentation will: (i) identify the common methodological uncertainties in the design and execution of full-scale trials; (ii) review approaches to testing these methodological uncertainties in feasibility/pilot studies; and (iii) discuss the design and selection of feasibility and pilot studies to best address these uncertainties. These concepts will be illustrated by reference to recent feasibility/pilot studies.

S7
A mixed methods feasibility study in practice Jacqueline Hill, David A Richards, Willem Kuyken

Depression is set to become the third biggest cause of the global burden of disease by 2030, yet access to treatment is poor. To maximise access, psychological therapies for depression are delivered worldwide using a system called stepped care. However, we do not know whether this system achieves similar patient benefit at lower cost compared with alternatives. A fully powered clinical trial of stepped care is required but is currently prevented by a number of uncertainties. This presentation will describe how we chose to answer these in a single feasibility study encompassing a pilot randomised controlled trial and embedded qualitative interviews.
In particular, the presentation will focus on the innovative use of integrated mixed methods analysis to explore how patients' views of stepped care relate to the number of treatment sessions they attend. In this way, we will illustrate how it is possible to interweave quantitative and qualitative data in original ways as part of a feasibility study to address multiple clinical, procedural and methodological uncertainties.

S8
Non-standard experimental designs and preference designs Louise von Essen

Randomized controlled trials (RCTs) have been accepted as the 'gold standard' design for evaluating the effectiveness of a single intervention such as a drug. However, recruitment to clinical trials may be affected by the choice of intervention that participants would make if they were allowed to choose, and by whether they actually receive their preferred intervention. These effects cannot be estimated in the RCT design. Additionally, the RCT runs counter to the current emphasis on patient choice, which is a cornerstone of many governments' health strategies. Where preferences or ethical objections to an RCT exist, alternatives that replicate more closely the behaviors of real-world health care, rather than conforming to standard trial design, should be considered. Citizen participation in research is advocated by governments, research councils and other funding bodies, and as the public, patients and significant others become more involved in research activities, they may be unwilling to be passive participants in a research randomization process. Non-standard and preference designs go some way towards addressing the problems of low recruitment and retention rates and non-implementation of the results of clinical research. Advantages and challenges, e.g. with regard to internal and external validity, of non-standard and preference designs such as the comprehensive cohort design, pre-randomized design, cohort multiple randomized controlled trial and two-stage randomized design will be presented at the conference.
S9
Evaluation gone wild: using natural experimental approaches to evaluate complex interventions Andrew Williams

With the ability to calculate the unbiased attributable effects of an intervention, the randomised controlled trial (RCT) is seen as the 'gold standard' method for answering all three questions. However, as complex interventions are recognised as events within systems, they are rarely tame enough for RCTs to be feasible [2]. Natural experimental approaches (NEAs) permit quasi-experimental evaluation which can answer the three questions when the intervention is wilder, particularly when routinely collected outcome data are available (Craig et al., 2011) [3]. Robust NEAs rely on quasi-randomised identification (instrumental variable, regression discontinuity or propensity score matching) of a control (counterfactual) population to compare with the intervention population. Subsequently, taking a difference-in-differences or controlled before-after approach permits the calculation of an effect attributable to the intervention, addressing the three questions. Quasi-randomised scenarios often occur by chance (naturally, not designed as part of an evaluation) within complex interventions, allowing valuable prospective or retrospective evaluation; for example, criterion-based intervention eligibility permits a regression discontinuity design. Although not faultless, NEAs offer an often-missed opportunity to evaluate complex interventions.
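The difference-in-differences calculation referred to above can be sketched in a few lines. This is a minimal illustration of the arithmetic only; the group labels and outcome values are hypothetical and not drawn from any study cited in this abstract.

```python
# Toy difference-in-differences (DiD) sketch: mean outcome for the
# intervention and control (counterfactual) populations, before and after
# the intervention period. All numbers are hypothetical.
outcomes = {
    ("intervention", "before"): 62.0,
    ("intervention", "after"): 55.0,
    ("control", "before"): 61.0,
    ("control", "after"): 59.0,
}

def did_estimate(y):
    """Effect attributable to the intervention: the before-after change in
    the intervention group minus the change in the control group, which
    nets out any shared underlying temporal trend."""
    change_intervention = y[("intervention", "after")] - y[("intervention", "before")]
    change_control = y[("control", "after")] - y[("control", "before")]
    return change_intervention - change_control

print(did_estimate(outcomes))  # (-7.0) - (-2.0) = -5.0
```

The same subtraction underlies a controlled before-after analysis; the quasi-randomised identification strategies mentioned above (instrumental variables, regression discontinuity, propensity score matching) are ways of choosing a control population for which this comparison is credible.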
S10
The stepped wedge cluster randomised trial: an opportunity to increase the quality of evaluations of service delivery and public policy interventions Karla Hemming, Richard Lilford, Alan Girling, Monica Taljaard

The stepped wedge cluster randomised trial (SW-CRT) is a novel research study design that is increasingly being used in the evaluation of service delivery interventions. The design involves random and sequential crossover of clusters from control to intervention, until all clusters are exposed. We illustrate the use of the design with case examples, summarise the results of an update of a methodological systematic review of the quality of reporting, and provide recommendations for reporting and analysis. Use of the SW-CRT is increasing rapidly and its areas of use are diverse. We illustrate how the design is being used to evaluate the effectiveness of a complex intervention, being rolled out across 90 UK hospitals, to reduce mortality in patients undergoing emergency laparotomy. Quality of reporting is found to be low. In a SW-CRT, more clusters are exposed to the intervention towards the end of the study than in its early stages. A result which prima facie might look suggestive of an effect of the intervention may therefore transpire to be the result of a positive underlying temporal trend. A large number of studies do not report how they allowed for temporal trends in the design or analysis. The SW-CRT is a pragmatic study design which can reconcile the need for robust evaluations with political or logistical constraints. Quality of reporting is generally low, and consensus guidelines on reporting and analysis are therefore urgently needed.

S11
Adaptive designs in confirmatory clinical trials: opportunities in investigating complex interventions Munyaradzi Dimairo

Adaptive designs are now considered innovative alternative trial designs with the potential to improve efficiency in clinical trials research when appropriately used to answer research questions. Their appropriate implementation in confirmatory trials is gaining widespread attention among researchers, public funders and regulators. Based upon findings from our review of confirmatory adaptive designs, this talk focuses on the scope of adaptive designs with the greatest potential for appropriate implementation in the confirmatory trial setting.
In theory, adaptive designs could be applied across a wide range of study interventions; here, attention will be given to complex interventions, with relevant examples and without focusing too much on technical statistical details. This work is part of an NIHR-funded study investigating the use of adaptive designs in publicly funded confirmatory trials, which aims to produce a guidance document and recommendations for their appropriate use.

S12
Processes, contexts and outcomes in complex interventions, and the implications for evaluation Mark Petticrew

It would be convenient if there were two categories of interventions, simple and complex, but of course this is not the case. Instead, interventions can be placed on a spectrum of complexity. For simpler interventions there is often a relatively well-understood (or perhaps just well-accepted) causal pathway between the intervention and its outcomes. This may derive from basic science, for example where the physiological pathways between particular medical interventions and changes in outcomes have previously been shown. For more complex interventions – those in which the intervention interacts with and adapts within its social context – such pathways may be less well understood, less predictable (or even chaotic) and, crucially, non-linear. Here, the intervention may not even be easily distinguishable from the context. In such situations it may be more productive to move away from identifying "interventions" and "components", towards seeing interventions as changes to systems [1] rather than as discrete events, or as "packages" of interconnected elements. Such a perspective has major implications for how we conceptualise and evaluate public health interventions. This presentation will discuss the implications for evaluation, using recent examples from public health.

S13
Processes, contexts and outcomes in complex interventions, and the implications for evaluation Janis Baird, Graham Moore

Attempts to address health problems increasingly involve the development and evaluation of 'complex interventions'.
In addition to outcomes evaluation, understanding complex intervention mechanisms, how interventions are implemented, and how they produce change in specific contexts is essential if findings are to inform policy and practice. For this, process evaluation is vital. MRC guidance for evaluating complex interventions, published in 2008, recognised the value of process evaluation alongside trials of effectiveness, but provided little insight into how to conduct process evaluation. In 2010, a workshop funded by the MRC Population Health Science Research Network (PHSRN) identified a need for guidance to fill this gap. Following the workshop, 11 researchers with experience of public health evaluations developed draft guidance, drawing upon reviews of the literature, group discussions of case studies, and feedback obtained at a number of conference workshops. Academic, policy and practice stakeholders were also consulted during the development of the guidance. The new guidance sets out a framework linking the core functions of process evaluation: understanding implementation, mechanisms of impact, and context. It provides assistance in planning, designing, conducting, reporting and appraising process evaluations, and argues for a systematic approach to designing and conducting process evaluations, drawing on clear descriptions of intervention theory and identification of key empirical uncertainties. While acknowledging that each process evaluation will be different, the guidance assists in thinking through the common decisions to be made when planning and conducting a process evaluation, and provides a framework for peer review.

S14
Qualitative evaluation alongside RCTs: what to consider to get relevant and valuable results Willem Odendaal, Salla Atkins, Elizabeth Lutge, Natalie Leon, Simon Lewin

Qualitative evaluations are becoming increasingly commonplace alongside RCTs. They provide useful insight into the way things work within RCTs – particularly how and why they work – and contextualise RCT effects.
While there are established guidelines for conducting qualitative research, there is less guidance on how to conduct qualitative process evaluations alongside RCTs. Our presentation will discuss how to enhance the utility of qualitative evaluations in explaining RCT effects by identifying key issues, such as the involvement of trialists in the evaluation, scaling the evaluation, and maintaining the integrity of the trial. We discuss these issues by reflecting on practical examples of qualitative evaluations conducted in South Africa.

S15
Using economic evaluations to understand the value of complex interventions: when maximising health status is not sufficient Katherine Payne (Katherine.Payne@manchester.ac.uk) The University of Manchester, UK BMC Health Services Research 2016, 16(Suppl 1):S15

The economic evaluation of complex interventions may be problematic for many reasons, generally underpinned by the 'complexity' of the intervention, but more specifically because of the potential number of relevant patient outcomes. This presentation will begin with an overview of the methods of economic evaluation used by decision-making bodies such as the National Institute for Health and Care Excellence, and will then describe a programme of work that focused on clinical genetics services, as an example of a complex intervention, to illustrate why maximising health status alone is sometimes not sufficient. Genetic services and tests are a good example of a complex intervention and have broader objectives than health gain alone, which may usefully be measured using the concept of capability to make an informed decision. The presentation will conclude by describing the further methodological work required to ensure that methods to value outcomes continue to consider opportunity cost and are therefore suitable for informing resource allocation decisions within the constraints of a finite healthcare budget.
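For readers unfamiliar with the conventional methods of economic evaluation that this presentation takes as its starting point, the standard health-maximising comparison is an incremental cost-effectiveness ratio (ICER) judged against a willingness-to-pay threshold. The sketch below is purely illustrative: the costs, QALY values and threshold are hypothetical, not taken from the programme of work described above.

```python
# Minimal sketch of a cost-per-QALY comparison of the kind used by
# decision-making bodies such as NICE. All numbers are hypothetical.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# A hypothetical complex intervention compared with usual care.
ratio = icer(cost_new=12_000.0, qaly_new=6.1, cost_old=10_000.0, qaly_old=5.9)
print(round(ratio))  # 2000 extra cost / 0.2 extra QALYs = 10000 per QALY

# Decision rule against a hypothetical willingness-to-pay threshold.
threshold = 20_000.0
print(ratio <= threshold)  # True: would be deemed cost-effective
```

The presentation's point is precisely that for interventions such as genetic services this QALY-denominated calculation can miss valued outcomes (for example, the capability to make an informed decision), which is why further methodological work on valuing outcomes is needed.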

S16
How to arrive at an implementation plan Theo van Achterberg (theo.vanachterberg@med.kuleuven.be) Leuven University KU Leuven, Belgium BMC Health Services Research 2016, 16(Suppl 1):S16

The presentation "How to arrive at an implementation plan" will highlight the most important steps to be taken in developing an implementation plan, using examples from research and practice. These include: 1) analysing barriers and facilitators for implementation and linking these to relevant concepts; 2) using these concepts to find and consider theories that can inform the selection of implementation strategies; 3) considering the effectiveness of common strategies; and 4) collaborating with target groups and stakeholders in operationalising strategies into a practical implementation plan. Well-considered preparations such as these are crucial to implementation success, and thus to optimising the chances of effective improvement in healthcare.

S17
Modelling process and outcomes in complex interventions
Walter Sermeus (walter.sermeus@telenet.be) School of Public Health & Primary Care, Leuven University KU Leuven, Belgium BMC Health Services Research 2016, 16(Suppl 1):S17

In this presentation, I focus on probably the least studied and least understood element of developing a complex intervention: modelling the complex intervention by putting together all the 'active components' that are known to have an effect on the basis of empirical evidence or theory. I will discuss several problems: Which active ingredients should be selected? How do we know that the intervention is the best one? How can it be optimised? How can barriers to implementation be anticipated? These problems will be discussed using a practical example.

S18
Systems modelling for improving health care Martin Pitt, Thomas Monks

The growing complexity of health care, coupled with ever-increasing pressure to ensure the efficient and effective use of limited resources, has encouraged policy makers to turn to system modelling solutions. Such techniques have been available for decades, but despite ample research demonstrating their potential, their application in health services to date has been limited. This presentation surveys the breadth of approaches available to support delivery and design across many areas and levels of healthcare planning. Case studies will be presented to exemplify how impactful application of health system modelling can be achieved, and to demonstrate how policy making can benefit from these tools. This is followed by a discussion of the key issues surrounding the use of these methods in health, the barriers that need to be overcome to ensure more effective implementation, and likely future developments.
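To give a concrete flavour of the system modelling approaches surveyed above, a minimal Monte Carlo sketch of a single-clinician clinic queue is shown below. This is a toy model under stated assumptions: exponential inter-arrival and service times with hypothetical rates, not figures from any of the case studies.

```python
import random

# Toy model of a single-server clinic queue: estimate the mean patient
# waiting time by simulating successive arrivals. Rates are hypothetical.
def simulate_clinic(n_patients=10_000, mean_interarrival=12.0,
                    mean_service=10.0, seed=1):
    random.seed(seed)
    clock = 0.0           # arrival time of the current patient (minutes)
    server_free_at = 0.0  # time at which the clinician next becomes free
    total_wait = 0.0
    for _ in range(n_patients):
        clock += random.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(clock, server_free_at)                    # wait if busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(1.0 / mean_service)
    return total_wait / n_patients

# Mean wait rises sharply as utilisation (service/interarrival) approaches 1,
# which is the kind of non-obvious system behaviour such models expose.
print(simulate_clinic())
```

Even a model this small illustrates the policy-relevant insight behind queueing-based system models: modest changes in capacity or demand near full utilisation produce disproportionate changes in waiting times, which is hard to see from service data alone.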