Research article · Open Access · Open Peer Review
Measuring organisational readiness for patient engagement (MORE): an international online Delphi consensus study
BMC Health Services Research, volume 15, Article number: 61 (2015)
Widespread implementation of patient engagement by organisations and clinical teams is not a reality yet. The aim of this study is to develop a measure of organisational readiness for patient engagement designed to monitor and facilitate a healthcare organisation’s willingness and ability to effectively implement patient engagement in healthcare.
The development of the MORE (Measuring Organisational Readiness for patient Engagement) scale was guided by Weiner’s theory of organisational readiness for change. Weiner postulates that an organisation’s readiness is determined by both the willingness and ability to implement the change (i.e. in this context: patient engagement). A first version of the scale was developed based on a literature search and evaluation of pre-existing tools. We invited multi-disciplinary stakeholders to participate in a two-round online Delphi survey. Respondents were asked to rate the importance of each proposed item, and to comment on the proposed domains and items. Second round participants received feedback from the first round and were asked to re-rate the importance of the revised, new and unchanged items, and to provide comments.
The first version of the scale contained 51 items divided into three domains: (1) Respondents’ characteristics; (2) the organisation’s willingness to implement patient engagement; and (3) the organisation’s ability to implement patient engagement. 131 respondents from 16 countries (health care managers, policy makers, clinicians, patients and patient representatives, researchers, and other stakeholders) completed the first survey, and 72 of them also completed the second survey. During the Delphi process, 34 items were reworded, 8 new items were added, 5 items were removed, and 18 were combined. The scale’s instructions were revised. The final version of MORE totalled 38 items; 5 on stakeholders, 13 on an organisation’s willingness to implement, and 20 on an organisation’s ability to implement patient engagement in healthcare.
The Delphi technique was successfully used to refine the scale’s instructions, domains and items, drawing on input from a broad range of international stakeholders, with the aim that MORE can be applied in a variety of healthcare contexts worldwide. Further assessment is needed to determine the psychometric properties of the scale.
Promoting patient engagement in routine clinical care is considered one of the pillars of modern medicine [1,2]. In the UK, patient engagement has received renewed policy support in the form of the National Commissioning framework and the Commissioning for Quality and Innovation (CQUIN) framework, both of which enable commissioners to reward implementation initiatives. Further, several national programmes have emerged to stimulate shared decision-making (SDM) and self-management, which are important aspects of patient engagement. These include the National Voices and MAGIC websites, which provide a broad evidence base on how best to support self-management and SDM, and projects exploring and promoting the implementation of SDM [5,6]. Still, despite increasing policy commitments, routine adoption remains difficult [7,8]. Various definitions of patient engagement have been introduced. Angela Coulter, in her book ‘Engaging patients in healthcare’, defines patient engagement as “working together to promote and support active patient and public involvement in health and healthcare and to strengthen their influence on healthcare decisions, at both the individual and collective level”. Sometimes, the term ‘patient engagement’ is used interchangeably with terms such as ‘patient involvement’ or ‘patient activation’. However, these terms do not reflect all aspects of patient engagement, including the reciprocity between patients and health professionals or healthcare organisations [2,10].
Patient engagement in healthcare can occur at the individual level in direct care interactions, for example through SDM, and at the collective level in the form of organisational design and governance. Most work to date has focused on engaging patients at the individual level; for example, barriers to implementing SDM at the health professional’s and patient’s levels have been explored [11,12]. When promoting engagement in organisational activities such as the commissioning, planning, design, delivery, evaluation, and reconfiguration of health services, patients and members of the public will need support to become engaged. So far, studies have focused on barriers to engagement among health professionals and patients, but the impact of organisational barriers and facilitators on the spread and adoption of patient engagement in healthcare has not been closely investigated.
A complex whole-system change such as patient engagement in healthcare requires its adoption by the organisation as a whole. It is widely recognised that organisational changes are challenging, and often unsuccessful. Some have suggested that half of all failures to implement organisational change can be ascribed to insufficient organisational readiness for change. Organisational readiness for change can be defined as a state of being willing and able to take action. A timely assessment of organisational readiness can facilitate effective implementation by enabling a tailored implementation strategy. Despite extensive searches, to the best of our knowledge, no tools to assess organisational readiness for patient engagement in healthcare exist. A systematic review of the literature on the conceptualisation and measurement of organisational readiness for change by Weiner and colleagues yielded 43 instruments. Many of these scales were not specific to the healthcare context, and data on reliability or validity were often lacking. Recently, Weiner and colleagues published a measure of organisational readiness for change in healthcare settings. However, the items in this generic scale were not specifically focussed on achieving patient engagement.
Given this gap, we initiated work to develop such a measure. This study addressed the following research question: ‘Which domains and items should be included in a scale to monitor and facilitate a healthcare organisation’s willingness and ability to effectively implement patient engagement in healthcare?’. As we were striving to develop a scale that would fit in a variety of healthcare contexts, the development of this scale included the consultation of a broad range of international stakeholders in patient engagement.
A Delphi study was performed to seek consensus on the nature, number, and wording of the items to include in the scale. The Delphi approach consists of an iterative process that aims to combine the perspectives of a panel of experts into a group consensus. It was decided to conduct two Delphi rounds, in anticipation of the competing priorities and time constraints of relevant stakeholders in the field of patient engagement. In two rounds of online questionnaires, a panel of stakeholders was consulted to provide feedback about the evolving set of items. The anonymous responses from participants in round 1 were fed back to them in round 2.
To maximise the generalisability of the results to a variety of healthcare contexts worldwide, we planned to include healthcare managers, policy makers, medical clinicians, nurse clinicians, other health professionals, administrative staff, patients and patient representatives, and researchers from various countries. Members of the research team nominated multi-disciplinary stakeholders on the basis of their interest and expertise in patient engagement. However, these experts all work in the field of patient engagement and most were acquainted with the researchers. In an effort to include a wider range of perspectives, additional open invitations were circulated through national and international mailing lists including the Evidence Based Medicine mailing list, the Shared Decision Making Network, the Shared Decision Making listserv, and the NHS CHAIN network. The invitations provided a brief outline of the study, an enclosed participant information sheet, and a link to the online survey. Consent was inferred by participants’ completion of the survey. Ethical approval was granted on 3 September 2013, by the University of Hertfordshire’s Ethics Committee for Studies Involving the Use of Human Participants (LMS/SF/UH/00024).
A literature search on appropriate theories to underpin the development of the proposed new scale identified Weiner’s theory of organisational readiness for change in healthcare settings. This theory was developed on the basis of a comprehensive review of the literature on how organisational readiness has been defined and measured. Weiner postulates that organisational readiness for change can be viewed as a psychological state that is shared by the members of the organisation, in which they feel committed to implementing an organisational change (change commitment, or ‘willingness to change’) and are confident in their collective abilities to do so (change efficacy, or ‘ability to change’). Further, Weiner defined three key determinants of an organisation’s ability to change: task demands (‘do we know what it will take to implement this change effectively?’), resource availability (‘do we have the resources to implement this change effectively?’), and situational factors (‘can we implement this change effectively given the situation we currently face?’).
A literature search was conducted to identify any publications describing the development or validation of relevant scales. Four instruments that were broadly related to our planned approach were identified: a hospital self-assessment inventory for patient- and family-centred care, an organisational self-assessment tool for patient-centred care, a checklist for organisational readiness for improving patient care experience, and an organisational communication climate assessment toolkit.
We designed the domains of the ‘Measuring Organisational Readiness for patient Engagement’ (MORE) scale according to Weiner’s theory and developed items that were aligned with the theory’s key constructs, and in part derived from existing scales [22-25]. This was an iterative process involving regular discussions in the research team.
Round 1 survey
The round 1 survey included a brief introduction to the aim of the study and a summary of the development process and theoretical underpinnings. Participants were asked to rate each of the proposed items on a four-point Likert scale. For the items about the stakeholders targeted by the scale, participants were asked to rate whether each of the proposed stakeholders should be involved (1 = avoid; 4 = definitely involve). For all other items, participants were asked to rate the importance of the items (1 = not important; 4 = very important). Furthermore, for each subset of items and at the end of the questionnaire, participants were given the opportunity to provide rewording suggestions, additional comments, or questions. Email addresses were collected for inclusion in round 2. Participants were also asked to complete basic demographic questions. The survey was open for three weeks and two email reminders were sent to the stakeholders on the contact list.
Round 2 survey
Round 1 participants were invited to complete a second survey, in which feedback was provided about the items’ ratings of importance (percentage of participants who thought an item was important or very important) and about the changes made based on the qualitative feedback. Participants were invited to re-rate the importance of the items and to provide additional rewording suggestions, comments or questions. The same demographic items were included. The survey was open for three weeks and two email reminders were sent.
Following round 1, the research team discussed the qualitative feedback received, including rewording suggestions per item, suggestions to add new items, and more general comments or questions. Items were revised if at least two participants suggested it, or if the researchers agreed that the item would benefit from rewording or merging. For example, for the item ‘Informing patients about their condition or potential health issues’ in domain 2, two participants suggested changing ‘informing’ to ‘discussing’ and one participant suggested using the phrase ‘engaging patients in..’. For round 2, the item was reworded accordingly to ‘Engaging patients in discussing their condition or potential health issues’.
Following the two survey rounds, a decision on whether to retain items in the scale was made based on the ratings of importance in round 2. The ratings were summarised using percentages and the views of all participants were given equal weight. If at least 70% of participants rated the importance of the item in the lower two categories (not important or somewhat important), or in the higher two categories (important or very important), it was determined that consensus was achieved and the item would be removed or retained, respectively. A 70% threshold was considered appropriate, considering that the MORE scale will undergo further testing, and is consistent with related research using a Delphi approach [26,27]. If no consensus was achieved, the research team decided whether or not to retain an item in the scale, basing this decision on qualitative feedback from the participants where possible. For example, no consensus was achieved on whether to retain gender, age and length of employment in domain 1 of the scale. The research team decided to retain these items to examine the representativeness of the stakeholders completing the tool, supported by qualitative feedback from multiple participants emphasising the importance of capturing as many different perspectives as possible.
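The retention rule described above can be expressed as a short decision function. The sketch below is purely illustrative: the function name and the example rating vectors are invented for this purpose and are not study data.

```python
# Sketch of the 70% consensus rule described above (illustrative only;
# the example ratings are invented, not taken from the Delphi survey).

def consensus_decision(ratings, threshold=0.70):
    """Classify an item from its round-2 importance ratings (1-4 Likert).

    Returns 'retain' if at least `threshold` of participants rated it 3-4
    ('important'/'very important'), 'remove' if at least `threshold` rated
    it 1-2, and 'no consensus' otherwise (research team decides, drawing
    on qualitative feedback where possible).
    """
    n = len(ratings)
    high = sum(1 for r in ratings if r >= 3) / n  # important / very important
    low = sum(1 for r in ratings if r <= 2) / n   # not / somewhat important
    if high >= threshold:
        return "retain"
    if low >= threshold:
        return "remove"
    return "no consensus"

# Hypothetical examples:
print(consensus_decision([4, 4, 3, 4, 3, 2, 4, 3, 4, 3]))  # retain (90% high)
print(consensus_decision([1, 2, 1, 2, 2, 3, 1, 2, 1, 2]))  # remove (90% low)
print(consensus_decision([4, 3, 2, 1, 4, 2, 3, 1, 2, 3]))  # no consensus
```

Items falling into the 'no consensus' category were, as described above, decided by the research team using the qualitative feedback.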
The first Delphi round took place between 15 November 2013 and 6 December 2013. Of the 69 participants nominated in the contact list, 30 completed the questionnaire (43%). In addition, 101 questionnaires were completed by participants recruited through the open invitations, thus totalling 131 participants (see Table 1 for participants’ characteristics). Sixteen countries were represented in this study: UK (n = 49), US (n = 33), the Netherlands (n = 13), Germany (n = 11), Canada (n = 6), Austria (n = 2), India (n = 2), Ireland (n = 2), Israel (n = 2), Italy (n = 2), New Zealand (n = 2), Spain (n = 2), Australia (n = 1), Denmark (n = 1), Romania (n = 1), Switzerland (n = 1), and Europe (country unspecified) (n = 1). The second round took place between 11 and 28 February 2014 and was completed by 72 of the 131 invited round 1 participants (55%) (see Table 1).
Development of the first draft of the scale
A total of 51 items were developed, divided into three domains. The first domain contained items related to the background of the stakeholders involved in completing the scale, to ensure that these stakeholders were representative of the entire workforce (see Table 2). Following Weiner’s theory, the second domain was designed to assess the organisation’s willingness to implement patient engagement and included 14 items reflecting the different aspects of patient engagement (at the individual and collective levels) (e.g. ‘Informing patients about their condition or potential health issues’). These items were partly derived from existing scales [22-25] and complemented with newly developed items to reflect all aspects of patient engagement. The third domain was designed to assess the organisation’s ability to implement patient engagement, in terms of tasks (9 items, e.g. ‘Developing a shared organisational vision for patient engagement’), resources (10 items; e.g. ‘Expertise in patient engagement’), and situational factors (7 items; e.g. ‘Alignment of patient engagement with organisational priorities’). Again, items in these domains were partly derived from existing scales [22-25] and complemented with newly developed items. In addition, the online questionnaire contained six questions about the demographics of the Delphi survey participants. It was developed using Bristol Online Survey. To support face and content validity, the survey was pilot tested among colleagues.
Participants’ ratings of importance are shown in Table 2. In round 1, 222 rewording suggestions, 238 other suggestions and 23 general comments were provided. In round 2, 91 rewording suggestions, 42 other suggestions, and 8 general comments were made. Below, the results of the survey are presented per domain of the scale, with each round described separately.
Domain 1: stakeholders involved in completing the scale
Between 95 and 99% of participants agreed that senior and junior managers, clinicians and other health professionals should be involved in completing the scale. The majority of participants also agreed to include the following staff categories: receptionists, other staff in administration, and other staff. Several participants commented that executives or members of the organisation’s Board should also be involved in completing the scale. The items ‘Senior Managers’ and ‘Junior Managers’ were therefore reworded to ‘Executives/Board of Directors’ and ‘Managers’. Other feedback related to involving patients in completing the scale; however, given that many of the items do not apply to individual patients, a recommendation to involve patient representatives will be included in the instructions instead. In the future, we will consider developing a separate version of the scale specifically intended for patients. Assessing the gender, age, and length of employment of stakeholders completing the scale was believed to be important or very important by 38%, 50%, and 61% of the survey participants, respectively. Based on the qualitative feedback, three new items were added in round 2: (1) ‘role in the organisation’, (2) ‘discipline (e.g. cardiology)’ and (3) ‘ethnicity’.
Participants’ ratings of staff categories were similar to those in round 1. The items ‘Other staff in Administration’ and ‘Other’ were merged into ‘Other stakeholders’. Participants also suggested a number of additional stakeholders (e.g. HR staff); these will be included in the scale’s instructions as examples of ‘Other stakeholders’. The ratings of importance for gender, age and length of employment were similar to round 1 and no consensus was reached (47-60%). The research team decided to retain these items to examine the representativeness of the stakeholders completing the scale. Of the three new items (‘role in the organisation’ (85%), ‘discipline’ (68%) and ‘ethnicity’ (36%)), consensus was reached only for ‘role in the organisation’, which was retained, with an open question in which stakeholders can also report their discipline. The ‘ethnicity’ item was removed from the scale.
Domain 2: The organisation’s willingness to implement patient engagement
Most participants (83-99%) considered it ‘important’ or ‘very important’ to include all 14 items when assessing an organisation’s willingness to implement patient engagement. Six items were reworded, mainly to represent the patient’s voice more strongly and to use plainer language. Furthermore, the two items about eliciting patients’ preferences and taking patient preferences into account were combined and reworded into ‘Asking patients about their health-related preferences and acting upon them’. The two items about provision of written information and decision support tools were perceived to be overlapping and were merged into ‘Supporting patients with additional health information resources (e.g. access to patient groups and decision support resources)’. Two new items were added to emphasise the importance of self-management as a key aspect of patient engagement, and the role of all members of the organisation in treating patients as partners. Further, some participants suggested broadening the definition of ‘patients’ by emphasising the role of families and carers. Others believed that the term ‘patient’ would not apply to healthy people using health services (e.g. maternity services or screening). An explanation of this term will be provided in the instructions of each domain of the scale.
As in round 1, all 14 items were rated very highly (85-100%). For the nine combined and/or reworded items, ratings were similar to or higher than the round 1 ratings. The newly added items were also rated very highly (99%). Based on participants’ suggestions, three of the items were reworded to emphasise that communication should be tailored to individual patients. The newly added item about treating patients as partners, with respect and consideration for their individual needs, was removed following the comment that it would be difficult to operationalise and measure.
Domain 3: The organisation’s ability to implement patient engagement
Tasks
All ten items in this domain were rated very highly (83-99%). Six items were reworded, three of which were clarified by adding examples. Two items about tailoring communication and consultations to individual patients’ needs were removed because of perceived duplication.
Ratings of importance were as high as or higher than in round 1 (92-100%). Following participants’ comments, the two items about sharing the organisational vision for patient engagement were merged into: ‘Sharing the organisational vision for patient engagement with employees, patients, and the public (e.g. information on intranet, information in waiting areas)’. In line with this change, the item about monitoring patient engagement and giving feedback to employees was reworded to: ‘Monitoring patient engagement in the organisation and giving feedback to employees, patients, and the public’. Further, the item about solving problems was broadened to encompass both individual issues and organisational improvements.
Resources
Participants gave all ten items a high rating of importance (80-98%). Two items about training were combined into the following: ‘Training health professionals in patient engagement (e.g. communication and shared decision-making skills)’. We also combined three items about patient education materials, decision support resources, and trained medical interpreters and care coordinators into: ‘Resources to provide health-related information and support to patients (e.g. access to interpreters, answering questions, helping patients to make decisions)’. In addition, we reworded five items and added examples. Three new items were added, about access to patient representatives, resources to support patients in becoming partners, and tools to evaluate the implementation of patient engagement.
Increased ratings of importance were seen for all seven combined or reworded items (94-100%). High ratings were also obtained for the three new items (79-94%). The items addressing the impact of time on patient engagement were reworded to account for the fact that not only time, but other resources too, would be required. Some participants felt that the item on adaptable systems and processes was unclear; it was merged with the item about resources to provide health-related information and support to patients into: ‘Resources to provide health-related information and support to patients to meet their diverse needs (e.g. access to interpreters, answering questions, scheduling appointments)’. Other participants found the item about access to patient representatives confusing; it was merged with the item about resources to support patients in becoming partners into: ‘Resources to support patients in becoming partners (e.g. access to patient representatives, recruitment of representatives, training, coaching, money to pay patients for participation)’.
Situational factors
Ratings of importance ranged from 80 to 97%. The ‘timing of implementation’ item was removed because of perceived duplication. We reworded the item about ‘positive and consistent communication’ to ‘frequent and consistent communication’, reflecting respondents’ view that communication need not, and should not, be uniformly positive. Two items about employee and patient involvement in planning the implementation were reworded to reflect the necessary involvement of employees and patients in all aspects of patient engagement (planning, implementation, and evaluation).
Compared to round 1, participants gave very similar or slightly higher ratings (92 to 97%). The item about organisational priorities was reworded to ‘Alignment of organisational priorities with patient engagement’ to better reflect that patients’ priorities should be the organisation’s priorities.
This study describes the first steps in the development of a measure of organisational readiness for patient engagement in healthcare (MORE). Guided by Weiner’s theory, a first version of the scale was developed by the research team using items derived from related tools. One hundred and thirty-one multidisciplinary stakeholders from 16 countries gave feedback on the proposed instructions, domains, and items in an online Delphi consensus procedure. Based on this feedback, the number of proposed items was reduced from 51 to 38 by merging overlapping items and deleting items that would be difficult to operationalise and measure. Participants’ feedback also helped to word items in plainer language, to clarify items by adding examples, and to improve the scale’s instructions.
Strengths and limitations
This study is the first to report the development of a theory-based measure of organisational readiness for patient engagement. The work builds on the authors’ expertise and experience in developing measurement scales in the area of patient engagement in healthcare [29-32], complemented by that of the 131 international stakeholders. Sixteen countries were represented, which supports the potential applicability of the scale in various healthcare contexts. Although the broad selection of stakeholders is a strength, it may also be considered a weakness, as the exact number and categories of stakeholders approached through open invitations could not be established. In addition, while our sampling approach targeted a variety of healthcare contexts, we cannot be sure that all healthcare contexts (e.g. primary care, nursing homes, etc.) were represented in our survey. Further, despite including the CHAIN network, no administrative staff completed the Delphi survey. This is a weakness, as receptionists and other administrators of healthcare organisations have an important role to play in fostering patient engagement and are often the patient’s first port of call. Some groups, especially policy makers, seem to be underrepresented in the survey. While this could potentially be addressed by assigning more weight to the answers of this group, it would be difficult to decide on the weight to use, and there are no indications that policy makers rated the importance of the items differently from other participants. Another group that may have been underrepresented is patients: only one participant identified himself as a patient. However, 22 participants identified themselves as patient representatives, describing a variety of roles including family members and patient advocates. Finally, the consistently high ratings of importance across all three domains suggest a ceiling effect.
To improve the design of future Delphi studies, it has been suggested that ceiling effects can be reduced by replacing the 4-point scale with a 5-point scale, using three positive choices (moderately important, important, and very important) and two negative choices (not at all important, somewhat important).
Results in context
A sizeable proportion of commonly reported barriers to patient engagement in healthcare are influenced, and occasionally fostered, by organisational factors: time, continuity of care, workflow, characteristics of the healthcare setting (e.g. noisy environments, lack of privacy), lack of skills to apply SDM and access to training, clinical coordination and inter-professional collaboration, overall attitudes towards patient engagement, and competing priorities and targets (e.g. criteria for referral, reimbursement in favour of surgery) [34,35]. If patient engagement is to become a reality in routine care, it is critical to give organisations the means to identify and address those barriers through usable routine measurements.
As far as can be determined, MORE is the first measure to target organisational readiness for patient engagement in healthcare. However, other scales have focussed on related constructs. The Organisational Readiness for Implementing Change measure (ORIC) was based on Weiner’s theory of organisational readiness for change. While ORIC’s initial psychometric assessment, combining a qualitative approach, lab studies and field testing, shows promise, its convergent, predictive and discriminant validity remain to be tested. Further, one may question the value of lab studies that did not include the target population and provided limited data from naturalistic settings. By contrast, Wynia’s validation study of an organisational communication climate assessment toolkit was conducted among its target users, across 13 geographically and ethnically diverse health care organisations. The analysis suggested that the domains and items were reliable and accurately predicted patients’ perceptions of the quality of organisational communication.
Building on the above validation methodologies, and given our intention to develop a measure that can be applied in routine settings as well as used for research purposes, we intend to prioritise naturalistic testing over simulated validation. We will use a combination of qualitative and quantitative methods. First, the items and domains of MORE will undergo further refinement through cognitive debriefing interviews with a wide range of international stakeholders. MORE was initially conceptualised as a measure that could be used across countries and healthcare settings (with minor adaptation and translation where necessary). We therefore intend to interview stakeholders from the US, the UK, and at least one European country. In parallel, observation and field notes will be used to determine how the scale can fit into healthcare settings. Our ultimate goal is for MORE to become a routine measure, one that is not used solely for research purposes but that fosters change and improvement, enabling organisations to improve their practices, track progress and flag up potential barriers to patient engagement. Second, we will assess the psychometric properties of the scale in a field study across geographically diverse healthcare organisations. We will assess intra-rater reliability, as well as convergent, discriminant and predictive validity. In accordance with Weiner’s theory, we will analyse within-group agreement to determine whether ‘organisational readiness for patient engagement’ is indeed a psychological state shared by the members of an organisation.
Promoting patient engagement in healthcare is critical to the quality and efficiency of healthcare delivery, and is often mediated, and arguably hindered, by organisational factors. It is therefore essential to facilitate an organisation’s transition to truly patient-centred services, in which patients, their families and their carers are engaged throughout the care pathway, from booking an appointment to discussing treatment options with the medical team and being consulted on the design and delivery of services. The first prototype of MORE has been carefully developed on a solid theoretical and conceptual foundation, using an iterative development process. The clarity, relevance and applicability of the items and domains have been significantly strengthened by the ratings and qualitative input of 131 multi-disciplinary stakeholders from 16 countries, of whom 18% were patients or patient representatives. Further assessment is needed to determine the psychometric properties of the scale.
CQUIN: Commissioning for Quality and Innovation
MORE: Measuring Organisational Readiness for patient Engagement
ORIC: Organisational Readiness for Implementing Change
SDM: Shared decision making
Dentzer S. Rx for the ‘blockbuster drug’ of patient engagement. Health Aff (Millwood). 2013;32(2):202.
Coulter A. Engaging patients in healthcare. Maidenhead: Open University Press; 2011.
Strategy Unit - NHS England. Planning and delivering service changes for patients. A good practice guide for commissioners on the development of proposals for major service changes and reconfigurations. 2013. http://www.england.nhs.uk/wp-content/uploads/2013/12/plan-del-serv-chge1.pdf.
National Voices. People shaping health and social care. http://www.nationalvoices.org.uk/evidence.
Coulter A, Edwards A, Elwyn G, Thomson R. Implementing shared decision making in the UK. Z Evid Fortbild Qual Gesundhwes. 2011;105(4):300–4.
King E, Taylor J, Williams R, Vanson T. The MAGIC programme: evaluation. 2013. http://www.health.org.uk/public/cms/75/76/313/4173/MAGIC%20evaluation.pdf?realName=J1N5Vk.pdf.
Elwyn G, Legare F, van der Weijden T, Edwards A, May C. Arduous implementation: does the Normalisation Process Model explain why it’s so difficult to embed decision support technologies for patients in routine clinical practice. Implementation Sci. 2008;3:57.
Ipsos MORI. Patient Feedback Survey. 2012. http://www.institute.nhs.uk/images/documents/Share%20and%20network/PEN/National%20Survey%20Summary%20Report%20send%20to%20AK%204%2012.pdf.
Gallivan J, Kovacs Burns KA, Bellows M, Eigenseher C. The many faces of patient engagement. J Particip Med. 2012;4:e32.
Carman KL, Dardess P, Maurer M, Sofaer S, Adams K, Bechtel C, et al. Patient and family engagement: a framework for understanding the elements and developing interventions and policies. Health Aff (Millwood). 2013;32(2):223–31.
Gravel K, Légaré F, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: a systematic review of health professionals’ perceptions. Implementation Sci. 2006;1:16.
Légaré F, Ratté S, Gravel K, Graham ID. Barriers and facilitators to implementing shared decision-making in clinical practice: update of a systematic review of health professionals’ perceptions. Patient Educ Couns. 2008;73(3):526–35.
Johnson A, Bament D. Improving the quality of hospital services: how diverse groups of consumers prefer to be involved. Aust Health Rev. 2002;25(6):194–205.
Williams I. Organizational readiness for innovation in health care: some lessons from the recent literature. Health Serv Manage Res. 2011;24(4):213–8.
Troy K. Change management: an overview of current initiatives. New York, NY: Conference Board; 1994.
Kotter JP. Leading Change. Boston, MA: Harvard Business Press; 1996.
Weiner BJ. A theory of organizational readiness for change. Implementation Sci. 2009;4:67.
Weiner BJ, Amick H, Lee SY. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;65(4):379–436.
Shea CM, Jacobs SR, Esserman DA, Bruce K, Weiner BJ. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implementation Sci. 2014;9:7.
Hasson F, Keeney S, McKenna H. Research guidelines for the Delphi survey technique. J Adv Nurs. 2000;32(4):1008–15.
Keeney S, Hasson F, McKenna H. Consulting the oracle: ten lessons from using the Delphi technique in nursing research. J Adv Nurs. 2006;53(2):205–12.
Institute for Family Centered Care and the American Hospital Association. Strategies for leadership. Patient and family centred care. A hospital self-assessment inventory. 2004. http://www.aha.org/content/00-10/assessment.pdf.
Frampton S, Guastello S, Brady C, Hale M, Horowitz S, Smith SB, et al. Patient-centered care improvement guide. 2008. http://www.patient-centeredcare.org/inside/chapter3_Form_Self-Assess_Tool1.pdf.
Australian Commission on Safety and Quality in Health Care. Patient-centred care: improving quality and safety through partnerships with patients and consumers. Sydney; 2011. http://www.patient-centeredcare.org/inside/chapter3_Form_Self-Assess_Tool1.pdf.
Wynia MK, Johnson M, McCoy TP, Griffin LP, Osborn CY. Validation of an organizational communication climate assessment toolkit. Am J Med Qual. 2010;25(6):436–43.
Elwyn G, O’Connor A, Stacey D, Volk R, Edwards A, Coulter A, et al. Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ. 2006;333(7565):417.
Rao JK, Anderson LA, Sukumar B, Beauchesne DA, Stein T, Frankel RM. Engaging communication experts in a Delphi process to identify patient behaviors that could enhance communication in medical encounters. BMC Health Serv Res. 2010;10:97.
Bristol Online Surveys. http://survey.bris.ac.uk/.
Barr PJ, Thompson R, Walsh T, Grande SW, Ozanne EM, Elwyn G. The psychometric properties of CollaboRATE: a fast and frugal patient-reported measure of the shared decision-making process. J Med Internet Res. 2014;16(1):e2.
Elwyn G, Barr PJ, Grande SW, Thompson R, Walsh T, Ozanne EM. Developing CollaboRATE: a fast and frugal patient-reported measure of shared decision making in clinical encounters. Patient Educ Couns. 2013;93(1):102–7.
Elwyn G, Hutchings H, Edwards A, Rapport F, Wensing M, Cheung WY, et al. The OPTION scale: measuring the extent that clinicians involve patients in decision-making tasks. Health Expect. 2005;8(1):34–42.
Tomson C, Durand M-A, Cullen R. Shared decision-making in kidney care: the challenge of measurement. British Journal of Renal Medicine. 2013;18(1):23–5.
Moret L, Nguyen JM, Pillet N, Falissard B, Lombrail P, Gasquet I. Improvement of psychometric properties of a scale measuring inpatient satisfaction with care: a better response rate and a reduction of the ceiling effect. BMC Health Serv Res. 2007;7:197.
Hofstede SN, van Bodegom-Vos L, Wentink MM, Vleggeert-Lankamp CL, Vliet Vlieland TP, Marang-van de Mheen PJ. Most important factors for the implementation of shared decision making in sciatica care: ranking among professionals and patients. PLoS One. 2014;9(4):e94176.
Hofstede SN, Marang-van de Mheen PJ, Wentink MM, Stiggelbout AM, Vleggeert-Lankamp CL, Vliet Vlieland TP, et al. Barriers and facilitators to implement shared decision making in multidisciplinary sciatica care: a qualitative study. Implementation Sci. 2013;8:95.
The authors would like to thank all survey participants for their invaluable feedback and suggestions. We are grateful to Nayantara Kansal for her assistance with preparing the first round of the Delphi survey. We would like to thank the CHAIN network for circulating the invitation to their members, and Isabelle Scholl, Marleen Kunneman and Jördis Zill for their advice on Delphi procedures.
LO, M-AD, and AL declare that they have no competing interests. GE is founder of the Option Grid Collaborative and a consultant for Emmi Solutions LLC, a developer of patient decision support materials.
LO, M-AD, AL, and GE conceived the study, and participated in its design, coordination, analysis, and interpretation. LO and M-AD drafted the manuscript and M-AD, AL, and GE commented on the manuscript. All authors read and approved the final manuscript.