Protocol refinement for a diabetes pragmatic trial using the PRECIS-2 framework
BMC Health Services Research volume 21, Article number: 1039 (2021)
This report describes how we refined a protocol for a pragmatic comparative effectiveness study of two models of an evidence-based diabetes shared medical appointment intervention and used the PRECIS-2 rating system to evaluate these adaptations.
We report primary data collected between June and August 2019, and protocol refinements completed between 2018 and 2020. Twenty-two members of the study team collaborated in protocol refinement and completed the PRECIS-2 ratings of study pragmatism. We discuss study design refinements made to achieve the desired level of pragmatism vs. experimental control for each of the nine PRECIS-2 dimensions. Study team members received training on PRECIS-2 scoring and were asked to rate the study protocol on the nine PRECIS-2 dimensions. Ratings were compared using descriptive statistics.
In general, the PRECIS-2 ratings revealed high levels of pragmatism, but somewhat less pragmatic ratings on the categories of Delivery and Organization (costs and resources). This variation was purposeful, and we provide the rationale for and steps taken to obtain the targeted level of pragmatism on each PRECIS-2 dimension, as well as detail design changes made to a) make the design more pragmatic and b) address COVID-19 issues. There was general agreement among team members and across different types of stakeholders on PRECIS-2 ratings.
We discuss lessons learned from use of PRECIS-2 and experiences in refining the study to be maximally pragmatic on some dimensions and less so on others. This paper expands on prior research by describing actions taken to achieve higher levels of pragmatism and to revise our protocol to fit the changed context. We make recommendations for future use of PRECIS-2 to help address changing context, and for other strategies for the planning of and transparent reporting on pragmatic and comparative effectiveness research.
Failure to replicate research results in real world settings [1, 2] has received increased attention [3, 4], and demands proactive methodologies rather than “business as usual” to improve health services research [5, 6]. Although a multifaceted issue, part of ‘failures to replicate’ may not be actual failures, but rather reflect that interventions found to be efficacious under one set of conditions do not generalize to other settings, conditions and populations [4, 7]. One step toward addressing replication concerns is to enhance transparency in reporting on context, including the extent to which study features mirror or contrast with aspects of usual care. Although participating patient details are now addressed by CONSORT [8,9,10] and other reporting standards, other contextual factors such as selection, exclusions, participation and representativeness at the broader levels of settings (communities, healthcare systems, clinics) and staff are often unreported. Fortunately, methods for addressing these generalizability issues are emerging, including the NIH reporting requirements, the Standards for Reporting Implementation Studies (StaRI) guidelines, the updated Pragmatic-Explanatory Continuum Indicator Summary-2 (PRECIS-2), and the just-published PRECIS-2 Provider Strategies (PRECIS-2-PS) systems [12,13,14].
In contrast to more traditional randomized explanatory or efficacy trials, pragmatic trials are typically characterized by few exclusions for settings, staff and patients, use of practical measures of outcomes important to patients and decision makers, and comparison of real world intervention alternatives. Pragmatic trials with these features evolved as a way to address the well-documented gap between standard RCT research findings and practice [16, 17]. In particular, the PRECIS-2 rating system is widely recommended for planning pragmatic trials [18,19,20,21]. As demonstrated in earlier publications and below, it is also possible to use the PRECIS tools to report on changes in design during a trial or to categorize published trial designs [18, 22]. To aid trial design decisions, PRECIS-2 is represented as a 9-spoked ‘wheel’ with the following domains: 1) eligibility criteria (who is selected to participate in the trial?); 2) recruitment (how are participants recruited into the trial?); 3) setting (where is the trial being done?); 4) organization (what expertise and resources are needed to deliver the intervention?); 5) flexibility - delivery (how is the intervention delivered?); 6) flexibility - adherence (what measures are in place to make sure participants adhere to the intervention?); 7) follow-up (how closely are participants followed up?); 8) primary outcome (how relevant is it to participants?); and 9) primary analysis (to what extent are all data included?).
PRECIS-2 may best be used in partnership with stakeholders as part of the study design process. Yet even when trials are planned in partnership with stakeholders, grant proposals are hypothetical in nature and intended to demonstrate feasibility, rigor, and importance to reviewers. Given the time lag between when a study is proposed, actually funded, and when the trial actually begins, even the best planning cannot predict the exact context, staff, resources, and motivations present at the time a trial begins. To ensure ongoing fit to changing context while retaining rigor, pragmatic trials should anticipate the need for protocol refinement when launching (and potentially at key points during the study) to fully specify the trial procedures, measures, materials, data sources, timelines, and participant and setting eligibility criteria. Any changes to the originally approved or agreed upon methods or study design elements should be guided by a systematic process and framework such as PRECIS-2, well documented, and communicated transparently. For pragmatic trials, protocol refinements made during the trial are usually driven by contextual changes. For example, a change in the electronic health record software used by a participating healthcare organization could necessitate a change in how clinical outcomes are operationalized and assessed.
Many reports detail use of PRECIS-2 for planning (www.precis-2.org), including for reviewing and reporting findings [18, 22], yet to our knowledge only one has focused on protocol adaptations or design changes, and that one only briefly. This report describes how we refined a protocol for Invested in Diabetes, summarized below, which was intended as a pragmatic comparative effectiveness study of two models of evidence-based diabetes shared medical appointments (SMAs). The “main objective” for this study explicitly included protocol refinement. We previously reported on pre-implementation adaptations to the comparator interventions to fit context (i.e., planned adaptations to intervention content, packaging, and delivery, as well as training for those delivering the interventions) as a step in the Replicating Effective Programs framework used to guide the study, yet other aspects of protocol refinement have not yet been reported or viewed through a PRECIS-2 lens. In 2020, the Invested in Diabetes study - at about the midpoint of patient enrollment - was also faced with responding to the COVID-19 pandemic, requiring further protocol refinements to both the intervention and the research design. We used PRECIS-2 as a lens for describing and designing the process of protocol refinements to be more pragmatic and feasible within the context of COVID.
The purposes of this paper are to: 1) describe how we refined our study protocol using a participatory team science approach [25, 26] and the PRECIS-2 framework; 2) describe how we responded to changing context in implementation settings and due to COVID-19; 3) identify similarities and differences in perceptions of study pragmatism among different types of raters; and 4) discuss lessons learned and recommendations for future use of the PRECIS-2 system and related strategies to aid protocol refinement.
Intervention approach and methods
The Invested in Diabetes study is a cluster randomized pragmatic trial currently underway in 22 primary care practices in Colorado and Kansas. Its overall objective is to compare the effectiveness of patient-driven vs. standardized diabetes shared medical appointments (SMAs) on patient-centered outcomes, especially diabetes distress. Standardized SMAs were group visits led by a health educator with a set order of session topics. Patient-driven SMAs were delivered collaboratively by a multidisciplinary care team consisting of a health educator, behavioral health provider, and a peer mentor, and patients chose the topic order.
During the pre-implementation phase for this project, the research team engaged practice stakeholders and refined the study protocol as originally designed to better align with real world systems and processes of care (i.e., to make the protocol more pragmatic). The PRECIS-2 was used to structure this process of protocol refinement.
Our goal was to study diabetes SMAs across a variety of primary care settings (e.g., federally qualified health centers (FQHCs), internal and family medicine practices serving patients with commercial insurance, and community mental health centers) that had integrated behavioral health providers. Community mental health centers were ultimately unable to participate due to Colorado Medicaid reimbursement changes that impacted their ability to provide health education services. The final randomized study cohort included 12 FQHCs and 12 primary care practices from practice-based research networks.
Original study protocol
Invested in Diabetes used cluster randomization to assign practices to either patient-driven or standardized diabetes SMAs. In both SMA conditions, the curriculum used is Targeted Training in Illness Management (TTIM), which has been shown to be effective for patients with diabetes and serious mental illness [27, 28]. Existing practice staff deliver the assigned SMAs using study-provided TTIM materials. Eligible primary care practices must be one of the practice types above, have a roster of ≥150 patients with type 2 diabetes, and have integrated behavioral health providers to serve on the multidisciplinary care team. Patient eligibility criteria are broad: patients must be ≥18 years of age, have type 2 diabetes, not be pregnant or planning to become pregnant, and not be in hospice or within 6 months of the end of life. A mixed-methods evaluation includes quantitative (practice- and patient-level data) and qualitative (practice and patient interviews, observation) components. The primary patient-centered outcome, selected by patient stakeholders, is diabetes distress [29, 30]. Secondary outcomes include autonomy support, quality of life, diabetes self-management behaviors, clinical outcomes, patient reach and engagement, and practice-level value and sustainability. The primary source of clinical data is each practice’s electronic health record, using data collected during routine care. Further details on the study protocol have been previously published.
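The broad patient eligibility rules above can be expressed as a simple screening check. The following is an illustrative sketch only: the field names and the single end-of-life flag are hypothetical simplifications, not the study's actual screening logic.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    has_type2_diabetes: bool
    pregnant_or_planning: bool    # pregnant or planning to become pregnant
    hospice_or_end_of_life: bool  # in hospice or within ~6 months of end of life

def is_eligible(p: Patient) -> bool:
    """Apply the broad patient eligibility criteria described in the protocol."""
    return (
        p.age >= 18
        and p.has_type2_diabetes
        and not p.pregnant_or_planning
        and not p.hospice_or_end_of_life
    )

# A 52-year-old with type 2 diabetes and no exclusions is eligible;
# a 17-year-old is excluded on age alone.
print(is_eligible(Patient(52, True, False, False)))  # True
print(is_eligible(Patient(17, True, False, False)))  # False
```

Note how few exclusions there are relative to a typical efficacy trial, which is what makes this dimension rate as highly pragmatic on PRECIS-2.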
Stakeholder engagement in protocol refinement
During the pre-implementation phase (study year 1), we engaged clinic and patient stakeholders to refine the study protocol and address potential barriers to implementation of the protocol. Stakeholders included two members of practice leadership, two diabetes educators or nutritionists, one clinician, and five patient stakeholders who had long-standing type 2 diabetes or were family members of patients with diabetes. The study team discussed elements of the study protocol with interested practices, including those who ultimately enrolled in the study and those who did not enroll. Protocol elements discussed included practice and patient eligibility criteria, research data collection plans at both the patient and practice levels, and expectations for clinical sites to use usual practices and personnel for patient recruitment, delivery of care, and documentation and billing. These discussions occurred in a variety of forums, including between the investigators and practice leadership and clinical teams during recruitment and initial project start-up meetings, between practice facilitators and clinical teams during trainings and coaching sessions, and between investigators and practice and patient stakeholder representatives during research team and stakeholder engagement meetings.
Protocol refinement and operationalization
Based on stakeholder input, the research team refined the study protocol to increase practice willingness to participate and to ensure the protocol would reflect real-world practices and resources. The practice facilitators and investigators iteratively discussed potential changes with practice representatives until reaching satisfactory decisions. In refining the protocol, we identified aspects of the study design that were ultimately not pragmatic for the practices (e.g., patient-reported outcome measures). In Table 3, we summarize the protocol refinements by PRECIS-2 domain and describe how the study team, patient stakeholders, and participating practices have operationalized the protocol to fit their usual processes of care. In addition to the protocol refinements made in the pre-implementation phase in 2018, we also describe protocol refinements made during the implementation phase in response to the COVID-19 pandemic in 2020.
Pragmatic design ratings using the PRECIS-2
Approximately 9 months into implementation of the study, the study team reflected upon the extent to which aspects of the original study protocol were more or less pragmatic using the PRECIS-2 [23, 32]. PRECIS-2 ratings were completed by 21 individuals, including 6 clinician investigators, 6 non-clinician investigators, 6 research staff, and 3 practice-interfacing staff. The research team was asked to rate the original Invested in Diabetes research protocol according to how pragmatic it was on each of the PRECIS-2 domains. Following a one-hour presentation by the first and senior authors on PRECIS-2, including examples of common issues in rating PRECIS-2 dimensions and questions and answers, research team members (with the exception of patient representatives) were given relevant excerpts from the funded study proposal and a spreadsheet defining each of the PRECIS-2 domains and rating criteria. They independently rated each domain on a scale of 1 (very explanatory) to 5 (very pragmatic) based on their own perspective of the protocol’s consistency with PRECIS-2 definitions with reference to usual care systems and processes. Ratings were conducted between June and August 2019. We report ratings by different subgroups, but efforts were not made to resolve discrepancies.
We used descriptive statistics to summarize mean ratings for each domain overall and by rater type and plotted the mean responses for each domain on the PRECIS-2 spider plot. To address our third study aim, we compared ratings by those who worked directly with the practices to ratings by research team members who were not directly engaged with practices. Our hypothesis was that those who worked directly with practices would rate the protocol as less pragmatic.
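The descriptive analysis above can be sketched in a few lines: pool each domain's ratings overall and compute means by rater group, which are then the values plotted on the spider plot axes. The ratings below are made-up illustrative numbers, not the study's actual data, and the group labels are simplified assumptions.

```python
from statistics import mean

# Illustrative PRECIS-2 ratings (1 = very explanatory ... 5 = very pragmatic).
# These values are invented for illustration; they are not the study's data.
ratings = {
    "investigator":    {"Eligibility": [5, 4, 5], "Organization": [3, 3, 2]},
    "practice_facing": {"Eligibility": [4, 4],    "Organization": [3, 2]},
}

def domain_means(ratings_by_group):
    """Mean rating per PRECIS-2 domain, by rater group and pooled overall."""
    out = {}
    for group, by_domain in ratings_by_group.items():
        out[group] = {d: round(mean(v), 2) for d, v in by_domain.items()}
    # The overall mean pools every rater's scores for each domain.
    pooled = {}
    for by_domain in ratings_by_group.values():
        for d, v in by_domain.items():
            pooled.setdefault(d, []).extend(v)
    out["overall"] = {d: round(mean(v), 2) for d, v in pooled.items()}
    return out

means = domain_means(ratings)
print(means["overall"]["Eligibility"])   # 4.4
print(means["overall"]["Organization"])  # 2.6
```

The per-group means support the planned comparison of practice-facing vs. research-only raters; the overall means are what a radar/spider chart (one axis per domain) would display.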
Participating practice characteristics
Table 1 summarizes the characteristics of participating practices as previously described. As can be seen, there was a diversity of practice types and sizes, and many had substantial minority or uninsured patient populations.
Figure 1 shows the spider plot for overall study team PRECIS-2 domain ratings for the originally proposed study protocol. Overall, the average rating for all domains was 2.92 or higher, showing that the study team generally perceived the study to be pragmatic. The most pragmatic domains (the most patient-centered and consistent with usual processes of care) included: 1. Primary analysis (intent-to-treat analysis using all available survey and electronic health records data); 2. Setting (primary care practices with ≥150 adult patients with type 2 diabetes and existing integrated behavioral health); 3. Participant eligibility (adults with type 2 diabetes, with the exception of those pregnant, with cognitive impairment, or with limited life expectancy); and 4. Flexibility - Adherence (practices use standard processes such as reminder calls to encourage patient attendance). All of these received average ratings exceeding 4.33.
The PRECIS-2 domains of 5. Recruitment (practices invite patients to participate in SMAs as part of their regular health care, using existing resources and processes of care); 6. Primary outcome (diabetes distress, selected by the project’s patient stakeholder partners); 7. Follow-up (patient-reported data collected at baseline and follow-up as part of SMAs; clinical outcomes are secondary use from the EHR); and 8. Flexibility - delivery (core components of SMA delivery established by protocol with required use of the TTIM curriculum) were rated moderately pragmatic, with average PRECIS-2 ratings between 3.40 and 3.77. Finally, 9. Organization (health education, behavioral health, scheduling resources available plus physical space to hold groups) was consistently rated as the least pragmatic dimension (average score of 2.92), possibly because of the low level of resources for conducting research in the participating settings.
Table 2 shows domain ratings by different rater types. There was generally high agreement on the PRECIS-2 ratings, especially on Flexibility-Delivery, Organization, Follow-up, Outcomes and Analyses, with differences among rater groups of .5 or less on mean ratings for these dimensions. The largest differences among types of raters were on the dimensions of Eligibility, Setting, Recruitment and Flexibility-Adherence, with researchers rating the design as more pragmatic than other groups. All rater types agreed that the organization dimension was the least pragmatic.
Protocol refinement to fit usual processes and systems of care
During the protocol refinement period, the study team, clinicians and delivery staff, and practice representatives operationalized and/or refined the study protocol to best fit the usual processes and systems of care while still fulfilling study objectives. Table 3 presents the protocol refinements by PRECIS-2 domains and accompanying rationale. Without inherently changing the protocol, we specified and/or adopted detailed procedures and materials for outcomes and follow-up data collection (EHR data extracts; patient-reported outcome (PRO) measures, scoring and tracking; attendance; fidelity; cost collection; and patient and practice interviews).
Electronic health records (EHR) extracts
In partnership with three participating sites (two with a more developed data and reporting infrastructure and one with a less developed system), we iteratively developed detailed specifications for EHR data extraction (Additional file 1). We created an easy-to-use protocol and template/mock report tables based on practice input. We made some data elements optional (e.g., foot and eye exams), based on the capability of the local EHR (refining our “must haves”). The optional data elements were those that would only be needed to provide sites with feedback reports on the degree to which SMAs influenced their quality metrics, and did not affect our ability to test a priori study hypotheses. Data extraction instructions addressed the selection criteria for patients to include, the time frame for the data pull, and specifics from four data categories: demographics, medications, vital signs and laboratory results, and patient visits. The extraction guide noted details such as the number of diagnosis codes to include, the types of labs to include, and how to report patient insurance status (e.g., patient level vs. visit level). For all sites, we were willing to accept more data than requested and selected only those elements needed.
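A selection step of the kind such an extraction guide describes might look roughly as follows. This is a minimal sketch under stated assumptions: the record field names, the pull window dates, and the use of the ICD-10 E11 prefix for type 2 diabetes are illustrative choices, not the study's actual specification (which is in Additional file 1).

```python
from datetime import date

# Illustrative ICD-10-CM category prefix for type 2 diabetes mellitus.
# A real extraction spec would enumerate exact codes and per-site options.
T2D_PREFIX = "E11"
PULL_START, PULL_END = date(2019, 1, 1), date(2020, 12, 31)

def select_records(records):
    """Keep visit records for adult patients with a type 2 diabetes
    diagnosis code whose visit date falls inside the pull window."""
    kept = []
    for r in records:
        in_window = PULL_START <= r["visit_date"] <= PULL_END
        has_t2d = any(code.startswith(T2D_PREFIX) for code in r["dx_codes"])
        if in_window and has_t2d and r["age"] >= 18:
            kept.append(r)
    return kept

sample = [
    {"visit_date": date(2019, 6, 1), "dx_codes": ["E11.9"], "age": 60},
    {"visit_date": date(2018, 6, 1), "dx_codes": ["E11.9"], "age": 60},  # outside window
    {"visit_date": date(2019, 6, 1), "dx_codes": ["I10"],   "age": 60},  # no T2D code
]
print(len(select_records(sample)))  # 1
```

Accepting a superset of the requested data and filtering on the research side, as described above, keeps the burden on practice IT staff low.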
The study protocol called for observation of 10% of SMAs to assess fidelity to core components (use of the TTIM curriculum and delivery according to assigned study arm) and the conceptual model (described in more detail elsewhere). We ultimately decided it was feasible to observe 5–8% of visits, or one session per practice per quarter for 6 quarters (6 sessions per practice). In the abstract, one might think that the initial goal of observing 10% of sessions was pragmatic, but even this proved challenging in these settings. We adapted a fidelity checklist (Additional file 2) from previous studies using the TTIM curriculum to capture facilitator role, timing, curriculum content, group engagement, and facilitation style. Seven study team members completed a two-hour training, then debriefed after completing their first observation.
Patient-reported outcomes (PRO) measures and scoring
The patient stakeholders and investigators reviewed the planned PRO measures from the original proposal, considering the extent to which proposed measures were clinically relevant and actionable as part of the intervention for either patients or the health care team. Guidance from the literature on pragmatic trial design, and specifically from our Patient-Centered Outcomes Research Institute funder, recommends that PRO measures should not be for research purposes only. As a result, the study team eliminated the originally planned quality of life measure, as its clinical utility was unclear. However, we added the Perceived Confidence Scale, which more directly matches the skills-building nature of the curriculum and was endorsed as important by patient stakeholders. We designed an easy-to-understand patient survey document and scoring instructions for practices, including how to handle missing data.
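To illustrate the kind of missing-data rule such scoring instructions might contain, here is a hedged sketch: a scale score computed as the mean of answered items, treated as missing when fewer than half the items were answered. The half-the-items threshold is a common scoring convention used here for illustration, not necessarily the rule adopted in this study.

```python
def score_scale(responses, n_items):
    """Mean of answered items, rounded to 2 decimals; None (missing) if
    fewer than half the scale's items were answered.

    `responses` maps item number -> numeric answer; skipped items are absent.
    """
    answered = list(responses.values())
    if len(answered) * 2 < n_items:  # fewer than half answered -> missing
        return None
    return round(sum(answered) / len(answered), 2)

# A 4-item scale with one item skipped still yields a score...
print(score_scale({1: 3, 2: 4, 4: 5}, n_items=4))  # 4.0
# ...but only one of four items answered is treated as missing.
print(score_scale({1: 3}, n_items=4))              # None
```

A rule like this lets practice staff score surveys on the spot for clinical use, which keeps the PROs from being "research-only" measures.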
Attendance and PRO tracking
As a pragmatic trial, the majority of measures were designed to be administered as part of clinical care by regular practice staff rather than research personnel. We developed a spreadsheet for practices to capture both session data (presenter, date, provider who was available, TTIM content covered or missing, added content, and peer mentor if applicable) and patient data (attendance, PRO scores). To reduce the burden of data collection, practices were instructed to submit the tracking sheet quarterly, but were asked to report monthly attendance via email to fulfill sponsor obligations.
In addition to fully operationalizing the data collection processes, there was considerable work for practices to determine how to operationalize patient recruitment. Practices found different ways to recruit, such as: “warm hand-offs” from providers; systematically reviewing lists of patients with suboptimal diabetes management; using social media to advertise; word of mouth, particularly for practices with peer mentors; and advertising around the clinic using fliers, handouts, and on their waiting room TVs. It was an important aspect of the protocol that practices use regular systems (e.g., registries) and processes (e.g., provider referrals, direct calls to patients) to invite patients to participate in groups as part of their regular diabetes care – rather than for a research study. As a result, much of the patient-level data collection procedures needed to reflect secondary use, such that the data were primarily for clinical or operations use. This is a common approach for cluster randomized trials, and a waiver of patient consent was approved by the Colorado Multiple Institutional Review Board.
Dynamic changes in the context of COVID-19
The COVID-19 pandemic significantly impacted the delivery of primary care services. We aimed to be extremely flexible with regard to changes in practice delivery of the SMAs in response to COVID-19, without compromising the integrity of the research. A number of practices that had telehealth capabilities switched to virtual delivery of SMAs (vSMAs). The main aspects of delivery that practices had license to modify were the adoption of vSMAs and electronic collection of PROs via online surveys.
Adaptations to the delivery of SMAs for a virtual environment included an introduction to the virtual environment for patients, new “ground rules” required for the online environment, and resources to help with problem-solving technical difficulties. Additionally, practices adapted TTIM materials to better fit the online format, e.g., practices increased use of graphics and images and made materials less text based. Other adaptations included sending materials ahead of time by email or mail to SMA participants, scheduling sessions for shorter amounts of time and smaller cohorts, scheduling virtual prescribing provider visits separately rather than during the vSMA session, and moving patient survey data collection to virtual formats or by phone.
The process of transitioning to vSMAs and electronic PROs was captured in field notes by practice facilitators. As of the writing of this paper, twelve practices had adopted vSMAs, while others continued in-person (depending on local infection rates) or postponed SMAs during the pandemic.
We found it feasible to utilize the PRECIS-2 system to characterize adaptations to our comparative effectiveness trial. With some exceptions, a wide variety of stakeholders were able to understand and make ratings on the PRECIS-2 domains, with generally high agreement across types of raters and good differentiation between PRECIS-2 scores on those domains intended to be very pragmatic vs. those not. Based on these results, we recommend further use of PRECIS-2 and the newer PRECIS-2-PS system; below we discuss lessons learned as well as recommendations for future research.
Invested in Diabetes was a stakeholder-engaged trial with an overarching goal to optimize pragmatism across the PRECIS-2 factors. The intent was to design a pragmatic study so that the results would be broadly applicable and thus support SMA (and vSMA) adoption, implementation, and sustainability across diverse primary care settings and patient populations. An iterative, multi-stakeholder team approach was used for protocol refinement and operationalization during the first year of the study. We subsequently applied the same protocol refinement process in response to the COVID-19 pandemic. The protocol refinement and operationalization process was then reflected upon, organized, and reported using PRECIS-2.
Importance of protocol refinements to ensure ongoing practice engagement
Protocol refinements were designed to make the study more feasible to conduct in busy and low-resource practice settings. In particular, as the majority of data collection was conducted by regular practice staff, we needed to ensure the evaluation was as minimally burdensome as possible. Practice staff needed more assistance than originally anticipated with regard to recruitment, and needed considerable flexibility in response to COVID-19. This process also helped to retain practice participation in the study. As a cluster randomized trial in which practices are the primary unit of analysis and the source of statistical power, it is critical to prioritize practice needs and prevent their withdrawal.
Value of the PRECIS-2 framework as a tool for collaborative team science
PRECIS-2 helped the team to collaboratively and transparently develop a shared ‘mental model’ of what we were trying to accomplish. It helped to facilitate understanding among research-focused investigators (who prioritize trial rigor) and the practice-focused implementation team (who prioritize practice relevance). The process of a brief initial introductory ‘training’ session on PRECIS-2, followed by independent ratings of PRECIS-2 dimensions, and then coming together to reflect on results was feasible and efficient.
Overall, there was high agreement on PRECIS-2 scores across types of raters as summarized in Table 2. This level of agreement was achieved even though most team members had not worked with the PRECIS-2 before and there was only minimal training. Our hypothesis that the implementation team members would rate the study design as less pragmatic was partially supported. Non-investigator raters scored the study as less pragmatic on the dimensions of recruitment, eligibility, organization, analysis and flexibility - adherence, but actually provided slightly more pragmatic ratings for setting, flexibility - delivery, follow-up, and outcomes. Overall, the biggest difference in ratings between groups was still under 1 full point.
There was good differentiation between PRECIS-2 scores on dimensions we wanted to be more pragmatic vs. more explanatory. In particular, the study design was rated as very pragmatic on setting, eligibility, flexibility - adherence, and analyses, even before the protocol refinement process (scores over 4.38). The outcomes were rated as only moderately pragmatic (average score of 3.75): although the specific outcomes selected were very patient-centered, the scale and scope of the PROs and the resources required to collect them were seen as burdensome to practices and unlikely to be used for clinical care outside the study context.
Despite the general applicability and usefulness of the PRECIS-2 framework, there were two issues that created some confusion. First, multiple levels in the study design and the definition of usual care were at times misunderstood. In many pragmatic studies – especially those with cluster-level randomization and implementation strategies – there are multiple levels of recruitment (e.g., system, practice, staff and patient) and outcomes to consider. When questions arose about how to approach ratings involving multiple levels, for example on recruitment of patients vs. practices vs. clinical staff, we instructed raters to focus on the patient level. This issue should be ameliorated with the newer PRECIS-2-PS, which is focused on the ‘provider delivery level’ rather than patient level design issues. Second, there was some confusion among the team between what is a textbook “pragmatic” study design element (i.e., the same as what happens in usual care) and ease of application in participating practices. Many things that are consistent with real-world workflows (e.g., needing to call patients to enroll them in SMAs) are not easy. But this does not make the study non-pragmatic. It may actually make the study more pragmatic, because these methods are more similar to those necessary in ‘real world care settings’. It would have been easier and less burdensome on participating practices if research staff conducted recruitment, but this would have been less pragmatic.
Lessons learned and directions for future research
It was possible after brief training to have staff rate the nine PRECIS-2 dimensions. Raters new to the PRECIS-2 system reported that it was helpful to have examples and to proactively anticipate and address frequently asked questions about ratings. The PRECIS-2 helped the team to be purposeful about where it wanted the study to be more pragmatic. The PRECIS-2 exercise also helped the research team conceptualize and make adaptations necessary to address COVID-19. It would be of interest to see if other research teams have found PRECIS-2 similarly useful to address research challenges due to COVID-19 and to other rapid or major changes in context. Finally, future research should investigate the extent to which different types of raters continue to evaluate PRECIS-2 dimensions similarly over the life course of a pragmatic trial, as well as whether overall pragmatism ratings increase or decrease.
This research has both strengths and limitations. Strengths include the team science approach, stakeholder engagement, and the relatively large number and types of raters who were involved in rating the study design using the PRECIS-2. The concrete illustrations of how to purposefully make a study more or less pragmatic following initial grant submission and in response to major disruptions such as the COVID-19 pandemic should be of use to other research teams. A caveat is that it may have been more challenging to use PRECIS-2 to rate this study where the comparator was another active SMA intervention rather than usual care. Limitations include a) the study did not involve patients or health system leaders in rating the study design (although they were involved in the protocol refinement process); and b) use of PRECIS-2 was not contrasted with alternative approaches to guide pragmatic design choices.
In conclusion, we encourage greater use of PRECIS-2, and the new PRECIS-2-PS for implementation studies, as well as other design and reporting frameworks (e.g., StaRI) as these implementation science and external validity issues are often neglected. We think that such iterative, multi-stakeholder-engaged protocol refinement processes can a) enhance transparent reporting and help address the scientific replication crisis [1, 2]; and b) provide proactive guidance in the planning stage as well as for later protocol adaptations to address fit to context.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
References
Ioannidis JP. How to make more published research true. Revista Cubana de Información en Ciencias de la Salud (ACIMED). 2015;26(2):187–200.
Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124. https://doi.org/10.1371/journal.pmed.0020124.
National Institutes of Health. Reviewer Guidance on Rigor and Transparency: Research Project Grant and Mentored Career Development Applications https://grants.nih.gov/grants/peer/guidelines_general/Reviewer_Guidance_on_Rigor_and_Transparency.pdf. Published 2019. Updated March 18, 2019. Accessed 23 Nov 2020.
Glasgow RE, Huebschmann AG, Brownson RC. Expanding the CONSORT figure: increasing transparency in reporting on external validity. Am J Prev Med. 2018;55(3):422–30. https://doi.org/10.1016/j.amepre.2018.04.044.
Green LW, Nasser M. Furthering dissemination and implementation research. Dissemination and implementation research in health: Translating science to practice; 2017. p. 301.
Kessler R, Glasgow RE. A proposal to speed translation of healthcare research into practice: dramatic change is needed. Am J Prev Med. 2011;40(6):637–44. https://doi.org/10.1016/j.amepre.2011.02.023.
Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29(1):126–53. https://doi.org/10.1177/0163278705284445.
Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Ann Intern Med. 2010;152(11):726–32. https://doi.org/10.7326/0003-4819-152-11-201006010-00232.
Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390. https://doi.org/10.1136/bmj.a2390.
Campbell MK, Piaggio G, Elbourne DR, Altman DG; CONSORT Group. Consort 2010 statement: extension to cluster randomised trials. BMJ. 2012;345:e5661. https://doi.org/10.1136/bmj.e5661.
Pinnock H, Barwick M, Carpenter CR, et al. Standards for reporting implementation studies (StaRI) statement. BMJ. 2017;356:i6795.
Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147.
Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, Altman DG, et al. A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J Clin Epidemiol. 2009;62(5):464–75. https://doi.org/10.1016/j.jclinepi.2008.12.011.
Norton WE, Loudon K, Chambers DA, Zwarenstein M. Designing provider-focused implementation trials with purpose and intent: introducing the PRECIS-2-PS tool. Implement Sci. 2021;16(1):7. https://doi.org/10.1186/s13012-020-01075-y.
Patient Centered Outcomes Research Institute. PCORI Methodology Standards. https://www.pcori.org/research-results/about-our-research/research-methodology/pcori-methodology-standards. Published 2019. Updated Feb 26, 2019. Accessed 3 Dec 2020.
Patsopoulos NA. A pragmatic view on pragmatic trials. Dialogues Clin Neurosci. 2011;13(2):217–24. https://doi.org/10.31887/DCNS.2011.13.2/npatsopoulos.
Pawson R. Pragmatic trials and implementation science: grounds for divorce? BMC Med Res Methodol. 2019;19(1):176. https://doi.org/10.1186/s12874-019-0814-9.
Gaglio B, Phillips SM, Heurtin-Roberts S, Sanchez MA, Glasgow RE. How pragmatic is it? Lessons learned using PRECIS and RE-AIM for determining pragmatic characteristics of research. Implement Sci. 2014;9(1):96. https://doi.org/10.1186/s13012-014-0096-x.
University of Dundee Health Informatics Centre. PRECIS-2. http://www.precis-2.org/. Published 2016. Accessed 25 Nov 2020.
Johnson KE, Tachibana C, Coronado GD, Dember LM, Glasgow RE, Huang SS, et al. A guide to research partnerships for pragmatic clinical trials. BMJ. 2014;349:g6826. https://doi.org/10.1136/bmj.g6826.
Glasgow RE. What does it mean to be pragmatic? Pragmatic methods, measures, and models to facilitate research translation. Health Educ Behav. 2013;40(3):257–65. https://doi.org/10.1177/1090198113486805.
Luoma KA, Leavitt IM, Marrs JC, Nederveld AL, Regensteiner JG, Dunn AL, et al. How can clinical practices pragmatically increase physical activity for patients with type 2 diabetes? A systematic review. Transl Behav Med. 2017;7(4):751–72. https://doi.org/10.1007/s13142-017-0502-4.
Neta G, Johnson KE. Informing real-world practice with real-world evidence: the value of PRECIS-2. BMC Med. 2018;16(1):1–3. https://doi.org/10.1186/s12916-018-1071-1.
Kwan BM, Rementer J, Richie N, et al. Adapting diabetes shared medical appointments to fit context for practice-based research (PBR). J Am Board Fam Med. 2020;33(5):716–27. https://doi.org/10.3122/jabfm.2020.05.200049.
Cooke NJ, Hilton ML. Enhancing the effectiveness of team science. Washington, DC: National Academies Press; 2015.
Tebes JK. Team science, justice, and the co-production of knowledge. Am J Community Psychol. 2018;62(1–2):13–22. https://doi.org/10.1002/ajcp.12252.
Sajatovic M, Gunzler DD, Kanuch SW, Cassidy KA, Tatsuoka C, McCormick R, et al. A 60-week prospective RCT of a self-management intervention for individuals with serious mental illness and diabetes mellitus. Psychiatr Serv. 2017;68(9):883–90. https://doi.org/10.1176/appi.ps.201600377.
Sajatovic M, Colon-Zimmermann K, Kahriman M, Fuentes-Casiano E, Liu H, Tatsuoka C, et al. A 6-month prospective randomized controlled trial of remotely delivered group format epilepsy self-management versus waitlist control for high-risk people with epilepsy. Epilepsia. 2018;59(9):1684–95. https://doi.org/10.1111/epi.14527.
Fisher L, Mullan JT, Arean P, Glasgow RE, Hessler D, Masharani U. Diabetes distress but not clinical depression or depressive symptoms is associated with glycemic control in both cross-sectional and longitudinal analyses. Diabetes Care. 2010;33(1):23–8. https://doi.org/10.2337/dc09-1238.
Fisher L, Glasgow RE, Mullan JT, Skaff MM, Polonsky WH. Development of a brief diabetes distress screening instrument. Ann Fam Med. 2008;6(3):246–52. https://doi.org/10.1370/afm.842.
Kwan BM, Dickinson LM, Glasgow RE, Sajatovic M, Gritz M, Holtrop JS, et al. The invested in diabetes study protocol: a cluster randomized pragmatic trial comparing standardized and patient-driven diabetes shared medical appointments. Trials. 2020;21(1):65. https://doi.org/10.1186/s13063-019-3938-7.
Johnson KE, Neta G, Dember LM, Coronado GD, Suls J, Chambers DA, et al. Use of PRECIS ratings in the National Institutes of Health (NIH) health care systems research Collaboratory. Trials. 2016;17(1):32. https://doi.org/10.1186/s13063-016-1158-y.
Williams GC, Freedman ZR, Deci EL. Supporting autonomy to motivate patients with diabetes for glucose control. Diabetes Care. 1998;21(10):1644–51. https://doi.org/10.2337/diacare.21.10.1644.
Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117. https://doi.org/10.1186/1748-5908-8-117.
Acknowledgements
We would like to thank the entire Invested in Diabetes study team, as well as our patient and practice partners.
Funding
This research was funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (IHS-1609-36322). The views, statements, and opinions presented in this work are solely the responsibility of the authors and do not necessarily represent the views of PCORI, its Board of Governors or Methodology Committee. The lead author’s time was also partially supported by NCI grant #P50 CA244688.
Ethics approval and consent to participate
This study was reviewed and approved by the Colorado Multiple Institutional Review Board (COMIRB) under protocol 17–2377. All methods were conducted in accordance with relevant guidelines and regulations. Respondents to the PRECIS-2 survey received verbal/written consent information and provided informed consent to participate.
Competing interests
The authors declare that they have no competing interests.
Cite this article
Glasgow, R.E., Gurfinkel, D., Waxmonsky, J. et al. Protocol refinement for a diabetes pragmatic trial using the PRECIS-2 framework. BMC Health Serv Res 21, 1039 (2021). https://doi.org/10.1186/s12913-021-07084-x