Unresolved medication discrepancies during hospitalization can contribute to adverse drug events, resulting in patient harm. Discrepancies can be reduced by performing medication reconciliation; however, effective implementation of medication reconciliation has proven to be challenging. The goals of the Multi-Center Medication Reconciliation Quality Improvement Study (MARQUIS) are to operationalize best practices for inpatient medication reconciliation, test their effect on potentially harmful unintentional medication discrepancies, and understand barriers and facilitators of successful implementation.
Six U.S. hospitals are participating in this quality improvement mentored implementation study. Each hospital has collected baseline data on the primary outcome: the number of potentially harmful unintentional medication discrepancies per patient, as determined by a trained on-site pharmacist taking a “gold standard” medication history. With the guidance of their mentors, each site has also begun to implement one or more of 11 best practices to improve medication reconciliation. To understand the effect of the implemented interventions on hospital staff and culture, we are performing a mixed methods program evaluation, including surveys, interviews, and focus groups with front-line staff and hospital leaders.
At baseline, the number of unintentional medication discrepancies in admission and discharge orders per patient varies by site from 2.35 to 4.67 (mean = 3.35). Most discrepancies are due to history errors (mean 2.12 per patient) as opposed to reconciliation errors (mean 1.23 per patient). Potentially harmful medication discrepancies average 0.45 per patient and vary by site from 0.13 to 0.82 per patient. We discuss several barriers to implementation encountered thus far. Ultimately, we anticipate that MARQUIS tools and lessons learned have the potential to decrease medication discrepancies and improve patient outcomes.
One of the most prevalent hazards facing hospitalized patients is unintentional medication discrepancies, i.e., unexplained differences in documented medication regimens across different sites of care [1, 2]. Unresolved medication discrepancies can contribute to adverse drug events (ADEs), resulting in patient harm [3, 4]. Nearly two-thirds of inpatients have at least one unexplained discrepancy in their admission medication history, and some studies found up to 3 medication discrepancies per patient [5–7]. Such medication discrepancies are caused either by history errors (i.e., errors in determining a patient’s preadmission medication list) or by reconciliation errors (i.e., errors in orders despite accurate medication histories) [3, 4].
One way to minimize medication discrepancies and improve patient safety is to perform high-quality medication reconciliation, defined as the process of identifying the most accurate list of all medications a patient is taking and using this list to provide correct medications for patients anywhere within the healthcare system [8, 9]. Since 2005, The Joint Commission (TJC) has required U.S. hospitals to conduct medication reconciliation on admission, upon transfer, and at discharge. Additionally, the World Health Organization has encouraged all member states to implement medication reconciliation at care transitions. When tested, hospital-based medication reconciliation interventions have consistently demonstrated reductions in medication discrepancies, though effects on more distal outcomes such as readmission have been less consistent and limited by study size [12, 13]. Yet one study at two large urban academic hospitals found that general medical inpatients averaged more than one potentially harmful discrepancy in either admission or discharge medication orders despite documented completion of medication reconciliation.
Though medication reconciliation practices are required at care transitions throughout hospitalization, implementation has been challenging for many hospitals because it often involves a dramatic change in work processes and additional tasks for busy clinicians. Furthermore, the implementation of medication reconciliation interventions varies widely across hospitals, and hospitals need clearer guidance on which interventions are more likely to be successful in their local environment. Moreover, it has been relatively easy for hospitals to document compliance with medication reconciliation processes to meet national and international standards without demonstrating that medication safety has actually improved. To identify and address the barriers to implementing medication reconciliation, an Agency for Healthcare Research and Quality (AHRQ)-funded conference organized by the Society of Hospital Medicine (SHM) in 2009 brought together 36 key stakeholders from 20 organizations representing healthcare policy, patient safety, regulatory, technology, and consumer and medical professional groups. The conference yielded a White Paper with recommendations, including a call for further research. To address the latter, SHM subsequently received funding from AHRQ to conduct the Multi-Center Medication Reconciliation Quality Improvement Study (MARQUIS; clinicaltrials.gov identifier NCT01337063).
The specific aims of MARQUIS are to:
Develop a toolkit consolidating the best practices for medication reconciliation, based on the strongest evidence available.
Conduct a multi-site mentored quality improvement (QI) study in which each site adapts the tools for its own environment and implements them.
Assess the effects of medication reconciliation QI interventions on unintentional medication discrepancies with potential for patient harm.
Conduct rigorous program evaluation to determine the most important components of a medication reconciliation program and how best to implement them.
This paper describes the design and early methodological lessons learned from MARQUIS, an example of real-world, rigorous, mixed methods QI research. It is our hope that the design of the study, rationale for that design, and early experiences will be useful for other medication safety efforts, as well as for QI and patient safety research in general.
The MARQUIS conceptual framework is based on Brown and Lilford’s model for evaluating patient safety interventions, which is an adaptation of Donabedian’s “structure—process—outcome” model [17–21]. The model distinguishes interventions that focus on management processes (e.g., provider training) from those that focus on clinical processes (e.g., tools supporting medication list comparisons across care transitions). MARQUIS involves both types of interventions, focusing primarily on the latter. To better understand why interventions succeed or fail, we assess contextual factors (i.e., micro- and macro-organizational structure and existing management processes) and intervening variables (e.g., team climate, safety culture, and knowledge of medication safety principles, Figure 1). Also important to understanding whether interventions succeed or fail is intervention fidelity, or the faithfulness with which the intervention is performed, which can be influenced by how usable the tools are and the degree of training and support given to front-line clinicians (i.e., items at the level of intervention in Figure 1). These are measured as an “intervention score” updated monthly to assess what toolkit components have been adopted and how the state of medication reconciliation changes over time (Table 1). Along with the MARQUIS intervention itself, each of the contextual factors may affect the patient-level outcomes being assessed, such as unintentional medication discrepancies, patient satisfaction, and healthcare utilization (i.e., items to the right of the intervention, Figure 1).
The MARQUIS toolkit, described elsewhere, synthesizes best practices in medication reconciliation and provides aids to facilitate their implementation. The toolkit components were informed by a systematic review of medication reconciliation interventions, the AHRQ-funded conference of stakeholders, and the work of the MARQUIS investigators and advisory board.
Each toolkit component is framed as a standardized functional goal (e.g., “Improve access to preadmission medication sources”). This approach is ideal for complex QI interventions, allowing sites to: 1) integrate intervention components with their baseline medication reconciliation efforts, information system capabilities, and organizational structures; and 2) add, customize, and iteratively refine the toolkit components and their implementation over time. This approach also improves generalizability, allowing other organizations to apply the lessons learned regardless of their culture or unique circumstances.
While recognizing the importance of flexibility, it was nevertheless important to have some common elements across sites. Thus, each site prioritized the implementation of certain toolkit components based on their potential for improvement and effort required. These included provider education on medication history taking, patient education and teach-back at discharge, patient risk stratification, and more intensive medication reconciliation efforts in high-risk patients.
Six U.S. sites are participating in this study: 3 academic medical centers, 2 community hospitals, and 1 Veterans Affairs hospital. We purposely chose sites that vary in size, academic affiliation, geographic location, and use of health information technology (Table 2). However, all sites had several common features: 1) medication reconciliation was a priority; 2) hospital leadership was committed to making further improvements in the process; 3) an active hospitalist group was engaged in QI; 4) a suitable hospitalist and/or pharmacist clinical champion was available at each site; and 5) each site planned to use primarily its own resources to pursue this effort.
Patient subjects are drawn from the medical and surgical inpatient, non-critical care units of each site, and are included if hospitalized long enough for a “gold-standard” medication history to be obtained by a study pharmacist (i.e., generally more than 24 hours). Institutional Review Board (IRB) approval was obtained from the Partners Healthcare System. In addition, each study site’s IRB reviewed the study: four considered it an exempt QI project, while two sites required informed consent of patients prior to participation. Informed consent has been incorporated into the data collection process at these sites.
Mentored local implementation
MARQUIS utilizes SHM’s mentored implementation approach, providing each site with a hospitalist mentor to facilitate toolkit implementation. Each mentor has QI expertise and performs distance mentoring through monthly calls with the study site’s mentee/clinical champion, based upon the MARQUIS Implementation Guide, which explains how to use the toolkit. Each study site also receives two visits from the mentor, important from a QI standpoint (e.g., to maintain institutional support and enthusiasm among the local QI team and better understand local practices) and from a research standpoint (e.g., to assess intervention fidelity and other barriers and facilitators of implementation). Additionally, SHM provides sites with an assigned lead project manager and research assistants located at SHM headquarters to assist with monitoring progress and with collecting and analyzing data. Through the mentored implementation infrastructure, MARQUIS is affordable, adaptable, generalizable, scalable, and feasible for wide dissemination.
At each study site, a local QI team, led by the mentee/clinical champion, conducts monthly meetings to oversee intervention implementation and data collection, as well as to address protocol questions and determine the effectiveness of the interventions. Sites can access a central website with additional resources and a listserv. The monthly conference calls with their mentor and ad lib email communications promote a consistent approach across sites.
Study outcomes are assessed from 6 months pre-intervention through 21 months post-intervention (Table 3). The primary outcome is the number of potentially harmful unintentional medication discrepancies per patient, determined by a trained on-site pharmacist taking a “gold standard” medication history on a random sample of patients (20–25 per month). This history is then compared to the primary team’s medication history and to admission and discharge orders. For discrepancies in admission or discharge orders not caused by history errors, the pharmacist reviews the medical record for a clinical explanation, and if necessary, talks with the medical team. This allows sites to distinguish unintentional medication discrepancies (i.e., due to reconciliation errors) from intentional medication changes. Physician adjudicators, blinded to the status of intervention implementation, record and categorize unintentional medication discrepancies with respect to: 1) timing (admission vs. discharge); 2) type (omission, additional medication, change in dose, route, frequency, or formulation, or other); 3) reason (history vs. reconciliation error); 4) potential for harm; and, 5) potential severity.
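For illustration only, the adjudication taxonomy above can be captured in a small data model. All class and field names here are our own sketch, not MARQUIS software, and the severity labels are hypothetical examples:

```python
from dataclasses import dataclass
from enum import Enum

class Timing(Enum):
    ADMISSION = "admission"
    DISCHARGE = "discharge"

class DiscrepancyType(Enum):
    OMISSION = "omission"
    ADDITIONAL_MEDICATION = "additional medication"
    DOSE_CHANGE = "change in dose"
    ROUTE_CHANGE = "change in route"
    FREQUENCY_CHANGE = "change in frequency"
    FORMULATION_CHANGE = "change in formulation"
    OTHER = "other"

class Reason(Enum):
    HISTORY_ERROR = "history error"            # error in the preadmission medication list
    RECONCILIATION_ERROR = "reconciliation error"  # order error despite an accurate history

@dataclass
class AdjudicatedDiscrepancy:
    """One unintentional discrepancy as categorized by a blinded physician adjudicator."""
    medication: str
    timing: Timing
    type: DiscrepancyType
    reason: Reason
    potential_for_harm: bool
    potential_severity: str  # illustrative free-text label, e.g. "significant"

# Example record: an omitted home medication missed during history-taking
d = AdjudicatedDiscrepancy("lisinopril 10 mg daily", Timing.ADMISSION,
                           DiscrepancyType.OMISSION, Reason.HISTORY_ERROR,
                           potential_for_harm=True, potential_severity="significant")
```

A structure like this makes the five adjudication axes explicit and machine-countable, which is what the primary outcome (discrepancies per patient) ultimately requires.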
Secondary outcomes include the accuracy and timeliness of the medical team’s documented admission medication history, absence of discharge reconciliation errors, unplanned healthcare utilization (i.e., readmission or emergency department use), and patient satisfaction. Unplanned healthcare utilization at the same site within 30 days of discharge is determined from hospital records on all eligible patients. We examine patient satisfaction scores on the global satisfaction and medication-specific dimensions of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, aggregated by service, unit, and time period as data become available.
Contextual factors are measured using surveys of providers directly involved in the medication reconciliation process. Additionally, we measure intervention fidelity using direct observation during site visits, and evaluate the training, support, and other steps offered to improve fidelity. Importantly, the extent of implementation is quantified as an intervention “score” for each toolkit component (Table 1) and factored into the analysis. The score is completed by the clinical champion at each site, informed when necessary by surveys administered to front-line clinicians directly involved in the medication reconciliation process.
Data quality assurance
In an effort to ensure consistency of on-site pharmacist data collection, the research team: 1) conducts monthly phone meetings with on-site pharmacists in which a patient case is reviewed for consistency and all discrepancies discussed; 2) provides on-site pharmacists with an updated ‘frequently asked questions’ (FAQ) document for managing new situations; and, 3) conducts site visits with the research team’s pharmacist to observe data collection processes and provide feedback, including how to improve process efficiency.
To ensure the consistency of the adjudication process, the principal investigator (PI) conducts a quarterly conference call with the sites’ physician adjudicators to discuss cases. In addition, the PI and a co-investigator review 6 cases from each site quarterly and review the results individually with each site’s adjudicators. A FAQ document for adjudicators is updated and redistributed as needed.
Web-based data center
The study sites utilize a web-based data collection and reporting system built specifically for this study. The system creates HIPAA-compliant de-identified data sets for the coordinating data center and all investigators. The system allows for identification, classification, and adjudication of all discrepancies. Unintentional discrepancies identified by the on-site pharmacist are flagged in the system for physician adjudication. The data center provides detailed reports to trend discrepancies, facilitates uploads of patient-specific administrative data, tracks implementation of intervention components, and provides tools to support mentored implementation. It also provides tracking for patient enrollment compared to monthly targets.
The primary outcome will be analyzed using multivariable Poisson regression, including random effects and clustering of patients by site and treating physician. To account for temporal trends and the varied introduction of interventions by site, we will employ an interrupted time series analysis on all 3,600 patients across the 6 sites, evaluating outcomes monthly for 6 months pre-intervention and 21 months post-implementation. The outcome is assessed both as a change from site-specific baseline temporal trends (i.e., change in slope) and as a sudden improvement with implementation of the intervention components (i.e., change in y-intercept). If each site has concurrent controls, these can be entered into the model to partially adjust for the effect of concurrent interventions. The model also allows for the detection of iterative refinement of the intervention (i.e., continuous improvement over time), as well as ceiling effects (i.e., lack of continued improvement beyond a certain threshold).
We will assess implementation of each component of the toolkit monthly, using the scoring system on a scale of no adoption to complete adoption for each component (Table 1). Because many of the sites have already implemented pieces of the interventions, their scores often do not start at zero, and scores increase with the implementation of interventions to a maximum score of 318. The scores for each component will be entered into the multivariable model as time-varying covariates, such that we can determine whether implementation of a particular component is correlated with improved outcomes thereafter. This allows us to make inferences about the most important components of the intervention.
Power and sample size
For a stable estimate of temporal trends, each site’s data collection goal is approximately 22 patients per month, beginning 6 months pre-intervention and continuing through 21 months post-intervention. Because of our study design, it is impossible to know a priori the nature of our post-intervention data, and therefore our actual power to assess the effect of any specific intervention. However, based on prior research, we assumed that the number of medication discrepancies would follow a Poisson distribution and that, in the absence of an intervention, each hospitalized patient would have an average of 1.5 potentially harmful medication discrepancies in admission and discharge orders combined. We also conservatively assumed that an intervention would be implemented at only 1 of 6 sites, with 12 rather than 21 months of follow-up, because of delays in planning and phasing in the intervention widely. This would yield data from 133 patients pre-intervention and 266 patients post-intervention. With these estimates and alpha = 0.05, we would have 90% power to detect a reduction in the mean number of medication discrepancies from 1.5 per patient to 1.1 per patient.
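The stated power can be checked with a rough Monte Carlo simulation (our back-of-the-envelope sketch, not the study’s original calculation), comparing total Poisson counts between the two groups with a Wald test on the log rate ratio:

```python
# Simulated power for 133 pre- vs 266 post-intervention patients,
# Poisson means 1.5 vs 1.1 discrepancies/patient, two-sided alpha = 0.05.
import numpy as np

rng = np.random.default_rng(42)
n_ctrl, n_int = 133, 266
lam_ctrl, lam_int = 1.5, 1.1
sims, rejections = 5000, 0

for _ in range(sims):
    total_ctrl = rng.poisson(n_ctrl * lam_ctrl)   # total discrepancies, pre-intervention
    total_int = rng.poisson(n_int * lam_int)      # total discrepancies, post-intervention
    # Wald test on the log rate ratio of the two Poisson rates
    log_rr = np.log(total_int / n_int) - np.log(total_ctrl / n_ctrl)
    se = np.sqrt(1 / total_ctrl + 1 / total_int)
    if abs(log_rr / se) > 1.96:
        rejections += 1

power = rejections / sims
print(f"simulated power ≈ {power:.2f}")
```

Under these assumptions the simulation lands near the 90% power quoted in the text.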
As sites began to implement the intervention, one methodological issue that arose was the extent to which sites should over-sample data from hospital areas receiving early versions of the intervention. We decided on a 3:1 ratio of intervention to control patients during the intervention period. This allows for concurrent controls during the spread of the intervention, while maintaining an adequate sample of intervention patients to evaluate effects on patient outcomes.
We will evaluate the influence of contextual factors, intervention fidelity, and intervening variables on implementation and outcomes using a mixed methods approach (Table 4). Measures of context are gathered using front-line staff and site surveys, direct observation, focus groups, and interviews. At baseline, each site completed a site leader survey and front-line staff surveys to provide a semi-quantitative measurement of these issues. After the start of intervention implementation, a qualitative researcher conducts on-site focus groups with front-line staff and the QI team and interviews hospital leadership. At 12 months post-intervention, follow-up interviews are conducted by telephone. For focus groups and interviews, a convenience sample of staff is selected by role and department to ensure broad representation.
Intervention fidelity is assessed by direct, semi-structured observation of the site’s medication reconciliation process by its mentor during site visits at 3 and 12 months after implementation of the intervention. The observation protocol evaluates five steps of the medication reconciliation process: taking an admission medication history, identifying high-risk patients to receive a high-intensity medication reconciliation intervention, performing discharge medication reconciliation, performing discharge medication counseling using the teach-back method, and forwarding the discharge medication list to the next provider of care after discharge. The mentor observes the actual process to identify whether the intervention is being implemented as designed (content fidelity) and rates how well it is performed on a 1 to 4 scale (process fidelity). The observation forms also allow documentation of systems issues that impact the medication reconciliation process based on the 5 domains of the Systems Engineering Initiative for Patient Safety (SEIPS) model: people, technology/tools, tasks, organization, and environment. Mentors received group training on how to assess fidelity, using a coding manual with standardized examples and training from a human factors expert. Mentors share feedback about the direct observations with the site leader and QI team during the site visit.
Intervening variables (i.e., in determining intervention fidelity) are assessed from surveys of front-line staff pre- and post-intervention. Topics include the quality of education and training received in the intervention (e.g., in taking medication histories), degree of input into intervention design, and adequacy of staffing and time to complete medication reconciliation processes. Other barriers and facilitators of intervention implementation are determined from structured and open-ended questions regarding the front-line staff’s opinions of the medication reconciliation process and its perceived impact on patient care.
Figure 2 outlines the timeline of the study. The study’s three critical time points are: intervention start, interim evaluation with iterative refinement of the intervention and a second draft of the implementation guide 9 months after the start, and end of the intervention after 21 months.
Baseline data collection began in March 2011 and was completed in August 2012. Over 1,000 patients have been enrolled to date across the 6 sites, of which 980 have data entered in a centralized database (Table 5). Of these, preliminary analyses show 844 patients have had discrepancies adjudicated for potential harm. The total number of unintentional medication discrepancies in admission and discharge orders per patient varies by site from 2.35 to 4.67 (mean = 3.35). Consistent with prior research, most discrepancies are due to history errors (mean 2.12 per patient) as opposed to reconciliation errors (mean 1.23 per patient). The number of potentially harmful medication discrepancies averages 0.45 per patient and varies by site from 0.13 to 0.82 per patient. In comparison, in prior studies by Schnipper et al., potentially harmful medication discrepancies began at 1.44 per patient, then decreased to 1.05 and then to 0.32 per patient with successive versions of medication reconciliation interventions [27, 35].
Challenges have arisen during the initial implementation. From a research perspective, these included delays in signing data use agreements, IRB requirements for patient consent at some sites, delays in obtaining IRB approval, and staffing challenges, all of which led to delays in completing baseline data collection. Another challenge was achieving adequate response rates to front-line surveys. In the end, we asked each site to identify a select group of clinicians likely to complete surveys while still representative of the locations and provider types involved in the medication reconciliation process, sacrificing some generalizability for better response rates (and thus internal validity). Operationally, rather than create a distinct “version 2” of the intervention, we chose to iteratively refine and add to our toolkit as sites and mentors have identified specific needs, although we do plan to develop a distinct “second edition” of the implementation guide.
To date, initial site visits have been conducted at 3 of the 6 sites within 4 months of the start of intervention implementation. Feedback from mentor and qualitative researcher visits proved more valuable at sites that were further along with intervention implementation: later timing allowed more feedback on barriers and facilitators from focus group participants intimately involved in the intervention, and allowed richer data collection on intervention fidelity, yielding more detailed feedback to sites on how to improve their interventions going forward. On the other hand, because site visits also enhanced the visibility and institutional support of the project, for sites that were struggling with implementation the decision was made to conduct these visits on time anyway, trading some loss of intervention fidelity data for gains in local support.
MARQUIS seeks to improve medication safety at participating hospitals while rigorously studying the implementation of a best-practices toolkit and the contextual factors that may influence outcomes. As such, the study offers one approach to conducting rigorous, “real world” QI research in which we hope to understand: 1) the most important components of the intervention; 2) reasons for success or failure; and 3) barriers and facilitators of implementation. MARQUIS also attempts to balance calls for a more case-based approach to QI research with rigorous outcome assessment that adequately adjusts for potential confounders.
Importantly, MARQUIS provides sites with no resources for the intervention and only a small stipend for data collection, similar to QI efforts at most hospitals. This lowered the cost of the study and also makes it more generalizable, since other sites wishing to adopt the intervention toolkit would most likely not receive external resources for implementation. Nevertheless, this also makes sites more vulnerable to resource constraints and changes in leadership or institutional priorities, in particular during the lag time between applying for funding and beginning the intervention. As hospitals are increasingly challenged to conserve resources, projects like MARQUIS are more likely to succeed if medication safety is a consistent priority or if a favorable return on investment is anticipated. To address the latter, we have provided sites with guidance on making the business case for medication reconciliation, available online in the MARQUIS implementation guide.
Other challenges reflect the need to balance the needs of QI work with research, such as the length of pre-intervention data collection (shorter for QI, longer for research) and the optimal timing of site visits (earlier for QI to enlist institutional support, later for research to best assess intervention fidelity).
Despite these challenges, our use of a mentored implementation model makes MARQUIS a generalizable approach to studying the improvement of complex processes like inpatient medication reconciliation. If the intervention proves successful, mentored implementation resources could readily be scaled up. Using a refined version of the tools and implementation lessons, MARQUIS holds promise to substantially improve medication safety during transitions in care across many hospitals.
ADE: Adverse drug events
AHRQ: Agency for Healthcare Research and Quality
AMC: Academic medical center
APN: Advanced practice nurses
CPOE: Computerized physician order entry
FAQ: Frequently asked questions
HCAHPS: Hospital Consumer Assessment of Healthcare Providers and Systems
IRB: Institutional review board
MARQUIS: Multi-Center Medication Reconciliation Quality Improvement Study
NP/PA: Nurse practitioners/physician assistants
SHM: Society of Hospital Medicine
TJC: The Joint Commission
VAMC: Veterans Affairs medical center.
Coleman EA, Smith JD, Raha D, Min SJ: Posthospital medication discrepancies: prevalence and contributing factors. Arch Intern Med. 2005, 165: 1842-1847. 10.1001/archinte.165.16.1842.
Smith JD, Coleman EA, Min SJ: A new tool for identifying discrepancies in postacute medications for community-dwelling older adults. Am J Geriatr Pharmacother. 2004, 2: 141-147. 10.1016/S1543-5946(04)90019-0.
Schnipper JL, Kirwin JL, Cotugno MC, Wahlstrom SA, Brown BA, Tarvin E, Kachalia A, Horng M, Roy CL, McKean SC, Bates DW: Role of pharmacist counseling in preventing adverse drug events after hospitalization. Arch Intern Med. 2006, 166: 565-571. 10.1001/archinte.166.5.565.
Tam VC, Knowles SR, Cornish PL, Fine N, Marchesano R, Etchells EE: Frequency, type and clinical importance of medication history errors at admission to hospital: a systematic review. CMAJ. 2005, 173: 510-515. 10.1503/cmaj.045311.
Climente-Marti M, Garcia-Manon ER, Artero-Mora A, Jimenez-Torres NV: Potential risk of medication discrepancies and reconciliation errors at admission and discharge from an inpatient medical service. Ann Pharmacother. 2010, 44: 1747-1754. 10.1345/aph.1P184.
Chan AH, Garratt E, Lawrence B, Turnbull N, Pratapsingh P, Black PN: Effect of education on the recording of medicines on admission to hospital. J Gen Intern Med. 2010, 25: 537-542. 10.1007/s11606-010-1317-x.
Shekelle PG, Pronovost PJ, Wachter RM, McDonald KM, Schoelles K, Dy SM, Shojania K, Reston JT, Adams AS, Angood PB, et al: The top patient safety strategies that can be encouraged for adoption now. Ann Intern Med. 2013, 158: 365-368. 10.7326/0003-4819-158-5-201303051-00001.
WHO Collaborating Centre for Patient Safety Solutions: Assuring medication accuracy at transitions in care. 2007, World Health Organization, vol. 1.
Koehler BE, Richter KM, Youngblood L, Cohen BA, Prengler ID, Cheng D, Masica AL: Reduction of 30-day postdischarge hospital readmission or emergency department (ED) visit rates in high-risk elderly medical patients through delivery of a targeted care bundle. J Hosp Med. 2009, 4: 211-218. 10.1002/jhm.427.
Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ: An epistemology of patient safety research: a framework for study design and interpretation. Part 1. Conceptualising and developing interventions. Qual Saf Health Care. 2008, 17: 158-162. 10.1136/qshc.2007.023630.
Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ: An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Qual Saf Health Care. 2008, 17: 163-169. 10.1136/qshc.2007.023648.
Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ: An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care. 2008, 17: 170-177. 10.1136/qshc.2007.023655.
Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ: An epistemology of patient safety research: a framework for study design and interpretation. Part 4. One size does not fit all. Qual Saf Health Care. 2008, 17: 178-181. 10.1136/qshc.2007.023663.
Donabedian A: Explorations in quality assessment and monitoring. The definition of quality and approaches to its assessment. Edited by: Griffith JR. 1980, Washington, DC: Health Administration Press, 4-163.
Mueller SK, Kripalani S, Stein J, Kaboli P, Wetterneck TB, Salanitro AH, Labonville S, Etchells E, Hanson D, Williams MV, et al: Development of a toolkit to disseminate best practices in inpatient medication reconciliation. Jt Comm J Qual Patient Saf. 2013, in press.
Maynard GA, Budnitz TL, Nickel WK, Greenwald JL, Kerr KM, Miller JA, Resnic JN, Rogers KM, Schnipper JL, Stein JM, et al: 2011 John M. Eisenberg patient safety and quality awards. Mentored implementation: building leaders and achieving results through a collaborative improvement model. Innovation in patient safety and quality at the national level. Jt Comm J Qual Patient Saf. 2012, 38: 301-310.
Schnipper JL: MARQUIS implementation manual: a guide for medication reconciliation quality improvement. 2011, Society of Hospital Medicine.
Schnipper JL, Hamann C, Ndumele CD, Liang CL, Carty MG, Karson AS, Bhan I, Coley CM, Poon E, Turchin A, et al: Effect of an electronic medication reconciliation application and process redesign on potential adverse drug events: a cluster-randomized trial. Arch Intern Med. 2009, 169: 771-780. 10.1001/archinternmed.2009.51.
Quinn R, Seashore S, Kahn R, Mangion T, Cambell D, Staines G, McCullough M: Survey of working conditions: final report on univariate and bivariate tables. 1971, Government Printing Office.
Wetterneck TB, Linzer M, McMurray JE, Douglas J, Schwartz MD, Bigby J, Gerrity MS, Pathman DE, Karlson D, Rhodes E: Worklife and satisfaction of general internists. Arch Intern Med. 2002, 162: 649-656. 10.1001/archinte.162.6.649.
Dumas JE, Lynch AM, Laughlin JE, Phillips Smith E, Prinz RJ: Promoting intervention fidelity. Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. Am J Prev Med. 2001, 20: 38-47. 10.1016/S0749-3797(00)00272-5.
Dr. Kripalani is a consultant to and holds equity in PictureRx, LLC. The terms of this agreement were reviewed and approved by Vanderbilt University in accordance with its conflict of interest policies. Dr. Schnipper is a consultant to QuantiaMD, for whom he helps create educational tools for providers and patients regarding medication safety; these tools are not part of MARQUIS. Dr. Schnipper is also principal investigator of an investigator-initiated study of interventions to improve transitions in care in patients with diabetes. The terms of these agreements were reviewed and approved by Partners HealthCare and Harvard Medical School in accordance with their conflict of interest policies. All other authors have no competing interests.
JLS, SK, TBW, JS, and PJK designed the study. EE, DJC, DH, JLG, and MVW are members of the advisory board and provided input throughout the study to assist in refining the process. AHS and JLS drafted the manuscript. All authors read, made significant contributions, and approved the final manuscript.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Salanitro, A.H., Kripalani, S., Resnic, J. et al. Rationale and design of the Multicenter Medication Reconciliation Quality Improvement Study (MARQUIS).
BMC Health Serv Res 13, 230 (2013). https://doi.org/10.1186/1472-6963-13-230