Incentivizing performance in health care: a rapid review, typology and qualitative study of unintended consequences

Abstract

Background

Health systems are increasingly implementing policy-driven programs to incentivize performance using contracts, scorecards, rankings, rewards, and penalties. Studies of these “Performance Management” (PM) programs have identified unintended negative consequences. However, no single comprehensive typology of the negative and positive unintended consequences of PM in healthcare exists and most studies of unintended consequences were conducted in England or the United States. The aims of this study were: (1) To develop a comprehensive typology of unintended consequences of PM in healthcare, and (2) To describe multiple stakeholder perspectives of the unintended consequences of PM in cancer and renal care in Ontario, Canada.

Methods

We conducted a rapid review of unintended consequences of PM in healthcare (n = 41 papers) to develop a typology of unintended consequences. We then conducted a secondary analysis of data from a qualitative study involving semi-structured interviews with 147 participants involved with or impacted by a PM system used to oversee 40 care delivery networks in Ontario, Canada. Participants included administrators and clinical leads from the networks and the government agency managing the PM system. We undertook a hybrid inductive and deductive coding approach using the typology we developed from the rapid review.

Results

We present a comprehensive typology of 48 negative and positive unintended consequences of PM in healthcare, including five novel unintended consequences not previously identified or well-described in the literature. The typology is organized into two broad categories: unintended consequences on (1) organizations and providers and on (2) patients and patient care. The most common unintended consequences of PM identified in the literature were measure fixation, tunnel vision, and misrepresentation or gaming, while those most prominent in the qualitative data were administrative burden, insensitivity, reduced morale, and systemic dysfunction. We also found that unintended consequences of PM are often mutually reinforcing.

Conclusions

Our comprehensive typology provides a common language for discourse on unintended consequences and supports systematic, comparable analyses of unintended consequences across PM regimes and healthcare systems. Healthcare policymakers and managers can use the results of this study to inform the (re-)design and implementation of evidence-informed PM programs.

Introduction

Health systems are increasingly implementing policy-driven programs to incentivize performance in healthcare organizations and networks using contracts, targets, scorecards, rankings, rewards, and sanctions. These “Performance Management” (PM) programs provide performance feedback and establish accountability for performance outcomes with the aim of influencing behavior and results [1]. Studies of PM in healthcare demonstrate that, in addition to contributing to improvement, PM can generate unintended consequences for organizations, providers, and patients [2,3,4,5,6]. Much of the literature on unintended consequences stems from the English National Health Service (NHS), where a centralized “command-and-control” approach to PM has been criticized for contributing to measure fixation, gaming, and reduced staff morale [5, 7,8,9,10]. Increasingly, studies from the United States also report unintended consequences of PM, citing many of the same issues identified in England in addition to concerns about the role of PM in widening racial and socioeconomic disparities [11,12,13,14,15,16].

Typologies of the unintended consequences of PM in healthcare [2, 5] and in public management [17, 18] exist. However, there is no single comprehensive typology of the negative and positive unintended consequences of PM in healthcare. Furthermore, few studies of the unintended consequences of PM have been conducted outside of England and the United States, which limits our understanding of unintended consequences across PM regimes and healthcare systems. The aims of this study were twofold: (a) To develop a comprehensive typology of unintended consequences of PM in healthcare, and (b) To describe multiple stakeholder perspectives of the unintended consequences of a PM system in Ontario, Canada.

Methods

Rapid review

We conducted a rapid review of the literature on unintended consequences of PM in healthcare [19]. A rapid review was appropriate because our intent was to develop a typology, not to synthesize, assess, and critique the body of evidence on unintended consequences of PM.

A search of the academic literature was conducted in Fall 2020 and updated in Fall 2021 using the electronic databases PubMed and Web of Science (All Databases). The following search string was used for both databases: ((“healthcare” OR “health care” OR “health system”) AND (“performance management” OR “performance incentives” OR “performance feedback” OR “performance measurement” OR “quality indicators” OR “quality measure*”) AND (“unintended consequences” OR “unintended effects” OR “unintended responses” OR “unintended negative consequences” OR “perverse effects” OR “dysfunctional consequences” OR “unintended positive consequences” OR “unintended benefits”)). To be included, papers had to be peer-reviewed academic papers (grey literature, conference proceedings, policy statements, and clinical guidelines were excluded), published in English, and involve a conceptual/theoretical discussion or empirical evidence on unintended consequences of PM in healthcare. No limits were placed on year of publication. Two authors independently screened papers for inclusion and met to discuss and finalize screening decisions. A hand search of the reference lists of included papers was also conducted to identify relevant papers. Ultimately, 41 papers were included in the review (Fig. 1). Excluded papers – while relevant to performance measurement and management – had minimal discussion of unintended consequences.

Fig. 1 PRISMA Flow Diagram for Rapid Review on Unintended Consequences of Performance Management in Healthcare

The following information was extracted from each paper to facilitate synthesis: Reference; Study Purpose; Methods; PM Intervention(s) Studied; Definitions, Theories, or Frameworks of Unintended Consequences; Types of Unintended Consequences; Results; and Recommendations for Preventing/Managing Unintended Consequences. Two authors independently extracted data for three papers and met to discuss and finalize the extraction approach. Extracted data were compared and synthesized to create a typology of all unintended consequences identified in the literature. This process was undertaken collaboratively by the authors and involved merging similar categories and minor modifications to wording for accuracy and clarity.

Qualitative study setting

The province of Ontario is in Central Canada and has a population of 14.5 million. Cancer Care Ontario (CCO), which houses the Ontario Renal Network (ORN), is a government agency with a mandate to fund, oversee, and improve cancer and renal care. Delivery of cancer and renal care is organized by geographic region through 14 regional cancer networks and 26 regional renal networks that together cover the full geography of the province.

As an oversight body, CCO uses a robust PM system to monitor and improve performance of cancer networks (since 2005) and renal networks (since 2013). CCO’s PM system consists of the following components: funding contracts outlining performance expectations/deliverables with funding at risk of withdrawal for non-compliance; a regional scorecard with indicators, targets, and network rankings; access to performance data through electronic platforms; quarterly performance review reports and meetings; annual performance recognition certificates; an escalation process for poor or declining performance; and public reporting of performance on select indicators [20]. CCO’s PM system is described in more detail in Table 1. Available evidence suggests that CCO and its PM system are high-performing and internationally renowned [20,21,22,23,24]. Hagens et al. [20] found that 89% of cancer care indicators prioritized by CCO over the past 15 years demonstrated sustained improvement over time. Furthermore, Ontario’s performance on most cancer mortality and cancer survival indicators exceeds the Canadian and OECD averages [22]. Finally, CCO has been described in media outlets as an “internationally leading agency” [21], “the envy of the world” [24], and “the pinnacle of success in establishing standards, tracking and publicly reporting outcomes, multi-year planning, developing information systems, consulting patients, and advising government and practitioners” [24].

Table 1 Cancer Care Ontario (CCO) Performance Management Interventions Organized by Primary Function

In 2019, after the completion of data collection, CCO was incorporated into a new agency, Ontario Health. CCO’s programs and services remain unchanged and they continue to be referred to as “CCO” within Ontario Health.

Secondary analysis of qualitative interviews

We conducted a secondary analysis of semi-structured interview data from a qualitative constructivist study [25] on CCO’s PM system. Administrative and clinical representatives working at CCO and in the regional networks who were involved with or impacted by PM were invited via e-mail to participate in individual or group interviews using purposeful and snowball sampling. Participants provided informed consent verbally at the start of each interview. Participants were asked open-ended questions regarding CCO’s role and the strengths and weaknesses of CCO’s PM system using a pre-tested semi-structured interview guide. The one-hour interviews took place in-person or on the phone, depending on geographic location, with no non-participants present, and were digitally recorded and transcribed verbatim. All interviews were conducted between 2017 and 2018 by the second author, who holds a PhD, specializes in qualitative and mixed methods research, and worked with CCO as a Staff Scientist at the time of data collection. She was hired, in part, to establish a program of research on PM. Six participants from CCO had a prior relationship with the interviewer through other projects. Data saturation was reached with approximately half of the final sample of participants, but interviews continued with the aim of achieving broad stakeholder engagement and input into PM, which is a core function of CCO. Refusals to participate were minimal and related to time and scheduling challenges. No participants dropped out.

Unintended negative consequences were explored through one interview question in the primary study (“Tell me about the unintended negative consequences of PM, if any”), but they were not a central part of the primary analysis. Those data were originally coded by the second author as “unintended consequences” using NVivo software. The first author then conducted a more detailed deductive coding of that subset of the data in NVivo, classifying the unintended consequences according to the typology developed from the rapid review. Upon completion, the second author reviewed the node reports for accuracy, and coding disagreements were resolved through discussion and rectified in NVivo. The authors discussed and agreed on the addition of new inductive codes, where necessary, and updated the typology accordingly. Emerging results were shared with participants via email and/or presentations, where they were encouraged to ask questions, offer feedback, and share their interpretations of the data.

In the participant quotes provided below, we use the notation “P” for individual interviews and “G” for group interviews. In group interviews, it was not always possible to identify the role of the speaker (administrative versus clinical) or the clinical area they represent (cancer versus renal); therefore, some quotes indicate role and clinical area while others do not.

Results

We identified 48 unintended consequences of PM in healthcare, which we organized based on who was impacted – providers and organizations or patients and patient care – and on the nature of impact – negative or positive. Table 2 presents the final typology, incorporating the results of both the rapid review and the qualitative study.

Rapid review

The review included 41 papers published between 2002 and 2020 consisting primarily of qualitative studies, literature reviews, and discussion papers. Although several typologies of unintended consequences were available in the literature, none were comprehensive [2, 5, 18].

Most of the unintended consequences described in the literature focused on the impact of PM on providers and organizations. We divided these unintended consequences into six subcategories, retaining much of the language and content of the typology proposed by Mannion and Braithwaite [5]: (a) increased work, (b) poor design or use of performance data, (c) breaches of trust and increased work environment toxicity, (d) exacerbation of inequalities, (e) politicization of performance management and (f) positive unintended consequences. The most common unintended negative consequences on providers and organizations were ‘measure fixation’, ‘tunnel vision’, and ‘misrepresentation’ and ‘gaming’ (Table 2). The most common positive unintended consequence for providers and organizations was ‘improved morale’ due to the sense of pride that comes with high performance (Table 2).

Table 2 Typology of Unintended Consequences of Performance Management in Healthcare

The second major category comprised unintended consequences affecting patients and patient care, which we divided into four subcategories: (a) inappropriate or sub-optimal care, (b) reduction in patient-centered care, (c) exacerbation of inequalities, and (d) positive unintended consequences. The most common unintended negative consequences in this category were ‘clinical decisions driven by PM’ rather than by evidence and clinical judgment and ‘increased inequality in access’ or ‘increased healthcare disparities’ (Table 2). Few studies explicitly examined positive unintended consequences on patients and patient care [61]. However, ‘beneficial spillover effects’ were identified in which PM contributed to improved performance in other non-incentivized clinical areas for the target population (Table 2).

Several strategies were proposed to mitigate the unintended negative consequences of PM. The most common recommendation was greater collaboration among developers of PM systems, those responsible for implementation, and providers and patients affected by PM – with the aim of maximizing alignment of PM systems with the local context, provider goals and values, and patient priorities [5,6,7, 9, 12, 14,15,16, 32,33,34, 40, 44]. This collaboration should be ongoing, occurring not just during the design phases, but also over time, allowing for feedback, reflexive review, and modifications to PM [2, 5, 10, 12, 14, 51]. Many recommendations also focused on the characteristics of effective indicators (e.g., evidence-based, balanced) and transparency of the indicator selection and measurement processes [5, 35, 43, 47, 50].

Unintended consequences of performance management in cancer and renal care in Ontario

We analyzed interview data from 147 clinical leads and administrative staff members. Fifty-nine were representatives from CCO (85% in an administrative role and 15% in a clinical role). The remaining 88 participants were from cancer and renal networks (74% in an administrative role and 26% in a clinical role). Twelve of the 14 cancer networks (86%) and 19 of the 26 renal networks (73%) were represented.

Before describing the results of our secondary analysis of unintended consequences of PM, it is important to reiterate that previous research, international health data comparisons, and media reports establish CCO and its PM system as high-performing and internationally leading [20,21,22,23,24]. Our qualitative study supports this conclusion. Participants’ perceptions of CCO’s approach to PM were positive, highlighting province-wide priority-setting, benchmarking, improvement initiatives, and sharing of best practices as strengths, while recognizing opportunities for refinement.

We found that most of the unintended consequences identified by participants focused on organizations and providers, not patients and patient care. Even though participants were only asked to reflect on negative unintended consequences of PM, we did identify examples of positive unintended consequences. Overall, the most common unintended consequences were ‘increased administrative burden’, ‘insensitivity’, ‘reduced morale’, and ‘systemic dysfunction’. We also identified five unintended consequences not captured by previous typologies and rarely identified in the existing literature: (a) systemic dysfunction, (b) resource waste, (c) increased perceived injustice, (d) toxic ambition, and (e) improved capacity planning. Below we summarize results for each category of unintended consequences. In Fig. 2, we map how the unintended consequences mutually reinforced one another.

Fig. 2 Relationships Among Unintended Consequences Based on Interview Data

Negative unintended consequences on providers and organizations

Increased work

‘Increased administrative burden’ was the most common unintended consequence of PM reported by participants across all stakeholder groups and among all categories in the typology. Participants’ primary concern was the volume of data that is collected, processed, and reported, as these two participants explained:

“We probably have more staff tied up in data submissions and data quality work than most of the rest of the hospital. It just seems like a really labour-intensive, very resource-intensive requirement” (G21, Network, Cancer and Renal)

“Our resources don’t increase, our staff don’t increase, but the demand on us increases exponentially from one quarter to the next” (G06, Network, Renal)

The administrative burden described by participants was often linked to two contributing factors. The first factor was the number of required performance indicators, as this CCO representative acknowledged: “It’s the sheer number of indicators that we try to push forward. At some point, you’re just diluting the capacity that exists within the regions or in the hospitals to make change” (P71, CCO, Cancer, Administrator). The second contributing factor was the inconsistencies in data systems and PM requirements across the multiple oversight bodies to whom networks are accountable, as this participant explained: “For me, the big thing is 100% the data burden. They forget that we have shared accountabilities. We’re not just accountable to CCO” (G06, Network, Renal). Many participants described conflicting data requirements between organizations and oversight bodies such as their own hospital, CCO, the Canadian Institute for Health Information, the Local Health Integration Network, Health Quality Ontario, and Accreditation Canada. As such, the administrative burden was spurred in part by the unintended consequence of ‘systemic dysfunction’, described further below.

Workload concerns were also closely related to the unintended consequence of ‘resource waste’ as many participants expressed concern about whether the resource investment in PM was meaningful and contributing to improvement. There were also concerns that the workload associated with PM results in over-attention to PM, potentially reinforcing the unintended consequences of ‘tunnel vision’ and ‘measure fixation’, described below.

Poor design or use of performance data

Regarding the design of PM interventions and use of performance data, the most common unintended consequences identified by participants were ‘insensitivity’, ‘measure fixation’, ‘tunnel vision’, and ‘systemic dysfunction’ (Table 2).

‘Insensitivity’ refers to PM failing to capture the complexity of healthcare delivery and performance, potentially resulting in unfair penalties or rewards. In our data, insensitivity manifested primarily as a perceived lack of control over performance due to contextual differences between networks and/or the nature of indicators. First, some participants argued that PM neglects regional differences that can impact network performance, such as differences in geography, resources, and patient demographics:

“There’s a lot of unmeasured confounding in some of these measures. So, we think everybody should be able to get there, but, you know, everything falls within a distribution, and someone’s going to be an outlier, and it’s not necessarily their fault that they’re an outlier” (P36, Network, Clinical Lead)

Second, some network representatives argued that select indicators reflect service performance beyond their immediate control. CCO representatives acknowledged that some indicators are aimed at stimulating collaboration across programs within and across organizations.

‘Measure fixation’ involves placing emphasis on meeting the performance target rather than the associated objective. Participants shared examples and raised questions regarding the extent to which PM inadvertently distracts attention away from patient care:

“People are really focused on numbers and really focused on the methodology. They’re not necessarily focused on what’s ultimately best for the patient because they want to look good on the scorecard and rankings” (G14, CCO, Clinical Leads)

“Sometimes I feel if we just play the game we could be a perfect performer, but I’m not sure our patients would be any better off…we would just learn how to play the game of being a good performer” (G22, Network, Cancer)

‘Tunnel vision’ occurs when emphasis is placed on dimensions of performance that are measured or incentivized, while other unmeasured but important aspects are overlooked. Participants described tunnel vision in two ways. In the first set of examples, participants described how indicators tend to focus on select fragments of the patient journey, and on processes like wait times rather than outcomes like patient survival, which one participant described as “losing the forest for the trees” (G11, Network, Cancer). Some of these indicator-focused examples were related to the unintended consequence of ‘quantification privileging’. In the second set of examples, participants focused more broadly on the healthcare system, explaining how PM in one part of the healthcare system (in this case, in cancer care) can exacerbate problems in other parts of the system that do not operate under the same PM requirements:

“Sometimes the cancer program is seen as the have-more program and other disease states, for people that present to hospital, are maybe the have-less. For example, gastrointestinal does not have a provincial body that’s driving performance. Or rheumatology, for example. So, those patients, I’m hearing, are being bumped or less prioritized for surgery and things like that, because they’re not associated with, for example, a surgical wait time metric” (G18, Network, Cancer)

These system-level examples of ‘tunnel vision’ seemed to contribute to the unintended consequence of ‘increased perceived injustice’, described in the next section.

We also identified examples of or concerns about ‘systemic dysfunction’ and ‘resource waste’. These unintended consequences are not included in previous typologies and are rarely discussed in the literature. ‘Systemic dysfunction’ was a prominent theme with participants describing inconsistent priorities, measurement methodologies, and data interpretations between hierarchical levels within their organization or between oversight bodies in the healthcare system. This systemic dysfunction was described as pre-existing in the healthcare system and exacerbated by PM requirements that reinforce conflicts or misalignments.

“We’re often collecting the same data slightly differently for all four organisations… The data tends to not take on the same meaning as it should if everybody was measuring it and looking at it together” (P61, Network)

“Sometimes your own hospital strategy or direction may be in conflict with what [CCO] is trying to do so you’re trying to always play this balancing game” (G24, Network, Renal)

‘Systemic dysfunction’ was viewed as contributing to the unintended consequence of ‘increased administrative burden’, described earlier.

The unintended consequence of ‘resource waste’ was implied in participants’ descriptions of the workload associated with PM and their reflections on the impact of PM on patient care. Participants often wondered about the cost-benefit of PM:

“Remember, there’s financial cost and opportunity cost for the amount of time we spend on PM” (G20, Network, Cancer and Renal)

“It does seem disproportionate in terms of the amount of effort we put into measuring particular indicators that don’t necessarily have a return on patient experience or patient outcome” (G21, Network, Cancer and Renal)

We identified very few examples of each of the remaining seven unintended consequences in this category: suboptimization, myopia, quantification privileging, anachronism, misinterpretation, complacency, and fossilization.

Breaches of trust & increased toxicity of the work environment

Regarding the unintended consequences of PM on the work environment, many participants used language that reflected ‘reduced morale’, most often in terms of a loss of belief and confidence in PM (not in their organization or their work). Negative emotive descriptors like ‘frustrating’ and ‘embarrassing’ were common among network representatives. However, CCO representatives were more likely to reflect on the impact of PM on morale in networks struggling with performance using stronger terms like ‘discouraging’, ‘depressing’, or ‘demoralizing’. Reduced morale was closely linked with the unintended consequence of ‘insensitivity’, as this quote from a CCO representative demonstrates:

“They don’t feel a strong motivation to work toward achieving the provincial benchmark, if they feel like they’re never going to be able to reach it anyway” (G03, CCO, Administrators)

In addition to ‘insensitivity’, ‘reduced morale’ was also often coupled with the unintended consequences of ‘increased administrative burden’ and ‘systemic dysfunction’.

Many participants, particularly CCO representatives, expressed concerns about ‘misrepresentation’ of data and ‘gaming’ of the PM system, primarily in reference to wait time indicators:

“All of a sudden over 50% of their patients were being seen the same day they were referred, which is amazing, and it doesn’t make sense either…I think because they changed how they define the referral date as whatever was convenient to them. So, I think we have to be careful that we don’t push people so far that they start making up information so that we get off their backs” (P29, CCO, Cancer, Administrator)

“We’ve had that in wait times reporting, where certain cases are being reclassified to incorrect buckets to make it appear as though they’re being completed on time” (P61, CCO, Renal, Administrator)

“My own hospital is absolutely gaming wait times…I have no doubt that there has been no actual improvement in wait times at this institution” (G14, CCO, Cancer, Clinical Leads)

Another ‘gaming’ behaviour identified in the data was intentionally prioritizing indicators that were more likely to lead to an increase in overall performance and ranking:

“We sacrifice some indicators for others. For example, we have a look at all the indicators, and know that we’re not going to be able to make a very big dent on this indicator due to…the challenges of our region. So, then we focus on the indicators that we actually know we can make an impact on because we don’t want to be [ranked] 14th” (P60, Network)

The unintended consequences of ‘misrepresentation’ and ‘gaming’ were closely related to ‘measure fixation’.

‘Reduced autonomy, agency, and/or self-regulation’ also emerged as a negative unintended consequence of PM. Although we identified one example of this at the individual provider level, most accounts were focused on the network level:

“[CCO] sets the priorities now whereas programs used to be able to create that strategy envisioned for themselves. Some of that is now being driven based on [CCO] priorities and I think that might take away from some of the innovation and creativity that might have happened at a program level before” (G25, Network, Renal)

We identified two unintended consequences in this category that are not described in the literature: (1) ‘increased perceived injustice’ and (2) ‘toxic ambition’. The literature describes ‘team and inter-professional conflict’ as an unintended consequence of PM due to workload, resource, and accountability issues [26, 27, 32, 36]. ‘Increased perceived injustice’ is similar, but in our data this occurred at the program or system level in response to social comparisons between groups affected by PM and those that are not, as this quote illustrates:

“My colleagues in other programs sometimes feel that we are favoured or the spotlight is more on us than on them, and it creates an element of resentment, which is really not of our own doing. It’s just our requirement to report and to produce” (G11, Network, Cancer)

‘Increased perceived injustice’ appeared to be exacerbated by ‘tunnel vision’ and ‘systemic dysfunction’.

The second novel unintended consequence in this category, ‘toxic ambition’, was uncommon and identified only among CCO representatives as they described concerns regarding continuous increases in performance targets:

“We got lots of push back that we needed to increase [the target]. Whether that makes sense or not, I’m not sure. People are doing well and exceeding the target. Isn’t that good enough?” (P29, CCO, Cancer, Administrator)

“Penalizing a high performing program that isn’t getting even better? The optics are terrible…Some of them say, why do you even give us a target? Why don’t you accept that we’re doing a darn good job and move on to some other initiative? I’m sympathetic with that. I think above a certain level, we should leave them alone and stop hammering them” (P53, CCO, Renal, Clinical Lead)

We identified very few, if any, examples of the remaining three unintended consequences in this category: ‘reduced learning and psychological safety’, ‘loss of professional ethos/morality’, and ‘bullying’.

Exacerbation of inequities

Exacerbation of inequities was not a theme in the data. ‘Increased resource gap’ was alluded to as a potential issue if networks fail to meet a PM requirement with funding tied to it and thus have funds clawed back, thereby further reducing their capacity to invest in improvement and achieve higher performance. The unintended consequences of ‘reduced ability to recruit necessary staff’ and ‘overcompensation’ were not identified at all.

Politicization of PM

We did not identify the unintended consequences of ‘political grandstanding’ and ‘political diversions’ in the data.

Positive unintended consequences

Participants were not explicitly asked about positive unintended consequences. Nevertheless, we identified examples of positive unintended consequences of PM on providers and organizations. Some participants described ‘improved morale’ because of PM, most often focusing on the pride associated with high performance:

“I’ll brag if we have a top ten one because it’s good for the team to hear that, and good for those who are working hard day in and day out for the patient population…They’re proud of being high performers” (G19, Network, Cancer)

“The meetings happening quarterly are very helpful and give us a chance to relate to Cancer Care Ontario some of the work we’ve been doing that we’re proud of” (G21, Network, Cancer and Renal)

PM also occasionally contributed to increased confidence and collective efficacy, as described by this network representative:

“Our staff and our leaders said yeah, we can do something that actually improves things. You know, we don't have to be defeatist about this. We proved to ourselves that we could actually do this” (G16, Network, Cancer, Administrator).

PM sometimes spurred ‘motivated learning and development’ beyond a reactive response to performance feedback or incentives, as this quote demonstrates:

“We’re not waiting to be told how we did. We actively look at our data on an ongoing basis and pick out where we can improve and really focus resources on that” (G13, Network, Cancer)

‘Motivated learning and development’ often occurred together with ‘new relationships and collaborative problem-solving’. Network representatives frequently described how CCO’s PM approach spurred inter-network mentorship:

“We really look [at the data] and reach out to other programs to see how they’re actually achieving some of their targets so we can collectively share our ideas and do better” (G26, Network, Renal)

“The ability to reach out to peer programs, better performing programs and gain nuggets of, wow, how have you done that, what can we pick up, what can we learn?” (G19, Network, Cancer)

We identified one positive unintended consequence not described in the literature: ‘improved capacity planning’. A few participants described the use of information collected for PM purposes to support internal planning and external applications:

“Although the measurement might be in order to incite improved performance…You can't create a strategic plan and get [Ministry] approval for capital initiatives in the absence of that information. So, I think it's leveraged in different ways than originally intended” (G06, Network, Renal)

Negative unintended consequences on patients and patient care

Inappropriate or sub-optimal care

Participants often questioned the extent to which PM improves quality of care, but rarely expressed concerns or shared anecdotes demonstrating that PM directly contributes to inappropriate or sub-optimal care. In the very few examples we identified of ‘clinical decisions driven by PM’ and ‘improved documentation without improved care’, these unintended consequences seemed to stem from ‘measure fixation’ and contribute to ‘resource waste’. We did not identify any examples of ‘less continuity of care’.

Reduction in patient-centered care

Participants expressed more concern regarding the unintended consequences of PM on patient-centeredness than on the appropriateness of care (above). The most common unintended consequence regarding patient-centeredness was ‘compromised patient convenience’ due to PM, as this quote illustrates:

“Even the patients are experiencing survey fatigue because we’ve got the patient experience survey that came from [CCO], but we also have our own organizational patient experience survey too…and they're not certain why they're getting surveys with similar questions” (G23, Network, Renal)

We identified very few examples of ‘disregard for patient voice’, ‘compromised patient autonomy’, and ‘compromised patient education’, though in the few hypothetical examples shared, the three unintended consequences were intertwined:

“If we’re supposed to be patient-centered, patient-focused, and we’re being mandated that this must be completed, we’re not truly being sensitive to the needs of the patient. Yes, this is very, very important, but, in the real world, when you’ve got that person in front of you, it may not be the time” (P69, Network, Cancer)

“Many of these targets involve patients. It could result, theoretically, in having pressure applied to patients, to do things that patients don’t want to do. That, to me, is a bridge too far” (P42, CCO, Renal)

The two remaining unintended consequences in this category – ‘compromised patient engagement’ and ‘erosion of trust in care’ – were not evident in our data.

Exacerbation of inequities

A few participants discussed the potential for PM to inadvertently ‘increase inequities in access to high-quality care’, thus ‘increasing healthcare disparities’. These participants were concerned that PM favors low-risk, mainstream patients:

“I think that the regions trying to address performance issues for the majority of their populations, leaves out the marginalised populations – the homeless, the low-income, the ethnic populations. I think focusing on specific indicators, and moving those indicators closer to the target, can actually increase inequities in those populations because you’re focused on that indicator” (G03, CCO, Administrator)

Positive unintended consequences

Participants were not explicitly asked about positive unintended consequences and we did not identify any examples of positive unintended consequences of PM on patients and patient care.

Mitigating unintended negative consequences

CCO’s primary strategy for mitigating unintended negative consequences mirrors the strategy recommended in the literature: securing endorsement of indicators and PM interventions from administrative and clinical network leaders.

“It’s not as though CCO comes up with these metrics and targets on their own. Provincial clinical leads and providers in our communities of practice are involved in developing them” (G15, Network, Cancer and Renal)

“Our stakeholder engagement is very strong. We go out to the groups, we take the initial data, we see if they have feedback and if it makes clinical sense, we rework it, if needed, then maybe eventually we get to setting a target but it’s not something we roll out right away. It’s often a year, if not more, of work and socialization” (G14, CCO, Clinical Leads, Cancer)

In addition to involvement in indicator selection and development, ongoing dialogue between CCO and network leaders ensures that issues regarding data quality, indicator sensitivity to change, or impact on staff morale are addressed. For example, participants described the removal of two indicators from the scorecard “at the request of the RVPs [Regional Vice Presidents]” because they lacked responsiveness to improvement efforts and were negatively impacting staff morale (P02, CCO, Administrator).

Discussion

Based on the results of our rapid review and qualitative study, we present a typology of 48 unintended consequences of PM in healthcare. To our knowledge, this is the most comprehensive typology of the negative and positive unintended consequences of PM and includes five unintended consequences not previously identified or well-described in the literature.

In both the rapid review and qualitative study, we found that most of the identified unintended consequences were focused on providers and organizations, not patients and patient care. Furthermore, the focus tends to be on the negative unintended consequences of PM. Very little attention has been given to identifying the positive unintended consequences of PM. Our qualitative study also did not explicitly elicit the positive unintended consequences of PM and therefore is subject to the same bias and limitation as the literature. Nevertheless, we identified ‘improved capacity planning’ as a novel positive unintended consequence of PM on organizations.

The most common unintended consequences of PM identified in the literature were measure fixation, tunnel vision, and misrepresentation or gaming, while those that were most prominent in our qualitative study were administrative burden, insensitivity, reduced morale, and systemic dysfunction. These differing results demonstrate the importance of collecting data on the unintended consequences of PM in different contexts. Results may vary due to the nature of the PM system under study, such as the extent to which PM is punitive versus supportive, or due to contextual factors, such as the extent to which healthcare organizations are subject to accountability requirements from multiple oversight bodies. Varying results may also be explained by the effects of time. PM is an evolving discipline and there has been a shift in the academic literature during the last two decades away from control- and accountability-oriented PM systems to those that are more learning- and improvement-oriented [2, 69, 70]. Paradigm shifts such as these may influence not only the nature of PM systems and therefore which unintended consequences dominate, but also how we conceptualize what is unintended versus intended. For example, while we classified ‘motivated learning and development’ and ‘new relationships and collaborative problem-solving’ as positive unintended consequences, given the paradigm shift just described these could be conceptualized as intended consequences.

In general, we did not find major differences in perceptions of unintended consequences of PM across stakeholder groups; there was significant alignment in perceptions across CCO and network representatives, cancer and renal representatives, and clinical and administrative representatives. This convergence of stakeholder views may be explained by two factors. First, CCO’s PM system involves regular feedback and discussion between CCO and the networks through quarterly performance review reports and meetings, among other methods. This ongoing dialogue means that there are ample opportunities for network representatives to bring unintended consequences to the attention of CCO representatives, and vice versa. Second, although our sample included many practicing clinicians, these individuals were in clinical leadership roles. Given their involvement in strategic decision-making, clinical leaders may be more likely than those in non-leadership positions to hold similar views as administrators. Although there were common views between cancer and renal representatives, the application of the PM system to renal care was relatively new at the time of data collection; therefore, some unintended negative consequences identified by renal representatives may reflect the early stages of implementation and change management.

We identified three key themes in our qualitative data. First, many participant comments were not about CCO’s PM approach per se, but rather about how it co-exists with the PM requirements of other oversight bodies in the healthcare system. Two of our unintended consequences – ‘systemic dysfunction’ and ‘increased perceived injustice’ – are explicitly focused on the experiences of and relationship between those subject to CCO’s PM requirements and those subject to PM requirements from other oversight bodies. Second, many of the unintended consequences identified by participants were driven by a common underlying question: “Does PM capture what matters?” This question reflects concerns with indicator selection and data availability – more so than with the PM approach itself. These first two themes can be framed as sensemaking challenges [71] in which participants struggle to make sense of what is being measured and managed, why, how, and by which oversight body. Third, we found that the unintended consequences often reinforced one another; we included such observations throughout the results section and summarized them in Fig. 2. For example, ‘reduced morale’ was exacerbated by ‘insensitivity’, ‘increased administrative burden’ and ‘systemic dysfunction’, while ‘increased perceived injustice’ was exacerbated by ‘tunnel vision’ and ‘systemic dysfunction’. Previous studies of unintended consequences rarely examine how unintended consequences influence and mutually reinforce each other. Powell et al. [15] and Aryankhesal et al. [35] drew maps to illustrate the pathway from structures or behaviours to unintended consequences, but these maps provide minimal insight into the mutual relationships between unintended consequences.

Overall, our qualitative results suggest that the negative unintended consequences spurred by CCO’s PM system do not undermine the entire effort, but rather are side effects to be mitigated through collaborative design and implementation strategies.

Implications for practice

This study demonstrates that even internationally leading PM systems with broad stakeholder support must grapple with unintended negative consequences of PM. Healthcare policymakers and managers can use the results of this study to inform the (re-)design and implementation of PM programs. The typology can serve as a tool to facilitate reflection and discussion on the potential risks of PM and how they may be avoided or mitigated. Our qualitative results, which provide insight into how unintended consequences manifest from multiple stakeholder perspectives and how they reinforce one another, can be similarly applied. For example, our results suggest that PM programs should be designed with consideration for how they align or conflict with PM requirements from other oversight bodies in the healthcare system. Collaborative approaches to developing PM requirements across oversight bodies can reduce conflict, confusion, and administrative burden. Our results also indicate that ‘increased administrative burden’, ‘insensitivity’, ‘systemic dysfunction’, and ‘measure fixation’ each generated at least three other unintended consequences, suggesting that particular attention should be paid to reducing these four core issues. Finally, we recommend pilot testing PM interventions to identify unintended consequences in real-world settings prior to full implementation [3].

Implications for future research

Our comprehensive typology provides a common language for discourse on unintended consequences and supports systematic, comparable analyses of unintended consequences across PM regimes and healthcare systems. We recommend that researchers use and build on the typology in future studies to facilitate the accumulation of evidence on the influence of unintended consequences of PM.

The results of our rapid review and qualitative study suggest that additional research is needed on (1) the positive unintended consequences of PM, (2) how unintended consequences relate to and exacerbate each other, (3) how unintended consequences vary over time and between PM regimes and contexts, and (4) mitigation strategies. Examination of these issues could deepen understanding of unintended consequences and of how best to mitigate those that are negative while retaining the benefits of those that are positive in diverse healthcare systems. As noted above, our qualitative results suggest that the following four unintended consequences have negative generative effects and should be prioritized in future research and practice changes: ‘increased administrative burden’, ‘insensitivity’, ‘systemic dysfunction’, and ‘measure fixation’.

Finally, in our study, we conceptualized unintended consequences from the vantage points of three core stakeholder groups: providers, organizations, and patients. We recommend researchers break these groups down further to allow for a more nuanced understanding of the question, “unintended for whom?” For example, how a PM consequence is experienced and whether it is intended or unintended may vary based on provider role, organization size or type, and patient demographics or health status.

Study limitations

This study has limitations. First, although our literature search involved the use of several synonymous keywords and two databases reflecting the health sciences and management fields, respectively, it is possible that relevant papers were missed, particularly due to the range of possible PM interventions and differences in terminology across disciplines. Second, we did not assess the quality of studies included in the review. However, our aim was to develop a typology, not conduct a systematic review, and it is common for rapid reviews to limit sources and omit quality assessment [19]. Third, examining the unintended consequences of PM was not the research question that drove initial data collection. However, one interview question inquired about weaknesses of the PM system and another about unintended negative consequences of PM. Fourth, participants were not explicitly asked about positive unintended consequences of PM, though some were nevertheless identified in our data. Fifth, although roughly 20% of our sample consisted of practicing clinicians, these individuals held clinical leadership roles. No front-line staff in non-leadership roles were included; these individuals may have differing experiences and views of PM. Sixth, our results do not link unintended consequences with specific PM interventions; therefore, we do not know which unintended consequences are common based on type of PM intervention. Finally, since the data were collected in one Canadian province, the results may have limited generalizability. The typology, however, is rooted in both the literature and our qualitative data, and our qualitative results align with those of studies in other contexts.

Conclusion

In this study, we undertook a rapid review and qualitative study to develop a comprehensive typology of the unintended consequences of PM in healthcare and to explore unintended consequences from multiple stakeholder perspectives. Given the increasing use of PM to drive quality and accountability in healthcare, unintended administrative and clinical implications must be considered. While unintended consequences can never be fully eliminated, we can strive to minimize those that are negative and leverage those that are positive.

Availability of data and materials

The data analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CCO:

Cancer Care Ontario

NHS:

National Health Service

ORN:

Ontario Renal Network

PM:

Performance Management

References

  1. Smith PC. Performance management in British health care: will it deliver? Health Aff. 2002;21(3):103–15.

    Article  Google Scholar 

  2. Freeman T. Using performance indicators to improve health care quality in the public sector: a review of the literature. Health Serv Manag Res. 2002;15:126–37.

    Article  Google Scholar 

  3. Lester HE, Hannon KL, Campbell SM. Identifying unintended consequences of quality indicators: a qualitative study. BMJ Qual Saf. 2011;20(12):1057–61.

    PubMed  Article  Google Scholar 

  4. Liu D, Green E, Kasteridis P, Goddard M, Jacobs R, Wittenberg R, et al. Incentive schemes to increase dementia diagnosis in primary care in England: a retrospective cohort study of unintended consequences. Br J Gen Pract. 2019;69(680):e154–63.

    PubMed  PubMed Central  Article  Google Scholar 

  5. Mannion R, Braithwaite J. Unintended consequences of performance measurement in healthcare: 20 salutary lessons from the English National Health Service. Intern Med J. 2012;42:569–74.

    CAS  PubMed  Article  Google Scholar 

  6. McDonald R, Roland M. Pay for performance in primary care in England and California: comparison of unintended consequences. Ann Fam Med. 2009;7(2):121–7.

    PubMed  PubMed Central  Article  Google Scholar 

  7. Armstrong N, Brewster L, Tarrant C, et al. Taking the heat or taking the temperature? A qualitative study of a large-scale exercise in seeking to measure for improvement, not blame. Soc Sci Med. 2018;198:157–64.

    PubMed  PubMed Central  Article  Google Scholar 

  8. Bevan G, Hood C. What’s measured is what matters: targets and gaming in the English public health care system. Public Adm. 2006;84(3):517–38.

    Article  Google Scholar 

  9. Conrad L, Uslu PG. UK health sector performance management: conflict, crisis and unintended consequences. Account Forum. 2012;36(4):231–50.

    Article  Google Scholar 

  10. Wankhade P. Performance measurement and the UK emergency ambulance service. Unintended consequences of the ambulance response time targets. Int J Public Sect Manag. 2011;24(5):384–402.

    Article  Google Scholar 

  11. Chien AT, Chin MH, Davis AM, et al. Pay for performance, public reporting, and racial disparities in health care – how are programs being designed? Med Care Res Rev. 2007;64:283S–304S.

    PubMed  Article  Google Scholar 

  12. Damschroder LJ, Robinson CH, Francis J, et al. Effects of performance measure implementation on clinical manager and provider motivation. J Gen Intern Med. 2014;29:877–84.

    PubMed  PubMed Central  Article  Google Scholar 

  13. Hysong SJ, SoRelle R, Smitham KB, et al. Reports of unintended consequences of financial incentives to improve management of hypertension. PLoS One. 2017;12(9):e0184856.

  14. Kansagara D, Tuepker A, Joos S, et al. Getting performance metrics right: a qualitative study of staff experiences implementing and measuring practice transformation. J Gen Intern Med. 2014;29:607–13.

    PubMed Central  Article  Google Scholar 

  15. Powell AA, White KM, Partin MR, et al. Unintended consequences of implementing a national performance measurement system into local practice. J Gen Intern Med. 2012;27:405–12.

    PubMed  Article  Google Scholar 

  16. Ross JS, Williams L, Damush TM, et al. Physician and other healthcare personnel responses to hospital stroke quality of care performance feedback: a qualitative study. BMJ Qual Saf. 2016;25:441–7.

    PubMed  Article  Google Scholar 

  17. Franco-Santos M, Otley D. Reviewing and theorizing the unintended consequences of performance management systems. Int J Manag Rev. 2018;20:696–730.

    Article  Google Scholar 

  18. Smith P. The unintended consequences of publishing performance data in the public sector. Int J Public Adm. 1995;18(2):277–310.

    Article  Google Scholar 

  19. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10.

    PubMed  PubMed Central  Article  Google Scholar 

  20. Hagens V, Tassone C, Evans JM, et al. From measurement to improvement in Ontario’s cancer system: analyzing the performance of 28 provincial indicators over 15 years. Healthc Q. 2020;23:53–9.

    PubMed  Article  Google Scholar 

  21. Bell B. 2019. Don’t harm Cancer Care Ontario while restructuring health agencies. The Toronto Star. https://www.thestar.com/opinion/contributors/2019/01/17/dont-harm-cancer-care-ontario-while-restructuring-health-agencies.html (Accessed 30 Dec 2019).

  22. CIHI (Canadian Institute for Health Information). 2019. OECD interactive tool: International comparisons – quality of care. https://www.cihi.ca/en/oecd-interactive-tool-international-comparisons-quality-of-care (Accessed 25 Oct 2021).

  23. Evans JM, Im J, Grudniewicz A, Richards G, Veillard J. Managing the performance of health systems: an agency-stewardship dance. Acad Manag Proc. 2020;2020(1). https://doi.org/10.5465/AMBPP.2020.224.

  24. Frketich J, LaFleche G. 2019. Super agency brings end to independent Cancer Care Ontario. Hamilton Spectator. https://www.thespec.com/news-story/9510881-super-agency-brings-end-to-independent-cancer-care-ontario/ (Accessed 30 Dec 2019).

  25. Patton MQ. Qualitative research and evaluation methods. 3rd ed. Thousand Oaks: Sage Publications; 2002.

    Google Scholar 

  26. Allard J, Bleakley A. What would you ideally do if there were no targets? An ethnographic study of the unintended consequences of top-down governance in two clinical settings. Adv Health Sci Educ. 2016;21:803–17.

    Article  Google Scholar 

  27. Weber EJ, Mason S, Carter A, Hew RL. Emptying the corridors of shame: organizational lessons from England’s 4-hour emergency throughput target. Ann Emerg Med. 2011;57(2):79–88.e71. https://doi.org/10.1016/j.annemergmed.2010.08.013.

    Article  PubMed  Google Scholar 

  28. Bliss K, Chambers M, Rambur B. Building a culture of safety and quality: the paradox of measurement. Nurs Econ. 2020;38:178–84.

    Google Scholar 

  29. Shaw J, Murphy AL, Turner JP, Gardner DM, et al. Policies for Deprescribing: an international scan of intended and unintended outcomes of limiting sedative-hypnotic use in community-dwelling older adults. Healthc Policy. 2019;14:39–51.

    PubMed  PubMed Central  Google Scholar 

  30. Willing E. Hitting the target without missing the point: New Zealand’s immunisation health target for two year olds. Policy Stud. 2016;37(6):534–49.

    Article  Google Scholar 

  31. Pollard K, Horrocks S, Duncan L, Petsoulas C, Allen P, Cameron A, et al. How do they measure up? Differences in stakeholder perceptions of quality measures used in English community nursing. J Health Serv Res Policy. 2020;25(3):142–50.

    PubMed  Article  Google Scholar 

  32. Brewster L, Tarrant C, Dixon-Woods M. Qualitative study of views and experiences of performance management for healthcare-associated infections. J Hosp Infec. 2016;94:41–7.

    CAS  Article  Google Scholar 

  33. Rambur B, Vallett C, Cohen JA, et al. Metric-driven harm: an exploration of unintended consequences of performance measurement. Appl Nurs Res. 2013;26:269–72.

    PubMed  Article  Google Scholar 

  34. Saultz A, Saultz JW. Measuring outcomes: lessons from the world of public education. Ann Fam Med. 2017;15:71–6.

    PubMed  PubMed Central  Article  Google Scholar 

  35. Aryankhesal A, Sheldon TA, Mannion R, et al. The dysfunctional consequences of a performance measurement system: the case of the Iranian national hospital grading programme. J Health Serv Res Policy. 2015;20:138–45.

    PubMed  Article  Google Scholar 

  36. Tenbensel T, Chalmers L, Willing E. Comparing the implementation consequences of the immunisation and emergency department health targets in New Zealand. J Health Organ Manag. 2016;30(6):1009–24.

    PubMed  Article  Google Scholar 

  37. Casalino LP, Alexander GC, Jin L, Konetzka RT. General internists’ views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff. 2007;26(2):492–9.

    Article  Google Scholar 

  38. Hannon KL, Lester HE, Campbell SM. Patients’ views of pay for performance in primary care: a qualitative study. Br J Gen Pract. 2012;62(598):e322–8.

    PubMed  PubMed Central  Article  Google Scholar 

  39. Lindenauer PK, Lagu T, Ross JS, Pekow PS, Shatz A, Hannon N, et al. Attitudes of hospital leaders towards publicly reported measures of health care quality. JAMA Intern Med. 2014;174(12):1904–11.

    PubMed  PubMed Central  Article  Google Scholar 

  40. Martin B, Jones J, Miller M, et al. Health care professionals’ perceptions of pay-for-performance in practice: a qualitative metasynthesis. Inquiry. 2020;57:1–17.

    Google Scholar 

  41. Adair CE, Simpson E, Casebeer AL, et al. Performance measurement in healthcare: part II -- state of the science findings by stage of the performance measurement process. Healthc Policy. 2006;2:56–78.

    PubMed  PubMed Central  Google Scholar 

  42. Deber R, Schwartz R. What’s measured is not necessarily what matters: a cautionary story from public health. Healthc Policy. 2016;12:52–64.

    PubMed  PubMed Central  Google Scholar 

  43. Valderas JM, Fitzpatrick R, Roland M. Using health status to measure NHS performance: another step into the dark for the health reform in England. BMJ Qul Saf. 2011. https://doi.org/10.1136/bmjqs-2011-000184.

  44. Feinstein AR. Is “quality of care” being mislabeled or mismeasured? Am J Med. 2002;112:472–8.

    PubMed  Article  Google Scholar 

  45. Friebel R, Steventon A. The multiple aims of pay-for-performance and the risk of unintended consequences. BMJ Qual Saf. 2016;25:827–31.

    PubMed  Article  Google Scholar 

  46. Hong Y, Zheng C, Hechenbleikner E, Johnson LB, Shara N, Al-Refaie WB. Vulnerable hospitals and cancer surgery readmissions: insights into the unintended consequences of the patient protection and affordable care act. J Am Coll Surg. 2016;223(1):142–51.

    PubMed  PubMed Central  Article  Google Scholar 

  47. Esposito ML, Selker HP, Salem DN. Quantity over quality: how the rise in quality measure is not producing quality results. J Gen Intern Med. 2015;30:1204–7.

    PubMed  PubMed Central  Article  Google Scholar 

  48. Wholey DR, Finch M, Kreiger R, et al. Public reporting of primary care clinic quality: accounting for Sociodemographic factors in risk adjustment and performance comparison. Popul Health Manag. 2018;21:378–86.

    PubMed  Article  Google Scholar 

  49. Riskin L, Campagna JA. Quality assessment by external bodies: intended and unintended impact on healthcare delivery. Curr Opin Anaesthesiol. 2009;22:237–41.

    PubMed  Article  Google Scholar 

  50. Baker DW, Qaseem A. Evidence-based performance measures: preventing unintended consequences of quality measurement. Ann Intern Med. 2011;155:638–40.

    PubMed  Article  Google Scholar 

  51. Kerpershoek E, Groenleer M, de Bruijn H. Unintended responses to performance management in Dutch hospital care: bringing together the managerial and professional perspectives. Public Manag Rev. 2016;18:417–36.

    Article  Google Scholar 

  52. Tenbensel T, Jones P, Chalmers LM, Ameratunga S, Carswell P. Gaming New Zealand’s emergency department target: how and why did it vary over time and between organizations? Int J Health Policy Manag. 2020;9(4):152–62.

    PubMed  Google Scholar 

  53. Harris AHS, Chen C, Rubinsky AD, Hoggatt KJ, Neuman M, Vanneman ME. Are improvements in measured performance driven by better treatment or “denominator management”? J Gen Intern Med. 2016;31(Suppl 1):21–7.

    PubMed  PubMed Central  Article  Google Scholar 

  54. Peterson LA, Woodard LD, Urech T, Daw C, Sookanan S. Does pay-for-performance improve the quality of health care? Ann Intern Med. 2006;145(4):265–72.

    Article  Google Scholar 

  55. Rahman AN, Applebaum RA. The nursing home minimum data set assessment instrument: manifest functions and unintended consequences – past, present, and future. Gerontologist. 2009;49(6):737–5.

    Article  Google Scholar 

  56. Lee JY, Lee S-I, Jo M-W. Lessons from healthcare providers’ attitudes toward pay-for-performance: what should purchasers consider in designing and implementing a successful program? J Prev Med Public Health. 2012;45(3):137–47.

    PubMed  PubMed Central  Article  Google Scholar 

  57. Lester H, Matharu T, Mohammed MA, Lester D, Foskett-Tharby R. Implementation of pay for performance in primary care: a qualitative study 8 years after introduction. Br J Gen Pract. 2013;63(611):e408–15.

    PubMed  PubMed Central  Article  Google Scholar 

  58. Chien AT. The potential impact of performance incentive programs on racial disparities in health care. The potential impact of performance incentive programs on racial disparities in health care. In: Williams RA, editor. Healthcare disparities at the crossroads with healthcare reform. 1st ed. New York: Springer; 2011. p. 211–29.

    Chapter  Google Scholar 

  59. Werner RM, Konetzka RT, Kruse GB. Impact of public reporting on unreported quality of care. Health Serv Res. 2009;44:379–98.

    PubMed  PubMed Central  Article  Google Scholar 

  60. Werner RM, Goldman LE, Dudley RA. Comparison of change in quality of care between safety-net and non-safety-net hospitals. JAMA. 2008;299:2180–7.

    CAS  PubMed  Article  Google Scholar 

  61. Powell AA, White KM, Partin MR, et al. More than a score: a qualitative study of ancillary benefits of performance measurement. BMJ Qual Saf. 2014;23:651–8.

    PubMed  Article  Google Scholar 

  62. Metersky ML. Should management of pneumonia be an indicator of quality of care? Clin Chest Med. 2011;32:575–89.

    PubMed  Article  Google Scholar 

  63. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293(10):1239–44.

    CAS  PubMed  Article  Google Scholar 

  64. Aron DC, Tseng CL, Soroka O, Pogach LM. Balancing measured: identifying unintended consequences of diabetes quality performance measures in patients at high risk for hypoglycemia. Int J Qual Health Care. 2019;31(4):246–51.

  65. Mason T, Sutton M, Whittaker W, McSweeney T, Millar T, Donmall M, et al. The impact of paying treatment providers for outcomes; difference-in-differences analysis of the ‘payment by results for drug recovery’ pilot. Addiction. 2015;110(7):1120–8.

  66. Pannick S, Archer S, Long SJ, Husson F, Athanasiou T, Sevdalis N. What matters to medical ward patients, and do we measure it? A qualitative comparison of patient priorities and current practice in quality measurement on UK NHS medical wards. BMJ Open. 2019;9:e024058.

  67. Werner RM, Asch DA, Polsky D. Racial profiling: the unintended consequences of CABG report cards. Circulation. 2005;111:1257–63.

  68. Sutton M, Elder R, Guthrie B, Watt G. Record rewards: the effects of targeted quality incentives on the recording of risk factors by primary care providers. Health Econ. 2010;19(1):1–13.

  69. Van Dooren W. Better performance management: some single- and double-loop strategies. Public Perform Manag Rev. 2011;34(3):420–33.

  70. Van Dooren W, Hoffman C. Performance management in Europe: an idea whose time has come and gone? In: Ongaro E, Van Thiel S, editors. The Palgrave handbook of public administration and management in Europe. London: Palgrave Macmillan; 2018. p. 207–25.

  71. Weick KE. Sensemaking in organizations. Thousand Oaks: Sage; 1995.

Acknowledgements

At the time of data collection for this study, JME was a Staff Scientist with Ontario Health, formerly known as Cancer Care Ontario.

Funding

This study was not funded.

Author information

Contributions

JME collected the original data and conceived of this study. XL conducted the rapid review under JME’s supervision. XL conducted a secondary analysis of the data. Both authors participated in the interpretation of results. XL wrote the first draft of the manuscript. JME revised the manuscript critically. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Jenna M. Evans.

Ethics declarations

Ethics approval and consent to participate

Cancer Care Ontario is designated a “prescribed entity” for the purposes of Section 45(1) of the Personal Health Information Protection Act of 2004. As a prescribed entity, Cancer Care Ontario is authorized to collect and use data with respect to the management, evaluation or monitoring of all or part of the health system, including the delivery of services. Because this study complied with privacy regulations, ethics review was waived. All participants provided verbal informed consent prior to participation in the research.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Li, X., Evans, J.M. Incentivizing performance in health care: a rapid review, typology and qualitative study of unintended consequences. BMC Health Serv Res 22, 690 (2022). https://doi.org/10.1186/s12913-022-08032-z


Keywords

  • Performance management
  • Performance measurement
  • Quality indicators
  • Unintended consequences
  • Health systems