Correlation analysis of provider attributes with referral frequency and awards

Background: There has been recent growth in health provider search portals, where patients specify filters, such as specialty or insurance, and providers are ranked by patient ratings or other attributes. Previous work has identified attributes associated with a provider's quality through user surveys. Other work supports that intuitive quality-indicating attributes are associated with a provider's quality.

Methods: We adopt a data-driven approach to study how quality indicators of providers are associated with a rich set of attributes, including medical school, graduation year, procedures, fellowships, patient reviews, location, and technology usage. In this work, we consider only providers who are individuals (e.g., general practitioners) and not organizations (e.g., hospitals). As quality indicators, we consider the referral frequency of a provider and a peer-nominated quality designation. We combined data from the Centers for Medicare and Medicaid Services (CMS) and several provider rating web sites to perform our analysis.

Results: Our data-driven analysis identified several attributes that correlate with, and are discriminative of, referral volume and peer-nominated awards. In particular, our results consistently demonstrate that these attributes vary by locality and that the frequency of an attribute is more important than its value (e.g., the number of patient reviews or hospital affiliations is more important than the average review rating or the ranking of the hospital affiliations, respectively). We demonstrate that it is possible to build accurate classifiers for referral frequency and quality designation, with accuracies over 85%.

Conclusions: Our findings show that a one-size-fits-all approach to ranking providers is inadequate and that provider search portals should calibrate their ranking function based on location and specialty. Further, traditional filters of provider search portals should be reconsidered, and patients should be made aware of existing pitfalls with these filters and educated on local factors that affect quality. These findings enable provider search portals to empower patients and to "load balance" patients between younger and older providers.

Electronic supplementary material: The online version of this article (doi:10.1186/s12913-016-1338-1) contains supplementary material, which is available to authorized users.

We visualized the ratio of providers with Referral Frequency=Very High and Castle Connolly Award=true to the total number of providers in each state using heat maps, shown in Figures A.2 and A.3. As shown in Figure A.2, Nevada and the mid- and south-Atlantic regions of the U.S. have the highest concentration of providers with Referral Frequency=Very High, which may imply that a majority of referral services in these areas are concentrated among a smaller number of providers due to a lack of specialists. As shown in Figure A.3, the northeast region of the U.S. contains a higher concentration of providers with Castle Connolly awards than any other region. Further, Florida, Washington, and Indiana also contain considerably high ratios of Castle Connolly awards (greater than 5%). These results suggest that more providers in these states seek peer validation, which may result in a greater number of medical or clinical peer reviews; peer review processes, such as accreditation programs, are tools to improve provider quality of care [2].
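The per-state ratios behind these heat maps can be sketched as follows. This is a minimal illustration, not the authors' code; the pair layout and state/frequency values are hypothetical.

```python
# Sketch: compute, per state, the fraction of providers with
# Referral Frequency = "Very High", i.e. the input to the heat map.
# Data layout ((state, referral_frequency) pairs) is an assumption.
from collections import Counter

def state_ratios(providers, flag="Very High"):
    """providers: iterable of (state, referral_frequency) pairs."""
    totals, hits = Counter(), Counter()
    for state, freq in providers:
        totals[state] += 1
        if freq == flag:
            hits[state] += 1
    return {s: hits[s] / totals[s] for s in totals}

sample = [("NV", "Very High"), ("NV", "Low"),
          ("OH", "Medium"), ("OH", "Very High"), ("OH", "Very High")]
print(state_ratios(sample))
```

The same function, with a different flag, yields the Castle Connolly Award=true ratios of Figure A.3.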

Figure A.2 Ratio of providers with Referral Frequency=Very High to the total number of providers by state. This map was generated using the Google Visualization API and used according to terms described in the Creative Commons 3.0 Attribution License [3, 4].

Figure A.3 Ratio of providers with Castle Connolly Award=true to the total number of providers by state. This map was generated using the Google Visualization API and used according to terms described in the Creative Commons 3.0 Attribution License [3, 4].

Appendix B State-Level Correlations
Here we present our analysis of state-level correlations with Referral Frequency=Very High in order to observe local trends among providers with frequent referrals. We found that 75 distinct attributes have a correlation greater than 0.05 when the data is stratified by state. A majority of these attributes had correlations greater than 0.05 in only one or two states; Table B.1 lists the top 10 most frequently correlated attributes at the state level (note that the total number of states is 51, as Washington D.C. is included). Based on this table, there are indeed local influences on providers who are frequently referred, and these influences are dominated by pediatric specialties. We also examined correlations of Castle Connolly Award=true at the state level and found that 82 distinct attributes have a correlation greater than 0.05 when the data is stratified by state. A majority of these attributes had correlations greater than 0.05 in only one or two states; Table B.2 lists the top 10 most frequently correlated attributes at the state level. Based on this table, Castle Connolly awards indeed exhibit localized behavior, and this behavior is influenced by the provider's specialty. This localized behavior could be explained by the peer-nomination process employed by Castle Connolly. Further, we also see local trends for certain types of drugs, such as Metformin for Type II diabetes and Cyclobenzaprine for muscle spasms. Lastly, despite the overrepresentation of males among Castle Connolly awardees (79% versus 69% overall), we see that female has a correlation greater than 0.05 with Castle Connolly Award=true in nine states, whereas male has a correlation greater than 0.05 in no state.
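The stratified correlation screen described above can be sketched as follows: for binary attributes and a binary target, the Pearson correlation reduces to the phi coefficient. The field names, record layout, and example data are hypothetical; only the 0.05 threshold comes from the text.

```python
# Sketch: stratify provider records by state, then keep the attributes
# whose phi coefficient (Pearson correlation for two binary variables)
# with the target exceeds the 0.05 threshold used in the paper.
import math
from collections import defaultdict

def phi(xs, ys):
    """Pearson correlation of two equal-length 0/1 sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy) if sx and sy else 0.0

def correlated_attrs_by_state(rows, attrs, target, thresh=0.05):
    """rows: dicts with a 'state' key plus 0/1 attribute and target fields."""
    by_state = defaultdict(list)
    for row in rows:
        by_state[row["state"]].append(row)
    result = {}
    for state, group in by_state.items():
        ys = [row[target] for row in group]
        result[state] = [a for a in attrs
                         if phi([row[a] for row in group], ys) > thresh]
    return result

rows = [{"state": "NY", "pediatrics": 1, "award": 1},
        {"state": "NY", "pediatrics": 0, "award": 0},
        {"state": "NY", "pediatrics": 1, "award": 1},
        {"state": "NY", "pediatrics": 0, "award": 0}]
print(correlated_attrs_by_state(rows, ["pediatrics"], "award"))
```

Counting, per attribute, how many states it survives in yields tables like B.1 and B.2.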

Appendix C Most Discriminative Attributes for Referrals
To gain insight into attributes useful for classifying providers' referral frequency, we examined the top 10 most discriminative attributes for the discretized Referral Frequency attribute in Table C.1. This table shows that a provider's referral frequency may be discriminated by vascular-related prescriptions (e.g., Warfarin), whether the provider offers electronic prescriptions, the provider's relative volume, whether the provider is seeing new patients, and whether the provider participates in PQRS. Note that the top three discriminative attributes from this table are also strongly correlated with Referral Frequency=Very High. As at the national level, we see that a majority of errors are relative to the ordering of categories. Further, we observe a significant improvement in sensitivity from 52% to 72% for Referral Frequency=Very High classifications; however, there is no change in accuracy and some degradation in positive predictive value, from 78% to 70%. Other categories exhibited similar behavior except for Referral Frequency=Low, which saw a decrease in sensitivity. Thus, finding discriminative attributes to classify providers with high referral frequency is easier using attributes at the local level, and these local influencers should be modeled separately in each classifier. However, local influencers have less of an effect on classifying providers with very low referral frequency or no referrals.
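The two per-class metrics being compared here are computed from confusion-matrix counts as below. The counts themselves are hypothetical, chosen only so the example reproduces the 72% sensitivity and 70% PPV reported above.

```python
# Sketch: per-class sensitivity and positive predictive value (PPV)
# from confusion-matrix counts for one class of a multi-class problem.
def sensitivity(tp, fn):
    """Fraction of actual positives the classifier recovers."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Fraction of predicted positives that are correct."""
    return tp / (tp + fp)

# Hypothetical counts for the Referral Frequency = "Very High" class:
tp, fn, fp = 72, 28, 31
print(f"sensitivity={sensitivity(tp, fn):.0%}, PPV={ppv(tp, fp):.0%}")
```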

Appendix E Rule Learning Results
In this section we report a summary of the rules found by applying the RIPPER algorithm to Castle Connolly Award and the discretized Referral Frequency. For each dataset at the national and state level, we ran RIPPER with pruning, a maximum error rate of 50%, and a minimum of 10 items covered per rule; i.e., every rule covers at least 10 positive examples, and negatives make up at most half of the examples a rule covers. For every rule, at both the state and national levels, we computed its accuracy using the number of positives and negatives that the rule covers, and we present the rules that yield the highest accuracies; in the case of Referral Frequency, we report only rules that cover at least 100 providers, as there are several rules that cover more than 100 providers with 90% or better accuracy. Essentially, each rule identifies a cadre of providers with similar qualities who either have a high referral frequency or received a Castle Connolly award. This qualitative analysis gives further insight into local influencers of highly referred providers and of providers with a Castle Connolly award. Thus, in addition to the number of hospital affiliations, Medicare procedures, and Medicare patients, providers who are highly referred perform specific laboratory procedures that differ by locality, and these same providers tend to avoid a specific medication unique to the locality. We also see an interesting rule in Washington, which says that females with at least one fellowship, 20 to 35 years of experience, a hospital affiliation score in the top 53%, and employment at organizations with at least 189 employees are more likely to receive a Castle Connolly award.
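The rule-selection step above can be sketched as follows. Each learned rule is scored by accuracy = positives / (positives + negatives) over the providers it covers, and for Referral Frequency only rules covering at least 100 providers are kept. The rule names and counts here are hypothetical; only the 100-provider and 90%-accuracy thresholds come from the text.

```python
# Sketch: filter learned rules by coverage and accuracy, as described
# for the Referral Frequency rule summaries. Rule data is hypothetical.
def rule_accuracy(pos, neg):
    """Accuracy of a rule over the providers it covers."""
    return pos / (pos + neg)

def select_rules(rules, min_cover=100, min_acc=0.9):
    """rules: iterable of (name, positives_covered, negatives_covered)."""
    return [(name, rule_accuracy(p, n))
            for name, p, n in rules
            if p + n >= min_cover and rule_accuracy(p, n) >= min_acc]

rules = [("rule_a", 180, 10),   # 190 covered, ~94.7% accurate -> kept
         ("rule_b", 40, 2),     # only 42 covered -> dropped
         ("rule_c", 120, 30)]   # 80% accurate -> dropped
print(select_rules(rules))
```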