DRG coding practice: a nationwide hospital survey in Thailand
© Pongpirul et al; licensee BioMed Central Ltd. 2011
Received: 25 July 2011
Accepted: 31 October 2011
Published: 31 October 2011
Diagnosis Related Group (DRG) payment has been favored by healthcare reforms in various countries, but its implementation in resource-limited countries has not been fully explored.
This study aimed (1) to compare the characteristics of hospitals in Thailand that were audited with those that were not and (2) to develop a simplified scale to measure hospital coding practice.
A questionnaire survey was conducted of 920 hospitals in the Summary and Coding Audit Database (SCAD hospitals, all of which were audited in 2008 because of suspicious reports of possible DRG miscoding) and of 390 non-SCAD hospitals. The questionnaire asked about the general demographics of the hospitals and their coding structure and process, and included a set of 63 opinion-oriented items on current hospital coding practice. Descriptive statistics and exploratory factor analysis (EFA) were used for data analysis.
SCAD and non-SCAD hospitals differed in many aspects, especially in the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. Factor analysis revealed a simplified 3-factor, 20-item model for assessing hospital coding practice and classifying hospital intention.
Hospital providers should not be assumed capable of producing high quality DRG codes, especially in resource-limited settings.
Since 2001, the Universal Coverage (UC) scheme has provided health benefits to approximately three quarters of Thailand's citizens. The scheme is financed from general taxation and administered by the National Health Security Office (NHSO) under the supervision of the Minister of Public Health. Hospital providers come mainly from the public sector; they are paid for outpatient and preventive services based on prospective capitation, whereas Diagnosis Related Group (DRG)-based retrospective payment is used to compensate for the cost of inpatient care.
The DRG system can work well only if the diagnosis and procedure codes reflect both the patient's clinical condition and the actual cost of care incurred by hospital providers. In addition, coding practice in hospitals has to be capable of producing consistent and reliable coding quality. In an ideal setting, physicians would carefully review relevant clinical information to prepare a discharge summary, which would then be used by certified coders to produce appropriate diagnosis and procedure codes for reimbursement submission. However, such an ideal condition is unlikely, at least in resource-limited settings like Thailand.
In a related qualitative study, we detailed the variation of coding practice in 10 hospitals and then defined the concept of Hospital Coding Practice as comprising elements of both structure and process. In terms of structure, we identified at least eight health care professional disciplines (Medical Statisticians, Nurses, Physicians, Public Health Staff/Paramedics, Medical Record Staff, Information Technology Staff, Finance/Accounting Staff, and others) as well as IT infrastructure specifically relevant to coding practice. We also described seven major steps of the coding process (Discharge Summarization, Completeness Checking, Diagnosis and Procedure Coding, Code Checking, Relative Weight Challenging, Coding Report, and Internal Summary and Coding Audit). The findings demonstrated that coding practice is not the simple two-step activity payers had previously assumed, but rather a multi-step process involving many overlapping responsibilities across health care professional disciplines, especially in a resource-limited setting.
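The structure and process components above can be expressed as a small data model. The following Python sketch shows one way to represent the seven steps and tally overlapping responsibilities; the example assignment of disciplines to steps is purely illustrative, not data from the paper:

```python
# The seven steps of the hospital coding process, as listed above.
CODING_STEPS = [
    "Discharge Summarization",
    "Completeness Checking",
    "Diagnosis and Procedure Coding",
    "Code Checking",
    "Relative Weight Challenging",
    "Coding Report",
    "Internal Summary and Coding Audit",
]

def discipline_workload(assignments):
    """Count how many steps each discipline touches (as primary or
    secondary), making the overlap of responsibilities visible.
    assignments: {step: (primary_discipline, secondary_discipline)}"""
    counts = {}
    for primary, secondary in assignments.values():
        for discipline in (primary, secondary):
            counts[discipline] = counts.get(discipline, 0) + 1
    return counts

# Illustrative only: a hypothetical hospital where nurses back up
# every step and physicians handle only the discharge summary.
example = {step: ("Medical Record Staff", "Nurse") for step in CODING_STEPS}
example["Discharge Summarization"] = ("Physician", "Nurse")
```

A tally like `discipline_workload(example)` makes the cross-functional pattern described in the qualitative study easy to quantify across hospitals.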
Although the DRG system has been applied in many countries and improved over the decades, it is still imperfect, and a number of concerns have been raised. Discrepancies between the submitted codes and the information in medical records revealed during coding audits--especially when the submitted codes result in larger reimbursements than would be consistent with the actual condition--have triggered concerns about the quality of medical records, the cooperation of physicians, the knowledge and skill of coders, as well as hospital intentions to "game the system" (also known as "DRG creep"). While most of these issues can be objectively verified, the last concern seems to be both the most important and the most difficult to measure.
Inclusion criteria for SCAD 2008
Are in the top 100 for reimbursement amount, after sorting by Adjusted Relative Weight (RW)
Have a large number of 'abnormal' values, defined as:
- Procedure does not match hospital capacity
- Diagnosis and procedure codes are not correct
- Admission of ambulatory cases
- Larger amount of reimbursement than other hospitals of similar size
- High-cost DRG (RW > 4)
Have medical records with:
- RW >= 8 but Length of Stay (LOS) < 3
- RW < 5 but LOS > 10 or > 2 times the average LOS of the same DRG
- Tracheostomy procedure code but LOS < 3 and discharge status = cure
- Cardiovascular surgery procedure code that is not relevant to the hospital capacity
- Brain surgery procedure code but LOS < 3 and discharge status = improved
- Kidney dialysis procedure code but LOS < 5
- Large amount of data in some DRG
- Adjusted RW does not concur with cost
- Other unusual conditions
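Several of the record-level criteria above are simple threshold rules on RW and LOS, so they can be expressed as a screening function. A minimal sketch, with thresholds taken from the list above (the function name and return format are my own):

```python
def audit_flags(rw, los, avg_drg_los):
    """Return the record-level screening rules (from the SCAD 2008
    inclusion criteria) that a single inpatient record triggers.

    rw          -- relative weight of the record's DRG
    los         -- length of stay in days
    avg_drg_los -- average LOS for the same DRG
    """
    flags = []
    if rw >= 8 and los < 3:
        flags.append("RW >= 8 but LOS < 3")
    if rw < 5 and (los > 10 or los > 2 * avg_drg_los):
        flags.append("RW < 5 but unusually long stay")
    if rw > 4:
        flags.append("high-cost DRG (RW > 4)")
    return flags
```

For example, a record with RW 9 and a 2-day stay triggers both the short-stay and the high-cost rules. The remaining criteria (procedure-capacity mismatch, discharge status, and so on) need clinical context and cannot be reduced to thresholds this way.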
Hospital coding practice can be affected by factors beyond a hospital's control. Simply because hospitals were chosen to be audited based on the above inclusion criteria does not necessarily mean they intended to cheat or were incapable of producing quality coding. Understanding what might distinguish SCAD hospitals from their non-SCAD counterparts was an appropriate and feasible next step. This nationwide hospital survey had two main objectives: (1) to describe and compare the characteristics of both groups of hospitals and (2) to develop a simplified scale to measure hospital coding practice.
Based on the case study findings, a questionnaire was developed comprising three sections. Section 1 asked about the general demographics of the respondents and of the hospitals. Section 2 aimed to explore the hospital coding structure; respondents were asked about various types of resources in the hospital. Section 3.1 explored how, for each step of the hospital coding process, primary and secondary responsibility was assigned to, or assumed by, a particular health care professional discipline. Face validity of the questionnaire was assessed by presenting the findings to data-coding experts. The general demographic information was verified against the national hospital registry to ensure content validity. To ensure comprehensibility and feasibility, a pilot test was conducted among 3-5 graduate students with relevant professional backgrounds as well as among a few public and private hospitals; feedback was then used for revision. Forward and backward translation was done to ensure conceptual and linguistic equivalence.
Section 3.2 contained a set of opinion-oriented items reflecting on current hospital coding practice. A pool of 63 items was identified from the interviews with the 10 hospitals in the case study. They were categorized into groups A to K, according to their relevance to each step of the hospital coding process (A1-A19: Non-specific; B1-B8: Discharge Summarization; C1-C2: Completeness Checking; DE1-DE10: Diagnosis and Procedure Coding; F1-F6: Code Checking; G1-G5: Relative Weight Challenging; H1-H3: Coding Report; I1-I3: Medical Record Audit; J1-J3: Internal Summary Audit; K1-K3: Internal Coding Audit). These items were presented as declarative sentences, followed by response options indicating varying degrees of agreement with the statement. A five-point Likert scale format (-2 = Strongly Disagree; -1 = Disagree; 0 = Neutral; +1 = Agree; +2 = Strongly Agree) was considered the most appropriate, based on feedback from our pilot test, for the respondents' ability to discriminate meaningfully.
Site & Study Population
The questionnaire, a cover letter, and a prepaid return envelope were mailed to the target hospitals. Hospitals were asked to return the questionnaire within four weeks, after which two follow-up phone calls were made. Non-respondents were defined as those who did not return the questionnaire within two weeks after the second follow-up call. This study was determined by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board to be not human subjects research (IRB# 00002096).
For responses to Section 1 (General Demographics), Section 2 (Hospital Coding Structure), and Section 3.1 (Hospital Coding Process), Pearson's chi-square test and Student's t-test were used to analyze categorical and continuous variables, respectively. Exploratory factor analysis (EFA) was used to analyze the responses in Section 3.2 (Hospital Coding Practice Scale). EFA is a technique for explaining covariance among observed random variables in terms of fewer unobserved random variables called factors. It helps to generate hypotheses in such a way that the investigation of the relationships between manifest variables and factors is not based on any prior assumptions about which manifest variables are related to which factors.
The factor analysis was done to identify the optimal number of factors, determined by the Kaiser-Guttman criterion (number of eigenvalues > 1), the scree test, and parallel analysis. Items with high uniqueness, defined as larger than 0.70, were removed, whereas the remaining items were retained within the factors on which they showed high loadings. According to the 'rule of thumb' in confirmatory factor analysis, loadings should be at least 0.70 because this corresponds to about half of the variance of the variable being explained by the factor of interest. However, some researchers suggest lower cut-points, such as 0.60 or as low as 0.40, for EFA. We also performed an initial reliability test and item-based statistics in conjunction with EFA. Stata/SE Version 10 (Stata Corp.) was used for all statistical calculations.
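Two of the retention criteria can be sketched with numpy alone; this is a simplified illustration, not the Stata procedure actually used in the study. The Kaiser-Guttman criterion counts eigenvalues of the correlation matrix above 1, and Horn's parallel analysis keeps only eigenvalues that exceed those of random data with the same dimensions:

```python
import numpy as np

def kaiser_guttman(corr):
    """Kaiser-Guttman criterion: number of eigenvalues > 1."""
    return int(np.sum(np.linalg.eigvalsh(corr) > 1.0))

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain factors whose observed
    eigenvalues exceed the mean eigenvalues of random normal data
    with the same number of rows and columns."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_mean = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        random_mean += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    random_mean /= n_iter
    return int(np.sum(observed > random_mean))
```

On data with one strong common factor, both criteria recover a single factor; on real survey data they can disagree, as they did in this study.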
The overall response rate was 37.56% with a well-balanced geographical distribution. Of those who responded, 18 hospitals were excluded because they were no longer in operation. The sample therefore consisted of 374 SCAD and 100 non-SCAD hospitals. The SCAD hospitals were significantly more likely to respond than Non-SCAD hospitals (40.87% vs 29.74%; OR 1.63; 95% CI: 1.26-2.12; p = 0.0001). Public hospitals were significantly more likely to respond to our survey than private hospitals (p < 0.001). The geographical distributions were similar between the responders and non-responders (p = 0.663). Larger hospitals were significantly more likely to respond to the survey than smaller ones (p < 0.001).
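The reported odds ratio can be reproduced from the response figures. A quick check using the standard Wald confidence interval for a 2x2 table; the counts are back-calculated from the stated percentages, so they are approximate:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a responders / b non-responders in group 1; c / d in group 2."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# 920 SCAD hospitals, 40.87% responded -> ~376 responders;
# 390 non-SCAD hospitals, 29.74% responded -> ~116 responders.
or_, lo, hi = odds_ratio_ci(376, 920 - 376, 116, 390 - 116)
```

This yields OR 1.63 (95% CI about 1.27-2.10), close to the reported 1.26-2.12; the small difference comes from rounding the back-calculated counts.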
Characteristics of Hospital Survey Responders
The majority of the contact persons were female (80.08%), with an average age of 37 years, and had worked in their hospital for at least 12 years on average (Table 1). Almost half of them (47.97%) were medical statisticians and 31.71% were nurses. Each of the surveyed hospitals was responsible for an average UC population of 60,000. Universal Coverage, moreover, was the major source of hospital revenue for most responders. Under the UC scheme, hospitals have had to set up an adequate number of Primary Care Units (PCUs) to provide care for remote populations; in this survey, the mean number of PCUs was 9.20 per hospital. On average, approximately 75% of the beds were occupied. The average length of stay (LOS) was 5.95 days. The mean Relative Weight (RW)--a standard value assigned to each DRG to reflect the cost of its care--and Adjusted Relative Weight (Adjusted RW) were 0.91 and 0.82, respectively. On an average day, the surveyed hospitals took care of 75.50 inpatient and 511.23 outpatient cases. One quarter of the hospitals had received full accreditation from the Healthcare Accreditation Institute (similar to the US Joint Commission on Accreditation of Healthcare Organizations). SCAD hospitals were more likely to be public and smaller than non-SCAD hospitals (p < 0.001). The majority of SCAD hospitals were in the Northeast and Central regions, whereas non-SCAD hospitals were mostly from Bangkok and the Central region.
Hospital Coding Structure
The number of computers used specifically for coding purposes in each hospital ranged from 1 to 200 (mean 7; SD 14.56; N = 376), regardless of SCAD status. We found significant variation in the types of software that hospitals used, and these software programs offered different capabilities to assist in the coding process. Even the most popular software (HOSxP) was used in only 45% of the hospitals sampled. The majority of SCAD hospitals used HOSxP, whereas most of the non-SCAD hospitals preferred either less common commercial or proprietary software.
Human resources for hospital coding practice in SCAD and Non-SCAD hospitals (mean ± SD)

| Staff (metric) | Total | SCAD | Non-SCAD | p-value |
|---|---|---|---|---|
| Total # staff involved | 7.52 ± 14.32 | 7.49 ± 14.18 | 7.64 ± 14.91 | 0.9287 |
| Medical Statisticians (2 & 4 yr) | 1.79 ± 1.41 | 1.72 ± 1.31 | 2.22 ± 1.86 | 0.0256 |
| # Training (times) | 3.24 ± 3.34 | 3.27 ± 3.44 | 2.83 ± 1.18 | 0.8623 |
| Experience (years) | 10.74 ± 5.49 | 10.63 ± 5.58 | 12.17 ± 4.85 | 0.6465 |
| Medical Statisticians (2 yr) | 1.25 ± 1.11 | 1.21 ± 1.00 | 1.46 ± 1.60 | 0.1658 |
| # Training (times) | 3.62 ± 3.30 | 3.60 ± 3.35 | 3.75 ± 2.92 | 0.8525 |
| Experience (years) | 8.11 ± 6.01 | 7.89 ± 5.92 | 9.69 ± 6.55 | 0.1372 |
| Medical Statisticians (4 yr) | 0.54 ± 1.03 | 0.50 ± 0.92 | 0.76 ± 1.54 | 0.1189 |
| # Training (times) | 4.44 ± 5.31 | 4.58 ± 5.49 | 2.83 ± 1.47 | 0.4430 |
| Experience (years) | 11.67 ± 6.19 | 12.15 ± 5.95 | 7.70 ± 7.05 | 0.0310 |
| Nurses | 5.05 ± 9.58 | 5.41 ± 10.23 | 3.67 ± 6.30 | 0.2324 |
| # Training (times) | 2.70 ± 2.25 | 2.74 ± 2.35 | 2.63 ± 1.79 | 0.7856 |
| Experience (years) | 4.80 ± 3.27 | 4.75 ± 3.36 | 5.05 ± 3.02 | 0.5898 |
| Physicians | 4.06 ± 6.74 | 3.74 ± 5.08 | 6.67 ± 14.08 | 0.0838 |
| # Training (times) | 1.96 ± 1.35 | 1.93 ± 1.34 | 2.33 ± 1.63 | 0.4890 |
| Experience (years) | 3.10 ± 2.46 | 2.89 ± 2.24 | 5.36 ± 3.71 | 0.0109 |
| Public Health Staff | 2.06 ± 2.24 | 2.10 ± 2.34 | 1.00 ± 0.00 | 0.5159 |
| # Training (times) | 2.11 ± 2.23 | 2.00 ± 2.32 | 3.00 ± 1.41 | 0.5643 |
| Experience (years) | 3.05 ± 2.20 | 3.22 ± 2.31 | 2.04 ± 1.47 | 0.7523 |
| Medical Record Staff | 2.93 ± 3.11 | 2.96 ± 2.84 | 2.89 ± 3.51 | 0.9220 |
| # Training (times) | 2.38 ± 2.03 | 2.39 ± 2.59 | 2.42 ± 1.41 | 0.9618 |
| Experience (years) | 6.86 ± 6.96 | 7.30 ± 7.25 | 6.61 ± 6.85 | 0.6357 |
| Information Technology Staff | 2.22 ± 3.87 | 2.42 ± 4.28 | 1.20 ± 0.45 | 0.5340 |
| # Training (times) | 1.64 ± 0.92 | 1.78 ± 0.97 | 1.00 ± 0.00 | — |
| Experience (years) | 1.37 ± 0.71 | 1.67 ± 0.52 | 5.00 ± 0.00 | — |
| Finance/Accounting Staff | 2.10 ± 2.23 | 1.82 ± 2.21 | 3.25 ± 2.22 | 0.2609 |
| # Training (times) | 2.20 ± 1.10 | 2.20 ± 1.10 | 3.08 ± 1.30 | — |
| Experience (years) | 3.08 ± 0.27 | — | — | — |
| Others | 4.39 ± 9.35 | 3.59 ± 7.29 | 6.34 ± 12.91 | 0.1947 |
| # Training (times) | 1.92 ± 1.92 | 1.88 ± 1.81 | 2.13 ± 2.30 | 0.6820 |
| Experience (years) | 5.26 ± 6.66 | 4.33 ± 4.32 | 7.50 ± 10.11 | 0.0488 |
| Certified intermediate coders | 0.74 ± 1.13 | 0.63 ± 1.04 | 1.49 ± 1.35 | < 0.001 |
| Certified advanced coders | 0.04 ± 0.29 | 0.04 ± 0.31 | 0.03 ± 0.16 | 0.8337 |
Hospital Coding Process
Hospital Coding Practice Scale
Based on the Kaiser-Guttman criterion, scree test, and parallel analysis, our data suggested that hospital coding practice should comprise 2-4, 10, and 15 factors, respectively. Although parallel analysis has been considered the most accurate of the three criteria, it suggested a 15-factor model, which we thought was not simple enough for our exploration of hospital coding practice.
We then explored the Kaiser-Guttman criterion by re-running the factor analysis with 10 factors specified. The factors were rotated to spread variability more evenly among them, so that the solutions became comparable. We used a cut-point of 0.70 to drop items with high uniqueness, grouped the retained items into factors, and named the factors based on their member items. We found that the 10-factor model was still not clear, as some items had similar loadings across 2-3 factors. For example, item A10 (There is a physician responsible for coding practice) had loadings of 0.4275 and 0.4613 on factors 2 and 5, respectively.
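Uniqueness is one minus an item's communality (the sum of its squared loadings across factors), so the dropping rule reduces to a single comparison. A numpy sketch; the loadings for items other than A10 are invented for illustration, while A10's two cross-loadings are those quoted above:

```python
import numpy as np

def drop_high_uniqueness(items, loadings, cutoff=0.70):
    """Keep items whose uniqueness (1 - sum of squared loadings
    across factors) does not exceed the cutoff."""
    uniqueness = 1.0 - np.sum(np.asarray(loadings) ** 2, axis=1)
    kept = [item for item, u in zip(items, uniqueness) if u <= cutoff]
    return kept, uniqueness

items = ["item_x", "A10", "item_y"]   # item_x, item_y are hypothetical
loadings = [[0.80, 0.10],             # communality 0.65 -> uniqueness 0.35
            [0.4275, 0.4613],         # A10's reported cross-loadings
            [0.20, 0.10]]             # communality 0.05 -> uniqueness 0.95
kept, u = drop_high_uniqueness(items, loadings)
```

Note that item A10 survives the uniqueness cutoff (uniqueness ≈ 0.60): it was cross-loading across factors, not high uniqueness, that made the 10-factor model hard to interpret.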
Grouping and naming 3 factors
FACTOR 1 - DATA QUALITY
Summary Audit is always conducted
Results of Summary Audit are informed and used to improve discharge summary
Results of Medical Record Audit are used for improvement
There is a committee responsible for Summary Audit
Coding Audit is always conducted
Results of Medical Record Audit are publicly announced
Medical Record Audit is always conducted
Results of Coding Audit are informed and used to improve coding
There is a committee responsible for Coding Audit
There is a committee responsible for Medical Record Audit
FACTOR 2 - CODING PRACTICE
There is a policy to ensure that physicians do the coding themselves
Physician has to provide diagnosis and procedure codes in the discharge summary
The hospital develops a computer software to check the codes given by coder
There is a physician responsible for coding practice
There is an incentive/punishment mechanism to ensure timeliness of the discharge summary completion
The hospital provides clear career pathway for medical statistician
There is a staff responsible for using DRG Seeker software
FACTOR 3 - REIMBURSEMENT
Various combinations of codes are entered into DRG Seeker software to see the change in RW/Adjusted RW
Principal and secondary diagnoses may be swapped in order to see the change in Adjusted RW
DRG Seeker software is used for every inpatient
ITEMS DROPPED (Uniqueness > 0.70)
Health insurance status will be checked before coder can code
Checking health insurance status is an essential step of coding practice
There have been attempts to find diagnosis and procedure codes that give as high RW as possible
A criterion is used to select some discharge summaries for code checking
Coder always needs to see laboratory results
Hospital intention profiles based on the 3-factor model
Hospital appoints committees to be responsible for the quality of medical records, discharge summaries, and diagnosis and procedure codes. Audits are regularly conducted and the findings are used as feedback to improve the quality of medical records, discharge summaries, and diagnosis and procedure codes.
Hospital sets policy and provides essential inputs to ensure good diagnosis and procedure codes. Physicians are considered the key profession and are therefore expected to take responsibility at various levels. The hospital may appoint a physician to oversee the whole hospital coding practice. Physicians at the operational level may be required to provide appropriate diagnosis and procedure codes, in addition to their mandated discharge summary task. Medical statisticians are important primarily for checking the codes provided by the physicians and secondarily for overseeing the IT aspect of the coding process.
Hospital pays most of its attention to the diagnosis and procedure codes given. The codes are strategically analyzed to see how much relative weight would change across various sets of codes. Selectivity is most obvious when the health insurance status of the patient has to be checked before coding is done.
Hospital not only tries to improve coding practice, but also attempts to monitor the quality of medical record, discharge summary, as well as diagnosis and procedure codes.
Hospital tries to improve coding practice, but mainly to increase relative weight and therefore reimbursement.
Hospital is concerned about both data quality and its effects on reimbursement; however, no explicit evidence of coding practice improvement can be found.
Hospital shows evidence of improving the coding practice, monitoring the quality of medical record, discharge summary, and diagnosis and procedure codes, as well as checking how the codes affect final reimbursement amount.
Hospital maintains the status quo, with no interest in monitoring data quality, improving coding practice, or examining how much reimbursement would be affected.
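The eight profiles above are the 2^3 combinations of whether a hospital focuses on each of the three factors, so classification is a table lookup. A sketch in which the profile names are my own shorthand for the descriptions above, not labels from the paper:

```python
def intention_profile(data_quality, coding_practice, reimbursement):
    """Map high (True) / low (False) focus on each of the three
    factors to one of the eight intention profiles."""
    profiles = {
        (True,  False, False): "data-quality focused",
        (False, True,  False): "coding-practice focused",
        (False, False, True ): "reimbursement focused",
        (True,  True,  False): "improver with quality monitoring",
        (False, True,  True ): "improver driven by reimbursement",
        (True,  False, True ): "concerned but not improving",
        (True,  True,  True ): "comprehensive",
        (False, False, False): "status quo",
    }
    return profiles[(data_quality, coding_practice, reimbursement)]
```

Under the study's hypothesis, hospitals whose profile involves a high reimbursement focus without a matching data-quality focus would be the prime candidates for DRG creep.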
As a preferred method for provider payment in both developed and developing countries, DRG assumes that hospitals are well equipped with physicians and certified coders and therefore able to submit diagnosis and procedure codes of high quality. The literature on DRG implementation has come mostly from countries with abundant resources or has focused mainly on macro-level effects, whereas studies on how DRG codes are actually produced by hospitals are lacking.
DRG creep has been a major concern in resource-rich settings, in which hospitals are suspected of being profit maximizers. As this kind of organizational intention is difficult to assess directly, it is not surprising to see mixed findings on the extent of DRG creep based on surrogate outcome measures [13–18]. While some studies investigated organizational behavior and reported the potential influence of hospital management and payers on the coding process [19, 20], other studies tried to demonstrate potential upcoding of specific diagnoses such as pneumonia and heart failure [21, 22].
Poor coding quality results not only from DRG creep, but also from sicker patients, improvements in coding, and changes instituted by the payer. We add that variation in hospital coding practices in an under-resourced health system is another major determinant of DRG coding quality. It is not fair for a hospital to be assumed 'capable' of producing good codes without qualified physicians and/or coders.
To our knowledge, this study is the first national survey to explore the structural and process components of coding practice that might affect DRG coding quality. In terms of structure, we found that the use of software, the number of medical statisticians, and the experience of physicians seemed to be the most important. SCAD hospitals are more likely to have fewer medical statisticians, fewer certified coders, and less experienced physicians. Our previous case study revealed that, with inadequate numbers and inequitable distribution of certified coders, hospitals have tried to survive by using part-time coders from other disciplines, especially nurses. This survey expanded the point further by suggesting that SCAD hospitals are more likely to face such problems than non-SCAD hospitals.
The current production of medical statisticians has been limited, and their actual task is not necessarily coding. In Thailand, medical statistician is a job position that requires undergraduate-level training and is usually responsible for analyzing patient information. Although the Thai DRG system anticipated that medical statisticians would be trained and certified to work as coders, a survey of 322 hospitals in 2001 revealed that only 59.87% of the hospitals had medical statisticians working as coders, and as many as 46% of them were considered 'part-time coders' because they were responsible for other jobs as well.
Based on the seven steps of the hospital coding process we reported earlier, this cross-function phenomenon also occurs in other health care professional disciplines. This study adds to the case study findings by quantifying the number of hospitals in which the phenomenon occurs. For example, the discharge summary has been assumed to be written by physicians and is therefore used as a gold standard to judge whether the codes assigned by hospital coders are correct. However, nurses or medical statisticians are in fact the staff primarily responsible for the discharge summary in some hospitals. Nurses have played important roles in almost all steps of the hospital coding process, but they have not been formally recognized. While experienced nurses can become certified coders, their contribution might not count toward job promotion in public hospitals.
We also propose the development of a new tool, the Hospital Coding Practice Scale, which can indirectly explore the DRG creep phenomenon. Audit by external peers has been the main mechanism for assessing the quality of the discharge summary as well as diagnosis and procedure codes, but the results cannot be used to judge a hospital's intention to game the system. By using this measurement model, one can classify hospitals based on the domains they focus on and the profiles they fall into. We hypothesize that DRG creep is more likely among hospitals that focus mainly on reimbursement (profiles 3, 5, 6; Table 3). Further studies are required to ensure the validity, reliability, and feasibility of this tool.
The generalizability of the findings from this study is limited by the low response rate. This was anticipated, because some hospitals might be cautious about providing such detailed and confidential information on coding practices. The difference in both response rates and characteristics between SCAD and non-SCAD hospitals is another limitation that does not allow a direct comparison of various aspects between the two groups. Also, we were unable to explore some other factors that might affect hospital coding practice; for example, the varied functionality of the software hospitals use.
SCAD and non-SCAD hospitals differed in many aspects, especially in the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. The findings suggest that hospital providers should not be assumed capable of producing high quality DRG codes, especially in resource-limited settings.
This study would not have been possible without cooperation from the hospital respondents. We would also like to thank Professor Richard H. Morrow for his comments during the preparation of this manuscript and Miss Sudarat Chadsuthi for her help with the production of some figures. This study is part of the first author's dissertation project "Hospital Coding Practice, Data Quality, and DRG-based Reimbursement under the Thai Universal Coverage Scheme", which received partial financial support from the Health Insurance System Research Office (HISRO), Thailand. He also received the Higher Educational Strategic Scholarships for Frontier Research Network from the Commission on Higher Education, Thailand.
- Pongpirul K, Walker DG, Winch PJ, Robinson C: A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme. BMC Health Serv Res. 2011, 11 (7).
- Simborg DW: DRG creep: a new hospital-acquired disease. N Engl J Med. 1981, 304 (26): 1602-1604. 10.1056/NEJM198106253042611.
- NHSO: Thai DRGs Version 4.0 Volume 1. 2007, Nonthaburi: National Health Security Office.
- Pongpirul K, Wongkanaratanakul P: Coding Audit for In-patient Reimbursement of the National Health Security Office. Journal of Health Systems Research. 2008, 2 (4): 535-545.
- DeVellis RF: Scale development: theory and applications. 2003, Thousand Oaks, CA: Sage Publications, Inc, 2.
- Everitt B, Dunn G: Applied multivariate data analysis. 2001, London: Oxford University Press, 2.
- Kaiser HF: The application of electronic computers to factor analysis. Educational and Psychological Measurement. 1960, 20: 141-151. 10.1177/001316446002000116.
- Cattell RB: The scree test for the number of factors. Multivariate Behavioral Research. 1966, 1: 245-276. 10.1207/s15327906mbr0102_10.
- Horn JL: A rationale and test for the number of factors in factor analysis. Psychometrika. 1965, 30: 179-186. 10.1007/BF02289447.
- Hair JF, Anderson RE, Tatham RL, Black WC: Multivariate data analysis with readings. 1998, Englewood Cliffs, NJ: Prentice Hall, 5.
- Raubenheimer J: An item selection procedure to maximize scale reliability and validity. South African Journal of Industrial Psychology. 2004, 30 (4): 59-64.
- Roger France FH: Case mix use in 25 countries: a migration success but international comparisons failure. Int J Med Inform. 2003, 70 (2-3): 215-219. 10.1016/S1386-5056(03)00044-3.
- Carter GM, Newhouse JP, Relles DA: How much change in the case mix index is DRG creep?. J Health Econ. 1990, 9 (4): 411-428. 10.1016/0167-6296(90)90003-L.
- Chulis G: Assessing Medicare's prospective payment system for hospitals. Medical Care Research and Review. 1991, 48: 167-206. 10.1177/002570879104800203.
- Hsia DC, Ahern CA, Ritchie BP, Moscoe LM, Krushat WM: Medicare reimbursement accuracy under the prospective payment system, 1985 to 1988. JAMA. 1992, 268 (7): 896-899. 10.1001/jama.268.7.896.
- Hsia DC, Krushat WM, Fagan AB, Tebbutt JA, Kusserow RP: Accuracy of diagnostic coding for Medicare patients under the prospective-payment system. N Engl J Med. 1988, 318 (6): 352-355. 10.1056/NEJM198802113180604.
- Lüngen M, Lauterbach KW: [Upcoding--a risk for the use of diagnosis-related groups]. Dtsch Med Wochenschr. 2000, 125 (28-29): 852-856.
- Steinwald B, Dummit LA: Hospital case-mix change: sicker patients or DRG creep?. Health Aff (Millwood). 1989, 8 (2): 35-47. 10.1377/hlthaff.8.2.35.
- Lorence DP, Richards M: Variation in coding influence across the USA. Risk and reward in reimbursement optimization. J Manag Med. 2002, 16 (6): 422-435. 10.1108/02689230210450981.
- Lorence DP, Spink A: Regional variation in medical systems data: influences on upcoding. J Med Syst. 2002, 26 (5): 369-381. 10.1023/A:1016405214914.
- Psaty BM, Boineau R, Kuller LH, Luepker RV: The potential costs of upcoding for heart failure in the United States. Am J Cardiol. 1999, 84 (1): 108-109, A109. 10.1016/S0002-9149(99)00205-2.
- Silverman E, Skinner J: Medicare upcoding and hospital ownership. J Health Econ. 2004, 23 (2): 369-389. 10.1016/j.jhealeco.2003.09.007.
- Prasanwong C, Reungdech S, Lokchareonlap S, Tatsalongkarnsakoon W, Pantarassamee C, Yeunyongsuwan M: Medical coding practices in Thailand. 2001, Nonthaburi: Health Systems Research Institute.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/11/290/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.