DRG coding practice: a nationwide hospital survey in Thailand

Abstract

Background

Diagnosis Related Group (DRG) payment has been favored by healthcare reforms in various countries, but its implementation in resource-limited countries has not been fully explored.

Objectives

This study aimed (1) to compare the characteristics of hospitals in Thailand that were audited with those that were not and (2) to develop a simplified scale to measure hospital coding practice.

Methods

A questionnaire survey was conducted of 920 hospitals in the Summary and Coding Audit Database (SCAD hospitals, all of which were audited in 2008 because of suspicious reports of possible DRG miscoding); the survey also included 390 non-SCAD hospitals. The questionnaire asked about the general demographics of the hospitals and their coding structure and process, and included a set of 63 opinion-oriented items on current hospital coding practice. Descriptive statistics and exploratory factor analysis (EFA) were used for data analysis.

Results

SCAD and non-SCAD hospitals differed in many respects, especially in the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. Factor analysis revealed a simplified 3-factor, 20-item model to assess hospital coding practice and classify hospital intention.

Conclusion

Hospital providers should not be assumed capable of producing high quality DRG codes, especially in resource-limited settings.

Background

Since 2001, the Universal Coverage (UC) scheme has provided health benefits to approximately three quarters of Thailand's citizens. The scheme is financed from general taxation and administered by the National Health Security Office (NHSO), under the supervision of the Public Health Minister. Hospital providers are mainly from the public sector; they are paid for outpatient and preventive services through prospective capitation, whereas Diagnosis Related Group (DRG)-based retrospective payment is used to compensate for the cost of inpatient care.

The DRG system can work well only if the diagnosis and procedure codes reflect both the patient's clinical condition and the actual cost of care incurred by hospital providers. In addition, coding practice in hospitals has to be capable of producing consistently reliable coding quality. In an ideal setting, physicians would carefully review relevant clinical information to prepare a discharge summary, which would then be used by certified coders to produce appropriate diagnosis and procedure codes for reimbursement. However, such an ideal condition is unlikely, at least in resource-limited settings like Thailand.

In a related qualitative study, we detailed the variation of coding practice in 10 hospitals and defined the concept of Hospital Coding Practice as comprising elements of both structure and process [1]. In terms of structure, we identified at least eight health care professional disciplines (Medical Statisticians, Nurse, Physician, Public Health Staff/Paramedics, Medical Record Staff, Information Technology Staff, Finance/Accounting Staff, and others) as well as IT infrastructure specifically relevant to coding practice. We also described seven major steps of the coding process (Discharge Summarization, Completeness Checking, Diagnosis and Procedure Coding, Code Checking, Relative Weight Challenging, Coding Report, and Internal Summary and Coding Audit). The findings demonstrated that coding practice is not the simple two-step activity payers had previously assumed, but rather a multi-step process that involves many overlapping responsibilities across health care professional disciplines, especially in a resource-limited setting [1].

Although DRG has been applied in many countries and has been improved over the decades, it still is an imperfect system and a number of concerns have therefore been raised. Discrepancies between the submitted codes and the information in medical records revealed during the coding audit--especially when the submitted codes can result in larger reimbursements than would be consistent with the actual condition--have triggered a number of concerns about quality of medical records, cooperation of physicians, knowledge and skill of coders, as well as hospital intentions to "game the system" (also known as "DRG Creep") [2]. While most of these issues can be objectively verified, the last concern seems to be both the most important and the most difficult to measure.

Hospitals in Thailand are required to check the accuracy and completeness of their data against the National Health Security Office (NHSO) standard guidelines [3]; however, errors have been frequently reported. Each hospital is therefore required to check the data before submission, and penalties--financial and non-financial--are imposed if errors are found. In 2008, the Bureau of Claims and Medical Audit (BCMA) conducted the Summary and Coding Audit on 57,828 medical records from 931 hospitals in 75 provinces (SCAD 2008). Hospitals were selected based on pre-specified criteria, as presented in Table 1 [4]. These 'SCAD hospitals' were then regarded negatively, as they were suspected of either having poor data quality or trying to manipulate the system.

Table 1 Inclusion criteria for SCAD 2008

Hospital coding practice can be affected by some factors beyond a hospital's control [1]. The fact that hospitals were chosen for audit based on the above inclusion criteria does not necessarily mean that they intended to cheat or were incapable of producing quality coding. Understanding what might distinguish SCAD hospitals from their non-SCAD counterparts was an appropriate and feasible next step. This nationwide hospital survey had two main objectives: (1) to describe and compare the characteristics of both groups of hospitals and (2) to develop a simplified scale to measure hospital coding practice.

Methods

Questionnaire

Based on the case study findings [1], a questionnaire comprising three sections was developed. Section 1 asked about the general demographics of the respondents and of the hospitals. Section 2 explored the hospital coding structure by asking about the various types of resources in the hospital. Section 3.1 explored how, for each step of the hospital coding process, primary and secondary responsibility was assigned to, or assumed by, a particular health care professional discipline. Face validity of the questionnaire was assessed by presenting it to data-coding experts. The general demographic information was verified against the national hospital registry to ensure content validity. To ensure comprehensibility and feasibility, a pilot test was conducted among 3-5 graduate students with relevant professional backgrounds as well as among a few public and private hospitals; feedback was then used for revision. Forward and backward translation was done to ensure conceptual and linguistic equivalence.

Section 3.2 contained a set of opinion-oriented items reflecting current hospital coding practice. A pool of 63 items was identified from the interviews with the 10 hospitals in the case study [1]. They were categorized into groups A to K according to their relevance to each step of the hospital coding process (A1-A19: Non-specific; B1-B8: Discharge Summarization; C1-C2: Completeness Checking; DE1-DE10: Diagnosis and Procedure Coding; F1-F6: Code Checking; G1-G5: Relative Weight Challenging; H1-H3: Coding Report; I1-I3: Medical Record Audit; J1-J3: Internal Summary Audit; K1-K3: Internal Coding Audit). The items were presented as declarative sentences, followed by response options indicating varying degrees of agreement with the statement. A five-point Likert scale format (-2 = Strongly Disagree; -1 = Disagree; 0 = Neutral; +1 = Agree; +2 = Strongly Agree) was considered, based on feedback from our pilot test, the most appropriate for respondents' ability to discriminate meaningfully [5].

Site & Study Population

The study population of interest initially included all 1,356 hospitals in Thailand, of which 931 participated in the UC scheme and had their audit results included in SCAD 2008. Thirty-five hospitals that provided services to highly specific population groups (such as drug addicts, prisoners, special psychiatric patients, or organizational employees) or received special kinds of financial support, and 11 hospitals without a verified postal address, were excluded. After exclusion, there were 1,310 target hospitals, comprising 920 SCAD and 390 non-SCAD hospitals (Figure 1). The respondents targeted for the study were the hospital staff responsible for the diagnosis and procedure coding process, as identified by the hospital director.

Figure 1 Survey strategy.

Data Collection

The questionnaire, a cover letter, and a prepaid return envelope were mailed to the target hospitals. Hospitals were asked to return the questionnaire within four weeks, after which two follow-up phone calls were made. Non-respondents were defined as those who did not return the questionnaire within two weeks after the second follow-up call. This study was determined by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board to be not human subjects research (IRB# 00002096).

Data Analysis

For responses to Section 1 (General Demographics), Section 2 (Hospital Coding Structure), and Section 3.1 (Hospital Coding Process), Pearson's chi-square test and Student's t-test were used to analyze categorical and continuous variables, respectively. Exploratory factor analysis (EFA) was used to analyze the responses in Section 3.2 (Hospital Coding Practice Scale). EFA is a technique used to explain the covariance among observed random variables in terms of a smaller number of unobserved random variables called factors. It helps to generate hypotheses because the relationships between manifest variables and factors are explored without any prior assumptions about which manifest variables are related to which factors [6].

The factor analysis was done to identify the optimal number of factors, determined by the Kaiser-Guttman criterion (number of eigenvalues > 1) [7], the scree test [8], and parallel analysis [9]. Items with high uniqueness, defined as greater than 0.70, were removed, whereas the remaining items were retained within the factors on which they showed high loadings. According to the 'rule of thumb' in confirmatory factor analysis, loadings should be at least 0.70 because such a loading corresponds to about half of the variance of the variable being explained by the factor of interest. However, some researchers suggest lower cut-points for EFA, such as 0.60 [10] or as low as 0.40 [11]. We also performed an initial reliability test and item-based statistics in conjunction with EFA [5]. Stata/SE Version 10 (Stata Corp.) was used for all statistical calculations.
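To make the factor-retention rules concrete, the sketch below re-expresses the Kaiser-Guttman criterion and Horn's parallel analysis in Python. This is only illustrative; the actual analysis was done in Stata/SE 10, and the NumPy implementation, variable names, and the 1,000-replication default are our own assumptions.

```python
# Illustrative sketch, assuming the 63 Likert items are the columns of a
# NumPy array X (one row per responding hospital). Not the authors' Stata code.
import numpy as np

def kaiser_guttman(X):
    """Kaiser-Guttman criterion: count eigenvalues of the item
    correlation matrix that exceed 1."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return int(np.sum(eigvals > 1.0))

def parallel_analysis(X, n_sims=1000, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the mean eigenvalues of uncorrelated random data of the same shape."""
    rng = np.random.default_rng(seed)
    n_obs, n_items = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, n_items))
    for i in range(n_sims):
        R = rng.standard_normal((n_obs, n_items))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    return int(np.sum(observed > random_eigs.mean(axis=0)))
```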

Results

Response

The overall response rate was 37.56%, with a well-balanced geographical distribution. Of the hospitals that responded, 18 were excluded because they were no longer in operation. The sample therefore consisted of 374 SCAD and 100 non-SCAD hospitals. SCAD hospitals were significantly more likely to respond than non-SCAD hospitals (40.87% vs 29.74%; OR 1.63; 95% CI: 1.26-2.12; p = 0.0001). Public hospitals were significantly more likely to respond to our survey than private hospitals (p < 0.001). The geographical distributions were similar between responders and non-responders (p = 0.663). Larger hospitals were significantly more likely to respond to the survey than smaller ones (p < 0.001).
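For readers who wish to verify the odds ratio reported above, the short calculation below reconstructs it from a 2 x 2 table. The responder counts are back-calculated from the reported rates (40.87% of 920 SCAD hospitals is approximately 376; 29.74% of 390 non-SCAD hospitals is approximately 116) rather than taken directly from a table, so the confidence limits are approximate.

```python
# Approximate reconstruction of the reported odds ratio; counts are inferred
# from the published response rates, not copied from the paper's tables.
import math

a, b = 376, 920 - 376   # SCAD: responders, non-responders (approx. 40.87% of 920)
c, d = 116, 390 - 116   # non-SCAD: responders, non-responders (approx. 29.74% of 390)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Wald standard error of ln(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
# -> OR = 1.63, 95% CI 1.27-2.10 (close to the reported 1.63, 1.26-2.12)
```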

Characteristics of Hospital Survey Responders

The majority of the contact persons were female (80.08%), with an average age of 37 years, and they had worked in their hospitals for at least 12 years on average (Table 1). Almost half of them (47.97%) were medical statisticians and 31.71% were nurses. Each of the surveyed hospitals was responsible for an average UC population of 60,000. Universal Coverage, moreover, was the major source of hospital revenue for most responders. Under the UC scheme, hospitals have had to set up an adequate number of Primary Care Units (PCUs) to provide care for remote populations; in this survey, the mean number of PCUs was 9.20 per hospital. On average, approximately 75% of the beds were occupied, and the average length of stay (LOS) was 5.95 days. The mean Relative Weight (RW)--a standard value assigned to each DRG to reflect the cost of its care--and Adjusted Relative Weight (Adjusted RW) were 0.91 and 0.82, respectively. On an average day, the surveyed hospitals took care of 75.50 inpatient and 511.23 outpatient cases. One quarter of the hospitals had received full accreditation from the Healthcare Accreditation Institute (similar to the US Joint Commission on Accreditation of Healthcare Organizations). SCAD hospitals were more likely to be public and smaller than non-SCAD hospitals (p < 0.001). The majority of SCAD hospitals were in the Northeast and Central regions, whereas non-SCAD hospitals were mostly from Bangkok and the Central region.

Hospital Coding Structure

The number of computers used specifically for coding purposes in each hospital ranged from 1 to 200 (mean 7; SD 14.56; N = 376), regardless of SCAD status. We found significant variation in the types of software that hospitals used, and these programs offered different capabilities to assist the coding process. Even the most popular software (HOSxP) was used in only 45% of the hospitals sampled. The majority of SCAD hospitals used HOSxP, whereas most non-SCAD hospitals preferred either less common commercial software or proprietary software.

We previously reported that at least eight health care professional disciplines (Medical Statisticians, Nurse, Physician, Public Health Staff/Paramedics, Medical Record Staff, Information Technology Staff, Finance/Accounting Staff, and others) were involved in hospital coding practice [1]. The number of medical statisticians, as well as the experience of medical statisticians and physicians, differed significantly between SCAD and non-SCAD hospitals (p = 0.0256) (Table 2).

Table 2 Human resource for hospital coding practice in SCAD and Non-SCAD hospitals

Only 55 out of 492 hospitals (11.18%) reported that they had at least one formally trained medical statistician (Figure 2). At least 572 medical statisticians had been formally trained through the 2-year certificate program at the Kanchanabhisek Institute of Medical and Public Health Technology (KMPHT). Approximately 30% of them continued their studies to complete the 4-year Bachelor of Science program in Medical Records at the Department of Social Science, Faculty of Social Sciences and Humanities, Mahidol University.

Figure 2 Distribution of medical statisticians with 2- or 4-year degree program.

Regardless of formal training, some experienced hospital staff could take an examination to become certified as intermediate or advanced coders, who could also be invited to work as external auditors upon request. SCAD hospitals tended to have fewer certified coders than non-SCAD hospitals (Table 3).

Table 3 Certified intermediate and advanced coders

Hospital Coding Process

The hospital coding process involved different health care professional disciplines at each step. Figure 3 depicts the proportional representation of the primary and secondary responsible staff for all steps of the hospital coding process. The distribution of primary and secondary responsible staff did not differ between SCAD and non-SCAD hospitals (results not shown).

Figure 3 Primary and secondary responsible staff in each step of the hospital coding process.

Hospital Coding Practice Scale

Based on the Kaiser-Guttman criterion, the scree test, and parallel analysis, our data suggested that hospital coding practice should comprise 10, 2-4, and 15 factors, respectively. Although parallel analysis has been considered the most accurate of the three criteria, it suggested a 15-factor model, which we considered too complex for our exploration of hospital coding practice.

We then explored the Kaiser-Guttman criterion by re-running the factor analysis with 10 factors specified. The factors were rotated to spread variability more evenly among them; rotated solutions fit the data equally well but are easier to interpret. We used a cut-point of 0.70 to drop items with high uniqueness, grouped the retained items into factors, and named the factors based on their member items. We found that the 10-factor model was still not clear, as some items had similar loadings across two or three factors. For example, item A10 (There is a physician responsible for coding practice) had loadings of 0.4275 and 0.4613 on factors 2 and 5, respectively.

We therefore followed the scree test approach and re-ran the analysis with 4 factors. Orthogonal (varimax) and oblique (promax) rotations gave similar results, but we decided to proceed with the latter because the factors were potentially not independent. After rotation and deletion of items with high uniqueness, we found that Factor 4 did not contain any items. Hence, the findings suggested that 3 factors might be the best solution for our purpose (Table 4).
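As a rough illustration of the item-retention step just described (drop items whose uniqueness exceeds 0.70, then group the survivors by their highest promax-rotated loading), one could proceed as in the sketch below. It assumes the third-party Python factor_analyzer package and hypothetical variable names; the published results were produced with Stata.

```python
# Minimal sketch of the retention-and-grouping step, assuming X holds the 63
# Likert items (hospitals x items) and item_names their labels ("A1" ... "K3").
# Uses the third-party factor_analyzer package, not the authors' Stata code.
import numpy as np
from factor_analyzer import FactorAnalyzer

def retained_item_groups(X, item_names, n_factors=4, uniqueness_cutoff=0.70):
    """Fit an EFA with promax rotation, drop items whose uniqueness exceeds
    the cutoff, and assign each retained item to its highest-loading factor."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(X)
    keep = fa.get_uniquenesses() <= uniqueness_cutoff
    groups = {f: [] for f in range(n_factors)}
    for i, name in enumerate(item_names):
        if keep[i]:
            groups[int(np.argmax(np.abs(fa.loadings_[i])))].append(name)
    return groups   # an empty group (as with Factor 4 here) hints at fewer factors
```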

Table 4 Grouping and naming 3 factors

Factor 1 was named "Data Quality", as hospitals that scored high on this factor seemed to pay more attention to the quality of the medical record, the discharge summary, and the diagnosis and procedure codes. Hospitals that scored high on the second factor paid attention to various aspects of coding practice; Factor 2 was therefore called "Coding Practice". Factor 3 contained three items, all of which suggested relevance to a hospital's interest in reimbursement rather than data quality or coding practice; hence, we named it "Reimbursement". We also revisited the items that had been dropped because of high uniqueness but had considerable loadings on Factor 3. Interestingly, they were also suggestive of a hospital's reimbursement interest, but at other steps of the hospital coding process. Table 5 presents hospital intention profiles based on the three factors.

Table 5 Hospital intention profiles based on the 3-factor model

Discussion

As a preferred method of provider payment in both developed and developing countries [12], DRG assumes that hospitals are well equipped with physicians and certified coders and are therefore able to submit high-quality diagnosis and procedure codes. The literature on DRG implementation has come mostly from countries with abundant resources or has focused mainly on its macro-level effects, whereas studies of how DRG codes are actually produced by hospitals are lacking.

DRG creep has been a major concern in resource-rich settings, in which hospitals are suspected of being profit maximizers. As this kind of organizational intention is difficult to assess directly, it is not surprising to see mixed findings on the extent of DRG creep based on surrogate outcome measures [13–18]. While some studies investigated organizational behavior and reported the potential influence of hospital management and payers on the coding process [19, 20], other studies tried to demonstrate potential upcoding of specific diagnoses such as pneumonia and heart failure [21, 22].

Poor coding quality does not result from DRG creep alone; sicker patients, improvements in coding, and changes instituted by the payer also contribute [13]. We add that variation in hospital coding practices in an under-resourced health system is another major determinant of DRG coding quality. It is not fair to assume that a hospital is 'capable' of producing good codes when it lacks qualified physicians and/or coders.

To our knowledge, this study is the first national survey to explore the structural and process components of coding practice that might affect DRG coding quality. In terms of structure, we found that the software used, the number of medical statisticians, and the experience of physicians seemed to be the most important factors. SCAD hospitals were more likely to have fewer medical statisticians, fewer certified coders, and less experienced physicians. Our previous case study revealed that, with inadequate numbers and an inequitable distribution of certified coders, hospitals have tried to survive by using part-time coders from other disciplines, especially nurses [1]. This survey extends that point by suggesting that SCAD hospitals are more likely to face such problems than non-SCAD hospitals.

The current production of medical statisticians has been limited, and their actual work is not necessarily about coding. In Thailand, medical statistician is a job position that requires undergraduate-level training and is usually responsible for analyzing patient information [1]. Although the Thai DRG system anticipated that medical statisticians would be trained and certified to work as coders, a survey of 322 hospitals in 2001 revealed that only 59.87% of the hospitals had medical statisticians working as coders, and as many as 46% of those were considered 'part-time coders' because they were responsible for other jobs as well [23].

Based on the seven steps of the hospital coding process we reported earlier [1], this cross-function phenomenon also occurs with other health care professional disciplines. This study adds to the case study findings by quantifying the number of hospitals in which such a phenomenon occurs. For example, the discharge summary has been assumed to be prepared by physicians and has therefore been used as a gold standard to check whether the codes assigned by hospital coders are correct. In some hospitals, however, nurses or medical statisticians are in fact the staff primarily responsible for the discharge summary. Nurses have played important roles in almost all steps of the hospital coding process, but they have not been formally recognized. While experienced nurses can become certified coders, their contribution might not count toward job promotion in public hospitals.

We also propose the development of a new tool, the Hospital Coding Practice Scale, which can indirectly explore the DRG creep phenomenon. Audit by external peers has been the main mechanism for assessing the quality of the discharge summary and of the diagnosis and procedure codes, but its results cannot be used to judge a hospital's intention to game the system. Using this measurement model, one can classify hospitals based on the domains they focus on and the profiles they fall into. We hypothesize that DRG creep is more likely among hospitals that focus mainly on reimbursement (profiles 3, 5, and 6; Table 5). Further studies are required to ensure the validity, reliability, and feasibility of this tool.

The generalizability of the findings from this study is limited by the low response rate. This was anticipated, because some hospitals might be cautious about providing such detailed and confidential information about their coding practices. The difference in both response rates and characteristics between SCAD and non-SCAD hospitals is another limitation, as it does not allow a direct comparison of various aspects between the two groups. We were also unable to explore some other factors that might affect hospital coding practice; for example, the varying functionality of the software used could affect coding practice.

Conclusion

SCAD and non-SCAD hospitals differed in many respects, especially in the number of medical statisticians, the experience of medical statisticians and physicians, and the number of certified coders. The findings suggest that hospital providers should not be assumed capable of producing high-quality DRG codes, especially in resource-limited settings.

References

  1. Pongpirul K, Walker DG, Winch PJ, Robinson C: A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme. BMC Health Serv Res. 2011, 11 (7).

  2. Simborg DW: DRG creep: a new hospital-acquired disease. N Engl J Med. 1981, 304 (26): 1602-1604. 10.1056/NEJM198106253042611.

  3. NHSO: Thai DRGs Version 4.0 Volume 1. 2007, Nonthaburi: National Health Security Office

  4. Pongpirul K, Wongkanaratanakul P: Coding Audit for In-patient Reimbursement of the National Health Security Office. Journal of Health Systems Research. 2008, 2 (4): 535-545.

  5. DeVellis RF: Scale development: theory and applications. 2003, Thousand Oaks, CA: Sage Publications, Inc, 2

  6. Everitt B, Dunn G: Applied multivariate data analysis. 2001, London: Oxford University Press, 2

  7. Kaiser HF: The application of electronic computers to factor analysis. Educational and Psychological Measurement. 1960, 20: 141-151. 10.1177/001316446002000116.

  8. Cattell RB: The scree test for the number of factors. Multivariate Behavioral Research. 1966, 1: 245-276. 10.1207/s15327906mbr0102_10.

  9. Horn JL: A rationale and test for the number of factors in factor analysis. Psychometrika. 1965, 30: 179-186. 10.1007/BF02289447.

  10. Hair JF, Anderson R, Tatham R, Black W: Multivariate data analysis with readings. 1998, Englewood Cliffs, NJ: Prentice Hall, 5

  11. Raubenheimer J: An item selection procedure to maximize scale reliability and validity. South African Journal of Industrial Psychology. 2004, 30 (4): 59-64.

  12. Roger France FH: Case mix use in 25 countries: a migration success but international comparisons failure. Int J Med Inform. 2003, 70 (2-3): 215-219. 10.1016/S1386-5056(03)00044-3.

  13. Carter GM, Newhouse JP, Relles DA: How much change in the case mix index is DRG creep?. J Health Econ. 1990, 9 (4): 411-428. 10.1016/0167-6296(90)90003-L.

  14. Chulis G: Assessing Medicare's prospective payment system for hospitals. Medical Care Research and Review. 1991, 48: 167-206. 10.1177/002570879104800203.

  15. Hsia DC, Ahern CA, Ritchie BP, Moscoe LM, Krushat WM: Medicare reimbursement accuracy under the prospective payment system, 1985 to 1988. JAMA: The Journal of the American Medical Association. 1992, 268 (7): 896-899. 10.1001/jama.268.7.896.

  16. Hsia DC, Krushat WM, Fagan AB, Tebbutt JA, Kusserow RP: Accuracy of diagnostic coding for Medicare patients under the prospective-payment system. The New England journal of medicine. 1988, 318 (6): 352-355. 10.1056/NEJM198802113180604.

  17. Lüngen M, Lauterbach KW: [Upcoding--a risk for the use of diagnosis-related groups]. Deutsche medizinische Wochenschrift (1946). 2000, 125 (28-29): 852-856.

  18. Steinwald B, Dummit LA: Hospital case-mix change: sicker patients or DRG creep?. Health affairs (Project Hope). 1989, 8 (2): 35-47. 10.1377/hlthaff.8.2.35.

  19. Lorence DP, Richards M: Variation in coding influence across the USA. Risk and reward in reimbursement optimization. J Manag Med. 2002, 16 (6): 422-435. 10.1108/02689230210450981.

  20. Lorence DP, Spink A: Regional variation in medical systems data: influences on upcoding. Journal of medical systems. 2002, 26 (5): 369-381. 10.1023/A:1016405214914.

  21. Psaty BM, Boineau R, Kuller LH, Luepker RV: The potential costs of upcoding for heart failure in the United States. The American journal of cardiology. 1999, 84 (1): 108-109. 10.1016/S0002-9149(99)00205-2. A109

  22. Silverman E, Skinner J: Medicare upcoding and hospital ownership. Journal of health economics. 2004, 23 (2): 369-389. 10.1016/j.jhealeco.2003.09.007.

  23. Prasanwong C, Reungdech S, Lokchareonlap S, Tatsalongkarnsakoon W, Pantarassamee C, Yeunyongsuwan M: Medical coding practices in Thailand. 2001, Nonthaburi: Health Systems Research Institute

Acknowledgements

This study would not have been possible without the cooperation of the hospital respondents. We also would like to thank Professor Richard H. Morrow for his comments during the preparation of this manuscript and Miss Sudarat Chadsuthi for her help with the production of some figures. This study is part of the first author's dissertation project "Hospital Coding Practice, Data Quality, and DRG-based Reimbursement under the Thai Universal Coverage Scheme", which received partial financial support from the Health Insurance System Research Office (HISRO), Thailand. He also received a Higher Educational Strategic Scholarship for Frontier Research Network from the Commission on Higher Education, Thailand.

Author information

Corresponding author

Correspondence to Krit Pongpirul.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

KP conceived of and designed the study, carried out the survey, analyzed the data, and drafted the manuscript. DGW participated in its design and helped to draft the manuscript. HR helped to revise the manuscript. CR helped to draft and revise the manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Pongpirul, K., Walker, D.G., Rahman, H. et al. DRG coding practice: a nationwide hospital survey in Thailand. BMC Health Serv Res 11, 290 (2011). https://doi.org/10.1186/1472-6963-11-290
