Findings of the national PbR data Assurance Framework: improving the quality of data underpinning payment by results using benchmarking to target clinical coding audits

Since 2003/04 the Department of Health (DH) has been progressively implementing a prospective payment casemix funding system, known as Payment by Results (PbR), so that in 2008/09 over 90 percent of acute inpatient activity is reimbursed through PbR. The accuracy of the data recorded for each episode of care directly influences the accuracy of reimbursement between commissioners and providers under PbR. In 2006, the Audit Commission was approached to develop a national data assurance framework for PbR. A pilot study found an average Healthcare Resource Group (HRG) error rate of 11.9 percent, with considerable variation between trusts. The financial impact of these errors on payments represented between 5 percent and 14 percent of the total sample value. Following the pilot, in 2007/08 the Commission began a national programme of PbR data assurance at all NHS acute trusts. This paper explains the components that make up the PbR Assurance Framework today, the particular contribution that benchmarking makes to the process, and the results from the first full year of audits.



Methods
The PbR Assurance Framework consists of:
• an annual, independent, external clinical coding audit programme at every NHS acute trust;
• the development of benchmarking indicators and tools to target the audits and for wider use by the NHS;
• regular national analyses and briefings on issues arising from the audits and benchmarking.
The Assurance Framework process comprises three phases: a preparation phase, an audit phase and a reporting phase. Auditors re-abstract the diagnosis and procedure coding data from clinical records relating to 300 separate episodes of care, split across four areas. The impact of any coding errors is reported at three levels: diagnosis and procedure codes, HRGs, and financial value. The results of the audits are shared with the local NHS, published nationally and incorporated into the work of healthcare regulators.
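The audit arithmetic above can be sketched as follows. This is a minimal illustration, not the Commission's actual methodology: the record layout, field names and example codes are all assumptions made for the sketch.

```python
# Hedged sketch: each audited episode pairs the trust's original coding with
# the auditor's re-abstracted coding, and disagreement rates are reported at
# the primary-diagnosis, primary-procedure and HRG levels. The schema and the
# example ICD/OPCS/HRG codes below are invented for illustration.

def error_rates(episodes):
    """Percentage of episodes whose original field disagrees with the audit."""
    n = len(episodes)
    rates = {}
    for field in ("primary_diagnosis", "primary_procedure", "hrg"):
        wrong = sum(1 for e in episodes
                    if e["original"][field] != e["audited"][field])
        rates[field] = 100.0 * wrong / n
    return rates

episodes = [
    {"original": {"primary_diagnosis": "I21.9", "primary_procedure": "K49.1", "hrg": "E12"},
     "audited":  {"primary_diagnosis": "I21.9", "primary_procedure": "K49.1", "hrg": "E12"}},
    {"original": {"primary_diagnosis": "J18.1", "primary_procedure": "X29.9", "hrg": "D21"},
     "audited":  {"primary_diagnosis": "J18.0", "primary_procedure": "X29.9", "hrg": "D22"}},
]
print(error_rates(episodes))
# prints {'primary_diagnosis': 50.0, 'primary_procedure': 0.0, 'hrg': 50.0}
```

In the real programme the financial impact is then derived from the episodes whose HRG changed, since the HRG determines the tariff paid.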
The pilot demonstrated that analysis using predefined indicators does highlight significant anomalies between trusts, and so the Commission decided to develop a benchmarking process to support the Assurance Framework. The primary aim of the benchmarking is to recommend areas for audit. This approach now distinguishes the Assurance Framework from other clinical coding audit programmes, both in the UK and abroad.
The Commission developed 23 separate data quality indicators to analyse Secondary Uses Service (SUS) data. The fundamental principles of the benchmarking methodology are that it is transparent, deterministic and repeatable. It is neither judgmental nor does it attempt to comment on behaviours.
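A transparent, deterministic and repeatable indicator of the kind described might work along the following lines. This is an assumed example only: the indicator shown (mean coded diagnoses per episode) and the two-standard-deviation threshold are illustrative choices, not one of the Commission's 23 published indicators.

```python
import statistics

# Hedged sketch of a deterministic, repeatable benchmarking indicator:
# flag trusts whose indicator value sits more than two population standard
# deviations from the national mean. Given the same SUS-derived inputs,
# the same trusts are flagged every time; no judgment is involved.

def flag_outliers(indicator_by_trust, threshold=2.0):
    values = list(indicator_by_trust.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return sorted(trust for trust, v in indicator_by_trust.items()
                  if sd and abs(v - mean) / sd > threshold)

# Invented values: mean coded diagnoses per episode, by trust.
indicator = {"Trust A": 2.8, "Trust B": 2.9, "Trust C": 3.0,
             "Trust D": 3.0, "Trust E": 3.1, "Trust F": 3.1,
             "Trust G": 3.2, "Trust H": 3.2, "Trust I": 6.5}
print(flag_outliers(indicator))  # prints ['Trust I']
```

A flagged value is only a recommendation for audit; whether the anomaly reflects a coding problem is established by the re-abstraction audit itself.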

Results
Nationally, the average percentage of incorrect primary codes for procedures was 13.4 percent and for diagnoses was 15.1 percent. The average percentage of HRGs derived incorrectly was 9.4 percent. The errors identified from all the audits had a gross monetary value of 4.8 percent of the total value of the 50,000 episodes audited. However, for the majority of trusts, the net value was found to be close to zero, suggesting no consistent trend in either over- or under-coding.
After analysing all of the error classifications from the coding audits, we found that the errors were substantially caused by coder (83 percent) rather than non-coder (17 percent) error. The most common factors associated with coding errors were: the quality of source documentation; issues relating to trust coding arrangements; clinician engagement and involvement; and identification and coding of co-morbidities. The number of coders was found to be less significant than the training they received.

Conclusion
The results of the first year of the PbR Data Assurance Framework show that, nationally, the standard of clinical coding in English acute trusts is comparable with their international peers. However, data quality remains an issue, and one which is having a financial impact on some organisations working within the PbR system. Nevertheless, whilst the financial balance of these errors remains in equilibrium, it seems that PbR can operate successfully with less-than-perfect data quality.
The use of benchmarking to target clinical coding audits has become the real value-adding factor in the Commission's approach to PbR assurance. The National Benchmarker has correctly identified areas most worth further inspection; it has been able to track improvements in clinical coding processes; and it is beginning to build a community of enthusiasts within the NHS. In time, the Commission will make the benchmarking function the foundation of a lighter-touch, more risk-based approach to assuring the PbR system.