Differences in the quality of primary medical care for CVD and diabetes across the NHS: evidence from the quality and outcomes framework
BMC Health Services Research volume 7, Article number: 74 (2007)
Health policy in the UK has rapidly diverged since devolution in 1999. However, there is relatively little comparative data available to examine the impact of this natural experiment in the four UK countries. The Quality and Outcomes Framework of the 2004 General Medical Services Contract provides a new and potentially rich source of comparable clinical quality data through which we compare quality of primary medical care for coronary heart disease (CHD), stroke, hypertension and diabetes across the four UK countries.
A cross-sectional analysis was undertaken involving 10,064 general practices in England, Scotland, Wales and Northern Ireland. The main outcome measures were prevalence rates for CHD, stroke, hypertension and diabetes, and achievement on 14 simple process, 3 complex process, 9 intermediate outcome and 5 treatment indicators for the four clinical areas.
Prevalence varies by up to 28% between the four UK countries, which is not reflected in resource distribution between countries, and penalises practices in the high prevalence countries (Wales and Scotland). Differences in simple process measures across countries are small. Larger differences are found for complex process, intermediate outcome and treatment measures, most notably for Wales, which has consistently lower quality of care. Scotland has generally higher quality than England and Northern Ireland is most consistently the highest quality.
Previously identified weaknesses in Wales related to waiting times appear to reflect a more general quality problem within NHS Wales. Identifying explanations for the observed differences is limited by the lack of comparable data on practice resources and organisation. Maximising the value of cross-jurisdictional comparisons of the ongoing natural experiment of health policy divergence within the UK requires more detailed examination of resource and organisational differences.
Since devolution in 1999, health policy has increasingly diverged in the four UK countries (England, Scotland, Wales and Northern Ireland). [1, 2] Health policy across the UK continues to share a number of similar objectives including reducing waiting times, shifting care from hospital to community, reducing health inequalities, and improving clinical quality for people with long term conditions. [3, 4] However, the preferred mechanisms for delivering these objectives increasingly vary. [5–8] Post-devolution, England initially focused on centralised performance management, with rewards and sanctions for trusts that succeeded or failed to achieve centrally set targets. Latterly, the emphasis has shifted to implementing a regulated healthcare market built around patient choice and increased plurality of providers, allied to an increasing role for the private sector within the primary care setting. [5–7] Scotland has abolished the purchaser-provider split and (re)created vertically integrated Health Boards, with an emphasis on professionalism and clinical leadership, particularly in managed clinical networks to co-ordinate care across organisational and professional boundaries.  The distinctive feature of Welsh policy is a strong emphasis on public health, with local health boards having a duty of partnership with local authorities in commissioning.  In Northern Ireland, political uncertainty has meant that major organisational reform has not been implemented. 
Therefore a natural experiment is in progress in UK healthcare, and cross-jurisdictional comparison is of growing interest. Examining this natural experiment is difficult because data are often not comparable, partly because data definitions and collection are not consistent, even for healthcare expenditure and the National Health Service (NHS) workforce. Two recent comparisons indicate that England has been more successful than Wales and Northern Ireland in rapidly reducing waiting times for first outpatient appointment and elective surgery, although these are now also falling in Wales and Northern Ireland. Though not fully comparable, Scottish waiting times were consistently lower over the whole period, despite waiting times in England falling faster than in Scotland between 2002 and 2005. [3, 4, 10] Other research has shown that Scotland has higher rates of breast cancer screening and some immunisations, whereas England has lower mortality from colorectal cancer and ischaemic heart disease, and Wales has the highest rate of statin prescribing. There are substantial variations in per-capita healthcare spending, with England spending the least and Scotland the most, although at country level these differences are partly explained by differences in need, and have reduced post-devolution due to faster growth in healthcare spending in England. Nonetheless, mean list size per whole-time-equivalent GP remains lower in Scotland than in the other UK countries (partly due to greater rurality). There are no recent comparable data on practice nurses and other practice-employed staff available at national level (Table 1).
The Quality and Outcomes Framework (QOF) of the 2004 General Medical Services contract provides a new opportunity to compare clinical quality using measures with consistent definition and data collection in all four countries. [3, 4, 14] However, although QOF is designed to incentivise high quality care and is unique in scale and scope for a national pay-for-performance program, it is not comprehensive in its coverage: a wide variety of measures that can have a significant impact on the quality of care are not included. Nevertheless, QOF is consistent across all four UK countries and offers the best current opportunity to compare differences in clinical quality. QOF data is reported at practice level as the proportion of eligible patients achieving each indicator, and case-mix adjustment using patient level data is therefore not possible. However, data reported for payment (which we call 'payment' quality and others have called 'reported' quality) allows practices to 'exception report' individual patients for a range of reasons, including patient not attending for care, patient dissent, and patient on maximum treatment. An exception reported patient is removed from the denominator for each indicator. In principle, exception reporting should therefore adjust for case-mix variation. For example, in Scotland in 2004/5, there is no consistent socio-economic gradient in quality using 'payment' quality (which allows exceptions), whereas practices serving more deprived populations have lower 'population' achievement for many measures (calculation of which does not allow exceptions). This paper uses QOF data to compare the quality of care for diabetes and cardio-vascular disease (coronary heart disease [CHD], stroke and transient ischaemic attack, and hypertension) in the four UK countries.
All of these conditions cause considerable morbidity and mortality, and the relevant QOF measures are underpinned by a strong evidence base of guidelines and National Service Frameworks. Differences in the average QOF financial incentive in each country are examined by comparing QOF-reported prevalence of incentivised disease. Within each country, QOF payment varies with each practice's disease prevalence relative to the national mean prevalence. However, because a practice with average prevalence achieving the same points total receives the same payment in each country, the average per-patient reward for quality is lower in higher-prevalence countries.
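The dilution of the per-patient incentive in higher-prevalence countries can be illustrated with a small worked calculation (the payment, list size and prevalence figures below are hypothetical, chosen only to show the arithmetic):

```python
# Illustrative sketch with hypothetical figures: if average practices in two
# countries earn the same QOF payment for the same points total, the reward
# per disease-register patient falls as national prevalence rises.

def reward_per_patient(total_payment, list_size, prevalence):
    """Payment divided across the practice's disease register."""
    register_size = list_size * prevalence
    return total_payment / register_size

# Hypothetical: identical payment and list size, CHD prevalence 3.0% vs 4.0%
low_prev = reward_per_patient(10_000, 6_000, 0.030)   # lower-prevalence country
high_prev = reward_per_patient(10_000, 6_000, 0.040)  # higher-prevalence country
print(round(low_prev, 2), round(high_prev, 2))  # higher prevalence -> lower reward
```

On these assumed figures the same total payment yields roughly a quarter less per patient in the higher-prevalence country, which is the mechanism the paper describes for Wales and Scotland.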
The analysis used publicly available data on QOF achievement for each practice. [16–19] Practices with missing data or with no patients in their denominators were excluded, leaving 8,214 (96%) practices in England, 1,023 (98%) in Scotland, 362 (99%) in Northern Ireland and 459 (92%) in Wales.
The Quality Management and Analysis System (QMAS) reports quality measures in the form of 'percentage of eligible patients achieving measure', from which practices are allowed to exception report patients for a variety of reasons ('payment' quality). Although 'payment quality' is therefore effectively a crude form of case-mix adjustment, any differences between countries might reflect differences in exception reporting as well as differences in actual quality of care.
Exception reporting rates are not directly available in QMAS for 2004/5 data, but 'population achievement' (a measure which does not allow exceptions) can be estimated for measures where the non-excepted denominator is the number of patients on the disease register. This method has been outlined in detail elsewhere.  In short, we estimate the register size at 31st March using the maximum value of any denominator in the relevant clinical domain to calculate exception rates and population achievement. For the intermediate outcomes measures, this method assumes that unmeasured patients are uncontrolled. In the analysis we report 'payment quality' as the primary outcome, assuming that there is some adjustment for case-mix, but additionally examine differences in 'population' achievement as a form of sensitivity analysis.
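The estimation described above can be sketched as follows (a minimal illustration; the function and variable names are ours, not QMAS's, and the practice figures in the example are hypothetical):

```python
# Sketch of the estimation method: register size is proxied by the largest
# denominator in the clinical domain; 'population' achievement then uses that
# register as the denominator, so exception-reported patients are not removed.

def population_achievement(numerator, domain_denominators):
    """Estimated achievement (%) with no allowance for exception reporting."""
    register = max(domain_denominators)  # proxy for register size at 31 March
    return 100 * numerator / register

def exception_rate(indicator_denominator, domain_denominators):
    """Estimated exception rate (%) for one indicator."""
    register = max(domain_denominators)
    return 100 * (register - indicator_denominator) / register

# Hypothetical practice: largest domain denominator 200 (proxy register),
# indicator denominator 180 after exceptions, 162 patients achieving.
# Payment achievement would be 162/180 = 90%; population achievement 162/200 = 81%.
print(population_achievement(162, [200, 180, 150]))
print(exception_rate(180, [200, 180, 150]))
```

For intermediate outcome measures this mirrors the assumption stated above: any patient not in the indicator denominator is treated as uncontrolled.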
Indicators were classified into four groups: simple process; complex process; intermediate outcome (e.g. level of blood pressure achieved); and treatment (e.g. influenza immunisation). We distinguished simple and complex processes in terms of whether they could easily be delivered opportunistically irrespective of whom the patient was seeing. A simple process is therefore one that can be delivered during routine care by any doctor or nurse, such as blood pressure measurement or blood taking. A complex process either requires referral to a specialist such as diabetic retinal screening, or is likely to be done only by particular primary care clinicians such as comprehensive diabetic foot examination. Table 2 details the indicators used in each category.
A simple composite of the achievement levels by indicator category was calculated for each country, by summing the mean values for each indicator within a particular group and then dividing by the number of indicators in that group (e.g. for complex process indicators, average achievement levels of 70%, 80% and 75% would be summed and divided by three to give a composite achievement level of 75%).
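This composite is just an unweighted mean across indicators, as the worked example in the text shows:

```python
# The simple composite described above: the unweighted mean of the
# per-indicator average achievement levels (in percent) within a category.

def composite(indicator_means):
    """Unweighted mean of per-indicator average achievement."""
    return sum(indicator_means) / len(indicator_means)

# Worked example from the text: complex process indicators at 70%, 80% and 75%
print(composite([70, 80, 75]))  # -> 75.0
```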
Summary statistics and analyses were based on practice-level data weighted by register size. Since the distributions of many of the quality indicators are non-normal, we provide bootstrapped confidence intervals based on 2,000 replications.  We use 99% confidence levels in recognition of the large number of comparisons.
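The weighting and bootstrap procedure can be sketched as follows (a minimal percentile-bootstrap illustration in Python on synthetic data; the paper's analysis was run in Stata, and the variable names here are ours):

```python
# Minimal sketch: practice-level percentile bootstrap for a register-weighted
# mean quality score. Practices (not patients) are resampled with replacement.
import random

def weighted_mean(values, weights):
    """Mean of practice scores weighted by register size."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def bootstrap_ci(values, weights, reps=2000, alpha=0.01, seed=1):
    """Percentile bootstrap CI (99% by default, matching the paper)."""
    random.seed(seed)
    n = len(values)
    stats = []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]  # resample practices
        stats.append(weighted_mean([values[i] for i in idx],
                                   [weights[i] for i in idx]))
    stats.sort()
    lower = stats[int(reps * alpha / 2)]
    upper = stats[int(reps * (1 - alpha / 2)) - 1]
    return lower, upper

# Synthetic example: 10 practices' achievement (%) and register sizes
scores = [80, 85, 90, 75, 95, 70, 88, 92, 78, 84]
registers = [100, 150, 120, 90, 200, 80, 110, 160, 95, 130]
print(bootstrap_ci(scores, registers, reps=500))
```

The percentile bootstrap avoids assuming normality of the indicator distributions, which is why it suits the skewed achievement data described above.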
Crude prevalence rates for each disease domain in each country were calculated by dividing the total number of patients on practice disease registers by the total population registered with practices. The analysis was undertaken in Stata v8.2.
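The crude prevalence calculation pools registers and list sizes before dividing, rather than averaging practice-level rates (the two-practice figures below are hypothetical):

```python
# Crude prevalence as described: total disease-register size over total
# registered population, pooled across practices within a country.

def crude_prevalence(register_sizes, list_sizes):
    """Country-level crude prevalence (%) from practice-level counts."""
    return 100 * sum(register_sizes) / sum(list_sizes)

# Hypothetical: two practices with registers of 120 and 250 patients and
# list sizes of 4,000 and 6,000 -> 370 / 10,000 = 3.7%
print(crude_prevalence([120, 250], [4000, 6000]))  # -> 3.7
```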
Table 3 shows average payment and population achievement in each of the four indicator categories by country. Northern Ireland has the highest achievement under both payment and population achievement for simple process, intermediate outcomes and treatment indicators. Scotland has the highest achievement for the complex process indicators. Wales has the lowest achievement for all categories for both payment and population achievement.
Table 4 shows that the higher achievement levels in Northern Ireland compared to England are statistically significant. Scotland has significantly higher achievement than England for all indicator categories under payment quality but not for the treatment measures under population achievement due to higher exclusion rates. Wales has significantly lower achievement than England for complex process and treatment indicators for both payment and population achievement and for outcomes based on population achievement only.
Achievement across the process, intermediate outcome and treatment indicators is generally highest in Northern Ireland and lowest in Wales (Tables 5 and 6). Northern Ireland has significantly higher quality than England for 23 of the 31 indicators and lower quality only for the percentage of diabetics with HbA1c < 7.4. Wales has significantly lower quality than England for 17 of the 31 indicators. Scotland has significantly higher quality than England on 19 indicators and significantly lower quality on 5 indicators.
For simple process measures, like the recording of blood pressure or smoking status, absolute differences between the highest and lowest performing countries are generally small and probably of little clinical significance. Most differences are less than 1% and only cholesterol recording in people with stroke shows a difference greater than 5% between Northern Ireland and Wales. For the more complex diabetes process measures, such as retinopathy screening, absolute differences between highest and lowest performing countries are larger (5–10%) and Scotland has the highest quality. For intermediate outcome and treatment indicators, absolute differences are also greater, and clinically significant. Again, Northern Ireland generally has the highest quality and Wales the lowest.
Table 7 shows that average prevalence rates vary by up to 28% (the difference in CHD prevalence between England and Scotland). England has the lowest reported prevalence for CHD and stroke, and Northern Ireland for diabetes and hypertension. Prevalence rates are highest for CHD and stroke in Scotland and for diabetes and hypertension in Wales.
This is the first study to undertake a comparison of quality of care between the four countries of the UK using QOF data. The key findings are that the quality of care measured by the QOF is generally highest in Northern Ireland and lowest in Wales, and that the financial incentive for quality is proportionately lower in Wales and Scotland. The strength of the study is that it uses clinical quality data for important diseases collected to a common definition for 96% of UK general practices. Although QOF is not a fully comprehensive quality dataset, it exceeds all previous cross-jurisdictional comparisons in terms of the scope and coverage of primary care clinical activity. However, the analysis is limited by the nature of the data, which reflects its origin in a payment, rather than a quality monitoring, system. In particular, the data are at practice level, which makes patient-level case-mix adjustment impossible. Although exception reporting of patients who are unsuitable for or who decline care should function as a crude form of patient level case-mix adjustment, exception reporting is under practice control, potentially serves practices' financial self-interests, and could systematically vary by country. However, the overall conclusions are unchanged whether examining quality measures that allow exception reporting or not. Finally, although QOF data are consistent, other practice level variables are not (for example, data on practice employed staff) which limits exploration of potential resource or organisational explanations for the differences that are observed.
Cross-jurisdictional comparisons using routine data are helpful in identifying broad trends and areas of concern. QOF adds to existing comparisons because of its focus on clinical quality, and its near comprehensive coverage. However, the lack of comparable data on practice resources and organisation means that it is difficult to account for the differences seen beyond generating hypotheses for more detailed examination.
The most notable finding is that quality of CHD, stroke and diabetes care in Wales on these measures is significantly lower than elsewhere in the UK for complex care processes, intermediate outcomes, and treatments. Although varying prevalence between countries means that practices in Wales are, on average, less financially incentivised for quality under QOF, this seems unlikely to have directly influenced motivation to deliver high quality care in the first year, because the detailed link between performance and reward in the payment system is opaque. However, in the longer term, lower payment rates per patient for the same level of quality might be expected to affect quality. Notably, the measures for which Wales consistently performs poorly are those for which practices are more likely to require collaboration with larger NHS organisations (organisation of diabetes care, effective management of intermediate outcomes, influenza vaccination), rather than simple processes directly under practice control. The evidence from this study is therefore consistent with previously identified waiting time problems being symptomatic of a wider quality problem within NHS Wales, and requires closer examination.
NHS Scotland has generally higher quality than NHS England, particularly on the complex process measures for diabetes. The largest difference is for diabetic foot examination. In theory, Scottish policy on the co-ordination of care for whole populations by managed clinical networks might be expected to have a beneficial impact on such indicators. This finding is also consistent with other evidence of higher quality in Scotland for services requiring area-based co-ordination, such as immunisation and breast screening. On average, Scotland has substantially more whole-time-equivalent GPs per 1,000 registered patients than England, which might also be expected to lead to higher QOF performance (although the higher prevalence of cardiovascular disease means more work per GP than list size differences alone would suggest).
The generally small differences that we have observed may be reassuring for NHS England, where the focus of targets and market reform on acute services and elective surgery does not appear to have seriously affected quality of chronic disease care, at least for these incentivised measures. However, the true test of the success of a policy of increasing market reform and the setting of targets in primary care will require additional research on how the introduction of the QOF has impacted on those areas not covered by the GMS contract.
Finally, it is striking that clinical quality in Northern Ireland is higher than in the rest of the UK. This may reflect greater population or health service stability, for example because Northern Ireland has experienced less diversion of management and clinical attention due to repeated re-organisation. It may also in part be a consequence of Northern Ireland having a younger population compared to the UK as a whole with evidence from England suggesting that practices with a higher proportion of over 65 year-olds have lower achievement. 
Previously identified weaknesses in Wales related to waiting times appear to reflect a more general quality problem within NHS Wales. The introduction of QOF data and this subsequent analysis adds to the existing relatively sparse literature on cross-jurisdictional differences in quality of care within the UK. However, such routine data sets are predominantly useful for identifying overall patterns and areas for more detailed examination by focused research projects. Although some research to tease out the impact of diverging health policy has recently been funded, [23, 24] maximising the potential for cross-jurisdictional learning within the UK will require more detailed, comparative examination of the organisation and resourcing of primary medical care, and collection of a wider range of clinical process and outcome measures, including care for non-incentivised diseases.
BG had the original idea, and planned the analysis with GM and MS. GM conducted the data analysis and all three authors wrote and revised the manuscript. GM is the guarantor.
Greer SL: Four Way Bet: How devolution has led to four different models for the NHS. 2004, London, University College London Constitution Unit
Adams J, Schmueker K: Devolution in practice 2006. 2006, London: Institute for Public Policy Research
Alvarez-Rosete A, Bevan G, Mays N, Dixon J: Effect of diverging policy across the NHS. BMJ. 2005, 331: 946-50. 10.1136/bmj.331.7522.946.
Bevan G, Hood C: Have targets improved performance in the English NHS?. BMJ. 2006, 332: 419-22. 10.1136/bmj.332.7538.419.
Department of Health: Our health, our care, our say: a new direction for community services. 2006, London, HMSO
Department of Health: Delivering the NHS Plan. 2002, London, HMSO
Department of Health: Supporting people with long-term conditions: an NHS and Social Care Model to support local innovation and integration. 2005, London, Department of Health
Scottish Executive Health Department: Delivering for health. 2005, Edinburgh, Scottish Executive
Health and Social Care Department: Designed for life: Creating world class health and social care for Wales in the 21st Century. 2005, Cardiff, Welsh Assembly Government
Information and Statistics Division NHS Scotland: Discussion paper: Comparing median waiting times in Scotland with those in England. 2006, [http://www.isdscotland.org/isd/files/lv2474.doc]
Leatherman S, Sutherland K: The quest for quality in the NHS. 2005, Oxford, Radcliffe Publishing
RCGP: Profile of UK practices. 2006, [http://www.rcgp.org.uk/pdf/ISS_INFO_02_MAY06.pdf]
Sutton M, McLean G: Determinants of primary medical care quality measured under the new UK contract: cross sectional study. BMJ. 2006, 332: 389-390. 10.1136/bmj.38742.554468.55.
NHS Confederation and British Medical Association: Investing in General Practice: the new GMS contract. 2003, London, British Medical Association
Guthrie B, McLean G, Sutton M: Workload and reward in the quality outcomes framework of the 2004 general practice contract. British Journal of General Practice. 2006, 56 (532): 836-842.
Information and Statistics Division NHS Scotland: Quality and outcomes framework. 2006, [http://www.isdscotland.org/isd/QOF]
NHS Wales: Quality and outcomes framework. 2006, [http://www.wales.nhs.uk/sites3/page.cfm?orgid=480&pid=10486]
Department of Health, Social Services and Public Safety Northern Ireland: Quality and outcomes framework data at practice level. 2006, [http://www.dhsspsni.gov.uk/index/hss/gp_contracts/gp_contract_qof/qof_data/qof_pactice.htm]
NHS England Health and Social Care Information Centre: Quality and outcome framework information. 2006, [http://www.ic.nhs.uk/services/qof]
McLean G, Sutton M, Guthrie B: Deprivation and quality of primary care services: evidence for persistence of the inverse care law from the UK Quality and Outcomes Framework. Journal of Epidemiology and Community Health. 2006, 60 (11): 917-922. 10.1136/jech.2005.044628.
Barber A, Thompson SG: Analysis of cost data in randomised controlled trials: An application of the non-parametric bootstrap. Statistics in Medicine. 2000, 19: 3219-3236. 10.1002/1097-0258(20001215)19:23<3219::AID-SIM623>3.0.CO;2-P.
Doran T, Fullwood C, Gravelle H, Reeves D, Kontopantelis E, Hiroeh U, Roland M: Pay-for-Performance programs in family practices in the United Kingdom. New England Journal of Medicine. 2006, 355: 375-384. 10.1056/NEJMsa055505.
Economics and Social Research Council: Public services programme: Quality, performance and delivery. 2006, [http://www.publicservices.ac.uk/]
NHS R&D Service Delivery and Organisation of Care Programme: Governance, incentives and outcomes programme. 2006, [http://www.sdo.lshtm.ac.uk/studyinghealthcare.htm]
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/7/74/prepub
GM is funded by Glasgow University. BG is funded by the Health Foundation and the Chief Scientist Office of the Scottish Executive Health Department. MS is funded by the Health Economics Research Unit (Aberdeen University), which receives funding from the Chief Scientist Office of the Scottish Executive Health Department. The authors alone are responsible for the views expressed.
The author(s) declare that they have no competing interests.
McLean, G., Guthrie, B. & Sutton, M. Differences in the quality of primary medical care for CVD and diabetes across the NHS: evidence from the quality and outcomes framework. BMC Health Serv Res 7, 74 (2007). https://doi.org/10.1186/1472-6963-7-74