Development and feasibility of a set of quality indicators relative to the timeliness and organisation of care for new breast cancer patients undergoing surgery
© Ferrua et al.; licensee BioMed Central Ltd. 2012
Received: 6 October 2011
Accepted: 21 June 2012
Published: 21 June 2012
Because breast cancer is a major public health issue, it is particularly important to measure the quality of the care provided to patients. Survival rates are affected by the timeliness of care, and waiting times constitute key quality criteria. The aim of this study was to develop and validate a set of quality indicators (QIs) relative to the timeliness and organisation of care in new patients with infiltrating, non-inflammatory and metastasis-free breast cancer undergoing surgery. The ultimate aim was to use these QIs to compare hospitals.
The method of QI construction and testing was developed by COMPAQ-HPST. We first derived a set of 8 QIs from consensus guidelines with the aid of experts and professional associations and then tested their metrological properties in a panel of 60 volunteer hospitals. We assessed feasibility using a grid exploring 5 dimensions, discriminatory power using the Gini coefficient as a measure of dispersion, and inter-observer reliability using the Kappa coefficient.
Overall, 3728 records were included in the analyses. All 8 QIs showed acceptable feasibility (but one QI was subject to misinterpretation), fairly strong agreement between observers (Kappa = 0.66), and wide variations in implementation among hospitals (Gini coefficient < 0.45 except for QI 6 (patient information)). They are thus suitable for use to compare hospitals and measure quality improvement.
Of the 8 QIs, 3 are ready for nationwide implementation (time to surgery, time to postoperative multidisciplinary team meeting (MDTM), conformity of MDTM). Four are suitable for use only in hospitals offering surgery with on-site postoperative treatment (waiting time to first appointment after surgery, patient information, time to first postoperative treatment, and traceability of information relating to prognosis). Currently, in the French healthcare system, a patient receives cancer care from different institutions whose databases cannot as yet be easily merged. Nationwide implementation of QIs covering the entire care pathway will thus be a challenge.
Breast cancer is a major public health issue. It has the highest incidence of all cancers in women (52,000 new cases in 2010) and is the leading cause of death in women aged 35–65 years in France (11,300 deaths in 2008). Measuring the quality of care delivered to breast cancer patients is, however, challenging. In 2004, the US Agency for Healthcare Research and Quality (AHRQ) highlighted the paucity of validated measures for assessing the quality of breast cancer care and the need to develop them. This need for “reliable, validated quality measures […] to afford accountability, improvement, and research” has since been reiterated many times in the USA and in Europe.
Since then, European guidelines for quality assurance, produced under the auspices of the European Commission, have listed 39 performance indicators for screening and diagnosis. A 2010 position paper from the European Society of Breast Cancer Specialists (EUSOMA) proposed 17 main quality indicators (QIs) covering diagnosis, staging, surgery and loco-regional treatment, systemic treatment, counselling, follow-up and rehabilitation. In France, the development of QIs in breast cancer care was flagged as a high priority in 2007. The treatment plan for each cancer patient now has to be discussed in a multidisciplinary team meeting (MDTM) held according to the rules laid down by the French National Cancer Institute (Institut National du Cancer - INCa) and the French National Authority for Health (Haute Autorité de Santé - HAS) [5, 6].
Many of the QIs developed in the wake of the 2004 AHRQ report have been quality-of-life and patient-satisfaction indicators. More recently, however, in view of the importance that patients and many health care organisations attach to waiting times, emphasis has also been placed on QIs measuring the timeliness of care [7–11]. The EUSOMA position paper proposes, among others, the time to obtain mammography results, the time between the mammography results and the first consultation, and the time between core biopsy and surgical excision. The second French national Cancer Plan (2009–2013) has urged that more be learnt about waiting times in order to reduce inequalities in access to care that may arise from undue delay. Deviations from guidelines on timeliness can adversely affect 5-year survival rates [13–15], and patients who receive their test results promptly are less prone to anxiety [16–19].
To respond to this request from the French health authorities, simple, validated QIs are needed that can measure and compare quality of care across hospitals in order to identify room for improvement. Key methodological concerns, on which QI validity depends, are the standardisation of data collection, the reduction of the data-collection workload, and the monitoring of inter-hospital QI variability. Only validated QIs can be implemented nationally or internationally, for instance in quality improvement programmes or pay-for-quality schemes, or used for public reporting.
The objective of this study was to establish the validity, for comparing hospitals, of a simple set of 8 easy-to-use QIs that assess the timeliness of key steps in the care of patients with infiltrating, non-inflammatory and metastasis-free breast cancer undergoing surgery.
The task of developing QIs for breast cancer management was delegated by the French authorities to the research project COMPAQH (COordination for Measuring Performance and Assuring Quality in Hospitals). COMPAQH’s remit is to develop QIs for monitoring quality in French hospitals and to design ranking methods and pay-for-quality programmes. The project is run by INSERM (the French National Institute for Health and Medical Research) and is sponsored by the French Ministry of Health and HAS, which together listed, in 2003, 9 priority areas in need of quality improvement: pain management, practice guidelines, human resources management, iatrogenic events, nutritional disorders, access to care, taking account of patients’ views, coordination of care, and continuity of care. Quality of breast cancer care comes under the topic “practice guidelines” [21, 22].
QI development in breast cancer care began in 2007 as a partnership between COMPAQH, the French National Federation of Cancer Centres (FNLCC), HAS, INCa, the Société Française de sénologie et de pathologie mammaire (SFSPM) and the Collège National des Gynécologues et Obstétriciens français (CNGOF). Attention was focussed on “new patients with infiltrating, non-inflammatory and metastasis-free breast cancer undergoing surgery within the institution” as this was the main concern of experts in the field and of consulting physicians. “New patients” were women with unilateral breast cancer who had never undergone treatment for breast disease, and who had not yet been seen in consultation or been admitted to the hospital for breast disease. We chose “infiltrating, non-inflammatory breast cancer” as this is the most common type of breast cancer (about 75 % of all cases) and constitutes a homogeneous population. Surgery is the primary treatment for most patients with metastasis-free T0 to T2 tumours.
QI development and testing
Two tests were performed, first a preliminary test of QI feasibility in a small number of hospitals, then a larger scale test to measure QI performance.
For the preliminary test, performed in 2008, we asked 23 hospitals performing breast cancer surgery (including 20 comprehensive cancer centres) whether they would be willing to test the 8 QIs using their data for 2006. They assessed QI feasibility using a validated grid of 12 items exploring 5 dimensions: acceptability of the QI by staff, their understanding of the QI, their availability to respond within the allotted time, the ability of the hospital and its IT system to collect and handle the necessary data, and the workload. At the end of the test, we also assessed QI relevance, as given by inter-hospital variability and deviation from expected performance.
For the second test (July–October 2009), we approached, via French hospital federations, all 633 hospitals performing breast cancer surgery in France (351 public, 231 private not-for-profit, and 51 private profit-making organisations). We assessed inter-hospital variability, internal validity (i.e. whether the QI really measured what it was intended to measure both from qualitative and quantitative points of view), and inter-observer reproducibility. Assessment was double-blind on 20 medical records from each of 14 hospitals. None of the QIs required adjustment.
For each test in each hospital, 80 patient records were analysed manually. This number was a compromise between acceptable workload and statistical validity [27, 28]. The records were selected randomly from the PMSI database (Programme de médicalisation des systèmes d’information), which reports diagnosis-related group (DRG) statistics in a public hospital setting. The selected DRG code was breast cancer surgery. In addition to the 80 records for analysis, 20 records were selected to compensate for any exclusions. Each hospital received an explanatory guide on the 8 QIs and a grid with instructions for its completion (available in French at http://www.compaqhpst.fr/data/indicateurs/12_GYC_V2_Grille_de_recueil_images.pdf).
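The record-selection step described above can be sketched as follows. This is a minimal illustration with hypothetical record identifiers, not the actual PMSI extraction tooling:

```python
import random

# Hypothetical identifiers standing in for PMSI records carrying the
# breast cancer surgery DRG code (the real extraction queries the
# hospital's PMSI database).
drg_records = [f"record_{i:04d}" for i in range(1, 501)]

rng = random.Random(42)  # fixed seed so the audit sample is reproducible
sample = rng.sample(drg_records, 100)          # draw without replacement
to_audit, reserves = sample[:80], sample[80:]  # 80 for analysis + 20 reserves
```

Drawing the 20 reserve records in the same pass keeps the combined sample random while letting auditors substitute a reserve record whenever one of the 80 fails the inclusion criteria.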
For any given QI, at least 30 completed grids were required per hospital to support the assumption of a Gaussian distribution and to compute confidence intervals; only hospitals with at least 30 medical records for a given QI were therefore entered into the inter-hospital comparison for that QI. Inter-hospital variability was given by the QI score variance and by the Gini coefficient, which measures score dispersion. Variability (i.e. discriminatory power) is high if the Gini coefficient is under 0.2 and low if it is above 0.5. Internal validity was given by the overall concordance rate, and inter-observer reproducibility by the Kappa coefficient [30, 31]. We used SAS version 8.1 software (SAS Institute Inc, Cary, NC).
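As a rough illustration of the two statistics used here, the following Python sketch (the study itself used SAS) computes a Gini coefficient over hypothetical hospital QI scores and a Cohen's Kappa over two observers' binary conformity judgements:

```python
import numpy as np

def gini(scores):
    """Gini coefficient of a set of QI scores (0 = all hospitals identical)."""
    x = np.sort(np.asarray(scores, dtype=float))
    n = x.size
    # Standard formula for sorted x with ranks i = 1..n:
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def cohen_kappa(obs1, obs2):
    """Cohen's kappa for two observers' binary (0/1) judgements."""
    obs1, obs2 = np.asarray(obs1), np.asarray(obs2)
    p_obs = np.mean(obs1 == obs2)              # observed agreement
    p1, p2 = obs1.mean(), obs2.mean()
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)      # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical conformity scores (%) for one QI across five hospitals
dispersion = gini([55, 60, 62, 70, 90])
```

The numbers fed in are invented; in the study, each score would be one hospital's conformity rate for a given QI, and the kappa would be computed over the double-blind re-abstraction of 20 records per hospital.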
Choice of QIs and feasibility test results
QI 1 - Waiting time to first appointment with surgeon: median time to obtain a first appointment with the surgeon
QI 2 - Time to surgery: proportion of patients undergoing surgery within 21 days of the first appointment with the surgeon
QI 3 - Time to postoperative MDTM: proportion of patients whose records were discussed in a MDTM held within 14 days of surgery
QI 4 - Waiting time to appointment after surgery: proportion of patients given an appointment relative to the MDTM proposals within 14 days of the MDTM
QI 5 - Time to first postoperative treatment: proportion of patients whose first postoperative treatment was initiated within 30 days of surgery in the event of chemotherapy and within 56 days in the event of radiotherapy
QI 6 - Patient information: proportion of patients receiving full information before surgery as detailed in measure 40 of the French national Cancer Plan
All 23 hospitals (3 public, 20 cancer centres) taking part in the feasibility test completed and returned the grids. They randomly selected 2044 medical records; inclusion criteria were not met in 274 records, so 1770 (87 %) were included in our analysis. All QIs, except QI 1 (time to first appointment with surgeon), showed fair feasibility and high inter-hospital variability.
QI 1 was ambiguously worded: hospitals understood “time to first appointment” differently. Some took the starting point to be the date the patient’s medical record was created, others the date when the patient called the hospital for an appointment, and others the date on the GP’s letter requesting an appointment for the patient. Because of this ambiguity, QI 1 was retained for internal hospital use only and was excluded from the hospital comparisons below.
QI performance and inter-hospital comparisons
The 54 hospitals were situated in different regions of France and differed in their status (public/private) and number of beds. All except one met the threshold volume of activity (> 30 breast cancer surgeries/year) that is required by French health authorities. The breakdown according to annual volume of activity was as follows: 28–80 operations (n = 6 hospitals), 122–197 (n = 11), 205–377 (n = 13), 476–945 (n = 18), 1036–1873 (n = 6).
We analysed 3624/5043 records (72 %) from the 54 hospitals. The main reasons for exclusion are given in the footnote to Figure 2. The incidence of missing data was, in decreasing order: 40.6 % (1471/2153) for the date of adjuvant therapy, 26.5 % (960/2664) for the date of the postoperative appointment, 11.1 % (402/3222) for the date of the postoperative MDTM, 8.8 % (319/3305) for the date of the first appointment with the surgeon, and 0.4 % (9/3615) for the date of surgery.
QI 2 (“time to surgery”) was subject to misinterpretation: it was unclear whether the starting point was (i) the appointment at which the decision to perform surgery was taken or (ii) the appointment at which the surgeon diagnosed suspected cancer and ordered tests before deciding to operate. QI 2 was nevertheless used, without modification, in the hospital comparisons.
Table 2 QI conformity scores and discriminatory power: mean conformity score, % (range), and discriminatory power (Gini coefficient), for each QI
Having defined quality as compliance with the care process, which has been shown to be associated with patient outcomes, we developed 7 process QIs relating to the timeliness and organisation of breast cancer care. All 7 QIs proved robust, as indicated by their metrological properties and feasibility. In addition, all 7 revealed considerable inter-hospital variability, showing that there is substantial room for improvement in the quality of care.
Three of the 7 QIs are ready for nationwide implementation, namely, QI 2 (time to surgery), QI 3 (time to postoperative MDTM), and QI 8 (conformity of postoperative MDTM). Although some hospitals misunderstood the wording of QI 2 in the feasibility test, no change was made in the performance assessment test. The meaning has, however, since been clarified with a view to nationwide implementation of this QI. QI 2 now refers unambiguously to the date of the appointment when the decision to perform surgery is taken and not to the date of the appointment when the surgeon diagnoses suspected cancer and orders tests before deciding to operate.
The four other validated QIs (QI 6 - patient information, QI 4 - waiting time to first appointment after surgery, QI 5 - time to first postoperative treatment, and QI 7 - traceability of information relating to prognosis) are applicable only to hospitals that offer both surgery and postoperative radiotherapy or chemotherapy. Comparing all hospitals on these QIs is risky because data were often missing (11 %–40 %). QI 6 had a very low mean conformity score (12.8 %) because of poor traceability of the information given to patients.
The eighth QI we developed (QI 1 - waiting time to first appointment with the surgeon) proved too ambiguous to be used for comparisons among hospitals.
The external validity of our results may be considered satisfactory because (i) our patient sample was fairly representative, as the 70 volunteer hospitals were a good reflection of the facilities available for breast cancer care in France; (ii) the sample was homogeneous, as we focussed on a subset of breast cancer patients; (iii) the number of audited and analysed medical records was large; and (iv) the results were insensitive to the reactive effects of testing because of the retrospective nature of the audit.
Our results reflect real-life conditions, i.e. the technical and organisational constraints encountered when implementing QIs in hospitals. We anticipated the main problems: given the absence of validated quality measures of breast cancer care, we defined quality as compliance with the process of care, which has been shown to correlate with patient outcome. A systematic review published in 2006 underlined the paucity of validated quality measures in breast cancer care and the need to develop “reliable, validated quality measures […] to afford accountability, improvement, and research”. Several health care organisations have emphasised the importance of measuring the timeliness of care, from screening to pathology results, so that institutional performances can be compared. Moreover, in these times of patient-centred care, when patients were asked which aspects of care they would improve if they could, those relating to waiting times were the most frequently mentioned. We therefore decided to concentrate on timely access as a good proxy for “quality care”.
Although we tried to forestall many of the problems that might arise when designing our QIs, we nevertheless had to contend with several hurdles.
The first hurdle was that the French PMSI database, which was used to randomly select the 80 medical records, did not contain all the required information. We used restrictive inclusion criteria (“infiltrating, non-inflammatory breast cancer”) to obtain a homogeneous population, and excluded patients with carcinoma in situ and patients with prior breast cancer treatment. This screening had to be done manually, however, and represented a fairly heavy workload. The 20 extra records selected from the database did not always compensate for the recorded 28 % exclusion rate.
A second hurdle was that, in the French health care system, each hospital does not have access to all the data on a given patient. For example, QI 4 and QI 5 could not be calculated when follow-up or all care did not take place within the same hospital (e.g. appointment in private practice (QI 4), and appointment in one hospital with treatment in another hospital (QI 5)). The situation was even more complex when these hospitals had a different status (public, private, or not-for-profit).
A third hurdle, which is also a limitation of our results, was that the criteria on waiting times and delays are based on expert consensus rather than on standards derived from high-level-of-evidence practice guidelines, which are normally used to construct QIs. Each country has its own standards. The good practice guide produced in 2009 by the National Collaborating Centre for Cancer for NICE (National Institute for Health and Clinical Excellence) recommends a delay of no more than 4 weeks from diagnosis to treatment, and starting chemotherapy or radiotherapy within 31 days of surgery. In contrast, French guidelines published in 2002 recommend a 21-day delay from the first appointment with the surgeon to surgery (similar to the National Initiative on Cancer Care Quality (NICCQ) recommendation in the USA), a 30-day delay from surgery to chemotherapy, and a 56-day delay from surgery to radiotherapy.
This hurdle could be partly overcome by using as targets the proportion of patients treated within set times. Such targets better satisfy health professionals, for whom delays should reflect organisational constraints and not patient-related causes (e.g. a patient not turning up for an appointment, or treatment postponed at the patient’s request). According to European guidelines, a threshold of 90 % is acceptable for ≤15 working days between the decision to operate and surgery, and 70 % for ≤10 days. According to EUSOMA, the minimum standard is >75 % and the target >90 % for surgery performed within 6 weeks of the first diagnostic examination in the breast unit. The Dutch auditing system has established a 90 % standard for 5 QIs. However, comparison with our results is difficult because of differences in QI definitions. Should the French health authorities adopt 90 % of patients treated within each set time as the standard, there is much room for improvement in many hospitals, as shown in Table 2.
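A conformity score of the kind discussed above, i.e. the proportion of patients treated within a set time, compared against a target such as 90 %, reduces to a simple date calculation. A minimal sketch with invented dates, using the 21-day decision-to-surgery delay as the example:

```python
from datetime import date

# Hypothetical records: (date of decision to operate, date of surgery)
records = [
    (date(2009, 3, 2), date(2009, 3, 16)),   # 14 days: conforming
    (date(2009, 3, 5), date(2009, 4, 20)),   # 46 days: non-conforming
    (date(2009, 4, 1), date(2009, 4, 15)),   # 14 days: conforming
]

def conformity(records, max_days=21):
    """Proportion of patients operated on within max_days of the decision."""
    within = sum((surgery - decision).days <= max_days
                 for decision, surgery in records)
    return within / len(records)

score = conformity(records)      # 2 of 3 records conform here
meets_standard = score >= 0.90   # compare with a 90 % threshold
```

In practice the numerator would have to exclude patient-related delays (e.g. a postponement at the patient's request), which is precisely why such targets are more acceptable to professionals than raw median waiting times.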
Recent experiences in Europe and the USA have shown that QI implementation at a local or national level, using a variety of methodologies, can improve the quality of care of breast cancer patients, but that this takes time [35–37]. In the Dutch experience, none of 9 QIs met standards in 2002 whereas 4 did in 2008, with a significant improvement in all 9 QIs. Because hospitals simply perform better when they know they are being evaluated (the Hawthorne effect), but also because comparison promotes better record-keeping and compliance with best clinical practice, improvements can be expected in France as well.
Whether QI scores should count towards the certification of breast cancer centres is a moot point. One option would be to follow the example of the National Quality Measures for Breast Centers (NQMBC), which uses the degree of participation in the online registration of answers to a set of quality questions to grant 3 levels of certification for quality breast health care [17, 36].
Our selected QIs on the timeliness of breast cancer care proved feasible and applicable in the clinic. Their implementation was highly dependent on care organisation, patient behaviour, and the quality of the information systems used by French hospitals. Future QIs should cover the entire care pathway, from before to after hospital consultations and admission, and should include the patient’s perspective [7, 38]. The COMPAQ-HPST project is currently constructing QIs covering care from an abnormal screening result right through to post-treatment follow-up, as in the ambitious programme developed by the American Society of Clinical Oncology (ASCO). This is a challenge because patient databases managed by hospitals with often incompatible information systems will need to be merged, and access to these data guaranteed. The challenge is even greater at the European level, where account has to be taken of differences in the organisation of care among countries.
The authors acknowledge the support provided by the French Ministry of Health, the French National Cancer Institute and HAS, and the participation of the representatives of the 43 hospitals included in the study. They thank the members of the COMPAQ-HPST team and Dr. Catherine Grenier (former COMPAQH Project Manager and current Director of the Quality Department of the French Federation of Cancer Centres) for their collaboration.
1. INVs: Epidémiologie du cancer du sein en France et en Europe. 2010. http://www.e-cancer.fr/depistage/depistage-du-cancer-du-sein/dossier-pour-les-professionnels/epidemiologie
2. Moher D, Schachter HM, Mamaladze V, Lewin G, Paszat L, Verma S, et al: Measuring the quality of breast cancer care in women. Evid Rep Technol Assess (Summ). 2004, 105: 1-8.
3. Perry N, Broeders M, de Wolf C, Tornberg S, Holland R, von Karsa L (Eds): European guidelines for quality assurance in breast cancer screening and diagnosis. 2006, Luxembourg: Office for Official Publications of the European Communities, 416 p.
4. Rosselli Del Turco M, Ponti A, Bick U, Biganzoli L, Cserni G, Cutuli B, et al: Quality indicators in breast cancer care. Eur J Cancer. 2010, 46 (13): 2344-2356. doi:10.1016/j.ejca.2010.06.119
5. HAS-INCa: Réunion de concertation pluridisciplinaire en cancérologie. 2006, Paris: HAS-INCa. http://www.has-sante.fr/portail/jcms/c_438502/reunion-de-concertation-pluridisciplinaire-en-cancerologie-rcp-4-pages
6. INCa: Recommandations nationales pour la mise en oeuvre du dispositif d’annonce du cancer dans les établissements de santé, mesure 40 du plan cancer. 2005, Paris: INCa. http://www.e-cancer.fr/soins/parcours-de-soins/dispositif-dannonce
7. de Kok M, Scholte RW, Sixma HJ, van der Weijden T, Spijkers KF, van de Velde CJ, et al: The patient’s perspective of the quality of breast cancer care. The development of an instrument to measure quality of care through focus groups and concept mapping with breast cancer patients. Eur J Cancer. 2007, 43 (8): 1257-1264. doi:10.1016/j.ejca.2007.03.012
8. Hislop TG, Harris SR, Jackson J, Thorne SE, Rousseau EJ, Coldman AJ, et al: Satisfaction and anxiety for women during investigation of an abnormal screening mammogram. Breast Cancer Res Treat. 2002, 76 (3): 245-254. doi:10.1023/A:1020820103126
9. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, De Cristofaro A, et al: The quality of health care delivered to adults in the United States. N Engl J Med. 2003, 348 (26): 2635-2645. doi:10.1056/NEJMsa022615
10. Malin JL, Schneider EC, Epstein AM, Adams J, Emanuel EJ, Kahn KL: Results of the National Initiative for Cancer Care Quality: how can we improve the quality of cancer care in the United States? J Clin Oncol. 2006, 24 (4): 626-634. doi:10.1200/JCO.2005.03.3365
11. National Consortium of Breast Centers: Definition of quality of breast cancer care. 2008. http://www.breastcare.org/
12. Ministère des Solidarités, de la Santé et de la Famille, Direction de l’hospitalisation et de l’organisation des soins: Circulaire N°DHOS/SDO/2005/101 du 22 février 2005 relative à l’organisation des soins en cancérologie. 2005.
13. Chattelier C, Vallier N, Ricordeau Ph, Colonna F, Allemand H: Modalities of early-stage breast cancer management: review of practices in France in 2005. Prat Organ Soins. 2007, 38 (4): 249-258.
14. Latarche C, Desandes E, Mayeux D, Stines J, Guillemin F: Provider delays among patients with breast cancer in a Regional Cancer Care Network: feasibility of personal care schedule. Bull Cancer. 2004, 91 (12): 965-971.
15. Richards MA, Westcombe AM, Love SB, Littlejohns P, Ramirez AJ: Influence of delay on survival in patients with breast cancer: a systematic review. Lancet. 1999, 353 (9159): 1119-1126. doi:10.1016/S0140-6736(99)02143-1
16. Barton MB, Morley DS, Moore S, Allen JD, Kleinman KP, Emmons KM, et al: Decreasing women’s anxieties after abnormal mammograms: a controlled trial. J Natl Cancer Inst. 2004, 96 (7): 529-538. doi:10.1093/jnci/djh083
17. Kaufman CS, Shockney L, Rabinowitz B, Coleman C, Beard C, Landercasper J, et al: National Quality Measures for Breast Centers (NQMBC): a robust quality tool: breast center quality measures. Ann Surg Oncol. 2010, 17 (2): 377-385. doi:10.1245/s10434-009-0729-5
18. Pineault P: Breast cancer screening: women’s experiences of waiting for further testing. Oncol Nurs Forum. 2007, 34 (4): 847-853. doi:10.1188/07.ONF.847-853
19. Thorne SE, Harris SR, Hislop TG, Vestrup JA: The experience of waiting for diagnosis after an abnormal mammogram. Breast J. 1999, 5 (1): 42-51. doi:10.1046/j.1524-4741.1999.005001042.x
20. Leleu H, Capuano F, Couralet M, Nitenberg G, Campos A, Minvielle E: Developing and using quality indicators in French health care organisations: a new area of health services and management research. Lessons from the COMPAQ-HPST project. J d’Econ Méd. 2011, 29: 37-46.
21. HAS-ANAES: Les référentiels d’évaluation des pratiques professionnelles - base méthodologique pour leur réalisation en France. 2004, Paris: HAS. http://www.has-sante.fr/portail/jcms/c_5232/evaluation-des-pratiques-professionnelles
22. Mauriac L, Luporsi E, Cutuli B, Fourquet A, Garbay JR, Giard S, et al: Summary version of the Standards, Options and Recommendations for nonmetastatic breast cancer (updated January 2001). Br J Cancer. 2003, 89 (Suppl 1): S17-S31.
23. Corriol C, Grenier C, Coudert C, Daucourt V, Minvielle E: The COMPAQH project: researches on quality indicators in hospitals. Rev Epidemiol Sante Publique. 2008, 56 (Suppl 3): S179-S188.
24. Minvielle E, Leleu H, Capuano F, Grenier C, Loirat P, Degos L: Suitability of three indicators measuring the quality of coordination within hospitals. BMC Health Serv Res. 2010, 10: 93. doi:10.1186/1472-6963-10-93
25. HAS-ANAES: Chirurgie des lésions mammaires: prise en charge de première intention. Evaluation des pratiques. 2002. http://www.has-sante.fr/portail/jcms/c_447405/chirurgie-des-lesions-mammaires-prise-en-charge-de-premiere-intention
26. Corriol C, Daucourt V, Grenier C, Minvielle E: How to limit the burden of data collection for quality indicators based on medical records? The COMPAQH experience. BMC Health Serv Res. 2008, 8: 215. doi:10.1186/1472-6963-8-215
27. Arkin CF, Wachtel MS: How many patients are necessary to assess test performance? JAMA. 1990, 263 (2): 275-278. doi:10.1001/jama.1990.03440020109043
28. McGlynn EA, Kerr EA, Adams J, Keesey J, Asch SM: Quality of health care for women: a demonstration of the quality assessment tools system. Med Care. 2003, 41 (5): 616-625.
29. Gini C: Measurement of inequality of incomes. Econ J. 1921, 31: 22-43.
30. Streiner D, Norman G: Validity. In Health measurement scales: a practical guide to their development and use. 1995, Oxford: Oxford University Press, 144-162.
31. Cohen J: Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968, 70 (4): 213-220.
32. Schachter HM, Mamaladze V, Lewin G, Graham ID, Brouwers M, Sampson M, et al: Many quality measurements, but few quality measures assessing the quality of breast cancer care in women: a systematic review. BMC Cancer. 2006, 6: 291. doi:10.1186/1471-2407-6-291
33. Landercasper J, Linebarger JH, Ellis RL, Mathiason MA, Johnson JM, Marcou KA, et al: A quality review of the timeliness of breast cancer diagnosis and treatment in an integrated breast center. J Am Coll Surg. 2010, 210 (4): 449-455. doi:10.1016/j.jamcollsurg.2010.01.015
34. National Collaborating Centre for Cancer: Breast cancer (early & locally advanced): diagnosis and treatment. Clinical Guidelines. 2009, updated 30 March 2010. http://guidance.nice.org.uk/CG80
35. Veerbeek L, van der Geest L, Wouters M, Guicherit O, Does-den Heijer A, Nortier J, et al: Enhancing the quality of care for patients with breast cancer: seven years of experience with a Dutch auditing system. Eur J Surg Oncol. 2011, 37 (8): 714-718. doi:10.1016/j.ejso.2011.03.003
36. NQMBC: National Quality Measures for Breast Centers™. 2011. http://www.nqmbc.org/QualityPerformanceYouShouldMeasure.htm
37. Brucker SY, Wallwiener M, Kreienberg R, Jonat W, Beckmann MW, Bamberg M, et al: Optimizing the quality of breast cancer care at certified German breast centers: a benchmarking analysis for 2003–2009 with a particular focus on the interdisciplinary specialty of radiation oncology. Strahlenther Onkol. 2011, 187 (2): 89-99.
38. Bentzon N, Erichsen CE, Axelsson CK, Freil M: Patients’ experiences - indications for treatment quality in breast cancer surgery. Ugeskr Laeger. 2006, 168 (23): 2252-2257.
39. Schneider EC, Malin JL, Kahn KL, Emanuel EJ, Epstein AM: Developing a system to assess the quality of cancer care: ASCO’s national initiative on cancer care quality. J Clin Oncol. 2004, 22 (15): 2985-2991. doi:10.1200/JCO.2004.09.087
The pre-publication history for this paper can be accessed at http://www.biomedcentral.com/1472-6963/12/167/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.