

The accuracy and completeness for receipt of colorectal cancer care using Veterans Health Administration administrative data



The National Comprehensive Cancer Network and the American Society of Clinical Oncology have established guidelines for the treatment and surveillance of colorectal cancer (CRC), respectively. Considering these guidelines, an accurate and efficient method is needed to measure receipt of care.


The accuracy and completeness of Veterans Health Administration (VA) administrative data were assessed by comparing them with data manually abstracted during the Colorectal Cancer Care Collaborative (C4) quality improvement initiative for 618 patients with stage I-III CRC.


The VA administrative data contained gender, marital status, and birth information for all patients, but race information was missing for 62.1 % of patients. Percent agreement for demographic variables ranged from 98.1 % to 100 %. The kappa statistic for receipt of treatments ranged from 0.21 to 0.60, and there was 96.9 % agreement on the date of surgical resection. The percentage of post-diagnosis surveillance events in C4 that also appeared in VA administrative data was 76.0 % for colonoscopy, 84.6 % for physician visits, and 26.3 % for carcinoembryonic antigen (CEA) tests.


VA administrative data are accurate and complete for non-race demographic variables, receipt of CRC treatment, colonoscopy, and physician visits; but alternative data sources may be necessary to capture patient race and receipt of CEA tests.



Background

Approximately 3,400 colorectal cancers (CRC) are diagnosed in the Veterans Health Administration (VA) each year [1]. Given CRC's 5-year survival rate of 65.2 % [2], most of these veterans will become long-term survivors. As such, the receipt of CRC-related care, which includes curative intent treatment for CRC (i.e., surgical resection, chemotherapy, and radiation) [3, 4] as well as tests for assessment and surveillance after the diagnosis of CRC (i.e., colonoscopy, physician visits, and carcinoembryonic antigen (CEA) tests) [5], is of broad interest for both veteran and non-veteran populations [6–9].

An accurate and efficient method is needed to measure receipt of CRC-related care. Potential data sources include surveys [10], health record review [11], and health claims (billing) data [10, 12–17]. The VA uses an electronic health record system, known as the Computerized Patient Record System (CPRS), which allows searching for procedure notes and other services within a given patient record to determine what care has been provided [18]. However, review of health records for research purposes typically occurs on a facility-by-facility basis because patient health records are not routinely shared between medical centers except for direct patient care. Use of health claims data has practical advantages in terms of access to large samples across multiple sites and low incremental costs. Thus, in this methodologic study, our objective was to compare the receipt of CRC-related care in the VA according to administrative data with data from a manual chart abstraction of the electronic health record performed during a quality improvement collaborative (EHR/QI) called the Colorectal Cancer Care Collaborative (C4). C4 was undertaken to improve processes of care delivery for CRC [7, 19].

Administrative data have been compared with health record review for receipt of cancer surveillance tests [12] and colonoscopy [10] by Medicare patients; receipt of colonoscopy prior to CRC diagnosis [17] and CRC surgery [16] in Alberta, Canada; and receipt of colonoscopy, surgery, chemotherapy, and radiation by CRC patients in New South Wales, Australia [15]. However, this study is the first to attempt to validate the use of VA administrative data for measuring the receipt of CRC treatment and surveillance care. The objectives of this study are to identify the types of care for which VA administrative data provide a reasonably complete and accurate measure of receipt of CRC-related care, as well as the types of care for which the administrative data may not be sufficiently complete or accurate, so that studies involving VA data know when alternative collection methods may be necessary.


Methods

The study was approved by the Institutional Review Board at Indiana University (0805–61), the VA research committee at the Roudebush VA Medical Center, and the Institutional Review Board at Louisiana Tech University (HUC 1048). The research is in compliance with the Helsinki Declaration.

Data sources

C4 was a national quality improvement project involving teams from 28 VA medical centers (VAMCs) with at least one team from each of the 21 regional Veterans Integrated Service Networks (VISNs) covering the entire U.S. (see Jackson et al., 2013 [9] for a detailed description of the C4 manual data abstraction.). Briefly, C4 sites used local cancer registries or clinical patient lists to identify a cohort of newly diagnosed CRC patients. C4 sites used an abstraction tool, called the Cancer Care Quality Measurement System (CCQMS), to systematically guide retrospective manual abstraction of approximately 230 data elements from the VA’s Computerized Patient Record System (CPRS). These elements were used to evaluate compliance with 25 quality indicators, based upon National Comprehensive Cancer Network (NCCN) guidelines, for colon and rectal cancer treatment. These quality indicators include receipt of curative intent CRC treatment and dates of CRC-related tests received by individual patients (which the current study uses to quantify the accuracy and completeness of VA administrative data for measuring the receipt of this care). Abstractors were cancer registrars, clinical, or administrative staff. The level of training and number of abstractors varied between sites but data variability was limited by a standard computerized data abstraction tool, data definition documents, availability of a centralized help desk, and training of abstractors [9]. A validation of inter-observer health record abstraction accuracy was not performed for the C4 study, but its data abstractors reported a high level of confidence in knowing what data needed to be entered, where to find the data in medical records, and that the instructions were clear [9].

Study population

The study cohort included patients identified during C4 with an incident diagnosis of non-metastatic (AJCC collaborative stage I, II, or III) CRC between January 1, 2004 and October 1, 2006 and with complete data collection, in which all NCCN quality measures and surveillance tests were evaluated by C4 abstractors. Follow-up care data were collected through October 1, 2007. Because there was no documentation of when each manual search of the EHR/QI was conducted, we assumed that the final search for a patient was conducted on the last documented date in the EHR/QI data for that patient. Patients with stage IV CRC were excluded from the current study because these patients may not be eligible for curative intent treatment and because the American Society of Clinical Oncology (ASCO) guidelines for CRC surveillance care are more uniformly defined for patients diagnosed with stage I-III CRC [5].

Electronic health record / quality improvement (EHR/QI) data

While all C4 study patients were eligible for data collection for at least 12 months after CRC diagnosis, the EHR/QI data collection was non-systematic. Data abstractions were not done at a fixed period of time after diagnosis. Therefore, a patient’s data were censored at the latest date recorded by the EHR/QI abstractor so only the period of care reviewed by the abstractors was used for analysis.

Administrative data

VA administrative data were drawn from inpatient (acute, extended, observation, and non-VA care) and outpatient files from January 1, 2004 through October 1, 2007. This period spans the EHR/QI data collection, from the earliest diagnosis of CRC in the EHR/QI to one year after the latest diagnosis of CRC, when EHR/QI surveillance data collection ended. For the inpatient file, the main, procedure, and surgery datasets were included. CRC-related care events were identified using relevant Current Procedural Terminology (CPT-4) procedure codes and International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) diagnosis codes (Table 1). ICD-9 codes are used in both the inpatient and outpatient administrative files and CPT-4 codes are used in the outpatient administrative files, so both types of codes are required to identify all diagnoses and procedures. VA administrative data are based on information entered into CPRS and coded (e.g., ICD-9 and CPT-4 codes) at the time of the patient encounter. These data from across the VA healthcare system are then stored in a centralized location. We obtained the VA administrative data for this study by querying the VA national data system (NDS) for records containing any of the codes listed in Table 1 during the study period for the study patients. Demographic variables were identified from the visit dataset in the outpatient file. In addition, demographic characteristics were also drawn from the VA Central Cancer Registry (VACCR) for a separate comparison with the EHR/QI data.
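The query logic described above amounts to filtering encounter records against code sets. The sketch below is a minimal illustration, not the study's code: the record layout is hypothetical, and the code sets are small illustrative stand-ins (ICD-9 153.9 and CPT-4 45378/82378 are real codes, but the study's actual lists appear in its Table 1).

```python
from datetime import date

# Illustrative code sets only; the study's full lists appear in its Table 1.
# ICD-9 153.9 = malignant neoplasm of colon; CPT-4 45378 = diagnostic
# colonoscopy; CPT-4 82378 = carcinoembryonic antigen (CEA) assay.
TARGET_ICD9 = {"153.9", "154.0"}
TARGET_CPT4 = {"45378", "82378"}

def is_crc_related(record):
    """True if any diagnosis or procedure code on the encounter matches."""
    return (any(code in TARGET_ICD9 for code in record["icd9"])
            or any(code in TARGET_CPT4 for code in record["cpt4"]))

# Hypothetical encounter rows (the real sources are the VA inpatient and
# outpatient datasets described above).
records = [
    {"patient": "A", "date": date(2005, 3, 1), "icd9": ["153.9"], "cpt4": []},
    {"patient": "B", "date": date(2005, 4, 2), "icd9": ["401.9"], "cpt4": []},
    {"patient": "A", "date": date(2005, 9, 9), "icd9": [], "cpt4": ["45378"]},
]
crc_events = [r for r in records if is_crc_related(r)]
```

The same pass over both inpatient and outpatient files would yield the candidate CRC-related events that the rest of the analysis compares against the EHR/QI abstraction.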

Table 1 ICD-9 diagnostic, CPT-4 procedure, and laboratory codes used to identify CRC-related treatments and test events

Variables collected

CRC-related care included both curative intent treatments and surveillance tests done after CRC diagnosis to detect CRC recurrence. Receipt of the following curative intent treatments was drawn from the EHR/QI and VA administrative data: surgical resection, neoadjuvant chemotherapy, adjuvant chemotherapy, neoadjuvant radiation, and adjuvant radiation. In the VA administrative data, chemotherapy or radiation prior to the date of surgical resection in the EHR/QI data was considered neoadjuvant treatment and chemotherapy or radiation after this date was considered adjuvant treatment; the completion of treatment courses was not evaluated.
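Under this rule, classifying a chemotherapy or radiation record reduces to a date comparison against the EHR/QI resection date. A minimal sketch (how a same-day record is handled is our assumption, since the text does not specify it):

```python
from datetime import date

def classify_timing(treatment_date, resection_date):
    """Label a chemotherapy/radiation record relative to the EHR/QI
    surgery date: before the resection is neoadjuvant, after is adjuvant.
    Treating a same-day record as adjuvant is an assumption."""
    return "neoadjuvant" if treatment_date < resection_date else "adjuvant"

# Example: radiation six weeks before the resection is neoadjuvant.
label = classify_timing(date(2005, 1, 10), date(2005, 2, 21))
```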

Receipt of the following surveillance tests [20] was compared between the EHR/QI and VA administrative data: colonoscopy [21], CEA test [5], and physician visits. Two administrative records were considered to be the same event if they 1) were of the same type and 2) occurred within a specified time window of each other. Colonoscopies and surgeries that occurred within 30 days of each other were considered the same event, and CEA tests and physician visits that occurred within 7 days of each other were considered the same event [22]. In these cases, the event that occurred first was considered the "true" event because, for a short time window, the repeat test is likely a completion of or follow-up to the first test and not an independent event. These time windows were applied to every instance of every test.
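The windowing rule above can be sketched as a small deduplication pass: sort each event type chronologically, keep the first event, and drop any later event of the same type that falls within the window of an already-kept event. How chains of closely spaced events were resolved is not specified in the text, so this is one plausible reading:

```python
from datetime import date, timedelta

# Windows from the text: 30 days for colonoscopies and surgeries,
# 7 days for CEA tests and physician visits.
WINDOW_DAYS = {"colonoscopy": 30, "surgery": 30, "cea": 7, "visit": 7}

def dedupe_events(events):
    """events: (event_type, date) pairs; returns the retained 'true'
    events, keeping the earliest event of any window-overlapping cluster."""
    kept = []
    for etype, when in sorted(events):
        window = timedelta(days=WINDOW_DAYS[etype])
        if any(k_type == etype and when - k_when <= window
               for k_type, k_when in kept):
            continue  # repeat within the window: treat as the same event
        kept.append((etype, when))
    return kept
```

For example, two colonoscopies 14 days apart collapse to one event, while two CEA tests 19 days apart remain separate events.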

The demographic variables of date of birth, date of death, gender, race, and marital status were collected from EHR/QI and VA administrative data as well as the VACCR.


Patients in the VA administrative data were matched with patients in the EHR/QI by their social security number. For demographic variables, values from VA administrative data were compared to EHR/QI data for accuracy and completeness (see Table 2). The accuracy of a variable was assessed using the percent agreement between the values in the EHR/QI and VA administrative data for patients who had non-missing observations in both sources. The completeness of a variable was the percentage of patients with an observation in the EHR/QI data who also had the observation in VA administrative data. While this metric misses EHR/QI false negatives, it does demonstrate the ability of the VA administrative data to identify outcomes that would also be identified by manual abstraction.
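These two metrics can be written down directly. A minimal sketch with hypothetical paired patient values, where None marks a missing observation:

```python
def percent_agreement(ehr_values, admin_values):
    """Agreement computed only over patients with non-missing values
    in both sources, as defined above."""
    pairs = [(e, a) for e, a in zip(ehr_values, admin_values)
             if e is not None and a is not None]
    return 100.0 * sum(e == a for e, a in pairs) / len(pairs)

def completeness(ehr_values, admin_values):
    """Share of patients with an EHR/QI observation who also have one
    in the administrative data."""
    observed = [a for e, a in zip(ehr_values, admin_values) if e is not None]
    return 100.0 * sum(a is not None for a in observed) / len(observed)

# Hypothetical race values for four patients:
ehr = ["white", "black", "white", "white"]
admin = ["white", None, "black", "white"]
# Agreement is computed over the 3 complete pairs; completeness over
# the 4 patients observed in the EHR/QI.
```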

Table 2 Metrics used to evaluate the accuracy of the VA administrative data relative to EHR/QI manual abstraction data

To evaluate the receipt of individual surveillance test events, the accuracy of VA administrative data relative to the EHR/QI data for events of each type of post-diagnosis test was assessed using the positive predictive value relative to the EHR/QI data. The advantage of the positive predictive value is that it assesses the overall receipt of a test and not individual test events; this is important because it was unclear whether the EHR/QI continued to collect surveillance events once the recommended surveillance was achieved. The completeness of VA administrative data for events of each type of post-diagnosis test was the fraction of test events in the EHR/QI also present in VA administrative data.
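In event terms: completeness is the share of EHR/QI events matched in the administrative data, while the positive predictive value is the share of administrative-data events matched in the EHR/QI. A sketch using the study's own colonoscopy counts (174 matched events, 229 EHR/QI events, and roughly 285 administrative events implied by the reported PPV):

```python
def event_completeness(n_matched, n_ehr_events):
    """Fraction of EHR/QI events also found in administrative data."""
    return 100.0 * n_matched / n_ehr_events

def event_ppv(n_matched, n_admin_events):
    """Fraction of administrative-data events also found in the EHR/QI."""
    return 100.0 * n_matched / n_admin_events

colonoscopy_completeness = event_completeness(174, 229)  # about 76.0 %
colonoscopy_ppv = event_ppv(174, 285)                    # about 61.1 %
```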

To evaluate whether a type of care (curative treatment or surveillance test) was received at any time during the eligible time period, the accuracy of VA administrative data was determined using kappa statistics for agreement versus the EHR/QI data. The kappa statistic for each type of care was based on comparing the receipt of care indicated by the EHR/QI and VA administrative data for individual patients over the eligible time interval. In general, kappa values <0 indicate less than chance agreement, 0.01–0.20 indicate slight agreement, 0.21–0.40 indicate fair agreement, 0.41–0.60 indicate moderate agreement, 0.61–0.80 indicate substantial agreement, and 0.81–1.00 indicate almost perfect agreement [23]. The sensitivity for identifying whether a type of CRC-related care was received at any time during the eligible time period was the fraction of patients documented by the EHR/QI data as receiving a type of care who also had documentation of that care in VA administrative data.
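For two parallel binary receipt-of-care flags, the kappa statistic compares observed agreement with the agreement expected by chance from each source's marginal rates. A minimal sketch of Cohen's kappa (confidence intervals omitted):

```python
def cohen_kappa(ehr_flags, admin_flags):
    """Cohen's kappa for two parallel binary ratings (1 = care received)."""
    n = len(ehr_flags)
    p_observed = sum(e == a for e, a in zip(ehr_flags, admin_flags)) / n
    # Chance agreement from each source's marginal rate of "received".
    p_ehr = sum(ehr_flags) / n
    p_admin = sum(admin_flags) / n
    p_chance = p_ehr * p_admin + (1 - p_ehr) * (1 - p_admin)
    return (p_observed - p_chance) / (1 - p_chance)

# Identical ratings give perfect agreement (kappa = 1.0); agreement no
# better than chance gives kappa = 0.
kappa = cohen_kappa([1, 0, 1, 1], [1, 0, 1, 1])
```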

All analyses were performed using R 2.15.2.


Results

The EHR/QI project completed chart abstraction for 624 patients with AJCC collaborative stage I, II, or III CRC; as shown in Table 3, the demographic characteristics of the EHR/QI cohort are similar to those of all patients diagnosed with AJCC collaborative stage I-III CRC in the VACCR during the study period. A total of 618 patients (99.0 %) identified in the EHR/QI process were also identified in VA administrative data. A total of 413 of these patients were also present in the VACCR with AJCC collaborative stage I, II, or III cancer.

Table 3 Patient characteristics and agreement between VA administrative data and EHR/QI chart abstraction

According to the EHR/QI data abstraction, the average age at the diagnosis date was 69.0 years and the population was predominantly male (98.1 %) and white (80.6 %) as shown in Table 3. Roughly half (50.8 %) of the patients were married. CRC cases were fairly evenly distributed between stage I (30.0 %), II (37.1 %), and III (33.0 %) and the majority of patients had colon cancer (69.7 %). Most patients had a surgical procedure (95.8 %; 598/624) and some received neoadjuvant chemotherapy (13.0 %; 81/624), adjuvant chemotherapy (27.1 %; 169/624), neoadjuvant radiation (13.6 %; 85/624), or adjuvant radiation (5.6 %; 35/624). Post-diagnosis care was monitored for an average of 10.4 months (7.9 month standard deviation).

Percent agreement and completeness of demographic characteristics

For the 618 of 624 patients in the EHR/QI data who were also in the VA administrative data, the agreement between the EHR/QI and VA administrative data for demographic and CRC variables was 98.7 % (610/618) for date of birth, 98.1 % (606/618) for date of death, 100.0 % (618/618) for gender, 93.0 % (213/229) for race, and 91.4 % (512/560) for marital status (Table 3). VA administrative data contained complete data for all patients (100 %, 618/618) for date of birth, gender, and marital status. However, the VA administrative data contained race data for only 37.9 % (234/618) of patients.

Percent agreement and completeness of cancer and clinical characteristics

For the 413 EHR/QI patients also present in the VACCR with stage I, II, or III cancer, the CRC stage in the VACCR agreed with the EHR/QI data for 95.2 % (393/413) of patients and the agreement for CRC location was 97.0 % (392/407 with missing data for 6 EHR/QI patients). The agreement for number of lymph nodes examined and number of lymph nodes positive for cancer were 94.3 % (367/389 with missing data in EHR/QI for 19 patients and VACCR for 5 patients) and 98.2 % (385/392 with missing data in EHR/QI for 19 patients and VACCR for 8 patients), respectively. The agreement between the C4 and the VACCR for demographic and CRC variables was 98.3 % (406/413) for age at diagnosis, 100.0 % (413/413) for gender, 98.5 % (392/398) for race, and 95.2 % (357/375) for marital status. The VACCR contained complete patient data (100 %, 413/413) for age at diagnosis, gender, and marital status and race data for 96.3 % (398/413) of patients.

Completeness and positive predictive value of VA administrative data for post-diagnosis test events

VA administrative data identified 174 of 229 (76.0 %; 95 % CI, 70.1–81.1 %) post-diagnosis colonoscopies, 294 of 1,117 (26.3 %; 95 % CI, 23.8–29.0 %) CEA tests, and 2,027 of 2,396 (84.6 %; 95 % CI, 83.1–86.0 %) physician visit events in the EHR/QI data (Table 4).

Table 4 Completeness and PPV for post-diagnosis test events in VA administrative data versus EHR/QI manual data abstraction

The positive predictive value of VA administrative data for post-diagnosis test events identified in the EHR/QI data was 61.1 % (95 % CI, 55.1–66.7 %) for colonoscopy, 54.1 % (95 % CI, 49.8–58.4 %) for CEA tests, and 31.9 % (95 % CI, 30.7–33.0 %) for physician visits (Table 4). A lower positive predictive value indicated that more events were identified in the administrative data that were not also identified by C4.

Sensitivity and agreement for receipt of CRC-related care (treatments and tests) at any time during eligible period

For treatments, VA administrative data identified 557 of 592 (94.1 %; 95 % CI, 91.8–95.6 %) patients who received surgical resection in the EHR/QI data; and the date of surgical resection agreed between the two data sources for 540 of these 557 (96.9 %; 95 % CI, 95.2–98.1 %) patients. VA administrative data identified 33 of 36 (91.7 %; 95 % CI, 76.4–97.8 %) patients who received neoadjuvant chemotherapy, 121 of 187 (65.1 %; 95 % CI, 57.7–71.8 %) who received adjuvant chemotherapy, 44 of 85 (51.8 %; 95 % CI, 40.7–62.6 %) who received neoadjuvant radiation, and 24 of 35 (68.6 %; 95 % CI, 50.6–82.6 %) who received adjuvant radiation (Table 5). VA administrative data and the EHR/QI data showed moderate agreement for receipt of neoadjuvant radiation (κ = 0.60; 95 % CI, 0.49–0.71), adjuvant chemotherapy (κ = 0.55; 95 % CI, 0.48–0.63), and neoadjuvant chemotherapy (κ = 0.53; 95 % CI, 0.40–0.65). The agreement was fair for the receipt of surgical resection (κ = 0.38; 95 % CI, 0.21–0.56) and adjuvant radiation (κ = 0.21; 95 % CI, 0.09–0.34).

Table 5 Sensitivity and kappa statistic for receipt of any CRC-related care in VA administrative data versus EHR/QI manual data abstraction

For post-diagnosis tests, VA administrative data identified 171 of 200 (85.5 %; 95 % CI, 79.7–89.9 %) patients who received a colonoscopy in the EHR/QI data, 190 of 382 (49.7 %; 95 % CI, 44.6–54.9 %) who received a CEA test, and 498 of 500 (99.6 %; 95 % CI, 98.4–99.9 %) who received a physician visit. VA administrative data and the EHR/QI showed almost perfect agreement for whether there was a primary care visit (κ = 0.83; 95 % CI, 0.78–0.89) at any time during the eligible period. The agreement for colonoscopy was substantial (κ = 0.66; 95 % CI, 0.59–0.72) and the agreement for CEA tests was fair (κ = 0.35; 95 % CI, 0.28–0.42).

The supplemental accuracy measures of the specificity, positive predictive value, and negative predictive value of receipt of a type of care during the eligible period for VA administrative data versus the EHR/QI data are provided in Table 6.

Table 6 Specificity, PPV, and NPV for receipt of any CRC-related care in VA administrative data versus EHR/QI manual data abstraction


Discussion

The use of cancer registries linked with administrative data represents a widely accepted methodologic approach in cancer health services research to identify cancer treatment and surveillance care that may not be captured by tumor registries alone. Linked SEER-Medicare databases have been used to develop relapse algorithms [24]; evaluate receipt of CRC surgical resection [25], chemotherapy [26], and radiation [27]; and study cancer screening rates [28]. Similarly, linkages between VA cancer registries and VA administrative data could be used to evaluate care received within the VA system if this care is adequately captured in administrative data. The methodologic study presented here evaluated the completeness and accuracy of VA administrative data for CRC-related care by comparing procedures identified in the administrative data with those from a manual abstraction of electronic health records performed during the course of a quality improvement collaborative.

There was good agreement between VA administrative and EHR/QI data for demographic variables (91.4 % to 100 % agreement) and CRC treatments received (80.3 % to 92.9 % agreement). The agreement for treatment is similar to what has been observed for Medicare data versus record review for receipt of chemotherapy (88 % colon and 91 % rectal) [26] and radiation (93.5 %) [27]. As previously noted by others [29, 30], there is considerable missingness in VA administrative data for race (62.1 % missing). Our result is consistent with what is known about race in VA administrative data; according to the VIReC Technical Report: VA Race Data Quality, race is missing from 33–37 % of VA administrative data records for 2004–2005, although the quality of the data has been improving over time [31]. The missingness of race in the VA cancer registry was considerably lower (2.2 %), so the registry may be considered the primary source of race for studies involving VA patients with cancer.

VA administrative data were reasonably complete for post-diagnosis colonoscopy (76.0 %) and physician visit (84.6 %) events, but the completeness for CEA (26.3 %) events was considerably lower. This lower completeness for CEA tests may be due to the coding of many of these procedures in laboratory data, which were not considered here, or to a lower rate of coding laboratory tests in the administrative data. The agreement for receipt of colonoscopy (κ = 0.66) was substantial but lower than the almost perfect agreement that has been observed between Medicare data and health record review (κ = 0.89) [10]. However, as discussed below, the majority of the disagreements in the VA data were due to colonoscopies in the VA administrative data that were not documented by the EHR/QI.

There were a significant number of procedures of all types identified in VA administrative data but not by the EHR/QI manual abstraction process. For example, the positive predictive value for colonoscopy after diagnosis was 61.1 %; that is, 111 of the 285 colonoscopy events in VA administrative data were not in the EHR/QI data. This difference between administrative claims data and chart abstraction can be explained by either "false-positive" claims or incomplete/inaccurate abstraction. The latter explanation is the more compelling. Specifically, it was unclear whether the EHR/QI continued to collect surveillance events once the recommended surveillance was achieved; this would lead to apparent false positives in the VA administrative data, which would lower the positive predictive value. Also, differences between administrative data and manual abstraction may occur if abstractors misinterpreted coding directions (e.g., difficulty in defining a physician visit was noted as a limitation of the EHR/QI [9]), did not collect all procedures (e.g., local site variations [32] or the patient met the indicator criteria so no additional procedures were collected), or simply missed procedures. Because C4 was undertaken as part of a quality improvement effort rather than a systematic CRC surveillance study, these abstraction errors are likely to be more prevalent in the C4 data than in traditional research studies. Previous studies stemming from C4 have identified timely surveillance of cancer survivors as the greatest opportunity for improvement; however, due to missing data, these studies may have overestimated the size of the quality gap [7, 9].
Also of note, colon and rectal cancer were not differentiated using VA administrative data. Because the EHR/QI contains indicator variables for neoadjuvant chemotherapy and neoadjuvant radiation for rectal but not colon cancer, as well as no indicator variable for adjuvant radiation [9], this lack of differentiation likely led to many of the "false positives" for these variables in the VA administrative data.

There are several potential limitations to this study. While the receipt of CRC surveillance procedures was compared, the indication for each procedure (i.e., whether the procedure was performed for surveillance or diagnosis) was unknown. Without the indication to identify whether procedures were related, time windows were used to identify related procedures, but this may combine two unrelated procedures. There was also no documentation of when each manual search of the EHR/QI was conducted. We assumed that the final search for a patient was conducted on the last documented date in the EHR/QI data for that patient, but it is possible that subsequent searches were conducted that found no new information. We also assumed that all EHR/QI variables were searched for during each manual abstraction, but it is possible that not all variables were searched for during each examination of the health record. Of note, our study represents a comparison of administrative data and electronic health records in the VA system. While these methods may serve as a guide, they should be validated in other healthcare systems. In addition, the VA CRC patient population is predominantly male and diagnosed at an earlier stage than the general population [1], so the study cohort may not be generalizable. Finally, the comparison of patient characteristics in Table 3 was based on patients whose data were not missing, so there is the potential for selection bias.


Conclusions

In summary, national Veterans Affairs administrative data can be a powerful tool to assess the quality of CRC surveillance care. However, before measures of care quality can be instituted, the accuracy of these data should be examined. Results from this study suggest that VA administrative data are accurate and complete for non-race demographic variables, receipt of CRC treatment, colonoscopy, and physician visits, and may be a viable alternative to manual abstraction for the measurement of CRC care. However, alternative data sources will be necessary to capture patient race and receipt of CEA tests.



Abbreviations

AJCC: American Joint Committee on Cancer
ASCO: American Society of Clinical Oncology
C4: Colorectal Cancer Care Collaborative
CCQMS: Cancer Care Quality Measurement System
CEA: carcinoembryonic antigen
CPRS: Computerized Patient Record System
CPT: Current Procedural Terminology
CRC: colorectal cancer
EHR/QI: electronic health record / quality improvement collaborative
ICD-9: International Classification of Diseases, 9th revision
NCCN: National Comprehensive Cancer Network
NDS: national data system
SEER: Surveillance, Epidemiology, and End Results
VA: Veterans Health Administration
VACCR: VA Central Cancer Registry
VAMC: VA medical center
VIReC: VA Information Resource Center
VISN: Veterans Integrated Service Network


References

1. Zullig LL, Jackson GL, Dorn RA, Provenzale DT, McNeil R, Thomas CM, et al. Cancer incidence among patients of the United States Veterans Affairs (VA) healthcare system. Mil Med. 2012;177:693–701.
2. O'Connell JB, Maggard MA, Ko CY. Colon cancer survival rates with the new American Joint Committee on Cancer Sixth Edition Staging. J Natl Cancer Inst. 2004;96:1420–5.
3. Engstrom PF, Arnoletti JP, Benson III AB, Chen YJ, Choti MA, Cooper HS, et al. Colon cancer. J Natl Compr Canc Netw. 2009;7:778–831.
4. Engstrom PF, Arnoletti JP, Benson III AB, Chen YJ, Choti MA, Cooper HS, et al. Rectal cancer. J Natl Compr Canc Netw. 2009;7:838–81.
5. Desch CE, Benson III AB, Somerfield MR, Flynn PJ, Krause C, Loprinzi CL, et al. Colorectal cancer surveillance: 2005 update of an American Society of Clinical Oncology practice guideline. J Clin Oncol. 2005;23:8512–9.
6. Malin JL, Schneider EC, Epstein AM, Adams J, Emanuel EJ, Kahn KL. Results of the National Initiative for Cancer Care Quality: How can we improve the quality of cancer care in the United States? J Clin Oncol. 2006;24:626–34.
7. Jackson GL, Melton DL, Abbott D, Zullig LL, Ordin DL, Grambow SC, et al. Quality of nonmetastatic colorectal cancer care in the Department of Veterans Affairs. J Clin Oncol. 2010;28:3176–81.
8. Keating NL, Landrum MB, Lamont EB, Bozeman SR, Krasnow SH, Shulman LN, et al. Quality of care for older patients with cancer in the Veterans Health Administration versus the private sector: A cohort study. Ann Intern Med. 2011;154:727–36.
9. Jackson GL, Zullig LL, Zafar SY, Powell AA, Ordin DL, Gellad ZF, et al. Using NCCN clinical practice guidelines in oncology to measure the quality of colorectal care in the Veterans Health Administration. J Natl Compr Canc Netw. 2013;11:431–41.
10. Schenck AP, Klabunde CN, Warren JL, Peacock S, Davis WW, Hawley ST, et al. Data sources for measuring colorectal endoscopy use among Medicare enrollees. Cancer Epidemiol Biomark Prev. 2007;16:2118–27.
11. Abraham NS, Gossey JT, Davalia JA, Al-Oudat S, Kramer JK. Receipt of recommended therapy by patients with advanced colorectal cancer. Am J Gastroenterol. 2006;101:1320–8.
12. Butler Nattinger A, Schapira MM, Warren JL, Earle CC. Methodological issues in the use of administrative claims data to study surveillance after cancer treatment. Med Care. 2002;40(IV):69–74.
13. Cooper GS, Schultz L, Simpkins J, Lafata JE. The utility of administrative data for measuring adherence to cancer surveillance care guidelines. Med Care. 2007;45:66–72.
14. Cooper GS, Kou TD, Reynolds Jr HL. Receipt of guideline-recommended follow-up in older colorectal cancer survivors: A population-based analysis. Cancer. 2008;113:2029–37.
15. Goldsbury DE, Armstrong K, Simonella L, Armstrong BK, O'Connell DL. Using administrative health data to describe colorectal and lung cancer care in New South Wales, Australia: A validation study. BMC Health Serv Res. 2012;12:387.
16. Li X, King C, deGara C, White J, Winget M. Validation of colorectal cancer surgery data from administrative data sources. BMC Health Serv Res. 2012;12:97.
17. Li X, Hilsden R, Hossain S, Fleming J, Winget M. Validation of administrative data sources for endoscopy utilization in colorectal cancer diagnosis. BMC Health Serv Res. 2012;12:358.
18. Jackson GL, Krein SL, Alverson DC, Darkins AW, Gunnar W, Harada ND, et al. Defining core issues in utilizing information technology to improve access: Evaluation and research agenda. J Gen Intern Med. 2011;26:623–7.
19. Jackson GL, Powell AA, Ordin DL, Schlosser JE, Murawsky J, Hersh J, et al. Developing and sustaining quality improvement partnerships in the VA: The Colorectal Cancer Care Collaborative. J Gen Intern Med. 2010;25:38–43.
20. Anthony T, Simmang C, Hyman N, Buie D, Kim D, Cataldo P, et al. Practice parameters for the surveillance and follow-up of patients with colon and rectal cancer. Dis Colon Rectum. 2004;47:807–17.
21. Rex DK, Kahi CJ, Levin B, Smith RA, Bond JH, Brooks D, et al. Guidelines for colonoscopy surveillance after cancer resection: a consensus update by the American Cancer Society and the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology. 2006;130:1865–71.
22. Ramsey SD, Howlader N, Etzioni R, Brown ML, Warren JL, Newcomb P. Surveillance endoscopy does not improve survival for patients with local and regional stage colorectal cancer. Cancer. 2006;109:2222–8.
23. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
24. Earle CC, Nattinger AB, Potosky AL, Lang K, Mallick R, Berger M, et al. Identifying cancer relapse using SEER-Medicare data. Med Care. 2002;40(IV):75–81.
25. Cooper GS, Virnig B, Klabunde CN, Schussler N, Freeman J, Warren JL. Use of SEER-Medicare data for measuring cancer surgery. Med Care. 2002;40(IV):43–8.
26. Warren JL, Harlan LC, Fahey A, Virnig BA, Freeman JL, Klabunde CN, et al. Utility of the SEER-Medicare data to identify chemotherapy use. Med Care. 2002;40(IV):55–61.
27. Virnig BA, Warren JL, Cooper GS, Klabunde CN, Schussler N, Freeman J. Studying radiation therapy using SEER-Medicare-linked data. Med Care. 2002;40(IV):49–54.
28. Freeman JL, Klabunde CN, Schussler N, Warren JL, Virnig BA, Cooper GS. Measuring breast, colorectal, and prostate cancer screening with Medicare claims data. Med Care. 2002;40(IV):36–42.
29. Long JA, Bamba MI, Ling B, Shea JA. Missing race/ethnicity data in Veterans Health Administration based disparities research: A systematic review. J Health Care Poor Underserved. 2006;17:128–40.
30. Stroupe KT, Tarlov E, Zhang Q, Haywood T, Owens A, Hynes DM. Use of Medicare and DOD data for improving VA race data quality. J Rehabil Res Dev. 2010;47:781–96.
31. VA Information Resource Center. VIReC Technical Report: VA Race Data Quality. Hines: U.S. Department of Veterans Affairs, Health Services Research and Development Service, VA Information Resource Center; 2011.
32. Hayman AV, Chang ET, Molokie RE, Kahng LS, Prystowsky JB, Bentrem DJ. Assessing compliance with national quality measures to improve colorectal cancer care in the VA. Am J Surg. 2010;200:572–3.
33. Cooper GS, Yuan Z, Stange KC, Dennis LK, Amini SB, Rimm AA. Agreement of Medicare claims and tumor registry data for assessment of cancer-related treatment. Med Care. 2000;38:411–21.



This material is based upon work supported in part by the Department of Veterans Affairs, Health Services Research and Development Career Development Award CA207016-2, the VA / Robert Wood Johnson Foundation Physician Faculty Scholar Program (DAH), and funds from the Center of Excellence on Implementation of Evidence-based Practice, Richard L. Roudebush VA Medical Center.

Author information

Correspondence to Eric A. Sherer.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

DAH conceived the study and EAS designed the study. JB acquired the VA data and DAF, GLJ, and DP acquired the C4 data. EAS and DAH analysed and interpreted the data. EAS and DAH wrote the first draft of the manuscript and all authors contributed critical revisions for important intellectual content. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Sherer, E.A., Fisher, D.A., Barnd, J. et al. The accuracy and completeness for receipt of colorectal cancer care using Veterans Health Administration administrative data. BMC Health Serv Res 16, 50 (2016).



Keywords

  • Colorectal cancer
  • Cancer surveillance
  • Cancer treatment
  • Administrative data
  • Electronic medical record
  • Veterans Health Administration