Interpreting Incidence Data

What to know

Each year when U.S. Cancer Statistics data are released, we update data products with the most recent data submission. Users of cancer incidence data published by federal agencies should be mindful of the data submission dates for all data used in their analyses.

Choice of standard population and population denominator

The U.S. Department of Health and Human Services' policy for reporting death and disease rates was motivated by a need to standardize age adjustment procedures across government agencies.1,2 Because of the aging U.S. population, the 2000 U.S. standard population gives more weight to older age categories than the 1940 and 1970 standard populations did.2 The National Center for Health Statistics (NCHS) regularly evaluates the population standard and currently recommends using the 2000 U.S. standard population for calculating age-adjusted rates.

The age-adjusted rates in the Data Visualizations tool should not be compared with cancer incidence rates adjusted to different standard populations.

Incidence rates also are influenced by the choice of population denominators used in calculating these rates. Because some state health departments use customized projections of the state's population when calculating incidence rates, the rates in the Data Visualizations tool may differ slightly from those published by individual states.
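
To illustrate how the standard population and the population denominator both enter the calculation, the sketch below shows direct age standardization in Python. It is a minimal sketch only: the age groups, case counts, denominators, and standard-population weights are invented for illustration and are not the official 2000 U.S. standard population weights or the population estimates used in the Data Visualizations tool.

```python
# Minimal sketch of direct age standardization. The counts, denominators, and
# standard-population weights below are invented; they are not the official
# 2000 U.S. standard population weights or actual registry data.

def age_adjusted_rate(cases, populations, std_weights, per=100_000):
    """Directly age-adjusted rate per `per` persons.

    cases       : case counts by age group
    populations : person-years at risk by age group (the denominator choice
                  discussed above)
    std_weights : standard-population proportions by age group (sum to 1)
    """
    assert abs(sum(std_weights) - 1.0) < 1e-9, "weights should sum to 1"
    # Weight each age-specific crude rate by the standard population share.
    return per * sum(w * (c / p) for c, p, w in zip(cases, populations, std_weights))

# Hypothetical three-age-group example.
cases = [12, 85, 240]
populations = [150_000, 120_000, 60_000]
std_weights = [0.40, 0.35, 0.25]  # illustrative only
print(round(age_adjusted_rate(cases, populations, std_weights), 1))
```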

Registries’ data quality

Data quality is evaluated routinely by CDC's National Program of Cancer Registries (NPCR) and the National Cancer Institute's (NCI's) Surveillance, Epidemiology, and End Results (SEER) Program.3,4 Comprehensive evaluation activities are conducted to find missing cases or to identify errors in the data. Although the cancer registries meet data quality criteria for all invasive sites combined, the completeness and quality of site-specific data may vary, making in-depth analyses critical for presenting reliable data.

The observed rates may have been influenced by differences in the timeliness, completeness, and accuracy of the data from one registry to another, from one reporting period to another, or from one cancer site to another. In rare instances, a registry may identify a data quality issue after the file is submitted to CDC. In those instances, CDC will either suppress the identified segment or exclude the registry's data from analytic products.
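
As a minimal sketch of what excluding an identified segment might look like, the example below drops a hypothetical registry-year combination from an analysis file. The records, field names, and flagged segment are invented and do not represent CDC's actual data processing.

```python
# Illustrative only: drop a hypothetical flagged registry-year segment from an
# analysis file. The records and the flag below are invented.

flagged_segments = {("Registry X", 2019)}  # hypothetical data quality flag

records = [
    {"registry": "Registry X", "diagnosis_year": 2019, "site": "Lung", "count": 310},
    {"registry": "Registry X", "diagnosis_year": 2020, "site": "Lung", "count": 325},
    {"registry": "Registry Y", "diagnosis_year": 2019, "site": "Lung", "count": 190},
]

analytic_file = [
    r for r in records
    if (r["registry"], r["diagnosis_year"]) not in flagged_segments
]
print(analytic_file)  # Registry X's 2019 segment has been excluded
```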

Reporting

Reporting time intervals

Completeness and accuracy of the site-specific data also may be affected by the time interval allowed for reporting data to the two federal programs. The NPCR and SEER time intervals for reporting data differ: for each submission year, NPCR allows a 23-month interval after the close of the diagnosis year, and SEER allows a 22-month interval.
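
As a quick check of these windows, the sketch below derives the reporting cutoff dates from the stated intervals, assuming that the "close of the diagnosis year" means December 31 of that year.

```python
# Derive reporting cutoff dates from the stated 23-month (NPCR) and 22-month
# (SEER) intervals, assuming the close of the diagnosis year is December 31.
from datetime import date
import calendar

def add_months(d, months):
    """Add calendar months to a date, clamping to the last day of the month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def reporting_cutoff(diagnosis_year, months_allowed):
    """Cutoff date: the stated interval after the close of the diagnosis year."""
    return add_months(date(diagnosis_year, 12, 31), months_allowed)

year = 2021  # example diagnosis year
print("NPCR cutoff:", reporting_cutoff(year, 23))  # 2023-11-30
print("SEER cutoff:", reporting_cutoff(year, 22))  # 2023-10-31
```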

Reporting delays

Delays in reporting cancer cases can affect timely and accurate calculation of cancer incidence rates.5 Cases are reported continuously to state and metropolitan area cancer registries in accordance with statutory and contractual requirements.

After the initial submission of the most recent year's data to the federal funding agency, cancer registries update their data on the basis of new information received. Therefore, some cancer cases likely will have been reported to state and metropolitan area cancer registries after the registries submitted their data to CDC or NCI. For this reason, incidence rates and case counts reported directly by state or metropolitan area cancer registries may differ from those that appear in the Data Visualizations tool.

Reporting delays appear to be more common for cancers that usually are diagnosed and treated in non-hospital settings such as physicians' offices (for example, early-stage prostate and breast cancers and melanoma of the skin). Efforts are underway to reduce reporting delays. Methods to adjust incidence rates for reporting delay were not applied to the data in this report.5
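
For context, the sketch below illustrates the basic idea behind the delay-adjustment methods described in reference 5: scale the count reported so far by the estimated proportion of cases expected to have been reported at that point. The completeness proportion used here is invented, and, as noted above, no such adjustment was applied to the data in this report.

```python
# Minimal sketch of the idea behind delay adjustment (not applied to the data
# in this report): divide the count reported so far by the estimated proportion
# of cases expected to have been reported at this delay, so counts for recent
# diagnosis years are scaled up. The completeness value below is invented.

def delay_adjusted_count(reported_count, completeness):
    """Scale a reported count by the estimated reporting completeness in (0, 1]."""
    if not 0 < completeness <= 1:
        raise ValueError("completeness must be in (0, 1]")
    return reported_count / completeness

# Hypothetical example: 950 cases reported so far, with an estimated 95% of
# eventual cases reported at this point after the diagnosis year.
print(round(delay_adjusted_count(950, 0.95)))  # -> 1000
```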

Continual data updates

Each year, central cancer registries submit an updated version of previous years’ data and data for a new diagnosis year to CDC, NCI, or both agencies. Federal agencies, in turn, update their cancer incidence statistics with each data submission and document the registries’ date of data submission whenever the data are published. These continual updates illustrate the dynamic nature of cancer surveillance and the attention to detail that characterizes cancer registries.

Geographic variation

Geographic variation in cancer incidence rates may result from regional differences in the population's exposure to known or unknown risk factors.6–9 Differences may arise from:

  • Sociodemographic characteristics of the population (age, race and ethnicity, geographic region, urban or rural residence).
  • Screening test use.
  • Health-related behaviors such as tobacco use, diet, and physical activity.
  • Exposure to cancer-causing agents.
  • Factors associated with the registries' operations, such as completeness, timeliness, and specificity in coding cancer sites.

Researchers are investigating variability associated with known factors that may affect cancer rates, using model-based statistical techniques and other approaches to surveillance research. Differences in registry operations also are being evaluated to ensure consistency and quality in reporting data.
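
One common model-based approach, shown here only as a sketch and not as the specific methodology used for U.S. Cancer Statistics, is Poisson regression of case counts with the log of the population as an offset; the exponentiated region coefficient can then be read as a rate ratio between regions after adjusting for covariates such as age group. The data and covariates below are invented.

```python
# Sketch of one common model-based approach to comparing rates across regions:
# Poisson regression of case counts with log(population) as an offset. The data
# and covariates are invented; this is not the specific methodology used for
# U.S. Cancer Statistics.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "region":     ["A", "A", "B", "B"],
    "age_group":  ["under65", "65plus", "under65", "65plus"],
    "cases":      [120, 340, 95, 410],
    "population": [800_000, 200_000, 600_000, 220_000],
})

model = smf.glm(
    "cases ~ region + age_group",
    data=data,
    family=sm.families.Poisson(),
    offset=np.log(data["population"]),
).fit()

# exp(coefficient) for region[T.B] is the rate ratio of region B vs. region A,
# adjusted for age group.
print(np.exp(model.params))
```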

  1. Anderson RN, Rosenberg HM. Report of the Second Workshop on Age Adjustment. Vital Health Stat 4. 1998;(30):I–VI, 1–37.
  2. Anderson RN, Rosenberg HM. Age standardization of death rates: implementation of the year 2000 standard. Natl Vital Stat Rep. 1998;47(3):1–16, 20.
  3. Fritz A. The SEER Program's commitment to data quality. J Registry Manag. 2001;28(1):35–40.
  4. Hutton MD, Simpson LD, Miller DS, Weir HK, McDavid K, Hall HI. Progress toward nationwide cancer surveillance: an evaluation of the National Program of Cancer Registries, 1994–1999. J Registry Manag. 2001;28(3):113–120.
  5. Clegg LX, Feuer EJ, Midthune DN, Fay MP, Hankey BF. Impact of reporting delay and reporting error on cancer incidence rates and trends. J Natl Cancer Inst. 2002;94(20):1537–1545.
  6. Centers for Disease Control and Prevention. Behavioral Risk Factor Surveillance System Operational and User's Guide. Version 3.0. Atlanta (GA): Centers for Disease Control and Prevention; 2005.
  7. Devesa SS, Grauman DJ, Blot WJ, Pennello GA, Hoover RN. Atlas of Cancer Mortality in the United States, 1950–1994. Bethesda (MD): National Cancer Institute; 1999.
  8. Howe HL, Keller JE, Lehnherr M. Relation between population density and cancer incidence, Illinois, 1986–1990. Am J Epidemiol. 1993;138(1):29–36.
  9. Wingo PA, Jamison PM, Hiatt RA, et al. Building the infrastructure for nationwide cancer surveillance and control—a comparison between the National Program of Cancer Registries (NPCR) and the Surveillance, Epidemiology, and End Results (SEER) Program (United States). Cancer Causes Control. 2003;14(2):175–193.