Friday, September 13, 2013

Comparison of Provisional with Final Notifiable Disease Case Counts — National Notifiable Diseases Surveillance System, 2009

MMWR Weekly
Volume 62, No. 36
September 13, 2013

September 13, 2013 / 62(36);747-751

States report notifiable disease cases to CDC through the National Notifiable Diseases Surveillance System (NNDSS). This allows CDC to assist with public health action and monitor infectious diseases across jurisdictional boundaries nationwide. The Morbidity and Mortality Weekly Report (MMWR) is used to disseminate these data on infectious disease incidence. The extent to which weekly counts of notifiable conditions are overreported or underreported can affect public health understanding of changes in the burden, distribution, and trends in disease, which is essential for control of communicable diseases (1). NNDSS encourages state health departments to notify CDC of a case when it is initially reported. These cases are included in the weekly provisional counts. The status of reported cases can change after further investigation by the states, resulting in differences between provisional and final counts. Increased knowledge of these differences can help guide the use of information from NNDSS. To quantify the extent to which final counts differ from provisional counts of notifiable infectious diseases in the United States, CDC analyzed 2009 NNDSS data for 67 conditions. This analysis found that final case counts were lower than provisional counts for five conditions, the same for three, and higher for 59. The median difference between final and provisional counts was 16.7%; differences were ≤20% for 39 diseases but >50% for 12. These differences occurred across diseases and in all states. Provisional case counts should be interpreted with caution and with an understanding of the reporting process.
Reporting of cases of certain diseases is mandated at the state or local level, and states, the Council of State and Territorial Epidemiologists (CSTE), and CDC establish policies and procedures for submitting data from these jurisdictions to NNDSS. Not all notifiable diseases are reportable at the state level, and although disease reporting is mandated by legislation or regulation, state reporting to CDC is voluntary. States send reports of cases of nationally notifiable diseases to CDC on a weekly basis in one of several standard formats. Amended reports can be sent, as well as new reports. Cases are reported by week of notification to CDC. Cases reported each week to CDC and published in MMWR are deemed provisional. The NNDSS database is open throughout the year, allowing states to update their records as new information becomes available. Annually, CDC provides each state epidemiologist with a cutoff date (usually 6 months after the end of the reporting year) by which all records must be reconciled and no additional updates are accepted for that reporting period. After the database is closed, final case counts, prepared after the states have reconciled the year-to-date data with local reporting units, are approved by state epidemiologists as accurate reflections of final case counts for the year and are published in the MMWR Summary of Notifiable Diseases — United States. Data for 2009 were published in 2011 (2).
CDC's publication schedule allows states time to complete case investigation tasks. To examine the extent to which provisional counts of infectious diseases differ from final counts, CDC compared the cumulative case counts published for week 52 of 2009 in the MMWR of January 8, 2010, with the case counts published in the NNDSS final data set for 2009 (cutoff date of June 2010), published in MMWR on August 20, 2010. To assess whether discrepancies between provisional and final counts were more common in specific states or regions or occurred everywhere, reporting of four diverse diseases was examined by state: one sexually transmitted disease (Chlamydia trachomatis, genital infection), one vaccine-preventable disease (pertussis), one foodborne disease (salmonellosis), and one vectorborne disease (Lyme disease). Data are not presented for tuberculosis and human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome because these data are published quarterly rather than weekly in MMWR. Weekly reports of these conditions are of limited value to the public health community because of differences in reporting patterns for these diseases, and long-term variations in the number of cases are more important to public health practitioners than weekly variations (3).
Reported data for 67 notifiable diseases were reviewed. Final counts were lower than provisional counts for five diseases, the same as provisional counts for three, and higher for 59 (Table 1). The median difference between final and provisional counts was 16.7%; differences were ≤20% for 39 diseases but >50% for 12. Among diseases with ≥10 cases reported in 2009, final counts were lower than provisional counts for just four: invasive Haemophilus influenzae disease, age <5 years, unknown serotype (final: 166, provisional: 218); acute hepatitis C (final: 782, provisional: 844); toxic-shock syndrome, other than streptococcal (final: 74, provisional: 76); and influenza-associated pediatric mortality (final: 358, provisional: 360). Among these diseases, final counts were higher than provisional counts for 51. The greatest percentage differences between provisional and final case counts were for arboviral disease, West Nile virus (neuroinvasive and nonneuroinvasive) (final: 720, provisional: 0); mumps (final: 1,991, provisional: 982); and Hansen disease (final: 103, provisional: 59).
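As a concrete check, the percentage differences implied by these paired counts can be computed directly. The short Python sketch below uses figures quoted in the text; taking the provisional count as the denominator is an assumption based on phrasing such as "the final case count was 13.1% higher than the provisional count," since the report does not state the denominator explicitly:

```python
# Sketch: percentage difference between final and provisional case counts.
# Counts are the ones quoted in the text; using the provisional count as the
# denominator is an assumption, not stated explicitly in the report.

counts = {
    # condition: (provisional, final)
    "Acute hepatitis C": (844, 782),
    "Toxic-shock syndrome (non-streptococcal)": (76, 74),
    "Mumps": (982, 1991),
    "Hansen disease": (59, 103),
}

def pct_difference(provisional, final):
    """Percentage by which the final count exceeds the provisional count."""
    if provisional == 0:
        # Undefined on this denominator, e.g. West Nile virus (provisional: 0).
        return None
    return 100.0 * (final - provisional) / provisional

for name, (p, f) in counts.items():
    print(f"{name}: {pct_difference(p, f):+.1f}%")
# Mumps works out to +102.7% and Hansen disease to +74.6%, illustrating why
# a halved or near-halved provisional count roughly doubles on this scale.
```

A zero provisional count (as for West Nile virus in 2009) makes the percentage undefined on this denominator, which may be one reason tabulated summaries report absolute counts alongside percentages.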
Examining four diverse but commonly reported diseases in detail revealed no consistent association between state or region and the magnitude of the discrepancy between final and provisional counts (Table 2). For Chlamydia trachomatis, genital infections, the final case count was 13.1% higher than the provisional count nationally; where the final count was lower than the provisional count, the difference was <2%, and the final count was ≥20% higher in six states. Two states, Arkansas and North Carolina, reported no cases provisionally but reported final case counts of 14,354 and 41,045, respectively. For Lyme disease, the final case count was 29.2% higher than the provisional count nationally. Only 23 jurisdictions reported >100 cases, including 21 states, upstate New York, and New York City. Of these, four states reported a final count lower than their provisional count (range: 13.4%–29.2%), and eight jurisdictions reported final counts ≥20% higher. The greatest percentage differences between provisional and final case counts were in Connecticut (final: 4,156, provisional: none), Minnesota (final: 1,543, provisional: 169), Texas (final: 276, provisional: 48), and New York City (final: 1,051, provisional: 262). For pertussis, the final case count was 24.8% higher than the provisional count nationally; where the final count was lower, the difference was <2%, and the final count was ≥20% higher in 18 states and the District of Columbia (DC). Of the five states that reported >1,000 cases, those with the greatest percentage differences between provisional and final counts were Minnesota (final: 1,121, provisional: 165) and Texas (final: 3,358, provisional: 2,437). For salmonellosis, the final case count was 10.6% higher than the provisional count nationally. Six states reported a final count lower than their provisional count (range: 0.1%–2.9%), and nine states plus DC reported final counts ≥20% higher, the highest being DC (final: 100, provisional: 26), Louisiana (final: 1,180, provisional: 599), and Indiana (final: 629, provisional: 349).
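The state-level screening described above (final count ≥20% above provisional) can be sketched the same way. The salmonellosis counts below are the ones quoted in the text; "Example State" is a hypothetical jurisdiction added only to show a non-flagged case:

```python
# Sketch: flag jurisdictions whose final count exceeded the provisional count
# by at least 20%, mirroring the screening described in the text.
salmonellosis = {
    # jurisdiction: (provisional, final) -- counts quoted in the report
    "District of Columbia": (26, 100),
    "Louisiana": (599, 1180),
    "Indiana": (349, 629),
    "Example State": (500, 505),  # hypothetical jurisdiction, for contrast
}

def flag_large_increases(data, threshold_pct=20.0):
    """Return jurisdictions whose final count is >= threshold_pct above provisional."""
    return sorted(
        name
        for name, (provisional, final) in data.items()
        if provisional > 0
        and 100.0 * (final - provisional) / provisional >= threshold_pct
    )

flagged = flag_large_increases(salmonellosis)
print(flagged)  # the three jurisdictions from the report, not "Example State"
```

The 20% threshold here is taken from the cutoff the report itself uses when tabulating differences; any other threshold could be substituted.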

Reported by

Nelson Adekoya, DrPH, Div of Notifiable Diseases and Healthcare Information, Public Health Surveillance and Informatics Program Office; Henry Roberts, PhD, Div of Viral Hepatitis, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, CDC. Corresponding contributor: Nelson Adekoya, nba7@cdc.gov, 404-498-6258.

Editorial Note

The findings in this report corroborate previous observations that provisional NNDSS data should be interpreted with caution (1,4,5). The primary appeal of provisional counts is timeliness; in comparison, final counts are more complete and accurate. As additional information is collected during investigations, final case counts might be higher or lower than the provisional counts. Local and state health departments collect reportable surveillance data primarily to assist with disease control and prevention efforts (i.e., to monitor local outbreaks of infectious diseases), to measure disease burden among high-risk populations, and to assess effectiveness of local interventions. At the national level, these data can be compared with baseline data to detect unusual disease occurrences. Final data sets are useful in monitoring national trends and for determining the effectiveness of national intervention efforts. In 2009, final case counts did not differ from end-of-year provisional counts by >20% for two thirds of the 67 notifiable diseases examined. Understanding how provisional counts relate to final counts is essential for interpreting provisional data (6,7).
Final counts might be higher than provisional counts for several possible reasons: 1) as amended records are sent by states during the notification process, cases might be reclassified among confirmed, probable, suspected, and not-a-case categories; 2) states vary in their practices regarding when they report cases with incomplete data or that are under investigation, leading to variable delays; 3) allocation of cases to a state can be delayed; 4) laboratory testing, case investigation, and data entry can be delayed as a result of temporary staff absences (e.g., leave, furlough, or turnover); 5) states sometimes delay sending some reports to CDC until the end of the year; and 6) internal CDC data processing problems can cause a discrepancy.
The findings in this report are subject to at least one limitation. It was impossible to determine when final counts were known to the state and local jurisdictions so that they could take public health action. This report focuses only on counts published in MMWR. The jurisdictions might have been aware of final case counts sooner, and only notification to CDC was delayed. Although this study examined 1 year of data, previous research using multiple years of data for hepatitis A and B concluded that provisional data generally tend to underrepresent the final data counts for those conditions (1). The addition of more years to the current research, which examined multiple notifiable conditions and documents substantial differences across states, regions, and numerous conditions, would not be expected to change the overall results.
Interpreting weekly incidence data is complex because of surveillance system limitations. Nonetheless, health practitioners must respond to public health threats based on preliminary surveillance information. In 2006, CDC and CSTE reconsidered data presentation formats and included additional information (e.g., 5-year weekly average, median over the previous 52 weeks, and maximum number of cases) to aid in interpreting these data (3). However, the findings in this report illustrate that major challenges still exist in presenting and interpreting provisional data and highlight the need to examine specific factors that can contribute to late reporting of cases (e.g., late case reporting by providers to health departments or late reporting of cases by health departments to CDC) (4). Although information technology has improved notifiable disease reporting (8), NNDSS data remain subject to reporting artifacts. Understanding the specific reasons for variation between provisional and final case counts for each condition can improve the use of provisional data for disease surveillance and notification.

Acknowledgments

Richard Hopkins, MD, Florida Dept of Health. John Davis-Cole, PhD, District of Columbia Dept of Health. Michael Landen, MD, New Mexico Dept of Health. Participating state health departments and reporting jurisdictions.

References

  1. Smallman-Raynor M, Cliff AD, Haggett P, Stroup DF, Williamson GD. Spatial and temporal patterns in final amendments to provisional disease counts. J Public Health Manag Pract 1999;5:68–83.
  2. CDC. Summary of notifiable diseases—United States, 2009. MMWR 2011;58(53).
  3. CDC. Notice to readers: changes in presentation of data from the National Notifiable Diseases Surveillance System. MMWR 2006;55:13–4.
  4. Stroup DF, Williamson GD, Herndon JL, Karon JM. Detection of aberrations in the occurrence of notifiable diseases surveillance data. Stat Med 1989;8:323–9.
  5. Stroup DF, Wharton M, Kafadar K, Dean AG. Evaluation of a method for detecting aberrations in public health surveillance system data. Am J Epidemiol 1993;137:373–80.
  6. Birkhead G, Chorba TL, Root S, Klaucke DN, Gibbs NJ. Timeliness of national reporting of communicable diseases: the experience of the National Electronic Telecommunications System for Surveillance. Am J Public Health 1991;81:1313–5.
  7. Boehmer TK, Patnaik JL, Burnite SJ, Ghosh TS, Gershman K, Vogt RL. Use of hospital discharge data to evaluate notifiable disease reporting to Colorado's Electronic Disease Reporting System. Public Health Rep 2011;126:100–6.
  8. Silk BJ, Berkelman RL. A review of strategies for enhancing the completeness of notifiable disease reporting. J Public Health Manag Pract 2005;11:191–200.
