January 16, 2013 — This article by Lawrence F Muscarella, PhD, raises questions about appraisals of the safety of health care and hospitals based on infection data that have not been validated, may be incomplete and inaccurate, and are documented to be prone to under-reporting.


Published studies, newspaper articles, and federal reports frequently focus on healthcare-associated infections (HAIs), sentinel events, or medical errors as metrics to evaluate health care in the U.S.1-16  Measured reductions in these or other types of "patient harm" are often reported to indicate, and may be used as metrics to measure, quality and safety improvements.

A number of recently published studies evaluated the effectiveness of a checklist, initiative, or bundle of “best practices” for the prevention of central line-associated bloodstream infections (CLABSIs) in the intensive care units (ICUs) of hospitals.17-24  Several of these studies observed reductions in the incidence of CLABSIs over a period of time, reporting that these results demonstrate the success of the studied intervention.

The Centers for Disease Control and Prevention (CDC)

The Centers for Disease Control and Prevention (CDC) has published a number of reports in the past few years discussing quality improvements in U.S. healthcare facilities.3,4,12,24-26

One of these reports, published in May 2010, provides a summary of "state-specific" CLABSI data that acute-care hospitals reported to the National Healthcare Safety Network (NHSN), an Internet-based surveillance system that is managed by the CDC and to which healthcare facilities in every state report HAIs.11,12

According to this state-specific report, more than 1,500 hospitals in 17 states “observed” 18% fewer CLABSIs during the first 6 months of 2009 than “predicted.”

Based on these results, the CDC concluded that patient care in U.S. hospitals is “getting safer.”12
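
To illustrate how such an "observed versus predicted" percentage is computed, a minimal sketch in Python follows; the counts used below are hypothetical, since the state-specific report provides only the resulting 18% figure.

  # Minimal sketch: percentage by which observed CLABSIs fall below the predicted number.
  # The counts are hypothetical; the CDC report provides only the resulting percentage.
  def percent_fewer(observed: int, predicted: int) -> float:
      return 100.0 * (predicted - observed) / predicted

  print(percent_fewer(observed=820, predicted=1000))  # 18.0, i.e., 18% fewer CLABSIs than predicted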

Morbidity and Mortality Weekly Report (MMWR)

Another CDC report focusing on the safety of U.S. healthcare facilities was published in the March 4, 2011, issue of Morbidity and Mortality Weekly Report (MMWR).25

This CDC report,25 a retrospective analysis, estimates (without direct measurement) that the number of CLABSIs in the ICUs of U.S. hospitals decreased by 58% during the past decade: from an estimated 43,000 infections (reported by approximately 260 hospitals participating in NHSN’s predecessor, the National Nosocomial Infections Surveillance System, or NNIS) in 2001 to an estimated 18,000 infections (reported by approximately 1,600 hospitals participating in NHSN) in 2009.25,26
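
For reference, the 58% figure follows directly from those two rounded point estimates; a minimal sketch of the arithmetic, using only the numbers quoted above, is:

  # Arithmetic behind the estimated 58% reduction, using the rounded estimates quoted above.
  clabsi_2001 = 43_000  # estimated CLABSIs in ICUs, 2001 (NNIS-era estimate)
  clabsi_2009 = 18_000  # estimated CLABSIs in ICUs, 2009 (NHSN-era estimate)
  reduction = 100.0 * (clabsi_2001 - clabsi_2009) / clabsi_2001
  print(f"{reduction:.0f}% estimated reduction")  # prints "58% estimated reduction"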



Click here to read a peer-reviewed article by Dr. Muscarella on the topic of CLABSIs – “Assessment of the Reported Effectiveness of Five Different Quality-Improvement Initiatives for the Prevention of Central Line-Associated Bloodstream Infections in Intensive Care Units” – that complements this post’s discussion.



According to the CDC, these results (which are based on a number of assumptions):25,26

  • demonstrate significant improvements in the quality of health care in the ICUs of U.S. hospitals;
  • are “likely” due to state and federal efforts, coordinated and supported by the CDC (among others), to prevent HAIs; and
  • indicate that “the cumulative excess health-care costs of all CLABSIs prevented in ICUs (from 2001 to 2009) could approach $1.8 billion, and the number of lives saved could be as high as 27,000.”

CLABSIs

Like a number of other reports that discuss the quality of health care in ICUs,13-15,17-24 this CDC report in MMWR focuses on CLABSIs, which are associated with a mortality rate as high as 25%.20,21,24,25

Whether valid or not, the use of CLABSIs as a metric not only to rate and compare the safety of hospitals, but also to evaluate the impact of initiatives, projects, and both state and federal efforts to prevent HAIs has become commonplace.13-15,27

The American Recovery and Reinvestment Act (ARRA)

As discussed in this CDC report in MMWR, the federal government in 2009 established the goal of a 50% reduction in CLABSIs, nationwide, by 2013.25,28

This CDC report also notes that the American Recovery and Reinvestment Act (ARRA) of 2009 appropriated $40 million for the CDC to support efforts by state health departments to monitor and prevent HAIs.11,25

“National success”

An expert committed to the prevention of CLABSIs described the CDC’s findings presented in this report in MMWR as “the first national success we have for patient safety in this country.”26 

Similarly, one of this report’s co-authors stated that the CDC’s estimated 58% reduction in the number of CLABSIs “stands out in terms of a national, large-scale, dramatic reduction in healthcare-associated infections,” adding that there are few, if any, “other examples like this in the quality improvement literature.”26


AIM AND FINDINGS

This review evaluates and questions the estimates, conclusions, and scientific rigor of this CDC report published in MMWR.

Click here to read a letter Dr. Muscarella wrote about a related topic that is entitled “Dear Pediatrics: An Assessment of the Effectiveness of Bundles and Checklists.”

DISCUSSION

This CDC report in MMWR suggests significant improvements, at least in terms of enhanced leadership and targeted funding by the federal government (e.g., the ARRA of 2009) to support state-based efforts for the prevention of HAIs.25,26,28

Moreover, this report in MMWR highlights laudable initiatives by the CDC to improve the quality and safety of health care.25,26  This review found this CDC report to be more conjectural than scientific, however, and as salient for its estimates and conclusions as for its limitations and oversights.

The CDC’s report in MMWR concluded that the number of CLABSIs has decreased dramatically since 2001, likely due to state and federal efforts coordinated by the CDC.  The majority of the CLABSI data used by the CDC to calculate this dramatic reduction, however, had not been validated, which suggests that this CDC report’s conclusions might be more conjectural and speculative than scientific and sound.

Corroborating this review’s conclusions, Passaretti et al. (2011)* wrote in August 2011 in an infection-control journal that:

  • “public reporting of HAIs is fraught with problems”;
  • “the politics of measuring HAIs may have outpaced the science”; and
  • in many instances of the public reporting of HAIs, “the role of politics far exceeds that of science.”

“Reporting biases”

Like any publication that rates, ranks, and compares the safety of hospitals based on reported rates of HAIs, this CDC report assumes (indeed, its validity requires) that the infection data used for its analysis are accurate and complete.11-15,25,27

Notably, however, the majority of all reported CLABSI data have not been validated for accuracy and completeness—including this CDC report’s data (which were self-reported to the NNIS in 2001 and to the CDC’s NHSN in 2009 and were not reported by a random sample of healthcare facilities).11,13,14,27,29-36

As a consequence, the U.S. Government Accountability Office (GAO), among others, has concluded that there is a “substantial risk” that these published infection data (which have not been validated) may be “misleading,” yield unreliable national estimates of HAIs, and under-report the true incidence of infection (one consequence of which can be to exaggerate the success of initiatives and other interventions that are evaluated based on these HAI data).11,13,14,27,29-35

There is a ‘substantial risk’ that published infection data that have not been validated for accuracy and completeness may be ‘misleading.’ — The GAO

The CDC’s report in MMWR acknowledges as much, agreeing that the CLABSI data on which its analysis is based are subject to “reporting biases.”25 (Examples of such biases would include measurement, sampling, publication, and confounding biases.27)

Additionally, the CDC concedes in its state-specific report about CLABSIs that infection data that have not been validated may lack quality and completeness.11

Similarly, the American Hospital Association (AHA) has expressed “serious concern” about the public reporting of HAI data through the CDC’s NHSN, concluding that these data, which generally lack validation, may not be a sound indicator of a hospital’s quality and performance.35 These concerns of the GAO, the AHA, and the CDC itself raise questions about the validity of this CDC report in MMWR.

A retrospective comparative analysis

In addition to its infection data being prone to biases (e.g., measurement inaccuracies), this CDC report in MMWR is a retrospective study that compares the number of CLABSIs reported in 2001 to those reported in 2009. Unlike randomized controlled studies, which are significantly more scientifically robust,27 such retrospective comparisons, like prospective cohort studies, are limited in strength and cannot control or eliminate unrecognized confounding factors.

As a result, these studies cannot demonstrate causal (i.e., cause-and-effect) relationships between an intervention and observed reductions in CLABSIs.27

Questions raised

Therefore, primarily because this CDC report in MMWR is retrospective, not controlled, and compares CLABSI data that are prone to biases and to under-reporting the true incidence of infection, and that have not been validated for accuracy and completeness, this review questions this CDC report’s estimates and conclusions that:25

  • the number of CLABSIs in the ICUs of U.S. hospitals was reduced by 58% between 2001 and 2009;
  • state and federal efforts coordinated by the CDC (among others) to prevent HAIs, such as those for which the ARRA of 2009 allocated $40 million in targeted funding, were “likely” responsible for this reduction in CLABSIs; and
  • “the cumulative excess health-care costs of all CLABSIs prevented in ICUs (since 2001) could approach $1.8 billion, and the number of lives saved could be as high as 27,000.”

Click here to read a blog Dr. Muscarella wrote entitled “Three Facts and Myths about Central Line-Associated Bloodstream Infections.”


The Pittsburgh Regional Healthcare Initiative and the Michigan Keystone Project

This CDC report in MMWR asserts that: “in recent years, large-scale regional and statewide projects, such as the Pittsburgh Regional Healthcare Initiative and the Michigan Keystone Project, have demonstrated roughly 70% reductions in CLABSI rates (in ICUs) by increasing adherence to recommended best-practices for the insertion of central lines,” adding that: “the successes of (this) Initiative and (this) Project demonstrate the impact of regional and state-based CLABSI prevention programs.”25

This review found, however, that these assertions by the CDC appear overly simplistic, if not over-reaching.

First, both the CDC in an earlier study published in 200524 and Pronovost et al. (2006)20 evaluated this Initiative and Project, respectively, using CLABSI data that had not been validated (the laws in neither Pennsylvania nor Michigan require such data validation11)—therefore, the claim that either study demonstrated roughly a 70% reduction in CLABSI rates may be inaccurate.

Second, these studies by the CDC in 200524 and Pronovost et al. (2006)20 were of a prospective-cohort design, not of a randomized controlled design—therefore, this CDC report’s assertion25 that the successes of this Initiative and Project demonstrate their effectiveness for the prevention of CLABSIs (i.e., a cause-and-effect relationship) may be more conjectural than scientific.

Pronovost et al. (2006) would seemingly agree, acknowledging that:

  1. the infection data (used to evaluate the Michigan Keystone Project) were incomplete (i.e., these data had not been validated) and could have “exaggerated” the study’s results due to a measurement bias;20 and
  2. their study’s prospective-cohort design “reduces the ability to make a causal connection between the intervention and reduced rates of (CLABSI).”20

So, too, might the GAO and AHA agree, having both concluded that infection data that have not been validated may lack quality and yield faulty conclusions.34-36

Confirmed adherence to “best” practices?

The soundness of these assertions by the CDC in this report in MMWR is questioned for a third reason, too. At odds with the CDC’s claim that such efforts as this Initiative and Project have reduced CLABSI rates “by increasing adherence to recommended best-practices,”25 neither the CDC’s study in 200524 nor Pronovost et al.’s (2006)20 study evaluated staff adherence to the studied practices.

According to the CDC’s study in 2005:24 “data on implementation of and adherence to the promoted practices or other facility-specific interventions were not systematically reported; therefore, determining the relationship between adherence and the observed decrease in infection rate was not possible.” Pronovost et al. (2006) similarly wrote that their study did not evaluate staff adherence to the studied intervention’s practices because of “limited resources.”20

Although not discussed in this CDC report in MMWR,25 unless validated CLABSI data and a controlled study design are used, and staff adherence to the evaluated intervention or “promoted practices”24 is confirmed (among other criteria), conclusions that the intervention (including such state and federal efforts as those that are the focus of this CDC report in MMWR, or for which the ARRA of 2009 appropriated targeted funding25) caused, or was likely responsible for, a percentage reduction in HAIs would be speculative.27

Confirmatory bias?

The CDC’s aforementioned state-specific report states that HAI data reported to the CDC’s NHSN are the “primary data” used to evaluate the impact of federal funds allocated by the ARRA of 2009 and administered by the CDC to prevent HAIs.11  Similarly, the CDC’s report in MMWR states that the CDC is using the NHSN’s data to monitor progress toward achieving the national goal of a 50% reduction in CLABSIs by 2013.25,28



Note: This article does not include a number of important tables and box articles, which are only available in this blog’s PDF version.  Check here to read and print this article in its entirety and as a PDF document.



This review provides a cautionary note about the use of the NHSN’s infection data for these purposes. Not only can the NHSN’s self-reported infection data (a majority of which have not been validated) yield misleading estimates of HAIs, but also such a prospective comparison of these data to evaluate quality improvements and progress is prone to misinterpretations and faulty conclusions about the incidence of CLABSIs and the impact of interventions to prevent them.27

(Note: Confirmatory bias in this context is defined as a study’s tendency, usually unintentional, to introduce error by favoring data that advance a conclusion (or excluding data inconsistent with it), or by promoting an outcome that is somehow auspicious or beneficial to the study or its authors.)

The CDC’s use of the NHSN’s infection data for these purposes raises an additional issue for debate: whether the CDC’s administration of targeted funding under the ARRA of 2009 to prevent HAIs, as well as the CDC’s evaluation of progress toward a national goal of a 50% reduction in CLABSIs, might have inadvertently introduced confirmatory bias (among other biases27), causing this CDC report’s conclusions to have unintentionally advanced an auspicious outcome, assigned more validity to the NHSN’s data than is scientifically warranted, and described the impact of coordinated state and federal efforts for the prevention of CLABSIs (such as those currently funded by the ARRA of 2009) more favorably than has been empirically demonstrated.

(Note: Again, this article’s findings and conclusions are not limited to this one report by the CDC in MMWR in 2011, but also may apply to other similar reports by the CDC and others, including the CDC’s study published in the March 27, 2014, issue of the New England Journal of Medicine (click here).)

Data validation

Consistent with conclusions that infection data that have not been validated (e.g., the NHSN’s) can yield misleading results and unreliable estimates of HAIs, cases of under-reporting the true CLABSI rate have been identified during independent audits,11,13,14,27,29-35,37 one consequence of which can be to exaggerate the actual impact of an evaluated initiative or project to prevent CLABSIs.
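
To illustrate why this matters, consider a minimal hypothetical sketch in Python; the numbers below are illustrative only and are not drawn from the CDC report or from any audit. If the baseline count is complete but only a fraction of later infections are captured, the apparent reduction overstates the true one.

  # Hypothetical illustration of how under-reporting can exaggerate a measured reduction.
  # These numbers are illustrative only; they do not come from the CDC report or any audit.
  true_2001 = 1000            # true infections at baseline (assumed fully reported)
  true_2009 = 700             # true infections later (a genuine 30% reduction)
  reporting_rate_2009 = 0.5   # only half of the later infections are reported (cf. the audit discussed below)

  reported_2009 = true_2009 * reporting_rate_2009           # 350 infections reported
  apparent = 100 * (true_2001 - reported_2009) / true_2001  # apparent reduction: 65%
  actual = 100 * (true_2001 - true_2009) / true_2001        # true reduction: 30%
  print(f"apparent reduction: {apparent:.0f}%; true reduction: {actual:.0f}%")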

For example, the Connecticut Department of Public Health (C-DPH) found that during its (blinded) retrospective audit in 2009 of the medical records of this state’s 30 acute-care hospitals, more than half of the infections the C-DPH confirmed to be CLABSIs had not been reported (to the NHSN).32 Similar cases of the under-reporting of HAIs have been reported by the New York State Health Department.37

In a conclusion that agrees with this review’s questioning of this CDC report’s use of un-validated infection data, the C-DPH wrote that data validation is “essential if data from performance measurement systems are to be credible,”32 adding that “a method to validate data must be considered in any mandatory reporting system” (to ensure the accuracy and completeness of reported HAIs).32

Indeed, validation of the data’s accuracy and completeness is “essential” if infection data are to be “credible.”

Clarification

This review lauds the diligent efforts of federal and state agencies, infection-control researchers, and healthcare facilities to prevent HAIs and to raise public awareness about them. It questions, however, the validity of reports claiming quality improvements, and identifying the interventions likely responsible, that are based on:

  1. the use and comparison of self-reported HAI data (e.g., data reported to the CDC’s NHSN), the majority of which have not been validated; are prone to inaccuracies; lack statistical soundness; and may under-report the true incidence of HAIs; and
  2. a retrospective (or prospective-cohort27) study design or methodology, the scientific rigor of which is limited. An example is this CDC report in MMWR, which concludes that the estimated number of CLABSIs was significantly reduced, likely due to state and federal efforts.11,13,14,27,29-35

Echoing concerns previously expressed in this newsletter about published prospective cohort studies that evaluated the impact of an intervention on the incidence of CLABSIs in ICUs,27 this review notes that retrospective studies, even more so, are not sufficiently robust to determine whether an intervention caused an observed reduction in HAIs. Indeed, retrospective studies comparing un-validated HAI data (e.g., this CDC report in MMWR) are prone:

  • to misinterpretations of their results;
  • to exaggerations of the actual impact of associated interventions on observed reductions of HAIs (e.g., state and federal efforts to prevent CLABSIs); and
  • to misattributing to the intervention reductions in CLABSIs that might have been caused instead by one or more unrecognized confounding factors (e.g., reduced sensitivity of the surveillance methods used to detect, measure, and record a true infection, or the more aggressive use of antibiotics).27

Conclusions

Underscoring the importance of publishing accurate and circumspect depictions of both the quality of health care and the incidence of HAIs in the U.S., this review calls into question the findings of the CDC report published in the March 4, 2011, issue of MMWR, primarily because this report’s retrospective methodology is limiting and its results are based on infection data that have not been validated, possibly causing its national estimates of, and conclusions about, CLABSIs to be in error.34

Due to a number of limitations, oversights, and other factors, the possibility cannot be ruled out that this CDC report:

  • under-reported and underestimated the true incidence and risk, respectively, of CLABSIs in ICUs;
  • exaggerated the percentage by which the estimated number of CLABSIs might have been reduced since 2001;
  • overstated not only the validity of the NHSN’s data but also the cumulative amount of money saved (e.g., by millions of dollars) and the number of patient deaths prevented (e.g., by thousands) by efforts to prevent CLABSIs; and
  • misattributed to these efforts an estimated reduction in CLABSIs caused instead by one or more biases and/or confounding factors.

Final words

This review raises the additional concern that publication of (un-validated) HAI data that are prone to biases and to under-reporting the true incidence of CLABSIs might cause: opportunities to prevent HAIs to be missed; less vigilance and attention to infection control; and, therefore, an increased risk of CLABSIs in ICUs.13,14,27 In truth, HAIs remain a “danger” and are “far more common and deadly than many people understand.”38

(Whether the estimates of other CDC reports, such as one recently published concluding that colorectal cancer incidence and mortality have declined in recent years in the U.S., are scientifically sound is unclear.39)

In closing, validation of the accuracy, completeness, quality and statistical soundness of reported HAI data is recommended, without which analyses, estimates and conclusions that are based on these data—such as the conclusion of this CDC report in MMWR that the estimated number of CLABSIs in ICUs of U.S. hospitals has decreased since 2001 by 58% likely due to coordinated state and federal efforts to prevent HAIs;25,26 or claims of the touted safety of one healthcare facility compared to another15,27,35—may be questioned.

A cautious approach is also recommended whenever a retrospective comparison (or prospective cohort study) is used to evaluate quality improvements in health care or the percentage by which an initiative might have reduced CLABSIs, lest the study’s results be misleading and observed reductions in infections caused instead by one or more confounding factors be misattributed to the initiative.


This article’s references are available upon request.

[* Passaretti CL, Barclay P, Pronovost P, et al. Public reporting of health care-associated infections (HAIs): approach to choosing HAI measures. Infect Control Hosp Epidemiol 2011; 32(8): 768-74.]



Article by: Lawrence F Muscarella, PhD. Posted on January 16, 2013; updated on April 21, 2014. Copyright 2016, LFM Healthcare Solutions, LLC. All rights reserved.

Lawrence F Muscarella, PhD, is the owner of LFM Healthcare Solutions, LLC, a Pennsylvania-based quality improvement and consulting company that provides safety services for hospitals, manufacturers, and the public. Email Dr. Muscarella for more details.
