December 12, 2012 — The study by Schulman et al. (2011) in Pediatrics is a significant contribution to the prevention of central line-associated bloodstream infections (CLABSIs) in neonatal intensive care units (NICUs).[1]

However, this study’s design and methods feature a number of limitations, four of which are discussed herein.

Study design

Like other studies that quantitatively evaluate the effectiveness of an intervention for the prevention of CLABSIs,[2-10] Schulman et al.’s (2011) study employs a prospective-cohort design, which necessarily limits its results to displaying associations.

Nevertheless, these authors conclude that both the increased use of maintenance checklists and the statewide adoption of standardized bundles causally reduced CLABSI rates.

In general, such a claimed cause-and-effect relationship would require what this study is not: a randomized, controlled (and blinded) design, which minimizes or eliminates the potential effects of biases, confounding factors and chance on the study’s results.

Although common among studies similarly evaluating an initiative’s effectiveness for reducing CLABSIs,[2] Schulman et al.’s (2011) depiction of an association as a causal relationship is no less an overstep.

Click here to read Dr. Muscarella’s peer-reviewed article on this topic – “Assessment of the Reported Effectiveness of Five Different Quality-Improvement Initiatives for the Prevention of Central Line-Associated Bloodstream Infections in Intensive Care Units” – that complements this blog’s discussion of published evaluations of the effectiveness of “best practices” for the prevention of CLABSIs in ICUs.

Feedback bias?

Schulman et al.’s (2011) study emphasizes the importance of verbal discussions among NICU staff members about the intervention’s intent and progress.[1]

Although such dialogue during the study’s post-intervention period is common and often encouraged to improve the quality of central-line care,[2] it can, like discussions during an “open label” drug study, manifestly compromise the study’s validity and quantitative determination of the intervention’s performance.[2]

Indeed, these verbal discussions (or “social interactions”[1]) can not only introduce biases and confounding factors (e.g., feedback bias, confirmatory bias) that result in under-reporting of the true incidence of infection, but also cause the study to mis-attribute to the intervention reductions in CLABSIs that were instead unwittingly caused by the verbal exchange of information between NICU staff members.[5]

Click here to read a review Dr. Muscarella wrote questioning the scientific merit of some of the conclusions about efforts to prevent CLABSIs in intensive care units that the Centers for Disease Control and Prevention (CDC) published and advanced in March 2011, in its journal Morbidity and Mortality Weekly Report.


Checklist adherence

Schulman et al. (2011) did not confirm that NICU staff members’ actual use of the checklists matched their reported use. This factor is often overlooked:[2] without verification that staff strictly adhered to each of an intervention’s prescribed elements, the validity of a study’s conclusion that, for example, the use of maintenance checklists reduced CLABSIs[1] may be questioned.[5]

(As these authors disclose, “reported checklist use” may not be the same as “actual checklist use.”[1])

Consequently, there is the possibility that the reductions in the CLABSI rate reported by Schulman et al. (2011) were caused—not by the studied bundles and checklists—but instead by one or more unrecognized confounding factors.

Data validation

Schulman et al. (2011) acknowledge that the public reporting of CLABSIs can bias their rates “downward,”[1] which both underscores the importance of data validation and raises the question whether the majority of the CLABSI data that these authors used as the primary metric to evaluate the studied intervention’s performance might be unreliable (e.g., might under-report the true CLABSI rate in the participating NICUs).[2]

Although they report having audited a sample of the CLABSI data, Schulman et al. (2011) do not discuss whether:

  1. this sample’s size was statistically sound;
  2. the number of central-line days, like the number of infections, was audited; and
  3. these audited data were valid.

To be sure, an underestimation of the true incidence of infection can not only cause the actual performance of the studied bundles and checklists on CLABSI rates to be exaggerated,[11-13] but also lead the participating NICUs unwittingly to mis-characterize the quality of central-line care and to forgo the adoption of preemptive practices otherwise necessary for the prevention of infection, ironically posing an increased risk of CLABSIs and patient harm.
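The arithmetic behind this concern can be made concrete. The short Python sketch below uses the conventional CLABSI metric of infections per 1,000 central-line days (the numerator and denominator referenced later in this letter); the specific counts are hypothetical and are not drawn from Schulman et al. (2011):

```python
# Hypothetical illustration: how under-reporting of infections inflates
# an intervention's apparent effectiveness. All counts are invented.

def clabsi_rate(infections, central_line_days):
    """CLABSI rate expressed as infections per 1,000 central-line days."""
    return 1000.0 * infections / central_line_days

# Suppose the true counts before and after an intervention:
pre_rate = clabsi_rate(20, 5000)        # 4.0 per 1,000 line-days
post_true = clabsi_rate(15, 5000)       # 3.0 per 1,000 line-days

# If post-intervention surveillance misses a third of the infections
# (10 reported instead of 15), the rate appears lower than it is:
post_reported = clabsi_rate(10, 5000)   # 2.0 per 1,000 line-days

true_reduction = 1 - post_true / pre_rate          # 0.25 (a 25% reduction)
apparent_reduction = 1 - post_reported / pre_rate  # 0.50 (a 50% reduction)
```

Here a true 25% reduction is reported as a 50% reduction, which is why validating both the infection counts and the central-line days matters before a quantitative claim is advanced.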


This article does not question the objectives of Schulman et al.’s (2011) study, which are laudable and noble.

Rather, it emphasizes mitigation of the four aforementioned limitations, which, if overlooked and not addressed, can compromise the quality, significance and validity of both a study’s CLABSI data and its conclusions (which are based on these data).

In short, this letter suggests that prospective-cohort studies (non-randomized, uncontrolled) aiming to quantitatively evaluate an intervention’s impact on the CLABSI rate in NICUs apply a more cautious approach and:

  1. confirm the validity of their CLABSI data (both the infection rate’s numerator and denominator);
  2. verify that NICU staff members strictly adhered to all of the intervention’s elements (and adhered to none during the pre-intervention period)[2];
  3. do not advance an association between two variables as a causal relationship (unless the study’s design is other than prospective or retrospective and permits doing so, for example, because it is controlled); and
  4. understand that “feedback” among staff members can introduce errors into the study’s data and findings.

Final remarks

This article advances, among other considerations discussed in more detail in a peer-reviewed article written by Dr. Muscarella,[2] an appreciation of:

  • the limitations imposed by a study’s use of an uncontrolled (and non-randomized) prospective-cohort (or retrospective trend) design;
  • the importance of data validation; and
  • the distinction between a qualitative assessment and a quantitative determination of an initiative’s effectiveness, the former of which prospective (and retrospective) studies can readily yield, the latter of which they cannot.

Incongruities, as well as inadvertent mis-characterizations of an initiative’s performance, can arise if the findings of a prospective cohort study, limited to yielding associations and qualitative assessments (e.g., “the initiative was effective and performed ‘well’ ”), are used instead to advance a quantitative determination (e.g., “the initiative reduced the CLABSI rate by a calculated amount of more than 50%”), which is generally reserved for and derived from more rigorous and demanding study designs, such as randomized controlled studies.[2]

Further, this article acknowledges the value of uncontrolled, prospective-cohort studies that advance qualitative assessments of an intervention’s effectiveness. It raises questions, however, about those that instead provide quantitative determinations.

In closing, the clinical implications of advancing claims about an intervention’s effectiveness for the prevention of CLABSIs in NICUs (and ICUs) using infection data that have not been validated, and that may be inaccurate, incomplete, or under-report the true incidence of infection, are not academic. They may include:[2]

  1. exaggerated depictions of the evaluated intervention’s actual effectiveness, of the safety of NICUs, and of the quality of central-line care;
  2. a false sense of security and less vigilance; and
  3. reduced infection controls, thereby posing, paradoxically, an increased risk of patient infection, morbidity and mortality.

This article’s references are available for the reader’s convenience by clicking here.

Article by: Lawrence F Muscarella, PhD. Posted on December 12, 2012; updated September 9, 2014. Copyright 2016, LFM Healthcare Solutions, LLC. All rights reserved.

Lawrence F Muscarella, PhD, is the owner of LFM Healthcare Solutions, LLC, a Pennsylvania-based quality improvement and consulting company that provides safety services for hospitals, manufacturers and the public. Email Dr. Muscarella for more details.
