Popular sepsis prediction tool less accurate than claimed


One in three patients who die in a hospital has sepsis, a severe inflammatory response to an infection marked by organ dysfunction, according to the Centers for Disease Control and Prevention. This heavy toll makes predicting which patients are at risk of developing the devastating condition a top priority for clinicians.

Additional motivation to identify and treat sepsis cases lies in the fact that sepsis serves as a system-level quality measure, with hospitals judged by both the federal Department of Health and Human Services and the CDC on their sepsis rates. Complicating efforts to reduce sepsis is how difficult it can be to diagnose, both accurately and quickly.

“Sepsis is something we can know occurs with certainty after the fact, but when it’s unfolding, it’s often unclear whether a patient has sepsis or not,” said Karandeep Singh, MD, MMSc, assistant professor of Learning Health Sciences and Internal Medicine at Michigan Medicine. “But the cornerstone of sepsis treatment is timely recognition and timely therapy.”

Singh and his colleagues recently evaluated a sepsis prediction model developed by Epic Systems, a healthcare software vendor whose products are used by 56 percent of hospitals and health systems in the U.S. In a new paper published in JAMA Internal Medicine, they report that the tool performs much worse than its information sheet indicates, correctly ranking patients by their risk of sepsis just 63 percent of the time.
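The "63 percent" figure refers to the model's area under the receiver operating characteristic curve (AUROC): the probability that a randomly chosen patient who develops sepsis receives a higher risk score than a randomly chosen patient who does not. A minimal sketch of that pairwise-concordance interpretation, using synthetic scores invented for illustration (the distributions and sample sizes are assumptions, not the study's data):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """Probability that a randomly chosen septic patient's risk score
    exceeds a randomly chosen non-septic patient's score (ties count half).
    This pairwise-concordance definition equals the area under the ROC curve."""
    pos = np.asarray(scores_pos)[:, None]   # shape (n_pos, 1)
    neg = np.asarray(scores_neg)[None, :]   # shape (1, n_neg)
    wins = (pos > neg).mean()
    ties = (pos == neg).mean()
    return wins + 0.5 * ties

# Illustrative synthetic scores, not Epic's actual model output.
rng = np.random.default_rng(0)
septic = rng.normal(0.55, 0.2, 500)        # risk scores for septic patients
non_septic = rng.normal(0.45, 0.2, 5000)   # risk scores for everyone else

# An AUROC near 0.63 means the model ranks a septic patient above a
# non-septic one about 63 percent of the time.
print(f"AUROC = {auroc(septic, non_septic):.2f}")
```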

The discrepancy lies in how the model was developed, Singh explained. The first problem, he said, is that the model incorporates data from all cases billed as sepsis, which is problematic because “people bill differently across services and hospitals and it’s been well recognized that trying to figure out who has sepsis based on billing codes alone is probably not accurate.” Second, in developing the model, the onset of sepsis was defined as the time a clinician intervened, for example by ordering antibiotics or lab work.

“In essence, they developed the model to predict sepsis that was recognized by clinicians at the time it was recognized by clinicians. However, we know that clinicians miss sepsis.”

To evaluate the model against a definition of sepsis more closely aligned with those used by Medicare and the CDC, the research team examined close to 40,000 hospitalizations at Michigan Medicine from 2018 to 2019, excluding scores the model generated after a clinician had already intervened. Doing so brought the tool’s area under the curve down from the 76 to 83 percent reported by Epic Systems to the 63 percent found in the validation study.
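A minimal sketch of that censoring step, assuming a hypothetical table of timestamped model scores and first-intervention times; the schema, column names, and values are invented for illustration, not the study's actual data:

```python
import pandas as pd

# Hypothetical layout: one row per model score for a patient.
scores = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "score_time": pd.to_datetime([
        "2018-06-01 08:00", "2018-06-01 09:00", "2018-06-01 10:00",
        "2018-06-02 14:00", "2018-06-02 15:00",
        "2018-06-03 11:00",
    ]),
    "risk_score": [0.12, 0.35, 0.61, 0.08, 0.10, 0.44],
    # Time of first clinician action (antibiotics, cultures); NaT if none.
    "intervention_time": pd.to_datetime([
        "2018-06-01 09:30", "2018-06-01 09:30", "2018-06-01 09:30",
        None, None,
        "2018-06-03 10:00",
    ]),
})

# Drop scores produced only after a clinician had already intervened:
# those predictions arrive too late to change care, so crediting them
# inflates the model's apparent performance.
timely = scores[
    scores["intervention_time"].isna()
    | (scores["score_time"] < scores["intervention_time"])
]
print(timely[["patient_id", "score_time", "risk_score"]])
```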

What’s more, the model sent out an alert on nearly one in five of all patients, most of whom did not actually have sepsis. “When it alerts, the chance that a patient actually has sepsis during the remainder of their hospital stay is 12 percent. What that essentially means is that even if you only evaluated people the first time the system alerted, you’d still need to evaluate 8 people to find one case of sepsis,” said Singh.
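Singh’s “evaluate 8 people” figure follows directly from the 12 percent positive predictive value, since the expected number of alerted patients a clinician must assess per true sepsis case is the reciprocal of the PPV:

```python
# Number needed to evaluate (NNE) is the reciprocal of the positive
# predictive value (PPV). Figures below are taken from the article.
ppv = 0.12                # chance an alerted patient actually has sepsis
nne = 1 / ppv             # about 8.3 bedside evaluations per true case
print(f"Clinicians must evaluate ~{nne:.1f} alerted patients per case found")
```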

Prediction tools come with a trade-off, Singh noted. “The trade-off is basically between generating alerts on patients who turn out not to have the predicted condition and not generating alerts on patients who do.” But in this instance, if a health system is using the Epic sepsis model to improve its quality measures, “it’s not really going to be able to do that.”
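One way to see the trade-off concretely is to sweep the alert threshold and watch sensitivity (the share of sepsis cases caught) and positive predictive value (the share of alerts that are real) move in opposite directions. A sketch on synthetic scores, with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic risk scores: a small septic group, a large non-septic group.
septic = rng.normal(0.55, 0.2, 500)
non_septic = rng.normal(0.45, 0.2, 5000)

for threshold in (0.4, 0.5, 0.6, 0.7):
    tp = (septic >= threshold).sum()       # septic patients alerted on
    fp = (non_septic >= threshold).sum()   # non-septic patients alerted on
    sensitivity = tp / len(septic)         # sepsis cases caught
    ppv = tp / (tp + fp)                   # alerts that are real
    print(f"threshold={threshold:.1f}  "
          f"sensitivity={sensitivity:.0%}  PPV={ppv:.0%}")
```

Lowering the threshold catches more cases but floods clinicians with false alarms; raising it trims the alarms but misses more sepsis.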

The results of the study point to a need for more regulatory oversight and governance of clinical software tools, said Singh, as well as a need for more open-source models that can be readily validated by outside groups and turned off if they turn out not to be useful.
