AI Can Be Used to Help with Sepsis Treatment, Fracture Diagnosis
TEHRAN (Tasnim) - It’s possible to train algorithms using the experience of thousands of doctors to help treat patients with sepsis, a condition in which the immune system mounts an excessive response to infection.
Treating patients effectively involves a combination of training and experience. That's one of the reasons that people have been excited about the prospects of using AI in medicine: it's possible to train algorithms using the experience of thousands of doctors, giving them more information than any single human could accumulate.
This week has provided some indications that software may be on the verge of living up to that promise, as two papers describe promising preliminary results from using AI for both diagnosis and treatment decisions.
The papers involve very different problems and approaches, which suggests that the range of situations where AI could prove useful is very broad, ArsTechnica reported.
Choosing treatments
One of the two studies focuses on sepsis, which occurs when the immune system mounts an excessive response to an infection. Sepsis is apparently the third leading cause of death worldwide, and it remains a problem even when the patient is already hospitalized. There are guidelines available for treating sepsis patients, but the numbers suggest there's still considerable room for improvement. So a small UK-US team decided to see if software could help provide some of that improvement.
They used a reinforcement learning algorithm because those are considered effective when there are what they term "sparse reward signals." In other words, a patient population this large is going to have a lot going on besides sepsis, and much of it will influence the results of any treatment, so the signal from effective treatments is going to be small and hard to spot. Reinforcement learning is designed to increase the odds of spotting it.
So was the large number of data points used to train the software: over 17,000 intensive care unit patients, plus another 79,000 general hospital admissions, drawn from more than 125 hospitals. The data on each patient included 48 different pieces of information, from vital signs and lab tests to demographics. The algorithm used these data to identify the treatments that would maximize the patients' 90-day survival. The researchers named the resulting software AI Clinician.
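For readers curious what such a setup can look like in practice, here is a minimal sketch in Python. It is not the paper's actual method: the number of discretized states, the 5x5 grid of fluid/vasopressor dose bins, and the simulated trajectories below are all assumptions for illustration, with learning reduced to plain tabular Q-learning over logged patient histories and a sparse, survival-based reward.

```python
import numpy as np

# Illustrative assumptions (not from the paper): patient states are
# discretized into clusters, and treatments are binned into a small grid
# of (IV fluid dose, vasopressor dose) combinations.
N_STATES = 100   # discretized patient states (e.g., clustered vitals/labs)
N_ACTIONS = 25   # 5 fluid-dose bins x 5 vasopressor-dose bins
GAMMA = 0.99     # discount factor
ALPHA = 0.1      # learning rate

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def fake_trajectory(length=10):
    """Stand-in for one patient's recorded ICU stay: a sequence of
    (state, action, reward, next_state) steps. The reward is sparse:
    zero at every step, +1 at the end if the patient survived 90 days,
    -1 otherwise."""
    states = rng.integers(0, N_STATES, size=length + 1)
    actions = rng.integers(0, N_ACTIONS, size=length)
    rewards = np.zeros(length)
    rewards[-1] = 1.0 if rng.random() < 0.8 else -1.0
    return zip(states[:-1], actions, rewards, states[1:])

# Learn the value of each treatment in each state from what clinicians
# actually did (off-policy), rather than by experimenting on patients.
for _ in range(5000):
    for s, a, r, s_next in fake_trajectory():
        td_target = r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (td_target - Q[s, a])

# The recommendation is simply the highest-value treatment per state.
recommended_action = Q.argmax(axis=1)
print(recommended_action[:10])
```

The point the sparse-reward framing captures is that nothing in a trajectory is labeled good or bad step by step; the single end-of-stay outcome has to be propagated backward to the individual treatment decisions, which is exactly what the temporal-difference update above does.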
A separate set of patient records was used to evaluate the AI Clinician's performance. The algorithm was used to choose a treatment for each case, and outcomes were evaluated based on whether the patients' actual treatments were similar to the ones the algorithm recommended. Overall, the software recommended lower doses of IV fluids and higher doses of drugs that constrict blood vessels. Patients whose actual treatments were similar to those recommendations had the lowest mortality in this group.
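Conceptually, that evaluation amounts to binning patients by how far their actual doses were from the recommended ones and comparing mortality across the bins. A rough sketch, with made-up columns and thresholds standing in for the study's real data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Placeholder data: actual vs. recommended dose per patient, plus the
# 90-day outcome. In the real study these come from held-out records.
actual_dose = rng.uniform(0, 4, size=n)
recommended_dose = rng.uniform(0, 4, size=n)
died = rng.random(n) < 0.3

# Group patients by the gap between what they received and what the
# algorithm would have recommended, then compare mortality per group.
gap = np.abs(actual_dose - recommended_dose)
group = np.digitize(gap, [0.5, 1.0, 2.0])  # 0 = closest ... 3 = farthest

for g in range(4):
    mask = group == g
    print(f"dose-gap group {g}: mortality {died[mask].mean():.1%} (n={mask.sum()})")
```

In the study's data, the pattern ran the way you would hope: the closer the actual treatment sat to the recommendation, the lower the mortality.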
Diagnosis
The second paper focused on identifying problems that require treatment. The issue they focused on is bone fractures. While these are often easy to spot, small chip or hairline fractures can be difficult even for a specialist to detect. And, in most cases, the diagnosis falls to a non-specialist, typically a doctor working in emergency medicine. The new research isn't intended to create an AI that replaces these doctors; rather, it's intended to help them out.
The team recruited 18 orthopedic surgeons to diagnose over 135,000 images of potential wrist fractures, and then used that data to train their algorithm, a deep-learning convolutional neural network. The algorithm was used to highlight areas of interest for doctors who don't specialize in orthopedics. In essence, it was helping them focus on the areas that are most likely to contain a break.
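The paper's architecture isn't detailed here, but the general idea can be illustrated with a toy fully convolutional network in Python: a radiograph goes in, and a same-resolution heatmap of per-pixel "fracture likelihood" comes out, ready to be overlaid on the image. The layer sizes and input shape below are invented for the sketch; only the input/output structure reflects the behavior described above.

```python
import torch
import torch.nn as nn

class FractureHeatmapNet(nn.Module):
    """Toy fully convolutional network: grayscale radiograph in,
    same-resolution heatmap of per-pixel fracture likelihood out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one logit per pixel
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))

model = FractureHeatmapNet()
radiograph = torch.rand(1, 1, 256, 256)  # stand-in for a wrist X-ray

# Regions scoring above a threshold would be highlighted on the image,
# drawing the ER physician's eye to the likeliest fracture sites.
heatmap = model(radiograph)
suspicious = heatmap > 0.5
print(f"{suspicious.float().mean().item():.1%} of pixels flagged")
```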
In the past, trials like this have resulted in over-diagnosis, where doctors recommend further tests for something that's harmless. But in this case, accuracy went up as false positives went down. Sensitivity (the ability to identify fractures) went from 81 percent up to 92 percent, while specificity (the ability to correctly rule out fractures in healthy wrists) rose from 88 percent to 94 percent. Combined, these results mean that ER docs would have seen their misdiagnosis rate drop by nearly half.
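Those two percentages combine into an overall error rate once you weight them by how often fractures actually appear in the images being read. A quick back-of-the-envelope check in Python, assuming (purely for illustration) a 50/50 case mix:

```python
# Overall misdiagnosis rate = P(fracture) * miss rate
#                           + P(no fracture) * false-alarm rate.
# The 50% prevalence is an assumption for illustration only.
prevalence = 0.5

def misdiagnosis_rate(sensitivity, specificity, p=prevalence):
    return p * (1 - sensitivity) + (1 - p) * (1 - specificity)

unaided = misdiagnosis_rate(0.81, 0.88)  # doctors alone
aided = misdiagnosis_rate(0.92, 0.94)    # doctors with AI highlighting
print(f"unaided: {unaided:.1%}, aided: {aided:.1%}, "
      f"relative drop: {1 - aided / unaided:.0%}")
# unaided: 15.5%, aided: 7.0%, relative drop: 55%
```

Under that assumed case mix, the error rate falls from about 15.5 percent to 7 percent, in line with the reported halving of misdiagnoses.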
Neither of these studies involved using the software in a context that fully reflects real clinical circumstances. Both ER doctors and those treating sepsis (who may be one and the same) will normally have a lot of additional concerns and distractions, so it may be a challenge to integrate AI into their workflow. But the success of these efforts suggests that clinical trials of AIs will be happening sooner rather than later, and then we'll have a real sense of how much they help with actual diagnosis and treatment.