The Death Predictor: A Helpful New Tool or an Ethical Morass?

Feature Story


Whenever Eric Karl Oermann has to tell a patient about a terrible prognosis, their first question is always: "How long do I have?" Oermann would like to offer a precise answer, to provide some certainty and help guide treatment. But although he's one of the country's foremost experts in medical artificial intelligence, Oermann is still dependent on computer algorithms that are often wrong.

Doctors are notoriously terrible at guessing how long their patients will live.

Artificial intelligence – today driven largely by the approach known as deep learning, built on neural networks – has radically transformed language and image processing. It has allowed computers to play chess better than the world's grandmasters and outwit the best Jeopardy! players. But it still can't precisely tell a doctor how long a patient has left – or how to help that person live longer.

Someday, researchers predict, computers will be able to watch a video of a patient to determine their health status. Doctors will no longer have to spend hours inputting data into medical records. And computers will do a better job than specialists at identifying tiny tumors, impending crises, and, yes, figuring out how long the patient has to live. Oermann, a neurosurgeon at Mount Sinai, says all that technology will allow doctors to spend more time doing what they do best: talking with their patients. "I want to see more deep learning and computers in a clinical setting," he says, "so there can be more human interaction." But those days are still at least three to five years off, Oermann and other researchers say.

Doctors are notoriously terrible at guessing how long their patients will live, says Nigam Shah, an associate professor at Stanford University and assistant director of the school's Center for Biomedical Informatics Research. Doctors don't want to believe that their patient – whom they've come to like – will die. "Doctors overestimate survival many-fold," Shah says. "How do you go into work, in say, oncology, and not be delusionally optimistic? You have to be."

But patients near the end of life will get better treatment – and even live longer – if they are overseen by hospice or palliative care, research shows. So, instead of relying on biased human judgment to identify patients whose lives are nearing their end, Shah and his colleagues showed that they could use a deep learning algorithm, trained on medical records, to flag incoming patients with a life expectancy of three months to a year. They use those flags to indicate who might need palliative care. Then, the palliative care team can reach out to treating physicians proactively, instead of relying on physician referrals or taking the time to read extensive medical charts.
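For the technically curious, the flagging step might look something like the sketch below, written in Python. Everything here is hypothetical – the toy data, the features, the screening threshold – and the small network stands in for, rather than reproduces, the deep learning system Shah's team trained on real medical records.

```python
# A minimal, hypothetical sketch of mortality-risk flagging -- not
# Shah's actual model or data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for EHR features: 1,000 patients x 50 code counts
# (diagnoses, procedures, prescriptions) from the year before admission.
X = rng.poisson(lam=1.0, size=(1000, 50)).astype(float)
# Toy label: 1 if the patient died three to twelve months after admission.
y = (X[:, :5].sum(axis=1) + rng.normal(size=1000) > 6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small feedforward network standing in for the real deep model.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X_train, y_train)

# Flag admissions whose predicted risk crosses a screening threshold,
# so the palliative care team can review them proactively.
risk = model.predict_proba(X_test)[:, 1]
flagged = np.where(risk > 0.9)[0]
print(f"{len(flagged)} of {len(X_test)} admissions flagged for review")
```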

But although the system works well, Shah isn't yet sure whether such flags actually get the appropriate patients into palliative care. He recently partnered with a palliative care doctor to run a gold-standard clinical trial to test whether patients flagged by the algorithm are indeed a good match for palliative care.

"What is effective from a health system perspective might not be effective from a treating physician's perspective and might not be effective from the patient's perspective," Shah notes. "I don't have a good way to guess everybody's reaction without actually studying it." Whether palliative care is appropriate, for instance, depends on more than just the patient's health status. "If the patient's not ready, the family's not ready and the doctor's not ready, then you're just banging your head against the wall," Shah says. "Given limited capacity, it's a waste of resources" to put that person in palliative care.

The algorithm isn't perfect, but "on balance, it leads to better decisions more often."

Alexander Smith and Sei Lee, both palliative care doctors, work together at the University of California, San Francisco, to develop predictions for patients who arrive at the hospital with complicated conditions or a history of decline. Their algorithm, they say, helps decide whether a patient's problems – which might include diabetes, heart disease, a slow-growing cancer, and memory issues – make them eligible for hospice. The algorithm isn't perfect, they both agree, but "on balance, it leads to better decisions more often," Smith says.

Bethany Percha, an assistant professor at Mount Sinai, says that an algorithm may tell doctors their patient is trending downward, but it doesn't do anything to change that trajectory. "Even if you can predict something, what can you do about it?" Algorithms may be able to offer treatment suggestions – but they can't say which specific actions will alter a patient's future, says Percha, also the chief technology officer of Precise Health Enterprise, a product development group within Mount Sinai. And the algorithms remain challenging to develop. Electronic medical records may be great at her hospital, but if a patient dies at a different one, her system won't know. If she wants to be certain a patient has died, she has to merge Social Security death records with her system's medical records – a time-consuming and cumbersome process.
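The merging Percha describes is, at bottom, a record linkage problem: joining a death registry to hospital records on a shared identifier. The sketch below uses hypothetical patient IDs and column names; real linkage to Social Security death data also has to contend with privacy rules, typos, and inconsistent identifiers.

```python
# A hypothetical sketch of linking death records to hospital records;
# identifiers and column names are invented for illustration.
import pandas as pd

# Hospital EHR extract: one row per patient.
ehr = pd.DataFrame({
    "patient_id": ["p1", "p2", "p3"],
    "ssn": ["111-11-1111", "222-22-2222", "333-33-3333"],
    "last_visit": pd.to_datetime(["2019-03-01", "2019-06-15", "2019-07-20"]),
})

# Government death records (e.g., a Social Security death file).
deaths = pd.DataFrame({
    "ssn": ["222-22-2222"],
    "death_date": pd.to_datetime(["2019-08-02"]),
})

# Left join keeps every patient; a missing death_date may mean the
# patient is alive -- or simply died outside the system's view.
linked = ehr.merge(deaths, on="ssn", how="left")
linked["known_deceased"] = linked["death_date"].notna()
print(linked[["patient_id", "death_date", "known_deceased"]])
```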

An algorithm that learns from biased data will be biased, Shah says. Patients who are poor or African American have historically had worse health outcomes. If researchers train an algorithm on data that includes those disparities, they get baked into the algorithm, which can then lead to a self-fulfilling prophecy. Smith and Lee say they've taken race out of their algorithms to avoid this bias.
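Mechanically, "taking race out" means excluding it from the model's inputs, as in the hypothetical sketch below. It's worth noting that this step alone doesn't guarantee fairness: other variables, such as a patient's ZIP code, can still act as proxies for race.

```python
# A hypothetical sketch of dropping a sensitive attribute before
# training -- not Smith and Lee's actual pipeline or data.
import pandas as pd
from sklearn.linear_model import LogisticRegression

records = pd.DataFrame({
    "age": [67, 82, 74, 90],
    "num_chronic_conditions": [2, 5, 3, 6],
    "race": ["A", "B", "A", "B"],      # sensitive attribute, excluded below
    "died_within_1yr": [0, 1, 0, 1],   # toy label
})

# Exclude race (and the label itself) from the model's inputs.
features = records.drop(columns=["race", "died_within_1yr"])
labels = records["died_within_1yr"]

model = LogisticRegression().fit(features, labels)
print(dict(zip(features.columns, model.coef_[0])))
```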

Age is even trickier. There's no question that someone's risk of illness and death goes up with age. But an 85-year-old who breaks a hip running a marathon should probably be treated very differently from an 85-year-old who breaks a hip getting out of a chair in a dementia care unit. That's why the doctor can never be taken out of the equation, Shah says: human judgment will always be required in medical care, and an algorithm should never be followed blindly.

Experts say that the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully.

Researchers are also concerned that their algorithms will be used to ration care, or that insurance companies will use the predictions to justify rate increases. If an algorithm predicts that a patient will soon end up back in the hospital, "who's benefiting from knowing a patient is going to be readmitted? Probably the insurance company," Percha says.

Still, Percha and others say, the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully. "These are new and exciting tools that have a lot of potential uses. We need to be conscious about how to use them going forward, but it doesn't mean we shouldn't go down this road," she says. "I think the potential benefits outweigh the risks, especially because we've barely scratched the surface of what big data can do right now."

A senior in hospice care.
(© bilderstoeckchen/Fotolia)