The Death Predictor: A Helpful New Tool or an Ethical Morass?

A senior in hospice care. (© bilderstoeckchen/Fotolia)

Whenever Eric Karl Oermann has to tell a patient about a terrible prognosis, the first question is always: "How long do I have?" Oermann would like to offer a precise answer, to provide some certainty and help guide treatment. But although he's one of the country's foremost experts in medical artificial intelligence, Oermann is still dependent on a computer algorithm that's often wrong.

Artificial intelligence – today usually in the form of deep learning, or neural networks – has radically transformed language and image processing. It has allowed computers to play chess better than the world's grandmasters and outwit the best Jeopardy players. But it still can't precisely tell a doctor how long a patient has left – or how to help that person live longer.

Someday, researchers predict, computers will be able to watch a video of a patient to determine their health status. Doctors will no longer have to spend hours inputting data into medical records. And computers will do a better job than specialists at identifying tiny tumors, impending crises, and, yes, figuring out how long the patient has to live. Oermann, a neurosurgeon at Mount Sinai, says all that technology will allow doctors to spend more time doing what they do best: talking with their patients. "I want to see more deep learning and computers in a clinical setting," he says, "so there can be more human interaction." But those days are still at least three to five years off, Oermann and other researchers say.

Doctors are notoriously terrible at guessing how long their patients will live, says Nigam Shah, an associate professor at Stanford University and assistant director of the school's Center for Biomedical Informatics Research. Doctors don't want to believe that their patient – whom they've come to like – will die. "Doctors overestimate survival many-fold," Shah says. "How do you go into work, in say, oncology, and not be delusionally optimistic? You have to be."

But patients near the end of life will get better treatment – and even live longer – if they are overseen by hospice or palliative care, research shows. So, instead of relying on biased human judgment to identify patients nearing the end of life, Shah and his colleagues showed that they could use a deep learning algorithm, trained on medical records, to flag incoming patients with a life expectancy of three months to a year. They use that data to indicate who might need palliative care. Then, the palliative care team can reach out to treating physicians proactively, instead of relying on their referrals or taking the time to read extensive medical charts.

But, although the system works well, Shah isn't yet sure if such indicators actually get the appropriate patients into palliative care. He's recently partnered with a palliative care doctor to run a gold-standard clinical trial to test whether patients who are flagged by this algorithm are indeed a better match for palliative care.

"What is effective from a health system perspective might not be effective from a treating physician's perspective and might not be effective from the patient's perspective," Shah notes. "I don't have a good way to guess everybody's reaction without actually studying it." Whether palliative care is appropriate, for instance, depends on more than just the patient's health status. "If the patient's not ready, the family's not ready and the doctor's not ready, then you're just banging your head against the wall," Shah says. "Given limited capacity, it's a waste of resources" to put that person in palliative care.

Alexander Smith and Sei Lee, both palliative care doctors, work together at the University of California, San Francisco, to develop predictions for patients who come to the hospital with a complicated prognosis or a history of decline. Their algorithm, they say, helps decide if this patient's problems – which might include diabetes, heart disease, a slow-growing cancer, and memory issues – make them eligible for hospice. The algorithm isn't perfect, they both agree, but "on balance, it leads to better decisions more often," Smith says.

Bethany Percha, an assistant professor at Mount Sinai, says that an algorithm may tell doctors that their patient is trending downward, but it doesn't do anything to change that trajectory. "Even if you can predict something, what can you do about it?" Algorithms may be able to offer treatment suggestions – but not the specific actions that will alter a patient's future, says Percha, also the chief technology officer of Precise Health Enterprise, a product development group within Mount Sinai. And the algorithms remain challenging to develop. Electronic medical records may be great at her hospital, but if the patient dies at a different one, her system won't know. If she wants to be certain a patient has died, she has to merge Social Security death records with her system's medical records – a time-consuming and cumbersome process.

An algorithm that learns from biased data will be biased, Shah says. Patients who are poor or African American historically have had worse health outcomes. If researchers train an algorithm on data that includes those biases, they get baked into the algorithms, which can then lead to a self-fulfilling prophecy. Smith and Lee say they've taken race out of their algorithms to avoid this bias.

Age is even trickier. There's no question that someone's risk of illness and death goes up with age. But an 85-year-old who breaks a hip running a marathon should probably be treated very differently than an 85-year-old who breaks a hip trying to get out of a chair in a dementia care unit. That's why the doctor can never be taken out of the equation, Shah says. Human judgment will always be required in medical care and an algorithm should never be followed blindly, he says.

Researchers are also concerned that their algorithms will be used to ration care, or that insurance companies will use their data to justify a rate increase. If an algorithm predicts a patient is going to end up back in the hospital soon, "who's benefitting from knowing a patient is going to be readmitted? Probably the insurance company," Percha says.

Still, Percha and others say, the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully. "These are new and exciting tools that have a lot of potential uses. We need to be conscious about how to use them going forward, but it doesn't mean we shouldn't go down this road," she says. "I think the potential benefits outweigh the risks, especially because we've barely scratched the surface of what big data can do right now."

Karen Weintraub
Karen Weintraub, an independent health and science journalist, writes regularly for The New York Times, The Washington Post, Scientific American and other news outlets. She also teaches journalism at Boston University, MIT and the Harvard Extension School, and she's writing a book about the history of Cambridge, MA, where she lives with her husband and two daughters.