The Death Predictor: A Helpful New Tool or an Ethical Morass?

A senior in hospice care. (© bilderstoeckchen/Fotolia)

Whenever Eric Karl Oermann has to tell a patient about a terrible prognosis, the first question is always: "How long do I have?" Oermann would like to offer a precise answer, to provide some certainty and help guide treatment. But although he's one of the country's foremost experts in medical artificial intelligence, Oermann still depends on a computer algorithm that's often wrong.

Artificial intelligence – these days usually taking the form of deep learning with neural networks – has radically transformed language and image processing. It has allowed computers to play chess better than the world's grandmasters and to outwit the best Jeopardy players. But it still can't tell a doctor precisely how long a patient has left – or how to help that person live longer.

Someday, researchers predict, computers will be able to watch a video of a patient to determine their health status. Doctors will no longer have to spend hours inputting data into medical records. And computers will do a better job than specialists at identifying tiny tumors, impending crises, and, yes, figuring out how long the patient has to live. Oermann, a neurosurgeon at Mount Sinai, says all that technology will allow doctors to spend more time doing what they do best: talking with their patients. "I want to see more deep learning and computers in a clinical setting," he says, "so there can be more human interaction." But those days are still at least three to five years off, Oermann and other researchers say.

Doctors are notoriously terrible at guessing how long their patients will live, says Nigam Shah, an associate professor at Stanford University and assistant director of the school's Center for Biomedical Informatics Research. Doctors don't want to believe that their patient – whom they've come to like – will die. "Doctors over-estimate survival many-fold," Shah says. "How do you go into work, in say, oncology, and not be delusionally optimistic? You have to be."

But patients near the end of life will get better treatment – and even live longer – if they are overseen by hospice or palliative care, research shows. So, instead of relying on biased human judgment to identify patients whose lives are nearing their end, Shah and his colleagues showed that they could use a deep learning algorithm trained on medical records to flag incoming patients with a life expectancy of three months to a year. Those flags indicate who might need palliative care, and the palliative care team can then reach out to treating physicians proactively, instead of relying on referrals or taking the time to read extensive medical charts.
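Shah's actual model and its features aren't published here, so the sketch below is only a toy illustration of the flagging step the article describes: train a classifier on historical records, score incoming patients, and flag the highest-risk ones for proactive outreach. Every detail – the synthetic data, the invented feature names, the 0.9 threshold, and the use of a gradient-boosted classifier instead of a deep network – is an assumption made to keep the example short and runnable.

```python
# Hypothetical sketch of a palliative-care flagging model. All features,
# labels, and thresholds are invented; this is not Stanford's system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for features derived from electronic medical records.
age = rng.integers(40, 95, n)
admissions_past_year = rng.poisson(1.5, n)
num_diagnoses = rng.poisson(6, n)
X = np.column_stack([age, admissions_past_year, num_diagnoses])

# Toy label: patient died 3-12 months after admission (made-up generative rule).
risk = 0.02 * (age - 40) + 0.3 * admissions_past_year + 0.05 * num_diagnoses
y = rng.random(n) < 1 / (1 + np.exp(-(risk - 4)))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Flag only high-confidence cases so a small palliative care team's
# capacity goes to the likeliest matches (threshold is illustrative).
probs = model.predict_proba(X_test)[:, 1]
flags = probs >= 0.9
print(f"flagged {flags.sum()} of {len(flags)} patients for proactive review")
if flags.any():
    print("precision among flagged:", precision_score(y_test, flags))
```

The deliberately high threshold mirrors the capacity point Shah makes below: with a limited palliative care team, flagging fewer patients with higher confidence wastes fewer resources.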

But, although the system works well, Shah isn't yet sure if such indicators actually get the appropriate patients into palliative care. He's recently partnered with a palliative care doctor to run a gold-standard clinical trial to test whether patients who are flagged by this algorithm are indeed a better match for palliative care.

"What is effective from a health system perspective might not be effective from a treating physician's perspective and might not be effective from the patient's perspective," Shah notes. "I don't have a good way to guess everybody's reaction without actually studying it." Whether palliative care is appropriate, for instance, depends on more than just the patient's health status. "If the patient's not ready, the family's not ready and the doctor's not ready, then you're just banging your head against the wall," Shah says. "Given limited capacity, it's a waste of resources" to put that person in palliative care.

Alexander Smith and Sei Lee, both palliative care doctors, work together at the University of California, San Francisco, to develop predictions for patients who come to the hospital with a complicated prognosis or a history of decline. Their algorithm, they say, helps decide whether a patient's problems – which might include diabetes, heart disease, a slow-growing cancer, and memory issues – make that person eligible for hospice. The algorithm isn't perfect, they both agree, but "on balance, it leads to better decisions more often," Smith says.

Bethany Percha, an assistant professor at Mount Sinai, says that an algorithm may tell doctors that their patient is trending downward, but it doesn't do anything to change that trajectory. "Even if you can predict something, what can you do about it?" Algorithms may be able to offer treatment suggestions – but not to say which specific actions will alter a patient's future, says Percha, also the chief technology officer of Precise Health Enterprise, a product development group within Mount Sinai. And the algorithms remain challenging to develop. Electronic medical records may be great at her hospital, but if the patient dies at a different one, her system won't know. If she wants to be certain a patient has died, she has to merge Social Security death records with her system's medical records – a time-consuming and cumbersome process.
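That linkage step is, at its core, a database join. The toy sketch below shows its shape; the column names, identifiers, and use of pandas are all invented for illustration and say nothing about Mount Sinai's actual systems.

```python
import pandas as pd

# Toy illustration of the record-linkage chore described above: joining a
# hospital's patient table with an external death-records file. Every
# column name and identifier here is made up for this sketch.
emr = pd.DataFrame({
    "ssn": ["111-11-1111", "222-22-2222", "333-33-3333"],
    "name": ["A. Smith", "B. Jones", "C. Lee"],
    "last_visit": ["2023-01-10", "2023-06-02", "2023-03-15"],
})
deaths = pd.DataFrame({
    "ssn": ["222-22-2222"],
    "date_of_death": ["2023-08-30"],
})

# A left join keeps every patient; date_of_death stays empty (NaN) for
# anyone with no match, i.e., not recorded as having died.
linked = emr.merge(deaths, on="ssn", how="left")
print(linked)
```

In practice, the hard part is what this sketch leaves out: identifiers that are missing, mistyped, or inconsistent across institutions, which is much of what makes the process so cumbersome.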

An algorithm that learns from biased data will be biased, Shah says. Patients who are poor or African American have historically had worse health outcomes. If researchers train an algorithm on data that includes those biases, they get baked into the algorithm, which can then lead to a self-fulfilling prophecy. Smith and Lee say they've taken race out of their algorithms to avoid this bias.

Age is even trickier. There's no question that the risk of illness and death rises with age. But an 85-year-old who breaks a hip running a marathon should probably be treated very differently from an 85-year-old who breaks a hip getting out of a chair in a dementia care unit. That's why the doctor can never be taken out of the equation, Shah says: human judgment will always be required in medical care, and an algorithm should never be followed blindly.

Researchers are also concerned that their algorithms will be used to ration care, or that insurance companies will use the data to justify rate increases. If an algorithm predicts a patient is going to end up back in the hospital soon, "who's benefiting from knowing a patient is going to be readmitted? Probably the insurance company," Percha says.

Still, Percha and others say, the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully. "These are new and exciting tools that have a lot of potential uses. We need to be conscious about how to use them going forward, but it doesn't mean we shouldn't go down this road," she says. "I think the potential benefits outweigh the risks, especially because we've barely scratched the surface of what big data can do right now."

Karen Weintraub
Karen Weintraub, an independent health and science journalist, writes regularly for The New York Times, The Washington Post, Scientific American and other news outlets. She also teaches journalism at Boston University, MIT and the Harvard Extension School, and she's writing a book about the history of Cambridge, MA, where she lives with her husband and two daughters.