Clever Firm Predicts Patients Most at Risk, Then Tries to Intervene Before They Get Sicker
The diabetic patient hit the danger zone.
Ideally, average blood sugar, as measured by an A1C test, rests at 5.9 or less. A 7 is elevated, according to the Diabetes Council. Over 10, and you're in the extreme danger zone, at risk of every diabetic crisis from kidney failure to blindness.
This patient's A1C was 10. Let's call her Jen for the sake of this story. (Although the facts of her case are real, her actual name wasn't released due to privacy laws.)
Jen happens to live in Pennsylvania's Lehigh Valley, home of the nonprofit Lehigh Valley Health Network, which has eight hospital campuses along with various clinics and other services. The network has invested more than $1 billion in IT infrastructure and founded Populytics, a spin-off firm that tracks and analyzes patient data and makes care suggestions based on it.
When Jen left the doctor's office, the Populytics data machine started churning, comparing her records against a wealth of historical data to gauge how likely she was to end up back in the hospital if she did not follow recommendations, as well as the potential payoff of outreach and early intervention.
About a month after Jen received the dangerous blood test results, a community outreach specialist with psychological training called her. Jen was on a Populytics-generated list of patients flagged for follow-up.
"It's a very gentle conversation," says Cathryn Kelly, who manages a care coordination team at Populytics. "The case manager provides them understanding and support and coaching." The goal, in this case, was small behavioral changes that would actually stick, like dietary ones.
In three months of working with a case manager, Jen's blood sugar had dropped to 7.2, a much safer range. The odds of her cycling back to the hospital ER or veering into kidney failure, or worse, had dropped significantly.
While the health network serves just one corner of one state, using data to inform precise medical decision-making appears to be the wave of the future, says Ann Mongovern, the associate director of Health Care Ethics at the Markkula Center for Applied Ethics at Santa Clara University in California.
"Many hospitals and hospital systems don't yet try to do this at all, which is striking given where we're at in terms of our general technical ability in this society," Mongovern says.
How It Happened
While many hospitals make money by filling beds, the Lehigh Valley Health Network, as a nonprofit, accepts many patients on Medicaid and other government insurance plans that don't cover the full cost of a hospitalization. The area's population is both poorer and older than the national average, according to U.S. Census data, meaning more people with serious medical needs who may lack the support to care for themselves. They end up in the ER, or worse, again and again.
In the early 2000s, LVHN CEO Dr. Brian Nester started wondering whether his health network could develop a way to predict who was most likely to land in a pricey ICU stay -- and to offer support before those people ended up needing serious care.
"There was an early understanding, even if you go back to the (federal) balanced budget act of 1997, that we were just kicking the can down the road to having a functional financial model to deliver healthcare to everyone with a reasonable price," Nester says. "We've got a lot of people living longer without more of an investment in the healthcare trust."
Populytics, founded in 2013, was the result of years of planning and agonizing over those population numbers and cost concerns.
"We looked at our own health plan," Nester says. Out of all the employees and dependants on the LVHN's own insurance network, "roughly 1.5 percent of our 25,000 people — under 400 people — drove $30 million of our $130 million on insurance costs -- about 25 percent."
"You don't have to boil the ocean to take cost out of the system," he says. "You just have to focus on that 1.5%."
Take Jen, the diabetic patient. High blood sugar can lead to kidney failure, which can mean expensive weekly dialysis for 20 years. Investing in the data and staff to reach patients, he says, is "pennies compared to $100 bills."
For most doctors, "there's no awareness for providers to know who they should be seeing vs. who they are seeing. There's no incentive, because the incentive is to see as many patients as you can," he says.
To change that, the LVHN first invested in Epic, the popular electronic health records system. Then it negotiated with the top 18 insurance companies that cover patients in the region for access to their patient care data, giving it reams of patient history to feed the analytics machine and make predictions about outcomes. Nester admits not every hospital could do that -- with 52 percent of the market share, LVHN had a very strong negotiating position.
Third-party services take that data -- with all identifying information stripped out -- and churn out analytics that feed models and care management plans.
"We can do predictive modeling in patients," says Populytics President and CEO Gregory Kile. "We can identify care gaps. Those care gaps are noted as alerts when the patient presents at the office."
Kile uses himself as a hypothetical patient.
"I pull up Gregory Kile, and boom, I see a flag or an alert. I see he hasn't been in for his last blood test. There is a care gap there we need to complete."
"There's just so much more you can do with that information," he says, envisioning a future where follow-up for, say, knee replacement surgery and outcomes could be tracked, and either validated or changed.
Ethical Issues at the Forefront
Of course, embracing data use in such specific ways also brings up issues of security and patient safety. For example, says medical ethicist Mongovern, there are many touchpoints where breaches could occur. The public is increasingly aware that data used to personalize their experiences, as social media analytics do, can also be monetized and sold in ways that benefit a company but not the user. That's not to say using data to support medical decisions is a bad thing, she says, just one with the potential to breed public distrust if not handled thoughtfully.
"You're going to need to do this to stay competitive," she says. "But there's obviously big challenges, not the least of which is patient trust."
Among the ways the LVHN uses the data are monthly reports it calls registries, which include patients who have recently come into contact with the health network, either through a hospital or through a doctor who works with it. Community outreach team members at Populytics take the names from the list, pull the records, and start calling. So far, a majority of the patients targeted -- 62 percent -- appear to embrace the effort.
Says Nester: "Most of these are vulnerable people who are thrilled to have someone care about them. So they engage, and when a person engages in their care, they take their insulin shots. It's not rocket science. The rocket science is in identifying who the people are — the delivery of care is easy."
Dadbot, Wifebot, Friendbot: The Future of Memorializing Avatars
In 2016, when my family found out that my father was dying from cancer, I did something that at the time felt completely obvious: I started building a chatbot replica of him.
I was not under any delusion that the Dadbot, as I soon began calling it, would be a true avatar of him. From my research into the voice computing revolution—Siri, Alexa, the Google Assistant—I knew that fully humanlike AIs, like the ones you see in the movies, were a long way from technological reality. Replicating my dad in any real sense was never the goal, anyway; that notion gave me the creeps.
Instead, I simply wanted to create an interactive way to share key parts of his life story: facts about his ancestors in Greece. Memories from growing up. Stories about his hobbies, family life, and career. And I wanted the Dadbot, which sent text messages and audio clips over Facebook Messenger, to remind me of his personality—warm, erudite, and funny. So I programmed it to use his distinctive phrasings, to tell a few of his signature jokes, and to sing his favorite songs.
While creating the Dadbot, a laborious undertaking that sprawled into 2017, I fixated on two things. The first was getting the programming right, which I did using a conversational agent authoring platform called PullString. The second, far more wrenching concern was my father's health. Failing to improve after chemotherapy and immunotherapy, and steadily losing energy, weight, and the animating sparkle of life, he died on February 9.
[Photo: John Vlahos at a family reunion in the summer of 2016, a few months after his cancer diagnosis. (Courtesy James Vlahos)]
After a magazine article that I wrote about the Dadbot came out in the summer of 2017, messages poured in from readers. While most people simply expressed sympathy, some conveyed a more urgent message: They wanted their own memorializing chatbots. One man implored me to make a bot for him; he had been diagnosed with cancer and wanted his six-month-old daughter to have a way to remember him. A technology entrepreneur needed advice on replicating what I did for her father, who had stage IV cancer. And a teacher in India asked me to engineer a conversational replica of her son, who had recently been struck and killed by a bus.
Journalists from around the world also got in touch for interviews, and they inevitably came around to the same question. Will virtual immortality, they asked, ever become a business?
The prospect of this happening had never crossed my mind. I was consumed by my father's struggle and my own grief. But the notion has since become head-slappingly obvious. I am not the only person to confront the loss of a loved one; the experience is universal. And I am not alone in craving a way to keep memories alive. Of course people like the ones who wrote me will get Dadbots, Mombots, and Childbots of their own. If a moonlighting writer like me can create a minimum viable product, then a company employing actual computer scientists could do much more.
But this prospect raises unanswered and unsettling questions. For businesses, profit, and not some deeply personal mission, will be the motivation. This shift will raise issues that I didn't have to confront. To make money, a virtual immortality company could follow the lucrative but controversial business model that has worked so well for Google and Facebook. To wit, a company could provide the memorializing chatbot for free and then find ways to monetize the attention and data of whoever communicated with it. Given the copious amount of personal information flowing back and forth in conversations with replica bots, this would be a data gold mine for the company—and a massive privacy risk for users.
Alternatively, a company could charge for memorializing avatars, perhaps with an annual subscription fee. This would put the business in a powerful position. Imagine the fee getting hiked each year. A customer like me would face a terrible decision: grit my teeth and keep paying, or pull the plug on the best, closest reminder of a loved one that I have. The loved one would effectively wind up dying twice.
Another way that a beloved digital avatar could die is if the company that creates it ceases to exist. This is no mere academic concern for me: Earlier this year, PullString was swallowed up by Apple. I'm still able to access the Dadbot on my own computer, fortunately, but the acquisition means that other friends and family members can no longer chat with him remotely.
Startups like PullString, of course, are characterized by impermanence; they tend to get snapped up by bigger companies or run out of venture capital and fold. But even if big players like, say, Facebook or Google get into the virtual immortality game, we can't count on them existing even a few decades from now, which means that the avatars enabled by their technology would die, too.
The permanence problem is the biggest hurdle faced by the fledgling enterprise of virtual immortality. So some entrepreneurs are attempting to enable avatars whose existence isn't reliant upon any one company or set of computer servers. "By leveraging the power of blockchain and decentralized software to replicate information, we help users create avatars that live on forever," says Alex Roy, the founder and CEO of the startup Everlife.ai. But until this type of solution exists, give props to conventional technology for preserving memories: printed photos and words on paper can last for centuries.
The fidelity of avatars—just how lifelike they are—also raises serious concerns. Before I started creating the Dadbot, I worried that the tech might be just good enough to remind my family of the man it emulated, but so far off from my real father that it gave us all the creeps. But because the Dadbot was a simple chatbot and not some all-knowing AI, and because the interface was a messaging app, there was no danger of him encroaching on the reality of my actual dad.
But virtual immortality as commercial product will doubtless become more sophisticated. Avatars will have brains built by teams of computer scientists employing the latest techniques in conversational AI. The replicas will not just text but also speak, using synthetic voices that emulate those of the people being memorialized. They may even come to life as animated clones on computer screens or in 3D with the help of virtual reality headsets.
These are all lines that I don't personally want to cross; replicating my dad was never the goal. I also never aspired to have some synthetic version of him that continued to exist in the present, capable of acquiring knowledge about the world or my life and of reacting to it in real time.
Instead, what fascinates me is how technology can help to preserve the past—genuine facts and memories from people's lives—and their actual voices so that their stories can be shared interactively after they have gone. I'm working on ideas for doing this via voice computing platforms like Alexa and Assistant, and while I don't have all of the answers yet, I'm excited to figure out what might be possible.
[Adapted from Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think (Houghton Mifflin Harcourt, March 26, 2019).]
The Best Kept Secret on the International Space Station
[Editor's Note: This video is the second of a five-part series titled "The Future Is Now: The Revolutionary Power of Stem Cell Research." Produced in partnership with the Regenerative Medicine Foundation, and filmed at the annual 2019 World Stem Cell Summit, this series illustrates how stem cell research will profoundly impact life on earth.]
Kira Peikoff was the editor-in-chief of Leaps.org from 2017 to 2021. As a journalist, her work has appeared in The New York Times, Newsweek, Nautilus, Popular Mechanics, The New York Academy of Sciences, and other outlets. She is also the author of four suspense novels that explore controversial issues arising from scientific innovation: Living Proof, No Time to Die, Die Again Tomorrow, and Mother Knows Best. Peikoff holds a B.A. in Journalism from New York University and an M.S. in Bioethics from Columbia University. She lives in New Jersey with her husband and two young sons. Follow her on Twitter @KiraPeikoff.