Clever Firm Predicts Patients Most at Risk, Then Tries to Intervene Before They Get Sicker
The diabetic patient hit the danger zone.
Ideally, blood sugar, measured by an A1C test, rests at 5.9 or less. A 7 is elevated, according to the Diabetes Council. Over 10, and you're into the extreme danger zone, at risk of every diabetic crisis from kidney failure to blindness.
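In code terms, those bands amount to a simple threshold check. Here is a minimal sketch, using only the cutoffs cited above (real clinical guidelines are more nuanced and vary by source):

```python
def a1c_risk_band(a1c: float) -> str:
    """Bucket an A1C percentage into the rough bands described above."""
    if a1c <= 5.9:
        return "ideal"
    elif a1c <= 10.0:
        return "elevated"
    else:
        return "extreme danger"

# Example: a reading just over 10 lands in the extreme danger zone.
print(a1c_risk_band(5.4))   # ideal
print(a1c_risk_band(10.1))  # extreme danger
```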
This patient's A1C was 10. Let's call her Jen for the sake of this story. (Although the facts of her case are real, the patient's actual name wasn't released due to privacy laws.)
Jen happens to live in Pennsylvania's Lehigh Valley, home of the nonprofit Lehigh Valley Health Network, which has eight hospital campuses and various clinics and other services. This network has invested more than $1 billion in IT infrastructure and founded Populytics, a spin-off firm that tracks and analyzes patient data, and makes care suggestions based on that data.
When Jen left the doctor's office, the Populytics data machine started churning, comparing her results against a wealth of historical data to estimate the likelihood of future hospital visits if she did not follow recommendations, as well as the potential positive impact of outreach and early intervention.
About a month after Jen received the dangerous blood test results, a community outreach specialist with psychological training called her. Jen was on a Populytics-generated list of patients flagged for follow-up contact.
"It's a very gentle conversation," says Cathryn Kelly, who manages a care coordination team at Populytics. "The case manager provides them understanding and support and coaching." The goal, in this case, was small behavioral changes that would actually stick, like dietary ones.
Within three months of starting to work with a case manager, Jen's blood sugar had dropped to 7.2, a much safer range. The odds of her cycling back to the hospital ER or veering into kidney failure, or worse, had dropped significantly.
While the health network is extremely localized to one area of one state, using data to inform precise medical decision-making appears to be the wave of the future, says Ann Mongovern, the associate director of Health Care Ethics at the Markkula Center for Applied Ethics at Santa Clara University in California.
"Many hospitals and hospital systems don't yet try to do this at all, which is striking given where we're at in terms of our general technical ability in this society," Mongovern says.
How It Happened
While many hospitals make money by filling beds, the Lehigh Valley Health Network, as a nonprofit, accepts many patients on Medicaid and other government insurance that doesn't cover the full cost of a hospitalization. The area's population is both poorer and older than the national average, according to U.S. Census data, meaning more people with greater medical needs who may not have the support to care for themselves. They end up in the ER, or worse, again and again.
In the early 2000s, LVHN CEO Dr. Brian Nester started wondering if his health network could develop a way to predict who is most likely to land themselves a pricey ICU stay -- and offer support before those people end up needing serious care.
"There was an early understanding, even if you go back to the (federal) balanced budget act of 1997, that we were just kicking the can down the road to having a functional financial model to deliver healthcare to everyone with a reasonable price," Nester says. "We've got a lot of people living longer without more of an investment in the healthcare trust."
Populytics, founded in 2013, was the result of years of planning and agonizing over those population numbers and cost concerns.
"We looked at our own health plan," Nester says. Out of all the employees and dependants on the LVHN's own insurance network, "roughly 1.5 percent of our 25,000 people — under 400 people — drove $30 million of our $130 million on insurance costs -- about 25 percent."
"You don't have to boil the ocean to take cost out of the system," he says. "You just have to focus on that 1.5%."
Take Jen, the diabetic patient. High blood sugar can lead to kidney failure, which can mean weekly expensive dialysis for 20 years. Investing in the data and staff to reach patients, he says, is "pennies compared to $100 bills."
For most doctors, "there's no awareness for providers to know who they should be seeing vs. who they are seeing. There's no incentive, because the incentive is to see as many patients as you can," he says.
To change that, the LVHN first invested in Epic, the widely used electronic health records system. Then it negotiated with the top 18 insurance companies that cover patients in the region for access to their patient care data, giving it reams of patient history to feed the analytics engine and make predictions about outcomes. Nester admits not every hospital could do that -- with 52 percent of the market share, LVHN had a very strong negotiating position.
Third-party services take that data and churn out analytics that feed models and care management plans. All identifying information is stripped from the data.
"We can do predictive modeling in patients," says Populytics President and CEO Gregory Kile. "We can identify care gaps. Those care gaps are noted as alerts when the patient presents at the office."
Kile uses himself as a hypothetical patient.
"I pull up Gregory Kile, and boom, I see a flag or an alert. I see he hasn't been in for his last blood test. There is a care gap there we need to complete."
"There's just so much more you can do with that information," he says, envisioning a future where follow-up for, say, knee replacement surgery and outcomes could be tracked, and either validated or changed.
Ethical Issues at the Forefront
Of course, embracing data use in such specific ways also brings up issues of security and patient safety. For example, says medical ethicist Mongovern, there are many touchpoints where breaches could occur. The public has a growing awareness of how data used to personalize their experiences, such as social media analytics, can also be monetized and sold in ways that benefit a company, but not the user. That's not to say data supporting medical decisions is a bad thing, she says, just one with potential for public distrust if not handled thoughtfully.
"You're going to need to do this to stay competitive," she says. "But there's obviously big challenges, not the least of which is patient trust."
Among the ways the LVHN uses the data are monthly reports called registries, which include patients who have recently come in contact with the health network, either through a hospital or a doctor who works with it. The community outreach team members at Populytics take the names from the list, pull their records, and start calling. So far, a majority of the patients targeted -- 62 percent -- appear to embrace the effort.
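In spirit, that registry workflow is a filter over recent encounters. A rough sketch under assumed field names (the article doesn't specify how the registries are actually built):

```python
from datetime import date, timedelta

def monthly_registry(encounters, run_date, window_days=30):
    """Keep patients seen in the last `window_days` days who carry an open alert."""
    cutoff = run_date - timedelta(days=window_days)
    return [e["patient_id"]
            for e in encounters
            if e["date"] >= cutoff and e.get("open_alerts")]

encounters = [
    {"patient_id": "A-102", "date": date(2023, 9, 28), "open_alerts": ["A1C overdue"]},
    {"patient_id": "B-417", "date": date(2023, 8, 2),  "open_alerts": []},
]
print(monthly_registry(encounters, run_date=date(2023, 10, 1)))  # ['A-102']
```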
Says Nester: "Most of these are vulnerable people who are thrilled to have someone care about them. So they engage, and when a person engages in their care, they take their insulin shots. It's not rocket science. The rocket science is in identifying who the people are — the delivery of care is easy."
Is there a robot nanny in your child's future?
From ROBOTS AND THE PEOPLE WHO LOVE THEM: Holding on to Our Humanity in an Age of Social Robots by Eve Herold. Copyright © 2024 by the author and reprinted by permission of St. Martin’s Publishing Group.
Could the use of robots take some of the workload off teachers, add engagement among students, and ultimately invigorate learning by taking it to a new level that is more consonant with the everyday experiences of young people? Do robots have the potential to become full-fledged educators and further push human teachers out of the profession? The preponderance of opinion on this subject is that, just as AI and medical technology are not going to eliminate doctors, robot teachers will never replace human teachers. Rather, they will change the job of teaching.
A 2017 study led by Google executive James Manyika suggested that skills like creativity, emotional intelligence, and communication will always be needed in the classroom and that robots aren’t likely to provide them at the same level that humans naturally do. But robot teachers do bring advantages, such as a depth of subject knowledge that teachers can’t match, and they’re great for student engagement.
The teacher and robot can complement each other in new ways, with the teacher facilitating interactions between robots and students. So far, this is the case with teaching “assistants” being adopted now in China, Japan, the U.S., and Europe. In this scenario, the robot (usually the SoftBank child-size robot NAO) is a tool for teaching mainly science, technology, engineering, and math (the STEM subjects), but the teacher is very involved in planning, overseeing, and evaluating progress. The students get an entertaining and enriched learning experience, and some of the teaching load is taken off the teacher. At least, that’s what researchers have been able to observe so far.
To be sure, there are some powerful arguments for having robots in the classroom. A not-to-be-underestimated one is that robots “speak the language” of today’s children, who have been steeped in technology since birth. These children are adept at navigating a media-rich environment that is highly visual and interactive. They are plugged into the Internet 24-7. They consume music, games, and huge numbers of videos on a weekly basis. They expect to be dazzled because they are used to being dazzled by more and more spectacular displays of digital artistry. Education has to compete with social media and the entertainment vehicles of students’ everyday lives.
Another compelling argument for teaching robots is that they help prepare students for the technological realities they will encounter in the real world when robots will be ubiquitous. From childhood on, they will be interacting and collaborating with robots in every sphere of their lives from the jobs they do to dealing with retail robots and helper robots in the home. Including robots in the classroom is one way of making sure that children of all socioeconomic backgrounds will be better prepared for a highly automated age, when successfully using robots will be as essential as reading and writing. We’ve already crossed this threshold with computers and smartphones.
Students need multimedia entertainment with their teaching. This is something robots can provide through their ability to connect to the Internet and act as a centralized host for videos, music, and games. Children also need interaction, something robots can deliver up to a point, but which humans can surpass. The education of a child is not just intended to make them technologically functional in a wired world; it's to help them grow in intellectual, creative, social, and emotional ways. Considered from this perspective, the question becomes just how far robots should go. Robots don't just teach and engage children; they're designed to tug at their heartstrings.
It’s no coincidence that many toy makers and manufacturers are designing cute robots that look and behave like real children or animals, says Sherry Turkle. “When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring,” she has written in The Washington Post. “They are designed to be cute, to provide a nurturing response” from the child. As mentioned previously, this nurturing experience is a powerful vehicle for drawing children in and promoting strong attachment. But should children really love their robots?
The problem, once again, is that a child can be lulled into thinking that she’s in an actual relationship, when a robot can’t possibly love her back. If adults have these vulnerabilities, what might such asymmetrical relationships do to the emotional development of a small child? Turkle notes that while we tend to ascribe a mind and emotions to a socially interactive robot, “simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.”
Always a consideration is the fact that in the first few years of life, a child’s brain is undergoing rapid growth and development that will form the foundation of their lifelong emotional health. These formative experiences are literally shaping the child’s brain, their expectations, and their view of the world and their place in it. In Alone Together, Turkle asks: What are we saying to children about their importance to us when we’re willing to outsource their care to a robot? A child might be superficially entertained by the robot while his self-esteem is systematically undermined.
Still, in the case of robot nannies in the home, is active, playful engagement with a robot for a few hours a day any more harmful than several hours in front of a TV or with an iPad? Some, like Xiong, regard interacting with a robot as better than mere passive entertainment. iPal’s manufacturers say that their robot can’t replace parents or teachers and is best used by three- to eight-year-olds after school, while they wait for their parents to get off work. But as robots become ever-more sophisticated, they’re expected to perform more of the tasks of day-to-day care and to be much more emotionally advanced. There is no question children will form deep attachments to some of them. And research has emerged showing that there are clear downsides to child-robot relationships.
Some studies, performed by Turkle and MIT colleague Cynthia Breazeal, have revealed a darker side to the child-robot bond. Turkle has reported extensively on these studies in The Washington Post and in her book Alone Together. Most children love robots, but some act out their inner bully on the hapless machines, hitting and kicking them and otherwise trying to hurt them. The trouble is that the robot can’t fight back, teaching children that they can bully and abuse without consequences. As in any other robot relationship, such harmful behavior could carry over into the child’s human relationships.
And, ironically, it turns out that communicative machines don’t actually teach kids good communication skills. It’s well known that parent-child communication in the first three years of life sets the stage for a very young child’s intellectual and academic success. Verbal back-and-forth with parents and care-givers is like fuel for a child’s growing brain. One article that examined several types of play and their effect on children’s communication skills, published in JAMA Pediatrics in 2015, showed that babies who played with electronic toys—like the popular robot dog Aibo—showed a decrease in both the quantity and quality of their language skills.
Anna V. Sosa of the Child Speech and Language Lab at Northern Arizona University studied twenty-six ten- to sixteen-month-old infants to compare the growth of their language skills after they played with three types of toys: electronic toys like a baby laptop and talking farm; traditional toys like wooden puzzles and building blocks; and books read aloud by their parents. The play that produced the most growth in verbal ability was having books read to them by a caregiver, followed by play with traditional toys. Language gains after playing with electronic toys came dead last. This form of play involved the least use of adult words, the least conversational turn-taking, and the fewest verbalizations from the children. While the study sample was small, it’s not hard to extrapolate that no electronic toy or even more capable robot could supply the intimate responsiveness of a parent reading stories to a child, explaining new words, answering the child’s questions, and modeling the kind of back-and-forth interaction that promotes empathy and reciprocity in relationships.
***
Most experts acknowledge that robots can be valuable educational tools. But they can’t make a child feel truly loved, validated, and valued. That’s the job of parents, and when parents abdicate this responsibility, it’s not only the child who misses out on one of life’s most profound experiences.
We really don’t know how the tech-savvy children of today will ultimately process their attachments to robots and whether they will be excessively predisposed to choosing robot companionship over that of humans. It’s possible their techno-literacy will draw for them a bold line between real life and a quasi-imaginary history with a robot. But it will be decades before we see long-term studies culminating in sufficient data to help scientists, and the rest of us, parse out the effects of a lifetime spent with robots.
Story by Big Think
In rare cases, a woman’s heart can start to fail in the months before or after giving birth. The all-important muscle weakens as its chambers enlarge, reducing the amount of blood pumped with each beat. Peripartum cardiomyopathy can threaten the lives of both mother and child. Viral illness, nutritional deficiency, the bodily stress of pregnancy, or an abnormal immune response could all play a role, but the causes aren’t concretely known.
If there is a silver lining to peripartum cardiomyopathy, it’s that it is perhaps the most survivable form of heart failure. A remarkable 50% of women recover spontaneously. And there’s an even more remarkable explanation for that glowing statistic: The fetus' stem cells migrate to the heart and regenerate the beleaguered muscle. In essence, the developing or recently born child saves its mother’s life.
Saving mama
While this process has not been observed directly in humans, it has been witnessed in mice. In a 2015 study, researchers tracked stem cells from fetal mice as they traveled to the mothers’ damaged cardiac cells and integrated themselves into their hearts.
Scientists also have spotted cells from the fetus within the hearts of human mothers, as well as countless other places inside the body, including the skin, spleen, liver, brain, lung, kidney, thyroid, lymph nodes, salivary glands, gallbladder, and intestine. These cells essentially get everywhere. While most are eliminated by the immune system during pregnancy, some can persist for an incredibly long time — up to three decades after childbirth.
This integration of the fetus’ cells into the mother’s body has been given a name: fetal microchimerism. The process appears to start between the fourth and sixth week of gestation in humans. Scientists are actively trying to suss out its purpose. Fetal stem cells, which can differentiate into all sorts of specialized cells, appear to target areas of injury. So their role in healing seems apparent. Evolutionarily, this function makes sense: It is in the fetus’ best interest that its mother remains healthy.
Sending cells into the mother’s body may also prime her immune system to grow more tolerant of the developing fetus. Successful pregnancy requires that the immune system not see the fetus as an interloper and thus dispatch cells to attack it.
Fetal microchimerism
But fetal microchimerism might not be entirely beneficial. Greater concentrations of the cells have been associated with various autoimmune diseases such as lupus, Sjogren’s syndrome, and even multiple sclerosis. After all, they are foreign cells living in the mother’s body, so it’s possible that they might trigger subtle, yet constant inflammation. Fetal cells also have been linked to cancer, although it isn’t clear whether they abet or hinder the disease.
A team of Spanish scientists summarized the apparent give and take of fetal microchimerism in a 2022 review article. “On the one hand, fetal microchimerism could be a source of progenitor cells with a beneficial effect on the mother’s health by intervening in tissue repair, angiogenesis, or neurogenesis. On the other hand, fetal microchimerism might have a detrimental function by activating the immune response and contributing to autoimmune diseases,” they wrote.
Regardless of the net effect of a fetus’ cells, their existence alone is intriguing. In a paper published earlier this year, University of London biologist Francisco Úbeda and University of Western Ontario mathematical biologist Geoff Wild noted that these cells might very well persist within mothers for life.
“Therefore, throughout their reproductive lives, mothers accumulate fetal cells from each of their past pregnancies including those resulting in miscarriages. Furthermore, mothers inherit, from their own mothers, a pool of cells contributed by all fetuses carried by their mothers, often referred to as grandmaternal microchimerism.”
So every mother may carry within her literal pieces of her ancestors.