Clever Firm Predicts Patients Most at Risk, Then Tries to Intervene Before They Get Sicker
The diabetic patient hit the danger zone.
Ideally, blood sugar, measured by an A1C test, rests at 5.9 or less. A 7 is elevated, according to the Diabetes Council. Over 10, and you're into the extreme danger zone, at risk of every diabetic crisis from kidney failure to blindness.
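To make those cutoffs concrete, here is a minimal sketch in Python of the risk bands as described above (the function and its labels are purely illustrative, not clinical guidance or anything the health network actually runs):

```python
def a1c_risk_band(a1c: float) -> str:
    """Bucket an A1C result using the cutoffs cited above (illustrative only)."""
    if a1c <= 5.9:
        return "ideal"            # rests at 5.9 or less
    elif a1c < 10:
        return "elevated"         # e.g., a 7 is elevated
    else:
        return "extreme danger"   # at 10 or above: risk of serious complications

print(a1c_risk_band(10))  # this patient's result lands in the danger zone
```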
This patient's A1C was 10. Let's call her Jen for the sake of this story. (Although the facts of her case are real, the patient's actual name wasn't released due to privacy laws.)
Jen happens to live in Pennsylvania's Lehigh Valley, home of the nonprofit Lehigh Valley Health Network, which has eight hospital campuses and various clinics and other services. This network has invested more than $1 billion in IT infrastructure and founded Populytics, a spin-off firm that tracks and analyzes patient data, and makes care suggestions based on that data.
When Jen left the doctor's office, the Populytics data machine started churning, comparing her results against a wealth of historical patient data to estimate the likelihood of future hospital visits if she did not comply with recommendations, as well as the potential positive impact of outreach and early intervention.
About a month after Jen received the dangerous blood test results, a community outreach specialist with psychological training called her. Jen was on a Populytics-generated list of patients flagged for follow-up.
"It's a very gentle conversation," says Cathryn Kelly, who manages a care coordination team at Populytics. "The case manager provides them understanding and support and coaching." The goal, in this case, was small behavioral changes that would actually stick, like dietary ones.
In three months of working with a case manager, Jen's blood sugar had dropped to 7.2, a much safer range. The odds of her cycling back to the hospital ER or veering into kidney failure, or worse, had dropped significantly.
While the health network is confined to one corner of one state, using data to inform precise medical decision-making appears to be the wave of the future, says Ann Mongovern, the associate director of Health Care Ethics at the Markkula Center for Applied Ethics at Santa Clara University in California.
"Many hospitals and hospital systems don't yet try to do this at all, which is striking given where we're at in terms of our general technical ability in this society," Mongovern says.
How It Happened
While many hospitals make money by filling beds, the Lehigh Valley Health Network, as a nonprofit, accepts many patients on Medicaid and other government insurance that doesn't cover the full costs of a hospitalization. The area's population is both poorer and older than the national average, according to U.S. Census data, meaning more people with higher medical needs who may not have the support to care for themselves. They end up in the ER, or worse, again and again.
In the early 2000s, LVHN CEO Dr. Brian Nester started wondering if his health network could develop a way to predict who is most likely to land themselves a pricey ICU stay -- and offer support before those people end up needing serious care.
"There was an early understanding, even if you go back to the (federal) balanced budget act of 1997, that we were just kicking the can down the road to having a functional financial model to deliver healthcare to everyone with a reasonable price," Nester says. "We've got a lot of people living longer without more of an investment in the healthcare trust."
Populytics, founded in 2013, was the result of years of planning and agonizing over those population numbers and cost concerns.
"We looked at our own health plan," Nester says. Out of all the employees and dependants on the LVHN's own insurance network, "roughly 1.5 percent of our 25,000 people — under 400 people — drove $30 million of our $130 million on insurance costs -- about 25 percent."
"You don't have to boil the ocean to take cost out of the system," he says. "You just have to focus on that 1.5%."
Take Jen, the diabetic patient. High blood sugar can lead to kidney failure, which can mean expensive weekly dialysis for 20 years. Investing in the data and staff to reach patients, he says, is "pennies compared to $100 bills."
For most doctors, "there's no awareness for providers to know who they should be seeing vs. who they are seeing. There's no incentive, because the incentive is to see as many patients as you can," he says.
To change that, the LVHN first invested in Epic, the popular electronic health records system. Then it negotiated with the top 18 insurance companies that cover patients in the region for access to their patient care data, giving it reams of patient history to feed the analytics engine and make predictions about outcomes. Nester admits not every hospital could do that -- with 52 percent of the market share, LVHN had a very strong negotiating position.
Third-party services take that data and churn out analytics that feed models and care management plans. All identifying information is stripped from the data.
"We can do predictive modeling in patients," says Populytics President and CEO Gregory Kile. "We can identify care gaps. Those care gaps are noted as alerts when the patient presents at the office."
Kile uses himself as a hypothetical patient.
"I pull up Gregory Kile, and boom, I see a flag or an alert. I see he hasn't been in for his last blood test. There is a care gap there we need to complete."
"There's just so much more you can do with that information," he says, envisioning a future where follow-up for, say, knee replacement surgery and outcomes could be tracked, and either validated or changed.
Ethical Issues at the Forefront
Of course, embracing data use in such specific ways also brings up issues of security and patient safety. For example, says medical ethicist Mongovern, there are many touchpoints where breaches could occur. The public is increasingly aware that data used to personalize their experiences, as with social media analytics, can also be monetized and sold in ways that benefit a company but not the user. That's not to say data supporting medical decisions is a bad thing, she says, just one with potential for public distrust if not handled thoughtfully.
"You're going to need to do this to stay competitive," she says. "But there's obviously big challenges, not the least of which is patient trust."
One way the LVHN uses the data is in monthly reports it calls registries, which list patients who have recently come in contact with the health network, either through the hospital or through a doctor who works with it. The community outreach team members at Populytics take the names from the list, pull their records, and start calling. So far, a majority of the patients targeted -- 62 percent -- appear to embrace the effort.
Says Nester: "Most of these are vulnerable people who are thrilled to have someone care about them. So they engage, and when a person engages in their care, they take their insulin shots. It's not rocket science. The rocket science is in identifying who the people are — the delivery of care is easy."
Today’s Focus on STEM Education Is Missing A Crucial Point
I once saw a fascinating TED talk on 3D printing. As I watched the presenter discuss the custom fabrication, not of plastic gears or figurines, but of living, implantable kidneys, I thought I was finally living in the world of Star Trek, and I experienced a flush of that eager, expectant enthusiasm I felt as a child looking toward the future. I looked at my current career and felt a rejuvenation of my commitment to teach young people the power of science.
Whether we are teachers or not, those of us who admire technology and innovation, and who wish to support progress, usually embrace the importance of educating the next generation of scientists and inventors. Growing a healthy technological civilization takes a lot of work, skill, and wisdom, and its continued health depends on future generations of competent thinkers. Thus, we may find it encouraging that there is currently an abundance of interest in STEM -- the common acronym for the study of science, technology, engineering, and math.
But education is as challenging an endeavor as science itself. Educating youth--if we want to do it right--requires as much thought, work, and expertise as discovering a cure or pioneering regenerative medicine. Before we give our money, time, or support to any particular school or policy, let's give some thought to the details of the educational process.
A Well-Balanced Diet
For one thing, STEM education cannot stand in isolation. The well-rounded education of human beings needs to include lessons learned both from a study of the physical world, and from a study of humanity. This is especially true for the basic education of children, but it is true even for college students. And even for those in science and engineering, there are important lessons to be learned from the study of history, literature, and art.
Scientists have their own emotions and values, and also need financial support. The fruits of their labor ultimately benefit other people. How are we all to function together in our division-of-labor society, without some knowledge of the way societies work? How are we to fully thrive and enjoy life, without some understanding of ourselves, our motives, our moral values, and our relationships to others? STEM education needs the humanities as a partner. That flourishing civilization we dream of requires both technical competence and informed life-choices.
Think for Yourself (Even in Science)
Perhaps even more important than what is taught is the subject of how things are taught. We want our children to learn the skill of thinking independently, but even in the sciences, we often completely fail to demonstrate how. Instead of teaching science as a thinking process, we indoctrinate, using the grand discoveries of the great scientists as our sacred texts. But consider the words of Isaac Newton himself, regarding rote learning:
A Vulgar Mechanick can practice what he has been taught or seen done, but if he is in an error he knows not how to find it out and correct it, and if you put him out of his road he is at a stand. Whereas he that is able to reason nimbly and judiciously about figure, force, and motion, is never at rest till he gets over every rub.
If our goal is to help students "reason nimbly" about the world around them, as the great scientists themselves did, are we succeeding? When we "teach" middle school students about DNA or cellular respiration by presenting as our only supporting evidence cartoon pictures, are we showing students a process of discovery based on evidence and hard work? Or are we just training them to memorize and repeat what the authorities say?
A useful education needs to give students the skill of following a line of reasoning, of asking rational questions, and of chewing things through in their minds--even if we regard the material as beyond question. Besides feeding students a well-balanced diet of knowledge, healthy schooling needs to teach them to digest this information thoroughly.
Thinking Training
Now step back for a moment and think about the purpose of education. What's the point of all this formal schooling in the first place? Is it, as many of the proponents of STEM education might argue, to train students for a "good" career? That view may have some validity for young adults, who are beginning to choose electives in favored subjects, and have started to choose a direction for their career.
But for the basic education of children, this way of thinking is presumptuous and disastrous. I would argue that the central purpose of a basic education is not to teach children how to perform this or that particular skill, but simply to teach them to think clearly. We should not be aiming to provide job training, but thinking training. We should be helping children learn how to "reason nimbly" about the world around them, and breathing life into their thinking processes, by which they will grapple with the events and circumstances of their lives.
So as we admire innovation, dream of a wonderful future, and attempt to nurture the next generation of scientists and engineers, instead of obsessing over STEM education, let us focus on rational education. Let's worry about showing children how to think--about all the important things in life. Let's give them the basic facts of human existence -- physical and humanitarian -- and show them how to fluently and logically understand them.
Some students will become the next generation of creators, and some will follow other careers, but together -- if they are educated properly -- they will continue to grow their inheritance, and to keep our civilization healthy and flourishing, in body and in mind.
Do New Tools Need New Ethics?
Scarcely a week goes by without the announcement of another breakthrough owing to advancing biotechnology. Recent examples include the use of gene editing tools to successfully alter human embryos or clone monkeys; new immunotherapy-based treatments offering longer lives or even potential cures for previously deadly cancers; and the creation of genetically altered mosquitoes using "gene drives" to quickly spread changes through a population in an ecosystem and alter its capacity to carry disease.
Each of these examples puts pressure on current policy guidelines and approaches, some existing since the late 1970s, which were created to help guide the introduction of controversial new life sciences technologies. But do the policies that made sense decades ago continue to make sense today, or do the tools created during different eras in science demand new ethics guidelines and policies?
Advances in biotechnology aren't new, of course; they have been the hallmark of science since the creation of the modern U.S. National Institutes of Health in the 1940s and of similar government agencies elsewhere. Funding agencies focused on health sciences research with the hope of creating breakthroughs in human health, and along the way, basic science discoveries led to the creation of new scientific tools that offered the ability to approach life, death, and disease in fundamentally new ways.
For example, take the discovery in the 1970s of the "chemical scissors" in living cells called restriction enzymes, which could be controlled and used to introduce cuts at predictable locations in a strand of DNA. This led to tools that for the first time allowed genetic modification of any organism with DNA: bacteria, plants, animals, and even humans could in theory have harmful mutations repaired, but changes could also be made to alter or even add genetic traits, with potentially ominous implications.
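To see how predictable those cut locations are, consider a toy sketch that scans a DNA string for one well-known recognition sequence (GAATTC, the site cut by the EcoRI enzyme); this illustrates the principle only and is not laboratory software:

```python
ECORI_SITE = "GAATTC"  # recognition sequence of the EcoRI restriction enzyme

def find_cut_sites(dna: str, site: str = ECORI_SITE) -> list:
    """Return the start index of each occurrence of the recognition site."""
    dna = dna.upper()
    positions, start = [], dna.find(site)
    while start != -1:
        positions.append(start)
        start = dna.find(site, start + 1)
    return positions

print(find_cut_sites("ttGAATTCaggGAATTC"))  # [2, 11]
```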
The scientists involved in that early research convened a small conference to discuss not only the science, but how to responsibly control its potential uses and their implications. The meeting became known as the Asilomar Conference for the meeting center where it was held, and is often noted as the prime example of the scientific community policing itself. While the Asilomar recommendations were not sufficient from a policy standpoint, they offered a blueprint on which policies could be based and presented a model of the scientific community setting responsible controls for itself.
But the environment for conducting science changed over the succeeding decades and it is dramatically different today than it was in the 1970s, 80s, or even the early 2000s. The regime for oversight and regulation that has provided controls for the introduction of so-called "gene therapy" in humans starting in the mid-1970s is beginning to show signs of fraying. The vast majority of such research was performed in the U.S., U.K., and Europe, where policies were largely harmonized. But as the tools for manipulating humans at the molecular level advanced, they also became more reliable and more precise, as well as cheaper and easier to use—think CRISPR—and therefore more accessible to more people in many more countries, many without clear oversight or policies laying out responsible controls.
As if to make the point through news headlines, scientists in China announced in 2017 that they had attempted to perform gene editing on in vitro human embryos to repair an inherited mutation for beta thalassemia--research that would not be permitted in the U.S. and most European countries and at the time was also banned in the U.K. Similarly, specialists from a reproductive medicine clinic in the U.S. announced in 2016 that they had performed a highly controversial reproductive procedure in which DNA from two women is combined (so-called "three parent babies") in a satellite clinic they had opened in Mexico to avoid prohibitions on the technique passed by the U.S. Congress in 2015.
In both cases, genetic changes were introduced into human embryos that if successful would lead to the birth of a child with genetically modified germline cells—the sperm in boys or eggs in girls—with those genetic changes passed on to all future generations of related offspring. Those are just two very recent examples, and it doesn't require much imagination to predict the list of controversial possible applications of advancing biotechnologies: attempts at genetic augmentation or even cloning in humans, and alterations of the natural environment with genetically engineered mosquitoes or other insects in areas with endemic disease. In fact, as soon as this month, scientists in Africa may release genetically modified mosquitoes for the first time.
The technical barriers are falling at a dramatic pace, but policy hasn't kept up, both in terms of what controls make sense and how to address what is an increasingly global challenge. There is no precedent for global-scale science policy, though that is exactly what this moment seems to demand. Mechanisms for policy at global scale are limited -- think UN declarations, signatory countries, and sometimes international treaties -- but all are slow, cumbersome, and have limited track records of success.
But not all the news is bad. There are ongoing efforts at international discussion, such as an international summit on human genome editing convened in 2015 by the National Academies of Sciences and Medicine (U.S.), the Royal Society (U.K.), and the Chinese Academy of Sciences; a follow-on international consensus committee whose report was issued in 2017; and a second international summit in Hong Kong this November.
These efforts need to continue, focusing less on common regulatory policies, which will be elusive if not impossible to create and implement, and more on common ground for the principles that ought to guide country-level rules. Such principles might include those proposed by the international consensus committee, including transparency, due care, responsible science adhering to professional norms, promoting the wellbeing of those affected, and transnational cooperation. Work to create a set of shared norms is ongoing and worth continued effort as the relevant stakeholders attempt to navigate what can only be called a brave new world.