AI and you: Is the promise of personalized nutrition apps worth the hype?
As a type 2 diabetic, Michael Snyder has long been interested in how blood sugar levels vary from one person to another in response to the same food, and whether a more personalized approach to nutrition could help tackle the rapidly rising rates of diabetes and obesity in much of the Western world.
Eight years ago, Snyder, who directs the Center for Genomics and Personalized Medicine at Stanford University, decided to put his theories to the test. In the 2000s continuous glucose monitoring, or CGM, had begun to revolutionize the lives of diabetics, both type 1 and type 2. Using spherical sensors which sit on the upper arm or abdomen – with tiny wires that pierce the skin – the technology allowed patients to gain real-time updates on their blood sugar levels, transmitted directly to their phone.
It gave Snyder an idea for his research at Stanford. Applying the same technology to a group of apparently healthy people, and looking for ‘spikes’ or sudden surges in blood sugar known as hyperglycemia, could provide a means of observing how their bodies reacted to an array of foods.
“We discovered that different foods spike people differently,” he says. “Some people spike to pasta, others to bread, others to bananas, and so on. It’s very personalized and our feeling was that building programs around these devices could be extremely powerful for better managing people’s glucose.”
Unbeknown to Snyder at the time, thousands of miles away, a group of Israeli scientists at the Weizmann Institute of Science were doing exactly the same experiments. In 2015, they published a landmark paper which used CGM to track the blood sugar levels of 800 people over several days, showing that the biological response to identical foods can vary wildly. Like Snyder, they theorized that giving people a greater understanding of their own glucose responses, so they spend more time in the normal range, may reduce the prevalence of type 2 diabetes.
“At the moment 33 percent of the U.S. population is pre-diabetic, and 70 percent of those pre-diabetics will become diabetic,” says Snyder. “Those numbers are going up, so it’s pretty clear we need to do something about it.”
Fast forward to 2022, and both teams have converted their ideas into subscription-based dietary apps which use artificial intelligence to offer data-informed nutritional and lifestyle recommendations. Snyder’s spinoff, January AI, combines CGM information with heart rate, sleep, and activity data to advise on foods to avoid and the best times to exercise. DayTwo – a start-up which utilizes the findings of the Weizmann Institute of Science – obtains microbiome information by sequencing stool samples, and combines this with blood glucose data to rate ‘good’ and ‘bad’ foods for a particular person.
“CGMs can be used to devise personalized diets,” says Eran Elinav, an immunology professor and microbiota researcher at the Weizmann Institute of Science in addition to serving as a scientific consultant for DayTwo. “However, this process can be cumbersome. Therefore, in our lab we created an algorithm, based on data acquired from a big cohort of people, which can accurately predict post-meal glucose responses on a personal basis.”
The commercial potential of such apps is clear. DayTwo, which markets its product to corporate employers and health insurers rather than individual consumers, recently raised $37 million in funding. But the underlying science continues to generate intriguing findings.
Last year, Elinav and colleagues published a study on 225 individuals with pre-diabetes which found that they achieved better blood sugar control when they followed a personalized diet based on DayTwo’s recommendations, compared to a Mediterranean diet. The journal Cell just released a new paper from Snyder’s group which shows that different types of fibre benefit people in different ways.
“The idea is you hear different fibres are good for you,” says Snyder. “But if you look at fibres they’re all over the map—it’s like saying all animals are the same. The responses are very individual. For a lot of people [a type of fibre called] arabinoxylan clearly reduced cholesterol while the fibre inulin had no effect. But in some people, it was the complete opposite.”
Eight years ago, Stanford's Michael Snyder began studying how continuous glucose monitors could be used by patients to gain real-time updates on their blood sugar levels, transmitted directly to their phone.
The Snyder Lab, Stanford Medicine
Because of studies like these, interest in precision nutrition approaches has exploded in recent years. In January, the National Institutes of Health announced that it is spending $170 million on a five-year, multi-center initiative which aims to develop algorithms, based on a whole range of data sources, from blood sugar to sleep, exercise, stress, microbiome and even genomic information, which can help predict which diets are most suitable for a particular individual.
“There's so many different factors which influence what you put into your mouth but also what happens to different types of nutrients and how that ultimately affects your health, which means you can’t have a one-size-fits-all set of nutritional guidelines for everyone,” says Bruce Y. Lee, professor of health policy and management at the City University of New York Graduate School of Public Health.
With the falling costs of genomic sequencing, other precision nutrition clinical trials are choosing to look at whether our genomes alone can yield key information about what our diets should look like, an emerging field of research known as nutrigenomics.
The ASPIRE-DNA clinical trial at Imperial College London is aiming to see whether particular genetic variants can be used to classify individuals into two groups, those who are more glucose sensitive to fat and those who are more sensitive to carbohydrates. By following a tailored diet based on these sensitivities, the trial aims to see whether it can prevent people with pre-diabetes from developing the disease.
But while much hope is riding on these trials, even precision nutrition advocates caution that the field remains in the very earliest of stages. Lars-Oliver Klotz, professor of nutrigenomics at Friedrich-Schiller-University in Jena, Germany, says that while the overall goal is to identify means of avoiding nutrition-related diseases, genomic data alone is unlikely to be sufficient to prevent obesity and type 2 diabetes.
“Genome data is rather simple to acquire these days as sequencing techniques have dramatically advanced in recent years,” he says. “However, the predictive value of just genome sequencing is too low in the case of obesity and prediabetes.”
Others say that while genomic data can yield useful information in terms of how different people metabolize different types of fat and specific nutrients such as B vitamins, there is a need for more research before it can be utilized in an algorithm for making dietary recommendations.
“I think it’s a little early,” says Eileen Gibney, a professor at University College Dublin. “We’ve identified a limited number of gene-nutrient interactions so far, but we need more randomized control trials of people with different genetic profiles on the same diet, to see whether they respond differently, and if that can be explained by their genetic differences.”
Some start-ups have already come unstuck for promising too much, or pushing recommendations which are not based on scientifically rigorous trials. The world of precision nutrition apps was dubbed a ‘Wild West’ by some commentators after the founders of uBiome – a start-up which offered nutritional recommendations based on information obtained from sequencing stool samples – were charged with fraud last year. The weight-loss app Noom, which was valued at $3.7 billion in May 2021, has been criticized on Twitter by a number of users who claimed that its recommendations led them to develop eating disorders.
With precision nutrition apps marketing their technology at healthy individuals, question marks have also been raised about the value non-diabetics can gain from monitoring their blood sugar through CGM. While some small studies have found that wearing a CGM can make overweight or obese individuals more motivated to exercise, there is still a lack of conclusive evidence showing that this translates to improved health.
However, independent researchers remain intrigued by the technology, and say that the wealth of data generated through such apps could be used to further stratify the different types of people who are at risk of developing type 2 diabetes.
“CGM not only enables a longer sampling time for capturing glucose levels, but will also capture lifestyle factors,” says Robert Wagner, a diabetes researcher at University Hospital Düsseldorf. “It is probable that it can be used to identify many clusters of prediabetic metabolism and predict the risk of diabetes and its complications, but maybe also specific cardiometabolic risk constellations. However, we still don’t know which forms of diabetes can be prevented by such approaches and how feasible and long-lasting such self-feedback dietary modifications are.”
Snyder himself has now been wearing a CGM for eight years, and he credits the insights it provides with helping him to manage his own diabetes. “My CGM still gives me novel insights into what foods and behaviors affect my glucose levels,” he says.
He is now looking to run clinical trials with his group at Stanford to see whether following a precision nutrition approach based on CGM and microbiome data, combined with other health information, can be used to reverse signs of pre-diabetes. If it proves successful, January AI may look to incorporate microbiome data in future.
“Ultimately, what I want to do is be able to take people’s poop samples, maybe a blood draw, and say, ‘Alright, based on these parameters, this is what I think is going to spike you,’ and then have a CGM to test that out,” he says. “Getting very predictive about this, so right from the get go, you can have people better manage their health and then use the glucose monitor to help follow that.”
Matt Trau, a professor of chemistry at the University of Queensland, stunned the science world back in December when the prestigious journal Nature Communications published his lab's discovery about a unique property of cancer DNA that could lead to a simple, cheap, and accurate test to detect any type of cancer in under 10 minutes.
Trau granted very few interviews in the wake of the news, but he recently opened up to leapsmag about the significance of this promising early research. Here is his story in his own words, as told to Editor-in-Chief Kira Peikoff.
There's been an incredible explosion of knowledge over the past 20 years, particularly since the genome was sequenced. The area of diagnostics has a tremendous amount of promise and has caught our lab's interest. If you catch cancer early, you can improve survival rates to as high as 98 percent, sometimes now even surpassing that.
My lab is interested in devices to improve the trajectory of cancer patients. So, once people get diagnosed, can we get really sophisticated information about the molecular origins of the disease, and can we measure it in real time? And then can we match that with the best treatment and monitor it in real time, too?
I think those approaches, also coupled with immunotherapy, where one dreams of monitoring the immune system simultaneously with the disease progress, will be the future.
But currently, the methodologies for cancer are still pretty old. So, for example, let's talk about biopsies in general. Liquid biopsy just means using a blood test or a urine test, rather than extracting a piece of solid tissue. Now consider breast cancer. The cutting-edge screening method is still mammography or the physical interrogation for lumps. This has had a big impact in terms of early detection and awareness, but it's still primitive compared to interrogating, forensically, blood samples to look at traces of DNA.
Large machines like CAT scans, PET scans, MRIs, are very expensive and very subjective in terms of the operator. They don't look at the root causes of the cancer. Cancer is caused by changes in DNA. These can be changes in the hard drive of the DNA (the genomic changes) or changes in the apps that the DNA are running (the epigenetics and the transcriptomics).
We don't look at that now, even though we have, emerging, all of these technologies to do it, and those technologies are getting so much cheaper. I saw some statistics at a conference just a few months ago that, in the United States, less than 1 percent of cancer patients have their DNA interrogated. That's the current state-of-the-art in the modern medical system.
Professor Matt Trau, a cancer researcher at the University of Queensland in Australia.
(Courtesy)
Blood, as the highway of the body, is carrying all of this information. Cancer cells, if they are present in the body, are constantly getting turned over. When they die, they release their contents into the blood. Many of these cells end up in the urine and saliva. Having technologies that can forensically scan the highways looking for evidence of cancer is a little bit like looking for explosives at the airport. That's very valuable as a security tool.
The trouble is that there are thousands of different types of cancer. Going back to breast cancer, there's at least a dozen different types, probably more, and each of them changes the DNA (the hard drive of the disease) and the epigenetics (or the RAM memory). So one of the problems for diagnostics in cancer is to find something that is a signature of all cancers. That's been a really, really, really difficult problem.
Ours was a completely serendipitous discovery. What we found in the lab was this one marker that just kept coming up in all of the types of breast cancers we were studying.
No one believed it. I didn't believe it. I thought, "Gosh, okay, maybe it's a fluke, maybe it works just for breast cancer." So we went on to test it in prostate cancer, which is also many different types of diseases, and it seemed to be working in all of those. We then tested it further in lymphoma. Again, many different types of lymphoma. It worked across all of those. We tested it in gastrointestinal cancer. Again, many different types, and still, it worked, but we were skeptical.
Then we looked at cell lines, which are cells that have come from previous cancer patients, that we grow in the lab, but are used as model experimental systems. We have many of those cell lines, both ones that are cancerous, and ones that are healthy. It was quite remarkable that the marker worked in all of the cancer cell lines and didn't work in the healthy cell lines.
What could possibly be going on?
Well, imagine DNA as a piece of string, that's your hard drive. Epigenetics is like the beads that you put on that string. Those beads you can take on and off as you wish and they control which apps are run, meaning which genetic programs the cell runs. We hypothesized that for cancer, those beads cluster together, rather than being randomly distributed across the string.
The implications of this are profound. It means that DNA from cancer folds in water into three-dimensional structures that are very different from healthy cells' DNA. It's quite literally the needle in a haystack. Because when you do a liquid biopsy for early detection of cancer, most of the DNA from blood contains a vast abundance of healthy DNA. And that's not of interest. What's of interest is to find the cancerous DNA. That's there only in trace.
Once we figured out what was going on, we could easily set up a system to detect the trace cancerous DNA. It binds to gold nanoparticles in water and changes color. The test takes 10 minutes, and you can detect it by eye. Red indicates cancer and blue doesn't.
We're very, very excited about where we go from here. We're starting to test the test on a greater number of cancers, in thousands of patient samples. We're looking to the scientific community to engage with us, and we're getting a really good response from groups around the world who are supplying more samples to us so we can test this more broadly.
We also are very interested in testing how early can we go with this test. Can we detect cancer through a simple blood test even before there are any symptoms whatsoever? If so, we might be able to convert a cancer diagnosis to something almost as good as a vaccine.
Of course, we have to watch what are called false positives. We don't want to be detecting people as positives when they don't have cancer, and so the technology needs to improve there. We see this version as the iPhone 1. We're interested in the iPhone 2, 3, 4, getting better and better.
Ultimately, I see this as something that would be like a pregnancy test you could take at your doctor's office. If it came back positive, your doctor could say, "Look, there's some news here, but actually, it's not bad news, it's good news. We've caught this so early that we will be able to manage this, and this won't be a problem for you."
If this were to be in routine use in the medical system, countless lives could be saved. Cancer is now becoming one of the biggest killers in the world. We're talking millions upon millions upon millions of people who are affected. This really motivates our work. We might make a difference there.
Kira Peikoff was the editor-in-chief of Leaps.org from 2017 to 2021. As a journalist, her work has appeared in The New York Times, Newsweek, Nautilus, Popular Mechanics, The New York Academy of Sciences, and other outlets. She is also the author of four suspense novels that explore controversial issues arising from scientific innovation: Living Proof, No Time to Die, Die Again Tomorrow, and Mother Knows Best. Peikoff holds a B.A. in Journalism from New York University and an M.S. in Bioethics from Columbia University. She lives in New Jersey with her husband and two young sons. Follow her on Twitter @KiraPeikoff.
Ethan Lindenberger, the Ohio teenager who sought out vaccinations after he was denied them as a child, recently testified before Congress about why his parents became anti-vaxxers. The trouble, he believes, stems from the pervasiveness of misinformation online.
"For my mother, her love and affection and care as a parent was used to push an agenda to create a false distress," he told the Senate Committee. His mother read posts on social media saying vaccines are dangerous, and that was enough to persuade her against them.
His story is an example of how widespread and harmful the current discourse on vaccinations is—and more importantly—how traditional strategies to convince people about the merits of vaccination have largely failed.
As responsible members of society, all of us have implicitly signed on to what ethicists call the "Social Contract" -- we agree to abide by certain moral and political rules of behavior. This is what our societal values, norms, and often governments are based upon. However, with the unprecedented rise of social media, alternative facts, and fake news, it is evident that our understanding—and application—of the social contract must also evolve.
Nowhere is this breakdown of societal norms more visible than in the failure to contain the spread of vaccine-preventable diseases like measles. What started off as unexplained episodes in New York City last October, mostly in communities that are under-vaccinated, has exploded into a national epidemic: 880 cases of measles across 24 states in 2019, according to the CDC (as of May 17, 2019). In fact, the United States is only eight months away from losing its "measles free" status, which would make it, after Venezuela, only the second country in North and South America without it.
The U.S. is not the only country facing this growing problem. Such constant and perilous reemergence of measles and other vaccine-preventable diseases in various parts of the world raises doubts about the efficacy of current vaccination policies. In addition to the loss of valuable life, these outbreaks lead to loss of millions of dollars in unnecessary expenditure of scarce healthcare resources. While we may be living through an age of information, we are also navigating an era whose hallmark is a massive onslaught on truth.
There is ample evidence on how these outbreaks start: low-vaccination rates. At the same time, there is evidence that 'educating' people with facts about the benefits of vaccination may not be effective. Indeed, human reasoning has a limit, and facts alone rarely change a person's opinion. In a fascinating report by researchers from the University of Pennsylvania, a small experiment revealed how "behavioral nudges" could inform policy decisions around vaccination.
In the reported experiment, the vaccination rate for employees of a company increased by 1.5 percent when they were prompted to name the date when they planned to get their flu shot. In the same experiment, when employees were prompted to name both a date and a time for their planned flu shot, vaccination rate increased by 4 percent.
This experiment is a part of an emerging field of behavioral economics—a scientific undertaking that uses insights from psychology to understand human decision-making. The field was born from a humbling realization that humans probably do not possess an unlimited capacity for processing information. Work in this field could inform how we can formulate vaccination policy that is effective, conserves healthcare resources, and is applicable to current societal norms.
Take, for instance, the case of Human Papilloma Virus (HPV) that can cause several types of cancers in both men and women. Research into the quality of physician communication has repeatedly revealed how lukewarm recommendations for HPV vaccination by primary care physicians likely contributes to under-immunization of eligible adolescents and can cause confusion for parents.
A randomized trial revealed the subtle power of "announcements" – direct, brief, assertive statements by physicians that assumed parents were ready to vaccinate their children. These announcements increased vaccination rates by 5.4 percent. Lengthy, open-ended dialogues demonstrated no benefit in vaccination rates. It seems that uncertainty from the physician translates to unwillingness from a parent.
Choice architecture is another compelling concept. The premise is simple: We hardly make any of our decisions in vacuum; the environment in which these decisions are made has an influence. If health systems were designed with these insights in mind, people would be more likely to make better choices—without being forced.
This theory, proposed by Richard Thaler, who won the 2017 Nobel Prize in Economics, was put to the test by physicians at the University of Pennsylvania. In their study, flu vaccination rates at primary care practices increased by 9.5 percent all because the staff implemented "active choice intervention" in their electronic health records—a prompt that nudged doctors and nurses to ask patients if they'd gotten the vaccine yet. This study illustrated how an intervention as simple as a reminder can save lives.
To be sure, some bioethicists do worry about implementing these policies. Are behavioral nudges akin to increased scrutiny or a burden for the disadvantaged? For example, would incentives to quit smoking unfairly target the poor, who are more likely to receive criticism for bad choices?
While this is a valid concern, behavioral economics offers one of the only ethical solutions to increasing vaccination rates by addressing the most critical—and often legal—challenge to universal vaccinations: mandates. Choice architecture and other interventions encourage and inform a choice, allowing an individual to retain his or her right to refuse unwanted treatment. This distinction is especially important, as evidence suggests that people who refuse vaccinations often do so as a result of cognitive biases – systematic errors in thinking resulting from emotional attachment or a lack of information.
For instance, people are prone to "confirmation bias," or a tendency to selectively believe in information that confirms their preexisting theories, rather than the available evidence. At the same time, people do not like mandates. In such situations, choice architecture provides a useful option: people are nudged to make the right choice via the design of health delivery systems, without needing policies that rely on force.
The measles outbreak is a sober reminder of how devastating it can be when the social contract breaks down and people fall prey to misinformation. But all is not lost. As we fight a larger societal battle against alternative facts, we now have another option in the trenches to subtly encourage people to make better choices.
Using insights from research in decision-making, we can all contribute meaningfully in controversial conversations with family, friends, neighbors, colleagues, and our representatives — and push for policies that protect those we care about. A little more than a hundred years ago, thousands of lives were routinely lost to preventable illnesses. We've come too far to let ignorance destroy us now.