Genetically Sequencing Healthy Babies Yielded Surprising Results
Today in Melrose, Massachusetts, Cora Stetson is the picture of good health: a bubbly, precocious 2-year-old. But Cora carries two separate mutations in the gene that produces a critical enzyme called biotinidase, and her body makes only 40 percent of the normal level of that enzyme.
That's enough to pass conventional newborn (heel-stick) screening, but it may not be enough for normal brain development, putting baby Cora at risk for seizures and cognitive impairment. But thanks to an experimental study in which Cora's DNA was sequenced after birth, the condition was discovered, and she is now being treated with a safe and inexpensive vitamin supplement.
Stories like these are beginning to emerge from the BabySeq Project, the first clinical trial in the world to systematically sequence healthy newborn infants, led by my research group with funding from the National Institutes of Health. While still controversial, it points the way to a future in which adults, or even newborns, can receive comprehensive genetic analysis to determine their risk of future diseases and create opportunities to prevent them.
Some believe that medicine is still not ready for genomic population screening, but others feel it is long overdue. After all, the sequencing of the Human Genome Project was completed in 2003, and with this milestone, it became feasible to sequence and interpret the genome of any human being. The costs have come down dramatically since then; an entire human genome can now be sequenced for about $800, although bioinformatic and medical interpretation can add another $200 to $2,000, depending upon the number of genes interrogated and the sophistication of the interpretive effort.
Two-year-old Cora Stetson, whose DNA sequencing after birth identified a potentially dangerous genetic mutation in time for her to receive preventive treatment.
(Photo courtesy of Robert Green)
The ability to sequence the human genome yielded extraordinary benefits in scientific discovery, disease diagnosis, and targeted cancer treatment. But the ability of genomes to detect health risks in advance, to actually predict the medical future of an individual, has been mired in controversy and slow to manifest. In particular, the oft-cited vision that healthy infants could be genetically tested at birth in order to predict and prevent the diseases they would encounter has proven far tougher to implement than anyone anticipated.
But in the last few years, the dream of predicting and preventing diseases through genomics, starting in childhood, is finally within reach. Why did it take so long? And what remains to be done?
Great Expectations
Part of the problem was the unrealistic expectations that had been building for years in advance of the genomic science itself. For example, the 1997 film Gattaca portrayed a near future in which the lifetime risk of disease was readily predicted the moment an infant was born. In the fanfare that accompanied the completion of the Human Genome Project, the notion of predicting and preventing future disease in an individual became a powerful meme that was used to inspire investment and public support for genomic research long before the tools were in place to make it happen.
Another part of the problem was the success of state-mandated newborn screening programs that began in the 1960s with biochemical "heel-stick" tests that screened babies for metabolic disorders. These programs have worked beautifully, costing only a few dollars per baby and saving thousands of infants from death and severe cognitive impairment. It seemed only logical that a new technology like genome sequencing would add power and promise to such programs. But instead of embracing the notion of newborn sequencing, newborn screening laboratories have thus far rejected the entire idea as too expensive, too ambiguous, and too threatening to the comfortable constituency that they had built within the public health framework.
"What can you find when you look as deeply as possible into the medical genomes of healthy individuals?"
Creating the Evidence Base for Preventive Genomics
Despite a number of obstacles, there are researchers who are exploring how to achieve the original vision of genomic testing as a tool for disease prediction and prevention. For example, in our NIH-funded MedSeq Project, we were the first to ask the question: "What can you find when you look as deeply as possible into the medical genomes of healthy individuals?"
Most people do not understand that genetic information comes in four separate categories: (1) dominant mutations putting the individual at risk for rare conditions like familial forms of heart disease or cancer, (2) recessive mutations putting the individual's children at risk for rare conditions like cystic fibrosis or PKU, (3) variants across the genome that can be tallied to construct polygenic risk scores for common conditions like heart disease or type 2 diabetes, and (4) variants that can influence drug metabolism or predict drug side effects such as the muscle pain that occasionally occurs with statin use.
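The polygenic risk scores in category (3) are, at their core, a weighted sum: each risk variant a person carries contributes its risk-allele count (0, 1, or 2) multiplied by an effect-size weight estimated in large association studies. A minimal sketch of that tally (the variant IDs and weights below are invented for illustration, not real study values):

```python
# Minimal sketch of a polygenic risk score (PRS) tally.
# In practice, effect weights come from genome-wide association
# studies; the variants and weights here are purely illustrative.

# effect_weights: variant ID -> per-allele effect weight
effect_weights = {
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.30,
}

# genotype: variant ID -> count of risk alleles carried (0, 1, or 2)
genotype = {
    "rs0001": 2,
    "rs0002": 1,
    "rs0003": 0,
}

def polygenic_risk_score(genotype, effect_weights):
    """Sum each variant's risk-allele count times its effect weight."""
    return sum(
        effect_weights[variant] * count
        for variant, count in genotype.items()
        if variant in effect_weights
    )

score = polygenic_risk_score(genotype, effect_weights)
print(round(score, 2))  # 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19
```

Real scores aggregate thousands to millions of variants in this way, and the raw sum is then compared against a reference population to place an individual in a risk percentile.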
The technological and analytical challenges of our study were formidable, because we decided to systematically interrogate over 5,000 disease-associated genes and report results in all four categories of genetic information directly to the primary care physicians for each of our volunteers. We enrolled 200 adults and found that everyone who was sequenced had medically relevant polygenic and pharmacogenomic results, over 90 percent carried recessive mutations that could have been important to reproduction, and an extraordinary 14.5 percent carried dominant mutations for rare genetic conditions.
A few years later we launched the BabySeq Project. In this study, we restricted the number of genes to include only those with child/adolescent onset that could benefit medically from early warning, and even so, we found 9.4 percent carried dominant mutations for rare conditions.
At first, our interpretation of the high proportion of apparently healthy individuals with dominant mutations for rare genetic conditions was simple: these conditions had lower "penetrance" than anticipated; in other words, only a small proportion of those who carried the dominant mutation would get the disease. If this interpretation were to hold, then genetic risk information might be far less useful than we had hoped.
But then we circled back to each adult or infant in order to examine and test them for any possible features of the rare disease in question. When we did, we were surprised to find that over a quarter of those carrying such mutations already showed subtle, previously unsuspected signs of the disease. Now our interpretation was different: we now believe that genetic risk may be responsible for subclinical disease in a far higher proportion of people than anyone had suspected.
Meanwhile, colleagues of ours have been demonstrating that detailed analysis of polygenic risk scores can identify individuals at high risk for common conditions like heart disease. So adding up the medically relevant results in any given genome, we start to see that you can learn your risks for a rare monogenic condition, a common polygenic condition, a bad effect from a drug you might take in the future, or for having a child with a devastating recessive condition. Suddenly the information available in the genome of even an apparently healthy individual is looking more robust, and the prospect of preventive genomics is looking feasible.
Preventive Genomics Arrives in Clinical Medicine
There is still considerable evidence to gather before we can recommend genomic screening for the entire population. For example, it is important to make sure that families who learn about such risks do not suffer harms or waste resources from excessive medical attention. And many doctors don't yet have guidance on how to use such information with their patients. But our research is convincing many people that preventive genomics is coming and that it will save lives.
In fact, we recently launched a Preventive Genomics Clinic at Brigham and Women's Hospital where information-seeking adults can obtain predictive genomic testing with the highest quality interpretation and medical context, and be coached over time toward healthier outcomes in light of their disease risks. Insurance doesn't yet cover such testing, so patients must pay out of pocket for now, but they can choose from a menu of genetic screening tests, all of which are more comprehensive than consumer-facing products. Genetic counseling is available but optional. So far, this service is for adults only, but sequencing for children will surely follow soon.
As the costs of sequencing and other omics technologies continue to decline, we will see both responsible and irresponsible marketing of genetic testing, and we will need to guard against unscientific claims. But at the same time, we must be far more imaginative and fast-moving in mainstream medicine than we have been to date in order to claim the emerging benefits of preventive genomics where it is now clear that suffering can be averted, and lives can be saved. The future has arrived if we are bold enough to grasp it.
Funding and Disclosures:
Dr. Green's research is supported by the National Institutes of Health, the Department of Defense and through donations to The Franca Sozzani Fund for Preventive Genomics. Dr. Green receives compensation for advising the following companies: AIA, Applied Therapeutics, Helix, Ohana, OptraHealth, Prudential, Verily and Veritas; and is co-founder and advisor to Genome Medical, Inc, a technology and services company providing genetics expertise to patients, providers, employers and care systems.
Why You Can’t Blame Your Behavior On Your Gut Microbiome
See a hot pizza sitting on a table. Count the missing pieces: three. They tasted delicious and yes, you've eaten enough—but you're still eyeing a fourth piece. Do you reach out and take it, or not?
"The difficulty comes in translating the animal data into the human situation."
Your behavior in that next moment is anything but simple: as far as scientists can tell, it comes down to a complex confluence of circumstances, genes, and personality characteristics. And the latest proposed addition to this list is the gut microbiome: the community of microorganisms, including bacteria, archaea, fungi, and viruses, that resides full-time in your digestive tract.
It is entirely plausible that your gut microbiome might influence your behavior, scientists say: a well-known communication channel, called the gut-brain axis, runs both ways between your brain and your digestive tract. Gut bugs, which are close to the action, could amplify or dampen the messages, thereby shaping how you act. Messages about food-related behaviors could be particularly susceptible to interception by these microorganisms.
Perhaps it's convenient to imagine your resident microbes sitting greedily in your gut, crying for more pizza and tricking your brain into getting them what they want. The problem is, there's a distinct lack of scientific support for this actually happening in humans.
John Bienenstock, professor of pathology and molecular medicine at McMaster University (Canada), has worked on the gut microbiome-behavior connection for several decades. "There's a lot of evidence now in animals—particularly in mice," he says.
Indeed, his group and others have shown that, by eliminating or altering gut bugs, they can make mice exhibit different social behaviors or respond more coolly to stress; they can even make a shy mouse turn brave. But Bienenstock cautions: "The difficulty comes in translating the animal data into the human situation."
Not that it's an easy task to figure out which aspects of animal research are relevant to people in everyday life. Animal behaviors are worlds apart from what we do on a daily basis—from brushing our teeth to navigating complex social situations.
Elaine Hsiao, assistant professor of integrative biology and physiology at UCLA, has also looked closely at the microbiome-gut-brain axis in mice and pondered how to translate the results into humans. She says, "Both the microbiome and behavior vary substantially [from person to person] and can be strongly influenced by environmental factors—which makes it difficult to run a well-controlled study on effects of the microbiome on human behavior."
She adds, "Human behaviors are very complex and the metrics used to quantify behavior are often not precise enough to derive clear interpretations." So the challenge is not only to figure out what people actually do, but also to give those actions numerical codes that allow them to be compared against other actions.
Hsiao and colleagues are nevertheless attempting to make connections: building on some animal research, their recent study found a three-way association in humans between molecules produced by gut bacteria (specifically, indole metabolites), the connectedness of different brain regions as measured through functional magnetic resonance imaging, and behavioral measures drawn from questionnaires assessing food addiction and anxiety.
Meanwhile, other studies have found it may be possible to change a person's behavior through either probiotics or gut-localized antibiotics. Several probiotics even show promise for altering behavior in clinical conditions like depression. Yet how these phenomena occur is still unknown and, overall, scientists lack solid evidence on whether and how gut microbes shape human behavior.
Bienenstock, however, is one of many continuing to investigate. He says, "Some of these observations are very striking. They're so striking that clearly something's up."
He says that after identifying a behavior-changing bug, or set of bugs, in mice: "The obvious next thing is: How [is it] occurring? Why is it occurring? What are the molecules involved?" Bienenstock favors the approach of nailing down a mechanism in animal models before starting to investigate its relevance to humans.
He explains, "[This preclinical work] should allow us to identify either target molecules or target pathways, which then can be translated."
Bienenstock also acknowledges the 'hype' that appears to surround this particular field of study. Despite the decidedly slow emergence of data linking the microbiome to human behavior, scientific reviews have appeared in brain-related scientific journals—for instance, Trends in Cognitive Sciences and CNS Drugs—with remarkable frequency. Not only this, but popular books and media articles have given the idea wings.
It might be compelling to blame our microbiomes for behaviors we dislike or can't explain—like reaching for another slice of pizza. But until the scientific observations yield stronger results, we still lack proof that we're doing what we do—or eating what we eat—exclusively at the behest of our resident microorganisms.
Who’s Responsible If a Scientist’s Work Is Used for Harm?
Are scientists morally responsible for the uses of their work? To some extent, yes. Scientists are responsible for both the uses that they intend with their work and for some of the uses they don't intend. This is because scientists bear the same moral responsibilities that we all bear, and we are all responsible for the ends we intend to help bring about and for some (but not all) of those we don't.
It should be obvious that the intended outcomes of our work are within our sphere of moral responsibility. If a scientist intends to help alleviate hunger (by, for example, breeding new drought-resistant crop strains), and they succeed in that goal, they are morally responsible for that success, and we would praise them accordingly. If a scientist intends to produce a new weapon of mass destruction (by, for example, developing a lethal strain of a virus), and they are unfortunately successful, they are morally responsible for that as well, and we would blame them accordingly. Intention matters a great deal, and we are most praised or blamed for what we intend to accomplish with our work.
But we are responsible for more than just the intended outcomes of our choices. We are also responsible for unintended but readily foreseeable uses of our work. This is in part because we are all responsible for thinking not just about what we intend, but also about what else might follow from our chosen course of action. In cases where severe and egregious harms are plausible, we should act in ways that strive to prevent such outcomes. To not think about plausible unintended effects is to be negligent; to recognize, but do nothing about, such effects is to be reckless. To be negligent or reckless is to be morally irresponsible, and thus blameworthy. Each of us should think beyond what we intend to do, reflecting carefully on what our course of action could entail, and adjusting our choices accordingly.
It is this area, of unintended but readily foreseeable (and plausible) impacts, that often creates the most difficulty for scientists. Many scientists become so focused on their often demanding work, and on achieving their intended goals, that they fail to stop and think about other possible implications.
Debates over "dual-use" research exemplify these concerns, where harmful potential uses of research might mean the work should not be pursued, or the full publication of results should be curtailed. When researchers perform gain-of-function research, pushing viruses to become more transmissible or more deadly, it is clear how dangerous such work could be in the wrong hands. In these cases, it is not enough to simply claim that such uses were not intended and that it is someone else's job to ensure that the materials remain secure. We know securing infectious materials can be error-prone (recall events at the CDC and the FDA).
In some areas of research, scientists are already worrying about the unintended possible downsides of their work.
Further, securing viral strains does nothing to secure the knowledge that could allow for reproducing the viral strain (particularly when the methodologies and/or genetic sequences are published after the fact, as was the case for H5N1 and horsepox). It is, in fact, the researcher's moral responsibility to be concerned not just about the biosafety controls in their own labs, but also which projects should be pursued (Will the gain in knowledge be worth the possible downsides?) and which results should be published (Will a result make it easier for a malicious actor to deploy a new bioweapon?).
We have not yet had (to my knowledge) a use of gain-of-function research to harm people. If that does happen, those who actually released the virus on the public will be most blameworthy; intentions do matter. But the scientists who developed the knowledge deployed by the malicious actors may also be held blameworthy, especially if the malicious use was easy to foresee, even if it was not pleasant to think about.
In some areas of research, scientists are already worrying about the unintended possible downsides of their work. Scientists investigating gene drives have thought beyond the immediate desired benefits of their work (e.g. reducing invasive species populations) and considered the possible spread of gene drives to untargeted populations. Modeling the impacts of such possibilities has led some researchers to pull back from particular deployment possibilities. It is precisely such thinking through both the intended and unintended possible outcomes that is needed for responsible work.
The world has gotten too small, too vulnerable for scientists to act as though they are not responsible for the uses of their work, intended or not. They must seek to ensure that, as the recent AAAS Statement on Scientific Freedom and Responsibility demands, their work is done "in the interest of humanity." This requires thinking beyond one's intentions, potentially drawing on the expertise of others, sometimes from other disciplines, to help explore implications. The need for such thinking does not guarantee good outcomes, but it will ensure that we are doing the best we can, and that is what being morally responsible is all about.