Genetically Sequencing Healthy Babies Yielded Surprising Results
Today in Melrose, Massachusetts, Cora Stetson is the picture of good health, a bubbly, precocious 2-year-old. But Cora has two separate mutations in the gene that produces a critical enzyme called biotinidase, and her body makes only 40 percent of the normal level of that enzyme.
That's enough to pass conventional newborn (heel-stick) screening, but it may not be enough for normal brain development, putting baby Cora at risk for seizures and cognitive impairment. But thanks to an experimental study in which Cora's DNA was sequenced after birth, the condition was discovered, and she is being treated with a safe and inexpensive vitamin supplement.
Stories like these are beginning to emerge from the BabySeq Project, the first clinical trial in the world to systematically sequence healthy newborn infants. This trial was led by my research group with funding from the National Institutes of Health. While still controversial, it is pointing the way to a future in which adults, or even newborns, can receive comprehensive genetic analysis in order to determine their risk of future diseases and enable opportunities to prevent them.
Some believe that medicine is still not ready for genomic population screening, but others feel it is long overdue. After all, the sequencing of the Human Genome Project was completed in 2003, and with this milestone, it became feasible to sequence and interpret the genome of any human being. The costs have come down dramatically since then; an entire human genome can now be sequenced for about $800, although bioinformatic and medical interpretation can add another $200 to $2,000, depending upon the number of genes interrogated and the sophistication of the interpretive effort.
Two-year-old Cora Stetson, whose DNA sequencing after birth identified a potentially dangerous genetic mutation in time for her to receive preventive treatment.
(Photo courtesy of Robert Green)
The ability to sequence the human genome yielded extraordinary benefits in scientific discovery, disease diagnosis, and targeted cancer treatment. But the ability of genomes to detect health risks in advance, to actually predict the medical future of an individual, has been mired in controversy and slow to manifest. In particular, the oft-cited vision that healthy infants could be genetically tested at birth in order to predict and prevent the diseases they would encounter has proven far tougher to implement than anyone anticipated.
But in the last few years, the dream of predicting and preventing diseases through genomics, starting in childhood, is finally within reach. Why did it take so long? And what remains to be done?
Great Expectations
Part of the problem was the unrealistic expectations that had been building for years in advance of the genomic science itself. For example, the 1997 film Gattaca portrayed a near future in which the lifetime risk of disease was readily predicted the moment an infant was born. In the fanfare that accompanied the completion of the Human Genome Project, the notion of predicting and preventing future disease in an individual became a powerful meme that was used to inspire investment and public support for genomic research long before the tools were in place to make it happen.
Another part of the problem was the success of state-mandated newborn screening programs that began in the 1960s with biochemical "heel-stick" tests for babies with metabolic disorders. These programs have worked beautifully, costing only a few dollars per baby and saving thousands of infants from death and severe cognitive impairment. It seemed only logical that a new technology like genome sequencing would add power and promise to such programs. But instead of embracing the notion of newborn sequencing, newborn screening laboratories have thus far rejected the entire idea as too expensive, too ambiguous, and too threatening to the comfortable constituency that they had built within the public health framework.
"What can you find when you look as deeply as possible into the medical genomes of healthy individuals?"
Creating the Evidence Base for Preventive Genomics
Despite a number of obstacles, there are researchers who are exploring how to achieve the original vision of genomic testing as a tool for disease prediction and prevention. For example, in our NIH-funded MedSeq Project, we were the first to ask the question: "What can you find when you look as deeply as possible into the medical genomes of healthy individuals?"
Most people do not understand that genetic information comes in four separate categories: (1) dominant mutations putting the individual at risk for rare conditions like familial forms of heart disease or cancer; (2) recessive mutations putting the individual's children at risk for rare conditions like cystic fibrosis or PKU; (3) variants across the genome that can be tallied to construct polygenic risk scores for common conditions like heart disease or type 2 diabetes; and (4) variants that can influence drug metabolism or predict drug side effects, such as the muscle pain that occasionally occurs with statin use.
The technological and analytical challenges of our study were formidable, because we decided to systematically interrogate over 5000 disease-associated genes and report results in all four categories of genetic information directly to the primary care physicians for each of our volunteers. We enrolled 200 adults and found that everyone who was sequenced had medically relevant polygenic and pharmacogenomic results, over 90 percent carried recessive mutations that could have been important to reproduction, and an extraordinary 14.5 percent carried dominant mutations for rare genetic conditions.
A few years later we launched the BabySeq Project. In this study, we restricted the number of genes to include only those with child/adolescent onset that could benefit medically from early warning, and even so, we found 9.4 percent carried dominant mutations for rare conditions.
At first, our interpretation of the high proportion of apparently healthy individuals with dominant mutations for rare genetic conditions was simple: these conditions had lower "penetrance" than anticipated; in other words, only a small proportion of those who carried a dominant mutation would ever develop the disease. If this interpretation were to hold, then genetic risk information might be far less useful than we had hoped.
But then we circled back to each adult or infant in order to examine and test them for any possible features of the rare disease in question. When we did this, we were surprised to see that over a quarter of those carrying such mutations already showed subtle signs of the disease that had not even been suspected! Now our interpretation was different: we believe that genetic risk may be responsible for subclinical disease in a much higher proportion of people than has ever been suspected.
Meanwhile, colleagues of ours have been demonstrating that detailed analysis of polygenic risk scores can identify individuals at high risk for common conditions like heart disease. So adding up the medically relevant results in any given genome, we start to see that you can learn your risks for a rare monogenic condition, a common polygenic condition, a bad effect from a drug you might take in the future, or for having a child with a devastating recessive condition. Suddenly the information available in the genome of even an apparently healthy individual is looking more robust, and the prospect of preventive genomics is looking feasible.
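The "adding up" behind a polygenic risk score can be sketched in a few lines of code: it is, at its core, a weighted sum of the number of risk alleles a person carries at each variant. The variant IDs and effect weights below are hypothetical illustrations, not real GWAS results.

```python
# Sketch of a polygenic risk score (PRS): a weighted sum of risk-allele
# counts (0, 1, or 2) across genetic variants. The variant IDs and the
# effect weights here are made up for illustration only.

# Effect-size weights per variant (e.g., log odds ratios from a GWAS).
weights = {
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.30,
}

# One individual's genotype: count of risk alleles at each variant.
genotype = {
    "rs0001": 2,
    "rs0002": 1,
    "rs0003": 0,
}

def polygenic_risk_score(weights, genotype):
    """Return the weighted sum of risk-allele counts across variants."""
    return sum(w * genotype.get(variant, 0) for variant, w in weights.items())

score = polygenic_risk_score(weights, genotype)
print(round(score, 2))  # 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19
```

In practice, real scores sum over thousands to millions of variants and are compared against a population distribution, but the arithmetic is the same weighted tally.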
Preventive Genomics Arrives in Clinical Medicine
There is still considerable evidence to gather before we can recommend genomic screening for the entire population. For example, it is important to make sure that families who learn about such risks do not suffer harms or waste resources from excessive medical attention. And many doctors don't yet have guidance on how to use such information with their patients. But our research is convincing many people that preventive genomics is coming and that it will save lives.
In fact, we recently launched a Preventive Genomics Clinic at Brigham and Women's Hospital where information-seeking adults can obtain predictive genomic testing with the highest quality interpretation and medical context, and be coached over time in light of their disease risks toward a healthier outcome. Insurance doesn't yet cover such testing, so patients must pay out of pocket for now, but they can choose from a menu of genetic screening tests, all of which are more comprehensive than consumer-facing products. Genetic counseling is available but optional. So far, this service is for adults only, but sequencing for children will surely follow soon.
As the costs of sequencing and other omics technologies continue to decline, we will see both responsible and irresponsible marketing of genetic testing, and we will need to guard against unscientific claims. But at the same time, we must be far more imaginative and fast-moving in mainstream medicine than we have been to date in order to claim the emerging benefits of preventive genomics, where it is now clear that suffering can be averted and lives can be saved. The future has arrived if we are bold enough to grasp it.
Funding and Disclosures:
Dr. Green's research is supported by the National Institutes of Health, the Department of Defense and through donations to The Franca Sozzani Fund for Preventive Genomics. Dr. Green receives compensation for advising the following companies: AIA, Applied Therapeutics, Helix, Ohana, OptraHealth, Prudential, Verily and Veritas; and is co-founder and advisor to Genome Medical, Inc, a technology and services company providing genetics expertise to patients, providers, employers and care systems.
Today’s Focus on STEM Education Is Missing A Crucial Point
I once saw a fascinating TED talk on 3D printing. As I watched the presenter discuss the custom fabrication, not of plastic gears or figurines, but of living, implantable kidneys, I thought I was finally living in the world of Star Trek, and I experienced a flush of that eager, expectant enthusiasm I felt as a child looking toward the future. I looked at my current career and felt a rejuvenation of my commitment to teach young people the power of science.
Whether we are teachers or not, those of us who admire technology and innovation, and who wish to support progress, usually embrace the importance of educating the next generation of scientists and inventors. Growing a healthy technological civilization takes a lot of work, skill, and wisdom, and its continued health depends on future generations of competent thinkers. Thus, we may find it encouraging that there is currently an abundance of interest in STEM -- the common acronym for the study of science, technology, engineering, and math.
But education is as challenging an endeavor as science itself. Educating youth--if we want to do it right--requires as much thought, work, and expertise as discovering a cure or pioneering regenerative medicine. Before we give our money, time, or support to any particular school or policy, let's give some thought to the details of the educational process.
A Well-Balanced Diet
For one thing, STEM education cannot stand in isolation. The well-rounded education of human beings needs to include lessons learned both from a study of the physical world, and from a study of humanity. This is especially true for the basic education of children, but it is true even for college students. And even for those in science and engineering, there are important lessons to be learned from the study of history, literature, and art.
Scientists have their own emotions and values, and also need financial support. The fruits of their labor ultimately benefit other people. How are we all to function together in our division-of-labor society, without some knowledge of the way societies work? How are we to fully thrive and enjoy life, without some understanding of ourselves, our motives, our moral values, and our relationships to others? STEM education needs the humanities as a partner. That flourishing civilization we dream of requires both technical competence and informed life-choices.
Think for Yourself (Even in Science)
Perhaps even more important than what is taught, is the subject of how things are taught. We want our children to learn the skill of thinking independently, but even in the sciences, we often fail completely to demonstrate how. Instead of teaching science as a thinking process, we indoctrinate, using the grand discoveries of the great scientists as our sacred texts. But consider the words of Isaac Newton himself, regarding rote learning:
A Vulgar Mechanick can practice what he has been taught or seen done, but if he is in an error he knows not how to find it out and correct it, and if you put him out of his road he is at a stand. Whereas he that is able to reason nimbly and judiciously about figure, force, and motion, is never at rest till he gets over every rub.
If our goal is to help students "reason nimbly" about the world around them, as the great scientists themselves did, are we succeeding? When we "teach" middle school students about DNA or cellular respiration by presenting as our only supporting evidence cartoon pictures, are we showing students a process of discovery based on evidence and hard work? Or are we just training them to memorize and repeat what the authorities say?
A useful education needs to give students the skill of following a line of reasoning, of asking rational questions, and of chewing things through in their minds--even if we regard the material as beyond question. Besides feeding students a well-balanced diet of knowledge, healthy schooling needs to teach them to digest this information thoroughly.
Thinking Training
Now step back for a moment and think about the purpose of education. What's the point of all this formal schooling in the first place? Is it, as many of the proponents of STEM education might argue, to train students for a "good" career? That view may have some validity for young adults, who are beginning to choose electives in favored subjects, and have started to choose a direction for their career.
But for the basic education of children, this way of thinking is presumptuous and disastrous. I would argue that the central purpose of a basic education is not to teach children how to perform this or that particular skill, but simply to teach them to think clearly. We should not be aiming to provide job training, but thinking training. We should be helping children learn how to "reason nimbly" about the world around them, breathing life into the thinking processes by which they will grapple with the events and circumstances of their lives.
So as we admire innovation, dream of a wonderful future, and attempt to nurture the next generation of scientists and engineers, instead of obsessing over STEM education, let us focus on rational education. Let's worry about showing children how to think--about all the important things in life. Let's give them the basic facts of human existence -- physical and humanistic -- and show them how to fluently and logically understand them.
Some students will become the next generation of creators, and some will follow other careers, but together -- if they are educated properly -- they will continue to grow their inheritance, and to keep our civilization healthy and flourishing, in body and in mind.
Do New Tools Need New Ethics?
Scarcely a week goes by without the announcement of another breakthrough owing to advancing biotechnology. Recent examples include the use of gene editing tools to successfully alter human embryos or clone monkeys; new immunotherapy-based treatments offering longer lives or even potential cures for previously deadly cancers; and the creation of genetically altered mosquitoes using "gene drives" to quickly introduce changes into the population in an ecosystem and alter their capacity to carry disease.
Each of these examples puts pressure on current policy guidelines and approaches, some existing since the late 1970s, which were created to help guide the introduction of controversial new life sciences technologies. But do the policies that made sense decades ago continue to make sense today, or do the tools created during different eras in science demand new ethics guidelines and policies?
Advances in biotechnology aren't new, of course, and in fact have been the hallmark of science since the creation of the modern U.S. National Institutes of Health in the 1940s and similar government agencies elsewhere. Funding agencies focused on health sciences research with the hope of creating breakthroughs in human health, and along the way, basic science discoveries led to the creation of new scientific tools that offered the ability to approach life, death, and disease in fundamentally new ways.
For example, take the discovery in the 1970s of the "chemical scissors" in living cells called restriction enzymes, which could be controlled and used to introduce cuts at predictable locations in a strand of DNA. This led to the creation of tools that for the first time allowed for genetic modification of any organism with DNA, which meant bacteria, plants, animals, and even humans could in theory have harmful mutations repaired, but also that changes could be made to alter or even add genetic traits, with potentially ominous implications.
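Those "predictable locations" are short recognition sequences: the enzyme EcoRI, for instance, cuts DNA wherever the sequence GAATTC appears. Finding such sites is a simple string search, sketched below; the DNA sequence used here is made up for illustration.

```python
def find_sites(sequence, recognition_site):
    """Return the 0-based positions where a recognition site occurs."""
    positions = []
    start = sequence.find(recognition_site)
    while start != -1:
        positions.append(start)
        # Continue searching just past the last match found.
        start = sequence.find(recognition_site, start + 1)
    return positions

# EcoRI recognizes the sequence GAATTC; the DNA string below is invented.
dna = "ATGAATTCGGCTAGAATTCA"
print(find_sites(dna, "GAATTC"))  # [2, 13]
```

Because the recognition sequence is fixed, a researcher can predict exactly where an enzyme will cut any DNA molecule whose sequence is known, which is what made restriction enzymes such a powerful engineering tool.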
The scientists involved in that early research convened a small conference to discuss not only the science, but how to responsibly control its potential uses and their implications. The meeting became known as the Asilomar Conference for the meeting center where it was held, and is often noted as the prime example of the scientific community policing itself. While the Asilomar recommendations were not sufficient from a policy standpoint, they offered a blueprint on which policies could be based and presented a model of the scientific community setting responsible controls for itself.
But the environment for conducting science changed over the succeeding decades and it is dramatically different today than it was in the 1970s, 80s, or even the early 2000s. The regime for oversight and regulation that has provided controls for the introduction of so-called "gene therapy" in humans starting in the mid-1970s is beginning to show signs of fraying. The vast majority of such research was performed in the U.S., U.K., and Europe, where policies were largely harmonized. But as the tools for manipulating humans at the molecular level advanced, they also became more reliable and more precise, as well as cheaper and easier to use—think CRISPR—and therefore more accessible to more people in many more countries, many without clear oversight or policies laying out responsible controls.
As if to make the point through news headlines, scientists in China announced in 2017 that they had attempted to perform gene editing on in vitro human embryos to repair an inherited mutation for beta thalassemia--research that would not be permitted in the U.S. and most European countries and at the time was also banned in the U.K. Similarly, specialists from a reproductive medicine clinic in the U.S. announced in 2016 that they had performed a highly controversial reproductive procedure in which DNA from two women is combined (so-called "three-parent babies"), at a satellite clinic they had opened in Mexico to avoid existing prohibitions on the technique passed by the U.S. Congress in 2015.
In both cases, genetic changes were introduced into human embryos that if successful would lead to the birth of a child with genetically modified germline cells—the sperm in boys or eggs in girls—with those genetic changes passed on to all future generations of related offspring. Those are just two very recent examples, and it doesn't require much imagination to predict the list of controversial possible applications of advancing biotechnologies: attempts at genetic augmentation or even cloning in humans, and alterations of the natural environment with genetically engineered mosquitoes or other insects in areas with endemic disease. In fact, as soon as this month, scientists in Africa may release genetically modified mosquitoes for the first time.
The technical barriers are falling at a dramatic pace, but policy hasn't kept up, both in terms of what controls make sense and how to address what is an increasingly global challenge. There is no precedent for global-scale science policy, though that is exactly what this moment seems to demand. Mechanisms for policy at global scale are limited—think UN declarations, signatory countries, and sometimes international treaties—but all are slow, cumbersome, and have limited track records of success.
But not all the news is bad. There are ongoing efforts at international discussion, such as an international summit on human genome editing convened in 2015 by the National Academies of Sciences and Medicine (U.S.), the Royal Society (U.K.), and the Chinese Academy of Sciences (China), a follow-on international consensus committee whose report was issued in 2017, and an upcoming 2nd international summit in Hong Kong in November this year.
These efforts need to continue to focus less on common regulatory policies, which will be elusive if not impossible to create and implement, and more on common ground for the principles that ought to guide country-level rules. Such principles might include those from the list proposed by the international consensus committee, including transparency, due care, responsible science adhering to professional norms, promoting the wellbeing of those affected, and transnational cooperation. Work to create a set of shared norms is ongoing and worth continued effort as the relevant stakeholders attempt to navigate what can only be called a brave new world.