Genetic Test Scores Predicting Intelligence Are Not the New Eugenics

A thinking person. (© psdesign1/Fotolia)

"A world where people are slotted according to their inborn ability – well, that is Gattaca. That is eugenics."

This was the assessment of Dr. Catherine Bliss, a sociologist who wrote a new book on social science genetics, when asked by MIT Technology Review about polygenic scores that can predict a person's intelligence or performance in school. Like a credit score, a polygenic score is a statistical tool that combines a lot of information about a person's genome into a single number. Fears about using polygenic scores for genetic discrimination are understandable, given this country's ugly history of using the science of heredity to justify atrocities like forcible sterilization. But polygenic scores are not the new eugenics. And rushing to discuss polygenic scores in dystopian terms only contributes to widespread public misunderstanding about genetics.

Let's begin with some background on how polygenic scores are developed. In a genome-wide association study, researchers conduct millions of statistical tests to identify small differences in people's DNA sequence that are correlated with differences in a target outcome (beyond what can be attributed to chance or ancestry differences). Successful studies of this sort require enormous sample sizes, but companies like 23andMe are now contributing genetic data from their consumers to research studies, and national biorepositories like U.K. Biobank have put genetic information from hundreds of thousands of people online. When applied to studying blood lipids or myopia, this kind of study strikes people as a straightforward and uncontroversial scientific tool. But it can also be conducted for cognitive and behavioral outcomes, like how many years of school a person has completed. When researchers have finished a genome-wide association study, they are left with a dataset with millions of rows (one for each genetic variant analyzed) and one column with the correlations between each variant and the outcome being studied.
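For readers who like to see the bookkeeping, here is a minimal sketch in Python of the shape of what such a study produces: one association estimate per variant. All of the data and dimensions here are toy assumptions, and real analyses use regression with ancestry covariates rather than raw correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real studies involve ~1 million people and millions of variants.
n_people, n_variants = 1_000, 5_000
genotypes = rng.integers(0, 3, size=(n_people, n_variants))  # 0/1/2 copies of each variant
years_of_school = rng.normal(13, 2, size=n_people)           # toy outcome

# One simple association test per variant: how strongly does this
# variant correlate with the outcome across people?
results = []
for j in range(n_variants):
    r = np.corrcoef(genotypes[:, j], years_of_school)[0, 1]
    results.append((j, r))

# The end product, in miniature: one row per variant, one column of
# estimated correlations with the outcome.
```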

The trick to polygenic scoring is to take these results and apply them to people who weren't participants in the original study. Measure the genes of a new person, weight each one of her millions of genetic variants by its correlation with educational attainment from a genome-wide association study, and then simply add everything up into a single number. Voilà: you've created a polygenic score for educational attainment. On its face, the idea of "scoring" a person's genotype does immediately suggest Gattaca-type applications. Can we now start screening embryos for their "inborn ability," as Bliss called it? Can we start genotyping toddlers to identify the budding geniuses among them?
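In code, that weight-and-add step is almost trivially short. A minimal sketch, with made-up weights standing in for the output of a real genome-wide association study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_variants = 5_000

# Illustrative inputs: per-variant weights from a discovery study,
# and a new person's genotype (0/1/2 copies of each variant).
gwas_weights = rng.normal(0, 0.01, size=n_variants)
genotype = rng.integers(0, 3, size=n_variants)

# The entire "scoring" step: weight each variant, then add everything up.
polygenic_score = float(genotype @ gwas_weights)
print(polygenic_score)
```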

The short answer is no. Here are four reasons why dystopian projections about polygenic scores are out of touch with the current science:

First, a polygenic score currently predicts the life outcomes of an individual child with a great deal of uncertainty. The amount of uncertainty around polygenic predictions will decrease in the future, as genetic discovery samples get bigger and genetic studies include more of the variation in the genome, including rare variants that are particular to a few families. But for now, knowing a child's polygenic score predicts his ultimate educational attainment about as well as knowing his family's income, and slightly worse than knowing how far his mother went in school. These pieces of information are also readily available about children before they are born, but no one is writing breathless think-pieces about the dystopian outcomes that will result from knowing whether a pregnant woman graduated from college.
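To make "predicts about as well as family income" concrete, one can compare the share of variance (R-squared) each predictor explains in the same sample. A toy sketch with simulated data; the effect sizes are assumptions chosen only to mirror the ranking described above, not estimates from any study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Three predictors, simulated as independent for simplicity.
pgs = rng.normal(size=n)            # child's polygenic score
family_income = rng.normal(size=n)  # standardized family income
mother_edu = rng.normal(size=n)     # standardized maternal education

# Made-up effect sizes: polygenic score on par with income,
# slightly behind mother's education.
years_of_school = (0.35 * pgs + 0.35 * family_income
                   + 0.40 * mother_edu + rng.normal(size=n))

def r_squared(x, y):
    """Share of variance in y explained by a single predictor x."""
    return np.corrcoef(x, y)[0, 1] ** 2

for name, x in [("polygenic score", pgs),
                ("family income", family_income),
                ("mother's education", mother_edu)]:
    print(f"{name}: R^2 = {r_squared(x, years_of_school):.3f}")
```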

Second, using polygenic scoring for embryo selection requires parents to create embryos using reproductive technology, rather than conceiving them by having sex. The prediction that many women will endure medically unnecessary IVF, in order to select the embryo with the highest polygenic score, glosses over the invasiveness, indignity, pain, and heartbreak that these hormonal and surgical procedures can entail.

Third, and counterintuitively, a polygenic score might be using DNA to measure aspects of the child's environment. Remember, a child inherits her DNA from her parents, who typically also shape the environment she grows up in. And children's environments respond to their unique personalities and temperaments. One Icelandic study found that parents' polygenic scores predicted their children's educational attainment, even when the score was constructed using only the half of the parental genome that the child didn't inherit. For example, imagine a mother with genetic variant X, which makes her more likely to smoke during pregnancy. Prenatal exposure to nicotine, in turn, affects her child's neurodevelopment, leading to behavior problems in school. The school responds to those behavior problems with suspension, causing him to miss out on instructional content. A genome-wide association study will collapse this long and winding causal path into a simple correlation: "genetic variant X is correlated with academic achievement." But a child's polygenic score, which includes variant X, will partly reflect his likelihood of being exposed to adverse prenatal and school environments.
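The logic of that Icelandic design is easiest to see in code. A parent carries two alleles per variant but transmits only one; a score built from the non-transmitted alleles can reach the child only through the environment the parent provides. A minimal sketch for a single parent, with made-up weights (the actual study used both parents' non-transmitted genomes):

```python
import numpy as np

rng = np.random.default_rng(3)
n_variants = 1_000

weights = rng.normal(0, 0.01, size=n_variants)  # made-up GWAS weights

# One parent: two alleles per variant (coded 0/1), one of which is transmitted.
parent_alleles = rng.integers(0, 2, size=(n_variants, 2))
transmitted_idx = rng.integers(0, 2, size=n_variants)

rows = np.arange(n_variants)
transmitted = parent_alleles[rows, transmitted_idx]
non_transmitted = parent_alleles[rows, 1 - transmitted_idx]

# Two scores from the same parent; only the first reaches the child as DNA.
transmitted_score = float(transmitted @ weights)
non_transmitted_score = float(non_transmitted @ weights)

# If the non-transmitted score still predicts the child's schooling,
# the polygenic score is partly measuring the environment parents create.
print(transmitted_score, non_transmitted_score)
```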

Finally, the phrase "DNA tests for IQ" makes for an attention-grabbing headline, but it's scientifically meaningless. As I've written previously, it makes sense to talk about a bacterial test for strep throat, because strep throat is a medical condition defined as having streptococcal bacteria growing in the back of your throat. If your strep test is positive, you have strep throat, no matter how serious your symptoms are. But a polygenic score is not a test "for" IQ, because intelligence is not defined at the level of someone's DNA. It doesn't matter how high your polygenic score is, if you can't reason abstractly or learn from experience. Equating your intelligence, a cognitive capacity that is tested behaviorally, with your polygenic score, a number that is a weighted sum of genetic variants discovered to be statistically associated with educational attainment in a hypothesis-free data mining exercise, is misleading about what intelligence is and is not.

So, if we're not going to build a Gattaca-style genetic hierarchy, what are polygenic scores good for? They are not useless. In fact, they give scientists a valuable new tool for studying how to improve children's lives. The task for many scientists like me, who are interested in understanding why some children do better in school than other children, is to disentangle correlations from causation. The best way to do that is to run an experiment where children are randomized to environments, but often a true experiment is unethical or impractical. You can't randomize children to be born to a teenage mother or to go to school with inexperienced teachers. By statistically controlling for some of the relevant genetic differences between people using a polygenic score, scientists are better able to identify potential environmental causes of differences in children's life outcomes. As we have seen with other methods from genetics, like twin studies, understanding genes illuminates the environment.
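Concretely, "statistically controlling for" a polygenic score usually means entering it as a covariate in a regression. A toy sketch of why that helps, using simulated data rather than any real study:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000

pgs = rng.normal(size=n)                      # child's polygenic score
environment = 0.5 * pgs + rng.normal(size=n)  # environment correlated with genotype
outcome = 0.4 * pgs + 0.3 * environment + rng.normal(size=n)

def ols(X, y):
    """Ordinary least squares; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive estimate: the environment's coefficient is inflated (~0.46 here)
# because genotype confounds the environment-outcome association.
print(ols(environment, outcome))

# Adding the polygenic score as a covariate recovers roughly the true 0.3.
print(ols(np.column_stack([environment, pgs]), outcome))
```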

Research that examines genetics in relation to social inequality, such as differences in higher education outcomes, will obviously remind people of the horrors of the eugenics movement. Wariness regarding how genetic science will be applied is certainly warranted. But, polygenic scores are not pure measures of "inborn ability," and genome-wide association studies of human intelligence and educational attainment are not inevitably ushering in a new eugenics age.

Paige Harden
Dr. Paige Harden is a tenured professor of clinical psychology at the University of Texas at Austin, where she is the Principal Investigator of the Developmental Behavior Genetics lab and co-director of the Texas Twin Project. Dr. Harden has published over 100 scientific articles on genetic influences on complex human behavior, including child cognitive development, academic achievement, risk-taking, mental health, sexual activity, and childbearing. In 2017, she was honored with a prestigious national award from the American Psychological Association for her distinguished scientific contributions to the study of genetics and human individual differences. Dr. Harden's research is supported by a Jacobs Foundation Early Career Research Fellowship, the Templeton Foundation, and the National Institutes of Health. She is a Public Voices Fellow with the Op-Ed Project.