Abortions Before Fetal Viability Are Legal: Might Science and the Change on the Supreme Court Undermine That?
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
Viability—the potential for a fetus to survive outside the womb—is a core dividing line in American law. For almost 50 years, the Supreme Court of the United States has struck down laws that ban all or most abortions, ruling that women's constitutional rights include choosing to end pregnancies before the point of viability. Once viability is reached, however, states have a "compelling interest" in protecting fetal life. At that point, states can choose to ban or significantly restrict later-term abortions provided states allow an exception to preserve the life or health of the mother.
This distinction between a fetus that could survive outside its mother's body, albeit with significant medical intervention, and one that could not, is at the heart of the court's landmark 1973 decision in Roe v. Wade. The framework of viability remains central to the country's abortion law today, even as some states have passed laws in the name of protecting women's health that significantly undermine Roe. Over the last 30 years, the Supreme Court has upheld these laws, which have the effect of restricting pre-viability abortion access, imposing mandatory waiting periods, requiring parental consent for minors, and placing restrictions on abortion providers.
Today, the Guttmacher Institute reports that more than half of American women live in states whose laws are considered hostile to abortion, largely as a result of these intrusions on pre-viability abortion access. Nevertheless, the viability framework stands: while states can pass pre-viability abortion restrictions that (ostensibly) protect the health of the woman or that strike some kind of balance between women's rights and fetal life, it is only after viability that they can completely favor fetal life over the rights of the woman (with limited exceptions when the woman's life is threatened). As a result, judges have struck down certain states' so-called heartbeat laws, which tried to prohibit abortions after detection of a fetal heartbeat (as early as six weeks of pregnancy). Bans on abortion after 12 or 15 weeks' gestation have also been blocked.
Now, with a new Supreme Court Justice expected to be hostile to abortion rights, advances in the care of preterm babies and ongoing research on artificial wombs suggest that the point of viability already comes earlier than many assume and could soon be pushed radically earlier in gestation, potentially providing a legal basis for ever-earlier abortion bans.
Viability has always been a slippery notion on which to pin legal rights. It represents an inherently variable and medically shifting moment in the pregnancy timeline that the Roe majority opinion declined to firmly define, noting instead that "[v]iability is usually placed at about seven months (28 weeks) but may occur earlier, even at 24 weeks." Even in 1973, this definition was an optimistic generalization. Every baby is different, and while some 28-week infants born the year Roe was decided did indeed live into adulthood, most died at or shortly after birth. The prognosis for infants born at 24 weeks was much worse.
Today, a baby born at 28 weeks' gestation can be expected to do much better, largely due to the development of surfactant treatment in the early 1990s to help ease air into babies' lungs. Now the majority of babies born at 24 weeks survive, and several extremely premature babies, born just shy of 22 weeks' gestation, have lived into childhood. All this variability raises the question: Should the law take a very optimistic, if largely unrealistic, approach to defining viability and place it at 22 weeks, even though the overall survival rate for those preemies remains less than 10% today? Or should the law recognize that keeping a premature infant alive requires specialist care, meaning that actual viability differs not just from pregnancy to pregnancy but also from facility to facility and from country to country? A 24-week premature infant born in a rural area or in a developing nation may not be viable as a practical matter, while one born in a major U.S. city with access to state-of-the-art care has a greater than 70% chance of survival. Just as some extremely premature newborns survive, some full-term babies die before, during, or soon after birth, regardless of whether they have access to advanced medical care.
To be accurate, viability should be understood as pregnancy-specific and should take into account the healthcare resources available to the woman. But state abortion laws can't capture this degree of variability with fixed gestational limits. Instead, many draw a somewhat arbitrary line at 22, 24, or 28 weeks' gestation, regardless of the particulars of the pregnancy or the medical resources available in that state.
As variable and resource-dependent as viability is today, science may soon move that point even earlier. Ectogenesis is a term coined in 1923 for the growth of an organism outside the body. Long considered science fiction, this technology has made several key advances in the past few years, with scientists announcing in 2017 that they had successfully gestated premature lamb fetuses in an artificial womb for four weeks. The technology is currently in development for use in human fetuses between 22 and 23 weeks' gestation, and its developers will almost certainly seek to push viability even earlier in pregnancy.
Ectogenesis and other improvements in managing preterm birth deserve to be celebrated, offering new hope to the parents of very premature infants. But in the U.S., and in other nations whose abortion laws are fixed to viability, these same advances also pose a threat to abortion access. Abortion opponents have long sought to move the cutoff for legal abortions, and it is not hard to imagine a state prohibiting all abortions after 18 or 20 weeks by arguing that medical advances render this stage "the new viability," regardless of whether that level of advanced care is available to women in that state. If ectogenesis advances further, the limit could be moved to keep pace.
The Centers for Disease Control and Prevention reports that over 90% of abortions in America are performed at or before 13 weeks, meaning that in the short term, only a small number of women would be affected by shifting viability standards. Yet these women are in difficult situations and deserve care and consideration. Research has shown that women seeking later terminations often did not recognize that they were pregnant or had their dates quite wrong, while others report that they had trouble accessing a termination earlier in pregnancy, were afraid to tell their partner or parents, or only recently received a diagnosis of health problems with the fetus.
Shifts in viability over the past few decades have already affected these women, many of whom report struggling to find a provider willing to perform a termination at 18 or 20 weeks out of concern that the woman may have her dates wrong. Ever-earlier gestational limits would continue this chilling effect, making doctors leery of terminating a pregnancy that might be within 2–4 weeks of each new ban. Some states' existing gestational limits on abortion are also inconsistent with prenatal care, which includes genetic testing between 12 and 20 weeks' gestation, as well as an anatomy scan to check the fetus's organ development performed at approximately 20 weeks. If viability moves earlier, prenatal care will be further undermined.
Perhaps most importantly, earlier and earlier abortion bans are inconsistent with the rights and freedoms on which abortion access is based, including recognition of each woman's individual right to bodily integrity and decision-making authority over her own medical care. Those rights and freedoms become meaningless if abortion bans encroach into the weeks that women need to recognize they are pregnant, assess their options, seek medical advice, and access appropriate care. Fetal viability, with its shifting goalposts, isn't the best framework for abortion protection in light of advancing medical science.
Ideally, whether to have an abortion would be a decision that women make in consultation with their doctors, free of state interference. The vast majority of women already make this decision early in pregnancy; the few who come to the decision later do so because something has gone seriously wrong in their lives or with their pregnancies. If states insist on drawing lines based on historical measures of viability, at 24 or 26 or 28 weeks, they should stick with those gestational limits and admit that they no longer represent actual viability but correspond instead to some form of common morality about when the fetus has a protected, if not absolute, right to life. Women need a reasonable amount of time to make careful and informed decisions about whether to continue their pregnancies precisely because these decisions have a lasting impact on their bodies and their lives. To preserve that time, legislators and the courts should decouple abortion rights from ectogenesis and other advances in the care of extremely premature infants that move the point of viability ever earlier.
[Editor's Note: This article was updated after publication to reflect Amy Coney Barrett's confirmation. To read other articles in this special magazine issue, visit the e-reader version.]
Can Biotechnology Take the Allergies Out of Cats?
Amy Bitterman, who teaches at Rutgers Law School in Newark, gets enormous pleasure from her three mixed-breed rescue cats, Spike, Dee, and Lucy. To manage her chronically stuffy nose, three times a week she takes Allegra D, which combines the antihistamine fexofenadine with the decongestant pseudoephedrine. Amy's dog allergy is rougher: so severe that when her sister launched a business, Pet Care By Susan, from their home in Edison, New Jersey, they knew Susan would have to move elsewhere before she could board dogs. Amy has tried taking Allegra D before visiting their brother, who owns a Labrador Retriever, but she began sneezing and then developed watery eyes and phlegm in her chest.
"It gets harder and harder to breathe," she says.
Animal lovers have long dreamed of "hypo-allergenic" cats and dogs. Although no such thing exists to date, biotechnology is beginning to provide solutions for cat-lovers. Cats are a simpler challenge than dogs: dog allergies involve as many as seven proteins, but up to 95 percent of people who have cat allergies (estimated at 10 to 30 percent of the population in North America and Europe) react to one protein, Fel d1. Interestingly, cats don't seem to need Fel d1; some cats produce very little of it and have no known health problems.
The current technologies fight Fel d1 in ingenious ways. Nestle Purina reached the market first with a cat food, Pro Plan LiveClear, launched in the U.S. a year and a half ago. It contains Fel d1 antibodies from eggs that in effect neutralize the protein. HypoCat, a vaccine for cats, induces them to create neutralizing antibodies to their own Fel d1. It may be available in the United States by 2024, says Gary Jennings, chief executive officer of Saiba Animal Health, a University of Zurich spin-off. Another approach, using the gene-editing tool CRISPR to create a medication that would splice out Fel d1 genes in particular tissues, is the furthest from fruition.
"Our goal was to ensure that whatever we do has no negative impact on the cat."
Customer demand is high. "We already have a steady stream of allergic cat owners contacting us desperate to have access to the vaccine or participate in the testing program," Jennings said. "There is a major unmet medical need."
More than a third of Americans own a cat (while half own a dog), and pet ownership is rising. With more Americans living alone, pets may be just the right amount of company. But the number of Americans with asthma increases every year. Of that group, some 20 to 30 percent have pet allergies that could trigger a possibly deadly attack. It is not clear how many pets end up in shelters because their owners could no longer manage allergies; allergists commonly report that their patients won't give up a beloved companion.
No one can completely avoid Fel d1, which clings to clothing and lands everywhere cat-owners go, even in schools and new homes never occupied by cats. Myths among cat-lovers may lead them to underestimate their own level of risk. Short hair doesn't help: the length of cat hair doesn't affect the production of Fel d1. Bathing your cat will likely upset it and accomplish little. Washing cuts the amount on its skin and fur only for two days. In one study, researchers measured the Fel d1 in the ambient air in a small chamber occupied by a cat—and then washed the cat. Three hours later, with the cat in the chamber again, the measurable Fel d1 in the air was lower. But this benefit was gone after 24 hours.
For years, the best option has been shots for people that prompt protective antibodies. Bitterman received dog and cat allergy injections twice a week as a child. However, these treatments require up to 100 injections over three to five years, and, as in her case, the effect may be partial or wear off. Even if you do opt for shots, treating the cat also makes sense, since you could protect more than one allergic member of your household and any allergic visitors as well.
An Allergy-Neutralizing Diet
Cats produce much of their Fel d1 in their saliva, which then spreads it to their fur when they groom, observed Nestle Purina immunologist Ebenezer Satyaraj. He realized that this made saliva, and therefore a cat's mouth, an unusually effective site for change. Hens exposed to Fel d1 produce their own antibodies, which survive in their eggs. The team coated LiveClear food with a powder form of these eggs; once in a cat's mouth, the chicken antibody binds to the Fel d1 in the cat's saliva, neutralizing it.
The results are partial: In a study with 105 cats, the level of active Fel d1 in their fur had dropped on average by 47 percent after ten weeks of eating LiveClear. Cats that produced more Fel d1 at baseline had a more robust response, with a drop of up to 71 percent. A safety study found no effects on cats after six months on the diet. "Our goal was to ensure that whatever we do has no negative impact on the cat," Satyaraj said. Might a dog food that minimizes dog allergens be on the way? "There is some early work," he said.
A Vaccine
This is a year when vaccines changed the lives of billions. Saiba's vaccine, HypoCat, delivers recombinant Fel d1 coupled to the coat of a plant virus (cucumber mosaic virus), without any viral genetic information. The viral coat serves as a carrier. A cat would need shots once or twice a year to produce antibodies that neutralize Fel d1.
HypoCat works much like any vaccine, with the twist that the enemy is the cat's own protein. Is that safe? Saiba's team has followed 70 cats treated with the vaccine for two years, and they remain healthy. Again, the active Fel d1 doesn't disappear but diminishes. The team asked 10 people with cat allergies to report on their symptoms when they petted their vaccinated cats. Eight of them could pet their cat for nearly a half hour before their symptoms began, compared with an average of 17 minutes before the vaccine.
Jennings hopes to develop a HypoDog shot with a similar approach. However, the goal would be to target four or five proteins in one vaccine, and that increases the risk of hurting the dog. In the meantime, allergic dog-lovers considering an expensive breeder dog might think again: independent research does not support the idea that any breed of dog produces less dander in the home. In fact, one well-designed study found that Spanish water dogs, Airedales, poodles, and Labradoodles (breeds touted as hypo-allergenic) had significantly more of the most common dog allergen on their coats than the ordinary Labs and other dogs in the control group.
Gene Editing
One day you might be able to bring your cat to the vet once a year for an injection that would modify specific tissues so they wouldn't produce Fel d1.
Nicole Brackett, a postdoctoral scientist at Virginia-based Indoor Biotechnologies, which specializes in manufacturing biologics for allergy and asthma, has most recently used CRISPR to identify Fel d1 genetic sequences in cells from 50 domestic cats and 24 exotic ones. She learned that the sequences vary substantially from one cat to the next. This discovery, she says, backs up the observation that Fel d1 doesn't serve a vital purpose.
The next step will be a CRISPR knockout of the relevant genes in cells from feline salivary glands, a prime source of Fel d1. Although the company is considering using CRISPR to edit the genes in a cat embryo and possibly produce a Fel d1-free cat, designer cats won't be its ultimate product. Instead, the company aims to produce injections that could treat any cat.
Reducing pet allergens at home could have a compound benefit, Indoor Biotechnologies founder Martin Chapman, an immunologist, notes: "When you dampen down the response to one allergen, you could also dampen it down to multiple allergens." As allergies become more common around the world, that's especially good news.
Earlier this year, California-based Ambry Genetics announced that it was discontinuing a test meant to estimate a person's risk of developing prostate or breast cancer. The test looks for variations in a person's DNA that are known to be associated with these cancers.
Known as a polygenic risk score, this type of test adds up the effects of variants in many genes — often in the dozens or hundreds — and calculates a person's risk of developing a particular health condition compared to other people. In this way, polygenic risk scores are different from traditional genetic tests that look for mutations in single genes, such as BRCA1 and BRCA2, which raise the risk of breast cancer.
Traditional genetic tests look for mutations that are relatively rare in the general population but have a large impact on a person's disease risk, like BRCA1 and BRCA2. By contrast, polygenic risk scores scan for more common genetic variants that, on their own, have a small effect on risk. Added together, however, they can raise a person's risk for developing disease.
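To make that arithmetic concrete, here is a minimal sketch in Python of how such a score is assembled. The variant names, effect sizes, genotypes, and reference values are invented for illustration; they are not drawn from any real test.

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# Variant IDs, effect sizes, and genotypes are invented; real scores
# use published effect sizes from genome-wide association studies
# and often hundreds or thousands of variants.

# Effect size (log odds ratio) per copy of the risk allele.
effect_sizes = {"rs0001": 0.12, "rs0002": 0.05, "rs0003": -0.08}

# One person's genotype: copies of the risk allele (0, 1, or 2) at each variant.
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

# The raw score adds up each variant's small contribution.
raw_score = sum(effect_sizes[v] * genotype[v] for v in effect_sizes)

# The raw number only means something relative to other people, so it is
# typically reported as a percentile within a scored reference population.
reference_scores = [-0.30, -0.10, 0.00, 0.05, 0.10, 0.16, 0.20, 0.30]
percentile = 100 * sum(s < raw_score for s in reference_scores) / len(reference_scores)

print(f"raw score = {raw_score:.2f}, higher than {percentile:.1f}% of the reference group")
```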
These scores could become a part of routine healthcare in the next few years. Researchers are developing polygenic risk scores for cancer, heart disease, diabetes, and even depression. But before they can be rolled out widely, they'll have to overcome a key limitation: racial bias.
"The issue with these polygenic risk scores is that the scientific studies which they're based on have primarily been done in individuals of European ancestry," says Sara Riordan, president of the National Society of Genetics Counselors. These scores are calculated by comparing the genetic data of people with and without a particular disease. To make these scores accurate, researchers need genetic data from tens or hundreds of thousands of people.
A 2018 analysis found that 78% of participants included in such large genetic studies, known as genome-wide association studies, were of European descent. That's a problem, because certain disease-associated genetic variants don't appear equally across different racial and ethnic groups. For example, a particular variant in the TTR gene, known as V122I, occurs more frequently in people of African descent. In recent years, the variant has been found in 3 to 4 percent of individuals of African ancestry in the United States. Mutations in this gene can cause misfolded transthyretin protein to build up in the heart, leading to a higher risk of heart failure. A polygenic risk score for heart disease based on genetic data from mostly white people likely wouldn't give accurate risk information to African Americans.
Accuracy in genetic testing matters because such polygenic risk scores could help patients and their doctors make better decisions about their healthcare.
For instance, if a polygenic risk score determines that a woman is at higher-than-average risk of breast cancer, her doctor might recommend more frequent mammograms — X-rays that take a picture of the breast. Or, if a risk score reveals that a patient is more predisposed to heart attack, a doctor might prescribe preventive statins, a type of cholesterol-lowering drug.
"Let's be clear, these are not diagnostic tools," says Alicia Martin, a population and statistical geneticist at the Broad Institute of MIT and Harvard. "We can't use a polygenic score to say you will or will not get breast cancer or have a heart attack."
But combining a patient's polygenic risk score with other factors that affect disease risk — like age, weight, medication use or smoking status — may provide a better sense of how likely they are to develop a specific health condition than considering any one risk factor on its own. The accuracy of polygenic risk scores becomes even more important when considering that these scores may be used to guide medication prescribing or help patients make decisions about preventive surgery, such as a mastectomy.
In a study published in September, researchers used results from large genetics studies of people with European ancestry and data from the UK Biobank to calculate polygenic risk scores for breast and prostate cancer for people with African, East Asian, European and South Asian ancestry. They found that they could identify individuals at higher risk of breast and prostate cancer when they scaled the risk scores within each group, but the authors say this is only a temporary solution. Recruiting more diverse participants for genetics studies will lead to better cancer detection and prevention, they conclude.
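The within-group scaling the authors describe amounts to standardizing each person's raw score against an ancestry-matched reference distribution rather than a pooled one. A minimal sketch, with made-up groups and numbers:

```python
# Sketch of within-group scaling: judge each raw score against an
# ancestry-matched reference rather than a pooled one. Group labels
# and scores are invented; the study's actual method is more involved.
from statistics import mean, stdev

raw_scores = {
    "ancestry_A": [0.10, 0.15, 0.22, 0.31, 0.12],  # one reference group
    "ancestry_B": [0.45, 0.52, 0.48, 0.61, 0.50],  # another, with a shifted baseline
}

def scale_within_group(scores):
    """Convert raw scores to z-scores relative to their own group."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

for group, scores in raw_scores.items():
    print(group, [round(z, 2) for z in scale_within_group(scores)])

# Pooled scaling would flag nearly everyone in ancestry_B as "high risk"
# simply because the score was calibrated on a different baseline.
```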
Recent efforts to do just that are expected to make these scores more accurate in the future. Until then, some genetics companies are struggling to overcome the European bias in their tests.
Acknowledging the limitations of its polygenic risk score, Ambry Genetics said in April that it would stop offering the test until it could be recalibrated. The company launched the test, known as AmbryScore, in 2018.
"After careful consideration, we have decided to discontinue AmbryScore to help reduce disparities in access to genetic testing and to stay aligned with current guidelines," the company said in an email to customers. "Due to limited data across ethnic populations, most polygenic risk scores, including AmbryScore, have not been validated for use in patients of diverse backgrounds." (The company did not make a spokesperson available for an interview for this story.)
In September 2020, the National Comprehensive Cancer Network updated its guidelines to advise against the use of polygenic risk scores in routine patient care because of "significant limitations in interpretation." The nonprofit, which represents 31 major cancer centers across the United States, said such scores could continue to be used experimentally in clinical trials, however.
Holly Pederson, director of Medical Breast Services at the Cleveland Clinic, says the realization that polygenic risk scores may not be accurate for all races and ethnicities is relatively recent. Pederson worked with Salt Lake City-based Myriad Genetics, a leading provider of genetic tests, to improve the accuracy of its polygenic risk score for breast cancer.
The company announced in August that it had recalibrated the test, called RiskScore, for women of all ancestries. Previously, Myriad did not offer its polygenic risk score to women who self-reported any ancestry other than sole European or Ashkenazi ancestry.
"Black women, while they have a similar rate of breast cancer to white women, if not lower, had twice as high of a polygenic risk score because the development and validation of the model was done in white populations," Pederson said of the old test. In other words, Myriad's old test would have shown that a Black woman had twice as high of a risk for breast cancer compared to the average woman even if she was at low or average risk.
To develop and validate the new score, Pederson and other researchers assessed data from more than 275,000 women, including more than 31,000 African American women and nearly 50,000 women of East Asian descent. They looked at 56 different genetic variants associated with ancestry and 93 associated with breast cancer. Interestingly, they found that at least 95% of the breast cancer variants were similar among the different ancestries.
The company says the resulting test is now more accurate for all women across the board, but Pederson cautions that it's still slightly less accurate for Black women.
"It's not only the lack of data from Black women that leads to inaccuracies and a lack of validation in these types of risk models, it's also the pure genomic diversity of Africa," she says, noting that Africa is the most genetically diverse continent on the planet. "We just need more data, not only in American Black women but in African women to really further characterize that continent."
Martin says it's problematic that such scores are most accurate for white people because they could further exacerbate health disparities in traditionally underserved groups, such as Black Americans. "If we were to set up really representative massive genetic studies, we would do a much better job at predicting genetic risk for everybody," she says.
Earlier this year, the National Institutes of Health awarded $38 million to researchers to improve the accuracy of polygenic risk scores in diverse populations. Researchers will create new genome datasets and pool information from existing ones in an effort to diversify the data that polygenic scores rely on. They plan to make these datasets available to other scientists to use.
"By having adequate representation, we can ensure that the results of a genetic test are widely applicable," Riordan says.