Abortions Before Fetal Viability Are Legal: Might Science and the Change on the Supreme Court Undermine That?
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
Viability—the potential for a fetus to survive outside the womb—is a core dividing line in American law. For almost 50 years, the Supreme Court of the United States has struck down laws that ban all or most abortions, ruling that women's constitutional rights include choosing to end pregnancies before the point of viability. Once viability is reached, however, states have a "compelling interest" in protecting fetal life. At that point, states can choose to ban or significantly restrict later-term abortions provided states allow an exception to preserve the life or health of the mother.
This distinction between a fetus that could survive outside its mother's body, albeit with significant medical intervention, and one that could not, is at the heart of the court's landmark 1973 decision in Roe v. Wade. The framework of viability remains central to the country's abortion law today, even as some states have passed laws in the name of protecting women's health that significantly undermine Roe. Over the last 30 years, the Supreme Court has upheld these laws, which have the effect of restricting pre-viability abortion access, imposing mandatory waiting periods, requiring parental consent for minors, and placing restrictions on abortion providers.
Viability has always been a slippery notion on which to pin legal rights.
Today, the Guttmacher Institute reports that more than half of American women live in states whose laws are considered hostile to abortion, largely as a result of these intrusions on pre-viability abortion access. Nevertheless, the viability framework stands: while states can pass pre-viability abortion restrictions that (ostensibly) protect the health of the woman or that strike some kind of balance between women's rights and fetal life, it is only after viability that they can completely favor fetal life over the rights of the woman (with limited exceptions when the woman's life is threatened). As a result, judges have struck down certain states' so-called heartbeat laws, which tried to prohibit abortions after detection of a fetal heartbeat (as early as six weeks of pregnancy). Bans on abortion after 12 or 15 weeks' gestation have also been reversed.
Now, with a new Supreme Court Justice expected to be hostile to abortion rights, advances in the care of preterm babies and ongoing research on artificial wombs suggest that the point of viability is already earlier than many assume and could soon be moved radically earlier in gestation, potentially providing a legal basis for earlier and earlier abortion bans.
Viability has always been a slippery notion on which to pin legal rights. It represents an inherently variable and medically shifting moment in the pregnancy timeline that the Roe majority opinion declined to firmly define, noting instead that "[v]iability is usually placed at about seven months (28 weeks) but may occur earlier, even at 24 weeks." Even in 1973, this definition was an optimistic generalization. Every baby is different, and while some 28-week infants born the year Roe was decided did indeed live into adulthood, most died at or shortly after birth. The prognosis for infants born at 24 weeks was much worse.
Today, a baby born at 28 weeks' gestation can be expected to do much better, largely due to the development of surfactant treatment in the early 1990s, which helps premature lungs expand and take in air. Now, the majority of babies born at 24 weeks' gestation survive, and several very premature babies, born just shy of 22 weeks' gestation, have lived into childhood. All this variability raises the question: Should the law take a very optimistic, if largely unrealistic, approach to defining viability and place it at 22 weeks, even though the overall survival rate for those preemies remains less than 10% today? Or should the law recognize that keeping a premature infant alive requires specialist care, meaning that actual viability differs not just pregnancy-to-pregnancy but also by healthcare facility and from country to country? A 24-week premature infant born in a rural area or in a developing nation may not be viable as a practical matter, while one born in a major U.S. city with access to state-of-the-art care has a greater than 70% chance of survival. Just as some extremely premature newborns survive, some full-term babies die before, during, or soon after birth, regardless of whether they have access to advanced medical care.
To be accurate, viability should be understood as pregnancy-specific and should take into account the healthcare resources available to that woman. But state laws that rely on fixed gestational limits can't capture this degree of variability. Instead, many draw a somewhat arbitrary line at 22, 24, or 28 weeks' gestation, regardless of the particulars of the pregnancy or the medical resources available in that state.
As variable and resource-dependent as viability is today, science may soon move that point even earlier. Ectogenesis is a term coined in 1923 for the growth of an organism outside the body. Long considered science fiction, this technology has made several key advances in the past few years, with scientists announcing in 2017 that they had successfully gestated premature lamb fetuses in an artificial womb for four weeks. Currently in development for use in human fetuses between 22 and 23 weeks' gestation, this technology will almost certainly push viability earlier in pregnancy.
Ectogenesis and other improvements in managing preterm birth deserve to be celebrated, offering new hope to the parents of very premature infants. But in the U.S., and in other nations whose abortion laws are fixed to viability, these same advances also pose a threat to abortion access. Abortion opponents have long sought to move the cutoff for legal abortions, and it is not hard to imagine a state prohibiting all abortions after 18 or 20 weeks by arguing that medical advances render this stage "the new viability," regardless of whether that level of advanced care is available to women in that state. If ectogenesis advances further, the limit could be moved to keep pace.
The Centers for Disease Control and Prevention reports that over 90% of abortions in America are performed at or before 13 weeks, meaning that in the short term, only a small number of women would be affected by shifting viability standards. Yet these women are in difficult situations and deserve care and consideration. Research has shown that women seeking later terminations often did not recognize that they were pregnant or had their dates quite wrong, while others report that they had trouble accessing a termination earlier in pregnancy, were afraid to tell their partner or parents, or only recently received a diagnosis of health problems with the fetus.
Shifts in viability over the past few decades have already affected these women, many of whom report struggling to find a provider willing to perform a termination at 18 or 20 weeks out of concern that the woman may have her dates wrong. Ever-earlier gestational limits would continue this chilling effect, making doctors leery of terminating a pregnancy that might be within 2–4 weeks of each new ban. Some states' existing gestational limits on abortion are also inconsistent with prenatal care, which includes genetic testing between 12 and 20 weeks' gestation, as well as an anatomy scan to check the fetus's organ development performed at approximately 20 weeks. If viability moves earlier, prenatal care will be further undermined.
Perhaps most importantly, earlier and earlier abortion bans are inconsistent with the rights and freedoms on which abortion access is based, including recognition of each woman's individual right to bodily integrity and decision-making authority over her own medical care. Those rights and freedoms become meaningless if abortion bans encroach into the weeks that women need to recognize they are pregnant, assess their options, seek medical advice, and access appropriate care. Fetal viability, with its shifting goalposts, isn't the best framework for abortion protection in light of advancing medical science.
Ideally, whether to have an abortion would be a decision that women make in consultation with their doctors, free of state interference. The vast majority of women already make this decision early in pregnancy; the few who come to the decision later do so because something has gone seriously wrong in their lives or with their pregnancies. If states insist on drawing lines based on historical measures of viability, at 24 or 26 or 28 weeks, they should stick with those gestational limits and admit that they no longer represent actual viability but correspond instead to some form of common morality about when the fetus has a protected, if not absolute, right to life. Women need a reasonable amount of time to make careful and informed decisions about whether to continue their pregnancies precisely because these decisions have a lasting impact on their bodies and their lives. To preserve that time, legislators and the courts should decouple abortion rights from ectogenesis and other advances in the care of extremely premature infants that move the point of viability ever earlier.
[Editor's Note: This article was updated after publication to reflect Amy Coney Barrett's confirmation. To read other articles in this special magazine issue, visit the e-reader version.]
More than 20 percent of American adults suffer from chronic pain. And as many as one in four of those prescribed opioids to manage that pain go on to misuse – or abuse – them, often with devastating consequences. Patients afflicted by both chronic pain and opioid addiction are especially difficult to treat, according to Eric Garland, PhD, Director of the University of Utah’s Center on Mindfulness and Integrative Health Intervention Development, because opioid overuse increases pain sensitivity, and pain promotes relapse among those being treated for addiction.
A new study, however, shows that a mindfulness-based therapy can successfully tackle both problems at once, pointing to a tool that could potentially help in fighting the opioid crisis. “This is the first large-scale clinical trial to show that any psychological intervention can reduce opioid misuse and chronic pain for the long term,” says Garland, lead author of the study, published February 28th in JAMA Internal Medicine.
Garland’s study focused on 250 adults who had received opioid therapy for chronic pain for 90 days or longer, randomly assigning them to eight weeks of either a standard psychotherapy support group or Mindfulness-Oriented Recovery Enhancement (MORE) therapy, which combines mindfulness training, cognitive-behavioral therapy (CBT) and positive psychology. Nine months after getting these treatments in primary care settings, 45 percent of patients in the MORE group were no longer misusing opioids, compared to 24 percent of those in group therapy. In fact, about a third of the patients in the MORE group were able to cut their opioid dose in half or reduce it even further.
Patients treated with MORE also experienced more significant pain relief than those in support groups, according to Garland. Conventional approaches to treating opioid addiction include 12-step programs and medically-assisted treatment using drugs like methadone and Suboxone, sometimes coupled with support groups. But patients with Opioid Use Disorder (OUD) – the official diagnosis for opioid addiction – have high relapse rates following treatment, especially if they have chronic pain.
While medically-assisted treatments help to control drug cravings, they do nothing to control chronic pain, which is where psychological therapies like MORE come in.
“For patients suffering from moderate pain and OUD, the relapse rate is three times higher than in patients without chronic pain; for those with severe chronic pain, the relapse rate is five times higher,” says Amy Wachholtz, PhD, Director of Clinical Health Psychology and associate professor at University of Colorado in Denver. “So if we don’t treat the chronic pain along with the OUD addiction simultaneously, we are setting patients up for failure.”
Unfortunately, notes Garland, the standard of care for patients with chronic pain who are misusing their prescribed painkillers is “woefully inadequate.” Many patients don’t meet the criteria for OUD, he says, but instead fall into a gray zone somewhere between legitimate opioid use and full-blown addiction. And while medically-assisted treatments help to control drug cravings, they do nothing to control chronic pain, which is where psychological therapies like MORE come in. But behavioral therapies are often not available in primary care settings, and even when clinicians do refer patients to behavioral health providers, they often prescribe CBT. A large-scale study last year showed that CBT – without the added components of mindfulness training and positive psychology – reduced pain but not opioid misuse.
Psychotherapist Eric Garland teaches mindfulness.
University of Utah
Reward Circuitry Rewired
Opioids are highly physiologically addictive. Repeated and high-dose drug use causes the brain to become hypersensitive to stress, pain, and drug-related cues, such as the sight of one’s pill bottle, says Garland, while at the same time becoming increasingly insensitive to natural pleasures. “As an individual becomes more and more dependent on the opioids just to feel okay, they feel less able to extract a healthy sense of joy, pleasure and meaning out of everyday life,” he explains. “This drives them to take higher and higher doses of the opioid to maintain a dwindling sense of well-being.”
The changes are not just psychological: Chronic opioid use actually causes changes in the brain’s reward circuitry. “You can see on brain imaging,” says Garland. “The brain’s reward circuitry becomes more responsive when a person is viewing opioid related images than when they are viewing images of smiling babies, lovers holding hands, or sunsets over the beach.” MORE, he says, teaches “savoring” – a tenet of positive psychology – as a means of restructuring the reward processes in the brain so the patient becomes sensitive to pleasure from natural, healthy rewards, decreasing cravings for drug-related rewards.
Mindfulness and Addiction
Mindfulness, a form of meditation that teaches people to observe their feelings and sensations without judgement, has been increasingly applied to the treatment of addiction. By observing their pain and cravings objectively, for example, patients gain increased awareness of their responses to pain and their habits of opioid use. “They learn how to be with discomfort, whether emotional or physical, in a more compassionate way,” says Sarah Bowen, PhD, associate professor of psychology at Pacific University in Oregon. “And if your mind gives you a message like ‘Oh, I can’t handle that,’ to recognize that that’s a thought that might not be true.”
Bowen’s research is focused on Mindfulness-Based Relapse Prevention, which addresses the cravings associated with addiction. She has patients practice what she calls “urge surfing”: riding out a craving or urge rather than relying on a substance for immediate relief. “Craving will happen, so rather than fighting it, we look at understanding it better,” she says.
MORE differs from other forms of mindfulness-based therapy in that it integrates reappraisal and savoring training. Reappraisal is a technique often used in CBT in which patients learn to change negative thought patterns in order to reduce their emotional impact, while savoring helps to restructure the reward processes in the brain.
Mindfulness training not only helps patients to understand and gain control over their behavior in response to cravings and triggers like pain, says Garland, but also provides a means of pain relief. “We use mindfulness to zoom into pain and break it down into its subcomponents – feelings of heat or tightness or tingling – which reduces the impact that negative emotions have on pain processing in the brain.”
Eric Garland examines brain waves.
University of Utah
Powerful Interventions
As the dangers of opioid addiction have become increasingly evident, some scientists are developing less addictive, non-opioid painkillers, but more trials are needed. Meanwhile, behavioral approaches to chronic pain relief have continued to gain traction, and researchers like Garland are probing the possibilities of integrative treatments to treat the addiction itself. Given that the number of people suffering from chronic pain and OUD has reached new heights during the COVID-19 pandemic, says Wachholtz, new treatment alternatives for patients caught in the relentless cycle of chronic pain and opioid misuse are sorely needed. “We’re trying to refine the techniques,” she says, “but we’re starting to realize just how powerful some of these mind-body interventions can be.”
Exactly 67 years ago, in 1955, a group of scientists and reporters gathered at the University of Michigan and waited with bated breath for Dr. Thomas Francis Jr., director of the school’s Poliomyelitis Vaccine Evaluation Center, to approach the podium. The group had gathered to hear the news that seemingly everyone in the country had been anticipating for the past two years – whether the vaccine for poliomyelitis, developed by Francis’s former student Jonas Salk, was effective in preventing the disease.
Polio, at that point, had become a household name. As the highly contagious virus swept through the United States, cities closed their schools, movie theaters, swimming pools, and even churches to stop the spread. For most, polio was a mild illness, often completely asymptomatic – but for an unlucky few, the virus took hold of the central nervous system and caused permanent paralysis of muscles in the legs, the arms, and even the diaphragm, leaving the person unable to walk or, in the worst cases, to breathe unaided. It wasn’t uncommon to hear reports of people – mostly children – who fell sick with a flu-like virus and then, just days later, were relegated to spend the rest of their lives in an iron lung.
For two years, researchers had been testing a vaccine that would hopefully be able to stop the spread of the virus and prevent the 45,000 infections each year that were keeping the nation in a chokehold. At the podium, Francis greeted the crowd and then proceeded to change the course of human history: The vaccine, he reported, was “safe, effective, and potent.” Widespread vaccination could begin in just a few weeks. The nightmare was over.
The road to success
Jonas Salk, a medical researcher and virologist who developed the vaccine with his own research team, would rightfully go down in history as the man who eradicated polio. (Today, wild poliovirus circulates in just two countries, Afghanistan and Pakistan – with only 140 cases reported in 2020.) But many people today forget that the widespread vaccination campaign that effectively ended wild polio across the globe would have never been possible without the human clinical trials that preceded it.
As with the COVID-19 vaccine, skepticism and misinformation around the polio vaccine abounded. But even more pervasive than the skepticism was fear. The consequences of polio had arguably never been more visible.
The road to human clinical trials – and the resulting vaccine – was a long one. In 1938, President Franklin Delano Roosevelt launched the National Foundation for Infantile Paralysis in order to raise funding for research and development of a polio vaccine. (Today, we know this organization as the March of Dimes.) A polio survivor himself, Roosevelt elevated awareness and prevention into the national spotlight, even more so than it had been previously. Raising funds for a safe and effective polio vaccine became a cornerstone of his presidency – and the funds raked in by his foundation went primarily to Salk to fund his research.
The Trials Begin
Salk’s vaccine, which included an inactivated (killed) polio virus, was promising – but now the researchers needed test subjects to make global vaccination a possibility. Because the aim of the vaccine was to prevent paralytic polio, researchers decided that they had to test the vaccine in the population that was most vulnerable to paralysis – young children. And, because the rate of paralysis was so low even among children, the team needed an enormous number of participants to collect enough data. Francis, who led the trial to evaluate Salk’s vaccine, began the process of recruiting more than one million school-aged children between the ages of six and nine in 272 counties that had the highest incidence of the disease. The participants were nicknamed the “Polio Pioneers.”
Double-blind, placebo-controlled trials were considered the “gold standard” of epidemiological research in Francis's day – and they remain the best approach we have today. These rigorous scientific studies are designed with two participant groups in mind. One group, called the test group, receives the experimental treatment (such as a vaccine); the other group, called the control, receives an inactive treatment known as a placebo. The researchers then compare the effects of the active treatment against those of the placebo, and the trial is “double-blind” because neither the participants nor the researchers administering the treatments know who is receiving which one. That way, the results aren’t tainted by any possible biases.
But the study was controversial in that only some of the individual field trials at the county and state levels had a placebo group. Researchers described this as a “calculated risk,” meaning that while there were risks involved in giving the vaccine to a large number of children, the bigger risk was the potential paralysis or death that could come with being infected by polio. In all, just 200,000 children across the US received a placebo treatment, while an additional 725,000 children acted as observational controls – in other words, researchers monitored them for signs of infection, but did not give them any treatment.
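The logic of a randomized, blinded two-arm trial can be sketched as a tiny simulation. Everything below is illustrative – the `simulate_trial` function, the coded arm labels, and the infection rates are hypothetical stand-ins, not Francis's actual methodology or the trial's real numbers:

```python
import random

def simulate_trial(n_children, p_vaccine, p_placebo, seed=42):
    """Randomly assign children to two coded arms, simulate infections,
    and report the observed infection rate per (still-coded) arm."""
    rng = random.Random(seed)
    counts = {"arm-1": 0, "arm-2": 0}
    infections = {"arm-1": 0, "arm-2": 0}
    # The "sealed key": which coded arm actually received the vaccine.
    # It is consulted only to draw outcomes, never shown to observers.
    key = {"arm-1": "vaccine", "arm-2": "placebo"}
    for _ in range(n_children):
        arm = rng.choice(["arm-1", "arm-2"])  # randomized assignment
        counts[arm] += 1
        p = p_vaccine if key[arm] == "vaccine" else p_placebo
        if rng.random() < p:                  # simulated infection
            infections[arm] += 1
    rates = {arm: infections[arm] / counts[arm] for arm in counts}
    return rates, key

# Hypothetical rates: an effective vaccine yields a visibly lower
# infection rate in its arm once the key is unsealed.
rates, key = simulate_trial(200_000, p_vaccine=0.0002, p_placebo=0.001)
```

Because assignments and outcomes are tallied only under coded labels, everyone counting infections stays blind to which arm got the vaccine until the key is unsealed at the end – the essential feature that keeps hopes and fears from coloring the count.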
As with the COVID-19 vaccine, skepticism and misinformation around the polio vaccine abounded. But even more pervasive than the skepticism was fear. President Roosevelt, whose paralysis was widely known to the public, served as a perpetual reminder of the consequences of polio: an infection at age 39 had left him permanently unable to walk. The consequences of the disease had arguably never been more visible, and parents signed up their children in droves to participate in the study and offer them protection.
The Polio Pioneer Legacy
In a little less than a year, roughly half a million children received a dose of Salk’s polio vaccine. Plenty of children were hesitant to get the shot, but former participants remember that fear of the disease far outweighed fear of the needle. One former participant, a Polio Pioneer named Debbie LaCrosse, writes of her experience: “There was no discussion, no listing of pros and cons. No amount of concern over possible side effects or other unknowns associated with a new vaccine could compare to the terrifying threat of polio.” For their participation, each child received a certificate – and sometimes a pin – with the words “Polio Pioneer” emblazoned across the front.
When Francis announced the results of the trial on April 12, 1955, people did more than just breathe a sigh of relief – they openly celebrated, ringing church bells and flooding into the streets to embrace. Salk, who had become the face of the vaccine at that point, was instantly hailed as a national hero – and teachers around the country had their students write him ‘thank you’ notes for his years of diligent work.
But while Salk went on to win national acclaim – even accepting the Presidential Medal of Freedom for his work on the polio vaccine in 1977 – his success was due in no small part to the children (and their parents) who took a risk in order to advance medical science. And that risk paid off: By the early 1960s, the yearly cases of polio in the United States had gone down to just 910. Where before the vaccine polio had caused around 15,000 cases of paralysis each year, only ten cases of paralysis were recorded in the entire country throughout the 1970s. And in 1979, the virus that once shuttered entire towns was declared officially eradicated in this country. Thanks to the efforts of these brave pioneers, the nation – along with the majority of the world – remains free of polio even today.