Abortions Before Fetal Viability Are Legal: Might Science and the Change on the Supreme Court Undermine That?
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
Viability—the potential for a fetus to survive outside the womb—is a core dividing line in American law. For almost 50 years, the Supreme Court of the United States has struck down laws that ban all or most abortions, ruling that women's constitutional rights include choosing to end pregnancies before the point of viability. Once viability is reached, however, states have a "compelling interest" in protecting fetal life. At that point, states can choose to ban or significantly restrict later-term abortions provided states allow an exception to preserve the life or health of the mother.
This distinction between a fetus that could survive outside its mother's body, albeit with significant medical intervention, and one that could not, is at the heart of the court's landmark 1973 decision in Roe v. Wade. The framework of viability remains central to the country's abortion law today, even as some states have passed laws in the name of protecting women's health that significantly undermine Roe. Over the last 30 years, the Supreme Court has upheld these laws, which have the effect of restricting pre-viability abortion access, imposing mandatory waiting periods, requiring parental consent for minors, and placing restrictions on abortion providers.
Today, the Guttmacher Institute reports that more than half of American women live in states whose laws are considered hostile to abortion, largely as a result of these intrusions on pre-viability abortion access. Nevertheless, the viability framework stands: while states can pass pre-viability abortion restrictions that (ostensibly) protect the health of the woman or that strike some kind of balance between women's rights and fetal life, it is only after viability that they can completely favor fetal life over the rights of the woman (with limited exceptions when the woman's life is threatened). As a result, judges have struck down certain states' so-called heartbeat laws, which tried to prohibit abortions after detection of a fetal heartbeat (as early as six weeks of pregnancy). Bans on abortion after 12 or 15 weeks' gestation have also been reversed.
Now, with a new Supreme Court Justice expected to be hostile to abortion rights, advances in the care of preterm babies and ongoing research on artificial wombs suggest that the point of viability is already sooner than many assume and could soon be moved radically earlier in gestation, potentially providing a legal basis for earlier and earlier abortion bans.
Viability has always been a slippery notion on which to pin legal rights. It represents an inherently variable and medically shifting moment in the pregnancy timeline that the Roe majority opinion declined to firmly define, noting instead that "[v]iability is usually placed at about seven months (28 weeks) but may occur earlier, even at 24 weeks." Even in 1973, this definition was an optimistic generalization. Every baby is different, and while some 28-week infants born the year Roe was decided did indeed live into adulthood, most died at or shortly after birth. The prognosis for infants born at 24 weeks was much worse.
Today, a baby born at 28 weeks' gestation can be expected to do much better, largely due to the development of surfactant treatment in the early 1990s, which helps premature babies' lungs expand and take in air. Now, the majority of babies born at 24 weeks survive, and several very premature babies, born just shy of 22 weeks' gestation, have lived into childhood. All this variability raises the question: Should the law take a very optimistic, if largely unrealistic, approach to defining viability and place it at 22 weeks, even though the overall survival rate for those preemies remains less than 10% today? Or should the law recognize that keeping a premature infant alive requires specialist care, meaning that actual viability differs not just pregnancy-to-pregnancy but also by healthcare facility and from country to country? A 24-week premature infant born in a rural area or in a developing nation may not be viable as a practical matter, while one born in a major U.S. city with access to state-of-the-art care has a greater than 70% chance of survival. Just as some extremely premature newborns survive, some full-term babies die before, during, or soon after birth, regardless of whether they have access to advanced medical care.
To be accurate, viability should be understood as pregnancy-specific and should take into account the healthcare resources available to that woman. But state laws can't capture this degree of variability. Instead, many draw a somewhat arbitrary line at 22, 24, or 28 weeks' gestation, regardless of the particulars of the pregnancy or the medical resources available in that state.
As variable and resource-dependent as viability is today, science may soon move that point even earlier. Ectogenesis is a term coined in 1923 for the growth of an organism outside the body. Long considered science fiction, this technology has made several key advances in the past few years, with scientists announcing in 2017 that they had successfully gestated premature lamb fetuses in an artificial womb for four weeks. Currently in development for use in human fetuses between 22 and 23 weeks' gestation, this technology will almost certainly push the point of viability earlier in pregnancy.
Ectogenesis and other improvements in managing preterm birth deserve to be celebrated, offering new hope to the parents of very premature infants. But in the U.S., and in other nations whose abortion laws are fixed to viability, these same advances also pose a threat to abortion access. Abortion opponents have long sought to move the cutoff for legal abortions, and it is not hard to imagine a state prohibiting all abortions after 18 or 20 weeks by arguing that medical advances render this stage "the new viability," regardless of whether that level of advanced care is available to women in that state. If ectogenesis advances further, the limit could be moved to keep pace.
The Centers for Disease Control and Prevention reports that over 90% of abortions in America are performed at or before 13 weeks, meaning that in the short term, only a small number of women would be affected by shifting viability standards. Yet these women are in difficult situations and deserve care and consideration. Research has shown that women seeking later terminations often did not recognize that they were pregnant or had their dates quite wrong, while others report that they had trouble accessing a termination earlier in pregnancy, were afraid to tell their partner or parents, or only recently received a diagnosis of health problems with the fetus.
Shifts in viability over the past few decades have already affected these women, many of whom report struggling to find a provider willing to perform a termination at 18 or 20 weeks out of concern that the woman may have her dates wrong. Ever-earlier gestational limits would continue this chilling effect, making doctors leery of terminating a pregnancy that might be within 2–4 weeks of each new ban. Some states' existing gestational limits on abortion are also inconsistent with prenatal care, which includes genetic testing between 12 and 20 weeks' gestation, as well as an anatomy scan to check the fetus's organ development performed at approximately 20 weeks. If viability moves earlier, prenatal care will be further undermined.
Perhaps most importantly, earlier and earlier abortion bans are inconsistent with the rights and freedoms on which abortion access is based, including recognition of each woman's individual right to bodily integrity and decision-making authority over her own medical care. Those rights and freedoms become meaningless if abortion bans encroach into the weeks that women need to recognize they are pregnant, assess their options, seek medical advice, and access appropriate care. Fetal viability, with its shifting goalposts, isn't the best framework for abortion protection in light of advancing medical science.
Ideally, whether to have an abortion would be a decision that women make in consultation with their doctors, free of state interference. The vast majority of women already make this decision early in pregnancy; the few who come to the decision later do so because something has gone seriously wrong in their lives or with their pregnancies. If states insist on drawing lines based on historical measures of viability, at 24 or 26 or 28 weeks, they should stick with those gestational limits and admit that they no longer represent actual viability but correspond instead to some form of common morality about when the fetus has a protected, if not absolute, right to life. Women need a reasonable amount of time to make careful and informed decisions about whether to continue their pregnancies precisely because these decisions have a lasting impact on their bodies and their lives. To preserve that time, legislators and the courts should decouple abortion rights from ectogenesis and other advances in the care of extremely premature infants that move the point of viability ever earlier.
[Editor's Note: This article was updated after publication to reflect Amy Coney Barrett's confirmation. To read other articles in this special magazine issue, visit the e-reader version.]
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques and algorithms based on artificial intelligence—a field of computer science in which machines learn and execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for use in patient care is essential to AI's responsible and transparent implementation as the industry grapples with voluminous quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI—a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body—has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet, such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on all physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing an individual's data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, said Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."
How Emerging Technologies Can Help Us Fight the New Coronavirus
In nature, few species remain dominant for long. Any sizable population of similar individuals offers immense resources to whichever parasite can evade its defenses, spreading rapidly from one member to the next.
Humans are one such dominant species. That wasn't always the case: our hunter-gatherer ancestors lived in groups too small and poorly connected to spread pathogens like wildfire. Our collective vulnerability to pandemics began with the dawn of cities and trade networks thousands of years ago. Roman cities were always demographic sinks, but never more so than when a pandemic agent swept through. The plague of Cyprian, the Antonine plague, the plague of Justinian – each is thought to have killed over ten million people, an appallingly high fraction of the total population of the empire.
With the advent of sanitation, hygiene, and quarantines, we developed our first non-immunological defenses to curtail the spread of plagues. With antibiotics, we began to turn the weapons of microbes against our microbial foes. Most potent of all, we use vaccines to train our immune systems to fight pathogens before we are even exposed. Edward Jenner's original vaccine alone is estimated to have saved half a billion lives.
It's been over a century since we suffered a swift and deadly pandemic. Even the deadly influenza of 1918 killed only a few percent of humanity – nothing so bad as any of the Roman plagues, let alone the Black Death of medieval times.
How much of our recent winning streak has been due to luck?
Much rides on that question, because the same factors that first made our ancestors vulnerable are now ubiquitous. Our cities are far larger than those of ancient times. They're inhabited by an ever-growing fraction of humanity, and are increasingly closely connected: we now routinely travel around the world in the course of a day. Despite urbanization, global population growth has increased contact with wild animals, creating more opportunities for zoonotic pathogens to jump species. Which will prove greater: our defenses or our vulnerabilities?
The tragic emergence of coronavirus 2019-nCoV in Wuhan may provide a test case. How devastating this virus will become is highly uncertain at the time of writing, but its rapid spread to many countries is deeply worrisome. That it seems to kill only the already infirm and spare the healthy is small comfort, and may counterintuitively assist its spread: it's easy to implement a quarantine when everyone infected becomes extremely ill, but if carriers may not exhibit symptoms as has been reported, it becomes exceedingly difficult to limit transmission. The virus, a distant relative of the more lethal SARS virus that killed 800 people in 2002 to 2003, has evolved to be transmitted between humans and spread to 18 countries in just six weeks.
Humanity's response has been faster than ever, if not fast enough. To its immense credit, China swiftly shared information, organized and built new treatment centers, closed schools, and established quarantines. The Coalition for Epidemic Preparedness Innovations, which was founded in 2017, quickly funded three different companies to develop three different varieties of vaccine: a standard protein vaccine, a DNA vaccine, and an RNA vaccine, with more planned. One of the agreements was signed after just four days of discussion, far faster than has ever been done before.
The new vaccine candidates will likely be ready for clinical trials by early summer, but even if successful, it will be additional months before the vaccine will be widely available. The delay may well be shorter than ever before thanks to advances in manufacturing and logistics, but a delay it will be.
The 1918 influenza virus killed more than half of its victims in the United Kingdom over just three months.
If we faced a truly nasty virus, something that spreads like pandemic influenza – let alone measles – yet with the higher fatality rate of, say, H7N9 avian influenza, the situation would be grim. We are profoundly unprepared, on many different levels.
So what would it take to provide us with a robust defense against pandemics?
Minimize the attack surface: 2019-nCoV jumped from an animal, most probably a bat, to humans. China has now banned the wildlife trade in response to the epidemic. Keeping it banned would be prudent, but won't be possible in all nations. Still, there are other methods of protection. Influenza viruses commonly jump from birds to pigs to humans; the new coronavirus may have similarly passed through a livestock animal. Thanks to CRISPR, we can now edit the genomes of most livestock. If we made them immune to known viruses, and introduced those engineered traits to domesticated animals everywhere, we would create a firewall in those intermediate hosts. We might even consider heritably immunizing the wild organisms most likely to serve as reservoirs of disease.
Rapid diagnostics: We need a reliable method of detection costing just pennies to be available worldwide inside of a week of discovering a new virus. This may eventually be possible thanks to a technology called SHERLOCK, which is based on a CRISPR system more commonly used for precision genome editing. Instead of using CRISPR to find and edit a particular genome sequence in a cell, SHERLOCK programs it to search for a desired target and initiate an easily detected chain reaction upon discovery. The technology is capable of fantastic sensitivity: with an attomolar (10⁻¹⁸ molar) detection limit, it senses single molecules of a unique DNA or RNA fingerprint, and the components can be freeze-dried onto paper strips.
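A quick back-of-the-envelope calculation shows why an attomolar detection limit amounts to single-molecule sensitivity. (This sketch is illustrative only; the sample volume chosen is an assumption, not a figure from the article.)

```python
# Sanity check: how many target molecules does an attomolar
# (1e-18 mol/L) concentration put in a typical reaction volume?
AVOGADRO = 6.022e23  # molecules per mole

def molecules_in_sample(molar_concentration, volume_liters):
    """Expected number of target molecules in a sample."""
    return molar_concentration * AVOGADRO * volume_liters

# One microliter (1e-6 L) of an attomolar solution:
n = molecules_in_sample(1e-18, 1e-6)
print(f"{n:.2f} molecules per microliter")  # ~0.60, i.e. less than one
```

In other words, at an attomolar concentration a microliter-scale sample contains on average less than one copy of the target, so an assay that works at that limit is, in effect, detecting individual molecules.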
Better preparations: China acted swiftly to curtail the spread of the Wuhan virus with traditional public health measures, but not everything went as smoothly as it might have. Most cities and nations have never conducted a pandemic preparedness drill. Best give people a chance to practice keeping the city barely functional while minimizing potential exposure events before facing the real thing.
Faster vaccines: Three months to clinical trials is too long. We need a robust vaccine discovery and production system that can generate six candidates within a week of the pathogen's identification, manufacture a million doses the week after, and scale up to a hundred million inside of a month. That may be possible for novel DNA and RNA-based vaccines, and indeed anything that can be delivered using a standardized gene therapy vector. For example, instead of teaching each person's immune system to evolve protective antibodies by showing it pieces of the virus, we can program cells to directly produce known antibodies via gene therapy. Those antibodies could be discovered by sifting existing diverse libraries of hundreds of millions of candidates, computationally designed from scratch, evolved using synthetic laboratory ecosystems, or even harvested from the first patients to report symptoms. Such a vaccine might be discovered and produced fast enough at scale to halt almost any natural pandemic.
Robust production and delivery: Our defenses must not be vulnerable to the social and economic disruptions caused by a pandemic. Unfortunately, our economy selects for speed and efficiency at the expense of robustness. Just-in-time supply chains that wing their way around the world require every node to be intact. If workers aren't on the job producing a critical component, the whole chain breaks until a substitute can be found. A truly nasty pandemic would disrupt economies all over the world, so we will need to pay extra to preserve the capacity for independent vertically integrated production chains in multiple nations. Similarly, vaccines are only useful if people receive them, so delivery systems should be as robustly automated as possible.
None of these defenses will be cheap, but they'll be worth every penny. Our nations collectively spend trillions on defense against one another, but only billions to protect humanity from pandemic viruses known to have killed more people than any human weapon. That's foolish – especially since natural animal diseases that jump the species barrier aren't the only pandemic threats.
The complete genomes of all historical pandemic viruses that have been sequenced are freely available to anyone with an internet connection. True, these are all agents we've faced before, so we have a pre-existing armory of pharmaceuticals and vaccines and experience. There's no guarantee that they would become pandemics again; for example, a large fraction of humanity is almost certainly immune to the 1918 influenza virus due to exposure to the related 2009 pandemic, making it highly unlikely that the virus would take off if released.
Still, making the blueprints publicly available means that a large and growing number of people with the relevant technical skills can single-handedly make deadly biological agents that might be able to spread autonomously—at least if they can get their hands on the relevant DNA. At present, such people most certainly can, so long as they bother to check the publicly available list of which gene synthesis companies do the right thing and screen orders—and by implication, which ones don't.
One would hope that at least some of the companies that don't advertise that they screen are "honeypots" paid by intelligence agencies to catch would-be bioterrorists, but even if most of them are, it's still foolish to let individuals access that kind of destructive power. We will eventually make our society immune to naturally occurring pandemics, but that day has not yet come, and future pandemic viruses may not be natural. Hence, we should build a secure and adaptive system capable of screening all DNA synthesis for known and potential future pandemic agents... without disclosing what we think is a credible bioweapon.
Whether or not it becomes a global pandemic, the emergence of the Wuhan coronavirus has underscored the need for coordinated action to prevent the spread of pandemic disease. Let's ensure that our reactive response at least prepares us for future threats, because one day reacting may not be enough.