What’s the Right Way to Regulate Gene-Edited Crops?
In the next few decades, humanity faces its biggest food crisis since the invention of the plow. The planet's population, currently 7.6 billion, is expected to reach 10 billion by 2050; to avoid mass famine, according to the World Resources Institute, we'll need to produce 70 percent more calories than we do today.
Meanwhile, climate change will bring intensifying assaults by heat, drought, storms, pests, and weeds, depressing farm yields around the globe. Epidemics of plant disease—already laying waste to wheat, citrus, bananas, coffee, and cacao in many regions—will spread ever further through the vectors of modern trade and transportation.
So here's a thought experiment: Imagine that a cheap, easy-to-use, and rapidly deployable technology could make crops more fertile and strengthen their resistance to these looming threats. Imagine that it could also render them more nutritious and tastier, with longer shelf lives and less vulnerability to damage in shipping—adding enhancements to human health and enjoyment, as well as reduced food waste, to the possible benefits.
Finally, imagine that crops bred with the aid of this tool might carry dangers. Some could contain unsuspected allergens or toxins. Others might disrupt ecosystems, affecting the behavior or very survival of other species, or infecting wild relatives with their altered DNA.
Now ask yourself: If such a technology existed, should policymakers encourage its adoption, or ban it due to the risks? And if you choose the former, how should crops developed by this method be regulated?
In fact, this technology does exist, though its use remains mostly experimental. It's called gene editing, and in the past five years it has emerged as a potentially revolutionary force in many areas—among them, treating cancer and genetic disorders; growing transplantable human organs in pigs; controlling malaria-spreading mosquitoes; and, yes, transforming agriculture. Several versions are currently available, the newest and nimblest of which goes by the acronym CRISPR.
Gene editing is far simpler and more efficient than older methods used to produce genetically modified organisms (GMOs). Unlike those methods, moreover, it can be used in ways that leave no foreign genes in the target organism—an advantage that proponents argue should comfort anyone leery of consuming so-called "Frankenfoods." But debate persists over what precautions must be taken before these crops come to market.
Recently, two of the world's most powerful regulatory bodies offered very different answers to that question. The United States Department of Agriculture (USDA) declared in March 2018 that it "does not currently regulate, or have any plans to regulate" plants that are developed through most existing methods of gene editing. The Court of Justice of the European Union (ECJ), by contrast, ruled in July that such crops should be governed by the same stringent regulations as conventional GMOs.
Each announcement drew protests, for opposite reasons. Anti-GMO activists assailed the USDA's statement, arguing that all gene-edited crops should be tested and approved before marketing. "You don't know what those mutations or rearrangements might do in a plant," warned Michael Hansen, a senior scientist with the advocacy group Consumers Union. Biotech boosters griped that the ECJ's decision would stifle innovation and investment. "By any sensible standard, this judgment is illogical and absurd," wrote the British newspaper The Observer.
Yet some experts suggest that the broadly permissive American approach and the broadly restrictive EU policy are equally flawed. "What's behind these regulatory decisions is not science," says Jennifer Kuzma, co-director of the Genetic Engineering and Society Center at North Carolina State University, a former advisor to the World Economic Forum, who has researched and written extensively on governance issues in biotechnology. "It's politics, economics, and culture."
The U.S. Welcomes Gene-Edited Food
Humans have been modifying the genomes of plants and animals for 10,000 years, using selective breeding—a hit-or-miss method that can take decades or more to deliver rewards. In the mid-20th century, we learned to speed up the process by exposing organisms to radiation or mutagenic chemicals. But it wasn't until the 1980s that scientists began modifying plants by altering specific stretches of their DNA.
Today, about 90 percent of the corn, cotton and soybeans planted in the U.S. are GMOs; such crops cover nearly 4 million square miles (10 million square kilometers) of land in 29 countries. Most of these plants are transgenic, meaning they contain genes from an unrelated species—often as biologically alien as a virus or a fish. Their modifications are designed primarily to boost profit margins for mechanized agribusiness: allowing crops to withstand herbicides so that weeds can be controlled by mass spraying, for example, or to produce their own pesticides to lessen the need for chemical inputs.
In the early days, the majority of GM crops were created by extracting the gene for a desired trait from a donor organism, multiplying it, and attaching it to other snippets of DNA—usually from a microbe called an agrobacterium—that could help it infiltrate the cells of the target plant. Biotechnologists injected these particles into the target, hoping at least one would land in a place where it would perform its intended function; if not, they kept trying. The process was quicker than conventional breeding, but still complex, scattershot, and costly.
Because agrobacteria can cause plant tumors, Kuzma explains, policymakers in the U.S. decided to regulate GMO crops under an existing law, the Plant Pest Act of 1957, which addressed dangers like imported trees infested with invasive bugs. Every GMO containing the DNA of agrobacterium or another plant pest had to be tested to see whether it behaved like a pest, and undergo a lengthy approval process. By 2010, however, new methods had been developed for creating GMOs without agrobacteria; such plants could typically be marketed without pre-approval.
Soon after that, the first gene-edited crops began appearing. If old-school genetic engineering was a shotgun, techniques like TALEN and CRISPR were a scalpel—or the search-and-replace function on a computer program. With CRISPR/Cas9, for example, an enzyme that bacteria use to recognize and chop up hostile viruses is reprogrammed to find and snip out a desired bit of a plant or other organism's DNA. The enzyme can also be used to insert a substitute gene. If a DNA sequence is simply removed, or the new gene comes from a similar species, the changes in the target plant's genotype and phenotype (its general characteristics) may be no different from those that could be produced through selective breeding. If a foreign gene is added, the plant becomes a transgenic GMO.
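To make the search-and-replace analogy concrete, here is a minimal sketch in Python. It is purely illustrative: the function name, guide, and DNA strings are invented, and real gene editing acts on living cells guided by RNA and a cutting enzyme, not on text.

```python
# Illustrative analogy only: this mimics the logic of a CRISPR-style edit
# as string search-and-replace. All names and sequences here are invented.

def edit_dna(genome: str, guide: str, replacement: str = "") -> str:
    """Mimic a CRISPR-style edit on a DNA string.

    With replacement="", the matched sequence is simply deleted (a knockout);
    with a non-empty replacement, a substitute sequence is spliced in.
    """
    site = genome.find(guide)  # like Cas9 locating its target via the guide RNA
    if site == -1:
        return genome          # no match: nothing gets cut
    # Cut out the matched stretch and splice in the replacement, if any.
    return genome[:site] + replacement + genome[site + len(guide):]

genome = "ATGGTACCTTGACCTTGAAC"
print(edit_dna(genome, "CCTTGA"))            # deletion -> ATGGTACCTTGAAC
print(edit_dna(genome, "CCTTGA", "GGGAAA"))  # substitution -> ATGGTAGGGAAACCTTGAAC
# Note: "CCTTGA" occurs twice in this toy genome; an edit landing at an
# unintended matching site is loosely analogous to an "off-target effect."
```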
This development, along with the emergence of non-agrobacterium GMOs, eventually prompted the USDA to propose a tiered regulatory system for all genetically engineered crops, beginning with an initial screening for potentially hazardous metabolites or ecological impacts. (The screening was intended, in part, to guard against the "off-target effects"—stray mutations—that occasionally appear in gene-edited organisms.) If no red flags appeared, the crop would be approved; otherwise, it would be subject to further review, and possible regulation.
The plan was unveiled in January 2017, during the last week of the Obama presidency. Then, under the Trump administration, it was shelved. Although the USDA continues to promise a new set of regulations, the only hint of what they might contain has been Secretary of Agriculture Sonny Perdue's statement last March that gene-edited plants would remain unregulated if they "could otherwise have been developed through traditional breeding techniques, as long as they are not plant pests or developed using plant pests."
Because transgenic plants could not be "developed through traditional breeding techniques," this statement could be taken to mean that gene editing in which foreign DNA is introduced might actually be regulated. But because the USDA regulates conventional transgenic GMOs only if they trigger the plant-pest stipulation, experts assume gene-edited crops will face similarly limited oversight.
Meanwhile, companies are already teeing up gene-edited products for the U.S. market. An herbicide-resistant oilseed rape, developed using a proprietary technique, has been available since 2016. A cooking oil made from TALEN-tweaked soybeans, designed to have a healthier fatty-acid profile, is slated for release within the next few months. A CRISPR-edited "waxy" corn, designed with a starch profile ideal for processed foods, should be ready by 2021.
In all likelihood, none of these products will have to be tested for safety.
In the E.U., Stricter Rules Apply
Now let's look at the European Union. Since the late 1990s, explains Gregory Jaffe, director of the Project on Biotechnology at the Center for Science in the Public Interest, the EU has had a "process-based trigger" for genetically engineered products: "If you use recombinant DNA, you are going to be regulated." All foods and animal feeds must be approved and labeled if they consist of or contain more than 0.9 percent GM ingredients. (In the U.S., "disclosure" of GM ingredients is mandatory, if someone asks, but labeling is not required.) The only GM crop that can be commercially grown in EU member nations is a type of insect-resistant corn, though some countries allow imports.
European scientists helped develop gene editing, and they—along with the continent's biotech entrepreneurs—have been busy developing applications for crops. But European farmers seem more divided over the technology than their American counterparts. The main French agricultural trades union, for example, supports research into non-transgenic gene editing and its exemption from GMO regulation. But it was the country's small-farmers' union, the Confédération Paysanne, along with several allied groups, that in 2015 submitted a complaint to the ECJ, asking that all plants produced via mutagenesis—including gene-editing—be regulated as GMOs.
At this point, it should be mentioned that in the past 30 years, large population studies have found no sign that consuming GM foods is harmful to human health. GMO critics can, however, point to evidence that herbicide-resistant crops have encouraged overuse of herbicides, giving rise to poison-proof "superweeds," polluting the environment with suspected carcinogens, and inadvertently killing beneficial plants. Those allegations were key to the French plaintiffs' argument that gene-edited crops might similarly do unexpected harm. (Disclosure: Leapsmag's parent company, Bayer, recently acquired Monsanto, a maker of herbicides and herbicide-resistant seeds. Also, Leaps by Bayer, an innovation initiative of Bayer and Leapsmag's direct founder, has funded a biotech startup called JoynBio that aims to reduce the amount of nitrogen fertilizer required to grow crops.)
In the end, the EU court found in the Confédération's favor on gene editing—though the court maintained the regulatory exemption for mutagenesis induced by chemicals or radiation, citing the "long safety record" of those methods.
The ruling was "scientifically nonsensical," fumes Rodolphe Barrangou, a French food scientist who pioneered CRISPR while working for DuPont in Wisconsin and is now a professor at NC State. "It's because of things like this that I'll never go back to Europe."
Nonetheless, the decision was consistent with longstanding EU policy on crops made with recombinant DNA. Given the difficulty and expense of getting such products through the continent's regulatory system, many other European researchers may wind up following Barrangou to America.
Getting to the Root of the Cultural Divide
What explains the divergence between the American and European approaches to GMOs—and, by extension, gene-edited crops? In part, Jennifer Kuzma speculates, it's that Europeans have a different attitude toward eating. "They're generally more tied to where their food comes from, where it's produced," she notes. They may also share a mistrust of government assurances on food safety, born of the region's mad cow disease scandals of the 1980s and '90s. In Catholic countries, consumers may have misgivings about tinkering with the machinery of life.
But the principal factor, Kuzma argues, is that European and American agriculture are structured differently. "GM's benefits have mostly been designed for large-scale industrial farming and commodity crops," she says. That kind of farming is dominant in the U.S., but not in Europe, leading to a different balance of political power. In the EU, there was less pressure on decisionmakers to approve GMOs or exempt gene-edited crops from regulation—and more pressure to adopt a GM-resistant stance.
Such dynamics may be operating in other regions as well. In China, for example, the government has long encouraged research in GMOs; a state-owned company recently acquired Syngenta, a Swiss-based multinational corporation that is a leading developer of GM and gene-edited crops. GM animal feed and cooking oil can be freely imported. Yet commercial cultivation of most GM plants remains forbidden, out of deference to popular suspicions of genetically altered food. "As a new item, society has debates and doubts on GMO techniques, which is normal," President Xi Jinping remarked in 2014. "We must be bold in studying it, [but] be cautious promoting it."
The proper balance between boldness and caution is still being worked out all over the world. Europe's process-based approach may prevent researchers from developing crops that, with a single DNA snip, could rescue millions from starvation. EU regulations will also make it harder for small entrepreneurs to challenge Big Ag with a technology that, as Barrangou puts it, "can be used affordably, quickly, scalably, by anyone, without even a graduate degree in genetics." America's product-based approach, conversely, may let crops with hidden genetic dangers escape detection. And by refusing to investigate such risks, regulators may wind up exacerbating consumers' doubts about GM and gene-edited products, rather than allaying them.
"Science...can't tell you what to regulate. That's a values-based decision."
Perhaps the solution lies in combining both approaches, and adding some flexibility and nuance to the mix. "I don't believe in regulation by the product or the process," says CSPI's Jaffe. "I think you need both." Deleting a DNA base pair to silence a gene, for example, might be less risky than inserting a foreign gene into a plant—unless the deletion enables the production of an allergen, and the transgene comes from spinach.
Kuzma calls for the creation of "cooperative governance networks" to oversee crop genome editing, similar to bodies that already help develop and enforce industry standards in fisheries, electronics, industrial cleaning products, and (not incidentally) organic agriculture. Such a network could include farmers, scientists, advocacy groups, private companies, and governmental agencies. "Safety isn't an all-or-nothing concept," Kuzma says. "Science can tell you what some of the issues are in terms of risk and benefit, but it can't tell you what to regulate. That's a values-based decision."
By drawing together a wide range of stakeholders to make such decisions, she adds, "we're more likely to anticipate future consequences, and to develop a robust approach—one that not only seems more legitimate to people, but is actually just plain old better."
Artificial Wombs Are Getting Closer to Reality for Premature Babies
In 2017, researchers at the Children's Hospital of Philadelphia grew extremely preterm lambs from hairless to fluffy inside a "biobag," a dark, fluid-filled bag designed to mimic a mother's womb.
"There could be quite a lot of infants that would benefit from artificial womb technologies."
This happened over the course of a month, across a delicate period of fetal development that scientists consider the "edge of viability" for survival at birth.
In 2019, Australian and Japanese scientists repeated the success of keeping extremely premature lambs inside an artificial womb environment until they were ready to survive on their own. Those researchers are now developing a treatment strategy for infants born at "the hard limit of viability," between 20 and 23 weeks of gestation. At the same time, Dutch researchers are going so far as to replicate the sound of a mother's heartbeat inside a biobag. These developments signal exciting times ahead—with a touch of science fiction—for artificial womb technologies. But is there a catch?
"There could be quite a lot of infants that would benefit from artificial womb technologies," says Josephine Johnston, a bioethicist and lawyer at The Hastings Center, an independent bioethics research institute in New York. "These technologies can decrease morbidity and mortality for infants at the edge of viability and help them survive without significant damage to the lungs or other problems," she says.
It is a viewpoint shared by Frans van de Vosse, leader of the Cardiovascular Biomechanics research group at Eindhoven University of Technology in the Netherlands. He participates in a university project that recently received more than $3 million in funding from the E.U. to produce a prototype artificial womb for preterm babies between 24 and 28 weeks of gestation by 2024.
The Eindhoven design comes with a fluid-based environment, just like that of the natural womb, where the baby receives oxygen and nutrients through an artificial placenta that is connected to the baby's umbilical cord. "With current incubators, when a respiratory device delivers oxygen into the lungs in order for the baby to breathe, you may harm preterm babies because their lungs are not yet mature for that," says van de Vosse. "But when the lungs are under water, then they can develop, they can mature, and the baby will receive the oxygen through the umbilical cord, just like in the natural womb," he says.
His research team is working to achieve the "perfectly natural" artificial womb based on strict mathematical models and calculations, van de Vosse says. They are even employing 3D printing technology to develop the wombs and artificial babies to test in them—the mannequins, as van de Vosse calls them. These mannequins are being outfitted with sensors to test how faithfully the artificial womb replicates the environment a fetus experiences inside a mother's womb, including the soothing sound of her heartbeat.
"The Dutch study's artificial womb design is slightly different from everything else we have seen as it encourages a gestateling to experience the kind of intimacy that a fetus does in pregnancy," says Elizabeth Chloe Romanis, an assistant professor in biolaw at Durham Law School in the U.K. But what is a "gestateling" anyway? It's a term Romanis has coined to describe neither a fetus nor a newborn, but an in-between artificial stage.
"Because they aren't born, they are not neonates," Romanis explains. "But also, they are not inside a pregnant person's body, so they are not fetuses. In an artificial womb the fetus is still gestating, hence why I call it gestateling."
The terminology is not just a semantic exercise to lend a name to what medical dictionaries haven't yet defined. "Gestatelings might have a slightly different psychology," says Romanis. "A fetus inside a mother's womb interacts with the mother. A neonate has some kind of self-sufficiency in terms of physiology. But the gestateling doesn't do either of those things," she says, urging us to be mindful of the still-obscure effects that experiencing early life as a gestateling might have on future humans. Psychology aside, there are also legal repercussions.
The Universal Declaration of Human Rights proclaims the "inalienable rights which everyone is entitled to as a human being," with "everyone" including neonates. However, such a legal umbrella is absent when it comes to fetuses, which have no rights under the same declaration. "We might need a new legal category for a gestateling," concludes Romanis.
But not everyone agrees. "However well-meaning, a new legal category would almost certainly be used to further erode the legality of abortion in countries like the U.S.," says Johnston.
The "abortion war" in the U.S. has risen to a crescendo since 2019, when states like Missouri, Mississippi, Kentucky, Louisiana and Georgia passed so-called "fetal heartbeat bills," which render an abortion illegal once a fetal heartbeat is detected. The situation is only bound to intensify now that Justice Ruth Bader Ginsburg, one of the Supreme Court's fiercest champions for abortion rights, has passed away. If President Trump appoints Ginsburg's replacement, he will probably grant conservatives on the Court the votes needed to revoke or weaken Roe v. Wade, the milestone decision of 1973 that established women's legal right to an abortion.
"A gestateling with intermediate status would almost certainly be considered by some in the U.S. (including some judges) to have at least certain legal rights, likely including right-to-life," says Johnston. This would enable a fetus on the edge of viability to make claims on the mother, and lead either to a shortening of the window in which abortion is legal—or a practice of denying abortion altogether. Instead, Johnston predicts, doctors might offer to transfer the fetus to an artificial womb for external gestation as a new standard of care.
But the legal conundrum does not stop there. The viability threshold is an estimate decided by medical professionals based on the clinical evidence and the technology available. It is anything but static. In the 1970s when Roe v. Wade was decided, for example, a fetus was considered legally viable starting at 28 weeks. Now, with improved technology and medical management, "the hard limit today is probably 20 or 21 weeks," says Matthew Kemp, associate professor at the University of Western Australia and one of the Australian-Japanese artificial womb project's senior researchers.
The changing threshold can result in situations where lots of people invested in the decision disagree. "Those can be hard decisions, but they are case-by-case decisions that families make or parents make with the key providers to determine when to proceed and when to let the infant die. Usually, it's a shared decision where the parents have the final say," says Johnston. But this isn't always the case.
On May 9th, 2016, a boy named Alfie Evans was born in Liverpool, U.K. Suffering seizures a few months after his birth, Alfie was diagnosed with an unknown neurodegenerative disorder and soon went into a semi-vegetative state, which lasted for more than a year. Alfie's medical team decided to withdraw his ventilation support, arguing that further treatment was unlawful and inhumane, but his parents wanted permission to fly him to a hospital in Rome and attempt to prolong his life there. In the end, the case went all the way up to the Supreme Court, which ruled that doctors could stop providing life support for Alfie, saying that the child required "peace, quiet and privacy." What happened to little Alfie drew huge publicity in the U.K. and pointedly highlighted the dilemma of whether parents or doctors should have the final say in the fate of a terminally ill child on life support.
"In a few years from now, women who cannot get pregnant because of uterine infertility will be able to have a fully functional uterus made from their own tissue."
Alfie was born and thus had legal rights, yet legal and ethical mayhem arose out of his case. When it comes to gestatelings, the scenarios will be even more complicated, says Romanis. "I think there's a really big question about who has parental rights and who doesn't," she says. "The assisted reproductive technology (ART) law in the U.K. hasn't been updated since 2008....It certainly needs an update when you think about all the things we have done since [then]."
This June, for instance, scientists from the Wake Forest Institute for Regenerative Medicine in North Carolina published research showing that they could take a small sample of tissue from a rabbit's uterus and create a bioengineered uterus, which then supported both fertilization and normal pregnancy like a natural uterus does.
"In [a number of] years from now, women who cannot get pregnant because of uterine infertility will be able to have a fully functional uterus made from their own tissue," says Dr. Anthony Atala, the Institute's director and a pioneer in regenerative medicine. These bioengineered uteri will eventually be covered by insurance, Atala expects. But when it comes to artificial wombs that externally gestate premature infants, will all mothers have equal access?
Medical reports have already shown racial and ethnic disparities in infertility treatments and access to assisted reproductive technologies. Treatment costs average $12,400 per cycle, and several cycles may be needed to achieve a live birth. "There's no indication that artificial wombs would be treated any differently. That's what we see with almost every expensive new medical technology," says Johnston. In a much more dystopian future, there is even a possibility that inequity in healthcare might create disturbing chasms in how women of various class levels bear children. Romanis asks us to picture the following scenario:
We live in a world where artificial wombs have become mainstream. Most women choose to end their pregnancies early and transfer their gestatelings to the care of machines. After a while, insurers deem full-term pregnancy and childbirth a risky non-necessity, and are lobbying to stop covering them altogether. Wealthy white women continue opting out of their third trimesters (at a high cost), while natural pregnancy becomes the substandard route left to poorer women. Those women are strongly judged for any behaviors that could risk their fetus's health, in contrast with the machine's controlled environment. "Why are you having a coffee during your pregnancy?" critics might ask. "Why are you having a glass of red wine? If you can't be perfect, why don't you have it the artificial way?"
Problem is, even if they want to, they won't be able to afford it.
In a more sanguine version, however, the artificial wombs are only used in cases of prematurity as a life-saving medical intervention rather than as a lifestyle accommodation. The 15 million babies who are born prematurely each year and may face serious respiratory, cardiovascular, visual and hearing problems, as well as learning disabilities, instead continue their normal development in artificial wombs. After lots of deliberation, insurers agree to bear the cost of external wombs because they are cheaper than a lifetime of medical care for a disabled or diseased person. This enables racial and ethnic minority women, who make up the majority of women giving premature birth, to access the technology.
Even extremely premature babies, those born (far) below the threshold of 28 weeks of gestation, half of whom die, could now discover this thing called life. In this scenario, as the Australian researcher Kemp says, we are simply giving a good shot at healthy, long-term survival to those who were unfortunate enough to start too soon.
Real-Time Monitoring of Your Health Is the Future of Medicine
The same way that it's harder to lose 100 pounds than it is to not gain 100 pounds, it's easier to stop a disease before it happens than to treat an illness once it's developed.
Bio-engineers working on the next generation of diagnostic tools say today's technologies, such as colonoscopies or mammograms, are reactive; that is, they tell a person they are sick, often when it's too late to reverse course. Surveillance medicine — such as implanted sensors — will detect disease at its onset, in real time.
What Is Possible?
Ever since the Human Genome Project — which concluded in 2003 after mapping the DNA sequence of all 30,000 human genes — modern medicine has shifted to "personalized medicine." Under this approach, also called "precision health," 21st-century doctors can in some cases assess a person's risk for specific diseases from his or her DNA. The information enables women with a BRCA gene mutation, for example, to undergo more frequent screenings for breast cancer or to pro-actively choose to remove their breasts, as a "just in case" measure.
But your DNA is not always enough to determine your risk of illness. Not all genetic mutations are harmful, for example, and people can get sick without a genetic cause, such as with an infection. Hence the need for a more "real-time" way to monitor health.
Aaron Morris, a postdoctoral researcher in the Department of Biomedical Engineering at the University of Michigan, wants doctors to be able to predict illness with pinpoint accuracy well before symptoms show up. Working in the lab of Dr. Lonnie Shea, the team is building "a tiny diagnostic lab" that can live under a person's skin and monitor for illness, 24/7. Currently being tested in mice, the Michigan team's porous biodegradable implant becomes part of the body as "cells move right in," says Morris, allowing engineered tissue to be biopsied and analyzed for diseases. The information collected by the sensors will enable doctors to predict disease flareups, such as for cancer relapses, so that therapies can begin well before a person comes out of remission. The technology will also measure the effectiveness of those therapies in real time.
In Morris' dream scenario "everyone will be implanted with a sensor" ("…the same way most people are vaccinated") and the sensor will alert people to go to the doctor if something is awry.
While it may be four or five decades before Morris' sensor becomes mainstream, "the age of surveillance medicine is here," says Jamie Metzl, a technology and healthcare futurist who penned Hacking Darwin: Genetic Engineering and the Future of Humanity. "It will get more effective and sophisticated and less obtrusive over time," says Metzl.
Already, Google compiles public health data about disease hotspots by amalgamating individual searches for medical symptoms; pill technology can digitally track when and how much medication a patient takes; and the Apple Watch heart app can predict with 85-percent accuracy whether an individual wearing the device has atrial fibrillation (AFib) — a condition that causes stroke, blood clots and heart failure, and goes undiagnosed in 700,000 people each year in the U.S.
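As a rough illustration of how such wrist-based screening tends to work (this is not Apple's actual algorithm; the function name, threshold, and sample intervals below are invented), AFib typically shows up as erratic spacing between heartbeats, so a detector can flag recordings whose beat-to-beat intervals vary too much:

```python
import statistics

# Hypothetical sketch of pulse-irregularity screening, NOT Apple's algorithm.
# AFib tends to produce "irregularly irregular" gaps between heartbeats, so a
# high variability score across inter-beat intervals is treated as a red flag.

def possible_afib(intervals_ms: list, threshold: float = 0.12) -> bool:
    """Flag a recording whose inter-beat intervals vary suspiciously much.

    Uses the coefficient of variation (stdev / mean) as a crude irregularity
    score; the 0.12 cutoff is made up for illustration.
    """
    cv = statistics.stdev(intervals_ms) / statistics.mean(intervals_ms)
    return cv > threshold

steady = [810, 798, 805, 802, 795, 808]    # regular rhythm -> False
erratic = [620, 980, 710, 1150, 540, 900]  # irregular rhythm -> True
print(possible_afib(steady), possible_afib(erratic))
```

A real device would also need to reject motion artifacts and confirm any flag with further measurement, which is one reason screening accuracy falls short of 100 percent.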
"We'll never be able to predict everything," says Metzl. "But we will always be able to predict and prevent more and more; that is the future of healthcare and medicine."
Morris believes that within ten years there will be surveillance tools that can predict if an individual has contracted the flu well before symptoms develop.
At City College of New York, Ryan Williams, assistant professor of biomedical engineering, has built an implantable nano-sensor that works with a fluorescent wand to scope out if cancer cells are growing at the implant site. "Instead of having the ovary or breast removed, the patient could just have this [surveillance] device that can say 'hey, we're monitoring for this' in real-time… [to] measure whether the cancer is maybe coming back," as opposed to having biopsy tests or undergoing treatments or invasive procedures.
Not all surveillance technologies that are being developed need to be implanted. At Case Western, Colin Drummond, PhD, MBA, a data scientist and assistant department chair of the Department of Biomedical Engineering, is building a "surroundable." He describes it as an Alexa-style surveillance system (he's named her Regina) that will "tell" the user, if a need for medication arises, how much to take and when.
Bioethical Red Flags
"Everyone should be extremely excited about our move toward what I call predictive and preventive health care and health," says Metzl. "We should also be worried about it. Because all of these technologies can be used well and they can [also] be abused." The concerns are many layered:
Discriminatory practices
For years now, bioethicists have expressed concerns about employer-sponsored wellness programs that encourage fitness while also tracking employee health data. "Getting access to your health data can change the way your employer thinks about your employability," says Keisha Ray, assistant professor at the University of Texas Health Science Center at Houston (UTHealth). Such access can lead to discriminatory practices against employees who are less fit. "Surveillance medicine only heightens those risks," says Ray.
Who owns the data?
Surveillance medicine may help "democratize healthcare" which could be a good thing, says Anita Ho, an associate professor in bioethics at both the University of California, San Francisco and at the University of British Columbia. It would enable easier access by patients to their health data, delivered to smart phones, for example, rather than waiting for a call from the doctor. But, she also wonders who will own the data collected and if that owner has the right to share it or sell it. "A direct-to-consumer device is where the lines get a little blurry," says Ho. Currently, health data collected by Apple Watch is owned by Apple. "So we have to ask bigger ethical questions in terms of what consent should be required" by users.
Insurance coverage
"Consumers of these products deserve some sort of assurance that using a product that will predict future needs won't in any way jeopardize their ability to access care for those needs," says Hastings Center bioethicist Carolyn Neuhaus. She is urging lawmakers to begin tackling policy issues created by surveillance medicine, now, well ahead of the technology becoming mainstream, not unlike GINA, the Genetic Information Nondiscrimination Act of 2008 -- a federal law designed to prevent discrimination in health insurance on the basis of genetic information.
And, because not all Americans have insurance, Ho wants to know, who's going to pay for this technology and how much will it cost?
Trusting our guts
Some bioethicists are concerned that surveillance technology will reduce individuals to their "risk profiles," leaving health care systems to perceive them as nothing more than a "bundle of health and security risks." And further, in our quest to predict and prevent ailments, Neuhaus wonders whether an over-reliance on data could damage the ability of future generations to trust their guts and tune into their own bodies.
It "sounds kind of hippy-dippy and feel-goodie," she admits. But in our culture of medicine where efficiency is highly valued, there's "a tendency to not value and appreciate what one feels inside of their own body … [because] it's easier to look at data than to listen to people's really messy stories of how they 'felt weird' the other day. It takes a lot less time to look at a sheet, to read out what the sensor implanted inside your body or planted around your house says."
Ho, too, worries about lost narratives. "For surveillance medicine to actually work, we have to think about how we educate clinicians about the utility of these devices and how to interpret the data in the broader context of patients' lives."
Over-diagnosing
While one of the goals of surveillance medicine is to cut down on doctor visits, Ho wonders if the technology will have the opposite effect. "People may be going to the doctor more for things that actually are benign and are really not of concern yet," says Ho. She is also concerned that surveillance tools could make healthcare almost "recreational" and underscores the importance of making sure that the goals of surveillance medicine are met before the technology is unleashed.
"We can't just assume that any of these technologies are inherently technologies of liberation."
AI doesn't fix existing healthcare problems
"Knowing that you're going to have a fall or going to relapse or have a disease isn't all that helpful if you have no access to the follow-up care and you can't afford it and you can't afford the prescription medication that's going to ward off the onset," says Neuhaus. "It may still be worth knowing … but we can't fool ourselves into thinking that this technology is going to reshape medicine in America if we don't pay attention to … the infrastructure that we don't currently have."
Race-based medicine
How surveillance devices are tested before being approved for human use is a major concern for Ho. In recent years, alerts have been raised about the homogeneity of study group participants — too white and too male. Ho wonders if the devices will be able to "accurately predict the disease progression for people whose data has not been used in developing the technology?" COVID-19 has killed Black people at a rate 2.5 times greater than white people, for example, and new, virtual clinical research is focused on recruiting more people of color.
The Biggest Question
"We can't just assume that any of these technologies are inherently technologies of liberation," says Metzl.
Especially because we haven't yet asked the $64,000 question: Would patients even want to know?
Jenny Ahlstrom is an IT professional who was diagnosed at 43 with multiple myeloma, a blood cancer that typically attacks people in their late 60s and 70s and for which there is no cure. She believes that most people won't want to know about their declining health in real time. People like to live "optimistically in denial most of the time. If they don't have a problem, they don't want to really think they have a problem until they have [it]," especially when there is no cure. "Psychologically? That would be hard to know."
Ahlstrom says there's also the issue of trust, something she experienced first-hand when she launched her non-profit, HealthTree, a crowdsourcing tool to help myeloma patients "find their genetic twin" and learn what therapies may or may not work. "People want to share their story, not their data," says Ahlstrom. "We have been so conditioned as a nation to believe that our medical data is so valuable."
Metzl acknowledges that adoption of new technologies will be uneven. But he also believes that "over time, it will be abundantly clear that it's much, much cheaper to predict and prevent disease than it is to treat disease once it's already emerged."
Beyond cost, the tremendous potential of these technologies to help us live healthier and longer lives is a game-changer, he says, as long as we find ways "to ultimately navigate this terrain and put systems in place ... to minimize any potential harms."