What’s the Right Way to Regulate Gene-Edited Crops?
In the next few decades, humanity faces its biggest food crisis since the invention of the plow. The planet's population, currently 7.6 billion, is expected to reach 10 billion by 2050; to avoid mass famine, according to the World Resources Institute, we'll need to produce 70 percent more calories than we do today.
Meanwhile, climate change will bring intensifying assaults by heat, drought, storms, pests, and weeds, depressing farm yields around the globe. Epidemics of plant disease—already laying waste to wheat, citrus, bananas, coffee, and cacao in many regions—will spread ever further through the vectors of modern trade and transportation.
So here's a thought experiment: Imagine that a cheap, easy-to-use, and rapidly deployable technology could make crops more fertile and strengthen their resistance to these looming threats. Imagine that it could also render them more nutritious and tastier, with longer shelf lives and less vulnerability to damage in shipping—adding enhancements to human health and enjoyment, as well as reduced food waste, to the possible benefits.
Finally, imagine that crops bred with the aid of this tool might carry dangers. Some could contain unsuspected allergens or toxins. Others might disrupt ecosystems, affecting the behavior or very survival of other species, or infecting wild relatives with their altered DNA.
Now ask yourself: If such a technology existed, should policymakers encourage its adoption, or ban it due to the risks? And if you chose the former alternative, how should crops developed by this method be regulated?
In fact, this technology does exist, though its use remains mostly experimental. It's called gene editing, and in the past five years it has emerged as a potentially revolutionary force in many areas—among them, treating cancer and genetic disorders; growing transplantable human organs in pigs; controlling malaria-spreading mosquitoes; and, yes, transforming agriculture. Several versions are currently available, the newest and nimblest of which goes by the acronym CRISPR.
Gene editing is far simpler and more efficient than older methods used to produce genetically modified organisms (GMOs). Unlike those methods, moreover, it can be used in ways that leave no foreign genes in the target organism—an advantage that proponents argue should comfort anyone leery of consuming so-called "Frankenfoods." But debate persists over what precautions must be taken before these crops come to market.
Recently, two of the world's most powerful regulatory bodies offered very different answers to that question. The United States Department of Agriculture (USDA) declared in March 2018 that it "does not currently regulate, or have any plans to regulate" plants that are developed through most existing methods of gene editing. The Court of Justice of the European Union (ECJ), by contrast, ruled in July that such crops should be governed by the same stringent regulations as conventional GMOs.
Each announcement drew protests, for opposite reasons. Anti-GMO activists assailed the USDA's statement, arguing that all gene-edited crops should be tested and approved before marketing. "You don't know what those mutations or rearrangements might do in a plant," warned Michael Hansen, a senior scientist with the advocacy group Consumers Union. Biotech boosters griped that the ECJ's decision would stifle innovation and investment. "By any sensible standard, this judgment is illogical and absurd," wrote the British newspaper The Observer.
Yet some experts suggest that the broadly permissive American approach and the broadly restrictive EU policy are equally flawed. "What's behind these regulatory decisions is not science," says Jennifer Kuzma, co-director of the Genetic Engineering and Society Center at North Carolina State University and a former advisor to the World Economic Forum, who has researched and written extensively on governance issues in biotechnology. "It's politics, economics, and culture."
The U.S. Welcomes Gene-Edited Food
Humans have been modifying the genomes of plants and animals for 10,000 years, using selective breeding—a hit-or-miss method that can take decades or more to deliver rewards. In the mid-20th century, we learned to speed up the process by exposing organisms to radiation or mutagenic chemicals. But it wasn't until the 1980s that scientists began modifying plants by altering specific stretches of their DNA.
Today, about 90 percent of the corn, cotton and soybeans planted in the U.S. are GMOs; such crops cover nearly 4 million square miles (10 million square kilometers) of land in 29 countries. Most of these plants are transgenic, meaning they contain genes from an unrelated species—often as biologically alien as a virus or a fish. Their modifications are designed primarily to boost profit margins for mechanized agribusiness: allowing crops to withstand herbicides so that weeds can be controlled by mass spraying, for example, or to produce their own pesticides to lessen the need for chemical inputs.
In the early days, the majority of GM crops were created by extracting the gene for a desired trait from a donor organism, multiplying it, and attaching it to other snippets of DNA—usually from a microbe called an agrobacterium—that could help it infiltrate the cells of the target plant. Biotechnologists injected these particles into the target, hoping at least one would land in a place where it would perform its intended function; if not, they kept trying. The process was quicker than conventional breeding, but still complex, scattershot, and costly.
Because agrobacteria can cause plant tumors, Kuzma explains, policymakers in the U.S. decided to regulate GMO crops under an existing law, the Plant Pest Act of 1957, which addressed dangers like imported trees infested with invasive bugs. Every GMO containing the DNA of agrobacterium or another plant pest had to be tested to see whether it behaved like a pest, and undergo a lengthy approval process. By 2010, however, new methods had been developed for creating GMOs without agrobacteria; such plants could typically be marketed without pre-approval.
Soon after that, the first gene-edited crops began appearing. If old-school genetic engineering was a shotgun, techniques like TALEN and CRISPR were a scalpel—or the search-and-replace function on a computer program. With CRISPR/Cas9, for example, an enzyme that bacteria use to recognize and chop up hostile viruses is reprogrammed to find and snip out a desired bit of a plant or other organism's DNA. The enzyme can also be used to insert a substitute gene. If a DNA sequence is simply removed, or the new gene comes from a similar species, the changes in the target plant's genotype and phenotype (its general characteristics) may be no different from those that could be produced through selective breeding. If a foreign gene is added, the plant becomes a transgenic GMO.
This development, along with the emergence of non-agrobacterium GMOs, eventually prompted the USDA to propose a tiered regulatory system for all genetically engineered crops, beginning with an initial screening for potentially hazardous metabolites or ecological impacts. (The screening was intended, in part, to guard against the "off-target effects"—stray mutations—that occasionally appear in gene-edited organisms.) If no red flags appeared, the crop would be approved; otherwise, it would be subject to further review, and possible regulation.
The plan was unveiled in January 2017, during the last week of the Obama presidency. Then, under the Trump administration, it was shelved. Although the USDA continues to promise a new set of regulations, the only hint of what they might contain has been Secretary of Agriculture Sonny Perdue's statement last March that gene-edited plants would remain unregulated if they "could otherwise have been developed through traditional breeding techniques, as long as they are not plant pests or developed using plant pests."
Because transgenic plants could not be "developed through traditional breeding techniques," this statement could be taken to mean that gene editing in which foreign DNA is introduced might actually be regulated. But because the USDA regulates conventional transgenic GMOs only if they trigger the plant-pest stipulation, experts assume gene-edited crops will face similarly limited oversight.
Meanwhile, companies are already teeing up gene-edited products for the U.S. market. An herbicide-resistant oilseed rape, developed using a proprietary technique, has been available since 2016. A cooking oil made from TALEN-tweaked soybeans, designed to have a healthier fatty-acid profile, is slated for release within the next few months. A CRISPR-edited "waxy" corn, designed with a starch profile ideal for processed foods, should be ready by 2021.
In all likelihood, none of these products will have to be tested for safety.
In the EU, Stricter Rules Apply
Now let's look at the European Union. Since the late 1990s, explains Gregory Jaffe, director of the Project on Biotechnology at the Center for Science in the Public Interest, the EU has had a "process-based trigger" for genetically engineered products: "If you use recombinant DNA, you are going to be regulated." All foods and animal feeds must be approved and labeled if they consist of or contain more than 0.9 percent GM ingredients. (In the U.S., "disclosure" of GM ingredients is mandatory, if someone asks, but labeling is not required.) The only GM crop that can be commercially grown in EU member nations is a type of insect-resistant corn, though some countries allow imports.
European scientists helped develop gene editing, and they—along with the continent's biotech entrepreneurs—have been busy developing applications for crops. But European farmers seem more divided over the technology than their American counterparts. The main French agricultural trades union, for example, supports research into non-transgenic gene editing and its exemption from GMO regulation. But it was the country's small-farmers' union, the Confédération Paysanne, along with several allied groups, that in 2015 submitted a complaint to the ECJ, asking that all plants produced via mutagenesis—including gene-editing—be regulated as GMOs.
At this point, it should be mentioned that in the past 30 years, large population studies have found no sign that consuming GM foods is harmful to human health. GMO critics can, however, point to evidence that herbicide-resistant crops have encouraged overuse of herbicides, giving rise to poison-proof "superweeds," polluting the environment with suspected carcinogens, and inadvertently killing beneficial plants. Those allegations were key to the French plaintiffs' argument that gene-edited crops might similarly do unexpected harm. (Disclosure: Leapsmag's parent company, Bayer, recently acquired Monsanto, a maker of herbicides and herbicide-resistant seeds. Also, Leaps by Bayer, an innovation initiative of Bayer and Leapsmag's direct founder, has funded a biotech startup called JoynBio that aims to reduce the amount of nitrogen fertilizer required to grow crops.)
In the end, the EU court found in the Confédération's favor on gene editing—though the court maintained the regulatory exemption for mutagenesis induced by chemicals or radiation, citing the "long safety record" of those methods.
The ruling was "scientifically nonsensical," fumes Rodolphe Barrangou, a French food scientist who pioneered CRISPR while working for DuPont in Wisconsin and is now a professor at NC State. "It's because of things like this that I'll never go back to Europe."
Nonetheless, the decision was consistent with longstanding EU policy on crops made with recombinant DNA. Given the difficulty and expense of getting such products through the continent's regulatory system, many other European researchers may wind up following Barrangou to America.
Getting to the Root of the Cultural Divide
What explains the divergence between the American and European approaches to GMOs—and, by extension, gene-edited crops? In part, Jennifer Kuzma speculates, it's that Europeans have a different attitude toward eating. "They're generally more tied to where their food comes from, where it's produced," she notes. They may also share a mistrust of government assurances on food safety, born of the region's Mad Cow scandals of the 1980s and '90s. In Catholic countries, consumers may have misgivings about tinkering with the machinery of life.
But the principal factor, Kuzma argues, is that European and American agriculture are structured differently. "GM's benefits have mostly been designed for large-scale industrial farming and commodity crops," she says. That kind of farming is dominant in the U.S., but not in Europe, leading to a different balance of political power. In the EU, there was less pressure on decisionmakers to approve GMOs or exempt gene-edited crops from regulation—and more pressure to adopt a GM-resistant stance.
Such dynamics may be operating in other regions as well. In China, for example, the government has long encouraged research in GMOs; a state-owned company recently acquired Syngenta, a Swiss-based multinational corporation that is a leading developer of GM and gene-edited crops. GM animal feed and cooking oil can be freely imported. Yet commercial cultivation of most GM plants remains forbidden, out of deference to popular suspicions of genetically altered food. "As a new item, society has debates and doubts on GMO techniques, which is normal," President Xi Jinping remarked in 2014. "We must be bold in studying it, [but] be cautious promoting it."
The proper balance between boldness and caution is still being worked out all over the world. Europe's process-based approach may prevent researchers from developing crops that, with a single DNA snip, could rescue millions from starvation. EU regulations will also make it harder for small entrepreneurs to challenge Big Ag with a technology that, as Barrangou puts it, "can be used affordably, quickly, scalably, by anyone, without even a graduate degree in genetics." America's product-based approach, conversely, may let crops with hidden genetic dangers escape detection. And by refusing to investigate such risks, regulators may wind up exacerbating consumers' doubts about GM and gene-edited products, rather than allaying them.
Perhaps the solution lies in combining both approaches, and adding some flexibility and nuance to the mix. "I don't believe in regulation by the product or the process," says CSPI's Jaffe. "I think you need both." Deleting a DNA base pair to silence a gene, for example, might be less risky than inserting a foreign gene into a plant—unless the deletion enables the production of an allergen, and the transgene comes from spinach.
Kuzma calls for the creation of "cooperative governance networks" to oversee crop genome editing, similar to bodies that already help develop and enforce industry standards in fisheries, electronics, industrial cleaning products, and (not incidentally) organic agriculture. Such a network could include farmers, scientists, advocacy groups, private companies, and governmental agencies. "Safety isn't an all-or-nothing concept," Kuzma says. "Science can tell you what some of the issues are in terms of risk and benefit, but it can't tell you what to regulate. That's a values-based decision."
By drawing together a wide range of stakeholders to make such decisions, she adds, "we're more likely to anticipate future consequences, and to develop a robust approach—one that not only seems more legitimate to people, but is actually just plain old better."
Exactly 67 years ago, in 1955, a group of scientists and reporters gathered at the University of Michigan and waited with bated breath for Dr. Thomas Francis Jr., director of the school’s Poliomyelitis Vaccine Evaluation Center, to approach the podium. The group had gathered to hear the news that seemingly everyone in the country had been anticipating for the past two years – whether the vaccine for poliomyelitis, developed by Francis’s former student Jonas Salk, was effective in preventing the disease.
Polio, at that point, had become a household name. As the highly contagious virus swept through the United States, cities closed their schools, movie theaters, swimming pools, and even churches to stop the spread. For most, polio caused a mild illness or no symptoms at all – but for an unlucky few, the virus took hold of the central nervous system and permanently paralyzed muscles in the legs, arms, and even the diaphragm, leaving victims unable to walk or breathe on their own. It wasn’t uncommon to hear reports of people – mostly children – who fell sick with a flu-like illness and then, just days later, were consigned to spend the rest of their lives in an iron lung.
For two years, researchers had been testing a vaccine that they hoped would stop the spread of the virus and prevent the 45,000 infections each year that held the nation in a chokehold. At the podium, Francis greeted the crowd and then proceeded to change the course of human history: The vaccine, he reported, was “safe, effective, and potent.” Widespread vaccination could begin in just a few weeks. The nightmare was over.
The road to success
Jonas Salk, a medical researcher and virologist who developed the vaccine with his own research team, would rightfully go down in history as the man who eradicated polio. (Today, wild poliovirus circulates in just two countries, Afghanistan and Pakistan – with only 140 cases reported in 2020.) But many people today forget that the widespread vaccination campaign that effectively ended wild polio across the globe would have never been possible without the human clinical trials that preceded it.
The road to human clinical trials – and the resulting vaccine – was a long one. In 1938, President Franklin Delano Roosevelt launched the National Foundation for Infantile Paralysis in order to raise funding for research and development of a polio vaccine. (Today, we know this organization as the March of Dimes.) A polio survivor himself, Roosevelt elevated awareness and prevention into the national spotlight, even more so than it had been previously. Raising funds for a safe and effective polio vaccine became a cornerstone of his presidency – and the funds raked in by his foundation went primarily to Salk to fund his research.
The Trials Begin
Salk’s vaccine, which included an inactivated (killed) polio virus, was promising – but now the researchers needed test subjects to make global vaccination a possibility. Because the aim of the vaccine was to prevent paralytic polio, researchers decided that they had to test the vaccine in the population that was most vulnerable to paralysis – young children. And, because the rate of paralysis was so low even among children, the team required many children to collect enough data. Francis, who led the trial to evaluate Salk’s vaccine, began the process of recruiting more than one million school-aged children between the ages of six and nine in 272 counties that had the highest incidence of the disease. The participants were nicknamed the “Polio Pioneers.”
Double-blind, placebo-controlled trials were considered the “gold standard” of epidemiological research back in Francis’s day – and they remain the best approach we have today. These rigorous scientific studies are designed with two participant groups in mind. One group, called the test group, receives the experimental treatment (such as a vaccine); the other group, called the control, receives an inactive treatment known as a placebo. The researchers then compare the effects of the active treatment against the effects of the placebo, and neither the participants nor the researchers know who is receiving which treatment until the trial ends. That way, the results aren’t tainted by any possible biases.
But the study was controversial in that only some of the individual field trials at the county and state levels had a placebo group. Researchers described this as a “calculated risk,” meaning that while there were risks involved in giving the vaccine to a large number of children, the bigger risk was the potential paralysis or death that could come with being infected by polio. In all, just 200,000 children across the US received a placebo treatment, while an additional 725,000 children acted as observational controls – in other words, researchers monitored them for signs of infection, but did not give them any treatment.
As with the COVID-19 vaccine, skepticism and misinformation around the polio vaccine abounded. But even more pervasive than the skepticism was fear. President Roosevelt, whose polio infection at age 39 had left him permanently unable to walk, served as a perpetual reminder of the consequences of the disease. Those consequences had arguably never been more visible, and parents signed up their children in droves to participate in the study and offer them protection.
The Polio Pioneer Legacy
In a little less than a year, roughly half a million children received a dose of Salk’s polio vaccine. Plenty of children were hesitant to get the shot, but many former participants remember the fear surrounding the disease as far more powerful. One former participant, a Polio Pioneer named Debbie LaCrosse, writes of her experience: “There was no discussion, no listing of pros and cons. No amount of concern over possible side effects or other unknowns associated with a new vaccine could compare to the terrifying threat of polio.” For their participation, each child received a certificate – and sometimes a pin – with the words “Polio Pioneer” emblazoned across the front.
When Francis announced the results of the trial on April 12, 1955, people did more than just breathe a sigh of relief – they openly celebrated, ringing church bells and flooding into the streets to embrace. Salk, who had become the face of the vaccine at that point, was instantly hailed as a national hero – and teachers around the country had their students write him ‘thank you’ notes for his years of diligent work.
But while Salk went on to win national acclaim – even accepting the Presidential Medal of Freedom for his work on the polio vaccine in 1977 – his success was due in no small part to the children (and their parents) who took a risk in order to advance medical science. And that risk paid off: By the early 1960s, yearly cases of polio in the United States had fallen to just 910. Where the virus had once caused around 15,000 cases of paralysis each year, only ten cases of paralysis were recorded in the entire country throughout the 1970s. And in 1979, the virus that once shuttered entire towns was officially declared eliminated in this country. Thanks to the efforts of these brave pioneers, the nation – along with the majority of the world – remains free of polio even today.
Why you should (virtually) care
As the pandemic turns endemic, healthcare providers have been eagerly urging patients to return to their offices to enjoy the benefits of in-person care.
But wait.
The last two years have forced all sorts of organizations to be nimble, adaptable and creative in how they work, and this includes healthcare providers’ efforts to maintain continuity of care under the most challenging of conditions. So before we go back to “business as usual,” don’t we owe it to those providers and ourselves to admit that business as usual did not work for most of the people the industry exists to help? If we’re going to embrace yet another period of change – periods that don’t happen often in our complex industry – shouldn’t we first stop and ask ourselves what we’re trying to achieve?
Certainly, COVID has shown that telehealth can be an invaluable tool, particularly for patients in rural and underserved communities that lack access to specialty care. It’s also become clear that many – though not all – healthcare encounters can be effectively conducted from afar. That said, the telehealth tactics that filled the gap during the pandemic were largely stitched together substitutes for existing visit-based workflows: with offices closed, patients scheduled video visits for help managing the side effects of their blood pressure medications or to see their endocrinologist for a quarterly check-in. Anyone whose children slogged through the last year or two of remote learning can tell you that simply virtualizing existing processes doesn’t necessarily improve the experience or the outcomes!
But what if our approach to post-pandemic healthcare came from a patient-driven perspective? We have a fleeting opportunity to advance a care model centered on convenient and equitable access that first prioritizes good outcomes, then selects approaches to care – and locations – tailored to each patient. To return to the education example, imagine how effective remote learning would have been if each student, regardless of school district and aptitude, had received that kind of individualized attention.
That’s the idea behind virtual-first care (V1C), a new care model centered on convenient, customized, high-quality care that integrates a full suite of services tailored directly to patients’ clinical needs and preferences. This package includes asynchronous communication such as texting; video and other live virtual modes; and in-person options.
V1C goes beyond what you might think of as standard “telehealth” by using evidence-based protocols and tools that include traditional and digital therapeutics and testing, personalized care plans, dynamic patient monitoring, and team-based approaches to care. This could mean mailed spit kits for laboratory tests, or health coaching that complements clinical care. V1C also replaces some in-person exams with ongoing monitoring, using sensors for more “whole person” care.
Established V1C healthcare providers such as Omada, Headspace, and Heartbeat Health, as well as emerging market entrants like Oshi, Visana, and Wellinks, work with a variety of patients who have complicated long-term conditions such as diabetes, heart failure, gastrointestinal illness, endometriosis, and COPD. V1C is comprehensive in ways that are lacking in digital health and its other predecessors: it has the potential to integrate multiple data streams, incorporate more frequent touches and check-ins over time, and manage a much wider range of chronic health conditions, improving lives and reducing disease burden now and in the future.
Given the pandemic-driven interest in virtual care, significant energy and resources are already flowing toward V1C. Some of the world’s largest innovators jumped into V1C early on: Verily, Alphabet’s life sciences company, launched Onduo in 2016 to disrupt the diabetes healthcare market, and is now well positioned to scale its solutions. Major insurers like Aetna and United now offer virtual-first plans to members, responding as organizations expand virtual options for employees. Amidst all this momentum, we have the opportunity to rethink the goals of healthcare innovation, but that means bringing together key stakeholders to demonstrate that sustainable V1C can redefine healthcare.
That was the immediate impetus for IMPACT, a consortium of V1C companies, investors, payers and patients founded last year to ensure access to high-quality, evidence-based V1C. Developed by our team at the Digital Medicine Society (DiMe) in collaboration with the American Telemedicine Association (ATA), IMPACT has begun to explore key issues that include giving patients more integrated experiences when accessing both virtual and brick-and-mortar care.
V1C is not, nor should it be, virtual-only care. In this new era of hybrid healthcare, success will be defined by how well providers help patients navigate the transitions. How do we smoothly hand a patient off from an onsite primary care physician to, say, a virtual cardiologist? How do we get information from a brick-and-mortar office into a digital portal? How do we manage dataflow while staying HIPAA compliant? There are many complex regulatory implications for these new models, as well as an evolving landscape in terms of privacy, security, and interoperability. It will be no small task for groups like IMPACT to determine the best path forward.
None of these factors matter unless the industry can recruit and retain clinicians. Our field is facing an unprecedented workforce crisis. Traditional healthcare is making clinicians miserable, and COVID has only accelerated the trend of overworked, disenchanted healthcare workers leaving in droves. Clinicians want more interactions with patients, and fewer with computer screens – call it “More face time, less FaceTime.” No new model will succeed unless the industry can more efficiently deploy its talent – arguably its scarcest and most precious resource. V1C can help alleviate the mounting burden and frustration borne by individual physicians under today’s status quo.
In healthcare, new technological approaches inevitably provoke skepticism. Past lessons from Silicon Valley-driven fixes have led to understandable cynicism. But V1C is a different breed of animal. By building healthcare around the patient, not the clinic, V1C can make healthcare work better for patients, payers, and providers. We’re at a fork in the road: we can revert to a broken sick-care system, or dig in and do the hard work of figuring out how this future-forward healthcare system gets financed, organized, and executed. As a field, we must find the courage and summon the energy to embrace this moment, and make it a moment of change.