What’s the Right Way to Regulate Gene-Edited Crops?
In the next few decades, humanity faces its biggest food crisis since the invention of the plow. The planet's population, currently 7.6 billion, is expected to reach 10 billion by 2050; to avoid mass famine, according to the World Resources Institute, we'll need to produce 70 percent more calories than we do today.
Meanwhile, climate change will bring intensifying assaults by heat, drought, storms, pests, and weeds, depressing farm yields around the globe. Epidemics of plant disease—already laying waste to wheat, citrus, bananas, coffee, and cacao in many regions—will spread ever further through the vectors of modern trade and transportation.
So here's a thought experiment: Imagine that a cheap, easy-to-use, and rapidly deployable technology could make crops more fertile and strengthen their resistance to these looming threats. Imagine that it could also render them more nutritious and tastier, with longer shelf lives and less vulnerability to damage in shipping—adding enhancements to human health and enjoyment, as well as reduced food waste, to the possible benefits.
Finally, imagine that crops bred with the aid of this tool might carry dangers. Some could contain unsuspected allergens or toxins. Others might disrupt ecosystems, affecting the behavior or very survival of other species, or infecting wild relatives with their altered DNA.
Now ask yourself: If such a technology existed, should policymakers encourage its adoption, or ban it due to the risks? And if you chose the former alternative, how should crops developed by this method be regulated?
In fact, this technology does exist, though its use remains mostly experimental. It's called gene editing, and in the past five years it has emerged as a potentially revolutionary force in many areas—among them, treating cancer and genetic disorders; growing transplantable human organs in pigs; controlling malaria-spreading mosquitoes; and, yes, transforming agriculture. Several versions are currently available, the newest and nimblest of which goes by the acronym CRISPR.
Gene editing is far simpler and more efficient than older methods used to produce genetically modified organisms (GMOs). Unlike those methods, moreover, it can be used in ways that leave no foreign genes in the target organism—an advantage that proponents argue should comfort anyone leery of consuming so-called "Frankenfoods." But debate persists over what precautions must be taken before these crops come to market.
Recently, two of the world's most powerful regulatory bodies offered very different answers to that question. The United States Department of Agriculture (USDA) declared in March 2018 that it "does not currently regulate, or have any plans to regulate" plants that are developed through most existing methods of gene editing. The Court of Justice of the European Union (ECJ), by contrast, ruled in July 2018 that such crops should be governed by the same stringent regulations as conventional GMOs.
Each announcement drew protests, for opposite reasons. Anti-GMO activists assailed the USDA's statement, arguing that all gene-edited crops should be tested and approved before marketing. "You don't know what those mutations or rearrangements might do in a plant," warned Michael Hansen, a senior scientist with the advocacy group Consumers Union. Biotech boosters griped that the ECJ's decision would stifle innovation and investment. "By any sensible standard, this judgment is illogical and absurd," wrote the British newspaper The Observer.
Yet some experts suggest that the broadly permissive American approach and the broadly restrictive EU policy are equally flawed. "What's behind these regulatory decisions is not science," says Jennifer Kuzma, co-director of the Genetic Engineering and Society Center at North Carolina State University and a former advisor to the World Economic Forum, who has researched and written extensively on governance issues in biotechnology. "It's politics, economics, and culture."
The U.S. Welcomes Gene-Edited Food
Humans have been modifying the genomes of plants and animals for 10,000 years, using selective breeding—a hit-or-miss method that can take decades or more to deliver rewards. In the mid-20th century, we learned to speed up the process by exposing organisms to radiation or mutagenic chemicals. But it wasn't until the 1980s that scientists began modifying plants by altering specific stretches of their DNA.
Today, about 90 percent of the corn, cotton and soybeans planted in the U.S. are GMOs; such crops cover nearly 4 million square miles (10 million square kilometers) of land in 29 countries. Most of these plants are transgenic, meaning they contain genes from an unrelated species—often as biologically alien as a virus or a fish. Their modifications are designed primarily to boost profit margins for mechanized agribusiness: allowing crops to withstand herbicides so that weeds can be controlled by mass spraying, for example, or to produce their own pesticides to lessen the need for chemical inputs.
In the early days, the majority of GM crops were created by extracting the gene for a desired trait from a donor organism, multiplying it, and attaching it to other snippets of DNA—usually from a microbe called an agrobacterium—that could help it infiltrate the cells of the target plant. Biotechnologists injected these particles into the target, hoping at least one would land in a place where it would perform its intended function; if not, they kept trying. The process was quicker than conventional breeding, but still complex, scattershot, and costly.
Because agrobacteria can cause plant tumors, Kuzma explains, policymakers in the U.S. decided to regulate GMO crops under an existing law, the Plant Pest Act of 1957, which addressed dangers like imported trees infested with invasive bugs. Every GMO containing the DNA of agrobacterium or another plant pest had to be tested to see whether it behaved like a pest, and undergo a lengthy approval process. By 2010, however, new methods had been developed for creating GMOs without agrobacteria; such plants could typically be marketed without pre-approval.
Soon after that, the first gene-edited crops began appearing. If old-school genetic engineering was a shotgun, techniques like TALEN and CRISPR were a scalpel—or the search-and-replace function on a computer program. With CRISPR/Cas9, for example, an enzyme that bacteria use to recognize and chop up hostile viruses is reprogrammed to find and snip out a desired bit of a plant or other organism's DNA. The enzyme can also be used to insert a substitute gene. If a DNA sequence is simply removed, or the new gene comes from a similar species, the changes in the target plant's genotype and phenotype (its general characteristics) may be no different from those that could be produced through selective breeding. If a foreign gene is added, the plant becomes a transgenic GMO.
This development, along with the emergence of non-agrobacterium GMOs, eventually prompted the USDA to propose a tiered regulatory system for all genetically engineered crops, beginning with an initial screening for potentially hazardous metabolites or ecological impacts. (The screening was intended, in part, to guard against the "off-target effects"—stray mutations—that occasionally appear in gene-edited organisms.) If no red flags appeared, the crop would be approved; otherwise, it would be subject to further review, and possible regulation.
The plan was unveiled in January 2017, during the last week of the Obama presidency. Then, under the Trump administration, it was shelved. Although the USDA continues to promise a new set of regulations, the only hint of what they might contain has been Secretary of Agriculture Sonny Perdue's statement last March that gene-edited plants would remain unregulated if they "could otherwise have been developed through traditional breeding techniques, as long as they are not plant pests or developed using plant pests."
Because transgenic plants could not be "developed through traditional breeding techniques," this statement could be taken to mean that gene editing in which foreign DNA is introduced might actually be regulated. But because the USDA regulates conventional transgenic GMOs only if they trigger the plant-pest stipulation, experts assume gene-edited crops will face similarly limited oversight.
Meanwhile, companies are already teeing up gene-edited products for the U.S. market. An herbicide-resistant oilseed rape, developed using a proprietary technique, has been available since 2016. A cooking oil made from TALEN-tweaked soybeans, designed to have a healthier fatty-acid profile, is slated for release within the next few months. A CRISPR-edited "waxy" corn, designed with a starch profile ideal for processed foods, should be ready by 2021.
In all likelihood, none of these products will have to be tested for safety.
In the E.U., Stricter Rules Apply
Now let's look at the European Union. Since the late 1990s, explains Gregory Jaffe, director of the Project on Biotechnology at the Center for Science in the Public Interest, the EU has had a "process-based trigger" for genetically engineered products: "If you use recombinant DNA, you are going to be regulated." All foods and animal feeds must be approved and labeled if they consist of or contain more than 0.9 percent GM ingredients. (In the U.S., "disclosure" of GM ingredients is mandatory, if someone asks, but labeling is not required.) The only GM crop that can be commercially grown in EU member nations is a type of insect-resistant corn, though some countries allow imports.
European scientists helped develop gene editing, and they—along with the continent's biotech entrepreneurs—have been busy devising applications for crops. But European farmers seem more divided over the technology than their American counterparts. The main French agricultural trade union, for example, supports research into non-transgenic gene editing and its exemption from GMO regulation. But it was the country's small-farmers' union, the Confédération Paysanne, along with several allied groups, that in 2015 submitted a complaint to the ECJ, asking that all plants produced via mutagenesis—including gene editing—be regulated as GMOs.
At this point, it should be mentioned that in the past 30 years, large population studies have found no sign that consuming GM foods is harmful to human health. GMO critics can, however, point to evidence that herbicide-resistant crops have encouraged overuse of herbicides, giving rise to poison-proof "superweeds," polluting the environment with suspected carcinogens, and inadvertently killing beneficial plants. Those allegations were key to the French plaintiffs' argument that gene-edited crops might similarly do unexpected harm. (Disclosure: Leapsmag's parent company, Bayer, recently acquired Monsanto, a maker of herbicides and herbicide-resistant seeds. Also, Leaps by Bayer, an innovation initiative of Bayer and Leapsmag's direct founder, has funded a biotech startup called JoynBio that aims to reduce the amount of nitrogen fertilizer required to grow crops.)
In the end, the EU court found in the Confédération's favor on gene editing—though the court maintained the regulatory exemption for mutagenesis induced by chemicals or radiation, citing the "long safety record" of those methods.
The ruling was "scientifically nonsensical," fumes Rodolphe Barrangou, a French food scientist who pioneered CRISPR while working for DuPont in Wisconsin and is now a professor at NC State. "It's because of things like this that I'll never go back to Europe."
Nonetheless, the decision was consistent with longstanding EU policy on crops made with recombinant DNA. Given the difficulty and expense of getting such products through the continent's regulatory system, many other European researchers may wind up following Barrangou to America.
Getting to the Root of the Cultural Divide
What explains the divergence between the American and European approaches to GMOs—and, by extension, gene-edited crops? In part, Jennifer Kuzma speculates, it's that Europeans have a different attitude toward eating. "They're generally more tied to where their food comes from, where it's produced," she notes. They may also share a mistrust of government assurances on food safety, born of the region's mad cow scandals of the 1980s and '90s. In Catholic countries, consumers may have misgivings about tinkering with the machinery of life.
But the principal factor, Kuzma argues, is that European and American agriculture are structured differently. "GM's benefits have mostly been designed for large-scale industrial farming and commodity crops," she says. That kind of farming is dominant in the U.S., but not in Europe, leading to a different balance of political power. In the EU, there was less pressure on decisionmakers to approve GMOs or exempt gene-edited crops from regulation—and more pressure to adopt a GM-resistant stance.
Such dynamics may be operating in other regions as well. In China, for example, the government has long encouraged research in GMOs; a state-owned company recently acquired Syngenta, a Swiss-based multinational corporation that is a leading developer of GM and gene-edited crops. GM animal feed and cooking oil can be freely imported. Yet commercial cultivation of most GM plants remains forbidden, out of deference to popular suspicions of genetically altered food. "As a new item, society has debates and doubts on GMO techniques, which is normal," President Xi Jinping remarked in 2014. "We must be bold in studying it, [but] be cautious promoting it."
The proper balance between boldness and caution is still being worked out all over the world. Europe's process-based approach may prevent researchers from developing crops that, with a single DNA snip, could rescue millions from starvation. EU regulations will also make it harder for small entrepreneurs to challenge Big Ag with a technology that, as Barrangou puts it, "can be used affordably, quickly, scalably, by anyone, without even a graduate degree in genetics." America's product-based approach, conversely, may let crops with hidden genetic dangers escape detection. And by refusing to investigate such risks, regulators may wind up exacerbating consumers' doubts about GM and gene-edited products, rather than allaying them.
"Science...can't tell you what to regulate. That's a values-based decision."
Perhaps the solution lies in combining both approaches, and adding some flexibility and nuance to the mix. "I don't believe in regulation by the product or the process," says CSPI's Jaffe. "I think you need both." Deleting a DNA base pair to silence a gene, for example, might be less risky than inserting a foreign gene into a plant—unless the deletion enables the production of an allergen, and the transgene comes from spinach.
Kuzma calls for the creation of "cooperative governance networks" to oversee crop genome editing, similar to bodies that already help develop and enforce industry standards in fisheries, electronics, industrial cleaning products, and (not incidentally) organic agriculture. Such a network could include farmers, scientists, advocacy groups, private companies, and governmental agencies. "Safety isn't an all-or-nothing concept," Kuzma says. "Science can tell you what some of the issues are in terms of risk and benefit, but it can't tell you what to regulate. That's a values-based decision."
By drawing together a wide range of stakeholders to make such decisions, she adds, "we're more likely to anticipate future consequences, and to develop a robust approach—one that not only seems more legitimate to people, but is actually just plain old better."
Podcast: The future of brain health with Percy Griffin
Today's guest is Percy Griffin, director of scientific engagement for the Alzheimer’s Association, a nonprofit focused on speeding up research, finding better ways to detect Alzheimer’s earlier, and pursuing other approaches for reducing risk. Percy has a doctorate in molecular cell biology from Washington University and has led important research on Alzheimer’s; you can find the link to his full bio in the show notes below.
Our topic for this conversation is the present and future of the fight against dementia. Billions of dollars have been spent by the National Institutes of Health and biotechs to research new treatments for Alzheimer's and other forms of dementia, but so far there's been little to show for it. Last year, Aduhelm became the first drug to be approved by the FDA for Alzheimer’s in 20 years, but it's received a raft of bad publicity, with red flags about its effectiveness, side effects and cost.
Meanwhile, 6.5 million Americans have Alzheimer's, and this number could increase to 13 million by 2050. Listen to this conversation if you’re concerned about your own brain health or that of family members getting older, or if you’re simply concerned about the future of this country, with experts predicting that the number of people over 65 will increase dramatically in the very near future.
Listen to the Episode
Listen on Apple | Listen on Spotify | Listen on Stitcher | Listen on Amazon | Listen on Google
4:40 - We talk about the parts of Percy’s life that led him to concentrate on working in this important area.
6:20 - He defines Alzheimer's and dementia, and discusses the key elements of communicating science.
10:20 - Percy explains why the Alzheimer’s Association has been supportive of Aduhelm, even as others have been critical.
17:58 - We talk about therapeutics under development, which ones to be excited about, and how they could be tailored to a person's own biology.
24:25 - Percy discusses funding and tradeoffs between investing more money into Alzheimer’s research compared to other intractable diseases like cancer, and new opportunities to accelerate progress, such as ARPA-H, President Biden’s proposed agency to speed up health breakthroughs.
27:24 - We talk about the social determinants of brain health: the pros and cons of continuing to spend massive sums of money to develop new drugs like Aduhelm versus refocusing on policies that address social determinants, like better education, nutritious food, and safe drinking water, which have enabled some groups more than others to enjoy improved cognition late in life.
34:18 - Percy describes his top lifestyle recommendations for protecting your mind.
37:33 - Is napping bad for the brain?
39:39 - Circadian rhythm and Alzheimer's.
42:34 - What tests can people take to check their brain health today, and which biomarkers are we making progress on?
47:25 - Percy highlights important programs run by the Alzheimer’s Association to support advances.
Show links:
** After this episode was recorded, the Centers for Medicare and Medicaid Services affirmed its decision from last June to limit coverage of Aduhelm. More here.
- Percy Griffin's bio: https://www.alz.org/manh/events/alztalks/upcoming-...
- The Alzheimer's Association's Part the Cloud program: https://alz.org/partthecloud/about-us.asp
- The paradox of dementia rates decreasing: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7455342/
- The argument for focusing more resources on improving institutions and social processes for brain health: https://www.statnews.com/2021/09/23/the-brain-heal...
- Recent research on napping: https://www.ocregister.com/2022/03/25/alzheimers-s...
- The Alzheimer's Association helpline: https://www.alz.org/help-support/resources/helpline
- ALZConnected, a free online community for people affected by dementia: https://www.alzconnected.org/
- TrialMatch for people with dementia and healthy volunteers to find clinical trials for Alzheimer's and other dementia: https://www.alz.org/alzheimers-dementia/research_p...
COVID-19 prompted numerous companies to reconsider their approach to the future of work. Many leaders were reluctant to maintain hybrid and remote work options after vaccines became widely available. Yet the emergence of dangerous COVID variants such as Omicron has shown the folly of this mindset.
To mitigate the risks of new variants and other public health threats, and to satisfy the large majority of employees who, in multiple surveys, express a strong preference for a flexible hybrid or fully remote schedule, leaders are increasingly accepting that hybrid and remote options represent the future of work. No wonder a February 2022 survey by the Federal Reserve Bank of Richmond showed that more and more firms are offering hybrid and fully remote work options, and that they expect to have more remote workers and more geographically distributed workers next year.
Although hybrid and remote work mitigates public health risks, it poses a different set of concerns for employee wellbeing: the threat of proximity bias. The term refers to the damage done to work culture by inequality among office-centric, hybrid, and fully remote employees.
The difference in time spent in the office leads to concerns ranging from decreased career mobility for those who spend less facetime with their supervisor to resentment building up against the staff who have the most flexibility in where to work. In fact, a January 2022 survey by the company Slack of over 10,000 knowledge workers and their leaders shows that proximity bias is the top concern about hybrid and remote work, cited by 41% of executives.
Addressing this problem requires best practices, grounded in cognitive science, for creating a culture of “Excellence From Anywhere.” This solution is based on guidance that I developed for leaders at 17 pioneering organizations building a company culture fit for the future of work.
Protect from proximity bias via the "Excellence From Anywhere" strategy
So why haven’t firms addressed the obvious problem of proximity bias? Any reasonable external observer could predict the issues arising from differences in time spent in the office.
Unfortunately, leaders often fail to see the clear threat in front of their nose. You might have heard of black swans: low-probability, high-impact threats. The opposite kind of threat is called a gray rhino: an obvious danger that we fail to see because of our mental blindspots. The scientific name for these blindspots is cognitive biases, and they cause leaders to resist best practices in transitioning to a hybrid-first model.
Leaders can address this by focusing on a shared culture of “Excellence From Anywhere.” This term refers to a flexible organizational culture that takes into account the nature of an employee's work and promotes evaluating employees based on task completion, allowing remote work whenever possible.
Addressing Resentments Due to Proximity Bias
The “Excellence From Anywhere” strategy addresses concerns about treatment of remote workers by focusing on deliverables, regardless of where you work. Doing so also involves adopting best practices for hybrid and remote collaboration and innovation.
By valuing deliverables, collaboration, and innovation through a shared work culture of “Excellence From Anywhere,” you instill that same focus in your employees. The core idea is to get all of your workforce to pull together to achieve business outcomes: the location doesn’t matter.
This work culture addresses concerns about fairness by reframing the conversation to focus on accomplishing shared goals, rather than the method of doing so. After all, no one wants their colleagues to have to commute out of spite.
This technique appeals to the tribal aspect of our brains. We are evolutionarily adapted to living in small tribal groups of 50-150 people. Spending different amounts of time in the office splits apart the work tribe into different tribes. However, cultivating a shared focus on business outcomes helps mitigate such divisions and create a greater sense of unity, alleviating frustrations and resentments. Doing so helps improve employee emotional wellbeing and facilitates good collaboration.
Solving the facetime concerns of proximity bias
But what about facetime with the boss? Addressing this concern requires shifting from traditional, high-stakes, large-scale quarterly or even annual performance evaluations to much more frequent, brief, low-stakes evaluations: weekly or biweekly one-on-one check-ins, held in person or by videoconference.
Supervisees agree with their supervisor on three to five weekly or biweekly performance goals. Then, 72 hours before their check-in meeting, they send their boss a brief report, under a page, on how they did on these goals, what challenges they faced and how they overcame them, a quantitative self-evaluation, and proposed goals for the next period. Twenty-four hours before the meeting, the supervisor replies with a paragraph giving their initial impressions of the report.
At the one-on-one, the supervisor reinforces positive aspects of performance, coaches the supervisee on how to solve challenges better, agrees to or revises the goals for next time, and affirms or revises the performance evaluation. That performance evaluation gets fed into a continuous performance and promotion review system, which can replace or complement a more thorough annual evaluation.
This type of brief and frequent performance evaluation meeting ensures that the employee's work is integrated with the efforts of the supervisor's other reports, creating more unity in achieving business outcomes. It also mitigates concerns about facetime, since everyone gets at least some personalized attention from their team leader. More importantly, it addresses the underlying concerns about career mobility by giving all staff a clear indication of where they stand at all times. After all, knowing exactly where they stand matters far more to employees than chatting by the watercooler with the boss, and they can take proactive action if they see their standing suffer.
Such best practices help integrate employees into a work culture fit for the future of work while fostering good relationships with managers. Research shows supervisor-supervisee relationships are the most critical ones for employee wellbeing, engagement, and retention.
Conclusion
You don’t have to be the CEO to implement these techniques. Lower-level leaders of small rank-and-file teams can implement these shifts within their own teams, adapting their culture and performance evaluations. And if you are a staff member rather than a leader, send this article to your supervisor and other employees at your company: start a conversation about the benefits of addressing proximity bias using such research-based best practices.