Are the gains from gain-of-function research worth the risks?
Scientists have long argued that gain-of-function research, which can make viruses and other infectious agents more contagious or more deadly, is necessary to develop therapies and vaccines to counter those pathogens in case they are used for biological warfare. As the origins of SARS-CoV-2 are investigated, one prominent theory suggests the virus leaked from a biolab conducting gain-of-function research, sparking a global pandemic that has claimed nearly 6.9 million lives. Now some question the wisdom of engaging in this type of research, arguing that the risks may far outweigh the benefits.
“Gain-of-function research means genetically changing a genome in a way that might enhance the biological function of its genes, such as its transmissibility or the range of hosts it can infect,” says George Church, professor of genetics at Harvard Medical School. This can occur through direct genetic manipulation as well as by encouraging mutations while growing successive generations of micro-organisms in culture. “Some of these changes may impact pathogenesis in a way that is hard to anticipate in advance,” Church says.
In the wake of the global pandemic, the pros and cons of gain-of-function research are being fiercely debated. Some scientists say this type of research is vital for preventing future pandemics or preparing for bioweapon attacks. Others consider it another disaster waiting to happen. The Government Accountability Office issued a report charging that a framework developed by the U.S. Department of Health and Human Services (HHS) provided inadequate oversight of this potentially deadly research. There’s a movement to stop it altogether. In January, the Viral Gain-of-Function Research Moratorium Act (S. 81) was introduced in the Senate to halt federal research funding for institutions conducting gain-of-function studies.
While testifying before the House COVID Origins Select Committee on March 8th, Robert Redfield, former director of the U.S. Centers for Disease Control and Prevention, said that COVID-19 may have resulted from an accidental lab leak involving gain-of-function research. Redfield said his conclusion is based upon the “rapid and high infectivity for human-to-human transmission, which then predicts the rapid evolution of new variants.”
“In my opinion,” Redfield continued, “the COVID-19 pandemic presents a case study on the potential dangers of such research. While many believe that gain-of-function research is critical to get ahead of viruses by developing vaccines, in this case, I believe that was the exact opposite.” Consequently, Redfield called for a moratorium on gain-of-function research until there is consensus about the value of such risky science.
What constitutes risky?
The Federal Select Agent Program lists 68 specific infectious agents as risky because they are either very contagious or very deadly. To work with these 68 agents, scientists must register with the federal government. Meanwhile, research on deadly pathogens that aren’t easily transmitted, or on pathogens that are quite contagious but not deadly, can be conducted without such oversight. “If you’re not working with select agents, you’re not required to register the research with the federal government,” says Gerald Parker, associate dean for Global One Health at Texas A&M University. But the 68-agent list may not include everything that could become dangerous, or be engineered to become dangerous, so some risky work escapes the government’s scrutiny, an issue that new regulations aim to address.
In January 2017, the White House Office of Science and Technology Policy (OSTP) issued additional guidance. It required federal departments and agencies to follow a series of steps when reviewing proposed research that could create, transfer, or use potential pandemic pathogens resulting from the enhancement of a pathogen’s transmissibility or virulence in humans.
In defining risky pathogens, OSTP included viruses that were likely to be highly transmissible and highly virulent, and thus very deadly. The Proposed Biosecurity Oversight Framework for the Future of Science, outlined in 2023, broadened the scope to require federal review of research “that is reasonably anticipated to enhance the transmissibility and/or virulence of any pathogen” likely to pose a threat to public health, health systems or national security. The framework also covers experiments that could enhance a pathogen’s ability to evade vaccines or therapeutics, or to escape diagnostic detection.
However, Parker says the chance of generating a pandemic-level germ is tiny. “It is a very, very, very small subset of life science research that could potentially generate a potential pandemic pathogen.” Since gain-of-function guidelines were first issued in 2017, only three such research projects, all aimed at studying influenza and bird flu, have met the requirements for HHS review. Only two of those projects were funded, according to the NIH Office of Science Policy. For context, NIH funded approximately 11,000 of the 54,000 grant applications it received in 2022.
Guidelines governing gain-of-function research are being strengthened, but Church points out they aren’t ideal yet. “They need to be much clearer about penalties and avoiding positive uses before they would be enforceable.”
What do we gain from gain-of-function research?
The most commonly cited reason to conduct gain-of-function research is for biodefense—the government’s ability to deal with organisms that may pose threats to public health.
“The need to work with potentially dangerous viruses is central to our preparedness,” Parker says. “It’s essential that we know and understand the basic biology, microbiology, etc. of some of these dangerous pathogens.” That includes increasing our knowledge of the molecular mechanisms by which a virus could become a sustained threat to humans. “Knowing that could help us detect [risks] earlier,” Parker says—and could make it possible to have medical countermeasures, like vaccines and therapeutics, ready.
Most vaccines, however, aren’t affected by this type of research. Essentially, scientists hope they will never need to use it. Moreover, Paul Mango, former HHS deputy chief of staff for policy and author of the 2022 book Warp Speed, says he believes that in the era of mRNA vaccines, the advance preparedness argument may be even less relevant. “That’s because these vaccines can be developed and produced in less than 12 months, unlike traditional vaccines that require years of development,” he says.
Can better oversight guarantee safety?
Another situation, which Parker calls unnecessarily dangerous, is when regulatory bodies cannot verify that the appropriate biosafety and biosecurity controls are in place.
Gain-of-function studies, Parker points out, are conducted at the basic research level, and they’re performed in high-containment labs. “As long as all the processes, procedures and protocols are followed and there’s appropriate oversight at the institutional and scientific level, it can be conducted safely.”
Globally, there are 69 Biosafety Level 4 (BSL4) labs operating, under construction or being planned, according to recent research from King’s College London and George Mason University for Global BioLabs. Eleven of the 18 facilities that are planned or under construction are in Asia. Overall, three-quarters of the BSL4 labs are in cities, increasing public health risks if leaks occur.
Researchers say they are confident in the oversight system for BSL4 labs within the U.S. They are less confident in international labs. Global BioLabs’ report concurs. It gives the highest scores for biosafety to industrialized nations, led by France, Australia, Canada, the U.S. and Japan, and the lowest scores to Saudi Arabia, India and some developing African nations. Scores for biosecurity followed similar patterns.
“There are no harmonized international biosafety and biosecurity standards,” Parker notes. That issue has been discussed for at least a decade. Now, in the wake of SARS and the COVID-19 pandemic, scientists and regulators are likely to push for unified oversight standards. “It’s time we got serious about international harmonization of biosafety and biosecurity standards and guidelines,” Parker says. New guidelines are being worked on. The National Science Advisory Board for Biosecurity (NSABB) outlined its proposed recommendations in the document titled Proposed Biosecurity Oversight Framework for the Future of Science.
The debates about whether gain-of-function research is useful or poses unnecessary risks to humanity are likely to rage on for a while. The public too has a voice in this debate and should weigh in by communicating with their representatives in government, or by partaking in educational forums or initiatives offered by universities and other institutions. In the meantime, scientists should focus on improving the research regulations, Parker notes. “We need to continue to look for lessons learned and for gaps in our oversight system,” he says. “That’s what we need to do right now.”
Awash in a fluid finely calibrated to keep it alive, a human eye rests inside a transparent cubic device. This ECaBox, or Eyes in a Care Box, is a one-of-a-kind system built by scientists at Barcelona’s Centre for Genomic Regulation (CRG). Their goal is to preserve human eyes for transplantation and related research.
In recent years, scientists have learned to transplant delicate organs such as the liver, lungs or pancreas, but eyes are another story. Even when preserved at the standard transplant temperature of 4 degrees Celsius, they last 48 hours at most. That's one reason why transplanting the whole eye isn’t possible—only the cornea, the dome-shaped outer layer of the eye, can withstand the procedure. The retina, the layer at the back of the eyeball that turns light into electrical signals, which the brain converts into images, is extremely difficult to transplant because it's packed with nerve tissue and blood vessels.
These challenges also make it tough to research transplantation. “This greatly limits their use for experiments, particularly when it comes to the effectiveness of new drugs and treatments,” said Maria Pia Cosma, a biologist at CRG whose team is working on the ECaBox.
Eye transplants are desperately needed, but they're nowhere in sight. About 12.7 million people worldwide need a corneal transplant, yet only one in 70 of those who require one gets it. The shortfall is global. Eye banks in the United Kingdom run around 20 percent below the level needed to supply hospitals, while Indian eye banks, which need at least 250,000 corneas per year, collect only around 45,000 to 50,000 donor corneas (and of those, 60 to 70 percent are successfully transplanted).
As for retinas, it's currently impossible to transplant one into another person's eye. Artificial devices can be implanted to restore the sight of patients suffering from severe retinal diseases, but fewer than 600 people around the world have such “bionic eyes,” while in America alone 11 million people have some type of retinal disease leading to severe vision loss. Add to this an increasingly aging population, commonly facing various vision impairments, and you have a recipe for heavy burdens on individuals, the economy and society. In the U.S. alone, the total annual economic impact of vision problems was $51.4 billion in 2017.
Even growing tissues in a petri dish into organoids that mimic the function of the human eye won't reproduce the physiological complexity of the structure and metabolism of the real thing, according to Cosma. She is a member of a scientific consortium that includes researchers from major institutions in Spain, the U.K., Portugal, Italy and Israel. The consortium has received about $3.8 million from the European Union to pursue innovative eye research. Her team’s goal is to give hope to the at least 2.2 billion people across the world afflicted with a vision impairment and the 33 million who go through life with avoidable blindness.
Their method? Resuscitating cadaveric eyes for at least a month.
“We proposed to resuscitate eyes, that is to restore the global physiology and function of human explanted tissues,” Cosma said, referring to living tissues extracted from the eye and placed in a medium for culture. Their ECaBox is an ex vivo biological system, in which eyes taken from dead donors are placed in an artificial environment, designed to preserve the eye’s temperature and pH levels, deter blood clots, and remove the metabolic waste and toxins that would otherwise spell their demise.
Scientists work on resuscitating eyes in the lab of Maria Pia Cosma. (Courtesy of Maria Pia Cosma)
“One of the great challenges is the passage of the blood in the capillary branches of the eye, what we call long-term perfusion,” Cosma said. Capillaries are an intricate network of very thin blood vessels that transport blood, nutrients and oxygen to cells in the body’s organs and systems. To maintain the garland-shaped structure of this network, sufficient amounts of oxygen and nutrients must be provided through the eye’s circulation and microcirculation. “Our ambition is to combine perfusion of the vessels with artificial blood,” along with using a synthetic form of vitreous, or the gel-like fluid that lets in light and supports the eye's round shape, Cosma said.
The scientists use this novel setup, with the eye submerged in its medium, to keep the organ viable so they can test retinal function. “If we succeed, we will ensure full functionality of a human organ ex vivo. It will be the first intact human model of the eye capable of exploring and analyzing regenerative processes ex vivo,” Cosma added.
A rapidly developing field of regenerative medicine aims to stimulate the body's natural healing processes and restore or replace damaged tissues and organs. But for people with retinal diseases, regenerative medicine progress has been painfully slow. “Experiments on rodents show progress, but the risks for humans are unacceptable,” Cosma said.
The ECaBox could boost that progress by letting researchers experiment on intact human eyes without putting patients at risk. “We will test emerging treatments while reducing animal research, and greatly accelerate the discovery and preclinical research phase of new possible treatments for vision loss at significantly reduced costs,” Cosma explained. Much less time and money would be wasted during the drug discovery process. Their work may even make it possible to transplant the entire eyeball for those who need it.
“It is a very exciting project,” said Sanjay Sharma, a professor of ophthalmology and epidemiology at Queen's University, in Kingston, Canada. “The ability to explore and monitor regenerative interventions will increasingly be of importance as we develop therapies that can regenerate ocular tissues, including the retina.”
But is the world ready for eye transplants? “People are a bit weird or very emotional about donating their eyes as compared to other organs,” Cosma said. Several factors feed the eye donor shortage. Concerns include disfigurement, as well as healthcare professionals’ fear that raising the subject of eye donation will upset the deceased person’s relatives because of cultural or religious considerations. As just one example, Sharma noted the paucity of eye donations in his home country, Canada.
Yet, experts like Sharma stress the importance of these donations for both the recipients and their family members. “It allows them some psychological benefit in a very difficult time,” he said. So why are global eye banks suffering? Is it because the eyes are the windows to the soul?
Seemingly, there's no sacred religious text or holy book prohibiting the practice of eye donation. In fact, most major religions of the world permit and support organ transplantation and donation, and by extension eye donation, because they unequivocally see it as an “act of neighborly love and charity.” In Hinduism, the concept of eye donation aligns with the Hindu principle of daan, or selfless giving, where individuals donate their organs or body after death to benefit others and contribute to society. In Islam, eye donation is a form of sadaqah jariyah, a perpetual charity, as it can continue to benefit others even after the donor's death.
Meanwhile, Buddhist masters teach that donating an organ gives another person the chance to live longer and practice dharma, the universal law and order, more meaningfully; they also dismiss misunderstandings of the type “if you donate an eye, you’ll be born without an eye in the next birth.” And Christian teachings emphasize the values of love, compassion and selflessness, all compatible with organ donation, eye donation included; besides, those who will have a home in heaven are promised a whole new body, free of imperfections and limitations.
The explanation for people’s resistance may lie in what Deepak Sarma, a professor of Indian religions and philosophy at Case Western Reserve University in Cleveland, calls “street interpretation” of religious or spiritual dogmas. Consider the mechanism of karma, which is about the causal relation between previous and current actions. “Maybe some Hindus believe there is karma in the eyes and, if the eye gets transplanted into another person, they will have to have that karmic card from now on,” Sarma said. “Even if there is peculiar karma due to an untimely death–which might be interpreted by some as bad karma–then you have the karma of the recipient, which is tremendously good karma, because they have access to these body parts, a tremendous gift,” Sarma said. The overall accumulation is that of good karma: “It’s a beautiful kind of balance,” Sarma said.
With that said, Sarma believes it is a fallacy to personify or anthropomorphize the eye, which doesn’t have a soul, and stresses that the karma attaches itself to the soul and not the body parts. But for scholars like Omar Sultan Haque—a psychiatrist and social scientist at Harvard Medical School, investigating questions across global health, anthropology, social psychology, and bioethics—the hierarchy of sacredness of body parts is entrenched in human psychology. You cannot equate the pinky toe with the face, he explained.
“The eyes are the window to the soul,” Haque said. “People have a hierarchy of body parts that are considered more sacred or essential to the self or soul, such as the eyes, face, and brain.” In his view, the techno-utopian transhumanist communities (especially those in Silicon Valley) have reduced the totality of a person to a mere material object, a “wet robot” that knows no sacredness or hierarchy of human body parts. “But for the Jews, Christians, and Muslims who believe in the physical resurrection of the body that will be made new in an afterlife, the [already existing] body is sacred since it will be the basis of a new refashioned body in an afterlife,” Haque said. “You cannot treat the body like any old material artifact, or old chair or ragged cloth, just because materialistic, secular ideologies want so,” he continued.
For Cosma and her peers, however, the very definition of what is alive or not is a bit semantic. “As soon as we die, the electrophysiological activity in the eye stops,” she said. “The goal of the project is to restore this activity as soon as possible before the highly complex tissue of the eye starts degrading.” Cosma’s group doesn’t yet know when they will be able to keep eyes alive and well in the ECaBox, but the consensus is that the sooner the better. Hopefully, the taboos and fears around eye donation will dissipate around the same time.
As Our AI Systems Get Better, So Must We
As the power and capability of our AI systems increase by the day, the essential question we now face is what constitutes peak human. If we stay where we are while the AI systems we are unleashing continually get better, they will meet and then exceed our capabilities in an ever-growing number of domains. But while some technology visionaries like Elon Musk call for us to slow down the development of AI systems to buy time, this approach alone will simply not work in our hyper-competitive world, particularly when the potential benefits of AI are so great and our frameworks for global governance are so weak. In order to build the future we want, we must also become ever better humans.
The list of activities we once saw as uniquely human where AIs have now surpassed us is long and growing. First, AI systems could beat our best chess players, then our best Go players, then our best champions of multi-player poker. They can see patterns far better than we can, generate medical and other hypotheses most human specialists miss, predict and map out new cellular structures, and even generate beautiful, and, yes, creative, art.
A recent paper by Microsoft researchers analyzing the significant leap in capabilities in OpenAI’s latest model, GPT-4, asserted that the algorithm can “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting.” Calling this functionality “strikingly close to human-level performance,” the authors conclude it “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
The concept of AGI has been around for decades. In its common use, the term suggests a time when individual machines can do many different things at a human level, not just one thing like playing Go or analyzing radiological images. Debating when AGI might arrive, a favorite pastime of computer scientists for years, now has become outdated.
We already have AI algorithms and chatbots that can do lots of different things. Based on the generalist definition, in other words, AGI is essentially already here.
Unfettered by the evolved capacity and storage constraints of our brains, AI algorithms can access nearly all of the digitized cultural inheritance of humanity since the dawn of recorded history and have increasing access to growing pools of digitized biological data from across the spectrum of life.
With these ever-larger datasets, rapidly increasing computing and memory power, and new and better algorithms, our AI systems will keep getting better faster than most of us can today imagine. These capabilities have the potential to help us radically improve our healthcare, agriculture, and manufacturing, make our economies more productive and our development more sustainable, and do many important things better.
Soon, they will learn how to write their own code. Like human children, in other words, AI systems will grow up. But even that doesn’t mean our human goose is cooked.
Just as the intelligence of dolphins and dogs is uniquely theirs, these alternate forms of intelligence will be uniquely their own, not a lesser or greater version of ours. There are lots of things AI systems can't do and will never be able to do because our AI algorithms, for better and for worse, will never be human. Our embodied human intelligence is its own thing.
Our human intelligence is uniquely ours, based on the capacities we have developed in our 3.8-billion-year journey from single-celled organisms to us. Our brains and bodies represent continuous adaptations of earlier models, which is why our skeletal systems look like those of lizards and our brains like those of most other mammals, with some extra cerebral cortex mixed in. Human intelligence isn’t just some type of disembodied function but the inextricable manifestation of our evolved physical reality. It includes our sensory analytical skills and all of our animal instincts, intuitions, drives, and perceptions. Disembodied machine intelligence is something different from what we have evolved and possess.
Because of this, some linguists including Noam Chomsky have recently argued that AI systems will never be intelligent as long as they are just manipulating symbols and mathematical tokens without any inherent understanding. Nothing could be further from the truth. Anyone interacting with even first-generation AI chatbots quickly realizes that while these systems are far from perfect or omniscient and can sometimes be stupendously oblivious, they are surprisingly smart and versatile and will get more so… forever. We have little idea even how our own minds work, so judging AI systems based on their output is relatively close to how we evaluate ourselves.
Anyone not awed by the potential of these AI systems is missing the point. AI’s newfound capacities demand that we work urgently to establish norms, standards, and regulations at all levels from local to global to manage the very real risks. Pausing our development of AI systems now doesn’t make sense, however, even if it were possible, because we have no sufficient ways of uniformly enacting such a pause, no plan for how we would use the time, and no common framework for addressing global collective challenges like this.
But if all we feel is a passive awe for these new capabilities, we will also be missing the point.
Human evolution, biology, and cultural history are not just some kind of accidental legacy, disability, or parlor trick, but our inherent superpower. Our ancestors outcompeted rivals for billions of years to make us so well suited to the world we inhabit and helped build. Our social organization at scale has made it possible for us to forge civilizations of immense complexity, engineer biology and novel intelligence, and extend our reach to the stars. Our messy, embodied, intuitive, social human intelligence is roughly mimicable by AI systems but, by definition, never fully replicable by them.
Once we recognize that both AI systems and humans have unique superpowers, the essential question becomes what each of us can do better than the other and what humans and AIs can best do in active collaboration. We still don't know. The future of our species will depend upon our ability to safely, dynamically, and continually figure that out.
As we do, we'll learn that many of our ideas and actions are made up of parts, some of which will prove essentially human and some of which can be better achieved by AI systems. The champions of the 21st century, in every walk of work and life and across all fields, will be those who most successfully identify the optimal contributions of humans, of AIs, and of the two together, and who build systems and workflows that empower humans to do human things, machines to do machine things, and humans and machines to work together in ways that maximize the respective strengths of each.
The dawn of the age of machine intelligence is upon us. It’s a quantum leap equivalent to the domestication of plants and animals, industrialization, electrification, and computing. Each of these revolutions forced us to rethink what it means to be human, how we live, and how we organize ourselves. The AI revolution will happen more suddenly than these earlier transformations but will follow the same general trajectory. Now is the time to aggressively prepare for what is fast heading our way, including by active public engagement, governance, and regulation.
AI systems will not replace us, but, like these earlier technology-driven revolutions, they will force us to become different humans as we co-evolve with our technology. We will never reach peak human in our ongoing evolutionary journey, but we’ve got to manage this transition wisely to build the type of future we’d like to inhabit.
Alongside our ascending AIs, we humans still have a lot of climbing to do.