6 Biotech Breakthroughs of 2021 That Missed the Attention They Deserved
News about COVID-19 continues to relentlessly dominate as Omicron surges around the globe. Yet somehow, during the pandemic’s exhausting twists and turns, progress in other areas of health and biotech has marched on.
In some cases, these innovations have occurred despite a broad reallocation of resources to address the COVID crisis. For other breakthroughs, COVID served as the forcing function, pushing scientists and medical providers to rethink key aspects of healthcare, including how cancer, Alzheimer’s and other diseases are studied, diagnosed and treated. Regardless of why they happened, many of these advances didn’t make the headlines of major media outlets, even when they represented turning points in overcoming our toughest health challenges.
If it bleeds, it leads—and many disturbing stories, such as COVID surges, deserve top billing. Too often, though, mainstream media’s parallel strategy seems to be: if it innovates, it fades to the background. But our breakthroughs are just as critical to understanding the state of the world as our setbacks. I asked six pragmatic yet forward-thinking experts on health and biotech for their perspectives on the most important, but under-appreciated, breakthrough of 2021.
Their descriptions, below, were lightly edited by Leaps.org for style and format.
New Alzheimer's Therapies
Mary Carrillo, Chief Science Officer at the Alzheimer’s Association
One of the biggest health stories of 2021 was the FDA’s accelerated approval of aducanumab, the first drug that treats the underlying biology of Alzheimer’s, not just the symptoms. But Alzheimer’s is a complex disease and will likely need multiple treatment strategies that target various aspects of the disease. It’s been exciting to see many of these types of therapies advance in 2021.
Following the FDA action in June, we saw renewed excitement in this class of disease-modifying drugs that target beta-amyloid, a protein that accumulates in the brain and leads to brain cell death. This class includes drugs from Eli Lilly (donanemab), Eisai (lecanemab) and Roche (gantenerumab), all of which received Breakthrough Therapy designation from the FDA in 2021, advancing the drugs more quickly through the approval process.
This year we’ve also seen treatments advance that target other hallmarks of Alzheimer’s. We heard topline results from a phase 2 trial of semorinemab, a drug that targets tau tangles, toxic protein aggregates that destroy neurons in the Alzheimer’s brain. Plus, strategies targeting neuroinflammation, protecting brain cells, and reducing vascular contributions to dementia – all funded through the Alzheimer's Association Part the Cloud program – advanced into clinical trials.
The future of Alzheimer’s treatment will likely be combination therapy, including drug therapies and healthy lifestyle changes, similar to how we treat heart disease. Washington University announced they will be testing a combination of both anti-amyloid and anti-tau drugs in a first-of-its-kind clinical trial, with funding from the Alzheimer’s Association.
AlphaFold
Olivier Elemento, Director of the Caryl and Israel Englander Institute for Precision Medicine at Cornell University
AlphaFold is an artificial intelligence system designed by Google’s DeepMind that opens the door to understanding the three-dimensional structures and functions of proteins, the building blocks that make up almost half of our bodies' dry weight. In 2021, DeepMind made AlphaFold’s code and predictions freely available, and since then researchers have used it to drive a greater understanding of how proteins interact. This is a foundational event in the field of biotech.
It’s going to take time for the benefits of AlphaFold to materialize, but once we know the 3-D structures of proteins that cause various diseases, it will be much easier to design new drugs that bind to these proteins and change their activity. Prior to AlphaFold, scientists had determined the 3-D structures of just 17 percent of the roughly 20,000 proteins in the human body, partly because mapping them experimentally was extremely difficult and expensive. Thanks to AlphaFold, we now know, with at least some degree of certainty, the structures of 98.5 percent of the proteome.
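To make the scale of that shift concrete, the database DeepMind released alongside AlphaFold can be queried programmatically. The short sketch below is an illustration, not part of any study described here; it assumes the AlphaFold Protein Structure Database’s documented file-naming convention and its practice of storing the per-residue confidence score (pLDDT) in the B-factor field of each ATOM record.

```python
# Sketch: download an AlphaFold-predicted structure and summarize its
# confidence. Assumes the AlphaFold DB file-naming convention
# (AF-<UniProt accession>-F1-model_v4.pdb) and that pLDDT (0-100) is
# written into the B-factor field of every ATOM record.
import urllib.request

def fetch_alphafold_pdb(uniprot_id: str) -> str:
    url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

def mean_plddt(pdb_text: str) -> float:
    # The B-factor field occupies columns 61-66 of each ATOM record.
    scores = [float(line[60:66]) for line in pdb_text.splitlines()
              if line.startswith("ATOM")]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    pdb = fetch_alphafold_pdb("P69905")  # human hemoglobin subunit alpha
    print(f"Mean pLDDT: {mean_plddt(pdb):.1f}")
```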
For example, kinases are a class of proteins that modify other proteins and are often aberrantly active in cancer due to DNA mutations. Some of the earliest targeted cancer therapies were kinase blockers but, before AlphaFold, we had structural insight into only a few hundred kinases. We can now determine the structures of all 1,500 kinases. This opens up a universe of drug targets we didn’t have before.
Additional progress has been made this year toward potentially using AlphaFold to develop blockers of certain protein receptors that contribute to psychiatric illnesses and other neurological diseases. And in July, scientists used AlphaFold to map the dimensions of a bacterial protein that may be key to countering antibiotic resistance. Another discovery in May could be essential to finding treatments for COVID-19. Ongoing research is using AlphaFold principles to create entirely new proteins from scratch that could have therapeutic uses. The AlphaFold revolution is just beginning.
Virtual First Care
Jennifer Goldsack, CEO of Digital Medicine Society
Imagine a new paradigm of healthcare defined by how good we are at keeping people healthy and out of the clinic, not how good we are at offering services to a sick person at the clinic. That is the promise of virtual-first care, or V1C, what I consider to be the greatest, and most underappreciated, advance that occurred in medicine this year.
V1C is defined as medical care accessed through digital interactions where possible, guided by a clinician, and integrated into a person’s everyday life. This type of care includes mailing spit kits for laboratory tests and replacing in-person exams with biometric sensors. It’s built around the patient, not the clinic, and gives us the opportunity to fundamentally reimagine what good healthcare looks like.
V1C flew under the radar in 2021, eclipsed by the ongoing debate about the value of telehealth more broadly as we emerge from the pandemic. Yet the growth in the number of specialty and primary care virtual-first providers has been matched only by the growth in national health plans offering virtual-first coverage. Our own virtual-first community, IMPACT, has tripled in size, mirroring the rapid growth of a field driven by patient demand for care on their terms.
V1C differs from the ‘bolt on’ approach of video visits as an add-on to traditional visit-based, episodic care. V1C takes a much more holistic approach; it allows individuals to initiate care at any time in any place, recognizing that healthcare needs extend beyond 9-5. It matches the care setting with each individual’s clinical needs and personal preferences, advancing a thorough, evidence-based, safe practice while protecting privacy and recognizing that patients’ expectations have changed following the pandemic. V1C puts the promise of digital health into practice. This is the blueprint for what good healthcare looks like in the digital era.
Digital Clinical Trials
Craig Lipset, Founder of Clinical Innovation Partners and former Head of Clinical Innovation at Pfizer
In 2021, a number of digital- and data-enabled approaches have sustained decentralized clinical trials around the world for many different disease types. Pharma companies and clinical researchers are enthusiastic about this development for good reason. Throughout the pandemic, these decentralized trials have allowed patients to continue in studies with a reduced need for site visits, without compromising their safety or data quality.
Risk-based monitoring was deployed using data and thoughtful algorithms to identify quality and safety issues without relying entirely on human monitors visiting research sites. Some trials used digital measures to ensure high quality data on target health outcomes that could be captured in ways that made the participants’ physical location irrelevant. More than three-quarters of research organizations, such as pharma and biotech, have accelerated their decentralized clinical trial strategies. Before COVID-19, 72 percent of trial sites “rarely or never” used telemedicine for trial participants; during COVID, 64 percent “sometimes, often or always” do.
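As a rough illustration of what risk-based monitoring means in practice, the sketch below flags trial sites whose adverse-event reporting rate is a statistical outlier relative to their peers, so human monitors can prioritize follow-up. The site data, threshold, and logic are invented for the example; no sponsor’s actual algorithm is shown.

```python
# Hypothetical sketch of risk-based monitoring: flag trial sites whose
# adverse-event (AE) reporting rate is an outlier versus peer sites, so
# monitors can target follow-up. All data and thresholds are invented.
from statistics import mean, stdev

sites = {  # site id -> (enrolled participants, adverse events reported)
    "site_01": (40, 6), "site_02": (35, 5), "site_03": (50, 7),
    "site_04": (45, 0),   # suspiciously low: possible under-reporting
    "site_05": (38, 21),  # suspiciously high: possible safety signal
}

rates = {s: ae / n for s, (n, ae) in sites.items()}
mu, sigma = mean(rates.values()), stdev(rates.values())

for site, rate in rates.items():
    z = (rate - mu) / sigma
    if abs(z) > 1.5:  # illustrative cutoff, not a regulatory standard
        print(f"{site}: AE rate {rate:.2f} (z = {z:+.1f}) -> prioritize review")
```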
While the research community does appreciate the tremendous hope and promise brought by these innovations, what has perhaps been under-appreciated is the culture shift toward thoughtful risk-taking and a willingness to embrace and adopt clinical trial innovations. These solutions existed before COVID, but the pandemic shifted the perception of risks versus benefits. If there is one under-appreciated breakthrough in life sciences clinical research today, it’s this new culture of willingness and receptivity, and its power to outlast the pandemic. The greatest loss to the research ecosystem would be to squander the momentum behind these trial innovations and have to wait for another global pandemic to see it again.
Designing Biology
Sudip Parikh, CEO of the American Association for the Advancement of Science and Executive Publisher of the Science family of journals
As our understanding of basic biology has grown, we are fast approaching an era where it will be possible to design and direct biological machinery to create treatments, medicine, and materials. 2021 saw many breakthroughs in this area, three of which are listed below.
The understanding of the human microbiome is growing, as is our ability to modify it. One example is the movement toward the notion of the “bug as the drug.” In June, scientists at Brigham and Women’s Hospital published a paper showing that they had genetically engineered yeast, using CRISPR/Cas9, to sense and treat inflammation in the gut, relieving symptoms of inflammatory bowel disease in mice. This approach could potentially be used to modify the microbiome to treat other chronic conditions.
Another way in which we saw the application of basic biology discoveries to real world problems in 2021 is through groundbreaking research on synthetic biology. Several institutions and companies are pursuing this path. Ginkgo Bioworks, valued at $15 billion, already claims to engineer cells with assembly-line efficiency. Imagine the possibilities of programming cells and tissue to perform chemistry for the manufacturing process, inspired by the way your body does chemistry. That could mean cleaner, more controllable, and affordable ways to manufacture food, therapeutics, and other materials in a factory-like setting.
A final example: consider the possibility of leveraging the mechanics of your own body to deliver proteins as treatments, vaccines, and more. In 2021, several scientists accelerated research to apply the mRNA technology underlying COVID-19 vaccines to make and replace proteins that, when they’re missing or don’t work, cause rare conditions such as cystic fibrosis and multiple sclerosis.
These applications of basic biology to solve real world problems are exciting on their own, but their convergence with incredible advances in computing, materials, and drug delivery hold the promise of game-changing progress in health care and beyond.
Brain Biomarkers
David R. Walt, Professor of Biologically Inspired Engineering, Harvard Medical School, Brigham and Women’s Hospital, Wyss Institute at Harvard University
2021 brought the first real hope for identifying biomarkers that can predict neurodegenerative disease. Multiple biomarkers (measurable indicators of the presence or severity of disease) were identified that can diagnose disease and correlate with its progression. Some of these biomarkers were detected in cerebrospinal fluid (CSF), but others were measured directly in blood by examining precursors of protein fibers.
The blood-brain barrier prevents many biomolecules from both exiting and entering the brain, so it has been a longstanding challenge to detect and identify biomarkers that signal changes in brain chemistry due to neurodegenerative disease. With the advent of omics-based approaches (an emerging field that encompasses genomics, epigenomics, transcriptomics, proteomics, and metabolomics), coupled with new ultrasensitive analytical methods, researchers are beginning to identify informative brain biomarkers. Such biomarkers herald our ability to detect earlier stages of disease, when therapeutic intervention could be effective at halting progression.
In addition, these biomarkers should enable drug developers to monitor the efficacy of candidate drugs in the blood of participants enrolled in clinical trials aimed at slowing neurodegeneration. These biomarkers begin to move us away from relying on cognitive performance indicators and imaging—methods that do not directly measure the underlying biology of neurodegenerative disease. The identity of these biomarkers may also provide researchers with clues about the causes of neurodegenerative disease, which can serve as new targets for drug intervention.
Autonomous, indoor farming gives a boost to crops
The glass-encased cabinet looks like a display meant to hold reasonably priced watches, or drugstore beauty creams shipped from France. But instead of this stagnant merchandise, each of its five shelves is overgrown with leaves — moss-soft pea sprouts, spikes of Lolla rosa lettuces, pale bok choy, dark kale, purple basil or red-veined sorrel or green wisps of dill. The glass structure isn’t a cabinet, but rather a “micro farm.”
The gadget is on display at the Richmond, Virginia headquarters of Babylon Micro-Farms, a company that aims to make indoor farming in the U.S. more accessible and sustainable. Babylon’s soilless hydroponic growing system, which feeds plants via nutrient-enriched water, allows chefs on cruise ships, cafeterias and elsewhere to provide home-grown produce to patrons, just seconds after it’s harvested. Currently, there are over 200 functioning systems, either sold or leased to customers, and more of them are on the way.
The chef-farmers choose from among 45 types of herb and leafy-greens seeds, plop them into grow trays, and a few weeks later they pick and serve. While success is predicated on at least a small amount of these humans’ care, the systems are autonomously surveilled round-the-clock from Babylon’s base of operations. And artificial intelligence is helping to run the show.
Imagine consistently perfect greens and tomatoes and strawberries, grown hyper-locally, using less water, without chemicals or environmental contaminants. This is the hefty promise of controlled environment agriculture (CEA) — basically, indoor farms that can be hydroponic, aeroponic (plant roots are suspended and fed through misting), or aquaponic (where fish play a role in fertilizing vegetables). But whether they grow 4,160 leafy-green servings per year, like one Babylon farm, or millions of servings, like some of the large, centralized facilities starting to supply supermarkets across the U.S., they seek to minimize failure as much as possible.
Here, AI is starting to play a pivotal role. CEA growers use it to help “make sense of what’s happening” to the plants in their care, says Scott Lowman, vice president of applied research at the Institute for Advanced Learning and Research (IALR) in Virginia, a state that’s investing heavily in CEA companies. And although these companies say they’re not aiming for a future with zero human employees, AI is certainly poised to take a lot of human farming intervention out of the equation — for better and worse.
Most of these companies are compiling their own data sets to identify anything that might block the success of their systems. Babylon had already integrated sensor data into its farms to measure heat and humidity, the nutrient content of water, and the amount of light plants receive. Last year, they got a National Science Foundation grant that allowed them to pilot the use of specialized cameras that take pictures in different spectrums to gather some less-obvious visual data about plants’ wellbeing and alert people if something seems off. “Will this plant be healthy tomorrow? Are there things…that the human eye can't see that the plant starts expressing?” says Amandeep Ratte, the company’s head of data science. “If our system can say, Hey, this plant is unhealthy, we can reach out to [users] preemptively about what they’re doing wrong, or is there a disease at the farm?” Ratte says. The earlier the better, to avoid crop failures.
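Babylon hasn’t published the details of its image analysis, but a common starting point for this kind of multispectral plant monitoring is a vegetation index such as NDVI, which compares near-infrared and red reflectance (healthy leaves reflect strongly in the near-infrared). Below is a minimal sketch of that general idea; the threshold and the simulated data are invented for illustration, not Babylon’s actual pipeline.

```python
# Illustrative sketch of multispectral plant-health monitoring using NDVI
# (normalized difference vegetation index). The bands, threshold, and
# alerting logic here are assumptions made for the example.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # NDVI = (NIR - Red) / (NIR + Red); values near +1 suggest healthy leaves.
    return (nir - red) / (nir + red + 1e-9)

def tray_is_stressed(nir: np.ndarray, red: np.ndarray,
                     threshold: float = 0.4) -> bool:
    """Return True if the tray's mean NDVI falls below an alert threshold."""
    return float(ndvi(nir, red).mean()) < threshold

# Simulated 64x64 reflectance images for one grow tray.
rng = np.random.default_rng(0)
nir = rng.uniform(0.5, 0.8, (64, 64))
red = rng.uniform(0.05, 0.15, (64, 64))

if tray_is_stressed(nir, red):
    print("Alert: tray NDVI below threshold; notify the on-site grower.")
else:
    print("Tray looks healthy.")
```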
IALR’s Lowman says that other CEA companies are developing their AI systems to account for the different crops they grow — lettuces come in all shapes and sizes, after all, and each has different growing needs than, for example, tomatoes. The ways they run their operations differ as well. Babylon is unusual in its decentralized structure. But centralized growing systems with one main location have variabilities, too. AeroFarms, which recently declared bankruptcy but will continue to run its 140,000-square-foot vertical operation in Danville, Virginia, is entirely enclosed and reliant on the intense violet glow of grow lights to produce microgreens.
Different companies have different data needs. What data is essential to AeroFarms isn’t quite the same as for Greenswell Growers, located in Goochland County, Virginia. At Greenswell, which raises four kinds of lettuce in a 77,000-square-foot automated hydroponic greenhouse, operations are shaped by the vagaries of naturally available light, which accounts for 70 percent of the company’s energy use on a sunny day. Their tech needs to account for “outside weather impacts,” says president Carl Gupton. “What adjustments do we have to make inside of the greenhouse to offset what's going on outside environmentally, to give that plant optimal conditions? When it's 85 percent humidity outside, the system needs to do X, Y and Z to get the conditions that we want inside.”
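Gupton’s “X, Y and Z” maps naturally onto a rule-based control loop: read outside conditions, then nudge indoor actuators toward target setpoints. The sketch below is purely hypothetical; the setpoints and actuator names are invented for illustration and are not Greenswell’s actual control logic.

```python
# Hypothetical rule-based greenhouse controller: read outside weather,
# then plan indoor adjustments toward target growing conditions.
# Setpoints and actuator choices are invented for illustration.
from dataclasses import dataclass

@dataclass
class OutsideWeather:
    humidity_pct: float
    temp_c: float
    light_w_m2: float  # incoming solar radiation

def plan_adjustments(w: OutsideWeather) -> list:
    actions = []
    if w.humidity_pct > 80:
        actions.append("increase dehumidification / ventilation")
    if w.temp_c > 30:
        actions.append("deploy shade screens and evaporative cooling")
    if w.light_w_m2 < 150:  # dim day: natural light alone won't suffice
        actions.append("raise supplemental LED intensity")
    return actions or ["hold current settings"]

# Example: a humid, overcast day like the one Gupton describes.
print(plan_adjustments(OutsideWeather(humidity_pct=85, temp_c=27, light_w_m2=120)))
```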
Nevertheless, every CEA system has the same core needs — a consistent yield of high-quality crops to keep up year-round supply to customers. Additionally, “Everybody’s got the same set of problems,” Gupton says. Pests may come into a facility with seeds. Pythium, one of the most common pathogens in CEA, can damage plant roots. “Then you have root disease pressures that can also come internally — a change in [growing] substrate can change the way the plant performs,” Gupton says.
AI will help identify diseases, as well as when a plant is thirsty or overly hydrated, or when it needs more or less calcium, phosphorus, or nitrogen. So, while companies amass their own hyper-specific data sets, Lowman foresees a time within the next decade “when there will be some type of [open-source] database that has the most common types of plant stress identified” that growers will be able to tap into. Such databases will “create a community and move the science forward,” says Lowman.
In fact, IALR is working on assembling images for just such a database now. On so-called “smart tables” inside an Institute lab, a team is growing greens and subjecting them to various stressors. Then, they administer treatments while taking images of every plant every 15 minutes, says Lowman. Some experiments generate 80,000 images; the challenge lies in analyzing and annotating the vast trove of them, marking each one to reflect the treatment and outcome — for example, an increase in phosphate delivery and the plant’s response to it. Eventually, the images will be fed into AI systems to help them learn.
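At that scale, consistent annotation matters as much as the images themselves. Here is a minimal sketch of what one labeled record in such a dataset might look like; the schema and field names are invented for illustration and are not IALR’s actual format.

```python
# Hypothetical schema for one annotated image in a plant-stress dataset
# like the one IALR is assembling. All field names and values invented.
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class PlantImageRecord:
    plant_id: str
    captured_at: datetime
    image_path: str
    stressor: str    # e.g., "phosphate deficiency"
    treatment: str   # e.g., "increased phosphate delivery"
    outcome: str     # e.g., "leaf color improving"

record = PlantImageRecord(
    plant_id="kale_0042",
    captured_at=datetime(2023, 5, 1, 14, 15),
    image_path="smart_table_3/kale_0042/2023-05-01T14-15.png",
    stressor="phosphate deficiency",
    treatment="increased phosphate delivery",
    outcome="leaf color improving",
)
print(json.dumps(asdict(record), default=str, indent=2))
```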
For all the enthusiasm surrounding this technology, it’s not without downsides. Training just one AI system can emit over 250,000 pounds of carbon dioxide, according to MIT Technology Review. AI could also be used “to enhance environmental benefit for CEA and optimize [its] energy consumption,” says Rozita Dara, a computer science professor at the University of Guelph in Canada, specializing in AI and data governance, “but we first need to collect data to measure [it].”
Any system connected to the Internet of Things is also vulnerable to hacking; if CEA grows to the point where “there are many of these similar farms, and you're depending on feeding a population based on those, it would be quite scary,” Dara says. And there are privacy concerns, too, in systems where imaging is happening constantly. It’s partly for this reason, says Babylon’s Ratte, that the company’s in-farm cameras all “face down into the trays, so the only thing [visible] is pictures of plants.”
Tweaks to improve AI for CEA are happening all the time. Greenswell made its first harvest in 2022 and now has a year’s worth of data points it can use to start making more intelligent choices about how to feed, water, and supply light to plants, says Gupton. Ratte says he’s confident Babylon’s system can already “get our customers reliable harvests. But in terms of how far we have to go, it's a different problem,” he says. For example, if AI could detect that a farm is mostly empty—meaning the farm’s user hasn’t planted a new crop of greens—it could alert Babylon to check “what's going on with engagement with this user?” Ratte says. “Do they need more training? Did the main person responsible for the farm quit?”
Lowman says more automation is coming, offering greater ability for systems to identify problems and mitigate them on the spot. “We still have to develop datasets that are specific, so you can have a very clear control plan, [because] artificial intelligence is only as smart as what we tell it, and in plant science, there's so much variation,” he says. He believes AI’s next level will be “looking at those first early days of plant growth: when the seed germinates, how fast it germinates, what it looks like when it germinates.” Imaging all that and pairing it with AI, “can be a really powerful tool, for sure.”
Scientists make progress with growing organs for transplants
Story by Big Think
For over a century, scientists have dreamed of growing human organs sans humans. This technology could put an end to the scarcity of organs for transplants. But that’s just the tip of the iceberg. The capability to grow fully functional organs would revolutionize research. For example, scientists could observe mysterious biological processes, such as how human cells and organs develop a disease and respond (or fail to respond) to medication without involving human subjects.
Recently, a team of researchers from the University of Cambridge laid the foundations not just for growing functional organs but for creating functional synthetic embryos capable of developing a beating heart, gut, and brain. Their report was published in Nature.
The organoid revolution
In 1981, scientists discovered how to keep stem cells alive. This was a significant breakthrough, as stem cells have notoriously rigorous demands. Nevertheless, stem cells remained a relatively niche research area, mainly because scientists didn’t know how to convince the cells to turn into other cells.
Then, in 1987, scientists embedded isolated stem cells in a gelatinous protein mixture called Matrigel, which simulated the three-dimensional environment of animal tissue. The cells thrived, but they also did something remarkable: they created breast tissue capable of producing milk proteins. This was the first organoid — a clump of cells that behave and function like a real organ. The organoid revolution had begun, and it all started with a boob in Jello.
For the next 20 years, it was rare to find a scientist who identified as an “organoid researcher,” but there were many “stem cell researchers” who wanted to figure out how to turn stem cells into other cells. Eventually, they discovered the signals (called growth factors) that stem cells require to differentiate into other types of cells.
By the end of the 2000s, researchers began combining stem cells, Matrigel, and the newly characterized growth factors to create dozens of organoids, from liver organoids capable of producing the bile salts necessary for digesting fat to brain organoids with components that resemble eyes, the spinal cord, and arguably, the beginnings of sentience.
Synthetic embryos
Organoids possess an intrinsic flaw: they are organ-like. They share some characteristics with real organs, making them powerful tools for research. However, no one has found a way to create an organoid with all the characteristics and functions of a real organ. But Magdalena Żernicka-Goetz, a developmental biologist, might have set the foundation for that discovery.
Żernicka-Goetz hypothesized that organoids fail to develop into fully functional organs because organs develop as a collective. Organoid research often uses embryonic stem cells, which are the cells from which the developing organism is created. However, there are two other types of stem cells in an early embryo: stem cells that become the placenta and those that become the yolk sac (where the embryo grows and gets its nutrients in early development). For a human embryo (and its organs) to develop successfully, there needs to be a “dialogue” between these three types of stem cells. In other words, Żernicka-Goetz suspected the best way to grow a functional organoid was to produce a synthetic embryoid.
As described in the aforementioned Nature paper, Żernicka-Goetz and her team mimicked the embryonic environment by mixing these three types of stem cells from mice. Amazingly, the stem cells self-organized into structures and progressed through the successive developmental stages until they had beating hearts and the foundations of the brain.
“Our mouse embryo model not only develops a brain, but also a beating heart [and] all the components that go on to make up the body,” said Żernicka-Goetz. “It’s just unbelievable that we’ve got this far. This has been the dream of our community for years and major focus of our work for a decade and finally we’ve done it.”
If the methods developed by Żernicka-Goetz’s team are successful with human stem cells, scientists someday could use them to guide the development of synthetic organs for patients awaiting transplants. It also opens the door to studying how embryos develop during pregnancy.