The Science of Why Adjusting to Omicron Is So Tough
We are sticking our heads in the sand about the reality of Omicron, and the results may be catastrophic.
Omicron is over 4 times more infectious than Delta. The Pfizer two-shot vaccine offers only 33% protection from infection. A Pfizer booster raises protection to about 75%, but that protection wanes to around 30-40% 10 weeks after the booster.
The only silver lining is that Omicron appears to cause a milder illness than Delta. Yet the World Health Organization has warned against the “mildness” narrative.
That’s because much faster transmission and vaccine escape undercut Omicron’s less severe nature. Hospitals therefore face a large probability of being overwhelmed in this major Omicron wave, as the Centers for Disease Control and Prevention has warned.
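To see why transmissibility outweighs mildness, consider a rough back-of-the-envelope sketch. The numbers below are invented for illustration, not epidemiological estimates; the point is only that exponential spread swamps a one-time reduction in severity.

```python
# Illustrative only: invented growth and severity numbers, not real estimates.
generations = 10                       # rounds of transmission over a few weeks
r_delta, r_omicron = 1.2, 2.0          # hypothetical new cases per case per round
hosp_delta, hosp_omicron = 0.02, 0.01  # assumed hospitalization rates (Omicron half as severe)

cases_delta = 100 * r_delta ** generations      # both variants start from 100 cases
cases_omicron = 100 * r_omicron ** generations

print(f"Delta hospitalizations:   {cases_delta * hosp_delta:,.0f}")      # ~12
print(f"Omicron hospitalizations: {cases_omicron * hosp_omicron:,.0f}")  # ~1,024
# Exponential case growth swamps the halved per-case severity.
```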
Yet despite this very serious threat, we see little real action. The federal government tightened international travel guidelines and is promoting boosters. Certainly, it’s crucial to get as many people as possible boosted – and vaccinated with their initial doses – as soon as possible. But the government is not taking the steps that would be the real game-changers.
Pfizer’s anti-viral drug Paxlovid decreases the risk of hospitalization and death from COVID by 89%. Given that effectiveness, the FDA allowed Pfizer to end its trial early, because it would have been unethical to withhold the drug from people in the control group. Yet the FDA chose not to accelerate the approval process when Omicron emerged in late November, only granting emergency authorization in late December, once Omicron had taken over. Because it takes many weeks to ramp up production, that delay meant a lack of Paxlovid for the height of the Omicron wave, resulting in an unknown number of unnecessary deaths.
Widely available at-home testing would enable people to test themselves quickly, so that those with mild symptoms can quarantine instead of infecting others. Yet the federal government did not make tests widely available to the public when Omicron emerged in late November. That’s despite the obviousness of the coming wave, given the precedents of South Africa, the UK, and Denmark, and despite the fact that the government made vaccines freely available. Its best effort was to mandate that insurance reimburse the cost of test kits, which still leaves far too high a barrier for most people. By the time Omicron took over, the federal government recognized its mistake and ordered 500 million tests to be made available in January. However, that’s far too late. The FDA also played a harmful role here: its excessive focus on accuracy, going back to mid-2020, blocked the widespread availability of cheap at-home tests. By contrast, Europe has a much better supply of tests, thanks to its approval of quick, slightly less accurate tests.
Neither do we see meaningful leadership from employers. Some are bringing out the tired old “delay the office reopening” play. For example, Google, Uber, and Ford, along with many others, have delayed their return to the office by several months. Those that have already returned are calling for stricter pandemic measures, such as more masking and social distancing, but are not changing their work arrangements or adding sufficient ventilation to address the spread of COVID.
Despite plenty of warnings from risk management and cognitive bias experts, leaders are repeating the same mistakes they made with Delta. And so are regular people. For example, surveys show that Omicron has had very little impact on the willingness of unvaccinated Americans to get a first vaccine dose, or of vaccinated Americans to get a booster. That’s despite Omicron having taken over from Delta in late December.
What explains this puzzling behavior at both the individual and societal level? We humans are prone to falling for dangerous judgment errors called cognitive biases. Rooted in wishful thinking and gut reactions, these mental blindspots lead to poor strategic and financial decisions when we evaluate our choices.
These cognitive biases stem from the more primitive, emotional, and intuitive part of our brains that ensured survival in our ancestral environment. This quick, automatic reaction of our emotions represents the autopilot system of thinking, one of the two systems of thinking in our brains. It makes good decisions most of the time but also regularly makes certain systematic thinking errors, since it’s optimized for survival in that ancestral environment. In modern society, our survival is much less at risk, and our gut is more likely to compel us to focus on the wrong information when making decisions.
One of the biggest challenges relevant to Omicron is the cognitive bias known as the ostrich effect. Named after the myth that ostriches stick their heads into the sand when they fear danger, the ostrich effect refers to people denying negative reality. Delta illustrated the high likelihood of additional dangerous variants, yet we failed to pay attention to and prepare for such a threat.
We want the future to be normal. We’re tired of the pandemic and just want to get back to pre-pandemic times. Thus, we greatly underestimate the probability and impact of major disruptors, like new COVID variants. That cognitive bias is called the normalcy bias.
When we learn one way of functioning in any area, we tend to stick to that way of functioning. You might have heard of this as the hammer-nail syndrome: when you have a hammer, everything looks like a nail. That syndrome is called functional fixedness. This cognitive bias causes those used to their old ways of doing things to reject any alternatives, including preparing for a new variant.
Our minds naturally prioritize the present. We want what we want now, and downplay the long-term consequences of our current desires. That fallacious mental pattern is called hyperbolic discounting, where we excessively discount the benefits of orienting toward the future and focus on the present. A clear example is focusing on the short-term perceived gains of trying to return to normal over managing the risks of future variants.
The way forward is to defeat cognitive biases and stop denying reality by rethinking our approach to the future.
The FDA requires a serious overhaul. It’s designed for a non-pandemic environment, where the goal is a highly conservative, slow-moving, risk-averse approach, so that the public feels confident trusting whatever it approves. That’s simply unacceptable in a fast-moving pandemic, and we are bound to face more pandemics in the future.
The federal government needs to have cognitive bias experts weigh in on federal policy. Putting all of its eggs in one basket – vaccinations – is not a wise move when we face the risks of a vaccine-escaping variant. Its focus should also be on expediting and prioritizing anti-virals, scaling up cheap rapid testing, and subsidizing high-filtration masks.
For employers, instead of dictating a top-down approach to how employees collaborate, companies need to adopt a decentralized, team-led approach. Each leader of a rank-and-file team should determine what works best for that team; team leaders tend to know far more about what their teams need, after all. Moreover, they can respond to local emergencies like COVID surges.
At the same time, team leaders need to be trained in best practices for hybrid and remote team leadership. Companies transitioned to telework abruptly as part of the March 2020 lockdowns. They fell into the cognitive bias of functional fixedness and transposed their pre-existing, in-office methods of collaboration onto remote work. Zoom happy hours are a clear example: the large majority of employees dislike them, and research shows they disconnect, rather than connect, employees.
Yet supervisors continue to use them, despite much better, proven methods of facilitating collaboration, such as virtual water cooler discussions, virtual coworking, and virtual mentoring. Leaders also need to facilitate innovation in hybrid and remote teams through techniques such as virtual asynchronous brainstorming. Finally, team leaders need to adjust performance evaluation to the needs of hybrid and remote teams.
On an individual level, people built up certain expectations during the first two years of the pandemic, and those expectations don’t apply to Omicron. For example, most people still think that a cloth mask offers fine protection. In reality, you need an N95 mask, since Omicron is so much more infectious. Another example: many people don’t realize that symptom onset is much quicker with Omicron, and they aren’t prepared for the consequences.
Remember, too, that because Omicron is so much milder, a huge number of people are infected without symptoms, often without knowing it. About 8% of people admitted to hospitals for other reasons in San Francisco test positive for COVID without symptoms, a figure we can assume roughly translates to other cities. That means many people may think they’re fine when they’re actually infectious, and the result is a much higher chance of someone getting many other people sick.
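To get a feel for how quickly that risk compounds, here is a minimal sketch. The prevalence figure is assumed purely for illustration (the 8% hospital-admission statistic above is not a community prevalence estimate):

```python
# Illustration only: assume a fraction p of people are unknowingly infectious.
# The chance that a gathering of n people includes at least one of them is
# 1 - (1 - p)**n, which climbs quickly as gatherings grow.

p = 0.08  # assumed share of unknowingly infectious people (illustrative)
for n in (5, 10, 25, 50):
    chance = 1 - (1 - p) ** n
    print(f"{n:>2} people: {chance:.0%} chance at least one is infectious")
```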
During this time of record-breaking cases, be mindful of your internalized assumptions and adjust your risk calculus accordingly. If you can delay higher-risk activities, January and February might be the time to do it. Prepare for waves of disruption to continue, at least through the end of February.
Of course, you might also choose not to worry about getting infected. If you are vaccinated and boosted, and do not have any additional health risks, you are very unlikely to have a serious illness due to Omicron. You can accept the small risk of serious illness – which can still happen – and go about your daily life. If you do, watch out for those you care about who have health concerns: if you infect them, they might not have a mild case, even with Omicron.
In short, instead of trying to turn back the clock to the lost world of January 2020, consider how we might create a competitive advantage in our new future. COVID will never go away: we need to learn to live with it. That means reacting appropriately and thoughtfully to new variants and being intentional about our trade-offs.
Autonomous, indoor farming gives a boost to crops
The glass-encased cabinet looks like a display meant to hold reasonably priced watches, or drugstore beauty creams shipped from France. But instead of this stagnant merchandise, each of its five shelves is overgrown with leaves — moss-soft pea sprouts, spikes of Lolla rosa lettuces, pale bok choy, dark kale, purple basil or red-veined sorrel or green wisps of dill. The glass structure isn’t a cabinet, but rather a “micro farm.”
The gadget is on display at the Richmond, Virginia headquarters of Babylon Micro-Farms, a company that aims to make indoor farming in the U.S. more accessible and sustainable. Babylon’s soilless hydroponic growing system, which feeds plants via nutrient-enriched water, allows chefs on cruise ships, in cafeterias, and elsewhere to serve home-grown produce to patrons just seconds after it’s harvested. Currently, there are over 200 functioning systems, either sold or leased to customers, and more are on the way.
The chef-farmers choose from among 45 types of herb and leafy-greens seeds, plop them into grow trays, and a few weeks later they pick and serve. While success is predicated on at least a small amount of these humans’ care, the systems are autonomously surveilled round-the-clock from Babylon’s base of operations. And artificial intelligence is helping to run the show.
Imagine consistently perfect greens and tomatoes and strawberries, grown hyper-locally, using less water, without chemicals or environmental contaminants. This is the hefty promise of controlled environment agriculture (CEA) — basically, indoor farms that can be hydroponic, aeroponic (plant roots are suspended and fed through misting), or aquaponic (where fish play a role in fertilizing vegetables). But whether they grow 4,160 leafy-green servings per year, like one Babylon farm, or millions of servings, like some of the large, centralized facilities starting to supply supermarkets across the U.S., they seek to minimize failure as much as possible.
[Image: Babylon’s soilless hydroponic growing system. Courtesy Babylon Micro-Farms]
Here, AI is starting to play a pivotal role. CEA growers use it to help “make sense of what’s happening” to the plants in their care, says Scott Lowman, vice president of applied research at the Institute for Advanced Learning and Research (IALR) in Virginia, a state that’s investing heavily in CEA companies. And although these companies say they’re not aiming for a future with zero human employees, AI is certainly poised to take a lot of human farming intervention out of the equation — for better and worse.
Most of these companies are compiling their own data sets to identify anything that might block the success of their systems. Babylon had already integrated sensor data into its farms to measure heat and humidity, the nutrient content of water, and the amount of light plants receive. Last year, it received a National Science Foundation grant that allowed it to pilot the use of specialized cameras that take pictures in different spectrums to gather some less-obvious visual data about plants’ wellbeing and alert people if something seems off. “Will this plant be healthy tomorrow? Are there things…that the human eye can't see that the plant starts expressing?” says Amandeep Ratte, the company’s head of data science. “If our system can say, Hey, this plant is unhealthy, we can reach out to [users] preemptively about what they’re doing wrong, or is there a disease at the farm?” Ratte says. The earlier the better, to avoid crop failures.
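The article doesn’t detail Babylon’s software, but the kind of rule-based sensor monitoring it describes, with readings checked against safe ranges and alerts when something seems off, might look roughly like the sketch below. All metric names, ranges, and readings here are hypothetical.

```python
# A minimal sketch of sensor-based farm monitoring, loosely modeled on the
# checks described above. Metric names, ranges, and readings are hypothetical.

SAFE_RANGES = {
    "temperature_c": (18.0, 26.0),
    "humidity_pct": (50.0, 70.0),
    "nutrient_ec": (1.2, 2.4),   # electrical conductivity of the nutrient water
    "light_hours": (12.0, 18.0),
}

def check_farm(readings):
    """Return an alert message for every reading outside its safe range."""
    alerts = []
    for metric, value in readings.items():
        low, high = SAFE_RANGES[metric]
        if not low <= value <= high:
            alerts.append(f"{metric}={value} outside safe range [{low}, {high}]")
    return alerts

# Example: a farm that is too humid, with under-concentrated nutrient water.
print(check_farm({
    "temperature_c": 22.5,
    "humidity_pct": 81.0,
    "nutrient_ec": 0.9,
    "light_hours": 16.0,
}))
```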
[Image: Natural light accounts for 70 percent of Greenswell Growers’ energy use on a sunny day. Courtesy Greenswell Growers]
IALR’s Lowman says that other CEA companies are developing their AI systems to account for the different crops they grow — lettuces come in all shapes and sizes, after all, and each has different growing needs than, for example, tomatoes. The ways they run their operations also differ. Babylon is unusual in its decentralized structure. But centralized growing systems with one main location have variabilities, too. AeroFarms, which recently declared bankruptcy but will continue to run its 140,000-square-foot vertical operation in Danville, Virginia, is entirely enclosed and reliant on the intense violet glow of grow lights to produce microgreens.
Different companies have different data needs. What data is essential to AeroFarms isn’t quite the same as for Greenswell Growers, located in Goochland County, Virginia. Greenswell raises four kinds of lettuce in a 77,000-square-foot automated hydroponic greenhouse, where the vagaries of naturally available light, which accounts for 70 percent of Greenswell’s energy use on a sunny day, affect operations. Their tech needs to account for “outside weather impacts,” says president Carl Gupton. “What adjustments do we have to make inside of the greenhouse to offset what's going on outside environmentally, to give that plant optimal conditions? When it's 85 percent humidity outside, the system needs to do X, Y and Z to get the conditions that we want inside.”
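As a hedged illustration of the weather-compensation idea Gupton describes, a simplified control rule might look like this. The actions and thresholds are invented for the example, not Greenswell’s actual control logic.

```python
# Invented rules and thresholds, for illustration only: adjust indoor systems
# based on outdoor conditions, as in the humidity example Gupton gives.

def indoor_adjustments(outdoor_humidity_pct, outdoor_temp_c):
    actions = {"vent_fans": "off", "dehumidifier": "off", "shade_screens": "open"}
    if outdoor_humidity_pct > 80:
        # Humid outside air can't carry away much moisture, so dry the air
        # mechanically and slow down ventilation.
        actions["dehumidifier"] = "on"
        actions["vent_fans"] = "low"
    if outdoor_temp_c > 30:
        actions["shade_screens"] = "closed"
        actions["vent_fans"] = "high"
    return actions

print(indoor_adjustments(outdoor_humidity_pct=85.0, outdoor_temp_c=24.0))
# -> {'vent_fans': 'low', 'dehumidifier': 'on', 'shade_screens': 'open'}
```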
Nevertheless, every CEA system has the same core needs — a consistent yield of high-quality crops to keep up year-round supply to customers. Additionally, “Everybody’s got the same set of problems,” Gupton says. Pests may come into a facility with seeds. A disease called pythium, one of the most common in CEA, can damage plant roots. “Then you have root disease pressures that can also come internally — a change in [growing] substrate can change the way the plant performs,” Gupton says.
AI will help identify diseases, as well as when a plant is thirsty or overly hydrated, or when it needs more or less calcium, phosphorus, or nitrogen. So, while companies amass their own hyper-specific data sets, Lowman foresees a time within the next decade “when there will be some type of [open-source] database that has the most common types of plant stress identified” that growers will be able to tap into. Such databases will “create a community and move the science forward,” says Lowman.
In fact, IALR is working on assembling images for just such a database now. On so-called “smart tables” inside an Institute lab, a team is growing greens and subjecting them to various stressors. Then they administer treatments while taking images of every plant every 15 minutes, says Lowman. Some experiments generate 80,000 images; the challenge lies in analyzing and annotating the vast trove of them, marking each one to reflect the outcome: for example, an increase in phosphate delivery and the plant’s response to it. Eventually, the images will be fed into AI systems to help them learn.
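As a sketch of what such an annotation might record per image (field names are hypothetical, inferred from the workflow Lowman describes rather than from IALR’s actual schema):

```python
# Hypothetical annotation record for one plant image; field names are
# invented based on the workflow described, not IALR's actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlantImageAnnotation:
    image_path: str
    plant_id: str
    captured_at: datetime
    stressor: str    # e.g. "phosphate deficiency"
    treatment: str   # e.g. "increased phosphate delivery"
    response: str    # observed outcome for the plant

sample = PlantImageAnnotation(
    image_path="smart_table_3/plant_417/2023-06-01T10-15.png",
    plant_id="plant_417",
    captured_at=datetime(2023, 6, 1, 10, 15),
    stressor="phosphate deficiency",
    treatment="increased phosphate delivery",
    response="leaf color recovering",
)
print(sample.plant_id, sample.response)
```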
For all the enthusiasm surrounding this technology, it’s not without downsides. Training just one AI system can emit over 250,000 pounds of carbon dioxide, according to MIT Technology Review. AI could also be used “to enhance environmental benefit for CEA and optimize [its] energy consumption,” says Rozita Dara, a computer science professor at the University of Guelph in Canada, specializing in AI and data governance, “but we first need to collect data to measure [it].”
[Image: The chef-farmers can choose from 45 types of herb and leafy-greens seeds. Courtesy Babylon Micro-Farms]
Any system connected to the Internet of Things is also vulnerable to hacking; if CEA grows to the point where “there are many of these similar farms, and you're depending on feeding a population based on those, it would be quite scary,” Dara says. And there are privacy concerns, too, in systems where imaging is happening constantly. It’s partly for this reason, says Babylon’s Ratte, that the company’s in-farm cameras all “face down into the trays, so the only thing [visible] is pictures of plants.”
Tweaks to improve AI for CEA are happening all the time. Greenswell made its first harvest in 2022 and now has a full year of data points it can use to start making more intelligent choices about how to feed, water, and supply light to plants, says Gupton. Ratte says he’s confident Babylon’s system can already “get our customers reliable harvests. But in terms of how far we have to go, it's a different problem,” he says. For example, if AI can detect that a farm is mostly empty — meaning the farm’s user hasn’t planted a new crop of greens — it can alert Babylon to check “what's going on with engagement with this user?” Ratte says. “Do they need more training? Did the main person responsible for the farm quit?”
Lowman says more automation is coming, offering greater ability for systems to identify problems and mitigate them on the spot. “We still have to develop datasets that are specific, so you can have a very clear control plan, [because] artificial intelligence is only as smart as what we tell it, and in plant science, there's so much variation,” he says. He believes AI’s next level will be “looking at those first early days of plant growth: when the seed germinates, how fast it germinates, what it looks like when it germinates.” Imaging all that and pairing it with AI, “can be a really powerful tool, for sure.”
Scientists make progress with growing organs for transplants
Story by Big Think
For over a century, scientists have dreamed of growing human organs sans humans. This technology could put an end to the scarcity of organs for transplants. But that’s just the tip of the iceberg. The capability to grow fully functional organs would revolutionize research. For example, scientists could observe mysterious biological processes, such as how human cells and organs develop a disease and respond (or fail to respond) to medication without involving human subjects.
Recently, a team of researchers from the University of Cambridge laid the foundations not just for growing functional organs but for growing functional synthetic embryos capable of developing a beating heart, gut, and brain. Their report was published in Nature.
The organoid revolution
In 1981, scientists discovered how to keep stem cells alive. This was a significant breakthrough, as stem cells have notoriously rigorous demands. Nevertheless, stem cells remained a relatively niche research area, mainly because scientists didn’t know how to convince the cells to turn into other cells.
Then, in 1987, scientists embedded isolated stem cells in a gelatinous protein mixture called Matrigel, which simulated the three-dimensional environment of animal tissue. The cells thrived, but they also did something remarkable: they created breast tissue capable of producing milk proteins. This was the first organoid — a clump of cells that behave and function like a real organ. The organoid revolution had begun, and it all started with a boob in Jello.
For the next 20 years, it was rare to find a scientist who identified as an “organoid researcher,” but there were many “stem cell researchers” who wanted to figure out how to turn stem cells into other cells. Eventually, they discovered the signals (called growth factors) that stem cells require to differentiate into other types of cells.
By the end of the 2000s, researchers began combining stem cells, Matrigel, and the newly characterized growth factors to create dozens of organoids, from liver organoids capable of producing the bile salts necessary for digesting fat to brain organoids with components that resemble eyes, the spinal cord, and arguably, the beginnings of sentience.
Synthetic embryos
Organoids possess an intrinsic flaw: they are organ-like. They share some characteristics with real organs, making them powerful tools for research. However, no one has found a way to create an organoid with all the characteristics and functions of a real organ. But Magdalena Żernicka-Goetz, a developmental biologist, might have set the foundation for that discovery.
Żernicka-Goetz hypothesized that organoids fail to develop into fully functional organs because organs develop as a collective. Organoid research often uses embryonic stem cells, which are the cells from which the developing organism is created. However, there are two other types of stem cells in an early embryo: stem cells that become the placenta and those that become the yolk sac (where the embryo grows and gets its nutrients in early development). For a human embryo (and its organs) to develop successfully, there needs to be a “dialogue” between these three types of stem cells. In other words, Żernicka-Goetz suspected the best way to grow a functional organoid was to produce a synthetic embryoid.
As described in the aforementioned Nature paper, Żernicka-Goetz and her team mimicked the embryonic environment by mixing these three types of stem cells from mice. Amazingly, the stem cells self-organized into structures and progressed through the successive developmental stages until they had beating hearts and the foundations of the brain.
“Our mouse embryo model not only develops a brain, but also a beating heart [and] all the components that go on to make up the body,” said Żernicka-Goetz. “It’s just unbelievable that we’ve got this far. This has been the dream of our community for years and major focus of our work for a decade and finally we’ve done it.”
If the methods developed by Żernicka-Goetz’s team are successful with human stem cells, scientists someday could use them to guide the development of synthetic organs for patients awaiting transplants. It also opens the door to studying how embryos develop during pregnancy.