Autonomous indoor farming gives a boost to crops
The glass-encased cabinet looks like a display meant to hold reasonably priced watches, or drugstore beauty creams shipped from France. But instead of this stagnant merchandise, each of its five shelves is overgrown with leaves — moss-soft pea sprouts, spikes of Lollo rossa lettuces, pale bok choy, dark kale, purple basil or red-veined sorrel or green wisps of dill. The glass structure isn’t a cabinet, but rather a “micro farm.”
The gadget is on display at the Richmond, Virginia, headquarters of Babylon Micro-Farms, a company that aims to make indoor farming in the U.S. more accessible and sustainable. Babylon’s soilless hydroponic growing system, which feeds plants via nutrient-enriched water, allows chefs on cruise ships, in cafeterias and elsewhere to serve patrons home-grown produce just seconds after it’s harvested. Currently, there are over 200 functioning systems, either sold or leased to customers, and more are on the way.
The chef-farmers choose from among 45 types of herb and leafy-greens seeds, plop them into grow trays, and a few weeks later they pick and serve. While success depends on at least a small amount of care from these humans, the systems are monitored round the clock from Babylon’s base of operations. And artificial intelligence is helping to run the show.
Imagine consistently perfect greens and tomatoes and strawberries, grown hyper-locally, using less water, without chemicals or environmental contaminants. This is the hefty promise of controlled environment agriculture (CEA) — basically, indoor farms that can be hydroponic, aeroponic (plant roots are suspended and fed through misting), or aquaponic (where fish play a role in fertilizing vegetables). But whether they grow 4,160 leafy-green servings per year, like one Babylon farm, or millions of servings, like some of the large, centralized facilities starting to supply supermarkets across the U.S., they all seek to minimize crop failure.
Babylon’s soilless hydroponic growing system
Courtesy Babylon Micro-Farms
Here, AI is starting to play a pivotal role. CEA growers use it to help “make sense of what’s happening” to the plants in their care, says Scott Lowman, vice president of applied research at the Institute for Advanced Learning and Research (IALR) in Virginia, a state that’s investing heavily in CEA companies. And although these companies say they’re not aiming for a future with zero human employees, AI is certainly poised to take a lot of human farming intervention out of the equation — for better and worse.
Most of these companies are compiling their own data sets to identify anything that might block the success of their systems. Babylon had already integrated sensor data into its farms to measure heat and humidity, the nutrient content of water, and the amount of light plants receive. Last year, the company received a National Science Foundation grant that allowed it to pilot the use of specialized cameras that take pictures in different spectrums to gather some less-obvious visual data about plants’ wellbeing and alert people if something seems off. “Will this plant be healthy tomorrow? Are there things…that the human eye can't see that the plant starts expressing?” says Amandeep Ratte, the company’s head of data science. “If our system can say, Hey, this plant is unhealthy, we can reach out to [users] preemptively about what they’re doing wrong, or is there a disease at the farm?” Ratte says. The earlier the better, to avoid crop failures.
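Babylon hasn’t published how its monitoring stack works under the hood, but the kind of early-warning logic Ratte describes — fuse sensor readings with an image-derived health score and flag anything out of range — can be sketched in a few lines. Everything below (the field names, the “healthy” ranges, the NDVI-style score) is a hypothetical illustration, not the company’s implementation:

```python
from dataclasses import dataclass

@dataclass
class FarmReading:
    """One monitoring snapshot from a micro-farm (all fields hypothetical)."""
    temp_c: float        # air temperature
    humidity_pct: float  # relative humidity
    ec_ms_cm: float      # electrical conductivity, a proxy for nutrient strength
    light_ppfd: float    # photosynthetic photon flux density
    ndvi: float          # vegetation-index score from a multispectral camera

# Illustrative "healthy" ranges; real setpoints vary by crop and growth stage.
HEALTHY_RANGES = {
    "temp_c": (18.0, 24.0),
    "humidity_pct": (50.0, 70.0),
    "ec_ms_cm": (1.2, 2.0),
    "light_ppfd": (150.0, 300.0),
    "ndvi": (0.4, 1.0),  # a falling NDVI can precede visible stress
}

def health_alerts(reading: FarmReading) -> list[str]:
    """Return a human-readable alert for every value outside its range."""
    alerts = []
    for field, (lo, hi) in HEALTHY_RANGES.items():
        value = getattr(reading, field)
        if not lo <= value <= hi:
            alerts.append(f"{field}={value:.2f} outside [{lo}, {hi}]")
    return alerts

snapshot = FarmReading(temp_c=22.5, humidity_pct=85.0,
                       ec_ms_cm=1.6, light_ppfd=210.0, ndvi=0.31)
for alert in health_alerts(snapshot):
    print("ALERT:", alert)  # e.g., message the farm's user before the crop fails
```

In a production system, static thresholds like these would give way to a learned, per-crop model of what “healthy tomorrow” looks like, which is exactly the question Ratte’s team is chasing.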
Natural light accounts for 70 percent of Greenswell Growers’ energy use on a sunny day.
Courtesy Greenswell Growers
IALR’s Lowman says that other CEA companies are developing their AI systems to account for the different crops they grow — lettuces come in all shapes and sizes, after all, and each has different growing needs than, for example, tomatoes. The ways they run their operations differ, too. Babylon is unusual in its decentralized structure. But centralized growing systems with one main location have their own variabilities. AeroFarms, which recently declared bankruptcy but will continue to run its 140,000-square-foot vertical operation in Danville, Virginia, is entirely enclosed and reliant on the intense violet glow of grow lights to produce microgreens.
Different companies have different data needs. What data is essential to AeroFarms isn’t quite the same as for Greenswell Growers, located in Goochland County, Virginia. Greenswell raises four kinds of lettuce in a 77,000-square-foot automated hydroponic greenhouse, where operations are affected by the vagaries of naturally available light, which accounts for 70 percent of the company’s energy use on a sunny day. Their tech needs to account for “outside weather impacts,” says president Carl Gupton. “What adjustments do we have to make inside of the greenhouse to offset what's going on outside environmentally, to give that plant optimal conditions? When it's 85 percent humidity outside, the system needs to do X, Y and Z to get the conditions that we want inside.”
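Gupton’s “X, Y and Z” is, at bottom, a feedback loop: measure inside conditions, compare them to the crop’s setpoints, and pick the cheapest actuator that closes the gap. Here is a minimal sketch of that idea for humidity alone; the actuators, gains and setpoints are invented for illustration and are not Greenswell’s actual control logic:

```python
def vent_and_dehumidify(outside_rh: float, inside_rh: float,
                        target_rh: float = 65.0) -> dict:
    """Toy proportional controller for greenhouse humidity.

    Returns actuator commands as fractions 0..1. All numbers here are
    illustrative assumptions, not a real greenhouse's tuning.
    """
    error = inside_rh - target_rh  # positive => too humid inside
    if error <= 0:
        return {"vents": 0.0, "dehumidifier": 0.0}
    if outside_rh < inside_rh:
        # Outside air is drier: venting helps, so prefer it (it's cheap).
        return {"vents": min(1.0, error / 10.0), "dehumidifier": 0.0}
    # Outside air is wetter (e.g., 85% RH): venting would make things worse,
    # so keep vents closed and run mechanical dehumidification instead.
    return {"vents": 0.0, "dehumidifier": min(1.0, error / 10.0)}

print(vent_and_dehumidify(outside_rh=85.0, inside_rh=72.0))
# {'vents': 0.0, 'dehumidifier': 0.7}
```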
Nevertheless, every CEA system has the same core need: a consistent yield of high-quality crops to keep up year-round supply to customers. Additionally, “Everybody’s got the same set of problems,” Gupton says. Pests may come into a facility with seeds. Pythium, one of the most common diseases in CEA, can damage plant roots. “Then you have root disease pressures that can also come internally — a change in [growing] substrate can change the way the plant performs,” Gupton says.
AI will help identify diseases, as well as when a plant is thirsty or overly hydrated, or when it needs more or less calcium, phosphorus or nitrogen. So, while companies amass their own hyper-specific data sets, Lowman foresees a time within the next decade “when there will be some type of [open-source] database that has the most common types of plant stress identified” that growers will be able to tap into. Such databases will “create a community and move the science forward,” says Lowman.
In fact, IALR is working on assembling images for just such a database now. On so-called “smart tables” inside an Institute lab, a team grows greens and subjects them to various stressors, then administers treatments while taking images of every plant every 15 minutes, says Lowman. Some experiments generate 80,000 images; the challenge lies in analyzing and annotating the vast trove of them, marking each one to reflect the outcome — for example, an increase in phosphate delivery and the plant’s response to it. Eventually, the images will be fed into AI systems to help them learn.
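IALR hasn’t described its annotation schema, but each of those 80,000 images needs a label tying a stressor and a treatment to an observed response. A minimal, hypothetical record might look like this (all field names and values are invented for illustration):

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class PlantImageRecord:
    """One annotated smart-table image (schema is a hypothetical sketch)."""
    image_path: str
    captured_at: str  # ISO timestamp; images arrive every 15 minutes
    plant_id: str
    stressor: str     # e.g., "phosphate_deficiency"
    treatment: str    # e.g., "increase_phosphate_delivery"
    response: str     # annotator's judgment, e.g., "recovered"

record = PlantImageRecord(
    image_path="smart_table_3/plant_017/0421_1015.png",
    captured_at=datetime(2023, 4, 21, 10, 15).isoformat(),
    plant_id="plant_017",
    stressor="phosphate_deficiency",
    treatment="increase_phosphate_delivery",
    response="recovered",
)
# Records like this, written once per image, become the training labels
# for the kind of open-source stress database Lowman envisions.
print(json.dumps(asdict(record), indent=2))
```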
For all the enthusiasm surrounding this technology, it’s not without downsides. Training just one AI system can emit over 250,000 pounds of carbon dioxide, according to MIT Technology Review. AI could also be used “to enhance environmental benefit for CEA and optimize [its] energy consumption,” says Rozita Dara, a computer science professor at the University of Guelph in Canada who specializes in AI and data governance, “but we first need to collect data to measure [it].”
The chef-farmers can choose from 45 types of herb and leafy-greens seeds.
Courtesy Babylon Micro-Farms
Any system connected to the Internet of Things is also vulnerable to hacking; if CEA grows to the point where “there are many of these similar farms, and you're depending on feeding a population based on those, it would be quite scary,” Dara says. And there are privacy concerns, too, in systems where imaging is happening constantly. It’s partly for this reason, says Babylon’s Ratte, that the company’s in-farm cameras all “face down into the trays, so the only thing [visible] is pictures of plants.”
Tweaks to improve AI for CEA are happening all the time. Greenswell made its first harvest in 2022 and now has a year’s worth of data points it can use to start making more intelligent choices about how to feed, water and supply light to plants, says Gupton. Ratte says he’s confident Babylon’s system can already “get our customers reliable harvests. But in terms of how far we have to go, it's a different problem,” he says. For example, if AI detects that a farm is mostly empty — meaning the farm’s user hasn’t planted a new crop of greens — it can alert Babylon to check on “what's going on with engagement with this user?” Ratte says. “Do they need more training? Did the main person responsible for the farm quit?”
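Ratte doesn’t detail how such a check would work, but one simple version is an occupancy threshold: if a vision model (assumed here, not described by Babylon) reports that most trays contain little plant matter, flag the farm for human follow-up. A toy sketch with invented thresholds:

```python
def farm_engagement_alert(tray_plant_fractions: list[float],
                          empty_threshold: float = 0.1,
                          max_empty_ratio: float = 0.5) -> bool:
    """Flag a farm for follow-up when most trays look unplanted.

    `tray_plant_fractions` is the fraction of each tray's pixels that an
    upstream vision model classified as plant matter (that model, and both
    thresholds, are hypothetical).
    """
    empty = sum(1 for f in tray_plant_fractions if f < empty_threshold)
    return empty / len(tray_plant_fractions) > max_empty_ratio

# Four of five trays near-empty: worth asking if the user needs retraining.
print(farm_engagement_alert([0.02, 0.0, 0.05, 0.01, 0.8]))  # True
```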
Lowman says more automation is coming, offering greater ability for systems to identify problems and mitigate them on the spot. “We still have to develop datasets that are specific, so you can have a very clear control plan, [because] artificial intelligence is only as smart as what we tell it, and in plant science, there's so much variation,” he says. He believes AI’s next level will be “looking at those first early days of plant growth: when the seed germinates, how fast it germinates, what it looks like when it germinates.” Imaging all that and pairing it with AI “can be a really powerful tool, for sure.”
Two-and-a-half-year-old Huckleberry, a blue merle Australian shepherd, pulls hard at her leash; her yelps can be heard by skiers and boarders high above on the chairlift that carries them over the ski patrol hut to the top of the mountain. Huckleberry is an avalanche rescue dog — or avy dog, for short. She lives and works with her owner and handler, a ski patroller at Breckenridge Ski Resort in Colorado. As she watches the trainer play a game of hide-and-seek with six-month-old Lume, a golden retriever and avy dog-in-training, Huckleberry continues to strain at her leash; she loves the game. Hide-and-seek is one of the key training methods for teaching avy dogs the rescue skills they need to find someone caught in an avalanche — skier, snowmobiler, hiker, climber.
Lume’s owner waves a T-shirt in front of the puppy. While another patroller holds him back, Lume’s owner runs away and hides. About a minute later — after a lot of barking — Lume is released and commanded to “search.” He springs free, running around the hut to find his owner, who reacts with great excitement and fanfare. Lume’s scent training will continue for the rest of the ski season (Breckenridge plans to operate through May, or as long as weather permits) and through the off-season. “We make this game progressively harder by not allowing the dog [to] watch the victim run away,” explains Dave Leffler, a Breckenridge ski patroller and head of the avy dog program, who has owned, trained and raised many avy dogs. Eventually, the trainers “dig an open hole in the snow to duck out of sight and gradually turn the hole into a cave where the dog has to dig to get the victim,” explains Leffler.
By the time he is three, Lume, like Huckleberry, will be a fully trained avy dog and will join seven other avy dogs on the Breckenridge ski patrol team. Some of the team members, both human and canine, are also certified to work with Colorado Rapid Avalanche Deployment, a coordinated response team that works with the Summit County Sheriff’s Office on avalanche emergencies outside the ski slopes’ boundaries.
There have been 19 avalanche deaths in the U.S. this season, according to avalanche.org, which tracks slides; eight of them were in Colorado. During the entirety of last season, there were 17. Avalanche season runs from November through June, but avalanches can occur year-round.
High tech and high stakes
Complementing avy dogs’ ability to smell people buried in a slide, avalanche detection, rescue and recovery is becoming increasingly high tech. There are transceivers, signal locators, ground scanners and drones, which many in avalanche rescue and recovery consider “game changers.”
A drone can provide thermal imaging of objects caught in a slide; what looks like a rock from far away might be a human with a heat signature. Transceivers, also known as beacons, send a signal from an avalanche victim to a companion. Signal locators, like RECCO reflectors, which are often sewn directly into gear, can echo back a radar signal sent by a detector; most ski resorts have RECCO detector units.
Research suggests that Ground Penetrating Radar (GPR), an electromagnetic tool used by geophysicists to pull images from inside the ground, could be used to locate an avalanche victim. A new study from the Department of Energy’s Sandia National Laboratories suggests that a computer program developed to pinpoint the source of a chemical or biological terrorist attack could also be used to find someone submerged in an avalanche. The search algorithm allows small robots (described as cockroach-sized) to “swarm” a search area. Researchers say that this distributed optimization algorithm can help find avalanche victims four times faster than current search mechanisms. For a person buried in an avalanche, the chance of survival plummets after 20 minutes, so every moment counts.
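Sandia’s distributed optimization algorithm isn’t spelled out in the study’s public summary, but the basic idea — many simple robots independently sampling signal strength and sharing their best find — can be illustrated with a toy swarm search. The beacon model, parameters and update rule below are all invented for illustration, not Sandia’s method:

```python
import random

def signal_strength(x: float, y: float) -> float:
    """Toy beacon field: strength peaks at the (hidden) victim's position."""
    vx, vy = 30.0, -12.0  # unknown to the robots
    return 1.0 / (1.0 + (x - vx) ** 2 + (y - vy) ** 2)

def swarm_search(n_robots: int = 20, steps: int = 200,
                 step_size: float = 1.0) -> tuple[float, float]:
    """Minimal swarm-style search (a sketch, not Sandia's actual algorithm).

    Each robot drifts toward the swarm's best find plus random exploration,
    keeps a move only if its local signal improves, and the swarm tracks
    the strongest reading any robot has reported.
    """
    robots = [(random.uniform(-50, 50), random.uniform(-50, 50))
              for _ in range(n_robots)]
    best = max(robots, key=lambda p: signal_strength(*p))
    for _ in range(steps):
        for i, (x, y) in enumerate(robots):
            nx = x + 0.2 * (best[0] - x) + random.uniform(-step_size, step_size)
            ny = y + 0.2 * (best[1] - y) + random.uniform(-step_size, step_size)
            if signal_strength(nx, ny) > signal_strength(x, y):
                robots[i] = (nx, ny)
                if signal_strength(nx, ny) > signal_strength(*best):
                    best = (nx, ny)
    return best

random.seed(0)
print("estimated victim position:", swarm_search())  # converges near (30, -12)
```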
An avy dog in training picks up a scent
Sarah McLear
While rescue gear has been evolving, predicting when a slab will fall remains an emerging science — kind of where weather forecasting was in the 1980s. “Avalanche forecasting still relies on documenting avalanches by going out and looking,” says Ethan Greene, director of the Colorado Avalanche Information Center (CAIC). “So if there's a big snowstorm, and as you might remember, most avalanches happen during snowstorms, we could have 10,000 avalanches that release and we document 50,” says Greene. “Avalanche forecasting is essentially pattern recognition,” he adds, along with understanding the layering structure of snow.
However, determining where the hazards lie can be tricky. While a dense layer of snow over a softer, weaker layer may be a recipe for an avalanche, there’s so much variability in snowpack that no one formula can predict the trigger. Further, observing and measuring snow at a single point may not be representative of all nearby slopes. Finally, there’s not enough historical data to help avalanche scientists create better prediction models.
That, however, may be changing.
Last year, an international group of researchers created computer simulations of snow cover using 16 years of meteorological data to forecast avalanche hazards, publishing their research in Cold Regions Science and Technology. They believe their models, which categorize different kinds of avalanches, can support forecasting and determine whether the avalanche is natural (caused by temperature changes, wind, additional snowfall) or artificial (triggered by a human or animal).
With data from two sites in British Columbia and one in Switzerland, researchers built computer simulations of five different avalanche types. “In terms of real time avalanche forecasting, this has potential to fill in a lot of data gaps, where we don't have field observations of what the snow looks like,” says Simon Horton, a postdoctoral fellow with the Simon Fraser University Centre for Natural Hazards Research and a forecaster with Avalanche Canada, who participated in the study. While complex models that simulate snowpack layers have been around for a few decades, they weren’t easy to apply until recently. “It's been difficult to find out how to apply that to actual decision-making and improving safety,” says Horton. If you can derive avalanche problem types from simulated snowpack properties, he says, you’ll learn “a lot about how you want to manage that risk.”
The five categories include “new snow,” which is unstable and slides down the slope; “wet snow,” when rain or heat turns it liquid; as well as “wind-drifted snow,” “persistent weak layers” and “old snow.” “That's when there's some type of deeply buried weak layer in the snow that releases without any real change in the weather,” Horton explains of the last type. “These ones tend to cause the most accidents.” One step by a person on that structurally weak layer of snow can cause a slide. Horton is hopeful that computer simulations of avalanche types can be used by scientists in different snow climates to help predict hazard levels.
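Deriving avalanche problem types from simulated snowpack properties, as Horton describes, amounts to mapping layer-by-layer features onto those five categories. Here is a deliberately crude, rule-based sketch of that mapping; the fields, units and thresholds are invented for illustration and are far simpler than the paper’s actual model:

```python
from dataclasses import dataclass

@dataclass
class SnowLayer:
    """One simulated snowpack layer (fields and units are illustrative)."""
    age_days: float        # time since the layer was deposited
    density: float         # kg/m^3; low density = soft new snow
    wetness: float         # liquid water content, 0..1
    wind_deposited: bool
    persistent_weak: bool  # e.g., buried surface hoar or facets

def avalanche_problem(layers: list[SnowLayer]) -> str:
    """Map a layered snowpack (surface layer first) to a problem type."""
    top = layers[0]
    if any(l.persistent_weak and l.age_days > 14 for l in layers):
        return "old snow"  # deeply buried weak layer
    if any(l.persistent_weak for l in layers):
        return "persistent weak layers"
    if top.wetness > 0.03:
        return "wet snow"
    if top.wind_deposited:
        return "wind-drifted snow"
    if top.age_days < 2 and top.density < 150:
        return "new snow"
    return "no dominant problem"

pack = [SnowLayer(1, 120, 0.0, False, False),
        SnowLayer(30, 250, 0.0, False, True)]
print(avalanche_problem(pack))  # "old snow", the type Horton says causes most accidents
```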
Greene is doubtful. “If you have six slopes that are lined up next to each other, and you're going to try to predict which one avalanches and the exact dimensions and what time, that's going to be really hard to do. And I think it's going to be a long time before we're able to do that,” says Greene.
What both researchers do agree on, though, is that what avalanche prediction really needs is better imagery through satellite detection. “Just being able to count the number of avalanches that are out there will have a huge impact on what we do,” Greene says. “[Satellites] will change what we do, dramatically.” In a 2022 paper, scientists at the University of Aberdeen in Scotland used satellites to study two deadly Himalayan avalanches. The imaging helped them determine that sediment from a 2016 ice avalanche, plus subsequent snow avalanches, contributed to the 2021 avalanche that caused a flash flood, killing over 200 people. The researchers say that understanding the avalanches’ characteristics through satellite imagery can show how one such event increases the magnitude of another in the same area.
Avy dog trainers hide in dug-out holes in the snow, teaching the dogs to find buried victims
Sarah McLear
Lifesaving combo: human tech and Mother Nature’s gear
Even as avalanche forecasting evolves, dogs with their built-in rescue mechanisms will remain invaluable. With smell receptors ranging from 800 million for an average dog to 4 billion for scent hounds, canines remain key to finding people caught in slides. (Humans, in comparison, have a meager 12 million.) A new study published in the Journal of Neuroscience revealed that in dogs, smell and vision are connected in the brain, a link that has not been found in other animals. “They can detect the smell of their owner's fingerprints on a glass slide six weeks after they touched it,” says Nicholas Dodman, professor emeritus at Cummings School of Veterinary Medicine at Tufts University. “And they can track from a boat where a box filled with meat was buried in the water, 100 feet below,” says Dodman, who is also co-founder and president of the Center for Canine Behavior Studies.
Another recent study, from Queen’s University Belfast in the United Kingdom, further confirms that dogs can smell when humans are stressed. Dogs can also detect the smell of a person’s breath and the smell of the skin cells of a deceased person.
The lifesaving “equipment” Leffler believes in is a combination of emerging human-made avalanche-prediction tech and the incredible nature-made tech of dogs’ olfactory talents. Even as the human-made technology develops further, he believes it will be most effective when used together with a dog’s millions of smell receptors. “It is a combination of technology and the avalanche dog that will always be effective in finding an avalanche victim.”