This Dog's Nose Is So Good at Smelling Cancer That Scientists Are Trying to Build One Just Like It
Daisy wouldn't leave Claire Guest alone. Instead of joining Guest's other dogs for a run in the park, the golden retriever with the soulful eyes kept nudging Guest's chest and staring at her intently, as if hoping she'd get the message.
"I was incredibly lucky to be told by Daisy."
When Guest got home, she detected a tiny lump in one of her breasts. She dismissed it, but her sister, who is a family doctor, insisted she get it checked out.
That saved her life. A series of tests, including a biopsy and a mammogram, revealed the lump was a benign cyst. But doctors discovered a tumor hidden deep inside her chest wall, an insidious malignancy that normally isn't detected until the cancer has rampaged out of control throughout the body. "My prognosis would have been very poor," says Guest, who is an animal behaviorist. "I was incredibly lucky to be told by Daisy."
Ironically, at the time, Guest was training hearing dogs, which alert deaf owners to doorbells and ringing phones, for a charitable foundation. But she had been working on a side project to harness dogs' exquisitely sensitive sense of smell to spot cancer at its earliest and most treatable stages. When Guest was diagnosed with cancer two decades ago, however, the use of dogs to detect disease was in its infancy and the scientific evidence was largely anecdotal.
In the years since, Guest and Medical Detection Dogs (MDD), the British charitable foundation she co-founded with Dr. John Church in 2008, have shown that dogs can be trained to detect odors that signal a looming medical crisis hours in advance, in the case of diabetes or epilepsy, as well as the presence of cancers.
In a proof-of-principle study published in the BMJ in 2004, they showed dogs had a better than 40 percent success rate in picking out urine samples from bladder cancer patients, significantly better than the 14 percent expected by random chance (one correct pick out of seven samples). Subsequent research indicated dogs can detect odors down to parts per trillion, the equivalent of sniffing out a teaspoon of sugar in two Olympic-size swimming pools (a million gallons).
But the problem is "dogs can't be scaled up"—it costs upwards of $25,000 to train them—"and you can't keep a trained dog in every oncology practice," says Guest.
The good news is that the pivotal 2004 BMJ paper caught the attention of two American scientists—Andreas Mershin, a physicist at MIT, and Wen-Yee Lee, a chemistry professor at The University of Texas at El Paso. They have joined Guest's quest to leverage canines' highly attuned olfactory systems and devise artificial noses that mimic dogs' sense of smell, so that these potentially life-saving diagnostic tools can become widely available.
"What we do know is that this is real," says Guest. "Anything that can improve diagnosis of cancer is something we ought to know about."
Dogs have been used for centuries as trackers for hunting and, more recently, for ferreting out bombs and bodies. Dogs like Daisy, who went on to become a star performer in Guest's pack of highly trained cancer-detecting canines before her death in 2018, have shared a special bond with their human companions for thousands of years. But their vastly superior olfaction is the result of simple anatomy.
Humans possess about six million olfactory receptors—the antenna-like structures in the cell membranes of our noses that latch on to molecules in the air when we inhale. Dogs, in contrast, have about 300 million of them, and the brain region that analyzes smells is, proportionally, about 40 times larger than ours.
Research indicates that cancerous cells disrupt normal metabolic processes, producing volatile organic compounds (VOCs) that enter the bloodstream and are either exhaled in the breath or excreted in urine. Dogs can identify these VOCs in urine samples at the tiniest concentrations, as low as 0.001 parts per million, and can be trained to identify the specific "odor fingerprint" of different cancers, although teaching them to distinguish these signals from background odors is far more complicated than training them to detect drugs or explosives.
For the past fifteen years, Mershin has been grappling with this complexity in his quest to devise an artificial nose, which he calls the Nano-Nose, first as a military tool to spot land mines and IEDs, and more recently as a cancer detection tool that can be used in doctors' offices. The ultimate goal is an easy-to-use olfaction system, powered by artificial intelligence, that can fit inside a smartphone and replicate dogs' ability to sniff out early signs of prostate cancer, which could eliminate many painful and costly biopsies.
Andreas Mershin works on his artificial nose.
Trained canines have better than 90 percent accuracy in spotting prostate cancer, which is normally difficult to detect. The current diagnostic, the prostate-specific antigen (PSA) test, which measures blood levels of a protein produced by the prostate, has about as much accuracy "as a coin toss," according to the scientist who discovered PSA. Its frequent false positives can lead to unnecessary and horrifically invasive biopsies to retrieve tissue samples.
So far, Mershin's prototype device has the same sensitivity as the dogs—and can detect odors at parts per trillion—but it still can't distinguish that cancer smell in individual human patients the way a dog can. "What we're trying to understand from the dogs is how they look at the data they are collecting so we can copy it," says Mershin. "We still have to make it intelligent enough to know what it is looking at—what we are lacking is artificial dog intelligence."
The intricate parts of the artificial nose are designed to fit inside a smartphone.
At UT El Paso, Wen-Yee Lee and her research team have used the canine olfactory system as a model for a new screening test for prostate cancer, which achieved 92 percent accuracy in tests of urine samples and could eventually be developed into a kit similar to a home pregnancy test. "If dogs can do it, we can do it better," says Lee, whose husband was diagnosed with prostate cancer in 2005.
The UT scientists used samples from about 150 patients and looked at about 9,000 compounds before they were able to zero in on the key VOCs released by prostate cancers—"it was like finding a needle in the haystack," says Lee. A more reliable test that can also distinguish which cancers are more aggressive could help patients decide their best treatment options and avoid invasive procedures that can render them incontinent and impotent.
"This is much more accurate than the PSA—we were able to see a very distinct difference between people with prostate cancer and those without cancer," says Lee, who has been sharing her research with Guest and hopes to have the test on the market within the next few years.
In the meantime, Guest's foundation has drawn the approving attention of royal animal lovers: Camilla, the Duchess of Cornwall, is a patron, which opened up the charitable floodgates and helped legitimize MDD in the scientific community. Even Camilla's mother-in-law, Queen Elizabeth, has had a demonstration of these canny canines' unique abilities.
Claire Guest and two of MDD's medical detection dogs, Jodie and Nimbus, meet Queen Elizabeth.
"She actually held one of my [artificial] noses in her hand and asked really good questions, including things we hadn't thought of, like the range of how far away a dog can pick up the scent or if this can be used to screen for malaria," says Mershin. "I was floored by this curious 93-year-old lady. Half of humanity's deaths are from chronic diseases and what the dogs are showing is a whole new way of understanding holistic diseases of the system."
Autonomous indoor farming gives a boost to crops
The glass-encased cabinet looks like a display meant to hold reasonably priced watches, or drugstore beauty creams shipped from France. But instead of this stagnant merchandise, each of its five shelves is overgrown with leaves — moss-soft pea sprouts, spikes of Lolla rosa lettuces, pale bok choy, dark kale, purple basil or red-veined sorrel or green wisps of dill. The glass structure isn’t a cabinet, but rather a “micro farm.”
The gadget is on display at the Richmond, Virginia, headquarters of Babylon Micro-Farms, a company that aims to make indoor farming in the U.S. more accessible and sustainable. Babylon’s soilless hydroponic growing system, which feeds plants via nutrient-enriched water, allows chefs on cruise ships, in cafeterias and elsewhere to provide home-grown produce to patrons just seconds after it’s harvested. Currently, there are over 200 functioning systems, either sold or leased to customers, and more are on the way.
The chef-farmers choose from among 45 types of herb and leafy-greens seeds, plop them into grow trays, and a few weeks later they pick and serve. While success depends on at least a small amount of care from these humans, the systems are autonomously monitored around the clock from Babylon’s base of operations. And artificial intelligence is helping to run the show.
Imagine consistently perfect greens and tomatoes and strawberries, grown hyper-locally, using less water, without chemicals or environmental contaminants. This is the hefty promise of controlled environment agriculture (CEA) — basically, indoor farms that can be hydroponic, aeroponic (plant roots are suspended and fed through misting), or aquaponic (where fish play a role in fertilizing vegetables). But whether they grow 4,160 leafy-green servings per year, like one Babylon farm, or millions of servings, like some of the large, centralized facilities starting to supply supermarkets across the U.S., they seek to minimize failure as much as possible.
Babylon’s soilless hydroponic growing system
Courtesy Babylon Micro-Farms
Here, AI is starting to play a pivotal role. CEA growers use it to help “make sense of what’s happening” to the plants in their care, says Scott Lowman, vice president of applied research at the Institute for Advanced Learning and Research (IALR) in Virginia, a state that’s investing heavily in CEA companies. And although these companies say they’re not aiming for a future with zero human employees, AI is certainly poised to take a lot of human farming intervention out of the equation — for better and worse.
Most of these companies are compiling their own data sets to identify anything that might block the success of their systems. Babylon had already integrated sensor data into its farms to measure heat and humidity, the nutrient content of water, and the amount of light plants receive. Last year, they got a National Science Foundation grant that allowed them to pilot the use of specialized cameras that take pictures in different spectrums to gather some less-obvious visual data about plants’ wellbeing and alert people if something seems off. “Will this plant be healthy tomorrow? Are there things…that the human eye can't see that the plant starts expressing?” says Amandeep Ratte, the company’s head of data science. “If our system can say, Hey, this plant is unhealthy, we can reach out to [users] preemptively about what they’re doing wrong, or is there a disease at the farm?” Ratte says. The earlier the better, to avoid crop failures.
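Babylon hasn’t published its monitoring code, but the combination Ratte describes (rule-based checks on sensor streams plus a learned health score from imaging) can be sketched in a few lines. Everything below is hypothetical: the thresholds, the field names and the health-score input are invented for illustration.

```python
# Illustrative only: a toy version of the monitoring loop described above.
# Thresholds, field names, and the imaging-model score are invented.
from dataclasses import dataclass

@dataclass
class Reading:
    temp_c: float        # air temperature
    humidity_pct: float  # relative humidity
    ec_ms_cm: float      # electrical conductivity ~ nutrient content of water
    light_ppfd: float    # light level reaching the plants

# Hypothetical acceptable ranges for a leafy-green micro-farm.
RANGES = {
    "temp_c": (18.0, 26.0),
    "humidity_pct": (40.0, 70.0),
    "ec_ms_cm": (1.2, 2.4),
    "light_ppfd": (150.0, 400.0),
}

def sensor_alerts(r: Reading) -> list[str]:
    """Flag any sensor reading outside its configured range."""
    alerts = []
    for field, (lo, hi) in RANGES.items():
        value = getattr(r, field)
        if not lo <= value <= hi:
            alerts.append(f"{field}={value} outside [{lo}, {hi}]")
    return alerts

def check_farm(reading: Reading, leaf_health_score: float) -> list[str]:
    """Combine rule-based sensor checks with a model's health score.

    leaf_health_score stands in for the output of a classifier run on
    multispectral images ("will this plant be healthy tomorrow?").
    """
    alerts = sensor_alerts(reading)
    if leaf_health_score < 0.5:   # hypothetical decision threshold
        alerts.append("imaging model predicts declining plant health")
    return alerts

print(check_farm(Reading(27.5, 55.0, 1.8, 300.0), leaf_health_score=0.42))
```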
Natural light accounts for 70 percent of Greenswell Growers’ energy use on a sunny day.
Courtesy Greenswell Growers
IALR’s Lowman says that other CEA companies are developing their AI systems to account for the different crops they grow — lettuces come in all shapes and sizes, after all, and each has different growing needs than, for example, tomatoes. The ways they run their operations differ, too. Babylon is unusual in its decentralized structure, but even centralized growing systems with one main location have variabilities of their own. AeroFarms, which recently declared bankruptcy but will continue to run its 140,000-square-foot vertical operation in Danville, Virginia, is entirely enclosed and reliant on the intense violet glow of grow lights to produce microgreens.
Different companies have different data needs. What data is essential to AeroFarms isn’t quite the same as for Greenswell Growers, located in Goochland County, Virginia. Greenswell raises four kinds of lettuce in a 77,000-square-foot automated hydroponic greenhouse, where operations are at the mercy of naturally available light, which accounts for 70 percent of the company’s energy use on a sunny day. Their tech needs to account for “outside weather impacts,” says president Carl Gupton. “What adjustments do we have to make inside of the greenhouse to offset what's going on outside environmentally, to give that plant optimal conditions? When it's 85 percent humidity outside, the system needs to do X, Y and Z to get the conditions that we want inside.”
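Gupton’s “X, Y and Z” goes unspecified, and the sketch below makes no attempt to fill it in with Greenswell’s real control logic; it only illustrates the general shape of such weather-compensation rules, with invented thresholds and actuator names.

```python
# A toy version of the kind of compensation Gupton describes: map outdoor
# conditions to indoor actuator settings. The rules and numbers here are
# invented for illustration, not Greenswell's actual control logic.
def greenhouse_adjustments(outside_humidity_pct: float,
                           outside_temp_c: float,
                           target_temp_c: float = 22.0) -> dict:
    """Return hypothetical actuator settings for the current weather."""
    actions = {"vents": "open", "dehumidifier": "off", "shade_screen": "open"}
    if outside_humidity_pct > 80.0:
        # Muggy outside air would push indoor humidity up: seal and dry.
        actions["vents"] = "closed"
        actions["dehumidifier"] = "on"
    if outside_temp_c > target_temp_c + 6.0:
        # On hot, bright days, shade to cut the heat load that comes
        # with relying on natural light.
        actions["shade_screen"] = "partial"
    return actions

print(greenhouse_adjustments(outside_humidity_pct=85.0, outside_temp_c=30.0))
```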
Nevertheless, every CEA system has the same core needs — consistent yields of high-quality crops to keep up year-round supply to customers. Additionally, “Everybody’s got the same set of problems,” Gupton says. Pests may come into a facility with seeds. Pythium, a root-rotting water mold that causes one of the most common diseases in CEA, can damage plant roots. “Then you have root disease pressures that can also come internally — a change in [growing] substrate can change the way the plant performs,” Gupton says.
AI will help identify diseases, as well as when a plant is thirsty or overly hydrated, or when it needs more or less calcium, phosphorus or nitrogen. So, while companies amass their own hyper-specific data sets, Lowman foresees a time within the next decade “when there will be some type of [open-source] database that has the most common types of plant stress identified” that growers will be able to tap into. Such databases will “create a community and move the science forward,” says Lowman.
In fact, IALR is working on assembling images for just such a database now. On so-called “smart tables” inside an Institute lab, a team is growing greens and subjecting them to various stressors, then administering treatments while taking images of every plant every 15 minutes, says Lowman. Some experiments generate 80,000 images; the challenge lies in analyzing and annotating the vast trove of them, marking each one to reflect the outcome—for example, an increase in phosphate delivery and the plant’s response to it. Eventually, the images will be fed into AI systems to help them learn.
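The article doesn’t say how IALR structures these annotations, but a record in such a database would plausibly pair each image with the stressor applied, the treatment given and the observed outcome, so a model can learn which visual changes precede recovery or decline. A hypothetical sketch, with invented field names:

```python
# What one annotated record in such a database might look like: a sketch
# with invented field names and values, not IALR's actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlantImageRecord:
    image_path: str
    plant_id: str
    captured_at: datetime  # images are taken every 15 minutes
    stressor: str          # e.g. "phosphate deficiency"
    treatment: str         # e.g. "increased phosphate delivery"
    outcome: str           # e.g. "recovered", "declined"

record = PlantImageRecord(
    image_path="smart_table_3/plant_0417/2024-05-01T10-15.png",
    plant_id="plant_0417",
    captured_at=datetime(2024, 5, 1, 10, 15),
    stressor="phosphate deficiency",
    treatment="increased phosphate delivery",
    outcome="recovered",
)
print(record)
```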
For all the enthusiasm surrounding this technology, it’s not without downsides. Training just one AI system can emit over 250,000 pounds of carbon dioxide, according to MIT Technology Review. AI could also be used “to enhance environmental benefit for CEA and optimize [its] energy consumption,” says Rozita Dara, a computer science professor at the University of Guelph in Canada, specializing in AI and data governance, “but we first need to collect data to measure [it].”
The chef-farmers can choose from 45 types of herb and leafy-greens seeds.
Courtesy Babylon Micro-Farms
Any system connected to the Internet of Things is also vulnerable to hacking; if CEA grows to the point where “there are many of these similar farms, and you're depending on feeding a population based on those, it would be quite scary,” Dara says. And there are privacy concerns, too, in systems where imaging is happening constantly. It’s partly for this reason, says Babylon’s Ratte, that the company’s in-farm cameras all “face down into the trays, so the only thing [visible] is pictures of plants.”
Tweaks to improve AI for CEA are happening all the time. Greenswell made its first harvest in 2022 and now has a year’s worth of data points it can use to start making more intelligent choices about how to feed, water, and supply light to plants, says Gupton. Ratte says he’s confident Babylon’s system can already “get our customers reliable harvests. But in terms of how far we have to go, it's a different problem,” he says. For example, if AI could detect that a farm is sitting mostly empty—meaning the farm’s user hasn’t planted a new crop of greens—it could alert Babylon to check “what's going on with engagement with this user?” Ratte says. “Do they need more training? Did the main person responsible for the farm quit?”
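How Babylon would implement such a check isn’t described; as a rough illustration, an empty-farm alert could reduce to a per-tray occupancy estimate from a vision model plus a threshold. Everything here, including the stub occupancy model, is invented.

```python
# Illustrative sketch of the engagement check Ratte describes: estimate
# how many trays hold growing plants and flag farms that sit mostly empty.
def occupancy(tray_images: list) -> float:
    """Fraction of trays judged to contain plants.

    A stub: a real system would run each tray image through a vision
    model; here, any non-None entry simply counts as a planted tray.
    """
    return sum(1 for img in tray_images if img is not None) / len(tray_images)

def engagement_alert(tray_images: list, threshold: float = 0.2) -> bool:
    """True if the farm looks abandoned and the customer team should follow up."""
    return occupancy(tray_images) < threshold

# Hypothetical farm with 1 of 10 trays planted -> triggers an outreach alert.
print(engagement_alert([None] * 9 + ["tray_photo.png"]))
```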
Lowman says more automation is coming, offering greater ability for systems to identify problems and mitigate them on the spot. “We still have to develop datasets that are specific, so you can have a very clear control plan, [because] artificial intelligence is only as smart as what we tell it, and in plant science, there's so much variation,” he says. He believes AI’s next level will be “looking at those first early days of plant growth: when the seed germinates, how fast it germinates, what it looks like when it germinates.” Imaging all that and pairing it with AI “can be a really powerful tool, for sure.”
Scientists make progress with growing organs for transplants
Story by Big Think
For over a century, scientists have dreamed of growing human organs sans humans. This technology could put an end to the scarcity of organs for transplants. But that’s just the tip of the iceberg. The capability to grow fully functional organs would revolutionize research. For example, scientists could observe mysterious biological processes, such as how human cells and organs develop a disease and respond (or fail to respond) to medication without involving human subjects.
Recently, a team of researchers from the University of Cambridge laid the foundations not just for growing functional organs but for creating functional synthetic embryos capable of developing a beating heart, gut, and brain. Their report was published in Nature.
The organoid revolution
In 1981, scientists discovered how to keep stem cells alive. This was a significant breakthrough, as stem cells have notoriously rigorous demands. Nevertheless, stem cells remained a relatively niche research area, mainly because scientists didn’t know how to convince the cells to turn into other cells.
Then, in 1987, scientists embedded isolated stem cells in a gelatinous protein mixture called Matrigel, which simulated the three-dimensional environment of animal tissue. The cells thrived, but they also did something remarkable: they created breast tissue capable of producing milk proteins. This was the first organoid — a clump of cells that behave and function like a real organ. The organoid revolution had begun, and it all started with a boob in Jello.
For the next 20 years, it was rare to find a scientist who identified as an “organoid researcher,” but there were many “stem cell researchers” who wanted to figure out how to turn stem cells into other cells. Eventually, they discovered the signals (called growth factors) that stem cells require to differentiate into other types of cells.
By the end of the 2000s, researchers began combining stem cells, Matrigel, and the newly characterized growth factors to create dozens of organoids, from liver organoids capable of producing the bile salts necessary for digesting fat to brain organoids with components that resemble eyes, the spinal cord, and arguably, the beginnings of sentience.
Synthetic embryos
Organoids possess an intrinsic flaw: they are merely organ-like. They share some characteristics with real organs, which makes them powerful tools for research, but no one has found a way to create an organoid with all the characteristics and functions of a real organ. Magdalena Żernicka-Goetz, a developmental biologist, may have laid the foundation for that discovery.
Żernicka-Goetz hypothesized that organoids fail to develop into fully functional organs because organs develop as a collective. Organoid research often uses embryonic stem cells, which are the cells from which the developing organism is created. However, there are two other types of stem cells in an early embryo: stem cells that become the placenta and those that become the yolk sac (where the embryo grows and gets its nutrients in early development). For a human embryo (and its organs) to develop successfully, there needs to be a “dialogue” between these three types of stem cells. In other words, Żernicka-Goetz suspected the best way to grow a functional organoid was to produce a synthetic embryoid.
As described in the aforementioned Nature paper, Żernicka-Goetz and her team mimicked the embryonic environment by mixing these three types of stem cells from mice. Amazingly, the stem cells self-organized into structures and progressed through the successive developmental stages until they had beating hearts and the foundations of the brain.
“Our mouse embryo model not only develops a brain, but also a beating heart [and] all the components that go on to make up the body,” said Żernicka-Goetz. “It’s just unbelievable that we’ve got this far. This has been the dream of our community for years and major focus of our work for a decade and finally we’ve done it.”
If the methods developed by Żernicka-Goetz’s team are successful with human stem cells, scientists someday could use them to guide the development of synthetic organs for patients awaiting transplants. It also opens the door to studying how embryos develop during pregnancy.