Tech and the science of dogs’ olfactory receptors combat avalanche threats
Two-and-a-half-year-old Huckleberry, a blue merle Australian shepherd, pulls hard at her leash; her yelps can be heard by skiers and boarders high above on the chairlift that carries them over the ski patrol hut to the top of the mountain. Huckleberry is an avalanche rescue dog — or avy dog, for short. She lives and works with her owner and handler, a ski patroller at Breckenridge Ski Resort in Colorado. As she watches the trainer play a game of hide-and-seek with six-month-old Lume, a golden retriever and avy dog-in-training, Huckleberry continues to strain on her leash; she loves the game. Hide-and-seek is one of the key training methods for teaching avy dogs the rescue skills they need to find someone caught in an avalanche — skier, snowmobiler, hiker, climber.
Lume’s owner waves a T-shirt in front of the puppy. While another patroller holds him back, Lume’s owner runs away and hides. About a minute later — after a lot of barking — Lume is released and commanded to “search.” He springs free, running around the hut to find his owner, who reacts with great excitement and fanfare. Lume’s scent training will continue for the rest of the ski season (Breckenridge plans to operate through May, or as long as weather permits) and through the off-season. “We make this game progressively harder by not allowing the dog to watch the victim run away,” explains Dave Leffler, a Breckenridge ski patroller and head of the avy dog program, who has owned, trained and raised many avy dogs. Eventually, the trainers “dig an open hole in the snow to duck out of sight and gradually turn the hole into a cave where the dog has to dig to get the victim,” Leffler says.
By the time he is three, Lume, like Huckleberry, will be a fully trained avy dog and will join seven others on the Breckenridge ski patrol team. Some of the team members, both human and canine, are also certified to work with Colorado Rapid Avalanche Deployment, a coordinated response team that works with the Summit County Sheriff’s office for avalanche emergencies outside of the ski slopes’ boundaries.
There have been 19 avalanche deaths in the U.S. this season, eight of them in Colorado, according to avalanche.org, which tracks slides. During all of last season there were 17. Avalanche season runs from November through June, but avalanches can occur year-round.
High tech and high stakes
Complementing avy dogs’ ability to smell people buried in a slide, avalanche detection, rescue and recovery is becoming increasingly high tech. There are transceivers, signal locators, ground scanners and drones, which are considered “game changers” by many in avalanche rescue and recovery.
A drone can provide thermal imaging of objects caught in a slide; what looks like a rock from far away might be a human with a heat signature. Transceivers, also known as beacons, send a signal from an avalanche victim to a companion. Signal locators, like RECCO reflectors which are often sewn directly into gear, can echo back a radar signal sent by a detector; most ski resorts have RECCO detector units.
Research suggests that Ground Penetrating Radar (GPR), an electromagnetic tool used by geophysicists to pull images from inside the ground, could be used to locate an avalanche victim. A new study from the Department of Energy’s Sandia National Laboratories suggests that a computer program developed to pinpoint the source of a chemical or biological terrorist attack could also be used to find someone submerged in an avalanche. The search algorithm allows for small robots (described as cockroach-sized) to “swarm” a search area. Researchers say that this distributed optimization algorithm can help find avalanche victims four times faster than current search mechanisms. For a person buried in an avalanche, the chance of survival plummets after 20 minutes, so every moment counts.
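The Sandia paper's actual algorithm isn't reproduced here, but the core idea of a distributed search, in which many small robots pool their signal readings and converge on the strongest source, can be sketched. Everything below (the signal model, the search area, the function and parameter names) is invented for illustration:

```python
import random

def signal_strength(pos, source):
    """Simulated beacon reading: stronger (less negative) closer to the source."""
    return -((pos[0] - source[0]) ** 2 + (pos[1] - source[1]) ** 2)

def swarm_search(source, n_agents=20, steps=100, step_size=3.0, seed=0):
    rng = random.Random(seed)
    # Robots start scattered across a 100 m x 100 m debris field.
    agents = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n_agents)]
    # The swarm's shared best-known reading starts at the luckiest robot.
    best = max(agents, key=lambda p: signal_strength(p, source))
    for _ in range(steps):
        for _ in range(n_agents):
            # Each robot probes near the shared best reading and broadcasts
            # any improvement; pooling measurements this way homes in on the
            # source far faster than a single searcher sweeping the field.
            cand = (best[0] + rng.gauss(0, step_size),
                    best[1] + rng.gauss(0, step_size))
            if signal_strength(cand, source) > signal_strength(best, source):
                best = cand
    return best
```

The real system would read an actual transceiver or chemical signal rather than a simulated one, but the distributed-optimization pattern (probe locally, share the best result globally) is the same.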
An avy dog in training is picking up scent
Sarah McLear
While rescue gear has been evolving, predicting when a slab will fall remains an emerging science, roughly where weather forecasting was in the 1980s. “Avalanche forecasting still relies on documenting avalanches by going out and looking,” says Ethan Greene, director of the Colorado Avalanche Information Center (CAIC). “So if there's a big snowstorm, and as you might remember, most avalanches happen during snowstorms, we could have 10,000 avalanches that release and we document 50,” says Greene. “Avalanche forecasting is essentially pattern recognition,” he adds, along with understanding the layering structure of snow.
However, determining where the hazards lie can be tricky. While a dense layer of snow over a softer, weaker layer may be a recipe for an avalanche, there’s so much variability in snowpack that no one formula can predict the trigger. Further, observing and measuring snow at a single point may not be representative of all nearby slopes. Finally, there’s not enough historical data to help avalanche scientists create better prediction models.
That, however, may be changing.
Last year, an international group of researchers created computer simulations of snow cover using 16 years of meteorological data to forecast avalanche hazards, publishing their research in Cold Regions Science and Technology. They believe their models, which categorize different kinds of avalanches, can support forecasting and determine whether the avalanche is natural (caused by temperature changes, wind, additional snowfall) or artificial (triggered by a human or animal).
With data from two sites in British Columbia and one in Switzerland, researchers built computer simulations of five different avalanche types. “In terms of real time avalanche forecasting, this has potential to fill in a lot of data gaps, where we don't have field observations of what the snow looks like,” says Simon Horton, a postdoctoral fellow with the Simon Fraser University Centre for Natural Hazards Research and a forecaster with Avalanche Canada, who participated in the study. While complex models that simulate snowpack layers have been around for a few decades, they weren’t easy to apply until recently. “It's been difficult to find out how to apply that to actual decision-making and improving safety,” says Horton. If you can derive avalanche problem types from simulated snowpack properties, he says, you’ll learn “a lot about how you want to manage that risk.”
The five categories include “new snow,” which is unstable and slides down the slope; “wet snow,” when rain or heat makes the snow slushy; as well as “wind-drifted snow,” “persistent weak layers” and “old snow.” “That's when there's some type of deeply buried weak layer in the snow that releases without any real change in the weather,” Horton explains. “These ones tend to cause the most accidents.” One step by a person on that structurally weak layer of snow can trigger a slide. Horton is hopeful that computer simulations of avalanche types can be used by scientists in different snow climates to help predict hazard levels.
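To make concrete what deriving a problem type from simulated snowpack properties might look like, here is a toy rule-based sketch. The feature names and thresholds are invented for illustration; they are not the study's actual decision rules:

```python
def classify_avalanche_problem(p):
    """Assign one of the five avalanche problem types from (hypothetical)
    simulated snowpack properties. Thresholds are illustrative only."""
    if p["weak_layer_depth_cm"] > 100:
        # Deeply buried weak layer that can release without a weather change.
        return "old snow"
    if p["persistent_grain_weak_layer"]:
        return "persistent weak layer"
    if p["liquid_water_fraction"] > 0.03:
        # Rain or warming has saturated the snowpack.
        return "wet snow"
    if p["wind_transport_index"] > 0.5:
        return "wind-drifted snow"
    if p["new_snow_cm_24h"] > 20:
        return "new snow"
    return "no distinct problem"
```

In the published work, inputs like these come from a physical snowpack model driven by years of meteorological data, not from hand-set thresholds; the sketch only shows the mapping step from simulated layers to problem types.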
Greene is doubtful. “If you have six slopes that are lined up next to each other, and you're going to try to predict which one avalanches and the exact dimensions and what time, that's going to be really hard to do. And I think it's going to be a long time before we're able to do that,” says Greene.
What both researchers do agree on, though, is that avalanche prediction really needs better imagery through satellite detection. “Just being able to count the number of avalanches that are out there will have a huge impact on what we do,” Greene says. “[Satellites] will change what we do, dramatically.” In a 2022 paper, scientists at the University of Aberdeen in Scotland used satellites to study two deadly Himalayan avalanches. The imaging helped them determine that sediment from a 2016 ice avalanche, plus subsequent snow avalanches, contributed to the 2021 avalanche that caused a flash flood, killing over 200 people. The researchers say that understanding the avalanches’ characteristics through satellite imagery can show how one such event increases the magnitude of another in the same area.
Avy dog trainers hide in dug-out holes in the snow, teaching the dogs to find buried victims
Sarah McLear
Lifesaving combo: human tech and Mother Nature’s gear
Even as avalanche forecasting evolves, dogs with their built-in rescue mechanisms will remain invaluable. With smell receptors ranging from 800 million for an average dog to 4 billion for scent hounds, canines remain key to finding people caught in slides. (Humans, in comparison, have a meager 12 million.) A new study published in the Journal of Neuroscience revealed that in dogs smell and vision are connected in the brain, something that has not been found in other animals. “They can detect the smell of their owner's fingerprints on a glass slide six weeks after they touched it,” says Nicholas Dodman, professor emeritus at Cummings School of Veterinary Medicine at Tufts University. “And they can track from a boat where a box filled with meat was buried in the water, 100 feet below,” says Dodman, who is also co-founder and president of the Center for Canine Behavior Studies.
Another recent study, from Queen’s University Belfast in Northern Ireland, further confirms that dogs can smell when humans are stressed. They can also detect scents in a person’s breath and in the skin cells of a deceased person.
The emerging human-made avalanche-prediction tech and the incredible nature-made tech of dogs’ olfactory talents are the lifesaving “equipment” Leffler believes in. Even as the technology develops further, he says, it will be most effective when paired with a dog’s millions of smell receptors. “It is a combination of technology and the avalanche dog that will always be effective in finding an avalanche victim.”
A new type of cancer therapy is shrinking deadly brain tumors with just one treatment
Few cancers are deadlier than glioblastomas — aggressive and lethal tumors that originate in the brain or spinal cord. Five years after diagnosis, fewer than five percent of glioblastoma patients are still alive; on average, patients survive just 14 months after receiving a diagnosis.
But an ongoing clinical trial at Mass General Cancer Center is giving new hope to glioblastoma patients and their families. The trial, called INCIPIENT, is meant to evaluate the effects of a special type of immune cell, called CAR-T cells, on patients with recurrent glioblastoma.
How CAR-T cell therapy works
CAR-T cell therapy is a type of cancer treatment called immunotherapy, where doctors modify a patient’s own immune system specifically to find and destroy cancer cells. In CAR-T cell therapy, doctors extract the patient’s T-cells, which are immune system cells that help fight off disease—particularly cancer. These T-cells are harvested from the patient and then genetically modified in a lab to produce proteins on their surface called chimeric antigen receptors (thus becoming CAR-T cells), which makes them able to bind to a specific protein on the patient’s cancer cells. Once modified, these CAR-T cells are grown in the lab for several weeks so that they can multiply into an army of millions. When enough cells have been grown, these super-charged T-cells are infused back into the patient where they can then seek out cancer cells, bind to them, and destroy them. CAR-T cell therapies have been approved by the US Food and Drug Administration (FDA) to treat certain types of lymphomas and leukemias, as well as multiple myeloma, but haven’t been approved to treat glioblastomas—yet.
CAR-T cell therapies don’t always work against solid tumors, such as glioblastomas. Because solid tumors contain different kinds of cancer cells, some cells can evade the immune system’s detection even after CAR-T cell therapy, according to a press release from Massachusetts General Hospital. For the INCIPIENT trial, researchers modified the CAR-T cells even further in hopes of making them more effective against solid tumors. These second-generation CAR-T cells (called CARv3-TEAM-E T cells) contain special antibodies that attack EGFR (epidermal growth factor receptor), a protein expressed in the majority of glioblastoma tumors. Unlike other CAR-T cell therapies, these particular CAR-T cells were designed to be directly injected into the patient’s brain.
The INCIPIENT trial results
The INCIPIENT trial involved three patients who were enrolled in the study between March and July 2023. All three patients — a 72-year-old man, a 74-year-old man, and a 57-year-old woman — were treated with chemotherapy and radiation and enrolled in the trial with CAR-T cells after their glioblastoma tumors came back.
The results, which were published earlier this year in the New England Journal of Medicine (NEJM), were called “rapid” and “dramatic” by doctors involved in the trial. After just a single infusion of the CAR-T cells, each patient experienced a significant reduction in tumor size. Two days after receiving the infusion, the glioblastoma tumor of the 72-year-old man had decreased by nearly 20 percent; two months later the tumor had shrunk by an astonishing 60 percent, and the change was maintained for more than six months. The most dramatic result was in the 57-year-old female patient, whose tumor shrank nearly completely after just one infusion of the CAR-T cells.
The results of the INCIPIENT trial were unexpected and astonishing — but unfortunately, they were also temporary. For all three patients, the tumors eventually began to grow back despite the CAR-T cell infusions. According to the press release from MGH, the medical team is now considering treating each patient with multiple infusions or prefacing each treatment with chemotherapy to prolong the response.
While there is still “more to do,” says co-author of the study neuro-oncologist Dr. Elizabeth Gerstner, the results are still promising. If nothing else, these second-generation CAR-T cell infusions may someday be able to give patients more time than traditional treatments would allow.
“These results are exciting but they are also just the beginning,” says Dr. Marcela Maus, a doctor and professor of medicine at Mass General who was involved in the clinical trial. “They tell us that we are on the right track in pursuing a therapy that has the potential to change the outlook for this intractable disease.”
Since the early 2000s, AI systems have eliminated more than 1.7 million jobs, and that number will only increase as AI improves. Some research estimates that by 2025, AI will eliminate more than 85 million jobs.
But for all the talk about job security, AI is also proving to be a powerful tool in healthcare—specifically, cancer detection. One recently published study has shown that, remarkably, artificial intelligence was able to detect 20 percent more cancers in imaging scans than radiologists alone.
Published in The Lancet Oncology, the study analyzed the scans of 80,000 Swedish women with a moderate hereditary risk of breast cancer who had undergone a mammogram between April 2021 and July 2022. Half of these scans were read by AI and then by a radiologist to double-check the findings. The second group of scans was read by two radiologists without the help of AI. (Currently, the standard of care across Europe is to have two radiologists analyze a scan before diagnosing a patient with breast cancer.)
The study showed that the AI group detected cancer in 6 out of every 1,000 scans, while the radiologists detected cancer in 5 per 1,000 scans. In other words, AI found 20 percent more cancers than the highly-trained radiologists.
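The headline figure is simple relative-rate arithmetic, sketched here with the per-1,000 rates from the study:

```python
# Detection rates from the study, expressed per 1,000 scans.
ai_supported = 6 / 1000   # AI plus one radiologist
double_read = 5 / 1000    # two radiologists, no AI

# Relative improvement of the AI-supported group over standard double reading.
relative_gain = (ai_supported - double_read) / double_read
print(f"AI-supported reading found {relative_gain:.0%} more cancers")
```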
Scientists have been using MRI images (like the ones pictured here) to train artificial intelligence to detect cancers earlier and with more accuracy. Here, MIT's AI system, MIRAI, looks for patterns in a patient's mammograms to detect breast cancer earlier than ever before. news.mit.edu
But even though the AI was better able to pinpoint cancer on an image, it doesn’t mean radiologists will soon be out of a job. Dr. Laura Heacock, a breast radiologist at NYU, said in an interview with CNN that radiologists do much more than simply screening mammograms, and that even well-trained technology can make errors. “These tools work best when paired with highly-trained radiologists who make the final call on your mammogram. Think of it as a tool like a stethoscope for a cardiologist.”
AI is still an emerging technology, but more and more doctors are using such tools to detect different cancers. For example, researchers at MIT have developed a program called MIRAI, which looks at patterns in patient mammograms across a series of scans and uses an algorithm to model a patient's risk of developing breast cancer over time. The program was "trained" with more than 200,000 breast imaging scans from Massachusetts General Hospital and has been tested on over 100,000 women in different hospitals across the world. According to MIT, MIRAI "has been shown to be more accurate in predicting the risk for developing breast cancer in the short term (over a 3-year period) compared to traditional tools." It has also been able to detect breast cancer up to five years before a patient receives a diagnosis.
The challenge for cancer-detecting AI tools now is not just accuracy. AI tools are also being challenged to perform consistently well across different ages, races, and breast density profiles, particularly given the increased risks that different women face. For example, Black women are 42 percent more likely than white women to die from breast cancer, despite having nearly the same rates of breast cancer. Recently, an FDA-approved AI device for screening breast cancer has come under fire for wrongly detecting cancer in Black patients significantly more often than in white patients.
As AI technology improves, radiologists will be able to accurately scan a more diverse set of patients at a larger volume than ever before, potentially saving more lives than ever.