New tools could catch disease outbreaks earlier—or predict them
Every year, the villages that lie in the so-called ‘Nipah belt’—a strip of territory stretching along the western border between Bangladesh and India—brace themselves for the latest outbreak. Since 1998, when Nipah virus—a deadly zoonotic disease whose outbreaks are now most common in Bangladesh—first spilled over into humans, it has been a grim annual visitor to the people of this region.
With a fatality rate of around 70 percent, no vaccine, and no known treatments, Nipah virus has been dubbed in the West ‘the worst disease no one has ever heard of.’ Currently, outbreaks tend to be relatively contained because the virus is not very transmissible. It circulates throughout Asia in fruit-eating bats, and tends to be passed only to people who consume contaminated date palm sap, a sweet drink harvested across Bangladesh.
But as SARS-CoV-2 has shown the world, this can quickly change.
“Nipah virus is among what virologists call ‘the Big 10,’ along with things like Lassa fever and Crimean Congo hemorrhagic fever,” says Noam Ross, a disease ecologist at New York-based non-profit EcoHealth Alliance. “These are pretty dangerous viruses from a lethality perspective, which don’t currently have the capacity to spread into broader human populations. But that can evolve, and you could very well see a variant emerge that has human-human transmission capability.”
That’s not an overstatement. Surveys suggest that mammals harbor about 40,000 viruses, with roughly a quarter capable of infecting humans. The vast majority never get the chance to do so because we don’t encounter them, but climate change can alter that. Recent studies project that as animals relocate to new habitats under shifting environmental conditions, the coming decades will bring around 300,000 first encounters between species that don’t normally interact, especially in tropical Africa and Southeast Asia. All these interactions will make it far more likely for hitherto unknown viruses to cross paths with humans.
That’s why, for the last 16 years, EcoHealth Alliance has been running viral surveillance projects across Bangladesh. The goal is to understand why Nipah is so much more prevalent in the western part of the country than in the east, and to keep a watchful eye out for new Nipah strains as well as other dangerous pathogens like Ebola.
"There are a lot of different infectious agents that are sensitive to climate change that don't have these sorts of software tools being developed for them," says Cat Lippi, medical geography researcher at the University of Florida.
Until very recently this kind of work has been hampered by the limitations of viral surveillance technology. The PREDICT project, a $200 million initiative funded by the United States Agency for International Development, which conducted surveillance across the Amazon Basin, Congo Basin and extensive parts of South and Southeast Asia, relied upon so-called nucleic acid assays which enabled scientists to search for the genetic material of viruses in animal samples.
However, the project came under criticism for being highly inefficient. “That approach requires a big sampling effort, because of the rarity of individual infections,” says Ross. “Any particular animal may be infected for a couple of weeks, maybe once or twice in its lifetime. So if you sample thousands and thousands of animals, you'll eventually get one that has an Ebola virus infection right now.”
Ross explains that there is now far more interest in serological sampling—the scientific term for the process of drawing blood for antibody testing. By searching for the presence of antibodies in the blood of humans and animals, scientists have a greater chance of detecting viruses which started circulating recently.
Despite the controversy surrounding EcoHealth Alliance’s involvement in so-called gain of function research—experiments that study whether viruses might mutate into deadlier strains—the organization’s separate efforts to stay one step ahead of pathogen evolution are key to stopping the next pandemic.
“Having really cheap and fast surveillance is really important,” says Ross. “Particularly in a place where there's persistent, low level, moderate infections that potentially have the ability to develop into more epidemic or pandemic situations. It means there’s a pathway that something more dangerous can come through."
Scientists are searching for antibodies in the blood of humans and animals, in hopes of detecting viruses that have recently started circulating. (Image credit: EcoHealth Alliance)
In Bangladesh, EcoHealth Alliance is attempting to do this using a newer serological technology known as a multiplex Luminex assay, which screens a single blood sample for antibodies against a panel of many different known viruses. It collects what Ross describes as a ‘footprint of information,’ which lets scientists tell whether a sample shows evidence of a known pathogen or of something entirely different that needs to be investigated further.
By using this technology to sample human and animal populations across the country, they hope to learn whether any novel Nipah variants, or related strains from the same viral family, are circulating, as well as members of other deadly viral families, such as the one that includes Ebola.
This is just one of several novel tools being used for viral discovery in surveillance projects around the globe. Multiple research groups are taking PREDICT’s approach of looking for novel viruses in animals in various hotspots. They collect environmental DNA—mucus, feces or shed skin left behind in soil, sediment or water—which can then be genetically sequenced.
Five years ago, this would have been painstaking work that required bringing collected samples back to the lab. Today, thanks to the vast amounts of money spent on new technologies during COVID-19, researchers have portable sequencing tools they can take out into the field.
Christopher Jerde, a researcher at the UC Santa Barbara Marine Science Institute, points to the Oxford Nanopore MinION sequencer as one example. “I tried one of the early versions of it four years ago, and it was miserable,” he says. “But they’ve really improved, and what we’re going to be able to do in the next five to ten years will be amazing. Instead of having to carefully transport samples back to the lab, we're going to have cigar box-shaped sequencers that we take into the field, plug into a laptop, and do the whole sequencing of an organism.”
In the past, viral surveillance has had to be very targeted and focused on known families of viruses, potentially missing new, previously unknown zoonotic pathogens. Jerde says that the rise of portable sequencers will lead to what he describes as “true surveillance.”
“Before, this was just too complex,” he says. “It had to be very focused, for example, looking for SARS-type viruses. Now we’re able to say, ‘Tell us all the viruses that are here.’ And this will give us true surveillance – we’ll be able to see the diversity of all the pathogens which are in these spots and have an understanding of which ones are coming into the population and causing damage.”
But being able to discover more viruses also comes with certain challenges. Some scientists fear that the speed of viral discovery will soon outpace the human capacity to analyze them all and assess the threat that they pose to us.
“I think we're already there,” says Jason Ladner, assistant professor at Northern Arizona University’s Pathogen and Microbiome Institute. “If you look at all the papers on the expanding RNA virus sphere, there are all of these deposited partial or complete viral sequences in groups that we just don't know anything really about yet.” Bats, for example, carry a myriad of viruses, whose ability to infect human cells we understand very poorly.
Cultivating these viruses under laboratory conditions and testing them on organoids—miniature, simplified versions of organs created from stem cells—can help with these assessments, but it is slow, painstaking work. One hope is that in the future, machine learning could help automate this process. The new SpillOver Viral Risk Ranking platform aims to assess the risk level of a given virus based on 31 different metrics, while other computer models have tried to do the same based on the similarity of a virus’s genomic sequence to known zoonotic threats.
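For illustration only, the sketch below shows the general shape of such a scoring approach: each candidate virus is described by a handful of risk metrics, each metric carries a weight, and the weighted sum becomes a comparable risk score. The metric names, weights, and example viruses here are invented for the example and are not taken from the SpillOver platform itself.

```python
from dataclasses import dataclass

# Hypothetical risk metrics, loosely inspired by the kinds of factors a
# risk-ranking platform might weigh; names and weights are illustrative only.
METRIC_WEIGHTS = {
    "known_human_infection": 3.0,    # has the virus ever infected a person?
    "host_breadth": 2.0,             # how many host species carry it?
    "human_contact_frequency": 2.5,  # how often do people meet those hosts?
    "genomic_similarity": 1.5,       # similarity to known zoonotic viruses (0-1)
}

@dataclass
class VirusProfile:
    name: str
    metrics: dict  # metric name -> value scaled to the 0-1 range

def risk_score(virus: VirusProfile) -> float:
    """Weighted sum of normalized risk metrics, yielding a comparable score."""
    return sum(
        METRIC_WEIGHTS[m] * virus.metrics.get(m, 0.0)
        for m in METRIC_WEIGHTS
    )

if __name__ == "__main__":
    candidates = [
        VirusProfile("virus A", {"known_human_infection": 1.0,
                                 "host_breadth": 0.4,
                                 "human_contact_frequency": 0.7,
                                 "genomic_similarity": 0.8}),
        VirusProfile("virus B", {"known_human_infection": 0.0,
                                 "host_breadth": 0.9,
                                 "human_contact_frequency": 0.2,
                                 "genomic_similarity": 0.3}),
    ]
    # Rank the hypothetical viruses from highest to lowest estimated risk.
    for v in sorted(candidates, key=risk_score, reverse=True):
        print(f"{v.name}: {risk_score(v):.2f}")
```

Real ranking systems weigh far more factors and calibrate them against expert judgment, which is exactly where the criticism that follows comes in.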
However, Ladner says that these types of comparisons are still overly simplistic. For one thing, scientists are still only aware of a few hundred zoonotic viruses, which is a very limited data sample for accurately assessing a novel pathogen. Instead, he says that there is a need for virologists to develop models which can determine viral compatibility with human cells, based on genomic data.
“One thing which is really useful, but can be challenging to do, is understand the cell surface receptors that a given virus might use,” he says. “Understanding whether a virus is likely to be able to use proteins on the surface of human cells to gain entry can be very informative.”
As the Earth’s climate heats up, scientists also need to better model so-called vector-borne diseases such as dengue, Zika, chikungunya and yellow fever. Transmitted by Aedes mosquitoes, which thrive in warm, humid climates, these diseases currently disproportionately affect people in low-income nations. But predictions suggest that as the planet warms and the insects find new homes, an estimated one billion people who don’t currently encounter them could be threatened by their bites by 2080. “When it comes to mosquito-borne diseases we have to worry about shifts in suitable habitat,” says Cat Lippi, a medical geography researcher at the University of Florida. “As climate patterns change on these big scales, we expect to see shifts in where people will be at risk for contracting these diseases.”
Public health practitioners and government decision-makers need tools to make climate-informed decisions about the evolving threat of different infectious diseases. Some projects are already underway. An ongoing collaboration between the Catalan Institution for Research and Advanced Studies and researchers in Brazil and Peru is utilizing drones and weather stations to collect data on how mosquitoes change their breeding patterns in response to climate shifts. This information will then be fed into computer algorithms to predict the impact of mosquito-borne illnesses on different regions.
The team at the Catalan Institution for Research and Advanced Studies is using drones and weather stations to collect data on how mosquito breeding patterns change with the climate. (Image credit: Gabriel Carrasco)
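As a rough illustration of what "feeding field data into an algorithm" can look like, the minimal sketch below fits a simple statistical model to invented weekly observations (temperature, rainfall, and drone-counted larval sites) and uses it to flag weeks of elevated dengue risk. The variables, numbers, and threshold are fabricated for the example and are not drawn from the Catalan-Brazilian-Peruvian project, which builds far richer models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy weekly observations: [mean temperature (°C), rainfall (mm), larval sites per km²].
# Values are fabricated; a real project would use weather-station and drone-survey data.
X = np.array([
    [24.0,  10.0,  3.0],
    [27.5,  60.0, 12.0],
    [29.0, 120.0, 25.0],
    [22.0,   5.0,  1.0],
    [30.5,  90.0, 30.0],
    [26.0,  40.0,  8.0],
])
# 1 = week in which local dengue cases exceeded an outbreak threshold, 0 = otherwise.
y = np.array([0, 0, 1, 0, 1, 0])

# Fit a simple classifier linking environmental conditions to outbreak weeks.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Forecast risk for two hypothetical upcoming weeks.
upcoming = np.array([
    [28.0, 100.0, 20.0],
    [23.0,  15.0,  2.0],
])
for features, p in zip(upcoming, model.predict_proba(upcoming)[:, 1]):
    print(f"temp={features[0]}°C rain={features[1]}mm sites={features[2]} -> outbreak risk {p:.0%}")
```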
Lippi says that similar models are urgently needed to predict how changing climate patterns will affect respiratory, foodborne, waterborne and soilborne illnesses. The UK-based Wellcome Trust has allocated significant funding to such projects, which should allow scientists to monitor the impact of climate on a much broader range of infections. “There are a lot of different infectious agents that are sensitive to climate change that don't have these sorts of software tools being developed for them,” she says.
COVID-19’s havoc boosted funding for infectious disease research, but as its threats begin to fade from policymakers’ focus, the money may dry up. Meanwhile, scientists warn that another major infectious disease outbreak is inevitable, potentially within the next decade, so combing the planet for pathogens is vital. “Surveillance is ultimately a really boring thing that a lot of people don't want to put money into, until we have a wide scale pandemic,” Jerde says, but that vigilance is key to thwarting the next deadly horror. “It takes a lot of patience and perseverance to keep looking.”
This article originally appeared in One Health/One Planet, a single-issue magazine that explores how climate change and other environmental shifts are increasing vulnerabilities to infectious diseases by land and by sea. The magazine probes how scientists are making progress with leaders in other fields toward solutions that embrace diverse perspectives and the interconnectedness of all lifeforms and the planet.
The Death Predictor: A Helpful New Tool or an Ethical Morass?
Whenever Eric Karl Oermann has to tell a patient about a terrible prognosis, their first question is always: "How long do I have?" Oermann would like to offer a precise answer, to provide some certainty and help guide treatment. But although he's one of the country's foremost experts in medical artificial intelligence, Oermann is still dependent on computer algorithms that are often wrong.
Artificial intelligence—these days most often in the form of deep learning with neural networks—has radically transformed language and image processing. It's allowed computers to play chess better than the world's grandmasters and outwit the best Jeopardy players. But it still can't precisely tell a doctor how long a patient has left – or how to help that person live longer.
Someday, researchers predict, computers will be able to watch a video of a patient to determine their health status. Doctors will no longer have to spend hours inputting data into medical records. And computers will do a better job than specialists at identifying tiny tumors, impending crises, and, yes, figuring out how long the patient has to live. Oermann, a neurosurgeon at Mount Sinai, says all that technology will allow doctors to spend more time doing what they do best: talking with their patients. "I want to see more deep learning and computers in a clinical setting," he says, "so there can be more human interaction." But those days are still at least three to five years off, Oermann and other researchers say.
Doctors are notoriously terrible at guessing how long their patients will live, says Nigam Shah, an associate professor at Stanford University and assistant director of the school's Center for Biomedical Informatics Research. Doctors don't want to believe that their patient – whom they've come to like – will die. "Doctors overestimate survival many-fold," Shah says. "How do you go into work, in say, oncology, and not be delusionally optimistic? You have to be."
But patients near the end of life will get better treatment – and even live longer – if they are overseen by hospice or palliative care, research shows. So, instead of relying on human bias to select those whose lives are nearing their end, Shah and his colleagues showed that they could use a deep learning algorithm based on medical records to flag incoming patients with a life expectancy of three months to a year. They use that data to indicate who might need palliative care. Then, the palliative care team can reach out to treating physicians proactively, instead of relying on their referrals or taking the time to read extensive medical charts.
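In workflow terms, the flagging step described here is straightforward once a trained model exists: score each newly admitted patient, and queue anyone whose predicted probability of dying within three to twelve months crosses a chosen threshold for proactive palliative-care outreach. The sketch below is a minimal illustration of that workflow only; the feature names, the stand-in scoring function, and the threshold are invented placeholders, not details of Shah's actual deep learning system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Patient:
    record_id: str
    features: dict  # EHR-derived features, e.g. age, diagnoses, recent admissions

def flag_for_palliative_review(
    patients: list[Patient],
    predict_mortality_risk: Callable[[dict], float],
    threshold: float = 0.6,
) -> list[tuple[str, float]]:
    """Return (record_id, risk) for patients whose predicted probability of
    dying within 3-12 months exceeds the threshold, highest risk first."""
    flagged = [
        (p.record_id, risk)
        for p in patients
        if (risk := predict_mortality_risk(p.features)) >= threshold
    ]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # Stand-in for a trained model: a crude hand-written score, for illustration only.
    def toy_model(features: dict) -> float:
        score = 0.008 * features["age"] + 0.08 * features["admissions_last_year"]
        return min(score, 1.0)

    cohort = [
        Patient("A-001", {"age": 84, "admissions_last_year": 3}),
        Patient("A-002", {"age": 47, "admissions_last_year": 0}),
    ]
    # The palliative care team would receive this list and reach out proactively.
    for record_id, risk in flag_for_palliative_review(cohort, toy_model):
        print(f"Refer {record_id} for palliative-care review (risk {risk:.2f})")
```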
But, although the system works well, Shah isn't yet sure if such indicators actually get the appropriate patients into palliative care. He's recently partnered with a palliative care doctor to run a gold-standard clinical trial to test whether patients who are flagged by this algorithm are indeed a better match for palliative care.
"What is effective from a health system perspective might not be effective from a treating physician's perspective and might not be effective from the patient's perspective," Shah notes. "I don't have a good way to guess everybody's reaction without actually studying it." Whether palliative care is appropriate, for instance, depends on more than just the patient's health status. "If the patient's not ready, the family's not ready and the doctor's not ready, then you're just banging your head against the wall," Shah says. "Given limited capacity, it's a waste of resources" to put that person in palliative care.
Alexander Smith and Sei Lee, both palliative care doctors, work together at the University of California, San Francisco, to develop predictions for patients who come to the hospital with a complicated prognosis or a history of decline. Their algorithm, they say, helps decide if this patient's problems – which might include diabetes, heart disease, a slow-growing cancer, and memory issues – make them eligible for hospice. The algorithm isn't perfect, they both agree, but "on balance, it leads to better decisions more often," Smith says.
Bethany Percha, an assistant professor at Mount Sinai, says that an algorithm may tell doctors that their patient is trending downward, but it doesn't do anything to change that trajectory. "Even if you can predict something, what can you do about it?" Algorithms may be able to offer treatment suggestions – but not what specific actions will alter a patient's future, says Percha, also the chief technology officer of Precise Health Enterprise, a product development group within Mount Sinai. And the algorithms remain challenging to develop. Electronic medical records may be great at her hospital, but if the patient dies at a different one, her system won't know. If she wants to be certain a patient has died, she has to merge social security records of death with her system's medical records – a time-consuming and cumbersome process.
An algorithm that learns from biased data will be biased, Shah says. Patients who are poor or African American have historically had worse health outcomes. If researchers train an algorithm on data that includes those biases, they get baked into the algorithm, which can then lead to a self-fulfilling prophecy. Smith and Lee say they've taken race out of their algorithms to avoid this bias.
Age is even trickier. There's no question that someone's risk of illness and death goes up with age. But an 85-year-old who breaks a hip running a marathon should probably be treated very differently than an 85-year-old who breaks a hip trying to get out of a chair in a dementia care unit. That's why the doctor can never be taken out of the equation, Shah says. Human judgment will always be required in medical care and an algorithm should never be followed blindly, he says.
Researchers are also concerned that their algorithms will be used to ration care, or that insurance companies will use their data to justify a rate increase. If an algorithm predicts a patient is going to end up back in the hospital soon, "who's benefitting from knowing a patient is going to be readmitted? Probably the insurance company," Percha says.
Still, Percha and others say, the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully. "These are new and exciting tools that have a lot of potential uses. We need to be conscious about how to use them going forward, but it doesn't mean we shouldn't go down this road," she says. "I think the potential benefits outweigh the risks, especially because we've barely scratched the surface of what big data can do right now."
“Young Blood” Transfusions Are Not Ready For Primetime – Yet
The world of dementia research erupted into cheers last October, when news broke of the first real victory in a clinical trial against Alzheimer's disease in over a decade.
Alzheimer's treatments have been famously difficult to develop; 99 percent of the 200-plus such clinical trials since 2000 have utterly failed. Even the few slight successes have failed to produce what are called 'disease-modifying' agents that really help people with the disease. This makes the success by the midsize Spanish pharma company Grifols worthy of special attention.
However, the specifics of the Grifols treatment, a process called plasmapheresis, are atypical for another reason—they did not give patients a small molecule or an elaborate gene therapy, but rather simply the most common component of normal human blood plasma, a protein called albumin. A large portion of the patients' normal plasma was removed, and then a sterile solution of albumin was infused back into them to keep their overall blood volume relatively constant.
So why does replacing Alzheimer's patients' plasma with albumin seem to help their brains? One theory is that the action is direct. Alzheimer's patients have low levels of serum albumin, which is needed to clear out the plaques of amyloid that slowly build up in the brain. Supplementing those patients with extra albumin boosts their ability to clear the plaques and improves brain health. However, there is also evidence suggesting that the problem may be something present in the plasma of the sick person and pulling their plasma out and replacing it with a filler, like an albumin solution, may be what creates the purported benefit.
This scientific question is the tip of an iceberg that goes far beyond Alzheimer's Disease and albumin, to a debate that has been waged on the pages of scientific journals about the secrets of using young, healthy blood to extend youth and health.
This debate started long before the Grifols data was released, in 2014 when a group of researchers at Stanford found that by connecting the circulatory systems of a young and an old mouse, the regenerative potential of the young mouse decreased, and the old mouse became healthier. There was something either present in young blood that allowed tissues to regenerate, or something present in old blood that prevented regeneration. Whatever the biological reason, the effects in the experiment were extraordinary, providing a startling boost in health in the older mouse.
After the initial findings, multiple research groups got to work trying to identify the "active factor" of regeneration (or the inhibitor of that regeneration). They soon uncovered a variety of compounds such as insulin-like growth factor 1 (IGF1), CCL11, and GDF11, but none provided all the answers researchers were hoping for, and the field saw a number of high-profile retractions based on unsound experimental practices or inconclusive data.
Years of research later, the simplest conclusion is that the story of plasma regeneration is not simple—there isn't a switch in our blood we can flip to turn back our biological clocks. That said, these hypotheses are far from dead, and many researchers continue to explore the possibility of using the rejuvenating ability of youthful plasma to treat a variety of diseases of aging.
The data remain intriguing because of the astounding results from the conjoined circulatory system experiments. The current surge in interest in studying the biology of aging is likely to produce a new crop of interesting results in the next few years. Both CCL11 and GDF11 are being researched as potential drug targets by two startups, Alkahest and Elevian, respectively.
Without clarity on a single active factor driving rejuvenation, it's tempting to try a simpler approach: taking actual blood plasma provided by young people and infusing it into elderly subjects. This is what at least one startup company, Ambrosia, is now offering in five commercial clinics across the U.S.—for $8,000 a liter.
By using whole plasma, the idea is to sidestep our ignorance, reaping the benefits of young plasma transfusion without knowing exactly what the active factors are that make the treatment work in mice. This space has attracted established players in the plasmapheresis field—Alkahest and Grifols have teamed up to test fractions of whole plasma in Alzheimer's and Parkinson's—as well as direct-to-consumer operations like Ambrosia that just want to offer patients access to treatments without regulatory oversight.
But the bold claims of improved vigor thanks to young blood are so far unsupported by clinical evidence. We simply haven't performed trials to test whether dosing a mostly healthy person with plasma can slow down aging, at least not yet. There is some evidence that plasma replacement works in mice, yes, but those experiments are all done in very different systems than what a human receiving young plasma might experience. To date, I have not seen any plasma transfusion clinic doing young blood plasmapheresis propose a clinical trial that is anything more than a shallow advertisement for their procedures.
The efforts I have seen to perform prophylactic plasmapheresis will fail to impact societal health. Without clearly defined endpoints and proper clinical trials, we won't know whether the procedure really lowers the risk of disease or helps with conditions of aging. So even if their hypothesis is correct, the lack of strong evidence to fall back on means that the procedure will never spread beyond the fringe groups willing to take the risk. If their hypothesis is wrong, then people are paying a huge amount of money for false hope, just as they do, sadly, at the phony stem cell clinics that started popping up all through the 2000s when stem cell hype was at its peak.
The real progress in the field will be made slowly, using carefully defined products either directly isolated from blood or targeting a bloodborne factor, just as the serious pharma and biotech players are doing already.
The field will progress in stages, first creating and carefully testing treatments for well-defined diseases, and only then will it progress to large-scale clinical trials in relatively healthy people to look for the prevention of disease. Most of us will choose to wait for this second stage of trials before undergoing any new treatments. Until then, prophylactic plasma transfusions will be the domain of the optimistic and the gullible.