New tools could catch disease outbreaks earlier - or predict them
Every year, villages in the so-called ‘Nipah belt’—which stretches along Bangladesh’s western border with India—brace themselves for the latest outbreak. Since 1998, when Nipah virus first spilled over into humans, the disease, which causes severe brain inflammation and is most often reported in Bangladesh, has been a grim annual visitor to the people of this region.
With a fatality rate of roughly 70 percent, no vaccine, and no known treatments, Nipah virus has been dubbed in the Western press ‘the worst disease no one has ever heard of.’ Outbreaks currently tend to be relatively contained because the virus is not very transmissible. It circulates throughout Asia in fruit-eating bats, and tends to be passed on only to people who consume contaminated date palm sap, a sweet drink harvested across Bangladesh.
But as SARS-CoV-2 has shown the world, this can quickly change.
“Nipah virus is among what virologists call ‘the Big 10,’ along with things like Lassa fever and Crimean Congo hemorrhagic fever,” says Noam Ross, a disease ecologist at New York-based non-profit EcoHealth Alliance. “These are pretty dangerous viruses from a lethality perspective, which don’t currently have the capacity to spread into broader human populations. But that can evolve, and you could very well see a variant emerge that has human-human transmission capability.”
That’s not an overstatement. Surveys suggest that mammals harbor about 40,000 viruses, with roughly a quarter capable of infecting humans. The vast majority never get the chance to do so because we don’t encounter them, but climate change can alter that. Recent studies project that as animals relocate to new habitats under shifting environmental conditions, the coming decades will bring around 300,000 first encounters between species that normally don’t interact, especially in tropical Africa and Southeast Asia. All these interactions will make it far more likely for hitherto unknown viruses to cross paths with humans.
That’s why, for the last 16 years, EcoHealth Alliance has been conducting viral surveillance projects across Bangladesh. The goal is to understand why Nipah is so much more prevalent in the western part of the country than in the east, and to keep a watchful eye out for new Nipah strains as well as other dangerous pathogens like Ebola.
"There are a lot of different infectious agents that are sensitive to climate change that don't have these sorts of software tools being developed for them," says Cat Lippi, medical geography researcher at the University of Florida.
Until very recently, this kind of work was hampered by the limitations of viral surveillance technology. The PREDICT project, a $200 million initiative funded by the United States Agency for International Development that conducted surveillance across the Amazon Basin, the Congo Basin, and extensive parts of South and Southeast Asia, relied on nucleic acid assays, which let scientists search for the genetic material of viruses in animal samples.
However, the project came under criticism for being highly inefficient. “That approach requires a big sampling effort, because of the rarity of individual infections,” says Ross. “Any particular animal may be infected for a couple of weeks, maybe once or twice in its lifetime. So if you sample thousands and thousands of animals, you'll eventually get one that has an Ebola virus infection right now.”
Ross explains that there is now far more interest in serological sampling—the scientific term for the process of drawing blood for antibody testing. By searching for the presence of antibodies in the blood of humans and animals, scientists have a greater chance of detecting viruses which started circulating recently.
Despite the controversy surrounding EcoHealth Alliance’s involvement in so-called gain-of-function research—experiments that study whether viruses might mutate into deadlier strains—the organization’s separate efforts to stay one step ahead of pathogen evolution are key to stopping the next pandemic.
“Having really cheap and fast surveillance is really important,” says Ross. “Particularly in a place where there's persistent, low level, moderate infections that potentially have the ability to develop into more epidemic or pandemic situations. It means there’s a pathway that something more dangerous can come through."
In Bangladesh, EcoHealth Alliance is attempting to do this using a newer serological technology known as a multiplex Luminex assay, which tests each sample for antibodies against a panel of many different viruses at once. It collects what Ross describes as a ‘footprint of information,’ which lets scientists tell whether a sample reflects exposure to a known pathogen or to something entirely different that needs to be investigated further.
By using this technology to sample human and animal populations across the country, the team hopes to learn whether any novel Nipah variants, or other strains from the same viral family, are circulating, along with members of other deadly viral families like Ebola.
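To make the ‘footprint’ idea concrete, here is a minimal, hypothetical sketch of how such a multiplex readout might be triaged in software. It is not EcoHealth Alliance’s actual pipeline: the antigen names, reference profiles, and matching threshold below are all invented for illustration. Each sample is represented as a vector of antibody reactivity values, one per antigen on the panel, and is compared against reference profiles of known viruses; anything that matches no profile well is flagged for follow-up.

```python
import numpy as np

# Hypothetical antigen panel and reference reactivity profiles for known
# viruses (all values are illustrative, not real assay data).
PANEL = ["NiV-G", "NiV-F", "HeV-G", "EBOV-GP", "CCHFV-NP"]
REFERENCE_PROFILES = {
    "Nipah":  np.array([0.9, 0.8, 0.4, 0.0, 0.0]),
    "Hendra": np.array([0.4, 0.3, 0.9, 0.0, 0.0]),
    "Ebola":  np.array([0.0, 0.0, 0.0, 0.9, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two reactivity vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def triage(sample, threshold=0.8):
    """Match a sample's antibody 'footprint' against known profiles.

    Returns the best-matching virus, or flags the sample as a candidate
    novel exposure if nothing on the panel matches well enough.
    """
    scores = {name: cosine(sample, ref)
              for name, ref in REFERENCE_PROFILES.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return f"likely {best} exposure (similarity {scores[best]:.2f})"
    return "no good match -- flag for follow-up sequencing"

# A sample with scattered reactivity that fits no known profile:
print(triage(np.array([0.5, 0.0, 0.5, 0.0, 0.8])))
```

In a real workflow the thresholds would be calibrated against validation panels, but the logic is the same: a clean match confirms a known pathogen, while a partial match to a viral family is exactly the kind of signal that triggers further investigation.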
This is just one of several novel tools being used for viral discovery in surveillance projects around the globe. Multiple research groups are taking PREDICT’s approach of looking for novel viruses in animals in various hotspots. They collect environmental DNA—mucus, feces or shed skin left behind in soil, sediment or water—which can then be genetically sequenced.
Five years ago, this would have been painstaking work that required bringing collected samples back to the lab. Today, thanks to the vast sums spent on new technologies during COVID-19, researchers have portable sequencing tools they can take out into the field.
Christopher Jerde, a researcher at the UC Santa Barbara Marine Science Institute, points to the Oxford Nanopore MinION sequencer as one example. “I tried one of the early versions of it four years ago, and it was miserable,” he says. “But they’ve really improved, and what we’re going to be able to do in the next five to ten years will be amazing. Instead of having to carefully transport samples back to the lab, we're going to have cigar box-shaped sequencers that we take into the field, plug into a laptop, and do the whole sequencing of an organism.”
In the past, viral surveillance has had to be very targeted and focused on known families of viruses, potentially missing new, previously unknown zoonotic pathogens. Jerde says that the rise of portable sequencers will lead to what he describes as “true surveillance.”
“Before, this was just too complex,” he says. “It had to be very focused, for example, looking for SARS-type viruses. Now we’re able to say, ‘Tell us all the viruses that are here?’ And this will give us true surveillance – we’ll be able to see the diversity of all the pathogens which are in these spots and have an understanding of which ones are coming into the population and causing damage.”
But being able to discover more viruses also comes with certain challenges. Some scientists fear that the speed of viral discovery will soon outpace the human capacity to analyze them all and assess the threat that they pose to us.
“I think we're already there,” says Jason Ladner, assistant professor at Northern Arizona University’s Pathogen and Microbiome Institute. “If you look at all the papers on the expanding RNA virus sphere, there are all of these deposited partial or complete viral sequences in groups that we just don't know anything really about yet.” Bats, for example, carry a myriad of viruses, whose ability to infect human cells we understand very poorly.
Cultivating these viruses under laboratory conditions and testing them on organoids—miniature, simplified versions of organs created from stem cells—can help with these assessments, but it is slow and painstaking work. One hope is that in the future, machine learning could help automate this process. The new SpillOver Viral Risk Ranking platform aims to assess the risk level of a given virus based on 31 different metrics, while other computer models have tried to do the same based on the similarity of a virus’s genomic sequence to known zoonotic threats.
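For a sense of how the genomic-similarity approach works at its simplest, here is a toy sketch, not any published model: it ranks a query genome by how many short subsequences (k-mers) it shares with known zoonotic viruses. The sequences below are invented fragments; real models work on full genomes and far richer features.

```python
def kmers(seq, k=4):
    """All k-length substrings (k-mers) of a genome sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two k-mer sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

# Invented, heavily truncated 'genomes' for illustration only.
known_zoonotic = {
    "virus_A": "ATGGCGTACGTTAGCCGTAGGCTA",
    "virus_B": "ATGCCCTTAGGAACGTGCATTGCC",
}
query = "ATGGCGTACGTAAGCCGTAGGTTA"

scores = {name: jaccard(kmers(query), kmers(seq))
          for name, seq in known_zoonotic.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")  # higher = more shared k-mers
```

As Ladner notes below, a high score against a known threat is suggestive at best; sequence similarity says nothing directly about whether a virus can enter human cells.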
However, Ladner says that these types of comparisons are still overly simplistic. For one thing, scientists are aware of only a few hundred zoonotic viruses, which is a very limited sample for accurately assessing a novel pathogen. Instead, he says, virologists need to develop models that can determine a virus's compatibility with human cells based on genomic data.
“One thing which is really useful, but can be challenging to do, is understand the cell surface receptors that a given virus might use,” he says. “Understanding whether a virus is likely to be able to use proteins on the surface of human cells to gain entry can be very informative.”
As the Earth’s climate heats up, scientists also need to better model vector-borne diseases such as dengue, Zika, chikungunya and yellow fever. Transmitted by Aedes mosquitoes, which thrive in humid climates, these diseases currently disproportionately affect people in low-income nations. But predictions suggest that as the planet warms and the insects find new homes, an estimated one billion people who currently don’t encounter them could be threatened by their bites by 2080. “When it comes to mosquito-borne diseases we have to worry about shifts in suitable habitat,” says Cat Lippi, a medical geography researcher at the University of Florida. “As climate patterns change on these big scales, we expect to see shifts in where people will be at risk for contracting these diseases.”
Public health practitioners and government decision-makers need tools to make climate-informed decisions about the evolving threat of different infectious diseases. Some projects are already underway. An ongoing collaboration between the Catalan Institution for Research and Advanced Studies and researchers in Brazil and Peru is utilizing drones and weather stations to collect data on how mosquitoes change their breeding patterns in response to climate shifts. This information will then be fed into computer algorithms to predict the impact of mosquito-borne illnesses on different regions.
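As a rough illustration of how such field data could feed a predictive model, consider this hypothetical sketch; it is not the Catalan project's actual algorithm, and the numbers are invented. A classifier is trained on surveyed sites, each described by simple climate variables and labeled by whether Aedes breeding was observed, and is then asked about a site under a warmer, wetter scenario.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per surveyed site,
# columns = [mean temperature (deg C), monthly rainfall (mm)],
# label = whether Aedes breeding was observed at the site.
X = np.array([
    [16.0,  30.0],
    [18.0,  40.0],
    [22.0,  90.0],
    [26.0, 150.0],
    [28.0, 200.0],
    [30.0, 220.0],
])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# Predict breeding risk for a site under a warmer, wetter climate scenario.
future_site = np.array([[27.5, 180.0]])
risk = model.predict_proba(future_site)[0, 1]
print(f"Predicted probability of Aedes breeding: {risk:.2f}")
```

Real models of this kind fold in many more variables, such as humidity, land use, and population density, but the underlying idea is the same: map climate conditions to mosquito habitat suitability, and from there to human risk.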
Lippi says that similar models are urgently needed to predict how changing climate patterns will affect respiratory, foodborne, waterborne and soilborne illnesses. The UK-based Wellcome Trust has allocated substantial funding to such projects, which should allow scientists to monitor the impact of climate on a much broader range of infections. “There are a lot of different infectious agents that are sensitive to climate change that don't have these sorts of software tools being developed for them,” she says.
COVID-19’s havoc boosted funding for infectious disease research, but as its threats begin to fade from policymakers’ focus, the money may dry up. Meanwhile, scientists warn that another major infectious disease outbreak is inevitable, potentially within the next decade, so combing the planet for pathogens is vital. “Surveillance is ultimately a really boring thing that a lot of people don't want to put money into, until we have a wide scale pandemic,” Jerde says, but that vigilance is key to thwarting the next deadly horror. “It takes a lot of patience and perseverance to keep looking.”
This article originally appeared in One Health/One Planet, a single-issue magazine that explores how climate change and other environmental shifts are increasing vulnerabilities to infectious diseases by land and by sea. The magazine probes how scientists are making progress with leaders in other fields toward solutions that embrace diverse perspectives and the interconnectedness of all lifeforms and the planet.
In The Fake News Era, Are We Too Gullible? No, Says Cognitive Scientist
One of the oddest political hoaxes of recent times was Pizzagate, in which conspiracy theorists claimed that Hillary Clinton and her 2016 campaign chief ran a child sex ring from the basement of a Washington, DC, pizzeria.
Millions of believers spread the rumor on social media, abetted by Russian bots; one outraged netizen stormed the restaurant with an assault rifle and shot open what he took to be the dungeon door. (It actually led to a computer closet.) Pundits cited the imbroglio as evidence that Americans had lost the ability to tell fake news from the real thing, putting our democracy in peril.
Such fears, however, are nothing new. "For most of history, the concept of widespread credulity has been fundamental to our understanding of society," observes Hugo Mercier in Not Born Yesterday: The Science of Who We Trust and What We Believe (Princeton University Press, 2020). In the fifth century BCE, he points out, the historian Thucydides blamed Athens' defeat by Sparta on a demagogue who hoodwinked the public into supporting idiotic military strategies; Plato extended that argument to condemn democracy itself. Today, atheists and fundamentalists decry one another's gullibility, as do climate-change accepters and deniers. Leftists bemoan the masses' blind acceptance of the "dominant ideology," while conservatives accuse those who do revolt of being duped by cunning agitators.
What's changed, all sides agree, is the speed at which bamboozlement can propagate. In the digital age, it seems, a sucker is born every nanosecond.
The Case Against Credulity
Yet Mercier, a cognitive scientist at the Jean Nicod Institute in Paris, thinks we've got the problem backward. To fight disinformation more effectively, he suggests, humans need to stop believing in one thing above all: our own gullibility. "We don't credulously accept whatever we're told—even when those views are supported by the majority of the population, or by prestigious, charismatic individuals," he writes. "On the contrary, we are skilled at figuring out who to trust and what to believe, and, if anything, we're too hard rather than too easy to influence."
He bases those contentions on a growing body of research in neuropsychiatry, evolutionary psychology, and other fields. Humans, Mercier argues, are hardwired to balance openness with vigilance when assessing communicated information. To gauge a statement's accuracy, we instinctively test it from many angles, including: Does it jibe with what I already believe? Does the speaker share my interests? Has she demonstrated competence in this area? What's her reputation for trustworthiness? And, with more complex assertions: Does the argument make sense?
This process, Mercier says, enables us to learn much more from one another than do other animals, and to communicate in a far more complex way—key to our unparalleled adaptability. But it doesn't always save us from trusting liars or embracing demonstrably false beliefs. To better understand why, leapsmag spoke with the author.
How did you come to write Not Born Yesterday?
In 2010, I collaborated with the cognitive scientist Dan Sperber and some other colleagues on a paper called "Epistemic Vigilance," which laid out the argument that evolutionarily, it would make no sense for humans to be gullible. If you can be easily manipulated and influenced, you're going to be in major trouble. But as I talked to people, I kept encountering resistance. They'd tell me, "No, no, people are influenced by advertising, by political campaigns, by religious leaders." I started doing more research to see if I was wrong, and eventually I had enough to write a book.
With all the talk about "fake news" these days, the topic has gotten a lot more timely.
Yes. But on the whole, I'm skeptical that fake news matters very much. And all the energy we spend fighting it is energy not spent on other pursuits that may be better ways of improving our informational environment. The real challenge, I think, is not how to shut up people who say stupid things on the internet, but how to make it easier for people who say correct things to convince others.
"History shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion."
You start the book with an anecdote about your encounter several years ago with a con artist who scammed you out of 20 euros. Why did you choose that anecdote?
Although I'm arguing that people aren't generally gullible, I'm not saying we're completely impervious to attempts at tricking us. It's just that we're much better than we think at resisting manipulation. And while there's a risk of trusting someone who doesn't deserve to be trusted, there's also a risk of not trusting someone who could have been trusted. You miss out on someone who could help you, or from whom you might have learned something—including figuring out who to trust.
You argue that in humans, vigilance and open-mindedness evolved hand-in-hand, leading to a set of cognitive mechanisms you call "open vigilance."
There's a common view that people start from a state of being gullible and easy to influence, and get better at rejecting information as they become smarter and more sophisticated. But that's not what really happens. It's much harder to get apes than humans to do anything they don't want to do, for example. And research suggests that over evolutionary time, the better our species became at telling what we should and shouldn't listen to, the more open to influence we became. Even small children have ways to evaluate what people tell them.
The most basic is what I call "plausibility checking": if you tell them you're 200 years old, they're going to find that highly suspicious. Kids pay attention to competence; if someone is an expert in the relevant field, they'll trust her more. They're likelier to trust someone who's nice to them. My colleagues and I have found that by age 2 ½, children can distinguish between very strong and very weak arguments. Obviously, these skills keep developing throughout your life.
But you've found that even the most forceful leaders—and their propaganda machines—have a hard time changing people's minds.
Throughout history, there's been this fear of demagogues leading whole countries into terrible decisions. In reality, these leaders are mostly good at feeling the crowd and figuring out what people want to hear. They're not really influencing [the masses]; they're surfing on pre-existing public opinion. We know from a recent study, for instance, that if you match cities in which Hitler gave campaign speeches in the late '20s through early '30s with similar cities in which he didn't give campaign speeches, there was no difference in vote share for the Nazis. Nazi propaganda managed to make Germans who were already anti-Semitic more likely to express their anti-Semitism or act on it. But Germans who were not already anti-Semitic were completely inured to the propaganda.
So why, in totalitarian regimes, do people seem so devoted to the ruler?
It's not a very complex psychology. In these regimes, the slightest show of discontent can be punished by death, or by you and your whole family being sent to a labor camp. That doesn't mean propaganda has no effect, but you can explain people's obedience without it.
What about cult leaders and religious extremists? Their followers seem willing to believe anything.
Prophets and preachers can inspire the kind of fervor that leads people to suicidal acts or doomed crusades. But history shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion. Only when people are ready for extreme actions can a charismatic figure provide the spark that lights the fire.
Once a religion becomes ubiquitous, the limits of its persuasive powers become clear. Every anthropologist knows that in societies that are nominally dominated by orthodox belief systems—whether Christian or Muslim or anything else—most people share a view of God, or the spirit, that's closer to what you find in societies that lack such religions. In the Middle Ages, for instance, you have records of priests complaining of how unruly the people are—how they spend the whole Mass chatting or gossiping, or go on pilgrimages mostly because of all the prostitutes and wine-drinking. They continue pagan practices. They resist attempts to make them pay tithes. It's very far from our image of how much people really bought the dominant religion.
"The mainstream media is extremely reliable. The scientific consensus is extremely reliable."
And what about all those wild rumors and conspiracy theories on social media? Don't those demonstrate widespread gullibility?
I think not, for two reasons. One is that most of these false beliefs tend to be held in a way that's not very deep. People may say Pizzagate is true, yet that belief doesn't really interact with the rest of their cognition or their behavior. If you really believe that children are being abused, then trying to free them is the moral and rational thing to do. But the only person who did that was the guy who took his assault weapon to the pizzeria. Most people just left one-star reviews of the restaurant.
The other reason is that most of these beliefs actually play some useful role for people. Before any ethnic massacre, for example, rumors circulate about atrocities having been committed by the targeted minority. But those beliefs aren't what's really driving the phenomenon. In the horrendous pogrom of Kishinev, Moldova, 100 years ago, you had these stories of blood libel—a child disappeared, typical stuff. And then what did the Christian inhabitants do? They raped the [Jewish] women, they pillaged the wine stores, they stole everything they could. They clearly wanted to get that stuff, and they made up something to justify it.
Where do skeptics like climate-change deniers and anti-vaxxers fit into the picture?
Most people in most countries accept that vaccination is good and that climate change is real and man-made. These ideas are deeply counter-intuitive, so the fact that scientists were able to get them across is quite fascinating. But the environment in which we live is vastly different from the one in which we evolved. There's a lot more information, which makes it harder to figure out who we can trust. The main effect is that we don't trust enough; we don't accept enough information. We also rely on shortcuts and heuristics—coarse cues of trustworthiness. There are people who abuse these cues. They may have a PhD or an MD, and they use those credentials to help them spread messages that are not true and not good. Mostly, they're affirming what people want to believe, but they may also be changing minds at the margins.
How can we improve people's ability to resist that kind of exploitation?
I wish I could tell you! That's literally my next project. Generally speaking, though, my advice is very vanilla. The mainstream media is extremely reliable. The scientific consensus is extremely reliable. If you trust those sources, you'll go wrong in a very few cases, but on the whole, they'll probably give you good results. Yet a lot of the problems that we attribute to people being stupid and irrational are not entirely their fault. If governments were less corrupt, if the pharmaceutical companies were irreproachable, these problems might not go away—but they would certainly be minimized.
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques and algorithms powered by artificial intelligence—a branch of computer science in which machines learn to execute tasks that typically require human brainpower.
This innovation may enhance diagnostic precision and speed. But it also brings ethical concerns to the forefront, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for use in patient care is essential to AI's responsible and transparent implementation as the field grapples with vast quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI—a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body—has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low- and intermediate-grade malignancies. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved by as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
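To give a flavor of what AI-guided diagnostics means in practice, here is a deliberately simplified sketch; it is not the Case Western pipeline, and every number in it is synthetic. Each lesion on a scan is reduced to a vector of quantitative image features, and a classifier outputs a probability that the radiologist can weigh alongside everything else in the case.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a computational-imaging dataset: each row is one
# lesion, reduced to five made-up 'radiomic' features (texture, shape,
# intensity statistics); labels mark which lesions were aggressive.
rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 5))
labels = (features[:, 0] + 0.5 * features[:, 2]
          + rng.normal(scale=0.5, size=n)) > 0

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:150], labels[:150])  # train on the first 150 lesions

# The model returns probabilities, not verdicts -- decision support for
# the radiologist rather than a replacement.
proba = clf.predict_proba(features[150:])[:, 1]
print("Predicted probability of aggressiveness, first 5 held-out lesions:",
      np.round(proba[:5], 2))
```

Real systems extract thousands of validated features from the MRI itself and are trained on clinically confirmed outcomes, which is precisely why the validation work Madabhushi describes takes years.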
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing individual data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, says Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
"AI is a paradigm shift with magic power and great potential."
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."