COVID Variants Are Like “a Thief Changing Clothes” – and Our Camera System Barely Exists
Whether it's "natural selection" as Darwin called it, or "mutating" as the X-Men called it, living organisms change over time, developing thumbs or more efficient protein spikes, depending on the organism and the demands of its environment. The coronavirus that causes COVID-19, SARS-CoV-2, is no exception, and now, after the virus has infected millions of people around the globe for more than a year, scientists are beginning to see those changes.
The notorious variants that have popped up include B.1.1.7, sometimes called the UK variant, as well as P.1 and B.1.351, which seem to have emerged in Brazil and South Africa, respectively. As vaccinations pick up pace, officials are warning that now
is not the time to become complacent or relax restrictions because the variants aren't well understood.
Some appear to be more transmissible, and deadlier, while others can evade the immune system's defenses better than earlier versions of the virus, potentially undermining the effectiveness of vaccines to some degree. Genomic surveillance, the process of sequencing the genetic code of the virus widely to observe changes and patterns, is a critical way that scientists can keep track of its evolution and work to understand how the variants might affect humans.
"It's like a thief changing clothes"
It's important to note that viruses mutate all the time. If there were funding and personnel to sequence the genome of every sample of the virus, scientists would see thousands of mutations. Not every variant deserves our attention. The vast majority of mutations are not important at all, but recognizing those that are is a crucial tool in getting and staying ahead of the virus. The work of sequencing, analyzing, observing patterns, and using public health tools as necessary is complicated and confusing to those without years of specialized training.
Jeremy Kamil, associate professor of microbiology and immunology at LSU Health Shreveport, in Louisiana, says that the variants developing are like a thief changing clothes. The thief goes in your house, steals your stuff, then leaves and puts on a different shirt and a wig, in the hopes you won't recognize them. Genomic surveillance catches the "thief" even in those different clothes.
Understanding variants, both the uninteresting ones and the potentially concerning ones, gives public health officials and researchers at different levels a useful set of tools. Locally, knowing which variants are circulating in the community helps leaders know whether mask mandates and similar measures should be implemented or discontinued, or whether businesses and schools can open relatively safely.
There's more to it than observing new variants
Analysis is complex, particularly when it comes to understanding which variants are of concern. "So the question is always if a mutation becomes common, is that a random occurrence?" says Phoebe Lostroh, associate professor of molecular biology at Colorado College. "Or is the variant the result of some kind of selection because the mutation changes some property about the virus that makes it reproduce more quickly than variants of the virus that don't have that mutation? For a virus, [mutations can affect outcomes like] how much it replicates inside a person's body, how much somebody breathes it out, whether the particles that somebody might breathe in get smaller and can lead to greater transmission."
Along with all of those factors, accurate and useful genomic surveillance requires an understanding of where variants are occurring, how they are related, and an examination of why they might be prevalent.
For example, if a potentially worrisome variant appears in a community and begins to spread very quickly, it's not time to raise a public health alarm until several important questions have been answered, such as whether the variant is spreading due to specific events, or if it's happening because the mutation has allowed the virus to infect people more efficiently. Kamil offered a hypothetical scenario to explain: Imagine that a member of a community became infected and the virus mutated. That person went to church and three more people were infected, but one of them went to a karaoke bar and while singing infected 100 other people. Examining the conditions under which the virus has spread is, therefore, an essential part of untangling whether a mutation itself made the virus more transmissible or if an infected person's behaviors contributed to a local outbreak.
One of the tricky things about variants is recognizing the point at which they move from interesting, to concerning at a local level, to dangerous in a larger context. Genomic sequencing can help with that, but only when it's coordinated. When the same mutation occurs frequently, but is localized to one region, it's a concern, but when the same mutation happens in different places at the same time, it's much more likely that the "virus is learning that's a good mutation," explains Kamil.
The process is called convergent evolution, and it was a fascinating topic long before COVID. Just as your heritage can be traced through DNA, so can that of viruses, and when separate lineages develop similar traits, it's almost as if scientists can watch evolution happening in real time. A mutation to SARS-CoV-2 that appears in more than one place at once is likely one that makes it easier in some way for the virus to survive, and that is when it may become alarming. The widespread, documented variants P.1 and B.1.351 are examples of convergence because they share some of the same virulent mutations despite having developed thousands of miles apart.
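The convergence signal described above can be sketched in a few lines of code: flag any mutation that shows up independently in two or more lineages. E484K and N501Y really are shared by P.1 and B.1.351, but treat this tiny dataset as illustrative, not as a real surveillance feed.

```python
def convergent_mutations(lineage_mutations, min_lineages=2):
    """Return mutations that appear in at least `min_lineages` distinct
    lineages -- a rough proxy for the convergent-evolution signal.
    `lineage_mutations` maps each lineage name to its set of mutations."""
    seen = {}
    for lineage, mutations in lineage_mutations.items():
        for m in mutations:
            seen.setdefault(m, set()).add(lineage)
    return {m for m, lineages in seen.items() if len(lineages) >= min_lineages}

# Small illustrative dataset: a handful of well-known spike mutations.
data = {
    "P.1":     {"E484K", "K417T", "N501Y"},
    "B.1.351": {"E484K", "K417N", "N501Y"},
    "B.1.1.7": {"N501Y"},
}
print(sorted(convergent_mutations(data)))
```

In a real pipeline the input would come from thousands of uploaded genome sequences, but the logic is the same: the more independent lineages a mutation appears in, the stronger the case that selection, not chance, put it there.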
However, even variants that are emerging in different places at the same time don't present the kind of threat SARS-CoV-2 did in 2019. "This is nature," says Kamil. "It just means that this virus will not easily be driven to extinction or complete elimination by vaccines." Although a person who has already had COVID-19 can be reinfected with a variant, "it is almost always much milder disease" than the original infection, Kamil adds. Rather than causing full-fledged disease, variants have the potential to "penetrate herd immunity, spreading relatively quietly among people who have developed natural immunity or been vaccinated, until the virus finds someone who has no immunity yet, and that person would be at risk of hospitalization-grade severe disease or death."
Surveillance and predictions
According to Lostroh, genomic surveillance can help scientists predict what's going to happen. "With the British strain, for instance, that's more transmissible, you can measure how fast it's doubling in the population and you can sort of tell whether we should take more measures against this mutation. Should we shut things down a little longer because that mutation is present in the population? That could be really useful if you did enough sampling in the population that you knew where it was," says Lostroh. If, for example, the more transmissible strain was present in 50 percent of cases, but in another county or state it was barely present, it would allow for rolling lockdowns instead of sweeping measures.
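The measurement Lostroh describes, how fast a variant is doubling in the population, can be sketched with a simple log-linear fit. The weekly counts below are invented for illustration; real surveillance data would be noisier and need wider sampling.

```python
import math

def doubling_time(weekly_counts):
    """Estimate a variant's doubling time (in weeks) from weekly case
    counts, via a least-squares fit to the log of the counts."""
    xs = list(range(len(weekly_counts)))
    ys = [math.log(c) for c in weekly_counts]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return math.log(2) / slope  # weeks per doubling

# Hypothetical counts for a variant doubling roughly every two weeks.
counts = [10, 14, 20, 28, 40, 57, 80]
print(round(doubling_time(counts), 1))
```

A short doubling time in one county but not a neighboring one is exactly the kind of signal that could justify the rolling, targeted measures Lostroh describes, rather than sweeping lockdowns.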
Variants are also extremely important when it comes to the development, manufacture, and distribution of vaccines. "You're also looking at medical countermeasures, such as whether your vaccine is still effective, or if your antiviral needs to be updated," says Lane Warmbrod, a senior analyst and research associate at Johns Hopkins Center for Health Security.
Properly funded and extensive genomic surveillance could eventually help control endemic diseases, too, like the seasonal flu or other common respiratory infections. Kamil envisions a future in which genomic surveillance allows sickness to be forecast the way weather is today. "It's a 51 for infection today at the San Francisco Airport. There's been detection of some respiratory viruses," he says, offering an example. If you're vulnerable, or immune-suppressed for some reason, he says, you might choose to wear a mask that day based on the sickness report.
The U.S. has the ability, but lacks standards
The benefits of widespread genomic surveillance are clear, and the United States certainly has the necessary technology, equipment, and personnel to carry it out. But it's not happening at the speed or scale needed for the country to reap those benefits.
"The numbers are improving," said Kamil, but probably "less than half a percent of all the samples that have been taken have been sequenced since the beginning of the pandemic."
Although there's no consensus on what share of samples should be sequenced for a robust surveillance program, modeling performed by the company Illumina suggests about 5 percent of positive tests should be sequenced. The reasons the U.S. has lagged in implementing a sequencing program are complex and varied, but solvable.
Perhaps the most important element that is currently missing is leadership. In order to conduct an effective genomic surveillance program, there need to be standards. The Johns Hopkins Center for Health Security recently published a paper with recommendations as to what kinds of elements need to be standardized in order to make the best use of sequencing technology and analysis.
"Along with which bioinformatic pipelines you're going to use to do the analyses, which sequencing strategy protocol are you going to use, what's your sampling strategy going to be, how the data is going to be reported, what data gets reported," says Warmbrod. Currently, there's no guidance from the CDC on any of those things. So, while scientists can collect and report information, they may be collecting and reporting different information that isn't comparable, making it less useful for public health measures and vaccine updates.
Globally, one of the most important tools in making the information from genomic surveillance useful is GISAID, a platform designed for scientists to share -- and, importantly, to be credited for -- their data regarding genetic sequences of influenza. Originally, it was launched as a database of bird flu sequences, but has evolved to become an essential tool used by the WHO to make flu vaccine virus recommendations each year. Scientists who share their credentials have free access to the database, and anyone who uses information from the database must credit the scientist who uploaded that information.
Safety, logistics, and funding matter
Scientists at university labs and other small organizations have been uploading sequences to GISAID almost from the beginning of the pandemic, but their funding is generally limited, and there are no standards regarding information collection or reporting. Private, for-profit labs haven't had motivation to set up sequencing programs, although many of them have the logistical capabilities and funding to do so. Public health departments are understaffed, underfunded, and overwhelmed.
University labs may also be limited by safety concerns. The SARS-CoV-2 virus is dangerous, and there's a question of how samples should be transported to labs for sequencing.
Larger, for-profit organizations often have the tools and distribution capabilities to safely collect and sequence samples, but there hasn't been a profit motive. Genomic sequencing is less expensive now than ever before, but even at $100 per sample, the cost adds up -- not to mention the cost of employing a scientist with the proper credentials to analyze the sequence.
The path forward
The recently passed COVID-19 relief bill does have some funding to address genomic sequencing. Specifically, the American Rescue Plan Act includes $1.75 billion in funding for the Centers for Disease Control and Prevention's Advanced Molecular Detection (AMD) program. In an interview last month, CDC Director Rochelle Walensky said that the additional funding will be "a dial. And we're going to need to dial it up." AMD has already announced a collaboration called the Sequencing for Public Health Emergency Response, Epidemiology, and Surveillance (SPHERES) Initiative that will bring together scientists from public health, academic, clinical, and non-profit laboratories across the country with the goal of accelerating sequencing.
Such a collaboration is a step toward following the recommendations in the paper Warmbrod coauthored. Building capacity now, creating a network of labs, and standardizing procedures will mean improved health in the future. "I want to be optimistic," she says. "The good news is there are a lot of passionate, smart, capable people who are continuing to work with government and work with different stakeholders." She cautions, however, that without a national strategy we won't succeed.
"If we maximize the potential and create that framework now, we can also use it for endemic diseases," she says. "It's a very helpful system for more than COVID if we're smart in how we plan it."
Real-Time Monitoring of Your Health Is the Future of Medicine
The same way that it's harder to lose 100 pounds than it is to not gain 100 pounds, it's easier to stop a disease before it happens than to treat an illness once it's developed.
Bio-engineers working on the next generation of diagnostic tools say today's technologies, such as colonoscopies or mammograms, are reactive; that is, they often tell a person they are sick when it's too late to reverse course. Surveillance medicine — such as implanted sensors — will detect disease at its onset, in real time.
What Is Possible?
Ever since the Human Genome Project — which concluded in 2003 after mapping the DNA sequence of all 30,000 human genes — modern medicine has shifted toward "personalized medicine," also called "precision health." Twenty-first-century doctors can in some cases assess a person's risk for specific diseases from his or her DNA. The information enables women with a BRCA gene mutation, for example, to undergo more frequent screenings for breast cancer or to proactively choose to remove their breasts as a "just in case" measure.
But your DNA is not always enough to determine your risk of illness. Not all genetic mutations are harmful, for example, and people can get sick without a genetic cause, such as with an infection. Hence the need for a more "real-time" way to monitor health.
Aaron Morris, a postdoctoral researcher in the Department of Biomedical Engineering at the University of Michigan, wants doctors to be able to predict illness with pinpoint accuracy well before symptoms show up. Working in the lab of Dr. Lonnie Shea, the team is building "a tiny diagnostic lab" that can live under a person's skin and monitor for illness 24/7. Currently being tested in mice, the Michigan team's porous biodegradable implant becomes part of the body as "cells move right in," says Morris, allowing engineered tissue to be biopsied and analyzed for diseases. The information collected by the sensors will enable doctors to predict disease flare-ups, such as cancer relapses, so that therapies can begin well before a person comes out of remission. The technology will also measure the effectiveness of those therapies in real time.
In Morris' dream scenario "everyone will be implanted with a sensor" ("…the same way most people are vaccinated") and the sensor will alert people to go to the doctor if something is awry.
While it may be four or five decades before Morris' sensor becomes mainstream, "the age of surveillance medicine is here," says Jamie Metzl, a technology and healthcare futurist who penned Hacking Darwin: Genetic Engineering and the Future of Humanity. "It will get more effective and sophisticated and less obtrusive over time," says Metzl.
Already, Google compiles public health data about disease hotspots by amalgamating individual searches for medical symptoms; pill technology can digitally track when and how much medication a patient takes; and the Apple Watch heart app can predict with 85-percent accuracy whether the wearer has atrial fibrillation (AFib) — a condition that causes stroke, blood clots, and heart failure, and goes undiagnosed in 700,000 people each year in the U.S.
"We'll never be able to predict everything," says Metzl. "But we will always be able to predict and prevent more and more; that is the future of healthcare and medicine."
Morris believes that within ten years there will be surveillance tools that can predict if an individual has contracted the flu well before symptoms develop.
At City College of New York, Ryan Williams, assistant professor of biomedical engineering, has built an implantable nano-sensor that works with a fluorescent wand to scope out whether cancer cells are growing at the implant site. "Instead of having the ovary or breast removed, the patient could just have this [surveillance] device that can say 'hey, we're monitoring for this' in real time … [to] measure whether the cancer is maybe coming back, as opposed to having biopsy tests or undergoing treatments or invasive procedures."
Not all surveillance technologies that are being developed need to be implanted. At Case Western, Colin Drummond, PhD, MBA, a data scientist and assistant department chair of the Department of Biomedical Engineering, is building a "surroundable." He describes it as an Alexa-style surveillance system (he's named her Regina) that will "tell" the user, if a need arises for medication, how much to take and when.
Bioethical Red Flags
"Everyone should be extremely excited about our move toward what I call predictive and preventive health care and health," says Metzl. "We should also be worried about it. Because all of these technologies can be used well and they can [also] be abused." The concerns are many-layered:
Discriminatory practices
For years now, bioethicists have expressed concerns about employer-sponsored wellness programs that encourage fitness while also tracking employee health data. "Getting access to your health data can change the way your employer thinks about your employability," says Keisha Ray, assistant professor at the University of Texas Health Science Center at Houston (UTHealth). Such access can lead to discriminatory practices against employees who are less fit. "Surveillance medicine only heightens those risks," says Ray.
Who owns the data?
Surveillance medicine may help "democratize healthcare" which could be a good thing, says Anita Ho, an associate professor in bioethics at both the University of California, San Francisco and at the University of British Columbia. It would enable easier access by patients to their health data, delivered to smart phones, for example, rather than waiting for a call from the doctor. But, she also wonders who will own the data collected and if that owner has the right to share it or sell it. "A direct-to-consumer device is where the lines get a little blurry," says Ho. Currently, health data collected by Apple Watch is owned by Apple. "So we have to ask bigger ethical questions in terms of what consent should be required" by users.
Insurance coverage
"Consumers of these products deserve some sort of assurance that using a product that will predict future needs won't in any way jeopardize their ability to access care for those needs," says Hastings Center bioethicist Carolyn Neuhaus. She is urging lawmakers to begin tackling policy issues created by surveillance medicine, now, well ahead of the technology becoming mainstream, not unlike GINA, the Genetic Information Nondiscrimination Act of 2008 -- a federal law designed to prevent discrimination in health insurance on the basis of genetic information.
And, because not all Americans have insurance, Ho wants to know, who's going to pay for this technology and how much will it cost?
Trusting our guts
Some bioethicists are concerned that surveillance technology will reduce individuals to their "risk profiles," leaving health care systems to perceive them as nothing more than a "bundle of health and security risks." And further, in our quest to predict and prevent ailments, Neuhaus wonders whether an over-reliance on data could damage the ability of future generations to trust their gut and tune into their own bodies.
It "sounds kind of hippy-dippy and feel-goodie," she admits. But in our culture of medicine where efficiency is highly valued, there's "a tendency to not value and appreciate what one feels inside of their own body … [because] it's easier to look at data than to listen to people's really messy stories of how they 'felt weird' the other day. It takes a lot less time to look at a sheet, to read out what the sensor implanted inside your body or planted around your house says."
Ho, too, worries about lost narratives. "For surveillance medicine to actually work, we have to think about how we educate clinicians about the utility of these devices and how to interpret the data in the broader context of patients' lives."
Over-diagnosing
While one of the goals of surveillance medicine is to cut down on doctor visits, Ho wonders if the technology will have the opposite effect. "People may be going to the doctor more for things that actually are benign and are really not of concern yet," says Ho. She is also concerned that surveillance tools could make healthcare almost "recreational" and underscores the importance of making sure that the goals of surveillance medicine are met before the technology is unleashed.
AI doesn't fix existing healthcare problems
"Knowing that you're going to have a fall or going to relapse or have a disease isn't all that helpful if you have no access to the follow-up care and you can't afford it and you can't afford the prescription medication that's going to ward off the onset," says Neuhaus. "It may still be worth knowing … but we can't fool ourselves into thinking that this technology is going to reshape medicine in America if we don't pay attention to … the infrastructure that we don't currently have."
Race-based medicine
How surveillance devices are tested before being approved for human use is a major concern for Ho. In recent years, alerts have been raised about the homogeneity of study group participants — too white and too male. Ho wonders if the devices will be able to "accurately predict the disease progression for people whose data has not been used in developing the technology?" COVID-19 has killed Black people at a rate 2.5 times greater than white people, for example, and new, virtual clinical research is focused on recruiting more people of color.
The Biggest Question
"We can't just assume that any of these technologies are inherently technologies of liberation," says Metzl.
Especially because we haven't yet asked the $64,000 question: Would patients even want to know?
Jenny Ahlstrom is an IT professional who was diagnosed at 43 with multiple myeloma, a blood cancer that typically attacks people in their late 60s and 70s and for which there is no cure. She believes that most people won't want to know about their declining health in real time. People like to live "optimistically in denial most of the time. If they don't have a problem, they don't want to really think they have a problem until they have [it]," especially when there is no cure. "Psychologically? That would be hard to know."
Ahlstrom says there's also the issue of trust, something she experienced first-hand when she launched her non-profit, HealthTree, a crowdsourcing tool to help myeloma patients "find their genetic twin" and learn what therapies may or may not work. "People want to share their story, not their data," says Ahlstrom. "We have been so conditioned as a nation to believe that our medical data is so valuable."
Metzl acknowledges that adoption of new technologies will be uneven. But he also believes that "over time, it will be abundantly clear that it's much, much cheaper to predict and prevent disease than it is to treat disease once it's already emerged."
Beyond cost, the tremendous potential of these technologies to help us live healthier and longer lives is a game-changer, he says, as long as we find ways "to ultimately navigate this terrain and put systems in place ... to minimize any potential harms."
How Smallpox Was Wiped Off the Planet By a Vaccine and Global Cooperation
For 3,000 years, civilizations all over the world were brutalized by smallpox, an infectious and deadly virus characterized by fever and a rash of painful, oozing sores.
Smallpox was merciless, killing one-third of the people it infected and leaving many survivors permanently pockmarked and blind. Although smallpox was most rampant during the 18th and 19th centuries, it remained a major killer well into the 20th century, still infecting an estimated 50 million people annually in the early 1950s.
A Primitive Cure
Sometime during the 10th century, Chinese physicians figured out that exposing people to a tiny bit of smallpox would sometimes result in a milder infection and immunity to the disease afterward (if the person survived). Desperate for a cure, people would huff powders made of smallpox scabs or insert smallpox pus into their skin, all in the hopes of getting immunity without having to get too sick. However, this method – called inoculation, or variolation – didn't always work. People could still catch the full-blown disease, spread it to others, or even catch another infectious disease like syphilis in the process.
A Breakthrough Treatment
For centuries, inoculation – however imperfect – was the only protection the world had against smallpox. But in the late 18th century, an English physician named Edward Jenner created a more effective method. Jenner discovered that inoculating a person with cowpox – a much milder relative of the smallpox virus – would make that person immune to smallpox as well, but this time without the possibility of actually catching or transmitting smallpox. His breakthrough became the world's first vaccine against a contagious disease. Other researchers, like Louis Pasteur, would use these same principles to make vaccines for global killers like anthrax and rabies. Vaccination was considered a miracle, conferring all of the rewards of having gotten sick (immunity) without the risk of death or blindness.
Scaling the Cure
As vaccination became more widespread, the number of global smallpox deaths began to drop, particularly in Europe and the United States. But even as late as 1967, smallpox was still infecting anywhere from 10 to 15 million people a year in poorer parts of the globe, killing an estimated two million of them. The World Health Assembly (the decision-making body of the World Health Organization) decided that year to launch the first coordinated effort to eradicate smallpox from the planet completely, aiming for 80 percent vaccine coverage in every country where the disease was endemic – a total of 33 countries.
But officials knew that eradicating smallpox would be easier said than done. Doctors had to contend with wars, floods, and language barriers to make their campaign a success. The vaccination initiative in Bangladesh proved the most challenging, due to its population density and the prevalence of the disease, writes journalist Laurie Garrett in her book, The Coming Plague.
In one instance, French physician Daniel Tarantola, on assignment in Bangladesh, confronted a murderous gang thought to be spreading smallpox throughout the countryside during its crime sprees. Without police protection, Tarantola "faced down guns" in order to immunize the gang's members, protecting villagers from repeated outbreaks.
Because there weren't enough vaccines to immunize everyone in a given country, doctors used a strategy called "ring vaccination": locating individual outbreaks and vaccinating all known and possible contacts to stop an outbreak at its source. Fewer than 50 percent of the population in Nigeria received a vaccine, for example, but thanks to ring vaccination, smallpox was eliminated there nonetheless. Doctors worked tirelessly for the next eleven years to immunize as many people as possible.
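At its core, ring vaccination is a graph traversal: start from each confirmed case and vaccinate everyone within a ring or two of contacts. The sketch below uses a breadth-first search over a toy contact network; the names and network are invented for illustration, not drawn from any real outbreak data.

```python
from collections import deque

def ring_vaccinate(contacts, index_cases, rings=2):
    """Return the set of people to vaccinate: each confirmed case plus
    everyone within `rings` hops of them in the contact network.
    `contacts` maps each person to a list of their known contacts."""
    to_vaccinate = set(index_cases)
    frontier = deque((person, 0) for person in index_cases)
    while frontier:
        person, depth = frontier.popleft()
        if depth == rings:
            continue  # stop expanding beyond the outermost ring
        for contact in contacts.get(person, []):
            if contact not in to_vaccinate:
                to_vaccinate.add(contact)
                frontier.append((contact, depth + 1))
    return to_vaccinate

# Hypothetical outbreak: one confirmed case and a small contact network.
network = {
    "case0": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": ["d"],  # "d" sits beyond the two-ring boundary
}
print(sorted(ring_vaccinate(network, ["case0"])))
```

The payoff of this strategy is exactly what the Nigeria example shows: by concentrating doses around known outbreaks instead of spreading them evenly, a limited vaccine supply can still break every chain of transmission.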
A Resounding Success
In November 1975, officials discovered a case of variola major — the more virulent strain of the smallpox virus — in a three-year-old Bangladeshi girl named Rahima Banu. Banu was forcibly quarantined in her family's home with armed guards until the risk of transmission had passed, while officials went door-to-door vaccinating everyone within a five-mile radius. Two years later, the last case of variola major in human history was reported in Somalia. When no new community-acquired cases appeared after that, the World Health Organization declared smallpox officially eradicated on May 8, 1980.
Because of smallpox, we now know it's possible to completely eliminate a disease. But is it likely to happen again with other diseases, like COVID-19? Some scientists aren't so sure. As dangerous as smallpox was, it had a few characteristics that made eradication easier than it would be for other diseases. Smallpox, for instance, has no animal reservoir, meaning it could not circulate in animals and resurge in the human population at a later date. Additionally, a person who had smallpox once was guaranteed immunity from the disease thereafter — which is not the case for COVID-19.
In The Coming Plague, Japanese physician Isao Arita, who led the WHO's Smallpox Eradication Unit, admitted to routinely defying orders from the WHO, mobilizing to parts of the world without official approval and sometimes even vaccinating people against their will. "If we hadn't broken every single WHO rule many times over, we would have never defeated smallpox," Arita said. "Never."
Still, thanks to the life-saving technology of vaccines – and the tireless efforts of doctors and scientists across the globe – a once-lethal disease is now a thing of the past.