Health breakthroughs of 2022 that should have made bigger news
As the world attempted to move on from COVID-19 in 2022, attention returned to other areas of health and biotech, with major regulatory approvals – such as the Alzheimer's drug lecanemab, which can slow the destruction of brain cells in the early stages of the disease – hailed by some as momentous breakthroughs.
This has been a year where psychedelic medicines have gained the attention of mainstream researchers with a groundbreaking clinical trial showing that psilocybin treatment can help relieve some of the symptoms of major depressive disorder. And with messenger RNA (mRNA) technology still very much capturing the imagination, the readouts of cancer vaccine trials have made headlines around the world.
But at the same time there have been vital advances which will likely go on to change medicine, and yet have slipped beneath the radar. I asked nine forward-thinking experts on health and biotech about the most important, but underappreciated, breakthrough of 2022.
Their descriptions, below, were lightly edited by Leaps.org for style and format.
New drug targets for Alzheimer’s disease
Professor Julie Williams, Director, Dementia Research Institute, Cardiff University
Genetics has changed our view of Alzheimer’s disease in the last five to six years. The beta amyloid hypothesis has dominated Alzheimer’s research for a long time, but there are multiple components to this complex disease, of which getting rid of amyloid plaques is one, but it is not the whole story. In April 2022, Nature published a paper that was the culmination of a decade’s worth of work, with groups all over the world collaborating to identify 75 genes associated with the risk of developing Alzheimer’s. This provides us with a roadmap for understanding the disease mechanisms.
For example, it is showing that there is something different about the immune systems of people who develop Alzheimer’s disease. There is something different about the way they process lipids in the brain, and very specific processes of how things travel through cells called endocytosis. When it comes to immunity, it indicates that the complement system is affecting whether synapses, which are the connections between neurons, get eliminated or not. In Alzheimer’s this process is more severe, so patients are losing more synapses, and this is correlated with cognition.
The genetics also implicates very specific tissues like microglia, which are the housekeepers in the brain. One of their functions is to clear away beta amyloid, but they also prune and nibble away at parts of the brain that are indicated to be diseased. If you have these risk genes, it seems that you are likely to prune more tissue, which may be part of the cell death and neurodegeneration that we observe in Alzheimer’s patients.
Genetics is telling us that we need to be looking at multiple causes of this complex disease, and we are doing that now. It is showing us that there are a number of different processes which combine to push patients into a disease state which results in the death of connections between nerve cells. These findings around the complement system and other immune-related mechanisms are very interesting as there are already drugs which are available for other diseases which could be repurposed in clinical trials. So it is really a turning point for us in the Alzheimer’s disease field.
Preventing Pandemics with Organ-Tissue Equivalents
Anthony Atala, Director of the Wake Forest Institute for Regenerative Medicine
COVID-19 has shown us that we need to be better prepared ahead of future pandemics and have systems in place where we can quickly catalogue a new virus and have an idea of which treatment agents would work best against it.
At Wake Forest Institute, our scientists have developed what we call organ-tissue equivalents. These are miniature tissues and organs, created using the same regenerative medicine technologies which we have been using to create tissues for patients. For example, if we are making a miniature liver, we will recreate this structure using the six different cell types you find in the liver, in the right proportions, and then the right extracellular matrix which holds the structure together. You're trying to replicate all the characteristics of the liver, but just in a miniature format.
We can now put these organ-tissue equivalents in a chip-like device, where we can expose them to different types of viral infections, and start to get a realistic idea of how the human body reacts to these viruses. We can use artificial intelligence and machine learning to map the pathways of the body’s response. This will allow us to catalogue known viruses far more effectively, and begin storing information on them.
Powering Deep Brain Stimulators with Breath
Islam Mosa, Co-Founder and CTO of VoltXon
Deep brain stimulation (DBS) devices are becoming increasingly common, with 150,000 new devices implanted every year for people with Parkinson’s disease, as well as psychiatric conditions such as treatment-resistant depression and obsessive-compulsive disorder. But one of the biggest limitations is the power source – I call DBS devices energy monsters. While cardiac pacemakers use similar technology, their batteries last seven to ten years, whereas DBS batteries need changing every two to three years. This is because they generate between 60 and 180 pulses per second.
Replacing the batteries requires surgery which costs a lot of money, and with every repeat operation comes a risk of infection, plus there is a lot of anxiety on behalf of the patient that the battery is running out.
My colleagues at the University of Connecticut and I have developed a new way of charging these devices using the person’s own breathing movements, which would mean that the batteries never need to be changed. As the patient breathes in and out, their chest wall presses on a thin electric generator, which converts that movement into static electricity, charging a supercapacitor. This discharges the electricity required to power the DBS device and send the necessary pulses to the brain.
So far it has only been tested in a simulated pig, using a pig lung connected to a pump, but there are plans now to test it in a real animal, and then progress to clinical trials.
Smartwatches for Disease Detection
Jessilyn Dunn, Assistant Professor in Duke Biomedical Engineering
A group of researchers recently showed that digital biomarkers of infection can reveal when someone is sick, often before they feel sick. The team, which included Duke biomedical engineers, used information from smartwatches to detect Covid-19 cases five to 10 days earlier than diagnostic tests. Smartwatch data included aspects of heart rate, sleep quality and physical activity. Based on this data, we developed an algorithm to decide which people have the most need to take the diagnostic tests. With this approach, the percentage of tests that come back positive is about four to six times higher, depending on which factors we monitor through the watches.
Our study was one of several showing the value of digital biomarkers, rather than a single blockbuster paper. With so many new ideas and technologies coming out around Covid, it’s hard to be that signal through the noise. More studies are needed, but this line of research is important because, rather than treat everyone as equally likely to have an infectious disease, we can use prior knowledge from smartwatches. With monkeypox, for example, you've got many more people who need to be tested than you have tests available. Information from the smartwatches enables you to improve how you allocate those tests.
Smartwatch data could also be applied to chronic diseases. For viruses, we’re looking for information about anomalies – a big change point in people’s health. For chronic diseases, it’s more like a slow, steady change. Our research lays the groundwork for the signals coming from smartwatches to be useful in a health setting, and now it’s up to us to detect more of these chronic cases. We want to go from the idea that we have this single change point, like a heart attack or stroke, and focus on the part before that, to see if we can detect it.
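The two patterns described above – an abrupt change point for infection versus a slow, steady drift for chronic disease – can be sketched in code. This is an illustrative toy only: the rolling z-score approach, window sizes and thresholds are assumptions for the sake of the example, not the Duke team's actual algorithm.

```python
# Toy sketch of the two detection regimes described above, applied to a
# daily resting-heart-rate series. All parameters are illustrative.
from statistics import mean, stdev

def detect_anomaly(resting_hr, window=7, z_threshold=3.0):
    """Flag days where resting heart rate jumps far above a rolling baseline,
    the kind of abrupt change point associated with acute infection."""
    flagged = []
    for i in range(window, len(resting_hr)):
        baseline = resting_hr[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (resting_hr[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

def detect_drift(resting_hr, min_slope=0.05):
    """Estimate a long-run trend (beats/min per day) via least squares;
    a slow, steady rise is the chronic-disease pattern described above."""
    n = len(resting_hr)
    xs = range(n)
    x_mean, y_mean = mean(xs), mean(resting_hr)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, resting_hr)) \
            / sum((x - x_mean) ** 2 for x in xs)
    return slope if slope > min_slope else 0.0
```

In practice the team combined several signals (heart rate, sleep, activity) and validated against diagnostic test results; a single-channel sketch like this only shows the shape of the idea.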
A Vaccine For RSV
Norbert Pardi, Vaccines Group Lead, Penn Institute for RNA Innovation, University of Pennsylvania
Scientists have long been trying to develop a vaccine for respiratory syncytial virus (RSV), and it looks like Pfizer are closing in on this goal, based on the latest clinical trial data in newborns which they released in November. Pfizer have developed a protein-based vaccine against the F protein of RSV, which they are giving to pregnant women. It turns out that it induces a robust immune response after the administration of a single shot and it seems to be highly protective in newborns. The efficacy was over 80% after 90 days, so it protected very well against severe disease, and even though this dropped a little after six months, it was still pretty high.
I think this has been a very important breakthrough, and very timely with COVID-19, influenza and RSV all circulating at the moment, which just shows the importance of having a vaccine which works well in both the very young and the very old.
The road to an RSV vaccine has also illustrated the importance of teamwork in 21st century vaccine development. You need people with different backgrounds to solve these challenges – microbiologists, immunologists and structural biologists working together to understand how viruses work, and how our immune system induces protective responses against certain viruses. It has been this kind of teamwork which has yielded the findings that targeting the prefusion stabilized form of the F protein in RSV induces much stronger and highly protective immune responses.
Gene therapy shows its potential
Nicole Paulk, Assistant Professor of Gene Therapy at the University of California, San Francisco
The recent US Food and Drug Administration (FDA) approval of Hemgenix, a gene therapy for hemophilia B, is big for a lot of reasons. While hemophilia is absolutely a rare disease, it is astronomically more common than the first two approvals – Luxturna for RPE65-mediated inherited retinal dystrophy and Zolgensma for spinal muscular atrophy – so many more patients will be treated with this. In terms of numbers of patients, we are now starting to creep up into things that are much more common, which is a huge step in terms of our ability to scale the production of an adeno-associated virus (AAV) vector for gene therapy.
Hemophilia is also a really special patient population because this has been the darling indication for AAV gene therapy for the last 20 to 30 years. AAV traffics to the liver so well that it’s really easy for us to target the tissues we want. If you look at the numbers, there have been more gene therapy scientists working on hemophilia than on any other condition – thousands and thousands of us over the last 20 or 30 years – so to see the first of these approvals make it feels really special.
I am sure it is even more special for the patients, because now they have a choice: do I want to stay on the recombinant factor drug I need to take every day for the rest of my life, or do I get a one-time infusion of this virus and possibly experience curative levels of expression for the rest of my life? And this is just the first one for hemophilia; there are going to be a dozen gene therapies within the next five years, targeted towards different hemophilias.
Every single approval is momentous for the entire field because it gets investors excited, it gets companies and physicians excited, and that helps speed things up. Right now, it's still a challenge to produce enough for double digit patients. But with more interest comes the experiments and trials that allow us to pick up the knowledge to scale things up, so that we can go after bigger diseases like diabetes, congestive heart failure, cancer, all of these much bigger afflictions.
Treating Thickened Hearts
John Spertus, Professor in Metabolic and Vascular Disease Research, UMKC School of Medicine
Hypertrophic cardiomyopathy (HCM) is a disease in which the heart muscle thickens, so that the walls of the heart chambers enlarge and the chambers themselves shrink. Because of this, they cannot hold as much blood and may stiffen, causing some sufferers to experience progressive shortness of breath, fatigue and ultimately heart failure.
So far we have only had very crude ways of treating it, using beta blockers, calcium channel blockers or other medications which cause the heart to beat less strongly. This works for some patients, but a lot of the time it does not, which means you have to consider removing part of the wall of the heart with surgery.
Earlier this year, a trial of a drug called mavacamten became the first study to show positive results in treating HCM. What is remarkable about mavacamten is that it is directed at blocking the overly vigorous contractile proteins in the heart, so it is a highly targeted, focused way of addressing the key problem in these patients. The study demonstrated a really large improvement in patients’ quality of life while they were on the drug; when they went off the drug, that improvement disappeared.
Some specialists are now hypothesizing that it may work for other cardiovascular diseases where the heart either beats too strongly or it does not relax well enough, but just having a treatment for HCM is a really big deal. For years we have not been very aggressive in identifying and treating these patients because there have not been great treatments available, so this could lead to a new era.
Regenerating Organs
David Andrijevic, Associate Research Scientist in neuroscience at Yale School of Medicine
As soon as the heartbeat stops, a whole chain of biochemical processes resulting from ischemia – the lack of blood flow, oxygen and nutrients – begins to destroy the body’s cells and organs. My colleagues and I at Yale School of Medicine have been investigating whether we can recover organs after prolonged ischemia, with the main goal of expanding the organ donor pool.
Earlier this year we published a paper in which we showed that we could use technology to restore blood circulation, other cellular functions and even heart activity in pigs, one hour after their deaths. This was done using a perfusion technology to substitute heart, lung and kidney function, and deliver an experimental cell protective fluid to these organs which aimed to stop cell death and aid in the recovery.
One of the aims of this technology is that it can be used in future to lengthen the time window for recovering organs for donation after a person has been declared dead, a logistical hurdle which would allow us to substantially increase the donor pool. We might also be able to use this cell protective fluid in studies to see if it can help people who have suffered from strokes and myocardial infarction. In future, if we managed to achieve an adequate brain recovery – and the brain, out of all the organs, is the most susceptible to ischemia – this might also change some paradigms in resuscitation medicine.
Antibody-Drug Conjugates for Cancer
Yosi Shamay, Cancer Nanomedicine and Nanoinformatics researcher at the Technion Israel Institute of Technology
For the past four or five years, antibody-drug conjugates (ADCs) - a cancer drug where you have an antibody conjugated to a toxin - have been used only in patients with specific cancers that display high expression of a target protein, for example HER2-positive breast cancer. But in 2022, there have been clinical trials where ADCs have shown remarkable results in patients with low expression of HER2, which is something we never expected to see.
In July 2022, AstraZeneca published the results of a clinical trial, which showed that an ADC called trastuzumab deruxtecan can offer a very big survival benefit to breast cancer patients with very little expression of HER2, levels so low that they would be borderline undetectable for a pathologist. They got a strong survival signal for patients with very aggressive, metastatic disease.
I think this is very interesting and important because it means that it might pave the way to include more patients in clinical trials looking at ADCs for other cancers, for example lymphoma, colon cancer, lung cancers, even if they have low expression of the protein target. It also holds implications for CAR-T cells - where you genetically engineer a T cell to attack the cancer - because the concept is very similar. If we now know that an ADC can have a survival benefit, even in patients with very low target expression, the same might be true for T cells.
Look back further: Breakthroughs of 2021
https://leaps.org/6-biotech-breakthroughs-of-2021-that-missed-the-attention-they-deserved/
The Case for an Outright Ban on Facial Recognition Technology
[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "Do you think the use of facial recognition technology by the police or government should be banned? If so, why? If not, what limits, if any, should be placed on its use?"]
In a surprise appearance at the tail end of Amazon's much-hyped annual product event last month, CEO Jeff Bezos casually told reporters that his company is writing its own facial recognition legislation.
It seems that when you're the wealthiest human alive, there's nothing strange about your company – the largest in the world profiting from the spread of face surveillance technology – writing the rules that govern it.
But if lawmakers and advocates fall into Silicon Valley's trap of "regulating" facial recognition and other forms of invasive biometric surveillance, that's exactly what will happen.
Industry-friendly regulations won't fix the dangers inherent in widespread use of face scanning software, whether it's deployed by governments or for commercial purposes. The use of this technology in public places and for surveillance purposes should be banned outright, and its use by private companies and individuals should be severely restricted. As artificial intelligence expert Luke Stark wrote, it's dangerous enough that it should be outlawed for "almost all practical purposes."
Like biological or nuclear weapons, facial recognition poses such a profound threat to the future of humanity and our basic rights that any potential benefits are far outweighed by the inevitable harms.
We live in cities and towns with an exponentially growing number of always-on cameras, installed in everything from cars to children's toys to Amazon's police-friendly doorbells. The use of computer algorithms to analyze massive databases of footage and photographs could render human privacy extinct. It's a world where nearly everything we do, everywhere we go, everyone we associate with, and everything we buy — or look at and even think of buying — is recorded and can be tracked and analyzed at a mass scale for unimaginably awful purposes.
Biometric tracking enables the automated and pervasive monitoring of an entire population. There's ample evidence that this type of dragnet mass data collection and analysis is not useful for public safety, but it's perfect for oppression and social control.
Law enforcement defenders of facial recognition often state that the technology simply lets them do what they would be doing anyway: compare footage or photos against mug shots, drivers licenses, or other databases, but faster. And they're not wrong. But the speed and automation enabled by artificial intelligence-powered surveillance fundamentally changes the impact of that surveillance on our society. Being able to do something exponentially faster, and using significantly less human and financial resources, alters the nature of that thing. The Fourth Amendment becomes meaningless in a world where private companies record everything we do and provide governments with easy tools to request and analyze footage from a growing, privately owned, panopticon.
Tech giants like Microsoft and Amazon insist that facial recognition will be a lucrative boon for humanity, as long as there are proper safeguards in place. This disingenuous call for regulation is straight out of the same lobbying playbook that telecom companies have used to attack net neutrality and Silicon Valley has used to scuttle meaningful data privacy legislation. Companies are calling for regulation because they want their corporate lawyers and lobbyists to help write the rules of the road, to ensure those rules are friendly to their business models. They're trying to skip the debate about what role, if any, technology this uniquely dangerous should play in a free and open society. They want to rush ahead to the discussion about how we roll it out.
Facial recognition is spreading very quickly. But backlash is growing too. Several cities have already banned government entities, including police and schools, from using biometric surveillance. Others have local ordinances in the works, and there's state legislation brewing in Michigan, Massachusetts, Utah, and California. Meanwhile, there is growing bipartisan agreement in U.S. Congress to rein in government use of facial recognition. We've also seen significant backlash to facial recognition growing in the U.K., within the European Parliament, and in Sweden, which recently banned its use in schools following a fine under the General Data Protection Regulation (GDPR).
At least two frontrunners in the 2020 presidential campaign have backed a ban on law enforcement use of facial recognition. Many of the largest music festivals in the world responded to Fight for the Future's campaign and committed to not use facial recognition technology on music fans.
There has been widespread reporting on the fact that existing facial recognition algorithms exhibit systemic racial and gender bias, and are more likely to misidentify people with darker skin, or who are not perceived by a computer to be a white man. Critics are right to highlight this algorithmic bias. Facial recognition is being used by law enforcement in cities like Detroit right now, and the racial bias baked into that software is doing harm. It's exacerbating existing forms of racial profiling and discrimination in everything from public housing to the criminal justice system.
But the companies that make facial recognition assure us this bias is a bug, not a feature, and that they can fix it. And they might be right. Face scanning algorithms for many purposes will improve over time. But facial recognition becoming more accurate doesn't make it less of a threat to human rights. This technology is dangerous when it's broken, but at a mass scale, it's even more dangerous when it works. And it will still disproportionately harm our society's most vulnerable members.
We need spaces that are free from government and societal intrusion in order to advance as a civilization. If technology makes it so that laws can be enforced 100 percent of the time, there is no room to test whether those laws are just. If the U.S. government had ubiquitous facial recognition surveillance 50 years ago when homosexuality was still criminalized, would the LGBTQ rights movement ever have formed? In a world where private spaces don't exist, would people have felt safe enough to leave the closet and gather, build community, and form a movement? Freedom from surveillance is necessary for deviation from social norms as well as to dissent from authority, without which societal progress halts.
Persistent monitoring and policing of our behavior breeds conformity, benefits tyrants, and enriches elites. Drawing a line in the sand around tech-enhanced surveillance is the fundamental fight of this generation. Lining up to get our faces scanned to participate in society doesn't just threaten our privacy, it threatens our humanity, and our ability to be ourselves.
[Editor's Note: Read the opposite perspective here.]
Scientists Are Building an “AccuWeather” for Germs to Predict Your Risk of Getting the Flu
Applied mathematician Sara del Valle works at the U.S.'s foremost nuclear weapons lab: Los Alamos. Once colloquially called Atomic City, it's a hidden place 45 minutes into the mountains northwest of Santa Fe. Here, engineers developed the first atomic bomb.
Today, Los Alamos is still a small science town, though no longer a secret, nor in the business of building new bombs. Instead, it's tasked with, among other things, keeping the stockpile of nuclear weapons safe and stable: not exploding when they're not supposed to (yes, please) and exploding if someone presses that red button (please, no).
Del Valle, though, doesn't work on any of that. Los Alamos is also interested in other kinds of booms—like the explosion of a contagious disease that could take down a city. Predicting (and, ideally, preventing) such epidemics is del Valle's passion. She hopes to develop an app that's like AccuWeather for germs: It would tell you your chance of getting the flu, or dengue or Zika, in your city on a given day. And like AccuWeather, it could help people alter their behavior to live better lives, whether that means staying home on a snowy morning or washing their hands on a sickness-heavy commute.
Sara del Valle of Los Alamos is working to predict and prevent epidemics using data and machine learning.
Since the beginning of del Valle's career, she's been driven by one thing: using data and predictions to help people behave practically around pathogens. As a kid, she'd always been good at math, but when she found out she could use it to capture the tentacular spread of disease, and not just manipulate abstractions, she was hooked.
When she made her way to Los Alamos, she started looking at what people were doing during outbreaks. Using social media like Twitter, Google search data, and Wikipedia, the team started to sift for trends. Were people talking about hygiene, like hand-washing? Or about being sick? Were they Googling information about mosquitoes? Searching Wikipedia for symptoms? And how did those things correlate with the spread of disease?
It was a new, faster way to think about how pathogens propagate in the real world. Usually, there's a 10- to 14-day lag in the U.S. between when doctors tap numbers into spreadsheets and when that information becomes public. By then, the world has moved on, and so has the disease—to other villages, other victims.
"We found there was a correlation between actual flu incidents in a community and the number of searches online and the number of tweets online," says del Valle. That was when she first let herself dream about a real-time forecast, not a 10-days-later backcast. Del Valle's group—computer scientists, mathematicians, statisticians, economists, public health professionals, epidemiologists, satellite analysis experts—has continued to work on the problem ever since their first Twitter parsing, in 2011.
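The signal-matching del Valle describes can be sketched as a lagged correlation between an online data stream and reported flu cases. This is a toy illustration: the data, the plain Pearson correlation, and the idea of scanning a few weekly lags are assumptions made for the example, not the Los Alamos group's actual pipeline.

```python
# Toy sketch of the analysis described above: compare a weekly online
# signal (e.g. counts of flu-related searches or tweets) against reported
# flu incidence, trying several lags to see where they line up best.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    x_mean, y_mean = mean(xs), mean(ys)
    cov = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    var_x = sum((x - x_mean) ** 2 for x in xs)
    var_y = sum((y - y_mean) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def best_lag(signal, cases, max_lag=4):
    """Find the shift (in weeks) at which the online signal best lines up
    with reported cases; online chatter often leads official counts."""
    scores = {}
    for lag in range(max_lag + 1):
        paired = list(zip(signal, cases[lag:]))  # signal leads cases by `lag`
        if len(paired) > 2:
            scores[lag] = pearson([s for s, _ in paired], [c for _, c in paired])
    return max(scores, key=scores.get)
```

Because official counts arrive with the 10- to 14-day reporting delay mentioned above, the best alignment tends to appear at a positive lag – which is exactly what makes the online signal useful as an early warning.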
They've had their share of outbreaks to track. Looking back at the 2009 swine flu pandemic, they saw people buying face masks and paying attention to the cleanliness of their hands. "People were talking about whether or not they needed to cancel their vacation," she says, and also whether pork products—which have nothing to do with swine flu—were safe to buy.
They watched internet conversations during the measles outbreak in California. "There's a lot of online discussion about anti-vax sentiment, and people trying to convince people to vaccinate children and vice versa," she says.
Today, they work on predicting the spread of Zika, Chikungunya, and dengue fever, as well as the plain old flu. And according to the CDC, that latter effort is going well.
Since 2015, the CDC has run the Epidemic Prediction Initiative, a competition in which teams like del Valle's submit weekly predictions of how severe the flu will be in particular locations, and occasionally other ailments. Michael Johannson is co-founder and leader of the program, which began with the Dengue Forecasting Project. Its goal, he says, was to predict when dengue cases would blow up, when previously an area just had a low-level baseline of sick people. "You'll get this massive epidemic where all of a sudden, instead of 3,000 to 4,000 cases, you have 20,000 cases," he says. "They kind of come out of nowhere."
But the "kind of" is key: The outbreaks surely come out of somewhere and, if scientists applied research and data the right way, they could forecast the upswing and perhaps dodge a bomb before it hit big-time. Questions about how big, when, and where are also key to the flu.
A big part of these projects is the CDC giving the right researchers access to the right information, and the structure to both forecast useful public-health outcomes and to compare how well the models are doing. The extra information has been great for the Los Alamos effort. "We don't have to call departments and beg for data," says del Valle.
When data isn't available, "proxies"—things like symptom searches, tweets about empty offices, satellite images showing a green, wet, mosquito-friendly landscape—are helpful: You don't have to rely on anyone's health department.
At the latest meeting with all the prediction groups, del Valle's flu models took first and second place. But del Valle wants more than weekly numbers on a government website; she wants that weather-app-inspired fortune-teller, incorporating the many diseases you could get today, standing right where you are. "That's our dream," she says.
This plot shows the correlations between the online data stream, from Wikipedia, and various infectious diseases in different countries. The results of del Valle's predictive models are shown in brown, while the actual number of cases or illness rates are shown in blue.
(Courtesy del Valle)
The goal isn't to turn you into a germophobic agoraphobe. It's to make you more aware when you do go out. "If you know it's going to rain today, you're more likely to bring an umbrella," del Valle says. "When you go on vacation, you always look at the weather and make sure you bring the appropriate clothing. If you do the same thing for diseases, you think, 'There's Zika spreading in Sao Paulo, so maybe I should bring even more mosquito repellent and bring more long sleeves and pants.'"
They're not there yet (don't hold your breath, but do stop touching your mouth). She estimates it's at least a decade away, but advances in machine learning could accelerate that hypothetical timeline. "We're doing baby steps," says del Valle, starting with the flu in the U.S., dengue in Brazil, and other efforts in Colombia, Ecuador, and Canada. "Going from there to forecasting all diseases around the globe is a long way," she says.
But even AccuWeather started small: One man began predicting weather for a utility company, then helping ski resorts optimize their snowmaking. His influence snowballed, and now private forecasting apps, including AccuWeather's, populate phones across the planet. The company's progression hasn't been without controversy—privacy incursions, inaccuracy of long-term forecasts, fights with the government—but it has continued, for better and for worse.
Disease apps, perhaps spun out of a small, unlikely team at a nuclear-weapons lab, could grow and breed in a similar way. And both the controversies and public-health benefits that may someday spin out of them lie in the future, impossible to predict with certainty.