Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it through social media. Your Facebook account, Twitter feed, or perhaps a Google search. AI influences all of those things, machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use. The general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made neural networks practical at scale. These networks, loosely modeled on the human brain, are interconnected computers that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact as iPhones use AI to detect cancer, Apple Watches alert wearers to heart problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates more possibility. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Problem That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." It would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of common-sense inference that humans perform effortlessly, almost without thinking.
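For readers who want to see that limitation concretely, here is a minimal sketch in Python (using scikit-learn) of a classifier whose entire "knowledge" of a cat is the label string it was handed during training. The feature values and label names below are invented for illustration and are not taken from any system described in this article.

# A minimal sketch (not any production system described here) of how a
# supervised classifier only "knows" whatever label strings it was given.
from sklearn.neighbors import KNeighborsClassifier

# Pretend each image has been reduced to two numeric features
# (say, "whisker score" and "ear pointiness").
training_features = [[0.9, 0.8], [0.85, 0.9], [0.1, 0.2], [0.15, 0.1]]
training_labels = ["cat", "cat", "not_cat", "not_cat"]  # whatever the programmer typed

model = KNeighborsClassifier(n_neighbors=1)
model.fit(training_features, training_labels)
print(model.predict([[0.88, 0.85]]))  # -> ['cat']

# Relabel the same furry-with-whiskers examples as "dog" and the model
# happily reports "dog". It never pushes back, because the label is just
# a string it was told to associate with those features.
model.fit(training_features, ["dog", "dog", "not_dog", "not_dog"])
print(model.predict([[0.88, 0.85]]))  # -> ['dog']

Swap the label strings and the same furry, whiskered example is, as far as the model is concerned, now a dog.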
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost like it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the frankly insane processing power of modern computer systems could result in game-changing discoveries.
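As a rough illustration of that distinction, the sketch below (again Python with scikit-learn, on invented data) hands an algorithm a pile of unlabeled measurements and asks only that it find structure, rather than answer a pre-set question.

# A minimal sketch of unsupervised learning on invented data: no labels,
# no "is this a cat?" question, just "find whatever groups exist here."
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))   # one hidden group
group_b = rng.normal(loc=3.0, scale=0.3, size=(50, 2))   # another hidden group
data = np.vstack([group_a, group_b])                      # handed over unlabeled

# KMeans partitions the points into clusters it discovers on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:5], clusters[-5:])  # the two groups emerge without any labels

Real unsupervised systems in genomics work on vastly higher-dimensional data, but the principle is the same: the structure comes from the data, not from a human-written answer key.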
One company in China plans to develop a way to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watching which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life version prevent those conditions from developing or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in the potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people have explicit or implicit biases. Intentionally or, more likely, not, they introduce these biases into the very code that forms the basis of the AI. Current systems have a bias against people of color. Facebook tried to rectify the situation and failed. These are two small examples of a larger, potentially systemic problem.
It's vital for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.
A company uses AI to fight muscle loss and unhealthy aging
There’s a growing need to slow down the aging process. The world’s population is getting older and, according to one estimate, 80 million Americans will be 65 or older by 2040. As we age, the risk of many chronic diseases goes up, from cancer to heart disease to Alzheimer’s.
BioAge Labs, a company based in California, is using genetic data to help people stay healthy for longer. CEO Kristen Fortney was inspired by the genetics of people who live long lives and resist many age-related diseases. In 2015, she started BioAge to study them and develop drug therapies based on the company’s learnings.
The team works with special biobanks that have been collecting blood samples and health data from individuals for up to 45 years. Using artificial intelligence, BioAge is able to find the molecular features that distinguish people who have healthy longevity from those who don't.
In December 2022, BioAge published findings on a drug that worked to prevent muscular atrophy, or the loss of muscle strength and mass, in older people. Much of the research on aging has been in worms and mice, but BioAge is focused on human data, Fortney says. “This boosts our chances of developing drugs that will be safe and effective in human patients.”
How it works
With assistance from AI, BioAge measures more than 100,000 molecules in each blood sample, looking at proteins, RNA and metabolites, or small molecules that are produced through chemical processes. The company uses many techniques to identify these molecules, some of which convert the molecules into charged particles and then separate them according to their mass and charge. The resulting data is very complex, with many thousands of data points from patients followed over decades.
BioAge validates its targets by examining whether a pathway going awry is actually linked to the development of diseases, based on the company’s analysis of biobank health records and blood samples. The team uses AI and machine learning to identify these pathways, and the key proteins in the unhealthy pathways become their main drug targets. “The approach taken by BioAge is an excellent example of how we can harness the power of big data and advances in AI technology to identify new drugs and therapeutic targets,” says Lorna Harries, a professor of molecular genetics at the University of Exeter Medical School.
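As a very rough sketch of that general pattern (and not BioAge's actual pipeline), a machine-learning model can be trained on molecular measurements against an outcome label drawn from health records, and the features the model leans on most become candidates for follow-up. Everything below, from the simulated protein levels to the "healthy longevity" label, is invented for illustration.

# An illustrative sketch of one common ML pattern behind target identification:
# rank molecular features by how strongly they separate people with healthy
# longevity from those without. All data and names here are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_people, n_proteins = 200, 500
protein_levels = rng.normal(size=(n_people, n_proteins))  # simulated blood measurements

# Hypothetical outcome label from biobank records: 1 = healthy longevity.
# Protein index 7 is made genuinely informative so the ranking has signal.
healthy = (protein_levels[:, 7] + 0.5 * rng.normal(size=n_people) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(protein_levels, healthy)

# Proteins whose levels most strongly predict the outcome become candidate
# pathways and targets for closer biological validation.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top candidate protein indices:", top)

In practice, a ranking like this is only a starting point; as the article notes, candidate pathways still have to be validated against disease development before any protein becomes a drug target.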
Martin Borch Jensen is the founder of Gordian Biotechnology, a company focused on using gene therapy to treat aging. He says BioAge’s use of AI allows them to speed up the process of finding promising drug candidates. However, it remains a challenge to separate pathologies from aspects of the natural aging process that aren’t necessarily bad. “Some of the changes are likely protective responses to things going wrong,” Jensen says. “Their data doesn’t…distinguish that so they’ll need to validate and be clever.”
Developing a drug for muscle loss
BioAge decided to focus on muscular atrophy because it affects many elderly people, making it difficult to perform everyday activities and increasing the risk of falls. Using the biobank samples, the team modeled different pathways that looked like they could improve muscle health. They found that people who had faster walking speeds, better grip strength and lived longer had higher levels of a protein called apelin.
Apelin is a peptide, or a small protein, that circulates in the blood. It is involved in the process by which exercise increases and preserves muscle mass. BioAge wondered if they could prevent muscular atrophy by increasing signaling in the apelin pathway. Instead of going through the long process of designing a new drug, they decided to repurpose an existing one made by another biotech company, Amgen, which had explored the drug as a way to treat heart failure. It didn't end up working for that purpose, but BioAge took note that the drug did seem to activate the apelin pathway.
BioAge tested its repurposed drug, BGE-105, in a phase 1 clinical trial, where it protected subjects from muscular atrophy compared to a placebo group. Healthy volunteers over age 65 received infusions of the drug during 10 days spent in bed, as if they were on bed rest while recovering from an illness or injury; the elderly are especially vulnerable to muscle loss in this situation. The 11 people taking BGE-105 showed a 100 percent improvement in thigh circumference compared to the 10 people taking the placebo. Ultrasound observations also revealed that the group taking the drug had enhanced muscle quality and a 73 percent increase in muscle thickness. One volunteer taking BGE-105 did have muscle loss compared to the placebo group.
Heather Whitson, the director of the Duke University Center for the Study of Aging and Human Development, says that, overall, the results are encouraging. “The clinical findings so far support the premise that AI can help us sort through enormous amounts of data and identify the most promising points for beneficial interventions.”
More studies are needed to find out which patients benefit the most and whether there are side effects. “I think further studies will answer more questions,” Whitson says, noting that BGE-105 was designed to enhance only one aspect of physiology associated with exercise, muscle strength. But exercise itself has many other benefits on mood, sleep, bones and glucose metabolism. “We don’t know whether BGE-105 will impact these other outcomes,” she says.
The future
BioAge is planning phase 2 trials for muscular atrophy in patients with obesity and those who have been hospitalized in an intensive care unit. Using the data from biobanks, they’ve also developed another drug, BGE-100, to treat chronic inflammation in the brain, a condition that can worsen with age and contributes to neurodegenerative diseases. The team is currently testing the drug in animals to assess its effects and find the right dose.
BioAge envisions that its drugs will have broader implications for health than treating any one specific disease. “Ultimately, we hope to pioneer a paradigm shift in healthcare, from treatment to prevention, by targeting the root causes of aging itself,” Fortney says. “We foresee a future where healthy longevity is within reach for all.”
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
When COVID-19 cases were surging in New York City in early spring, Chitra Mohan, a postdoctoral fellow at Weill Cornell, was overwhelmed with worry. But the pandemic was only part of her anxieties. Having come to the United States from India on a student visa that allowed her to work for a year after completing her degree, she had applied for a two-year extension, typically granted for those in STEM fields. But due to a clerical error—Mohan used an electronic signature instead of a handwritten one—her application was denied and she could no longer work in the United States.
"I was put on unpaid leave and I lost my apartment and my health insurance—and that was in the middle of COVID!" she says.
Meanwhile her skills were very much needed in those unprecedented times. A molecular biologist studying how DNA can repair itself, Mohan was trained in reverse transcription polymerase chain reaction or RT-PCR—a lab technique that detects pathogens and is used to diagnose COVID-19. Mohan wanted to volunteer at testing centers, but because she couldn't legally work in the U.S., she wasn't allowed to help either. She moved to her cousin's house, hired a lawyer, and tried to restore her work status.
"I spent about $4,000 on lawyer fees and another $1,200 to pay for the motions I filed," she recalls. "I had to borrow money from my parents and my cousin because without my salary I just didn't have the $7,000 at hand." But the already narrow window of opportunity slammed completely shut when the Trump administration suspended issuing new visas for foreign researchers in June. All Mohan's attempts were denied. In August, she had to leave the country. "Given the recent work visa ban by the administration, all my options in the U.S. are closed," she wrote a bitter note on Twitter. "I have to uproot my entire life in NY for the past 6 years and leave." She eventually found a temporary position in Calcutta, where she can continue research.
Mohan is hardly alone in her visa saga. Many foreign scholars on H- and J-type visas and other permits that let them remain employed in America have been struggling to keep their right to continue research, which in certain cases is crucial to battling the pandemic. Some had to leave the country, some filed every possible extension to buy time, and others are stuck in their home countries, unable to return. The already cumbersome process of applying for visas and extensions became crippled during the lockdowns. But in June, when President Trump extended and expanded immigration restrictions to cut the number of immigrant workers entering the U.S., the new limits left researchers' projects and careers in limbo—and some in jeopardy.
"We have been a beneficiary of this flow of human capacity and resource investment for many generations—and this is now threatened."
Rakesh Ramachandran, whose computational biology work contributed to one of the first coronavirus studies to map out the virus's protein structures, is stranded in India. In early March, he had travelled there to attend a conference and to visit the American consulate to get his H1 visa stamped for a renewal that had already been granted. The pandemic shut down both the conference and the consulates, and Ramachandran hasn't been able to come back since. The consulates finally reopened in September, but so far the online portal has no available appointment slots. "I'm told to keep trying," Ramachandran says.
The visa restrictions affected researchers worldwide, regardless of discipline or country. Morgane Leroux, a Ph.D. student in neuroscience, had to do her experiments with mice at Gladstone Institutes in America and analyze the data back home at Sorbonne University in France. She had finished her first round of experiments when the lockdowns forced her to return to Paris, and she hasn't been able to come back to resume her work since. "I can't continue the experiments, which is really frustrating," she says, especially because she doesn't know what it means for her Ph.D. "I may have to entirely change my subject," she says, which she doesn't want to do—it would be a waste of time and money.
But besides wreaking havoc in scholars' personal lives and careers, the visa restrictions had—and will continue to have—tremendous deleterious effects on America's research and its global scientific competitiveness. "It's incredibly short-sighted and self-destructing to restrict the immigration of scientists into the U.S.," says Benjamin G. Neel, who directs the Laura and Isaac Perlmutter Cancer Center at New York University. "If they can't come here, they will go elsewhere," he says, causing a brain drain.
Neel in his lab with postdocs. (Courtesy of Neel)
Neel felt the outcomes of the shortsighted policies firsthand. In the past few months, his lab lost two postdoctoral researchers who had made major strides in understanding the biology of several particularly stubborn, treatment-resistant malignancies. One postdoc studied the underlying mechanisms responsible for 90 percent of pancreatic cancers and half of colon cancers. The other devised a new system for modeling ovarian cancer in mice to test new therapeutic drug combinations for the deadliest tumor types—but had to return home to China.
"By working around the clock, she was able to get her paper accepted, but she hasn't been able to train us to use this new system, which can set us back six months," Neel says.
Her discoveries also helped the lab secure about $900,000 in grants for new research. Losing people like this is "literally killing the goose that lays the golden eggs," Neel adds. "If you want to make America poor again, this is the way to do it."
Cassidy R. Sugimoto at Indiana University Bloomington, who studies how scientific knowledge is produced and disseminated, says that scientists are most productive when they are free to move, exchange ideas, and work at labs with the best equipment. Restricting that freedom reduces what they can achieve.
"Several empirical studied demonstrated the benefits to the U.S. by attracting and retaining foreign scientists. The disproportional number of our Nobel Prize winners were not only foreign-born but also foreign-educated," she says. Scientific advancement bolsters the country's economic prowess, too, so turning scholars away is bad for the economy long-term. "We have been a beneficiary of this flow of human capacity and resource investment for many generations—and this is now threatened," Sugimoto adds—because scientists will look elsewhere. "We are seeing them shifting to other countries that are more hospitable, both ideologically and in terms of health security. Many visiting scholars, postdocs, and graduate students who would otherwise come to the United States are now moving to Canada."
It's not only the Ph.D. students and postdocs who are affected. In some cases, even well-established professors who have already made their marks in the field and direct their own labs at prestigious research institutions may have to pack up and leave the country in the next few months. One scientist who directs a prominent neuroscience lab is betting on his visa renewal and a green card application, but if that's denied, the entire lab may be in jeopardy, as many grants hinge on his ability to stay employed in America.
"It's devastating to even think that it can happen," he says—after years of efforts invested. "I can't even comprehend how it would feel. It would be terrifying and really sad." (He asked to withhold his name for fear that it may adversely affect his applications.) Another scientist who originally shared her story for this article, later changed her mind and withdrew, worrying that speaking out may hurt the entire project, a high-profile COVID-19 effort. It's not how things should work in a democratic country, scientists admit, but that's the reality.
Still, some foreign scholars are speaking up. Mehmet Doğan, a physicist at the University of California, Berkeley, who has been fighting a visa extension battle all year, says it's important to push back in an organized fashion with petitions and to engage legislators. "This administration was very creative in finding subtle and not so subtle ways to make our lives more difficult," Doğan says. He adds that the newest rules, proposed by the Department of Homeland Security on September 24, could further limit the time scholars can stay, forcing them into continuous extension battles. That's why the upcoming election might be a turning point for foreign academics. "This election will decide if many of us will see the U.S. as the place to stay and work or whether we look at other countries," Doğan says, echoing the worries of Neel, Sugimoto, and others in academia.
Doğan on Zoom, talking with fellow members of Academic Researchers United, a union of almost 5,000 academic researchers. (Credit: Ceyda Durmaz Dogan)
If this year has shown us anything, it is that viruses and pandemics know no borders as they sweep across the globe. Science, likewise, can't be restrained by borders. "Science is an international endeavor," says Neel—and right now humankind needs unified scientific research more than ever, unhindered by immigration hurdles and visa wars. Humanity's wellbeing, in America and beyond, depends on it.
[Editor's Note: To read other articles in this special magazine issue, visit the beautifully designed e-reader version.]
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.