Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it through social media. Your Facebook account, Twitter feed, or perhaps a Google search. AI influences all of them, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use. The general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made large neural networks practical. These networks, loosely modeled on the human brain, are interconnected computing units that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
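The cat example above is supervised learning in miniature: a model adjusts numeric weights until labeled examples come out right. The sketch below shows the idea with a single artificial "neuron" (logistic regression) trained by gradient descent. The features, data, and names here are invented for illustration; real image classifiers use millions of pixels and parameters, not two hand-made features.

```python
from math import exp

# Toy "images": each picture is reduced to two hand-made features,
# (furriness, pointy_ears). Label 1 means the programmer called it
# "cat", 0 means "not cat". All of this data is invented.
examples = [
    ((1.0, 1.0), 1), ((0.9, 1.0), 1), ((1.0, 0.8), 1),
    ((0.0, 0.1), 0), ((0.2, 0.0), 0), ((0.1, 0.2), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train(examples, epochs=1000, lr=0.5):
    """Fit a single 'neuron' by gradient descent on the log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y  # gradient of the log-loss w.r.t. the raw score
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

w, b = train(examples)

def predict(x1, x2):
    return "cat" if sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5 else "not cat"

# The model only "knows" the word it was given: relabel the training
# data "dog" and the same furry, pointy-eared thing becomes a "dog".
print(predict(0.95, 0.9))  # a whiskered, pointy-eared thing
print(predict(0.05, 0.1))  # something with neither feature
```

Note that nothing in the code understands what a cat is; the word "cat" is just a label the training data attached to a region of feature space.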
"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhone apps use AI to help detect cancer, Apple Watches alert wearers to heart problems, and AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors. Every few months, another study demonstrates more possibility. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Problem That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." Instead, it would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference that humans perform effortlessly, almost without thinking.
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost like it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the staggering processing power of modern computers could result in game-changing discoveries.
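To make the supervised/unsupervised contrast concrete, here is a minimal sketch using k-means clustering, one classic unsupervised method (chosen for illustration, not necessarily the approach Pauwels has in mind). Unlike the cat classifier, no one labels the data; the algorithm discovers the groupings on its own. The data points and the choice of k=2 are invented for this example.

```python
# Unlabeled data points in two dimensions; no one tells the
# algorithm what the groups mean, or even that groups exist.
points = [(0.0, 0.0), (5.0, 5.0), (0.2, 0.1),
          (5.1, 4.9), (0.1, 0.3), (4.8, 5.2)]

def kmeans(points, k=2, iters=20):
    centers = points[:k]  # naive initialization: the first k points
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: each center moves to its cluster's mean.
        new_centers = []
        for i, c in enumerate(clusters):
            if c:
                new_centers.append((sum(p[0] for p in c) / len(c),
                                    sum(p[1] for p in c) / len(c)))
            else:
                new_centers.append(centers[i])  # keep an empty cluster's center
        centers = new_centers
    return centers, clusters

centers, clusters = kmeans(points)
print([len(c) for c in clusters])  # the two groups it found on its own
```

The algorithm never learns what its two clusters "are"; naming and interpreting them is still a human job, which is one reason unsupervised discoveries in medicine will still need doctors and researchers in the loop.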
In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watching which medical conditions present themselves before the person gets sick in real life.
One company in China plans to develop a way to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watching which medical conditions present themselves – cancer, a heart condition, or anything, really – and help the real-life person prevent those conditions from developing or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in the potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people have explicit or implicit biases. Intentionally, or more likely not, they introduce these biases into the very code that forms the basis for the AI. Some current systems show bias against people of color; Facebook tried to rectify such problems and failed. These are two small examples of a larger, potentially systemic problem.
It's vital and necessary for the people developing AI today to be aware of these issues. And, yes, avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be on the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.
Meet the Psychologist Using Psychedelics to Treat Racial Trauma
Monnica Williams was stuck. The veteran psychologist wanted to conduct a study using psychedelics, but her university told her they didn't have the expertise to evaluate it via an institutional review board, which is responsible for providing ethical and regulatory oversight for research that involves human participants. Instead, they directed her to a hospital, whose reviewers turned it down, citing research of a banned substance as unethical.
"I said, 'We're not using illegal psilocybin, we're going through Health Canada,'" Williams said. Psilocybin was banned in Canada in 1974, but can now be obtained with an exemption from Health Canada, the federal government's health policy department. After learning this, the hospital review board told Williams they couldn't review her proposal after all, because she's not affiliated with the hospital.
It's all part of balancing bureaucracy with research goals for Williams, a leading expert on racial trauma and psychedelic medicine, as well as obsessive compulsive disorder (OCD), at the University of Ottawa. She's exploring the use of hallucinogenic substances like MDMA and psilocybin — commonly known as ecstasy and magic mushrooms, respectively — to help people of color address the psychological impacts of systemic racism. A prolific researcher, Williams also works as an expert witness, offering clinical evaluations for racial trauma cases.
Scientists have long known that psychedelics produce an altered state of consciousness and openness to new perspectives. For people with mental health conditions who haven't benefited from traditional therapy, psychedelics may be able to help them discover what's causing their pain or trauma, including racial trauma—the mental and emotional injury spurred by racial bias.
"Using psychedelics can not only bring these pain points to the surface for healing, but can reduce the anxiety or response to these memories and allow them to speak openly about them without the pain they bring," Williams says. Her research harnesses the potential of psychedelics to increase neuroplasticity, which includes the brain's ability to build new pathways.
"People of color are dealing with racism all the time, in large and small ways, and even dealing with racism in healthcare, even dealing with racism in therapy."
But she says therapists of color aren't automatically equipped to treat racial trauma. First, she notes, people of color are "vastly underrepresented in the mental health workforce." This is doubly true in psychedelic-assisted psychotherapy, in which a person is guided through a psychedelic session by a therapist or team of therapists, then processes the experience in subsequent therapy sessions.
"On top of that, the therapists of color are getting the same training that the white therapists are getting, so it's not even really guaranteed that they're going to be any better at helping a person that may have racial trauma emerging as part of their experience," she says.
In her own training to become a clinical psychologist at the University of Virginia, Williams says she was taught "how to be a great psychologist for white people." Yet even people of color, she argues, need specialized training to work with marginalized groups, particularly when it comes to MDMA, psilocybin and other psychedelics. Because these drugs can lower natural psychological defense mechanisms, Williams says, it's important for providers to be specially trained.
"People of color are dealing with racism all the time, in large and small ways, and even dealing with racism in healthcare, even dealing with racism in therapy. So [they] generally develop a lot of defenses and coping strategies to ward off racism so that they can function," she says. This is particularly true with psychedelic-assisted psychotherapy: "One possibility is that you're going to be stripped of your defenses, you're going to be vulnerable. And so you have to work with a therapist who is going to understand that and not enact more racism in their work with you."
Williams has struggled to find funding and institutional approval for research involving psychedelics, or funding for investigations into racial trauma or the impacts of conditions like OCD and post-traumatic stress disorder (PTSD) in people of color. The bulk of her work has centered on OCD, and she hoped to study it in people of color, but found there was little funding for that type of research. In 2020, that started to change as structural racism garnered more media attention.
After the killing of George Floyd, a 46-year-old Black man, by a white police officer in May 2020, Williams was flooded with media requests. "Usually, when something like that happens, I get contacted a lot for a couple of weeks, and it dies off. But after George Floyd, it just never did."
Monnica Williams, clinical psychologist at the University of Ottawa
Williams was no stranger to the questions that soon blazed across headlines: How can we mitigate microaggressions? How do race and ethnicity impact mental health? What terms should we use to discuss racial issues? What constitutes an ally, and why aren't there more of them? Why aren't there more people of color in academia and so many other fields?
Now, she's hoping that the increased attention on racial justice will mean more acceptance for the kind of research she's doing.
In fact, Williams herself has used psychedelics in order to gain a better understanding of how to use them to treat racial trauma. In a study published in January, she and two other Black female psychotherapists took MDMA in a supervised setting, guided by a team of mental health practitioners who helped them process issues that came up as the session progressed. Williams, who was also the study's lead author, found that participants' experiences centered around processing and finding release from racial identities, and, in one case, of simply feeling wholly human without the burden of racial identity for the first time.
The purpose of the study was twofold: to understand how Black women react to psychedelics and to provide safe, firsthand, psychedelic experiences to Black mental health practitioners. One of the other study participants has since gone on to offer psychedelic-assisted psychotherapy to her own patients.
Psychedelic research, and psilocybin in particular, has become a hot topic of late, particularly after Oregon became the first state to legalize it for therapeutic use last November. A survey-based, observational study with 313 participants, published in 2020, paved the way for Williams' more recent MDMA experiments by describing improvements in depression, anxiety and racial trauma among people of color who had used LSD, psilocybin or MDMA in a non-research setting.
Williams and her team included only respondents who reported a moderate to strong psychoactive effect of past psychedelic consumption and believed these experiences provided "relief from the challenging effects of ethnic discrimination." Participants reported a memorable psychedelic experience as well as its acute and lasting effects, completing assessments of psychological insight, mystical experience and emotional challenges during the psychedelic experience, then describing their mental health — including depression, anxiety and trauma symptoms — before and after that experience.
Still, Williams says addressing racism is much more complex than treating racial trauma. "One of the questions I get asked a lot is, 'How can Black people cope with racism?' And I don't really like that question," she says. "I think it's important and I don't mind answering it, but I think the more important question is, how can we end racism? What can Black people do to stop racism that's happening to them and what can we do as a society to stop racism? And people aren't really asking this question."