Your Digital Avatar May One Day Get Sick Before You Do

Artificial neurons in a concept of artificial intelligence. (© ktsdesign/Fotolia)

Artificial intelligence is everywhere, just not in the way you think it is.

"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."

"And this is, of course, as far from the truth as you can possibly get."

What Exactly Is Artificial Intelligence, Anyway?

Let's start with how you got to this piece. You likely came to it through social media: your Facebook account, your Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.

"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."

The revolution in recent years hasn't come from the methods scientists and other researchers use; the general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made neural networks practical at scale. These networks, loosely designed after the human brain, are interconnected computers that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
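To make that kind of "learning" concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from the article or from any system it describes; the synthetic stand-in "images" and the scikit-learn setup are assumptions made only for illustration. A supervised classifier fits itself to examples that arrive pre-labeled, and everything it "knows" comes from those labels.

```python
# Minimal sketch of supervised learning (illustrative only, not any system
# described in the article). Each 64-number vector stands in for a tiny image;
# the label 1 means "cat" and 0 means "not cat", only because we say so.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_features = 2000, 64
labels = rng.integers(0, 2, size=n_samples)               # 1 = "cat", 0 = "not cat"
images = rng.normal(size=(n_samples, n_features)) + 0.5 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0
)

# "Learning" here is nothing more than fitting a function to labeled examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# If the same images had been labeled "dog," the model would happily learn to
# call them dogs; it has no independent notion of what a cat is.
```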

"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."

Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhones use AI to detect cancer, Apple Watches alert wearers to heart problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates more possibility. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)

But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."

The Fundamental Problem That Must Be Solved

To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."

In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." Instead, it would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference that humans perform effortlessly, almost without thinking.

Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost as if it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the frankly insane processing power of modern computing systems could result in game-changing discoveries.
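By contrast, here is an equally minimal and equally hypothetical sketch of unsupervised learning, again not taken from the article: no labels and no pre-set question, just a simple clustering algorithm (k-means, standing in for far more powerful methods) finding structure in synthetic "patient profile" data on its own. Humans only interpret the groupings after the fact.

```python
# Minimal sketch of unsupervised learning (illustrative only). The data are
# synthetic "patient profiles" containing three hidden subgroups that the
# algorithm is never told about; k-means must discover them without labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

group_centers = rng.normal(scale=5.0, size=(3, 10))        # three hidden subgroups
profiles = np.vstack(
    [center + rng.normal(size=(200, 10)) for center in group_centers]
)
rng.shuffle(profiles)                                       # remove ordering hints too

# The algorithm proposes its own grouping of the data; there is no
# "Is this a cat?" style question supplied by a programmer.
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(profiles)
print("discovered cluster sizes:", np.bincount(clusters))
```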

One company in China plans to develop a way to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watching which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life person prevent those conditions from developing or treat them before they become a life-threatening issue.

That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in its potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a real concern, something for developers, programmers, and scientists to consider as they build the systems of the future.

The Ethical Problem That Deserves More Attention

But the more immediate concern about AI is much more mundane. We tend to think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people have explicit or implicit biases. Intentionally, or more likely not, they introduce these biases into the very code that forms the basis for the AI. Current systems have shown bias against people of color; Facebook tried to rectify the situation and failed. These are two small examples of a larger, potentially systemic problem.

It's vital for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.

Noah Davis
Noah Davis is a writer living in Brooklyn. Visit his website at http://www.noahedavis.com.