Your Digital Avatar May One Day Get Sick Before You Do

Artificial neurons in a concept of artificial intelligence. (© ktsdesign/Fotolia)

Artificial intelligence is everywhere, just not in the way you think it is.

"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."

"And this is, of course, as far from the truth as you can possibly get."

What Exactly Is Artificial Intelligence, Anyway?

Let's start with how you got to this piece. You likely came to it through social media: your Facebook account, your Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.

"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."

The revolution in recent years hasn't come from the methods scientists and other researchers use; the general ideas and philosophies have been around since the late 1960s. Instead, the big change has been a dramatic increase in computing power, which has made it practical to train neural networks. These networks, loosely modeled on the human brain, are layers of interconnected artificial neurons that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought was still decades away.

"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."

Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhone apps use AI to help detect cancer, the Apple Watch can alert wearers to heart problems, and AI systems spot tuberculosis and the spread of breast cancer with higher accuracy than human doctors. Every few months, another study demonstrates new possibilities. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)

But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."

The Fundamental Problem That Must Be Solved

To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."

In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because a programmer told it that's what a furry thing with whiskers and two pointy ears is called. If the programmer instead labeled the training images "dogs," the AI wouldn't say "no, that's a cat." It would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference that humans make effortlessly, almost without thinking.
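
To make that concrete, here is a minimal sketch in Python using scikit-learn. It stands in for the real thing: instead of actual photos, each "image" is two made-up numbers (think "whisker length" and "ear pointiness"), and the label names are hypothetical. The point it illustrates is the one above: the model adopts whatever labels it is trained on.

```python
# Minimal sketch of supervised learning: the model learns whatever
# label vocabulary the programmer supplies. Features and labels here
# are invented for illustration, not drawn from any real dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each "image" is two measurements: whisker length, ear pointiness.
cats = rng.normal(loc=[5.0, 8.0], scale=1.0, size=(100, 2))
rabbits = rng.normal(loc=[1.0, 3.0], scale=1.0, size=(100, 2))
X = np.vstack([cats, rabbits])

# Train with the "correct" names...
y_correct = ["cat"] * 100 + ["rabbit"] * 100
clf = LogisticRegression().fit(X, y_correct)
print(clf.predict([[5.2, 7.9]]))  # -> ['cat']

# ...then train on the exact same data with a swapped name. The model
# never objects; it confidently calls the same furry thing a "dog".
y_swapped = ["dog"] * 100 + ["rabbit"] * 100
clf_swapped = LogisticRegression().fit(X, y_swapped)
print(clf_swapped.predict([[5.2, 7.9]]))  # -> ['dog']
```

The second model isn't wrong by its own lights. It has simply learned the mapping it was given, which is exactly why the labels humans choose matter so much.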

Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means the AI isn't answering questions a programmer poses ("Is this a cat?"). Instead, it looks at the data it has, comes up with its own questions and hypotheses, and answers them or puts them to the test. Combining this ability with the frankly insane processing power of modern computer systems could result in game-changing discoveries.
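
For contrast, here is a minimal sketch of the unsupervised side, again in Python with invented synthetic data: a clustering algorithm is handed unlabeled measurements and proposes its own grouping, with no programmer-posed question to answer. (Real genomics pipelines are vastly more elaborate; this only illustrates the supervised/unsupervised distinction.)

```python
# Minimal sketch of unsupervised learning: k-means clustering finds
# structure in unlabeled data on its own. The two features are
# hypothetical "patient measurements" drawn from hidden subpopulations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(150, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(150, 2))
X = np.vstack([group_a, group_b])  # no labels anywhere

# No answers are passed in: the algorithm proposes its own grouping,
# which a researcher would then have to interpret.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5])
print(kmeans.cluster_centers_.round(1))
```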

One company in China plans to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on the avatar, watch which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life patient prevent those conditions from developing or treat them before they become life-threatening.

That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. Its potential for misuse is also terrifying. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) That is a real concern, something for developers, programmers, and scientists to weigh as they build the systems of the future.

The Ethical Problem That Deserves More Attention

But the more immediate concern about AI is much more mundane. We tend to think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by people, and people have explicit or implicit biases. Intentionally, or more likely not, they introduce those biases into the very code that forms the basis of the AI. Some current systems have shown bias against people of color, and Facebook has tried to rectify such problems and failed. These are two small examples of a larger, potentially systemic problem.

It's vital for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.

Noah Davis
Noah Davis is a writer living in Brooklyn. Visit his website at http://www.noahedavis.com.