Your Digital Avatar May One Day Get Sick Before You Do

Artificial neurons in a concept of artificial intelligence. (© ktsdesign/Fotolia)



Artificial intelligence is everywhere, just not in the way you think it is.

"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."

"And this is, of course, as far from the truth as you can possibly get."

What Exactly Is Artificial Intelligence, Anyway?

Let's start with how you got to this piece. You likely came to it through social media: your Facebook account, your Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.

"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."

The revolution in recent years hasn't come from the methods scientists and other researchers use. The general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made large neural networks practical. These networks, loosely modeled on the human brain, are interconnected computers that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
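To make that concrete, here is a minimal, hypothetical sketch of supervised learning in Python. It uses scikit-learn on synthetic numbers standing in for images; the two "features" are invented for illustration and are not how any real vision system works.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each image has been reduced to two numeric features
# (say, "whisker-like edges" and "ear pointiness"); purely illustrative.
cats = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(500, 2))
not_cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))

X = np.vstack([cats, not_cats])
y = ["cat"] * 500 + ["not cat"] * 500  # the human-supplied labels

model = LogisticRegression().fit(X, y)

# The model echoes whatever vocabulary the labels used. Had the labeler
# written "dog" instead of "cat", it would answer "dog" for the same input.
print(model.predict([[2.1, 1.9]]))  # -> ['cat']
```

The point of the sketch is that the model never "knows" what a cat is; it only learns to reproduce whatever labels a human attached to the training examples.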

"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."

Medicine may be the field where this strength at perception tasks has the greatest influence. It's already having an impact: iPhones use AI to detect cancer, Apple Watches alert wearers to heart problems, and AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors. Every few months, another study demonstrates new possibilities. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)

But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."

The Fundamental Problem That Must Be Solved

To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."

In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because a programmer told it that a furry thing with whiskers and two pointy ears is called one. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." It would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference that humans perform effortlessly, almost without thinking.

Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions a programmer asks it ("Is this a cat?"). Instead, it looks at the data it has, comes up with its own questions and hypotheses, and answers them or puts them to the test. Combining this ability with the frankly insane processing power of modern computing systems could produce game-changing discoveries.
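The contrast can be sketched in a few lines of Python. This is a deliberately simple, hypothetical example (scikit-learn, synthetic data), nothing like the systems Pauwels describes: a clustering algorithm is handed data with no labels and no question, and finds the groups on its own.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Unlabeled measurements that happen to fall into two natural groups.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.4, size=(300, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.4, size=(300, 2))
X = np.vstack([group_a, group_b])

# No labels and no question from a programmer: the algorithm is simply
# asked to find structure, and it discovers the two groups by itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(clusters[:5], clusters[-5:])  # two distinct cluster ids, found unaided
```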

One company in China plans to create digital avatars of individual people, then simulate each person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watch which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life patient prevent those conditions or treat them before they become life-threatening.

That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in its potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a real concern, something for developers, programmers, and scientists to consider as they build the systems of the future.

The Ethical Problem That Deserves More Attention

But the more immediate concern about AI is much more mundane. We tend to think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by people, and people have explicit or implicit biases. Intentionally, or more likely not, they introduce these biases into the code and the training data that form the basis for the AI. Current systems have shown bias against people of color. Facebook tried to rectify the situation and failed. These are two small examples of a larger, potentially systemic problem.
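The mechanism is easy to demonstrate. Below is a minimal, entirely synthetic sketch (hypothetical data, no real demographics): a model trained on data dominated by one group performs well for that group and badly for the underrepresented one, with no malicious intent anywhere in the code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, slope):
    """Synthetic cases where one feature drives the outcome with the given sign."""
    X = rng.normal(size=(n, 1))
    y = (slope * X[:, 0] + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(int)

# Group A dominates the training data; group B follows the opposite pattern.
Xa, ya = make_group(1000, slope=2.0)
Xb, yb = make_group(50, slope=-2.0)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The model fits the majority group and systematically fails the minority.
Xa_test, ya_test = make_group(1000, slope=2.0)
Xb_test, yb_test = make_group(1000, slope=-2.0)
print("group A accuracy:", round(model.score(Xa_test, ya_test), 2))  # high
print("group B accuracy:", round(model.score(Xb_test, yb_test), 2))  # low
```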

It's vital for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.

Noah Davis
Noah Davis is a writer living in Brooklyn. Visit his website at http://www.noahedavis.com.
Podcast: The future of brain health with Percy Griffin

Percy Griffin, director of scientific engagement for the Alzheimer’s Association, joins Leaps.org to discuss the present and future of the fight against dementia.

The Alzheimer's Association

Today's guest is Percy Griffin, director of scientific engagement for the Alzheimer’s Association, a nonprofit focused on speeding up research, finding better ways to detect Alzheimer’s earlier, and other approaches for reducing risk. Percy has a doctorate in molecular cell biology from Washington University and has led important research on Alzheimer’s; you can find a link to his full bio in the show notes, below.

Our topic for this conversation is the present and future of the fight against dementia. Billions of dollars have been spent by the National Institutes of Health and biotechs to research new treatments for Alzheimer's and other forms of dementia, but so far there's been little to show for it. Last year, Aduhelm became the first drug to be approved by the FDA for Alzheimer’s in 20 years, but it's received a raft of bad publicity, with red flags about its effectiveness, side effects and cost.

Meanwhile, 6.5 million Americans have Alzheimer's, and this number could increase to 13 million by 2050. Listen to this conversation if you’re concerned about your own brain health, that of family members getting older, or if you’re just concerned about the future of this country, with experts predicting that the number of people over 65 will increase dramatically in the very near future.

Matt Fuchs
Matt Fuchs is the host of the Making Sense of Science podcast and served previously as the editor-in-chief of Leaps.org. He writes as a contributor to the Washington Post, and his articles have also appeared in the New York Times, WIRED, Nautilus Magazine, Fortune Magazine and TIME Magazine. Follow him @fuchswriter.
Reducing proximity bias in remote work can improve public health and wellbeing

Employers can create a culture of “Excellence From Anywhere” to reduce the risk of inequality among office-centric, hybrid, and fully remote employees.

Photo by Christin Hume on Unsplash

COVID-19 prompted numerous companies to reconsider their approach to the future of work. Many leaders were reluctant to maintain hybrid and remote work options after vaccines became widely available. Yet the emergence of dangerous COVID variants such as Omicron has shown the folly of this mindset.

To mitigate the risks of new variants and other public health threats, and to satisfy the large majority of employees who, in multiple surveys, express a strong preference for a flexible hybrid or fully remote schedule, leaders are increasingly accepting that hybrid and remote options represent the future of work. No wonder a February 2022 survey by the Federal Reserve Bank of Richmond showed that more and more firms are offering hybrid and fully remote work options. These firms expect to have more remote workers next year and more geographically distributed workers.

Although hybrid and remote work mitigates public health risks, it poses another set of concerns for employee wellbeing: the threat of proximity bias. The term refers to the damage done to work culture by inequality among office-centric, hybrid, and fully remote employees.

The difference in time spent in the office leads to concerns ranging from decreased career mobility for those who spend less facetime with their supervisor to resentment against the staff who have the most flexibility in where to work. In fact, a January 2022 survey by the company Slack of over 10,000 knowledge workers and their leaders shows that proximity bias is the top concern about hybrid and remote work, expressed by 41% of executives.

Gleb Tsipursky
Dr. Gleb Tsipursky is an internationally recognized thought leader on a mission to protect leaders from dangerous judgment errors known as cognitive biases by developing the most effective decision-making strategies. A best-selling author, he wrote Resilience: Adapt and Plan for the New Abnormal of the COVID-19 Coronavirus Pandemic and Pro Truth: A Practical Plan for Putting Truth Back Into Politics. His expertise comes from over 20 years of consulting, coaching, speaking, and training as the CEO of Disaster Avoidance Experts, and over 15 years in academia as a behavioral economist and cognitive neuroscientist. He co-founded the Pro-Truth Pledge project.