Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it through social media: your Facebook account, Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use. The general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made neural networks practical. These networks, loosely modeled on the human brain, are interconnected computing units that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhones use AI to detect cancer, Apple Watches alert the wearer to heart problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates more possibility. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Problem That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images "dogs," the AI wouldn't say "no, that's a cat." It would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of common-sense inference that humans perform effortlessly, almost without thinking.
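This label dependence can be made concrete with a toy supervised classifier. The following Python sketch is purely illustrative – the feature values and label names are invented for the example – but it shows the point: a one-nearest-neighbor model reproduces whatever names its training data carried, so relabeling the whiskered examples "dog" makes it call the same animal a dog.

```python
import math

def nearest_neighbor(train, query):
    """Return the label of the training example closest to the query."""
    return min(train, key=lambda ex: math.dist(ex[0], query))[1]

# Invented toy features: (whisker_length_cm, ear_pointiness from 0 to 1)
labeled_cats = [((4.0, 0.9), "cat"), ((3.5, 0.8), "cat")]
labeled_dogs = [((1.0, 0.2), "dog"), ((0.5, 0.3), "dog")]
query = (3.8, 0.85)  # a furry thing with whiskers and pointy ears

print(nearest_neighbor(labeled_cats + labeled_dogs, query))  # -> cat

# Relabel the very same "cat" examples as "dog" and the model agrees:
relabeled = [(features, "dog") for features, _ in labeled_cats]
print(nearest_neighbor(relabeled + labeled_dogs, query))  # -> dog
```

The model never objects to the swapped labels because it has no notion of "cat" beyond the strings attached to its training data.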
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost like it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the frankly insane processing power of modern computer systems could result in game-changing discoveries.
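The contrast with the labeled-cats example can be sketched in a few lines of Python. This is a bare-bones k-means routine on invented toy data (with deterministic center seeding for clarity): it groups measurements into clusters without ever being shown a label, which is the spirit of letting a system find structure in data on its own.

```python
import math

def kmeans(points, k, iters=20):
    """Bare-bones k-means: group points without ever seeing a label.
    Centers are seeded with evenly spaced points for determinism."""
    step = max(1, len(points) // k)
    centers = [points[i * step] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty)
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

# Two unlabeled blobs of invented measurements
data = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25),
        (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
groups = kmeans(data, k=2)
print([len(g) for g in groups])  # -> [3, 3]: two groups, found with no labels
```

Nothing here tells the algorithm what the groups mean; it only discovers that the data falls into two clumps. Real unsupervised systems are vastly more sophisticated, but the absence of labels is the defining feature.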
One company in China plans to develop a way to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on the digital avatar, watching which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life version prevent those conditions from developing or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in the potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by someone or a team, and those people have explicit or implicit biases. Intentionally, or more likely not, they introduce these biases into the very code that forms the basis for the AI. Current systems have a bias against people of color. Facebook tried to rectify the situation and failed. These are two small examples of a larger, potentially systemic problem.
It's vital and necessary for the people developing AI today to be aware of these issues. And, yes, avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be on the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.
Should We Use Technologies to Enhance Morality?
Our moral ‘hardware’ evolved over 100,000 years ago, while humans were still scratching out an existence on the savannah. The perils we encountered back then were radically different from those that confront us now. To survive and flourish in the face of complex future challenges, our archaic operating systems might need an upgrade – in non-traditional ways.
Morality refers to standards of right and wrong when it comes to our beliefs, behaviors, and intentions. Broadly, moral enhancement is the use of biomedical technology to improve moral functioning. This could include augmenting empathy, altruism, or moral reasoning, or curbing antisocial traits like outgroup bias and aggression.
The claims related to moral enhancement are grand and polarizing: it’s been both tendered as a solution to humanity’s existential crises and bluntly dismissed as an armchair hypothesis. So, does the concept have any purchase? The answer leans heavily on our definition and expectations.
One issue is that the debate is often carved up into dichotomies – is moral enhancement feasible or unfeasible? Permissible or impermissible? Fact or fiction? On it goes. While these gesture at important questions, trading in absolutes blurs the realities at hand. A sensible approach must resist extremes and recognize that moral disrupters are already here.
We know that existing interventions, whether they occur unknowingly or on purpose, have the power to modify moral dispositions in ways both good and bad. For instance, neurotoxins can promote antisocial behavior. The ‘lead-crime hypothesis’ links childhood lead-exposure to impulsivity, antisocial aggression, and various other problems. Mercury has been associated with cognitive deficits, which might impair moral reasoning and judgement. It’s well documented that alcohol makes people more prone to violence.
So, what about positive drivers? Here’s where it gets more tangled.
Medicine has long treated psychiatric disorders with drugs like sedatives and antipsychotics. However, there’s scant mention of morality in the Diagnostic and Statistical Manual of Mental Disorders (DSM) despite the moral merits of pharmacotherapy – these effects are implicit and indirect. Such cases are regarded as treatments rather than enhancements.
Conventionally, an enhancement must go beyond what is ‘normal,’ species-typical, or medically necessary – this is known as the ‘treatment-enhancement distinction.’ But boundaries of health and disease are fluid, so whether we call a procedure ‘moral enhancement’ or ‘medical treatment’ is liable to change with shifts in social values, expert opinions, and clinical practices.
Human enhancements are already used for a range of purported benefits: caffeine, smart drugs, and other supplements to boost cognitive performance; cosmetic procedures for aesthetic reasons; and steroids and stimulants for physical advantage. More boldly, cyborgs like Moon Ribas and Neil Harbisson are pushing transpecies boundaries with new kinds of sensory perception. It would be dangerously myopic to assume that moral augmentation is somehow beyond reach.
How might it work?
One possibility for shaping moral temperaments is with neurostimulation devices. These use electrodes to deliver a low-intensity current that alters the electromagnetic activity of specific neural regions. For instance, transcranial Direct Current Stimulation (tDCS) can target parts of the brain involved in self-awareness, moral judgement, and emotional decision-making. It’s been shown to increase empathy and value-based learning, and decrease aggression and risk-taking behavior. Many countries already use tDCS to treat pain and depression, but evidence for enhancement effects on healthy subjects is mixed.
Another suggestion is targeting neuromodulators like serotonin and dopamine. Serotonin is linked to prosocial attributes like trust, fairness, and cooperation, but low activity is thought to motivate desires for revenge and harming others. It’s not as simple as indiscriminately boosting brain chemicals though. While serotonin is amenable to SSRIs, precise levels are difficult to measure and track, and there’s no scientific consensus on the “optimum” amount or on whether such a value even exists. Fluctuations due to lifestyle factors such as diet, stress, and exercise add further complexity. Currently, more research is needed on the significance of neuromodulators and their network dynamics across the moral landscape.
There are a range of other prospects. The ‘love drugs’ oxytocin and MDMA mediate pair bonding, cooperation, and social attachment, although some studies suggest that people with high levels of oxytocin are more aggressive toward outsiders. Lithium is a mood stabilizer that has been shown to reduce aggression in prison populations; beta-blockers like propranolol and the supplement omega-3 have similar effects. Increasingly, brain-computer interfaces augur a world of brave possibilities. Such appeals are not without limitations, but they indicate some ways that external tools can positively nudge our moral sentiments.
Who needs morally enhancing?
A common worry is that enhancement technologies could be weaponized for social control by authoritarian regimes, or used like the oppressive eugenics of the early 20th century. Fortunately, the realities are far more mundane and such dystopian visions are fantastical. So, what are some actual possibilities?
Some researchers suggest that neurotechnologies could help to reactivate brain regions of those suffering from moral pathologies, including healthy people with psychopathic traits (like a lack of empathy). Another proposal is using such technology on young people with conduct problems to prevent serious disorders in adulthood.
A question is whether these kinds of interventions should be compulsory for dangerous criminals. On the other hand, a voluntary treatment for inmates wouldn’t be so different from existing incentive schemes. For instance, some U.S. jurisdictions already offer drug treatment programs in exchange for early release or instead of prison time. Then there’s the difficult question of how we should treat non-criminal but potentially harmful ‘successful’ psychopaths.
Others argue that if virtues have a genetic component, there is no technological reason why present practices of embryo screening for genetic diseases couldn’t also be used for selecting socially beneficial traits.
Perhaps the most immediate scenario is a kind of voluntary moral therapy, which would use biomedicine to facilitate ideal brain-states to augment traditional psychotherapy. Most of us aren’t always as ethical as we would like – given the option of ‘priming’ yourself to act in consistent accord with your higher values, would you take it? Approaches like neurofeedback and psychedelic-assisted therapy could prove helpful.
What are the challenges?
A general challenge is that of setting. Morality is context dependent; what’s good in one environment may be bad in another and vice versa, so we don’t want to throw out the baby with the bathwater. Of course, common sense tells us that some tendencies are more socially desirable than others: fairness, altruism, and openness are clearly preferred over aggression, dishonesty, and prejudice.
One argument is that remoulding ‘brute impulses’ via biology would not count as moral enhancement. This view claims that for an action to truly count as moral it must involve cognition – reasoning, deliberation, judgement – as a necessary part of moral behavior. Critics argue that we should be concerned more with ends rather than means, so ultimately it’s outcomes that matter most.
Another worry is that modifying one biological aspect will have adverse knock-on effects for other valuable traits. Certainly, we must be careful about the network impacts of any intervention. But all stimuli have distributed effects on the body, so it’s really a matter of weighing up the cost/benefit trade-offs as in any standard medical decision.
Is it ethical?
Our values form a big part of who we are – some bioethicists argue that altering morality would pose a threat to character and personal identity. Another claim is that moral enhancement would compromise autonomy by limiting a person’s range of choices and curbing their ‘freedom to fall.’ Any intervention must consider the potential impacts on selfhood and personal liberty, in addition to the wider social implications.
This includes the importance of social and genetic diversity, which is closely tied to considerations of fairness, equality, and opportunity. The history of psychiatry is rife with examples of systematic oppression, like ‘drapetomania’ – the spurious mental illness that was thought to cause African slaves’ desire to flee captivity. Advocates for using moral enhancement technologies to help kids with conduct problems should be mindful that they disproportionately come from low-income communities. We must ensure that any habilitative practice doesn’t perpetuate harmful prejudices by unfairly targeting marginalized people.
Then, there are concerns that morally-enhanced persons would be vulnerable to predation by those who deliberately avoid moral therapies. This relates to what’s been dubbed the ‘bootstrapping problem’: would-be moral enhancement candidates are the types of individuals that benefit from not being morally enhanced. Imagine if every senator was asked to undergo an honesty-boosting procedure prior to entering public office – would they go willingly? Then again, perhaps a technological truth-serum wouldn’t be such a bad requisite for those in positions of stern social consequence.
Advocates argue that biomedical moral betterment would simply offer another means of pursuing the same goals as fixed social mechanisms like religion, education, and community, and non-invasive therapies like cognitive-behavior therapy and meditation. It’s even possible that technological efforts would be more effective. After all, human capacities are the result of environmental influences, and external conditions still coax our biology in unknown ways. Status quo bias for ‘letting nature take its course’ may actually be worse long term – failing to utilize technology for human development may do more harm than good. If we can safely improve ourselves in direct and deliberate ways then there’s no morally significant difference whether this happens via conventional methods or new technology.
Future prospects
Where speculation about human enhancement has led to hype and technophilia, many bioethicists urge restraint. We can be grounded in current science while anticipating feasible medium-term prospects. It’s unlikely moral enhancement heralds any metamorphic post-human utopia (or dystopia), but that doesn’t mean dismissing its transformative potential. In one sense, we should be wary of transhumanist fervour about the salvific promise of new technology. By the same token, we must resist technofear and alarmist efforts to block social and scientific progress. Emerging methods will continue to shape morality in subtle and not-so-subtle ways – the critical steps are spotting and scaffolding these with robust ethical discussion, public engagement, and reasonable policy options. Steering a bright and judicious course requires that we pilot the possibilities of morally-disruptive technologies.
Podcast: The Friday Five - your health research roundup
The Friday Five is a new podcast series in which Leaps.org covers five breakthroughs in research over the previous week that you may have missed. There are plenty of controversies and ethical issues in science – and we get into many of them in our online magazine – but there’s also plenty to be excited about, and this news roundup is focused on inspiring scientific work to give you some momentum headed into the weekend.
Covered in this week's Friday Five:
- Puffer fish chemical for treating chronic pain
- Sleep study on the health benefits of waking up multiple times per night
- Best exercise regimens for reducing the risk of mortality, aka living longer
- AI breakthrough in mapping protein structures with DeepMind
- Ultrasound stickers to see inside your body