Should We Use Technologies to Enhance Morality?
Our moral ‘hardware’ evolved more than 100,000 years ago, while humans were still scratching out an existence on the savannah. The perils we encountered back then were radically different from those that confront us now. To survive and flourish in the face of complex future challenges, our archaic operating systems might need an upgrade – in non-traditional ways.
Morality refers to standards of right and wrong when it comes to our beliefs, behaviors, and intentions. Broadly, moral enhancement is the use of biomedical technology to improve moral functioning. This could include augmenting empathy, altruism, or moral reasoning, or curbing antisocial traits like outgroup bias and aggression.
The claims related to moral enhancement are grand and polarizing: it’s been both tendered as a solution to humanity’s existential crises and bluntly dismissed as an armchair hypothesis. So, does the concept have any purchase? The answer leans heavily on our definition and expectations.
One issue is that the debate is often carved up in dichotomies – is moral enhancement feasible or unfeasible? Permissible or impermissible? Fact or fiction? On it goes. While these gesture at imperatives, trading in absolutes blurs the realities at hand. A sensible approach must resist extremes and recognize that moral disrupters are already here.
We know that existing interventions, whether they occur unknowingly or on purpose, can modify moral dispositions for better or worse. For instance, neurotoxins can promote antisocial behavior. The ‘lead-crime hypothesis’ links childhood lead exposure to impulsivity, antisocial aggression, and a host of other problems. Mercury has been associated with cognitive deficits that might impair moral reasoning and judgement. And it’s well documented that alcohol makes people more prone to violence.
So, what about positive drivers? Here’s where it gets more tangled.
Medicine has long treated psychiatric disorders with drugs like sedatives and antipsychotics. Yet the Diagnostic and Statistical Manual of Mental Disorders (DSM) makes scant mention of morality despite the moral merits of pharmacotherapy – the effects are implicit and indirect. Such cases are regarded as treatments rather than enhancements.
Conventionally, an enhancement must go beyond what is ‘normal,’ species-typical, or medically necessary – this is known as the ‘treatment-enhancement distinction.’ But boundaries of health and disease are fluid, so whether we call a procedure ‘moral enhancement’ or ‘medical treatment’ is liable to change with shifts in social values, expert opinions, and clinical practices.
Human enhancements are already used for a range of purported benefits: caffeine, smart drugs, and other supplements to boost cognitive performance; cosmetic procedures for aesthetic reasons; and steroids and stimulants for physical advantage. More boldly, cyborgs like Moon Ribas and Neil Harbisson are pushing transpecies boundaries with new kinds of sensory perception. It would be dangerously myopic to assume that moral augmentation is somehow beyond reach.
How might it work?
One possibility for shaping moral temperaments is neurostimulation. These devices use electrodes to deliver a low-intensity current that alters the electromagnetic activity of specific neural regions. For instance, transcranial direct current stimulation (tDCS) can target parts of the brain involved in self-awareness, moral judgement, and emotional decision-making. It has been shown to increase empathy and value-based learning, and to decrease aggression and risk-taking behavior. Many countries already use tDCS to treat pain and depression, but evidence for enhancement effects in healthy subjects is mixed.
Another suggestion is targeting neuromodulators like serotonin and dopamine. Serotonin is linked to prosocial attributes like trust, fairness, and cooperation, while low serotonin activity is thought to motivate desires for revenge and harming others. It’s not as simple as indiscriminately boosting brain chemicals, though. While serotonin levels can be nudged with SSRIs, precise levels are difficult to measure and track, and there’s no scientific consensus on the “optimum” amount or on whether such a value even exists. Fluctuations due to lifestyle factors such as diet, stress, and exercise add further complexity. For now, more research is needed on the significance of neuromodulators and their network dynamics across the moral landscape.
There is a range of other prospects. The ‘love drugs’ oxytocin and MDMA mediate pair bonding, cooperation, and social attachment, although some studies suggest that people with high levels of oxytocin are more aggressive toward outsiders. Lithium is a mood stabilizer that has been shown to reduce aggression in prison populations; beta-blockers like propranolol and the supplement omega-3 have similar effects. Increasingly, brain-computer interfaces augur a world of brave possibilities. Such appeals are not without limitations, but they indicate some of the ways external tools can positively nudge our moral sentiments.
Who needs morally enhancing?
A common worry is that enhancement technologies could be weaponized for social control by authoritarian regimes, or used like the oppressive eugenics of the early 20th century. Fortunately, the realities are far more mundane and such dystopian visions are fantastical. So, what are some actual possibilities?
Some researchers suggest that neurotechnologies could help to reactivate brain regions of those suffering from moral pathologies, including healthy people with psychopathic traits (like a lack of empathy). Another proposal is using such technology on young people with conduct problems to prevent serious disorders in adulthood.
A question is whether these kinds of interventions should be compulsory for dangerous criminals. On the other hand, a voluntary treatment for inmates wouldn’t be so different from existing incentive schemes. For instance, some U.S. jurisdictions already offer drug treatment programs in exchange for early release or instead of prison time. Then there’s the difficult question of how we should treat non-criminal but potentially harmful ‘successful’ psychopaths.
Others argue that if virtues have a genetic component, there is no technological reason why present practices of embryo screening for genetic diseases couldn’t also be used for selecting socially beneficial traits.
Perhaps the most immediate scenario is a kind of voluntary moral therapy, which would use biomedicine to facilitate ideal brain-states to augment traditional psychotherapy. Most of us aren’t always as ethical as we would like – given the option of ‘priming’ yourself to act in consistent accord with your higher values, would you take it? Approaches like neurofeedback and psychedelic-assisted therapy could prove helpful.
What are the challenges?
A general challenge is that of setting. Morality is context dependent – what’s good in one environment may be bad in another and vice versa – so we don’t want to throw the baby out with the bathwater. Of course, common sense tells us that some tendencies are more socially desirable than others: fairness, altruism, and openness are clearly preferred over aggression, dishonesty, and prejudice.
One argument is that remoulding ‘brute impulses’ via biology would not count as moral enhancement: on this view, an action only truly counts as moral if it involves cognition – reasoning, deliberation, judgement. Critics respond that we should be concerned more with ends than means, so ultimately it’s outcomes that matter most.
Another worry is that modifying one biological aspect will have adverse knock-on effects for other valuable traits. Certainly, we must be careful about the network impacts of any intervention. But all stimuli have distributed effects on the body, so it’s really a matter of weighing up the cost/benefit trade-offs as in any standard medical decision.
Is it ethical?
Our values form a big part of who we are – some bioethicists argue that altering morality would pose a threat to character and personal identity. Another claim is that moral enhancement would compromise autonomy by limiting a person’s range of choices and curbing their ‘freedom to fall.’ Any intervention must consider the potential impacts on selfhood and personal liberty, in addition to the wider social implications.
This includes the importance of social and genetic diversity, which is closely tied to considerations of fairness, equality, and opportunity. The history of psychiatry is rife with examples of systematic oppression, like ‘drapetomania’ – the spurious mental illness thought to cause enslaved Africans’ desire to flee captivity. Advocates for using moral enhancement technologies to help kids with conduct problems should be mindful that such children disproportionately come from low-income communities. We must ensure that any habilitative practice doesn’t perpetuate harmful prejudices by unfairly targeting marginalized people.
Then there are concerns that morally-enhanced persons would be vulnerable to predation by those who deliberately avoid moral therapies. This relates to what’s been dubbed the ‘bootstrapping problem’: the likeliest candidates for moral enhancement are precisely the people who benefit from not being morally enhanced. Imagine if every senator were asked to undergo an honesty-boosting procedure before entering public office – would they go willingly? Then again, perhaps a technological truth-serum wouldn’t be such a bad requisite for those in positions of serious social consequence.
Advocates argue that biomedical moral betterment would simply offer another means of pursuing the same goals as established social mechanisms like religion, education, and community, and non-invasive therapies like cognitive-behavioral therapy and meditation. It’s even possible that technological efforts would be more effective. After all, human capacities are the result of environmental influences, and external conditions still coax our biology in unknown ways. Status quo bias for ‘letting nature take its course’ may actually be worse in the long term – failing to use technology for human development may do more harm than good. If we can safely improve ourselves in direct and deliberate ways, then there’s no morally significant difference whether this happens via conventional methods or new technology.
Future prospects
Where speculation about human enhancement has led to hype and technophilia, many bioethicists urge restraint. We can be grounded in current science while anticipating feasible medium-term prospects. Moral enhancement is unlikely to herald any metamorphic post-human utopia (or dystopia), but that doesn’t mean dismissing its transformative potential. In one sense, we should be wary of transhumanist fervour about the salvatory promise of new technology. By the same token, we must resist technofear and alarmist efforts to stall social and scientific progress. Emerging methods will continue to shape morality in subtle and not-so-subtle ways – the critical steps are spotting these shifts and scaffolding them with robust ethical discussion, public engagement, and reasonable policy options. Steering a bright and judicious course requires that we pilot the possibilities of morally-disruptive technologies.
Two-and-a-half-year-old Huckleberry, a blue merle Australian shepherd, pulls hard at her leash; her yelps can be heard by skiers and boarders high above on the chairlift that carries them over the ski patrol hut to the top of the mountain. Huckleberry is an avalanche rescue dog — or avy dog, for short. She lives and works with her owner and handler, a ski patroller at Breckenridge Ski Resort in Colorado. As she watches the trainer play a game of hide-and-seek with six-month-old Lume, a golden retriever and avy dog-in-training, Huckleberry continues to strain at her leash; she loves this game. Hide-and-seek is one of the key training methods for teaching avy dogs the rescue skills they need to find someone caught in an avalanche — skier, snowmobiler, hiker, climber.
Lume’s owner waves a T-shirt in front of the puppy. While another patroller holds him back, Lume’s owner runs away and hides. About a minute later — after a lot of barking — Lume is released and commanded to “search.” He springs free, running around the hut to find his owner, who reacts with great excitement and fanfare. Lume’s scent training will continue for the rest of the ski season (Breckenridge plans to operate through May, or as long as weather permits) and through the off-season. “We make this game progressively harder by not allowing the dog to watch the victim run away,” explains Dave Leffler, a Breckenridge ski patroller and head of the avy dog program, who has owned, trained and raised many avy dogs. Eventually, the trainers “dig an open hole in the snow to duck out of sight and gradually turn the hole into a cave where the dog has to dig to get the victim,” he says.
By the time he is three, Lume, like Huckleberry, will be a fully trained avy dog and will join seven other avy dogs on the Breckenridge ski patrol team. Some of the team members, both human and canine, are also certified to work with Colorado Rapid Avalanche Deployment, a coordinated response team that works with the Summit County Sheriff’s Office on avalanche emergencies outside the ski area’s boundaries.
There have been 19 avalanche deaths in the U.S. this season, eight of them in Colorado, according to avalanche.org, which tracks slides. During the entirety of last season there were 17. Avalanche season runs from November through June, but avalanches can occur year-round.
High tech and high stakes
Complementing avy dogs’ ability to smell people buried in a slide, avalanche detection, rescue and recovery is becoming increasingly high tech. There are transceivers, signal locators, ground scanners and drones, which are considered “game changers” by many in avalanche rescue and recovery.
A drone can provide thermal imaging of objects caught in a slide; what looks like a rock from far away might be a human with a heat signature. Transceivers, also known as beacons, send a signal from an avalanche victim to a companion. Signal locators, like RECCO reflectors which are often sewn directly into gear, can echo back a radar signal sent by a detector; most ski resorts have RECCO detector units.
Research suggests that Ground Penetrating Radar (GPR), an electromagnetic tool used by geophysicists to pull images from inside the ground, could be used to locate an avalanche victim. A new study from the Department of Energy’s Sandia National Laboratories suggests that a computer program developed to pinpoint the source of a chemical or biological terrorist attack could also be used to find someone submerged in an avalanche. The search algorithm allows for small robots (described as cockroach-sized) to “swarm” a search area. Researchers say that this distributed optimization algorithm can help find avalanche victims four times faster than current search mechanisms. For a person buried in an avalanche, the chance of survival plummets after 20 minutes, so every moment counts.
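The article doesn’t spell out how the Sandia routine works, but the flavor of a distributed, swarm-style search can be conveyed with a generic particle-swarm sketch: each simulated robot drifts toward both its own strongest signal reading and the swarm’s shared best reading until the group converges on the source. The short Python sketch below is purely illustrative; the signal model, burial coordinates, and tuning constants are invented for the example and are not drawn from the Sandia work.

import random

def signal_strength(pos, source):
    # Simulated beacon reading: stronger as a robot gets closer to the buried source.
    dx, dy = pos[0] - source[0], pos[1] - source[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def swarm_search(source, n_robots=20, steps=200, area=100.0):
    # Scatter the robots randomly over the search area.
    robots = [[random.uniform(0, area), random.uniform(0, area)] for _ in range(n_robots)]
    velocities = [[0.0, 0.0] for _ in range(n_robots)]
    personal_best = [list(r) for r in robots]
    global_best = list(max(robots, key=lambda p: signal_strength(p, source)))

    for _ in range(steps):
        for i, robot in enumerate(robots):
            for d in range(2):
                # Blend inertia, a pull toward this robot's best reading,
                # and a pull toward the swarm's shared best reading.
                velocities[i][d] = (0.6 * velocities[i][d]
                                    + 1.5 * random.random() * (personal_best[i][d] - robot[d])
                                    + 1.5 * random.random() * (global_best[d] - robot[d]))
                robot[d] += velocities[i][d]
            if signal_strength(robot, source) > signal_strength(personal_best[i], source):
                personal_best[i] = list(robot)
            if signal_strength(robot, source) > signal_strength(global_best, source):
                global_best = list(robot)
    return global_best

victim = (62.0, 31.0)  # hypothetical burial location for the demo
estimate = swarm_search(victim)
print(f"Estimated source: ({estimate[0]:.1f}, {estimate[1]:.1f}); actual: {victim}")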
[Photo: An avy dog in training picks up a scent. Credit: Sarah McLear]
While rescue gear has been evolving, predicting when a slab will fall remains an emerging science — kind of where weather forecasting was in the 1980s. “Avalanche forecasting still relies on documenting avalanches by going out and looking,” says Ethan Greene, director of the Colorado Avalanche Information Center (CAIC). “So if there's a big snowstorm, and as you might remember, most avalanches happened during snowstorms, we could have 10,000 avalanches that release and we document 50,” says Greene. “Avalanche forecasting is essentially pattern recognition,” he adds, along with understanding the layering structure of snow.
However, determining where the hazards lie can be tricky. While a dense layer of snow over a softer, weaker layer may be a recipe for an avalanche, there’s so much variability in snowpack that no one formula can predict the trigger. Further, observing and measuring snow at a single point may not be representative of all nearby slopes. Finally, there’s not enough historical data to help avalanche scientists create better prediction models.
That, however, may be changing.
Last year, an international group of researchers created computer simulations of snow cover using 16 years of meteorological data to forecast avalanche hazards, publishing their research in Cold Regions Science and Technology. They believe their models, which categorize different kinds of avalanches, can support forecasting and determine whether the avalanche is natural (caused by temperature changes, wind, additional snowfall) or artificial (triggered by a human or animal).
With data from two sites in British Columbia and one in Switzerland, researchers built computer simulations of five different avalanche types. “In terms of real time avalanche forecasting, this has potential to fill in a lot of data gaps, where we don't have field observations of what the snow looks like,” says Simon Horton, a postdoctoral fellow with the Simon Fraser University Centre for Natural Hazards Research and a forecaster with Avalanche Canada, who participated in the study. While complex models that simulate snowpack layers have been around for a few decades, they weren’t easy to apply until recently. “It's been difficult to find out how to apply that to actual decision-making and improving safety,” says Horton. If you can derive avalanche problem types from simulated snowpack properties, he says, you’ll learn “a lot about how you want to manage that risk.”
The five categories include “new snow,” which is unstable and slides down the slope; “wet snow,” which forms when rain or heat turns the snowpack slushy; “wind-drifted snow”; “persistent weak layers”; and “old snow.” “That's when there's some type of deeply buried weak layer in the snow that releases without any real change in the weather,” Horton explains. “These ones tend to cause the most accidents.” One step by a person on that structurally weak layer of snow can trigger a slide. Horton is hopeful that computer simulations of avalanche types can be used by scientists in different snow climates to help predict hazard levels.
Greene is doubtful. “If you have six slopes that are lined up next to each other, and you're going to try to predict which one avalanches and the exact dimensions and what time, that's going to be really hard to do. And I think it's going to be a long time before we're able to do that,” says Greene.
What both researchers do agree on, though, is that avalanche prediction really needs better imagery through satellite detection. “Just being able to count the number of avalanches that are out there will have a huge impact on what we do,” Greene says. “[Satellites] will change what we do, dramatically.” In a 2022 paper, scientists at the University of Aberdeen in Scotland used satellites to study two deadly Himalayan avalanches. The imaging helped them determine that sediment from a 2016 ice avalanche, plus subsequent snow avalanches, contributed to the 2021 avalanche that caused a flash flood, killing over 200 people. The researchers say that understanding the avalanches’ characteristics through satellite imagery can show how one such event increases the magnitude of another in the same area.
[Photo: Avy dog trainers hide in dug-out holes in the snow, teaching the dogs to find buried victims. Credit: Sarah McLear]
Lifesaving combo: human tech and Mother Nature’s gear
Even as avalanche forecasting evolves, dogs, with their built-in rescue mechanisms, will remain invaluable. With smell receptors ranging from 800 million for an average dog to 4 billion for scent hounds, canines remain key to finding people caught in slides. (Humans, in comparison, have a meager 12 million.) A new study published in the Journal of Neuroscience revealed that in dogs, smell and vision are connected in the brain, something that has not been found in other animals. “They can detect the smell of their owner's fingerprints on a glass slide six weeks after they touched it,” says Nicholas Dodman, professor emeritus at Cummings School of Veterinary Medicine at Tufts University. “And they can track from a boat where a box filled with meat was buried in the water, 100 feet below,” says Dodman, who is also co-founder and president of the Center for Canine Behavior Studies.
Another recent study, from Queen’s University Belfast in the United Kingdom, further confirms that dogs can smell when humans are stressed. They can also detect the smell of a person’s breath and the smell of the skin cells of a deceased person.
The emerging human-made tech for predicting avalanches and the nature-made tech of dogs’ olfactory talents are the lifesaving “equipment” Leffler believes in. Even as human-made technology develops further, he believes it will be most effective when paired with the millions of smell receptors a dog brings to the job. “It is a combination of technology and the avalanche dog that will always be effective in finding an avalanche victim.”
Living with someone changes your microbiome, new research shows
Some roommate frustration can be expected, whether it’s a sink piled high with crusty dishes or crumbs where a clean tabletop should be. Now, research suggests a less familiar issue: person-to-person transmission of shared bacterial strains in our gut and oral microbiomes. For the first time, the lab of Nicola Segata, a professor of genetics and computational biology at the University of Trento in Italy, has shown that bacteria of the microbiome are transmitted between many individuals, not just infants and their mothers, in ways that can’t be explained by shared diet or geography.
It’s a finding with wide-ranging implications, yet frustratingly few predictable outcomes. Our microbiomes are an ever-growing and changing collection of helpful and harmful bacteria that we begin to accumulate the moment we’re born, but experts are still struggling to unravel why and how bacteria from one person’s gut or mouth become established in another person’s microbiome, as opposed to simply passing through.
“If we are looking at the overall species composition of the microbiome, then there is an effect of age of course, and many other factors,” Segata says. “But if we are looking at where our strains are coming from, 99 percent of them are only present in other people’s guts. They need to come from other guts.”
If we could better understand this process, we might be able to control and use it; perhaps hospital patients could avoid infections from other patients when their microbiome is depleted by antibiotics and their immune system is weakened, for example. But scientists are just beginning to link human microbiomes with various ailments. Growing evidence shows that our microbiomes steer our long-term health, impacting conditions like obesity, irritable bowel syndrome, type 2 diabetes, and cancer.
Previous work from Segata’s lab and others illuminated the ways bacteria are passed from mothers to infants during the first few months of life during vaginal birth, breastfeeding and other close contact. And scientists have long known that people in close proximity tend to share bacteria. But the factors related to that overlap, such as genetics and diet, were unclear, especially outside the mother-baby dyad.
“If we look at strain sharing between a mother and an infant at five years of age, for example, we cannot really tell which was due to transmission at birth and which is due to continued transmission because of contact,” Segata says. Experts hypothesized that the overlap could be explained by bacterial similarities in the environment itself, by genetics, or by bacteria from shared foods colonizing the guts of people in close contact.
In Italy, researchers led by Mireia Valles-Colomer, and including Segata, set out to unravel this mystery. They compared data from 9,715 stool and saliva samples in 31 genomic datasets with existing metadata, zooming in on variations in each bacterial strain down to the individual level. They examined not only mother-child pairs but also people living in the same household, adult twins, and people living in the same village, at a level of detail that wasn’t possible before because of the high cost and the difficulty of retrieving data about interactions between individuals, Segata explained.
“This paper is, with high granularity, quantifying the percent sharing that you expect between different types of social interactions, controlling for things like genetics and diet,” says Sean Gibbons, a microbiome scientist at the nonprofit Institute for Systems Biology. Strain sharing was highest in mother-child pairs, with 96 percent of them sharing strains, and only slightly lower among members of shared households, at 95 percent. At least half of the mother-infant pairs shared 30 percent of their strains; the median was 12 percent among people in shared households. Yet there was no sharing among eight percent of adult twins who lived separately, and 16 percent of people within villages who resided in different households. The results were published in Nature.
It’s not a regional phenomenon. Although the types of bacterial strains varied depending on whether people lived in western or eastern nations — datasets were drawn from 20 countries on five continents — the patterns of sharing were much the same. To establish these links, scientists focused on individual variations in shared bacterial strains, differences that create unique bacterial “fingerprints” in each person, while controlling for variables like diet, demonstrating that the bacteria had been transmitted between people and were not the result of environmental similarities.
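To make the strain-level “fingerprint” idea concrete, here is a minimal, hypothetical Python sketch: it compares two people’s dominant strains species by species and reports what fraction of the species they share carry nearly identical strains. The species names, marker values, and similarity threshold are all invented, and this is a toy illustration rather than the pipeline used in the Nature study.

def strain_sharing_rate(person_a, person_b, max_distance=0.01):
    # Fraction of species present in both people whose dominant strains are
    # nearly identical (strain-marker difference below a small threshold).
    shared_species = set(person_a) & set(person_b)
    if not shared_species:
        return 0.0
    shared_strains = sum(
        1 for species in shared_species
        if abs(person_a[species] - person_b[species]) <= max_distance
    )
    return shared_strains / len(shared_species)

# Each value stands in for a strain-level genetic marker for that species (made up).
mother = {"B. adolescentis": 0.120, "F. prausnitzii": 0.530, "E. coli": 0.310}
child = {"B. adolescentis": 0.121, "F. prausnitzii": 0.530, "A. muciniphila": 0.900}

print(f"Strain-sharing rate: {strain_sharing_rate(mother, child):.0%}")  # both shared species match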
The impact of this bacterial sharing isn’t clear, but it shouldn’t be viewed with trepidation, according to Gibbons.
“The vast majority of these bugs are actually either benign or beneficial to our health, and the fact that we're swapping and sharing them and that we can take someone else's strain and supplement or better diversify our own little garden is not necessarily a bad thing,” he says.
"There are hundreds of billions of dollars of investment capital moving into these microbiome therapeutic companies; bugs as drugs, so to speak,” says Sean Gibbons, a microbiome scientist at the Institute for Systems Biology.
Everyday habits like exercising and eating vegetables promote a healthy, balanced gut microbiome, which is linked to better metabolic and immune function, and fewer illnesses. While many people’s microbiomes contain bacteria like C. diff or E. coli, these bacteria don’t cause diseases in most cases because they’re present in low levels. But a microbiome that’s been wiped out by, say, antibiotics, may no longer keep these bacteria in check, allowing them to proliferate and make us sick.
“A big challenge in the microbiome field is being able to rationally predict whether, if you're exposed to a particular bug, it will stick in the context of your specific microbiome,” Gibbons says.
Gibbons predicts that explorations of microbe-based therapeutics will be “exploding” in the coming decades. “There are hundreds of billions of dollars of investment capital moving into these microbiome therapeutic companies; bugs as drugs, so to speak,” he says. Rather than taking a mass-marketed probiotic, a precise understanding of an individual’s microbiome could help target the introduction of just the right bacteria at just the right time to prevent or treat a particular illness.
The current study did not differentiate between types of contact or relationships among household members sharing bacterial strains, nor did it determine the direction of transmission. So Segata’s current project is examining children in daycare settings and tracking their microbiomes over time to understand the roles genetics and everyday interactions play in how much transmission occurs.
This relatively newfound ability to trace bacterial variants to minute levels has unlocked the chance for scientists to untangle when and how bacteria leap from one microbiome to another. As researchers come to better understand the factors that permit a strain to establish itself within a microbiome, they could uncover new strategies to control these microbes, harnessing the makeup of each microbiome to help people to resist life-altering medical conditions.