Short Story Contest Winner: "The Gerry Program"
It's an odd sensation knowing you're going to die, but it was a feeling Gerry Ferguson had become well acquainted with over the past two years. What most perplexed the terminally ill, he observed, was not the concept of death so much as the continuation of all other life.
Who will mourn me when I'm gone? What trait or idiosyncrasy will people most recall? Will I still be talked of, 100 years from now?
But Gerry didn't worry about these questions. He was comfortable that his legacy would live on, in one form or another. From his cozy flat in the west end of Glasgow, Gerry had managed to put his affairs in order and still find time for small joys.
Feeding the geese in summer at the park just down from his house, reading classics from the teeming bookcase in the living room, talking with his son Michael on Skype. It was Michael who had first suggested reading some of the new works of non-fiction that now littered the large oak desk in Gerry's study.
He was just finishing 'The Master Algorithm' when his shabby grandfather clock chimed six o'clock. Time to call Michael. Crammed into his tiny study, Gerry pulled his computer's webcam close and waved at Michael's smiling face.
"Hi Dad! How're you today?"
"I'm alright, son. How're things in sunny Australia?"
"Hot as always. How's things in Scotland?"
"I'd 'ave more chance gettin' a tan from this computer screen than I do goin' out there."
Michael chuckled. He's got that hearty Ferguson laugh, Gerry thought.
"How's the project coming along?" Michael asked. "Am I going to see it one of these days?"
"Of course," grinned Gerry, "I designed it for you."
Gerry's secret project had been in the works for two years now, ever since they found the growth. He had decided it was better not to tell Michael. He would only worry.
The two men chatted for hours. They discussed Michael's love life (or lack thereof), memories of days walking in the park, and their shared passion, the unending woes of Rangers Football Club. It wasn't until Michael said his goodbyes that Gerry noticed he'd been sitting in the dark for the best part of three hours, his mesh curtains casting a dim orange glow across the room from the street light outside. Time to get back to work.
*
Every night, Gerry sat at his computer, crawling forums, nourishing his project, feeding his knowledge and debating with other programmers. Even at age 82, Gerry knew more than most about algorithms. Never wanting to feel old, and with all the kids so adept at this digital stuff, Gerry figured he should give the Internet a try too. Besides, it kept his brain active and restored some of the sociability he'd lost in the previous decades as old friends passed away and the physical scope of his world contracted.
This night, like every night, Gerry worked away into the wee hours. His back would ache come morning, but this was the only time he truly felt alive these days. From his snug red brick home in Scotland, Gerry could share thoughts and information with strangers from all over the world. It truly was a miracle of modern science!
*
The next day, Gerry woke to the warm amber sun seeping in between a crack in the curtains. Like every morning, his thoughts took a little time to come into focus. Instinctively his hand went to the other side of the bed. Nobody there. Of course; she was gone. Rita, the sweetest woman he'd ever known. Four years this spring, God rest her soul.
Puttering around the cramped kitchen, Gerry heard a knock at the door. Who could that be? He could see two women standing in the hallway, their bodies contorted in the fisheye glass of the peephole. One looked familiar, but Gerry couldn't be sure. He fiddled with the locks and pulled the door open.
"Hi Gerry. How are you today?"
"Fine, thanks," he muttered, still searching his mind for where he'd seen her face before.
Noting the confusion in his eyes, the woman proffered a hand. "Alice, Alice Corgan. I pop round every now and again to check on you."
It clicked. "Ah aye! Come in, come in. Lemme get ya a cuppa." Gerry turned and shuffled into the flat.
As Gerry set about his tiny kitchen, Alice called from the living room, "This is Mandy. She's a care worker too. She's going to pay you occasional visits if that's alright with you."
Gerry poked his head around the doorway. "I'll always welcome a beautiful young lady in ma home. Though, I've tae warn you I'm a married man, so no funny business." He winked and ducked back into the kitchen.
Alice turned to Mandy with a grin. "He's a good man, our Gerry. You'll get along just fine." She lowered her voice. "As I said, with the Alzheimer's, he has to be reminded to take his medication, but he's still mostly self-sufficient. We installed a medi-bot to remind him every day and dispense the pills. If he doesn't respond, we'll get a message to send someone over."
Mandy nodded and scribbled notes in a pad.
"Also, and this is something we've been working on for a few months now, Gerry is convinced he has something…" her voice trailed off. "He thinks he has cancer. Now, while the Alzheimer's may affect his day-to-day life, it's not at a stage where he needs to be taken into care. The last time we went for a checkup, the doctor couldn't find any sign of cancer. I think it stems from--"
Gerry shouted from the other room: "Does the young lady take sugar?"
"No, I'm fine thanks," Mandy called back.
"Of course you don't," smiled Gerry. "Young lady like yersel' is sweet enough."
*
The following week, Mandy arrived early at Gerry's. He looked unsure at first, but he invited her in.
Sitting on the sofa nursing a cup of tea, Mandy tried to keep things light. "So what do you do in your spare time, Gerry?"
"I've got nothing but spare time these days, even if it's running a little low."
"Do you have any hobbies?"
"Yes actually." Gerry smiled. "I'm makin' a computer program."
Mandy was taken aback. She knew very little about computers herself. "What's the program for?" she asked.
"Well, despite ma appearance, I'm no spring chicken. I know I don't have much time left. Ma son, he lives down in Australia now, he worked on a computer program that uses AI - that's artificial intelligence - to imitate a person."
Mandy still looked confused, so Gerry pressed on.
"Well, I know I've not long left, so I've been usin' this open source code to make ma own for when I'm gone. I've already written all the code. Now I just have to add the things that make it seem like me. I can upload audio, text, even videos of masel'. That way, when I'm gone, Michael will have somethin' to remember me by."
Mandy sat there, stunned. She had no idea anybody could do this, much less an octogenarian from his small, ramshackle flat in Glasgow.
"That's amazing Gerry. I'd love to see the real thing when you're done."
"O' course. I mean, it'll take time. There's so much to add, but I'll be happy to give a demonstration."
Mandy sat there and cradled her mug. Imagine, she thought, being able to preserve yourself, or at least some basic caricature of yourself, forever.
*
As the weeks went on, Gerry slowly added new shades to his coded double. Mandy would leaf through the dusty photo albums on Gerry's bookcase, pointing to photos and asking for the story behind each one. Gerry couldn't always remember but, when he could, the accompanying stories were often hilarious, incredible, and usually a little of both. As he vividly recounted tales of bombing missions over Burma, trips to the beach with a young Michael and, in one particularly interesting story, giving the finger to Margaret Thatcher, Mandy would diligently record them on a Dictaphone to be uploaded to the program.
Gerry loved the company, particularly when he could regale the young woman with tales of his son Michael. One day, as they sat on the sofa flicking through a box of trinkets from his days as a travelling salesman, Mandy asked why he didn't have a smartphone.
He shrugged. "If I'm out 'n about then I want to see the world, not some 2D version of it. Besides, there's nothin' on there for me."
Mandy explained that you could get Skype on a smartphone: "You'd be able to talk with Michael and feed the geese at the park at the same time," she offered.
Gerry seemed interested but didn't mention it again.
"Only thing I'm worried about with ma computer," he remarked, "is if there's another power cut and I can't call Michael. There's been a few this year from the snow 'n I hate not bein' able to reach him."
"Well, if you ever want to use the Skype app on my phone to call him you're welcome," said Mandy. "After all, you just need to add him to my contacts."
Gerry was flattered. "That's a relief, knowing I won't miss out on calling Michael if the computer goes bust."
*
Then, in early spring, just as the first green buds burst forth from the bare branches, Gerry asked Mandy to come by. "Bring that Alice girl if ya can - I know she's excited to see this too."
The next day, Mandy and Alice dutifully filed into the cramped study and sat down on rickety wooden chairs brought from the living room for this special occasion.
With a dramatic throat clearing, Gerry opened the program on his computer. An image of Gerry, somewhat younger than the man himself, flashed up on the screen.
The room was silent.
"Hiya Michael!" AI Gerry blurted. The real Gerry looked flustered and clicked around the screen. "I forgot to put the facial recognition on. Michael's just the go-to name when it doesn't recognize a face." His voice lilted with anxious excitement. "This is Alice," Gerry said proudly to the camera, pointing at Alice, "and this is Mandy."
AI Gerry didn't take his eyes from real Gerry, but grinned. "Hello, Alice. Hiya Mandy." The voice was definitely his, even if the flow of speech was slightly disjointed.
"Hi," Alice and Mandy stuttered.
Gerry beamed at both of them. His eyes flitted between the girls and the screen, perhaps nervous that his digital counterpart wasn't as polished as they'd been expecting.
"You can ask him almost anything. He's not as advanced as the ones they're making in the big studios, but I think Michael will like him."
Alice and Mandy gathered closer to the monitor. A mute Gerry grinned back from the screen. Sitting in his wooden chair, the real Gerry turned to his AI twin and began chattering away: "So, what do you think o' the place? Not bad eh?"
"Oh aye, like what you've done wi' it," said AI Gerry.
"Gerry," Alice cut in. "What did you say about Michael there?"
"Ah, I made this for him. After all, it's the kind o' thing his studio was doin'. I had to clear some space to upload it 'n show you guys, so I had to remove Skype for now, but Michael won't mind. Anyway, Mandy's gonna let me Skype him from her phone."
Mandy pulled her phone out and smiled. "Aye, he'll be able to chat with two Gerrys."
Alice grabbed Mandy by the arm: "What did you tell him?" she whispered, her eyes wide.
"I told him he can use my phone if he wants to Skype Michael. Is that okay?"
Alice turned to Gerry, who was chattering away with his computerized clone. "Gerry, we'll just be one second, I need to discuss something with Mandy."
"Righto," he nodded.
Outside the room, Alice paced up and down the narrow hallway.
Mandy could see how flustered she was. "What's wrong? Don't you like the chatbot? I think it's kinda c-"
"Michael's dead," Alice spluttered.
"What do you mean? He talks to him all the time."
Alice sighed. "He doesn't talk to Michael. See, a few years back, Michael found out he had cancer. He worked for this company that did AI chatbot stuff. When he knew he was dying he--" she groped in the air for the words-- "he built this chatbot thing for Gerry, some kind of super-advanced AI. Gerry had just been diagnosed with Alzheimer's and I guess Michael was worried Gerry would forget him. He designed the chatbot to say he was in Australia to explain why he couldn't visit."
"That's awful," Mandy conceded, "but I don't get what the problem is. I mean, surely he can show the AI Michael his own chatbot?"
"No, because you can't get the AI Michael on Skype. Michael just designed the program to look like Skype."
"But then--" Mandy went silent.
"Michael uploaded the entire AI to Gerry's computer before his death. Gerry didn't delete Skype. He deleted the AI Michael."
"So… that's it? He-he's gone?" Mandy's voice cracked. "He can't just be gone, surely he can't?"
The women stood staring at each other. They looked to the door of the study. They could still hear Gerry, gabbing away with his cybercopy.
"I can't go back in there," muttered Mandy. Her voice wavered as she tried to stem the misery rising in her throat.
Alice shook her head and paced the floor. She stopped and stared at Mandy with grim resignation. "We don't have a choice."
When they returned, Gerry was still happily chatting away.
"Hiya girls. Ya wanna ask my handsome twin any other questions? If not, we could get Michael on the phone?"
Neither woman spoke. Gerry clapped his hands and turned gaily to the monitor again: "I cannae wait for ya t'meet him, Gerry. He's gonna be impressed wi' you."
Alice clasped her hands to her mouth. Tears welled in the women's eyes as they watched the old man converse with his digital copy. The heat of the room seemed to swell, becoming insufferable. Mandy couldn't take it anymore. She jumped up, bolted to the door and collapsed against a wall in the hallway. Alice perched on the edge of her seat in a dumb daze, praying for the floor to open and swallow the contents of the room whole.
Oblivious, Gerry and his echo babbled away, the blue glow of the screen illuminating his euphoric face. "Just wait until y'meet him Gerry, just wait."
Since the recent reversal of Roe v. Wade — the landmark decision establishing a constitutional right to abortion — the vulnerabilities of reproductive health data and various other information stored on digital devices or shared through the Web have risen to the forefront.
Menstrual period tracking apps are an example of how technologies that collect information from users could be weaponized against abortion seekers. The apps, which help tens of millions of users in the U.S. predict when they’re ovulating, may provide evidence that leads to criminal prosecution in states with abortion bans, says Anton T. Dahbura, executive director of the Johns Hopkins University Information Security Institute. In states where abortion is outlawed, “it’s probably best to not use a period tracker,” he says.
Following the Dobbs v. Jackson ruling in late June that overturned Roe, even women who suffered a miscarriage could be suspected of having an abortion in some cases. While using these apps in anonymous mode may appear more secure, “data is notoriously difficult to perfectly anonymize,” Dahbura says. “Whether the data are stored on the user’s device or in the cloud, there are ways to connect that data to the user.”
Completely concealing one’s tracks in cyberspace poses enormous challenges. “Digital forensics can take advantage of technology such as GPS apps, security cameras, license plate trackers, credit card transactions and bank records to reconstruct a person’s activities,” Dahbura says. “Abortion service providers are also in a world of risk for similar reasons.”
Practicing “good cyber hygiene” is essential. That’s particularly true in states where private citizens may be rewarded for reporting on women they suspect of having an abortion, such as Texas, which passed a so-called bounty hunter law last fall. To help guard against hacking, Dahbura suggests using strong passwords and two-factor authentication when possible while remaining on alert for phishing scams on email or texts.
Another option for safeguarding privacy is to avoid such apps entirely, but that choice will depend on an individual’s analysis of the risks and benefits, says Leah Fowler, research assistant professor at the University of Houston Law Center, Health Law & Policy Institute.
“These apps are popular because people find them helpful and convenient, so I hesitate to tell anyone to get rid of something they like without more concrete evidence of its nefarious uses,” she says. “I also hate the idea that asking anyone capable of becoming pregnant to opt out of all or part of the digital economy could ever be a viable solution. That’s an enormous policy failure. We have to do better than that.”
Instead, Fowler recommends that concerned consumers read the terms of service and privacy policies of the apps they’re using. If some of the terms are unclear, she suggests emailing customer service with questions until the answers are satisfactory. It’s also wise for consumers to research products that meet their specific needs and find out whether other women have raised concerns about specific apps. Users interested in more privacy may want to switch to an app that stores data locally, meaning the data stays on your device, or does not use third-party tracking, so the app-maker is the only company with access to it, she says.
Period tracking apps can be useful for those on fertility journeys, making it easier to store information digitally than on paper charts. But users may want to factor in whether they live in a state with an anti-abortion stance and run the risk of legal issues due to a potential data breach, says Carmel Shachar, executive director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.
Consumers’ risks extend beyond period tracking apps in the post-Roe v. Wade era. “Anything that creates digital breadcrumbs to your reproductive choices and conduct could raise concerns — for example, googling ‘abortion providers near me’ or texting your best friend that you are pregnant but do not want to be,” Shachar says. Women also could incriminate themselves by bringing their phones, which may record geolocation data, to the clinic with them.
The potential universe of abortion-relevant data can include information from a variety of fitness and other biometric trackers, text and social media chat records, call details, purchase histories and medical insurance records, says Rebecca Wexler, faculty co-director of the Berkeley Center for Law & Technology. “These data sources can reveal a pregnant person’s decision to seek or obtain an abortion, as well as reveal a healthcare provider’s provision of abortion services and anyone else’s provision of abortion assistance,” she says.
In some situations, people or companies could inadvertently expose themselves to risk after posting on social media with offers of places for abortion seekers to stay after traveling from states with bans. They could be liable for aiding and abetting abortion. At this point, it’s unclear whether states that ban abortion will try to prosecute residents who seek abortions in other states without bans.
Another possibility is that a woman seeking an abortion will be prosecuted based not only on her phone’s data, but also on the data that law enforcement finds on someone else’s device or a shared computer. As a result, “people in one household may find themselves at odds with each other,” says K Royal, faculty fellow at the Center for Law, Science, and Innovation at Arizona State University’s Sandra Day O'Connor College of Law. “This is a very delicate situation.”
Individuals and corporate executives should research their options before leaving a digital footprint. “Guard your privacy carefully, whether you are seeking help or you are seeking to help someone,” Royal says. While she has come across recommendations from other experts who suggest carrying a second phone that is harder to link to a person’s identity for certain online activities, “it’s not practical on a general basis.”
The privacy of this health data isn’t fully protected by the law because period trackers, texting services and other apps are not healthcare providers — and as a result, there’s no prohibition on sharing the information with a third party under the Health Insurance Portability and Accountability Act of 1996, says Florencia Marotta-Wurgler, a professor who specializes in online consumer contracts and data privacy at the NYU School of Law.
“So, as long as there is valid consent, then it’s fair game unless you say that it violates the reasonable expectations of consumers,” she says. “But this is pretty uncharted territory at the moment.”
As states implement laws granting anyone the power to report suspected or known pregnancies to law enforcement, anti-choice activists are purchasing reproductive health data from companies that make period apps, says Rebecca Herold, chief executive officer of Privacy & Security Brainiacs in Des Moines, Iowa, and a member of the Emerging Trends Working Group at ISACA, an association focused on information technology governance. They could also buy data on search histories and make it available in places like Texas for “bounty hunters” to find out which women have searched for information about abortions.
Some groups are creating their own apps described as providing general medical information on subjects such as pregnancy health. But they are “ultimately intended to ‘catch’ women” — to identify those who are probably pregnant and dissuade them from having an abortion, to launch harassment campaigns against them, or to report them to law enforcement, anti-choice groups and others in states where such prenatal medical care procedures are now restricted or prohibited, Herold says.
In addition to privacy concerns, the reversal of Roe v. Wade raises censorship issues. Facebook and Instagram have started to remove or flag content, particularly as it relates to providing the abortion pill, says Michael Kleinman, director of the Silicon Valley Initiative at Amnesty International USA, a global organization that promotes human rights.
Facebook and Instagram have rules that forbid private citizens from buying, selling or giving away pharmaceuticals, including the abortion pill, according to a social media post by a communications director for Meta, which owns both platforms. In the same post, though, the Meta official noted that the company’s enforcement of this rule has been “incorrect” in some cases.
“It’s terrifying to think that arbitrary decisions by these platforms can dramatically limit the ability of people to access critical reproductive rights information,” Kleinman says. However, he adds, “as it currently stands, the platforms make unilateral decisions about what reproductive rights information they allow and what information they take down.”
Should We Use Technologies to Enhance Morality?
Our moral ‘hardware’ evolved over 100,000 years ago while humans were still scratching out a living on the savannah. The perils we encountered back then were radically different from those that confront us now. To survive and flourish in the face of complex future challenges our archaic operating systems might need an upgrade – in non-traditional ways.
Morality refers to standards of right and wrong when it comes to our beliefs, behaviors, and intentions. Broadly, moral enhancement is the use of biomedical technology to improve moral functioning. This could include augmenting empathy, altruism, or moral reasoning, or curbing antisocial traits like outgroup bias and aggression.
The claims related to moral enhancement are grand and polarizing: it’s been both tendered as a solution to humanity’s existential crises and bluntly dismissed as an armchair hypothesis. So, does the concept have any purchase? The answer leans heavily on our definition and expectations.
One issue is that the debate is often carved up in dichotomies – is moral enhancement feasible or unfeasible? Permissible or impermissible? Fact or fiction? On it goes. While these gesture at imperatives, trading in absolutes blurs the realities at hand. A sensible approach must resist extremes and recognize that moral disrupters are already here.
We know that existing interventions, whether they occur unknowingly or on purpose, have the power to modify moral dispositions in ways both good and bad. For instance, neurotoxins can promote antisocial behavior. The ‘lead-crime hypothesis’ links childhood lead-exposure to impulsivity, antisocial aggression, and various other problems. Mercury has been associated with cognitive deficits, which might impair moral reasoning and judgement. It’s well documented that alcohol makes people more prone to violence.
So, what about positive drivers? Here’s where it gets more tangled.
Medicine has long treated psychiatric disorders with drugs like sedatives and antipsychotics. However, there’s scant mention of morality in the Diagnostic and Statistical Manual of Mental Disorders (DSM) despite the moral merits of pharmacotherapy – these effects are implicit and indirect. Such cases are regarded as treatments rather than enhancements.
Conventionally, an enhancement must go beyond what is ‘normal,’ species-typical, or medically necessary – this is known as the ‘treatment-enhancement distinction.’ But boundaries of health and disease are fluid, so whether we call a procedure ‘moral enhancement’ or ‘medical treatment’ is liable to change with shifts in social values, expert opinions, and clinical practices.
Human enhancements are already used for a range of purported benefits: caffeine, smart drugs, and other supplements to boost cognitive performance; cosmetic procedures for aesthetic reasons; and steroids and stimulants for physical advantage. More boldly, cyborgs like Moon Ribas and Neil Harbisson are pushing transpecies boundaries with new kinds of sensory perception. It would be dangerously myopic to assume that moral augmentation is somehow beyond reach.
How might it work?
One possibility for shaping moral temperaments is with neurostimulation devices. These use electrodes to deliver a low-intensity current that alters the electromagnetic activity of specific neural regions. For instance, transcranial Direct Current Stimulation (tDCS) can target parts of the brain involved in self-awareness, moral judgement, and emotional decision-making. It’s been shown to increase empathy and value-based learning, and decrease aggression and risk-taking behavior. Many countries already use tDCS to treat pain and depression, but evidence for enhancement effects on healthy subjects is mixed.
Another suggestion is targeting neuromodulators like serotonin and dopamine. Serotonin is linked to prosocial attributes like trust, fairness, and cooperation, but low activity is thought to motivate desires for revenge and harming others. It’s not as simple as indiscriminately boosting brain chemicals though. While serotonin is amenable to SSRIs, precise levels are difficult to measure and track, and there’s no scientific consensus on the “optimum” amount or on whether such a value even exists. Fluctuations due to lifestyle factors such as diet, stress, and exercise add further complexity. Currently, more research is needed on the significance of neuromodulators and their network dynamics across the moral landscape.
There are a range of other prospects. The ‘love drugs’ oxytocin and MDMA mediate pair bonding, cooperation, and social attachment, although some studies suggest that people with high levels of oxytocin are more aggressive toward outsiders. Lithium is a mood stabilizer that has been shown to reduce aggression in prison populations; beta-blockers like propranolol and the supplement omega-3 have similar effects. Increasingly, brain-computer interfaces augur a world of brave possibilities. Such appeals are not without limitations, but they indicate some ways that external tools can positively nudge our moral sentiments.
Who needs morally enhancing?
A common worry is that enhancement technologies could be weaponized for social control by authoritarian regimes, or used like the oppressive eugenics of the early 20th century. Fortunately, the realities are far more mundane and such dystopian visions are fantastical. So, what are some actual possibilities?
Some researchers suggest that neurotechnologies could help to reactivate brain regions of those suffering from moral pathologies, including healthy people with psychopathic traits (like a lack of empathy). Another proposal is using such technology on young people with conduct problems to prevent serious disorders in adulthood.
A question is whether these kinds of interventions should be compulsory for dangerous criminals. On the other hand, a voluntary treatment for inmates wouldn’t be so different from existing incentive schemes. For instance, some U.S. jurisdictions already offer drug treatment programs in exchange for early release or instead of prison time. Then there’s the difficult question of how we should treat non-criminal but potentially harmful ‘successful’ psychopaths.
Others argue that if virtues have a genetic component, there is no technological reason why present practices of embryo screening for genetic diseases couldn’t also be used for selecting socially beneficial traits.
Perhaps the most immediate scenario is a kind of voluntary moral therapy, which would use biomedicine to facilitate ideal brain-states to augment traditional psychotherapy. Most of us aren’t always as ethical as we would like – given the option of ‘priming’ yourself to act in consistent accord with your higher values, would you take it? Approaches like neurofeedback and psychedelic-assisted therapy could prove helpful.
What are the challenges?
A general challenge is that of setting. Morality is context-dependent; what’s good in one environment may be bad in another and vice versa, so we don’t want to throw out the baby with the bathwater. Of course, common sense tells us that some tendencies are more socially desirable than others: fairness, altruism, and openness are clearly preferred over aggression, dishonesty, and prejudice.
One argument is that remoulding ‘brute impulses’ via biology would not count as moral enhancement. This view claims that for an action to truly count as moral it must involve cognition – reasoning, deliberation, judgement – as a necessary part of moral behavior. Critics argue that we should be concerned more with ends rather than means, so ultimately it’s outcomes that matter most.
Another worry is that modifying one biological aspect will have adverse knock-on effects for other valuable traits. Certainly, we must be careful about the network impacts of any intervention. But all stimuli have distributed effects on the body, so it’s really a matter of weighing up the cost/benefit trade-offs as in any standard medical decision.
Is it ethical?
Our values form a big part of who we are – some bioethicists argue that altering morality would pose a threat to character and personal identity. Another claim is that moral enhancement would compromise autonomy by limiting a person’s range of choices and curbing their ‘freedom to fall.’ Any intervention must consider the potential impacts on selfhood and personal liberty, in addition to the wider social implications.
This includes the importance of social and genetic diversity, which is closely tied to considerations of fairness, equality, and opportunity. The history of psychiatry is rife with examples of systematic oppression, like ‘drapetomania’ – the spurious mental illness that was thought to cause African slaves’ desire to flee captivity. Advocates for using moral enhancement technologies to help kids with conduct problems should be mindful that they disproportionately come from low-income communities. We must ensure that any habilitative practice doesn’t perpetuate harmful prejudices by unfairly targeting marginalized people.
Then, there are concerns that morally-enhanced persons would be vulnerable to predation by those who deliberately avoid moral therapies. This relates to what’s been dubbed the ‘bootstrapping problem’: would-be moral enhancement candidates are the types of individuals that benefit from not being morally enhanced. Imagine if every senator was asked to undergo an honesty-boosting procedure prior to entering public office – would they go willingly? Then again, perhaps a technological truth-serum wouldn’t be such a bad requisite for those in positions of stern social consequence.
Advocates argue that biomedical moral betterment would simply offer another means of pursuing the same goals as established social mechanisms like religion, education, and community, and non-invasive therapies like cognitive-behavior therapy and meditation. It’s even possible that technological efforts would be more effective. After all, human capacities are the result of environmental influences, and external conditions still coax our biology in unknown ways. Status quo bias for ‘letting nature take its course’ may actually be worse long term – failing to utilize technology for human development may do more harm than good. If we can safely improve ourselves in direct and deliberate ways then there’s no morally significant difference whether this happens via conventional methods or new technology.
Future prospects
Where speculation about human enhancement has led to hype and technophilia, many bioethicists urge restraint. We can be grounded in current science while anticipating feasible medium-term prospects. It’s unlikely moral enhancement heralds any metamorphic post-human utopia (or dystopia), but that doesn’t mean dismissing its transformative potential. In one sense, we should be wary of transhumanist fervour about the salvatory promise of new technology. By the same token we must resist technofear and alarmist efforts to stall social and scientific progress. Emerging methods will continue to shape morality in subtle and not-so-subtle ways – the critical steps are spotting and scaffolding these with robust ethical discussion, public engagement, and reasonable policy options. Steering a bright and judicious course requires that we pilot the possibilities of morally-disruptive technologies.