Short Story Contest Winner: "The Gerry Program"
It's an odd sensation knowing you're going to die, but it was a feeling Gerry Ferguson had become relatively acquainted with over the past two years. What most perplexed the terminally ill, he observed, was not the concept of death so much as the continuation of all other life.
Who will mourn me when I'm gone? What trait or idiosyncrasy will people most recall? Will I still be talked of, 100 years from now?
But Gerry didn't worry about these questions. He was comfortable that his legacy would live on, in one form or another. From his cozy flat in the west end of Glasgow, Gerry had managed to put his affairs in order and still find time for small joys.
Feeding the geese in summer at the park just down from his house, reading classics from the teeming bookcase in the living room, talking with his son Michael on Skype. It was Michael who had first suggested reading some of the new works of non-fiction that now littered the large oak desk in Gerry's study.
He was just finishing 'The Master Algorithm' when his shabby grandfather clock chimed six o'clock. Time to call Michael. Crammed into his tiny study, Gerry pulled his computer's webcam close and waved at Michael's smiling face.
"Hi Dad! How're you today?"
"I'm alright, son. How're things in sunny Australia?"
"Hot as always. How's things in Scotland?"
"I'd 'ave more chance gettin' a tan from this computer screen than I do goin' out there."
Michael chuckled. He's got that hearty Ferguson laugh, Gerry thought.
"How's the project coming along?" Michael asked. "Am I going to see it one of these days?"
"Of course," grinned Gerry, "I designed it for you."
Gerry's secret project had been in the works for two years now, ever since they found the growth. He had decided it was better not to tell Michael. He would only worry.
The two men chatted for hours. They discussed Michael's love life (or lack thereof), memories of days walking in the park, and their shared passion, the unending woes of Rangers Football Club. It wasn't until Michael said his goodbyes that Gerry noticed he'd been sitting in the dark for the best part of three hours, his mesh curtains casting a dim orange glow across the room from the street light outside. Time to get back to work.
*
Every night, Gerry sat at his computer, crawling forums, nourishing his project, feeding his knowledge and debating with other programmers. Even at age 82, Gerry knew more than most about algorithms. Never wanting to feel old, and with all the kids so adept at this digital stuff, Gerry figured he should give the Internet a try too. Besides, it kept his brain active and restored some of the sociability he'd lost in the previous decades as old friends passed away and the physical scope of his world contracted.
This night, like every night, Gerry worked away into the wee hours. His back would ache come morning, but this was the only time he truly felt alive these days. From his snug red brick home in Scotland, Gerry could share thoughts and information with strangers from all over the world. It truly was a miracle of modern science!
*
The next day, Gerry woke to the warm amber sun seeping in between a crack in the curtains. Like every morning, his thoughts took a little time to come into focus. Instinctively his hand went to the other side of the bed. Nobody there. Of course; she was gone. Rita, the sweetest woman he'd ever known. Four years this spring, God rest her soul.
Puttering around the cramped kitchen, Gerry heard a knock at the door. Who could that be? He could see two women standing in the hallway, their bodies contorted in the fisheye glass of the peephole. One looked familiar, but Gerry couldn't be sure. He fiddled with the locks and pulled the door open.
"Hi Gerry. How are you today?"
"Fine, thanks," he muttered, still searching his mind for where he'd seen her face before.
Noting the confusion in his eyes, the woman proffered a hand. "Alice, Alice Corgan. I pop round every now and again to check on you."
It clicked. "Ah aye! Come in, come in. Lemme get ya a cuppa." Gerry turned and shuffled into the flat.
As Gerry set about his tiny kitchen, Alice called from the living room, "This is Mandy. She's a care worker too. She's going to pay you occasional visits if that's alright with you."
Gerry poked his head around the doorway. "I'll always welcome a beautiful young lady in ma home. Though, I've tae warn you I'm a married man, so no funny business." He winked and ducked back into the kitchen.
Alice turned to Mandy with a grin. "He's a good man, our Gerry. You'll get along just fine." She lowered her voice. "As I said, with the Alzheimer's, he has to be reminded to take his medication, but he's still mostly self-sufficient. We installed a medi-bot to remind him every day and dispense the pills. If he doesn't respond, we'll get an alert and send someone over."
Mandy nodded and scribbled notes in a pad.
"When I'm gone, Michael will have somethin' to remember me by."
"Also, and this is something we've been working on for a few months now, Gerry is convinced he has something…" her voice trailed off. "He thinks he has cancer. Now, while the Alzheimer's may affect his day-to-day life, it's not at a stage where he needs to be taken into care. The last time we went for a checkup, the doctor couldn't find any sign of cancer. I think it stems from--"
Gerry shouted from the other room: "Does the young lady take sugar?"
"No, I'm fine thanks," Mandy called back.
"Of course you don't," smiled Gerry. "Young lady like yersel' is sweet enough."
*
The following week, Mandy arrived early at Gerry's. He looked unsure at first, but he invited her in.
Sitting on the sofa nursing a cup of tea, Mandy tried to keep things light. "So what do you do in your spare time, Gerry?"
"I've got nothing but spare time these days, even if it's running a little low."
"Do you have any hobbies?"
"Yes actually." Gerry smiled. "I'm makin' a computer program."
Mandy was taken aback. She knew very little about computers herself. "What's the program for?" she asked.
"Well, despite ma appearance, I'm no spring chicken. I know I don't have much time left. Ma son, he lives down in Australia now, he worked on a computer program that uses AI - that's artificial intelligence - to imitate a person."
Mandy still looked confused, so Gerry pressed on.
"Well, I know I've not long left, so I've been usin' this open source code to make ma own for when I'm gone. I've already written all the code. Now I just have to add the things that make it seem like me. I can upload audio, text, even videos of masel'. That way, when I'm gone, Michael will have somethin' to remember me by."
Mandy sat there, stunned. She had no idea anybody could do this, much less an octogenarian from his small, ramshackle flat in Glasgow.
"That's amazing Gerry. I'd love to see the real thing when you're done."
"O' course. I mean, it'll take time. There's so much to add, but I'll be happy to give a demonstration."
Mandy sat there and cradled her mug. Imagine, she thought, being able to preserve yourself, or at least some basic caricature of yourself, forever.
*
As the weeks went on, Gerry slowly added new shades to his coded double. Mandy would leaf through the dusty photo albums on Gerry's bookcase, pointing to photos and asking for the story behind each one. Gerry couldn't always remember but, when he could, the accompanying stories were often hilarious, incredible, and usually a little of both. As he vividly recounted tales of bombing missions over Burma, trips to the beach with a young Michael and, in one particularly interesting story, giving the finger to Margaret Thatcher, Mandy would diligently record them on a Dictaphone to be uploaded to the program.
Gerry loved the company, particularly when he could regale the young woman with tales of his son Michael. One day, as they sat on the sofa flicking through a box of trinkets from his days as a travelling salesman, Mandy asked why he didn't have a smartphone.
He shrugged. "If I'm out 'n about then I want to see the world, not some 2D version of it. Besides, there's nothin' on there for me."
Mandy explained that you could get Skype on a smartphone: "You'd be able to talk with Michael and feed the geese at the park at the same time," she offered.
Gerry seemed interested but didn't mention it again.
"Only thing I'm worried about with ma computer," he remarked, "is if there's another power cut and I can't call Michael. There's been a few this year from the snow 'n I hate not bein' able to reach him."
"Well, if you ever want to use the Skype app on my phone to call him you're welcome," said Mandy. "After all, you just need to add him to my contacts."
Gerry was grateful. "That's a relief, knowing I won't miss out on calling Michael if the computer goes bust."
*
Then, in early spring, just as the first green buds burst forth from the bare branches, Gerry asked Mandy to come by. "Bring that Alice girl if ya can - I know she's excited to see this too."
The next day, Mandy and Alice dutifully filed into the cramped study and sat down on rickety wooden chairs brought from the living room for this special occasion.
With a dramatic throat clearing, Gerry opened the program on his computer. An image of Gerry, somewhat younger than the man himself, flashed up on the screen.
The room was silent.
"Hiya Michael!" AI Gerry blurted. The real Gerry looked flustered and clicked around the screen. "I forgot to put the facial recognition on. Michael's just the go-to name when it doesn't recognize a face." His voice lilted with anxious excitement. "This is Alice," Gerry said proudly to the camera, pointing at Alice, "and this is Mandy."
AI Gerry didn't take his eyes from real Gerry, but grinned. "Hello, Alice. Hiya Mandy." The voice was definitely his, even if the flow of speech was slightly disjointed.
"Hi," Alice and Mandy stuttered.
Gerry beamed at both of them. His eyes flitted between the girls and the screen, perhaps nervous that his digital counterpart wasn't as polished as they'd been expecting.
"You can ask him almost anything. He's not as advanced as the ones they're making in the big studios, but I think Michael will like him."
Alice and Mandy gathered closer to the monitor. A mute Gerry grinned back from the screen. Sitting in his wooden chair, the real Gerry turned to his AI twin and began chattering away: "So, what do you think o' the place? Not bad eh?"
"Oh aye, like what you've done wi' it," said AI Gerry.
"Gerry," Alice cut in. "What did you say about Michael there?"
"Ah, I made this for him. After all, it's the kind o' thing his studio was doin'. I had to clear some space to upload it 'n show you guys, so I had to remove Skype for now, but Michael won't mind. Anyway, Mandy's gonna let me Skype him from her phone."
Mandy pulled her phone out and smiled. "Aye, he'll be able to chat with two Gerrys."
Alice grabbed Mandy by the arm: "What did you tell him?" she whispered, her eyes wide.
"I told him he can use my phone if he wants to Skype Michael. Is that okay?"
Alice turned to Gerry, who was chattering away with his computerized clone. "Gerry, we'll just be one second, I need to discuss something with Mandy."
"Righto," he nodded.
Outside the room, Alice paced up and down the narrow hallway.
Mandy could see how flustered she was. "What's wrong? Don't you like the chatbot? I think it's kinda c-"
"Michael's dead," Alice spluttered.
"What do you mean? He talks to him all the time."
Alice sighed. "He doesn't talk to Michael. See, a few years back, Michael found out he had cancer. He worked for this company that did AI chatbot stuff. When he knew he was dying he--" she groped in the air for the words-- "he built this chatbot thing for Gerry, some kind of super-advanced AI. Gerry had just been diagnosed with Alzheimer's and I guess Michael was worried Gerry would forget him. He designed the chatbot to say he was in Australia to explain why he couldn't visit."
"That's awful," Mandy granted, "but I don't get what the problem is. I mean, surely he can show the AI Michael his own chatbot?"
"No, because you can't get the AI Michael on Skype. Michael just designed the program to look like Skype."
"But then--" Mandy went silent.
"Michael uploaded the entire AI to Gerry's computer before his death. Gerry didn't delete Skype. He deleted the AI Michael."
"So… that's it? He-he's gone?" Mandy's voice cracked. "He can't just be gone, surely he can't?"
The women stood staring at each other. They looked to the door of the study. They could still hear Gerry, gabbing away with his cybercopy.
"I can't go back in there," muttered Mandy. Her voice wavered as she tried to stem the misery rising in her throat.
Alice shook her head and paced the floor. She stopped and stared at Mandy with grim resignation. "We don't have a choice."
When they returned, Gerry was still happily chatting away.
"Hiya girls. Ya wanna ask my handsome twin any other questions? If not, we could get Michael on the phone?"
Neither woman spoke. Gerry clapped his hands and turned gaily to the monitor again: "I cannae wait for ya t'meet him, Gerry. He's gonna be impressed wi' you."
Alice clasped her hands to her mouth. Tears welled in the women's eyes as they watched the old man converse with his digital copy. The heat of the room seemed to swell, becoming insufferable. Mandy couldn't take it anymore. She jumped up, bolted to the door and collapsed against a wall in the hallway. Alice perched on the edge of her seat in a dumb daze, praying for the floor to open and swallow the contents of the room whole.
Oblivious, Gerry and his echo babbled away, the blue glow of the screen illuminating his euphoric face. "Just wait until y'meet him Gerry, just wait."
In The Fake News Era, Are We Too Gullible? No, Says Cognitive Scientist
One of the oddest political hoaxes of recent times was Pizzagate, in which conspiracy theorists claimed that Hillary Clinton and her 2016 campaign chief ran a child sex ring from the basement of a Washington, DC, pizzeria.
Millions of believers spread the rumor on social media, abetted by Russian bots; one outraged netizen stormed the restaurant with an assault rifle and shot open what he took to be the dungeon door. (It actually led to a computer closet.) Pundits cited the imbroglio as evidence that Americans had lost the ability to tell fake news from the real thing, putting our democracy in peril.
Such fears, however, are nothing new. "For most of history, the concept of widespread credulity has been fundamental to our understanding of society," observes Hugo Mercier in Not Born Yesterday: The Science of Who We Trust and What We Believe (Princeton University Press, 2020). In the fifth century BCE, he points out, the historian Thucydides blamed Athens' defeat by Sparta on a demagogue who hoodwinked the public into supporting idiotic military strategies; Plato extended that argument to condemn democracy itself. Today, atheists and fundamentalists decry one another's gullibility, as do climate-change accepters and deniers. Leftists bemoan the masses' blind acceptance of the "dominant ideology," while conservatives accuse those who do revolt of being duped by cunning agitators.
What's changed, all sides agree, is the speed at which bamboozlement can propagate. In the digital age, it seems, a sucker is born every nanosecond.
The Case Against Credulity
Yet Mercier, a cognitive scientist at the Jean Nicod Institute in Paris, thinks we've got the problem backward. To fight disinformation more effectively, he suggests, humans need to stop believing in one thing above all: our own gullibility. "We don't credulously accept whatever we're told—even when those views are supported by the majority of the population, or by prestigious, charismatic individuals," he writes. "On the contrary, we are skilled at figuring out who to trust and what to believe, and, if anything, we're too hard rather than too easy to influence."
He bases those contentions on a growing body of research in neuropsychiatry, evolutionary psychology, and other fields. Humans, Mercier argues, are hardwired to balance openness with vigilance when assessing communicated information. To gauge a statement's accuracy, we instinctively test it from many angles, including: Does it jibe with what I already believe? Does the speaker share my interests? Has she demonstrated competence in this area? What's her reputation for trustworthiness? And, with more complex assertions: Does the argument make sense?
This process, Mercier says, enables us to learn much more from one another than do other animals, and to communicate in a far more complex way—key to our unparalleled adaptability. But it doesn't always save us from trusting liars or embracing demonstrably false beliefs. To better understand why, leapsmag spoke with the author.
How did you come to write Not Born Yesterday?
In 2010, I collaborated with the cognitive scientist Dan Sperber and some other colleagues on a paper called "Epistemic Vigilance," which laid out the argument that evolutionarily, it would make no sense for humans to be gullible. If you can be easily manipulated and influenced, you're going to be in major trouble. But as I talked to people, I kept encountering resistance. They'd tell me, "No, no, people are influenced by advertising, by political campaigns, by religious leaders." I started doing more research to see if I was wrong, and eventually I had enough to write a book.
With all the talk about "fake news" these days, the topic has gotten a lot more timely.
Yes. But on the whole, I'm skeptical that fake news matters very much. And all the energy we spend fighting it is energy not spent on other pursuits that may be better ways of improving our informational environment. The real challenge, I think, is not how to shut up people who say stupid things on the internet, but how to make it easier for people who say correct things to convince people.
"History shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion."
You start the book with an anecdote about your encounter with a con artist several years ago, who scammed you out of 20 euros. Why did you choose that anecdote?
Although I'm arguing that people aren't generally gullible, I'm not saying we're completely impervious to attempts at tricking us. It's just that we're much better than we think at resisting manipulation. And while there's a risk of trusting someone who doesn't deserve to be trusted, there's also a risk of not trusting someone who could have been trusted. You miss out on someone who could help you, or from whom you might have learned something—including figuring out who to trust.
You argue that in humans, vigilance and open-mindedness evolved hand-in-hand, leading to a set of cognitive mechanisms you call "open vigilance."
There's a common view that people start from a state of being gullible and easy to influence, and get better at rejecting information as they become smarter and more sophisticated. But that's not what really happens. It's much harder to get apes than humans to do anything they don't want to do, for example. And research suggests that over evolutionary time, the better our species became at telling what we should and shouldn't listen to, the more open to influence we became. Even small children have ways to evaluate what people tell them.
The most basic is what I call "plausibility checking": if you tell them you're 200 years old, they're going to find that highly suspicious. Kids pay attention to competence; if someone is an expert in the relevant field, they'll trust her more. They're likelier to trust someone who's nice to them. My colleagues and I have found that by age 2 ½, children can distinguish between very strong and very weak arguments. Obviously, these skills keep developing throughout your life.
But you've found that even the most forceful leaders—and their propaganda machines—have a hard time changing people's minds.
Throughout history, there's been this fear of demagogues leading whole countries into terrible decisions. In reality, these leaders are mostly good at feeling the crowd and figuring out what people want to hear. They're not really influencing [the masses]; they're surfing on pre-existing public opinion. We know from a recent study, for instance, that if you match cities in which Hitler gave campaign speeches in the late '20s through early '30s with similar cities in which he didn't give campaign speeches, there was no difference in vote share for the Nazis. Nazi propaganda managed to make Germans who were already anti-Semitic more likely to express their anti-Semitism or act on it. But Germans who were not already anti-Semitic were completely inured to the propaganda.
So why, in totalitarian regimes, do people seem so devoted to the ruler?
It's not a very complex psychology. In these regimes, the slightest show of discontent can be punished by death, or by you and your whole family being sent to a labor camp. That doesn't mean propaganda has no effect, but you can explain people's obedience without it.
What about cult leaders and religious extremists? Their followers seem willing to believe anything.
Prophets and preachers can inspire the kind of fervor that leads people to suicidal acts or doomed crusades. But history shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion. Only when people are ready for extreme actions can a charismatic figure provide the spark that lights the fire.
Once a religion becomes ubiquitous, the limits of its persuasive powers become clear. Every anthropologist knows that in societies that are nominally dominated by orthodox belief systems—whether Christian or Muslim or anything else—most people share a view of God, or the spirit, that's closer to what you find in societies that lack such religions. In the Middle Ages, for instance, you have records of priests complaining of how unruly the people are—how they spend the whole Mass chatting or gossiping, or go on pilgrimages mostly because of all the prostitutes and wine-drinking. They continue pagan practices. They resist attempts to make them pay tithes. It's very far from our image of how much people really bought the dominant religion.
"The mainstream media is extremely reliable. The scientific consensus is extremely reliable."
And what about all those wild rumors and conspiracy theories on social media? Don't those demonstrate widespread gullibility?
I think not, for two reasons. One is that most of these false beliefs tend to be held in a way that's not very deep. People may say Pizzagate is true, yet that belief doesn't really interact with the rest of their cognition or their behavior. If you really believe that children are being abused, then trying to free them is the moral and rational thing to do. But the only person who did that was the guy who took his assault weapon to the pizzeria. Most people just left one-star reviews of the restaurant.
The other reason is that most of these beliefs actually play some useful role for people. Before any ethnic massacre, for example, rumors circulate about atrocities having been committed by the targeted minority. But those beliefs aren't what's really driving the phenomenon. In the horrendous pogrom of Kishinev, Moldova, 100 years ago, you had these stories of blood libel—a child disappeared, typical stuff. And then what did the Christian inhabitants do? They raped the [Jewish] women, they pillaged the wine stores, they stole everything they could. They clearly wanted to get that stuff, and they made up something to justify it.
Where do skeptics like climate-change deniers and anti-vaxxers fit into the picture?
Most people in most countries accept that vaccination is good and that climate change is real and man-made. These ideas are deeply counter-intuitive, so the fact that scientists were able to get them across is quite fascinating. But the environment in which we live is vastly different from the one in which we evolved. There's a lot more information, which makes it harder to figure out who we can trust. The main effect is that we don't trust enough; we don't accept enough information. We also rely on shortcuts and heuristics—coarse cues of trustworthiness. There are people who abuse these cues. They may have a PhD or an MD, and they use those credentials to help them spread messages that are not true and not good. Mostly, they're affirming what people want to believe, but they may also be changing minds at the margins.
How can we improve people's ability to resist that kind of exploitation?
I wish I could tell you! That's literally my next project. Generally speaking, though, my advice is very vanilla. The mainstream media is extremely reliable. The scientific consensus is extremely reliable. If you trust those sources, you'll go wrong in a very few cases, but on the whole, they'll probably give you good results. Yet a lot of the problems that we attribute to people being stupid and irrational are not entirely their fault. If governments were less corrupt, if the pharmaceutical companies were irreproachable, these problems might not go away—but they would certainly be minimized.
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques and algorithms that use artificial intelligence, a branch of computer science in which machines learn and execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for use in patient care is essential to AI's responsible and transparent implementation as the industry grapples with voluminous quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI, a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body, has become highly accurate in detecting aggressive prostate cancer, but it is less reliable at identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved by as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet, such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on all physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing individual data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, says Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
"AI is a paradigm shift with magic power and great potential."
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."