Would you leave your small child in the care of a robot for several hours a day? It may sound laughable at first, but think carefully.
"Given the huge amounts of money we pay for childcare, a [robot caregiver] is a very attractive proposition."
Robots that can care for children would be a godsend to many parents, especially the financially strapped. In the U.S., 62 percent of women who gave birth in 2016 worked outside the home, and day care costs are often exorbitant. In California, for instance, the annual cost for day care for a single child averages over $22,000. The price is lower in some states, but it still accounts for a hefty chunk of the typical family's budget.
"We're talking about the Holy Grail of parenting," says Zoltan Istvan, a technology consultant and futurist. "Imagine a robot that could assume 70 percent to 80 percent of the caregiver's role for your child. Given the huge amounts of money we pay for childcare, that's a very attractive proposition."
Both China and Japan are on the leading edge of employing specially designed social robots for the care of children. Due to long work schedules and shifting demographics, and, in China, the long-term (but now defunct) one-child policy, both countries have a severe shortage of family caregivers. Enter the iPal, a child-sized humanoid robot with a round head, expressive face and articulated fingers, which can keep children engaged and entertained for hours on end. According to its manufacturer, AvatarMind Robot Technology, iPal is already selling like hotcakes in Asia and is expected to be available in the U.S. within the next year. The standard version of iPal sells for $2,499, and it's not the only robot claimed to be suitable for childcare: SoftBank's humanoid models Pepper and NAO, also marketed as child-friendly social robots, are being fine-tuned for similar roles.
iPal talks, dances, plays games, reads stories and plugs into social media and the internet. According to AvatarMind, over time iPal learns your child's likes and dislikes, and can independently learn more about subjects your child is interested in to boost learning. In addition, it will wake your child up in the morning and tell him when it's time to get dressed, brush his teeth or wash his hands. If your child has diabetes, it will remind her when it's time to check her blood sugar. But iPal isn't just a fancy appliance that mechanically performs these functions; it does so with "personality."
The robot has an "emotion management system" that detects your child's emotions and mirrors them (unless your child is sad, in which case it tries to cheer him up). But it's not as if iPal has the kind of emotion chip long sought by Star Trek's android Data. What it does is emotional simulation (what some would call emotional dishonesty), since it doesn't actually feel anything. But research has shown that this lack of authenticity doesn't really matter when it comes to the human response to feigned emotion.
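How little machinery such simulation requires is worth pausing on. Here is a minimal, purely hypothetical sketch of rule-based emotion mirroring in Python; AvatarMind has not published iPal's internals, and every name below is invented for illustration:

```python
# Toy sketch of rule-based "emotion mirroring": not iPal's actual
# (proprietary) logic; all names here are hypothetical.

MIRRORABLE = {"happy", "excited", "curious", "calm"}

def respond(detected_emotion: str) -> str:
    """Echo the child's detected emotion, except sadness,
    which triggers a cheer-up routine instead."""
    if detected_emotion == "sad":
        return "cheer_up"        # e.g., play the child's favorite song
    if detected_emotion in MIRRORABLE:
        return detected_emotion  # mirror the emotion back
    return "neutral"             # fallback for anything unrecognized

print(respond("happy"))  # -> "happy"
print(respond("sad"))    # -> "cheer_up"
```

The point of the sketch is how little "feeling" is required: a lookup and a branch can pass for empathy.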
Children, and even adults, tend to respond to "emotional" robots as though they're alive and sentient even when we've seen all the wires and circuit boards that underlie their wizardry. In fact, we're hardwired to respond to them as though they are human beings in a real relationship with us.
The question is whether the relationships we develop with robots cause social maladaptation, especially in the most vulnerable among us: young children just learning how to connect and interact with others. Could a robot in fact come close to providing the authentic back-and-forth that helps children develop empathy, reciprocity, and self-esteem? And could steady engagement with a robot nanny diminish precious time needed for real family bonding?
It depends on whom you ask.
Because iPal is voice-activated, it frees children to learn through interaction in a way that's more natural than play with traditional toys, says Dr. Daniel Xiong, Co-founder and Chief Technology Officer at AvatarMind. "iPal is like a 'real' family member with you whenever you need it," he says.
Xiong doesn't put a time limit on how long a child should interact with iPal on a daily basis. He sees the relationship between the child and the robot as healthy, though he admits that the technology needs to advance substantially before iPal could take the place of a human babysitter.
It's no coincidence that many toymakers and manufacturers are designing cute robots that look and behave like real children or animals, says Sherry Turkle, a Professor of the Social Studies of Science and Technology at MIT. "When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring," she has written in The Washington Post. "They are designed to be cute, to provide a nurturing response" from the child. "And when it comes to sociable AI, nurturance is the killer app: We nurture what we love, and we love what we nurture."
The problem is that we get lulled into thinking that we're in an actual relationship, when a robot can't possibly love us back. If adults have these vulnerabilities, what might such lopsided relationships do to the emotional development of a small child? Turkle notes that while we tend to ascribe a mind and emotions to a socially interactive robot, "Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love."
Still, is active, playful engagement with a robot for a few hours a day any more harmful than several hours in front of a TV or with an iPad? Some, like Xiong, regard interacting with a robot as better than mere passive entertainment. iPal's manufacturer says that the robot can't replace parents or teachers and is best used by three- to eight-year-olds after school, while they wait for their parents to get off work. But as robots become ever more sophisticated, they're expected to become more and more captivating, and to perform more of the tasks of day-to-day care.
Some studies, performed by Turkle and her MIT colleague Cynthia Breazeal, have revealed a darker side to child-robot interaction. Turkle has reported extensively on these studies in The Washington Post and in her 2011 book, Alone Together: Why We Expect More from Technology and Less from Each Other. Most children love robots, but some act out their inner bully on the hapless machines, hitting and kicking them and otherwise trying to hurt them. The trouble is that the robot can't fight back, teaching children that they can bully and abuse without consequences. Such harmful behavior could carry over into the child's human relationships.
And it turns out that communicative machines don't actually teach kids good communication skills. It's well known that parent-child communication in the first three years of life sets the stage for a child's intellectual and academic success. Verbal back-and-forth with parents and caregivers is like food for a child's growing brain. One study published in JAMA Pediatrics showed that babies who played with electronic toys, like the popular robot dog AIBO, showed a decrease in both the quantity and quality of their language skills.
Anna V. Sosa of the Child Speech and Language Lab at Northern Arizona University studied 26 infants, ages 10 to 16 months, to compare the growth of their language skills after they played with three types of toys: electronic toys like a baby laptop and a talking farm; traditional toys like wooden puzzles and building blocks; and books read aloud by their parents.
The play that produced the most growth in verbal ability was having books read to them, followed by play with traditional toys. Language gains after playing with electronic toys came dead last: this form of play involved the fewest adult words, the least conversational turn-taking with parents, and the fewest verbalizations from the children. While the study sample was small, it's not hard to extrapolate that no electronic toy, or even a more capable robot, could supply the intimate responsiveness of a parent reading stories to a child, explaining new words, answering the child's questions, and modeling the kind of back-and-forth interaction that promotes empathy and reciprocity in human relationships.
Research suggests that the main problem with leaving children in the care of robots on a regular basis is the risk of stunted, unhealthy emotional development. In Alone Together, Turkle asks: What are we saying to children about their importance to us when we're willing to outsource their care to a robot? A child might be superficially entertained by the robot while her self-esteem is systematically undermined.
Two of the most vocal critics of robot nannies are researchers at the University of Sheffield in the U.K., Noel and Amanda Sharkey. In an article published in the journal Interaction Studies, they claim that the overuse of childcare robots could have serious consequences for the psychological and emotional wellbeing of children.
They acknowledge that limited use of robots can have positive effects like keeping a child safe from physical harm, allowing remote monitoring and supervision by parents, keeping a child entertained, and stimulating an interest in science and engineering. But the Sharkeys see the overuse of robots as a source of emotional alienation between parents and children. Just regularly plopping a child down with a robot for hours of interaction could be a form of neglect that panders to busy parents at the cost of a child's emotional development.
Robots, the Sharkeys argue, prey upon a child's natural tendency to anthropomorphize, drawing children into a pseudo-relationship with a machine that can never return their affection. This can be seen as a form of emotional exploitation: a machine that promises connection but can never truly deliver it. Furthermore, as robots develop more intimate skills such as bathing, feeding and changing diapers, children will lose out on some of the most fundamental and precious bonding activities with their parents.
Critics say that children's natural ability to bond is prime territory for exploitation by toy and robot manufacturers, who ultimately have a commercial agenda. The Sharkeys noted one study in which a state-of-the-art robot was employed in a daycare center. The children, ages 10 to 20 months, bonded more deeply with the robot than with a teddy bear. It's not hard to see that starting the robot-bonding process early in life is good for robot business, as babies and toddlers graduate to increasingly sophisticated machines.
"It is possible that exclusive or near exclusive care of a child by a robot could result in cognitive and linguistic impairments," say the Sharkeys. They cite the danger of a child developing what is called in psychology a pathological attachment disorder. Attachment disorders occur when parents are unpredictable or neglectful in their emotional responsiveness. The resulting shaky bond interferes with a child's ability to feel trust, pleasure, safety, and comfort in the presence of the parent. Unhealthy patterns of attachment include "insecure attachment," a form of anxiety that arises when a child cannot trust his caregiver with meeting his emotional needs. Children with attachment disorders may anxiously avoid attachments and may not be able to experience empathy, the cornerstone of relationships. Such patterns can follow a child throughout life and infect every other relationship they have.
One example of the inadequacy of robot nannies lies in the pre-programmed emotional responses in their repertoires. They're designed to detect and mirror a child's emotions and do things like play a child's favorite song when he's crying or in distress. But such a response could be the height of insensitivity: it discounts and belittles what may be a child's authentic response to an upsetting turn of events, like a scraped knee from a fall. A robot playing a catchy jingle is a far cry from having Mom clean and dress the wound and, perhaps more importantly, kiss it and make it better.
Most experts acknowledge that robots can be valuable educational tools. But they can't make a child feel truly loved, validated, and valued. That's the job of parents, and when parents abdicate this responsibility, it's not only the child that misses out on one of life's most profound experiences.
So consider buying a robot to entertain and educate your little one; just make sure you're close by for the true bonding opportunities that arrive so fast and pass so fleetingly in the life of a child.
After You Die, Your Digital Self Could Live on as a Chatbot
My wife and I visited a will-and-trust lawyer after our first son was born. Everything seemed simple and clear until the lawyer asked, without missing a beat, "So, what about your social media management?" My wife looked at me and, even though I'm more tech savvy, I felt as confused as a Luddite.
"Social media management?" I laughed, making a joke about my wife spending more time on Facebook than I do. But the lawyer's question was serious, as were the legal documents asking for our profile page links, passwords, and related information.
What do you want to happen to your Facebook, Twitter, and other social media platforms after you die? Your grandfather may have wanted his cremated ashes poured into the Ganges, or a burial in a prepaid plot. But unlike earlier generations, whose personas ended with their last breath, your bits and bytes could live on across multiple servers, holding a space for you online like a digital obelisk. Or, if you desire, your relatives can do the equivalent of a DNR: Delete account.
"It is the future of 'Get your affairs in order,'" says John Havens, Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He remembers being pulled aside when his father was being put into the ICU and realizing that his dad wasn't going to come back.
Havens says if we are lucky enough to know that we are wrapping up our time, then we have the opportunity not just to bow out of the digital world gracefully, but to have our digital persona carry on beyond us. This persona could go beyond today's static memorial pages on Facebook and Instagram; it could be an interactive computer program designed from your specific speech patterns, memories, and personality – a chatbot.
"I could have an algorithm trained to hear what I say and how I say it," Havens told me. "You can say, 'I'm Damon and I'm going to pass in the next few months, but, you know, over the past six months, I've created a chatbot to continue our conversations. In the upcoming months, my partner or loved ones will let you know when the chatbot will take over and be involved.'"
The chatbot could become an extension of you on platforms like Messenger or WhatsApp, for example. One can imagine this becoming the next generation of care management, alongside funeral services and wills and testaments. You can see the future in Eugenia Kuyda, an entrepreneur who successfully created an interactive chatbot of her late friend, Roman Mazurenko, based solely on his text messages. Her new program, Replika, may eventually give the rest of us the same technology, so we can do the same with our loved ones. Expect other tech companies to follow suit.
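Kuyda built the Roman bot from her friend's saved text messages. The real pipeline is proprietary, and systems like Replika use trained neural language models, but the core retrieval idea can be sketched in a few lines of Python; the corpus below is invented for illustration:

```python
# Toy sketch of a retrieval-style "memorial chatbot": answer each prompt
# with the stored message most similar to it. The messages are invented;
# real systems such as Replika rely on trained neural models instead.
from difflib import SequenceMatcher

saved_messages = [
    "On my way, save me a seat.",
    "That movie was better than I expected.",
    "Miss you too. Dinner on Sunday?",
]

def reply(prompt: str) -> str:
    """Return the saved message that best matches the prompt."""
    return max(
        saved_messages,
        key=lambda msg: SequenceMatcher(None, prompt.lower(), msg.lower()).ratio(),
    )

print(reply("I miss you"))  # returns the closest of the saved messages
```

Even this crude matching hints at why such bots can feel uncanny: every word is genuinely the person's own; only the selection is automated.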
Chatbots offer us an irresistible convenience: they are artificial intelligence programs built to have conversations with people, usually in a service capacity, like canceling a shipping order or getting you to the right help desk. You can view them as a modern-day helpline and, no doubt, you've interacted with chatbots when you've made purchases online. Chatbots are now becoming verbal, too, handling phone calls you make to your credit card company, local utilities, and other daily services.
We witnessed our future this spring when Google showed off Google Duplex, a voice-driven system that will call people on your behalf with the intention, Google says, of managing your life. At the Google I/O conference, Google CEO Sundar Pichai showed Duplex calling a hair salon and interacting with the human receptionist, complete with the pauses, mmm-hmms, and colloquialisms of a human caller. "The amazing part is the assistant can actually understand the nuances of conversation," Pichai said to the rapt tech audience.
Recode's Kurt Wagner explained the immediate problem with the Google Duplex demo, which is the same problem technologists so often overlook: What if someone uses your technology in ways you didn't intend? "The major concern with that demo was that Google Assistant never said it was a robot or told the salon that the call was being recorded. When pressed by members of the media in the days after the demo, Google declined to comment, leading some to believe the company had simply overlooked this privacy element altogether."
"This is why disclosure will be so huge," Havens says. "When people call, they will begin with, 'Hello. I am a human.'"
This conflict between the physical and the digital is now coming to a head, though it isn't the clichéd man-against-machine struggle of Skynet conspiracy theories, but rather us against us. Today, it is as if we are split in two or, perhaps more accurately, into two personas – our "real-life" persona and our online persona – and we're now experiencing fatigue trying to hold the center.
It is a new phenomenon, one reflected in the history of social media: forerunners like MySpace and Friendster, as well as classic websites like LiveJournal and Tumblr, allowed us to explore the online world – and, in a sense, the physical world beyond our physical reach – using avatars as close to or as far from our real selves as we desired. On the Internet, nobody knows you're a dog.
Facebook all but eliminated that powerful choice of anonymity: its extensive verification process required people to use their real identities to participate in the biggest social network in the world. This was a willful, purposeful decision by Facebook: founder Mark Zuckerberg has long been an advocate of being yourself online, and former Director of Market Development Randi Zuckerberg infamously said, "I think anonymity on the Internet has to go away… People behave a lot better when they have their real names down."
This was Facebook's intention and, whether or not its theory that people behave better is true, especially in light of the 2016 U.S. Presidential election, the effects on us are real. Sex workers and other high-risk, anonymity-dependent entrepreneurs are being outed via social media. The parallel rise of online-addiction clinics isn't a coincidence: the line between the physical self and the digital self has never been blurrier. There is now no real separation between IRL and online – just as there may be an increasingly blurred line between our personas before and after death.
We have Carrie Fisher starring in the next Star Wars movie, potentially winning the first truly posthumous Oscar, thanks to technology that can blend older footage into newly recorded scenes. A similar, more subtle turn occurred with Paul Walker in Furious 7, which used a combination of CGI and stand-ins. But a key difference is that we actually know these actors are dead before the movie is even released. As non-famous individuals, we have the ethical choice (duty?) to disclose that information to our social media followers after we die.
While we're still alive, though, chatbots represent a tempting form of convenience: a way to offload our cognitive load onto an assistant that will manage our relationships. The rub is that our online relationships are our personal relationships, so we're not just potentially automating, say, our social media feed or our online postings, but our responsibilities in the real-life relationships that we've built. There is no line.
"It's naïve to think that the Google Duplex that was designed to make your hair appointments won't be used to do more difficult things like break up with a girlfriend," Havens says. "Record 50 words, use different inflections, and put in phrases like 'It's not you, it's me.' Why wouldn't people do that?"
Well, it really depends on the person. My wife and I ended up leaving the social media management section of our will blank for now. I even took a long social media sabbatical to connect with people more in person. If my online relationships and my in-person relationships are all becoming the same, then maybe it's OK to let them die – just like I will.
This past March, headlines suddenly flooded the Internet about a startup company called Nectome. Founded by two graduates of the Massachusetts Institute of Technology, the new company was charging people $10,000 to join a waiting list to have their brains embalmed, down to the last neuron, using an award-winning chemical compound.
Essentially, participants' brains would be turned into a glass-like substance and remain in a state of near-perfect preservation indefinitely. "If memories can truly be preserved by a sufficiently good brain banking technique," Nectome's website explains, "we believe that within the century it could become feasible to digitize your preserved brain and use that information to recreate your mind." But as with most Faustian bargains, Nectome's proposition came with a serious caveat: death.
That's right: in order for Nectome's process to properly preserve your connectome, the comprehensive map of the brain's neural connections, you must be alive (and under anesthesia) while the fluid is injected. This way, the company postulates, when the science advances enough to read and extract your memories someday, your vitrified brain will still contain your perfectly preserved essence, which can then be digitally recreated as a computer simulation.
Almost immediately this story gained buzz with punchy headlines: "Startup wants to upload your brain to the cloud, but has to kill you to do it," "San Junipero is real: Nectome wants to upload your brain," and "New tech firm promises eternal life, but you have to die."
While the lay public presumably burnt their wills and grew ever more excited about the end of humanity's quest for immortality, neuroscientists let out a collective sigh: hype had struck the scientific community once again.
The truth about Nectome is that its claims are highly speculative: no hard science exists to suggest that our connectome is the key to our 'being,' or that it can ever be digitally revived. "We haven't come even close to understanding even the most basic types of functioning in the brain," says neuroscientist Alex Fox, who was educated at the University of Queensland in Australia. "Memory storage in the brain is only a theoretical concept [and] there are some seriously huge gaps in our knowledge base that stand in the way of testing [the connectome] theory."
After the Nectome story broke, Harvard computational neuroscientist Sam Gershman tweeted out:
"Didn't anyone tell them that we've known the C Elegans (a microscopic worm) connectome for over a decade but haven't figured out how to reconstruct all of their memories? And that's only 7000 synapses compared to the trillions of synapses in the human brain!"
How media coverage of Nectome went from an initial, fastidiously researched article in the MIT Technology Review by veteran science journalist Antonio Regalado to the clickbait frenzy it became is a prime example of the 'science hype' phenomenon. According to Adam Auch, who holds a doctorate in philosophy from Dalhousie University in Nova Scotia, Canada, "Hype is a feature of all stages of the scientific dissemination process, from the initial circulation of preliminary findings within particular communities of scientists, to the process by which such findings come to be published in peer-reviewed journals, to the subsequent uptake these findings receive from the non-specialist press and the general public."
In the case of Nectome, hype was present from the word go. Riding the high of several major wins, including raising over one million dollars in funding and partnering with well-known MIT neuroscientist Edward Boyden, Nectome founders Michael McCanna and Robert McIntyre launched their website on March 1, 2018. Just one month prior, they had purchased and preserved a newly deceased body in Portland, Oregon, showing that vitrifixation, their method of chemical preservation, could be used on a human specimen. The method had previously won an award for preserving every synaptic structure in a rabbit brain.
The Nectome mission statement, found on its website, is laced with saccharine language that skirts the unproven nature of the procedure the company is peddling for big bucks: "Our mission is to preserve your brain well enough to keep all its memories intact: from that great chapter of your favorite book to the feeling of cold winter air, baking an apple pie, or having dinner with your friends and family."
This rhetoric is an example of hype that can come from researchers themselves, who are under an enormous amount of pressure to publish original work and maintain funding. As a result, there is a constant push to present science as "groundbreaking" when really, as is apparently the case with Nectome, it is only a small piece in a much larger effort.
Calling out the audacity of Nectome's posited future, neuroscientist Gershman commented to another publication, "The important question is whether the connectome is sufficient for memory: Can I reconstruct all memories knowing only the connections between neurons? The answer is almost certainly no, given our knowledge about how memories are stored (itself a controversial topic)."
Furthermore, universities like MIT, which entered into a subcontract with Nectome, are under pressure to seek funding through partnerships with industry as a result of the Bayh-Dole Act of 1980. Also known as the Patent and Trademark Law Amendments Act, this piece of legislation allows universities to commercialize inventions developed under federally funded research programs, like Nectome's method of preserving brains, formally called Aldehyde-Stabilized Cryopreservation.
"[Universities use] every incentive now to talk about innovation," explains Dr. Ivan Oransky, president of the Association of Health Care Journalists and co-founder of retractionwatch.com, a blog that catalogues errors and fraud in published research. "Innovation to me is often a fancy word for hype. The role of journalists should not be to glorify what universities [say, but to] tell the closest version of the truth they can."
In this case, hyperbolic press coverage, combined with some impressively researched exposé pieces, led MIT to cut its ties with Nectome on April 2, 2018, just two weeks after news of the company broke.
Because of its multi-layered nature, science hype carries several disturbing consequences. For one, exaggerated coverage of a discovery could mislead the public by giving them false hope or unfounded worry. And media hype can contribute to a general mistrust of science. In these instances, people might, as Auch puts it, "fall back on previously held beliefs, evocative narratives, or comforting biases instead of well-justified scientific evidence."
All of this is especially dangerous in today's 'fake news' era, when companies or political parties sow public confusion for their own benefit, as with global warming. In the case of Nectome, the danger is that people might opt to end their lives based on an unproven scientific theory. In fact, the company is hoping to enlist terminally ill patients in California, where doctor-assisted suicide is legal. And 25 people have already paid $10,000 each to join Nectome's waiting list, including Sam Altman, president of the famed startup accelerator Y Combinator. Nectome has now offered to refund the money.
Founders McCanna and McIntyre did not return repeated requests for comment for this article. A new statement on their website begins: "Vitrifixation today is a powerful research tool, but needs more research and development before anyone considers applying it in a context other than research."
The solution to the dangers of hype, experts say, is a more scientifically literate public—and less clickbait-driven journalism. Until then, it seems that companies like Nectome will continue to enjoy at least 15 minutes of fame.