This Special Music Helped Preemie Babies’ Brains Develop
Move over, Baby Einstein: New research from Switzerland shows that listening to soothing music in the first weeks of life helps encourage brain development in preterm babies.
For the study, the scientists recruited a harpist and new-age musician to compose three pieces of music.
The Lowdown
Children who are born prematurely, between 24 and 32 weeks of pregnancy, are far more likely to survive today than they used to be—but because their brains are less developed at birth, they're still at high risk for learning difficulties and emotional disorders later in life.
Researchers in Geneva thought that the unfamiliar and stressful noises in neonatal intensive care units might be partially responsible. After all, a hospital ward filled with alarms, other infants crying, and adults bustling in and out is far more disruptive than the quiet in-utero environment the babies are used to. They decided to test whether listening to pleasant music could have a positive, counterbalancing effect on the babies' brain development.
Led by Dr. Petra Hüppi at the University of Geneva, the scientists recruited Swiss harpist and new-age musician Andreas Vollenweider (who has collaborated with the likes of Carly Simon, Bryan Adams, and Bobby McFerrin). Vollenweider developed three pieces of music specifically for the NICU babies, which were played for them five times per week. Each track served a specific purpose: helping the baby wake up, stimulating a baby who was already awake, or helping the baby fall back asleep.
When they reached an age equivalent to a full-term baby, the infants underwent an MRI. The researchers focused on connections within the salience network, which determines how relevant incoming information is, and then processes and acts on it—crucial components of healthy social behavior and emotional regulation. The neural networks of preemies who had listened to Vollenweider's pieces were stronger than those of preterm babies who had not received the intervention, and were instead much more similar to those of full-term babies.
Next Up
The first infants in the study are now 6 years old—the age when cognitive problems usually become diagnosable. Researchers plan to follow up with more cognitive and socio-emotional assessments, to determine whether the effects of the music intervention have lasted.
The scientists note in their paper that, while they saw strong results in the babies' primary auditory cortex and thalamus connections—suggesting that they had developed an ability to recognize and respond to familiar music—there was less reaction in the regions responsible for socioemotional processing. They hypothesize that more time spent listening to music during a NICU stay could improve those connections as well; but another study would be needed to know for sure.
Open Questions
Because this initial study had a fairly small sample size (only 20 preterm infants underwent the musical intervention, with another 19 studied as a control group), and they all listened to the same music for the same amount of time, it's still undetermined whether variations in the type and frequency of music would make a difference. Are Vollenweider's harps, bells, and pungi the runaway favorite, or would other styles of music help, too? (Would "Baby Shark" help … or hurt?) There's also a chance that other types of repetitive sounds, like parents speaking or singing to their children, might have similar effects.
But the biggest question is still the one that the scientists plan to tackle next: whether the intervention lasts as the children grow up. If it does, that's great news for any family with a preemie—and for the baby-sized headphone industry.
“Siri, Read My Mind”: A New Device Lets Users Think Commands
Sometime in the near future, we won't need to type on a smartphone or computer to silently communicate our thoughts to others.
"We're moving as fast as possible to get the technology right, to get the ethics right, to get everything right."
In fact, the devices themselves will quietly understand our intentions and express them to other people. We won't even need to move our mouths.
That "sometime in the near future" is now.
At the recent TED Conference, MIT student and TED Fellow Arnav Kapur was onstage with a colleague doing the first live public demo of his new technology. He was showing how you can communicate with a computer using signals from your brain. The usually cool, erudite audience seemed a little uncomfortable.
"If you look at the history of computing, we've always treated computers as external devices that compute and act on our behalf," Kapur said. "What I want to do is I want to weave computing, AI and Internet as part of us."
His colleague started up a device called AlterEgo. Thin like a sticker, AlterEgo picks up the faint signals produced in the mouth cavity when the wearer silently articulates words. It recognizes the intended speech and processes it through built-in AI. The device then gives feedback to the user directly through bone conduction: it vibrates the bones of the skull to carry sound to the inner ear, meshing its response with your normal hearing.
Onstage, Kapur's colleague silently posed a question: "What is the weather in Vancouver?" Seconds later, AlterEgo answered in his ear. "It's 50 degrees and rainy here in Vancouver," he announced.
AlterEgo essentially gives you a built-in Siri.
"We don't have a deadline [to go to market], but we're moving as fast as possible to get the technology right, to get the ethics right, to get everything right," Kapur told me after the talk. "We're developing it both as a general purpose computer interface and [in specific instances] like on the clinical side or even in people's homes."
Nearly telepathic communication actually makes sense now. About ten years ago, the Apple iPhone replaced the ubiquitous cell phone keyboard with a blank touchscreen. A few years later, Google Glass put a computer screen into a simple lens. More recently, Amazon Alexa and Microsoft Cortana have dropped the screen and gone straight for voice control. Now those voices are getting closer to our minds, and may eventually become indistinguishable from our own thoughts.
"We knew the voice market was growing, like with getting map locations, and audio is the next frontier of user interfaces," says Dr. Rupal Patel, Founder and CEO of VocalID. The startup literally gives voices to the voiceless, particularly people unable to speak because of illness or other circumstances.
"We start with [our database of] human voices, then train our deep learning technology to learn the pattern of speech… We mix voices together from our voice bank, so it's not just Damon's voice, but three or five voices. They are different enough to blend it into a voice that does not exist today – kind of like a face morph."
The VocalID customer then has a voice as unique as he or she is, mixed together like a Sauvignon blend. It is a surrogate voice for those of us who cannot speak, just as much as AlterEgo is a surrogate companion for our brains.
"I'm very skeptical keyboards or voice-based communication will be replaced any time soon."
Voice equality will become increasingly important as Siri, Alexa, and other voice-based interfaces become the dominant communication method.
It may feel odd to view your voice as a privilege, but as the world becomes more voice-activated, there will be a wider gap between the speakers and the voiceless. Picture going shopping without access to the Internet or trying to eat healthily when your neighborhood is a food desert. And suffering from vocal difficulties is more common than you might think. In fact, according to government statistics, around 7.5 million people in the U.S. have trouble using their voices.
While voice communication appears to be here to stay, at least for now, a more radical shift to mind-controlled communication is not necessarily inevitable. Tech futurist Wagner James Au, for one, is dubious.
"I'm very skeptical keyboards or voice-based communication will be replaced any time soon. Generation Z has grown up with smartphones and games like Fortnite, so I don't see them quickly switching to a new form factor. It's still unclear if even head-mounted AR/VR displays will see mass adoption, and mind-reading devices are a far greater physical imposition on the user."
How adopters will use the newest brain-impulse-reading, voice-altering technology is a much more complicated discussion. This spring, a video showed U.S. House Speaker Nancy Pelosi stammering and slurring her words at a press conference. The problem is that it didn't really happen that way: the video had been doctored, heavily altered from the original source material.
So-called deepfake videos use computer algorithms to capture the visual and vocal cues of an individual, which the creator can then manipulate to say whatever they want. Deepfakes have already created false narratives in politics and the media – and these are only videos. Newer tech is making the barrier between technology and our brains, if not our entire identity, even thinner.
"Last year," says Patel of VocalID, "we did penetration testing with our voices on banks that use voice control – and our generation 4 system is even tricky for you and me to identify the difference (between real and fake). As a forward-thinking company, we want to prevent risk early on by watermarking voices, creating a detector of false voices, and so on." She adds, "The line will become more blurred over time."
Onstage at TED, Kapur reassured the audience about who would be in the driver's seat. "This is why we designed the system to deliberately record from the peripheral nervous system, which is why the control in all situations resides with the user."
And, like many creators, he quickly shifted back to the possibilities. "What could the implications of something like this be? Imagine perfectly memorizing things, where you perfectly record information that you silently speak, and then hear them later when you want to, internally searching for information, crunching numbers at speeds computers do, silently texting other people."
"The potential," he concluded, "could be far-reaching."
There's no shortage of fake news going around the internet these days, but how do we become more aware as consumers of what's real and what's not?
"We are hoping to create what you might call a general 'vaccine' against fake news, rather than trying to counter each specific conspiracy or falsehood."
Researchers at the University of Cambridge may have an answer: an online game designed to expose participants to the tactics used by those spreading false information and teach them to recognize those tactics.
"We wanted to see if we could preemptively debunk, or 'pre-bunk', fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived," Dr Sander van der Linden, Director of the Cambridge Social Decision-Making Lab, said in a statement.
"This is a version of what psychologists call 'inoculation theory', with our game working like a psychological vaccination."
In February 2018, van der Linden and his coauthor, Jon Roozenbeek, helped launch the browser game "Bad News," in which players take on the role of "Disinformation and Fake News Tycoon."
Players can manipulate news and social media within the game through several methods, including deploying Twitter bots, photoshopping evidence, creating fake accounts, and inciting conspiracy theories, all with the goal of attracting followers while maintaining a "credibility score" for persuasiveness.
In order to gauge the game's effectiveness, players were asked to rate the reliability of a number of real and fake news headlines and tweets both before and after playing. The data from 15,000 players was evaluated, with the results published June 25 in the journal Palgrave Communications.
The results showed that "the perceived reliability of fake news before playing the game had reduced by an average of 21% after completing it. Yet the game made no difference to how users ranked real news."
Additionally, participants who "registered as most susceptible to fake news headlines at the outset benefited most from the 'inoculation,'" according to the study.
Just 15 minutes of playing the game can have a moderate effect on people, which could play a major role on a larger scale when it comes to "building a societal resistance to fake news," according to Dr. van der Linden.
"Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle," he said.
"We are hoping to create what you might call a general 'vaccine' against fake news, rather than trying to counter each specific conspiracy or falsehood," Roozenbeek added.
Van der Linden and Roozenbeek's work is an early example of how people might be protected against deception by training them to be more attuned to the techniques used to spread fake news.
"I hope that the positive results give further credence to the new science of prebunking rather than only thinking about traditional debunking. On a larger level, I also hope the game and results inspire a new kind of behavioral science research where we actively engage with people and apply insights from psychological science in the public interest," van der Linden told leapsmag.
"I like the idea that the end result of a scientific theory is a real-world partnership and practical tool that organizations and people can use to guard themselves against online manipulation techniques in a novel and hopefully fun and engaging manner."
Ready to be "inoculated" against fake news? Then play the game for yourself.