This Special Music Helped Preemie Babies’ Brains Develop
Move over, Baby Einstein: New research from Switzerland shows that listening to soothing music in the first weeks of life helps encourage brain development in preterm babies.
For the study, the scientists recruited a harpist and new-age musician to compose three pieces of music.
The Lowdown
Children who are born prematurely, between 24 and 32 weeks of pregnancy, are far more likely to survive today than they used to be—but because their brains are less developed at birth, they're still at high risk for learning difficulties and emotional disorders later in life.
Researchers in Geneva thought that the unfamiliar and stressful noises in neonatal intensive care units might be partially responsible. After all, a hospital ward filled with alarms, other infants crying, and adults bustling in and out is far more disruptive than the quiet in-utero environment the babies are used to. They decided to test whether listening to pleasant music could have a positive, counterbalancing effect on the babies' brain development.
Led by Dr. Petra Hüppi at the University of Geneva, the scientists recruited Swiss harpist and new-age musician Andreas Vollenweider (who has collaborated with the likes of Carly Simon, Bryan Adams, and Bobby McFerrin). Vollenweider developed three pieces of music specifically for the NICU babies, which were played for them five times per week. Each track served a specific purpose: to help the baby wake up, to stimulate a baby who was already awake, or to help the baby fall back asleep.
When they reached an age equivalent to a full-term baby, the infants underwent an MRI. The researchers focused on connections within the salience network, which determines how relevant information is, and then processes and acts on it—crucial components of healthy social behavior and emotional regulation. The neural networks of preemies who had listened to Vollenweider's pieces were stronger than those of preterm babies who had not received the intervention, and were much more similar to those of full-term babies.
Next Up
The first infants in the study are now 6 years old—the age when cognitive problems usually become diagnosable. Researchers plan to follow up with more cognitive and socio-emotional assessments, to determine whether the effects of the music intervention have lasted.
The scientists note in their paper that, while they saw strong results in the babies' primary auditory cortex and thalamus connections—suggesting that they had developed an ability to recognize and respond to familiar music—there was less reaction in the regions responsible for socioemotional processing. They hypothesize that more time spent listening to music during a NICU stay could improve those connections as well; but another study would be needed to know for sure.
Open Questions
Because this initial study had a fairly small sample size (only 20 preterm infants underwent the musical intervention, with another 19 studied as a control group), and they all listened to the same music for the same amount of time, it's still undetermined whether variations in the type and frequency of music would make a difference. Are Vollenweider's harps, bells, and punji the runaway favorite, or would other styles of music help, too? (Would "Baby Shark" help … or hurt?) There's also a chance that other types of repetitive sounds, like parents speaking or singing to their children, might have similar effects.
But the biggest question is still the one that the scientists plan to tackle next: whether the effects of the intervention last as the children grow up. If they do, that's great news for any family with a preemie—and for the baby-sized headphone industry.
Sloppy Science Happens More Than You Think
The media loves to tout scientific breakthroughs, and few are as toutable – and in turn, have been as touted – as CRISPR. This method of targeted DNA excision was discovered in bacteria, which use it as an adaptive immune system to combat reinfection with a previously encountered virus.
It is cool on so many levels: not only is the basic function fascinating, reminding us that we still have more to discover about even simple organisms that we thought we knew so well, but the ability it grants us to remove and replace any DNA of interest has almost limitless applications in both the lab and the clinic. As if that didn't make it sexy enough, add in a bicoastal, male-female, very public and relatively ugly patent battle, and the CRISPR story is irresistible.
And then last summer, a bombshell dropped. The prestigious journal Nature Methods published a paper in which the authors claimed that CRISPR could cause many unintended mutations, rendering it unfit for clinical use. Havoc duly ensued; stocks in CRISPR-based companies plummeted. Thankfully, the authors of the offending paper were responsible, good scientists; they reassessed, then recanted. Their attention- and headline-grabbing results were wrong, and they admitted as much, leading Nature Methods to formally retract the paper this spring.
How did this happen? Shouldn't the editors at a Nature journal know better than to have published this in the first place?
Alas, high-profile scientific journals publish misleading and downright false results fairly regularly. Some errors are unavoidable – that's how the scientific method works. Hypotheses and conclusions will invariably be overturned as new data becomes available and new technologies are developed that allow for deeper and deeper studies. That's supposed to happen. But that's not what we're talking about here. Nor are we talking about obvious offenses like outright plagiarism. We're talking about mistakes that are avoidable, and that still have serious ramifications.
Two parties are responsible for a scientific publication, and thus two parties bear the blame when things go awry: the scientists who perform and submit the work, and the journals that publish it. Unfortunately, both are incentivized for speedy and flashy publications, and not necessarily for correct publications. It is hardly a surprise, then, that we end up with papers that are speedy and flashy – and not necessarily correct.
"Scientists don't lie and submit falsified data," said Andy Koff, a professor of Molecular Biology at Sloan Kettering Institute, the basic research arm of Memorial Sloan Kettering Cancer Center. Richard Harris, who wrote the book on scientific misconduct running the gamut from unconscious bias and ignorance to more malicious fraudulence, largely concurs (full disclosure: I reviewed the book here). "Scientists want to do good science and want to be recognized as such," he said. But even so, the cultures of both industry and academia promote research that is poorly designed and even more poorly analyzed. In Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Millions, Harris describes how scientists must constantly publish in order to maintain their reputations and positions, to get grants and tenure and students. "They are disincentivized from doing that last extra experiment to prove their results," he said; it could prove too risky if it could cost them a publication.
Ivan Oransky and Adam Marcus founded Retraction Watch, a blog that tracks the retraction of scientific papers, in 2010. Oransky pointed out that blinded peer review – the pride and joy of the scientific publishing enterprise – is a large part of the problem. "Pre-publication peer review is still important, but we can't treat it like the only check on the system. Papers are being reviewed by non-experts, and reviewers are asked to review papers only tangentially related to their field. Moreover, most peer reviewers don't look at the underlying or raw data, even when it is available. How then can they tell if the analysis is flawed or the data is accurate?" he wondered.
Koff agreed that anonymous peer review is valuable, but severely flawed. "Blinded review forces a collective view of importance," he said. "If an article disagrees with the reviewer's worldview, the article gets rejected or forced to adhere to that worldview – even if that means pushing the data someplace it shouldn't necessarily go." We have lost the scientific principle behind review, he thinks, which was to critically analyze a paper. But instead of challenging fundamental assumptions within a paper, reviewers now tend to just ask for more and more supplementary data. And don't get him started on editors. "Editors are supposed to arbitrate between reviewers and writers and they have completely abdicated this responsibility, at every journal. They do not judge, and that's a real failing."
Harris laments the wasted time, effort, and resources that result when erroneous ideas take hold in a field, not to mention lives lost when drug discovery is predicated on basic science findings that end up being wrong. "When no one takes the time, care, and money to reproduce things, science isn't stopping – but it is slowing down," he noted. Mistaken publications also erode the public's opinion of legitimate science, which is problematic since that opinion isn't especially high to begin with.
Scientists and publishers don't only cause the problem, though – they may also provide the solution. Both camps are increasingly recognizing and dealing with the crisis. The self-proclaimed "data thugs" Nick Brown and James Heathers use pretty basic arithmetic to reveal statistical errors in papers. The microbiologist Elisabeth Bik scans the scientific literature for problematic images "in her free time." The psychologist Brian Nosek founded the Center for Open Science, a non-profit organization dedicated to promoting openness, integrity, and reproducibility in scientific research. The Nature family of journals – yes, the one responsible for the latest CRISPR fiasco – has its authors complete a checklist to combat irreproducibility, à la Atul Gawande. And Nature Communications, among other journals, uses transparent peer review, in which authors can opt to have the reviews of their manuscript published anonymously alongside the completed paper. This practice "shows people how the paper evolved," said Koff, "and keeps the reviewer and editor accountable. Did the reviewer identify the major problems with the paper? Because there are always major problems with a paper."
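The arithmetic behind checks like Brown and Heathers's really is basic. As an illustration only, here is a toy Python version of their best-known check, the GRIM test, in my own simplified form and with invented numbers: a mean computed from n whole-number survey responses can only take certain values, so a reported mean that no whole-number total could produce is a red flag.

```python
# Toy version of a GRIM-style consistency check (simplified; numbers invented).
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """True if some integer total of n whole-number scores rounds to the reported mean."""
    target = round(reported_mean, decimals)
    nearest_total = round(reported_mean * n)
    # try the neighboring integer totals too, in case rounding lands on a boundary
    return any(round(total / n, decimals) == target
               for total in (nearest_total - 1, nearest_total, nearest_total + 1))

print(grim_consistent(3.48, 25))  # True: a total of 87 over 25 responses gives exactly 3.48
print(grim_consistent(3.51, 25))  # False: no whole-number total of 25 responses rounds to 3.51
```

A reported mean that fails a check like this doesn't prove fraud, but it does mean the numbers can't all be right as printed.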
Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it through social media. Your Facebook account, Twitter feed, or perhaps a Google search. AI influences all of those things; machine learning helps run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use; the general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made neural networks practical at scale. These networks, loosely modeled on the human brain, are layers of interconnected artificial "neurons" that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
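To make "learning" a little less abstract, here is a minimal sketch in Python, assuming the scikit-learn library and using its built-in handwritten-digit images as a stand-in for the labeled cat photos. It is a toy illustration of supervised learning, not a description of the systems discussed in this piece.

```python
# Toy supervised learning: a small neural network "learns" only from
# labeled examples (scikit-learn's 8x8 digit images standing in for cat photos).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # roughly 1,800 tiny grayscale images, each labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small multilayer perceptron: layers of interconnected artificial "neurons"
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)  # "learning" = adjusting weights until outputs match the labels

print("accuracy on images the model never saw:", model.score(X_test, y_test))
```

Swap in millions of photographs and a much deeper network, and you have, in spirit, the cat-spotting example above.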
"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhones use AI to detect cancer, Apple Watches alert wearers to heart problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates more possibility. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Problem That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." Instead, it would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference that humans make effortlessly, almost without thinking.
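A deliberately trivial sketch of that point, again assuming scikit-learn and using made-up "whiskers and ears" features of my own invention: the same model, trained on the same examples with the label strings swapped, cheerfully gives the opposite name.

```python
# The model only echoes whatever names its training labels carried.
from sklearn.tree import DecisionTreeClassifier

# invented features: [has_whiskers, pointy_ears, barks]
animals = [[1, 1, 0], [1, 1, 0], [0, 0, 1], [0, 0, 1]]
correct_labels = ["cat", "cat", "dog", "dog"]
swapped_labels = ["dog", "dog", "cat", "cat"]  # same data, mislabeled

whiskered_pet = [[1, 1, 0]]
print(DecisionTreeClassifier().fit(animals, correct_labels).predict(whiskered_pet))  # ['cat']
print(DecisionTreeClassifier().fit(animals, swapped_labels).predict(whiskered_pet))  # ['dog']
```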
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost as if it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the frankly insane processing power of modern computing systems could result in game-changing discoveries.
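For contrast with the supervised sketch above, here is an equally hedged toy version of unsupervised learning, again assuming scikit-learn and using invented measurements: a clustering algorithm such as k-means is handed unlabeled data points and finds the groupings on its own, with no programmer-supplied question beyond the number of groups to look for.

```python
# Toy unsupervised learning: k-means groups unlabeled points by similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))  # invented measurements
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))  # a second, unlabeled group
data = np.vstack([group_a, group_b])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(clusters[:5], clusters[-5:])  # points from the two groups land in different clusters
```

Real systems are vastly more sophisticated, but the basic shift is the same: from answering a human's labeled question to surfacing structure the humans did not ask about.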
One company in China plans to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on the digital avatar, watch which medical conditions present themselves – cancer, a heart condition, anything, really – and help the real-life person prevent those conditions from developing or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in its potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a real concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people have explicit or implicit biases. Intentionally, or more likely not, they introduce those biases into the very code that forms the basis for the AI, and into the data it learns from. Current systems show bias against people of color. Facebook tried to rectify the problem and failed. These are two small examples of a larger, potentially systemic problem.
It's vital and necessary for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.