To Make Science Engaging, We Need a Sesame Street for Adults
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
In the mid-1960s, a documentary producer in New York City wondered if the addictive jingles, clever visuals, slogans, and repetition of television ads—the ones that were captivating young children of the time—could be harnessed for good. Over the course of three months, she interviewed educators, psychologists, and artists, and the result was a bonanza of ideas.
Perhaps a new TV show could teach children letters and numbers in short animated sequences? Perhaps adults and children could read together with puppets providing comic relief and prompting interaction from the audience? And because it would be broadcast through a device already in almost every home, perhaps this show could reach across socioeconomic divides and close an early education gap?
Soon after Joan Ganz Cooney shared her landmark report, "The Potential Uses of Television in Preschool Education," in 1966, she was prototyping show ideas, attracting funding from The Carnegie Corporation, The Ford Foundation, and The Corporation for Public Broadcasting, and co-founding the Children's Television Workshop with psychologist Lloyd Morrisett. And then, on November 10, 1969, informal learning was transformed forever with the premiere of Sesame Street on public television.
For its first season, Sesame Street won three Emmy Awards and a Peabody Award. Its star, Big Bird, landed on the cover of Time Magazine, which called the show "TV's gift to children." Fifty years later, it's hard to imagine an approach to informal preschool learning that isn't Sesame Street.
And that approach can be boiled down to one word: Entertainment.
Despite decades of evidence from Sesame Street—one of the most studied television shows of all time—and further research from social science, psychology, and media communications, we haven't yet taken Ganz Cooney's concepts to heart in educating adults. Adults have news programs, documentaries, and educational YouTube channels, but no Sesame Street. So why not create one? Here's how we can design a new kind of television to make science engaging and accessible for a public that is all too often intimidated by it.
We have to start from the realization that America is a nation of high-school graduates. By the end of high school, many students have abandoned science because they think it's too difficult, and as a nation, we've made it acceptable for any one of us to say "I'm not good at science" and offload the thinking to those who might be. So is it surprising that so many Americans believe in conspiracy theories, like the 25% who believe the release of COVID-19 was planned, the one in ten who believe the Moon landing was a hoax, or the 30–40% who think the condensation trails of planes are actually nefarious chemtrails? If we're meeting people where they are, the aim can't be to get the audience from an A to an A+, but from an F to a D, and without judgment of where they start.
There's also a natural compulsion for a well-meaning educator to fill a literacy gap with a barrage of information, but this is what I call "factsplaining," and we know it doesn't work. Worse, it can backfire. In one 2014 study, parents were given factual information about vaccine safety, and the group that was already most averse to vaccines became even more averse.
Why? Our social identities and cognitive biases are stubborn gatekeepers when it comes to processing new information. We filter ideas through pre-existing beliefs—our values, our religions, our political ideologies. Incongruent ideas are rejected. Congruent ideas, no matter how absurd, are allowed through. We hear what we want to hear, and then our brains justify the input by creating narratives that preserve our identities. Even when we have all the facts, we can use them to support any worldview.
But social science has revealed many mechanisms for hijacking these processes through narrative storytelling, and this can form the foundation of a new kind of educational television.
As media creators, we can reject factsplaining and instead construct entertaining narratives that disrupt these cognitive processes. Research going back two decades tells us that when people are immersed in entertaining fictional narratives, they loosen their defenses, opening a path for new information, shifting attitudes, and inspiring new behavior. Where news about hot-button issues like climate change or vaccination might trigger resistance or a backfire effect, fiction can be crafted to be absorbing and, as a result, persuasive.
But the narratives can't be stuffed with information. They must be simplified. If this feels like the opposite of what an educator should be doing, consider "exemplification": a framing device that reduces the complexity of information, without oversimplifying it, by telling the stories of individuals in specific circumstances who can speak to the greater issue without needing to explain it all. It's a technique you've seen used in biopics. The Discovery Channel true-crime miniseries Manhunt: Unabomber does many things well from a science-storytelling perspective, including exemplifying the virtues of the scientific method through a character who argues for a new field of science, forensic linguistics, to catch one of the most notorious domestic terrorists in U.S. history.
We must also appeal to the audience's curiosity. We know curiosity is such a strong driver of human behavior that it can even counteract the biases put up by one's political ideology around subjects like climate change. If we treat science information like a product—and we should—advertising research tells us we can maximize curiosity through a Goldilocks effect. If the information is too complex, your show might as well be a PowerPoint presentation. If it's too simple, it's Sesame Street. There's a sweet spot for creating intrigue about new information when there's a moderate cognitive gap.
The science of "identification" tells us that the more endearing a main character is to a viewer, the more likely the viewer is to adopt the character's worldview and journey of change. This insight gives us further incentive to craft characters who reflect our audiences. If we accept our biases for what they are, we can understand why the messenger becomes more important than the message: without an appropriate messenger, the message becomes faint and ineffective. And research confirms that the stereotype-busting doctor-skeptic Dana Scully of The X-Files, a popular science-fiction series, inspired a generation of women to pursue science careers.
With these directions, we can start making a new kind of television. But is television itself still the right delivery medium? Americans do spend six hours per day—a quarter of their lives—watching video. And even with the rise of social media and apps, science-themed television shows remain popular, with four out of five adults reporting that they watch shows about science at least sometimes. CBS's The Big Bang Theory was the most-watched show on television in the 2017–2018 season, and Cartoon Network's Rick & Morty is the most popular comedy series among millennials. And medical and forensic dramas continue to be broadcast staples. So yes, it's as true today as it was in the 1980s when George Gerbner, the "cultivation theory" researcher who studied the long-term impacts of television images, wrote, "a single episode on primetime television can reach more people than all science and technology promotional efforts put together."
We know from cultivation theory that media images can shape our views of scientists. Quick, picture a scientist! Was it an old, white man with wild hair in a lab coat? Since most Americans never encounter research science firsthand, it's media that dictates how we perceive science and scientists. Characters like Sheldon Cooper and Rick Sanchez become the model. But we can correct that by representing professionals more accurately on-screen and writing characters more like Dana Scully.
Could new television series establish the baseline narratives for novel science like gene editing, quantum computing, or artificial intelligence? Or could new series counter the misinfodemics surrounding COVID-19 and vaccines through more compelling, corrective narratives? Social science has given us a blueprint suggesting they could. Binge-watching a show like the surreal NBC sitcom The Good Place doesn't replace a Ph.D. in philosophy, but its use of humor plants the seed of continued interest in a new subject. The goal of persuasive entertainment isn't to replace formal education, but it can inspire, shift attitudes, increase confidence in the knowledge of complex issues, and otherwise prime viewers for continued learning.
[Editor's Note: To read other articles in this special magazine issue, visit the beautifully designed e-reader version.]
There's a quiet revolution going on in medicine. It's driven by artificial intelligence, but paradoxically, new technology may put a more human face on healthcare.
Artificial intelligence is software that can process massive amounts of information and learn over time, arriving at decisions with striking accuracy and efficiency. It offers greater accuracy in diagnosis, exponentially faster genome sequencing, the mining of medical literature and patient records at breathtaking speed, a dramatic reduction in administrative bureaucracy, personalized medicine, and even the democratization of healthcare.
The algorithms that bring these advantages won't replace doctors. Rather, by taking over some of the most time-consuming tasks in healthcare, they will free providers to focus on personal interactions with patients—listening, empathizing, educating, and generally putting the care back in healthcare. The relationship can then focus on the alleviation of suffering, both physical and emotional.
Challenges of Getting AI Up and Running
The AI revolution, still in its early phase in medicine, is already spurring some amazing advances, even as some experts warn that it has been overhyped. IBM's Watson Health program is a case in point on both counts. IBM capitalized on Watson's ability to process natural language by designing algorithms that devour data like medical articles and analyze images like MRIs and medical slides. The algorithms help diagnose diseases and recommend treatment strategies.
But Technology Review reported that a heavily hyped partnership with the MD Anderson Cancer Center in Houston fell apart in 2017 because of a lack of data in the proper format. The data existed, just not in a way that the voraciously data-hungry AI could use to train itself.
The hiccup certainly hasn't dampened the enthusiasm for medical AI among other tech giants, including Google and Apple, both of which have invested billions in their own healthcare projects. At this point, the main challenge is the need for algorithms to interpret a huge diversity of data mined from medical records. This can include everything from CT scans, MRIs, electrocardiograms, x-rays, and medical slides, to millions of pages of medical literature, physician's notes, and patient histories. It can even include data from implantables and wearables such as the Apple Watch and blood sugar monitors.
None of this information is in anything resembling a standard format across and even within hospitals, clinics, and diagnostic centers. Once the algorithms are trained, however, they can crunch massive amounts of data at blinding speed, with an accuracy that matches and sometimes even exceeds that of highly experienced doctors.
Genome sequencing, for example, took years to accomplish as recently as the early 2000s. The Human Genome Project, the first sequencing of the human genome, was an international effort that took 13 years to complete. In April of this year, Rady Children's Institute for Genomic Medicine in San Diego used an AI-powered genome sequencing algorithm to diagnose rare genetic diseases in infants in about 20 hours, according to ScienceDaily.
"Patient care will always begin and end with the doctor."
Dr. Stephen Kingsmore, the lead author of an article published in Science Translational Medicine, emphasized that even though the algorithm helped guide the treatment strategies of neonatal intensive care physicians, the doctor was still an indispensable link in the chain. "Some people call this artificial intelligence, we call it augmented intelligence," he says. "Patient care will always begin and end with the doctor."
One existing trend is helping to supply a great amount of valuable data to algorithms—the electronic health record. Initially blamed for exacerbating the already crushing workload of many physicians, the EHR is emerging as a boon for algorithms because it consolidates all of a patient's data in one record.
Examples of AI in Action Around the Globe
If you're a parent who has ever taken a child to the doctor with flulike symptoms, you know the anxiety of wondering if the symptoms signal something serious. Kang Zhang, M.D., Ph.D., the founding director of the Institute for Genomic Medicine at the University of California at San Diego, and colleagues developed an AI natural language processing model that used deep learning to analyze the EHRs of 1.3 million pediatric visits to a clinic in Guangzhou, China.
The AI identified common childhood diseases with about the same accuracy as human doctors, and it was even able to split the diagnoses into two categories—common conditions such as flu, and serious, life-threatening conditions like meningitis. Zhang has emphasized that the algorithm didn't replace the human doctor, but it did streamline the diagnostic process and could be used in a triage capacity when emergency room personnel need to prioritize the seriously ill over those suffering from common, less dangerous ailments.
AI's usefulness in healthcare ranges far and wide. In Uganda and several other African nations, AI is bringing modern diagnostics to remote villages that have no access to traditional technologies such as x-rays. The New York Times recently reported that there, doctors are using a pocket-sized, hand-held ultrasound machine that works in concert with a cell phone to image and diagnose everything from pneumonia (a common killer of children) to cancerous tumors.
The beauty of the highly portable, battery-powered device is that ultrasound images can be uploaded to computers, so physicians anywhere in the world can review them and weigh in with their advice. And the images are instantly incorporated into the patient's EHR.
Jonathan Rothberg, the founder of Butterfly Network, the Connecticut company that makes the device, told The New York Times that "Two thirds of the world's population gets no imaging at all. When you put something on a chip, the price goes down and you democratize it." The Butterfly ultrasound machine, which sells for $2,000, promises to be a game-changer in remote areas of Africa, South America, and Asia, as well as at the bedsides of patients in developed countries.
AI algorithms are rapidly emerging in healthcare across the U.S. and the world. China has become a major international player, set to surpass the U.S. this year in AI capital investment, the translation of AI research into marketable products, and even the number of often-cited research papers on AI. So far the U.S. is still the leader, but some experts describe the relationship between the U.S. and China as an AI cold war.
"The future of machine learning isn't sentient killer robots. It's longer human lives."
The U.S. Food and Drug Administration expanded its approval of medical algorithms from two in all of 2017 to about two per month throughout 2018. One of the first fields to be impacted is ophthalmology.
One algorithm, developed by the British AI company DeepMind (owned by Alphabet, the parent company of Google), instantly scans patients' retinas and is able to diagnose diabetic retinopathy without needing an ophthalmologist to interpret the scans. This means diabetics can get the test every year from their family physician without having to see a specialist. The Financial Times reported in March that the technology is now being used in clinics throughout Europe.
In Copenhagen, emergency service dispatchers are using a new voice-processing AI called Corti to analyze the conversations in emergency phone calls. The algorithm analyzes the verbal cues of callers, searches its huge database of medical information, and provides dispatchers with onscreen diagnostic information. Freddy Lippert, the CEO of EMS Copenhagen, notes that the algorithm has already saved lives by expediting accurate diagnoses in high-pressure situations where time is of the essence.
Researchers at the University of Nottingham in the UK have even developed a deep learning algorithm that predicts death more accurately than human clinicians. The algorithm incorporates data from a huge range of factors in a chronically ill population, including how many fruits and vegetables a patient eats on a daily basis. Dr. Stephen Weng, lead author of the study, published in PLOS ONE, said in a press release, "We found machine learning algorithms were significantly more accurate in predicting death than the standard prediction models developed by a human expert."
New digital technologies are allowing patients to participate in their healthcare as never before. A feature of the new Apple Watch is an app that detects cardiac arrhythmias and even produces an electrocardiogram if an abnormality is detected. The technology, approved by the FDA, is helping cardiologists monitor heart patients and design interventions for those who may be at higher risk of serious events like stroke.
If having an algorithm predict your death sends a shiver down your spine, consider that algorithms may keep you alive longer. In 2018, technology reporter Tristan Greene wrote for Medium that "…despite the unending deluge of panic-ridden articles declaring AI the path to apocalypse, we're now living in a world where algorithms save lives every day. The future of machine learning isn't sentient killer robots. It's longer human lives."
The Risks of AI Compiling Your Data
To be sure, the advent of AI-infused medical technology is not without its risks. One risk is that the use of AI wearables constantly monitoring our vital signs could turn us into a nation of hypochondriacs, racing to our doctors every time there's a blip in some vital sign. Such a development could stress an already overburdened system that suffers from, among other things, a shortage of doctors and nurses. Another risk has to do with the privacy protections on the massive repository of intimately personal information that AI will have on us.
In an article recently published in the Journal of the American Medical Association, Australian researcher Kit Huckvale and colleagues examined the handling of data by 36 smartphone apps that assisted people with either depression or smoking cessation, two areas whose data could invite stigmatization if it fell into the wrong hands.
Of the 36 apps, 33 shared their data with third parties, even though only 25 had a privacy policy at all, and of those, only 23 stated that data would be shared with third parties. The recipients of all that data? Almost exclusively Facebook and Google, which use it for advertising and marketing purposes. But there's nothing to stop the data from ending up in the hands of insurers, background-check databases, or any other entity.
Even when data isn't voluntarily shared, any digital information can be hacked. EHRs and even wearable devices share the same vulnerability as any other digital record or device. Still, the promise of AI to radically improve efficiency and accuracy in healthcare is hard to ignore.
AI Can Help Restore Humanity to Medicine
Eric Topol, director of the Scripps Research Translational Institute and author of the new book Deep Medicine, says that AI gives doctors and nurses the most precious gift of all: time.
Topol welcomes his patients' use of the Apple Watch cardiac feature and is optimistic about the ways that AI is revolutionizing medicine. He says that the watch helps doctors monitor how well medications are working and has already helped to prevent strokes. But in addition to that, AI will help bring the humanity back to a profession that has become as cold and hard as a stainless steel dissection table.
"When I graduated from medical school in the 1970s," he says, "you had a really intimate relationship with your doctor." Over the decades, he has seen that relationship steadily erode as medical organizations demanded that doctors see more and more patients within ever-shrinking time windows.
"Doctors have no time to think, to communicate. We need to restore the mission in medicine."
In addition, EHRs have meant that doctors and nurses are getting buried in paperwork and administrative tasks. This is no doubt one reason why a recent World Health Organization study found that, worldwide, about 50 percent of doctors suffer from burnout. People who are utterly exhausted make more mistakes, and medical clinicians are no different from the rest of us. But medical mistakes have unacceptably high stakes: Johns Hopkins University researchers have reported that in the U.S. alone, 250,000 people die from medical mistakes each year.
"Doctors have no time to think, to communicate," says Topol. "We need to restore the mission in medicine." AI is giving doctors more time to devote to the thing that attracted them to medicine in the first place—connecting deeply with patients.
There is a real danger at this juncture, though, that administrators aware of the time-saving aspects of AI will simply push doctors to see more patients, read more tests, and embrace an even more crushing workload.
"We can't leave it to the administrators to just make things worse," says Topol. "Now is the time for doctors to advocate for a restoration of the human touch. We need to stand up for patients and for the patient-doctor relationship."
AI could indeed be a game changer, he says, but rather than squander the huge benefits of more time, "We need a new equation going forward."
This Special Music Helped Preemie Babies’ Brains Develop
Move over, Baby Einstein: New research from Switzerland shows that listening to soothing music in the first weeks of life helps encourage brain development in preterm babies.
For the study, the scientists recruited a harpist and new-age musician to compose three pieces of music.
The Lowdown
Children who are born prematurely, between 24 and 32 weeks of pregnancy, are far more likely to survive today than they used to be—but because their brains are less developed at birth, they're still at high risk for learning difficulties and emotional disorders later in life.
Researchers in Geneva thought that the unfamiliar and stressful noises in neonatal intensive care units might be partially responsible. After all, a hospital ward filled with alarms, other infants crying, and adults bustling in and out is far more disruptive than the quiet in-utero environment the babies are used to. They decided to test whether listening to pleasant music could have a positive, counterbalancing effect on the babies' brain development.
Led by Dr. Petra Hüppi at the University of Geneva, the scientists recruited Swiss harpist and new-age musician Andreas Vollenweider (who has collaborated with the likes of Carly Simon, Bryan Adams, and Bobby McFerrin). Vollenweider composed three pieces of music specifically for the NICU babies, which were played for them five times per week. Each track served a specific purpose: helping a baby wake up, stimulating a baby who was already awake, or helping a baby fall back asleep.
When they reached an age equivalent to that of a full-term baby, the infants underwent an MRI. The researchers focused on connections within the salience network, which determines how relevant incoming information is and then processes and acts on it—crucial components of healthy social behavior and emotional regulation. The neural networks of preemies who had listened to Vollenweider's pieces were stronger than those of preterm babies who had not received the intervention, and were much more similar to those of full-term babies.
Next Up
The first infants in the study are now 6 years old—the age when cognitive problems usually become diagnosable. Researchers plan to follow up with more cognitive and socio-emotional assessments, to determine whether the effects of the music intervention have lasted.
The scientists note in their paper that, while they saw strong results in the babies' primary auditory cortex and thalamus connections—suggesting that they had developed an ability to recognize and respond to familiar music—there was less reaction in the regions responsible for socioemotional processing. They hypothesize that more time spent listening to music during a NICU stay could improve those connections as well; but another study would be needed to know for sure.
Open Questions
Because this initial study had a fairly small sample size (only 20 preterm infants underwent the musical intervention, with another 19 studied as a control group), and they all listened to the same music for the same amount of time, it's still undetermined whether variations in the type and frequency of music would make a difference. Are Vollenweider's harps, bells, and punji the runaway favorite, or would other styles of music help, too? (Would "Baby Shark" help … or hurt?) There's also a chance that other types of repetitive sounds, like parents speaking or singing to their children, might have similar effects.
But the biggest question is still the one that the scientists plan to tackle next: Whether the intervention lasts as the children grow up. If it does, that's great news for any family with a preemie — and for the baby-sized headphone industry.