To Make Science Engaging, We Need a Sesame Street for Adults
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
In the mid-1960s, a documentary producer in New York City wondered if the addictive jingles, clever visuals, slogans, and repetition of television ads—the ones that were captivating young children of the time—could be harnessed for good. Over the course of three months, she interviewed educators, psychologists, and artists, and the result was a bonanza of ideas.
Perhaps a new TV show could teach children letters and numbers in short animated sequences? Perhaps adults and children could read together with puppets providing comic relief and prompting interaction from the audience? And because it would be broadcast through a device already in almost every home, perhaps this show could reach across socioeconomic divides and close an early education gap?
Soon after Joan Ganz Cooney shared her landmark report, "The Potential Uses of Television in Preschool Education," in 1966, she was prototyping show ideas, attracting funding from The Carnegie Corporation, The Ford Foundation, and The Corporation for Public Broadcasting, and co-founding the Children's Television Workshop with psychologist Lloyd Morrisett. And then, on November 10, 1969, informal learning was transformed forever with the premiere of Sesame Street on public television.
For its first season, Sesame Street won three Emmy Awards and a Peabody Award. Its star, Big Bird, landed on the cover of Time Magazine, which called the show "TV's gift to children." Fifty years later, it's hard to imagine an approach to informal preschool learning that doesn't follow Sesame Street's model.
And that approach can be boiled down to one word: Entertainment.
Despite decades of evidence from Sesame Street—one of the most studied television shows of all time—and further research from social science, psychology, and media communications, we haven't yet taken Ganz Cooney's concepts to heart in educating adults. Adults have news programs, documentaries, and educational YouTube channels, but no Sesame Street. So why don't we have one? Here's how we can design a new kind of television to make science engaging and accessible for a public that is all too often intimidated by it.
We have to start from the realization that America is a nation of high-school graduates. By the end of high school, many students have abandoned science because they think it's too difficult, and as a nation, we've made it acceptable for any one of us to say "I'm not good at science" and offload that thinking to the ones who might be. So is it surprising that so many Americans believe in conspiracy theories, like the 25% who believe the release of COVID-19 was planned, the one in ten who believe the Moon landing was a hoax, or the 30–40% who think the condensation trails of planes are actually nefarious chemtrails? If we're meeting people where they are, the aim can't be to get the audience from an A to an A+, but from an F to a D, and without judgment of where they are starting from.
There's also a natural compulsion for a well-meaning educator to fill a literacy gap with a barrage of information, but this is what I call "factsplaining," and we know it doesn't work. Worse, it can backfire. In one study from 2014, parents were given factual information about vaccine safety, and the group that was already most averse to vaccines became even more averse.
Why? Our social identities and cognitive biases are stubborn gatekeepers when it comes to processing new information. We filter ideas through pre-existing beliefs—our values, our religions, our political ideologies. Incongruent ideas are rejected. Congruent ideas, no matter how absurd, are allowed through. We hear what we want to hear, and then our brains justify the input by creating narratives that preserve our identities. Even when we have all the facts, we can use them to support any worldview.
But social science has revealed many mechanisms for hijacking these processes through narrative storytelling, and this can form the foundation of a new kind of educational television.
As media creators, we can reject factsplaining and instead construct entertaining narratives that disrupt these cognitive processes. Two decades of research tell us that when people are immersed in entertaining fictional narratives, they lower their defenses, opening a path for new information to enter, attitudes to shift, and new behavior to take hold. Where news about hot-button issues like climate change or vaccination might trigger resistance or a backfire effect, fiction can be crafted to be absorbing and, as a result, persuasive.
But the narratives can't be stuffed with information; they must be simplified. If this feels like the opposite of what an educator should be doing, there is a way to reduce the complexity of information without oversimplifying it: "exemplification," a framing device that tells the stories of individuals in specific circumstances who can speak to the greater issue without needing to explain it all. It's a technique you've seen used in biopics. The Discovery Channel true-crime miniseries Manhunt: Unabomber does many things well from a science-storytelling perspective, including exemplifying the virtues of the scientific method through a character who argues for a new field of science, forensic linguistics, to catch one of the most notorious domestic terrorists in U.S. history.
We must also appeal to the audience's curiosity. We know curiosity is such a strong driver of human behavior that it can even counteract the biases raised by one's political ideology around subjects like climate change. If we treat science information like a product—and we should—advertising research tells us we can maximize curiosity through a Goldilocks effect. If the information is too complex, your show might as well be a PowerPoint presentation. If it's too simple, it's Sesame Street. The sweet spot for creating intrigue about new information is a moderate cognitive gap.
The science of "identification" tells us that the more endearing a main character is to a viewer, the more likely the viewer is to adopt the character's worldview and journey of change. This insight gives us further incentive to craft characters who reflect our audiences. If we accept our biases for what they are, we can understand why the messenger becomes more important than the message: without an appropriate messenger, the message becomes faint and ineffective. And research confirms that the stereotype-busting doctor-skeptic Dana Scully of The X-Files, a popular science-fiction series, inspired a generation of women to pursue science careers.
With these directions, we can start making a new kind of television. But is television itself still the right delivery medium? Americans do spend six hours per day—a quarter of the day—watching video. And even with the rise of social media and apps, science-themed television shows remain popular, with four out of five adults reporting that they watch shows about science at least sometimes. CBS's The Big Bang Theory was the most-watched show on television in the 2017–2018 season, and Cartoon Network's Rick & Morty is the most popular comedy series among millennials. Medical and forensic dramas continue to be broadcast staples. So yes, it's as true today as it was in the 1980s when George Gerbner, the "cultivation theory" researcher who studied the long-term impacts of television images, wrote, "a single episode on primetime television can reach more people than all science and technology promotional efforts put together."
We know from cultivation theory that media images can shape our views of scientists. Quick, picture a scientist! Was it an old, white man with wild hair in a lab coat? If most Americans don't encounter research science firsthand, it's media that dictates how we perceive science and scientists. Characters like Sheldon Cooper and Rick Sanchez become the model. But we can correct that by representing professionals more accurately on-screen and writing characters more like Dana Scully.
Could new television series establish the baseline narratives for novel science like gene editing, quantum computing, or artificial intelligence? Or could new series counter the misinfodemics surrounding COVID-19 and vaccines through more compelling, corrective narratives? Social science has given us a blueprint suggesting they could. Binge-watching a show like the surreal NBC sitcom The Good Place doesn't replace a Ph.D. in philosophy, but its use of humor plants the seed of continued interest in a new subject. The goal of persuasive entertainment isn't to replace formal education, but it can inspire, shift attitudes, increase confidence in the knowledge of complex issues, and otherwise prime viewers for continued learning.
How Emerging Technologies Can Help Us Fight the New Coronavirus
In nature, few species remain dominant for long. Any sizable population of similar individuals offers immense resources to whichever parasite can evade its defenses, spreading rapidly from one member to the next.
Humans are one such dominant species. That wasn't always the case: our hunter-gatherer ancestors lived in groups too small and poorly connected to spread pathogens like wildfire. Our collective vulnerability to pandemics began with the dawn of cities and trade networks thousands of years ago. Roman cities were always demographic sinks, but never more so than when a pandemic agent swept through. The plague of Cyprian, the Antonine plague, the plague of Justinian – each is thought to have killed over ten million people, an appallingly high fraction of the total population of the empire.
With the advent of sanitation, hygiene, and quarantines, we developed our first non-immunological defenses to curtail the spread of plagues. With antibiotics, we began to turn the weapons of microbes against our microbial foes. Most potent of all, we use vaccines to train our immune systems to fight pathogens before we are even exposed. Edward Jenner's original vaccine alone is estimated to have saved half a billion lives.
It's been over a century since we suffered a swift and deadly pandemic. Even the deadly 1918 influenza killed only a few percent of humanity – nothing so bad as any of the Roman plagues, let alone the Black Death of medieval times.
How much of our recent winning streak has been due to luck?
Much rides on that question, because the same factors that first made our ancestors vulnerable are now ubiquitous. Our cities are far larger than those of ancient times. They're inhabited by an ever-growing fraction of humanity, and they are ever more closely connected: we now routinely travel around the world in the course of a day. Meanwhile, global population growth has pushed us into closer contact with wild animals, creating more opportunities for zoonotic pathogens to jump species. Which will prove greater: our defenses or our vulnerabilities?
The tragic emergence of coronavirus 2019-nCoV in Wuhan may provide a test case. How devastating this virus will become is highly uncertain at the time of writing, but its rapid spread to many countries is deeply worrisome. That it seems to kill only the already infirm and spare the healthy is small comfort, and may counterintuitively assist its spread: it's easy to implement a quarantine when everyone infected becomes extremely ill, but if carriers may not exhibit symptoms, as has been reported, it becomes exceedingly difficult to limit transmission. The virus, a distant relative of the more lethal SARS virus that killed nearly 800 people in 2002–2003, has evolved to transmit between humans and has spread to 18 countries in just six weeks.
Humanity's response has been faster than ever, if not fast enough. To its immense credit, China swiftly shared information, organized and built new treatment centers, closed schools, and established quarantines. The Coalition for Epidemic Preparedness Innovations, founded in 2017, quickly funded three companies to develop three different types of vaccine: a standard protein vaccine, a DNA vaccine, and an RNA vaccine, with more planned. One of the agreements was signed after just four days of discussion, far faster than has ever been done before.
The new vaccine candidates will likely be ready for clinical trials by early summer, but even if successful, it will be additional months before the vaccine will be widely available. The delay may well be shorter than ever before thanks to advances in manufacturing and logistics, but a delay it will be.
The 1918 influenza virus killed more than half of its victims in the United Kingdom over just three months.
If we faced a truly nasty virus, something that spreads like pandemic influenza – let alone measles – yet with the higher fatality rate of, say, H7N9 avian influenza, the situation would be grim. We are profoundly unprepared, on many different levels.
So what would it take to provide us with a robust defense against pandemics?
Minimize the attack surface: 2019-nCoV jumped from an animal, most probably a bat, to humans. China has now banned the wildlife trade in response to the epidemic. Keeping it banned would be prudent, but won't be possible in all nations. Still, there are other methods of protection. Influenza viruses commonly jump from birds to pigs to humans; the new coronavirus may have similarly passed through a livestock animal. Thanks to CRISPR, we can now edit the genomes of most livestock. If we made them immune to known viruses, and introduced those engineered traits to domesticated animals everywhere, we would create a firewall in those intermediate hosts. We might even consider heritably immunizing the wild organisms most likely to serve as reservoirs of disease.
Rapid diagnostics: We need a reliable method of detection, costing just pennies, that can be available worldwide within a week of discovering a new virus. This may eventually be possible thanks to a technology called SHERLOCK, which is based on a CRISPR system more commonly used for precision genome editing. Instead of using CRISPR to find and edit a particular genome sequence in a cell, SHERLOCK programs it to search for a desired target and initiate an easily detected chain reaction upon discovery. The technology is capable of fantastic sensitivity: with an attomolar (10⁻¹⁸ molar) detection limit, it can sense single molecules of a unique DNA or RNA fingerprint, and its components can be freeze-dried onto paper strips.
Better preparations: China acted swiftly to curtail the spread of the Wuhan virus with traditional public health measures, but not everything went as smoothly as it might have. Most cities and nations have never conducted a pandemic preparedness drill. Best give people a chance to practice keeping the city barely functional while minimizing potential exposure events before facing the real thing.
Faster vaccines: Three months to clinical trials is too long. We need a robust vaccine discovery and production system that can generate six candidates within a week of the pathogen's identification, manufacture a million doses the week after, and scale up to a hundred million inside of a month. That may be possible for novel DNA and RNA-based vaccines, and indeed anything that can be delivered using a standardized gene therapy vector. For example, instead of teaching each person's immune system to evolve protective antibodies by showing it pieces of the virus, we can program cells to directly produce known antibodies via gene therapy. Those antibodies could be discovered by sifting existing diverse libraries of hundreds of millions of candidates, computationally designed from scratch, evolved using synthetic laboratory ecosystems, or even harvested from the first patients to report symptoms. Such a vaccine might be discovered and produced fast enough at scale to halt almost any natural pandemic.
Robust production and delivery: Our defenses must not be vulnerable to the social and economic disruptions caused by a pandemic. Unfortunately, our economy selects for speed and efficiency at the expense of robustness. Just-in-time supply chains that wing their way around the world require every node to be intact. If workers aren't on the job producing a critical component, the whole chain breaks until a substitute can be found. A truly nasty pandemic would disrupt economies all over the world, so we will need to pay extra to preserve the capacity for independent vertically integrated production chains in multiple nations. Similarly, vaccines are only useful if people receive them, so delivery systems should be as robustly automated as possible.
None of these defenses will be cheap, but they'll be worth every penny. Our nations collectively spend trillions on defense against one another, but only billions to protect humanity from pandemic viruses known to have killed more people than any human weapon. That's foolish – especially since natural animal diseases that jump the species barrier aren't the only pandemic threats.
The complete genome of every historical pandemic virus that has been sequenced is freely available to anyone with an internet connection. True, these are all agents we've faced before, so we have a pre-existing armory of pharmaceuticals, vaccines, and experience. Nor is there any guarantee that they would become pandemics again; for example, a large fraction of humanity is almost certainly immune to the 1918 influenza virus due to exposure to the related 2009 pandemic virus, making it highly unlikely that the virus would take off if released.
Still, making the blueprints publicly available means that a large and growing number of people with the relevant technical skills could single-handedly make deadly biological agents that might spread autonomously, at least if they can get their hands on the relevant DNA. At present, such people most certainly can, so long as they bother to check the publicly available list of which gene synthesis companies do the right thing and screen orders, and, by implication, which ones don't.
One would hope that at least some of the companies that don't advertise that they screen are "honeypots" paid by intelligence agencies to catch would-be bioterrorists, but even if most of them are, it's still foolish to let individuals access that kind of destructive power. We will eventually make our society immune to naturally occurring pandemics, but that day has not yet come, and future pandemic viruses may not be natural. Hence, we should build a secure and adaptive system capable of screening all DNA synthesis for known and potential future pandemic agents... without disclosing what we think is a credible bioweapon.
Whether or not it becomes a global pandemic, the emergence of the Wuhan coronavirus has underscored the need for coordinated action to prevent the spread of pandemic disease. Let's ensure that our reactive response at the very least prepares us for future threats, for one day, reacting may not be enough.
This Dog's Nose Is So Good at Smelling Cancer That Scientists Are Trying to Build One Just Like It
Daisy wouldn't leave Claire Guest alone. Instead of joining Guest's other dogs for a run in the park, the golden retriever with the soulful eyes kept nudging Guest's chest and staring at her intently, hoping she'd get the message.
When Guest got home, she detected a tiny lump in one of her breasts. She dismissed it, but her sister, who is a family doctor, insisted she get it checked out.
That saved her life. A series of tests, including a biopsy and a mammogram, revealed the lump was benign. But doctors discovered a tumor hidden deep inside her chest wall, an insidious malignancy that normally isn't detected until the cancer has rampaged out of control throughout the body. "My prognosis would have been very poor," says Guest, who is an animal behaviorist. "I was incredibly lucky to be told by Daisy."
Ironically, at the time, Guest was training hearing dogs for the deaf—alerting them to doorbells or phones—for a charitable foundation. But she had been working on a side project to harness dogs' exquisitely sensitive sense of smell to spot cancer at its earliest and most treatable stages. When Guest was diagnosed with cancer two decades ago, however, the use of dogs to detect disease was in its infancy and the scientific evidence was largely anecdotal.
In the years since, Guest and Medical Detection Dogs (MDD), the British charitable foundation she co-founded with Dr. John Church in 2008, have shown that dogs can be trained to detect odors that predict a looming medical crisis hours in advance, as in diabetes or epilepsy, as well as the presence of cancers.
In a proof-of-principle study published in the BMJ in 2004, they showed that dogs had better than a 40 percent success rate in identifying bladder cancer, significantly better than random chance (14 percent). Subsequent research indicated dogs can detect odors down to parts per trillion, the equivalent of sniffing out a teaspoon of sugar in two Olympic-size swimming pools (a million gallons).
But the problem is "dogs can't be scaled up"—it costs upwards of $25,000 to train them—"and you can't keep a trained dog in every oncology practice," says Guest.
The good news is that the pivotal 2004 BMJ paper caught the attention of two American scientists—Andreas Mershin, a physicist at MIT, and Wen-Yee Lee, a chemistry professor at The University of Texas at El Paso. They have joined Guest's quest to leverage canines' highly attuned olfactory systems and devise artificial noses that mimic dogs' sense of smell, so these potentially life-saving diagnostic tools can be widely available.
"What we do know is that this is real," says Guest. "Anything that can improve diagnosis of cancer is something we ought to know about."
Dogs have been used for centuries as trackers for hunting and, more recently, for ferreting out bombs and bodies. Dogs like Daisy, who went on to become a star performer in Guest's pack of highly trained cancer-detecting canines before her death in 2018, have shared a special bond with their human companions for thousands of years. But their vastly superior olfaction is the result of simple anatomy.
Humans possess about six million olfactory receptors—the antenna-like structures embedded in the cell membranes of our noses that latch on to molecules in the air when we inhale. Dogs, in contrast, have about 300 million of them, and the brain region that analyzes smells is, proportionally, about 40 times larger than ours.
Research indicates that cancerous cells disrupt normal metabolic processes, prompting them to produce volatile organic compounds (VOCs), which enter the bloodstream and are either exhaled in our breath or excreted in urine. Dogs can identify these VOCs in urine samples at the tiniest concentrations, 0.001 parts per million, and can be trained to identify the specific "odor fingerprint" of different cancers, although teaching them to distinguish these signals from background odors is far more complicated than training them to detect drugs or explosives.
For the past fifteen years, Andreas Mershin of MIT has been grappling with this complexity in his quest to devise an artificial nose, which he calls the Nano-Nose—first as a military tool to spot land mines and IEDs, and more recently as a cancer-detection tool for doctors' offices. The ultimate goal is an easy-to-use olfaction system, powered by artificial intelligence, that can fit inside a smartphone and replicate dogs' ability to sniff out early signs of prostate cancer, which could eliminate a lot of painful and costly biopsies.
Andreas Mershin works on his artificial nose.
Trained canines have better than 90 percent accuracy in spotting prostate cancer, which is normally difficult to detect. The current diagnostic, the prostate-specific antigen (PSA) test, which measures blood levels of a protein produced by the prostate, has about as much accuracy "as a coin toss," according to the scientist who discovered PSA. Its false positives can lead to unnecessary and horrifically invasive biopsies to retrieve tissue samples.
So far, Mershin's prototype device has the same sensitivity as the dogs—and can detect odors at parts per trillion—but it still can't distinguish that cancer smell in individual human patients the way a dog can. "What we're trying to understand from the dogs is how they look at the data they are collecting so we can copy it," says Mershin. "We still have to make it intelligent enough to know what it is looking at—what we are lacking is artificial dog intelligence."
The intricate parts of the artificial nose are designed to fit inside a smartphone.
At UT El Paso, Wen-Yee Lee and her research team have used the canine olfactory system as a model for a new screening test for prostate cancer, which has shown 92 percent accuracy in tests of urine samples and could eventually be developed into a kit similar to a home pregnancy test. "If dogs can do it, we can do it better," says Lee, whose husband was diagnosed with prostate cancer in 2005.
The UT scientists used samples from about 150 patients and looked at about 9,000 compounds before they were able to zero in on the key VOCs released by prostate cancers—"it was like finding a needle in the haystack," says Lee. A more reliable test that can also distinguish which cancers are more aggressive could help patients decide on their best treatment options and avoid invasive procedures that can leave them incontinent and impotent.
"This is much more accurate than the PSA—we were able to see a very distinct difference between people with prostate cancer and those without cancer," says Lee, who has been sharing her research with Guest and hopes to have the test on the market within the next few years.
In the meantime, Guest's foundation has drawn the approving attention of royal animal lovers: Camilla, the Duchess of Cornwall, is a patron, which opened up the charitable floodgates and helped legitimize MDD in the scientific community. Even Camilla's mother-in-law, Queen Elizabeth, has had a demonstration of these canny canines' unique abilities.
Claire Guest and two of MDD's medical detection dogs, Jodie and Nimbus, meet with Queen Elizabeth.
"She actually held one of my [artificial] noses in her hand and asked really good questions, including things we hadn't thought of, like the range of how far away a dog can pick up the scent or if this can be used to screen for malaria," says Mershin. "I was floored by this curious 93-year-old lady. Half of humanity's deaths are from chronic diseases and what the dogs are showing is a whole new way of understanding holistic diseases of the system."