To Make Science Engaging, We Need a Sesame Street for Adults
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
In the mid-1960s, a documentary producer in New York City wondered if the addictive jingles, clever visuals, slogans, and repetition of television ads—the ones that were captivating young children of the time—could be harnessed for good. Over the course of three months, she interviewed educators, psychologists, and artists, and the result was a bonanza of ideas.
Perhaps a new TV show could teach children letters and numbers in short animated sequences? Perhaps adults and children could read together with puppets providing comic relief and prompting interaction from the audience? And because it would be broadcast through a device already in almost every home, perhaps this show could reach across socioeconomic divides and close an early education gap?
Soon after Joan Ganz Cooney shared her landmark report, "The Potential Uses of Television in Preschool Education," in 1966, she was prototyping show ideas, attracting funding from The Carnegie Corporation, The Ford Foundation, and The Corporation for Public Broadcasting, and co-founding the Children's Television Workshop with psychologist Lloyd Morrisett. And then, on November 10, 1969, informal learning was transformed forever with the premiere of Sesame Street on public television.
For its first season, Sesame Street won three Emmy Awards and a Peabody Award. Its star, Big Bird, landed on the cover of Time Magazine, which called the show "TV's gift to children." Fifty years later, it's hard to imagine an approach to informal preschool learning that isn't Sesame Street.
And that approach can be boiled down to one word: Entertainment.
Despite decades of evidence from Sesame Street—one of the most studied television shows of all time—and more research from social science, psychology, and media communications, we haven't yet taken Ganz Cooney's concepts to heart in educating adults. Adults have news programs and documentaries and educational YouTube channels, but no Sesame Street. So why not make one? Here's how we can design a new kind of television to make science engaging and accessible for a public that is all too often intimidated by it.
We have to start from the realization that America is a nation of high-school graduates. By the end of high school, many students have decided to abandon science because they think it's too difficult, and as a nation, we've made it acceptable for any one of us to say "I'm not good at science" and offload thinking to those who might be. So is it surprising that so many Americans believe in conspiracy theories, like the 25% who believe the release of COVID-19 was planned, the one in ten who believe the Moon landing was a hoax, or the 30–40% who think the condensation trails of planes are actually nefarious chemtrails? If we're meeting people where they are, the aim can't be to get the audience from an A to an A+, but from an F to a D, and without judgment of where they start.
There's also a natural compulsion for a well-meaning educator to fill a literacy gap with a barrage of information, but this is what I call "factsplaining," and we know it doesn't work. And worse, it can backfire. In one study from 2014, parents were provided with factual information about vaccine safety, and it was the group that was already the most averse to vaccines that uniquely became even more averse.
Why? Our social identities and cognitive biases are stubborn gatekeepers when it comes to processing new information. We filter ideas through pre-existing beliefs—our values, our religions, our political ideologies. Incongruent ideas are rejected. Congruent ideas, no matter how absurd, are allowed through. We hear what we want to hear, and then our brains justify the input by creating narratives that preserve our identities. Even when we have all the facts, we can use them to support any worldview.
But social science has revealed many mechanisms for hijacking these processes through narrative storytelling, and this can form the foundation of a new kind of educational television.
As media creators, we can reject factsplaining and instead construct entertaining narratives that disrupt cognitive processes. Two-decade-old research tells us when people are immersed in entertaining fiction narratives, they loosen their defenses, opening a path for new information, editing attitudes, and inspiring new behavior. Where news about hot-button issues like climate change or vaccination might trigger resistance or a backfire effect, fiction can be crafted to be absorbing and, as a result, persuasive.
But the narratives can't be stuffed with information. They must be simplified. If this feels like the opposite of what an educator should do, there is a way to reduce the complexity of information without oversimplifying it: "exemplification," a framing device that tells the stories of individuals in specific circumstances to speak to a greater issue without needing to explain it all. It's a technique you've seen used in biopics. The Discovery Channel true-crime miniseries Manhunt: Unabomber does many things well from a science-storytelling perspective, including exemplifying the virtues of the scientific method through a character who argues for a new field of science, forensic linguistics, to catch one of the most notorious domestic terrorists in U.S. history.
We must also appeal to the audience's curiosity. We know curiosity is such a strong driver of human behavior that it can even counteract the biases put up by one's political ideology around subjects like climate change. If we treat science information like a product—and we should—advertising research tells us we can maximize curiosity through a Goldilocks effect. If the information is too complex, your show might as well be a PowerPoint presentation. If it's too simple, it's Sesame Street. There's a sweet spot for creating intrigue about new information when there's a moderate cognitive gap.
The science of "identification" tells us that the more endearing a main character is to a viewer, the more likely the viewer will adopt the character's worldview and journey of change. This insight gives us further incentive to craft characters who reflect our audiences. If we accept our biases for what they are, we can understand why the messenger becomes more important than the message: without an appropriate messenger, the message becomes faint and ineffective. And research confirms that the stereotype-busting doctor-skeptic Dana Scully of The X-Files, a popular science-fiction series, was an inspiration for a generation of women who pursued science careers.
With these directions, we can start making a new kind of television. But is television itself still the right delivery medium? Americans do spend six hours per day—a quarter of their lives—watching video. And even with the rise of social media and apps, science-themed television shows remain popular, with four out of five adults reporting that they watch shows about science at least sometimes. CBS's The Big Bang Theory was the most-watched show on television in the 2017–2018 season, and Cartoon Network's Rick & Morty is the most popular comedy series among millennials. And medical and forensic dramas continue to be broadcast staples. So yes, it's as true today as it was in the 1980s when George Gerbner, the "cultivation theory" researcher who studied the long-term impacts of television images, wrote, "a single episode on primetime television can reach more people than all science and technology promotional efforts put together."
We know from cultivation theory that media images can shape our views of scientists. Quick, picture a scientist! Was it an old, white man with wild hair in a lab coat? If most Americans don't encounter research science firsthand, it's media that dictates how we perceive science and scientists. Characters like Sheldon Cooper and Rick Sanchez become the model. But we can correct that by representing professionals more accurately on-screen and writing characters more like Dana Scully.
Could new television series establish the baseline narratives for novel science like gene editing, quantum computing, or artificial intelligence? Or could new series counter the misinfodemics surrounding COVID-19 and vaccines through more compelling, corrective narratives? Social science has given us a blueprint suggesting they could. Binge-watching a show like the surreal NBC sitcom The Good Place doesn't replace a Ph.D. in philosophy, but its use of humor plants the seed of continued interest in a new subject. The goal of persuasive entertainment isn't to replace formal education, but it can inspire, shift attitudes, increase confidence in the knowledge of complex issues, and otherwise prime viewers for continued learning.
Earlier this year, California-based Ambry Genetics announced that it was discontinuing a test meant to estimate a person's risk of developing prostate or breast cancer. The test looks for variations in a person's DNA that are known to be associated with these cancers.
Known as a polygenic risk score, this type of test adds up the effects of variants in many genes — often in the dozens or hundreds — and calculates a person's risk of developing a particular health condition compared to other people. In this way, polygenic risk scores are different from traditional genetic tests that look for mutations in single genes, such as BRCA1 and BRCA2, which raise the risk of breast cancer.
These traditional tests target mutations that are relatively rare in the general population but have a large impact on a person's disease risk. By contrast, polygenic risk scores scan for more common genetic variants that, on their own, have only a small effect on risk. Added together, however, they can meaningfully raise a person's risk of developing disease.
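The "adding up" described above is, at its core, a weighted sum: each variant contributes its estimated effect size multiplied by the number of risk alleles a person carries. A minimal sketch, with made-up variant IDs and effect weights purely for illustration (real scores use hundreds of variants and weights estimated from large association studies):

```python
# Hypothetical effect sizes (e.g., log odds ratios) per variant,
# as would be estimated from a genome-wide association study.
weights = {"rs0001": 0.12, "rs0002": 0.05, "rs0003": -0.03}

# One person's genotype: number of risk alleles carried at each
# variant (0, 1, or 2 copies).
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

def polygenic_score(genotype, weights):
    """Sum each variant's effect size times the risk alleles carried."""
    return sum(w * genotype.get(variant, 0) for variant, w in weights.items())

score = polygenic_score(genotype, weights)
print(round(score, 2))  # 0.21
```

The raw number means little on its own; in practice it is compared against the distribution of scores in a reference population to place a person in a risk percentile, which is exactly where the ancestry of that reference population starts to matter.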
These scores could become a part of routine healthcare in the next few years. Researchers are developing polygenic risk scores for cancer, heart disease, diabetes, and even depression. Before they can be rolled out widely, they'll have to overcome a key limitation: racial bias.
"The issue with these polygenic risk scores is that the scientific studies which they're based on have primarily been done in individuals of European ancestry," says Sara Riordan, president of the National Society of Genetics Counselors. These scores are calculated by comparing the genetic data of people with and without a particular disease. To make these scores accurate, researchers need genetic data from tens or hundreds of thousands of people.
A 2018 analysis found that 78% of participants included in such large genetic studies, known as genome-wide association studies, were of European descent. That's a problem, because certain disease-associated genetic variants don't appear equally across different racial and ethnic groups. For example, a particular variant in the TTR gene, known as V122I, occurs more frequently in people of African descent. In recent years, the variant has been found in 3 to 4 percent of individuals of African ancestry in the United States. Mutations in this gene can cause protein to build up in the heart, leading to a higher risk of heart failure. A polygenic risk score for heart disease based on genetic data from mostly white people likely wouldn't give accurate risk information to African Americans.
Accuracy in genetic testing matters because such polygenic risk scores could help patients and their doctors make better decisions about their healthcare.
For instance, if a polygenic risk score determines that a woman is at higher-than-average risk of breast cancer, her doctor might recommend more frequent mammograms — X-rays that take a picture of the breast. Or, if a risk score reveals that a patient is more predisposed to heart attack, a doctor might prescribe preventive statins, a type of cholesterol-lowering drug.
"Let's be clear, these are not diagnostic tools," says Alicia Martin, a population and statistical geneticist at the Broad Institute of MIT and Harvard. "We can't use a polygenic score to say you will or will not get breast cancer or have a heart attack."
But combining a patient's polygenic risk score with other factors that affect disease risk — like age, weight, medication use or smoking status — may provide a better sense of how likely they are to develop a specific health condition than considering any one risk factor on its own. The accuracy of polygenic risk scores becomes even more important when considering that these scores may be used to guide medication prescription or help patients make decisions about preventive surgery, such as a mastectomy.
In a study published in September, researchers used results from large genetics studies of people with European ancestry and data from the UK Biobank to calculate polygenic risk scores for breast and prostate cancer for people with African, East Asian, European and South Asian ancestry. They found that they could identify individuals at higher risk of breast and prostate cancer when they scaled the risk scores within each group, but the authors say this is only a temporary solution. Recruiting more diverse participants for genetics studies will lead to better cancer detection and prevention, they conclude.
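The within-group scaling the study describes can be pictured as standardizing each person's raw score against the mean and spread of their own ancestry group, so that "higher than average" is defined relative to that group rather than to a mostly European reference. A hedged sketch of the idea, with invented group labels and score values for illustration only:

```python
from statistics import mean, stdev

# Invented raw polygenic scores for two ancestry groups whose
# distributions differ, as happens when weights come from
# European-ancestry studies.
raw_scores = {
    "group_a": [0.8, 1.1, 0.9, 1.4, 1.0],
    "group_b": [0.2, 0.5, 0.3, 0.6, 0.4],
}

def scale_within_group(scores):
    """Convert raw scores to z-scores separately within each group,
    so individuals are ranked against their own group's distribution."""
    scaled = {}
    for group, values in scores.items():
        m, s = mean(values), stdev(values)
        scaled[group] = [(v - m) / s for v in values]
    return scaled

scaled = scale_within_group(raw_scores)
# After scaling, each group is centered at zero with unit spread,
# even though the raw distributions were shifted relative to each other.
```

This rescaling makes rankings comparable within each group, but, as the authors note, it cannot recover risk signal from variants that were never measured in underrepresented populations; only more diverse study recruitment can do that.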
Recent efforts to do just that are expected to make these scores more accurate in the future. Until then, some genetics companies are struggling to overcome the European bias in their tests.
Acknowledging the limitations of its polygenic risk score, Ambry Genetics said in April that it would stop offering the test until it could be recalibrated. The company launched the test, known as AmbryScore, in 2018.
"After careful consideration, we have decided to discontinue AmbryScore to help reduce disparities in access to genetic testing and to stay aligned with current guidelines," the company said in an email to customers. "Due to limited data across ethnic populations, most polygenic risk scores, including AmbryScore, have not been validated for use in patients of diverse backgrounds." (The company did not make a spokesperson available for an interview for this story.)
In September 2020, the National Comprehensive Cancer Network updated its guidelines to advise against the use of polygenic risk scores in routine patient care because of "significant limitations in interpretation." The nonprofit, which represents 31 major cancer centers across the United States, said such scores could continue to be used experimentally in clinical trials, however.
Holly Pederson, director of Medical Breast Services at the Cleveland Clinic, says the realization that polygenic risk scores may not be accurate for all races and ethnicities is relatively recent. Pederson worked with Salt Lake City-based Myriad Genetics, a leading provider of genetic tests, to improve the accuracy of its polygenic risk score for breast cancer.
The company announced in August that it had recalibrated the test, called RiskScore, for women of all ancestries. Previously, Myriad offered its polygenic risk score only to women who self-reported exclusively European or Ashkenazi ancestry.
"Black women, while they have a similar rate of breast cancer to white women, if not lower, had twice as high of a polygenic risk score because the development and validation of the model was done in white populations," Pederson said of the old test. In other words, Myriad's old test would have shown that a Black woman had twice as high of a risk for breast cancer compared to the average woman even if she was at low or average risk.
To develop and validate the new score, Pederson and other researchers assessed data from more than 275,000 women, including more than 31,000 African American women and nearly 50,000 women of East Asian descent. They looked at 56 different genetic variants associated with ancestry and 93 associated with breast cancer. Interestingly, they found that at least 95% of the breast cancer variants were similar amongst the different ancestries.
The company says the resulting test is now more accurate for all women across the board, but Pederson cautions that it's still slightly less accurate for Black women.
"It's not only the lack of data from Black women that leads to inaccuracies and a lack of validation in these types of risk models, it's also the pure genomic diversity of Africa," she says, noting that Africa is the most genetically diverse continent on the planet. "We just need more data, not only in American Black women but in African women to really further characterize that continent."
Martin says it's problematic that such scores are most accurate for white people because they could further exacerbate health disparities in traditionally underserved groups, such as Black Americans. "If we were to set up really representative massive genetic studies, we would do a much better job at predicting genetic risk for everybody," she says.
Earlier this year, the National Institutes of Health awarded $38 million to researchers to improve the accuracy of polygenic risk scores in diverse populations. Researchers will create new genome datasets and pool information from existing ones in an effort to diversify the data that polygenic scores rely on. They plan to make these datasets available to other scientists to use.
"By having adequate representation, we can ensure that the results of a genetic test are widely applicable," Riordan says.
New Podcast: George Church on Woolly Mammoths, Organ Transplants, and Covid Vaccines
The "Making Sense of Science" podcast features interviews with leading medical and scientific experts about the latest developments and the big ethical and societal questions they raise. This monthly podcast is hosted by journalist Kira Peikoff, founding editor of the award-winning science outlet Leaps.org.
This month, our guest is notable genetics pioneer Dr. George Church of Harvard Medical School. Dr. Church has remarkably bold visions for how innovation in science can fundamentally transform the future of humanity and our planet. His current moonshot projects include: de-extincting some of the woolly mammoth's genes to create a hybrid Asian elephant with the cold-tolerance traits of the woolly mammoth, so that this animal can re-populate the Arctic and help stave off climate change; reversing chronic diseases of aging through gene therapy, which he and colleagues are now testing in dogs; and transplanting genetically engineered pig organs to humans to eliminate the tragically long waiting lists for organs. Hear Dr. Church discuss all this and more on our latest episode.
Kira Peikoff was the editor-in-chief of Leaps.org from 2017 to 2021. As a journalist, her work has appeared in The New York Times, Newsweek, Nautilus, Popular Mechanics, The New York Academy of Sciences, and other outlets. She is also the author of four suspense novels that explore controversial issues arising from scientific innovation: Living Proof, No Time to Die, Die Again Tomorrow, and Mother Knows Best. Peikoff holds a B.A. in Journalism from New York University and an M.S. in Bioethics from Columbia University. She lives in New Jersey with her husband and two young sons. Follow her on Twitter @KiraPeikoff.