To Make Science Engaging, We Need a Sesame Street for Adults
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
In the mid-1960s, a documentary producer in New York City wondered if the addictive jingles, clever visuals, slogans, and repetition of television ads—the ones that were captivating young children of the time—could be harnessed for good. Over the course of three months, she interviewed educators, psychologists, and artists, and the result was a bonanza of ideas.
Perhaps a new TV show could teach children letters and numbers in short animated sequences? Perhaps adults and children could read together with puppets providing comic relief and prompting interaction from the audience? And because it would be broadcast through a device already in almost every home, perhaps this show could reach across socioeconomic divides and close an early education gap?
Soon after Joan Ganz Cooney shared her landmark report, "The Potential Uses of Television in Preschool Education," in 1966, she was prototyping show ideas, attracting funding from the Carnegie Corporation, the Ford Foundation, and the Corporation for Public Broadcasting, and co-founding the Children's Television Workshop with psychologist Lloyd Morrisett. And then, on November 10, 1969, informal learning was transformed forever with the premiere of Sesame Street on public television.
For its first season, Sesame Street won three Emmy Awards and a Peabody Award. Its star, Big Bird, landed on the cover of Time Magazine, which called the show "TV's gift to children." Fifty years later, it's hard to imagine an approach to informal preschool learning that isn't Sesame Street.
And that approach can be boiled down to one word: Entertainment.
Despite decades of evidence from Sesame Street—one of the most studied television shows of all time—and more research from social science, psychology, and media communications, we haven't yet taken Ganz Cooney's concepts to heart in educating adults. Adults have news programs and documentaries and educational YouTube channels, but no Sesame Street. So why not create one? Here's how we can design a new kind of television to make science engaging and accessible for a public that is all too often intimidated by it.
We have to start from the realization that America is a nation of high-school graduates. By the end of high school, many students have decided to abandon science because they think it's too difficult, and as a nation, we've made it acceptable for any one of us to say "I'm not good at science" and offload that thinking to the ones who might be. So is it surprising that so many Americans believe in conspiracy theories: the 25% who believe the release of COVID-19 was planned, the one in ten who believe the Moon landing was a hoax, or the 30–40% who think the condensation trails of planes are actually nefarious chemtrails? If we're meeting people where they are, the aim can't be to get the audience from an A to an A+, but from an F to a D, and without judgment of where they are starting from.
There's also a natural compulsion for a well-meaning educator to fill a literacy gap with a barrage of information, but this is what I call "factsplaining," and we know it doesn't work. Worse, it can backfire. In one 2014 study, parents were provided with factual information about vaccine safety, and the group that was already most averse to vaccines was the only one to become even more averse.
Why? Our social identities and cognitive biases are stubborn gatekeepers when it comes to processing new information. We filter ideas through pre-existing beliefs—our values, our religions, our political ideologies. Incongruent ideas are rejected. Congruent ideas, no matter how absurd, are allowed through. We hear what we want to hear, and then our brains justify the input by creating narratives that preserve our identities. Even when we have all the facts, we can use them to support any worldview.
But social science has revealed many mechanisms for hijacking these processes through narrative storytelling, and this can form the foundation of a new kind of educational television.
As media creators, we can reject factsplaining and instead construct entertaining narratives that disrupt these cognitive processes. Research going back two decades tells us that when people are immersed in entertaining fiction narratives, they loosen their defenses, opening a path for new information, shifting attitudes, and inspiring new behavior. Where news about hot-button issues like climate change or vaccination might trigger resistance or a backfire effect, fiction can be crafted to be absorbing and, as a result, persuasive.
But the narratives can't be stuffed with information. They must be simplified. If this feels like the opposite of what an educator should be doing, consider "exemplification," a framing device that reduces the complexity of information, without oversimplifying it, by telling the stories of individuals in specific circumstances who speak to the greater issue without needing to explain it all. It's a technique you've seen used in biopics. The Discovery Channel true-crime miniseries Manhunt: Unabomber does many things well from a science-storytelling perspective, including exemplifying the virtues of the scientific method through a character who argues for a new field of science, forensic linguistics, to catch one of the most notorious domestic terrorists in U.S. history.
We must also appeal to the audience's curiosity. We know curiosity is such a strong driver of human behavior that it can even counteract the biases put up by one's political ideology around subjects like climate change. If we treat science information like a product—and we should—advertising research tells us we can maximize curiosity through a Goldilocks effect. If the information is too complex, your show might as well be a PowerPoint presentation. If it's too simple, it's Sesame Street. There's a sweet spot for creating intrigue about new information when there's a moderate cognitive gap.
The science of "identification" tells us that the more endearing a main character is to a viewer, the more likely the viewer is to adopt the character's worldview and journey of change. This insight gives us further incentive to craft characters who reflect our audiences. If we accept our biases for what they are, we can understand why the messenger becomes more important than the message: without an appropriate messenger, the message becomes faint and ineffective. And research confirms that the stereotype-busting doctor-skeptic Dana Scully of The X-Files, the popular science-fiction series, inspired a generation of women to pursue science careers.
With these directions, we can start making a new kind of television. But is television itself still the right delivery medium? Americans do spend six hours per day—a quarter of their lives—watching video. And even with the rise of social media and apps, science-themed television shows remain popular, with four out of five adults reporting that they watch shows about science at least sometimes. CBS's The Big Bang Theory was the most-watched show on television in the 2017–2018 season, and Cartoon Network's Rick & Morty is the most popular comedy series among millennials. And medical and forensic dramas continue to be broadcast staples. So yes, it's as true today as it was in the 1980s when George Gerbner, the "cultivation theory" researcher who studied the long-term impacts of television images, wrote, "a single episode on primetime television can reach more people than all science and technology promotional efforts put together."
We know from cultivation theory that media images can shape our views of scientists. Quick, picture a scientist! Was it an old, white man with wild hair in a lab coat? Since most Americans don't encounter research science firsthand, media dictates how we perceive science and scientists. Characters like Sheldon Cooper and Rick Sanchez become the model. But we can correct that by representing professionals more accurately on-screen and writing characters more like Dana Scully.
Could new television series establish the baseline narratives for novel science like gene editing, quantum computing, or artificial intelligence? Or could new series counter the misinfodemics surrounding COVID-19 and vaccines through more compelling, corrective narratives? Social science has given us a blueprint suggesting they could. Binge-watching a show like the surreal NBC sitcom The Good Place doesn't replace a Ph.D. in philosophy, but its use of humor plants the seed of continued interest in a new subject. The goal of persuasive entertainment isn't to replace formal education, but it can inspire, shift attitudes, increase viewers' confidence in their knowledge of complex issues, and otherwise prime them for continued learning.
[Editor's Note: To read other articles in this special magazine issue, visit the beautifully designed e-reader version.]
Should We Use Technologies to Enhance Morality?
Our moral ‘hardware’ evolved more than 100,000 years ago, while humans were still scratching out an existence on the savannah. The perils we encountered back then were radically different from those that confront us now. To survive and flourish in the face of complex future challenges, our archaic operating systems might need an upgrade – in non-traditional ways.
Morality refers to standards of right and wrong when it comes to our beliefs, behaviors, and intentions. Broadly, moral enhancement is the use of biomedical technology to improve moral functioning. This could include augmenting empathy, altruism, or moral reasoning, or curbing antisocial traits like outgroup bias and aggression.
The claims related to moral enhancement are grand and polarizing: it’s been both tendered as a solution to humanity’s existential crises and bluntly dismissed as an armchair hypothesis. So, does the concept have any purchase? The answer leans heavily on our definition and expectations.
One issue is that the debate is often carved up in dichotomies – is moral enhancement feasible or unfeasible? Permissible or impermissible? Fact or fiction? On it goes. While these gesture at imperatives, trading in absolutes blurs the realities at hand. A sensible approach must resist extremes and recognize that moral disrupters are already here.
We know that existing interventions, whether they occur unknowingly or on purpose, have the power to modify moral dispositions in ways both good and bad. For instance, neurotoxins can promote antisocial behavior. The ‘lead-crime hypothesis’ links childhood lead exposure to impulsivity, antisocial aggression, and various other problems. Mercury has been associated with cognitive deficits, which might impair moral reasoning and judgement. It’s well documented that alcohol makes people more prone to violence.
So, what about positive drivers? Here’s where it gets more tangled.
Medicine has long treated psychiatric disorders with drugs like sedatives and antipsychotics. However, despite the moral merits of pharmacotherapy, morality gets scant mention in the Diagnostic and Statistical Manual of Mental Disorders (DSM) – its effects on moral functioning are implicit and indirect. Such cases are regarded as treatments rather than enhancements.
Conventionally, an enhancement must go beyond what is ‘normal,’ species-typical, or medically necessary – this is known as the ‘treatment-enhancement distinction.’ But boundaries of health and disease are fluid, so whether we call a procedure ‘moral enhancement’ or ‘medical treatment’ is liable to change with shifts in social values, expert opinions, and clinical practices.
Human enhancements are already used for a range of purported benefits: caffeine, smart drugs, and other supplements to boost cognitive performance; cosmetic procedures for aesthetic reasons; and steroids and stimulants for physical advantage. More boldly, cyborgs like Moon Ribas and Neil Harbisson are pushing transpecies boundaries with new kinds of sensory perception. It would be dangerously myopic to assume that moral augmentation is somehow beyond reach.
How might it work?
One possibility for shaping moral temperaments is with neurostimulation devices. These use electrodes to deliver a low-intensity current that alters the electromagnetic activity of specific neural regions. For instance, transcranial direct current stimulation (tDCS) can target parts of the brain involved in self-awareness, moral judgement, and emotional decision-making. It’s been shown to increase empathy and value-based learning, and decrease aggression and risk-taking behavior. Many countries already use tDCS to treat pain and depression, but evidence for enhancement effects in healthy subjects is mixed.
Another suggestion is targeting neuromodulators like serotonin and dopamine. Serotonin is linked to prosocial attributes like trust, fairness, and cooperation, while low serotonin activity is thought to motivate desires for revenge and harming others. It’s not as simple as indiscriminately boosting brain chemicals, though. While serotonin levels can be modulated with SSRIs, precise levels are difficult to measure and track, and there’s no scientific consensus on the “optimum” amount or on whether such a value even exists. Fluctuations due to lifestyle factors such as diet, stress, and exercise add further complexity. Currently, more research is needed on the significance of neuromodulators and their network dynamics across the moral landscape.
There are a range of other prospects. The ‘love drugs’ oxytocin and MDMA mediate pair bonding, cooperation, and social attachment, although some studies suggest that people with high levels of oxytocin are more aggressive toward outsiders. Lithium is a mood stabilizer that has been shown to reduce aggression in prison populations; beta-blockers like propranolol and the supplement omega-3 have similar effects. Increasingly, brain-computer interfaces augur a world of brave possibilities. Such appeals are not without limitations, but they indicate some ways that external tools can positively nudge our moral sentiments.
Who needs morally enhancing?
A common worry is that enhancement technologies could be weaponized for social control by authoritarian regimes, or used like the oppressive eugenics of the early 20th century. Fortunately, the realities are far more mundane and such dystopian visions are fantastical. So, what are some actual possibilities?
Some researchers suggest that neurotechnologies could help to reactivate brain regions of those suffering from moral pathologies, including healthy people with psychopathic traits (like a lack of empathy). Another proposal is using such technology on young people with conduct problems to prevent serious disorders in adulthood.
A question is whether these kinds of interventions should be compulsory for dangerous criminals. On the other hand, a voluntary treatment for inmates wouldn’t be so different from existing incentive schemes. For instance, some U.S. jurisdictions already offer drug treatment programs in exchange for early release or instead of prison time. Then there’s the difficult question of how we should treat non-criminal but potentially harmful ‘successful’ psychopaths.
Others argue that if virtues have a genetic component, there is no technological reason why present practices of embryo screening for genetic diseases couldn’t also be used for selecting socially beneficial traits.
Perhaps the most immediate scenario is a kind of voluntary moral therapy, which would use biomedicine to facilitate ideal brain-states to augment traditional psychotherapy. Most of us aren’t always as ethical as we would like – given the option of ‘priming’ yourself to act in consistent accord with your higher values, would you take it? Approaches like neurofeedback and psychedelic-assisted therapy could prove helpful.
What are the challenges?
A general challenge is that of setting. Morality is context dependent – what’s good in one environment may be bad in another and vice versa – so we don’t want to throw the baby out with the bathwater by dismissing any one trait wholesale. Of course, common sense tells us that some tendencies are more socially desirable than others: fairness, altruism, and openness are clearly preferred over aggression, dishonesty, and prejudice.
One argument is that remoulding ‘brute impulses’ via biology would not count as moral enhancement. This view claims that for an action to truly count as moral, it must involve cognition – reasoning, deliberation, judgement. Critics argue that we should be concerned more with ends than means, so ultimately it’s outcomes that matter most.
Another worry is that modifying one biological aspect will have adverse knock-on effects for other valuable traits. Certainly, we must be careful about the network impacts of any intervention. But all stimuli have distributed effects on the body, so it’s really a matter of weighing up the cost/benefit trade-offs as in any standard medical decision.
Is it ethical?
Our values form a big part of who we are – some bioethicists argue that altering morality would pose a threat to character and personal identity. Another claim is that moral enhancement would compromise autonomy by limiting a person’s range of choices and curbing their ‘freedom to fall.’ Any intervention must consider the potential impacts on selfhood and personal liberty, in addition to the wider social implications.
This includes the importance of social and genetic diversity, which is closely tied to considerations of fairness, equality, and opportunity. The history of psychiatry is rife with examples of systematic oppression, like ‘drapetomania’ – the spurious mental illness that was thought to cause African slaves’ desire to flee captivity. Advocates for using moral enhancement technologies to help kids with conduct problems should be mindful that such children disproportionately come from low-income communities. We must ensure that any habilitative practice doesn’t perpetuate harmful prejudices by unfairly targeting marginalized people.
Then there are concerns that morally-enhanced persons would be vulnerable to predation by those who deliberately avoid moral therapies. This relates to what’s been dubbed the ‘bootstrapping problem’: would-be moral enhancement candidates are precisely the types of individuals who benefit from not being morally enhanced. Imagine if every senator were asked to undergo an honesty-boosting procedure prior to entering public office – would they go willingly? Then again, perhaps a technological truth-serum wouldn’t be such a bad requisite for those in positions of serious social consequence.
Advocates argue that biomedical moral betterment would simply offer another means of pursuing the same goals as established social mechanisms like religion, education, and community, and non-invasive therapies like cognitive-behavioral therapy and meditation. It’s even possible that technological efforts would be more effective. After all, human capacities are the result of environmental influences, and external conditions still coax our biology in unknown ways. Status quo bias for ‘letting nature take its course’ may actually be worse in the long term – failing to utilize technology for human development may do more harm than good. If we can safely improve ourselves in direct and deliberate ways, then there’s no morally significant difference whether this happens via conventional methods or new technology.
Future prospects
Where speculation about human enhancement has led to hype and technophilia, many bioethicists urge restraint. We can be grounded in current science while anticipating feasible medium-term prospects. It’s unlikely that moral enhancement heralds any metamorphic post-human utopia (or dystopia), but that doesn’t mean dismissing its transformative potential. In one sense, we should be wary of transhumanist fervour about the salvific promise of new technology. By the same token, we must resist technofear and alarmist efforts to stall social and scientific progress. Emerging methods will continue to shape morality in subtle and not-so-subtle ways – the critical steps are spotting and scaffolding these with robust ethical discussion, public engagement, and reasonable policy options. Steering a bright and judicious course requires that we pilot the possibilities of morally-disruptive technologies.
Podcast: The Friday Five - your health research roundup
The Friday Five is a new podcast series in which Leaps.org covers five research breakthroughs from the previous week that you may have missed. There are plenty of controversies and ethical issues in science – and we get into many of them in our online magazine – but there’s also plenty to be excited about, and this news roundup is focused on inspiring scientific work to give you some momentum heading into the weekend.
Covered in this week's Friday Five:
- Puffer fish chemical for treating chronic pain
- Sleep study on the health benefits of waking up multiple times per night
- Best exercise regimens for reducing the risk of mortality, a.k.a. living longer
- AI breakthrough in mapping protein structures with DeepMind
- Ultrasound stickers to see inside your body