To Make Science Engaging, We Need a Sesame Street for Adults
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
In the mid-1960s, a documentary producer in New York City wondered if the addictive jingles, clever visuals, slogans, and repetition of television ads—the ones that were captivating young children of the time—could be harnessed for good. Over the course of three months, she interviewed educators, psychologists, and artists, and the result was a bonanza of ideas.
Perhaps a new TV show could teach children letters and numbers in short animated sequences? Perhaps adults and children could read together with puppets providing comic relief and prompting interaction from the audience? And because it would be broadcast through a device already in almost every home, perhaps this show could reach across socioeconomic divides and close an early education gap?
Soon after Joan Ganz Cooney shared her landmark report, "The Potential Uses of Television in Preschool Education," in 1966, she was prototyping show ideas, attracting funding from The Carnegie Corporation, The Ford Foundation, and The Corporation for Public Broadcasting, and co-founding the Children's Television Workshop with psychologist Lloyd Morrisett. And then, on November 10, 1969, informal learning was transformed forever with the premiere of Sesame Street on public television.
For its first season, Sesame Street won three Emmy Awards and a Peabody Award. Its star, Big Bird, landed on the cover of Time Magazine, which called the show "TV's gift to children." Fifty years later, it's hard to imagine an approach to informal preschool learning that isn't Sesame Street.
And that approach can be boiled down to one word: Entertainment.
Despite decades of evidence from Sesame Street—one of the most studied television shows of all time—and more research from social science, psychology, and media communications, we haven't yet taken Ganz Cooney's concepts to heart in educating adults. Adults have news programs and documentaries and educational YouTube channels, but no Sesame Street. So why not build one? Here's how we can design a new kind of television to make science engaging and accessible for a public that is all too often intimidated by it.
We have to start from the realization that America is a nation of high-school graduates. By the end of high school, many students have decided to abandon science because they think it's too difficult, and as a nation, we've made it acceptable for any one of us to say "I'm not good at science" and offload thinking to the ones who might be. So is it surprising that so many Americans believe in conspiracy theories, like the 25% who believe the release of COVID-19 was planned, the one in ten who believe the Moon landing was a hoax, or the 30–40% who think the condensation trails of planes are actually nefarious chemtrails? If we're meeting people where they are, the aim can't be to get the audience from an A to an A+, but from an F to a D, and without judgment of where they are starting from.
There's also a natural compulsion for a well-meaning educator to fill a literacy gap with a barrage of information, but this is what I call "factsplaining," and we know it doesn't work. And worse, it can backfire. In one 2014 study, parents were given factual information about vaccine safety, and it was the group already most averse to vaccines that became even more averse.
Why? Our social identities and cognitive biases are stubborn gatekeepers when it comes to processing new information. We filter ideas through pre-existing beliefs—our values, our religions, our political ideologies. Incongruent ideas are rejected. Congruent ideas, no matter how absurd, are allowed through. We hear what we want to hear, and then our brains justify the input by creating narratives that preserve our identities. Even when we have all the facts, we can use them to support any worldview.
But social science has revealed many mechanisms for hijacking these processes through narrative storytelling, and this can form the foundation of a new kind of educational television.
As media creators, we can reject factsplaining and instead construct entertaining narratives that disrupt cognitive processes. Research going back two decades tells us that when people are immersed in entertaining fiction narratives, they loosen their defenses, opening a path for new information, shifting attitudes, and inspiring new behavior. Where news about hot-button issues like climate change or vaccination might trigger resistance or a backfire effect, fiction can be crafted to be absorbing and, as a result, persuasive.
But the narratives can't be stuffed with information. They must be simplified. If this feels like the opposite of what an educator should be doing, there is a way to reduce the complexity of information without oversimplification: "exemplification," a framing device that tells the stories of individuals in specific circumstances who can speak to the greater issue without needing to explain it all. It's a technique you've seen used in biopics. The Discovery Channel true-crime miniseries Manhunt: Unabomber does many things well from a science storytelling perspective, including exemplifying the virtues of the scientific method through a character who argues for a new field of science, forensic linguistics, to catch one of the most notorious domestic terrorists in U.S. history.
We must also appeal to the audience's curiosity. We know curiosity is such a strong driver of human behavior that it can even counteract the biases put up by one's political ideology around subjects like climate change. If we treat science information like a product—and we should—advertising research tells us we can maximize curiosity through a Goldilocks effect. If the information is too complex, your show might as well be a PowerPoint presentation. If it's too simple, it's Sesame Street. There's a sweet spot for creating intrigue about new information when there's a moderate cognitive gap.
The science of "identification" tells us that the more endearing the main character is to a viewer, the more likely the viewer is to adopt the character's worldview and journey of change. This insight gives us further incentive to craft characters who reflect our audiences. If we accept our biases for what they are, we can understand why the messenger becomes more important than the message: without an appropriate messenger, the message becomes faint and ineffective. And research confirms that the stereotype-busting doctor-skeptic Dana Scully of The X-Files, a popular science-fiction series, was an inspiration for a generation of women who pursued science careers.
With these directions, we can start making a new kind of television. But is television itself still the right delivery medium? Americans do spend six hours per day—a quarter of their lives—watching video. And even with the rise of social media and apps, science-themed television shows remain popular, with four out of five adults reporting that they watch shows about science at least sometimes. CBS's The Big Bang Theory was the most-watched show on television in the 2017–2018 season, and Cartoon Network's Rick & Morty is the most popular comedy series among millennials. And medical and forensic dramas continue to be broadcast staples. So yes, it's as true today as it was in the 1980s when George Gerbner, the "cultivation theory" researcher who studied the long-term impacts of television images, wrote, "a single episode on primetime television can reach more people than all science and technology promotional efforts put together."
We know from cultivation theory that media images can shape our views of scientists. Quick, picture a scientist! Was it an old, white man with wild hair in a lab coat? Because most Americans don't encounter research science firsthand, it's media that dictates how we perceive science and scientists. Characters like Sheldon Cooper and Rick Sanchez become the model. But we can correct that by representing professionals more accurately on-screen and writing characters more like Dana Scully.
Could new television series establish the baseline narratives for novel science like gene editing, quantum computing, or artificial intelligence? Or could new series counter the misinfodemics surrounding COVID-19 and vaccines through more compelling, corrective narratives? Social science has given us a blueprint suggesting they could. Binge-watching a show like the surreal NBC sitcom The Good Place doesn't replace a Ph.D. in philosophy, but its use of humor plants the seed of continued interest in a new subject. The goal of persuasive entertainment isn't to replace formal education, but it can inspire, shift attitudes, increase confidence in the knowledge of complex issues, and otherwise prime viewers for continued learning.
The Death Predictor: A Helpful New Tool or an Ethical Morass?
Whenever Eric Karl Oermann has to tell a patient about a terrible prognosis, their first question is always: "How long do I have?" Oermann would like to offer a precise answer, to provide some certainty and help guide treatment. But although he's one of the country's foremost experts in medical artificial intelligence, Oermann is still dependent on a computer algorithm that's often wrong.
Artificial intelligence, now often called deep learning or neural networks, has radically transformed language and image processing. It's allowed computers to play chess better than the world's grand masters and outwit the best Jeopardy players. But it still can't precisely tell a doctor how long a patient has left – or how to help that person live longer.
Someday, researchers predict, computers will be able to watch a video of a patient to determine their health status. Doctors will no longer have to spend hours inputting data into medical records. And computers will do a better job than specialists at identifying tiny tumors, impending crises, and, yes, figuring out how long the patient has to live. Oermann, a neurosurgeon at Mount Sinai, says all that technology will allow doctors to spend more time doing what they do best: talking with their patients. "I want to see more deep learning and computers in a clinical setting," he says, "so there can be more human interaction." But those days are still at least three to five years off, Oermann and other researchers say.
Doctors are notoriously terrible at guessing how long their patients will live, says Nigam Shah, an associate professor at Stanford University and assistant director of the school's Center for Biomedical Informatics Research. Doctors don't want to believe that their patient – whom they've come to like – will die. "Doctors over-estimate survival many-fold," Shah says. "How do you go into work, in say, oncology, and not be delusionally optimistic? You have to be."
But patients near the end of life will get better treatment – and even live longer – if they are overseen by hospice or palliative care, research shows. So, instead of relying on human bias to select those whose lives are nearing their end, Shah and his colleagues showed that they could use a deep learning algorithm based on medical records to flag incoming patients with a life expectancy of three months to a year. They use that data to indicate who might need palliative care. Then, the palliative care team can reach out to treating physicians proactively, instead of relying on their referrals or taking the time to read extensive medical charts.
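For readers curious what that flagging step can look like in practice, here is a minimal, hypothetical sketch in Python. It is not Shah's model: the patient fields, the upstream survival estimate, and the three-to-twelve-month window are illustrative assumptions only, meant to show how a predicted life expectancy could be turned into a proactive referral list for the palliative care team.

```python
# Hypothetical sketch only -- not the Stanford algorithm. It assumes some
# upstream prognostic model has already produced a survival estimate.
from dataclasses import dataclass
from typing import List

@dataclass
class Patient:
    patient_id: str
    predicted_survival_months: float  # output of an upstream prognostic model

def flag_for_palliative_review(patients: List[Patient],
                               low: float = 3.0,
                               high: float = 12.0) -> List[Patient]:
    """Return patients whose predicted survival falls in the 3-12 month window,
    so the palliative care team can reach out to treating physicians proactively."""
    return [p for p in patients if low <= p.predicted_survival_months <= high]

if __name__ == "__main__":
    cohort = [
        Patient("A", 2.0),    # outside the window
        Patient("B", 6.5),    # flagged for proactive outreach
        Patient("C", 24.0),   # outside the window
    ]
    for p in flag_for_palliative_review(cohort):
        print(f"Flag {p.patient_id} for palliative care review")
```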
But, although the system works well, Shah isn't yet sure if such indicators actually get the appropriate patients into palliative care. He's recently partnered with a palliative care doctor to run a gold-standard clinical trial to test whether patients who are flagged by this algorithm are indeed a better match for palliative care.
"What is effective from a health system perspective might not be effective from a treating physician's perspective and might not be effective from the patient's perspective," Shah notes. "I don't have a good way to guess everybody's reaction without actually studying it." Whether palliative care is appropriate, for instance, depends on more than just the patient's health status. "If the patient's not ready, the family's not ready and the doctor's not ready, then you're just banging your head against the wall," Shah says. "Given limited capacity, it's a waste of resources" to put that person in palliative care.
Alexander Smith and Sei Lee, both palliative care doctors, work together at the University of California, San Francisco, to develop predictions for patients who come to the hospital with a complicated prognosis or a history of decline. Their algorithm, they say, helps decide if this patient's problems – which might include diabetes, heart disease, a slow-growing cancer, and memory issues – make them eligible for hospice. The algorithm isn't perfect, they both agree, but "on balance, it leads to better decisions more often," Smith says.
Bethany Percha, an assistant professor at Mount Sinai, says that an algorithm may tell doctors that their patient is trending downward, but it doesn't do anything to change that trajectory. "Even if you can predict something, what can you do about it?" Algorithms may be able to offer treatment suggestions – but not what specific actions will alter a patient's future, says Percha, also the chief technology officer of Precise Health Enterprise, a product development group within Mount Sinai. And the algorithms remain challenging to develop. Electronic medical records may be great at her hospital, but if the patient dies at a different one, her system won't know. If she wants to be certain a patient has died, she has to merge social security records of death with her system's medical records – a time-consuming and cumbersome process.
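To illustrate the kind of record linkage Percha describes, here is a small, hypothetical sketch using pandas. The tables and column names are invented for the example; a real linkage against social security death records would involve far messier identifiers and strict privacy constraints.

```python
# Hypothetical illustration of linking a hospital's records to an external
# death index; the data and column names are invented for this example.
import pandas as pd

emr = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "last_visit": ["2020-01-05", "2020-02-11", "2020-03-20"],
})
death_index = pd.DataFrame({
    "patient_id": [102],
    "date_of_death": ["2020-04-02"],
})

# Left join: keep every hospital patient, attach a death date when one exists.
linked = emr.merge(death_index, on="patient_id", how="left")
linked["known_deceased"] = linked["date_of_death"].notna()
print(linked)
```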
An algorithm that learns from biased data will be biased, Shah says. Patients who are poor or African American have historically had worse health outcomes. If researchers train an algorithm on data that includes those biases, they get baked into the algorithms, which can then lead to a self-fulfilling prophecy. Smith and Lee say they've taken race out of their algorithms to avoid this bias.
Age is even trickier. There's no question that someone's risk of illness and death goes up with age. But an 85-year-old who breaks a hip running a marathon should probably be treated very differently than an 85-year-old who breaks a hip trying to get out of a chair in a dementia care unit. That's why the doctor can never be taken out of the equation, Shah says. Human judgment will always be required in medical care and an algorithm should never be followed blindly, he says.
Researchers are also concerned that their algorithms will be used to ration care, or that insurance companies will use their data to justify a rate increase. If an algorithm predicts a patient is going to end up back in the hospital soon, "who's benefitting from knowing a patient is going to be readmitted? Probably the insurance company," Percha says.
Still, Percha and others say, the flaws in artificial intelligence algorithms shouldn't prevent people from using them – carefully. "These are new and exciting tools that have a lot of potential uses. We need to be conscious about how to use them going forward, but it doesn't mean we shouldn't go down this road," she says. "I think the potential benefits outweigh the risks, especially because we've barely scratched the surface of what big data can do right now."
“Young Blood” Transfusions Are Not Ready For Primetime – Yet
The world of dementia research erupted into cheers last October, when news broke of the first real victory in over a decade in a clinical trial against Alzheimer's Disease.
Alzheimer's treatments have been famously difficult to develop; 99 percent of the 200-plus Alzheimer's clinical trials run since 2000 have utterly failed. Even the few slight successes have failed to produce what are called 'disease-modifying' agents that really help people with the disease. This makes the success, by the midsize Spanish pharma company Grifols, worthy of special attention.
However, the specifics of the Grifols treatment, a process called plasmapheresis, are atypical for another reason – they did not give patients a small molecule or an elaborate gene therapy, but rather simply the most common component of normal human blood plasma, a protein called albumin. A large portion of the patients' normal plasma was removed, and then a sterile solution of albumin was infused back into them to keep their overall blood volume relatively constant.
So why does replacing Alzheimer's patients' plasma with albumin seem to help their brains? One theory is that the action is direct. Alzheimer's patients have low levels of serum albumin, which is needed to clear out the plaques of amyloid that slowly build up in the brain. Supplementing those patients with extra albumin boosts their ability to clear the plaques and improves brain health. However, there is also evidence suggesting that the problem may be something present in the plasma of the sick person and pulling their plasma out and replacing it with a filler, like an albumin solution, may be what creates the purported benefit.
This scientific question is the tip of an iceberg that goes far beyond Alzheimer's Disease and albumin, to a debate that has been waged on the pages of scientific journals about the secrets of using young, healthy blood to extend youth and health.
This debate started long before the Grifols data was released, in 2014 when a group of researchers at Stanford found that by connecting the circulatory systems of a young and an old mouse, the regenerative potential of the young mouse decreased, and the old mouse became healthier. There was something either present in young blood that allowed tissues to regenerate, or something present in old blood that prevented regeneration. Whatever the biological reason, the effects in the experiment were extraordinary, providing a startling boost in health in the older mouse.
After the initial findings, multiple research groups got to work trying to identify the "active factor" of regeneration (or the inhibitor of that regeneration). They soon uncovered a variety of compounds such as insulin-like growth factor 1 (IGF1), CCL11, and GDF11, but none provided all the answers researchers were hoping for, and the field saw a number of high-profile retractions based on unsound experimental practices or inconclusive data.
Years of research later, the simplest conclusion is that the story of plasma regeneration is not simple - there isn't a switch in our blood we can flip to turn back our biological clocks. That said, these hypotheses are far from dead, and many researchers continue to explore the possibility of using the rejuvenating ability of youthful plasma to treat a variety of diseases of aging.
The data remain intriguing because of the astounding results from the conjoined circulatory system experiments. The current surge in interest in studying the biology of aging is likely to produce a new crop of interesting results in the next few years. Both CCL11 and GDF11 are being researched as potential drug targets by two startups, Alkahest and Elevian, respectively.
Without clarity on a single active factor driving rejuvenation, it's tempting to try a simpler approach: taking actual blood plasma provided by young people and infusing it into elderly subjects. This is what at least one startup company, Ambrosia, is now offering in five commercial clinics across the U.S. – for $8,000 a liter.
By using whole plasma, the idea is to sidestep our ignorance, reaping the benefits of young plasma transfusion without knowing exactly what the active factors are that make the treatment work in mice. This space has attracted established players in the plasmapheresis field – Alkahest and Grifols have teamed up to test fractions of whole plasma in Alzheimer's and Parkinson's – as well as direct-to-consumer operations like Ambrosia that just want to offer patients access to treatments without regulatory oversight.
But the bold claims of improved vigor thanks to young blood are so far unsupported by clinical evidence. We simply haven't performed trials to test whether dosing a mostly healthy person with plasma can slow down aging, at least not yet. There is some evidence that plasma replacement works in mice, yes, but those experiments are all done in very different systems than what a human receiving young plasma might experience. To date, I have not seen any plasma transfusion clinic doing young blood plasmapheresis propose a clinical trial that is anything more than a shallow advertisement for their procedures.
The efforts I have seen to perform prophylactic plasmapheresis will fail to impact societal health. Without clearly defined endpoints and proper clinical trials, we won't know whether the procedure really lowers the risk of disease or helps with conditions of aging. So even if their hypothesis is correct, the lack of strong evidence to fall back on means that the procedure will never spread beyond the fringe groups willing to take the risk. If their hypothesis is wrong, then people are paying a huge amount of money for false hope, just as they do, sadly, at the phony stem cell clinics that started popping up all through the 2000s when stem cell hype was at its peak.
The real progress in the field will be made slowly, using carefully defined products either directly isolated from blood or targeting a bloodborne factor, just as the serious pharma and biotech players are doing already.
The field will progress in stages: first creating and carefully testing treatments for well-defined diseases, and only then moving to large-scale clinical trials in relatively healthy people to look for the prevention of disease. Most of us will choose to wait for this second stage of trials before undergoing any new treatments. Until then, prophylactic plasma transfusions will be the domain of the optimistic and the gullible.