Podcast: Should Scientific Controversies Be Silenced?

The recent Joe Rogan/Spotify controversy prompts the consideration of tough questions about expertise, trust, gatekeepers, and dissent.

The "Making Sense of Science" podcast features interviews with leading medical and scientific experts about the latest developments and the big ethical and societal questions they raise. This monthly podcast is hosted by journalist Kira Peikoff, founding editor of the award-winning science outlet Leaps.org.

The recent backlash against Joe Rogan and Spotify over misinformation presented in his episode on the Covid-19 vaccines raises some difficult and important bioethical questions for society: How can people know which experts to trust? What should big tech gatekeepers do about false claims promoted on their platforms? How should the scientific establishment respond to heterodox viewpoints from experts who disagree with the consensus? When is silencing of dissent merited, and when is it problematic? Journalist Kira Peikoff asks infectious disease physician and pandemic scholar Dr. Amesh Adalja to weigh in.


Dr. Amesh Adalja, Senior Scholar, Johns Hopkins Center for Health Security and an infectious disease physician


Kira Peikoff

Kira Peikoff was the editor-in-chief of Leaps.org from 2017 to 2021. As a journalist, her work has appeared in The New York Times, Newsweek, Nautilus, Popular Mechanics, The New York Academy of Sciences, and other outlets. She is also the author of four suspense novels that explore controversial issues arising from scientific innovation: Living Proof, No Time to Die, Die Again Tomorrow, and Mother Knows Best. Peikoff holds a B.A. in Journalism from New York University and an M.S. in Bioethics from Columbia University. She lives in New Jersey with her husband and two young sons. Follow her on Twitter @KiraPeikoff.

Researchers probe extreme gene therapy for severe alcoholism

When all traditional therapeutic approaches for alcohol use disorder fail, a radical gene therapy might one day offer another option.


Story by Freethink

A single shot — a gene therapy injected into the brain — dramatically reduced alcohol consumption in monkeys that previously drank heavily. If the therapy is safe and effective in people, it might one day be a permanent treatment for alcoholism for people with no other options.

The challenge: Alcohol use disorder (AUD) means a person has trouble controlling their alcohol consumption, even when it is negatively affecting their life, job, or health.

Kristin Houser
Kristin Houser is a staff writer at Freethink, where she covers science and tech. Her written work has appeared in Business Insider, NBC News, and the World Economic Forum's Agenda, among other publications, and Stephen Colbert once discussed one of her pieces on The Late Show, to her delight. Formerly, Kristin was a staff writer for Futurism and wrote several animated and live-action web series.
Massive benefits of AI come with environmental and human costs. Can AI itself be part of the solution?

Generative AI has a large carbon footprint and other drawbacks. But AI can help mitigate its own harms—by plowing through mountains of data on extreme weather and human displacement.


The recent explosion of generative artificial intelligence tools like ChatGPT and Dall-E enabled anyone with internet access to harness AI’s power for enhanced productivity, creativity, and problem-solving. With their ever-improving capabilities and expanding user base, these tools proved useful across disciplines, from the creative to the scientific.

But beneath the technological wonders of human-like conversation and creative expression lies a dirty secret—an alarming environmental and human cost. AI has an immense carbon footprint. Systems like ChatGPT take months to train in high-powered data centers, which demand huge amounts of electricity, much of which is still generated with fossil fuels, as well as water for cooling. “One of the reasons why OpenAI needs investments [to the tune of] $10 billion from Microsoft is because they need to pay for all of that computation,” says Kentaro Toyama, a computer scientist at the University of Michigan. There is also an ecological toll from mining the rare minerals required for hardware and infrastructure; this extraction pollutes land, triggers natural disasters, and causes large-scale human displacement. Finally, for the data labeling needed to train and correct AI algorithms, the Big Data industry relies on cheap, often exploitative labor, frequently drawn from the Global South.

Payal Dhar
Payal is a writer based in New Delhi who has been covering science, technology, and society since 1998.