Your Questions Answered About Kids, Teens, and Covid Vaccines
Kira Peikoff was the editor-in-chief of Leaps.org from 2017 to 2021. As a journalist, her work has appeared in The New York Times, Newsweek, Nautilus, Popular Mechanics, The New York Academy of Sciences, and other outlets. She is also the author of four suspense novels that explore controversial issues arising from scientific innovation: Living Proof, No Time to Die, Die Again Tomorrow, and Mother Knows Best. Peikoff holds a B.A. in Journalism from New York University and an M.S. in Bioethics from Columbia University. She lives in New Jersey with her husband and two young sons. Follow her on Twitter @KiraPeikoff.
This virtual event convened leading scientific and medical experts to address the public's questions and concerns about Covid-19 vaccines in kids and teens. Highlight video below.
DATE:
Thursday, May 13th, 2021
12:30 p.m. - 1:45 p.m. EDT
SPEAKERS:
Dr. H. Dele Davies, M.D., MHCM
Senior Vice Chancellor for Academic Affairs and Dean for Graduate Studies at the University of Nebraska Medical Center (UNMC). He is an internationally recognized expert in pediatric infectious diseases and a leader in community health.
Dr. Emily Oster, Ph.D.
Professor of Economics at Brown University. She is a best-selling author and parenting guru who has pioneered a method of assessing school safety.
Dr. Tina Q. Tan, M.D.
Professor of Pediatrics at the Feinberg School of Medicine, Northwestern University. She has been involved in several vaccine survey studies that examine the awareness, acceptance, barriers and utilization of recommended preventative vaccines.
Dr. Inci Yildirim, M.D., Ph.D., M.Sc.
Associate Professor of Pediatrics (Infectious Disease); Medical Director, Transplant Infectious Diseases at Yale School of Medicine; Associate Professor of Global Health, Yale Institute for Global Health. She is an investigator for the multi-institutional COVID-19 Prevention Network's (CoVPN) Moderna mRNA-1273 clinical trial for children 6 months to 12 years of age.
About the Event Series
This event is the second of a four-part series co-hosted by Leaps.org, the Aspen Institute Science & Society Program, and the Sabin–Aspen Vaccine Science & Policy Group, with generous support from the Gordon and Betty Moore Foundation and the Howard Hughes Medical Institute.
After You Die, Your Digital Self Could Live on as a Chatbot
My wife and I visited a will-and-trust lawyer after our first son was born. Everything seemed simple and clear until the lawyer asked, without missing a beat, "So, what about your social media management?" My wife looked at me and, even though I'm more tech savvy, I felt as confused as a Luddite.
"Social media management?" I laughed, making a joke about my wife spending more time on Facebook than I do. But the lawyer's question was serious, as were the legal documents asking for our profile page links, passwords, and related information.
What do you want to happen to your Facebook, Twitter, and other social media platforms after you die? Your grandfather may have wanted his cremated ashes poured into the Ganges, or a burial in a prepaid plot. But unlike earlier generations, whose personas ended with their last breath, your bits and bytes could live on across multiple servers, holding a space for you online like a digital obelisk. Or, if you desire, your relatives can do the equivalent of a DNR: Delete account.
"It is the future of 'Get your affairs in order,'" says John Havens, Executive Director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. He remembers being pulled aside when his father was being put into the ICU and realizing that his dad wasn't going to come back.
Havens says if we are lucky enough to know that we are wrapping up our time, then we have the opportunity not just to bow out of the digital world gracefully, but to have our digital persona carry on beyond us. This persona could go beyond today's static memorial pages on Facebook and Instagram; it could be an interactive computer program designed from your specific speech patterns, memories, and personality – a chatbot.
"I could have an algorithm trained to hear what I say and how I say it," Havens told me. "You can say, 'I'm Damon and I'm going to pass in the next few months, but, you know, over the past six months, I've created a chatbot to continue our conversations. In the upcoming months, my partner or loved ones will let you know when the chatbot will take over and be involved.'"
The chatbot could become an extension of you on platforms like Messenger or WhatsApp, for example. One can imagine this becoming the next generation of care management alongside funeral services and wills and testaments. You can see the future in Eugenia Kuyda, an entrepreneur who successfully created an interactive chatbot of her late friend, Roman Mazurenko, based solely on his text messages. Her new program, Replika, may eventually put the same technology in our hands so that we can do the same with our loved ones. Expect other tech companies to follow suit.
Chatbots present us with an irresistible proposition: They are artificial intelligence programs built to have conversations with people, usually in a service capacity like canceling a shipping order or reaching the right help desk. You can view them as a modern-day helpline and, no doubt, you've interacted with chatbots when you've made purchases online. Chatbots are now becoming verbal, too, managing phone calls you make to your credit card company, local utilities, and other daily operations.
We witnessed our future this spring when Google showed off Google Duplex. It is a voice-driven system that will call people on your behalf with the intention, Google says, of managing your life. At the Google I/O conference, Google CEO Sundar Pichai showed Duplex calling a hair salon and interacting with the human receptionist – with nearly all the pauses, mmm-hmms, and colloquialisms of its human counterpart. "The amazing part is the assistant can actually understand the nuances of conversation," Pichai said to the rapt tech audience.
Recode's Kurt Wagner explained the immediate problem with the Google Duplex demo, which is the same problem technologists so often overlook: What if someone uses your technology in ways you didn't intend? "The major concern with that demo was that Google Assistant never said it was a robot or told the salon that the call was being recorded. When pressed by members of the media in the days after the demo, Google declined to comment, leading some to believe the company had simply overlooked this privacy element altogether."
"This is why disclosure will be so huge," Havens says. "When people call, they will begin with, 'Hello. I am a human.'"
This conflict between the physical and the digital is now coming to a head, though it isn't the clichéd man against machine Skynet conspiracy theories, but rather us against us. Today, it is as if we are split into two or, perhaps more accurately, two personas – our "real-life" persona and our online persona – and we're now experiencing fatigue trying to hold center.
It is a new phenomenon reflective of our social media: Media forerunners like MySpace and Friendster as well as classic websites like LiveJournal and Tumblr allowed us to explore the online world – and, in a sense, the physical world beyond our physical reach – using avatars as close to or as far from our real selves as we desired. On the Internet, nobody knows you're a dog.
Facebook truly eliminated the powerful choice of anonymity, as its extensive verification process required people to give up anonymity to participate in the biggest social network in the world. This was a willful, purposeful decision by Facebook: Founder Mark Zuckerberg has been an advocate of being yourself online, and the former Director of Market Development Randi Zuckerberg infamously said, "I think anonymity on the Internet has to go away… People behave a lot better when they have their real names down."
This was Facebook's intention and, whether or not its theory of people behaving better is true, especially in light of the 2016 U.S. Presidential election, the effects on us are real. Sex workers and other high-risk, anonymity-driven entrepreneurs are being outed via social media. The parallel rise in online addiction clinics isn't a coincidence, as the line between the physical self and the digital self has never been blurrier. There is now no real separation between IRL and online – just as there may be an increasingly blurred line between our personas before and after death.
We have Carrie Fisher starring in the next Star Wars movie, potentially winning the first truly posthumous Oscar thanks to technology that can blend older footage into newly shot scenes. A similar, more subtle turn occurred with Paul Walker in Furious 7, which used a combination of CGI and stand-ins. But a key difference is that we actually know these actors were dead before the movies were even released. As non-famous individuals, we have the ethical choice (duty?) to disclose that information to our social media followers after we die.
While we're still alive, though, chatbots represent a tempting form of convenience: A way to remove our cognitive load to an assistant that will manage our relationships. The rub is that our online relationships are our personal relationships, so we're not just potentially automating, say, our social media feed or our online postings, but our responsibilities in the real-life relationships that we've built. There is no line.
"It's naïve to think that the Google Duplex that was designed to make your hair appointments won't be used to do more difficult things like break up with a girlfriend," Havens says. "Record 50 words, use different inflections, and put in phrases like 'It's not you, it's me.' Why wouldn't people do that?"
Well, it really depends on the person. My wife and I ended up leaving the social media management section of our will blank for now. I even took a long social media sabbatical to connect with people more in person. If my online relationships and my in-person relationships are all becoming the same, then maybe it's OK to let them die – just like I will.
One Day, There Might Be a Drug for a Broken Heart
For Tony Y., 37, healing from heartbreak is slow and incomplete. Each of several exes is associated with a cluster of sore memories. Although he loves the Blue Ridge Mountains, he can't visit because they remind him of a romantic holiday years ago.
Like some 30 to 40 percent of depressed patients, Tony hasn't had success with current anti-depressants. One day, psychiatrists may be able to offer him a new kind of opioid, an anti-depressant for people suffering from the cruel pain of rejection.
A Surprising Discovery
As we move through life, rejections -- bullying in school, romantic breakups, and divorces -- are powerful triggers to depressive episodes, observes David Hsu, a neuroscientist at Stony Brook University School of Medicine in Long Island, New York. If a new drug made them less painful, he argues, it could relieve or even prevent major depression.
Our bodies naturally produce opioids to soothe physical pain, and opioid drugs like morphine and oxycodone work by plugging into the same receptors in our brains. The same natural opioids may also respond to emotional hurts, and painkillers can dramatically affect mood. Today's epidemic of opioid abuse raises the question: How many lives might have been saved if we had a safe, non-addictive option for medicating emotional pain?
Already one anti-depressant, tianeptine, locks into the mu opioid receptor, the target of morphine and oxycodone. Scientists knew that tianeptine, prescribed in some countries in Europe, Asia, and Latin America, acted differently than the most common anti-depressants in use today, which affect the levels of other brain chemicals, serotonin and norepinephrine. But the discovery in 2014 that tianeptine tapped the mu receptor was a "huge surprise," says co-author Jonathan Javitch, chief of the Division of Molecular Therapeutics at Columbia University.
The news arrived at a time when scientists' basic understanding of depression is in flux: viewed biologically, it may cover several distinct disorders, and one of them could hinge on opioids. It's possible that some people naturally release fewer opioids or that their opioid receptors are less effective.
Javitch has launched a startup, Kures, to make tianeptine more effective and convenient and to find other opioid modulators. That may seem quixotic in the midst of an opioid epidemic, but tianeptine doesn't create dependency at low, prescription doses and has been used safely around the world for decades. To identify likely patients, cofounder Andrew Kruegel is looking for ways to "segment the depressed population by measures that have to do with opioid release," he says.
Is Emotional Pain Actually "Pain"?
No one imagines that the pain from rejection or loss is the same as pain from a broken leg. But physical pain comprises two perceptions: a sensory perception and an "affective" one, which makes pain unpleasant.
The sensory perception, processed by regions of the brain called the primary and secondary somatosensory cortices and the posterior insula, tells us whether the pain is in the arm or the leg, how strong it is, and whether it stings, aches, or has some other quality. The affective perception, in another part of the brain called the dorsal anterior cingulate cortex and the anterior insula, tells us that we want the pain to stop, fast! When people with lesions in the latter areas experience a stimulus that ordinarily would be painful, they don't mind it.
Science now suggests that emotional pain arises in the affective brain circuits. Exploration of an overlap between physical and what research psychologists call "social pain" has heated up since the mid-2000s. Animal evidence goes back to the 1970s: babies separated from their mothers showed less distress when given morphine, and more if dosed with naloxone, the opioid antagonist.
Parents, of course, face the question of whether Baby feels alone or wet whenever she howls. And the answer is: both hurt. Being abandoned is the ultimate threat in our early life, and it makes sense that a brain system to monitor social threats would piggyback upon an existing system for pain. Piggybacking is a feature of evolution. An ancestor who felt "hurt" when threatened by rejection might learn adaptive behavior: to cooperate or run.
In 2010, a large multi-university team led by Nathan DeWall at the University of Kentucky reported that acetaminophen (Tylenol) reduced social pain. Undergraduates took 500 mg of acetaminophen upon awakening and at bedtime every day for three weeks and reported nightly about their day using a previously tested "Hurt Feelings Scale," rating how strongly they agreed with statements like, "Today, being teased hurt my feelings."
Over the weeks, their reports of hurt feelings steadily declined, while remaining flat in a control group that took placebos. In a second experiment, the research group showed that, compared to controls, people who had taken acetaminophen for three weeks showed less brain activity in the affective brain circuits while they experienced rejection during a virtual ball-tossing game. Later, Hsu's brain scan research supported the idea that rejection triggers the mu opioid receptor system, which normally provides pain-dampening opioids.
More evidence comes from nonhuman primates with lesions in the affective circuits: They cry less when separated from caregivers or social groups.
Heartbreak seems to lie in those regions: women with major depression are more hurt by romantic rejection than normal controls are and show more activity in those areas in brain scans, Hsu found. Also, factors that make us more vulnerable to rejection -- like low self-esteem -- are linked to more activity in the key areas, studies show.
The trait "high rejection sensitivity" increases your risk of depression more than "global neuroticism" does, Hsu observes, and predicts a poor recovery from depression. Pain sensitivity is another clue: People with a gene linked to it seem to be more hurt by social exclusion. Once you're depressed, you become more rejection-sensitive and prone to pain—a classic bad feedback loop.
Helen Mayberg, a neurologist renowned for her study of brain circuits in depression, sees, as Hsu does, the possibility of preventing depressions. "Nobody would suggest we treat routine bad social pain with drugs. But it is true that in susceptible people, losing a partner, for example, can lead to a full-blown depression," says Mayberg, who is the founding director of The Center for Advanced Circuit Therapeutics at Mount Sinai's Icahn School of Medicine in New York City. "Ideally, we'd have biomarkers to distinguish when loss becomes complicated grief and then depression, and we might prevent the transition with a drug. It would be like taking medication when you feel the warning symptoms of a headache to prevent a full-blown migraine."
A Way Out of the Opioid Crisis?
The exploration of social pain should lead us to a deeper understanding of pain, beyond the sharp distinctions between "physical" and "psychological." Finding our way out of the current crisis may require that deeper understanding. About half of the people with opioid prescriptions have mental health disorders. "I expect there are a lot of people using street opioids -- heroin or prescriptions purchased from others -- to self-medicate psychological pain," Kruegel says.
What we may need, he suggests, is "a new paradigm for using opioids in psychiatry: low, sub-analgesic, sub-euphoric dosing." But so far it hasn't been easy. Investors don't flock to fund psychiatric drugs, and in 2018, the word "opioid" is poison.
As for Tony Y., he's struggled for three years to recover from his most serious relationship. "Driving around highways looking at exit signs toward places we visited together sometimes fills me with unbearable anguish," he admits. "And because we used to do so much bird watching together, sometimes a mere glimpse of a random bird sets me off." He perks up at the idea of a heartbreak drug. "If the side effects didn't seem bad, I would consider it, absolutely."