Pregnant & Breastfeeding Women Who Get the COVID-19 Vaccine Are Protecting Their Infants, Research Suggests
Becky Cummings had multiple reasons to get vaccinated against COVID-19 while tending to her firstborn, Clark, who arrived in September 2020 at 27 weeks.
The 29-year-old intensive care unit nurse in Greensboro, North Carolina, had witnessed the devastation day in and day out as the virus took its toll on the young and old. But when she was offered the vaccine, she hesitated, skeptical of its rapid emergency use authorization.
Exclusion of pregnant and lactating mothers from clinical trials fueled her concerns. Ultimately, though, she concluded the benefits of vaccination outweighed the risks of contracting the potentially deadly virus.
"Long story short," Cummings says, in December "I got vaccinated to protect myself, my family, my patients, and the general public."
At the time, Cummings remained on the fence about breastfeeding, citing a lack of evidence to support its safety after vaccination, so she pumped and stashed breast milk in the freezer. Her son, still adjusting to life as a preemie, needs his mother's milk thickened with formula, but she's becoming comfortable with the idea of breastfeeding as more research suggests it's safe.
"If I could pop him on the boob," she says, "I would do it in a heartbeat."
Now, a study recently published in the Journal of the American Medical Association found "robust secretion" of specific antibodies in the breast milk of mothers who received a COVID-19 vaccine, indicating a potentially protective effect against infection in their infants.
Antibodies were detectable in breast milk as early as two weeks after vaccination and were still present six weeks after the second dose of the Pfizer-BioNTech vaccine.
"We believe antibody secretion into breast milk will persist for much longer than six weeks, but we first wanted to prove any secretion at all after vaccination," says Ilan Youngster, the study's corresponding author and head of pediatric infectious diseases at Shamir Medical Center in Zerifin, Israel.
That's why the research team performed a preliminary analysis at six weeks. "We are still collecting samples from participants and hope to soon be able to comment about the duration of secretion."
As with other respiratory illnesses, such as influenza and pertussis, secretion of antibodies in breast milk confers protection from infection in infants. The researchers expect a similar immune response from the COVID-19 vaccine and anticipate that the findings will spur an increase in vaccine acceptance among pregnant and lactating women.
A COVID-19 outbreak struck three families the research team followed in the study, resulting in at least one non-breastfed sibling developing symptomatic infection; however, none of the breastfed babies became ill. "This is obviously not empirical proof," Youngster acknowledges, "but still a nice anecdote."
Leaps.org inquired whether infants who derive antibodies only through breast milk are likely to have a lower immunity than infants whose mothers were vaccinated while they were in utero. In other words, is maternal transmission of antibodies stronger during pregnancy than during breastfeeding, or about the same?
"This is a different kind of transmission," Youngster explains. "When a woman is infected or vaccinated during pregnancy, some antibodies will be transferred through the placenta to the baby's bloodstream and be present for several months." But in the nursing mother, that protection occurs through local action. "We always recommend breastfeeding whenever possible, and, in this case, it might have added benefits."
A study published online in March found COVID-19 vaccination provided pregnant and lactating women with robust immune responses comparable to those experienced by their nonpregnant counterparts. The study, appearing in the American Journal of Obstetrics and Gynecology, documented the presence of vaccine-generated antibodies in umbilical cord blood and breast milk after mothers had been vaccinated.
Natali Aziz, a maternal-fetal medicine specialist at Stanford University School of Medicine, notes that it's too early to draw firm conclusions about the reduction in COVID-19 infection rates among newborns of vaccinated mothers. Citing the two aforementioned studies, she says it's biologically plausible that antibodies passed through the placenta and breast milk impart protective benefits. And while thousands of pregnant and lactating women have been vaccinated against COVID-19 without adverse outcomes, many still wonder whether it's safe to breastfeed afterward.
It's important to bear in mind that pregnant women may develop more severe COVID-19 complications, which could lead to intubation or admittance to the intensive care unit. "We, in our practice, are supporting pregnant and breastfeeding patients to be vaccinated," says Aziz, who is also director of perinatal infectious diseases at Stanford Children's Health, which has been vaccinating new mothers and other hospitalized patients at discharge since late April.
Earlier in April, Huntington Hospital on Long Island, New York, began offering the COVID-19 vaccine to women after they gave birth. The hospital chose the one-shot Johnson & Johnson vaccine for postpartum patients so they wouldn't need to return for a second shot while acclimating to life with a newborn, says Mitchell Kramer, its chairman of obstetrics and gynecology.
The hospital suspended the program when the Food and Drug Administration and the Centers for Disease Control and Prevention paused use of the J&J vaccine on April 13 while they investigated several reports of dangerous blood clots and low platelet counts among the more than 7 million people in the United States who had received it.
In lifting the pause on April 23, the agencies announced that the vaccine's fact sheets would bear a warning of the heightened risk for a rare but serious blood clot disorder among women under age 50. As a result, Kramer says, "we will likely not be using the J&J vaccine for our postpartum population."
So, would it make sense to vaccinate infants, not just their mothers, when a vaccine for them eventually becomes available? "In general, most of the time, infants do not have as good of an immune response to vaccines," says Jonathan Temte, associate dean for public health and community engagement at the University of Wisconsin School of Medicine and Public Health in Madison.
"Many of our vaccines are held until children are six months of age. For example, the influenza vaccine starts at age six months, the measles vaccine typically starts one year of age, as do rubella and mumps. Immune response is typically not very good for viral illnesses in young infants under the age of six months."
So far, the FDA has granted emergency use authorization of the Pfizer-BioNTech vaccine for children as young as 16. The agency is considering data from Pfizer to lower that age limit to 12, and studies are also underway in children under 12. Meanwhile, the full data from Moderna on 12- to 17-year-olds and from Pfizer on 12- to 15-year-olds have not yet been made public. (Pfizer announced at the end of March that its vaccine is 100 percent effective in preventing COVID-19 in the latter age group, and FDA authorization for this population is expected soon.)
"There will be step-wise progression to younger children, with infants and toddlers being the last ones tested," says James Campbell, a pediatric infectious diseases physician and head of maternal and child clinical studies at the University of Maryland School of Medicine Center for Vaccine Development.
"Once the data are analyzed for safety, tolerability, optimal dose and regimen, and immune responses," he adds, "they could be authorized and recommended and made available to American children." The data on younger children are not expected until the end of this year, with regulatory authorization possible in early 2022.
For now, Vonnie Cesar, a family nurse practitioner in Smyrna, Georgia, is aiming to persuade expectant and new mothers to get vaccinated. She has observed that patients in metro Atlanta seem more inclined to accept the vaccine than their rural counterparts.
To quell some of their skepticism and fears, Cesar, who also teaches nursing students, conceived a visual way to demonstrate the novel mechanism behind the COVID-19 vaccine technology. Holding a palm-size physical therapy ball outfitted with clear push pins, she simulates the spike proteins of the coronavirus. Slime slathered into the gaps permeates the areas around the spikes, a process similar to how our antibodies build immunity to the virus.
These conversations often lead hesitant patients to discuss vaccination with their husbands or partners. "The majority of people I'm speaking with," she says, "are coming to the conclusion that this is the right thing for me, this is the common good, and they want to make sure that they're here for their children."
CORRECTION: An earlier version of this article mistakenly stated that the COVID-19 vaccines were granted emergency "approval." They have been granted emergency use authorization, not full FDA approval. We regret the error.
Sloppy Science Happens More Than You Think
The media loves to tout scientific breakthroughs, and few are as toutable – and in turn, have been as touted – as CRISPR. This method of targeted DNA excision was discovered in bacteria, which use it as an adaptive immune system to combat reinfection with a previously encountered virus.
It is cool on so many levels: not only is the basic function fascinating, reminding us that we still have more to discover about even simple organisms that we thought we knew so well, but the ability it grants us to remove and replace any DNA of interest has almost limitless applications in both the lab and the clinic. As if that didn't make it sexy enough, add in a bicoastal, male-female, very public and relatively ugly patent battle, and the CRISPR story is irresistible.
And then last summer, a bombshell dropped. The prestigious journal Nature Methods published a paper in which the authors claimed that CRISPR could cause many unintended mutations, rendering it unfit for clinical use. Havoc duly ensued; stocks in CRISPR-based companies plummeted. Thankfully, the authors of the offending paper were responsible, good scientists; they reassessed, then recanted. Their attention- and headline-grabbing results were wrong, and they admitted as much, leading Nature Methods to formally retract the paper this spring.
How did this happen? Shouldn't the editors at a Nature journal know better than to have published this in the first place?
Alas, high-profile scientific journals publish misleading and downright false results fairly regularly. Some errors are unavoidable – that's how the scientific method works. Hypotheses and conclusions will invariably be overturned as new data becomes available and new technologies are developed that allow for deeper and deeper studies. That's supposed to happen. But that's not what we're talking about here. Nor are we talking about obvious offenses like outright plagiarism. We're talking about mistakes that are avoidable, and that still have serious ramifications.
Two parties are responsible for a scientific publication, and thus two parties bear the blame when things go awry: the scientists who perform and submit the work, and the journals who publish it. Unfortunately, both are incentivized for speedy and flashy publications, and not necessarily for correct publications. It is hardly a surprise, then, that we end up with papers that are speedy and flashy – and not necessarily correct.
"Scientists don't lie and submit falsified data," said Andy Koff, a professor of Molecular Biology at Sloan Kettering Institute, the basic research arm of Memorial Sloan Kettering Cancer Center. Richard Harris, who wrote the book on scientific misconduct running the gamut from unconscious bias and ignorance to more malicious fraudulence, largely concurs (full disclosure: I reviewed the book here). "Scientists want to do good science and want to be recognized as such," he said. But even so, the cultures of both industry and academia promote research that is poorly designed and even more poorly analyzed. In Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Millions, Harris describes how scientists must constantly publish in order to maintain their reputations and positions, to get grants and tenure and students. "They are disincentivized from doing that last extra experiment to prove their results," he said; it could prove too risky if it could cost them a publication.
Ivan Oransky and Adam Marcus founded Retraction Watch, a blog that tracks the retraction of scientific papers, in 2010. Oransky pointed out that blinded peer review – the pride and joy of the scientific publishing enterprise – is a large part of the problem. "Pre-publication peer review is still important, but we can't treat it like the only check on the system. Papers are being reviewed by non-experts, and reviewers are asked to review papers only tangentially related to their field. Moreover, most peer reviewers don't look at the underlying or raw data, even when it is available. How then can they tell if the analysis is flawed or the data is accurate?" he wondered.
Koff agreed that anonymous peer review is valuable, but severely flawed. "Blinded review forces a collective view of importance," he said. "If an article disagrees with the reviewer's worldview, the article gets rejected or forced to adhere to that worldview – even if that means pushing the data someplace it shouldn't necessarily go." We have lost the scientific principle behind review, he thinks, which was to critically analyze a paper. But instead of challenging fundamental assumptions within a paper, reviewers now tend to just ask for more and more supplementary data. And don't get him started on editors. "Editors are supposed to arbitrate between reviewers and writers and they have completely abdicated this responsibility, at every journal. They do not judge, and that's a real failing."
Harris laments the wasted time, effort, and resources that result when erroneous ideas take hold in a field, not to mention lives lost when drug discovery is predicated on basic science findings that end up being wrong. "When no one takes the time, care, and money to reproduce things, science isn't stopping – but it is slowing down," he noted. Mistaken publications also erode the public's opinion of legitimate science, which is problematic since that opinion isn't especially high to begin with.
Scientists and publishers don't only cause the problem, though – they may also provide the solution. Both camps are increasingly recognizing and dealing with the crisis. The self-proclaimed "data thugs" Nick Brown and James Heathers use pretty basic arithmetic to reveal statistical errors in papers. The microbiologist Elisabeth Bik scans the scientific literature for problematic images "in her free time." The psychologist Brian Nosek founded the Center for Open Science, a non-profit organization dedicated to promoting openness, integrity, and reproducibility in scientific research. The Nature family of journals – yes, the one responsible for the latest CRISPR fiasco – has its authors complete a checklist to combat irreproducibility, à la Atul Gawande. And Nature Communications, among other journals, uses transparent peer review, in which authors can opt to have the reviews of their manuscript published anonymously alongside the completed paper. This practice "shows people how the paper evolved," said Koff, "and keeps the reviewer and editor accountable. Did the reviewer identify the major problems with the paper? Because there are always major problems with a paper."
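To give a flavor of that "basic arithmetic": Brown and Heathers' best-known check is the GRIM test, which asks whether a reported mean is even arithmetically possible given the sample size. The sketch below, in Python, is a simplified illustration of the idea, not their actual code.

```python
# GRIM test sketch: a mean of n integer-valued responses (e.g., Likert
# ratings), reported to two decimals, must be consistent with SOME
# integer total. If no integer sum rounds to the reported mean, the
# reported number cannot be correct.
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    tolerance = 0.5 / 10 ** decimals   # rounding slack at the reported precision
    total = round(reported_mean * n)   # nearest candidate integer sum
    return abs(total / n - reported_mean) <= tolerance

print(grim_consistent(5.19, 28))  # False: no 28 integers can average to 5.19
print(grim_consistent(5.18, 28))  # True: 145 / 28 = 5.179, which rounds to 5.18
```

A check this simple has flagged impossible statistics in published papers, which is exactly the point: some errors require nothing more than a calculator to catch.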
Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it through social media: your Facebook account, your Twitter feed, or perhaps a Google search. AI influences all of them, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use; the general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made large neural networks practical. These networks, loosely modeled on the human brain, are layers of interconnected computing units that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
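To make that kind of "learning" concrete, here is a minimal sketch in Python using the open-source scikit-learn library, with its built-in handwritten-digit images standing in for the cat photos. It's a toy illustration of training on labeled examples, not anything built by the researchers quoted here.

```python
# Supervised learning in miniature: the model never "knows" what a digit
# is; it only learns to map pixel patterns to whatever labels it is given,
# just as a cat classifier learns from images tagged "cat".
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small neural network "learns" by adjusting its weights to fit the labels.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```

The model's only notion of a "7" is the label attached to the pixels; swap the labels and it will happily learn the new names, which is exactly the limitation described a few paragraphs below.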
"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact as iPhones use AI to detect cancer, Apple Watches alert wearers to heart problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates new possibilities. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Breakthrough That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images "dogs," the AI wouldn't say "no, that's a cat." It would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the common-sense inference that humans make effortlessly, almost without thinking.
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means the AI isn't answering questions a programmer asks it ("Is this a cat?"). Instead, it looks at the data it has, comes up with its own questions and hypotheses, and answers them or puts them to the test. Combining this ability with the frankly insane processing power of modern computer systems could result in game-changing discoveries.
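As a small illustration of the difference, the sketch below hands the same digit images from the earlier example to a clustering algorithm with the labels withheld. Clustering is only the simplest form of unsupervised learning, far short of the hypothesis-generating systems Pauwels describes, but it shows a model organizing data with no answers supplied.

```python
# Unsupervised learning in miniature: k-means never sees a label, yet it
# groups similar-looking digit images together on its own.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

digits = load_digits()
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
clusters = kmeans.fit_predict(digits.data)  # note: no labels passed in

# The model invents its own grouping; "cluster 3" is not the digit 3,
# just a pattern the algorithm found by itself.
print(clusters[:20])
```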
One company in China plans to create a digital avatar of an individual person and simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on the avatar, watch which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life person prevent those conditions or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in its potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a real concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We tend to think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people have explicit or implicit biases. Intentionally or, more likely, not, they introduce those biases into the very code that forms the basis of the AI. Current systems, facial recognition among them, show bias against people of color. Facebook tried to rectify the situation and failed. These are two small examples of a larger, potentially systemic problem.
It's vital and necessary for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.