Who Qualifies as an “Expert” And How Can We Decide Who Is Trustworthy?
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
Expertise is a slippery concept. Who has it, who claims it, and who attributes or yields it to whom is a culturally specific, sociological process. During the COVID-19 pandemic, we have witnessed a remarkable emergence of legitimate and not-so-legitimate scientists publicly claiming, or being credited with, academic expertise in precisely my field: infectious disease epidemiology. From any vantage point, it is clear that charlatans abound, garnering TV coverage and hundreds of thousands of Twitter followers with loud opinions despite flimsy credentials. What is more interesting, as an insider, is the gradient of expertise beyond these obvious fakers.
A person's expertise is not a fixed attribute; it is a hierarchical trait defined relative to others. Despite my protestations, I am the go-to expert on every aspect of the pandemic to my family. To a reporter, I might do my best to answer a question about the immune response to SARS-CoV-2, noting that I'm not an immunologist. Among other academic scientists, my expertise is more well-defined as a subfield of epidemiology, and within that as a particular area within infectious disease epidemiology. There's a fractal quality to it; as you zoom in on a particular subject, a differentiation of expertise emerges among scientists who, from farther out, appear to be interchangeable.
We all have our scientific domain and are less knowledgeable outside it, of course, and we are often asked to comment on a broad range of topics. But many scientists without a track record in the field have become favorites among university administrators, senior faculty in unrelated fields, policymakers, and science journalists, using institutional prestige or social connections to promote themselves. This phenomenon leads to a distorted representation of science—and of academic scientists—in the public realm.
Predictably, white male voices have been disproportionately amplified, and men are certainly over-represented in the category of those who use their connections to inappropriately claim expertise. Generally speaking, we are missing women, racial minorities, and global perspectives. This is not only important because it misrepresents who scientists are and reinforces outdated stereotypes that place white men in the Global North at the top of a credibility hierarchy. It also matters because it can promote bad science, and it passes over scientists who can lend nuance to the scientific discourse and give global perspectives on this quintessentially global crisis.
Also at work, in my opinion, are two biases within academia: the conflation of institutional prestige with individual expertise, and the bizarre hierarchy among scientists that attributes greater credibility to those in quantitative fields like physics. Regardless of mathematical expertise or institutional affiliation, a lack of experience working with epidemiological data can lead to over-confidence in the deceptively simple mathematical models that we use to understand epidemics, as well as to the inappropriate use of uncertain data to inform them. During the COVID-19 pandemic, prominent and vocal scientists from other quantitative fields have misapplied the methods of infectious disease epidemiology, creating enormous confusion among policymakers and the public. Early forecasts that predicted the epidemic would be over by now, for example, led to a sense that epidemiological models were all unreliable.
Meanwhile, legitimate scientific uncertainties and differences of opinion, as well as fundamentally different epidemic dynamics arising in diverse global contexts and in different demographic groups, appear in the press as an indistinguishable part of this general chaos. This leads many people to question whether the field has anything worthwhile to contribute, and muddies the facts about COVID-19 policies for reducing transmission that most experts agree on, like wearing masks and avoiding large indoor gatherings.
So how do we distinguish an expert from a charlatan? A willingness to say "I don't know" and to openly describe the uncertainties, nuances, and limitations of science is, I believe, a good sign. Thoughtful engagement with questions and new ideas is another indication of expertise, as opposed to arrogant bluster or a bullish insistence on a particular policy strategy regardless of context (which is almost always an attempt to hide a shallow understanding). Trustworthy experts will direct you to others in their field who know more about particular topics, and will tend to be honest about what is and what isn't "in their lane." Some expertise is quite specific to a given subfield: epidemiologists who study non-infectious conditions or nutrition, for example, use different methods from those of infectious disease experts, because they generally don't need to account for the exponential growth that is inherent to a contagion process.
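To make the contrast concrete, here is a minimal sketch, in Python, of the kind of simple compartmental model referred to above. The transmission and recovery rates are arbitrary placeholders, not estimates for any real outbreak; the point is that the infectious fraction compounds on itself early on, which is precisely the exponential dynamic that non-contagious conditions don't require.

```python
# A minimal SIR (Susceptible-Infectious-Recovered) sketch of an epidemic.
# Parameter values are arbitrary placeholders for illustration only.

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the compartments one time step with simple Euler integration."""
    new_infections = beta * s * i * dt   # contagion term: scales with s * i
    new_recoveries = gamma * i * dt
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

s, i, r = 0.999, 0.001, 0.0      # fractions of the population in each compartment
beta, gamma = 0.3, 0.1           # assumed transmission and recovery rates per day

infectious = []
for day in range(200):
    s, i, r = sir_step(s, i, r, beta, gamma)
    infectious.append(i)

# Early on, new infections grow roughly exponentially; small changes to the
# uncertain inputs beta and gamma produce very different trajectories, which
# is one reason naive fits to noisy early data can mislead.
print(f"Peak infectious fraction: {max(infectious):.3f}")
```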
Academic scientists have a specific, technical contribution to make in containing the COVID-19 pandemic and in communicating research findings as they emerge. But the liminal space between scientists and the public is subject to the same undercurrents of sexism, racism, and opportunism that society and the academy have always suffered from. Although none of the proxies for expertise described above are fool-proof, they are at least indicative of integrity and humility—two traits the world is in dire need of at this moment in history.
Sarah Mancoll was 22 years old when she noticed a bald spot on the back of her head. A dermatologist confirmed that it was alopecia areata, an autoimmune disorder that causes hair loss.
She successfully treated the condition with corticosteroid shots for nearly 10 years. Then Mancoll and her husband began thinking about starting a family. Would the shots be safe for her while pregnant? For the fetus? What about breastfeeding?
Mancoll consulted her primary care physician, her dermatologist, even a pediatrician. Without clinical data, no one could give her a definitive answer, so she stopped treatment to be "on the safe side." By the time her son was born, she'd lost at least half her hair. She returned to her Washington, D.C., public policy job two months later entirely bald—and without either eyebrows or eyelashes.
After having two more children in quick succession, Mancoll recently resumed the shots but didn't forget her experience. Today, she is an advocate for including more pregnant and lactating women in clinical studies so they can have more information about therapies than she did.
"I live a very privileged life, and I'll do just fine with or without hair, but it's not just about me," Mancoll said. "It's about a huge population of women who are being disenfranchised…They're invisible."
About 4 million women give birth each year in the United States, and many face medical conditions, from hypertension and diabetes to psychiatric disorders. A 2011 study found that, between 1976 and 2008, most women reported taking at least one medication while pregnant. But for decades, pregnant and lactating women have been largely excluded from the clinical drug studies that rigorously test medications for safety and effectiveness.
An estimated 98 percent of government-approved drug treatments between 2000 and 2010 had insufficient data to determine risk to the fetus, and close to 75 percent had no human pregnancy data at all. All told, of 213 new pharmaceuticals approved from 2003 to 2012, only five percent included any data from pregnant women.
But recent developments suggest that could be changing. Amid widespread concerns about increased maternal mortality rates, women's health advocates, physicians, and researchers are sensing and encouraging a cultural shift toward protecting women through responsible research instead of from research.
"The question is not whether to do research with pregnant women, but how," Anne Drapkin Lyerly, professor and associate director of the Center for Bioethics at the University of North Carolina at Chapel Hill, wrote last year in an op-ed. "These advances are essential. It is well past time—and it is morally imperative—for research to benefit pregnant women."
"In excluding pregnant women from drug trials to protect them from experimentation, we subject them to uncontrolled experimentation."
To that end, the American College of Obstetricians and Gynecologists' Committee on Ethics acknowledged that research trials need to be better designed so they don't "inappropriately constrain the reproductive choices of study participants or unnecessarily exclude pregnant women." A federal task force also called for significantly expanded research and the removal of regulatory barriers that make it difficult for pregnant and lactating women to participate in research.
Several months ago, a government change to a regulation known as the Common Rule took effect, removing pregnant women from the list of "vulnerable populations" in need of special protections, a designation that had made it more difficult to enroll them in clinical drug studies. And just last week, the U.S. Food and Drug Administration (FDA) issued new draft guidances for industry on when and how to include pregnant and lactating women in clinical trials.
Inclusion is better than the absence of data on their treatment, said Catherine Spong, former chair of the federal task force.
"It's a paradox," said Spong, professor of obstetrics and gynecology and chief of maternal fetal medicine at University of Texas Southwestern Medical Center. "There is a desire to protect women and fetuses from harm, which is translated to a reluctance to include them in research. By excluding them, the evidence for their care is limited."
Jacqueline Wolf, a professor of the history of medicine at Ohio University, agreed.
"In excluding pregnant women from drug trials to protect them from experimentation, we subject them to uncontrolled experimentation," she said. "We give them the medication without doing any research, and that's dangerous."
Women, of course, don't stop getting sick or having chronic medical conditions just because they are pregnant or breastfeeding, and conditions during pregnancy can affect a baby's health later in life. Evidence-based data is important for other reasons, too.
Pregnancy can dramatically change a woman's physiology, affecting how drugs act on her body and how her body acts on, or reacts to, drugs. For instance, pregnant bodies can clear some medications more quickly, such as glyburide, which is used to stabilize the high blood-sugar levels of diabetes in pregnancy; uncontrolled blood sugar can be toxic to the fetus and harmful to the woman. That means a regular dose of the drug may not be enough to control blood sugar and prevent poor outcomes.
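One way to see why faster clearance matters is the textbook steady-state dosing relationship from pharmacokinetics; this is a general approximation, not a statement about glyburide dosing in particular:

$$\bar{C}_{ss} = \frac{F \cdot D}{CL \cdot \tau}$$

Here $F$ is the fraction of the dose absorbed, $D$ the dose, $\tau$ the dosing interval, and $CL$ the clearance. If pregnancy raises $CL$ while $D$ and $\tau$ stay the same, the average drug concentration $\bar{C}_{ss}$ falls, which is why the usual dose may no longer keep blood sugar under control.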
Pregnant patients also may be reluctant to take needed drugs for underlying conditions (and doctors may be hesitant to prescribe them), which in turn can harm the woman and fetus more than treatment would. For example, women who have severe asthma attacks while pregnant are at a higher risk of having low-birthweight babies, and pregnant women with uncontrolled diabetes in early pregnancy have more than four times the risk of birth defects.
For Kate O'Brien, taking medication during her pregnancy was a matter of life and death. A freelance video producer who lives in New Jersey, O'Brien was diagnosed with tuberculosis in 2015 after she became pregnant with her second child, a boy. Even as she signed hospital consent forms, she had no idea if the treatment would harm him.
"It's a really awful experience," said O'Brien, who now is active with We are TB, an advocacy and support network. "All they had to tell me about the medication was just that women have been taking it for a really long time all over the world. That was the best they could do."
More and more doctors, researchers, women's health organizations, and advocates are calling that unacceptable.
By indicating that filling current knowledge gaps is "a critical public health need," the FDA is signaling its support for advancing research with pregnant women, said Lyerly, also co-founder of the Second Wave Initiative, which promotes fair representation of the health interests of pregnant women in biomedical research and policies. "It's a very important shift."
Research with pregnant women can be done ethically, Lyerly said, whether by systematically collecting data from those already taking medications or enrolling pregnant women in studies of drugs or vaccines in development.
Current clinical trials involving pregnant women are assessing treatments for obstructive sleep apnea, postpartum hemorrhage, lupus, and diabetes. Notable trials in development target malaria and HIV prevention in pregnancy.
"It clearly is doable to do this research, and test trials are important to provide evidence for treatment," Spong said. "If we don't have that evidence, we aren't making the best educated decisions for women."
The news last November that a rogue Chinese scientist had genetically altered the embryos that became a pair of Chinese twins shocked the world. But although this use of advanced technology to change the human gene pool was premature, it was a harbinger of how genetic science will alter our healthcare, the way we make babies, the nature of the babies we make, and, ultimately, our sense of who and what we are as a species.
But while the genetics revolution has already begun, we aren't prepared to handle these Promethean technologies responsibly.
By identifying the structure of DNA in the 1950s, Watson, Crick, Wilkins, and Franklin showed that the book of life was written in the DNA double helix. When the Human Genome Project was completed in 2003, we saw how this book of human life could be transcribed. Painstaking research paired with advanced computational algorithms then showed what increasing numbers of genes do and how the genetic book of life can be read.
Now, with the advent of precision gene-editing tools like CRISPR, we are seeing that the book of life, and all of biology, can be rewritten. Biology is being recognized as another form of readable, writable, and hackable information technology, with us humans as the coders.
The impact of this transformation is being felt first in our healthcare. Gene therapies, including those that extract a person's own cells, re-engineer them into cancer-fighting supercells, and reintroduce them, are already performing miracles in clinical trials. Thousands of applications have already been submitted to regulators across the globe for trials using gene therapies to address a host of other diseases.
Recently, the first gene editing of cells inside a person's body was deployed to treat Hunter syndrome, a metabolic disorder with a relatively simple genetic cause, with many more applications to come. These new approaches are only the very first steps in our shift from today's generalized medicine, based on population averages, to precision medicine, based on each patient's individual biology, and ultimately to predictive medicine, based on AI-generated estimates of a person's future health.
This shift in our healthcare will ensure that millions and then billions of people will have their genomes sequenced as the foundation of their treatment. Big data analytics will then be used to compare at scale people's genotypes (what their genes say) to their phenotypes (how those genes are expressed over the course of their lives).
These massive datasets of genetic and life information will then make it possible to go far beyond the simple genetic analysis of today and to understand far more complex human diseases and traits influenced by hundreds or thousands of genes. Our understanding of this complex genetic system within the vaster ecosystem of our bodies and the environment around us will transform healthcare for the better and help us cure terrible diseases that have plagued our ancestors for millennia.
But as revolutionary as this change will be for medicine, the healthcare applications of the genetics revolution are merely stations along the way to the ultimate destination: a deep and fundamental transformation of our evolutionary trajectory as a species.
A first inkling of where we are heading can be seen in the direct-to-consumer genetic testing industry. Many people around the world have now sent their cheek swabs to companies like 23andMe for analysis. The information that comes back can tell people a lot about relatively simple genetic traits like carrier status for single gene mutation diseases, eye color, or whether they hate the taste of cilantro, but the information about complex traits like athletic predisposition, intelligence, or personality style today being shared by some of these companies is wildly misleading.
This will not always be the case. As the genetic and health data pools grow, analysis of large numbers of sequenced genomes will make it possible to apply big data analytics to predict some very complex genetic disease risks and the genetic components of traits like height, IQ, temperament, and personality style with increasing accuracy. This process, called "polygenic scoring," is already being offered in beta stage by a few companies and will become an ever bigger part of our lives going forward.
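As a rough illustration of the arithmetic behind such a score, the Python sketch below sums hypothetical per-variant effect sizes weighted by how many copies of each risk allele a person carries. The variant names and effect sizes are invented placeholders; real polygenic scores combine effect estimates for thousands to millions of variants drawn from large genome-wide association studies.

```python
# Minimal sketch of a polygenic score: a weighted sum of risk-allele counts.
# Variant names and effect sizes below are hypothetical placeholders.

effect_sizes = {          # estimated effect per copy of the risk allele
    "variant_1": 0.12,
    "variant_2": -0.05,
    "variant_3": 0.30,
}

genotype = {              # copies of the risk allele (0, 1, or 2) carried
    "variant_1": 2,
    "variant_2": 1,
    "variant_3": 0,
}

def polygenic_score(genotype, effect_sizes):
    """Sum each variant's effect size times the number of risk alleles carried."""
    return sum(effect_sizes[v] * copies for v, copies in genotype.items())

print(f"Polygenic score: {polygenic_score(genotype, effect_sizes):.2f}")
```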
The most profound application of all this will be in our baby-making. Before making a decision about which of the fertilized eggs to implant, women undergoing in vitro fertilization can today elect to have a small number of cells extracted from their pre-implanted embryos and sequenced. With current technology, this can be used to screen for single-gene mutation diseases and other relatively simple disorders. Polygenic scoring, however, will soon make it possible to screen these early stage pre-implanted embryos to assess their risk of complex genetic diseases and even to make predictions about the heritable parts of complex human traits. The most intimate elements of being human will start feeling like high-pressure choices needing to be made by parents.
Adult stem cell technologies will then likely make it possible to generate hundreds or thousands of a woman's own eggs from her blood sample or skin graft. This would blow open the doors of reproductive possibility and allow parents to choose embryos with exceptional potential capabilities from a much larger set of options.
The complexity of human biology will place some limits on the extent of possible gene edits that might be made to these embryos, but all of biology, including our own, is extremely flexible. How else could all the diversity of life have emerged from a single cell nearly four billion years ago? The limit of our imagination will become the most significant barrier to our recasting biology.
But while we humans are gaining the powers of the gods, we aren't at all ready to use them.
The same tools that will help cure our worst afflictions, save our children, and help us live longer, healthier, more robust lives will also open the door to potential abuses. Prospective parents with the best of intentions, or governments with lax regulatory structures or aggressive ideas about how population-wide genetic engineering might enhance national competitiveness or achieve some other goal, could propel us into a genetic arms race. Such a race could undermine our essential diversity, dangerously divide societies, lead to destabilizing and potentially even deadly conflicts between us, and threaten our very humanity.
But while the advance of genetic technologies is inevitable, how it plays out is anything but. If we don't want the genetic revolution to undermine our species or lead to grave conflicts between genetic haves and have-nots, or between societies opting in and those opting out, now is the time to make smart decisions based on our individual and collective best values. Although the technology driving the genetic revolution is new, the value systems we will need to optimize the benefits and minimize the harms of this massive transformation are ones we have been developing for thousands of years.
And while some very smart and well-intentioned scientists have been meeting to explore what comes next, it won't be enough for a few of even our wisest prophets to make decisions about the future of our species that will impact everyone. We'll also need smart regulations on both the national and international levels.
Every country will need to have its own regulatory guidelines for human genetic engineering based on both international best practices and the country's unique traditions and values. Because we are all one species, however, we will also ultimately need to develop guidelines that can apply to all of us.
As a first step toward making this possible, we must urgently launch a global, species-wide education effort and inclusive dialogue on the future of human genetic engineering that can eventually inform global norms that will need to underpin international regulations. This process will not be easy, but the alternative of an unregulated genetic arms race would be far worse.
The overlapping genomics and AI revolutions may seem like distant science fiction but are closer than you think. Far sooner than most people recognize, the inherent benefits of these technologies and competition between us will spark rapid adoption. Before that spark ignites, we have a brief moment to come together as a species like we never have before to articulate and translate into action the future we jointly envision. The north star of our best shared values can help us navigate the almost unimaginable opportunities and very real challenges that lie ahead.