Who Qualifies as an “Expert,” and How Can We Decide Who Is Trustworthy?
This article is part of the magazine, "The Future of Science In America: The Election Issue," co-published by LeapsMag, the Aspen Institute Science & Society Program, and GOOD.
Expertise is a slippery concept. Who has it, who claims it, and who attributes or cedes it to whom is a culturally specific, sociological process. During the COVID-19 pandemic, we have witnessed a remarkable emergence of legitimate and not-so-legitimate scientists publicly claiming, or being credited with, academic expertise in precisely my field: infectious disease epidemiology. From any vantage point, it is clear that charlatans abound, garnering TV coverage and hundreds of thousands of Twitter followers on the strength of loud opinions despite flimsy credentials. What is more interesting, as an insider, is the gradient of expertise beyond these obvious fakers.
A person's expertise is not a fixed attribute; it is a hierarchical trait defined relative to others. Despite my protestations, I am the go-to expert on every aspect of the pandemic to my family. To a reporter, I might do my best to answer a question about the immune response to SARS-CoV-2, noting that I'm not an immunologist. Among other academic scientists, my expertise is more precisely defined: a subfield of epidemiology and, within that, a particular area of infectious disease epidemiology. There's a fractal quality to it; as you zoom in on a particular subject, a differentiation of expertise emerges among scientists who, from farther out, appear to be interchangeable.
We all have our scientific domain and are less knowledgeable outside it, of course, and we are often asked to comment on a broad range of topics. But many scientists without a track record in the field have become favorites among university administrators, senior faculty in unrelated fields, policymakers, and science journalists, using institutional prestige or social connections to promote themselves. This phenomenon leads to a distorted representation of science—and of academic scientists—in the public realm.
Predictably, white male voices have been disproportionately amplified, and men are certainly over-represented in the category of those who use their connections to inappropriately claim expertise. Generally speaking, we are missing women, racial minorities, and global perspectives. This is not only important because it misrepresents who scientists are and reinforces outdated stereotypes that place white men in the Global North at the top of a credibility hierarchy. It also matters because it can promote bad science, and it passes over scientists who can lend nuance to the scientific discourse and give global perspectives on this quintessentially global crisis.
Also at work, in my opinion, are two biases within academia: the conflation of institutional prestige with individual expertise, and the bizarre hierarchy among scientists that attributes greater credibility to those in quantitative fields like physics. Regardless of mathematical skill or institutional affiliation, a lack of experience working with epidemiological data can lead to over-confidence in the deceptively simple mathematical models that we use to understand epidemics, as well as to the inappropriate use of uncertain data to inform them. During the COVID-19 pandemic, prominent and vocal scientists from other quantitative fields have misapplied the methods of infectious disease epidemiology, creating enormous confusion among policymakers and the public. Early forecasts that predicted the epidemic would be over by now, for example, fed a sense that epidemiological models are all unreliable.
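To see why these models are deceptively simple, consider the textbook SIR model (my choice of illustration here, not one the essay names): the whole thing fits in a dozen lines of Python, yet its forecasts swing wildly with small changes in two parameters that must be estimated from noisy early data.

```python
# A minimal SIR ("susceptible-infected-recovered") epidemic model.
# Purely illustrative -- not any specific group's COVID-19 model.

def simulate_sir(beta, gamma, days, population=1_000_000, infected0=100):
    """Track infections day by day; beta = transmission rate, gamma = recovery rate."""
    s, i, r = population - infected0, infected0, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population  # grows ~exponentially early on
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Two parameter choices that look similar but differ enormously in outcome:
print(f"Peak infected at R0=1.25: {simulate_sir(0.25, 0.2, 365):,.0f}")
print(f"Peak infected at R0=2.50: {simulate_sir(0.50, 0.2, 365):,.0f}")
```

The arithmetic is trivial; the expertise lies in estimating the transmission and recovery rates from messy surveillance data. Because a contagion grows exponentially, a small error in those inputs compounds into a forecast that misses by orders of magnitude.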
Meanwhile, legitimate scientific uncertainties and differences of opinion, as well as fundamentally different epidemic dynamics arising in diverse global contexts and in different demographic groups, appear in the press as an indistinguishable part of this general chaos. This leads many people to question whether the field has anything worthwhile to contribute, and muddies the facts about COVID-19 policies for reducing transmission that most experts agree on, like wearing masks and avoiding large indoor gatherings.
So how do we distinguish an expert from a charlatan? I believe a willingness to say "I don't know" and to openly describe the uncertainties, nuances, and limitations of science is a good sign. Thoughtful engagement with questions and new ideas is another indication of expertise, as opposed to arrogant bluster or a bullish insistence on a particular policy strategy regardless of context (which is almost always an attempt to hide a shallow understanding). Trustworthy experts will direct you to others in their field who know more about particular topics, and will tend to be honest about what is and what isn't "in their lane." Some expertise is quite specific to a given subfield: epidemiologists who study non-infectious conditions or nutrition, for example, use different methods from those of infectious disease experts, because they generally don't need to account for the exponential growth that is inherent to a contagion process.
Academic scientists have a specific, technical contribution to make in containing the COVID-19 pandemic and in communicating research findings as they emerge. But the liminal space between scientists and the public is subject to the same undercurrents of sexism, racism, and opportunism that society and the academy have always suffered from. Although none of the proxies for expertise described above are fool-proof, they are at least indicative of integrity and humility—two traits the world is in dire need of at this moment in history.
[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "How should DNA tests for intelligence be used, if at all, by parents and educators?"]
It's 2019. Prenatal genetic tests are being used to help parents select healthy eggs over diseased ones. Genetic risk profiles are being created for a range of common diseases. And embryonic gene editing has moved into the clinic. The science community is nearly unanimous that we should be consulting our genomes as early as possible to create healthy offspring. If you can predict it, let's prevent it, and the sooner the better.
When it comes to the care of our babies, kids, and future generations, we are doing things today that we never even dreamed would be possible. But one area that remains murky is the long-fraught question of IQ, and whether to use DNA science to tell us something about it. There are big issues with IQ genetics that should be considered before parents and educators adopt DNA IQ predictions.
IQ tests have been around for over a century. They've been used by doctors, teachers, government officials, and a whole host of institutions as a proxy for intelligence, especially in youth. At times in history, test results have been used to determine whether to allow a person to procreate, remain a part of society, or merely stay alive. These abuses seem to be a distant part of our past, and IQ tests have since garnered their fair share of controversy for exhibiting racial and cultural biases. But they continue to be used across society. Indeed, much of the literature aimed at expecting parents justifies its recommendations (more omegas, less formula, etc.) based on promises of raising a baby's IQ.
This is the power of IQ testing sans DNA science. Until recently, the two were separate entities: IQ tests yielded a quotient computed from an individual's responses to written questions, while genetic tests indicated some disease susceptibility based on the sequence of one's DNA. Yet in recent years, scientists have begun to unlock the secrets of inherited aspects of intelligence with genetic analyses that scan millions of points of variation in DNA. Both bench scientists and direct-to-consumer companies have used these new technologies to find variants associated with exceptional IQ scores. There are a number of tests on the open market that parents and educators can use at will. These tests purport to reveal whether a child is inherently predisposed to be intelligent, and some suggest ways to track them for success.
I started looking into these tests when I was doing research for my book, "Social by Nature: The Promise and Peril of Sociogenomics," which investigated the new genetic science of social phenomena like educational attainment, political persuasion, investment strategies, and health habits. I learned that, while many of the scientists doing the basic research cautioned that the effects of genetic factors were quite small, most saw testing as one data point among many that could help to somehow level the playing field for young people. The rationale went that, in certain circumstances, some needed help more than others. Why not pool our collective resources to help them?
Some experts believed so strongly in the power of DNA behavioral prediction that they argued it would be unfair not to use predictors to determine a kid's future, prevent negative outcomes, and promote the possibility of positive ones. The educators I spoke with out in the wider world agreed. With careful attention, they thought, sociogenomic tests could give young people the push they needed when they possessed DNA sequences that weren't working in their favor. Officials working with troubled youth told me they hoped DNA data could be marshaled early enough that kids would thrive at home and in school and avoid ending up in their care. While my conversations centered on sociogenomic data in general, genetic IQ prediction was completely entangled in it all.
I present these prevailing views to demonstrate both the widespread appeal of genetic predictors as well as the well-meaning intentions of those in favor of using them. It's a truly progressive notion to help those who need help the most. But we must question whether genetic predictors are data points worth looking at.
When we examine the way DNA IQ predictors are generated, we see scientists grouping people with similar IQ test results and academic achievements, and then searching for the DNA those people have in common. But there's a lot more to scores and achievements than meets the eye. Good nutrition, support at home, and access to healthcare and education make a huge difference in how people do. The first problem with using DNA IQ predictors, therefore, is that the data points themselves are compromised: the scores and achievements used to find "IQ genes" already reflect environmental advantage, so the genetic signal is entangled with it.
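To make the mechanics concrete: a DNA IQ predictor of this kind boils down to a weighted sum, with each variant a person carries multiplied by a weight estimated from those group comparisons. Here is a minimal sketch in Python, with invented variant names and weights (no real test is reproduced here):

```python
# Sketch of a polygenic score: a weighted sum over DNA variants.
# Variant IDs and weights are hypothetical, invented for illustration;
# real predictors sum tiny weights over thousands to millions of variants.

weights = {"rs_A": 0.03, "rs_B": -0.01, "rs_C": 0.02}  # from group comparisons
genotype = {"rs_A": 2, "rs_B": 0, "rs_C": 1}           # copies carried (0, 1, or 2)

score = sum(weights[v] * genotype[v] for v in weights)
print(f"Polygenic score: {score:.2f}")

# The catch: if the test scores used to estimate the weights already reflect
# nutrition, schooling, and other environment, the weights inherit those
# distortions, and the "genetic" score partly measures circumstance.
```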
We must then ask ourselves where the deep, enduring inequities in our society are really coming from. A deluge of research has shown that poor life outcomes are a product of social inequalities, like toxic living conditions, underfunded schools, and unhealthy jobs. A wealth of research has also shown that race, gender, sexuality, and class heavily influence life outcomes in numerous ways. Parents and caregivers feed, talk, and play differently with babies of different genders. Teachers treat girls and boys, as well as members of different racial and ethnic backgrounds, differently to the point where they do better and worse in different subject areas.
Healthcare providers consistently racially profile, using diagnostics and prescribing therapies differently for the same health conditions. Access to good schools and healthcare is strongly mediated by one's race and socioeconomic status. But even youth from privileged backgrounds suffer worse health and life outcomes when they identify or are identified as queer. These are but a few examples of the ways in which social inequities affect our chances in life. The second problem with using DNA IQ predictors, therefore, is that they obscure these very real, and frankly lethal, determinants. Instead of attending to the social environment, parents and educators take inborn genetics as the reason for a child's successes or failures.
The third problem with using DNA IQ predictors is that research has shown time and again that people take DNA evidence more seriously than other kinds of evidence. So it's not realistic to say that we can treat IQ genetics as merely one tiny data point. People will always give DNA evidence more weight than it deserves, and given the negligible predictive power these variants have demonstrated, that would be irresponsible.
It is time that we shift our priorities from seeking genetic causes to fixing the social causes we know to be real. Parents and educators need to be wary of solutions aimed at them and their individual children.
You read an online article about climate change, then start scanning the comments on Facebook. Right on cue, Seth the Science Denier chimes in with:
"Humans didn't cause this. Climate is always changing. The earth has always had cycles of warming and cooling—what's happening now isn't new. The idea that humans are causing something that happened long before humans were even around is absurd."
You know he's wrong. You recognize the fallacy in his argument. Do you take the time to engage with him, or write him off and move along?
New research suggests that countering science deniers like Seth is important—not necessarily to change their minds, but to keep them from influencing others.
Looking at Seth's argument, someone without much of a science background might think it makes sense. After all, climate is always changing. The earth has always gone through cycles, even before humans. Without a scientifically sound response, a reader may begin to doubt that human-caused climate change is really a thing.
A study published in Nature Human Behaviour found that science deniers whose arguments go unchallenged can harm other people's attitudes toward science. Many people read discussions without actively engaging, and some may not recognize erroneous information when they see it. Without someone to point out how a denier's statements are false or misleading, readers are more likely to be influenced by the denier's arguments.
Researchers tested two strategies for countering science denial—by topic (presenting the facts) and by technique (addressing the illogical argument). Rebutting a science denier with facts and pointing out the fallacies in their arguments both had a positive effect on audience attitudes toward legitimate science. A combination of topic and technique rebuttals also had a positive effect.
"In the light of these findings we recommend that advocates for science train in topic and technique rebuttal," the authors wrote. "Both strategies were equally effective in mitigating the influence of science deniers in public debates. Advocates can choose which strategy they prefer, depending on their levels of expertise and confidence."
So what does that look like? If we were to counter Seth's statements with a topic rebuttal, focusing on facts, it might look something like this:
Yes, climate has always changed due to varying CO2 levels in the atmosphere. Scientists have tracked that data. But they also have data showing that human activity, such as burning fossil fuels, has dramatically increased CO2 levels. Climate change is now happening at a rate that isn't natural and is dangerous for life as we know it.
A technique rebuttal might focus on how Seth is using selective information and leaving out important facts:
Climate has always changed, that's true. But you've omitted important information about why it changes and what's different about the changes we're seeing now.
Ultimately, we could combine the two approaches in something like this:
Climate has always changed, but you've omitted important information about why it changes and what's different about what we're seeing now. Levels of CO2 in the atmosphere are largely what drives natural climate change, but human activity has increased CO2 beyond natural levels. That's making climate change happen faster than it would naturally, with devastating effects for life on Earth.
Remember that the point is not to convince Seth, though it's great if that happens. Who you're really addressing are the lurkers who might be swayed by misinformation if it isn't countered by truth.
It's a wacky world out there, science lovers. Keep on fighting the good fight.