Why Are Autism Rates Steadily Rising?
Stefania Sterling was just 21 when she had her son, Charlie. She was young and healthy, with no genetic issues apparent in either her or her husband's family, so she expected Charlie to be typical.
"It is surprising that the prevalence of a significant disorder like autism has risen so consistently over a relatively brief period."
It wasn't until she went to a Mommy and Me music class when he was one, and she saw all the other one-year-olds walking, that she realized how different her son was. He could barely crawl, didn't speak, and made no eye contact. By the time he was three, he was diagnosed as being on the lower functioning end of the autism spectrum.
She isn't sure why it happened – and researchers, too, are still trying to understand the basis of the complex condition. Studies suggest that genes can act together with influences from the environment to affect development in ways that lead to Autism Spectrum Disorder (ASD). But rates of ASD are rising dramatically, making the need to figure out why it's happening all the more urgent.
The Latest News
Indeed, the CDC's latest autism report, released last week, which uses 2016 data, found that the prevalence of ASD in four-year-old children was one in 64 children, or 15.6 affected children per 1,000. That's more than the 14.1 rate they found in 2014, for the 11 states included in the study. New Jersey, as in years past, was the highest, with 25.3 per 1,000, compared to Missouri, which had just 8.8 per 1,000.
The rate for eight-year-olds had risen as well. Researchers found the ASD prevalence nationwide was 18.5 per 1,000, or one in 54, about 10 percent higher than the 16.8 rate found in 2014. New Jersey, again, was the highest, at one in 32 kids, compared to Colorado, which had the lowest rate, at one in 76 kids. For New Jersey, that's a 175 percent rise from the baseline number taken in 2000, when the state had just one in 101 kids.
"It is surprising that the prevalence of a significant disorder like autism has risen so consistently over a relatively brief period," said Walter Zahorodny, an associate professor of pediatrics at Rutgers New Jersey Medical School, who was involved in collecting the data.
The study echoed the findings of a surprising 2011 study in South Korea that found 1 in every 38 students had ASD. That was the first comprehensive study of autism prevalence using a total population sample: A team of investigators from the U.S., South Korea, and Canada looked at 55,000 children ages 7 to 12 living in a community in South Korea and found that 2.64 percent of them had some level of autism.
Searching for Answers
Scientists can't put their finger on why rates are rising. Some say it's better diagnosis – that is, it's not that more people have autism, but that we're better at detecting it. Others attribute it to changes in the diagnostic criteria. Specifically, the May 2013 release of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) – the standard classification of mental disorders – removed the communication deficit from the autism definition, allowing more children to fall under that category. Cynical observers believe physicians and therapists are handing out the diagnosis more freely to allow access to services available only to children with autism, but that are also effective for other children.
Alycia Halladay, chief science officer for the Autism Science Foundation in New York, said she wishes there were just one answer, but there's not. While she believes the rising ASD numbers are due in part to factors like better diagnosis and a change in the definition, she does not believe that accounts for the entire rise in prevalence. As for the high numbers in New Jersey, she said the state has always had a higher prevalence of autism compared to other states. It is also one of the few states that does a good job at recording cases of autism in its educational records, meaning that children in New Jersey are more likely to be counted compared to kids in other states.
"Not every state is as good as New Jersey," she said. "That accounts for some of the difference compared to elsewhere, but we don't know if it's all of the difference in prevalence, or most of it, or what."
"What we do know is that vaccinations do not cause autism."
There is simply no defined proven reason for these increases, said Scott Badesch, outgoing president and CEO of the Autism Society of America.
"There are suggestions that it is based on better diagnosis, but there are also suggestions that the incidence of autism is in fact increasing due to reasons that have yet to be determined," he said, adding, "What we do know is that vaccinations do not cause autism."
Zahorodny, the pediatrics professor, believes something is going on beyond better detection or evolving definitions.
"Changes in awareness and shifts in how children are identified or diagnosed are relevant, but they only take you so far in accounting for an increase of this magnitude," he said. "We don't know what is driving the surge in autism recorded by the ADDM Network and others."
He suggested that the increase in prevalence could be due to non-genetic environmental triggers or risk factors we do not yet know about, citing possibilities including parental age, prematurity, low birth weight, multiple births, breech presentation, or C-section delivery. It may not be one, but rather several factors combined, he said.
"Increases in ASD prevalence have affected the whole population, so the triggers or risks must be very widely dispersed across all strata," he added.
There are studies that find new risk factors for ASD almost on a daily basis, said Idan Menashe, assistant professor in the Department of Health at Ben-Gurion University of the Negev, the fastest growing research university in Israel.
"There are plenty of studies that find new genetic variants (and new genes)," he said. In addition, various prenatal and perinatal risk factors are associated with a risk of ASD. He cited a study his university conducted last year on the relationship between C-section births and ASD, which found that exposure to general anesthesia may explain the association.
Whatever the cause, health practitioners are seeing the consequences in real time.
"People say rates are higher because of the changes in the diagnostic criteria," said Dr. Roseann Capanna-Hodge, a psychologist in Ridgefield, CT. "And they say it's easier for children to get identified. I say that's not the truth and that I've been doing this for 30 years, and that even 10 years ago, I did not see the level of autism that I do see today."
Sure, we're better at detecting autism, she added, but the detection improvements have largely occurred at the low- to mid-level part of the spectrum. In her experience, the higher rates of autism are occurring at the more severe end.
A Polarizing Theory
Among the more controversial risk factors scientists are exploring is the role environmental toxins may play in the development of autism. Some scientists, doctors and mental health experts suspect that toxins like heavy metals, pesticides, chemicals, or pollution may interrupt the way genes are expressed or the way endocrine systems function, manifesting in symptoms of autism. But others firmly resist such claims, at least until more evidence comes forth. To date, studies have been mixed and many have been more associative than causative.
"Today, scientists are still trying to figure out whether there are other environmental changes that can explain this rise, but studies of this question didn't provide any conclusive answer," said Menashe, who also serves as the scientific director of the National Autism Research Center at BGU.
"It's not everything that makes Charlie. He's just like any other kid."
That inconclusiveness has not dissuaded some doctors from taking the perspective that toxins do play a role. "Autism rates are rising because there is a mismatch between our genes and our environment," said Julia Getzelman, a pediatrician in San Francisco. "The majority of our evolution didn't include the kinds of toxic hits we are experiencing. The planet has changed drastically in just the last 75 years – it has become more and more polluted with tens of thousands of unregulated chemicals being used by industry that are having effects on our most vulnerable."
She cites BPA, an industrial chemical that has been used since the 1960s to make certain plastics and resins. A large body of research, she says, has shown its impact on human health and the endocrine system. BPA binds to our own hormone receptors, so it may negatively impact the thyroid and brain. A study in 2015 was the first to identify a link between BPA and some children with autism, but the relationship was associative, not causative. Meanwhile, the Food and Drug Administration maintains that BPA is safe at the current levels occurring in food, based on its ongoing review of the available scientific evidence.
Michael Mooney, president of St. Louis-based Delta Genesis, a non-profit organization that treats children struggling with neurodevelopmental delays like autism, suspects a strong role for epigenetics, which refers to changes in how genes are expressed as a result of environmental influences, lifestyle behaviors, age, or disease states.
He believes some children are genetically predisposed to the disorder, and some unknown influence or combination of influences pushes them over the edge, triggering epigenetic changes that result in symptoms of autism.
For Stefania Sterling, it doesn't really matter how or why she had an autistic child. That's only one part of Charlie.
"It's not everything that makes Charlie," she said. "He's just like any other kid. He comes with happy moments. He comes with sad moments. Just like my other three kids."
In The Fake News Era, Are We Too Gullible? No, Says Cognitive Scientist
One of the oddest political hoaxes of recent times was Pizzagate, in which conspiracy theorists claimed that Hillary Clinton and her 2016 campaign chief ran a child sex ring from the basement of a Washington, DC, pizzeria.
To fight disinformation more effectively, he suggests, humans need to stop believing in one thing above all: our own gullibility.
Millions of believers spread the rumor on social media, abetted by Russian bots; one outraged netizen stormed the restaurant with an assault rifle and shot open what he took to be the dungeon door. (It actually led to a computer closet.) Pundits cited the imbroglio as evidence that Americans had lost the ability to tell fake news from the real thing, putting our democracy in peril.
Such fears, however, are nothing new. "For most of history, the concept of widespread credulity has been fundamental to our understanding of society," observes Hugo Mercier in Not Born Yesterday: The Science of Who We Trust and What We Believe (Princeton University Press, 2020). In the fifth century BCE, he points out, the historian Thucydides blamed Athens' defeat by Sparta on a demagogue who hoodwinked the public into supporting idiotic military strategies; Plato extended that argument to condemn democracy itself. Today, atheists and fundamentalists decry one another's gullibility, as do climate-change accepters and deniers. Leftists bemoan the masses' blind acceptance of the "dominant ideology," while conservatives accuse those who do revolt of being duped by cunning agitators.
What's changed, all sides agree, is the speed at which bamboozlement can propagate. In the digital age, it seems, a sucker is born every nanosecond.
The Case Against Credulity
Yet Mercier, a cognitive scientist at the Jean Nicod Institute in Paris, thinks we've got the problem backward. To fight disinformation more effectively, he suggests, humans need to stop believing in one thing above all: our own gullibility. "We don't credulously accept whatever we're told—even when those views are supported by the majority of the population, or by prestigious, charismatic individuals," he writes. "On the contrary, we are skilled at figuring out who to trust and what to believe, and, if anything, we're too hard rather than too easy to influence."
He bases those contentions on a growing body of research in neuropsychiatry, evolutionary psychology, and other fields. Humans, Mercier argues, are hardwired to balance openness with vigilance when assessing communicated information. To gauge a statement's accuracy, we instinctively test it from many angles, including: Does it jibe with what I already believe? Does the speaker share my interests? Has she demonstrated competence in this area? What's her reputation for trustworthiness? And, with more complex assertions: Does the argument make sense?
This process, Mercier says, enables us to learn much more from one another than do other animals, and to communicate in a far more complex way—key to our unparalleled adaptability. But it doesn't always save us from trusting liars or embracing demonstrably false beliefs. To better understand why, leapsmag spoke with the author.
How did you come to write Not Born Yesterday?
In 2010, I collaborated with the cognitive scientist Dan Sperber and some other colleagues on a paper called "Epistemic Vigilance," which laid out the argument that evolutionarily, it would make no sense for humans to be gullible. If you can be easily manipulated and influenced, you're going to be in major trouble. But as I talked to people, I kept encountering resistance. They'd tell me, "No, no, people are influenced by advertising, by political campaigns, by religious leaders." I started doing more research to see if I was wrong, and eventually I had enough to write a book.
With all the talk about "fake news" these days, the topic has gotten a lot more timely.
Yes. But on the whole, I'm skeptical that fake news matters very much. And all the energy we spend fighting it is energy not spent on other pursuits that may be better ways of improving our informational environment. The real challenge, I think, is not how to shut up people who say stupid things on the internet, but how to make it easier for people who say correct things to convince others.
"History shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion."
You start the book with an anecdote about your encounter with a con artist several years ago, who scammed you out of 20 euros. Why did you choose that anecdote?
Although I'm arguing that people aren't generally gullible, I'm not saying we're completely impervious to attempts at tricking us. It's just that we're much better than we think at resisting manipulation. And while there's a risk of trusting someone who doesn't deserve to be trusted, there's also a risk of not trusting someone who could have been trusted. You miss out on someone who could help you, or from whom you might have learned something—including figuring out who to trust.
You argue that in humans, vigilance and open-mindedness evolved hand-in-hand, leading to a set of cognitive mechanisms you call "open vigilance."
There's a common view that people start from a state of being gullible and easy to influence, and get better at rejecting information as they become smarter and more sophisticated. But that's not what really happens. It's much harder to get apes than humans to do anything they don't want to do, for example. And research suggests that over evolutionary time, the better our species became at telling what we should and shouldn't listen to, the more open to influence we became. Even small children have ways to evaluate what people tell them.
The most basic is what I call "plausibility checking": if you tell them you're 200 years old, they're going to find that highly suspicious. Kids pay attention to competence; if someone is an expert in the relevant field, they'll trust her more. They're likelier to trust someone who's nice to them. My colleagues and I have found that by age 2 ½, children can distinguish between very strong and very weak arguments. Obviously, these skills keep developing throughout your life.
But you've found that even the most forceful leaders—and their propaganda machines—have a hard time changing people's minds.
Throughout history, there's been this fear of demagogues leading whole countries into terrible decisions. In reality, these leaders are mostly good at reading the crowd and figuring out what people want to hear. They're not really influencing [the masses]; they're surfing on pre-existing public opinion. We know from a recent study, for instance, that if you match cities in which Hitler gave campaign speeches in the late '20s through early '30s with similar cities in which he didn't give campaign speeches, there was no difference in vote share for the Nazis. Nazi propaganda managed to make Germans who were already anti-Semitic more likely to express their anti-Semitism or act on it. But Germans who were not already anti-Semitic were completely immune to the propaganda.
So why, in totalitarian regimes, do people seem so devoted to the ruler?
It's not a very complex psychology. In these regimes, the slightest show of discontent can be punished by death, or by you and your whole family being sent to a labor camp. That doesn't mean propaganda has no effect, but you can explain people's obedience without it.
What about cult leaders and religious extremists? Their followers seem willing to believe anything.
Prophets and preachers can inspire the kind of fervor that leads people to suicidal acts or doomed crusades. But history shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion. Only when people are ready for extreme actions can a charismatic figure provide the spark that lights the fire.
Once a religion becomes ubiquitous, the limits of its persuasive powers become clear. Every anthropologist knows that in societies that are nominally dominated by orthodox belief systems—whether Christian or Muslim or anything else—most people share a view of God, or the spirit, that's closer to what you find in societies that lack such religions. In the Middle Ages, for instance, you have records of priests complaining of how unruly the people are—how they spend the whole Mass chatting or gossiping, or go on pilgrimages mostly because of all the prostitutes and wine-drinking. They continue pagan practices. They resist attempts to make them pay tithes. It's very far from our image of how much people really bought the dominant religion.
"The mainstream media is extremely reliable. The scientific consensus is extremely reliable."
And what about all those wild rumors and conspiracy theories on social media? Don't those demonstrate widespread gullibility?
I think not, for two reasons. One is that most of these false beliefs tend to be held in a way that's not very deep. People may say Pizzagate is true, yet that belief doesn't really interact with the rest of their cognition or their behavior. If you really believe that children are being abused, then trying to free them is the moral and rational thing to do. But the only person who did that was the guy who took his assault weapon to the pizzeria. Most people just left one-star reviews of the restaurant.
The other reason is that most of these beliefs actually play some useful role for people. Before any ethnic massacre, for example, rumors circulate about atrocities having been committed by the targeted minority. But those beliefs aren't what's really driving the phenomenon. In the horrendous pogrom of Kishinev, Moldova, 100 years ago, you had these stories of blood libel—a child disappeared, typical stuff. And then what did the Christian inhabitants do? They raped the [Jewish] women, they pillaged the wine stores, they stole everything they could. They clearly wanted to get that stuff, and they made up something to justify it.
Where do skeptics like climate-change deniers and anti-vaxxers fit into the picture?
Most people in most countries accept that vaccination is good and that climate change is real and man-made. These ideas are deeply counter-intuitive, so the fact that scientists were able to get them across is quite fascinating. But the environment in which we live is vastly different from the one in which we evolved. There's a lot more information, which makes it harder to figure out who we can trust. The main effect is that we don't trust enough; we don't accept enough information. We also rely on shortcuts and heuristics—coarse cues of trustworthiness. There are people who abuse these cues. They may have a PhD or an MD, and they use those credentials to help them spread messages that are not true and not good. Mostly, they're affirming what people want to believe, but they may also be changing minds at the margins.
How can we improve people's ability to resist that kind of exploitation?
I wish I could tell you! That's literally my next project. Generally speaking, though, my advice is very vanilla. The mainstream media is extremely reliable. The scientific consensus is extremely reliable. If you trust those sources, you'll go wrong in a very few cases, but on the whole, they'll probably give you good results. Yet a lot of the problems that we attribute to people being stupid and irrational are not entirely their fault. If governments were less corrupt, if the pharmaceutical companies were irreproachable, these problems might not go away—but they would certainly be minimized.
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
This innovation may enhance diagnostic precision and promptness, but it also brings ethical concerns to the forefront.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques and algorithms powered by artificial intelligence – the branch of computer science in which machines learn and execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for use in patient care is essential to AI's responsible and transparent implementation as the industry grapples with voluminous quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI – a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body – has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
Overreliance on new technology also poses concern when humans "outsource the process to a machine."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet, such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on all physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing individual data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, said Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
"AI is a paradigm shift with magic power and great potential."
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."