Can Genetic Testing Help Shed Light on the Autism Epidemic?
Autism cases are still on the rise, and scientists don't know why. In April, the Centers for Disease Control and Prevention (CDC) reported that rates of autism had increased once again, now at an estimated 1 in 59 children, up from 1 in 68 just two years ago. Rates have been climbing steadily since 2007, when the CDC initially estimated that 1 in 150 children were on the autism spectrum.
The standard explanation for this increase has been the expansion of the definition of autism to include milder forms like Asperger's, as well as a heightened awareness of the condition that has improved screening efforts. For example, the most recent jump is attributed to children in minority communities being diagnosed who might have previously gone under the radar. In addition, more federally funded resources are available to children with autism than other types of developmental disorders, which may prompt families or physicians to push harder for a diagnosis.
Some clinicians are concerned that the creeping expansion of autism is causing the diagnosis to lose its meaning. William Graf, a pediatric neurologist at Connecticut Children's Medical Center, says that when a nurse tells him that a new patient has a history of autism, the term is no longer a useful description. "Even though I know this topic extremely well, I cannot picture the child anymore," he says. "Use the words mild, moderate, or severe. Just give me a couple more clues, because when you say autism today, I have no idea what people are talking about anymore."
Genetic testing has emerged as one potential way to remedy the overly broad label by narrowing down a heterogeneous diagnosis to a specific genetic disorder. According to Suma Shankar, a medical geneticist at the University of California, Davis, up to 60 percent of autism cases could be attributed to underlying genetic causes. Common examples include Fragile X Syndrome or Rett Syndrome—neurodevelopmental disorders that are caused by mutations in individual genes and are behaviorally classified as autism.
Having a genetic diagnosis in addition to an autism diagnosis can help families in several ways, says Shankar. Knowing the genetic origin can alert families to other potential health problems that are linked to the mutation, such as heart defects or problems with the immune system. It may also help clinicians provide more targeted behavioral therapies and could one day lead to the development of drug treatments for underlying neurochemical abnormalities. "It will pave the way to begin to tease out treatments," Shankar says.
When a doctor diagnoses a child as having a specific genetic condition, the label of autism is still kept because it is more well-known and gives the child access to more state-funded resources. Children can thus be diagnosed with multiple conditions: autism spectrum disorder and their specific gene mutation. However, with more than 500 different mutations associated with autism, very few additional diagnoses provide meaningful information. What's more, the presence or absence of a mutation doesn't necessarily indicate whether the child is on the mild or severe end of the autism spectrum.
Because of this, Graf doubts that genetic classifications are really that useful. He tells the story of a boy with epilepsy and severe intellectual disabilities who was diagnosed with autism as a young child. Years later, Graf ordered genetic testing for the boy and discovered that he had a mutation in the gene SYNGAP1. However, this knowledge didn't change the boy's autism status. "That diagnosis [SYNGAP1] turns out to be very specific for him, but it will never be a household name. Biologically it's good to know, and now it's all over his chart. But on a societal level he still needs this catch-all label [of autism]," Graf says.
Jennifer Singh, a sociologist at Georgia Tech who wrote the book Multiple Autisms: Spectrums of Advocacy and Genomic Science, agrees. "I don't know that the knowledge gained from just having a gene that's linked to autism," is that beneficial, she says. "It gives some information, but to what degree does that change treatment or prognosis? Because at the end of the day you have to address the issues that are at hand, whatever they might be."
As more children are diagnosed with autism, knowledge of the underlying genetic mutation causing the condition could help families better understand the diagnosis and anticipate their child's developmental trajectory. However, for the vast majority, an additional label provides little clarity or consolation.
Instead of spending money on genetic screens, Singh thinks the resources would be better used on additional services for people who don't have access to behavioral, speech, or occupational therapy. "Things that are really going to matter for this child in their future," she says.
Over the past two millennia, Chinese ingenuity has spawned some of humanity's most consequential inventions. Without gunpowder, guns, bombs, and rockets; without paper, printing, and money printed on paper; and without the compass, which enabled ships to navigate the open ocean, modern civilization might never have been born.
Yet China lapsed into cultural and technological stagnation during the Qing dynasty, just as the Scientific Revolution was transforming Europe. Western colonial incursions and a series of failed rebellions further sapped the Celestial Empire's capacity for innovation. By the mid-20th century, when the Communist triumph led to a devastating famine and years of bloody political turmoil, practically the only intellectual property China could offer for export was Mao's Little Red Book.
After Deng Xiaoping took power in 1978, launching a transition from a rigidly planned economy to a semi-capitalist one, China's factories began pumping out goods for foreign consumption. Still, originality remained a low priority. The phrase "Made in China" came to be synonymous with "cheap knockoff."
Today, however, a specter is haunting the developed world: Chinese innovation dominance. It first wafted into view in 2006, when the government announced an "indigenous innovation" campaign, dedicated to establishing China as a technology powerhouse by 2020—and a global leader by 2050—as part of its Medium- and Long-Term National Plan for Science and Technology Development. Since then, an array of initiatives have sought to unleash what pundits often call the Chinese "tech dragon," whether in individual industries, such as semiconductors or artificial intelligence, or across the board (as with the Made in China 2025 project, inaugurated in 2015). These efforts draw on a well-stocked bureaucratic arsenal: state-directed financing; strategic mergers and acquisitions; competition policies designed to boost domestic companies and hobble foreign rivals; buy-Chinese procurement policies; cash incentives for companies to file patents; subsidies for academic researchers in favored fields.
The results have been spectacular—so much so that the United States feels its preeminence threatened. Voices across the political spectrum are calling for emergency measures, including a clampdown on technology transfers, capital investment, and Chinese students' ability to study abroad. But are the fears driving such proposals justified?
"We've flipped from thinking China is incapable of anything but imitation to thinking China is about to eat our lunch," says Kaiser Kuo, host of the Sinica podcast at supchina.com, who recently returned to the U.S. after 20 years in Beijing—the last six as director of international communications for the tech giant Baidu. Like some other veteran China-watchers, Kuo believes neither extreme reflects reality. "We're in as much danger now of overestimating China's innovative capacity," he warns, "as we were a few years ago of underestimating it."
A Lab and Tech-Business Bonanza
By many measures, China's innovation renaissance is mind-boggling. Spending on research and development as a percentage of gross domestic product nearly quadrupled between 1996 and 2016, from 0.56 percent to 2.1 percent; during the same period, spending in the United States rose by just 0.3 percentage points, from 2.44 to 2.79 percent of GDP. China is now second only to the U.S. in total R&D spending, accounting for 21 percent of the global total of $2 trillion, according to a report released in January by the National Science Foundation. In 2016, the number of scientific publications from China exceeded those from the U.S. for the first time, by 426,000 to 409,000. Chinese researchers are blazing new trails on the frontiers of cloning, stem cell medicine, gene editing, and quantum computing. Chinese patent applications have soared from 170,000 to nearly 3 million since 2000; the country now files almost as many international patents as the U.S. and Japan, and more than Germany and South Korea. Between 2008 and 2017, two Chinese tech firms—Huawei and ZTE—traded places as the world's top patent filer in six out of nine years.
Accompanying this lab-based ferment is a tech-business bonanza. China's three biggest internet companies, Baidu, Alibaba Group and Tencent Holdings (known collectively as BAT), have become global titans of search, e-commerce, mobile payments, gaming, and social media. Da-Jiang Innovations in Science and Technology (DJI) controls more than 70 percent of the world's commercial drone market. Of the planet's 262 "unicorns" (startups worth more than a billion dollars), about one-third are Chinese. The country attracted $77 billion in venture capital investment between 2014 and 2016, according to Fortune, and is now among the top three markets for VC in emerging technologies including AI, virtual reality, autonomous vehicles, and 3D printing.
These developments have fueled a buoyant techno-optimism in China that contrasts sharply with the darker view increasingly prevalent in the West—in part, perhaps, because China's historic limits on civil liberties have inured the populace to the intrusive implications of, say, facial recognition technology or social-credit software, which are already being used to tighten government control. "China is still in its Star Trek phase, while we're in our Black Mirror phase," Kuo observes. By contrast with Americans' ambivalent attitudes toward Facebook founder Mark Zuckerberg or Amazon's Jeff Bezos, he adds, most Chinese regard tech entrepreneurs like Baidu's Robin Li and Alibaba's Jack Ma as "flat-out heroes."
Yet there are formidable barriers to China beating America in the innovation race—or even catching up anytime soon. Many are catalogued in The Fat Tech Dragon, a 2017 monograph by Scott Kennedy, deputy director of the Freeman Chair in China Studies and director of the Project on Chinese Business and Political Economy at the Center for Strategic and International Studies. Among the obstacles, Kennedy writes, are "an education system that encourages deference to authority and does not prepare students to be creative and take risks, a financial system that disproportionately funnels funds to undeserving state-owned enterprises… and a market structure where profits can be made through a low-margin, high-volume strategy or through political connections."
China's R&D money, Kennedy points out, is mostly showered on the "D": of the $209 billion spent in 2015, only 5 percent went toward basic research, 10.8 percent toward applied research, and a massive 84.2 percent toward development. While fully half of venture capital in the States goes to early-stage startups, the figure for China is under 20 percent; true "angel" investors are scarce. Likewise, only 21 percent of Chinese patents are for original inventions, as opposed to tweaks of existing technologies. Most problematic, the domestic value of patents in China is strikingly low. In 2015, the country's patent licensing generated revenues of just $1.75 billion, compared to $115 billion for IP licensing in the U.S. in 2012 (the most recent year for which data is available). In short, Kennedy concludes, "China may now be a 'large' IP country, but it is still a 'weak' one."
Anne Stevenson-Yang, co-founder and research director of J Capital Research, and a leading China analyst, sees another potential stumbling block: the government's obsession with neck-snapping GDP growth. "What China does is to determine, 'Our GDP growth will be X,' and then it generates enough investment to create X," Stevenson-Yang explains. To meet those quotas, officials pour money into gigantic construction projects, creating the empty "ghost cities" that litter the countryside, or subsidize industrial production far beyond realistic demand. "It's the ultimate Ponzi-scheme economy," she says, citing as examples the Chinese cellphone and solar industries, which ballooned on state funding, flooded global markets with dirt-cheap products, thrived just long enough to kill off most of their overseas competitors, and then largely collapsed. Such ventures, Stevenson-Yang notes, have driven China's debt load perilously high. "They're trying very hard to keep the economy from crashing, but it'll happen eventually," she predicts. "Then there will be a major, major contraction."
"An Intensifying Race Toward Techno-Nationalism"
The greatest vulnerability of the Chinese innovation boom may be that it still depends heavily on imported IP. "Over the last few years, China has placed its bets on a combination of global knowledge sourcing and indigenous technology development," says Dieter Ernst, a senior fellow at the Centre for International Governance Innovation in Waterloo, Canada, and the East-West Center in Honolulu, who has served as an Asia advisor for the U.N. and the World Bank. Aside from international journals (and, occasionally, industrial espionage), Chinese labs and corporations obtain non-indigenous knowledge in a number of ways: by paying licensing fees; recruiting Chinese scientists and engineers who've studied or worked abroad; hiring professionals from other countries; or acquiring foreign companies. And though enforcement of IP laws has improved markedly in recent years, foreign businesses are often pressured to provide technology transfers in exchange for access to markets.
Many of China's top tech entrepreneurs—including Ma, Li, and Alibaba's Joseph Tsai—are alumni of U.S. universities, and, as Kuo puts it, "big fans of all things American." Unfortunately, however, Americans are ever less likely to be fans of China, thanks largely to that country's sometimes predatory trade practices—and also to what Ernst calls "an intensifying race toward techno-nationalism." With varying degrees of bellicosity and consistency, leaders of both U.S. parties embrace elements of the trend, as do politicians (and voters) across much of Europe. "There's a growing consensus that China is poised to overtake us," says Ernst, "and that we need to design policies to obstruct its rise."
One of the foremost liberal analysts supporting this view is Lee Branstetter, a professor of economics and public policy at Carnegie Mellon University and former senior economist on President Barack Obama's Council of Economic Advisers. "Over the decades, in a systematic and premeditated fashion, the Chinese government and its state-owned enterprises have worked to extract valuable technology from foreign multinationals, with an explicit goal of eventually displacing those leading multinationals with successful Chinese firms in global markets," Branstetter wrote in a 2017 report to the United States Trade Representative. To combat such "forced transfers," he suggested, laws could be passed empowering foreign governments to investigate coercive requests and block any deemed inappropriate—not just those involving military-related or crucial infrastructure technology, which current statutes cover. Branstetter also called for "sharply" curtailing Chinese students' access to Western graduate programs, as a way to "get policymakers' attention in Beijing" and induce them to play fair.
Similar sentiments are taking hold in Congress, where the Foreign Investment Risk Review Modernization Act—aimed at strengthening the process by which the Committee on Foreign Investment in the United States reviews Chinese acquisition of American technologies—is expected to pass with bipartisan support, though its harsher provisions were softened due to objections from Silicon Valley. The Trump Administration announced in May that it would soon take executive action to curb Chinese investments in U.S. tech firms and otherwise limit access to intellectual property. The State Department, meanwhile, imposed a one-year limit on visas for Chinese grad students in high-tech fields.
Ernst argues that such measures are motivated largely by exaggerated notions of China's ability to reach its ambitious goals, and by the political advantages that fearmongering confers. "If you look at AI, chip design and fabrication, robotics, pharmaceuticals, the gap with the U.S. is huge," he says. "Reducing it will take at least 10 or 15 years."
Cracking down on U.S. tech transfers to Chinese companies, Ernst cautions, will deprive U.S. firms of vital investment capital and spur China to retaliate, cutting off access to the nation's gargantuan markets; it will also push China to forge IP deals with more compliant nations, or revert to outright piracy. And restricting student visas, besides harming U.S. universities that depend on Chinese scholars' billions in tuition, will have a "chilling effect on America's ability to attract researchers and engineers from all countries."
America's own science and technology community, Ernst adds, considers it crucial to swap ideas with China's fast-growing pool of talent. The 2017 annual meeting of the Palo Alto-based Association for the Advancement of Artificial Intelligence, he notes, featured a nearly equal number of papers by researchers in China and the U.S. Organizers postponed the meeting after discovering that the original date coincided with the Chinese New Year.
China's rising influence on the tech world carries upsides as well as downsides, Scott Kennedy observes. The country's successes in e-commerce, he says, "haven't damaged the global internet sector, but have actually been a spur to additional innovation and progress. By contrast, China's success in solar and wind has decimated the global sectors," due to state-mandated overcapacity. "When Chinese firms win through open competition, the outcome is constructive; when they win through industrial policy and protectionism, the outcome is destructive."
The solution, Kennedy and like-minded experts argue, is to discourage protectionism rather than engage in it, adjusting tech-transfer policy just enough to cope with evolving national-security concerns. Instead of trying to squelch China's innovation explosion, they say, the U.S. should seek ways to spread its potential benefits (as happened in previous eras with Japan and South Korea), and increase America's indigenous investments in tech-related research, education, and job training.
"It's not a zero-sum game," says Kaiser Kuo. "I don't think China is going to eat our lunch. We can sit down and enjoy lunch together."
Human experimentation has come a long way since congressional hearings in the 1970s exposed patterns of abuse. Where yesterday's patients were protected only by the good conscience of physician-researchers, today's patients are spirited past hazards through an elaborate system of oversight and informed consent. Yet in many ways, the project of grounding human research on ethical foundations remains incomplete.
To be sure, much of the medical research we do meets exceedingly high standards. Progress in cancer immunotherapy, or infectious disease, reflects the best of what can be accomplished when medical scientists and patients collaborate productively. And abuses of the earlier part of the 20th century, like those perpetrated by the U.S. Public Health Service in Guatemala, are for the history books.
Yet as human research has become a mainstay of career and commercial advancement among academics, research centers, and industry, new threats to research integrity have emerged. Many flourish in the blind spot of current oversight systems.
Take, for example, the tendency to publish only "positive" findings ("publication bias"). When patients participate in studies, they are told that their contributions will promote medical discovery. That can't happen if results of experiments never get beyond the hard drives of researchers. While researchers are often eager to publish trials showing a drug works, according to a study my own team conducted, fewer than 4 in 10 trials of drugs that never receive FDA approval get published. This tendency, which occurs in academia as well as industry, deprives other scientists of opportunities to build on these failures and make good on the sacrifice of patients. It also means the trials may be inadvertently repeated by other researchers, subjecting more patients to risks.
On the other hand, many clinical trials test treatments that have already been proven effective beyond a shadow of doubt. Consider the drug aprotinin, used for the management of bleeding during surgery. An analysis in 2005 showed that, not long after the drug was proven effective, researchers launched dozens of additional placebo-controlled trials. These redundant trials went far beyond what regulators required for drug approval, and deprived patients in placebo arms of a proven effective therapy. Whether because of an oversight or deliberately (does it matter?), researchers conducting these trials often failed in publications to describe previous evidence of efficacy. What's the point of running a trial if no one reads the results?
At the other extreme are trials that are little more than shots in the dark. In one case, patients with spinal cord injury were enrolled in a safety trial testing a cell-based regenerative medicine treatment. After the trial stopped (results were negative), laboratory scientists revealed that the cells had been shown ineffective in animal experiments. Though this information had been available to the company and FDA, researchers pursued the trial anyway.
It is surprisingly easy for companies to hijack research to market their treatments. One way this happens is through "seeding trials": studies that are designed not to address a research question, but instead to habituate doctors to using a new drug and to generate publications that serve as advertisements. Such trials flood the medical literature with findings that are unreliable because the studies are small and not well designed. They also use the prestige of science to pursue goals that are purely commercial. Yet because they harm science, not patients (many such studies are minimally risky because all patients receive proven effective medications), ethics committees rarely block them.
Closely related is the phenomenon of small uninformative trials. After drugs get approved by the FDA, companies often launch dozens of small trials in diseases other than the one the drug was approved to treat. Because these studies are small, they often overestimate efficacy. Indeed, the way trials are often set up, if a company tests an ineffective drug in 40 different studies, at least one will typically produce a false positive by chance alone. Because companies are free to run as many trials as they like and to circulate "positive" results, they have incentives to run lots of small trials that don't provide a definitive test of their drug's efficacy.
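The arithmetic behind this claim, that many small tests of an ineffective drug will almost inevitably yield a spurious "positive," can be sketched in a few lines of Python. The 0.05 significance threshold below is the conventional assumption in clinical research, not a figure from the article, and the trials are assumed independent:

```python
def prob_at_least_one_false_positive(n_trials: int, alpha: float = 0.05) -> float:
    """Probability that at least one of n independent trials of a truly
    ineffective drug crosses the significance threshold by chance alone."""
    return 1 - (1 - alpha) ** n_trials

if __name__ == "__main__":
    # With the conventional alpha = 0.05, a single trial of an ineffective
    # drug has only a 5% chance of a false positive; across 40 trials,
    # the chance of at least one climbs to roughly 87%.
    for n in (1, 10, 40):
        p = prob_at_least_one_false_positive(n)
        print(f"{n:>2} trials -> P(>=1 false positive) = {p:.2f}")
```

A company that circulates only the chance "positives" from such a portfolio can thus manufacture apparent evidence of efficacy without running a single well-powered study.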
Don't think public agencies are much better. Funders like the National Institutes of Health secure their appropriations by gratifying Congress. This means that NIH gets more by spreading its funding among small studies in different Congressional districts than by concentrating budgets among a few research institutions pursuing large trials. The result is that some NIH-funded clinical trials are not especially equipped to inform medical practice.
It's tempting to think that FDA, medical journals, ethics committees, and funding agencies can fix these problems. However, these practices continue in part because FDA, ethics committees, and researchers often do not see what is at stake for patients by acquiescing to low scientific standards. This behavior dishonors the patients who volunteer for research, and also threatens the welfare of downstream patients, whose care will be determined by the output of research.
To fix this, deficiencies in study design and reporting need to be rendered visible. Universities, funding bodies, and companies should be scored by a neutral third party based on the impact of their trials, or the extent to which their trials are published in full, like Moody's for credit ratings or the Kelley Blue Book for cars. This system of accountability would allow everyone to see which institutions make the most of the contributions of research subjects. It could also harness the competitive instincts of institutions to improve research quality.
Another step would be for researchers to level with patients when they enroll in studies. Patients who agree to research are usually offered bromides about how their participation may help future patients. However, not all studies are created equal with respect to merit. Patients have a right to know when they are entering studies that are unlikely to have a meaningful impact on medicine.
Ethics committees and drug regulators have done a good job protecting research volunteers from unchecked scientific ambition. However, today's research is plagued by studies that have poor scientific credentials. Such studies free-ride on the well-earned reputation of serious medical science. They also potentially distort the evidence available to physicians and healthcare systems. Regulators, academic medical centers, and others should establish policies that better protect human research volunteers by protecting the quality of the research itself.