Researchers Behaving Badly: Known Frauds Are "the Tip of the Iceberg"
Last week, the whistleblowers in the Paolo Macchiarini affair at Sweden's Karolinska Institutet went on the record here to detail the retaliation they suffered for trying to expose a star surgeon's appalling research misconduct.
The whistleblowers had discovered that in six published papers, Macchiarini falsified data, lied about the condition of patients and circumvented ethical approvals. As a result, multiple patients suffered and died. But Karolinska turned a blind eye for years.
Scientific fraud of the type committed by Macchiarini is rare, but studies suggest that it's on the rise. Just this week, for example, Retraction Watch and STAT together broke the news that a Harvard Medical School cardiologist and stem cell researcher, Piero Anversa, falsified data in a whopping 31 papers, which now have to be retracted. Anversa had claimed that he could regenerate heart muscle by injecting bone marrow cells into damaged hearts, a result that no one has been able to duplicate.
A 2009 study published in the journal PLOS ONE found that about two percent of scientists admitted to committing fabrication, falsification, or plagiarism in their work. That's a small number, but up to one third of scientists admit to committing "questionable research practices" that fall into a gray area between rigorous accuracy and outright fraud.
These dubious practices may include misrepresentations, research bias, and inaccurate interpretations of data. One common questionable research practice entails formulating a hypothesis after the research is done in order to claim a successful premise. Another highly questionable practice that can shape research is ghost-authoring by representatives of the pharmaceutical industry and other for-profit fields. Still another is gifting co-authorship to unqualified but powerful individuals who can advance one's career. Such practices can unfairly bolster a scientist's reputation and increase the likelihood of getting the work published.
The above percentages represent what scientists admit to doing themselves; when they evaluate the practices of their colleagues, the numbers jump dramatically. In a 2012 study published in the Journal of Research in Medical Sciences, researchers estimated that 14 percent of other scientists commit serious misconduct, while up to 72 percent engage in questionable practices. While these are only estimates, the problem is clearly not one of just a few bad apples.
In the PLOS study, Daniele Fanelli says that increasing evidence suggests the known frauds are "just the 'tip of the iceberg,' and that many cases are never discovered" because fraud is extremely hard to detect.
In addition, it's likely that most cases of scientific misconduct go unreported because of the high price of whistleblowing. Those in the Macchiarini case showed extraordinary persistence in their multi-year campaign to stop his deadly trachea implants, while suffering serious damage to their careers. Such heroic efforts to unmask fraud are probably rare.
To make matters worse, there are numerous players in the scientific world who may be complicit in either committing misconduct or covering it up. These include not only primary researchers but co-authors, institutional executives, journal editors, and industry leaders. Essentially everyone wants to be associated with big breakthroughs, and they may overlook scientifically shaky foundations when a major advance is claimed.
Another part of the problem is that it's rare for students in science and medicine to receive an education in ethics. And studies have shown that older, more experienced and possibly jaded researchers are more likely to fudge results than their younger, more idealistic colleagues.
So, given the steep price that individuals and institutions pay for scientific misconduct, what compels them to go down that road in the first place? According to the JRMS study, individuals face intense pressures to publish and to attract grant money in order to secure teaching positions at universities. Once they have acquired positions, the pressure is on to keep the grants and publishing credits coming in order to obtain tenure, be appointed to positions on boards, and recruit flocks of graduate students to assist in research. And not to be underestimated is the human ego.
Paolo Macchiarini is an especially vivid example of a scientist seeking not only fortune, but fame. He liberally (and falsely) claimed powerful politicians and celebrities, even the Pope, as patients or admirers. He may be an extreme example, but we live in an age of celebrity scientists who bring huge amounts of grant money and high prestige to the institutions that employ them.
The media plays a significant role in both glorifying stars and unmasking frauds. In the Macchiarini scandal, the media first lifted him up, as in NBC's laudatory documentary, "A Leap of Faith," which painted him as a kind of miracle-worker, and then brought him down, as in the January 2016 documentary, "The Experiments," which chronicled the agonizing death of one of his patients.
Institutions can also play a crucial role in scientific fraud by putting more emphasis on the number and frequency of papers published than on their quality. The whole course of a scientist's career is profoundly affected by a metric called the h-index, which reflects both how many papers a researcher has published and how often those papers are cited by others: an h-index of n means the researcher has n papers that have each been cited at least n times. Raising one's h-index becomes an overriding goal, sometimes eclipsing the kind of patient, time-consuming research that leads to true breakthroughs based on reliable results.
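The arithmetic behind the h-index is simple enough to sketch in a few lines. The citation counts below are hypothetical, purely for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have been cited at least h times each."""
    # Sort citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Example: five papers with these (made-up) citation counts.
print(h_index([25, 8, 5, 3, 3]))  # three papers cited at least 3 times -> 3
```

Two researchers with the same total citations can thus have very different h-indexes, which is part of why the metric rewards a steady stream of well-cited papers over a single blockbuster.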
Universities also create a high-pressure environment that encourages scientists to cut corners. They, too, place a heavy emphasis on attracting large monetary grants and accruing fame and prestige. This can lead them, just as it led Karolinska, to protect a star scientist's sloppy or questionable research. According to Dr. Andrew Rosenberg, who is director of the Center for Science and Democracy at the U.S.-based Union of Concerned Scientists, "Karolinska defended its investment in an individual as opposed to the long-term health of the institution. People were dying, and they should have outsourced the investigation from the very beginning."
Having institutions investigate their own practices is a conflict of interest from the get-go, says Rosenberg.
Scientists, universities, and research institutions are also not immune to fads. "Hot" subjects attract grant money and confer prestige, incentivizing scientists to shift their research priorities toward whatever garners more funding, even if it means neglecting their true areas of expertise and interest. In Macchiarini's case, he was allegedly at the forefront of the currently sexy field of regenerative medicine, a field in which Karolinska was making a huge investment.
The relative scarcity of resources intensifies the already significant pressure on scientists. They may want to publish results rapidly, since they face many competitors for limited grant money, academic positions, students, and influence. The scarcity means that a great many researchers will fail while only a few succeed. Once again, the temptation may be to rush research and to show it in the most positive light possible, even if it means fudging or exaggerating results.
Intense competition can have a perverse effect on researchers, according to a 2007 study in the journal Science and Engineering Ethics. Not only does it place undue pressure on scientists to succeed, it frequently leads to the withholding of information from colleagues, which undermines a system in which new discoveries build on the previous work of others. Researchers may feel compelled to withhold their results because of the pressure to be the first to publish. The study's authors propose that more investment in basic research from governments could alleviate some of these competitive pressures.
Scientific journals, although they play a part in publishing flawed science, can't be expected to investigate cases of suspected fraud, says the German science blogger Leonid Schneider. Schneider's writings helped to expose the Macchiarini affair.
"They just basically wait for someone to retract problematic papers," he says.
He also notes that, while American scientists can go to the Office of Research Integrity to report misconduct, whistleblowers in Europe have no external authority to whom they can appeal to investigate cases of fraud.
"They have to go to their employer, who has a vested interest in covering up cases of misconduct," he says.
Science is increasingly international, and major studies can include collaborators from several different countries. Schneider suggests there should be an international body, accessible to all researchers, that will investigate suspected fraud.
Ultimately, says Rosenberg, the scientific system must incorporate trust. "You trust co-authors when you write a paper, and peer reviewers at journals trust that scientists at research institutions like Karolinska are acting with integrity."
Without trust, the whole system falls apart. Science depends for its very existence on the trust of the public, an asset that is hard to regain once it has been betrayed. Scientific research is overwhelmingly financed by tax dollars, and the need for the goodwill of the public is more than an abstraction.
The Macchiarini affair raises a profound question of trust and responsibility: Should multiple co-authors be held responsible for a lead author's misconduct?
Karolinska apparently believes so. When the institution at last owned up to the scandal, it vindictively found Karl-Henrik Grinnemo, one of the whistleblowers, guilty of scientific misconduct as well. It also designated two other whistleblowers as "blameworthy" for their roles as co-authors of the papers on which Macchiarini was the lead author.
As a result, the whistleblowers' reputations and employment prospects have become collateral damage. Accusations of research misconduct can be a career killer. Research grants dry up, employment opportunities evaporate, publishing becomes next to impossible, and collaborators vanish into thin air.
Grinnemo contends that co-authors should only be responsible for their discrete contributions, not for the data supplied by others.
"Different aspects of a paper are highly specialized," he says, "and that's why you have multiple authors. You cannot go through every single bit of data because you don't understand all the parts of the article."
This is especially true in multidisciplinary, translational research, where there are sometimes 20 or more authors. "You have to trust co-authors, and if you find something wrong you have to notify all co-authors. But you couldn't go through everything or it would take years to publish an article," says Grinnemo.
Though the pressures facing scientists are very real, the problem of misconduct is not inevitable. Along with increased support from governments and industry, a change in academic culture that emphasizes quality over quantity of published studies could help encourage meritorious research.
But beyond that, trust will always play a role when numerous specialists unite to achieve a common goal: the accumulation of knowledge that will promote human health, wealth, and well-being.
[Correction: An earlier version of this story mistakenly credited The New York Times with breaking the news of the Anversa retractions, rather than Retraction Watch and STAT, which jointly published the exclusive on October 14th. The piece in the Times ran on October 15th. We regret the error.]
The largest ever seizure of fentanyl in the United States – 254 pounds of the white powder, enough to kill 1 in 3 Americans by overdose – was found under a shipment of cucumbers recently.
Those types of stories barely make the headlines anymore, in part because illicit drugs are no longer just hand-sold by drug dealers; these sales have gone online. The neighborhood dealer faces the same evolving environment as other retailers and may soon go the way of Sears.
But opioids themselves are not going away. "I could make an opioid purchase online in about 30 seconds and have it sent to my door," says Joe Smyser. The epidemiologist and president of The Public Good Projects isn't bragging; he's simply stating a fact about the opioid crisis that has struck the United States. The U.S. Drug Enforcement Administration, social media companies, and some foreign governments have undertaken massive efforts to shut down sites selling illegal drugs, and they have gotten very good at it, shuttering most within a day of their opening.
But it's a Whac-A-Mole situation in which new sites pop up as quickly as older ones are closed; they are promoted through hashtags, social media networks, and ubiquitous email spam that lure visitors to a website or a WhatsApp number to make a purchase. The online disruption by law enforcement has become simply another cost of doing business for drug sellers. Fentanyl and similar analogues, created to evade detection and the law, are at the center of it. Small amounts can be mixed with other, "safer" opioids to produce a high, and that potency, together with the growth of online sales, has contributed to the surge in opioid-related deaths: about 17,500 in 2006; 47,600 in 2017; and a projected 82,000 a year by 2025.
All of this has occurred even while authorities have been cracking down on the prescribing of opioids, and prescription-related deaths have declined. Clearly a policing approach alone is insufficient to take on the opioid crisis.
Building the Tools
The Public Good Projects (PGP), a nonprofit organization founded by concerned experts, was set up to better understand public health issues in this new online environment and to shape better responses. The first step is to understand what people are hearing and the language they are using by monitoring social media and other forms of public communications. "We're collecting data from every publicly available media source that we can get our hands on. It's broadcast television data, it's radio, it's print newspapers and magazines. And then it's online data; it's online video, social media, blogs, websites," Smyser explains.
"Then our job is to create queries, create searches of all of that data so that we find what is the information that Americans are exposed to about a topic, and then what … Americans [are] sharing amongst themselves about that same topic."
He says it's the same thing businesses have been doing for years to monitor their "brand health" and prepare for possible negative issues that might arise about their products and services. He believes PGP is the first group to use those tools for public health.
Looking At Opioids
PGP's work on opioids started with a contract from the Substance Abuse and Mental Health Services Administration (SAMHSA) through the National Science Foundation. The purpose was simply to better understand the opioid crisis in the United States and, in particular, to find out whether there were differences between affected rural and urban populations. A team of data scientists, public health professionals, and cultural anthropologists needed several months to organize the algorithms that make sense of the sheer volume of data.
Drug use is particularly rich in slang: a specific drug, or a way of using it, can be referred to in multiple ways in different towns and social groups. Traditional media often use clinical terms, Twitter its own shorthand, and all of that has to be structured and integrated "so that it isn't just spitting out data that is gobbledygook and of no use to anyone," says Smyser.
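The structuring Smyser describes can be pictured as a normalization step: mapping the many street and clinical names for a drug onto one canonical label before counting mentions. A toy sketch follows; the lexicon here is a tiny, hypothetical stand-in for PGP's actual, far larger vocabulary:

```python
# Toy lexicon mapping slang and clinical variants to one canonical
# drug name. Illustrative only; a real system would hold thousands
# of regional and evolving terms.
LEXICON = {
    "china white": "fentanyl",
    "fent": "fentanyl",
    "oxy": "oxycodone",
    "hillbilly heroin": "oxycodone",
}

def tally_mentions(posts):
    """Count how many posts mention each canonical drug name."""
    counts = {}
    for post in posts:
        text = post.lower()
        # A post counts once per drug, however many variants it uses.
        matched = {canon for term, canon in LEXICON.items() if term in text}
        for canon in matched:
            counts[canon] = counts.get(canon, 0) + 1
    return counts

posts = [
    "heard fent is everywhere now",
    "they sell oxy online",
    "China White seized at the border",
]
print(tally_mentions(posts))  # {'fentanyl': 2, 'oxycodone': 1}
```

Only after mentions are collapsed onto canonical terms can they be meaningfully tabulated into the word clouds and maps described below.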
The data they gather is both cumulative and real-time, tabulated and visually represented in constantly morphing hashtag and word clouds, where the color and size of a word indicate the source and volume of its use.
Popular hashtags on Twitter relating to the opioid crisis.
(Credit: The Public Good Projects)
The visual presentation of data helps to understand what different groups are saying and how they are saying it. For example, compare the hashtag and word clouds. Younger people are more likely to use the hashtags of Twitter, while older people are more likely to use older forms of media, and that is reflected in their concerns and language in those clouds.
Popular words relating to the opioid crisis gathered from older forms of media.
(Credit: The Public Good Projects)
A Ping map shows the origin of messages, while a Spidey map shows the network of how messages are being forwarded and shared among people. These sets of data can be overlaid with zip code, census, and socioeconomic data to provide an even deeper sense of who is saying what. And when integrated together, they provide clues to topics and language that might best engage people in each niche.
A Ping map showing the origin of messages around the opioid crisis.
(Credit: The Public Good Projects)
Opioids Speak
One thing that quickly became apparent to PGP in monitoring the media is that "over half of the information that the American public is exposed to about opioids is a very distant policy debate," says Smyser.
It is political pronouncements in DC, the legal system going after pharmaceutical companies that promoted prescription opioids for pain relief (and more), or mandatory prison terms for offenders. Relatively little is about treatment, the impact on families and communities, and what people can do themselves. That is particularly important in light of another key finding: residents of "Trump-land," the rural areas that supported the president and are being ravaged by opioids, talk about the problem and solutions very differently from urban areas.
"In rural communities there is usually a huge emphasis on self-reliance, and we take care of each other; that's why we enjoy living here. We are a neighborhood, we come together and we fix our own problems," according to Smyser.
In contrast, urban communities tend to be more transient, less likely to live in multigenerational households and neighborhoods, and look to formal institutions rather than themselves for solutions. "The message that we're sending people is one where there is really no role whatsoever for self-efficacy...we're giving them nothing to do" to help solve the problem themselves, says Smyser. "In fact, I could argue it is reducing self-efficacy."
The opioid crisis is complex and improving the situation will be too. Smyser believes a top-down policing approach alone will not work; it is better to provide front-line public health officers at the state and local level with more and current intelligence so they can respond in their communities.
"I think that would be enormously impactful. But right now, we just don't have that service." SAMHSA declined multiple requests to discuss this project paid for with federal money. A spokesman concluded with: "That project occurred under the previous administration, and we did not have a direct relationship with PGP. As a result, I am unable to comment on the project."
The Milken Institute Center for Public Health, a think tank that is working to find solutions to the opioid epidemic, had an upbeat response. Director Sabrina Spitaletta said, "PGP's work to provide real-time data that monitors topics of high concern in public health has been very helpful to many of the front-line organizations working to combat this crisis."
Bad Actors Getting Your Health Data Is the FBI’s Latest Worry
In February 2015, the health insurer Anthem revealed that criminal hackers had gained access to the company's servers, exposing the personal information of nearly 79 million patients. It's the largest known healthcare breach in history.
That year, the data of millions more would be compromised in one cyberattack after another on American insurers and other healthcare organizations. In fact, for the past several years, the number of reported data breaches has increased each year, from 199 in 2010 to 344 in 2017, according to a September 2018 analysis in the Journal of the American Medical Association.
The FBI's Edward You sees this as a worrying trend. He says hackers aren't just interested in your social security or credit card number. They're increasingly interested in stealing your medical information. Hackers can currently use this information to make fake identities, file fraudulent insurance claims, and order and sell expensive drugs and medical equipment. But beyond that, a new kind of cybersecurity threat is around the corner.
Mr. You and others worry that the vast amounts of healthcare data being generated for precision medicine efforts could leave the U.S. vulnerable to cyber and biological attacks. In the wrong hands, this data could be used to exploit or extort an individual, discriminate against certain groups of people, make targeted bioweapons, or give another country an economic advantage.
Precision medicine, of course, is the idea that medical treatments can be tailored to individuals based on their genetics, environment, lifestyle or other traits. But to do that requires collecting and analyzing huge quantities of health data from diverse populations. One research effort, called All of Us, launched by the U.S. National Institutes of Health last year, aims to collect genomic and other healthcare data from one million participants with the goal of advancing personalized medical care.
Other initiatives are underway by academic institutions and healthcare organizations. Electronic medical records, genetic tests, wearable health trackers, mobile apps, and social media are all sources of valuable healthcare data that a bad actor could potentially use to learn more about an individual or group of people.
"When you aggregate all of that data together, that becomes a very powerful profile of who you are," Mr. You says.
A supervisory special agent in the biological countermeasures unit within the FBI's weapons of mass destruction directorate, Mr. You has the job of imagining worst-case bioterror scenarios and figuring out how to prevent and prepare for them.
That used to mean focusing on threats like anthrax, Ebola, and smallpox—pathogens that could be used to intentionally infect people—"basically the dangerous bugs," as he puts it. In recent years, advances in gene editing and synthetic biology have given rise to fears that rogue, or even well-intentioned, scientists could create a virulent virus that's intentionally, or unintentionally, released outside the lab.
While Mr. You is still tracking those threats, he's been traveling around the country talking to scientists, lawyers, software engineers, cybersecurity professionals, government officials, and CEOs about new security threats—those posed by genetic and other biological data.
Emerging threats
Mr. You says one possible situation he can imagine is the potential for nefarious actors to use an individual's sensitive medical information to extort or blackmail that person.
"If a foreign source, especially a criminal one, has your biological information, then they might have some particular insights into what your future medical needs might be and exploit that," he says. For instance, "what happens if you have a singular medical condition and an outside entity says they have a treatment for your condition?" You could get talked into paying a huge sum of money for a treatment that ends up being bogus.
Or what if hackers got a hold of a politician or high-profile CEO's health records? Say that person had a disease-causing genetic mutation that could affect their ability to carry out their job in the future and hackers threatened to expose that information. These scenarios may seem far-fetched, but Mr. You thinks they're becoming increasingly plausible.
On a wider scale, Kavita Berger, a scientist at Gryphon Scientific, a Washington, D.C.-area life sciences consulting firm, worries that data from different populations could be used to discriminate against certain groups of people, like minorities and immigrants.
For instance, the advocacy group Human Rights Watch in 2017 flagged a concerning trend in China's Xinjiang region, an area with a history of government repression. Police there had purchased 12 DNA sequencers and were collecting and cataloging DNA samples from residents to build a national database.
"The concern is that this particular province has a huge population of the Muslim minority in China," Ms. Berger says. "Now they have a really huge database of genetic sequences. You have to ask, why does a police station need 12 next-generation sequencers?"
Also alarming is the potential that large amounts of data from different groups of people could lead to customized bioweapons if that data ends up in the wrong hands.
Eleonore Pauwels, a research fellow on emerging cybertechnologies at United Nations University's Centre for Policy Research, says new insights gained from genomic and other data will give scientists a better understanding of how diseases occur and why certain people are more susceptible to certain diseases.
"As you get more and more knowledge about the genomic picture and how the microbiome and the immune system of different populations function, you could get a much deeper understanding about how you could target different populations for treatment but also how you could eventually target them with different forms of bioagents," Ms. Pauwels says.
Economic competitiveness
Another reason hackers might want to gain access to large genomic and other healthcare datasets is to give their country a leg up economically. Many large cyber-attacks on U.S. healthcare organizations have been tied to Chinese hacking groups.
"It's becoming clear that China is increasingly interested in getting access to massive data sets that come from different countries," Ms. Pauwels says.
A year after U.S. President Barack Obama announced the Precision Medicine Initiative in 2015—later renamed All of Us—China followed suit with a 15-year, $9 billion precision health effort aimed at turning the country into a global leader in genomics.
Chinese genomics companies, too, are expanding their reach outside of Asia. One company, WuXi NextCODE, which has offices in Shanghai, Reykjavik, and Cambridge, Massachusetts, has built an extensive library of genomes from the U.S., China and Iceland, and is now setting its sights on Ireland.
Another Chinese company, BGI, has partnered with Children's Hospital of Philadelphia and Sinai Health System in Toronto, and has also formed a collaboration with the Smithsonian Institution to sequence all species on the planet. BGI has built its own advanced genomic sequencing machines to compete with U.S.-based Illumina.
Mr. You says having access to all this data could lead to major breakthroughs in healthcare, such as new blockbuster drugs. "Whoever has the largest, most diverse dataset is truly going to win the day and come up with something very profitable," he says.
Some direct-to-consumer genetic testing companies with offices in the U.S., like Dante Labs, also use BGI to process customers' DNA.
Experts worry that China could race ahead of the U.S. in precision medicine because of Chinese laws governing data sharing. Currently, China prohibits the export of genetic data without explicit permission from the government. Mr. You says this creates an asymmetry in data sharing between the U.S. and China.
"This is a biological space race and we just haven't woken up to the fact that we're in this race," he said in January at an American Society for Microbiology conference in Washington, D.C. "We don't have access to their data. There is absolutely no reciprocity."
Protecting your data
While Mr. You has been stressing the importance of data security to anyone who will listen, the National Academies of Sciences, Engineering, and Medicine, which makes scientific and policy recommendations on issues of national importance, has commissioned a study on "safeguarding the bioeconomy."
In the meantime, Ms. Berger says organizations that deal with people's health data should assess their security risks and identify potential vulnerabilities in their systems.
As for what individuals can do to protect themselves, she urges people to think about the different ways they're sharing healthcare data—such as via mobile health apps and wearables.
"Ask yourself, what's the benefit of sharing this? What are the potential consequences of sharing this?" she says.
Mr. You also cautions people to think twice before taking consumer DNA tests. They may seem harmless, he says, but at the end of the day, most people don't know where their genetic information is going. "If your genetic sequence is taken, once it's gone, it's gone. There's nothing you can do about it."