Researchers Behaving Badly: Known Frauds Are "the Tip of the Iceberg"
Last week, the whistleblowers in the Paolo Macchiarini affair at Sweden's Karolinska Institutet went on the record here to detail the retaliation they suffered for trying to expose a star surgeon's appalling research misconduct.
The whistleblowers had discovered that in six published papers, Macchiarini falsified data, lied about the condition of patients and circumvented ethical approvals. As a result, multiple patients suffered and died. But Karolinska turned a blind eye for years.
Scientific fraud of the type committed by Macchiarini is rare, but studies suggest that it's on the rise. Just this week, for example, Retraction Watch and STAT together broke the news that a Harvard Medical School cardiologist and stem cell researcher, Piero Anversa, falsified data in a whopping 31 papers, which now have to be retracted. Anversa had claimed that he could regenerate heart muscle by injecting bone marrow cells into damaged hearts, a result that no one has been able to duplicate.
A 2009 study published in the Public Library of Science (PLOS) found that about two percent of scientists admitted to committing fabrication, falsification or plagiarism in their work. That's a small number, but up to one third of scientists admit to committing "questionable research practices" that fall into a gray area between rigorous accuracy and outright fraud.
These dubious practices may include misrepresentations, research bias, and inaccurate interpretations of data. One common questionable research practice entails formulating a hypothesis after the research is done in order to claim a successful premise. Another highly questionable practice that can shape research is ghost-authoring by representatives of the pharmaceutical industry and other for-profit fields. Still another is gifting co-authorship to unqualified but powerful individuals who can advance one's career. Such practices can unfairly bolster a scientist's reputation and increase the likelihood of getting the work published.
The above percentages represent what scientists admit to doing themselves; when they evaluate the practices of their colleagues, the numbers jump dramatically. In a 2012 study published in the Journal of Research in Medical Sciences, researchers estimated that 14 percent of other scientists commit serious misconduct, while up to 72 percent engage in questionable practices. While these are only estimates, the problem is clearly not one of just a few bad apples.
In the PLOS study, Daniele Fanelli says that increasing evidence suggests the known frauds are "just the 'tip of the iceberg,' and that many cases are never discovered" because fraud is extremely hard to detect.
In addition, it's likely that most cases of scientific misconduct go unreported because of the high price of whistleblowing. Those in the Macchiarini case showed extraordinary persistence in their multi-year campaign to stop his deadly trachea implants, while suffering serious damage to their careers. Such heroic efforts to unmask fraud are probably rare.
To make matters worse, there are numerous players in the scientific world who may be complicit in either committing misconduct or covering it up. These include not only primary researchers but co-authors, institutional executives, journal editors, and industry leaders. Essentially everyone wants to be associated with big breakthroughs, and they may overlook scientifically shaky foundations when a major advance is claimed.
Another part of the problem is that it's rare for students in science and medicine to receive an education in ethics. And studies have shown that older, more experienced and possibly jaded researchers are more likely to fudge results than their younger, more idealistic colleagues.
So, given the steep price that individuals and institutions pay for scientific misconduct, what compels them to go down that road in the first place? According to the JRMS study, individuals face intense pressures to publish and to attract grant money in order to secure teaching positions at universities. Once they have acquired positions, the pressure is on to keep the grants and publishing credits coming in order to obtain tenure, be appointed to positions on boards, and recruit flocks of graduate students to assist in research. And not to be underestimated is the human ego.
Paolo Macchiarini is an especially vivid example of a scientist seeking not only fortune, but fame. He liberally (and falsely) claimed powerful politicians and celebrities, even the Pope, as patients or admirers. He may be an extreme example, but we live in an age of celebrity scientists who bring huge amounts of grant money and high prestige to the institutions that employ them.
The media plays a significant role in both glorifying stars and unmasking frauds. In the Macchiarini scandal, the media first lifted him up, as in NBC's laudatory documentary, "A Leap of Faith," which painted him as a kind of miracle-worker, and then brought him down, as in the January 2016 documentary, "The Experiments," which chronicled the agonizing death of one of his patients.
Institutions can also play a crucial role in scientific fraud by putting more emphasis on the number and frequency of papers published than on their quality. The whole course of a scientist's career is profoundly affected by a metric called the h-index, which captures both productivity and impact: it is the largest number h such that a researcher has published h papers that have each been cited at least h times. Raising one's h-index becomes an overriding goal, sometimes eclipsing the kind of patient, time-consuming research that leads to true breakthroughs based on reliable results.
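For readers curious about the mechanics, the metric is simple enough to compute in a few lines. This is a generic illustration of the standard definition, not code from any study cited here:

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    cited at least h times each."""
    h = 0
    # Walk the citation counts from most-cited to least-cited paper.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times yield an h-index of 4:
# four papers have at least four citations each, but not five with five.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

The definition explains the incentive the article describes: a researcher raises the number only by adding more papers that each attract more citations, which rewards volume and visibility over depth.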
Universities also create a high-pressured environment that encourages scientists to cut corners. They, too, place a heavy emphasis on attracting large monetary grants and accruing fame and prestige. This can lead them, just as it led Karolinska, to protect a star scientist's sloppy or questionable research. According to Dr. Andrew Rosenberg, who is director of the Center for Science and Democracy at the U.S.-based Union of Concerned Scientists, "Karolinska defended its investment in an individual as opposed to the long-term health of the institution. People were dying, and they should have outsourced the investigation from the very beginning."
Having institutions investigate their own practices is a conflict of interest from the get-go, says Rosenberg.
Scientists, universities, and research institutions are also not immune to fads. "Hot" subjects attract grant money and confer prestige, incentivizing scientists to shift their research priorities in a direction that garners more grants. This can mean neglecting the scientist's true area of expertise and interests in favor of a subject that's more likely to attract grant money. In Macchiarini's case, he was allegedly at the forefront of the currently sexy field of regenerative medicine -- a field in which Karolinska was making a huge investment.
The relative scarcity of resources intensifies the already significant pressure on scientists. They may want to publish results rapidly, since they face many competitors for limited grant money, academic positions, students, and influence. The scarcity means that a great many researchers will fail while only a few succeed. Once again, the temptation may be to rush research and to show it in the most positive light possible, even if it means fudging or exaggerating results.
Intense competition can have a perverse effect on researchers, according to a 2007 study in the journal Science and Engineering Ethics. Not only does it place undue pressure on scientists to succeed, it frequently leads to the withholding of information from colleagues, which undermines a system in which new discoveries build on the previous work of others. Researchers may feel compelled to withhold their results because of the pressure to be the first to publish. The study's authors propose that more investment in basic research from governments could alleviate some of these competitive pressures.
Scientific journals, although they play a part in publishing flawed science, can't be expected to investigate cases of suspected fraud, says the German science blogger Leonid Schneider. Schneider's writings helped to expose the Macchiarini affair.
"They just basically wait for someone to retract problematic papers," he says.
He also notes that, while American scientists can go to the Office of Research Integrity to report misconduct, whistleblowers in Europe have no external authority to whom they can appeal to investigate cases of fraud.
"They have to go to their employer, who has a vested interest in covering up cases of misconduct," he says.
Science is increasingly international, and major studies can include collaborators from several different countries. Schneider suggests there should be an international body, accessible to all researchers, that will investigate suspected fraud.
Ultimately, says Rosenberg, the scientific system must incorporate trust. "You trust co-authors when you write a paper, and peer reviewers at journals trust that scientists at research institutions like Karolinska are acting with integrity."
Without trust, the whole system falls apart. It's the trust of the public, an elusive asset once it has been betrayed, that science depends upon for its very existence. Scientific research is overwhelmingly financed by tax dollars, and the need for the goodwill of the public is more than an abstraction.
The Macchiarini affair raises a profound question of trust and responsibility: Should multiple co-authors be held responsible for a lead author's misconduct?
Karolinska apparently believes so. When the institution at last owned up to the scandal, it vindictively found Karl-Henrik Grinnemo, one of the whistleblowers, guilty of scientific misconduct as well. It also designated two other whistleblowers as "blameworthy" for their roles as co-authors of the papers on which Macchiarini was the lead author.
As a result, the whistleblowers' reputations and employment prospects have become collateral damage. Accusations of research misconduct can be a career killer. Research grants dry up, employment opportunities evaporate, publishing becomes next to impossible, and collaborators vanish into thin air.
Grinnemo contends that co-authors should only be responsible for their discrete contributions, not for the data supplied by others.
"Different aspects of a paper are highly specialized," he says, "and that's why you have multiple authors. You cannot go through every single bit of data because you don't understand all the parts of the article."
This is especially true in multidisciplinary, translational research, where there are sometimes 20 or more authors. "You have to trust co-authors, and if you find something wrong you have to notify all co-authors. But you couldn't go through everything or it would take years to publish an article," says Grinnemo.
Though the pressures facing scientists are very real, the problem of misconduct is not inevitable. Along with increased support from governments and industry, a change in academic culture that emphasizes quality over quantity of published studies could help encourage meritorious research.
But beyond that, trust will always play a role when numerous specialists unite to achieve a common goal: the accumulation of knowledge that will promote human health, wealth, and well-being.
[Correction: An earlier version of this story mistakenly credited The New York Times with breaking the news of the Anversa retractions, rather than Retraction Watch and STAT, which jointly published the exclusive on October 14th. The piece in the Times ran on October 15th. We regret the error.]
A new virus has emerged and stoked fears of another pandemic: monkeypox. Since May 2022, it has been detected in 29 U.S. states, the District of Columbia, and Puerto Rico among international travelers and their close contacts. On a worldwide scale, as of June 30, there have been 5,323 cases in 52 countries.
The good news: An existing vaccine can go a long way toward preventing a catastrophic outbreak. Because monkeypox is a close relative of smallpox, the same vaccine can be used—and it is about 85 percent effective against the virus, according to the World Health Organization (WHO).
Also on the plus side, monkeypox is less contagious than smallpox, causes milder illness, and, compared with COVID-19, produces more telltale signs. Scientists think that a “ring” vaccination strategy, deployed when these signs appear, can help squelch this alarming outbreak.
How it’s transmitted
Monkeypox spreads between people primarily through direct contact with infectious sores, scabs, or bodily fluids. People also can catch it through respiratory secretions during prolonged, face-to-face contact, according to the Centers for Disease Control and Prevention (CDC).
As of June 30, there have been 396 documented monkeypox cases in the U.S., and the CDC has activated its Emergency Operations Center to mobilize additional personnel and resources. The U.S. Department of Health and Human Services is aiming to boost testing capacity and accessibility. No Americans have died from monkeypox during this outbreak but, during the COVID-19 pandemic (February 2020 to date), Africa has documented 12,141 cases and 363 deaths from monkeypox.
A person infected with monkeypox typically shows symptoms—fever and chills, for instance—while contagious, so knowing when to avoid close contact with others makes monkeypox easier to curtail than COVID-19.
Advantages of ring vaccination
For this reason, it’s feasible to vaccinate a “ring” of people around the infected individual rather than inoculating large swaths of the population. Ring vaccination proved effective in curbing the smallpox and Ebola outbreaks. As the monkeypox threat continues to loom, scientists view this as the best vaccine approach.
With many infections, “it normally would make sense to everyone to vaccinate more widely,” says Wesley C. Van Voorhis, a professor and director of the Center for Emerging and Re-emerging Infectious Diseases at the University of Washington School of Medicine in Seattle. However, “in this case, ring vaccination may be sufficient to contain the outbreak and also minimize the rare, but potentially serious side effects of the smallpox/monkeypox vaccine.”
There are two licensed smallpox vaccines in the United States: ACAM2000 (live Vaccinia virus) and JYNNEOS (live, non-replicating virus). ACAM2000, Van Voorhis says, is the old smallpox vaccine that, in rare instances, could spread diffusely within the body and cause heart problems, as well as severe rash in people with eczema or serious infection in immunocompromised patients.
To prevent organ damage, the current recommendation would be to use the JYNNEOS vaccine, says Phyllis Kanki, a professor of health sciences in the division of immunology and infectious diseases at the Harvard T.H. Chan School of Public Health. However, according to a report on the CDC’s website, people with immunocompromising conditions could have a higher risk of getting a severe case of monkeypox, despite being vaccinated, and “might be less likely to mount an effective response after any vaccination, including after JYNNEOS.”
In the late 1960s, the ring vaccination strategy became part of the WHO’s mission to globally eradicate smallpox, with the last known natural case described in Somalia in 1977. Ring vaccination can also refer to how a clinical trial is designed, as was the case in 2015, when this approach was used for researching the benefits of an investigational Ebola vaccine in Guinea, Kanki says.
“Since Monkeypox spreads by close contact and we have an effective vaccine, vaccinating high-risk individuals and their contacts may be a good strategy to limit transmission,” she says, adding that privacy is an important ethical principle that comes into play, as people with monkeypox would need to disclose their close contacts so that they could benefit from ring vaccination.
Rapid identification of cases and contacts—along with their cooperation—is essential for ring vaccination to be effective. Although mass vaccination also may work, the risk of infection to most of the population remains low while supply of the JYNNEOS vaccine is limited, says Stanley Deresinski, a clinical professor of medicine in the Infectious Disease Clinic at Stanford University School of Medicine.
Other strategies for preventing transmission
Ideally, the vaccine should be administered within four days of an exposure, though it's recommended for up to 14 days afterward. The WHO also advocates more widespread vaccination campaigns in the population segment with the most cases so far: men who have sex with men.
The virus appears to be spreading in sexual networks, which differs from what was seen in previously reported outbreaks of monkeypox (outside of Africa), where risk was associated with travel to central or west Africa or various types of contact with individuals or animals from those locales. There is no evidence of transmission by food, but contaminated articles in the environment such as bedding are potential sources of the virus, Deresinski says.
Severe cases of monkeypox can occur, but “transmission of the virus requires close contact,” he says. “There is no evidence of aerosol transmission, as occurs with SARS-CoV-2, although it must be remembered that the smallpox virus, a close relative of monkeypox, was transmitted by aerosol.”
Deresinski points to the fact that in 2003, monkeypox was introduced into the U.S. through imports from Ghana of infected small mammals, such as Gambian giant rats, as pets. They infected prairie dogs, which also were sold as pets and, ultimately, this resulted in 37 confirmed transmissions to humans and 10 probable cases. A CDC investigation identified no cases of human-to-human transmission. Then, in 2021, a traveler flew from Nigeria to Dallas through Atlanta, developing skin lesions several days after arrival. Another CDC investigation yielded 223 contacts, although 85 percent were deemed to be at only minimal risk and the remainder at intermediate risk. No new cases were identified.
How worried should we be?
But how serious of a threat is monkeypox this time around? “Right now, the risk to the general public is very low,” says Scott Roberts, an assistant professor and associate medical director of infection prevention at Yale School of Medicine. “Monkeypox is spread through direct contact with infected skin lesions or through close contact for a prolonged period of time with an infected person. It is much less transmissible than COVID-19.”
The monkeypox incubation period—the time from infection until the onset of symptoms—is typically seven to 14 days but can range from five to 21 days, compared with only three days for the Omicron variant of COVID-19. With such a long incubation, there is a larger window to conduct contact tracing and vaccinate people before symptoms appear, which can prevent infection or lessen the severity.
But symptoms may present atypically or recognition may be delayed. “Ring vaccination works best with 100 percent adherence, and in the absence of a mandate, this is not achievable,” Roberts says.
At the outset of infection, symptoms include fever, chills, and fatigue. Several days later, a rash becomes noticeable, usually beginning on the face and spreading to other parts of the body, he says. The rash starts as flat lesions that raise and develop fluid, similar to manifestations of chickenpox. Once the rash scabs and falls off, a person is no longer contagious.
“It's an uncomfortable infection,” says Van Voorhis, the University of Washington School of Medicine professor. There may be swollen lymph nodes. Sores and rash are often limited to the genitals and areas around the mouth or rectum, suggesting intimate contact as the source of spread.
Symptoms of monkeypox usually last from two to four weeks. The WHO estimates that between 3 and 6 percent of cases are fatal. Although the virus is believed to infect various animal species, including rodents and monkeys in west and central Africa, “the animal reservoir for the virus is unknown,” says Kanki, the Harvard T.H. Chan School of Public Health professor.
Too often, viruses originate in parts of the world that are too poor to grapple with them and may lack the resources to invest in vaccines and treatments. “This disease is endemic in central and west Africa, and it has basically been ignored until it jumped to the north and infected Europeans, Americans, and Canadians,” Van Voorhis says. “We have to do a better job in health care and prevention all over the world. This is the kind of thing that comes back to bite us.”
You are driving along the highway and see an electronic sign that reads: “3,238 traffic deaths this year.” Do you think this reminder of roadside mortality would change how you drive? According to a recent, peer-reviewed study in Science, seeing that sign would make you more likely to crash. That’s ironic, given that the sign’s creators assumed it would make you safer.
The study, led by a pair of economists at the University of Toronto and University of Minnesota, examined seven years of traffic accident data from 880 electric highway sign locations in Texas, which experienced 4,480 fatalities in 2021. For one week of each month, the Texas Department of Transportation posts the latest fatality messages on signs along select traffic corridors as part of a safety campaign. Their logic is simple: Tell people to drive with care by reminding them of the dangers on the road.
But when the researchers looked at the data, they found that the number of crashes increased by 1.52 percent within three miles of these signs when compared with the same locations during the same month in previous years when signs did not show fatality information. That impact is similar to raising the speed limit by four miles per hour or decreasing the number of highway troopers by 10 percent.
The scientists calculated that these messages contributed to 2,600 additional crashes and 16 deaths annually. They also found a social cost, meaning the financial expense borne by society as a whole due to these crashes, of $377 million per year, in Texas alone.
The cause, they argue, is distracted driving. Much like incoming texts or phone calls, these “in-your-face” messages grab your attention and undermine your focus on the road. The signs are particularly distracting and dangerous because, in communicating that many people died doing exactly what you are doing, they cause anxiety. Supporting this hypothesis, the scientists discovered that crashes increase when the signs report higher numbers of deaths. Thus, later in the year, as that total mortality figure goes up, so does the rate of crashes.
That change over time is not simply a function of changing weather, the study’s authors observed. They also found that the increase in car crashes is greatest in more complex road segments, which require greater focus to navigate.
The overall findings represent what behavioral scientists like myself call a “boomerang effect,” meaning an intervention that produces consequences opposite to those intended. Unfortunately, these effects are all too common. Between 1998 and 2004, Congress funded the $1 billion National Youth Anti-Drug Media Campaign, which famously boomeranged. Using professional advertising and public relations firms, the campaign bombarded kids aged 9 to 18 with anti-drug messaging, focused on marijuana, on TV, radio, magazines, and websites. A 2008 study funded by the National Institutes of Health found that children and teens saw these ads two to three times per week. However, more exposure to this advertising increased the likelihood that youth used marijuana. Why? Surveys and interviews suggested that young people who saw the ads got the impression that many of their peers used marijuana. As a result, they became more likely to use the drug themselves.
Boomerang effects happen when those with authority, in government or business, fail to pay attention to the science. These leaders rely on armchair psychology and gut intuitions on what should work, rather than measuring what does work.
To be clear, message campaigns—whether on electronic signs or through advertisements—can have a substantial effect on behavior. Extensive research reveals that people can be influenced by “nudges,” which shape the environment to influence their behavior in a predictable manner. For example, a successful campaign to reduce car accidents involved sending smartphone notifications that helped drivers evaluate their performance after each trip. These messages informed drivers of their personal average and best performance, as measured by accelerometers and gyroscopes. The campaign, which ran over 21 months, significantly reduced accident frequency.
Nudges work best when rigorously tested with small-scale experiments that evaluate their impact. Because behavioral scientists are infrequently consulted in creating these policies, some studies suggest that only 62 percent of such interventions have a statistically significant effect. Other research reveals that up to 15 percent of interventions may backfire.
In the case of roadside mortality signage, the data are damning. The new research based on the Texas signs aligns with several past studies. For instance, research has shown that increasing people’s anxiety causes them to drive worse. Another, a Virginia Tech study in a laboratory setting, found that showing drivers fatality messages increased what psychologists call “cognitive load,” or the amount of information your brain is processing, with emotionally-salient information being especially burdensome and preoccupying, thus causing more distraction.
Nonetheless, Texas, along with at least 28 other states, has pursued mortality messaging campaigns since 2012, without testing them effectively. Behavioral science is critical here: when road signs are tested by people without expertise in how minds work, the results are often counterproductive. For example, the Virginia Tech research looked at road signs that used humor, popular culture, sports, and other nontraditional themes with the goal of provoking an emotional response. When they measured how participants responded to these signs, they noticed greater cognitive activation and attention in the brain. Thus, the researchers decided, the signs worked. But a behavioral scientist would note that increased attention likely contributes to the signs’ failure. As the just-published study in Science makes clear, distracting, emotionally-loaded signs are dangerous to drivers.
But there is good news. First, in most cases, it’s very doable to run an effective small-scale study testing an intervention. States could set up a safety campaign with a few electronic signs in a diversity of settings and evaluate the impact on crashes over three months. Policymakers could ask researchers to track the data as they run ads in a variety of nationally representative markets for a few months and assess their effectiveness. They could also ask behavioral scientists whether their proposals are well designed, whether similar policies have been tried previously in other places, and how these policies have worked in practice.
Everyday citizens can write to and call their elected officials to ask them to make this kind of research a priority before embracing an untested safety campaign. More broadly, you can encourage them to avoid relying on armchair psychology and to test their intuitions before deploying initiatives that might place the public under threat.