Researchers Behaving Badly: Known Frauds Are "the Tip of the Iceberg"
Last week, the whistleblowers in the Paolo Macchiarini affair at Sweden's Karolinska Institutet went on the record here to detail the retaliation they suffered for trying to expose a star surgeon's appalling research misconduct.
The whistleblowers had discovered that in six published papers, Macchiarini falsified data, lied about the condition of patients and circumvented ethical approvals. As a result, multiple patients suffered and died. But Karolinska turned a blind eye for years.
Scientific fraud of the type committed by Macchiarini is rare, but studies suggest that it's on the rise. Just this week, for example, Retraction Watch and STAT together broke the news that a Harvard Medical School cardiologist and stem cell researcher, Piero Anversa, falsified data in a whopping 31 papers, which now have to be retracted. Anversa had claimed that he could regenerate heart muscle by injecting bone marrow cells into damaged hearts, a result that no one has been able to duplicate.
A 2009 study published by the Public Library of Science (PLOS) found that about two percent of scientists admitted to committing fabrication, falsification or plagiarism in their work. That's a small number, but up to one third of scientists admit to "questionable research practices" that fall into a gray area between rigorous accuracy and outright fraud.
These dubious practices may include misrepresentations, research bias, and inaccurate interpretations of data. One common questionable research practice entails formulating a hypothesis after the research is done in order to claim a successful premise. Another highly questionable practice that can shape research is ghost-authoring by representatives of the pharmaceutical industry and other for-profit fields. Still another is gifting co-authorship to unqualified but powerful individuals who can advance one's career. Such practices can unfairly bolster a scientist's reputation and increase the likelihood of getting the work published.
The above percentages represent what scientists admit to doing themselves; when they evaluate the practices of their colleagues, the numbers jump dramatically. In a 2012 study published in the Journal of Research in Medical Sciences, researchers estimated that 14 percent of other scientists commit serious misconduct, while up to 72 percent engage in questionable practices. While these are only estimates, the problem is clearly not one of just a few bad apples.
In the PLOS study, Daniele Fanelli says that increasing evidence suggests the known frauds are "just the 'tip of the iceberg,' and that many cases are never discovered" because fraud is extremely hard to detect.
In addition, it's likely that most cases of scientific misconduct go unreported because of the high price of whistleblowing. Those in the Macchiarini case showed extraordinary persistence in their multi-year campaign to stop his deadly trachea implants, while suffering serious damage to their careers. Such heroic efforts to unmask fraud are probably rare.
To make matters worse, there are numerous players in the scientific world who may be complicit in either committing misconduct or covering it up. These include not only primary researchers but co-authors, institutional executives, journal editors, and industry leaders. Essentially everyone wants to be associated with big breakthroughs, and they may overlook scientifically shaky foundations when a major advance is claimed.
Another part of the problem is that it's rare for students in science and medicine to receive an education in ethics. And studies have shown that older, more experienced and possibly jaded researchers are more likely to fudge results than their younger, more idealistic colleagues.
So, given the steep price that individuals and institutions pay for scientific misconduct, what compels them to go down that road in the first place? According to the JRMS study, individuals face intense pressures to publish and to attract grant money in order to secure teaching positions at universities. Once they have acquired positions, the pressure is on to keep the grants and publishing credits coming in order to obtain tenure, be appointed to positions on boards, and recruit flocks of graduate students to assist in research. And not to be underestimated is the human ego.
Paolo Macchiarini is an especially vivid example of a scientist seeking not only fortune, but fame. He liberally (and falsely) claimed powerful politicians and celebrities, even the Pope, as patients or admirers. He may be an extreme example, but we live in an age of celebrity scientists who bring huge amounts of grant money and high prestige to the institutions that employ them.
The media plays a significant role in both glorifying stars and unmasking frauds. In the Macchiarini scandal, the media first lifted him up, as in NBC's laudatory documentary, "A Leap of Faith," which painted him as a kind of miracle-worker, and then brought him down, as in the January 2016 documentary, "The Experiments," which chronicled the agonizing death of one of his patients.
Institutions can also play a crucial role in scientific fraud by putting more emphasis on the number and frequency of papers published than on their quality. The whole course of a scientist's career is profoundly affected by a metric called the h-index, which reflects both how many papers a researcher has published and how often other researchers cite them: an h-index of 20 means the scientist has 20 papers that have each been cited at least 20 times. Raising one's h-index becomes an overriding goal, sometimes eclipsing the kind of patient, time-consuming research that leads to true breakthroughs based on reliable results.
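For readers who haven't encountered the metric, here is a minimal sketch, in Python, of how an h-index is computed from a list of per-paper citation counts; the citation numbers are invented purely for illustration.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h-index of this size
        else:
            break
    return h

# Hypothetical citation counts for nine papers:
print(h_index([48, 30, 22, 15, 9, 7, 4, 2, 1]))  # prints 6
```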
Universities also create a high-pressured environment that encourages scientists to cut corners. They, too, place a heavy emphasis on attracting large monetary grants and accruing fame and prestige. This can lead them, just as it led Karolinska, to protect a star scientist's sloppy or questionable research. According to Dr. Andrew Rosenberg, who is director of the Center for Science and Democracy at the U.S.-based Union of Concerned Scientists, "Karolinska defended its investment in an individual as opposed to the long-term health of the institution. People were dying, and they should have outsourced the investigation from the very beginning."
Having institutions investigate their own practices is a conflict of interest from the get-go, says Rosenberg.
Scientists, universities, and research institutions are also not immune to fads. "Hot" subjects attract grant money and confer prestige, incentivizing scientists to shift their research priorities in a direction that garners more grants. This can mean neglecting the scientist's true area of expertise and interests in favor of a subject that's more likely to attract grant money. In Macchiarini's case, he was allegedly at the forefront of the currently sexy field of regenerative medicine -- a field in which Karolinska was making a huge investment.
The relative scarcity of resources intensifies the already significant pressure on scientists. They may want to publish results rapidly, since they face many competitors for limited grant money, academic positions, students, and influence. The scarcity means that a great many researchers will fail while only a few succeed. Once again, the temptation may be to rush research and to show it in the most positive light possible, even if it means fudging or exaggerating results.
Intense competition can have a perverse effect on researchers, according to a 2007 study in the journal Science and Engineering Ethics. Not only does it place undue pressure on scientists to succeed, it frequently leads them to withhold information from colleagues, undermining a system in which new discoveries build on the previous work of others. Researchers may feel compelled to sit on their results because of the pressure to be the first to publish. The study's authors propose that greater government investment in basic research could ease some of these competitive pressures.
Scientific journals, although they play a part in publishing flawed science, can't be expected to investigate cases of suspected fraud, says the German science blogger Leonid Schneider. Schneider's writings helped to expose the Macchiarini affair.
"They just basically wait for someone to retract problematic papers," he says.
He also notes that, while American scientists can go to the Office of Research Integrity to report misconduct, whistleblowers in Europe have no external authority to whom they can appeal to investigate cases of fraud.
"They have to go to their employer, who has a vested interest in covering up cases of misconduct," he says.
Science is increasingly international, and major studies often involve collaborators from several countries. Schneider suggests there should be an international body, accessible to all researchers, that can investigate suspected fraud.
Ultimately, says Rosenberg, the scientific system must incorporate trust. "You trust co-authors when you write a paper, and peer reviewers at journals trust that scientists at research institutions like Karolinska are acting with integrity."
Without trust, the whole system falls apart. It's the trust of the public, an elusive asset once it has been betrayed, that science depends upon for its very existence. Scientific research is overwhelmingly financed by tax dollars, and the need for the goodwill of the public is more than an abstraction.
The Macchiarini affair raises a profound question of trust and responsibility: Should multiple co-authors be held responsible for a lead author's misconduct?
Karolinska apparently believes so. When the institution at last owned up to the scandal, it vindictively found Karl-Henrik Grinnemo, one of the whistleblowers, guilty of scientific misconduct as well. It also designated two other whistleblowers as "blameworthy" for their roles as co-authors of the papers on which Macchiarini was the lead author.
As a result, the whistleblowers' reputations and employment prospects have become collateral damage. Accusations of research misconduct can be a career killer. Research grants dry up, employment opportunities evaporate, publishing becomes next to impossible, and collaborators vanish into thin air.
Grinnemo contends that co-authors should only be responsible for their discrete contributions, not for the data supplied by others.
"Different aspects of a paper are highly specialized," he says, "and that's why you have multiple authors. You cannot go through every single bit of data because you don't understand all the parts of the article."
This is especially true in multidisciplinary, translational research, where there are sometimes 20 or more authors. "You have to trust co-authors, and if you find something wrong you have to notify all co-authors. But you couldn't go through everything or it would take years to publish an article," says Grinnemo.
Though the pressures facing scientists are very real, the problem of misconduct is not inevitable. Along with increased support from governments and industry, a change in academic culture that emphasizes quality over quantity of published studies could help encourage meritorious research.
But beyond that, trust will always play a role when numerous specialists unite to achieve a common goal: the accumulation of knowledge that will promote human health, wealth, and well-being.
[Correction: An earlier version of this story mistakenly credited The New York Times with breaking the news of the Anversa retractions, rather than Retraction Watch and STAT, which jointly published the exclusive on October 14th. The piece in the Times ran on October 15th. We regret the error.]
Can Radical Transparency Overcome Resistance to COVID-19 Vaccines?
When historians look back on the COVID-19 pandemic, they may mark November 9, 2020 as the day the tide began to turn. That's when the New York-based pharmaceutical giant Pfizer announced that clinical trials showed its experimental vaccine, developed with the German firm BioNTech, to be 90 percent effective in preventing the disease.
A week later, Massachusetts biotech startup Moderna declared its vaccine to be 95 percent effective. By early December, Great Britain had begun mass inoculations, followed—once the Food and Drug Administration gave the thumbs-up—by the United States. In this scenario, the worst global health crisis in a century was on the cusp of resolution.
Yet future chroniclers may instead peg November 9 as the day false hope dawned. That could happen if serious safety issues, undetected so far, arise after millions of doses are administered. Experts consider it unlikely, however, that such problems alone (as opposed to the panic they might spark) would affect enough people to thwart a victory over the coronavirus. A more immediate obstacle is vaccine hesitancy—the prospect that much of the populace will refuse to roll up their sleeves.
To achieve "herd immunity" for COVID-19 (the point at which a vaccine reduces transmission rates enough to protect those who can't or won't take it, or for whom it doesn't work), epidemiologists estimate that up to 85 percent of the population will have to be vaccinated. Alarmingly, polls suggest that 40 to 50 percent of Americans intend to decline, judging the risks to be more worrisome than those posed by the coronavirus itself.
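The 85 percent figure is in line with the textbook approximation for a herd immunity threshold. The rough sketch below is illustrative only; the reproduction number and efficacy values are assumptions for the example, not figures reported in this story.

```python
# A minimal sketch of the classic herd immunity approximation:
# the share of the population that must be immune is 1 - 1/R0,
# scaled up when the vaccine is less than 100 percent effective.
def herd_immunity_coverage(r0: float, vaccine_efficacy: float) -> float:
    """Fraction of the population that would need to be vaccinated."""
    threshold = 1 - 1 / r0  # share that must be immune to block spread
    return threshold / vaccine_efficacy

# Illustrative assumptions only: a reproduction number of 4 and a 90 percent
# effective vaccine imply that roughly 83 percent of people would need the shot.
print(round(herd_immunity_coverage(4, 0.90), 2))  # 0.83
```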
COVID vaccine skeptics occupy various positions on a spectrum of doubt. Some are committed anti-vaxxers, or devotees of conspiracy theories that view the pandemic as a hoax. Others belong to minority groups that have historically been used as guinea pigs in unethical medical research (for horrific examples, Google "Tuskegee syphilis experiment" or "Henrietta Lacks"). Still others simply mistrust Big Pharma and/or Big Government. A common fear is that the scramble to find a vaccine—intensified by partisan and profit motives—has led to corner-cutting in the testing and approval process. "They really rushed," an Iowa trucker told The Washington Post. "I'll probably wait a couple of months after they start to see how everyone else is handling it."
The consensus among scientists, by contrast, is that the process has been rigorous enough, given the exigency of the situation, that the public can feel reasonably confident in any vaccine that has earned the imprimatur of the FDA. For those of us who share that assessment, finding ways to reassure the hesitant-but-persuadable is an urgent matter.
Vax-positive public health messaging is one obvious tactic, but a growing number of experts say it's not enough. They prescribe a regimen of radical transparency throughout the system that regulates research—in particular, regarding the secretive panels that oversee vaccine trials.
The Crucial Role of the Little-Known Panels
Like other large clinical trials involving potentially high-demand or controversial products, studies of COVID-19 vaccines in most countries are supervised by groups of independent observers. Known in the United States as data safety and monitoring boards (DSMBs), and elsewhere as data monitoring committees, these panels consist of scientists, clinicians, statisticians, and other authorities with no ties to the sponsor of the study.
The six trials funded by the federal program known as Operation Warp Speed (including those of newly approved Moderna and frontrunner AstraZeneca) share a DSMB, whose members are selected by the National Institutes of Health; other companies (including Pfizer) appoint their own. The panel's job is to monitor the safety and efficacy of a treatment while the trial is ongoing, and to ensure that data is being collected and analyzed correctly.
Vaccine studies are "double-blinded," which means neither the participants nor the doctors running the trial know who's getting the real thing and who's getting a placebo. But the DSMB can access that information if a study volunteer has what might be a serious side effect—and if the participant was in the vaccine group, the board can ask that the trial be paused for further investigation.
The DSMB also checks for efficacy at pre-determined intervals. If it finds that the vaccine group and the placebo group are getting sick at similar rates, the panel can recommend stopping the trial due to "futility." And if the results look overwhelmingly positive, the DSMB can recommend that the study sponsor apply for FDA approval before the scheduled end of the trial, in order to hurry the product to market.
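As a rough illustration of the efficacy figure a DSMB examines, trial efficacy is commonly estimated as one minus the ratio of attack rates in the vaccine and placebo arms. The case counts in this sketch are invented for the example and do not come from any actual interim analysis.

```python
def vaccine_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo):
    """Point estimate: 1 - (attack rate in vaccine arm / attack rate in placebo arm)."""
    attack_vaccine = cases_vaccine / n_vaccine
    attack_placebo = cases_placebo / n_placebo
    return 1 - attack_vaccine / attack_placebo

# Invented case counts: 8 infections among 20,000 vaccinated volunteers
# versus 80 among 20,000 placebo recipients works out to 90 percent efficacy.
print(round(vaccine_efficacy(8, 20_000, 80, 20_000), 3))  # 0.9
```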
With this kind of inside dope and high-level influence, DSMBs could easily become targets for outside pressure. That's why, since the 1980s, their membership has typically been kept secret.
During the early days of the AIDS crisis, researchers working on HIV drugs feared for the safety of the experts on their boards. "They didn't want them to be besieged and harassed by members of the community," explains Susan Ellenberg, a professor of biostatistics, medical ethics and health policy at the University of Pennsylvania, and co-author of Data Monitoring Committees in Clinical Trials, the DSMB bible. "You can understand why people would very much want to know how things were looking in a given trial. They wanted to save their own lives; they wanted to save their friends' lives." Ellenberg, who was founding director of the biostatistics branch of the AIDS division at the National Institute of Allergy and Infectious Diseases (NIAID), helped shape a range of policies designed to ensure that DSMBs made decisions based on data and nothing else.
Confidentiality also shields DSMB members from badgering by patient advocacy groups, who might urge that a drug be presented for approval before trial results are conclusive, or by profit-hungry investors. "It prevents people from trying to pry out information to get an edge in the stock market," says Art Caplan, a bioethicist at New York University.
Yet the COVID crisis has spurred calls for DSMBs to come out of the shadows. One triggering event came in March 2020, when the FDA granted an emergency use authorization for hydroxychloroquine as a COVID-19 treatment—a therapy that President Donald J. Trump touted, despite scant evidence for its efficacy. (The authorization was revoked in June.) If the agency could bow to political pressure on that drug, critics warned, it might do so with vaccines as well. In the end, that didn't happen; the Pfizer authorization was issued well after Election Day, despite Trump's goading, and most experts agree that it was based on solid science. Still, public suspicion lingers.
Another shock came in September, after British-based AstraZeneca announced it was pausing its vaccine trial globally due to a "suspected adverse reaction" in a volunteer. The company shared no details with the press. Instead, AstraZeneca's CEO divulged them in a private call with J.P. Morgan investors the next day, confirming that the volunteer was suffering from transverse myelitis, a rare and serious spinal inflammation—and that the study had also been halted in July, when another volunteer displayed neurological symptoms. STAT News broke the story after talking to tipsters.
Although both illnesses were found to be unrelated to the vaccine, and the trial was restarted, the incident had a paradoxical effect: while it confirmed for experts that the oversight system was working, AstraZeneca's initial lack of candor added to many laypeople's sense that it wasn't. "If you were seeking to undermine trust, that's kind of how you would go about doing it," says Charles Weijer, a bioethicist at Western University in Ontario, who has helped develop clinical trial guidelines for the World Health Organization.
Both Caplan and Weijer have served on many DSMBs; they believe the boards are generally trustworthy, and that those overseeing COVID vaccine trials are performing their jobs well. But the secrecy surrounding these groups, they and others argue, has become counterproductive. Shining a light on the statistical sausage-makers would help dispel doubts about the finished product.
"I'm not suggesting that any of these companies are doing things unethically," Weijer explains. "But the circumstances of a global pandemic are sufficiently challenging that perhaps they ought to be doing some things differently. I believe it would be trust-producing for data monitoring committees to be more forthcoming than usual."
Building Trust: More Transparency
Just how forthcoming is a matter of debate. Caplan suggests that each COVID vaccine DSMB reveal the name of its chair; that would enable the scientific community, as well as the media and the general public, to get a sense of the integrity and qualifications of the board as a whole while preserving the anonymity of the other members.
Indeed, when Operation Warp Speed's DSMB chair, Richard Whitley, was outed through a website slip-up, many observers applauded his selection for the role; a professor of pediatrics, microbiology, medicine and neurosurgery at the University of Alabama at Birmingham, he is "an exceptionally experienced and qualified individual," Weijer says. (Reporters with ProPublica later identified two other members: Susan Ellenberg and immunologist William Makgoba, known for his work on the South African AIDS Vaccine Initiative.)
Caplan would also like to see more details of the protocols DSMBs are using to make decisions, such as the statistical threshold for efficacy that would lead them to seek approval from the FDA. And he wishes the NIH would spell out specific responsibilities for these monitoring boards. "They don't really have clear, government-mandated charters," he notes. For example, there's no requirement that DSMBs include an ethicist or patient advocate—both of which Caplan considers essential for vaccine trials. "Rough guidelines," he says, "would be useful."
Weijer, for his part, thinks DSMBs should disclose all their members. "When you only disclose the chair, you leave questions unanswered," he says. "What expertise do [the others] bring to the table? Are they similarly free of relevant conflicts of interest? And it doesn't answer the question that will be foremost on many people's minds: are these people in the pocket of pharma?"
Weijer and Caplan both want to see greater transparency around the trial results themselves. Because the FDA approved the Pfizer and Moderna vaccines with emergency use authorizations rather than full licensure, which requires more extensive safety testing, these products reached the market without the usual paper trail of peer-reviewed publications. The same will likely be true of any future COVID vaccines that the agency greenlights. To add another level of scrutiny, both ethicists suggest, each company should publicly release its data at the end of a trial. "That offers the potential for academic groups to go in and do an analysis," Weijer explains, "to verify the claims about the safety and efficacy of the vaccine." The point, he says, is not only to ensure that the approval was justified, but to provide evidence to counter skeptics' qualms.
Caplan may differ on some of the details, but he endorses the premise. "It's all a matter of trust," he says. "You're always watching that, because a vaccine is only as good as the number of people who take it."
Scientists Attempt to Make Human Cells Resistant to Coronaviruses and Ebola
Under the electron microscope, the Ebola particles looked like tiny round bubbles floating inside human cells. Except these particles couldn't break free of their confinement.
They were trapped inside their bubbles, unable to release their RNA into the human cells to start replicating. These cells stopped the Ebola infection on their own, without any medications—albeit in a petri dish in the lab of immunologist Adam Lacy-Hulbert, who studies how cells fight infections at the Benaroya Research Institute in Seattle, Washington.
These weren't just any ordinary human cells. They had a specific gene turned on—namely CD74, which typically wouldn't be on. Lacy-Hulbert's team was experimenting with turning various genes on and off to see what made cells fight viral infections better. One particular form of the CD74 gene did the trick. Normally, the Ebola particles would use the cells' own proteases—enzymes that are often called "molecular scissors" because they slice proteins—to cut the bubbles open. But CD74 produced a protein that blocked the scissors from cutting the bubbles, leaving Ebola trapped.
"When that gene turns on, it makes the protein that interferes with Ebola replication," Lacy-Hulbert says. "The protein binds to those molecular scissors and stops them from working." Even better, the protein interfered with coronaviruses too, including SARS-CoV-2, as the team published in the journal Science.
This raises the question: If one can switch on cells' viral resistance in a lab, can the same be done in a human body, so that we can better fight Ebola, coronaviruses and other viral scourges?
Recent research indeed shows that our ability to fight viral infections is written in our genes. Genetic variability is at least one reason why some coronavirus-infected people develop no symptoms while others stay on ventilators for weeks—often because an aberrant immune response goes into overdrive in its effort to kill the pathogen. But if cells activate certain genes early in the infection, they might stop viruses from replicating before the immune system spirals out of control.
"If my father who is 70 years old tests positive, I would recommend he takes interferon as early as possible."
When we talk about fighting infections, we tend to think of highly specialized immune cells—B-cells that release antibodies and T-cells that stimulate inflammatory responses, says Lacy-Hulbert. But all the other cells in the body can fight infections too, by different means. When cells detect a pathogen, they release interferons—small protein molecules so named because they set off a genetic chain reaction that interferes with viral replication. These molecules act as alarm signals to the cells around them; the neighboring cells relay the signals internally and switch on genes responsible for cellular defenses.
"There are at least 300 to 400 genes that are stimulated by type I interferons," says professor Jean-Laurent Casanova at Rockefeller University.
Scientists don't yet know exactly what all of these genes do, but they change the molecular behavior of the cells. "The cells go into a dramatic change and start producing hundreds of proteins that interfere with viral replication on the inside," explains Qian Zhang, a researcher at Casanova's lab. "Some block the proteins the virus needs and some physically tether the virus."
Some cells produce only a small amount of interferon, enough to alert their neighbors. Others, such as macrophages and monocytes, whose job is to detect foreign invaders, produce a lot, injecting interferons into the blood to sound the alarm throughout the body. "They are professional cells so their jobs [are] to detect a viral or bacterial infection," Zhang explains.
People with impaired interferon responses are more vulnerable to infections, including influenza and coronaviruses. In two recent studies published in the journal Science, Casanova, Zhang and their colleagues found that patients who lacked a certain type of interferon developed more severe Covid-19, and some died from it. The team ran a genetic comparison of blood samples from patients hospitalized with severe coronavirus infections against samples from people with asymptomatic infections.
They found that people with severe disease carried rare variants in 13 genes involved in the interferon response. More than three percent of them had mutations that left one of these genes non-functional. And over ten percent had an autoimmune condition in which misguided antibodies neutralized their own interferons, dampening their bodies' defenses—these patients were predominantly men. The discoveries help explain why some young and seemingly healthy individuals require life support while others have mild symptoms or none. The findings also point to ways of stimulating cellular resistance.
A New Frontier in the Making
The idea of making human cells genetically resistant to infections—and possibly other stressors like cancer or aging—has been considered before. It is the concept behind Genome Project-write (GP-write), which aims to create "ultra-safe" versions of human cells that resist a variety of pathogens by "recoding," or rewriting, the cells' genes.
To build proteins, cells read DNA in three-base units called codons, each of which specifies an amino acid—the building blocks of proteins. The genetic code is redundant, though: several different codons can stand for the same amino acid, so if the redundant codons were swapped out of every human gene, cells would still make all of their proteins. Viruses, whose own genes would still contain the eliminated codons, would no longer be able to replicate successfully inside the recoded human cells.
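To make that redundancy concrete, here is a toy sketch of synonymous-codon swapping. The codon assignments shown are standard genetic-code facts, but the snippet covers only a few amino acids and is meant to illustrate the idea of recoding, not GP-write's actual method.

```python
# The genetic code is degenerate: several DNA codons encode the same amino acid.
# Swapping a codon for a synonym changes the gene's spelling but not its protein.
SYNONYMOUS_SWAPS = {
    "TTA": "CTG",  # both encode leucine
    "TTG": "CTG",  # leucine
    "AGA": "CGT",  # both encode arginine
    "AGG": "CGT",  # arginine
}

def recode(dna: str) -> str:
    """Replace targeted codons with synonymous ones, one codon at a time."""
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return "".join(SYNONYMOUS_SWAPS.get(codon, codon) for codon in codons)

# Toy gene fragment: the recoded version spells the same protein but no
# longer uses TTA, TTG, AGA or AGG anywhere.
print(recode("ATGTTAAGAGGTTAA"))  # ATGCTGCGTGGTTAA
```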
In 2016, researchers associated with GP-write designed a version of the Escherichia coli genome that uses only 57 of the 64 codons. Recoding the genes in all human cells would be far harder, but some recoded cells could be transplanted into the body, says Harvard Medical School geneticist George Church, a core founding member of GP-write.
"You can recode a subset of the body, such as all of your blood," he says. "You can also grow an organ inside a recoded pig and transplant it."
Church adds that these methods are still at too early a stage to help with this pandemic.
LeapsMag exclusively interviewed Church in 2019 about his progress with DNA recoding.
The Push for Clinical Trials
In the meantime, interferons may prove an easier medicine. Lacy-Hulbert thinks that interferon gamma might play a role in activating the CD74 gene, whose protein gums up the molecular scissors. There may also be other ways to activate that gene. "So we are now thinking, can we develop a drug that mimics that actual activity?" he says.
Some interferons are already manufactured and used to treat certain diseases, including multiple sclerosis. Theoretically, nothing prevents doctors from prescribing interferons to Covid patients, but it must be done in the early stages of infection—to stimulate the genes that trigger cellular defenses before the virus invades too many cells and before the immune system mobilizes its big guns.
"If my father who is 70 years old tests positive, I would recommend he takes interferon as early as possible," says Zhang. But to make it a mainstream practice, doctors need clear prescription guidelines. "What would really help doctors make these decisions is clinical trials," says Casanova, so that such guidelines can be established. "We are now starting to push for clinical trials," he adds.
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.