Researchers Behaving Badly: Known Frauds Are "the Tip of the Iceberg"
Last week, the whistleblowers in the Paolo Macchiarini affair at Sweden's Karolinska Institutet went on the record here to detail the retaliation they suffered for trying to expose a star surgeon's appalling research misconduct.
The whistleblowers had discovered that in six published papers, Macchiarini falsified data, lied about the condition of patients and circumvented ethical approvals. As a result, multiple patients suffered and died. But Karolinska turned a blind eye for years.
Scientific fraud of the type committed by Macchiarini is rare, but studies suggest that it's on the rise. Just this week, for example, Retraction Watch and STAT together broke the news that a Harvard Medical School cardiologist and stem cell researcher, Piero Anversa, falsified data in a whopping 31 papers, which now have to be retracted. Anversa had claimed that he could regenerate heart muscle by injecting bone marrow cells into damaged hearts, a result that no one has been able to duplicate.
A 2009 study published by the Public Library of Science (PLOS) found that about two percent of scientists admitted to committing fabrication, falsification, or plagiarism in their work. That's a small number, but up to one third of scientists admit to committing "questionable research practices" that fall into a gray area between rigorous accuracy and outright fraud.
These dubious practices may include misrepresentations, research bias, and inaccurate interpretations of data. One common questionable research practice entails formulating a hypothesis after the research is done in order to claim a successful premise. Another highly questionable practice that can shape research is ghost-authoring by representatives of the pharmaceutical industry and other for-profit fields. Still another is gifting co-authorship to unqualified but powerful individuals who can advance one's career. Such practices can unfairly bolster a scientist's reputation and increase the likelihood of getting the work published.
The above percentages represent what scientists admit to doing themselves; when they evaluate the practices of their colleagues, the numbers jump dramatically. In a 2012 study published in the Journal of Research in Medical Sciences, researchers estimated that 14 percent of other scientists commit serious misconduct, while up to 72 percent engage in questionable practices. While these are only estimates, the problem is clearly not one of just a few bad apples.
In the PLOS study, Daniele Fanelli says that increasing evidence suggests the known frauds are "just the 'tip of the iceberg,' and that many cases are never discovered" because fraud is extremely hard to detect.
In addition, it's likely that most cases of scientific misconduct go unreported because of the high price of whistleblowing. Those in the Macchiarini case showed extraordinary persistence in their multi-year campaign to stop his deadly trachea implants, while suffering serious damage to their careers. Such heroic efforts to unmask fraud are probably rare.
To make matters worse, there are numerous players in the scientific world who may be complicit in either committing misconduct or covering it up. These include not only primary researchers but co-authors, institutional executives, journal editors, and industry leaders. Essentially everyone wants to be associated with big breakthroughs, and they may overlook scientifically shaky foundations when a major advance is claimed.
Another part of the problem is that it's rare for students in science and medicine to receive an education in ethics. And studies have shown that older, more experienced and possibly jaded researchers are more likely to fudge results than their younger, more idealistic colleagues.
So, given the steep price that individuals and institutions pay for scientific misconduct, what compels them to go down that road in the first place? According to the JRMS study, individuals face intense pressures to publish and to attract grant money in order to secure teaching positions at universities. Once they have acquired positions, the pressure is on to keep the grants and publishing credits coming in order to obtain tenure, be appointed to positions on boards, and recruit flocks of graduate students to assist in research. And not to be underestimated is the human ego.
Paolo Macchiarini is an especially vivid example of a scientist seeking not only fortune, but fame. He liberally (and falsely) claimed powerful politicians and celebrities, even the Pope, as patients or admirers. He may be an extreme example, but we live in an age of celebrity scientists who bring huge amounts of grant money and high prestige to the institutions that employ them.
The media plays a significant role in both glorifying stars and unmasking frauds. In the Macchiarini scandal, the media first lifted him up, as in NBC's laudatory documentary, "A Leap of Faith," which painted him as a kind of miracle-worker, and then brought him down, as in the January 2016 documentary, "The Experiments," which chronicled the agonizing death of one of his patients.
Institutions can also play a crucial role in scientific fraud by putting more emphasis on the number and frequency of papers published than on their quality. The whole course of a scientist's career is profoundly affected by something called the h-index. This number reflects both how many papers a researcher has published and how often those papers are cited: an h-index of 20 means the researcher has 20 papers that have each been cited at least 20 times. Raising one's h-index becomes an overriding goal, sometimes eclipsing the kind of patient, time-consuming research that leads to true breakthroughs based on reliable results.
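To make the metric concrete, here is a minimal sketch of how an h-index can be computed from a list of per-paper citation counts (the function and the sample numbers are illustrative assumptions, not data from the article):

```python
# Illustrative sketch: a researcher has an h-index of h when h of their
# papers have each been cited at least h times.
def h_index(citations):
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4, and 1 times
print(h_index([10, 8, 5, 4, 1]))  # -> 4
```

The sketch also shows why the metric rewards a steady stream of well-cited papers: adding any single paper, however celebrated, can raise the index by at most one.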
Universities also create a high-pressure environment that encourages scientists to cut corners. They, too, place a heavy emphasis on attracting large monetary grants and accruing fame and prestige. This can lead them, just as it led Karolinska, to protect a star scientist's sloppy or questionable research. According to Dr. Andrew Rosenberg, who is director of the Center for Science and Democracy at the U.S.-based Union of Concerned Scientists, "Karolinska defended its investment in an individual as opposed to the long-term health of the institution. People were dying, and they should have outsourced the investigation from the very beginning."
Having institutions investigate their own practices is a conflict of interest from the get-go, says Rosenberg.
Scientists, universities, and research institutions are also not immune to fads. "Hot" subjects attract grant money and confer prestige, incentivizing scientists to shift their research priorities. This can mean neglecting the scientist's true area of expertise and interests in favor of a subject that's more likely to attract grant money. Macchiarini, for his part, was allegedly at the forefront of the currently sexy field of regenerative medicine, a field in which Karolinska was making a huge investment.
The relative scarcity of resources intensifies the already significant pressure on scientists. They may want to publish results rapidly, since they face many competitors for limited grant money, academic positions, students, and influence. The scarcity means that a great many researchers will fail while only a few succeed. Once again, the temptation may be to rush research and to show it in the most positive light possible, even if it means fudging or exaggerating results.
Intense competition can have a perverse effect on researchers, according to a 2007 study in the journal Science and Engineering Ethics. Not only does it place undue pressure on scientists to succeed, it frequently leads to the withholding of information from colleagues, which undermines a system in which new discoveries build on the previous work of others. Researchers may feel compelled to withhold their results because of the pressure to be the first to publish. The study's authors propose that more investment in basic research from governments could alleviate some of these competitive pressures.
Scientific journals, although they play a part in publishing flawed science, can't be expected to investigate cases of suspected fraud, says the German science blogger Leonid Schneider, whose writings helped expose the Macchiarini affair.
"They just basically wait for someone to retract problematic papers," he says.
He also notes that, while American scientists can go to the Office of Research Integrity to report misconduct, whistleblowers in Europe have no external authority to which they can appeal for an investigation of suspected fraud.
"They have to go to their employer, who has a vested interest in covering up cases of misconduct," he says.
Science is increasingly international, and major studies can include collaborators from several different countries. Schneider therefore suggests there should be an international body, accessible to all researchers, that would investigate suspected fraud.
Ultimately, says Rosenberg, the scientific system must incorporate trust. "You trust co-authors when you write a paper, and peer reviewers at journals trust that scientists at research institutions like Karolinska are acting with integrity."
Without trust, the whole system falls apart. Science depends for its very existence on the trust of the public, an asset that becomes elusive once it has been betrayed. Scientific research is overwhelmingly financed by tax dollars, and the need for the goodwill of the public is more than an abstraction.
The Macchiarini affair raises a profound question of trust and responsibility: Should multiple co-authors be held responsible for a lead author's misconduct?
Karolinska apparently believes so. When the institution at last owned up to the scandal, it vindictively found Karl-Henrik Grinnemo, one of the whistleblowers, guilty of scientific misconduct as well. It also designated two other whistleblowers as "blameworthy" for their roles as co-authors of the papers on which Macchiarini was the lead author.
As a result, the whistleblowers' reputations and employment prospects have become collateral damage. Accusations of research misconduct can be a career killer. Research grants dry up, employment opportunities evaporate, publishing becomes next to impossible, and collaborators vanish into thin air.
Grinnemo contends that co-authors should only be responsible for their discrete contributions, not for the data supplied by others.
"Different aspects of a paper are highly specialized," he says, "and that's why you have multiple authors. You cannot go through every single bit of data because you don't understand all the parts of the article."
This is especially true in multidisciplinary, translational research, where there are sometimes 20 or more authors. "You have to trust co-authors, and if you find something wrong you have to notify all co-authors. But you couldn't go through everything or it would take years to publish an article," says Grinnemo.
Though the pressures facing scientists are very real, the problem of misconduct is not inevitable. Along with increased support from governments and industry, a change in academic culture that emphasizes quality over quantity of published studies could help encourage meritorious research.
But beyond that, trust will always play a role when numerous specialists unite to achieve a common goal: the accumulation of knowledge that will promote human health, wealth, and well-being.
[Correction: An earlier version of this story mistakenly credited The New York Times with breaking the news of the Anversa retractions, rather than Retraction Watch and STAT, which jointly published the exclusive on October 14th. The piece in the Times ran on October 15th. We regret the error.]
In The Fake News Era, Are We Too Gullible? No, Says Cognitive Scientist
One of the oddest political hoaxes of recent times was Pizzagate, in which conspiracy theorists claimed that Hillary Clinton and her 2016 campaign chief ran a child sex ring from the basement of a Washington, DC, pizzeria.
Millions of believers spread the rumor on social media, abetted by Russian bots; one outraged netizen stormed the restaurant with an assault rifle and shot open what he took to be the dungeon door. (It actually led to a computer closet.) Pundits cited the imbroglio as evidence that Americans had lost the ability to tell fake news from the real thing, putting our democracy in peril.
Such fears, however, are nothing new. "For most of history, the concept of widespread credulity has been fundamental to our understanding of society," observes Hugo Mercier in Not Born Yesterday: The Science of Who We Trust and What We Believe (Princeton University Press, 2020). In the fifth century BCE, he points out, the historian Thucydides blamed Athens' defeat by Sparta on a demagogue who hoodwinked the public into supporting idiotic military strategies; Plato extended that argument to condemn democracy itself. Today, atheists and fundamentalists decry one another's gullibility, as do climate-change accepters and deniers. Leftists bemoan the masses' blind acceptance of the "dominant ideology," while conservatives accuse those who do revolt of being duped by cunning agitators.
What's changed, all sides agree, is the speed at which bamboozlement can propagate. In the digital age, it seems, a sucker is born every nanosecond.
The Case Against Credulity
Yet Mercier, a cognitive scientist at the Jean Nicod Institute in Paris, thinks we've got the problem backward. To fight disinformation more effectively, he suggests, humans need to stop believing in one thing above all: our own gullibility. "We don't credulously accept whatever we're told—even when those views are supported by the majority of the population, or by prestigious, charismatic individuals," he writes. "On the contrary, we are skilled at figuring out who to trust and what to believe, and, if anything, we're too hard rather than too easy to influence."
He bases those contentions on a growing body of research in neuropsychiatry, evolutionary psychology, and other fields. Humans, Mercier argues, are hardwired to balance openness with vigilance when assessing communicated information. To gauge a statement's accuracy, we instinctively test it from many angles, including: Does it jibe with what I already believe? Does the speaker share my interests? Has she demonstrated competence in this area? What's her reputation for trustworthiness? And, with more complex assertions: Does the argument make sense?
This process, Mercier says, enables us to learn much more from one another than do other animals, and to communicate in a far more complex way—key to our unparalleled adaptability. But it doesn't always save us from trusting liars or embracing demonstrably false beliefs. To better understand why, leapsmag spoke with the author.
How did you come to write Not Born Yesterday?
In 2010, I collaborated with the cognitive scientist Dan Sperber and some other colleagues on a paper called "Epistemic Vigilance," which laid out the argument that evolutionarily, it would make no sense for humans to be gullible. If you can be easily manipulated and influenced, you're going to be in major trouble. But as I talked to people, I kept encountering resistance. They'd tell me, "No, no, people are influenced by advertising, by political campaigns, by religious leaders." I started doing more research to see if I was wrong, and eventually I had enough to write a book.
With all the talk about "fake news" these days, the topic has gotten a lot more timely.
Yes. But on the whole, I'm skeptical that fake news matters very much. And all the energy we spend fighting it is energy not spent on other pursuits that may be better ways of improving our informational environment. The real challenge, I think, is not how to shut up people who say stupid things on the internet, but how to make it easier for people who say correct things to convince people.
"History shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion."
You start the book with an anecdote about your encounter with a con artist several years ago, who scammed you out of 20 euros. Why did you choose that anecdote?
Although I'm arguing that people aren't generally gullible, I'm not saying we're completely impervious to attempts at tricking us. It's just that we're much better than we think at resisting manipulation. And while there's a risk of trusting someone who doesn't deserve to be trusted, there's also a risk of not trusting someone who could have been trusted. You miss out on someone who could help you, or from whom you might have learned something—including figuring out who to trust.
You argue that in humans, vigilance and open-mindedness evolved hand-in-hand, leading to a set of cognitive mechanisms you call "open vigilance."
There's a common view that people start from a state of being gullible and easy to influence, and get better at rejecting information as they become smarter and more sophisticated. But that's not what really happens. It's much harder to get apes than humans to do anything they don't want to do, for example. And research suggests that over evolutionary time, the better our species became at telling what we should and shouldn't listen to, the more open to influence we became. Even small children have ways to evaluate what people tell them.
The most basic is what I call "plausibility checking": if you tell them you're 200 years old, they're going to find that highly suspicious. Kids pay attention to competence; if someone is an expert in the relevant field, they'll trust her more. They're likelier to trust someone who's nice to them. My colleagues and I have found that by age 2 ½, children can distinguish between very strong and very weak arguments. Obviously, these skills keep developing throughout your life.
But you've found that even the most forceful leaders—and their propaganda machines—have a hard time changing people's minds.
Throughout history, there's been this fear of demagogues leading whole countries into terrible decisions. In reality, these leaders are mostly good at feeling the crowd and figuring out what people want to hear. They're not really influencing [the masses]; they're surfing on pre-existing public opinion. We know from a recent study, for instance, that if you match cities in which Hitler gave campaign speeches in the late '20s through early '30s with similar cities in which he didn't give campaign speeches, there was no difference in vote share for the Nazis. Nazi propaganda managed to make Germans who were already anti-Semitic more likely to express their anti-Semitism or act on it. But Germans who were not already anti-Semitic were completely inured to the propaganda.
So why, in totalitarian regimes, do people seem so devoted to the ruler?
It's not a very complex psychology. In these regimes, the slightest show of discontent can be punished by death, or by you and your whole family being sent to a labor camp. That doesn't mean propaganda has no effect, but you can explain people's obedience without it.
What about cult leaders and religious extremists? Their followers seem willing to believe anything.
Prophets and preachers can inspire the kind of fervor that leads people to suicidal acts or doomed crusades. But history shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion. Only when people are ready for extreme actions can a charismatic figure provide the spark that lights the fire.
Once a religion becomes ubiquitous, the limits of its persuasive powers become clear. Every anthropologist knows that in societies that are nominally dominated by orthodox belief systems—whether Christian or Muslim or anything else—most people share a view of God, or the spirit, that's closer to what you find in societies that lack such religions. In the Middle Ages, for instance, you have records of priests complaining of how unruly the people are—how they spend the whole Mass chatting or gossiping, or go on pilgrimages mostly because of all the prostitutes and wine-drinking. They continue pagan practices. They resist attempts to make them pay tithes. It's very far from our image of how much people really bought the dominant religion.
"The mainstream media is extremely reliable. The scientific consensus is extremely reliable."
And what about all those wild rumors and conspiracy theories on social media? Don't those demonstrate widespread gullibility?
I think not, for two reasons. One is that most of these false beliefs tend to be held in a way that's not very deep. People may say Pizzagate is true, yet that belief doesn't really interact with the rest of their cognition or their behavior. If you really believe that children are being abused, then trying to free them is the moral and rational thing to do. But the only person who did that was the guy who took his assault weapon to the pizzeria. Most people just left one-star reviews of the restaurant.
The other reason is that most of these beliefs actually play some useful role for people. Before any ethnic massacre, for example, rumors circulate about atrocities having been committed by the targeted minority. But those beliefs aren't what's really driving the phenomenon. In the horrendous pogrom of Kishinev, Moldova, 100 years ago, you had these stories of blood libel—a child disappeared, typical stuff. And then what did the Christian inhabitants do? They raped the [Jewish] women, they pillaged the wine stores, they stole everything they could. They clearly wanted to get that stuff, and they made up something to justify it.
Where do skeptics like climate-change deniers and anti-vaxxers fit into the picture?
Most people in most countries accept that vaccination is good and that climate change is real and man-made. These ideas are deeply counter-intuitive, so the fact that scientists were able to get them across is quite fascinating. But the environment in which we live is vastly different from the one in which we evolved. There's a lot more information, which makes it harder to figure out who we can trust. The main effect is that we don't trust enough; we don't accept enough information. We also rely on shortcuts and heuristics—coarse cues of trustworthiness. There are people who abuse these cues. They may have a PhD or an MD, and they use those credentials to help them spread messages that are not true and not good. Mostly, they're affirming what people want to believe, but they may also be changing minds at the margins.
How can we improve people's ability to resist that kind of exploitation?
I wish I could tell you! That's literally my next project. Generally speaking, though, my advice is very vanilla. The mainstream media is extremely reliable. The scientific consensus is extremely reliable. If you trust those sources, you'll go wrong in a very few cases, but on the whole, they'll probably give you good results. Yet a lot of the problems that we attribute to people being stupid and irrational are not entirely their fault. If governments were less corrupt, if the pharmaceutical companies were irreproachable, these problems might not go away—but they would certainly be minimized.
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques along with algorithms that use artificial intelligence, a branch of computer science in which machines learn to execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for use in patient care is essential to AI's responsible and transparent implementation as the industry grapples with voluminous quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI (a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body) has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet, such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on all physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing an individual's data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, says Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
"AI is a paradigm shift with magic power and great potential."
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."