Researchers Behaving Badly: Known Frauds Are "the Tip of the Iceberg"
Last week, the whistleblowers in the Paolo Macchiarini affair at Sweden's Karolinska Institutet went on the record here to detail the retaliation they suffered for trying to expose a star surgeon's appalling research misconduct.
The whistleblowers had discovered that in six published papers, Macchiarini falsified data, lied about the condition of patients and circumvented ethical approvals. As a result, multiple patients suffered and died. But Karolinska turned a blind eye for years.
Scientific fraud of the type committed by Macchiarini is rare, but studies suggest that it's on the rise. Just this week, for example, Retraction Watch and STAT together broke the news that a Harvard Medical School cardiologist and stem cell researcher, Piero Anversa, falsified data in a whopping 31 papers, which now have to be retracted. Anversa had claimed that he could regenerate heart muscle by injecting bone marrow cells into damaged hearts, a result that no one has been able to duplicate.
A 2009 study published in PLOS ONE found that about two percent of scientists admitted to committing fabrication, falsification or plagiarism in their work. That's a small number, but up to one third of scientists admit to committing "questionable research practices" that fall into a gray area between rigorous accuracy and outright fraud.
These dubious practices may include misrepresentations, research bias, and inaccurate interpretations of data. One common questionable research practice entails formulating a hypothesis after the research is done in order to claim a successful premise. Another highly questionable practice that can shape research is ghost-authoring by representatives of the pharmaceutical industry and other for-profit fields. Still another is gifting co-authorship to unqualified but powerful individuals who can advance one's career. Such practices can unfairly bolster a scientist's reputation and increase the likelihood of getting the work published.
The above percentages represent what scientists admit to doing themselves; when they evaluate the practices of their colleagues, the numbers jump dramatically. In a 2012 study published in the Journal of Research in Medical Sciences, researchers estimated that 14 percent of other scientists commit serious misconduct, while up to 72 percent engage in questionable practices. While these are only estimates, the problem is clearly not one of just a few bad apples.
In the PLOS ONE study, Daniele Fanelli says that increasing evidence suggests the known frauds are "just the 'tip of the iceberg,' and that many cases are never discovered" because fraud is extremely hard to detect.
In addition, it's likely that most cases of scientific misconduct go unreported because of the high price of whistleblowing. Those in the Macchiarini case showed extraordinary persistence in their multi-year campaign to stop his deadly trachea implants, while suffering serious damage to their careers. Such heroic efforts to unmask fraud are probably rare.
To make matters worse, there are numerous players in the scientific world who may be complicit in either committing misconduct or covering it up. These include not only primary researchers but co-authors, institutional executives, journal editors, and industry leaders. Essentially everyone wants to be associated with big breakthroughs, and they may overlook scientifically shaky foundations when a major advance is claimed.
Another part of the problem is that it's rare for students in science and medicine to receive an education in ethics. And studies have shown that older, more experienced and possibly jaded researchers are more likely to fudge results than their younger, more idealistic colleagues.
So, given the steep price that individuals and institutions pay for scientific misconduct, what compels them to go down that road in the first place? According to the JRMS study, individuals face intense pressures to publish and to attract grant money in order to secure teaching positions at universities. Once they have acquired positions, the pressure is on to keep the grants and publishing credits coming in order to obtain tenure, be appointed to positions on boards, and recruit flocks of graduate students to assist in research. And not to be underestimated is the human ego.
Paolo Macchiarini is an especially vivid example of a scientist seeking not only fortune, but fame. He liberally (and falsely) claimed powerful politicians and celebrities, even the Pope, as patients or admirers. He may be an extreme example, but we live in an age of celebrity scientists who bring huge amounts of grant money and high prestige to the institutions that employ them.
The media plays a significant role in both glorifying stars and unmasking frauds. In the Macchiarini scandal, the media first lifted him up, as in NBC's laudatory documentary, "A Leap of Faith," which painted him as a kind of miracle-worker, and then brought him down, as in the January 2016 documentary, "The Experiments," which chronicled the agonizing death of one of his patients.
Institutions can also play a crucial role in scientific fraud by putting more emphasis on the number and frequency of papers published than on their quality. The whole course of a scientist's career is profoundly affected by a metric called the h-index: a researcher has an h-index of h if h of their papers have each been cited at least h times by other researchers. Raising one's h-index becomes an overriding goal, sometimes eclipsing the kind of patient, time-consuming research that leads to true breakthroughs based on reliable results.
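As a concrete illustration, here is a minimal sketch of how an h-index is computed from a list of citation counts; the function and the example numbers are hypothetical, not drawn from any study cited here:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher has
    at least h papers that have each been cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4, and 3 times
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```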
Universities also create a high-pressured environment that encourages scientists to cut corners. They, too, place a heavy emphasis on attracting large monetary grants and accruing fame and prestige. This can lead them, just as it led Karolinska, to protect a star scientist's sloppy or questionable research. According to Dr. Andrew Rosenberg, who is director of the Center for Science and Democracy at the U.S.-based Union of Concerned Scientists, "Karolinska defended its investment in an individual as opposed to the long-term health of the institution. People were dying, and they should have outsourced the investigation from the very beginning."
Having institutions investigate their own practices is a conflict of interest from the get-go, says Rosenberg.
Scientists, universities, and research institutions are also not immune to fads. "Hot" subjects attract grant money and confer prestige, giving scientists an incentive to shift their research priorities toward whatever is most fundable, even if that means neglecting their true areas of expertise and interest. Macchiarini himself was supposedly at the forefront of the currently sexy field of regenerative medicine -- a field in which Karolinska was making a huge investment.
The relative scarcity of resources intensifies the already significant pressure on scientists. They may want to publish results rapidly, since they face many competitors for limited grant money, academic positions, students, and influence. The scarcity means that a great many researchers will fail while only a few succeed. Once again, the temptation may be to rush research and to show it in the most positive light possible, even if it means fudging or exaggerating results.
Intense competition can have a perverse effect on researchers, according to a 2007 study in the journal Science and Engineering Ethics. Not only does it place undue pressure on scientists to succeed, it frequently leads to the withholding of information from colleagues, which undermines a system in which new discoveries build on the previous work of others. Researchers may feel compelled to withhold their results because of the pressure to be the first to publish. The study's authors propose that more investment in basic research from governments could alleviate some of these competitive pressures.
Scientific journals, although they play a part in publishing flawed science, can't be expected to investigate cases of suspected fraud, says Leonid Schneider, a German science blogger whose writings helped to expose the Macchiarini affair.
"They just basically wait for someone to retract problematic papers," he says.
He also notes that, while American scientists can go to the Office of Research Integrity to report misconduct, whistleblowers in Europe have no external authority to whom they can appeal to investigate cases of fraud.
"They have to go to their employer, who has a vested interest in covering up cases of misconduct," he says.
Because science is increasingly international, with major studies often including collaborators from several countries, Schneider suggests there should be an international body, accessible to all researchers, to investigate suspected fraud.
Ultimately, says Rosenberg, the scientific system must incorporate trust. "You trust co-authors when you write a paper, and peer reviewers at journals trust that scientists at research institutions like Karolinska are acting with integrity."
Without trust, the whole system falls apart. And it is the trust of the public, an asset that is hard to regain once betrayed, that science depends upon for its very existence. Scientific research is overwhelmingly financed by tax dollars, and the need for the public's goodwill is more than an abstraction.
The Macchiarini affair raises a profound question of trust and responsibility: Should multiple co-authors be held responsible for a lead author's misconduct?
Karolinska apparently believes so. When the institution at last owned up to the scandal, it vindictively found Karl-Henrik Grinnemo, one of the whistleblowers, guilty of scientific misconduct as well. It also designated two other whistleblowers as "blameworthy" for their roles as co-authors of the papers on which Macchiarini was the lead author.
As a result, the whistleblowers' reputations and employment prospects have become collateral damage. Accusations of research misconduct can be a career killer. Research grants dry up, employment opportunities evaporate, publishing becomes next to impossible, and collaborators vanish into thin air.
Grinnemo contends that co-authors should only be responsible for their discrete contributions, not for the data supplied by others.
"Different aspects of a paper are highly specialized," he says, "and that's why you have multiple authors. You cannot go through every single bit of data because you don't understand all the parts of the article."
This is especially true in multidisciplinary, translational research, where there are sometimes 20 or more authors. "You have to trust co-authors, and if you find something wrong you have to notify all co-authors. But you couldn't go through everything or it would take years to publish an article," says Grinnemo.
Though the pressures facing scientists are very real, the problem of misconduct is not inevitable. Along with increased support from governments and industry, a change in academic culture that emphasizes quality over quantity of published studies could help encourage meritorious research.
But beyond that, trust will always play a role when numerous specialists unite to achieve a common goal: the accumulation of knowledge that will promote human health, wealth, and well-being.
[Correction: An earlier version of this story mistakenly credited The New York Times with breaking the news of the Anversa retractions, rather than Retraction Watch and STAT, which jointly published the exclusive on October 14th. The piece in the Times ran on October 15th. We regret the error.]
Ethan Lindenberger, the Ohio teenager who sought out vaccinations after he was denied them as a child, recently testified before Congress about why his parents became anti-vaxxers. The trouble, he believes, stems from the pervasiveness of misinformation online.
"For my mother, her love and affection and care as a parent was used to push an agenda to create a false distress," he told the Senate Committee. His mother read posts on social media saying vaccines are dangerous, and that was enough to persuade her against them.
His story is an example of how widespread and harmful the current discourse on vaccinations has become, and, more importantly, of how traditional strategies to convince people of the merits of vaccination have largely failed.
As responsible members of society, all of us have implicitly signed on to what ethicists call the "Social Contract" -- we agree to abide by certain moral and political rules of behavior. This is what our societal values, norms, and often governments are based upon. However, with the unprecedented rise of social media, alternative facts, and fake news, it is evident that our understanding—and application—of the social contract must also evolve.
Nowhere is this breakdown of societal norms more visible than in the failure to contain the spread of vaccine-preventable diseases like measles. What started last October as a cluster of cases in New York City, mostly in under-vaccinated communities, has exploded into a national epidemic: 880 cases of measles across 24 states in 2019, according to the CDC (as of May 17, 2019). In fact, the United States is only eight months away from losing its "measles free" status; after Venezuela, it would be only the second country in the Americas to do so.
The U.S. is not the only country facing this growing problem. The constant and perilous reemergence of measles and other vaccine-preventable diseases in various parts of the world raises doubts about the efficacy of current vaccination policies. Beyond the loss of life, these outbreaks drain millions of dollars of scarce healthcare resources in unnecessary spending. While we may be living through an age of information, we are also navigating an era whose hallmark is a massive onslaught on truth.
There is ample evidence about how these outbreaks start: low vaccination rates. At the same time, there is evidence that 'educating' people with facts about the benefits of vaccination may not be effective. Indeed, human reasoning has its limits, and facts alone rarely change a person's opinion. In one fascinating report, researchers from the University of Pennsylvania described a small experiment showing how "behavioral nudges" could inform policy decisions around vaccination.
In the reported experiment, the vaccination rate for employees of a company increased by 1.5 percent when they were prompted to name the date when they planned to get their flu shot. In the same experiment, when employees were prompted to name both a date and a time for their planned flu shot, vaccination rate increased by 4 percent.
This experiment is a part of an emerging field of behavioral economics—a scientific undertaking that uses insights from psychology to understand human decision-making. The field was born from a humbling realization that humans probably do not possess an unlimited capacity for processing information. Work in this field could inform how we can formulate vaccination policy that is effective, conserves healthcare resources, and is applicable to current societal norms.
Take, for instance, the human papillomavirus (HPV), which can cause several types of cancer in both men and women. Research into the quality of physician communication has repeatedly shown that lukewarm recommendations for HPV vaccination by primary care physicians likely contribute to under-immunization of eligible adolescents and can cause confusion for parents.
A randomized trial revealed the subtle power of "announcements" – direct, brief, assertive statements by physicians that assumed parents were ready to vaccinate their children. These announcements increased vaccination rates by 5.4 percent. Lengthy, open-ended dialogues demonstrated no benefit in vaccination rates. It seems that uncertainty from the physician translates to unwillingness from a parent.
Choice architecture is another compelling concept. The premise is simple: we hardly make any of our decisions in a vacuum; the environment in which a decision is made influences its outcome. If health systems were designed with these insights in mind, people would be more likely to make better choices, without being forced.
This theory, proposed by Richard Thaler, who won the 2017 Nobel Prize in Economics, was put to the test by physicians at the University of Pennsylvania. In their study, flu vaccination rates at primary care practices increased by 9.5 percent simply because the staff implemented an "active choice intervention" in their electronic health records: a prompt that nudged doctors and nurses to ask patients if they'd gotten the vaccine yet. The study illustrated how an intervention as simple as a reminder can save lives.
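To make the idea concrete, here is a minimal, purely illustrative sketch of the active-choice pattern. The field names and logic below are assumptions chosen for illustration, not the Penn study's actual electronic health record implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Visit:
    patient_id: str
    flu_shot_this_season: bool

def active_choice_prompt(visit: Visit) -> Optional[str]:
    """Return a prompt the clinician must answer before closing the chart.

    Unlike a passive reminder, an active-choice design does not let the
    question be skipped silently: the clinician must explicitly accept or
    decline ordering the vaccine.
    """
    if visit.flu_shot_this_season:
        return None  # vaccine already documented; nothing to ask
    return (f"Patient {visit.patient_id} has no flu shot on record this season. "
            "Order one now? [Accept / Decline]")

# Illustrative usage with a hypothetical patient record
print(active_choice_prompt(Visit("A123", flu_shot_this_season=False)))
```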
To be sure, some bioethicists do worry about implementing these policies. Are behavioral nudges akin to increased scrutiny or a burden for the disadvantaged? For example, would incentives to quit smoking unfairly target the poor, who are more likely to receive criticism for bad choices?
While this is a valid concern, behavioral economics offers one of the only ethical solutions to increasing vaccination rates by addressing the most critical—and often legal—challenge to universal vaccinations: mandates. Choice architecture and other interventions encourage and inform a choice, allowing an individual to retain his or her right to refuse unwanted treatment. This distinction is especially important, as evidence suggests that people who refuse vaccinations often do so as a result of cognitive biases – systematic errors in thinking resulting from emotional attachment or a lack of information.
For instance, people are prone to "confirmation bias," or a tendency to selectively believe in information that confirms their preexisting theories, rather than the available evidence. At the same time, people do not like mandates. In such situations, choice architecture provides a useful option: people are nudged to make the right choice via the design of health delivery systems, without needing policies that rely on force.
The measles outbreak is a sober reminder of how devastating it can be when the social contract breaks down and people fall prey to misinformation. But all is not lost. As we fight a larger societal battle against alternative facts, we now have another option in the trenches to subtly encourage people to make better choices.
Using insights from research in decision-making, we can all contribute meaningfully to controversial conversations with family, friends, neighbors, colleagues, and our representatives, and push for policies that protect those we care about. A little more than a hundred years ago, thousands of lives were routinely lost to preventable illnesses. We've come too far to let ignorance destroy us now.
New Tech Can Predict Breast Cancer Years in Advance
Every two minutes, a woman is diagnosed with breast cancer. The question is, can those at high risk be identified early enough to survive?
The current standard practice in medicine is not exactly precise. It relies on age, family history of cancer, and breast density, among other factors, to determine risk. But these factors do not always tell the whole story, leaving many women to slip through the cracks. In addition, a racial gap persists in breast cancer treatment and survival. African-American women are 42 percent more likely to die from the disease despite relatively equal rates of diagnosis.
But now those grim statistics could be changing. A team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory has developed a deep learning model that predicts a patient's breast cancer risk more accurately than established clinical guidelines, and it has predicted risk equally well in both white and black women for the first time.
The Lowdown
Study results published in Radiology described how the AI software read mammogram images from more than 60,000 patients at Massachusetts General Hospital to identify subtle differences in breast tissue that pointed to potential risk factors, even in their earliest stages. The team accessed the patients' actual diagnoses and determined that the AI model was able to correctly place 31 percent of all cancer patients in the highest-risk category of developing breast cancer within five years of the examination, compared to just 18 percent for existing models.
"Each image has hundreds of thousands of pixels identifying something that may not necessarily be detected by the human eye," said MIT professor Regina Barzilay, one of the study's lead authors. "We all have limited visual capacities so it seems some machines trained on hundreds of thousands of images with a known outcome can capture correlations the human eye might not notice."
Barzilay, a breast cancer survivor herself, had abnormal tissue patterns on mammograms in 2012 and 2013, but wasn't diagnosed until after a 2014 image reading, illustrating the limitations of human processing alone.
MIT professor Regina Barzilay, a lead author on the new study and a breast cancer survivor herself. (Courtesy MIT)
Next up: The MIT team is looking at training the model to detect other cancers and health risks. Barzilay recalls how a cardiologist told her during a conference that women with heart diseases had a different pattern of calcification on their mammograms, demonstrating how already existing images can be used to extract other pieces of information about a person's health status.
Integration of the AI model in standard care could help doctors better tailor screening and prevention programs based on actual instead of perceived risk. Patients who might register as higher risk by current guidelines could be identified as lower risk, helping resolve conflicting opinions about how early and how often women should receive mammograms.
Open Questions: While the results were promising, it's unknown how well the model will work on a larger scale, as the study drew all of its mammograms from a single institution, Massachusetts General Hospital. Some risk factor information was also unavailable for certain patients, leaving researchers unable to fully compare the AI model's performance to that of the traditional standard.
One incentive for wider implementation and study, however, is that no new hardware is required to use the AI model. With other institutions now showing interest, this software could lead to earlier routine detection and treatment of breast cancer, resulting in more lives saved.