Researchers Behaving Badly: Known Frauds Are "the Tip of the Iceberg"
Last week, the whistleblowers in the Paolo Macchiarini affair at Sweden's Karolinska Institutet went on the record here to detail the retaliation they suffered for trying to expose a star surgeon's appalling research misconduct.
The whistleblowers had discovered that in six published papers, Macchiarini falsified data, lied about the condition of patients and circumvented ethical approvals. As a result, multiple patients suffered and died. But Karolinska turned a blind eye for years.
Scientific fraud of the type committed by Macchiarini is rare, but studies suggest that it's on the rise. Just this week, for example, Retraction Watch and STAT together broke the news that a Harvard Medical School cardiologist and stem cell researcher, Piero Anversa, falsified data in a whopping 31 papers, which now have to be retracted. Anversa had claimed that he could regenerate heart muscle by injecting bone marrow cells into damaged hearts, a result that no one has been able to duplicate.
A 2009 meta-analysis published in the journal PLOS ONE found that about two percent of scientists admitted to committing fabrication, falsification, or plagiarism in their work. That's a small number, but up to one-third of scientists admit to committing "questionable research practices" that fall into a gray area between rigorous accuracy and outright fraud.
These dubious practices may include misrepresentations, research bias, and inaccurate interpretations of data. One common questionable research practice entails formulating a hypothesis after the research is done in order to claim a successful premise, sometimes called HARKing (hypothesizing after the results are known). Another highly questionable practice that can shape research is ghost-authoring by representatives of the pharmaceutical industry and other for-profit fields. Still another is gifting co-authorship to unqualified but powerful individuals who can advance one's career. Such practices can unfairly bolster a scientist's reputation and increase the likelihood of getting the work published.
The above percentages represent what scientists admit to doing themselves; when they evaluate the practices of their colleagues, the numbers jump dramatically. In a 2012 study published in the Journal of Research in Medical Sciences, researchers estimated that 14 percent of other scientists commit serious misconduct, while up to 72 percent engage in questionable practices. While these are only estimates, the problem is clearly not one of just a few bad apples.
In the PLOS study, Daniele Fanelli says that increasing evidence suggests the known frauds are "just the 'tip of the iceberg,' and that many cases are never discovered" because fraud is extremely hard to detect.
In addition, it's likely that most cases of scientific misconduct go unreported because of the high price of whistleblowing. Those in the Macchiarini case showed extraordinary persistence in their multi-year campaign to stop his deadly trachea implants, while suffering serious damage to their careers. Such heroic efforts to unmask fraud are probably rare.
To make matters worse, there are numerous players in the scientific world who may be complicit in either committing misconduct or covering it up. These include not only primary researchers but co-authors, institutional executives, journal editors, and industry leaders. Essentially everyone wants to be associated with big breakthroughs, and they may overlook scientifically shaky foundations when a major advance is claimed.
Another part of the problem is that it's rare for students in science and medicine to receive an education in ethics. And studies have shown that older, more experienced and possibly jaded researchers are more likely to fudge results than their younger, more idealistic colleagues.
So, given the steep price that individuals and institutions pay for scientific misconduct, what compels them to go down that road in the first place? According to the JRMS study, individuals face intense pressures to publish and to attract grant money in order to secure teaching positions at universities. Once they have acquired positions, the pressure is on to keep the grants and publishing credits coming in order to obtain tenure, be appointed to positions on boards, and recruit flocks of graduate students to assist in research. And not to be underestimated is the human ego.
Paolo Macchiarini is an especially vivid example of a scientist seeking not only fortune, but fame. He liberally (and falsely) claimed powerful politicians and celebrities, even the Pope, as patients or admirers. He may be an extreme example, but we live in an age of celebrity scientists who bring huge amounts of grant money and high prestige to the institutions that employ them.
The media plays a significant role in both glorifying stars and unmasking frauds. In the Macchiarini scandal, the media first lifted him up, as in NBC's laudatory documentary, "A Leap of Faith," which painted him as a kind of miracle-worker, and then brought him down, as in the January 2016 documentary, "The Experiments," which chronicled the agonizing death of one of his patients.
Institutions can also play a crucial role in scientific fraud by putting more emphasis on the number and frequency of papers published than on their quality. The whole course of a scientist's career is profoundly affected by a metric called the h-index, which captures both productivity and citation impact: an h-index of n means a researcher has published n papers that have each been cited at least n times. Raising one's h-index becomes an overriding goal, sometimes eclipsing the kind of patient, time-consuming research that leads to true breakthroughs based on reliable results.
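The metric itself is simple to compute. A minimal sketch in Python (the `h_index` function name and the example citation counts are illustrative, not taken from any study cited here):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the researcher has h papers with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # papers are sorted, so no later paper can qualify
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four of them have at least
# 4 citations each, but there are not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note how the metric rewards steady volume: a single blockbuster paper with thousands of citations raises h no more than any other single paper, which is part of why publishing often, rather than well, can become the overriding goal.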
Universities also create a high-pressured environment that encourages scientists to cut corners. They, too, place a heavy emphasis on attracting large monetary grants and accruing fame and prestige. This can lead them, just as it led Karolinska, to protect a star scientist's sloppy or questionable research. According to Dr. Andrew Rosenberg, who is director of the Center for Science and Democracy at the U.S.-based Union of Concerned Scientists, "Karolinska defended its investment in an individual as opposed to the long-term health of the institution. People were dying, and they should have outsourced the investigation from the very beginning."
Having institutions investigate their own practices is a conflict of interest from the get-go, says Rosenberg.
Scientists, universities, and research institutions are also not immune to fads. "Hot" subjects attract grant money and confer prestige, incentivizing scientists to shift their research priorities in a direction that garners more grants. This can mean neglecting the scientist's true area of expertise and interests in favor of a subject that's more likely to attract grant money. In Macchiarini's case, he was allegedly at the forefront of the currently sexy field of regenerative medicine -- a field in which Karolinska was making a huge investment.
The relative scarcity of resources intensifies the already significant pressure on scientists. They may want to publish results rapidly, since they face many competitors for limited grant money, academic positions, students, and influence. The scarcity means that a great many researchers will fail while only a few succeed. Once again, the temptation may be to rush research and to show it in the most positive light possible, even if it means fudging or exaggerating results.
Intense competition can have a perverse effect on researchers, according to a 2007 study in the journal Science and Engineering Ethics. Not only does it place undue pressure on scientists to succeed, it frequently leads to the withholding of information from colleagues, which undermines a system in which new discoveries build on the previous work of others. Researchers may feel compelled to withhold their results because of the pressure to be the first to publish. The study's authors propose that more investment in basic research from governments could alleviate some of these competitive pressures.
Scientific journals, although they play a part in publishing flawed science, can't be expected to investigate cases of suspected fraud, says the German science blogger Leonid Schneider. Schneider's writings helped to expose the Macchiarini affair.
"They just basically wait for someone to retract problematic papers," he says.
He also notes that, while American scientists can go to the Office of Research Integrity to report misconduct, whistleblowers in Europe have no external authority to whom they can appeal to investigate cases of fraud.
"They have to go to their employer, who has a vested interest in covering up cases of misconduct," he says.
Science is increasingly international, and major studies can include collaborators from several different countries. Schneider suggests there should be an international body, accessible to all researchers, that would investigate suspected fraud.
Ultimately, says Rosenberg, the scientific system must incorporate trust. "You trust co-authors when you write a paper, and peer reviewers at journals trust that scientists at research institutions like Karolinska are acting with integrity."
Without trust, the whole system falls apart. Science depends for its very existence on the trust of the public, an asset that is hard to regain once it has been betrayed. Scientific research is overwhelmingly financed by tax dollars, and the need for the public's goodwill is more than an abstraction.
The Macchiarini affair raises a profound question of trust and responsibility: Should multiple co-authors be held responsible for a lead author's misconduct?
Karolinska apparently believes so. When the institution at last owned up to the scandal, it vindictively found Karl-Henrik Grinnemo, one of the whistleblowers, guilty of scientific misconduct as well. It also designated two other whistleblowers as "blameworthy" for their roles as co-authors of the papers on which Macchiarini was the lead author.
As a result, the whistleblowers' reputations and employment prospects have become collateral damage. Accusations of research misconduct can be a career killer. Research grants dry up, employment opportunities evaporate, publishing becomes next to impossible, and collaborators vanish into thin air.
Grinnemo contends that co-authors should only be responsible for their discrete contributions, not for the data supplied by others.
"Different aspects of a paper are highly specialized," he says, "and that's why you have multiple authors. You cannot go through every single bit of data because you don't understand all the parts of the article."
This is especially true in multidisciplinary, translational research, where there are sometimes 20 or more authors. "You have to trust co-authors, and if you find something wrong you have to notify all co-authors. But you couldn't go through everything or it would take years to publish an article," says Grinnemo.
Though the pressures facing scientists are very real, the problem of misconduct is not inevitable. Along with increased support from governments and industry, a change in academic culture that emphasizes quality over quantity of published studies could help encourage meritorious research.
But beyond that, trust will always play a role when numerous specialists unite to achieve a common goal: the accumulation of knowledge that will promote human health, wealth, and well-being.
[Correction: An earlier version of this story mistakenly credited The New York Times with breaking the news of the Anversa retractions, rather than Retraction Watch and STAT, which jointly published the exclusive on October 14th. The piece in the Times ran on October 15th. We regret the error.]
Story by Big Think
For most of history, artificial intelligence (AI) has been relegated almost entirely to the realm of science fiction. Then, in late 2022, it burst into reality — seemingly out of nowhere — with the popular launch of ChatGPT, the generative AI chatbot that solves tricky problems, designs rockets, has deep conversations with users, and even aces the Bar exam.
But the truth is that before ChatGPT nabbed the public’s attention, AI was already here, and it was doing more important things than writing essays for lazy college students. Case in point: It was key to saving the lives of tens of millions of people.
AI-designed mRNA vaccines
As Dave Johnson, chief data and AI officer at Moderna, told MIT Technology Review‘s In Machines We Trust podcast in 2022, AI was integral to creating the company’s highly effective mRNA vaccine against COVID. Moderna and Pfizer/BioNTech’s mRNA vaccines collectively saved between 15 and 20 million lives, according to one estimate from 2022.
Johnson described how AI was hard at work at Moderna well before COVID arose to infect billions. The pharmaceutical company focuses on finding mRNA therapies to fight off infectious disease, treat cancer, or thwart genetic illness, among other medical applications. Messenger RNA molecules are essentially molecular instructions that tell cells how to create specific proteins, which do everything from fighting infection to catalyzing reactions to relaying cellular messages.
Johnson and his team put AI and automated robots to work making lots of different mRNAs for scientists to experiment with. Moderna quickly went from making about 30 per month to more than one thousand. They then created AI algorithms to optimize mRNA to maximize protein production in the body — more bang for the biological buck.
For Johnson and his team's next trick, they used AI to automate science itself. Once Moderna's scientists have an mRNA to experiment with, they do pre-clinical tests in the lab. They then pore over reams of data to see which mRNAs could progress to the next stage: animal trials. This process is long, repetitive, and soul-sucking, ill-suited to a creative scientist but great for a mindless AI algorithm. With scientists' input, models were made to automate this tedious process.
All these AI systems were put in place over the past decade. Then COVID showed up. So when the genome sequence of the coronavirus was made public in January 2020, Moderna was off to the races pumping out and testing mRNAs that would tell cells how to manufacture the coronavirus’s spike protein so that the body’s immune system would recognize and destroy it. Within 42 days, the company had an mRNA vaccine ready to be tested in humans. It eventually went into hundreds of millions of arms.
Biotech harnesses the power of AI
Moderna is now turning its attention to other ailments that could be treated with mRNA, and the company is continuing to lean on AI. Scientists are still coming to Johnson with automation requests, which he happily fulfills.
“We don’t think about AI in the context of replacing humans,” he told the Me, Myself, and AI podcast. “We always think about it in terms of this human-machine collaboration, because they’re good at different things. Humans are really good at creativity and flexibility and insight, whereas machines are really good at precision and giving the exact same result every single time and doing it at scale and speed.”
Moderna, which was founded as a “digital biotech,” is undoubtedly the poster child of AI use in mRNA vaccines. The company recently signed a deal with IBM to use IBM’s quantum computers as well as its proprietary generative AI model, MoLFormer.
Moderna’s success is encouraging other companies to follow its example. In January, BioNTech, which partnered with Pfizer to make the other highly effective mRNA vaccine against COVID, acquired the company InstaDeep for $440 million to implement its machine learning AI across its mRNA medicine platform. And in May, Chinese technology giant Baidu announced an AI tool that designs super-optimized mRNA sequences in minutes. A nearly countless number of mRNA molecules can code for the same protein, but some are more stable and result in the production of more proteins. Baidu’s AI, called “LinearDesign,” finds these mRNAs. The company licensed the tool to French pharmaceutical company Sanofi.
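The combinatorics behind that "nearly countless" claim follow directly from the genetic code: most amino acids are encoded by several synonymous codons, so the number of distinct mRNA sequences for a protein multiplies with every residue. A minimal sketch in Python (the degeneracy table reflects the standard genetic code; the `synonymous_mrna_count` helper and the ten-residue example peptide are illustrative):

```python
# Degeneracy of the standard genetic code: how many codons encode each amino acid.
CODONS_PER_AA = {
    "A": 4, "R": 6, "N": 2, "D": 2, "C": 2, "Q": 2, "E": 2, "G": 4,
    "H": 2, "I": 3, "L": 6, "K": 2, "M": 1, "F": 2, "P": 4, "S": 6,
    "T": 4, "W": 1, "Y": 2, "V": 4,
}

def synonymous_mrna_count(peptide: str) -> int:
    """Number of distinct mRNA coding sequences for a peptide (stop codon ignored)."""
    count = 1
    for aa in peptide:
        count *= CODONS_PER_AA[aa]
    return count

# Even a 10-residue peptide already admits hundreds of thousands of encodings;
# a full-length protein admits astronomically many, and tools like LinearDesign
# search that space for stable, high-yield sequences.
print(synonymous_mrna_count("MFVFLVLLPL"))  # prints 331776
```

The search problem, in other words, is not finding *an* mRNA that encodes the target protein but choosing the best one among an exponentially large set of equivalents.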
Writing in the journal Accounts of Chemical Research in late 2021, Sebastian M. Castillo-Hair and Georg Seelig, computer engineers who focus on synthetic biology at the University of Washington, forecast that AI machine learning models will further accelerate the biotechnology research process, putting mRNA medicine into overdrive to the benefit of all.
This article originally appeared on Big Think, home of the brightest minds and biggest ideas of all time.
Opioid prescription policies may hurt those in chronic pain
Tinu Abayomi-Paul works as a writer and activist, plus one unwanted job: Trying to fill her opioid prescription. She says that some pharmacists laugh and tell her that no one needs the amount of pain medication that she is seeking. Another pharmacist near her home in Venus, Tex., refused to fill more than seven days of a 30-day prescription.
To get a new prescription—partially filled opioid prescriptions can’t be dispensed later—Abayomi-Paul needed to return to her doctor’s office. But without her medication, she was having too much pain to travel there, much less return to the pharmacy. She rationed out the pills over several weeks, an agonizing compromise that left her unable to work, interact with her children, sleep restfully, or leave the house. “Don’t I deserve to do more than survive?” she says.
Abayomi-Paul’s pain results from a degenerative spine disorder, chronic lymphocytic leukemia, and more than a dozen other diagnoses and disabilities. She is part of a growing group of people with chronic pain who have been negatively impacted by the fallout from efforts to prevent opioid overdose deaths.
Guidelines for dispensing these pills are complicated because many opioids, like codeine, oxycodone, and morphine, are prescribed legally for pain, while others, such as heroin, are used illegally. Deaths from opioids have increased rapidly since 1999 and have become a national emergency. The CDC has identified three waves in the rise of opioid overdose deaths: an increase in opioid prescriptions in the ‘90s, a surge of heroin around 2010, and an influx of fentanyl and other powerful synthetic opioids beginning in 2013.
As overdose deaths grew, so did public calls to address them, prompting the CDC to change its prescription guidelines in 2016. The new guidelines suggested limiting medication for acute pain to a seven-day supply, capping daily morphine-equivalent doses, and other restrictions. Some statistics suggest that these policies have worked; from 2016 to 2019, opioid prescriptions fell 44 percent. Physicians also started progressively lowering opioid doses for patients, a practice called tapering. A study tracking nearly 100,000 Medicare beneficiaries on opioids found that about 13 percent of patients were tapering in 2012, and that number increased to about 23 percent by 2017.
But some physicians may be too aggressive with this tapering strategy. About one in four people had doses reduced by more than 10 percent per week, a rate faster than the CDC recommends. The approach left people like Abayomi-Paul without the medication they needed. Every year, Abayomi-Paul says, her prescriptions are harder to fill. David Brushwood, a pharmacy professor who specializes in policy and outcomes at the University of Florida in Gainesville, says opioid dosing isn’t one-size-fits-all. “Patients need to be taken care of individually, not based on what some government agency says they need,” he says.
‘This is not survivable’
Health policy and disability rights attorney Erin Gilmer advocated for people with pain, using her own experience with chronic pain and a host of medical conditions as a guidepost. She launched an advocacy website, Healthcare as a Human Right, and shared her struggles on Twitter: “This pain is more than anything I've endured before and I've already been through too much. Yet because it's not simply identified no one believes it's as bad as it is. This is not survivable.”
When her pain dramatically worsened midway through 2021, Gilmer’s posts grew ominous: “I keep thinking it can't possibly get worse but somehow every day is worse than the last.”
The CDC revised its guidelines in 2022 after criticisms that people with chronic pain were being undertreated, enduring dangerous withdrawal symptoms, and suffering psychological distress. (Long-term opioid use can cause physical dependency, an adaptive reaction that is different than the compulsive misuse associated with a substance use disorder.) It was too late for Gilmer. On July 7, 2021, the 38-year-old died by suicide.
Last August, an Ohio district court ruling set forth a new requirement for Walgreens, Walmart, and CVS pharmacists in two counties. These pharmacists must now document opioid prescriptions that are turned down, even for customers who have no previous purchases at that pharmacy, and they’re required to share this information with other locations in the same chain. None of the three pharmacies responded to an interview request from Leaps.org.
In a practice called red flagging, pharmacists may label a prescription suspicious for a variety of reasons, such as if a pharmacist observes an unusually high dose, a long distance from the patient’s home to the pharmacy, or cash payment. Pharmacists may question patients or prescribers to resolve red flags but, regardless of the explanation, they’re free to refuse to fill a prescription.
As the risk of litigation has grown, so has finger-pointing, says Seth Whitelaw, a compliance consultant at Whitelaw Compliance Group in West Chester, PA, who advises drug, medical device, and biotech companies. Drugmakers accused in National Prescription Opioid Litigation (NPOL), a complex set of thousands of cases on opioid epidemic deaths, which includes the Ohio district case, have argued that they shouldn’t be responsible for the large supply of opiates and overdose deaths. Yet, prosecutors alleged that these pharmaceutical companies hid addiction and overdose risks when labeling opioids, while distributors and pharmacists failed to identify suspicious orders or scripts.
Patients and pharmacists fear red flags
The requirements that pharmacists document prescriptions they refuse to fill so far only apply to two counties in Ohio. But Brushwood fears they will spread because of this precedent, and because there’s no way for pharmacists to predict what new legislation is on the way. “There is no definition of a red flag, there are no lists of red flags. There is no instruction on what to do when a red flag is detected. There’s no guidance on how to document red flags. It is a standardless responsibility,” Brushwood says. This adds trepidation for pharmacists—and more hoops to jump through for patients.
“We now have about a dozen studies that show that actually ripping somebody off their medication increases their risk of overdose and suicide by three to five times, destabilizes their health and mental health, often requires some hospitalization or emergency care, and can cause heart attacks,” says Kate Nicolson, founder of the National Pain Advocacy Center based in Boulder, Colorado. “It can kill people.” Nicolson was in pain for decades due to a surgical injury to the nerves leading to her spinal cord before surgeries fixed the problem.
Another issue is that primary care offices may view opioid use as a reason to turn down new patients. In a 2021 study, secret shoppers called primary care clinics in nine states, identifying themselves as long-term opioid users. When callers said their opioids were discontinued because their former physician retired, as opposed to an unspecified reason, they were more likely to be offered an appointment. Even so, more than 40 percent were refused an appointment. The study authors say their findings suggest that some physicians may try to avoid treating people who use opioids.
Abayomi-Paul says red flagging has changed how she fills prescriptions. “Once I go to one place, I try to [continue] going to that same place because of the amount of records that I have and making sure my medications don’t conflict,” Abayomi-Paul says.
Nicolson moved to Colorado from Washington D.C. in 2015, before the CDC issued its 2016 guidelines. When the guidelines came out, she found the change to be shockingly abrupt. “I went into the doctor one day here and she said, ‘I'm going to stop prescribing opioids to all my patients effective immediately.’” Since then, she’s spoken with dozens of patients who have been red-flagged or simply haven’t been able to access pain medication.
Despite her expertise, Nicolson isn’t positive she could successfully fill an opioid prescription today even if she needed one. At this point, she’s not sure exactly what various pharmacies would view as a red flag. And she’s not confident that these red flags even work. “You can have very legitimate reasons for being 50 miles away or having to go to multiple pharmacies, given that there are drug shortages now, as well as someone refusing to fill [a prescription.] It doesn't mean that you’re necessarily ‘drug seeking.’”
While there’s no easy solution, Whitelaw says clarifying the role of pharmacists and physicians in patient access to opioids could help people get the medication they need. He is seeking policy changes that focus on the needs of people in pain more than the number of prescriptions filled. He also advocates standardizing the definition of red flags and procedures for resolving them. Still, there will never be a single policy that can be applied to all people, explains Brushwood, the University of Florida professor. “You have to make a decision about each individual prescription.”