Sloppy Science Happens More Than You Think
The media loves to tout scientific breakthroughs, and few are as toutable – and in turn, have been as touted – as CRISPR. This method of targeted DNA editing was discovered in bacteria, which use it as an adaptive immune system to combat reinfection by a previously encountered virus.
It is cool on so many levels: not only is the basic function fascinating, reminding us that we still have more to discover about even simple organisms that we thought we knew so well, but the ability it grants us to remove and replace any DNA of interest has almost limitless applications in both the lab and the clinic. As if that didn't make it sexy enough, add in a bicoastal, male-female, very public and relatively ugly patent battle, and the CRISPR story is irresistible.
And then last summer, a bombshell dropped. The prestigious journal Nature Methods published a paper in which the authors claimed that CRISPR could cause many unintended mutations, rendering it unfit for clinical use. Havoc duly ensued; stocks in CRISPR-based companies plummeted. Thankfully, the authors of the offending paper were responsible, good scientists; they reassessed, then recanted. Their attention- and headline-grabbing results were wrong, and they admitted as much, leading Nature Methods to formally retract the paper this spring.
How did this happen? Shouldn't the editors at a Nature journal know better than to have published this in the first place?
Alas, high-profile scientific journals publish misleading and downright false results fairly regularly. Some errors are unavoidable – that's how the scientific method works. Hypotheses and conclusions will invariably be overturned as new data becomes available and new technologies are developed that allow for deeper and deeper studies. That's supposed to happen. But that's not what we're talking about here. Nor are we talking about obvious offenses like outright plagiarism. We're talking about mistakes that are avoidable, and that still have serious ramifications.
Two parties are responsible for a scientific publication, and thus two parties bear the blame when things go awry: the scientists who perform and submit the work, and the journals that publish it. Unfortunately, both are incentivized for speedy and flashy publications, and not necessarily for correct publications. It is hardly a surprise, then, that we end up with papers that are speedy and flashy – and not necessarily correct.
"Scientists don't lie and submit falsified data," said Andy Koff, a professor of Molecular Biology at Sloan Kettering Institute, the basic research arm of Memorial Sloan Kettering Cancer Center. Richard Harris, who wrote the book on scientific misconduct running the gamut from unconscious bias and ignorance to more malicious fraudulence, largely concurs (full disclosure: I reviewed the book here). "Scientists want to do good science and want to be recognized as such," he said. But even so, the cultures of both industry and academia promote research that is poorly designed and even more poorly analyzed. In Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Millions, Harris describes how scientists must constantly publish in order to maintain their reputations and positions, to get grants and tenure and students. "They are disincentivized from doing that last extra experiment to prove their results," he said; it could prove too risky if it could cost them a publication.
Ivan Oransky and Adam Marcus founded Retraction Watch, a blog that tracks the retraction of scientific papers, in 2010. Oransky pointed out that blinded peer review – the pride and joy of the scientific publishing enterprise – is a large part of the problem. "Pre-publication peer review is still important, but we can't treat it like the only check on the system. Papers are being reviewed by non-experts, and reviewers are asked to review papers only tangentially related to their field. Moreover, most peer reviewers don't look at the underlying or raw data, even when it is available. How then can they tell if the analysis is flawed or the data is accurate?" he wondered.
Koff agreed that anonymous peer review is valuable, but severely flawed. "Blinded review forces a collective view of importance," he said. "If an article disagrees with the reviewer's worldview, the article gets rejected or forced to adhere to that worldview – even if that means pushing the data someplace it shouldn't necessarily go." We have lost the scientific principle behind review, he thinks, which was to critically analyze a paper. But instead of challenging fundamental assumptions within a paper, reviewers now tend to just ask for more and more supplementary data. And don't get him started on editors. "Editors are supposed to arbitrate between reviewers and writers and they have completely abdicated this responsibility, at every journal. They do not judge, and that's a real failing."
Harris laments the wasted time, effort, and resources that result when erroneous ideas take hold in a field, not to mention lives lost when drug discovery is predicated on basic science findings that end up being wrong. "When no one takes the time, care, and money to reproduce things, science isn't stopping – but it is slowing down," he noted. Mistaken publications also erode the public's opinion of legitimate science, which is problematic since that opinion isn't especially high to begin with.
Scientists and publishers don't only cause the problem, though – they may also provide the solution. Both camps are increasingly recognizing and dealing with the crisis. The self-proclaimed "data thugs" Nick Brown and James Heathers use pretty basic arithmetic to reveal statistical errors in papers. The microbiologist Elisabeth Bik scans the scientific literature for problematic images "in her free time." The psychologist Brian Nosek founded the Center for Open Science, a non-profit organization dedicated to promoting openness, integrity, and reproducibility in scientific research. The Nature family of journals – yes, the one responsible for the latest CRISPR fiasco – has its authors complete a checklist to combat irreproducibility, à la Atul Gawande. And Nature Communications, among other journals, uses transparent peer review, in which authors can opt to have the reviews of their manuscript published anonymously alongside the completed paper. This practice "shows people how the paper evolved," said Koff, "and keeps the reviewer and editor accountable. Did the reviewer identify the major problems with the paper? Because there are always major problems with a paper."
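The "pretty basic arithmetic" Brown and Heathers deploy really can be basic. Their best-known tool, the GRIM test, simply asks whether a reported mean is arithmetically possible given the sample size. Below is a minimal sketch of that idea, assuming the underlying data are whole-number scores (such as answers on a 1-to-5 scale); it illustrates the principle and is not their actual code.

    # A GRIM-style consistency check: the sum of n integer scores must itself
    # be an integer, so the true mean can only be (some integer) / n.
    import math

    def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
        """Return True if a mean of n integer scores could round to reported_mean."""
        # Check the two integer totals nearest to reported_mean * n.
        for total in (math.floor(reported_mean * n), math.ceil(reported_mean * n)):
            if round(total / n, decimals) == round(reported_mean, decimals):
                return True
        return False

    # A paper reporting a mean of 3.27 from 28 participants fails the check:
    # no sum of 28 whole numbers, divided by 28, rounds to 3.27.
    print(grim_consistent(3.27, 28))  # False
    print(grim_consistent(3.25, 28))  # True (91 / 28 = 3.25 exactly)

Even a check this crude can flag impossible statistics that, as Oransky notes, sail past reviewers who never look at the underlying data.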
Your Digital Avatar May One Day Get Sick Before You Do
Artificial intelligence is everywhere, just not in the way you think it is.
"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."
"And this is, of course, as far from the truth as you can possibly get."
What Exactly Is Artificial Intelligence, Anyway?
Let's start with how you got to this piece. You likely came to it through social media. Your Facebook account, Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.
"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."
The revolution in recent years hasn't come from the methods scientists and other researchers use; the general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has finally made neural networks practical. These networks, loosely modeled on the human brain, are layers of interconnected processing units that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures that have been labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't be accomplished for decades.
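To make that concrete, here is a minimal sketch of supervised learning, with two invented features standing in for real image pixels; it is a deliberately tiny illustration, not any specific system mentioned in this piece.

    # Supervised learning in miniature: the model learns whatever the labels
    # teach it. Requires numpy and scikit-learn; the "whisker score" and
    # "ear pointiness" features are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    cats = rng.normal(loc=[0.8, 0.9], scale=0.1, size=(500, 2))  # labeled "cat"
    dogs = rng.normal(loc=[0.3, 0.2], scale=0.1, size=(500, 2))  # labeled "dog"
    X = np.vstack([cats, dogs])
    y = np.array([1] * 500 + [0] * 500)  # 1 = "cat", 0 = "dog"

    model = LogisticRegression().fit(X, y)
    print(model.predict([[0.85, 0.95]]))  # [1] -- a "cat"

    # Flip the labels and retrain: the same whiskered, pointy-eared animal
    # is now a "dog." The model has no concept of "cat" beyond its labels.
    flipped = LogisticRegression().fit(X, 1 - y)
    print(flipped.predict([[0.85, 0.95]]))  # [0]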
"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."
Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhones use AI to detect cancer, Apple Watches alert wearers to heart problems, AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors, and more. Every few months, another study demonstrates more possibility. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)
But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."
The Fundamental Problem That Must Be Solved
To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."
In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at doing so at an incredibly rapid pace – but the system only knows what a "cat" is because that's what a programmer told it a furry thing with whiskers and two pointy ears is called. If the programmer instead decided to label the training images as "dogs," the AI wouldn't say "no, that's a cat." Instead, it would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the kind of inference humans perform effortlessly, almost without thinking.
Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. The latter means that the AI isn't answering questions that a programmer asks it ("Is this a cat?"). Instead, it's almost as if it's looking at the data it has, coming up with its own questions and hypotheses, and answering them or putting them to the test. Combining this ability with the frankly insane processing power of modern computer systems could result in game-changing discoveries.
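The difference shows up even in the toy data from the sketch above: here is a minimal unsupervised version, with scikit-learn's k-means clustering standing in for far more sophisticated methods.

    # Unsupervised learning in miniature: no labels are provided, yet the
    # algorithm discovers the two groups on its own. Illustrative only.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    animals = np.vstack([
        rng.normal(loc=[0.8, 0.9], scale=0.1, size=(500, 2)),  # cats, unlabeled
        rng.normal(loc=[0.3, 0.2], scale=0.1, size=(500, 2)),  # dogs, unlabeled
    ])

    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(animals)
    print(clusters[:3], clusters[-3:])  # two distinct cluster ids emerge

    # The model has found structure nobody asked it about -- but naming and
    # interpreting those clusters is still up to a human.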
One company in China plans to create digital avatars of individual people, then simulate each person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watching which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life person prevent those conditions from developing or treat them before they become life-threatening.
That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in its potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where AI takes over and enslaves humanity. (Pick your favorite movie. There are dozens.) This is a concern, something for developers, programmers, and scientists to consider as they build the systems of the future.
The Ethical Problem That Deserves More Attention
But the more immediate concern about AI is much more mundane. We think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by a person or a team, and those people have explicit or implicit biases. Intentionally or, more likely, not, they introduce these biases into the code – and the training data – that form the basis for the AI. Current systems have a bias against people of color. Facebook tried to rectify the situation and failed. These are two small examples of a larger, potentially systemic problem.
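Here is a toy demonstration of how this can happen through data alone – my own construction for illustration, not a model from either incident above. When one group is underrepresented in the training data, the model fits the majority's pattern and serves the minority group worse.

    # Bias via underrepresentation: trained mostly on group A, the model
    # performs noticeably worse on group B. Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    def sample(center, n):
        return rng.normal(loc=center, scale=0.3, size=(n, 2))

    def label_a(pts):  # the outcome follows one pattern in group A...
        return (pts[:, 0] + pts[:, 1] > 0.6).astype(int)

    def label_b(pts):  # ...and a different pattern in group B
        return (pts[:, 0] - pts[:, 1] > 0.8).astype(int)

    # Group A: 1,000 training examples. Group B: only 50.
    train_a, train_b = sample([0.2, 0.4], 1000), sample([0.9, 0.1], 50)
    model = LogisticRegression().fit(
        np.vstack([train_a, train_b]),
        np.concatenate([label_a(train_a), label_b(train_b)]),
    )

    # Evaluated on fresh samples, accuracy is high for A and far lower for B.
    test_a, test_b = sample([0.2, 0.4], 1000), sample([0.9, 0.1], 1000)
    print("group A accuracy:", model.score(test_a, label_a(test_a)))
    print("group B accuracy:", model.score(test_b, label_b(test_b)))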
It's vital and necessary for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be at the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.
The rise of remote work is a win-win for people with disabilities and employers
Disability advocates see remote work as a silver lining of the pandemic, a win-win for adults with disabilities and the business world alike.
Any corporate leader would jump at the opportunity to increase their pool of potential employees by 15 percent, with all of these potential hires belonging to an underrepresented minority. That’s especially true given tight labor markets and CEOs’ desire to increase headcount. Yet too few leaders realize that people with disabilities are the largest minority group in this country, numbering 50 million.
Some executives may dread the extra investments in accommodating people’s disabilities. Yet, providing full-time remote work could suffice, according to a new study by the Economic Innovation Group think tank. The authors found that the employment rate for people with disabilities did not simply reach the pre-pandemic level by mid-2022, but far surpassed it, to the highest rate in over a decade. “Remote work and a strong labor market are helping [individuals with disabilities] find work,” said Adam Ozimek, who led the research and is chief economist at the Economic Innovation Group.
Disability advocates see this development as a silver lining of the pandemic, a win-win for adults with disabilities and the business world alike. For decades before the pandemic, employers had refused requests from workers with disabilities to work remotely, according to Thomas Foley, executive director of the National Disability Institute. During the pandemic, "we all realized that...many of us could work remotely,” Foley says. “[T]hat was disproportionately positive for people with disabilities."
Charles-Edouard Catherine, director of corporate and government relations for the National Organization on Disability, said that advocates had been calling for remote-work options as a disability accommodation for many years. “It’s a little frustrating that for decades corporate America was saying it’s too complicated, we’ll lose productivity, and now suddenly it’s like, sure, let’s do it.”
The pandemic opened doors for people with disabilities
Early in the pandemic, employment rates dropped for everyone, including people with disabilities, according to Ozimek’s research. However, these rates recovered quickly. In the second quarter of 2022, people with disabilities aged 25 to 54, the prime working age, were 3.5 percent more likely to be employed than before the pandemic.
What about people without disabilities? They were still 1.1 percent less likely to be employed.
These numbers suggest that remote work has enabled a substantial number of people with disabilities to find and retain employment.
“We have a last-in, first-out labor market, and [people with disabilities] are often among the last in and the first out,” Ozimek says. However, this dynamic has changed, with adults with disabilities seeing employment rates recover much faster. Now, the question is whether the new trend will endure, Ozimek adds. “And my conclusion is that not only is it a permanent thing, but it’s going to improve.”
Gene Boes, president and chief executive of the Northwest Center, a Seattle organization that helps people with disabilities become more independent, confirms this finding. “The new world we live in has opened the door a little bit more…because there’s just more demand for labor.”
Long COVID disabilities put a premium on remote work
Remote work can help mitigate the impact of long COVID. The U.S. Centers for Disease Control and Prevention reports that about 19 percent of those who had COVID developed long COVID. Recent Census Bureau data indicates that 16 million working-age Americans suffer from it, with economic costs estimated at $3.7 trillion.
Certainly, many of these so-called long-haulers experience relatively mild symptoms - such as loss of smell - which, while troublesome, are not disabling. But other symptoms are serious enough to be disabilities.
According to a recent study from the Federal Reserve Bank of Minneapolis, about a quarter of those with long COVID changed their employment status or working hours. That means long COVID was serious enough to interfere with work for 4 million people. For many, the issue was serious enough to qualify them as disabled.
Indeed, the Federal Reserve Bank of New York found in a just-released study that the number of individuals with disabilities in the U.S. grew by 1.7 million. That growth stemmed mainly from long COVID conditions such as fatigue and brain fog, meaning difficulties with concentration or memory, with 1.3 million people reporting an increase in brain fog since mid-2020.
Many had to drop out of the labor force due to long COVID. Yet, about 900,000 people who are newly disabled have managed to continue working. Without remote work, they might have lost these jobs.
For example, a software engineer at one of my client companies has struggled with brain fog related to long COVID. With remote work, this employee can work during the hours when she feels most mentally alert and focused, even if that means short bursts of productivity throughout the day. With flexible scheduling, she can take rests, meditate, or engage in activities that help her regain focus and energy. Without the need to commute to the office, she can save energy and time and reduce stress, which is crucial when dealing with brain fog.
In fact, Richard Deitz, the author of the Federal Reserve Bank of New York study, notes that long COVID can be considered a disability under the Americans with Disabilities Act, depending on the specifics of the condition. That means the law can require private employers with fifteen or more staff, as well as government agencies, to make reasonable accommodations for those with long COVID. Deitz writes in the paper that “telework and flexible scheduling are two accommodations that can be particularly beneficial for workers dealing with fatigue and brain fog.”
The current drive to return to the office, led by many C-suite executives, may need to be rethought in light of legal and HR concerns. Arlene S. Kanter, director of the disability law and policy program at the Syracuse University College of Law, said that the question should depend on whether people with disabilities can perform their work well at home, as they did during COVID outbreaks. “[T]hen people with disabilities, as a matter of accommodation, shouldn’t be denied that right,” Kanter said.
Diversity benefits
But companies shouldn’t need the threat of legal action to act. It simply makes dollars and sense to expand their talent pool by 15 percent with members of an underrepresented minority. After all, extensive research shows that improving diversity boosts both decision-making and financial performance.
Companies that offer more flexible work options have already gained significant benefits in terms of diverse hires. In its efforts to adapt to the post-pandemic environment, Meta, the owner of Facebook and Instagram, decided to offer permanent fully remote work options to its entire workforce. And according to Meta chief diversity officer Maxine Williams, the candidates who accepted job offers for remote positions were “substantially more likely” to come from diverse communities: people with disabilities, Black, Hispanic, Alaskan Native, Native American, veterans, and women. The numbers bear out these claims: employees with disabilities rose from 4.7 to 6.2 percent of Meta’s workforce.
Having consulted for 21 companies to help them transition to hybrid work arrangements, I can confirm that Meta’s numbers aren’t a fluke. The more my clients proved willing to offer remote work, the more staff with disabilities they recruited - and retained. That includes employees with mobility challenges. But it also includes employees with less visible disabilities, such as people with long COVID and immunocompromised people who feel reluctant to put themselves at risk of getting COVID by coming into the office.
Unfortunately, many leaders fail to see the benefits of remote work for underrepresented groups, such as those with disabilities. Some even say the opposite is true, with JP Morgan CEO Jamie Dimon claiming that returning to the office will aid diversity.
What explains this poor executive decision-making? Part of the answer comes from a mental blind spot called the in-group bias. Our minds tend to favor and pay attention to the concerns of those who seem to look and think like us. Dimon and other executives without disabilities don’t perceive people with disabilities to be part of their in-group. They are thus blind to the concerns of those with disabilities, which leads to misperceptions such as Dimon’s claim that returning to the office will aid diversity.
In-group bias is one of many dangerous judgment errors known as cognitive biases. They impact decision making in all life areas, ranging from the future of work to relationships.
Another relevant cognitive bias is the empathy gap. This term refers to our difficulty empathizing with those outside of our in-group. The lack of empathy combines with the blindness from the in-group bias, causing executives to ignore the feelings of employees with disabilities and prospective hires.
Omission bias also plays a role. This dangerous judgment error causes us to perceive failure to act as less problematic than acting. Consequently, executives perceive a failure to support the needs of those with disabilities as a minor matter.
Conclusion
The failure to empower people with disabilities through remote work options will prove costly to companies’ bottom lines. Not only are they limiting their talent pool by 15 percent, they’re harming their ability to recruit and retain diverse candidates. And as their lawyers and HR departments will tell them, by violating the ADA, they are putting themselves in legal jeopardy.
By contrast, companies like Meta - and my clients - that offer remote work opportunities are seizing a competitive advantage by recruiting these underrepresented candidates. They’re lowering costs of labor while increasing diversity. The future belongs to the savvy companies that offer the flexibility that people with disabilities need.