Americans Fell for a Theranos-Style Scam 100 Years Ago. Will We Ever Learn?
The huckster understands what people want (an easy route to good health) and figures out just how to provide it, as long as no one asks too many questions.
"Americans are very much prone to this sort of thinking: Give me a pill or give me a magical bean that can make me lose weight!"
The keys to success: Hoopla, fancy technology, and gullibility. And oh yes, one more thing: a blood sample. Well, lots and lots of blood samples. Every testing fee counts.
Sound familiar? It could be the story of the preternaturally persuasive Elizabeth Holmes, the disgraced founder of Theranos who stands accused of perpetrating a massive blood-testing fraud. But this is a different story from a different time, one that dates back 100 years but sounds almost like it could unfold on the front page of The Wall Street Journal today.
The main difference: Back then, watchdogs thought they'd be able to vanquish fake medicine and scam science. Fat chance, it turned out. It seems we're more likely to lose-weight-quick than make much of a dent in quackery and health fraud.
Why? Have we learned anything at all over the past century? As we sweep into a new decade, experts say we're not as advanced as we'd like to think. But the fight against fraud and fakery continues.
Quackery: As American As America Itself
In the 17th century, British healers of questionable reputation got a new name -- "quack," from the Dutch word "quacksalver," which originally referred to someone who treats others with home remedies but developed a new meaning along the lines of "charlatan." And these quacks got a new place to sell their wares: the American colonies.
By 1692, a Boston newspaper advertised a patent medicine that promised to cure "the Griping of the Guts, and the Wind Cholick" and – for good measure – "preventeth that woeful Distemper of the Dry Belly Ach." A couple centuries later, the most famous woman in the United States wasn't a first lady or feminist but a hawker of nostrums named Lydia Estes Pinkham whose "vegetable compound" promised to banish "female complaints." One advertisement suggested that the "sure cure" would have saved the life of a Connecticut clergyman whose wife killed him after suffering from feminine maladies for 16 years.
By the early 20th century, Americans were fascinated by electricity and radiation, and both healers and hucksters embraced the new high-tech era. Men with flagging libidos, for example, could irradiate their private parts with the radioactive Radiendocrinator or buy battery-powered electric belts equipped with dangling bits to supercharge their, um, dangling bits.
The Rise of the Radio Wave 'Cure'
Enter radionics, the (supposed) science of better health via radio waves. The idea was that "healthy people radiate healthy energy," and sickness could be reversed through diagnosis and re-tuning, write Dr. Lydia Kang and Nate Pedersen in their 2017 book "Quackery: A Brief History of the Worst Ways to Cure Everything."
Detecting illness and fixing it required machinery -- Dynamizers, Radioclasts and Oscillocasts -- that could cost hundreds of dollars each. Thousands of physicians bought them. Conveniently, the machines could also work remotely, for a fee. The worried-and-potentially-unwell just needed to send a blood sample and, of course, a personal check.
Sting operations revealed radionics to be bogus. A skeptic sent a blood sample to one radionics practitioner in Albuquerque who reported back with news of an infected fallopian tube. In fact, the blood sample came from a male guinea pig. As an American Medical Association leader reported, the guinea pig "had shown no female characteristics up to that time, and a postmortem examination yielded no evidence of ladylike attributes."
When Quackery Refused to Yield
The rise of bogus medical technology in the early 20th century spawned a watchdog industry as organizations like the American Medical Association swept into action, said medical historian Eric Boyle, author of 2012's "Quack Medicine: A History of Combating Health Fraud in Twentieth-Century America."
"When quackery was recognized as a major problem, the people who campaigned for its demise were confident that they could get rid of it," he said. "A lot of people believed that increased education, the truths of science, and laws designed to protect consumers would ultimately drive quackery from the marketplace. And then throughout the century, as modern medicine developed, and more effectively treated one disease after another, many observers remained confident in that prediction."
But fake medicine persisted as Americans continued their quest to get-healthy-quick… or get-rich-quick by promising to help others get-healthy-quick. Even radionics refused to die. It's still around in various forms. And, as the Theranos scandal reveals, we're still hoping our blood can offer the keys to longevity and good health.
Why Do We Still Fall for Scams?
In our own era, the Theranos company rose to prominence when founder and CEO Elizabeth Holmes convinced journalists and investors that she'd found a way to cheaply test drops of blood for hundreds of conditions. Then it all fell apart, famously, when the world learned that the technology didn't work. The company has folded, and Holmes faces a federal trial on fraud charges this year.
"There were a lot of prominent, very smart people who bought into the myth of Elizabeth Holmes," a former employee told "60 Minutes," even though the blood tests never actually worked as advertised.
Shouldn't "prominent, very smart people" know better? "People are gullible," said Dr. Stephen Barrett, a psychiatrist and leading quack-buster who runs the QuackWatch website. But there's more to the story. According to him, we're uniquely vulnerable as individuals to bogus medicine.
Scam artists specifically pinpoint their target audiences, such as "smart people," desperate people and alienated people, he said.
Smart people, for example, might be overconfident about their ability to detect fraud and fall for bogus medicine. Alienated people may distrust the establishment, whether it's the medical field or government watchdogs, and be more receptive to alternative sources of information.
Dr. Barrett also points a finger at magical thinking, which comes in different forms. It could mean a New Age-style belief that our minds can control the world around us. Or, as professional quack-buster Alex Berezow said, it could refer to "our cultural obsession with quick fixes."
"Americans are very much prone to this sort of thinking: Give me a pill or give me a magical bean that can make me lose weight! But complex problems need complex solutions," said Berezow, a microbiologist who debunks junk science in his job as a spokesman for the American Council on Science & Health.
American mistrust of expertise makes matters worse, he said. "When I tell people they need to get vaccinated, I'm called a shill for the pharmaceutical industry," he said. "If I say dietary supplements generally don't work, I'm a shill for doctors who want to keep people sick."
What can ordinary citizens do to protect themselves from fake medicine? "You have to have a healthy skepticism of everything," Berezow said. "When you come across something new, is someone trying to take advantage of you? It's a horrible way to think about the world, but there's some truth to it."
The government and experts have their own roles to play via regulation and education, respectively. For all the criticism it gets, the Food & Drug Administration does serve as a bulwark against fakery in prescription medicine. And while celebrities like Gwyneth "Goop" Paltrow hawk countless questionable medical products on the Internet, scientists and physicians are fighting back by using social media as a tool to promote the truth. There's a bid to "flood the information highway with truth to turn the storm of fake promotional stuff into a trickle," said Dr. Randi Hutter Epstein, a writer in residence at Yale School of Medicine and author of 2018's "Aroused: The History of Hormones and How They Control Just About Everything."
What's next? Like death, taxes and Cher, charlatans are likely to always be with us. Boyle quoted the late William Jarvis, a pioneering quack-buster in the late 20th century who believed health fraud would never be eradicated: "Like any chronic disease, we will have to live with it while we do our best to fight it."
Many leaders at top companies are trying to get workers to return to the office. They say remote and hybrid work are bad for their employees’ mental well-being, leading to a sense of social isolation, meaninglessness, and a lack of work-life boundaries, and that we should therefore all go back to office-centric work.
One example is Google, where the company’s leadership is defending its requirement of mostly in-office work for all staff as necessary to protect social capital, meaning people’s connections to and trust in one another. That’s despite a survey of over 1,000 Google employees showing that two-thirds feel unhappy about being forced to work in the office three days per week. In internal meetings and public letters, many have threatened to leave, and some are already quitting to go to other companies with more flexible options.
Last month, GM rolled out a policy similar to Google’s, but had to backtrack because of intense employee opposition. The same is happening in some places outside of the U.S. For instance, three-fifths of all Chinese employers are refusing to offer permanent remote work options, according to a survey this year from The Paper.
For their claims that remote work hurts well-being, some of these office-centric traditionalists cite a number of prominent articles. For example, Arthur Brooks claimed in an essay that “aggravation from commuting is no match for the misery of loneliness, which can lead to depression, substance abuse, sedentary behavior, and relationship damage, among other ills.” An article in Forbes reported that over two-thirds of employees who work from home at least part of the time had trouble getting away from work at the end of the day. And Fast Company has a piece about how remote work can “exacerbate existing mental health issues” like depression and anxiety.
For his part, author Malcolm Gladwell has also championed a swift return to the office, saying there is a “core psychological truth, which is we want you to have a feeling of belonging and to feel necessary…I know it’s a hassle to come into the office, but if you’re just sitting in your pajamas in your bedroom, is that the work life you want to live?”
These arguments may sound logical to some, but they fly in the face of research and my own experience as a behavioral scientist and as a consultant to Fortune 500 companies. In these roles, I have seen the pitfalls of in-person work, which can be just as problematic, if not more so. Remote work is not without its own challenges, but I have helped 21 companies implement a series of simple steps to address them.
Research finds that remote work is actually better for you
The trouble with the articles described above, and with claims by traditionalist business leaders and gurus, stems from a sneaky misdirection. They decry the negative impact of remote and hybrid work on wellbeing. Yet they gloss over the damage to wellbeing caused by the alternative, namely office-centric work.
It’s like comparing remote and hybrid work to a state of leisure. Sure, people would feel less isolated if they could hang out and have a beer with their friends instead of working. They could take care of their existing mental health issues if they could visit a therapist. But that’s not in the cards. What’s in the cards is office-centric work. That means the frustration of a long commute, sitting at your desk in an often-uncomfortable and oppressive open office for at least eight hours, and having a sad desk lunch and unhealthy snacks, sometimes at insane expense. And for making it through this series of insults, you’re rewarded with more frustration while commuting back home.
So what happens when we compare apples to apples? That’s when we need to hear it straight from the horse’s mouth: surveys of employees themselves, who experienced both in-office work before the pandemic and hybrid or remote work after COVID struck.
Consider a 2022 survey by Cisco of 28,000 full-time employees around the globe. Nearly 80 percent of respondents say that remote and hybrid work improved their overall well-being: that applies to 83 percent of Millennials, 82 percent of Gen Z, 76 percent of Gen X, and 66 percent of Baby Boomers. The vast majority of respondents felt that working remotely improved their work-life balance.
Much of that improvement stemmed from saving time due to not needing to commute and having a more flexible schedule: 90 percent saved 4 to 8 hours or more per week. What did they do with that extra time? The top choice for almost half was spending more time with family, friends and pets, which certainly helped address the problem of isolation from the workplace. Indeed, three-quarters of them report that working from home improved their family relationships, and 51 percent strengthened their friendships. Twenty percent used the freed up hours for self-care.
Of the small number who report that their work-life balance has not improved or has even worsened, the number one reason given is the difficulty of disconnecting from work. Meanwhile, 82 percent of all respondents report that working from anywhere has made them happier, and over half say that remote work decreased their stress levels.
Other surveys back up Cisco’s findings. For example, a 2022 Future Forum survey compared knowledge workers who worked full-time in the office, in a hybrid modality, and fully remote. It found that full-time in-office workers felt the least satisfied with work-life balance, hybrid workers were in the middle, and fully remote workers felt most satisfied. The same distribution applied to questions about stress and anxiety. A mental health website called Tracking Happiness found in a 2022 survey of over 12,000 workers that fully remote employees report a happiness level about 20 percent greater than office-centric ones. Another survey by CNBC in June found that fully remote workers are more often very satisfied with their jobs than workers who are fully in-person.
Academic peer-reviewed research provides further support. Consider a 2022 study published in the International Journal of Environmental Research and Public Health of bank workers who worked on the same tasks of advising customers either remotely or in-person. It found that fully remote workers experienced higher meaningfulness, self-actualization, happiness, and commitment than in-person workers. Another study, published by the National Bureau of Economic Research, reported that hybrid workers, compared to office-centric ones, experienced higher satisfaction with work and had 35 percent more job retention.
What about the supposed burnout crisis associated with remote work? Indeed, burnout is a concern. A survey by Deloitte found that 77 percent of workers experienced burnout at their current job. Gallup came up with a lower number, 67 percent, in its survey. But guess what? Both of those surveys are from 2018, long before the era of widespread remote work.
By contrast, in a Gallup survey in late 2021, 58 percent of respondents reported less burnout. An April 2021 McKinsey survey found burnout in 54 percent of Americans and 49 percent globally. A September 2021 survey by The Hartford reported 61 percent burnout. Arguably, the increase in full or part-time remote opportunities during the pandemic helped to address feelings of burnout, rather than increasing them. Indeed, that finding aligns with the earlier surveys and peer-reviewed research suggesting remote and hybrid work improves wellbeing.
Remote work isn’t perfect – here’s how to fix its shortcomings
Still, burnout is a real problem for hybrid and remote workers, as it is for in-office workers. Employers need to offer mental health benefits with online options to help employees address these challenges, regardless of where they’re working.
Moreover, while they’re better overall for wellbeing, remote and hybrid work arrangements do have specific disadvantages around work-life separation. To address them, I advise the clients I’ve helped transition to hybrid and remote work to establish norms and policies focused on clear expectations and boundaries.
Some people expect their Slack or Microsoft Teams messages to be answered within an hour, while others check Slack once a day. Some believe email requires a response within three hours, and others feel three days is fine. As a result of such uncertainty and lack of clarity about what’s appropriate, too many people feel uncomfortable disconnecting and not replying to messages or doing work tasks after hours. That might stem from a fear of not meeting their boss’s expectations or not wanting to let their colleagues down.
To solve this problem, companies need to establish and incentivize clear expectations and boundaries. They should develop policies and norms around response times for different channels of communication. They also need to clarify work-life boundaries – for example, the frequency and types of unusual circumstances that will require employees to work outside of regular hours.
Moreover, for working at home and collaborating with others, there’s sometimes an unhealthy expectation that once you start your workday in your home office chair, you’ll work continuously while sitting there (except for your lunch break). That’s not how things work in the office, which has physical and mental breaks built in throughout the day. You take 5-10 minutes to walk from one meeting to another, or you go to pick up your copies from the printer and chat with a coworker on the way.
Those and similar physical and mental breaks, research shows, decrease burnout, improve productivity, and reduce mistakes. That’s why companies should strongly encourage employees to take at least a 10-minute break every hour during remote work. At least half of those breaks should involve physical activity, such as stretching or walking around, to counteract the dangerous effects of prolonged sitting. Other breaks should be restorative mental activities, such as meditation, brief naps, walking outdoors, or whatever else feels restorative to you.
To facilitate such breaks, my client organizations such as the University of Southern California’s Information Sciences Institute shortened hour-long meetings to 50 minutes and half-hour meetings to 25 minutes, to give everyone – both in-person and remote workers – a mental and physical break and transition time.
Very few people will be reluctant to have shorter meetings. After that works out, move to other aspects of setting boundaries and expectations. Doing so will require helping team members get on the same page and reduce conflicts and tensions. By setting clear expectations, you’ll address the biggest challenge for wellbeing for remote and hybrid work: establishing clear work-life boundaries.
In May 2022, Californian biotech Ultima Genomics announced that its UG 100 platform was capable of sequencing an entire human genome for just $100, a landmark moment in the history of the field. The announcement was particularly remarkable because few had previously heard of the company, a relative unknown in an industry long dominated by global giant Illumina, which controls about 80 percent of the world’s sequencing market.
Ultima’s secret was to completely revamp many technical aspects of the way Illumina has traditionally deciphered DNA. The process usually involves first splitting the double helix DNA structure into single strands, then breaking these strands into short fragments which are laid out on a glass surface called a flow cell. When this flow cell is loaded into the sequencing machine, color-coded tags are attached to each individual base letter. A laser scans the bases individually while a camera simultaneously records the color associated with each, a process repeated until every single fragment has been sequenced.
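The cycle described above can be sketched in a toy simulation. This is purely illustrative: the color-to-base mapping is invented, and real base-calling software works from noisy image data rather than clean strings.

```python
# Toy model of sequencing-by-synthesis: in each cycle, one tagged base per
# fragment is imaged, and the recorded color is translated back into a base.
# The color-to-base mapping below is invented for illustration.

COLOR_TO_BASE = {"red": "A", "green": "C", "blue": "G", "yellow": "T"}
BASE_TO_COLOR = {base: color for color, base in COLOR_TO_BASE.items()}

def sequence_flow_cell(fragments):
    """Simulate cycles of tag attachment, laser scan, and color read-out."""
    reads = ["" for _ in fragments]
    for cycle in range(max(len(f) for f in fragments)):
        for i, fragment in enumerate(fragments):
            if cycle < len(fragment):
                color = BASE_TO_COLOR[fragment[cycle]]  # camera records this color
                reads[i] += COLOR_TO_BASE[color]        # base is called from the color
    return reads

print(sequence_flow_cell(["ACGT", "GGA"]))  # → ['ACGT', 'GGA']
```

The point of the sketch is the cyclical structure: every fragment on the flow cell advances one base per cycle, which is why a run takes as many imaging cycles as the longest fragment.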
Instead, Ultima has found a series of shortcuts to slash the cost and boost efficiency. “Ultima Genomics has developed a fundamentally new sequencing architecture designed to scale beyond conventional approaches,” says Josh Lauer, Ultima’s chief commercial officer.
This ‘new architecture’ is a series of subtle but highly impactful tweaks to the sequencing process, ranging from replacing the costly flow cell with a silicon wafer, which is both cheaper and allows more DNA to be read at once, to using machine learning to convert optical data into usable information.
To put the $100 genome in perspective, back in 2012 the cost of sequencing a single genome was around $10,000, a price tag which dropped to $1,000 a few years later. Before Ultima’s announcement, the cost of sequencing an individual genome was around $600.
While Ultima’s new machine is not yet widely available, Illumina’s response has been rapid. Last month the company unveiled the NovaSeq X series, which it describes as its fastest, most cost-efficient sequencing platform yet, capable of sequencing genomes at $200, with further price cuts likely to follow.
But what will the rapidly tumbling cost of sequencing actually mean for medicine? “Well to start with, obviously it’s going to mean more people getting their genome sequenced,” says Michael Snyder, professor of genetics at Stanford University. “It'll be a lot more accessible to people.”
At the moment sequencing is mainly limited to certain cancer patients, where it is used to inform treatment options, and to individuals with undiagnosed illnesses. In the past, initiatives such as SeqFirst have attempted to further widen access to genome sequencing, based on a growing body of research illustrating the potential benefits of the technology in healthcare. Several studies have found that nearly 12 percent of healthy people who have their genome sequenced discover they have a variant pointing to a heightened risk of developing a disease that can be monitored, treated or prevented.
“While whole genome sequencing is not yet widely used in the U.S., it has started to come into pediatric critical care settings such as newborn intensive care units,” says Professor Michael Bamshad, who heads the genetic medicine division in the University of Washington’s pediatrics department. “It is also being used more often in outpatient clinical genetics services, particularly when conventional testing fails to identify explanatory variants.”
But the cost of sequencing itself is only one part of the price tag. The subsequent clinical interpretation and genetic counseling services often come to several thousand dollars, a cost which insurers are not always willing to pay.
As a result, while Bamshad and others hope that the arrival of the $100 genome will create new opportunities to use genetic testing in innovative ways, the most immediate benefits are likely to come in the realm of research.
Bigger Data
There are numerous ways in which cheaper sequencing is likely to advance scientific research, for example by enabling data collection on much larger patient groups. This will be a major boon to scientists working on complex heterogeneous diseases such as schizophrenia or depression, where many genes are involved, each exerting subtle effects, and where there is substantial variance across the patient population. Bigger studies could help scientists identify subgroups of patients in whom the disease appears to be driven by similar gene variants, who can then be more precisely targeted with specific drugs.
David Curtis, a genetics professor at University College London, says that scientists studying these illnesses have previously been forced to rely on genome-wide association studies which are limited because they only identify common gene variants. “We might see a significant increase in the number of large association studies using sequence data,” he says. “It would be far preferable to use this because it provides information about rare, potentially functional variants.”
Cheaper sequencing will also aid researchers working on diseases which have traditionally been underfunded. Bamshad cites cystic fibrosis, a condition which affects around 40,000 children and adults in the U.S., as one particularly pertinent example.
“Funds for gene discovery for rare diseases are very limited,” he says. “We’re one of three sites that did whole genome sequencing on 5,500 people with cystic fibrosis, but our statistical power is limited. A $100 genome would make it much more feasible to sequence everyone in the U.S. with cystic fibrosis and make it more likely that we discover novel risk factors and pathways influencing clinical outcomes.”
For progressive diseases that are more common, like cancer and type 2 diabetes, as well as neurodegenerative conditions like multiple sclerosis and ALS, geneticists will be able to go even further and afford to sequence individual tumor cells or neurons at different time points. This will enable them to analyze how individual DNA modifications, like methylation, change as the disease develops.
In the case of cancer, this could help scientists understand how tumors evolve to evade treatments. Within a clinical setting, the ability to sequence not just one but many different cells across a patient’s tumor could point to the combination of treatments offering the best chance of eradicating the entire cancer.
“What happens at the moment with a solid tumor is you treat with one drug, and maybe 80 percent of that tumor is susceptible to that drug,” says Neil Ward, vice president and general manager in the EMEA region for genomics company PacBio. “But the other 20 percent of the tumor has already got mutations that make it resistant, which is probably why a lot of modern therapies extend life for sadly only a matter of months rather than curing, because they treat a big percentage of the tumor, but not the whole thing. So going forwards, I think that we will see genomics play a huge role in cancer treatments, through using multiple modalities to treat someone's cancer.”
If insurers can figure out the economics, Snyder even foresees a future where at a certain age, all of us can qualify for annual sequencing of our blood cells to search for early signs of cancer or the potential onset of other diseases like type 2 diabetes.
“There are companies already working on looking for cancer signatures in methylated DNA,” he says. “If it was determined that you had early stage cancer, pre-symptomatically, that could then be validated with targeted MRI, followed by surgery or chemotherapy. It makes a big difference catching cancer early. If there were signs of type 2 diabetes, you could start taking steps to mitigate your glucose rise, and possibly prevent it or at least delay the onset.”
This would already revolutionize the way we seek to prevent a whole range of illnesses, but others feel that the $100 genome could also usher in even more powerful and controversial preventative medicine schemes.
Newborn screening
In the eyes of Kári Stefánsson, the Icelandic neurologist who has been a driving force behind so many advances in human genetics over the last 25 years, the falling cost of sequencing means it will be feasible to sequence the genome of every baby born.
“We have recently done an analysis of genomes in Iceland and the UK Biobank, and in 4 percent of people you find mutations that lead to serious disease, that can be prevented or dealt with,” says Stefansson, CEO of deCODE genetics, a subsidiary of the pharmaceutical company Amgen. “This could transform our healthcare systems.”
As well as identifying newborns with rare diseases, this kind of genomic information could be used to compute a person’s risk score for developing chronic illnesses later in life. If for example, they have a higher than average risk of colon or breast cancer, they could be pre-emptively scheduled for annual colonoscopies or mammograms as soon as they hit adulthood.
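At its simplest, the risk-score idea described above is a weighted sum over genetic variants. Here is a minimal sketch, with made-up variant IDs and effect weights; real polygenic risk scores combine thousands of variants with weights estimated from large association studies.

```python
# Toy polygenic risk score: sum of (risk-allele count × effect weight) over
# variants. All variant IDs and weights here are invented for illustration.

RISK_WEIGHTS = {       # hypothetical per-variant effect sizes
    "rs0001": 0.30,
    "rs0002": 0.12,
    "rs0003": 0.05,
}

def polygenic_risk_score(genotype):
    """genotype maps variant ID -> number of risk alleles carried (0, 1, or 2)."""
    return sum(RISK_WEIGHTS[variant] * count
               for variant, count in genotype.items()
               if variant in RISK_WEIGHTS)

# Two copies of rs0001, one of rs0002, none of rs0003:
score = polygenic_risk_score({"rs0001": 2, "rs0002": 1, "rs0003": 0})
print(round(score, 2))  # 2*0.30 + 1*0.12 = 0.72
```

A person whose score falls in the upper tail of the population distribution could then be flagged for earlier or more frequent screening, as the paragraph above suggests.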
To a limited extent, this is already happening. In the UK, Genomics England has launched the Newborn Genomes Programme, which plans to undertake whole-genome sequencing of up to 200,000 newborn babies, with the aim of enabling the early identification of rare genetic diseases.
However, some scientists feel that it is tricky to justify sequencing the genomes of apparently healthy babies, given the data privacy issues involved. They point out that we still know too little about the links which can be drawn between genetic information at birth, and risk of chronic illness later in life.
“I think there are very difficult ethical issues involved in sequencing children if there are no clear and immediate clinical benefits,” says Curtis. “They cannot consent to this process. I have not had my own genome sequenced and I would not have wanted my parents to have agreed to this. I don’t see that sequencing children for the sake of some vague, ill-defined benefits could ever be justifiable.”
Curtis points out that there are many inherent risks in this data being available: it may fall into the hands of insurance companies, and it could even be used by governments for surveillance purposes.
“Genetic sequence data is very useful indeed for forensic purposes. Its full potential has yet to be realized but identifying rare variants could provide a quick and easy way to find relatives of a perpetrator,” he says. “If large numbers of people had been sequenced in a healthcare system then it could be difficult for a future government to resist the temptation to use this as a resource to investigate serious crimes.”
While wider availability of sequencing will present difficult ethical and moral challenges, it will also offer many benefits for society as a whole. Cheaper sequencing will help boost the diversity of genomic datasets, which have traditionally been skewed towards individuals of white European descent, meaning that much of the actionable medical information from these studies is not relevant to people of other ethnicities.
Ward predicts that in the coming years, the growing amount of genetic information will ultimately change the outcomes for many with rare, previously incurable illnesses.
“If you’re the parent of a child that has a suspected rare genetic disease, their genome will get sequenced, and while sadly that doesn’t always lead to treatments, it’s building up a knowledge base so companies can spring up and target that niche of a disease,” he says. “As a result there’s a whole tidal wave of new therapies that are going to come to market over the next five years, as the genetic tools we have mature and evolve.”