Elizabeth Holmes Through the Director’s Lens
"The Inventor," a chronicle of Theranos's storied downfall, premiered recently on HBO. Leapsmag reached out to director Alex Gibney, whom The New York Times has called "one of America's most successful and prolific documentary filmmakers," for his perspective on Elizabeth Holmes and the world she inhabited.
Do you think Elizabeth Holmes was a charismatic sociopath from the start — or is she someone who had good intentions, over-promised, and began the lies to keep her business afloat, a "fake it till you make it" entrepreneur like Thomas Edison?
I'm not qualified to say if EH was or is a sociopath. I don't think she started Theranos as a scam whose only purpose was to make money. If she had done so, she surely would have taken more money for herself along the way. I do think that she had good intentions and that she, as you say, "began the lies to keep her business afloat." ([Reporter John] Carreyrou's book points out that those lies began early.) I think that the Edison comparison is instructive for a lot of reasons.
First, Edison was the original "fake-it-till-you-make-it" entrepreneur. That puts this kind of behavior in the mainstream of American business. By saying that, I am NOT endorsing the ethic, just the opposite. As one Enron executive mused about the mendacity there, "Was it fraud or was it bad marketing?" That gives you a sense of how baked-in the "fake it" sensibility is.
"Having a thirst for fame and a noble cause enabled her to think it was OK to lie in service of those goals."
I think EH shares one other thing with Edison, which is a huge ego coupled with a talent for storytelling as long as she is the heroic, larger-than-life main character. It's interesting that EH calls her initial device "Edison." Edison was the world's most famous "inventor," both for the devices that came out of his shop and for his gift for "self-invention." As Randall Stross notes in "The Wizard of Menlo Park," he was the first celebrity businessman. In addition to her "good intentions," EH was certainly motivated by fame and glory, and many of her lies were in service to those goals.
Having a thirst for fame and a noble cause enabled her to think it was OK to lie in service of those goals. That doesn't excuse the lies. But those noble goals may have allowed EH to excuse them for herself or, more perniciously, to make believe that they weren't lies at all. This is where we get into scary psychological territory.
But rather than thinking of it as freakish, I think it's more productive to think of it as an exaggeration of the way we all lie to others and to ourselves. That's the point of including the Dan Ariely experiment with the dice. In that experiment, most of the subjects cheated more when they thought they were doing it for a good cause. Even more disturbing, that "good cause" allowed them to lie much more effectively because they had come to believe they weren't doing anything wrong. As it turns out, economics isn't a rational practice; it's the practice of rationalizing.
Where EH and Edison differ is that Edison had a firm grip on reality. He knew he could find a way to make the incandescent lightbulb work. There is no evidence that EH was close to making her "Edison" work. But rather than face reality (and possibly adjust her goals) she pretended that her dream was real. That kind of "over-promising" or "bold vision" is one thing when you are making a prototype in the lab. It's a far more serious matter when you are using a deeply flawed system on real patients. EH can tell herself that she had to do that (Walgreens was ready to walk away if she hadn't "gone live") or else Theranos would have run out of money.
But look at the calculation she made: she thought it was worth putting lives at risk in order to make her dream come true. Now we're getting into the realm of the sociopath. But my experience leads me to believe that -- as in the case of the Milgram experiment -- most people don't do terrible things right away, they come to crimes gradually as they become more comfortable with bigger and bigger rationalizations. At Theranos, the more valuable the company became, the bigger grew the lies.
The two whistleblowers come across as courageous heroes, going up against a powerful and intimidating company. The contrast between their youth and lack of power and the old, elite backers of Theranos is staggering, and yet justice triumphed. Were the whistleblowers hesitant or afraid to appear in the film, or were they eager to share their stories?
By the time I got to them, they were willing and eager to tell their stories, once I convinced them that I would honor their testimony. In the case of Erika and Tyler, they were nudged to participate by John Carreyrou, in whom they had enormous trust.
"It's simply crazy that no one demanded to see an objective demonstration of the magic box."
Why do you think so many elite veterans of politics and venture capitalism succumbed to Holmes' narrative in the first place, without checking into the details of its technology or financials?
The reasons are all in the film. First, Channing Robertson and many of the old men on her board were clearly charmed by her and maybe attracted to her. They may have rationalized their attraction by convincing themselves it was for a good cause! Second, as Dan Ariely tells us, we all respond to stories -- more than graphs and data -- because they stir us emotionally. EH was a great storyteller. Third, the story of her as a female inventor and entrepreneur in male-dominated Silicon Valley is a tale that they wanted to invest in.
There may have been other factors. EH was very clever about the way she put together an ensemble of credibility. How could Channing Robertson, George Shultz, Henry Kissinger and Jim Mattis all be wrong? And when Walgreens put the Wellness Centers in stores, investors like Rupert Murdoch assumed that Walgreens must have done its due diligence. But it hadn't!
It's simply crazy that no one demanded to see an objective demonstration of the magic box. But that blind faith, as it turns out, is more a part of capitalism than we have been taught.
Do you think that Roger Parloff deserves any blame for the glowing Fortune story on Theranos, since he appears in the film to blame himself? Or was he just one more victim of Theranos's fraud?
He put her on the cover of Fortune so he deserves some blame for the fraud. He still blames himself. That willingness to hold himself to account shows how seriously he takes the job of a journalist. Unlike Elizabeth, Roger has the honesty and moral integrity to admit that he made a mistake. He owned up to it and published a mea culpa. That said, Roger was also a victim because Elizabeth lied to him.
Do you think investors in Silicon Valley, with their FOMO attitudes and deep pockets, are vulnerable to making the same mistake again with a shiny new startup, or has this saga been a sober reminder to do their due diligence first?
Many of the mistakes made with Theranos were the same mistakes made with Enron. We must learn to recognize that we are, by nature, trusting souls. Knowing that should lead us to a guiding slogan: "trust but verify."
The irony of Holmes dancing to "U Can't Touch This" is almost too perfect. How did you find that footage?
It was leaked to us.
"Elizabeth Holmes is now famous for her fraud. Who better to host the re-boot of 'The Apprentice.'"
Holmes is facing up to 20 years in prison for federal fraud charges, but Vanity Fair recently reported that she is seeking redemption, taking meetings with filmmakers for a possible documentary to share her "real" story. What do you think will become of Holmes in the long run?
It's usually a mistake to handicap a trial. My guess is that she will be convicted and do some prison time. But maybe she can convince jurors -- the way she convinced journalists, her board, and her investors -- that, on account of her noble intentions, she deserves to be found not guilty. "Somewhere, over the rainbow…"
After the trial, and possibly prison, I'm sure that EH will use her supporters (like Tim Draper) to find a way to use the virtual currency of her celebrity to rebrand herself and launch something new. Fitzgerald famously said that "there are no second acts in American lives." That may be the stupidest thing he ever said.
Donald Trump failed at virtually every business he ever embarked on. But he became a celebrity for being a fake businessman and used that celebrity -- and phony expertise -- to become president of the United States. Elizabeth Holmes is now famous for her fraud. Who better to host the reboot of "The Apprentice"? And then?
"You Can't Touch This!"
Kira Peikoff was the editor-in-chief of Leaps.org from 2017 to 2021. As a journalist, her work has appeared in The New York Times, Newsweek, Nautilus, Popular Mechanics, The New York Academy of Sciences, and other outlets. She is also the author of four suspense novels that explore controversial issues arising from scientific innovation: Living Proof, No Time to Die, Die Again Tomorrow, and Mother Knows Best. Peikoff holds a B.A. in Journalism from New York University and an M.S. in Bioethics from Columbia University. She lives in New Jersey with her husband and two young sons. Follow her on Twitter @KiraPeikoff.
Not long ago, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that—it looked at itself this way and that, like a toddler exploring itself in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built a kind of spatial self-awareness, much as humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.
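For readers who want to see what that kind of self-modeling can look like in practice, here is a minimal sketch, written as a stand-in rather than the Columbia lab's actual code: a tiny neural network learns to predict whether a given point in space is occupied by a toy two-joint arm in a given pose. The arm geometry, network size and training loop are all illustrative assumptions.

```python
# Minimal self-model sketch (not the Columbia lab's code): a small network
# predicts whether a point in space is occupied by a toy two-joint planar arm
# in a given pose. Link lengths, thickness, network size and training loop
# are all illustrative assumptions.
import torch
import torch.nn as nn

L1, L2 = 1.0, 0.8        # assumed link lengths of the toy arm
THICK = 0.1              # assumed arm "thickness" for the occupancy test

def arm_points(theta1, theta2, n=20):
    """Forward kinematics: sample points along both links for a given pose."""
    t = torch.linspace(0, 1, n)[:, None]
    elbow = L1 * torch.stack([torch.cos(theta1), torch.sin(theta1)])
    hand = elbow + L2 * torch.stack([torch.cos(theta1 + theta2),
                                     torch.sin(theta1 + theta2)])
    link1 = t * elbow                    # points along the first link
    link2 = elbow + t * (hand - elbow)   # points along the second link
    return torch.cat([link1, link2])

def occupied(pose, query):
    """Ground-truth label: is `query` within THICK of any point on the arm?"""
    pts = arm_points(pose[0], pose[1])
    return (torch.cdist(query[None], pts).min() < THICK).float()

# The self-model: (joint angles, query point) -> is that point occupied?
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5000):
    pose = torch.rand(2) * 2 * torch.pi   # a random "motor action"
    query = torch.rand(2) * 4 - 2         # a random point in the workspace
    label = occupied(pose, query)
    logit = model(torch.cat([pose, query]))
    loss = loss_fn(logit, label[None])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once trained, such a model can be queried for any planned pose before the arm actually moves, which is the sense in which a robot "knows" the space its body will take up.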
For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots are ultimately superior to humans in complex calculations, following rules to a tee and repeating the same steps perfectly. But even the biggest successes for human-robot collaborations—those in manufacturing and automotive industries—still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they’re moving and when they can endanger a human.
Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some of them proved invaluable in many natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.
Now roboticists are going a step further, training their creations to do even better: understand their own image in space and interact with humans like humans do. Today, there are already robot-teachers like KeeKo, robot-pets like Moffin, robot-babysitters like iPal, and robotic companions for the elderly like Pepper.
But even these reasonably intelligent creations still have huge limitations, some scientists think. “There are niche applications for the current generations of robots,” says professor Anthony Zador at Cold Spring Harbor Laboratory—but they are not “generalists” who can do varied tasks all on their own, as they mostly lack the abilities to improvise, make decisions based on a multitude of facts or emotions, and adjust to rapidly changing circumstances. “We don’t have general purpose robots that can interact with the world. We’re ages away from that.”
Robotic spatial self-awareness – the achievement by the team at Columbia – is an important step toward creating more intelligent machines. Hod Lipson, the professor of mechanical engineering who runs the Columbia lab, says that future robots will need this ability to assist humans better. Knowing how you look and where your parts are in space decreases the need for human oversight. It also helps a robot detect and compensate for damage and keep up with its own wear-and-tear. And it allows robots to realize when something is wrong with them or their parts. “We want our robots to learn and continue to grow their minds and bodies on their own,” Chen says. That’s what Zador wants too—and on a much grander level. “I want a robot who can drive my car, take my dog for a walk and have a conversation with me.”
Columbia scientists have trained a robot to become aware of its own "body," so it can map the right path to touch a ball without running into an obstacle, in this case a square.
Jane Nisselson and Yinuo Qin/ Columbia Engineering
Today’s technological advances are making some of these leaps of progress possible. One of them is so-called deep learning—a method that trains artificial intelligence systems to learn and use information in a way loosely similar to how humans do. A machine learning method based on neural network architectures with multiple layers of processing units, deep learning has been used to successfully teach machines to recognize images, understand speech and even write text.
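To make the phrase "multiple layers of processing units" concrete, here is a minimal sketch of such a stack in PyTorch; the layer sizes are arbitrary and the example is illustrative rather than any particular production model.

```python
# A minimal sketch of the "multiple layers of processing units" behind deep
# learning: each Linear + ReLU pair is one layer, and stacking them lets the
# network build up increasingly abstract features. Sizes are illustrative.
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Flatten(),                     # e.g. a 28x28 grayscale image -> 784 numbers
    nn.Linear(784, 256), nn.ReLU(),   # first layer of processing units
    nn.Linear(256, 128), nn.ReLU(),   # second layer, combining earlier features
    nn.Linear(128, 10),               # output layer: a score for each of 10 classes
)
```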
One of these machine learning language whizzes, BERT, trained by Google, can finish sentences. Another, called GPT-3 and designed by the San Francisco-based company OpenAI, can write little stories. Yet both of them still make funny mistakes in their linguistic exercises that even a child wouldn’t make. According to a paper published by Stanford’s Center for Research on Foundation Models, BERT seems not to understand the word “not.” When asked to fill in the word after “A robin is a __,” it correctly answers “bird.” But insert the word “not” into that sentence (“A robin is not a __”) and BERT still completes it the same way. Similarly, in one of its stories, GPT-3 wrote that if you mix a spoonful of grape juice into your cranberry juice and drink the concoction, you die. It seems that robots, and artificial intelligence systems in general, are still missing some rudimentary facts of life that humans and animals grasp naturally and effortlessly.
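The robin probe is easy to try at home. The snippet below uses the open-source Hugging Face transformers library and a public BERT checkpoint; this is an assumed tool choice, since the article does not say what the Stanford researchers used, and the exact completions can vary from run to run.

```python
# Reproducing the robin fill-in-the-blank probe with the Hugging Face
# `transformers` library and a public BERT model (an assumed tool choice;
# the top answer may differ from what the Stanford paper reports).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

print(fill("A robin is a [MASK].")[0]["token_str"])      # typically "bird"
print(fill("A robin is not a [MASK].")[0]["token_str"])  # often still "bird"
```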
How does one give robots a genome? Zador has an idea. We can’t really equip machines with real biological nucleotide-based genes, but we can mimic the neuronal blueprint those genes create.
It's not exactly the robots’ fault. Compared to humans, and all other organisms that have been around for thousands or millions of years, robots are very new. They are missing out on eons of evolutionary data-building. Animals and humans are born with the ability to do certain things because those abilities are pre-wired in them. Flies know how to fly, fish know how to swim, cats know how to meow, and babies know how to cry. Yet flies don’t really learn to fly, fish don’t learn to swim, cats don’t learn to meow, and babies don’t learn to cry—they are born able to execute such behaviors because they’re preprogrammed to do so. All of that happens thanks to the millions of years of evolution wired into their respective genomes, which give rise to the brain’s neural networks responsible for these behaviors. Robots are the newbies, missing out on that trove of information, Zador argues.
A neuroscience professor who studies how brain circuitry generates various behaviors, Zador has a different approach to developing the robotic mind. Until their creators figure out a way to imbue the bots with that information, robots will remain quite limited in their abilities. Each model will only be able to do certain things it was programmed to do, but it will never go above and beyond its original code. So Zador argues that we have to start giving robots a genome.
How does one do that? Zador has an idea. We can’t really equip machines with real biological nucleotide-based genes, but we can mimic the neuronal blueprint those genes create. Genomes lay out rules for brain development. Specifically, the genome encodes blueprints for wiring up our nervous system—the details of which neurons are connected, the strength of those connections and other specs that will later hold the information learned throughout life. “Our genomes serve as blueprints for building our nervous system and these blueprints give rise to a human brain, which contains about 100 billion neurons,” Zador says.
If you think about what a genome is, he explains, it is essentially a very compact and compressed form of information storage. Conceptually, genomes are similar to CliffsNotes and other study guides. When students read these short summaries, they get the gist of what happens in a book without actually reading the book. And that’s how we should be designing the next generation of robots if we ever want them to act like humans, Zador says. “We should give them a set of behavioral CliffsNotes, which they can then unwrap into brain-like structures.” Robots that have such brain-like structures will acquire a set of basic rules to generate basic behaviors and use them to learn more complex ones.
Currently Zador is in the process of developing algorithms that function like simple rules that generate such behaviors. “My algorithms would write these CliffsNotes, outlining how to solve a particular problem,” he explains. “And then, the neural networks will use these CliffsNotes to figure out which ones are useful and use them in their behaviors.” That’s how all living beings operate. They use the pre-programmed info from their genetics to adapt to their changing environments and learn what’s necessary to survive and thrive in these settings.
For example, a robot’s neural network could draw from CliffsNotes with “genetic” instructions for how to be aware of its own body or learn to adjust its movements. And other, different sets of CliffsNotes may imbue it with the basics of physical safety or the fundamentals of speech.
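As a toy illustration of that compressed-blueprint idea (not Zador's actual algorithms, which the article does not spell out), the sketch below stores a short "genome" of a few hundred numbers, "develops" it into the several thousand connection weights of a larger network, and then improves the genome itself, rather than the weights, against a stand-in innate behavior. All sizes and rules here are assumptions.

```python
# A toy illustration of the compressed-blueprint idea (not Zador's actual
# algorithms): a short "genome" is expanded by a fixed developmental rule
# into a much larger weight matrix, and only the genome is improved.
# Sizes, the low-rank expansion rule and the hill-climbing loop are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT, RANK = 100, 50, 2        # illustrative sizes
GENOME_LEN = RANK * (N_IN + N_OUT)    # 300 "genes" vs. 5,000 full weights

def develop(genome):
    """'Development': unpack the compact genome into a full weight matrix."""
    u = genome[:RANK * N_IN].reshape(N_IN, RANK)
    v = genome[RANK * N_IN:].reshape(RANK, N_OUT)
    return u @ v                      # all 5,000 weights come from 300 numbers

def fitness(genome, x, y):
    """How well the developed wiring performs a stand-in innate behavior."""
    pred = np.tanh(x @ develop(genome))
    return -np.mean((pred - y) ** 2)

# A toy target behavior, and a crude evolutionary loop over genomes, not weights.
x = rng.normal(size=(200, N_IN))
y = np.tanh(x @ (rng.normal(size=(N_IN, N_OUT)) / np.sqrt(N_IN)))

genome = rng.normal(size=GENOME_LEN) * 0.1
for generation in range(500):
    mutant = genome + rng.normal(size=GENOME_LEN) * 0.02
    if fitness(mutant, x, y) > fitness(genome, x, y):
        genome = mutant               # keep the better "genome"
```

The point of the toy is the bookkeeping: the thing that gets stored and passed on is a few hundred numbers, while the behaving network they give rise to is far larger, which is roughly the compression Zador attributes to real genomes.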
At the moment, Zador is working on algorithms that try to mimic neuronal blueprints for very simple organisms—such as the roundworm C. elegans, which has only 302 neurons and about 7,000 synapses, compared with the roughly 100 billion neurons we have. That’s how evolution worked, too—expanding brains from simple creatures to more complex ones, all the way to Homo sapiens. But if it took millions of years to arrive at modern humans, how long would it take scientists to forge a robot with human intelligence? That’s a billion-dollar question. Yet Zador is optimistic. “My hypothesis is that if you can build simple organisms that can interact with the world, then the higher-level functions will not be nearly as challenging as they currently are.”
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.
Podcast: Wellness chatbots and meditation pods with Deepak Chopra
Over the last few decades, perhaps no one has impacted healthy lifestyles more than Deepak Chopra. While several of his theories and recommendations have been criticized by prominent members of the scientific community, he has helped bring meditation, yoga and other practices for well-being into the mainstream in ways that benefit the health of vast numbers of people every day. His work has led many to accept new ways of thinking about alternative medicine, the power of mind over body, and the malleability of the aging process.
His impact is such that it's been observed our culture no longer recognizes him as a human being but as a pervasive symbol of new-agey personal health and spiritual growth. Last week, I had a chance to confirm that Chopra is, in fact, a human being – and deserving of his icon status – when I talked with him for the Leaps.org podcast. He relayed ideas that were wise and ancient, yet highly relevant to our world today, with the fluidity and ease of someone discussing the weather. Showing no signs of slowing down at age 76, he described his prolific work, including the publication of two books in the past year and a range of technologies he’s developing, including a meditation app, meditation pods for the workplace, and a chatbot for mental health called Piwi.
Take a listen and get inspired to do some meditation and deep thinking on the future of health. As Chopra told me, “If you don’t have time to meditate once per day, you probably need to meditate twice per day.”
Highlights:
2:10: Chopra talks about meditation broadly and meditation pods, including the ones made by OpenSeed for meditation in the workplace.
6:10: The drawbacks of quick fixes like drugs for mental health.
10:30: The benefits of group meditation versus individual meditation.
14:35: What is a "metahuman" and how to become one.
19:40: The difference between the conditioned mind and the mind that's infinitely creative.
22:48: How Chopra's views of free will differ from the views of many neuroscientists.
28:04: Thinking Fast and Slow, and the role of intuition.
31:20: Athletic and creative geniuses.
32:43: The nature of fundamental truth.
34:00: Meditation for kids.
37:12: NeverAlone.Love and how AI chatbots can support mental health.
42:30: Extending lifespan, gene editing and lifestyle.
46:05: Chopra's mentor in living a long good life (and my mentor).
47:45: The power of yoga.
Links:
- OpenSeed meditation pods for people to meditate at work (Chopra is an advisor to OpenSeed).
- Chopra's book from 2021, Metahuman: Unleashing Your Infinite Potential
- Chopra's book from 2022, Abundance: The Inner Path to Wealth
- NeverAlone.Love, Chopra's collaboration of businesses, policy makers, mental health professionals and others to raise awareness about mental health, advance scientific research and "create a global technology platform to democratize access to resources."
- The Piwi chatbot for mental health
- The Chopra Meditation & Well-Being App for people of all ages
- Only 1.6 percent of U.S. children meditate, according to the National Center for Complementary and Integrative Health