Tapping into the Power of the Placebo Effect
When Wayne Jonas was in medical school 40 years ago, doctors would write out a prescription for placebos, spelling it out backwards in capital letters, O-B-E-C-A-L-P. The pharmacist would fill the prescription with a sugar pill, recalls Jonas, now director of integrative health programs at the Samueli Foundation. It fulfilled the patient's desire for the doctor to do something when perhaps no drug could help, and the sugar pills did no harm.
Today, that deception is seen as unethical. But time and time again, studies have shown that placebos can have real benefits. Now, researchers are trying to untangle the mysteries of the placebo effect in an effort to better treat patients.
The use of placebos took off in the post-WWII period, when randomized controlled clinical trials became the gold standard for medical research. One group in a study would be treated with a placebo, a supposedly inert pill or procedure that would not affect normal healing and recovery, while another group in the study would receive an "active" component, most commonly a pill under investigation. Presumably, the group receiving the active treatment would have a better response and the difference from the placebo group would represent the efficacy of the drug being tested. That was the basis for drug approval by the U.S. Food and Drug Administration.
"Placebo responses were marginalized," says Ted Kaptchuk, director of the Program in Placebo Studies & Therapeutic Encounters at Harvard Medical School. "Doctors were taught they have to overcome it when they were thinking about using an effective drug."
But that began to change around the turn of the 21st century. The National Institutes of Health held a series of meetings to set a research agenda and fund studies to answer some basic questions, an effort led by Jonas, who was in charge of the NIH's Office of Alternative Medicine at the time. "People spontaneously get better all the time," says Kaptchuk. The crucial question was whether the placebo effect is real: Is it more than just spontaneous healing?
Brain mechanisms
A turning point came in 2001 in a paper in Science that showed physical evidence of the placebo effect. It used positron emission tomography (PET) scans to measure release patterns of dopamine — a chemical messenger involved in how we feel pleasure — in the brains of patients with Parkinson's disease. Surprisingly, the placebo activated the same patterns that were activated by Parkinson's drugs, such as levodopa. It proved the placebo effect was real; now the search was on to better understand and control it.
A key part of the effect can be the beliefs, expectations, context, and "rituals" of the encounter between doctor and patient. Belief by both doctor and patient that the treatment will work, along with the formalized practices of administering it, can all contribute to a positive outcome.
Conditioning can be another important component in generating a response, as Pavlov demonstrated more than a century ago in his experiments with dogs. Trained with a bell rung before each feeding, the dogs eventually began to salivate in anticipation at the sound of the bell alone, even with no food present.
Translating that to humans, studies with pain medications and sleeping aids showed that patients who had a positive response to a certain dose of those medications could have the same response if the dose was reduced and a dummy pill substituted, even to the point where there was no longer any active ingredient.
Those types of studies troubled Kaptchuk because they often relied on deception; patients weren't told they were receiving a placebo, or at best were told only that they might be randomized to receive one. He believed the placebo effect could work even if patients were told upfront that they were going to receive a placebo. More than a dozen so-called "open-label placebo" studies across numerous medical conditions, by Kaptchuk and others, have shown that you don't have to lie to patients for a placebo to work.
Jonas likes to tell the story of a patient who used methotrexate, a potent immunosuppressant, to control her rheumatoid arthritis. She was planning a long trip and didn't want to be bothered with the injections and monitoring required to use the drug. So she began to drink a powerful herbal extract of anise, whose licorice flavor she hated, prior to each injection. She reduced the amount of methotrexate over a period of months and finally stopped, but continued to drink the anise. That process had conditioned her body "to alter her immune function and her autoimmunity" as if she were taking the drug, much as Pavlov's dogs had been trained. She has not taken methotrexate for more than a year.
An intriguing paper published in May 2021 found that mild, non-invasive electric stimulation to the brain could not only boost the placebo effect on pain but also reduce the "nocebo" effect — when patients report a negative effect to a sham treatment. While the work is very preliminary, it may open the door to directly manipulating these responses.
Researchers think placebo treatments can work particularly well in helping people deal with pain and psychological disorders, areas where drugs often are of little help. Still, placebos aren't a cure and only a portion of patients experience a placebo effect.
Nocebo
If medicine were a soap opera, the nocebo would be the evil twin of the placebo. It's what happens when patients have adverse side effects because of the expectation that they will. It's commonly seen when patients claim to experience the pain or gastric distress that can occur with a drug even when they've received a placebo. The side effects were either imagined or caused by something else.
"Up to 97% of reported pharmaceutical side effects are not caused by the drug itself but rather by nocebo effects and symptom misattribution," according to one 2019 paper.
One way to reduce a nocebo response is to simply not tell patients that specific side effects might occur. An example is a liver biopsy, in which a large-gauge needle is used to extract a tissue sample for examination. Patients told ahead of time that they might experience some pain were more likely to report pain, and to report greater pain, than those who weren't given this information.
Interestingly, a nocebo response plays out in the hippocampus, a part of the brain that is never activated in a placebo response. "I think what we are dealing with with nocebo is anxiety," says Kaptchuk, but he acknowledges that others disagree.
Distraction may be another way to minimize the nocebo effect. Pediatricians are using virtual reality (VR) to engage children and distract them during routine procedures such as blood draws and changing wound dressings, and burn patients of all ages have found relief with specially created VR environments.
Treatment response
Jonas argues that what we commonly call the placebo effect is misnamed and leading us astray. "The fact is people heal and that inherent healing capacity is both powerful and influenced by mental, social, and contextual factors that are embedded in every medical encounter since the idea of treatment began," he wrote in a 2019 article in the journal Frontiers in Psychiatry. "Our understanding of healing and ability to enhance it will be accelerated if we stop using the term 'placebo response' and call it what it is—the meaning response, and its special application in medicine called the healing response."
He cites evidence that "only 15% to 20% of the healing of an individual or a population comes from health care. The rest—nearly 80%—comes from other factors rarely addressed in the health care system: behavioral and lifestyle choices that people make in their daily life."
To better align treatments and maximize their effectiveness, Jonas has created the HOPE (Healing Oriented Practices & Environments) Note, "a patient-guided process designed to identify the patient's values and goals in their life and for healing." Essentially, it seeks to make clear to both doctor and patient what the patient's goals are in seeking treatment. In an extreme example of terminal cancer, some patients may choose to extend life despite the often brutal treatments, while others might prefer to optimize quality of life in the remaining time that they have. It builds on practices already taught in medical schools. Jonas believes doctors and patients can use tools like these to maximize the treatment response and achieve better outcomes.
Much of the medical profession has been resistant to these approaches. Part of that is simply tradition and the limited data on their effectiveness, but another very real factor is how doctors are reimbursed. Jonas says a new medical billing code added this year gives doctors another way to be compensated for the extra time and effort that a more holistic approach to medicine may initially require. Other moves away from fee-for-service payments toward bundling and payment for outcomes, along with the integrated care provided by Veterans Affairs, Kaiser Permanente, and other groups, offer longer-term hope for approaches that might enhance the healing response.
This article was first published by Leaps.org on July 7, 2021.
Podcast: The Friday Five weekly roundup in health research
The Friday Five covers five stories in health research that you may have missed this week. There are plenty of controversies and troubling ethical issues in science – and we get into many of them in our online magazine – but this news roundup focuses on scientific creativity and progress to give you a therapeutic dose of inspiration headed into the weekend.
Covered in this week's Friday Five:
- Sex differences in cancer
- Promising research on a vaccine for Lyme disease
- Using a super material for brain-like devices
- Measuring your immunity to Covid
- Reducing dementia risk with leisure activities
One day in the recent past, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that—it looked at itself this way and that, like toddlers exploring themselves in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built a spatial self-awareness, just like humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.
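The general idea can be sketched in code. Below is a minimal, hypothetical illustration (not the Columbia team's actual code; the network sizes and training setup are assumptions) of such a self-model: a network that takes the arm's joint angles plus a point in space and predicts whether the arm occupies that point.

```python
# Illustrative sketch only; not the Columbia lab's code. A "self-model"
# network: given joint angles and a query point (x, y, z), predict the
# probability that the arm's body occupies that point in space.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, n_joints: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, joint_angles, query_point):
        return self.net(torch.cat([joint_angles, query_point], dim=-1))

# Training data would come from the cameras: for each pose, sample points
# inside and outside the arm's observed silhouette and fit the network
# with a binary cross-entropy loss.
model = SelfModel()
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```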
For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots are ultimately superior to humans in complex calculations, following rules to a tee and repeating the same steps perfectly. But even the biggest successes for human-robot collaborations—those in manufacturing and automotive industries—still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they’re moving and when they can endanger a human.
Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some of them proved invaluable in many natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.
Now roboticists are going a step further, training their creations to do even better: understand their own image in space and interact with humans like humans do. Today, there are already robot-teachers like KeeKo, robot-pets like Moffin, robot-babysitters like iPal, and robotic companions for the elderly like Pepper.
But even these reasonably intelligent creations still have huge limitations, some scientists think. “There are niche applications for the current generations of robots,” says professor Anthony Zador at Cold Spring Harbor Laboratory—but they are not “generalists” who can do varied tasks all on their own, as they mostly lack the abilities to improvise, make decisions based on a multitude of facts or emotions, and adjust to rapidly changing circumstances. “We don’t have general purpose robots that can interact with the world. We’re ages away from that.”
Robotic spatial self-awareness – the achievement by the team at Columbia – is an important step toward creating more intelligent machines. Hod Lipson, the professor of mechanical engineering who runs the Columbia lab, says that future robots will need this ability to assist humans better. Knowing how it looks and where its parts are in space decreases the need for human oversight. It also helps the robot to detect and compensate for damage and keep up with its own wear-and-tear. And it allows robots to realize when something is wrong with them or their parts. “We want our robots to learn and continue to grow their minds and bodies on their own,” Chen says. That’s what Zador wants too—and on a much grander level. “I want a robot who can drive my car, take my dog for a walk and have a conversation with me.”
Columbia scientists have trained a robot to become aware of its own "body," so it can map the right path to touch a ball without running into an obstacle, in this case a square.
Jane Nisselson and Yinuo Qin/ Columbia Engineering
Today’s technological advances are making some of these leaps of progress possible. One of them is so-called deep learning—a method that trains artificial intelligence systems to learn and use information in a way similar to how humans do. Described as a machine learning method based on neural network architectures with multiple layers of processing units, deep learning has been used to successfully teach machines to recognize images, understand speech and even write text.
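To make "multiple layers of processing units" concrete, here is a generic sketch of a small deep network in PyTorch; the sizes are arbitrary and not tied to any system described in this article.

```python
# A generic sketch of a multi-layer ("deep") network in PyTorch.
# Sizes are arbitrary: 784 inputs (a flattened 28x28 image), two hidden
# layers of processing units, and 10 output classes.
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # first layer of units
    nn.Linear(256, 64), nn.ReLU(),    # second layer of units
    nn.Linear(64, 10),                # output: scores for 10 image classes
)
```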
One of these machine-learning language geniuses, BERT, trained by Google, can finish sentences. Another, called GPT-3 and designed by the San Francisco-based company OpenAI, can write little stories. Yet both of them still make funny mistakes in their linguistic exercises that even a child wouldn’t. According to a paper published by Stanford’s Center for Research on Foundation Models, BERT seems not to understand the word “not.” When asked to fill in the word after “A robin is a __” it correctly answers “bird.” But try inserting the word “not” into that sentence (“A robin is not a __”) and BERT still completes it the same way. Similarly, in one of its stories, GPT-3 wrote that if you mix a spoonful of grape juice into your cranberry juice and drink the concoction, you die. It seems that robots, and artificial intelligence systems in general, are still missing some rudimentary facts of life that humans and animals grasp naturally and effortlessly.
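That fill-in-the-blank probe is easy to reproduce. The sketch below uses the Hugging Face transformers library; the bert-base-uncased checkpoint is an assumption for illustration, since the exact model variant tested in the Stanford paper isn't specified here.

```python
# Reproducing the fill-in-the-blank probe described above. The checkpoint
# name ("bert-base-uncased") is an assumption, chosen for illustration.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    best = unmasker(sentence)[0]  # highest-scoring completion
    print(f"{sentence} -> {best['token_str']} (score {best['score']:.2f})")
```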
It's not exactly the robots’ fault. Compared to humans, and all the other organisms that have been around for thousands or millions of years, robots are very new. They are missing out on eons of evolutionary data-building. Animals and humans are born with the ability to do certain things because those abilities are pre-wired in them. Flies know how to fly, fish know how to swim, cats know how to meow, and babies know how to cry. Yet flies don’t really learn to fly, fish don’t learn to swim, cats don’t learn to meow, and babies don’t learn to cry—they are born able to execute such behaviors because they’re preprogrammed to do so. All that happens thanks to the millions of years of evolution wired into their respective genomes, which give rise to the brain’s neural networks responsible for these behaviors. Robots are the newbies, missing out on that trove of information, Zador argues.
A neuroscience professor who studies how brain circuitry generates various behaviors, Zador has a different approach to developing the robotic mind. Until their creators figure out a way to imbue robots with that evolutionary information, they will remain quite limited in their abilities: each model will only be able to do the things it was programmed to do and will never go above and beyond its original code. So Zador argues that we have to start giving robots a genome.
How does one do that? Zador has an idea. We can’t really equip machines with real biological nucleotide-based genes, but we can mimic the neuronal blueprint those genes create. Genomes lay out rules for brain development. Specifically, the genome encodes blueprints for wiring up our nervous system—the details of which neurons are connected, the strength of those connections and other specs that will later hold the information learned throughout life. “Our genomes serve as blueprints for building our nervous system and these blueprints give rise to a human brain, which contains about 100 billion neurons,” Zador says.
If you think about what a genome is, he explains, it is essentially a very compact and compressed form of information storage. Conceptually, genomes are similar to CliffsNotes and other study guides. When students read these short summaries, they know roughly what happened in a book without actually reading that book. And that’s how we should be designing the next generation of robots if we ever want them to act like humans, Zador says. “We should give them a set of behavioral CliffsNotes, which they can then unwrap into brain-like structures.” Robots that have such brain-like structures will acquire a set of basic rules to generate basic behaviors and use them to learn more complex ones.
Currently Zador is in the process of developing algorithms that function like simple rules that generate such behaviors. “My algorithms would write these CliffsNotes, outlining how to solve a particular problem,” he explains. “And then, the neural networks will use these CliffsNotes to figure out which ones are useful and use them in their behaviors.” That’s how all living beings operate. They use the pre-programmed info from their genetics to adapt to their changing environments and learn what’s necessary to survive and thrive in these settings.
For example, a robot’s neural network could draw from CliffsNotes with “genetic” instructions for how to be aware of its own body or learn to adjust its movements. And other, different sets of CliffsNotes may imbue it with the basics of physical safety or the fundamentals of speech.
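As a toy illustration of that idea (not Zador's actual algorithms; every name and number below is hypothetical), a compact "genome" of wiring rules could be deterministically expanded into an initial network that learning would later refine.

```python
# Toy illustration of the "CliffsNotes genome" idea; all names and numbers
# are hypothetical, not Zador's algorithms. A compact genome of wiring
# rules is "developed" into initial connection weights for a small network.
import numpy as np

genome = {
    "layer_sizes": [8, 16, 4],   # blueprint: how wide each layer is
    "connection_prob": 0.3,      # rule: connect neuron pairs sparsely
    "seed": 42,                  # same genome -> same initial wiring
}

def develop_brain(genome):
    """Expand the compact genome into initial weight matrices (the wiring)."""
    rng = np.random.default_rng(genome["seed"])
    sizes = genome["layer_sizes"]
    weights = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        connected = rng.random((n_in, n_out)) < genome["connection_prob"]
        weights.append(connected * rng.normal(0.0, 0.1, (n_in, n_out)))
    return weights  # lifetime learning would later refine these connections

initial_wiring = develop_brain(genome)
```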
At the moment, Zador is working on algorithms that try to mimic neuronal blueprints for very simple organisms—such as the roundworm C. elegans, which has only 302 neurons and about 7,000 synapses, compared with the billions of neurons in a human brain. That’s how evolution worked, too—expanding brains from simple creatures to more complex ones, all the way to Homo sapiens. But if it took millions of years to arrive at modern humans, how long would it take scientists to forge a robot with human intelligence? That’s a billion-dollar question. Yet Zador is optimistic. “My hypothesis is that if you can build simple organisms that can interact with the world, then the higher level functions will not be nearly as challenging as they currently are.”
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.