Trading syphilis for malaria: How doctors treated one deadly disease by infecting patients with another
If you had lived one hundred years ago, syphilis – a bacterial infection spread by sexual contact – would likely have been one of your worst nightmares. Even though syphilis still exists, it can now be detected early and cured quickly with a course of antibiotics. Back then, however, before antibiotics and without an easy way to detect the disease, syphilis was very often a death sentence.
To understand how feared syphilis once was, it’s important to understand exactly what it does if it’s allowed to progress: the infection starts off as one or more small, painless sores near the vagina, penis, anus, or mouth. The sores disappear around three to six weeks after the initial infection – but untreated, syphilis moves into a secondary stage, often presenting as a mild rash in various areas of the body (such as the palms of a person’s hands) or through other minor symptoms. The disease progresses from there, often quietly and without noticeable symptoms, sometimes for decades before it reaches its final stages, where it can cause blindness, organ damage, and even dementia. Research indicates, in fact, that as many as 10 percent of psychiatric admissions in the early 20th century were due to dementia caused by syphilis, also known as neurosyphilis.
Syphilis can affect children, too. Though it’s spread primarily through sexual contact, it can also be transmitted from mother to child during pregnancy or birth, causing lifelong disability.
The poet-physician Adalbert Bettman, who wrote fictionalized poems based on his experiences as a doctor in the 1930s, described the effect syphilis could have on an infant in his poem Daniel Healy:
I always got away clean
when I went out
With the boys.
The night before
I was married
I went out,—But was not so fortunate;
And I infected
My bride.
When little Daniel
Was born
His eyes discharged;
And I dared not tell
That because
I had seen too much
Little Daniel sees not at all
Given the horrors of untreated syphilis, it’s perhaps not surprising that people would go to extremes to try to treat it. One of the earliest remedies for syphilis, dating back to 15th-century Naples, was mercury – either rubbed on the skin where blisters appeared, or breathed in as a vapor. (Not surprisingly, many people who underwent this type of “treatment” died of mercury poisoning.)
Other primitive treatments included using tinctures made of a flowering plant called guaiacum, as well as inducing “sweat baths” to eliminate the syphilitic toxins. In 1910, an arsenic-based drug called Salvarsan hit the market and was hailed as a “magic bullet” for its ability to target and destroy the syphilis-causing bacteria without harming the patient. However, while Salvarsan was effective in treating early-stage syphilis, it was largely ineffective by the time the infection progressed beyond the second stage. Tens of thousands of people each year continued to die of syphilis or were otherwise shipped off to psychiatric wards due to neurosyphilis.
It was in one of these psychiatric units in the early 20th century that Dr. Julius Wagner-Jauregg got the idea for a potential cure.
Wagner-Jauregg was an Austrian-born physician trained in “experimental pathology” at the University of Vienna. He started his medical career conducting lab experiments on animals and then moved on to work at several psychiatric clinics in Vienna, despite having no training in psychiatry or neurology.
Wagner-Jauregg’s work was controversial, to say the least. At the time, medicine – particularly psychiatric medicine – did not have anywhere near the rigorous ethical standards that doctors, researchers, and other scientists are bound to today. Wagner-Jauregg would devise wild theories about the causes of his patients’ psychiatric ailments and then perform experimental procedures in an attempt to cure them. (As just one example, Wagner-Jauregg would sterilize his adolescent male patients, thinking “excessive masturbation” was the cause of their schizophrenia.)
But sometimes these wild theories paid off. In 1883, during his residency, Wagner-Jauregg noted that a female patient with mental illness who had contracted a skin infection and suffered a high fever experienced a sudden (and seemingly miraculous) remission of her psychosis symptoms after the fever had cleared. Wagner-Jauregg theorized that inducing a high fever in his patients with neurosyphilis could help them recover as well.
Eventually, Wagner-Jauregg was able to put his theory to the test. Around 1890, he got his hands on tuberculin, a therapeutic agent created by the German microbiologist Robert Koch to cure tuberculosis. Tuberculin would later turn out to be completely ineffective against tuberculosis, often triggering severe immune responses in patients – but for a short time, Wagner-Jauregg had some success in using it to help his dementia patients. Giving his patients tuberculin induced a high fever – and after completing the treatment, Wagner-Jauregg reported that his patients’ dementia was completely halted. The success was short-lived, however: Wagner-Jauregg eventually had to discontinue tuberculin as a treatment, as it came to be considered too toxic.
By 1917, Wagner-Jauregg’s theory about syphilis and fevers was becoming more credible – and one day a new opportunity presented itself when a wounded soldier, stricken with malaria and the high fever that accompanies it, was accidentally admitted to his psychiatric unit.
What Wagner-Jauregg did next was ethically deplorable by any standard: Before he allowed the soldier any quinine (the standard treatment for malaria at the time), Wagner-Jauregg took a small sample of the soldier’s blood and inoculated three syphilis patients with it, rubbing the blood on their open syphilitic blisters.
It’s unclear how well the malaria treatment worked for those three specific patients – but Wagner-Jauregg’s records show that in the span of one year, he inoculated a total of nine patients with malaria for the sole purpose of inducing fevers, and six of them made a full recovery. The treatment was so successful, in fact, that one of his inoculated patients, an actor who had been unable to work due to his dementia, was eventually able to find work again and return to the stage. Two additional patients – a military officer and a clerk – recovered from their once-terminal illnesses and returned to their former careers as well.
When his findings were published in 1918, Wagner-Jauregg’s so-called “fever therapy” swept the globe. The treatment was hailed as a breakthrough – but it still had risks. Malaria itself had a mortality rate of about 15 percent at the time. Many people considered that a gamble worth taking, compared to dying a painful, protracted death from syphilis.
Malaria could also be effectively treated much of the time with quinine, whereas other fever-causing illnesses were not so easily treated. Triggering a fever by way of malaria specifically, therefore, became the standard of care.
Tens of thousands of people with syphilitic dementia would go on to be treated with fever therapy until the early 1940s, when a combination of Salvarsan and penicillin caused syphilis infections to decline. Eventually, neurosyphilis became rare, and then nearly unheard of.
Despite his contributions to medicine, it’s important to note that Wagner-Jauregg was most definitely not a person to idolize. In fact, he was an outspoken anti-Semite and proponent of eugenics, arguing that Jews were more prone to mental illness and that people who were mentally ill should be forcibly sterilized. (Wagner-Jauregg later became a Nazi sympathizer during Hitler’s rise to power even though, bizarrely, his first wife was Jewish.) His fever therapy was also an experimental treatment performed on many patients who, due to their cognitive impairments, could not give informed consent.
Lack of consent was also a fundamental problem with the syphilis study at Tuskegee, appalling research that began just 14 years after Wagner-Juaregg published his “fever therapy” findings.
Still, despite his outrageous views, Wagner-Jauregg was awarded the Nobel Prize in Physiology or Medicine in 1927 – and despite some egregious human rights abuses, the miraculous “fever therapy” was partly responsible for taming one of the deadliest plagues in human history.
There's no shortage of fake news going around the internet these days, but how do we become more aware as consumers of what's real and what's not?
Researchers at the University of Cambridge may have an answer: an online game designed to educate participants by exposing them to the tactics used by those who spread false information.
"We wanted to see if we could preemptively debunk, or 'pre-bunk', fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived," Dr Sander van der Linden, Director of the Cambridge Social Decision-Making Lab, said in a statement.
"This is a version of what psychologists call 'inoculation theory', with our game working like a psychological vaccination."
In February 2018, van der Linden and his coauthor, Jon Roozenbeek, helped launch the browser game "Bad News," in which players take on the role of "Disinformation and Fake News Tycoon."
Within the game, they can manipulate news and social media through several methods, including deploying Twitter bots, photoshopping evidence, creating fake accounts, and inciting conspiracy theories, all with the goal of attracting followers while maintaining a "credibility score" for persuasiveness.
In order to gauge the game's effectiveness, players were asked to rate the reliability of a number of real and fake news headlines and tweets both before and after playing. The data from 15,000 players was evaluated, with the results published June 25 in the journal Palgrave Communications.
The study found that "the perceived reliability of fake news before playing the game had reduced by an average of 21% after completing it. Yet the game made no difference to how users ranked real news."
Additionally, participants who "registered as most susceptible to fake news headlines at the outset benefited most from the 'inoculation,'" according to the study.
Just 15 minutes of playing the game can have a moderate effect on people, which could play a major role on a larger scale when it comes to "building a societal resistance to fake news," according to Dr. van der Linden.
"Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle," he said.
"We are hoping to create what you might call a general 'vaccine' against fake news, rather than trying to counter each specific conspiracy or falsehood," Roozenbeek added.
Van der Linden and Roozenbeek's work is an early example of how people might be protected against deception: by training them to be more attuned to the techniques used to spread fake news.
"I hope that the positive results give further credence to the new science of prebunking rather than only thinking about traditional debunking. On a larger level, I also hope the game and results inspire a new kind of behavioral science research where we actively engage with people and apply insights from psychological science in the public interest," van der Linden told leapsmag.
"I like the idea that the end result of a scientific theory is a real-world partnership and practical tool that organizations and people can use to guard themselves against online manipulation techniques in a novel and hopefully fun and engaging manner."
Ready to be "inoculated" against fake news? Then play the game for yourself.
What if people could just survive on sunlight like plants?
The admittedly outlandish question occurred to me after reading about how climate change will exacerbate drought, flooding, and worldwide food shortages. Many of these problems could be eliminated if human photosynthesis were possible. Had anyone ever tried it?
I emailed Sidney Pierce, professor emeritus in the Department of Integrative Biology at the University of South Florida, who studies a type of sea slug, Elysia chlorotica, that eats photosynthetic algae, incorporating the algae's key cell structure into itself. It's still a mystery how exactly a slug can operate the part of the cell that converts sunlight into energy, which requires proteins made by genes to function, but the upshot is that the slugs can (and do) live on sunlight in-between feedings.
Pierce says he gets questions about human photosynthesis a couple of times a year, but it almost certainly wouldn't be worth it to try to develop the process in a human. "A high-metabolic rate, large animal like a human could probably not survive on photosynthesis," he wrote to me in an email. "The main reason is a lack of surface area. They would either have to grow leaves or pull a trailer covered with them."
In short: Plants have already exploited the best tricks for subsisting on photosynthesis, and unless we want to look and act like plants, we won't have much success ourselves. Not that it stopped Pierce from trying to develop human photosynthesis technology anyway: "I even tried to sell it to the Navy back in the day," he told me. "Imagine photosynthetic SEALS."
It turns out, however, that while no one is actively trying to create photosynthetic humans, scientists are considering the ways humans might need to change to adapt to future environments, either here on the rapidly changing Earth or on another planet. Rice University biologist Scott Solomon has written an entire book, Future Humans, in which he explores the environmental pressures that are likely to influence human evolution from this point forward. On Earth, Solomon says, infectious disease will remain a major driver of change. As for Mars, the big two are lower gravity and radiation, the latter of which bombards the Martian surface constantly because the planet has no magnetosphere.
Although he considers this example "pretty out there," Solomon says one possible solution to the radiation bombarding Mars could leave humans not photosynthetic green, but orange, thanks to pigments called carotenoids that are responsible for the bright hues of pumpkins and carrots.
"Carotenoids protect against radiation," he says. "Usually only plants and microbes can produce carotenoids, but there's at least one kind of insect, a particular type of aphid, that somehow acquired the gene for making carotenoids from a fungus. We don't exactly know how that happened, but now they're orange... I view that as an example of, hey, maybe humans on Mars will evolve new kinds of pigmentation that will protect us from the radiation there."
We could wait for an orange human-producing genetic variation to occur naturally, or with new gene editing techniques such as CRISPR-Cas9, we could just directly give astronauts genetic advantages such as carotenoid-producing skin. This may not be as far-off as it sounds: Extreme space travel exists at an ethically unique spot that makes human experimentation much more palatable. If an astronaut already plans to subject herself to the enormous experiment of traveling to, and maybe living out her days on, a dangerous and faraway planet, do we have any obligation to provide all the protection we can?
Probably the most vocal person trying to figure out what genetic protections might help astronauts is Cornell geneticist Chris Mason. His lab has outlined a 10-phase, 500-year plan for human survival, starting with the comparatively modest goal of establishing which human genes are not amenable to change and should be marked with a "Do not disturb" sign.
To be clear, Mason is not actually modifying human beings. Instead, his lab has studied genes in radiation-resistant bacteria, such as the Deinococcus genus. They've expressed a protein called Dsup from tardigrades (tiny water bears that can survive in space) in human cells. They've looked into p53, a gene that elephants carry in extra copies and that seems to protect them from cancer. They also developed a protocol for the NASA twin study comparing astronauts Scott Kelly, who spent a year aboard the International Space Station, and his brother Mark, who did not, to find out what effects space tends to have on genes in the first place.
In a talk he gave in December, Mason reported that the expression of 8.7 percent of Scott Kelly's genes – mostly those associated with immune function, DNA repair, and bone formation – had not returned to normal after the astronaut had been home for six months. "Some of these space genes, we could engineer them, activate them, have them be hyperactive when you go to space," he said in that same talk. "When we think about having the hubris to go to a faraway planet … it seems like an almost impossible idea … but I really like people and I want us to survive for a long time, and this is the first step on the stairwell to survive out of the solar system."
There are others performing studies to figure out what capabilities we might bestow on the future-proof superhuman, but none of them are quite as extreme as photosynthesis (although all of them are useful). At Harvard, geneticist George Church wants to engineer cells to be resistant to viruses, such as the common cold and HIV. At Columbia, synthetic biologist Harris Wang is addressing self-sufficient humans more directly—trying to spur kidney cells to produce amino acids that are normally only available from diet.
But perhaps Future Humans author Scott Solomon has the most radical idea. I asked him a version of the classic What would be your superhero power? question: What does he see as the most important ability we could give our future selves through science?
"The empathy gene," he said. "The ability to put yourself in someone else's shoes and see the world as they see it. I think it would solve a lot of our problems."