Trading syphilis for malaria: How doctors treated one deadly disease by infecting patients with another
If you had lived one hundred years ago, syphilis – a bacterial infection spread by sexual contact – would likely have been one of your worst nightmares. Even though syphilis still exists, it can now be detected early and cured quickly with a course of antibiotics. Back then, however, before antibiotics and without an easy way to detect the disease, syphilis was very often a death sentence.
To understand how feared syphilis once was, it’s important to understand exactly what it does if it’s allowed to progress: the infection starts off as one or more small, painless sores near the vagina, penis, anus, or mouth. The sores disappear around three to six weeks after the initial infection – but untreated, syphilis moves into a secondary stage, often presenting as a mild rash in various areas of the body (such as the palms of a person’s hands) or through other minor symptoms. The disease progresses from there, often quietly and without noticeable symptoms, sometimes for decades before it reaches its final stages, where it can cause blindness, organ damage, and even dementia. Research indicates, in fact, that as many as 10 percent of psychiatric admissions in the early 20th century were due to dementia caused by syphilis, also known as neurosyphilis.
Syphilis can affect children, too. Though it’s spread primarily through sexual contact, it can also be transmitted from mother to child during birth, causing lifelong disability.
The poet-physician Adalbert Bettman, who wrote fictionalized poems based on his experiences as a doctor in the 1930s, described the effect syphilis could have on an infant in his poem Daniel Healy:
I always got away clean
when I went out
With the boys.
The night before
I was married
I went out,—But was not so fortunate;
And I infected
My bride.
When little Daniel
Was born
His eyes discharged;
And I dared not tell
That because
I had seen too much
Little Daniel sees not at all
Given the horrors of untreated syphilis, it’s perhaps not surprising that people went to extremes to try to treat it. One of the earliest remedies, dating back to 15th-century Naples, was mercury – either rubbed on the skin where blisters appeared or inhaled as a vapor. (Not surprisingly, many people who underwent this type of “treatment” died of mercury poisoning.)
Other primitive treatments included tinctures made from a flowering plant called guaiacum, as well as “sweat baths” meant to purge the body of syphilitic toxins. In 1910, an arsenic-based drug called Salvarsan hit the market and was hailed as a “magic bullet” for its ability to target and destroy the syphilis-causing bacteria without harming the patient. However, while Salvarsan was effective in treating early-stage syphilis, it was largely ineffective once the infection progressed beyond the second stage. Tens of thousands of people each year continued to die of syphilis or were shipped off to psychiatric wards with neurosyphilis.
It was in one of these psychiatric units in the early 20th century that Dr. Julius Wagner-Jauregg got the idea for a potential cure.
Wagner-Jauregg was an Austrian-born physician trained in “experimental pathology” at the University of Vienna. He started his medical career conducting lab experiments on animals and then moved on to work at different psychiatric clinics in Vienna, despite having no training in psychiatry or neurology.
Wagner-Jauregg’s work was controversial, to say the least. At the time, medicine – particularly psychiatric medicine – did not have anywhere near the rigorous ethical standards that doctors, researchers, and other scientists are bound to today. Wagner-Jauregg would devise wild theories about the causes of his patients’ psychiatric ailments and then perform experimental procedures in an attempt to cure them. (As just one example, he would sterilize his adolescent male patients, thinking “excessive masturbation” was the cause of their schizophrenia.)
But sometimes these wild theories paid off. In 1883, during his residency, Wagner-Jauregg noted that a female patient with mental illness who had contracted a skin infection and suffered a high fever experienced a sudden (and seemingly miraculous) remission of her psychosis symptoms after the fever had cleared. Wagner-Jauregg theorized that inducing a high fever in his patients with neurosyphilis could help them recover as well.
Eventually, Wagner-Jauregg was able to put his theory to the test. Around 1890, he got his hands on something called tuberculin, a therapeutic treatment created by the German microbiologist Robert Koch to cure tuberculosis. Tuberculin would later turn out to be completely ineffective against tuberculosis, often triggering severe immune responses in patients – but for a short time, Wagner-Jauregg had some success using it to help his dementia patients. Giving his patients tuberculin induced a high fever – and after completing the treatment, Wagner-Jauregg reported that his patients’ dementia was completely halted. The success was short-lived, however: he eventually had to discontinue tuberculin as a treatment once it came to be considered too toxic.
By 1917, Wagner-Jauregg’s theory about syphilis and fevers was gaining credibility – and one day a new opportunity presented itself when a wounded soldier, stricken with malaria and running a high fever, was accidentally admitted to his psychiatric unit.
What Wagner-Jauregg did next was ethically deplorable by any standard: before he allowed the soldier any quinine (the standard treatment for malaria at the time), he took a small sample of the soldier’s blood and inoculated three syphilis patients with it, rubbing the blood on their open syphilitic blisters.
It’s unclear how well the malaria treatment worked for those three specific patients – but Wagner-Jauregg’s records show that over the span of one year, he inoculated a total of nine patients with malaria for the sole purpose of inducing fevers, and six of them made a full recovery. The treatment was so successful, in fact, that one of his inoculated patients, an actor who had been unable to work due to his dementia, eventually found work again and returned to the stage. Two additional patients – a military officer and a clerk – recovered from their once-terminal illnesses and returned to their former careers as well.
When his findings were published in 1918, Wagner-Jauregg’s so-called “fever therapy” swept the globe. The treatment was hailed as a breakthrough – but it still had risks. Malaria itself had a mortality rate of about 15 percent at the time. Many people considered that a gamble worth taking, compared to dying a painful, protracted death from syphilis.
Malaria could also be effectively treated much of the time with quinine, whereas other fever-causing illnesses were not so easily treated. Triggering a fever by way of malaria specifically, therefore, became the standard of care.
Tens of thousands of people with syphilitic dementia would go on to be treated with fever therapy until the early 1940s, when a combination of Salvarsan and penicillin caused syphilis infections to decline. Eventually, neurosyphilis became rare, and then nearly unheard of.
Despite his contributions to medicine, it’s important to note that Wagner-Jauregg was most definitely not a person to idolize. He was an outspoken anti-Semite and proponent of eugenics, arguing that Jews were more prone to mental illness and that people who were mentally ill should be forcibly sterilized. (He later became a Nazi sympathizer during Hitler’s rise to power even though, bizarrely, his first wife was Jewish.) Another problem was that his fever therapy involved experimental treatments on many patients who, due to their cognitive issues, could not give informed consent.
Lack of consent was also a fundamental problem with the syphilis study at Tuskegee, appalling research that began just 14 years after Wagner-Jauregg published his “fever therapy” findings.
Still, despite his outrageous views, Wagner-Jauregg was awarded the Nobel Prize in Physiology or Medicine in 1927 – and despite some egregious human rights abuses, the miraculous “fever therapy” was partly responsible for taming one of the deadliest plagues in human history.
Can Biotechnology Take the Allergies Out of Cats?
Amy Bitterman, who teaches at Rutgers Law School in Newark, gets enormous pleasure from her three mixed-breed rescue cats, Spike, Dee, and Lucy. To manage her chronically stuffy nose, three times a week she takes Allegra D, which combines the antihistamine fexofenadine with the decongestant pseudoephedrine. Amy's dog allergy is rougher--so severe that when her sister launched a business, Pet Care By Susan, from their home in Edison, New Jersey, they knew Susan would have to move elsewhere before she could board dogs. Amy has tried to visit their brother, who owns a Labrador Retriever, taking Allegra D beforehand. But she began sneezing, and then developed watery eyes and phlegm in her chest.
"It gets harder and harder to breathe," she says.
Animal lovers have long dreamed of "hypo-allergenic" cats and dogs. Although no such thing exists to date, biotechnology is beginning to provide solutions for cat-lovers. Cats are a simpler challenge than dogs: dog allergies involve as many as seven proteins, but up to 95 percent of people with cat allergies--estimated at 10 to 30 percent of the population in North America and Europe--react to a single protein, Fel d1. Interestingly, cats don't seem to need Fel d1: some cats produce very little of it and have no known health problems.
The current technologies fight Fel d1 in ingenious ways. Nestle Purina reached the market first with a cat food, Pro Plan LiveClear, launched in the U.S. a year and a half ago. It contains Fel d1 antibodies from eggs that in effect neutralize the protein. HypoCat, a vaccine for cats, induces them to create neutralizing antibodies to their own Fel d1. It may be available in the United States by 2024, says Gary Jennings, chief executive officer of Saiba Animal Health, a University of Zurich spin-off. Another approach, using the gene-editing tool CRISPR to create a medication that would splice out Fel d1 genes in particular tissues, is the furthest from fruition.
Customer demand is high. "We already have a steady stream of allergic cat owners contacting us desperate to have access to the vaccine or participate in the testing program," Jennings said. "There is a major unmet medical need."
More than a third of Americans own a cat (while half own a dog), and pet ownership is rising. With more Americans living alone, pets may be just the right amount of company. But the number of Americans with asthma increases every year. Of that group, some 20 to 30 percent have pet allergies that could trigger a possibly deadly attack. It is not clear how many pets end up in shelters because their owners could no longer manage allergies. Instead, allergists commonly report that their patients won't give up a beloved companion.
No one can completely avoid Fel d1, which clings to clothing and lands everywhere cat-owners go, even in schools and new homes never occupied by cats. Myths among cat-lovers may lead them to underestimate their own level of risk. Short hair doesn't help: the length of cat hair doesn't affect the production of Fel d1. Bathing your cat will likely upset it and accomplish little. Washing cuts the amount on its skin and fur only for two days. In one study, researchers measured the Fel d1 in the ambient air in a small chamber occupied by a cat—and then washed the cat. Three hours later, with the cat in the chamber again, the measurable Fel d1 in the air was lower. But this benefit was gone after 24 hours.
For years, the best option has been shots for people that prompt protective antibodies. Bitterman received dog and cat allergy injections twice a week as a child. However, these treatments require up to 100 injections over three to five years, and, as in her case, the effect may be partial or wear off. Even if you do opt for shots, treating the cat also makes sense, since you could protect more than one allergic member of your household and any allergic visitors as well.
An Allergy-Neutralizing Diet
Cats produce much of their Fel d1 in their saliva, which then spreads it to their fur when they groom, observed Nestle Purina immunologist Ebenezer Satyaraj. He realized that this made saliva—and therefore a cat's mouth--an unusually effective site for change. Hens exposed to Fel d1 produce their own antibodies, which survive in their eggs. The team coated LiveClear food with a powder form of these eggs; once in a cat's mouth, the chicken antibody binds to the Fel d1 in the cat's saliva, neutralizing it.
The results are partial: In a study with 105 cats, the level of active Fel d1 in their fur had dropped on average by 47 percent after ten weeks eating LiveClear. Cats that produced more Fel d1 at baseline had a more robust response, with a drop of up to 71 percent. A safety study found no effects on cats after six months on the diet. "Our goal was to ensure that whatever we do has no negative impact on the cat," Satyaraj said. Might a dogfood that minimizes dog allergens be on the way? "There is some early work," he said.
A Vaccine
This is a year when vaccines changed the lives of billions. Saiba's vaccine, HypoCat, delivers recombinant Fel d1 coupled to the coat of a plant virus (the Cucumber mosaic virus), without any viral genetic information. The viral coat serves as a carrier. A cat would need shots once or twice a year to produce antibodies that neutralize Fel d1.
HypoCat works much like any vaccine, with the twist that the enemy is the cat's own protein. Is that safe? Saiba's team has followed 70 cats treated with the vaccine over two years and they remain healthy. Again the active Fel d1 doesn't disappear but diminishes. The team asked 10 people with cat allergies to report on their symptoms when they pet their vaccinated cats. Eight of them could pet their cat for nearly a half hour before their symptoms began, compared with an average of 17 minutes before the vaccine.
Jennings hopes to develop a HypoDog shot with a similar approach. However, the goal would be to target four or five proteins in one vaccine, and that increases the risk of hurting the dog. In the meantime, allergic dog-lovers considering an expensive breeder dog might think again: Independent research does not support the idea that any breed of dog produces less dander in the home. In fact, one well-designed study found that Spanish water dogs, Airedales, poodles and Labradoodles--breeds touted as hypo-allergenic--had significantly more of the most common allergen on their coat than an ordinary Lab and the control group.
Gene Editing
One day you might be able to bring your cat to the vet once a year for an injection that would modify specific tissues so they wouldn't produce Fel d1.
Nicole Brackett, a postdoctoral scientist at Virginia-based Indoor Biotechnologies, which specializes in manufacturing biologics for allergy and asthma, has most recently used CRISPR to identify Fel d1 genetic sequences in cells from 50 domestic cats and 24 exotic ones. She learned that the sequences vary substantially from one cat to the next. This discovery, she says, backs up the observation that Fel d1 doesn't have a vital purpose.
The next step will be a CRISPR knockout of the relevant genes in cells from feline salivary glands, a prime source of Fel d1. Although the company is considering using CRISPR to edit the genes in a cat embryo and possibly produce a Fel d1-free cat, designer cats won't be its ultimate product. Instead, the company aims to produce injections that could treat any cat.
Reducing pet allergens at home could have a compound benefit, Indoor Biotechnologies founder Martin Chapman, an immunologist, notes: "When you dampen down the response to one allergen, you could also dampen it down to multiple allergens." As allergies become more common around the world, that's especially good news.
Earlier this year, California-based Ambry Genetics announced that it was discontinuing a test meant to estimate a person's risk of developing prostate or breast cancer. The test looks for variations in a person's DNA that are known to be associated with these cancers.
Known as a polygenic risk score, this type of test adds up the effects of variants in many genes — often in the dozens or hundreds — and calculates a person's risk of developing a particular health condition compared to other people. In this way, polygenic risk scores are different from traditional genetic tests that look for mutations in single genes, such as BRCA1 and BRCA2, which raise the risk of breast cancer.
Traditional genetic tests look for mutations that are relatively rare in the general population but have a large impact on a person's disease risk, like BRCA1 and BRCA2. By contrast, polygenic risk scores scan for more common genetic variants that, on their own, have a small effect on risk. Added together, however, they can raise a person's risk for developing disease.
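The "adding up" described above is, at its core, a weighted sum: each variant contributes its effect size multiplied by how many copies of the risk allele a person carries. Here is a minimal sketch of that calculation; the variant IDs and effect weights are invented for illustration, not drawn from any real study.

```python
# Minimal sketch of a polygenic risk score: a weighted sum of allele
# counts (0, 1, or 2 copies of the risk allele) across many variants.
# The variant IDs and effect weights below are hypothetical.

def polygenic_risk_score(genotypes, weights):
    """Sum each variant's effect size times the person's allele count.

    genotypes: dict mapping variant ID -> allele count (0, 1, or 2)
    weights:   dict mapping variant ID -> per-allele effect size
    Variants missing from `genotypes` contribute nothing.
    """
    return sum(weights[v] * genotypes.get(v, 0) for v in weights)

# Hypothetical per-allele effect sizes, as estimated by a GWAS.
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# One person's allele counts at those variants.
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = polygenic_risk_score(person, weights)
print(round(score, 2))  # 0.12*2 - 0.05*1 + 0.30*0 = 0.19
```

Real scores work the same way but over dozens to hundreds (sometimes millions) of variants, and the raw sum is then compared against a reference population to express the result as a relative risk.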
These scores could become a part of routine healthcare in the next few years. Researchers are developing polygenic risk scores for cancer, heart disease, diabetes and even depression. Before they can be rolled out widely, they'll have to overcome a key limitation: racial bias.
"The issue with these polygenic risk scores is that the scientific studies which they're based on have primarily been done in individuals of European ancestry," says Sara Riordan, president of the National Society of Genetic Counselors. These scores are calculated by comparing the genetic data of people with and without a particular disease. To make these scores accurate, researchers need genetic data from tens or hundreds of thousands of people.
A 2018 analysis found that 78% of participants included in such large genetic studies, known as genome-wide association studies, were of European descent. That's a problem, because certain disease-associated genetic variants don't appear equally across different racial and ethnic groups. For example, a particular variant in the TTR gene, known as V122I, occurs more frequently in people of African descent. In recent years, the variant has been found in 3 to 4 percent of individuals of African ancestry in the United States. Mutations in this gene can cause a protein to build up in the heart, leading to a higher risk of heart failure. A polygenic risk score for heart disease based on genetic data from mostly white people likely wouldn't give accurate risk information to African Americans.
Accuracy in genetic testing matters because such polygenic risk scores could help patients and their doctors make better decisions about their healthcare.
For instance, if a polygenic risk score determines that a woman is at higher-than-average risk of breast cancer, her doctor might recommend more frequent mammograms — X-rays that take a picture of the breast. Or, if a risk score reveals that a patient is more predisposed to heart attack, a doctor might prescribe preventive statins, a type of cholesterol-lowering drug.
"Let's be clear, these are not diagnostic tools," says Alicia Martin, a population and statistical geneticist at the Broad Institute of MIT and Harvard. "We can't use a polygenic score to say you will or will not get breast cancer or have a heart attack."
But combining a patient's polygenic risk score with other factors that affect disease risk — like age, weight, medication use or smoking status — may provide a better sense of how likely they are to develop a specific health condition than considering any one risk factor on its own. The accuracy of polygenic risk scores becomes even more important when considering that these scores may be used to guide medication prescription or help patients make decisions about preventive surgery, such as a mastectomy.
In a study published in September, researchers used results from large genetics studies of people with European ancestry and data from the UK Biobank to calculate polygenic risk scores for breast and prostate cancer for people with African, East Asian, European and South Asian ancestry. They found that they could identify individuals at higher risk of breast and prostate cancer when they scaled the risk scores within each group, but the authors say this is only a temporary solution. Recruiting more diverse participants for genetics studies will lead to better cancer detection and prevention, they conclude.
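The within-group scaling that the study describes can be pictured as standardizing each person's raw score against their own ancestry group's distribution, so "higher than average" is judged relative to that group rather than to a European reference. This sketch uses invented numbers and group labels purely to show the mechanic.

```python
# Sketch of scaling risk scores within each ancestry group: convert raw
# scores to z-scores using each group's own mean and spread. Group names
# and score values are made up for illustration.
from statistics import mean, stdev

def standardize_within_group(scores_by_group):
    """Return each person's score as a z-score within their own group."""
    out = {}
    for group, scores in scores_by_group.items():
        mu, sigma = mean(scores), stdev(scores)
        out[group] = [(s - mu) / sigma for s in scores]
    return out

raw = {
    "groupA": [0.1, 0.2, 0.3],
    "groupB": [1.0, 1.2, 1.4],  # systematically shifted distribution
}
z = standardize_within_group(raw)
# After scaling, both groups are centered at 0 with unit spread, so the
# same cutoff flags the top of each group instead of favoring the group
# whose raw scores happen to run higher.
print(z["groupA"], z["groupB"])
```

This is why the authors call it a temporary fix: rescaling makes rankings comparable within each group, but it cannot recover predictive accuracy that was never there for underrepresented ancestries.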
Recent efforts to do just that are expected to make these scores more accurate in the future. Until then, some genetics companies are struggling to overcome the European bias in their tests.
Acknowledging the limitations of its polygenic risk score, Ambry Genetics said in April that it would stop offering the test until it could be recalibrated. The company launched the test, known as AmbryScore, in 2018.
"After careful consideration, we have decided to discontinue AmbryScore to help reduce disparities in access to genetic testing and to stay aligned with current guidelines," the company said in an email to customers. "Due to limited data across ethnic populations, most polygenic risk scores, including AmbryScore, have not been validated for use in patients of diverse backgrounds." (The company did not make a spokesperson available for an interview for this story.)
In September 2020, the National Comprehensive Cancer Network updated its guidelines to advise against the use of polygenic risk scores in routine patient care because of "significant limitations in interpretation." The nonprofit, which represents 31 major cancer centers across the United States, said such scores could continue to be used experimentally in clinical trials, however.
Holly Pederson, director of Medical Breast Services at the Cleveland Clinic, says the realization that polygenic risk scores may not be accurate for all races and ethnicities is relatively recent. Pederson worked with Salt Lake City-based Myriad Genetics, a leading provider of genetic tests, to improve the accuracy of its polygenic risk score for breast cancer.
The company announced in August that it had recalibrated the test, called RiskScore, for women of all ancestries. Previously, Myriad did not offer its polygenic risk score to women who self-reported any ancestry other than sole European or Ashkenazi ancestry.
"Black women, while they have a similar rate of breast cancer to white women, if not lower, had twice as high of a polygenic risk score because the development and validation of the model was done in white populations," Pederson said of the old test. In other words, Myriad's old test would have shown that a Black woman had twice as high of a risk for breast cancer compared to the average woman even if she was at low or average risk.
To develop and validate the new score, Pederson and other researchers assessed data from more than 275,000 women, including more than 31,000 African American women and nearly 50,000 women of East Asian descent. They looked at 56 different genetic variants associated with ancestry and 93 associated with breast cancer. Interestingly, they found that at least 95% of the breast cancer variants were similar amongst the different ancestries.
The company says the resulting test is now more accurate for all women across the board, but Pederson cautions that it's still slightly less accurate for Black women.
"It's not only the lack of data from Black women that leads to inaccuracies and a lack of validation in these types of risk models, it's also the pure genomic diversity of Africa," she says, noting that Africa is the most genetically diverse continent on the planet. "We just need more data, not only in American Black women but in African women to really further characterize that continent."
Martin says it's problematic that such scores are most accurate for white people because they could further exacerbate health disparities in traditionally underserved groups, such as Black Americans. "If we were to set up really representative massive genetic studies, we would do a much better job at predicting genetic risk for everybody," she says.
Earlier this year, the National Institutes of Health awarded $38 million to researchers to improve the accuracy of polygenic risk scores in diverse populations. Researchers will create new genome datasets and pool information from existing ones in an effort to diversify the data that polygenic scores rely on. They plan to make these datasets available to other scientists to use.
"By having adequate representation, we can ensure that the results of a genetic test are widely applicable," Riordan says.