How a Nobel-Prize Winner Fought Her Family, Nazis, and Bombs to Change our Understanding of Cells Forever
When Rita Levi-Montalcini decided to become a scientist, she was determined that nothing would stand in her way. And from the beginning, that determination was put to the test. Before Levi-Montalcini became a Nobel Prize-winning neurobiologist, the first to discover and isolate a crucial chemical called Nerve Growth Factor (NGF), she would have to battle both the sexism within her own family and the racism and fascism that were slowly engulfing her country.
Levi-Montalcini was born to two loving parents in Turin, Italy at the turn of the 20th century. She and her twin sister, Paola, were the youngest of the family's four children, and Levi-Montalcini described her childhood as "filled with love and reciprocal devotion." But while her parents were loving, supportive and "highly cultured," her father refused to let his three daughters engage in any schooling beyond the basics. "He loved us and had a great respect for women," she later explained, "but he believed that a professional career would interfere with the duties of a wife and mother."
At age 20, Levi-Montalcini had finally had enough. "I realized that I could not possibly adjust to a feminine role as conceived by my father," she is quoted as saying, and asked his permission to finish high school and pursue a career in medicine. When her father reluctantly agreed, Levi-Montalcini was ecstatic: In just under a year, she managed to catch up on her mathematics, graduate high school, and enroll in medical school in Turin.
By 1936, Levi-Montalcini had graduated medical school at the top of her class and decided to stay on at the University of Turin as a research assistant for histologist and human anatomy professor Giuseppe Levi. Levi-Montalcini began studying nerve cells and nerve fibers – the slender extensions of nerve cells that bundle together to form our nerves and carry the information each nerve transmits. But it wasn't long before another enormous obstacle to her scientific career reared its head.
Science Under a Fascist Regime
Two years into her research assistant position, Levi-Montalcini was fired, along with every other "non-Aryan Italian" who held an academic or professional career, thanks to a series of antisemitic laws passed by Italy's then-leader Benito Mussolini. Forced out of her academic position, Levi-Montalcini went to Belgium for a fellowship at a neurological institute in Brussels – but then was forced back to Turin when the German army invaded.
Levi-Montalcini decided to keep researching. She and Giuseppe Levi built a makeshift lab in Levi-Montalcini's apartment, borrowing chicken eggs from local farmers and using sewing needles to dissect them. By dissecting chicken embryos in her bedroom laboratory, she was able to see how nerve fibers formed and died. The two continued this research until they were interrupted again – this time, by British air raids. Levi-Montalcini fled to a country cottage to continue her research, and two years later was forced into hiding when the German army invaded Italy. Levi-Montalcini and her family assumed different identities and lived with non-Jewish friends in Florence to survive the Holocaust. Despite all of this, Levi-Montalcini continued her work, dissecting chicken embryos from her hiding place until the end of the war.
A Post-War Discovery
Several years after the war, when Levi-Montalcini was once again working at the University of Turin, a German embryologist named Viktor Hamburger invited her to Washington University in St. Louis. Hamburger was impressed by Levi-Montalcini's research with her chicken embryos, and secured an opportunity for her to continue her work in America. The invitation would "change the course of my life," Levi-Montalcini would later recall.
During her fellowship, Levi-Montalcini grew tumors in mice and then transplanted samples of them into chick embryos to see how they would affect the embryos. To her surprise, she noticed that introducing the tumor samples caused nerve fibers to grow rapidly. From this, Levi-Montalcini discovered and isolated the protein responsible for this rapid growth. She later named it Nerve Growth Factor, or NGF.
From there, Levi-Montalcini and her team launched new experiments to test NGF, injecting it and suppressing it to see the effect it had on a test subject's body. When the team injected crude NGF extract into newborn mice, they observed nerve growth, and the mouse pups also developed faster – their eyes opening earlier and their teeth coming in sooner – than the untreated group. When the team repeated the experiment with purified NGF, however, the accelerated development disappeared, leading them to conclude that something else in the crude extract was influencing the growth of the newborn mice. Levi-Montalcini's colleague Stanley Cohen identified that second substance: another growth factor, called epidermal growth factor (EGF), which caused the mouse pups' eyes and teeth to develop so quickly.
Levi-Montalcini continued to experiment with NGF for the next several decades at Washington University, illuminating how NGF works in our body. When Levi-Montalcini injected newborn mice with an antiserum for NGF, for example, her team found that it "almost completely deprived the animals of a sympathetic nervous system." Other experiments done by Levi-Montalcini and her colleagues helped show the role that NGF plays in other important biological processes, such as the regulation of our immune system and ovulation.
"The discovery of NGF really changed the world in which we live, because now we knew that cells talk to other cells, and that they use soluble factors. It was hugely important," said Bill Mobley, Chair of the Department of Neurosciences at the University of California, San Diego School of Medicine.
Her Lasting Legacy
After years of setbacks, Levi-Montalcini's groundbreaking work was recognized in 1986, when she was awarded the Nobel Prize in Physiology or Medicine for her discovery of NGF, sharing the prize with Cohen, the colleague who discovered EGF. Researchers continue to study NGF to this day, and that ongoing research has deepened our understanding of diseases like HIV and Alzheimer's.
Levi-Montalcini never stopped researching either: In January 2012, at the age of 102, she published her last research paper in the journal PNAS, making her the oldest member of the National Academy of Sciences to do so. Before she died in December 2012, she encouraged other scientists who would suffer setbacks in their careers to keep pursuing their passions. "Don't fear the difficult moments," Levi-Montalcini is quoted as saying. "The best comes from them."
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques along with algorithms that use artificial intelligence, the branch of computer science in which machines learn and execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for AI's use in patient care is essential to its responsible and transparent implementation as the industry grapples with voluminous quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI, a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body, has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of academic medical centers have used it, and then only in niche applications. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet, such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing an individual's data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, said Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."
How Emerging Technologies Can Help Us Fight the New Coronavirus
In nature, few species remain dominant for long. Any sizable population of similar individuals offers immense resources to whichever parasite can evade its defenses, spreading rapidly from one member to the next.
Humans are one such dominant species. That wasn't always the case: our hunter-gatherer ancestors lived in groups too small and poorly connected to spread pathogens like wildfire. Our collective vulnerability to pandemics began with the dawn of cities and trade networks thousands of years ago. Roman cities were always demographic sinks, but never more so than when a pandemic agent swept through. The plague of Cyprian, the Antonine plague, the plague of Justinian – each is thought to have killed over ten million people, an appallingly high fraction of the total population of the empire.
With the advent of sanitation, hygiene, and quarantines, we developed our first non-immunological defenses to curtail the spread of plagues. With antibiotics, we began to turn the weapons of microbes against our microbial foes. Most potent of all, we use vaccines to train our immune systems to fight pathogens before we are even exposed. Edward Jenner's original vaccine alone is estimated to have saved half a billion lives.
It's been over a century since we suffered from a swift and deadly pandemic. Even the deadly influenza pandemic of 1918 killed only a few percent of humanity – nothing so bad as any of the Roman plagues, let alone the Black Death of medieval times.
How much of our recent winning streak has been due to luck?
Much rides on that question, because the same factors that first made our ancestors vulnerable are now ubiquitous. Our cities are far larger than those of ancient times. They're inhabited by an ever-growing fraction of humanity, and are increasingly closely connected: we now routinely travel around the world in the course of a day. Despite urbanization, global population growth has increased contact with wild animals, creating more opportunities for zoonotic pathogens to jump species. Which will prove greater: our defenses or our vulnerabilities?
The tragic emergence of coronavirus 2019-nCoV in Wuhan may provide a test case. How devastating this virus will become is highly uncertain at the time of writing, but its rapid spread to many countries is deeply worrisome. That it seems to kill only the already infirm and spare the healthy is small comfort, and may counterintuitively assist its spread: it's easy to implement a quarantine when everyone infected becomes extremely ill, but if carriers may not exhibit symptoms, as has been reported, it becomes exceedingly difficult to limit transmission. The virus, a distant relative of the more lethal SARS virus that killed nearly 800 people in 2002 and 2003, has evolved to be transmitted between humans and has spread to 18 countries in just six weeks.
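The arithmetic behind that worry can be sketched in a few lines. In this illustrative toy model (every number here is an assumption for illustration, not an estimate for 2019-nCoV), isolation removes only the carriers who show symptoms, so the effective reproduction number falls in proportion to the fraction of carriers you can actually detect:

```python
# Toy model: why asymptomatic carriers undermine quarantine.
# All parameter values are illustrative assumptions, not real estimates.

def effective_r(r0, symptomatic_fraction, isolation_success=0.9):
    """Effective reproduction number when only symptomatic carriers
    can be found and isolated before they transmit."""
    detected = symptomatic_fraction * isolation_success
    return r0 * (1 - detected)

R0 = 2.5  # assumed basic reproduction number with no intervention

# If every carrier shows symptoms, quarantine drives R well below 1
# and the outbreak dies out:
print(f"{effective_r(R0, symptomatic_fraction=1.0):.2f}")  # 0.25

# If half the carriers are asymptomatic, R stays above 1 and the
# epidemic keeps growing despite the same quarantine effort:
print(f"{effective_r(R0, symptomatic_fraction=0.5):.2f}")
```

The crossover point, not any particular number, is what matters: the same quarantine that extinguishes a fully symptomatic disease can fail outright once a large share of transmission is invisible.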
Humanity's response has been faster than ever, if not fast enough. To its immense credit, China swiftly shared information, organized and built new treatment centers, closed schools, and established quarantines. The Coalition for Epidemic Preparedness Innovations, which was founded in 2017, quickly funded three different companies to develop three different varieties of vaccine: a standard protein vaccine, a DNA vaccine, and an RNA vaccine, with more planned. One of the agreements was signed after just four days of discussion, far faster than has ever been done before.
The new vaccine candidates will likely be ready for clinical trials by early summer, but even if successful, it will be additional months before the vaccine will be widely available. The delay may well be shorter than ever before thanks to advances in manufacturing and logistics, but a delay it will be.
The 1918 influenza virus killed more than half of its victims in the United Kingdom over just three months.
If we faced a truly nasty virus, something that spreads like pandemic influenza – let alone measles – yet with the higher fatality rate of, say, H7N9 avian influenza, the situation would be grim. We are profoundly unprepared, on many different levels.
So what would it take to provide us with a robust defense against pandemics?
Minimize the attack surface: 2019-nCoV jumped from an animal, most probably a bat, to humans. China has now banned the wildlife trade in response to the epidemic. Keeping it banned would be prudent, but won't be possible in all nations. Still, there are other methods of protection. Influenza viruses commonly jump from birds to pigs to humans; the new coronavirus may have similarly passed through a livestock animal. Thanks to CRISPR, we can now edit the genomes of most livestock. If we made them immune to known viruses, and introduced those engineered traits to domesticated animals everywhere, we would create a firewall in those intermediate hosts. We might even consider heritably immunizing the wild organisms most likely to serve as reservoirs of disease.
Rapid diagnostics: We need a reliable method of detection costing just pennies to be available worldwide inside of a week of discovering a new virus. This may eventually be possible thanks to a technology called SHERLOCK, which is based on a CRISPR system more commonly used for precision genome editing. Instead of using CRISPR to find and edit a particular genome sequence in a cell, SHERLOCK programs it to search for a desired target and initiate an easily detected chain reaction upon discovery. The technology is capable of fantastic sensitivity: with an attomolar (10⁻¹⁸ molar) detection limit, it senses single molecules of a unique DNA or RNA fingerprint, and the components can be freeze-dried onto paper strips.
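To see what an attomolar detection limit means in practice, a quick back-of-the-envelope calculation helps (the 20 µL reaction volume below is an illustrative assumption, not a figure from any specific assay): at 10⁻¹⁸ mol/L there is less than one target molecule per microliter, so the assay really is operating in the single-molecule regime.

```python
# Back-of-the-envelope: how many molecules is "attomolar"?
AVOGADRO = 6.022e23      # molecules per mole
conc_molar = 1e-18       # attomolar detection limit, in mol/L

molecules_per_liter = conc_molar * AVOGADRO          # ~6.0e5 per liter
molecules_per_ul = molecules_per_liter / 1e6         # ~0.6 per microliter

reaction_volume_ul = 20  # assumed small diagnostic reaction volume
molecules_in_reaction = molecules_per_ul * reaction_volume_ul

print(f"~{molecules_per_ul:.2f} molecules per microliter")
print(f"~{molecules_in_reaction:.0f} molecules in a {reaction_volume_ul} uL reaction")
```

A detector that triggers on roughly a dozen molecules in the whole tube has essentially no room left to improve on sensitivity; the remaining engineering challenges are cost, speed, and deployment.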
Better preparations: China acted swiftly to curtail the spread of the Wuhan virus with traditional public health measures, but not everything went as smoothly as it might have. Most cities and nations have never conducted a pandemic preparedness drill. Best give people a chance to practice keeping the city barely functional while minimizing potential exposure events before facing the real thing.
Faster vaccines: Three months to clinical trials is too long. We need a robust vaccine discovery and production system that can generate six candidates within a week of the pathogen's identification, manufacture a million doses the week after, and scale up to a hundred million inside of a month. That may be possible for novel DNA and RNA-based vaccines, and indeed anything that can be delivered using a standardized gene therapy vector. For example, instead of teaching each person's immune system to evolve protective antibodies by showing it pieces of the virus, we can program cells to directly produce known antibodies via gene therapy. Those antibodies could be discovered by sifting existing diverse libraries of hundreds of millions of candidates, computationally designed from scratch, evolved using synthetic laboratory ecosystems, or even harvested from the first patients to report symptoms. Such a vaccine might be discovered and produced fast enough at scale to halt almost any natural pandemic.
Robust production and delivery: Our defenses must not be vulnerable to the social and economic disruptions caused by a pandemic. Unfortunately, our economy selects for speed and efficiency at the expense of robustness. Just-in-time supply chains that wing their way around the world require every node to be intact. If workers aren't on the job producing a critical component, the whole chain breaks until a substitute can be found. A truly nasty pandemic would disrupt economies all over the world, so we will need to pay extra to preserve the capacity for independent vertically integrated production chains in multiple nations. Similarly, vaccines are only useful if people receive them, so delivery systems should be as robustly automated as possible.
None of these defenses will be cheap, but they'll be worth every penny. Our nations collectively spend trillions on defense against one another, but only billions to protect humanity from pandemic viruses known to have killed more people than any human weapon. That's foolish – especially since natural animal diseases that jump the species barrier aren't the only pandemic threats.
The complete genomes of every historical pandemic virus that has been sequenced are freely available to anyone with an internet connection. True, these are all agents we've faced before, so we have a pre-existing armory of pharmaceuticals, vaccines, and experience. There's no guarantee that they would become pandemics again; for example, a large fraction of humanity is almost certainly immune to the 1918 influenza virus due to exposure to the related 2009 pandemic, making it highly unlikely that the virus would take off if released.
Still, making the blueprints publicly available means that a large and growing number of people with the relevant technical skills can single-handedly make deadly biological agents that might be able to spread autonomously – at least if they can get their hands on the relevant DNA. At present, such people most certainly can, so long as they bother to check the publicly available list of which gene synthesis companies do the right thing and screen orders – and, by implication, which ones don't.
One would hope that at least some of the companies that don't advertise that they screen are "honeypots" paid by intelligence agencies to catch would-be bioterrorists, but even if most of them are, it's still foolish to let individuals access that kind of destructive power. We will eventually make our society immune to naturally occurring pandemics, but that day has not yet come, and future pandemic viruses may not be natural. Hence, we should build a secure and adaptive system capable of screening all DNA synthesis for known and potential future pandemic agents... without disclosing what we think is a credible bioweapon.
Whether or not it becomes a global pandemic, the emergence of the Wuhan coronavirus has underscored the need for coordinated action to prevent the spread of pandemic disease. Let's ensure that our reactive response, at a minimum, prepares us for future threats – for one day, reacting may not be enough.