Nobel Prize goes to technology for mRNA vaccines
When Drew Weissman received a call from Katalin Karikó in the early morning hours this past Monday, he assumed his longtime research partner was calling to share a nascent, nagging idea. Weissman, a professor of medicine at the Perelman School of Medicine at the University of Pennsylvania, and Karikó, a professor at the University of Szeged and an adjunct professor at UPenn, both struggle with sleep disturbances. Thus, middle-of-the-night exchanges between the two, often over email, have been a staple of their friendship. But this time, Karikó had something more pressing and exciting to share: They had won the 2023 Nobel Prize in Physiology or Medicine.
The work for which they garnered the illustrious award and its accompanying $1,000,000 cash windfall was completed about two decades ago, wrought through long hours in the lab over many arduous years. But humanity collectively benefited from its life-saving outcome three years ago, when both Moderna and Pfizer/BioNTech’s mRNA vaccines against COVID were found to be safe and highly effective at preventing severe disease. Billions of doses have since been given out to protect humans from the upstart viral scourge.
Unlocking the power of mRNA
Weissman and Karikó unlocked mRNA vaccines for the world back in the early 2000s when they made a key breakthrough. Messenger RNA molecules are essentially instructions that tell cells’ ribosomes to make specific proteins, so in the 1980s and 1990s, researchers began to wonder whether sneaking mRNA into the body could prompt cells to manufacture antibodies, enzymes, or growth factors for protecting against infection, treating disease, or repairing tissues. But there was a big problem: injecting this synthetic mRNA triggered a dangerous inflammatory immune response that ended in the mRNA’s destruction.
While most other researchers chose not to tackle this perplexing problem, pursuing more lucrative and publishable work instead, Karikó stuck with it. The choice sent her academic career into the doldrums. Nobody would fund her work, publications dried up, and after six years as an assistant professor at the University of Pennsylvania, Karikó was demoted. She was going backward.
“I thought of going somewhere else, or doing something else,” Karikó told Stat in 2020. “I also thought maybe I’m not good enough, not smart enough. I tried to imagine: Everything is here, and I just have to do better experiments.”
A tale of tenacity
Collaborating with Drew Weissman, a new professor at the University of Pennsylvania, in the late 1990s helped provide Karikó with the tenacity to continue. Weissman nurtured a goal of developing a vaccine against HIV-1, and saw mRNA as a potential way to do it.
“For the 20 years that we’ve worked together before anybody knew what RNA is, or cared, it was the two of us literally side by side at a bench working together,” Weissman said in an interview with Adam Smith of the Nobel Foundation.
In 2005, the duo made their 2023 Nobel Prize-winning breakthrough, detailing it in a relatively small journal, Immunity. (Their paper was rejected by larger journals, including Science and Nature.) They figured out that chemically modifying the nucleoside bases that make up mRNA allowed the molecule to slip past the body’s immune defenses. Karikó and Weissman followed up that finding by creating mRNA that’s more efficiently translated within cells, greatly boosting protein production. In 2020, scientists at Moderna and BioNTech (where Karikó worked from 2013 to 2022) rushed to craft vaccines against COVID, putting their methods to life-saving use.
The future of vaccines
Buoyed by the resounding success of mRNA vaccines, scientists are now hurriedly researching ways to use mRNA medicine against other infectious diseases, cancer, and genetic disorders. These now-ubiquitous efforts stand in stark contrast to the unheralded struggles of Karikó and Weissman years ago, as they doggedly worked to realize a shared dream that so many others shied away from. Katalin Karikó and Drew Weissman were brave enough to walk a scientific path that very well could have led to a dead end, and for that, they absolutely deserve their 2023 Nobel Prize.
Today’s Focus on STEM Education Is Missing A Crucial Point
I once saw a fascinating TED talk on 3D printing. As I watched the presenter discuss the custom fabrication, not of plastic gears or figurines, but of living, implantable kidneys, I thought I was finally living in the world of Star Trek, and I experienced a flush of that eager, expectant enthusiasm I felt as a child looking toward the future. I looked at my current career and felt a rejuvenation of my commitment to teach young people the power of science.
Whether we are teachers or not, those of us who admire technology and innovation, and who wish to support progress, usually embrace the importance of educating the next generation of scientists and inventors. Growing a healthy technological civilization takes a lot of work, skill, and wisdom, and its continued health depends on future generations of competent thinkers. Thus, we may find it encouraging that there is currently an abundance of interest in STEM, the common acronym for the study of science, technology, engineering, and math.
But education is as challenging an endeavor as science itself. Educating youth--if we want to do it right--requires as much thought, work, and expertise as discovering a cure or pioneering regenerative medicine. Before we give our money, time, or support to any particular school or policy, let's give some thought to the details of the educational process.
A Well-Balanced Diet
For one thing, STEM education cannot stand in isolation. The well-rounded education of human beings needs to include lessons learned both from a study of the physical world, and from a study of humanity. This is especially true for the basic education of children, but it is true even for college students. And even for those in science and engineering, there are important lessons to be learned from the study of history, literature, and art.
Scientists have their own emotions and values, and also need financial support. The fruits of their labor ultimately benefit other people. How are we all to function together in our division-of-labor society, without some knowledge of the way societies work? How are we to fully thrive and enjoy life, without some understanding of ourselves, our motives, our moral values, and our relationships to others? STEM education needs the humanities as a partner. That flourishing civilization we dream of requires both technical competence and informed life-choices.
Think for Yourself (Even in Science)
Perhaps even more important than what is taught is how things are taught. We want our children to learn the skill of thinking independently, but even in the sciences, we often completely fail to demonstrate how. Instead of teaching science as a thinking process, we indoctrinate, using the grand discoveries of the great scientists as our sacred texts. But consider the words of Isaac Newton himself, regarding rote learning:
A Vulgar Mechanick can practice what he has been taught or seen done, but if he is in an error he knows not how to find it out and correct it, and if you put him out of his road he is at a stand. Whereas he that is able to reason nimbly and judiciously about figure, force, and motion, is never at rest till he gets over every rub.
If our goal is to help students "reason nimbly" about the world around them, as the great scientists themselves did, are we succeeding? When we "teach" middle school students about DNA or cellular respiration by presenting cartoon pictures as our only supporting evidence, are we showing them a process of discovery based on evidence and hard work? Or are we just training them to memorize and repeat what the authorities say?
A useful education needs to give students the skill of following a line of reasoning, of asking rational questions, and of chewing things through in their minds--even if we regard the material as beyond question. Besides feeding students a well-balanced diet of knowledge, healthy schooling needs to teach them to digest this information thoroughly.
Thinking Training
Now step back for a moment and think about the purpose of education. What's the point of all this formal schooling in the first place? Is it, as many of the proponents of STEM education might argue, to train students for a "good" career? That view may have some validity for young adults, who are beginning to choose electives in favored subjects and to settle on a direction for their careers.
But for the basic education of children, this way of thinking is presumptuous and disastrous. I would argue that the central purpose of a basic education is not to teach children how to perform this or that particular skill, but simply to teach them to think clearly. We should not be aiming to provide job training, but thinking training. We should be helping children learn how to "reason nimbly" about the world around them, breathing life into the thinking processes by which they will grapple with the events and circumstances of their lives.
So as we admire innovation, dream of a wonderful future, and attempt to nurture the next generation of scientists and engineers, instead of obsessing over STEM education, let us focus on rational education. Let's worry about showing children how to think -- about all the important things in life. Let's give them the basic facts of human existence -- physical and humanistic -- and show them how to understand those facts fluently and logically.
Some students will become the next generation of creators, and some will follow other careers, but together -- if they are educated properly -- they will continue to grow their inheritance, and to keep our civilization healthy and flourishing, in body and in mind.
Do New Tools Need New Ethics?
Scarcely a week goes by without the announcement of another breakthrough owing to advancing biotechnology. Recent examples include the use of gene editing tools to successfully alter human embryos; the cloning of monkeys; new immunotherapy-based treatments offering longer lives or even potential cures for previously deadly cancers; and the creation of genetically altered mosquitoes using "gene drives" to quickly spread changes through a population in an ecosystem and alter its capacity to carry disease.
Each of these examples puts pressure on current policy guidelines and approaches, some existing since the late 1970s, which were created to help guide the introduction of controversial new life sciences technologies. But do the policies that made sense decades ago continue to make sense today, or do the tools created during different eras in science demand new ethics guidelines and policies?
Advances in biotechnology aren't new, of course, and in fact have been the hallmark of science since the creation of the modern U.S. National Institutes of Health in the 1940s and of similar government agencies elsewhere. Funding agencies focused on health sciences research in the hope of creating breakthroughs in human health, and along the way, basic science discoveries led to the creation of new scientific tools that offered the ability to approach life, death, and disease in fundamentally new ways.
For example, take the discovery in the 1970s of the "chemical scissors" in living cells called restriction enzymes, which could be controlled and used to introduce cuts at predictable locations in a strand of DNA. That discovery led to tools that, for the first time, allowed for the genetic modification of any organism with DNA. Bacteria, plants, animals, and even humans could in theory have harmful mutations repaired; but changes could also be made to alter or even add genetic traits, with potentially ominous implications.
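To make the idea of "cuts at predictable locations" concrete, here is a minimal, purely illustrative Python sketch (not from the original research or this article) that treats a restriction digest as simple string matching. The recognition site and cut offset are modeled on the well-known enzyme EcoRI, which cuts the sequence GAATTC between the G and the first A; everything else about a real digest is abstracted away.

    # Purely illustrative: restriction enzymes cut DNA wherever their recognition
    # sequence occurs, so cut positions are predictable from the sequence alone.
    # Site and cut offset below are modeled on EcoRI (G^AATTC); real digests
    # involve far more biology than string matching.

    def find_cut_sites(dna: str, site: str = "GAATTC", cut_offset: int = 1) -> list[int]:
        """Return the positions where the enzyme would cut the given DNA strand."""
        dna = dna.upper()
        cuts, start = [], 0
        while True:
            idx = dna.find(site, start)
            if idx == -1:
                break
            cuts.append(idx + cut_offset)  # the cut falls inside the recognition site
            start = idx + 1
        return cuts

    def digest(dna: str, site: str = "GAATTC", cut_offset: int = 1) -> list[str]:
        """Split the strand at every cut position, yielding the resulting fragments."""
        fragments, prev = [], 0
        for pos in find_cut_sites(dna, site, cut_offset):
            fragments.append(dna[prev:pos])
            prev = pos
        fragments.append(dna[prev:])
        return fragments

    if __name__ == "__main__":
        strand = "ATGGAATTCTTACGGAATTCCA"
        print(find_cut_sites(strand))  # [4, 15]
        print(digest(strand))          # ['ATGG', 'AATTCTTACGG', 'AATTCCA']

Cutting strands at known sites and then re-joining fragments from different sources is, in essence, what made the first recombinant DNA tools possible.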
The scientists involved in that early research convened a small conference to discuss not only the science but also how to responsibly control its potential uses and their implications. The meeting became known as the Asilomar Conference, after the meeting center where it was held, and it is often cited as the prime example of the scientific community policing itself. While the Asilomar recommendations were not sufficient from a policy standpoint, they offered a blueprint on which policies could be based and presented a model of the scientific community setting responsible controls for itself.
But the environment for conducting science has changed over the succeeding decades, and it is dramatically different today than it was in the 1970s, '80s, or even the early 2000s. The oversight and regulatory regime that has controlled the introduction of so-called "gene therapy" in humans, built up since the mid-1970s, is beginning to show signs of fraying. The vast majority of such research was performed in the U.S., U.K., and Europe, where policies were largely harmonized. But as the tools for manipulating humans at the molecular level advanced, they became more reliable and more precise, as well as cheaper and easier to use (think CRISPR), and therefore more accessible to more people in many more countries, many without clear oversight or policies laying out responsible controls.
As if to make the point through news headlines, scientists in China announced in 2017 that they had attempted gene editing on in vitro human embryos to repair an inherited mutation for beta thalassemia, research that would not be permitted in the U.S. and most European countries and that at the time was also banned in the U.K. Similarly, specialists from a reproductive medicine clinic in the U.S. announced in 2016 that they had used a highly controversial reproductive technique in which DNA from two women is combined (so-called "three-parent babies"), working out of a satellite clinic they had opened in Mexico to avoid prohibitions on the technique passed by the U.S. Congress in 2015.
In both cases, genetic changes were introduced into human embryos that if successful would lead to the birth of a child with genetically modified germline cells—the sperm in boys or eggs in girls—with those genetic changes passed on to all future generations of related offspring. Those are just two very recent examples, and it doesn't require much imagination to predict the list of controversial possible applications of advancing biotechnologies: attempts at genetic augmentation or even cloning in humans, and alterations of the natural environment with genetically engineered mosquitoes or other insects in areas with endemic disease. In fact, as soon as this month, scientists in Africa may release genetically modified mosquitoes for the first time.
The technical barriers are falling at a dramatic pace, but policy hasn't kept up, both in terms of what controls make sense and in terms of how to address what is an increasingly global challenge. There is no precedent for global-scale science policy, though that is exactly what this moment seems to demand. Mechanisms for policy at a global scale are limited (think UN declarations, signatory countries, and sometimes international treaties), and all are slow, cumbersome, and have limited track records of success.
But not all the news is bad. There are ongoing efforts at international discussion, such as an international summit on human genome editing convened in 2015 by the National Academies of Sciences and Medicine (U.S.), the Royal Society (U.K.), and the Chinese Academy of Sciences (China); a follow-on international consensus committee whose report was issued in 2017; and an upcoming second international summit in Hong Kong in November of this year.
These efforts need to continue, focusing less on common regulatory policies, which will be elusive if not impossible to create and implement, and more on common ground for the principles that ought to guide country-level rules. Such principles might include those proposed by the international consensus committee, including transparency, due care, responsible science adhering to professional norms, promoting the wellbeing of those affected, and transnational cooperation. Work to create a set of shared norms is ongoing and worth continued effort as the relevant stakeholders attempt to navigate what can only be called a brave new world.