He Beat Lymphoma at 31, While Pioneering Breakthroughs in Cancer Research
It looked like only good things were ahead of Taylor Schreiber in 2010.
Schreiber had just finished his PhD in cancer biology and was preparing to return to medical school to complete his degree. He also had been married a year, and, like any young newlyweds up for adventure, he and his wife Nicki decided to go backpacking in the Costa Rican rainforest.
During the trip, he experienced a series of night sweats and didn't think too much about it. Schreiber hadn't been feeling right for a few weeks and assumed he had a respiratory infection. Besides, they were sleeping outdoors in a hot, tropical jungle.
But the night sweats continued even after he got home, leaving his mattress so soaked in the morning it was as if a bucket of water had been dumped on him overnight. On instinct, he called one of his thesis advisors at the Sylvester Comprehensive Cancer Center in Florida and described his symptoms.
Dr. Joseph Rosenblatt didn't hesitate. "It sounds like Hodgkin's. Come see me tomorrow," he said.
The next day, Schreiber was diagnosed with Stage 3b Hodgkin Lymphoma, which meant the disease was advanced. He was 31, and it was April Fool's Day—but no joke.
"I was scared to death," he recalls. "[Thank] goodness it's one of those cancers that is highly treatable. But being 31 years old and all of a sudden being told that you have a 30 percent of mortality within the next two years wasn't anything that I was relieved about."
For Schreiber, the diagnosis was a personal and professional game-changer. He couldn't work in the hospital as a medical student while undergoing chemotherapy, so he wound up remaining in his postdoctoral lab for another two years. The experience also solidified his decision to apply his scientific and medical knowledge to drug development.
Today, now 39, Schreiber is co-founder, director and chief scientific officer of Shattuck Labs, an immuno-oncology startup, and the developer of several important research breakthroughs in the field of immunotherapy.
"These days, I look back on [my cancer] and think it was one of the luckiest things that ever happened to me," he says. "In medical school, you learn what it is to treat people and learn about the disease. But there is nothing like being a patient to teach you another side of medicine."
Medicine first called to Schreiber when his maternal grandfather was dying from lung cancer complications. Schreiber's uncle, a radiologist at the medical center where his grandfather was being treated, took him on a tour of his department and showed him images of the insides of his body on an ultrasound machine.
Schreiber was mesmerized. His mother was a teacher and his dad sold windows, so medicine was not something to which he had been routinely exposed.
"This weird device was like looking through jelly, and I thought that was the coolest thing ever," he says.
The experience led him to his first real job, at the Catholic Medical Center in Manchester, NH, and then, during his senior year of high school, to a semester-long internship in Concord Hospital's radiology department.
"This was a great experience, but it also made clear that there was not any meaningful way to learn or contribute to medicine before you obtained a medical degree," says Schreiber, who enrolled in Bucknell College to study biology.
Bench science appealed to him, and he volunteered in Dr. Jing Zhou's nephrology department lab at the Harvard Institutes of Medicine. Under the mentorship of one of her post-docs, Lei Guo, he learned a range of critical techniques in molecular biology, leading to their discovery of a new gene related to human polycystic kidney disease and his first published paper.
Before his cancer diagnosis, Schreiber also volunteered in the lab of Dr. Robert "Doc" Sackstein, a world-renowned bone marrow transplant physician and biomedical researcher, and his interests began to shift towards immunology.
"He was just one of those dynamic people who has a real knack for teaching, first of all, and for inspiring people to want to learn more and ask hard questions and understand experimental medicine," Schreiber says.
It was there that he learned the scientific method and the importance of incorporating the right controls in experiments—a simple idea, but difficult to perform well. He also made what Sackstein calls "a startling discovery" about chemokines, which are signaling proteins that can activate an immune response.
As immune cells travel around our bodies looking for potential sources of infection or disease, they latch onto blood vessel walls and "sniff around" for specific chemical cues that indicate a source of infection. Schreiber and his colleagues designed a system that mimics the blood vessel wall, allowing them to define which chemical cues efficiently drive immune cell migration from the blood into tissues.
Schreiber received the best overall research award in 2008 from the National Student Research Foundation. But even as Schreiber's expertise about immunology grew, his own immune system was about to fight its hardest battle.
After his diagnosis, he continued working full-time as a postdoc in the lab of Eckhard Podack, then chair of the microbiology and immunology department at the University of Miami's Leonard M. Miller School of Medicine.
At the same time, Schreiber began an aggressive intravenous chemotherapy regimen of adriamycin, bleomycin, vincristine and dacarbazine, administered every two weeks for six months. His wife Nicki, an OB-GYN, transferred her residency from Emory University in Atlanta to Miami so they could be together.
"It was a weird period. I mean, it made me feel good to keep doing things and not just lay idle," he said. "But by the second cycle of chemo, I was immunosuppressed and losing my hair and wore a face mask walking around the lab, which I was certainly self-conscious. But everyone around me didn't make me feel like an alien so I just went about my business."
He stayed home the day after chemo when he felt his worst, then rested his body and timed exercise to give the drugs the best shot at targeting sick cells (a strategy, he says, that "could have been voodoo"). He also drank "an incredible" amount of fluids to help flush the toxins out of his system.
Side effects of the chemo, besides hair loss, included intense nausea, diarrhea, a loss of appetite, some severe lung toxicities that eventually resolved, and incredible fatigue.
"I've always been a runner, and I would even try to run while I was doing chemo," he said. "After I finished treatment, I would go literally 150 yards and just have to stop, and it took a lot of effort to work through it."
The experience reinforced his desire to stay in immunology, especially after having taken the most toxic chemotherapies.
"They worked, and I could tolerate them because I was young, but people who are older can't," Schreiber said. "The whole field of immunotherapy has really demonstrated that there are effective therapies out there that don't come with all of the same toxicities as the original chemo, so it was galvanizing to imagine contributing to finding some of those."
Schreiber went on to complete his MD and PhD through the Sheila and David Fuente Program in Cancer Biology at the Miller School of Medicine and was nominated in 2011 as a Future Leader in Cancer Research by the American Association for Cancer Research. He also has numerous publications in the fields of tumor immunology and immunotherapy.
Sackstein, who was struck by Schreiber's enthusiasm and "boundless energy," predicts he will be a "major player in the world of therapeutics."
"The future for Taylor is amazing because he has the capacity to synthesize current knowledge and understand the gaps and then ask the right questions to establish new paradigms," said Sackstein, currently dean of the Herbert Wertheim College of Medicine at Florida International University. "It's a very unusual talent."
Since completing his training, he has devoted his career to developing innovative techniques aimed at unleashing the immune system to attack cancer with less toxicity than chemotherapy and better clinical results—first, at a company called Heat Biologics and then at Pelican Therapeutics.
His primary work at Austin, Texas-based Shattuck is aimed at combining two functions in a single therapy for cancer and inflammatory diseases, blocking molecules that put a brake on the immune system (checkpoint inhibitors) while also stimulating the immune system's cancer-killing T cells.
The company has one drug in clinical testing as part of its Agonist Redirected Checkpoint (ARC) platform, which represents a new class of biological medicine. Two others are expected within the next year, with a pipeline of more than 250 drug candidates spanning cancer, inflammatory, and metabolic diseases.
Nine years after his own cancer diagnosis, Schreiber says it remains a huge part of his life, though his chances of a cancer recurrence today are about the same as his chances of getting newly diagnosed with any other cancer.
"I feel blessed to be in a position to help cancer patients live longer and could not imagine a more fulfilling way to spend my life," he says.
Awash in a fluid finely calibrated to keep it alive, a human eye rests inside a transparent cubic device. This ECaBox, or Eyes in a Care Box, is a one-of-a-kind system built by scientists at Barcelona’s Centre for Genomic Regulation (CRG). Their goal is to preserve human eyes for transplantation and related research.
In recent years, scientists have learned to transplant delicate organs such as the liver, lungs or pancreas, but eyes are another story. Even when preserved at the standard transplant temperature of 4 degrees Celsius, they last for 48 hours at most. That's one reason why transplanting the whole eye isn't possible—only the cornea, the dome-shaped outer layer of the eye, can withstand the procedure. The retina, the layer at the back of the eyeball that turns light into electrical signals, which the brain converts into images, is extremely difficult to transplant because it's packed with nerve tissue and blood vessels.
These challenges also make it tough to research transplantation. “This greatly limits their use for experiments, particularly when it comes to the effectiveness of new drugs and treatments,” said Maria Pia Cosma, a biologist at the CRG whose team is working on the ECaBox.
Eye transplants are desperately needed, but they're nowhere in sight. About 12.7 million people worldwide need a corneal transplant, but only one in 70 people who need one receives it. The shortage is global. Eye banks in the United Kingdom run around 20 percent below the level needed to supply hospitals, while Indian eye banks, which need at least 250,000 corneas per year, collect only around 45,000 to 50,000 donor corneas (and of those, 60 to 70 percent are successfully transplanted).
As for retinas, it's impossible currently to put one into the eye of another person. Artificial devices can be implanted to restore the sight of patients suffering from severe retinal diseases, but the number of people around the world with such “bionic eyes” is less than 600, while in America alone 11 million people have some type of retinal disease leading to severe vision loss. Add to this an increasingly aging population, commonly facing various vision impairments, and you have a recipe for heavy burdens on individuals, the economy and society. In the U.S. alone, the total annual economic impact of vision problems was $51.4 billion in 2017.
Even growing tissues in a petri dish into organoids that mimic the function of the human eye will not capture the physiological complexity of the structure and metabolism of the real thing, according to Cosma. She is a member of a scientific consortium that includes researchers from major institutions in Spain, the U.K., Portugal, Italy and Israel. The consortium has received about $3.8 million from the European Union to pursue innovative eye research. Her team’s goal is to give hope to at least 2.2 billion people across the world afflicted with a vision impairment and 33 million who go through life with avoidable blindness.
Their method? Resuscitating cadaveric eyes for at least a month.
If we succeed, it will be the first intact human model of the eye capable of exploring and analyzing regenerative processes ex vivo. -- Maria Pia Cosma.
“We proposed to resuscitate eyes, that is to restore the global physiology and function of human explanted tissues,” Cosma said, referring to living tissues extracted from the eye and placed in a medium for culture. Their ECaBox is an ex vivo biological system, in which eyes taken from dead donors are placed in an artificial environment, designed to preserve the eye’s temperature and pH levels, deter blood clots, and remove the metabolic waste and toxins that would otherwise spell their demise.
Scientists work on resuscitating eyes in the lab of Maria Pia Cosma.
Courtesy of Maria Pia Cosma.
“One of the great challenges is the passage of the blood in the capillary branches of the eye, what we call long-term perfusion,” Cosma said. Capillaries are an intricate network of very thin blood vessels that transport blood, nutrients and oxygen to cells in the body’s organs and systems. To maintain the garland-shaped structure of this network, sufficient amounts of oxygen and nutrients must be provided through the eye circulation and microcirculation. “Our ambition is to combine perfusion of the vessels with artificial blood," along with using a synthetic form of vitreous, the gel-like fluid that lets in light and supports the eye's round shape, Cosma said.
The scientists use this novel setup with the eye submersed in its medium to keep the organ viable, so they can test retinal function. “If we succeed, we will ensure full functionality of a human organ ex vivo. It will be the first intact human model of the eye capable of exploring and analyzing regenerative processes ex vivo,” Cosma added.
A rapidly developing field of regenerative medicine aims to stimulate the body's natural healing processes and restore or replace damaged tissues and organs. But for people with retinal diseases, regenerative medicine progress has been painfully slow. “Experiments on rodents show progress, but the risks for humans are unacceptable,” Cosma said.
The ECaBox could speed that progress. “We will test emerging treatments while reducing animal research, and greatly accelerate the discovery and preclinical research phase of new possible treatments for vision loss at significantly reduced costs,” Cosma explained. Much less time and money would be wasted during the drug discovery process. Their work may even make it possible to transplant the entire eyeball for those who need it.
“It is a very exciting project,” said Sanjay Sharma, a professor of ophthalmology and epidemiology at Queen's University, in Kingston, Canada. “The ability to explore and monitor regenerative interventions will increasingly be of importance as we develop therapies that can regenerate ocular tissues, including the retina.”
But is the world ready for eye transplants? “People are a bit weird or very emotional about donating their eyes as compared to other organs,” Cosma said. And much can be said about the problem of eye donor shortage. Concerns include disfigurement and healthcare professionals’ fear that the conversation about eye donation will upset the departed person’s relatives because of cultural or religious considerations. As just one example, Sharma noted the paucity of eye donations in his home country, Canada.
Yet, experts like Sharma stress the importance of these donations for both the recipients and their family members. “It allows them some psychological benefit in a very difficult time,” he said. So why are global eye banks suffering? Is it because the eyes are the windows to the soul?
Seemingly, there's no sacred religious text or a holy book prohibiting the practice of eye donation. In fact, most major religions of the world permit and support organ transplantation and donation, and by extension eye donation, because they unequivocally see it as an “act of neighborly love and charity.” In Hinduism, the concept of eye donation aligns with the Hindu principle of daan or selfless giving, where individuals donate their organs or body after death to benefit others and contribute to society. In Islam, eye donation is a form of sadaqah jariyah, a perpetual charity, as it can continue to benefit others even after the donor's death.
Meanwhile, Buddhist masters teach that donating an organ gives another person the chance to live longer and practice dharma, the universal law and order, more meaningfully; they also dismiss misunderstandings of the type “if you donate an eye, you’ll be born without an eye in the next birth.” And Christian teachings emphasize the values of love, compassion, and selflessness, all compatible with organ donation, eye donation included; besides, those who will have a house in heaven will get a whole new body without imperfections and limitations.
The explanation for people’s resistance may lie in what Deepak Sarma, a professor of Indian religions and philosophy at Case Western Reserve University in Cleveland, calls “street interpretation” of religious or spiritual dogmas. Consider the mechanism of karma, which is about the causal relation between previous and current actions. “Maybe some Hindus believe there is karma in the eyes and, if the eye gets transplanted into another person, they will have to have that karmic card from now on,” Sarma said. “Even if there is peculiar karma due to an untimely death–which might be interpreted by some as bad karma–then you have the karma of the recipient, which is tremendously good karma, because they have access to these body parts, a tremendous gift,” Sarma said. The overall accumulation is that of good karma: “It’s a beautiful kind of balance,” Sarma said.
For the Jews, Christians, and Muslims who believe in the physical resurrection of the body that will be made new in an afterlife, the already existing body is sacred since it will be the basis of a new refashioned body in an afterlife.---Omar Sultan Haque.
With that said, Sarma believes it is a fallacy to personify or anthropomorphize the eye, which doesn’t have a soul, and stresses that the karma attaches itself to the soul and not the body parts. But for scholars like Omar Sultan Haque—a psychiatrist and social scientist at Harvard Medical School, investigating questions across global health, anthropology, social psychology, and bioethics—the hierarchy of sacredness of body parts is entrenched in human psychology. You cannot equate the pinky toe with the face, he explained.
“The eyes are the window to the soul,” Haque said. “People have a hierarchy of body parts that are considered more sacred or essential to the self or soul, such as the eyes, face, and brain.” In his view, the techno-utopian transhumanist communities (especially those in Silicon Valley) have reduced the totality of a person to a mere material object, a “wet robot” that knows no sacredness or hierarchy of human body parts. “But for the Jews, Christians, and Muslims who believe in the physical resurrection of the body that will be made new in an afterlife, the [already existing] body is sacred since it will be the basis of a new refashioned body in an afterlife,” Haque said. “You cannot treat the body like any old material artifact, or old chair or ragged cloth, just because materialistic, secular ideologies want so,” he continued.
For Cosma and her peers, however, the very definition of what is alive or not is a bit semantic. “As soon as we die, the electrophysiological activity in the eye stops,” she said. “The goal of the project is to restore this activity as soon as possible before the highly complex tissue of the eye starts degrading.” Cosma’s group doesn’t yet know how long they will be able to keep the eyes alive and well in the ECaBox, but the consensus is that the longer the better. Hopefully, the taboos and fears around eye donation will dissipate around the same time.
As Our AI Systems Get Better, So Must We
As the power and capability of our AI systems increase by the day, the essential question we now face is what constitutes peak human. If we stay where we are while the AI systems we are unleashing continually get better, they will meet and then exceed our capabilities in an ever-growing number of domains. But while some technology visionaries like Elon Musk call for us to slow down the development of AI systems to buy time, this approach alone will simply not work in our hyper-competitive world, particularly when the potential benefits of AI are so great and our frameworks for global governance are so weak. In order to build the future we want, we must also become ever better humans.
The list of activities we once saw as uniquely human where AIs have now surpassed us is long and growing. First, AI systems could beat our best chess players, then our best Go players, then our best champions of multi-player poker. They can see patterns far better than we can, generate medical and other hypotheses most human specialists miss, predict and map out new cellular structures, and even generate beautiful, and, yes, creative, art.
A recent paper by Microsoft researchers analyzing the significant leap in capabilities in OpenAI’s latest model, GPT-4, asserted that the algorithm can “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting.” Calling this functionality “strikingly close to human-level performance,” the authors conclude it “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
The concept of AGI has been around for decades. In its common use, the term suggests a time when individual machines can do many different things at a human level, not just one thing like playing Go or analyzing radiological images. Debating when AGI might arrive, a favorite pastime of computer scientists for years, now has become outdated.
We already have AI algorithms and chatbots that can do lots of different things. Based on the generalist definition, in other words, AGI is essentially already here.
Unfettered by the evolved capacity and storage constraints of our brains, AI algorithms can access nearly all of the digitized cultural inheritance of humanity since the dawn of recorded history and have increasing access to growing pools of digitized biological data from across the spectrum of life.
With these ever-larger datasets, rapidly increasing computing and memory power, and new and better algorithms, our AI systems will keep getting better faster than most of us can today imagine. These capabilities have the potential to help us radically improve our healthcare, agriculture, and manufacturing, make our economies more productive and our development more sustainable, and do many important things better.
Soon, they will learn how to write their own code. Like human children, in other words, AI systems will grow up. But even that doesn’t mean our human goose is cooked.
Just as with dolphins and dogs, these alternate forms of intelligence will be uniquely theirs, not a lesser or greater version of ours. There are lots of things AI systems can't do and will never be able to do, because our AI algorithms, for better and for worse, will never be human. Our embodied human intelligence is its own thing.
Our human intelligence is uniquely ours, based on the capacities we have developed in our 3.8-billion-year journey from single-celled organisms to us. Our brains and bodies represent continuous adaptations on earlier models, which is why our skeletal systems look like those of lizards and our brains like those of most other mammals, with some extra cerebral cortex mixed in. Human intelligence isn’t just some type of disembodied function but the inextricable manifestation of our evolved physical reality. It includes our sensory analytical skills and all of our animal instincts, intuitions, drives, and perceptions. Disembodied machine intelligence is something different than what we have evolved and possess.
Because of this, some linguists including Noam Chomsky have recently argued that AI systems will never be intelligent as long as they are just manipulating symbols and mathematical tokens without any inherent understanding. Nothing could be further from the truth. Anyone interacting with even first-generation AI chatbots quickly realizes that while these systems are far from perfect or omniscient and can sometimes be stupendously oblivious, they are surprisingly smart and versatile and will get more so… forever. We have little idea even how our own minds work, so judging AI systems based on their output is relatively close to how we evaluate ourselves.
Anyone not awed by the potential of these AI systems is missing the point. AI’s newfound capacities demand that we work urgently to establish norms, standards, and regulations at all levels from local to global to manage the very real risks. Pausing our development of AI systems now doesn’t make sense, however, even if it were possible, because we have no sufficient ways of uniformly enacting such a pause, no plan for how we would use the time, and no common framework for addressing global collective challenges like this.
But if all we feel is a passive awe for these new capabilities, we will also be missing the point.
Human evolution, biology, and cultural history are not just some kind of accidental legacy, disability, or parlor trick, but our inherent superpower. Our ancestors outcompeted rivals for billions of years to make us so well suited to the world we inhabit and helped build. Our social organization at scale has made it possible for us to forge civilizations of immense complexity, engineer biology and novel intelligence, and extend our reach to the stars. Our messy, embodied, intuitive, social human intelligence is roughly mimicable by AI systems but, by definition, never fully replicable by them.
Once we recognize that both AI systems and humans have unique superpowers, the essential question becomes what each of us can do better than the other and what humans and AIs can best do in active collaboration. We still don't know. The future of our species will depend upon our ability to safely, dynamically, and continually figure that out.
As we do, we'll learn that many of our ideas and actions are made up of parts, some of which will prove essentially human and some of which can be better achieved by AI systems. Those in every walk of work and life who most successfully identify the optimal contributions of humans, of AIs, and of the two together, and who build systems and workflows that let humans do human things, machines do machine things, and humans and machines work together in ways that maximize the respective strengths of each, will be the champions of the 21st century across all fields.
The dawn of the age of machine intelligence is upon us. It’s a quantum leap equivalent to the domestication of plants and animals, industrialization, electrification, and computing. Each of these revolutions forced us to rethink what it means to be human, how we live, and how we organize ourselves. The AI revolution will happen more suddenly than these earlier transformations but will follow the same general trajectory. Now is the time to aggressively prepare for what is fast heading our way, including by active public engagement, governance, and regulation.
AI systems will not replace us, but, like these earlier technology-driven revolutions, they will force us to become different humans as we co-evolve with our technology. We will never reach peak human in our ongoing evolutionary journey, but we’ve got to manage this transition wisely to build the type of future we’d like to inhabit.
Alongside our ascending AIs, we humans still have a lot of climbing to do.