Researchers Are Testing a New Stem Cell Therapy in the Hopes of Saving Millions from Blindness
Of all the infirmities of old age, failing sight is among the cruelest. It can mean the end not only of independence, but of a whole spectrum of joys—from gazing at a sunset or a grandchild's face to reading a novel or watching TV.
The leading cause of vision loss in people over 55 is age-related macular degeneration, or AMD, which afflicts an estimated 11 million Americans. As photoreceptors in the macula (the central part of the retina) die off, patients experience increasingly severe blurring, dimming, distortions, and blank spots in one or both eyes.
The disorder comes in two varieties, "wet" and "dry," both driven by a complex interaction of genetic, environmental, and lifestyle factors. It begins when deposits of cellular debris accumulate beneath the retinal pigment epithelium (RPE)—a layer of cells that nourish and remove waste products from the photoreceptors above them. In wet AMD, this process triggers the growth of abnormal, leaky blood vessels that damage the photoreceptors. In dry AMD, which accounts for 80 to 90 percent of cases, RPE cells atrophy, causing photoreceptors to wither away. Wet AMD can be controlled in about a quarter of patients, usually by injections of medication into the eye. For dry AMD, no effective remedy exists.
Stem Cells: Promise and Perils
Over the past decade, stem cell therapy has been widely touted as a potential treatment for AMD. The idea is to augment a patient's ailing RPE cells with healthy ones grown in the lab. A few small clinical trials have shown promising results. In a study published in 2018, for example, a University of Southern California team cultivated RPE tissue from embryonic stem cells on a plastic matrix and transplanted it into the retinas of four patients with advanced dry AMD. Because the trial was designed to test safety rather than efficacy, lead researcher Amir Kashani told a reporter, "we didn't expect that replacing RPE cells would return a significant amount of vision." Yet acuity improved substantially in one recipient, and the others regained their lost ability to focus on an object.
Therapies based on embryonic stem cells, however, have two serious drawbacks: Using cell lines derived from embryos raises ethical issues, and such treatments require the patient to take immunosuppressant drugs (which can cause health problems of their own) to prevent rejection. That's why some experts favor a different approach—one based on induced pluripotent stem cells (iPSCs). Such cells, first produced in 2006, are made by returning adult cells to an undifferentiated state, and then using chemicals to reprogram them as desired. Treatments grown from a patient's own tissues could sidestep both hurdles associated with embryonic cells.
At least hypothetically. Today, the only stem cell therapies approved by the U.S. Food and Drug Administration (FDA) are umbilical cord-derived products for various blood and immune disorders. Although scientists are probing the use of embryonic stem cells or iPSCs for conditions ranging from diabetes to Parkinson's disease, such applications remain experimental—or fraudulent, as a growing number of patients treated at unlicensed "stem cell clinics" have painfully learned. (Some have gone blind after receiving bogus AMD therapies at those facilities.)
Last December, researchers at the National Eye Institute in Bethesda, Maryland, began enrolling patients with dry AMD in the country's first clinical trial using tissue grown from the patients' own stem cells. Led by biologist Kapil Bharti, the team intends to implant custom-made RPE cells in 12 recipients. If the effort pans out, it could someday save the sight of countless oldsters.
That, however, is what's technically referred to as a very big "if."
The First Steps
Bharti's trial is not the first in the world to use patient-derived iPSCs to treat age-related macular degeneration. In 2013, Japanese researchers implanted such cells into the eyes of a 77-year-old woman with wet AMD; after a year, her vision had stabilized, and she no longer needed injections to keep abnormal blood vessels from forming. A second patient was scheduled for surgery—but the procedure was canceled after the lab-grown RPE cells showed signs of worrisome mutations. That incident illustrates one potential problem with using stem cells: Under some circumstances, the cells or the tissue they form could turn cancerous.
Bharti and his colleagues have gone to great lengths to avoid such outcomes. "Our process is significantly different," he told me in a phone interview. His team begins with patients' blood stem cells, which appear to be more genomically stable than the skin cells that the Japanese group used. After converting the blood cells to RPE stem cells, his team cultures them in a single layer on a biodegradable scaffold, which helps them grow in an orderly manner. "We think this material gives us a big advantage," Bharti says. The team uses a machine-learning algorithm to identify optimal cell structure and ensure quality control.
It takes about six months for a patch of iPSCs to become viable RPE cells. When they're ready, a surgeon uses a specially designed tool to insert the tiny structure into the retina. Within days, the scaffold melts away, enabling the transplanted RPE cells to integrate fully into their new environment. Bharti's team initially tested their method on rats and pigs with eye damage mimicking AMD. The study, published in January 2019 in Science Translational Medicine, found that at ten weeks, the implanted RPE cells continued to function normally and protected neighboring photoreceptors from further deterioration. No trace of mutagenesis appeared.
Encouraged by these results, Bharti began recruiting human subjects. The Phase 1 trial will likely run through 2022, followed by a larger Phase 2 trial that could last another two or three years. FDA approval would require an even larger Phase 3 trial, with a decision expected sometime between 2025 and 2028—that is, if nothing untoward happens before then. One unknown (among many) is whether implanted cells can thrive indefinitely under the biochemically hostile conditions of an eye with AMD.
"Most people don't have a sense of just how long it takes to get something like this to work, and how many failures—even disasters—there are along the way," says Marco Zarbin, professor and chair of ophthalmology and visual science at Rutgers New Jersey Medical School and co-editor of the book Cell-Based Therapy for Degenerative Retinal Diseases. "The first kidney transplant was done in 1933. But the first successful kidney transplant was in 1954. That gives you a sense of the time frame. We're really taking the very first steps in this direction."
Looking Ahead
Even if Bharti's method proves safe and effective, there's the question of its practicality. "My sense is that using induced pluripotent stem cells to treat the patient from whom they're derived is a very expensive undertaking," Zarbin observes. "So you'd have to have a very dramatic clinical benefit to justify that cost."
Bharti concedes that the price of iPSC therapy is likely to be high, given that each "dose" is formulated for a single individual, requires months to manufacture, and must be administered via microsurgery. Still, he expects economies of scale to emerge as production is streamlined. "We're working on automating several steps of the process," he explains. "When that kicks in, a technician will be able to make products for 10 or 20 people at once, so the cost will drop proportionately."
Meanwhile, other researchers are pressing ahead with therapies for AMD using embryonic stem cells, which could be mass-produced to treat any patient who needs them. But should that approach eventually win FDA approval, Bharti believes there will still be room for a technique that requires neither embryonic cell lines nor immunosuppression.
And not only for eye ailments. "The knowledge and expertise we're gaining can be applied to many other iPSC-based therapies," says the scientist, who is currently consulting with several companies that are developing such treatments. "I'm hopeful that we can leverage these approaches for a wide range of applications, whether it's for vision or across the body."
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques and algorithms that draw on artificial intelligence—the branch of computer science in which machines learn and execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that as the field grapples with voluminous quantities of data, setting industry-wide standards for use in patient care is essential to AI's responsible and transparent implementation. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI—a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body—has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet, such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on all physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing individual data is likely to be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, said Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."
How Emerging Technologies Can Help Us Fight the New Coronavirus
In nature, few species remain dominant for long. Any sizable population of similar individuals offers immense resources to whichever parasite can evade its defenses, spreading rapidly from one member to the next.
Humans are one such dominant species. That wasn't always the case: our hunter-gatherer ancestors lived in groups too small and poorly connected to spread pathogens like wildfire. Our collective vulnerability to pandemics began with the dawn of cities and trade networks thousands of years ago. Roman cities were always demographic sinks, but never more so than when a pandemic agent swept through. The Antonine plague, the plague of Cyprian, the plague of Justinian – each is thought to have killed over ten million people, an appallingly high fraction of the total population of the empire.
With the advent of sanitation, hygiene, and quarantines, we developed our first non-immunological defenses to curtail the spread of plagues. With antibiotics, we began to turn the weapons of microbes against our microbial foes. Most potent of all, we use vaccines to train our immune systems to fight pathogens before we are even exposed. Edward Jenner's original vaccine alone is estimated to have saved half a billion lives.
It's been over a century since we suffered from a swift and deadly pandemic. Even the deadly influenza of 1918 killed only a few percent of humanity – nothing so bad as any of the Roman plagues, let alone the Black Death of medieval times.
How much of our recent winning streak has been due to luck?
Much rides on that question, because the same factors that first made our ancestors vulnerable are now ubiquitous. Our cities are far larger than those of ancient times. They're inhabited by an ever-growing fraction of humanity, and are increasingly closely connected: we now routinely travel around the world in the course of a day. Despite urbanization, global population growth has increased contact with wild animals, creating more opportunities for zoonotic pathogens to jump species. Which will prove greater: our defenses or our vulnerabilities?
The tragic emergence of coronavirus 2019-nCoV in Wuhan may provide a test case. How devastating this virus will become is highly uncertain at the time of writing, but its rapid spread to many countries is deeply worrisome. That it seems to kill only the already infirm and spare the healthy is small comfort, and may counterintuitively assist its spread: it's easy to implement a quarantine when everyone infected becomes extremely ill, but if carriers may not exhibit symptoms, as has been reported, it becomes exceedingly difficult to limit transmission. The virus, a distant relative of the more lethal SARS virus that killed nearly 800 people in 2002–2003, has evolved to be transmitted between humans and spread to 18 countries in just six weeks.
Humanity's response has been faster than ever, if not fast enough. To its immense credit, China swiftly shared information, organized and built new treatment centers, closed schools, and established quarantines. The Coalition for Epidemic Preparedness Innovations, which was founded in 2017, quickly funded three different companies to develop three different varieties of vaccine: a standard protein vaccine, a DNA vaccine, and an RNA vaccine, with more planned. One of the agreements was signed after just four days of discussion, far faster than has ever been done before.
The new vaccine candidates will likely be ready for clinical trials by early summer, but even if successful, it will be additional months before the vaccine will be widely available. The delay may well be shorter than ever before thanks to advances in manufacturing and logistics, but a delay it will be.
The 1918 influenza virus killed more than half of its victims in the United Kingdom over just three months.
If we faced a truly nasty virus, something that spreads like pandemic influenza – let alone measles – yet with the higher fatality rate of, say, H7N9 avian influenza, the situation would be grim. We are profoundly unprepared, on many different levels.
So what would it take to provide us with a robust defense against pandemics?
Minimize the attack surface: 2019-nCoV jumped from an animal, most probably a bat, to humans. China has now banned the wildlife trade in response to the epidemic. Keeping it banned would be prudent, but won't be possible in all nations. Still, there are other methods of protection. Influenza viruses commonly jump from birds to pigs to humans; the new coronavirus may have similarly passed through a livestock animal. Thanks to CRISPR, we can now edit the genomes of most livestock. If we made them immune to known viruses, and introduced those engineered traits to domesticated animals everywhere, we would create a firewall in those intermediate hosts. We might even consider heritably immunizing the wild organisms most likely to serve as reservoirs of disease.
Rapid diagnostics: We need a reliable method of detection costing just pennies to be available worldwide inside of a week of discovering a new virus. This may eventually be possible thanks to a technology called SHERLOCK, which is based on a CRISPR system more commonly used for precision genome editing. Instead of using CRISPR to find and edit a particular genome sequence in a cell, SHERLOCK programs it to search for a desired target and initiate an easily detected chain reaction upon discovery. The technology is capable of fantastic sensitivity: with an attomolar (10⁻¹⁸) detection limit, it senses single molecules of a unique DNA or RNA fingerprint, and the components can be freeze-dried onto paper strips.
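To make "programmable detection" concrete, here is a toy sketch in Python: a guide sequence determines what the system hunts for, and any match triggers a signal. All sequences and names below are invented for illustration; the real assay's enzymatic, amplified readout is reduced here to a simple true/false flag.

```python
# Conceptual sketch of programmable sequence detection (SHERLOCK-style).
# The guide sequence and sample reads are invented examples, not real viral RNA.

def detect(guide: str, sample_reads: list[str]) -> bool:
    """Return True if any read contains the guide's target sequence.

    In the real assay, a CRISPR enzyme that finds its target then cleaves
    reporter molecules indiscriminately, producing an easily read signal;
    this sketch mimics that amplification step with a plain boolean.
    """
    return any(guide in read for read in sample_reads)

reads = [
    "AUGGCUAACCGGUUA",   # background RNA
    "CCGAUACGGUACUUA",   # background RNA
    "GGAUUCUAGCGGAUC",   # fragment containing the target fingerprint
]
viral_fingerprint = "UCUAGCGG"

print(detect(viral_fingerprint, reads))   # signal fires: True
print(detect("AAAAAAAA", reads))          # no signal: False
```

Reprogramming the system for a new virus amounts to swapping in a new guide sequence, which is why a diagnostic could, in principle, be designed within days of a pathogen's genome being published.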
Better preparations: China acted swiftly to curtail the spread of the Wuhan virus with traditional public health measures, but not everything went as smoothly as it might have. Most cities and nations have never conducted a pandemic preparedness drill. It would be best to give people a chance to practice keeping a city barely functional while minimizing potential exposure events before facing the real thing.
Faster vaccines: Three months to clinical trials is too long. We need a robust vaccine discovery and production system that can generate six candidates within a week of the pathogen's identification, manufacture a million doses the week after, and scale up to a hundred million inside of a month. That may be possible for novel DNA and RNA-based vaccines, and indeed anything that can be delivered using a standardized gene therapy vector. For example, instead of teaching each person's immune system to evolve protective antibodies by showing it pieces of the virus, we can program cells to directly produce known antibodies via gene therapy. Those antibodies could be discovered by sifting existing diverse libraries of hundreds of millions of candidates, computationally designed from scratch, evolved using synthetic laboratory ecosystems, or even harvested from the first patients to report symptoms. Such a vaccine might be discovered and produced fast enough at scale to halt almost any natural pandemic.
Robust production and delivery: Our defenses must not be vulnerable to the social and economic disruptions caused by a pandemic. Unfortunately, our economy selects for speed and efficiency at the expense of robustness. Just-in-time supply chains that wing their way around the world require every node to be intact. If workers aren't on the job producing a critical component, the whole chain breaks until a substitute can be found. A truly nasty pandemic would disrupt economies all over the world, so we will need to pay extra to preserve the capacity for independent vertically integrated production chains in multiple nations. Similarly, vaccines are only useful if people receive them, so delivery systems should be as robustly automated as possible.
None of these defenses will be cheap, but they'll be worth every penny. Our nations collectively spend trillions on defense against one another, but only billions to protect humanity from pandemic viruses known to have killed more people than any human weapon. That's foolish – especially since natural animal diseases that jump the species barrier aren't the only pandemic threats.
The complete genomes of all historical pandemic viruses ever to have been sequenced are freely available to anyone with an internet connection. True, these are all agents we've faced before, so we have a pre-existing armory of pharmaceuticals and vaccines and experience. There's no guarantee that they would become pandemics again; for example, a large fraction of humanity is almost certainly immune to the 1918 influenza virus due to exposure to the related 2009 pandemic, making it highly unlikely that the virus would take off if released.
Still, making the blueprints publicly available means that a large and growing number of people with the relevant technical skills can single-handedly make deadly biological agents that might be able to spread autonomously — at least if they can get their hands on the relevant DNA. At present, such people most certainly can, so long as they bother to check the publicly available list of which gene synthesis companies do the right thing and screen orders — and by implication, which ones don't.
One would hope that at least some of the companies that don't advertise that they screen are "honeypots" paid by intelligence agencies to catch would-be bioterrorists, but even if most of them are, it's still foolish to let individuals access that kind of destructive power. We will eventually make our society immune to naturally occurring pandemics, but that day has not yet come, and future pandemic viruses may not be natural. Hence, we should build a secure and adaptive system capable of screening all DNA synthesis for known and potential future pandemic agents... without disclosing what we think is a credible bioweapon.
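To illustrate what such screening could involve at its very simplest, here is a minimal, hypothetical sketch: each incoming order is compared against a list of sequences of concern by looking for shared subsequences. Everything below — the hazard list, the orders, the window length — is an invented placeholder; real screening systems rely on curated databases, fuzzy matching, and checks for hazardous sequences split across multiple orders.

```python
# Toy sketch of DNA synthesis order screening via shared k-mers.
# The hazard sequence and orders are invented placeholders, not real signatures.

K = 12  # window length; real systems screen against longer, curated signatures

def kmers(seq: str, k: int = K) -> set[str]:
    """All length-k substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flags_order(order: str, hazards: list[str]) -> bool:
    """Flag an order if it shares any k-mer with a sequence of concern."""
    order_kmers = kmers(order)
    return any(order_kmers & kmers(h) for h in hazards)

hazards = ["ATGCCGTTAGGCATTACGGA"]        # placeholder "sequence of concern"
benign_order = "GGGTTTAAACCCGGGTTTAAA"
suspect_order = "TTAGGCATTACGGACCTTGA"    # overlaps the hazard sequence

print(flags_order(benign_order, hazards))   # False
print(flags_order(suspect_order, hazards))  # True
```

Even this cartoon shows why the hard part is institutional rather than computational: whoever runs the check must hold a list of dangerous sequences, which is exactly the information one would not want to disclose.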
Whether or not it becomes a global pandemic, the emergence of the Wuhan coronavirus has underscored the need for coordinated action to prevent the spread of pandemic disease. Let's make sure that our response to this outbreak, reactive as it is, leaves us better prepared for future threats, because one day, reacting may not be enough.