Beyond Henrietta Lacks: How the Law Has Denied Every American Ownership Rights to Their Own Cells
The common perception is that Henrietta Lacks was a victim of poverty and racism when in 1951 doctors took samples of her cervical cancer without her knowledge or permission and turned them into the world's first immortalized cell line, which they called HeLa. The cell line became a workhorse of biomedical research and facilitated the creation of medical treatments and cures worth untold billions of dollars. Neither Lacks nor her family ever received a penny of those riches.
But racism and poverty are not to blame for Lacks' exploitation—the reality is even worse. In fact, all patients, then and now, regardless of social or economic status, have absolutely no right to cells that are taken from their bodies. Some have called this biological slavery.
How We Got Here
The case that established this legal precedent is Moore v. Regents of the University of California.
John Moore was diagnosed with hairy-cell leukemia in 1976, and his spleen was removed as part of standard treatment at the UCLA Medical Center. On initial examination his physician, David W. Golde, had discovered some unusual qualities in Moore's cells and made plans prior to the surgery to have the tissue saved for research rather than discarded as waste. That research began almost immediately.
"On both sides of the case, legal experts and cultural observers cautioned that ownership of a human body was the first step on the slippery slope to 'bioslavery.'"
Even after Moore moved to Seattle, Golde kept bringing him back to Los Angeles to collect additional samples of blood and tissue, saying it was part of his treatment. When Moore asked if the work could be done in Seattle, he was told no. Golde's charade went so far as claiming to have found a low-income subsidy that covered Moore's flights and put him up in a ritzy hotel to get him to return to Los Angeles, while actually paying for those expenses out of his own pocket.
Moore became suspicious when he was asked to sign new consent forms giving up all rights to his biological samples, and he hired an attorney to look into the matter. It turned out that Golde had been lying to his patient all along; he had been collecting samples unnecessary to Moore's treatment and had turned them into a cell line that he and UCLA had patented and from which they had already collected millions of dollars. The market for the cell lines was estimated at $3 billion by 1990.
Moore felt he had been taken advantage of and filed suit to claim a share of the money that had been made off of his body. "On both sides of the case, legal experts and cultural observers cautioned that ownership of a human body was the first step on the slippery slope to 'bioslavery,'" wrote Priscilla Wald, a professor at Duke University whose career has focused on issues of medicine and culture. "Moore could be viewed as asking to commodify his own body part or be seen as the victim of the theft of his most private and inalienable information."
The case bounced around different levels of the court system with conflicting verdicts for nearly six years until the California Supreme Court ruled on July 9, 1990 that Moore had no legal rights to cells and tissue once they were removed from his body.
The court made a utilitarian argument: the cells had no value until scientists manipulated them in the lab, and it would be too burdensome for researchers to track individual donations and subsequent cell lines to ensure that they had been ethically gathered and used. Doing so would impinge on the free sharing of materials between scientists, slow research, and harm the public good that arose from such research.
"In effect, what Moore is asking us to do is impose a tort duty on scientists to investigate the consensual pedigree of each human cell sample used in research," the majority wrote. In other words, researchers don't need to ask any questions about the materials they are using.
One member of the court did not see it that way. In his dissent, Stanley Mosk raised the specter of slavery that "arises wherever scientists or industrialists claim, as defendants have here, the right to appropriate and exploit a patient's tissue for their sole economic benefit—the right, in other words, to freely mine or harvest valuable physical properties of the patient's body. … This is particularly true when, as here, the parties are not in equal bargaining positions."
Mosk also cited the appeals court decision that the majority overturned: "If this science has become for profit, then we fail to see any justification for excluding the patient from participation in those profits."
But the majority bought the arguments that Golde, UCLA, and the nascent biotechnology industry in California had made in amici briefs filed throughout the legal proceedings. The road was now cleared for them to develop products worth billions without having to worry about or share with the persons who provided the raw materials upon which their research was based.
Critical Views
Biomedical research requires a continuous and ever-growing supply of human materials for the foundation of its ongoing work. If an increasing number of patients come to feel as John Moore did, that the system is ripping them off, then they become much less likely to consent to use of their materials in future research.
Some legal and ethical scholars say that donors should be able to limit the types of research allowed for their tissues, and that researchers should be monitored to ensure compliance with those agreements. For example, today it is commonplace for companies to certify that their clothing is not made with child labor, that their coffee is grown under fair-trade conditions, and that food labeled kosher is properly handled. Should we ask any less of our pharmaceuticals than that the donors whose cells made such products possible have been treated honestly and fairly, and share in the financial bounty that comes from such drugs?
Protection of individual rights is a hallmark of the American legal system, says Lisa Ikemoto, a law professor at the University of California Davis. "Putting the needs of a generalized public over the interests of a few often rests on devaluation of the humanity of the few," she writes in a reimagined version of the Moore decision that upholds Moore's property claims to his excised cells. The commentary is in a chapter of a forthcoming book in the Feminist Judgment series, where authors may only use legal precedent in effect at the time of the original decision.
"Why is the law willing to confer property rights upon some while denying the same rights to others?" asks Radhika Rao, a professor at the University of California, Hastings College of the Law. "The researchers who invest intellectual capital and the companies and universities that invest financial capital are permitted to reap profits from human research, so why not those who provide the human capital in the form of their own bodies?" It might be seen as a kind of sweat equity, where cash-strapped patients make a valuable in-kind contribution to the enterprise.
The Moore court also made a big deal about inhibiting the free exchange of samples between scientists. That concern has become much less relevant over the more than three decades since the decision was handed down. Ironically, this decision, along with other laws and regulations, has since strengthened the power of patents in biomedicine and in doing so has increased secrecy and limited sharing.
"Although the research community theoretically endorses the sharing of research, in reality sharing is commonly compromised by the aggressive pursuit and defense of patents and by the use of licensing fees that hinder collaboration and development," Robert D. Truog, a Harvard Medical School ethicist, and colleagues wrote in 2012 in the journal Science. "We believe that measures are required to ensure that patients not bear all of the altruistic burden of promoting medical research."
Additionally, the increased complexity of research and the need for exacting standardization of materials has given rise to an industry that supplies certified chemical reagents, cell lines, and whole animals bred to have specific genetic traits to meet research needs. This has been more efficient for research and has helped to ensure that results from one lab can be reproduced in another.
The Court's rationale of fostering collaboration and free exchange of materials between researchers also has been undercut by the changing structure of that research. Big pharma has shrunk the size of its own research labs and over the last decade has worked out cooperative agreements with major research universities where the companies contribute to the research budget and in return have first dibs on any findings (and sometimes a share of patent rights) that come out of those university labs. It has had a chilling effect on the exchange of materials between universities.
Perhaps tracking cell line donors and use restrictions on those donations might have been burdensome to researchers when Moore was being litigated. Some labs probably still kept their cell line records on 3x5 index cards, computers were primarily expensive room-size behemoths with limited capacity, the internet barely existed, and there was no cloud storage.
But that was the dawn of a new technological age, and standards have changed. Now cell lines are kept in state-of-the-art sub-zero storage units, tagged with the source, type of tissue, date gathered, and often other information. Adding a few more data fields and contacting the donor if and when appropriate does not seem likely to disrupt the research process in the way the court asserted.
Forging the Future
"U.S. universities are awarded almost 3,000 patents each year. They earn more than $2 billion each year from patent royalties. Sharing a modest portion of these profits is a novel method for creating a greater sense of fairness in research relationships that we think is worth exploring," wrote Mark Yarborough, a bioethicist at the University of California Davis Medical School, and colleagues. That was penned nearly a decade ago and those numbers have only grown.
The Michigan BioTrust for Health might serve as a useful model in tackling some of these issues. Dried blood spots have been collected from all newborns for half a century to be tested for certain genetic diseases, but controversy arose when the huge archive of dried spots was used for other research projects. As a result, the state created a nonprofit organization to in essence become a biobank and manage access to these spots only for specific purposes, and also to share any revenue that might arise from that research.
"If there can be no property in a whole living person, does it stand to reason that there can be no property in any part of a living person? If there were, can it be said that this could equate to some sort of 'biological slavery'?" Irish ethicist Asim A. Sheikh wrote several years ago. "Any amount of effort spent pondering the issue of 'ownership' in human biological materials with existing law leaves more questions than answers."
Perhaps the biggest question will arise when -- not if but when -- it becomes possible to clone a human being. Would a human clone be a legal person or the property of those who created it? Current legal precedent points to it being the latter.
Today, October 4, is the 70th anniversary of Henrietta Lacks' death from cancer. Over those decades her immortalized cells have helped make possible miraculous advances in medicine and have had a role in generating billions of dollars in profits. Surviving family members have spoken many times about seeking a share of those profits in the name of social justice; they intend to file lawsuits today. Such cases will succeed or fail on their own merits. But regardless of their specific outcomes, one can hope that they spark a larger public discussion of the role of patients in the biomedical research enterprise and lead to establishing a legal and financial claim for their contributions toward the next generation of biomedical research.
A Futuristic Suicide Machine Aims to End the Stigma of Assisted Dying
Bob Dent ended his life in Perth, Australia in 1996 after multiple surgeries to treat terminal prostate cancer had left him mostly bedridden and in agony.
Although Dent and his immediate family believed it was the right thing to do, the physician who assisted in his suicide – and had pushed for Australia's Northern Territory to legalize the practice the prior year – was deeply shaken.
"You climb in, you are going somewhere, you are leaving, and you are saying goodbye."
"When you get to know someone pretty well, and they set a date to have lunch with you and then have them die at 2 p.m., it's hard to forget," recalls Philip Nitschke.
Nitschke remembers being highly anxious that the device he designed – which released a fatal dose of Nembutal into a patient's bloodstream after they answered a series of questions on a laptop computer to confirm consent – wouldn't work. He was so alarmed by the prospect he recalls his shirt being soaked through with perspiration.
Known as the "Deliverance Machine," it consisted of the computer, attached by a sheet of wiring to an attache case containing an apparatus for delivering the Nembutal. Although gray, squat, and grimly businesslike, it was vastly more sophisticated than Jack Kevorkian's Thanatron – a tangle of tubes, hooks, and vials redolent of frontier dentistry.
The Deliverance Machine did work – for Dent and three other patients of Nitschke's. However, the experience remained far from reassuring. "It's not a very comfortable feeling, having a little suitcase and going around to people," he says. "I felt a little like an executioner."
The furor caused in part by Nitschke's work led to Australia's federal government banning physician-assisted suicide in 1997. Nitschke went on to co-found Exit International, one of the foremost assisted suicide advocacy groups, and relocated to the Netherlands.
Exit International recently introduced its most ambitious initiative to date. It's called the Sarco — essentially the Eames lounger of suicide machines. A prototype is currently on display at Venice Design, an adjunct to the Biennale.
Sheathed in a soothing blue coating, the Sarco prototype contains a window and pivots on a pedestal to allow viewing by friends and family. Its close quarters mean the opening of a small canister of liquid nitrogen would cause quick and painless asphyxiation. Patrons with second thoughts can press a button to cancel the process.
"The sleek and colorful death-pod looks like it is about to whisk you away to a new territory, or that it just landed after being launched from a Star Trek federation ship," says Charles C. Camosy, associate professor of theological and social ethics at Fordham University in New York City, in an email. Camosy, who has profound misgivings about such a device, was not being complimentary.
Nitschke's goal is to de-medicalize assisted suicide, as liquid nitrogen is readily available. But he suggests employing a futuristic design will also move debate on the issue forward.
"You pick the time...have the party and people come around. You climb in, you are going somewhere, you are leaving, and you are saying goodbye," he says. "It lends itself to a sense of occasion."
Assisted suicide is spreading in developed countries, but very slowly. It was legalized again in Australia just last June, but only in one of its six states. It is legal throughout Canada and in nine U.S. states.
Although the process is outlawed throughout much of Europe, nations permitting it have taken a liberal approach. Euthanasia — where death may be instigated by an assenting physician at a patient's request — is legal in both Belgium and the Netherlands. A terminal illness is not required; a severe disability or a condition causing profound misery may suffice.
Only Switzerland permits suicide with non-physician assistance regardless of an individual's medical condition. David Goodall, a 104-year-old Australian scientist, traveled 8,000 miles to Basel last year to die with Exit International's assistance. Goodall was in good health for his age and his mind was needle sharp; at a news conference the day before he passed, he thoughtfully answered questions and sang Beethoven's "Ode to Joy" from memory. He simply believed he had lived long enough and wanted to avoid a diminishing quality of life.
"Dying is not a medical process, and if you've decided to do this through rational [decision-making], you should not have to get permission from the medical profession," Nitschke says.
However, the deathstyle aspirations of the Sarco belie the fact that obtaining one will not be as simple as swiping a credit card. To create a legal firewall, anyone wishing to obtain a Sarco would have to purchase the plans, print the device themselves — it requires a high-end industrial printer to do so — then assemble it. As with the Deliverance Machine, the end user must be able to answer computer-generated questions designed by a Swiss psychiatrist to determine if they are making a rational decision. The process concludes with the transmission of a four-digit code to make the Sarco operational.
As with many cutting-edge designs, the path to a working prototype has been nettlesome. Plans for a printed window have been abandoned, and how end users will obtain one remains unclear. There have also been complications in creating an AI-based algorithm underlying the user questions to reliably determine whether the individual is of sound mind.
While Nitschke believes the Sarco will be deployed in Switzerland for the first time sometime next year, it will almost certainly be a subject of immense controversy. The Hastings Center, one of the world's major bioethics organizations and a leader on end-of-life decision-making, flatly refused to comment on the Sarco.
Camosy strongly condemns it. He notes since U.S. life expectancy is actually shortening — with despair-driven suicide playing a role — efforts must be marshaled to mitigate the trend. To him, the Sarco sends an utterly wrong message.
"It is diabolical that we would create machines to make it easier for people to kill themselves."
"Most people who request help in killing themselves don't do so because they are in intense, unbearable pain," he observes. "They do it because the culture in which they live has made them feel like a burden. This culture has told them they only have value if they are able to be 'productive' and 'contribute to society.'" He adds that the large majority of disability activists have been against assisted suicide and euthanasia because it is imperative to their movement that a stigma remain in place.
"It is diabolical that we would create machines to make it easier for people to kill themselves," Camosy concludes. "And anyone with even a single progressive bone in their body should resist this disturbingly morbid profit-making venture with everything they have."
Biologists are Growing Mini-Brains. What If They Become Conscious?
Few images are more uncanny than that of a brain without a body, fully sentient but afloat in sterile isolation. Such specters have spooked the speculatively minded since the seventeenth century, when René Descartes declared, "I think, therefore I am."
Since August 29, 2019, the prospect of a bodiless but functional brain has begun to seem far less fantastical.
In Meditations on First Philosophy (1641), the French penseur spins a chilling thought experiment: he imagines "having no hands or eyes, or flesh, or blood or senses," but being tricked by a demon into believing he has all these things, and a world to go with them. A disembodied brain itself becomes a demon in the classic young-adult novel A Wrinkle in Time (1962), using mind control to subjugate a planet called Camazotz. In the sci-fi blockbuster The Matrix (1999), most of humanity endures something like Descartes' nightmare—kept in womblike pods by their computer overlords, who fill the captives' brains with a synthesized reality while tapping their metabolic energy as a power source.
Since August 29, 2019, however, the prospect of a bodiless but functional brain has begun to seem far less fantastical. On that date, researchers at the University of California, San Diego published a study in the journal Cell Stem Cell, reporting the detection of brainwaves in cerebral organoids—pea-size "mini-brains" grown in the lab. Such organoids had emitted random electrical impulses in the past, but not these complex, synchronized oscillations. "There are some of my colleagues who say, 'No, these things will never be conscious,'" lead researcher Alysson Muotri, a Brazilian-born biologist, told The New York Times. "Now I'm not so sure."
Alysson Muotri has no qualms about his creations attaining consciousness as a side effect of advancing medical breakthroughs.
(Credit: ZELMAN STUDIOS)
Muotri's findings—and his avowed ambition to push them further—brought new urgency to simmering concerns over the implications of brain organoid research. "The closer we come to his goal," said Christof Koch, chief scientist and president of the Allen Institute for Brain Science in Seattle, "the more likely we will get a brain that is capable of sentience and feeling pain, agony, and distress." At the annual meeting of the Society for Neuroscience, researchers from the Green Neuroscience Laboratory in San Diego called for a partial moratorium, warning that the field was "perilously close to crossing this ethical Rubicon and may have already done so."
Yet experts are far from a consensus on whether brain organoids can become conscious, whether that development would necessarily be dreadful—or even how to tell if it has occurred.
So how worried do we need to be?
***
An organoid is a miniaturized, simplified version of an organ, cultured from various types of stem cells. Scientists first learned to make them in the 1980s, and have since turned out mini-hearts, lungs, kidneys, intestines, thyroids, and retinas, among other wonders. These creations can be used for everything from observation of basic biological processes to testing the effects of gene variants, pathogens, or medications. They enable researchers to run experiments that might be less accurate using animal models and unethical or impractical using actual humans. And because organoids are three-dimensional, they can yield insights into structural, developmental, and other matters that an ordinary cell culture could never provide.
In 2006, Japanese biologist Shinya Yamanaka developed a mix of proteins that turned skin cells into "pluripotent" stem cells, which could subsequently be transformed into neurons, muscle cells, or blood cells. (He later won a Nobel Prize for his efforts.) Developmental biologist Madeline Lancaster, then a postdoctoral researcher at the Institute of Molecular Biotechnology in Vienna, adapted that technique to grow the first brain organoids in 2013. Other researchers soon followed suit, cultivating specialized mini-brains to study disorders ranging from microcephaly to schizophrenia.
Muotri, now a youthful 45-year-old, was among the boldest of these pioneers. His team revealed the process by which Zika virus causes brain damage, and showed that sofosbuvir, a drug previously approved for hepatitis C, protected organoids from infection. He persuaded NASA to fly his organoids to the International Space Station, where they're being used to trace the impact of microgravity on neurodevelopment. He grew brain organoids using cells implanted with Neanderthal genes, and found that their wiring differed from organoids with modern DNA.
Like the latter experiment, Muotri's brainwave breakthrough emerged from a longtime obsession with neuroarchaeology. "I wanted to figure out how the human brain became unique," he told me in a phone interview. "Compared to other species, we are very social. So I looked for conditions where the social brain doesn't function well, and that led me to autism." He began investigating how gene variants associated with severe forms of the disorder affected neural networks in brain organoids.
Tinkering with chemical cocktails, Muotri and his colleagues were able to keep their organoids alive far longer than earlier versions, and to culture more diverse types of brain cells. One team member, Priscilla Negraes, devised a way to measure the mini-brains' electrical activity, by planting them in a tray lined with electrodes. By four months, the researchers found to their astonishment, normal organoids (but not those with an autism gene) emitted bursts of synchronized firing, separated by 20-second silences. At nine months, the organoids were producing up to 300,000 spikes per minute, across a range of frequencies.
He shared his vision for "brain farms," which would grow organoids en masse for drug development or tissue transplants.
When the team used an artificial intelligence system to compare these patterns with EEGs of gestating fetuses, the program found them to be nearly identical at each stage of development. As many scientists noted when the news broke, that didn't mean the organoids were conscious. (Their chaotic bursts bore little resemblance to the orderly rhythms of waking adult brains.) But to some observers, it suggested that they might be approaching the borderline.
***
Shortly after Muotri's team published their findings, I attended a conference at UCSD on the ethical questions they raised. The scientist, in jeans and a sky-blue shirt, spoke rhapsodically of brain organoids' potential to solve scientific mysteries and lead to new medical treatments. He showed video of a spider-like robot connected to an organoid through a computer interface. The machine responded to different brainwave patterns by walking or stopping—the first stage, Muotri hoped, in teaching organoids to communicate with the outside world. He described his plans to develop organoids with multiple brain regions, and to hook them up to retinal organoids so they could "see." He shared his vision for "brain farms," which would grow organoids en masse for drug development or tissue transplants.
Muotri holds a spider-like robot that can connect to an organoid through a computer interface.
(Credit: ROLAND LIZARONDO/KPBS)
Yet Muotri also stressed the current limitations of the technology. His organoids contain approximately 2 million neurons, compared to about 200 million in a rat's brain and 86 billion in an adult human's. They consist only of a cerebral cortex, and lack many of a real brain's cell types. Because researchers haven't yet found a way to give organoids blood vessels, moreover, nutrients can't penetrate their inner recesses—a severe constraint on their growth.
Another panelist strongly downplayed the imminence of any Rubicon. Patricia Churchland, an eminent philosopher of neuroscience, cited research suggesting that in mammals, networked connections between the cortex and the thalamus are a minimum requirement for consciousness. "It may be a blessing that you don't have the enabling conditions," she said, "because then you don't have the ethical issues."
Christof Koch, for his part, sounded much less apprehensive than the Times had made him seem. He noted that science lacks a definition of consciousness, beyond an organism's sense of its own existence—"the fact that it feels like something to be you or me." As to the competing notions of how the phenomenon arises, he explained, he prefers one known as Integrated Information Theory, developed by neuroscientist Giulio Tononi. IIT considers consciousness to be a quality intrinsic to systems that reach a certain level of complexity, integration, and causal power (the ability for present actions to determine future states). By that standard, Koch doubted that brain organoids had stepped over the threshold.
One way to tell, he said, might be to use the "zap and zip" test invented by Tononi and his colleague Marcello Massimini in the early 2000s to determine whether patients are conscious in the medical sense. This technique zaps the brain with a pulse of magnetic energy, using a coil held to the scalp. As loops of neural impulses cascade through the cerebral circuitry, an EEG records the firing patterns. In a waking brain, the feedback is highly complex—neither totally predictable nor totally random. In other states, such as sleep, coma, or anesthesia, the rhythms are simpler. Applying an algorithm commonly used for computer "zip" files, the researchers devised a scale that allowed them to correctly diagnose most patients who were minimally conscious or in a vegetative state.
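The "zip" half of the test rests on compressibility: a flat or rigidly periodic EEG response collapses into a few repeated patterns, while a waking brain's response contains many distinct ones. A toy sketch of that idea in Python, using a simple Lempel-Ziv phrase count on a binarized signal (this is only an illustration of the compression principle, not the actual perturbational complexity index, which also involves source modeling and normalization):

```python
def lz76_complexity(bits: str) -> int:
    """Count phrases in the Lempel-Ziv (1976) parsing of a binary string.

    Each phrase is the shortest block starting at position i that has not
    appeared earlier in the string; more phrases means the signal is less
    compressible, i.e. more "complex."
    """
    i, phrases, n = 0, 0, len(bits)
    while i < n:
        length = 1
        # Extend the candidate phrase while it still occurs somewhere
        # in the portion of the string already seen.
        while i + length <= n and bits[i:i + length] in bits[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# A flat (coma-like) response compresses to almost nothing, a periodic
# (sleep-like) one barely more, and an irregular (waking-like) one far less.
flat = "0" * 32
periodic = "01" * 16
irregular = "00011010010001010111001011010010"  # hypothetical binarized EEG
assert lz76_complexity(flat) < lz76_complexity(periodic) < lz76_complexity(irregular)
```

In the published method the compressed size of the brain's zapped response is normalized into a single index; here the bare phrase count is enough to show why regular rhythms score low and waking-style feedback scores high.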
If scientists could find a way to apply "zap and zip" to brain organoids, Koch ventured, it should be possible to rank their degree of awareness on a similar scale. And if it turned out that an organoid was conscious, he added, our ethical calculations should strive to minimize suffering, and avoid it where possible—just as we now do, or ought to, with animal subjects. (Muotri, I later learned, was already contemplating sensors that would signal when organoids were likely in distress.)
During the question-and-answer period, an audience member pressed Churchland about how her views might change if the "enabling conditions" for consciousness in brain organoids were to arise. "My feeling is, we'll answer that when we get there," she said. "That's an unsatisfying answer, but it's because I don't know. Maybe they're totally happy hanging out in a dish! Maybe that's the way to be."
***
Muotri himself admits to no qualms about his creations attaining consciousness, whether sooner or later. "I think we should try to replicate the model as close as possible to the human brain," he told me after the conference. "And if that involves having a human consciousness, we should go in that direction." Still, he said, if strong evidence of sentience does arise, "we should pause and discuss among ourselves what to do."
"The field is moving so rapidly, you blink your eyes and another advance has occurred."
Churchland figures it will be at least a decade before anyone reaches the crossroads. "That's partly because the thalamus has a very complex architecture," she said. It might be possible to mimic that architecture in the lab, she added, "but I tend to think it's not going to be a piece of cake."
If anything worries Churchland about brain organoids, in fact, it's that Muotri's visionary claims for their potential could set off a backlash among those who find them unacceptably spooky. "Alysson has done brilliant work, and he's wonderfully charismatic and charming," she said. "But then there's that guy back there who doesn't think it's exciting; he thinks you're the Devil incarnate. You're playing into the hands of people who are going to shut you down."
Koch, however, is more willing to indulge Muotri's dreams. "Ten years ago," he said, "nobody would have believed you can take a stem cell and get an entire retina out of it. It's absolutely frigging amazing. So who am I to say the same thing can't be true for the thalamus or the cortex? The field is moving so rapidly, you blink your eyes and another advance has occurred."
The point, he went on, is not to build a Cartesian thought experiment—or a Matrix-style dystopia—but to vanquish some of humankind's most terrifying foes. "You know, my dad passed away of Parkinson's. I had a twin daughter; she passed away of sudden death syndrome. One of my best friends killed herself; she was schizophrenic. We want to eliminate all these terrible things, and that requires experimentation. We just have to go into it with open eyes."