Society Needs Regulations to Prevent Research Abuses
[Editor's Note: Our Big Moral Question this month is, "Do government regulations help or hurt the goal of responsible and timely scientific innovation?"]
Government regulations help more than hurt the goal of responsible and timely scientific innovation. Opponents might argue that without regulations, researchers would be free to do whatever they want. But without ethics and regulations, scientists have performed horrific experiments. In Nazi concentration camps, for instance, doctors forced prisoners to stay in the snow to see how long it took for these inmates to freeze to death. These researchers also removed prisoners' limbs in an attempt to develop techniques for reattaching severed body parts, but all of these experiments failed.
Due to these atrocities, after the war the Nuremberg Tribunal established the first ethical guidelines for research, mandating that all study participants provide informed consent. Yet many researchers, including those at leading U.S. academic institutions and government agencies, failed to follow these dictates. The U.S. government, for instance, secretly infected Guatemalan men with syphilis in order to study the disease, and exposed soldiers, without their consent, to biological and chemical warfare agents. In the 1960s, researchers at New York's Willowbrook State School purposely fed intellectually disabled children stool extracts infected with hepatitis in order to study the disease. In 1966, in the New England Journal of Medicine, Henry Beecher, a Harvard anesthesiologist, described 22 cases of unethical research published in the nation's leading medical journals; most of these studies were conducted without informed consent, and some harmed participants without offering them any benefit.
Despite heightened awareness and enhanced guidelines, abuses continued. Until a 1972 journalistic exposé, the U.S. government continued to fund the now-notorious Tuskegee syphilis study, observing infected poor African-American men in rural Alabama and refusing to offer these men penicillin after it became available as an effective treatment for the disease.
In response, in 1974 Congress passed the National Research Act, establishing research ethics committees, or Institutional Review Boards (IRBs), to guide scientists, allowing them to innovate while protecting study participants' rights. IRBs now routinely detect ethical problems and prevent unethical studies from starting.
Still, even with these regulations, researchers have at times conducted unethical investigations. In 1999 at the Los Angeles Veterans Affairs Hospital, for example, a patient twice refused to participate in a study that would prolong his surgery. The researcher experimented on him anyway, using an electrical probe in the patient's heart to collect data.
Part of the problem and consequent need for regulations is that researchers have conflicts of interest and often do not recognize ethical challenges their research may pose.
Pharmaceutical company scandals involving Avandia, Neurontin, and other drugs raise added concerns. In marketing Vioxx, OxyContin, and tobacco, corporations have hidden findings that might undercut sales.
Regulations become increasingly critical as drug companies and the NIH conduct growing amounts of research in the developing world. In 1996, Pfizer conducted a study of bacterial meningitis in Nigeria in which 11 children died. The families sued. Pfizer produced a Nigerian IRB approval letter, but the letter turned out to have been forged; no Nigerian IRB had ever approved the study. Fourteen years later, WikiLeaks revealed that Pfizer had hired detectives to find evidence of corruption against the Nigerian attorney general in order to compel him to drop the lawsuit.
Researchers in not only industry but also academia have violated research participants' rights. Arizona State University scientists wanted to investigate the genes of a Native American group, the Havasupai, who were concerned about their high rates of diabetes. The investigators also wanted to study the group's rates of schizophrenia but feared that the tribe, given the stigma of that diagnosis, would oppose the study. Hence, these researchers decided to mislead the tribe, stating that the study was only about diabetes. The university's research ethics committee knew of the scientists' plan to study schizophrenia but approved the study anyway, including the consent form, which did not mention any psychiatric diagnoses. The Havasupai gave blood samples, but later learned that the researchers had published articles about the tribe's rates of schizophrenia and alcoholism and its genetic origins in Asia (the Havasupai believe they originated in the Grand Canyon, where they now live, and which they thus argued they owned). A 2010 legal settlement required that the university return the blood samples to the tribe, which then destroyed them. Had the researchers instead worked with the tribe more respectfully, they could have advanced science in many ways.
Such violations threaten to lower public trust in science, particularly among vulnerable groups that have historically been systemically mistreated, diminishing public and government support for research and for the National Institutes of Health, the National Science Foundation, and the Centers for Disease Control and Prevention, all of which conduct large numbers of studies.
In popular culture, myths of immoral science and technology loom large, from Frankenstein to Big Brother and Dr. Strangelove.
Admittedly, regulations involve inherent tradeoffs. Following certain rules can take time and effort, and certain regulations may limit research that might advance knowledge but would be grossly unethical. For instance, if our society's sole goal were to have scientists innovate as much as possible, we might let them stick needles into healthy people's brains to remove cells in return for cash payments that many vulnerable poor people might find hard to refuse. But such studies would clearly pose major ethical problems.
Research that has failed to follow ethics has in fact impeded innovation. In 1999, Jesse Gelsinger, a young man, died in a gene therapy experiment whose investigator was subsequently found to have major conflicts of interest; his death delayed innovation in gene therapy for years.
Without regulations, companies might market products that prove dangerous, leading to massive lawsuits that could also ultimately stifle further innovation within an industry.
The key question is not whether regulations help or hurt science alone, but whether they help or hurt science that is both "responsible and innovative."
We don't want "over-regulation." Rather, the right amount of regulation is needed, neither too much nor too little. Hence, policy makers in this area have developed regulations in fair and transparent ways and have also been working to reduce the burden on researchers, for instance by allowing a single IRB to review a multi-site study rather than requiring separate review by an IRB at each site, which can create obstacles.
In sum, society requires a proper balance of regulations to ensure ethical research, avoid abuses, and ultimately aid us all by promoting responsible innovation.
[Ed. Note: Check out the opposite viewpoint here, and follow LeapsMag on social media to share your perspective.]
On today’s episode of Making Sense of Science, I’m honored to be joined by Dr. Paul Song, a physician, oncologist, progressive activist and biotech chief medical officer. Through his company, NKGen Biotech, Dr. Song is leveraging the power of patients’ own immune systems by supercharging the body’s natural killer cells to make new treatments for Alzheimer’s and cancer.
Whereas other treatments for Alzheimer’s focus directly on reducing the build-up of proteins in the brain, such as amyloid and tau, in patients with mild cognitive impairment, NKGen is seeking to help patients whom much of the rest of the medical community has written off as hopeless cases: those with late-stage Alzheimer’s. And in small studies, NKGen has shown remarkable results, even improvement in the symptoms of people with these very progressed forms of Alzheimer’s, above and beyond slowing down the disease.
In the realm of cancer, Dr. Song is similarly setting his sights on another group of patients for whom treatment options are few and far between: people with solid tumors. Whereas gradual progress has been made in treating blood cancers, such as certain leukemias, in the past few decades, solid tumors have been even more of a challenge. But Dr. Song’s approach of using natural killer cells to treat solid tumors is promising. You may have heard of CAR-T, which uses genetic engineering to introduce cells into the body that have a particular function to help treat a disease. NKGen focuses on other means of enhancing the 40-plus receptors of natural killer cells, making them more receptive and sensitive to picking out cancer cells.
Paul Y. Song, MD is currently CEO and Vice Chairman of NKGen Biotech. Dr. Song’s most recent clinical role was Assistant Professor at the Samuel Oschin Cancer Center at Cedars-Sinai Medical Center.
Dr. Song served as the very first visiting fellow on healthcare policy in the California Department of Insurance in 2013. He is currently on the advisory board of the Pritzker School of Molecular Engineering at the University of Chicago and a board member of Mercy Corps, The Center for Health and Democracy, and Gideon’s Promise.
Dr. Song graduated with honors from the University of Chicago and received his MD from George Washington University. He completed his residency in radiation oncology at the University of Chicago, where he served as Chief Resident, and did a brachytherapy fellowship at the Institut Gustave Roussy in Villejuif, France. He was also awarded an ASTRO research fellowship in 1995 for his research on radiation-inducible gene therapy.
With Dr. Song’s leadership, NKGen Biotech’s work on natural killer cells represents cutting-edge science leading to key findings and important pieces of the puzzle for treating two of humanity’s most intractable diseases.
Show links
- Paul Song LinkedIn
- NKGen Biotech on Twitter - @NKGenBiotech
- NKGen Website: https://nkgenbiotech.com/
- NKGen appoints Paul Song
- Patient Story: https://pix11.com/news/local-news/long-island/promising-new-treatment-for-advanced-alzheimers-patients/
- FDA Clearance: https://nkgenbiotech.com/nkgen-biotech-receives-ind-clearance-from-fda-for-snk02-allogeneic-natural-killer-cell-therapy-for-solid-tumors/
- Q3 earnings data: https://www.nasdaq.com/press-release/nkgen-biotech-inc.-reports-third-quarter-2023-financial-results-and-business
Is there a robot nanny in your child's future?
From ROBOTS AND THE PEOPLE WHO LOVE THEM: Holding on to Our Humanity in an Age of Social Robots by Eve Herold. Copyright © 2024 by the author and reprinted by permission of St. Martin’s Publishing Group.
Could the use of robots take some of the workload off teachers, add engagement among students, and ultimately invigorate learning by taking it to a new level that is more consonant with the everyday experiences of young people? Do robots have the potential to become full-fledged educators and further push human teachers out of the profession? The preponderance of opinion on this subject is that, just as AI and medical technology are not going to eliminate doctors, robot teachers will never replace human teachers. Rather, they will change the job of teaching.
A 2017 McKinsey Global Institute study led by James Manyika (now a Google executive) suggested that skills like creativity, emotional intelligence, and communication will always be needed in the classroom and that robots aren’t likely to provide them at the same level that humans naturally do. But robot teachers do bring advantages, such as a depth of subject knowledge that human teachers can’t match, and they’re great for student engagement.
The teacher and robot can complement each other in new ways, with the teacher facilitating interactions between robots and students. So far, this is the case with teaching “assistants” being adopted now in China, Japan, the U.S., and Europe. In this scenario, the robot (usually the SoftBank child-size robot NAO) is a tool for teaching mainly science, technology, engineering, and math (the STEM subjects), but the teacher is very involved in planning, overseeing, and evaluating progress. The students get an entertaining and enriched learning experience, and some of the teaching load is taken off the teacher. At least, that’s what researchers have been able to observe so far.
To be sure, there are some powerful arguments for having robots in the classroom. A not-to-be-underestimated one is that robots “speak the language” of today’s children, who have been steeped in technology since birth. These children are adept at navigating a media-rich environment that is highly visual and interactive. They are plugged into the Internet 24-7. They consume music, games, and huge numbers of videos on a weekly basis. They expect to be dazzled because they are used to being dazzled by more and more spectacular displays of digital artistry. Education has to compete with social media and the entertainment vehicles of students’ everyday lives.
Another compelling argument for teaching robots is that they help prepare students for the technological realities they will encounter in the real world, where robots will be ubiquitous. From childhood on, students will be interacting and collaborating with robots in every sphere of their lives, from the jobs they do to dealing with retail robots and helper robots in the home. Including robots in the classroom is one way of making sure that children of all socioeconomic backgrounds will be better prepared for a highly automated age, when successfully using robots will be as essential as reading and writing. We’ve already crossed this threshold with computers and smartphones.
Students need multimedia entertainment with their teaching. This is something robots can provide through their ability to connect to the Internet and act as a centralized host for videos, music, and games. Children also need interaction, something robots can deliver up to a point but which humans can surpass. The education of a child is not just intended to make them technologically functional in a wired world; it’s to help them grow in intellectual, creative, social, and emotional ways. Viewed from this perspective, the question becomes just how far robots should go. Robots don’t just teach and engage children; they’re designed to tug at their heartstrings.
It’s no coincidence that many toy makers and manufacturers are designing cute robots that look and behave like real children or animals, says MIT’s Sherry Turkle. “When they make eye contact and gesture toward us, they predispose us to view them as thinking and caring,” she has written in The Washington Post. “They are designed to be cute, to provide a nurturing response” from the child. As mentioned previously, this nurturing experience is a powerful vehicle for drawing children in and promoting strong attachment. But should children really love their robots?
The problem, once again, is that a child can be lulled into thinking that she’s in an actual relationship, when a robot can’t possibly love her back. If adults have these vulnerabilities, what might such asymmetrical relationships do to the emotional development of a small child? Turkle notes that while we tend to ascribe a mind and emotions to a socially interactive robot, “simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.”
Always a consideration is the fact that in the first few years of life, a child’s brain is undergoing rapid growth and development that will form the foundation of their lifelong emotional health. These formative experiences are literally shaping the child’s brain, their expectations, and their view of the world and their place in it. In Alone Together, Turkle asks: What are we saying to children about their importance to us when we’re willing to outsource their care to a robot? A child might be superficially entertained by the robot while his self-esteem is systematically undermined.
Still, in the case of robot nannies in the home, is active, playful engagement with a robot for a few hours a day any more harmful than several hours in front of a TV or with an iPad? Some, like Xiong, regard interacting with a robot as better than mere passive entertainment. iPal’s manufacturers say that their robot can’t replace parents or teachers and is best used by three- to eight-year-olds after school, while they wait for their parents to get off work. But as robots become ever-more sophisticated, they’re expected to perform more of the tasks of day-to-day care and to be much more emotionally advanced. There is no question children will form deep attachments to some of them. And research has emerged showing that there are clear downsides to child-robot relationships.
Some studies, performed by Turkle and her MIT colleague Cynthia Breazeal, have revealed a darker side to the child-robot bond. Turkle has reported extensively on these studies in The Washington Post and in her book Alone Together. Most children love robots, but some act out their inner bully on the hapless machines, hitting and kicking them and otherwise trying to hurt them. The trouble is that the robot can’t fight back, teaching children that they can bully and abuse without consequences. As in any other robot relationship, such harmful behavior could carry over into the child’s human relationships.
And, ironically, it turns out that communicative machines don’t actually teach kids good communication skills. It’s well known that parent-child communication in the first three years of life sets the stage for a very young child’s intellectual and academic success. Verbal back-and-forth with parents and caregivers is like fuel for a child’s growing brain. One article, published in JAMA Pediatrics in 2015, examined several types of play and their effect on children’s communication skills and showed that babies who played with electronic toys, like the popular robot dog Aibo, showed a decrease in both the quantity and quality of their language skills.
Anna V. Sosa of the Child Speech and Language Lab at Northern Arizona University studied twenty-six ten- to sixteen-month-old infants to compare the growth of their language skills after they played with three types of toys: electronic toys like a baby laptop and a talking farm; traditional toys like wooden puzzles and building blocks; and books read aloud by their parents. The play that produced the most growth in verbal ability was having books read to them by a caregiver, followed by play with traditional toys. Language gains after playing with electronic toys came dead last. This form of play involved the least use of adult words, the least conversational turn-taking, and the fewest verbalizations from the children. While the study sample was small, it’s not hard to extrapolate that no electronic toy, or even a more capable robot, could supply the intimate responsiveness of a parent reading stories to a child, explaining new words, answering the child’s questions, and modeling the kind of back-and-forth interaction that promotes empathy and reciprocity in relationships.
***
Most experts acknowledge that robots can be valuable educational tools. But they can’t make a child feel truly loved, validated, and valued. That’s the job of parents, and when parents abdicate this responsibility, it’s not only the child who misses out on one of life’s most profound experiences.
We really don’t know how the tech-savvy children of today will ultimately process their attachments to robots, or whether they will be excessively predisposed to choosing robot companionship over that of humans. It’s possible that their techno-literacy will draw for them a bold line between real life and a quasi-imaginary history with a robot. But it will be decades before we see long-term studies with sufficient data to help scientists, and the rest of us, parse out the effects of a lifetime spent with robots.
This is an excerpt from ROBOTS AND THE PEOPLE WHO LOVE THEM: Holding on to Our Humanity in an Age of Social Robots by Eve Herold. The book will be published on January 9, 2024.