Society Needs Regulations to Prevent Research Abuses
[Editor's Note: Our Big Moral Question this month is, "Do government regulations help or hurt the goal of responsible and timely scientific innovation?"]
Government regulations help more than hurt the goal of responsible and timely scientific innovation. Opponents might argue that without regulations, researchers would be free to do whatever they want. But without ethics and regulations, scientists have performed horrific experiments. In Nazi concentration camps, for instance, doctors forced prisoners to stay in the snow to see how long it took for these inmates to freeze to death. These researchers also removed prisoners' limbs in attempts to develop ways to reattach body parts, but all of these experiments failed.
Due to these atrocities, after the war the Nuremberg Tribunal established the first ethical guidelines for research, mandating that all study participants provide informed consent. Yet many researchers, including those at leading U.S. academic institutions and government agencies, failed to follow these dictates. The U.S. government, for instance, secretly infected Guatemalan men with syphilis in order to study the disease and exposed soldiers, without their consent, to biological and chemical warfare agents. In the 1960s, researchers at New York's Willowbrook State School deliberately fed hepatitis-infected stool extracts to intellectually disabled children in order to study the disease. In 1966, in the New England Journal of Medicine, Henry Beecher, a Harvard anesthesiologist, described 22 examples of unethical research published in the nation's leading medical journals; most of these studies were conducted without informed consent, and some harmed participants without offering them any benefit.
Despite heightened awareness and enhanced guidelines, abuses continued. Until a 1972 journalistic exposé, the U.S. government continued to fund the now-notorious Tuskegee syphilis study of infected poor African-American men in rural Alabama, refusing to offer these men penicillin when it became available as an effective treatment for the disease.
In response, in 1974 Congress passed the National Research Act, establishing research ethics committees, known as Institutional Review Boards (IRBs), to guide scientists, allowing them to innovate while protecting study participants' rights. IRBs now routinely detect unethical study designs and prevent such studies from starting.
Still, even with these regulations, researchers have at times conducted unethical investigations. In 1999 at the Los Angeles Veterans Affairs Hospital, for example, a patient twice refused to participate in a study that would prolong his surgery. The researcher nonetheless experimented on him, using an electrical probe in the patient's heart to collect data.
Part of the problem and consequent need for regulations is that researchers have conflicts of interest and often do not recognize ethical challenges their research may pose.
Pharmaceutical company scandals involving Avandia, Neurontin, and other drugs raise added concerns. In marketing Vioxx, OxyContin, and tobacco, corporations have hidden findings that might have undercut sales.
Regulations become increasingly critical as drug companies and the NIH conduct growing amounts of research in the developing world. In 1996, Pfizer conducted a study of a bacterial meningitis treatment in Nigeria in which 11 children died. The families sued. Pfizer produced a Nigerian IRB approval letter, but the letter turned out to have been forged; no Nigerian IRB had ever approved the study. Fourteen years later, WikiLeaks revealed that Pfizer had hired detectives to find evidence of corruption against the Nigerian Attorney General in order to compel him to drop the lawsuit.
Researchers not only in industry but also in academia have violated research participants' rights. Arizona State University scientists wanted to investigate the genes of a Native American group, the Havasupai, who were concerned about their high rates of diabetes. The investigators also wanted to study the group's rates of schizophrenia, but feared that the tribe would oppose the study, given the stigma. Hence, these researchers decided to mislead the tribe, stating that the study was only about diabetes. The university's research ethics committee knew of the scientists' plan to study schizophrenia but approved the study anyway, including the consent form, which did not mention any psychiatric diagnoses. The Havasupai gave blood samples, but later learned that the researchers had published articles about the tribe's rates of schizophrenia and alcoholism and about its genetic origins in Asia (while the Havasupai believed they originated in the Grand Canyon, where they now live and which they thus argued they owned). A 2010 legal settlement required the university to return the blood samples to the tribe, which then destroyed them. Had the researchers instead worked with the tribe more respectfully, they could have advanced science in many ways.
Such violations threaten to lower public trust in science, particularly among vulnerable groups that have historically been systemically mistreated, diminishing public and government support for research and for the National Institutes of Health, National Science Foundation and Centers for Disease Control, all of which conduct large numbers of studies.
In popular culture, myths of immoral science and technology loom large, from Frankenstein to Big Brother and Dr. Strangelove.
Admittedly, regulations involve inherent tradeoffs. Following certain rules can take time and effort. Certain regulations may in fact limit research that might potentially advance knowledge but would be grossly unethical. For instance, if our society's sole goal were to have scientists innovate as much as possible, we might let them stick needles into healthy people's brains to remove cells in exchange for cash payments that many vulnerable poor people might find hard to refuse. But such studies would clearly pose major ethical problems.
Research that has failed to follow ethics has in fact impeded innovation. In 1999, a young man named Jesse Gelsinger died in a gene therapy experiment whose investigator was subsequently found to have major conflicts of interest; his death delayed innovation in gene therapy research for years.
Without regulations, companies might market products that prove dangerous, leading to massive lawsuits that could also ultimately stifle further innovation within an industry.
The key question is not whether regulations help or hurt science alone, but whether they help or hurt science that is both "responsible and innovative."
We don't want "over-regulation." Rather, the right amount of regulation is needed, neither too much nor too little. Hence, policy makers in this area have developed regulations in fair and transparent ways and have been working to reduce the burden on researchers, for instance by allowing a single IRB to review multi-site studies rather than requiring multiple IRBs to do so, which can create obstacles.
In sum, society requires a proper balance of regulations to ensure ethical research, avoid abuses, and ultimately aid us all by promoting responsible innovation.
Since the early 2000s, AI systems have eliminated more than 1.7 million jobs, and that number will only increase as AI improves. Some research estimates that by 2025, AI will eliminate more than 85 million jobs.
But for all the talk about job security, AI is also proving to be a powerful tool in healthcare—specifically, cancer detection. One recently published study has shown that, remarkably, artificial intelligence was able to detect 20 percent more cancers in imaging scans than radiologists alone.
Published in The Lancet Oncology, the study analyzed the scans of 80,000 Swedish women with a moderate hereditary risk of breast cancer who had undergone a mammogram between April 2021 and July 2022. Half of these scans were read by AI and then by a radiologist who double-checked the findings; the other half were read by two radiologists without the help of AI. (Currently, the standard of care across Europe is to have two radiologists analyze a scan before diagnosing a patient with breast cancer.)
The study showed that the AI-assisted group detected cancer in 6 out of every 1,000 scans, while the radiologists alone detected cancer in 5 per 1,000 scans. In other words, AI found 20 percent more cancers than the highly trained radiologists.
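That 20 percent figure is a relative increase, which the short calculation below makes explicit (a minimal sketch; the two per-1,000 detection rates come from the study, while the variable names are ours):

```python
# Detection rates reported in the study, per 1,000 screening mammograms.
ai_assisted_rate = 6 / 1000   # AI reading plus a radiologist
standard_rate = 5 / 1000      # two radiologists, no AI

# Relative increase: the extra cancers found by the AI-assisted arm,
# expressed as a fraction of the standard arm's detection rate.
relative_increase = (ai_assisted_rate - standard_rate) / standard_rate
print(f"{relative_increase:.0%}")  # -> 20%
```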
[Image caption: Scientists have been using medical images to train artificial intelligence to detect cancers earlier and with more accuracy. Here, MIT's AI system, MIRAI, looks for patterns in a patient's mammograms to detect breast cancer earlier than ever before. Credit: news.mit.edu]
But even though the AI was better able to pinpoint cancer on an image, that doesn’t mean radiologists will soon be out of a job. Dr. Laura Heacock, a breast radiologist at NYU, said in an interview with CNN that radiologists do much more than simply screen mammograms, and that even well-trained technology can make errors. “These tools work best when paired with highly-trained radiologists who make the final call on your mammogram. Think of it as a tool like a stethoscope for a cardiologist.”
AI is still an emerging technology, but more and more doctors are using these tools to detect different cancers. For example, researchers at MIT have developed a program called MIRAI, which looks at patterns across a series of a patient's mammograms and uses an algorithm to model the patient's risk of developing breast cancer over time. The program was "trained" with more than 200,000 breast imaging scans from Massachusetts General Hospital and has been tested on over 100,000 women in hospitals around the world. According to MIT, MIRAI "has been shown to be more accurate in predicting the risk for developing breast cancer in the short term (over a 3-year period) compared to traditional tools." It has also been able to detect breast cancer up to five years before a patient receives a diagnosis.
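The article does not describe MIRAI's internals, but the general idea of modeling risk "over time" can be sketched with a toy calculation: a model assigns a patient an estimated risk for each coming year, and those per-year estimates combine into a cumulative multi-year risk. The sketch below only illustrates that idea; it is not MIRAI's actual algorithm, and every number in it is invented.

```python
# Toy illustration of turning per-year risk estimates into a multi-year risk.
# This is NOT MIRAI's actual method; the yearly values below are made up.

def cumulative_risk(yearly_risks):
    """Probability of a diagnosis at some point over the given years.

    yearly_risks: estimated probability of a first diagnosis in year 1, 2, 3, ...
    """
    prob_no_diagnosis = 1.0
    for r in yearly_risks:
        prob_no_diagnosis *= (1.0 - r)  # stay diagnosis-free this year
    return 1.0 - prob_no_diagnosis

# Hypothetical per-year risks a model might derive from one patient's mammograms.
yearly_risks = [0.010, 0.012, 0.015]  # years 1-3
print(f"Estimated 3-year risk: {cumulative_risk(yearly_risks):.1%}")  # ~3.7%
```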
The challenge for cancer-detecting AI tools now is not just accuracy. These tools are also being pushed to perform consistently well across different ages, races, and breast density profiles, particularly given the different levels of risk that different women face. For example, Black women are 42 percent more likely than white women to die from breast cancer, despite having nearly the same rates of breast cancer. Recently, an FDA-approved AI device for screening breast cancer came under fire for wrongly detecting cancer in Black patients significantly more often than in white patients.
As AI technology improves, radiologists will be able to accurately scan a more diverse set of patients at a larger volume than ever before, potentially saving more lives than ever.
Here's how one doctor overcame extraordinary odds to help create the birth control pill
Dr. Percy Julian had so many personal and professional obstacles throughout his life, it’s amazing he was able to accomplish anything at all. But this hidden figure not only overcame these incredible obstacles, he also laid the foundation for the creation of the birth control pill.
Julian’s first obstacle was growing up in the Jim Crow-era South in the early part of the twentieth century, where racial segregation kept many African-Americans out of schools, libraries, parks, restaurants, and more. Despite limited opportunities and education, Julian was accepted to DePauw University in Indiana, where he majored in chemistry. But in college, Julian encountered another obstacle: he wasn’t allowed to stay in DePauw’s student housing because of segregation. Julian found lodging in an off-campus boarding house that refused to serve him meals. To pay for his room and board, Julian waited tables and fired furnaces while he studied chemistry full-time. Incredibly, he graduated in 1920 as valedictorian of his class.
After graduation, Julian landed a fellowship at Harvard University to study chemistry—but here, Julian ran into yet another obstacle. Harvard thought that white students would resent being taught by Julian, an African-American man, so they withdrew his teaching assistantship. Julian instead decided to complete his PhD at the University of Vienna in Austria. When he did, he became one of the first African Americans to ever receive a PhD in chemistry.
Julian received offers for professorships, fellowships, and jobs throughout the 1930s, due to his impressive qualifications—but these offers were almost always revoked when schools or potential employers found out Julian was black. In one instance, Julian was offered a job at the Institute of Paper Chemistry in Appleton, Wisconsin—but Appleton, like many cities in the United States at the time, was known as a “sundown town,” which meant that black people weren’t allowed to be there after dark. As a result, Julian lost the job.
During this time, Julian became an expert at synthesis, which is the process of turning one substance into another through a series of planned chemical reactions. Julian synthesized a plant compound called physostigmine, which would later become a treatment for an eye disease called glaucoma.
In 1936, Julian was finally able to land—and keep—a job at Glidden, and there he found a way to extract soybean protein. This was used to produce a fire-retardant foam used in fire extinguishers to smother oil and gasoline fires aboard ships and aircraft carriers, and it ended up saving the lives of thousands of soldiers during World War II.
At Glidden, Julian found a way to synthesize human sex hormones such as progesterone, estrogen, and testosterone from plants. This was a hugely profitable discovery for his company—but it also meant that clinicians now had huge quantities of these hormones, making hormone therapy cheaper and easier to come by. His work also laid the foundation for the creation of hormonal birth control: Without the ability to synthesize these hormones, hormonal birth control would not exist.
Julian left Glidden in the 1950s and formed his own company, called Julian Laboratories, outside of Chicago, where he manufactured steroids and conducted his own research. The company turned a profit within a year, but even so, Julian’s obstacles weren’t over. In 1950 and 1951, Julian’s home was firebombed and attacked with dynamite, with his family inside. Julian often had to sit out on the front porch of his home with a shotgun to protect his family from violence.
But despite years of racism and violence, Julian’s story has a happy ending. Julian’s family was eventually welcomed into the neighborhood and protected from future attacks (Julian’s daughter lives there to this day). Julian then became one of the country’s first black millionaires when he sold his company in the 1960s.
When Julian passed away at the age of 76, he had more than 130 chemical patents to his name and left behind a body of work that benefits people to this day.