Don’t fear AI, fear power-hungry humans
Story by Big Think
We live in strange times, when the technology we depend on the most is also that which we fear the most. We celebrate cutting-edge achievements even as we recoil in fear at how they could be used to hurt us. From genetic engineering and AI to nuclear technology and nanobots, the list of awe-inspiring, fast-developing technologies is long.
However, this fear of the machine is not as new as it may seem. Technology has a longstanding alliance with power and the state. The dark side of human history can be told as a series of wars whose victors are often those with the most advanced technology. (There are exceptions, of course.) Science, and its technological offspring, follows the money.
This fear of the machine seems to be misplaced. The machine has no intent: only its maker does. The fear of the machine is, in essence, the fear we have of each other — of what we are capable of doing to one another.
How AI changes things
Sure, you would reply, but AI changes everything. With artificial intelligence, the machine itself will develop some sort of autonomy, however ill-defined. It will have a will of its own. And this will, if it reflects anything that seems human, will not be benevolent. With AI, the claim goes, the machine will somehow know what it must do to get rid of us. It will threaten us as a species.
Well, this fear is also not new. Mary Shelley wrote Frankenstein in 1818 to warn us of what science could do if it served the wrong calling. In the case of her novel, Dr. Frankenstein’s call was to win the battle against death — to reverse the course of nature. Granted, any cure of an illness interferes with the normal workings of nature, yet we are justly proud of having developed cures for our ailments, prolonging life and increasing its quality. Science can achieve nothing more noble. What messes things up is when the pursuit of good is confused with that of power. In this distorted scale, the more powerful the better. The ultimate goal is to be as powerful as gods — masters of time, of life and death.
Back to AI, there is no doubt the technology will help us tremendously. We will have better medical diagnostics, better traffic control, better bridge designs, and better pedagogical animations to teach in the classroom and virtually. But we will also have better winnings in the stock market, better war strategies, and better soldiers and remote ways of killing. This grants real power to those who control the best technologies. It increases the take of the winners of wars — those fought with weapons, and those fought with money.
A story as old as civilization
The question is how to move forward. This is where things get interesting and complicated. We hear over and over again that there is an urgent need for safeguards, for controls and legislation to deal with the AI revolution. Great. But if these machines are essentially functioning in a semi-black box of self-teaching neural nets, how exactly are we going to make safeguards that are sure to remain effective? How are we to ensure that the AI, with its unlimited ability to gather data, will not come up with new ways to bypass our safeguards, the same way that people break into safes?
The second question is that of global control. As I wrote before, overseeing new technology is complex. Should countries create a World Mind Organization that controls the technologies that develop AI? If so, how do we organize this planet-wide governing board? Who should be a part of its governing structure? What mechanisms will ensure that governments and private companies do not secretly break the rules, especially when to do so would put the most advanced weapons in the hands of the rule breakers? They will need those, after all, if other actors break the rules as well.
As before, the countries with the best scientists and engineers will have a great advantage. A new international détente will emerge in the mold of the nuclear détente of the Cold War. Again, we will fear destructive technology falling into the wrong hands. This can happen easily. AI machines will not need to be built at an industrial scale, as nuclear capabilities were, and AI-based terrorism will be a force to reckon with.
So here we are, afraid of our own technology all over again.
What is missing from this picture? It still illustrates the same destructive pattern of greed and power that has defined so much of our civilization. The failure it reveals is moral, and only we can change it. We define civilization by the accumulation of wealth, and this worldview is killing us. The project of civilization we invented has become self-cannibalizing. As long as we do not see this, and we keep following the same route we have trodden for the past 10,000 years, it will be very hard to legislate the technology to come and to ensure such legislation is followed. Unless, of course, AI helps us become better humans, perhaps by teaching us how foolish we have been for so long. This sounds far-fetched, given who this AI will be serving. But one can always hope.
This article originally appeared on Big Think, home of the brightest minds and biggest ideas of all time.
For most of history, artificial intelligence (AI) has been relegated almost entirely to the realm of science fiction. Then, in late 2022, it burst into reality — seemingly out of nowhere — with the popular launch of ChatGPT, the generative AI chatbot that solves tricky problems, designs rockets, has deep conversations with users, and even aces the Bar exam.
But the truth is that before ChatGPT nabbed the public’s attention, AI was already here, and it was doing more important things than writing essays for lazy college students. Case in point: It was key to saving the lives of tens of millions of people.
AI-designed mRNA vaccines
As Dave Johnson, chief data and AI officer at Moderna, told MIT Technology Review’s In Machines We Trust podcast in 2022, AI was integral to creating the company’s highly effective mRNA vaccine against COVID. Moderna and Pfizer/BioNTech’s mRNA vaccines collectively saved between 15 and 20 million lives, according to one estimate from 2022.
Johnson described how AI was hard at work at Moderna, well before COVID arose to infect billions. The pharmaceutical company focuses on finding mRNA therapies to fight off infectious disease, treat cancer, or thwart genetic illness, among other medical applications. Messenger RNA molecules are essentially molecular instructions for cells that tell them how to create specific proteins, which do everything from fighting infection, to catalyzing reactions, to relaying cellular messages.
Johnson and his team put AI and automated robots to work making lots of different mRNAs for scientists to experiment with. Moderna quickly went from making about 30 per month to more than one thousand. They then created AI algorithms to optimize mRNA to maximize protein production in the body — more bang for the biological buck.
For Johnson and his team’s next trick, they used AI to automate science itself. Once Moderna’s scientists have an mRNA to experiment with, they do pre-clinical tests in the lab. They then pore over reams of data to see which mRNAs could progress to the next stage: animal trials. This process is long, repetitive, and soul-sucking — ill-suited to a creative scientist but great for a mindless AI algorithm. With scientists’ input, models were made to automate this tedious process.
All these AI systems were put in place over the past decade. Then COVID showed up. So when the genome sequence of the coronavirus was made public in January 2020, Moderna was off to the races, pumping out and testing mRNAs that would tell cells how to manufacture the coronavirus’s spike protein so that the body’s immune system would recognize and destroy it. Within 42 days, the company had an mRNA vaccine ready to be tested in humans. It eventually went into hundreds of millions of arms.
Biotech harnesses the power of AI
Moderna is now turning its attention to other ailments that could be solved with mRNA, and the company is continuing to lean on AI. Scientists are still coming to Johnson with automation requests, which he happily obliges.
“We don’t think about AI in the context of replacing humans,” he told the Me, Myself, and AI podcast. “We always think about it in terms of this human-machine collaboration, because they’re good at different things. Humans are really good at creativity and flexibility and insight, whereas machines are really good at precision and giving the exact same result every single time and doing it at scale and speed.”
Moderna, which was founded as a “digital biotech,” is undoubtedly the poster child of AI use in mRNA vaccines. Moderna recently signed a deal with IBM to use the company’s quantum computers as well as its proprietary generative AI, MoLFormer.
Moderna’s success is encouraging other companies to follow its example. In January, BioNTech, which partnered with Pfizer to make the other highly effective mRNA vaccine against COVID, acquired the company InstaDeep for $440 million to implement its machine learning AI across its mRNA medicine platform. And in May, Chinese technology giant Baidu announced an AI tool that designs super-optimized mRNA sequences in minutes. A nearly countless number of mRNA molecules can code for the same protein, but some are more stable and result in the production of more proteins. Baidu’s AI, called “LinearDesign,” finds these mRNAs. The company licensed the tool to French pharmaceutical company Sanofi.
Writing in the journal Accounts of Chemical Research in late 2021, Sebastian M. Castillo-Hair and Georg Seelig, computer engineers who focus on synthetic biology at the University of Washington, forecast that AI machine learning models will further accelerate the biotechnology research process, putting mRNA medicine into overdrive to the benefit of all.
Opioid prescription policies may hurt those in chronic pain
Tinu Abayomi-Paul works as a writer and activist, with one unwanted job on the side: trying to fill her opioid prescription. She says that some pharmacists laugh and tell her that no one needs the amount of pain medication she is seeking. Another pharmacist near her home in Venus, Tex., refused to fill more than seven days of a 30-day prescription.
To get a new prescription—partially filled opioid prescriptions can’t be dispensed later—Abayomi-Paul needed to return to her doctor’s office. But without her medication, she was having too much pain to travel there, much less return to the pharmacy. She rationed out the pills over several weeks, an agonizing compromise that left her unable to work, interact with her children, sleep restfully, or leave the house. “Don’t I deserve to do more than survive?” she says.
Abayomi-Paul’s pain results from a degenerative spine disorder, chronic lymphocytic leukemia, and more than a dozen other diagnoses and disabilities. She is part of a growing group of people with chronic pain who have been negatively impacted by the fallout from efforts to prevent opioid overdose deaths.
Guidelines for dispensing these pills are complicated because many opioids, like codeine, oxycodone, and morphine, are prescribed legally for pain, while others, such as heroin, are used illegally. Yet deaths from opioids have increased rapidly since 1999 and become a national emergency. The CDC identified three surges in opioid deaths: an increase in opioid prescriptions in the ’90s, a surge of heroin use around 2010, and an influx of fentanyl and other powerful synthetic opioids beginning in 2013.
As overdose deaths grew, so did public calls to address them, prompting the CDC to change its prescription guidelines in 2016. The new guidelines suggested limiting medication for acute pain to a seven-day supply, capping daily doses of morphine, and other restrictions. Some statistics suggest that these policies have worked; from 2016 to 2019, prescriptions for opioids fell 44 percent. Physicians also started progressively lowering opioid doses for patients, a practice called tapering. A study tracking nearly 100,000 Medicare beneficiaries on opioids found that about 13 percent of patients were tapering in 2012, and that number increased to about 23 percent by 2017.
But some physicians may be too aggressive with this tapering strategy. About one in four people had doses reduced by more than 10 percent per week, a rate faster than the CDC recommends. The approach left people like Abayomi-Paul without the medication they needed. Every year, Abayomi-Paul says, her prescriptions are harder to fill. David Brushwood, a pharmacy professor who specializes in policy and outcomes at the University of Florida in Gainesville, says opioid dosing isn’t one-size-fits-all. “Patients need to be taken care of individually, not based on what some government agency says they need,” he says.
‘This is not survivable’
Health policy and disability rights attorney Erin Gilmer advocated for people with pain, using her own experience with chronic pain and a host of medical conditions as a guidepost. She launched an advocacy website, Healthcare as a Human Right, and shared her struggles on Twitter: “This pain is more than anything I've endured before and I've already been through too much. Yet because it's not simply identified no one believes it's as bad as it is. This is not survivable.”
When her pain dramatically worsened midway through 2021, Gilmer’s posts grew ominous: “I keep thinking it can't possibly get worse but somehow every day is worse than the last.”
The CDC revised its guidelines in 2022 after criticisms that people with chronic pain were being undertreated, enduring dangerous withdrawal symptoms, and suffering psychological distress. (Long-term opioid use can cause physical dependency, an adaptive reaction that is different from the compulsive misuse associated with a substance use disorder.) The revision came too late for Gilmer. On July 7, 2021, the 38-year-old died by suicide.
Last August, an Ohio district court ruling set forth a new requirement for Walgreens, Walmart, and CVS pharmacists in two counties. These pharmacists must now document opioid prescriptions that are turned down, even for customers who have no previous purchases at that pharmacy, and they’re required to share this information with other locations in the same chain. None of the three pharmacies responded to an interview request from Leaps.org.
In a practice called red flagging, pharmacists may label a prescription suspicious for a variety of reasons, such as if a pharmacist observes an unusually high dose, a long distance from the patient’s home to the pharmacy, or cash payment. Pharmacists may question patients or prescribers to resolve red flags but, regardless of the explanation, they’re free to refuse to fill a prescription.
As the risk of litigation has grown, so has finger-pointing, says Seth Whitelaw, a compliance consultant at Whitelaw Compliance Group in West Chester, PA, who advises drug, medical device, and biotech companies. Drugmakers accused in National Prescription Opioid Litigation (NPOL), a complex set of thousands of cases on opioid epidemic deaths, which includes the Ohio district case, have argued that they shouldn’t be responsible for the large supply of opiates and overdose deaths. Yet, prosecutors alleged that these pharmaceutical companies hid addiction and overdose risks when labeling opioids, while distributors and pharmacists failed to identify suspicious orders or scripts.
Patients and pharmacists fear red flags
The requirements that pharmacists document prescriptions they refuse to fill so far only apply to two counties in Ohio. But Brushwood fears they will spread because of this precedent, and because there’s no way for pharmacists to predict what new legislation is on the way. “There is no definition of a red flag, there are no lists of red flags. There is no instruction on what to do when a red flag is detected. There’s no guidance on how to document red flags. It is a standardless responsibility,” Brushwood says. This adds trepidation for pharmacists—and more hoops to jump through for patients.
“We now have about a dozen studies that show that actually ripping somebody off their medication increases their risk of overdose and suicide by three to five times, destabilizes their health and mental health, often requires some hospitalization or emergency care, and can cause heart attacks,” says Kate Nicolson, founder of the National Pain Advocacy Center based in Boulder, Colorado. “It can kill people.” Nicolson was in pain for decades due to a surgical injury to the nerves leading to her spinal cord before surgeries fixed the problem.
Another issue is that primary care offices may view opioid use as a reason to turn down new patients. In a 2021 study, secret shoppers called primary care clinics in nine states, identifying themselves as long-term opioid users. When callers said their opioids were discontinued because their former physician retired, as opposed to an unspecified reason, they were more likely to be offered an appointment. Even so, more than 40 percent were refused an appointment. The study authors say their findings suggest that some physicians may try to avoid treating people who use opioids.
Abayomi-Paul says red flagging has changed how she fills prescriptions. “Once I go to one place, I try to [continue] going to that same place because of the amount of records that I have and making sure my medications don’t conflict,” Abayomi-Paul says.
Nicolson moved to Colorado from Washington D.C. in 2015, before the CDC issued its 2016 guidelines. When the guidelines came out, she found the change to be shockingly abrupt. “I went into the doctor one day here and she said, ‘I'm going to stop prescribing opioids to all my patients effective immediately.’” Since then, she’s spoken with dozens of patients who have been red-flagged or simply haven’t been able to access pain medication.
Despite her expertise, Nicolson isn’t positive she could successfully fill an opioid prescription today even if she needed one. At this point, she’s not sure exactly what various pharmacies would view as a red flag. And she’s not confident that these red flags even work. “You can have very legitimate reasons for being 50 miles away or having to go to multiple pharmacies, given that there are drug shortages now, as well as someone refusing to fill [a prescription.] It doesn't mean that you’re necessarily ‘drug seeking.’”
While there’s no easy solution, Whitelaw says clarifying the role of pharmacists and physicians in patient access to opioids could help people get the medication they need. He is seeking policy changes that focus on the needs of people in pain more than the number of prescriptions filled. He also advocates standardizing the definition of red flags and procedures for resolving them. Still, there will never be a single policy that can be applied to all people, explains Brushwood, the University of Florida professor. “You have to make a decision about each individual prescription.”