Meet the Scientists on the Frontlines of Protecting Humanity from a Man-Made Pathogen
Jean Peccoud wasn't expecting an email from the FBI. He definitely wasn't expecting the agency to invite him to a meeting. "My reaction was, 'What did I do wrong to be on the FBI watch list?'" he recalls.
He didn't know what the feds could possibly want from him. "I was mostly scared at this point," he says. "I was deeply disturbed by the whole thing."
But he decided to go anyway, and when he traveled to San Francisco for the 2008 gathering, the reason for the e-vite became clear: The FBI was reaching out to researchers like him—scientists interested in synthetic biology—in anticipation of the potential nefarious uses of this technology. "The whole purpose of the meeting was, 'Let's start talking to each other before we actually need to talk to each other,'" says Peccoud, now a professor of chemical and biological engineering at Colorado State University. "'And let's make sure next time you get an email from the FBI, you don't freak out.'"
Synthetic biology—which Peccoud defines as "the application of engineering methods to biological systems"—holds great power, and with that (as always) comes great responsibility. When you can synthesize genetic material in a lab, you can create new ways of diagnosing and treating people, and even new food ingredients. But you can also "print" the genetic sequence of a virus or virulent bacterium.
And while it's not easy, it's also not as hard as it could be, in part because dangerous sequences have publicly available blueprints. You can use those blueprints for white-hat research—which is, indeed, why the open blueprints exist—or you can use them for a black-hat attack. You could synthesize a dangerous pathogen's code on purpose, or you could unwittingly do so because someone tampered with your digital instructions. Ordering synthetic genes for viral sequences, says Peccoud, would likely be more difficult today than it was a decade ago.
"There is more awareness of the industry, and they are taking this more seriously," he says. "There is no specific regulation, though."
Trying to lock down the interconnected machines that enable synthetic biology, secure its lab processes, and keep dangerous pathogens out of the hands of bad actors is part of a relatively new field: cyberbiosecurity, whose name Peccoud and colleagues introduced in a 2018 paper.
Biological threats feel especially acute right now, during the ongoing pandemic. COVID-19 is a natural pathogen, not one engineered in a lab. But a future outbreak could start from a bug nature didn't build, if the wrong people get ahold of the right genetic blueprints and stitch them together in the right order. Securing the equipment and processes that make synthetic biology possible, so that doesn't happen, is the reason cyberbiosecurity was born.
The Origin Story
It is perhaps no coincidence that the FBI pinged Peccoud when it did: soon after a journalist ordered a sequence of smallpox DNA and wrote, for The Guardian, about how easy it was. "That was not good press for anybody," says Peccoud. Previously, in 2002, the Pentagon had funded SUNY Stony Brook researchers to try something similar: They ordered bits of polio DNA piecemeal and, over the course of three years, strung them together.
Although many years have passed since those early gotchas, the current patchwork of regulations still wouldn't necessarily prevent someone from pulling similar tricks now, and the technological systems that synthetic biology runs on are more intertwined — and so perhaps more hackable — than ever. Researchers like Peccoud are working to bring awareness to those potential problems, to promote accountability, and to provide early-detection tools that would catch the whiff of a rotten act before it became one.
Peccoud notes that if someone wants to get access to a specific pathogen, it is probably easier to collect it from the environment or take it from a biodefense lab than to whip it up synthetically. "However, people could use genetic databases to design a system that combines different genes in a way that would make them dangerous together without each of the components being dangerous on its own," he says. "This would be much more difficult to detect."
After his meeting with the FBI, Peccoud grew more interested in these sorts of security questions. So he was paying attention when, in 2010, the Department of Health and Human Services — now helping manage the response to COVID-19 — created guidance for how to screen synthetic biology orders, to make sure suppliers didn't accidentally send bad actors the sequences that make up bad genomes.
Guidance is nice, Peccoud thought, but it's just words. He wanted to turn those words into action: into a computer program. "I didn't know if it was something you can run on a desktop or if you need a supercomputer to run it," he says. So, one summer, he tasked a team of student researchers with poring over the sentences and turning them into scripts. "I let the FBI know," he says, having both learned his lesson and wanting to get in on the game.
Peccoud later joined forces with Randall Murch, a former FBI agent and current Virginia Tech professor, and a team of colleagues from both Virginia Tech and the University of Nebraska-Lincoln, on a prototype project for the Department of Defense. They went into a lab at the University of Nebraska-Lincoln and assessed all of its cyberbio-vulnerabilities. The lab develops and produces prototype vaccines, therapeutics, and prophylactic components — exactly the kind of place that you always, and especially right now, want to keep secure.
The team found dozens of Achilles' heels, and put them in a private report. Not long after that project, the two and their colleagues wrote the paper that first used the term "cyberbiosecurity." A second paper, led by Murch, came out five months later and provided a proposed definition and more comprehensive perspective on cyberbiosecurity. But although it's now a buzzword, it's the definition, not the jargon, that matters. "Frankly, I don't really care if they call it cyberbiosecurity," says Murch. Call it what you want: Just pay attention to its tenets.
A Database of Scary Sequences
Peccoud and Murch, of course, aren't the only ones working to screen sequences and secure devices. At the nonprofit Battelle Memorial Institute in Columbus, Ohio, for instance, scientists are working on solutions that balance the openness inherent to science and the closure that can stop bad stuff. "There's a challenge there that you want to enable research but you want to make sure that what people are ordering is safe," says the organization's Neeraj Rao.
Rao can't talk about the work Battelle does for the spy agency IARPA, the Intelligence Advanced Research Projects Activity, on a project called Fun GCAT, which aims to use computational tools to deep-screen gene-sequence orders to see if they pose a threat. He can, though, talk about a twin-type internal project: ThreatSEQ (pronounced, of course, "threat seek").
The project started when "a government customer" (as usual, no one will say which) asked Battelle to curate a list of dangerous toxins and pathogens, and their genetic sequences. The researchers even started tagging sequences according to their function — like whether a particular sequence is involved in a germ's virulence or toxicity. That helps if someone is trying to use synthetic biology not to gin up a yawn-inducing old bug but to engineer a totally new one. "How do you essentially predict what the function of a novel sequence is?" says Rao. You look at what other, similar bits of code do.
"We were creating [a] wiki of all these nasty things," says Rao. As they were working, they realized that DNA manufacturers could potentially scan in sequences that people ordered, run them against the database, and see if anything scary matched up. Kind of like that plagiarism software your college professors used.
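The matching idea Rao describes can be sketched in a few lines of code. This is a minimal illustration, not Battelle's actual algorithm: the database entries are hypothetical toy sequences, and real screens compare far longer windows (and use similarity search, not exact matches).

```python
# Minimal sketch of sequence-of-concern screening: index short windows
# (k-mers) of each database entry, then check whether an incoming order
# shares any window with a known bad sequence. All names and sequences
# below are hypothetical illustrations.

K = 8  # window size; real screening uses much longer stretches

def kmers(seq, k=K):
    """All overlapping k-length windows of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_index(threat_db):
    """Map each k-mer to the database entries that contain it."""
    index = {}
    for name, seq in threat_db.items():
        for km in kmers(seq):
            index.setdefault(km, set()).add(name)
    return index

def screen_order(order_seq, index):
    """Return names of database entries sharing any k-mer with the order."""
    hits = set()
    for km in kmers(order_seq):
        hits |= index.get(km, set())
    return hits

# Hypothetical toy database of "sequences of concern"
db = {"toxin_A": "ATGCCGTTAGGCATTACG", "virulence_B": "GGATTCCAGTTAACGGTA"}
idx = build_index(db)
print(screen_order("TTTATGCCGTTAGGCAAA", idx))  # flags toxin_A
```

A flagged order here is only a lead for a human reviewer, which mirrors how the article describes the workflow: the software surfaces matches, and the manufacturer decides what to do about them.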
Battelle began offering that screening capability as ThreatSEQ. Customers, currently including Twist Bioscience, submit their sequences and get a report back; the manufacturers then make the final decision about whether to fulfill a flagged order, or whether, in the analogy, to give an F for plagiarism. After all, legitimate researchers do legitimately need to have DNA from legitimately bad organisms.
"Maybe it's the CDC," says Rao. "If things check out, oftentimes [the manufacturers] will fulfill the order." If it's your aggrieved uncle seeking the virulent pathogen, maybe not. But ultimately, no one is stopping the manufacturers from filling a flagged order anyway.
Beyond that kind of tampering, though, cyberbiosecurity also includes keeping a lockdown on the machines that make the genetic sequences. "Somebody now doesn't need physical access to infrastructure to tamper with it," says Rao. So it needs the same cyber protections as other internet-connected devices.
Scientists are also now using DNA to store data, encoding information in its bases rather than on a hard drive. To retrieve the data, you sequence the DNA and read it back into a computer. Think like a bad guy, though, and you'd realize an attacker could, for instance, insert a computer virus into the genetic code, so that when a researcher went to retrieve her data, her desktop would crash or infect the other machines on the network.
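The storage idea itself is simple at its core: DNA has four bases, so each base can carry two bits. The sketch below uses a common textbook-style two-bit mapping, not any particular lab's encoding scheme (real systems add error correction and avoid sequences that are hard to synthesize).

```python
# Minimal sketch of data storage in DNA: two bits per base, so any byte
# string maps to a strand of A/C/G/T and back. The 2-bit mapping is a
# conventional illustration, not a production encoding.

BASE = "ACGT"                      # 00 -> A, 01 -> C, 10 -> G, 11 -> T
VAL = {b: i for i, b in enumerate(BASE)}

def encode(data: bytes) -> str:
    """Encode bytes as a DNA strand, most significant bit pair first."""
    return "".join(BASE[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for b in strand[i:i + 4]:
            byte = (byte << 2) | VAL[b]
        out.append(byte)
    return bytes(out)

strand = encode(b"hi")
print(strand, decode(strand))  # 8 bases for 2 bytes, then the bytes back
```

The attack described in the next paragraph exploits exactly this read-back step: the decoded bytes are fed to software, and bytes chosen by an adversary can be crafted to exploit a bug in that software.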
Something like that actually happened in 2017 at the USENIX security symposium, an annual programming conference: Researchers from the University of Washington encoded malware into DNA, and when the gene sequencer assembled the DNA, it corrupted the sequencer's software, then the computer that controlled it.
"This vulnerability could be just the opening an adversary needs to compromise an organization's systems," Inspirion Biosciences' J. Craig Reed and Nicolas Dunaway wrote in a paper for Frontiers in Bioengineering and Biotechnology, included in an e-book that Murch edited called Mapping the Cyberbiosecurity Enterprise.
Where We Go From Here
So what to do about all this? That's hard to say, in part because we don't know how big a current problem any of it poses. As noted in Mapping the Cyberbiosecurity Enterprise, "Information about private sector infrastructure vulnerabilities or data breaches is protected from public release by the Protected Critical Infrastructure Information (PCII) Program," if the privateers share the information with the government. "Government sector vulnerabilities or data breaches," meanwhile, "are rarely shared with the public."
The regulations that could rein in problems aren't as robust as many would like them to be, and much good behavior is technically voluntary — although guidelines and best practices do exist from organizations like the International Gene Synthesis Consortium and the National Institute of Standards and Technology.
Rao thinks it would be smart if grant-giving agencies like the National Institutes of Health and the National Science Foundation required any scientists who took their money to work with manufacturing companies that screen sequences. But he also still thinks we're on our way to being ahead of the curve, in terms of preventing print-your-own bioproblems: "What I think is encouraging right now is the fact that we're even having this discussion," says Rao.
Peccoud, for his part, has worked to keep such conversations going, including by running training sessions for the FBI and planning a workshop in which students imagine, and work to guard against, malicious uses of their research. Yet Peccoud believes that human error, flawed lab processes, and mislabeled samples may pose bigger threats than outside attackers. "Way too often, I think that people think of security as, 'Oh, there is a bad guy going after me,' and the main thing you should be worried about is yourself and errors," he says.
Murch thinks we're only at the beginning of understanding where our weak points are, and how many times they've been bruised. Decreasing those contusions, though, won't just take more secure systems. "The answer won't be technical only," he says. It'll be social, political, policy-related, and economic — a cultural revolution all its own.
Story by Big Think
For most of history, artificial intelligence (AI) has been relegated almost entirely to the realm of science fiction. Then, in late 2022, it burst into reality — seemingly out of nowhere — with the popular launch of ChatGPT, the generative AI chatbot that solves tricky problems, designs rockets, has deep conversations with users, and even aces the Bar exam.
But the truth is that before ChatGPT nabbed the public’s attention, AI was already here, and it was doing more important things than writing essays for lazy college students. Case in point: It was key to saving the lives of tens of millions of people.
AI-designed mRNA vaccines
As Dave Johnson, chief data and AI officer at Moderna, told MIT Technology Review‘s In Machines We Trust podcast in 2022, AI was integral to creating the company’s highly effective mRNA vaccine against COVID. Moderna and Pfizer/BioNTech’s mRNA vaccines collectively saved between 15 and 20 million lives, according to one estimate from 2022.
Johnson described how AI was hard at work at Moderna, well before COVID arose to infect billions. The pharmaceutical company focuses on finding mRNA therapies to fight off infectious disease, treat cancer, or thwart genetic illness, among other medical applications. Messenger RNA molecules are essentially molecular instructions for cells that tell them how to create specific proteins, which do everything from fighting infection, to catalyzing reactions, to relaying cellular messages.
Johnson and his team put AI and automated robots to work making lots of different mRNAs for scientists to experiment with. Moderna quickly went from making about 30 per month to more than one thousand. They then created AI algorithms to optimize mRNA to maximize protein production in the body — more bang for the biological buck.
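One reason mRNA sequences can be "optimized" at all is that the genetic code is redundant: most amino acids are encoded by several synonymous codons, and some codons are used (and translated) more readily than others. The sketch below shows the simplest possible version of that idea, a greedy codon swap; the usage weights are hypothetical toy values, and this is far simpler than the models Moderna describes.

```python
# Illustrative codon-optimization sketch: for each amino acid in a protein,
# pick the synonymous codon with the highest usage weight. The weights
# below are hypothetical toy numbers, not real codon-usage statistics.

# Toy table: amino acid (one-letter code) -> {mRNA codon: usage weight}
CODON_USAGE = {
    "M": {"AUG": 1.00},
    "F": {"UUU": 0.45, "UUC": 0.55},
    "L": {"CUG": 0.40, "CUC": 0.20, "CUU": 0.13, "UUA": 0.07},
    "K": {"AAA": 0.42, "AAG": 0.58},
}

def optimize(protein: str) -> str:
    """Greedily pick the highest-weight codon for each amino acid."""
    return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                   for aa in protein)

print(optimize("MFLK"))  # AUGUUCCUGAAG
```

Real optimizers weigh much more than codon frequency — mRNA secondary structure and stability, for example — which is why this is framed in the article as a machine-learning problem rather than a lookup table.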
For Johnson and his team's next trick, they used AI to automate science itself. Once Moderna's scientists have an mRNA to experiment with, they do pre-clinical tests in the lab. They then pore over reams of data to see which mRNAs could progress to the next stage: animal trials. This process is long, repetitive, and soul-sucking — ill-suited to a creative scientist but great for a mindless AI algorithm. With scientists' input, models were made to automate this tedious process.
All these AI systems were put in place over the past decade. Then COVID showed up. So when the genome sequence of the coronavirus was made public in January 2020, Moderna was off to the races pumping out and testing mRNAs that would tell cells how to manufacture the coronavirus’s spike protein so that the body’s immune system would recognize and destroy it. Within 42 days, the company had an mRNA vaccine ready to be tested in humans. It eventually went into hundreds of millions of arms.
Biotech harnesses the power of AI
Moderna is now turning its attention to other ailments that could be solved with mRNA, and the company is continuing to lean on AI. Scientists are still coming to Johnson with automation requests, which he happily obliges.
“We don’t think about AI in the context of replacing humans,” he told the Me, Myself, and AI podcast. “We always think about it in terms of this human-machine collaboration, because they’re good at different things. Humans are really good at creativity and flexibility and insight, whereas machines are really good at precision and giving the exact same result every single time and doing it at scale and speed.”
Moderna, which was founded as a “digital biotech,” is undoubtedly the poster child of AI use in mRNA vaccines. Moderna recently signed a deal with IBM to use the company’s quantum computers as well as its proprietary generative AI, MoLFormer.
Moderna’s success is encouraging other companies to follow its example. In January, BioNTech, which partnered with Pfizer to make the other highly effective mRNA vaccine against COVID, acquired the company InstaDeep for $440 million to implement its machine learning AI across its mRNA medicine platform. And in May, Chinese technology giant Baidu announced an AI tool that designs super-optimized mRNA sequences in minutes. A nearly countless number of mRNA molecules can code for the same protein, but some are more stable and result in the production of more proteins. Baidu’s AI, called “LinearDesign,” finds these mRNAs. The company licensed the tool to French pharmaceutical company Sanofi.
Writing in the journal Accounts of Chemical Research in late 2021, Sebastian M. Castillo-Hair and Georg Seelig, computer engineers who focus on synthetic biology at the University of Washington, forecast that AI machine learning models will further accelerate the biotechnology research process, putting mRNA medicine into overdrive to the benefit of all.
This article originally appeared on Big Think, home of the brightest minds and biggest ideas of all time.
Opioid prescription policies may hurt those in chronic pain
Tinu Abayomi-Paul works as a writer and activist, plus one unwanted job: Trying to fill her opioid prescription. She says that some pharmacists laugh and tell her that no one needs the amount of pain medication that she is seeking. Another pharmacist near her home in Venus, Tex., refused to fill more than seven days of a 30-day prescription.
To get a new prescription—partially filled opioid prescriptions can’t be dispensed later—Abayomi-Paul needed to return to her doctor’s office. But without her medication, she was having too much pain to travel there, much less return to the pharmacy. She rationed out the pills over several weeks, an agonizing compromise that left her unable to work, interact with her children, sleep restfully, or leave the house. “Don’t I deserve to do more than survive?” she says.
Abayomi-Paul’s pain results from a degenerative spine disorder, chronic lymphocytic leukemia, and more than a dozen other diagnoses and disabilities. She is part of a growing group of people with chronic pain who have been negatively impacted by the fallout from efforts to prevent opioid overdose deaths.
Guidelines for dispensing these pills are complicated because many opioids, like codeine, oxycodone, and morphine, are prescribed legally for pain, while others, such as heroin, are used illegally. Yet deaths from opioids have increased rapidly since 1999 and become a national emergency. The CDC has identified three surges in opioid overdose deaths: an increase in opioid prescriptions in the ‘90s, a surge of heroin around 2010, and an influx of fentanyl and other powerful synthetic opioids beginning in 2013.
As overdose deaths grew, so did public calls to address them, prompting the CDC to change its prescription guidelines in 2016. The new guidelines suggested limiting medication for acute pain to a seven-day supply, capping daily doses of morphine, and other restrictions. Some statistics suggest that these policies have worked; from 2016 to 2019, prescriptions for opioids fell 44 percent. Physicians also started progressively lowering opioid doses for patients, a practice called tapering. A study tracking nearly 100,000 Medicare beneficiaries on opioids found that about 13 percent of patients were tapering in 2012, and that number increased to about 23 percent by 2017.
But some physicians may be too aggressive with this tapering strategy. About one in four people had doses reduced by more than 10 percent per week, a rate faster than the CDC recommends. The approach left people like Abayomi-Paul without the medication they needed. Every year, Abayomi-Paul says, her prescriptions are harder to fill. David Brushwood, a pharmacy professor who specializes in policy and outcomes at the University of Florida in Gainesville, says opioid dosing isn’t one-size-fits-all. “Patients need to be taken care of individually, not based on what some government agency says they need,” he says.
‘This is not survivable’
Health policy and disability rights attorney Erin Gilmer advocated for people with pain, using her own experience with chronic pain and a host of medical conditions as a guidepost. She launched an advocacy website, Healthcare as a Human Right, and shared her struggles on Twitter: “This pain is more than anything I've endured before and I've already been through too much. Yet because it's not simply identified no one believes it's as bad as it is. This is not survivable.”
When her pain dramatically worsened midway through 2021, Gilmer’s posts grew ominous: “I keep thinking it can't possibly get worse but somehow every day is worse than the last.”
The CDC revised its guidelines in 2022 after criticisms that people with chronic pain were being undertreated, enduring dangerous withdrawal symptoms, and suffering psychological distress. (Long-term opioid use can cause physical dependency, an adaptive reaction that is different from the compulsive misuse associated with a substance use disorder.) It was too late for Gilmer. On July 7, 2021, the 38-year-old died by suicide.
Last August, an Ohio district court ruling set forth a new requirement for Walgreens, Walmart, and CVS pharmacists in two counties. These pharmacists must now document opioid prescriptions that are turned down, even for customers who have no previous purchases at that pharmacy, and they’re required to share this information with other locations in the same chain. None of the three pharmacies responded to an interview request from Leaps.org.
In a practice called red flagging, pharmacists may label a prescription suspicious for a variety of reasons, such as if a pharmacist observes an unusually high dose, a long distance from the patient’s home to the pharmacy, or cash payment. Pharmacists may question patients or prescribers to resolve red flags but, regardless of the explanation, they’re free to refuse to fill a prescription.
As the risk of litigation has grown, so has finger-pointing, says Seth Whitelaw, a compliance consultant at Whitelaw Compliance Group in West Chester, PA, who advises drug, medical device, and biotech companies. Drugmakers accused in National Prescription Opioid Litigation (NPOL), a complex set of thousands of cases on opioid epidemic deaths, which includes the Ohio district case, have argued that they shouldn’t be responsible for the large supply of opiates and overdose deaths. Yet, prosecutors alleged that these pharmaceutical companies hid addiction and overdose risks when labeling opioids, while distributors and pharmacists failed to identify suspicious orders or scripts.
Patients and pharmacists fear red flags
The requirements that pharmacists document prescriptions they refuse to fill so far only apply to two counties in Ohio. But Brushwood fears they will spread because of this precedent, and because there’s no way for pharmacists to predict what new legislation is on the way. “There is no definition of a red flag, there are no lists of red flags. There is no instruction on what to do when a red flag is detected. There’s no guidance on how to document red flags. It is a standardless responsibility,” Brushwood says. This adds trepidation for pharmacists—and more hoops to jump through for patients.
“We now have about a dozen studies that show that actually ripping somebody off their medication increases their risk of overdose and suicide by three to five times, destabilizes their health and mental health, often requires some hospitalization or emergency care, and can cause heart attacks,” says Kate Nicolson, founder of the National Pain Advocacy Center based in Boulder, Colorado. “It can kill people.” Nicolson was in pain for decades due to a surgical injury to the nerves leading to her spinal cord before surgeries fixed the problem.
Another issue is that primary care offices may view opioid use as a reason to turn down new patients. In a 2021 study, secret shoppers called primary care clinics in nine states, identifying themselves as long-term opioid users. When callers said their opioids were discontinued because their former physician retired, as opposed to an unspecified reason, they were more likely to be offered an appointment. Even so, more than 40 percent were refused an appointment. The study authors say their findings suggest that some physicians may try to avoid treating people who use opioids.
Abayomi-Paul says red flagging has changed how she fills prescriptions. “Once I go to one place, I try to [continue] going to that same place because of the amount of records that I have and making sure my medications don’t conflict,” Abayomi-Paul says.
Nicolson moved to Colorado from Washington D.C. in 2015, before the CDC issued its 2016 guidelines. When the guidelines came out, she found the change to be shockingly abrupt. “I went into the doctor one day here and she said, ‘I'm going to stop prescribing opioids to all my patients effective immediately.’” Since then, she’s spoken with dozens of patients who have been red-flagged or simply haven’t been able to access pain medication.
Despite her expertise, Nicolson isn’t positive she could successfully fill an opioid prescription today even if she needed one. At this point, she’s not sure exactly what various pharmacies would view as a red flag. And she’s not confident that these red flags even work. “You can have very legitimate reasons for being 50 miles away or having to go to multiple pharmacies, given that there are drug shortages now, as well as someone refusing to fill [a prescription.] It doesn't mean that you’re necessarily ‘drug seeking.’”
While there’s no easy solution, Whitelaw says clarifying the role of pharmacists and physicians in patient access to opioids could help people get the medication they need. He is seeking policy changes that focus on the needs of people in pain more than the number of prescriptions filled. He also advocates standardizing the definition of red flags and procedures for resolving them. Still, there will never be a single policy that can be applied to all people, explains Brushwood, the University of Florida professor. “You have to make a decision about each individual prescription.”