Biohackers Made a Cheap and Effective Home Covid Test -- But No One Is Allowed to Use It

A stock image of a home test for COVID-19. (Photo by Annie Spratt on Unsplash)

Last summer, when fast and cheap Covid tests were in high demand and governments were struggling to manufacture and distribute them, a group of independent scientists had a breakthrough.

Working on the Just One Giant Lab platform, an online community that serves as a kind of clearinghouse where open science researchers can find each other and collaborate, they created a simple, one-hour Covid test that anyone could take at home with just a cup of hot water. The group tested it across a network of home and professional laboratories and was named a semi-finalist team for the XPrize, a competition that rewards innovative solutions-based projects. Then the group hit a wall: they couldn't commercialize the test.

They wanted to keep their project open source, making it accessible to people around the world, so they decided to forgo traditional means of intellectual property protection and didn't seek patents. (They couldn't afford lawyers anyway.) And as a loose-knit group working in community labs and homes around the world, without the backing of a traditional scientific institution, they had no access to the resources or financial support needed to manufacture or distribute their test at scale.

But without ethical and regulatory approval for clinical testing, manufacture, and distribution, they were legally unable to conduct field tests with real people, leaving their innovative $16-per-test product languishing while other, more expensive over-the-counter tests made their way onto the market.

Who Are These Radical Scientists?

Independent, decentralized biomedical research has come of age. Also sometimes called DIYbio, biohacking, or community biology, depending on whom you ask, open research is today a global movement with thousands of members, from scientists with advanced degrees to middle-grade students. Their motivations and interests vary across a wide spectrum, but transparency and accessibility are key to the ethos of the movement. Teams are agile, focused on shoestring-budget R&D, and aim to disrupt business as usual in the ivory towers of the scientific establishment.

Initiatives developed within the community, such as Open Insulin, which hopes to engineer processes for affordable, small-batch insulin production, "Slybera," a provocative attempt to reverse engineer a $1 million gene therapy, and the hundreds of projects posted on the collaboration platform Just One Giant Lab during the pandemic, all have one thing in common: to pursue testing in humans, they need an ethics oversight mechanism.

These groups, most of which operate collaboratively in community labs, homes, and online, recognize that some sort of oversight or guidance is useful—and that it's the right thing to do.

But also, and perhaps more immediately, they need it because federal rules require ethics oversight of any biomedical research that's headed in the direction of the consumer market. In addition, some individuals engaged in this work do want to publish their research in traditional scientific journals, which—you guessed it—also require that research undergo an ethics evaluation. Ethics oversight is critical to ensuring that research is conducted responsibly, even by biohackers.

Bridging the Ethics Gap

The problem is that traditional oversight mechanisms, such as institutional review boards at government or academic research institutions, as well as the private boards used by pharmaceutical companies, are not accessible to most independent researchers. Traditional review boards are either closed to the public or charge fees that are out of reach for many citizen science initiatives. This has created an "ethics gap" in nontraditional scientific research.

Biohackers are seen in some ways as the direct descendants of "white hat" computer hackers, those focused on calling out security holes and contributing solutions to technical problems within self-regulating communities. In the case of health and biotechnology, those problems include both the absence of treatments and the availability of only expensive treatments for certain conditions. As the DIYbio community grows, there needs to be a way to provide assurance that, when the work is successful, the public will eventually be able to benefit from it. The team that developed the one-hour Covid test found a potential commercial partner and so might well overcome the oversight hurdle, but it's been 14 months since they developed the test, and counting.

In short, without some kind of oversight mechanism for the work of independent biomedical researchers, the solutions they innovate will never have the opportunity to reach consumers.

In a new paper in the journal Citizen Science: Theory & Practice, we consider the issue of the ethics gap and ask whether ethics oversight is something nontraditional researchers want, and if so, what forms it might take. Given that individuals within these communities sometimes vehemently disagree with each other, is consensus on these questions even possible?

We learned that there is no "one size fits all" solution for ethics oversight of nontraditional research. Rather, the appropriateness of any oversight model will depend on each initiative's objectives, needs, risks, and constraints.

We also learned that nontraditional researchers are generally willing (and in some cases eager) to engage with traditional scientific, legal, and bioethics experts on ethics, safety, and related questions.

We suggest that these experts make themselves available to help nontraditional researchers build infrastructure for ethics self-governance and identify when it might be necessary to seek outside assistance.

Independent biomedical research has promise, but like any emerging science, it poses novel ethical questions and challenges. Existing research ethics and oversight frameworks may not be well-suited to answer them in every context, so we need to think outside the box about what we can create for the future. That process should begin by talking to independent biomedical researchers about their activities, priorities, and concerns with an eye to understanding how best to support them.

Christi Guerrini and Alex Pearlman

Christi Guerrini, JD, MPH studies biomedical citizen science and is an Associate Professor at Baylor College of Medicine. Alex Pearlman, MA, is a science journalist and bioethicist who writes about emerging issues in biotechnology. They have recently launched outlawbio.org, a place for discussion about nontraditional research.

Scientists Want to Make Robots with Genomes That Help Grow Their Minds

Giving robots self-awareness as they move through space, and maybe even providing them with gene-like methods for storing rules of behavior, could be important steps toward creating more intelligent machines.

Photo credit: phonlamaiphoto

One day in the recent past, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that—it looked at itself this way and that, like a toddler exploring itself in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built spatial self-awareness, just like humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.
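To make that idea concrete, here is a minimal, hypothetical sketch in Python (PyTorch) of one way such a spatial self-model could be trained; it is not the Columbia team's actual code. A small network takes the arm's joint angles plus a 3D query point and predicts whether the robot's body occupies that point. The joint count, network size, and placeholder training data are all assumptions made purely for illustration.

# Illustrative sketch (not the Columbia team's code): learn a "spatial self-model"
# that maps joint angles + a 3D query point to the probability the robot's body
# occupies that point.
import torch
import torch.nn as nn

NUM_JOINTS = 4  # hypothetical 4-degree-of-freedom arm

class SelfModel(nn.Module):
    def __init__(self, num_joints: int = NUM_JOINTS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # logit: does the body occupy this 3D point?
        )

    def forward(self, joint_angles: torch.Tensor, query_xyz: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_angles, query_xyz], dim=-1)).squeeze(-1)

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    joints = torch.rand(64, NUM_JOINTS) * 3.14   # random joint configurations
    points = torch.rand(64, 3) * 2.0 - 1.0       # random 3D query points
    labels = torch.randint(0, 2, (64,)).float()  # placeholder occupancy labels
    loss = loss_fn(model(joints, points), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the lab, the occupancy labels would presumably come from the camera rig observing which points in space the arm actually fills at each pose; once trained, a model like this could be queried to estimate whether a planned motion would bring the arm into contact with a person or an obstacle.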

For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots are ultimately superior to humans in complex calculations, following rules to a tee and repeating the same steps perfectly. But even the biggest successes for human-robot collaborations—those in manufacturing and automotive industries—still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they’re moving and when they can endanger a human.

Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some have proved invaluable in natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.

Lina Zeldovich

Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her at http://linazeldovich.com/ and @linazeldovich.

Podcast: Wellness chatbots and meditation pods with Deepak Chopra

Leaps.org talked with Deepak Chopra about new mental health technologies he's developing with Jonathan Marcoschamer, CEO of OpenSeed, and others.

Hannah Cohen

Over the last few decades, perhaps no one has impacted healthy lifestyles more than Deepak Chopra. While several of his theories and recommendations have been criticized by prominent members of the scientific community, he has helped bring meditation, yoga and other practices for well-being into the mainstream in ways that benefit the health of vast numbers of people every day. His work has led many to accept new ways of thinking about alternative medicine, the power of mind over body, and the malleability of the aging process.

His impact is such that it's been observed that our culture no longer recognizes him as a human being but as a pervasive symbol of new-agey personal health and spiritual growth. Last week, I had a chance to confirm that Chopra is, in fact, a human being – and deserving of his icon status – when I talked with him for the Leaps.org podcast. He relayed ideas that were wise and ancient, yet highly relevant to our world today, with the fluidity and ease of someone discussing the weather. Showing no signs of slowing down at age 76, he described his prolific work, including the publication of two books in the past year and a range of technologies he’s developing, such as a meditation app, meditation pods for the workplace, and a chatbot for mental health called Piwi.

Take a listen and get inspired to do some meditation and deep thinking on the future of health. As Chopra told me, “If you don’t have time to meditate once per day, you probably need to meditate twice per day.”

Matt Fuchs
Matt Fuchs is the host of the Making Sense of Science podcast and served previously as the editor-in-chief of Leaps.org. He is a contributor to the Washington Post, and his articles have also appeared in the New York Times, WIRED, Nautilus Magazine, Fortune Magazine and TIME Magazine. Follow him @fuchswriter.