Biohackers Made a Cheap and Effective Home Covid Test -- But No One Is Allowed to Use It

A stock image of a home test for COVID-19. (Photo by Annie Spratt on Unsplash)

Last summer, when fast and cheap Covid tests were in high demand and governments were struggling to manufacture and distribute them, a group of independent scientists had a breakthrough.

Working on the Just One Giant Lab platform, an online community that serves as a kind of clearing house where open science researchers can find each other and collaborate, they managed to create a simple, one-hour Covid test that anyone could take at home with just a cup of hot water. The group tested it across a network of home and professional laboratories and was named a semi-finalist team for the XPrize, a competition that rewards innovative, solutions-based projects. Then the group hit a wall: they couldn't commercialize the test.

They wanted to keep their project open source, making it accessible to people around the world, so they decided to forgo traditional means of intellectual property protection and didn't seek patents. (They couldn't afford lawyers anyway.) And, as a loose-knit group not supported by a traditional scientific institution, working in community labs and homes around the world, they had no access to resources or financial support for manufacturing or distributing their test at scale.

But without ethical and regulatory approval for clinical testing, manufacture, and distribution, they were legally unable to field-test the product on real people, leaving their innovative $16-per-test product languishing while other, more expensive over-the-counter tests made their way onto the market.

Who Are These Radical Scientists?

Independent, decentralized biomedical research has come of age. Also sometimes called DIYbio, biohacking, or community biology, depending on whom you ask, open research is today a global movement with thousands of members, from scientists with advanced degrees to middle-grade students. Their motivations and interests vary across a wide spectrum, but transparency and accessibility are key to the ethos of the movement. Teams are agile, focused on shoestring-budget R&D, and aim to disrupt business as usual in the ivory towers of the scientific establishment.

Initiatives developed within the community, such as Open Insulin, which hopes to engineer processes for affordable, small-batch insulin production, "Slybera," a provocative attempt to reverse engineer a $1 million gene therapy, and the hundreds of projects posted on the collaboration platform Just One Giant Lab during the pandemic, all have one thing in common: to pursue testing in humans, they need an ethics oversight mechanism.

These groups, most of which operate collaboratively in community labs, homes, and online, recognize that some sort of oversight or guidance is useful—and that it's the right thing to do.

But also, and perhaps more immediately, they need it because federal rules require ethics oversight of any biomedical research that's headed in the direction of the consumer market. In addition, some individuals engaged in this work do want to publish their research in traditional scientific journals, which—you guessed it—also require that research has undergone an ethics evaluation. Ethics oversight is critical to ensuring that research is conducted responsibly, even by biohackers.

Bridging the Ethics Gap

The problem is that traditional oversight mechanisms, such as institutional review boards at government or academic research institutions, as well as the private boards utilized by pharmaceutical companies, are not accessible to most independent researchers. Traditional review boards are either closed to the public, or charge fees that are out of reach for many citizen science initiatives. This has created an "ethics gap" in nontraditional scientific research.

Biohackers are seen in some ways as the direct descendants of "white hat" computer hackers: those focused on calling out security holes and contributing solutions to technical problems within self-regulating communities. In the case of health and biotechnology, those problems include both the absence of treatments and the availability of only expensive treatments for certain conditions. As the DIYbio community grows, there needs to be a way to provide assurance that, when the work is successful, the public will eventually be able to benefit from it. The team that developed the one-hour Covid test found a potential commercial partner and so might well overcome the oversight hurdle, but it has now been 14 months and counting since they developed the test.

In short, without some kind of oversight mechanism for the work of independent biomedical researchers, the solutions they innovate will never have the opportunity to reach consumers.

In a new paper in the journal Citizen Science: Theory & Practice, we consider the issue of the ethics gap and ask whether ethics oversight is something nontraditional researchers want, and if so, what forms it might take. Given that individuals within these communities sometimes vehemently disagree with each other, is consensus on these questions even possible?

We learned that there is no "one size fits all" solution for ethics oversight of nontraditional research. Rather, the appropriateness of any oversight model will depend on each initiative's objectives, needs, risks, and constraints.

We also learned that nontraditional researchers are generally willing (and in some cases eager) to engage with traditional scientific, legal, and bioethics experts on ethics, safety, and related questions.

We suggest that these experts make themselves available to help nontraditional researchers build infrastructure for ethics self-governance and identify when it might be necessary to seek outside assistance.

Independent biomedical research has promise, but like any emerging science, it poses novel ethical questions and challenges. Existing research ethics and oversight frameworks may not be well-suited to answer them in every context, so we need to think outside the box about what we can create for the future. That process should begin by talking to independent biomedical researchers about their activities, priorities, and concerns with an eye to understanding how best to support them.

Christi Guerrini and Alex Pearlman

Christi Guerrini, JD, MPH studies biomedical citizen science and is an Associate Professor at Baylor College of Medicine. Alex Pearlman, MA, is a science journalist and bioethicist who writes about emerging issues in biotechnology. They have recently launched outlawbio.org, a place for discussion about nontraditional research.

Should We Use Technologies to Enhance Morality?

Should we welcome biomedical technologies that could enhance our ability to tell right from wrong and curb behaviors considered immoral, such as dishonesty, prejudice, and antisocial aggression?

Photo by Asa Rodger on Unsplash

Our moral ‘hardware’ evolved more than 100,000 years ago, while humans were still scratching out an existence on the savannah. The perils we encountered back then were radically different from those that confront us now. To survive and flourish in the face of complex future challenges, our archaic operating systems might need an upgrade – in non-traditional ways.

Morality refers to standards of right and wrong when it comes to our beliefs, behaviors, and intentions. Broadly, moral enhancement is the use of biomedical technology to improve moral functioning. This could include augmenting empathy, altruism, or moral reasoning, or curbing antisocial traits like outgroup bias and aggression.

The claims related to moral enhancement are grand and polarizing: it’s been both tendered as a solution to humanity’s existential crises and bluntly dismissed as an armchair hypothesis. So, does the concept have any purchase? The answer leans heavily on our definition and expectations.

Cohen Marcus Lionel Brown
Cohen Marcus Lionel Brown teaches and researches ethics and applied philosophy at UOW in Greater Sydney, Australia. Specifically, he works on questions in neuroethics, moral psychology, aggression studies, and human enhancement. He is a current member of the Australasian Association of Philosophy Postgraduate Committee, Sydney Health Ethics Network, and the International Society for Research on Aggression. Cohen also works as a judge of the International Ethics Olympiad, and volunteers with the not-for-profit organization Primary Ethics. Find him on Twitter @CohenMarcusLio1
Podcast: The Friday Five - Your Health Research Roundup

The Friday Five is a new series in which Leaps.org covers five breakthroughs in research over the previous week that you may have missed.

The Friday Five is a new podcast series in which Leaps.org covers five breakthroughs in research over the previous week that you may have missed. There are plenty of controversies and ethical issues in science – and we get into many of them in our online magazine – but there’s also plenty to be excited about, and this news roundup is focused on inspiring scientific work to give you some momentum headed into the weekend.

Covered in this week's Friday Five:
- Puffer fish chemical for treating chronic pain
- Sleep study on the health benefits of waking up multiple times per night
- Best exercise regimens for reducing the risk of mortality, aka living longer
- AI breakthrough in mapping protein structures with DeepMind
- Ultrasound stickers to see inside your body

Matt Fuchs
Matt Fuchs is the host of the Making Sense of Science podcast and served previously as the editor-in-chief of Leaps.org. He is a contributor to the Washington Post, and his articles have also appeared in the New York Times, WIRED, Nautilus Magazine, Fortune Magazine, and TIME Magazine. Follow him @fuchswriter.