Do New Tools Need New Ethics?
Scarcely a week goes by without the announcement of another breakthrough driven by advancing biotechnology. Recent examples include the use of gene editing tools to successfully alter human embryos or clone monkeys; new immunotherapy-based treatments offering longer lives or even potential cures for previously deadly cancers; and the creation of genetically altered mosquitoes that use "gene drives" to rapidly spread changes through a wild population and alter its capacity to carry disease.
Each of these examples puts pressure on current policy guidelines and approaches, some in place since the late 1970s, which were created to help guide the introduction of controversial new life sciences technologies. But do the policies that made sense decades ago continue to make sense today, or do the tools created during different eras in science demand new ethics guidelines and policies?
Advances in biotechnology aren't new, of course; they have been the hallmark of science since the creation of the modern U.S. National Institutes of Health in the 1940s and of similar government agencies elsewhere. Funding agencies focused on health sciences research with the hope of creating breakthroughs in human health, and along the way, basic science discoveries led to the creation of new scientific tools that offered the ability to approach life, death, and disease in fundamentally new ways.
For example, take the discovery in the 1970s of the "chemical scissors" in living cells called restriction enzymes, which could be controlled and used to introduce cuts at predictable locations in a strand of DNA. This led to the creation of tools that for the first time allowed for genetic modification of any organism with DNA, which meant bacteria, plants, animals, and even humans could in theory have harmful mutations repaired, but also that changes could be made to alter or even add genetic traits, with potentially ominous implications.
The scientists involved in that early research convened a small conference to discuss not only the science but also how to responsibly control its potential uses and their implications. The meeting became known as the Asilomar Conference, after the meeting center where it was held, and is often cited as the prime example of the scientific community policing itself. While the Asilomar recommendations were not sufficient from a policy standpoint, they offered a blueprint on which policies could be based and presented a model of the scientific community setting responsible controls for itself.
But the environment for conducting science has changed over the succeeding decades and is dramatically different today than it was in the 1970s, 80s, or even the early 2000s. The oversight and regulatory regime that has controlled the introduction of so-called "gene therapy" in humans since the mid-1970s is beginning to show signs of fraying. The vast majority of such research was performed in the U.S., U.K., and Europe, where policies were largely harmonized. But as the tools for manipulating humans at the molecular level advanced, they became more reliable and more precise, as well as cheaper and easier to use—think CRISPR—and therefore accessible to more people in many more countries, many without clear oversight or policies laying out responsible controls.
As if to make the point through news headlines, scientists in China announced in 2017 that they had attempted to perform gene editing on in vitro human embryos to repair an inherited mutation for beta thalassemia, research that would not be permitted in the U.S. and most European countries and was, at the time, also banned in the U.K. Similarly, specialists from a U.S. reproductive medicine clinic announced in 2016 that they had performed a highly controversial reproductive procedure by which DNA from two women is combined (producing so-called "three parent babies"), at a satellite clinic they had opened in Mexico to avoid the prohibition on the technique passed by the U.S. Congress in 2015.
In both cases, genetic changes were introduced into human embryos that if successful would lead to the birth of a child with genetically modified germline cells—the sperm in boys or eggs in girls—with those genetic changes passed on to all future generations of related offspring. Those are just two very recent examples, and it doesn't require much imagination to predict the list of controversial possible applications of advancing biotechnologies: attempts at genetic augmentation or even cloning in humans, and alterations of the natural environment with genetically engineered mosquitoes or other insects in areas with endemic disease. In fact, as soon as this month, scientists in Africa may release genetically modified mosquitoes for the first time.
The technical barriers are falling at a dramatic pace, but policy hasn't kept up, either in terms of what controls make sense or in how to address what is an increasingly global challenge. There is no precedent for global-scale science policy, though that is exactly what this moment seems to demand. Mechanisms for policy at global scale are limited: think UN declarations, signatory countries, and sometimes international treaties. All are slow and cumbersome, with limited track records of success.
But not all the news is bad. There are ongoing efforts at international discussion, such as an international summit on human genome editing convened in 2015 by the National Academies of Sciences and Medicine (U.S.), the Royal Society (U.K.), and the Chinese Academy of Sciences; a follow-on international consensus committee whose report was issued in 2017; and a second international summit in Hong Kong this November.
These efforts need to focus less on common regulatory policies, which will be elusive if not impossible to create and implement, and more on common ground for the principles that ought to guide country-level rules. Such principles might include those proposed by the international consensus committee: transparency, due care, responsible science adhering to professional norms, promoting the wellbeing of those affected, and transnational cooperation. Work to create a set of shared norms is ongoing and worth continued effort as the relevant stakeholders attempt to navigate what can only be called a brave new world.
Scientists experiment with burning iron as a fuel source
Story by Freethink
Try burning an iron ingot and you’ll have to wait a long time — but grind it into a powder and it will readily burst into flames. That’s how sparklers work: metal dust burning in a beautiful display of light and heat. But could we burn iron for more than fun? Could this simple material become a cheap, clean, carbon-free fuel?
In new experiments — conducted on rockets, in microgravity — Canadian and Dutch researchers are looking at ways of boosting the efficiency of burning iron, with a view to turning this abundant material — the fourth most common in the Earth’s crust, about 5% of its mass — into an alternative energy source.
Iron as a fuel
Iron is abundantly available and cheap. More importantly, the byproduct of burning iron is rust (iron oxide), a solid material that is easy to collect and recycle. Neither burning iron nor converting its oxide back produces any carbon in the process.
Iron has a high energy density: it takes up almost the same volume as gasoline to produce the same amount of energy. However, iron has poor specific energy: it is far heavier than gasoline for the same amount of energy. (Think of picking up a jug of gasoline, then imagine picking up a similarly sized chunk of iron.) That weight is prohibitive for many applications. Burning iron to run a car isn’t very practical if the iron fuel weighs as much as the car itself.
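To make that tradeoff concrete, here is a back-of-the-envelope calculation. It is only a sketch: the thermochemical figures are standard textbook values, and the powder packing density is an assumption, none of it taken from the article.

```python
# Rough comparison of iron vs. gasoline as an energy carrier.
# Assumptions (not from the article):
#   - iron burns fully to Fe2O3; ΔHf(Fe2O3) ≈ -824 kJ/mol, i.e. ~412 kJ per mol Fe
#   - gasoline: ~46 MJ/kg and ~34 MJ/L (typical lower heating values)
#   - iron powder bulk density assumed ~3,000 kg/m³ (loose powder, not a solid ingot)

M_FE = 55.845e-3           # kg/mol, molar mass of iron
E_PER_MOL_FE = 412e3       # J released per mol of Fe oxidized to Fe2O3

specific_energy_fe = E_PER_MOL_FE / M_FE / 1e6          # MJ/kg
bulk_density_fe = 3000.0                                 # kg/m³ (assumed packing)
energy_density_fe = specific_energy_fe * bulk_density_fe / 1000.0  # MJ/L

specific_energy_gasoline = 46.0   # MJ/kg
energy_density_gasoline = 34.0    # MJ/L

print(f"iron powder: {specific_energy_fe:.1f} MJ/kg, {energy_density_fe:.0f} MJ/L")
print(f"gasoline:    {specific_energy_gasoline:.1f} MJ/kg, {energy_density_gasoline:.0f} MJ/L")
print(f"mass penalty: ~{specific_energy_gasoline / specific_energy_fe:.1f}x heavier per MJ")
```

Under these assumptions, iron powder lands around 7 MJ/kg and roughly 20 MJ/L, volumetrically in gasoline's ballpark but about six times heavier per unit of energy, which is the density/weight split the paragraph above describes.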
In its powdered form, however, iron offers more promise as a high-density energy carrier or storage system. Iron-burning furnaces could provide direct heat for industry, home heating, or to generate electricity.
Plus, iron oxide is potentially renewable by reacting with electricity or hydrogen to become iron again (as long as you’ve got a source of clean electricity or green hydrogen). When there’s excess electricity available from renewables like solar and wind, for example, rust could be converted back into iron powder, and then burned on demand to release that energy again.
However, these methods of recycling rust are currently very energy-intensive and inefficient, so improvements to the efficiency of burning iron itself may be crucial to making such a circular system viable.
The science of discrete burning
Powdered particles have a high surface-area-to-volume ratio, which makes them easier to ignite, and metals are no exception.
Under the right circumstances, powdered iron can burn in a manner known as discrete burning. In its most ideal form, the flame completely consumes one particle before the heat radiating from it combusts other particles in its vicinity. By studying this process, researchers can better understand and model how iron combusts, allowing them to design better iron-burning furnaces.
Discrete burning is difficult to achieve on Earth. Perfect discrete burning requires a specific particle density and oxygen concentration. When the particles are too close and compacted, the fire jumps to neighboring particles before fully consuming a particle, resulting in a more chaotic and less controlled burn.
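A toy model makes the contrast concrete. The sketch below is an illustration of the idea only, not the researchers' actual model, and every parameter in it is invented: particles sit in a row, and each burning particle ignites any unburned particle within an ignition radius after a heat-transfer delay. Discrete burning shows up as at most one particle alight at a time; pack the particles closer, or let heat jump farther and sooner, and many burn at once.

```python
import heapq

def burn_intervals(n, spacing, ignition_radius, transfer_time):
    """Ignition time for each of n particles in a row; particle 0 is lit at t=0.
    A particle ignited at time t ignites every unburned particle within
    ignition_radius at time t + transfer_time (a deliberate simplification)."""
    positions = [i * spacing for i in range(n)]
    start = [None] * n
    start[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        t, i = heapq.heappop(heap)
        for j in range(n):
            if start[j] is None and abs(positions[j] - positions[i]) <= ignition_radius:
                start[j] = t + transfer_time
                heapq.heappush(heap, (start[j], j))
    return start

def peak_overlap(start, burn_time):
    """Maximum number of particles burning at the same instant."""
    events = []
    for s in start:
        events.append((s, 1))                # particle starts burning
        events.append((s + burn_time, -1))   # particle fully consumed
    events.sort()  # at equal times, ends (-1) sort before starts (+1)
    current = best = 0
    for _, delta in events:
        current += delta
        best = max(best, current)
    return best

# Sparse, slow heat transfer: each particle is consumed before the next lights.
discrete = burn_intervals(n=10, spacing=1.0, ignition_radius=1.0, transfer_time=1.0)
# Dense packing / fast long-range ignition: the flame jumps ahead chaotically.
chaotic = burn_intervals(n=10, spacing=1.0, ignition_radius=3.0, transfer_time=0.2)

print("discrete peak:", peak_overlap(discrete, burn_time=1.0))  # one at a time
print("chaotic peak:", peak_overlap(chaotic, burn_time=1.0))    # many at once
```

The point of the toy is the qualitative switch: the same geometry flips from orderly, one-at-a-time burning to a simultaneous flare-up as soon as ignition outruns consumption, which is why particle density and oxygen concentration have to be tuned so carefully on Earth.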
At present, the rate at which powdered iron particles burn and how they release heat under different conditions are poorly understood. This hinders the development of technologies to efficiently use iron as a large-scale fuel.
Burning metal in microgravity
In April, the European Space Agency (ESA) launched a suborbital “sounding” rocket, carrying three experimental setups. As the rocket traced its parabolic trajectory through the atmosphere, the experiments got a few minutes in free fall, simulating microgravity.
One of the experiments on this mission studied how iron powder burns in the absence of gravity.
In microgravity, particles float in a more uniformly distributed cloud. This allows researchers to model the flow of iron particles and how a flame propagates through a cloud of iron particles in different oxygen concentrations.
Insights into how flames propagate through iron powder under different conditions could help design much more efficient iron-burning furnaces.
Clean and carbon-free energy on Earth
Various businesses are looking at ways to incorporate iron fuels into their processes. In particular, it could serve as a cleaner way to supply industrial heat by burning iron to heat water.
For example, Dutch brewery Swinkels Family Brewers, in collaboration with the Eindhoven University of Technology, switched to iron fuel as the heat source for a brewing process that produces 15 million glasses of beer annually. Dutch startup RIFT is running proof-of-concept iron fuel power plants in Helmond and Arnhem.
As researchers continue to improve the efficiency of burning iron, its applicability will extend to other use cases as well. But is the infrastructure in place for this transition?
Often, the transition to new energy sources is slowed by the need to create new infrastructure to utilize them. Fortunately, this isn’t the case with switching from fossil fuels to iron. Since the ideal temperature to burn iron is similar to that for hydrocarbons, existing fossil fuel power plants could potentially be retrofitted to run on iron fuel.
This article originally appeared on Freethink, home of the brightest minds and biggest ideas of all time.
How to Use Thoughts to Control Computers with Dr. Tom Oxley
Tom Oxley is building what he calls a “natural highway into the brain” that lets people use their minds to control their phones and computers. The device, called the Stentrode, could improve the lives of hundreds of thousands of people living with spinal cord paralysis, ALS and other neurodegenerative diseases.
Leaps.org talked with Dr. Oxley for today’s podcast. A fascinating thing about the Stentrode is that it works very differently from other “brain computer interfaces” you may be familiar with, like Elon Musk’s Neuralink. Some BCIs are implanted by surgeons directly into a person’s brain, but the Stentrode is much less invasive. Dr. Oxley’s company, Synchron, opts for a “natural” approach, using stents in blood vessels to access the brain. This offers some major advantages to the handful of people who’ve already started to use the Stentrode.
The audio improves about 10 minutes into the episode. (There was a minor headset issue early on, but everything is audible throughout.) Dr. Oxley’s work creates game-changing opportunities for patients desperate for new options, and his take on where we're headed with BCIs is a must-listen for anyone who cares about the future of health and technology.
Listen on Apple | Listen on Spotify | Listen on Stitcher | Listen on Amazon | Listen on Google
In our conversation, Dr. Oxley talks about “Bluetooth brain”; the critical role of AI in the present and future of BCIs; how BCIs compare to voice command technology; regulatory frameworks for revolutionary technologies; specific people with paralysis who’ve been able to regain some independence thanks to the Stentrode; what it means to be a neurointerventionist; how to scale BCIs for more people to use them; the risks of BCIs malfunctioning; organic implants; and how BCIs help us understand the brain, among other topics.
Dr. Oxley received his PhD in neuroengineering from the University of Melbourne in Australia. He is the founding CEO of Synchron, an associate professor and head of the vascular bionics laboratory at the University of Melbourne, and a clinical instructor in the Department of Neurosurgery at Mount Sinai Hospital. Dr. Oxley has performed more than 1,600 endovascular neurosurgical procedures on patients, including people with aneurysms and strokes, and has authored over 100 peer-reviewed articles.
Links:
Synchron website - https://synchron.com/
Assessment of Safety of a Fully Implanted Endovascular Brain-Computer Interface for Severe Paralysis in 4 Patients (paper co-authored by Tom Oxley) - https://jamanetwork.com/journals/jamaneurology/art...
More research related to Synchron's work - https://synchron.com/research
Tom Oxley on LinkedIn - https://www.linkedin.com/in/tomoxl
Tom Oxley on Twitter - https://twitter.com/tomoxl?lang=en
Tom Oxley TED - https://www.ted.com/talks/tom_oxley_a_brain_implant_that_turns_your_thoughts_into_text?language=en
Tom Oxley website - https://tomoxl.com/
Novel brain implant helps paralyzed woman speak using digital avatar - https://engineering.berkeley.edu/news/2023/08/novel-brain-implant-helps-paralyzed-woman-speak-using-a-digital-avatar/
Edward Chang lab - https://changlab.ucsf.edu/
BCIs convert brain activity into text at 62 words per minute - https://med.stanford.edu/neurosurgery/news/2023/he...
Leaps.org: The Mind-Blowing Promise of Neural Implants - https://leaps.org/the-mind-blowing-promise-of-neural-implants/
Tom Oxley