Is Carbon Dioxide the New Black? Yes, If These Fabric-Designing Scientists Have Their Way
Each year the world releases around 33 billion tons of carbon dioxide into the atmosphere. What if we could use this waste carbon dioxide to make shirts, dresses and hats? It sounds unbelievable, but two innovators are trying to tackle climate change in exactly this way.
Chemist Tawfiq Nasr Allah set up Fairbrics with material scientist Benoît Illy in 2019. They're using waste carbon dioxide from industrial fumes as a raw material to create polyester, identical to the everyday polyester we use now. They want to take a new and very different approach to make the fashion industry more sustainable.
The Dark Side of Fast Fashion
The fashion industry is responsible for around 4% of global emissions. In a 2015 report, the MIT Materials Systems Laboratory predicted that annual emissions from polyester fabric production will grow from around 880 billion kg of CO2 in 2015 to 1.5 trillion kg by 2030.
Professor Greg Peters, an expert in environmental science and sustainability, highlights the wide-ranging difficulties caused by the production of polyester. "Because it is made from petrochemical crude oil there is no real limit on how much polyester can be produced...You have to consider the ecological damage (oil spills, fracking etc.) caused by the oil and gas industry."
Many big-name brands have pledged to become carbon neutral by 2050. But nothing has really changed in the way polyester is produced.
Some companies are recycling plastic bottles into polyester. The plastic is melted into ultra-fine strands and then spun to create polyester. However, only a limited number of bottles are available, and because the plastic degrades each time it is processed, virgin material must be added. Ultimately, recycling accounts for only a small percentage of the total amount of polyester produced.
Nasr Allah and Illy hope they can offer the solution the fashion industry is looking for. They are not just reducing the carbon emissions that are conventionally produced by making polyester. Their process actually goes much further. It's carbon negative and works by using up emissions from other industries.
Experts in the field see a lot of promise. Dr Phil de Luna is an expert in carbon valorization -- the process of converting carbon dioxide into high-value chemicals. He leads a $57-million research program developing the technology to decarbonize Canada.
"I think the approach is great," he says. "Being able to take CO2 and then convert it into polymers or polyester is an excellent way to think about utilizing waste emissions and replacing fossil fuel-based materials. That is overall a net negative as compared to making polyester from fossil fuels."
From Harmful Waste to Useful Raw Material
It all started with Nasr Allah's academic research, primarily at the French Alternative Energies and Atomic Energy Commission (CEA). He spent almost 5 years investigating CO2 valorization. In essence, this involves breaking the bonds between the carbon and oxygen atoms in CO2 to create bonds with other elements.
Recycling carbon dioxide in this way requires extremely high temperatures and pressures. Catalysts are needed to break the strong bonds between the atoms. However, these catalysts are toxic, volatile, and quickly lose their effectiveness over time. So, directly converting carbon dioxide into the raw material for making polyester fibers is very difficult.
Nasr Allah developed a process involving multiple simpler stages. His innovative approach involves converting carbon dioxide to intermediate chemicals. These chemicals can then be transformed into the raw material which is used in the production of polyester. After many experiments, Nasr Allah developed new processes and new catalysts that worked more effectively.
"We use a catalyst to transform CO2 into the chemicals that are used for polyester manufacturing," Illy says. "In a sense we imitate what nature does so well: plants capture CO2 and turn it into natural fibers using sunlight, we capture CO2 and turn it into synthetic fibers using electricity."
The Challenges Ahead
Nasr Allah met material scientist Illy through Entrepreneur First, a programme which pairs individuals looking to form technical start-ups. Together they set up Fairbrics and worked on converting Nasr Allah's lab findings into commercial applications and industrial success.
"The main challenge we faced was to scale up the process," Illy reveals. "[It had to be] consistent and safe to be carried out by a trained technician, not a specialist PhD as was the case in the beginning."
They recruited a team of scientists to help them develop a more effective and robust manufacturing process. Together, the team gained a more detailed theoretical understanding about what was happening at each stage of the chemical reactions. Eventually, they were able to fine tune the process and produce consistent batches of polyester.
They're making significant progress. They've produced their first samples and signed their first commercial contract to make polyester, which will then be both fabricated into clothes and sold by partner companies.
Currently, one of the largest challenges is financial. "We need to raise a fair amount to buy the equipment we need to produce at a large scale," Illy explains.
How to Power the Process?
At the moment, their main scientific focus is getting the process working reliably so they can begin commercialization. In order to remain sustainable and economically viable once they start producing polyester on a large scale, they need to consider the amount of energy they use for carbon valorization and the emissions they produce.
The more they optimize the way their catalyst works, the easier it will be to transform the CO2. The whole process can then become more cost effective and energy efficient.
De Luna explains: "My concern is...whether their process will be economical at scale. The problem is the energy cost to take carbon dioxide and transform it into these other products and that's where the science and innovation has to happen. [Whether they can scale up economically] depends on the performance of their catalyst."
They don't just need to think about the amount of energy they use to produce polyester; they also have to consider where this energy comes from.
"They need access to cheap renewable energy," De Luna says, "...so they're not using or emitting CO2 to do the conversion." If the energy they use to transform CO2 into polyester actually ends up producing more CO2, this will end up cancelling out their positive environmental impact.
Based in France, they're well located to address this issue. France has a clean electricity system, with only about 10% of its electric power coming from fossil fuels, thanks to its reliance on nuclear energy and renewables.
Where Do They Get the Carbon Dioxide?
As they scale up, they also need to be able to access a source of CO2. They intend to obtain this from the steel industry, the cement industry, and hydrogen production.
The technology to purify and capture waste carbon dioxide from these industries is available on a large scale. However, there are only around 20 commercial operations in the world. The high cost of carbon capture means that development continues to be slow. There are a growing number of startups capturing carbon dioxide straight from the air, but this is even more costly.
One major problem is that storing captured carbon dioxide is expensive. "There are somewhat limited options for permanently storing captured CO2, so innovations like this are important," says T. Reed Miller, a researcher at the Yale University Center for Industrial Ecology.
Illy says: "The challenge is now to decrease the cost [of carbon capture]. By using CO2 as a raw material, we can try to increase the number of industries that capture CO2. Our goal is to turn CO2 from a waste into a valuable product."
Beyond Fashion
For Nasr Allah and Illy, fashion is just the beginning. There are many markets they can potentially break into. Next, they hope to use the polyester they've created in the packaging industry. Today, a lot of polyester is used to make bottles and jars. Illy believes that eventually they can produce many different chemicals from CO2. These chemicals could then be used to make paints, adhesives, and even plastics.
The Fairbrics scientists are providing a vital alternative to fossil fuels and showcasing the real potential of carbon dioxide to become a worthy resource instead of a harmful polluter.
Illy believes they can make a real difference through innovation: "We can have a significant impact in reducing climate change."
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques and algorithms that draw on artificial intelligence, a branch of computer science in which machines learn and execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for use in patient care is essential to AI's responsible and transparent implementation as the industry grapples with voluminous quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI--a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body--has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A National Cancer Institute five-year grant is funding the project, launched in 2017, to develop a combined MRI and computerized tool to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet, such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on all physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. It is likely that removing individual data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, said Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."
How Emerging Technologies Can Help Us Fight the New Coronavirus
In nature, few species remain dominant for long. Any sizable population of similar individuals offers immense resources to whichever parasite can evade its defenses, spreading rapidly from one member to the next.
Humans are one such dominant species. That wasn't always the case: our hunter-gatherer ancestors lived in groups too small and poorly connected to spread pathogens like wildfire. Our collective vulnerability to pandemics began with the dawn of cities and trade networks thousands of years ago. Roman cities were always demographic sinks, but never more so than when a pandemic agent swept through. The plague of Cyprian, the Antonine plague, the plague of Justinian – each is thought to have killed over ten million people, an appallingly high fraction of the total population of the empire.
With the advent of sanitation, hygiene, and quarantines, we developed our first non-immunological defenses to curtail the spread of plagues. With antibiotics, we began to turn the weapons of microbes against our microbial foes. Most potent of all, we use vaccines to train our immune systems to fight pathogens before we are even exposed. Edward Jenner's original vaccine alone is estimated to have saved half a billion lives.
It's been over a century since we suffered from a swift and deadly pandemic. Even the last deadly influenza of 1918 killed only a few percent of humanity – nothing so bad as any of the Roman plagues, let alone the Black Death of medieval times.
How much of our recent winning streak has been due to luck?
Much rides on that question, because the same factors that first made our ancestors vulnerable are now ubiquitous. Our cities are far larger than those of ancient times. They're inhabited by an ever-growing fraction of humanity, and are increasingly closely connected: we now routinely travel around the world in the course of a day. Despite urbanization, global population growth has increased contact with wild animals, creating more opportunities for zoonotic pathogens to jump species. Which will prove greater: our defenses or our vulnerabilities?
The tragic emergence of coronavirus 2019-nCoV in Wuhan may provide a test case. How devastating this virus will become is highly uncertain at the time of writing, but its rapid spread to many countries is deeply worrisome. That it seems to kill only the already infirm and spare the healthy is small comfort, and may counterintuitively assist its spread: it's easy to implement a quarantine when everyone infected becomes extremely ill, but if, as has been reported, carriers may not exhibit symptoms, it becomes exceedingly difficult to limit transmission. The virus, a distant relative of the more lethal SARS virus that killed 800 people in 2002 to 2003, has evolved to be transmitted between humans and spread to 18 countries in just six weeks.
Humanity's response has been faster than ever, if not fast enough. To its immense credit, China swiftly shared information, organized and built new treatment centers, closed schools, and established quarantines. The Coalition for Epidemic Preparedness Innovations, which was founded in 2017, quickly funded three different companies to develop three different varieties of vaccine: a standard protein vaccine, a DNA vaccine, and an RNA vaccine, with more planned. One of the agreements was signed after just four days of discussion, far faster than has ever been done before.
The new vaccine candidates will likely be ready for clinical trials by early summer, but even if they succeed, it will be months more before a vaccine is widely available. The delay may well be shorter than ever before thanks to advances in manufacturing and logistics, but a delay it will be.
The 1918 influenza virus killed more than half of its victims in the United Kingdom over just three months.
If we faced a truly nasty virus, something that spreads like pandemic influenza – let alone measles – yet with the higher fatality rate of, say, H7N9 avian influenza, the situation would be grim. We are profoundly unprepared, on many different levels.
So what would it take to provide us with a robust defense against pandemics?
Minimize the attack surface: 2019-nCoV jumped from an animal, most probably a bat, to humans. China has now banned the wildlife trade in response to the epidemic. Keeping it banned would be prudent, but won't be possible in all nations. Still, there are other methods of protection. Influenza viruses commonly jump from birds to pigs to humans; the new coronavirus may have similarly passed through a livestock animal. Thanks to CRISPR, we can now edit the genomes of most livestock. If we made them immune to known viruses, and introduced those engineered traits to domesticated animals everywhere, we would create a firewall in those intermediate hosts. We might even consider heritably immunizing the wild organisms most likely to serve as reservoirs of disease.
Rapid diagnostics: We need a reliable method of detection costing just pennies to be available worldwide inside of a week of discovering a new virus. This may eventually be possible thanks to a technology called SHERLOCK, which is based on a CRISPR system more commonly used for precision genome editing. Instead of using CRISPR to find and edit a particular genome sequence in a cell, SHERLOCK programs it to search for a desired target and initiate an easily detected chain reaction upon discovery. The technology is capable of fantastic sensitivity: with an attomolar (10⁻¹⁸ molar) detection limit, it senses single molecules of a unique DNA or RNA fingerprint, and the components can be freeze-dried onto paper strips.
Better preparations: China acted swiftly to curtail the spread of the Wuhan virus with traditional public health measures, but not everything went as smoothly as it might have. Most cities and nations have never conducted a pandemic preparedness drill. It's best to give people a chance to practice keeping a city barely functional while minimizing potential exposure events before they face the real thing.
Faster vaccines: Three months to clinical trials is too long. We need a robust vaccine discovery and production system that can generate six candidates within a week of the pathogen's identification, manufacture a million doses the week after, and scale up to a hundred million inside of a month. That may be possible for novel DNA and RNA-based vaccines, and indeed anything that can be delivered using a standardized gene therapy vector. For example, instead of teaching each person's immune system to evolve protective antibodies by showing it pieces of the virus, we can program cells to directly produce known antibodies via gene therapy. Those antibodies could be discovered by sifting existing diverse libraries of hundreds of millions of candidates, computationally designed from scratch, evolved using synthetic laboratory ecosystems, or even harvested from the first patients to report symptoms. Such a vaccine might be discovered and produced fast enough at scale to halt almost any natural pandemic.
Robust production and delivery: Our defenses must not be vulnerable to the social and economic disruptions caused by a pandemic. Unfortunately, our economy selects for speed and efficiency at the expense of robustness. Just-in-time supply chains that wing their way around the world require every node to be intact. If workers aren't on the job producing a critical component, the whole chain breaks until a substitute can be found. A truly nasty pandemic would disrupt economies all over the world, so we will need to pay extra to preserve the capacity for independent vertically integrated production chains in multiple nations. Similarly, vaccines are only useful if people receive them, so delivery systems should be as robustly automated as possible.
None of these defenses will be cheap, but they'll be worth every penny. Our nations collectively spend trillions on defense against one another, but only billions to protect humanity from pandemic viruses known to have killed more people than any human weapon. That's foolish – especially since natural animal diseases that jump the species barrier aren't the only pandemic threats.
The complete genome of every historical pandemic virus that has ever been sequenced is freely available to anyone with an internet connection. True, these are all agents we've faced before, so we have a pre-existing armory of pharmaceuticals, vaccines, and experience. There's no guarantee that they would become pandemics again; for example, a large fraction of humanity is almost certainly immune to the 1918 influenza virus due to exposure to the related 2009 pandemic, making it highly unlikely that the virus would take off if released.
Still, making the blueprints publicly available means that a large and growing number of people with the relevant technical skills can single-handedly make deadly biological agents that might be able to spread autonomously -- at least if they can get their hands on the relevant DNA. At present, such people most certainly can, so long as they bother to check the publicly available list of which gene synthesis companies do the right thing and screen orders -- and by implication, which ones don't.
One would hope that at least some of the companies that don't advertise that they screen are "honeypots" paid by intelligence agencies to catch would-be bioterrorists, but even if most of them are, it's still foolish to let individuals access that kind of destructive power. We will eventually make our society immune to naturally occurring pandemics, but that day has not yet come, and future pandemic viruses may not be natural. Hence, we should build a secure and adaptive system capable of screening all DNA synthesis for known and potential future pandemic agents... without disclosing what we think is a credible bioweapon.
Whether or not it becomes a global pandemic, the emergence of the Wuhan coronavirus has underscored the need for coordinated action to prevent the spread of pandemic disease. Let's ensure that our reactive response at least prepares us for future threats, because one day reacting may not be enough.