The Only Hydroxychloroquine Story You Need to Read
Dr. Adalja is focused on emerging infectious disease, pandemic preparedness, and biosecurity. He has served on US government panels tasked with developing guidelines for the treatment of plague, botulism, and anthrax in mass casualty settings and the system of care for infectious disease emergencies, and as an external advisor to the New York City Health and Hospital Emergency Management Highly Infectious Disease training program, as well as on a FEMA working group on nuclear disaster recovery. Dr. Adalja is an Associate Editor of the journal Health Security. He was a coeditor of the volume Global Catastrophic Biological Risks, a contributing author for the Handbook of Bioterrorism and Disaster Medicine, the Emergency Medicine CorePendium, Clinical Microbiology Made Ridiculously Simple, UpToDate's section on biological terrorism, and a NATO volume on bioterrorism. He has also published in such journals as the New England Journal of Medicine, the Journal of Infectious Diseases, Clinical Infectious Diseases, Emerging Infectious Diseases, and the Annals of Emergency Medicine. He is a board-certified physician in internal medicine, emergency medicine, infectious diseases, and critical care medicine. Follow him on Twitter: @AmeshAA
In the early days of a pandemic caused by a virus with no existing treatments, many different compounds are often considered and tried in an attempt to help patients.
Many of these treatments fall by the wayside as evidence about their actual efficacy accumulates, while others become standard of care once their benefit is proven in rigorously designed trials.
However, about seven months into the pandemic, we're still seeing political resurrection of a treatment that has been systematically studied and demonstrated in well-designed randomized controlled trials to not have benefit.
The hydroxychloroquine (and by extension chloroquine) story is a complicated one that was difficult to follow even before it became infused with politics. It is a simple fact that these drugs, long approved by the Food and Drug Administration (FDA), work in Petri dishes against various viruses including coronaviruses. This set of facts provided biological plausibility to support formally studying their use in the clinical treatment and prevention of COVID-19. As evidence from these studies accumulates, it is a cognitive requirement to integrate that knowledge and not to evade it. This also means evaluating the rigor of the studies.
In recent days we have seen groups yet again promoting the use of hydroxychloroquine in what is, to me, a baffling disregard of the multiple recent studies that have shown no benefit. Indeed, though the drug remains FDA-approved for other indications like autoimmune conditions and malaria prevention, its emergency use authorization for COVID-19 has been rescinded (which means the government cannot stockpile it). Still, many patients continue to ask for the drug, compelled by political commentary, viral videos, and anecdotal data. Yet most doctors (like myself) are refusing to write the prescriptions outside of a clinical trial – a position endorsed by professional medical organizations such as the American College of Physicians and the Infectious Diseases Society of America. Why this disconnect?
It all relates back to a profound question: How do we know what we know? In science, we use the scientific method – the process of observing reality, coming up with a hypothesis about what might be true, and testing that hypothesis as thoroughly as possible until we discover the objective truth.
The confusion we're seeing now stems from an inability to distinguish between anecdotes reported by physicians (observational data) and an actual evidence base. This is understandable among the general public but when done by a healthcare professional, it reveals a disdain for reason, logic, and the scientific method.
The Difference Between Observational Data and Randomized Controlled Trials
Informal observation has real power, but within the scientific method it serves primarily as a basis for generating hypotheses that we can then test. How do we conduct medical tests? The gold standard is the double-blind, randomized, placebo-controlled trial. This means that neither the researchers nor the volunteers know who is getting a drug and who is getting a sugar pill. The two groups of the trial, called arms, can then be compared to determine whether the people who got the drug fared better. This study design prevents biases and the placebo effect from confounding the data and undermining the veracity of the results.
For example, a seemingly beneficial effect might be seen in an observational study with no blinding and no control group. In such a case, all patients are openly given the drug and their doctors observe how they do. A prime example is the 36-patient single-arm study from France that generated a tremendous amount of interest after President Trump tweeted about it. But this kind of study by its nature cannot answer the critical question: Was the positive effect because of hydroxychloroquine or just the natural course of the illness? In other words, would someone have recovered in a similar fashion regardless of the drug? What is the role of the placebo effect?
These are reasons why it is crucial to give a placebo to a control group that is as similar in every respect as possible to those receiving the intervention. Then we attempt to find out by comparing the two groups: What is the side effect profile of the drug? Are the groups large enough to detect a relatively rare safety concern? How long were the patients followed for? Was something else responsible for making the patients get better, such as the use of steroids (as likely was the case in the Henry Ford study)?
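To see why the control arm matters, consider a toy simulation (purely illustrative; the recovery rate and sample sizes below are assumptions, not data from any actual trial). A drug with zero effect can still look effective in a single-arm study of a disease most people recover from on their own; only a comparison arm exposes the null effect:

```python
import random

random.seed(0)
NATURAL_RECOVERY = 0.85  # assumption: most patients recover on their own

def outcome():
    # The hypothetical drug has no effect at all;
    # recovery reflects only the natural course of the illness.
    return random.random() < NATURAL_RECOVERY

# Single-arm "observational study": everyone gets the drug, no control group.
treated = [outcome() for _ in range(36)]
print(f"Single arm: {sum(treated)}/{len(treated)} recovered")  # looks impressive

# Randomized controlled comparison: drug arm vs. placebo arm.
drug_arm = [outcome() for _ in range(500)]
placebo_arm = [outcome() for _ in range(500)]
print(f"Drug: {sum(drug_arm)/500:.0%}  Placebo: {sum(placebo_arm)/500:.0%}")
# Both arms recover at about the same rate, revealing that the drug did nothing.
```

Whatever natural recovery rate you plug in, the single arm "shows" roughly that rate of recovery, drug or no drug; only the side-by-side placebo comparison distinguishes the drug's effect from the disease's natural course.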
All of these considerations amount to just a fraction of the questions that can be answered more definitively in a well-designed large randomized controlled trial than in observational studies. Indeed, an observational study from New York failed to show any benefit in hospitalized patients, showing how unclear and disparate the results can be with these types of studies. A New York retrospective study (which examined patient outcomes after they were already treated) had similar results and included the use of azithromycin.
When evaluating a study, it is also important to note whether conflicts of interest exist, as well as the quality of the peer review and the data itself. In the case of the French study, for example, the paper was published in a journal in which one of the authors was editor-in-chief, and it was accepted for publication after 24 hours. Patients who fared poorly on hydroxychloroquine were also left out of the study altogether, skewing the results.
What Randomized Controlled Trials Have Shown
Looking at the two major hydroxychloroquine trials, it is apparent that, when studied using the best tools of clinical trials, no benefit is likely to occur. The most important of these studies to announce results was part of the Recovery trial, which was designed to test multiple interventions in the treatment of COVID-19. This trial, which has yet to be formally published, was a randomized controlled trial in which over 1,500 hospitalized patients received hydroxychloroquine and over 3,000 did not. Clinical testing requires large numbers of patients to have the power to demonstrate statistical significance – the threshold at which any apparent benefit is greater than would be expected by random chance alone.
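The need for large numbers can be made concrete with a standard back-of-the-envelope power calculation. The sketch below uses the textbook normal-approximation formula for comparing two proportions; the 25% and 20% mortality figures are illustrative assumptions, not the Recovery trial's actual design parameters:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Patients needed per arm to detect a difference between two
    proportions (standard two-sided normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative only: detecting a drop in mortality from 25% to 20%
print(n_per_arm(0.25, 0.20))
```

Even a five-percentage-point drop in mortality, detected with 80% power, requires on the order of a thousand patients per arm – which is why a 36-patient series cannot settle the question.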
In this study, hydroxychloroquine provided no mortality benefit or even a benefit in hospital length of stay. In fact, the opposite occurred. Hydroxychloroquine patients were more likely to stay in the hospital longer and were more likely to require mechanical ventilation. Additionally, smaller randomized trials conducted in China have not shown benefit either.
Another major study involved the use of hydroxychloroquine to prevent illness in people who were exposed to COVID-19. These results, published in The New England Journal of Medicine, included over 800 patients who were studied in a randomized double-blind controlled trial and also failed to show any benefit.
But what about adding the antibiotic azithromycin in conjunction with hydroxychloroquine? A three-arm randomized controlled study involving over 500 patients hospitalized with mild to moderate COVID-19 was conducted. Its results, also published in The New England Journal of Medicine, failed to show any benefit – with or without azithromycin – and demonstrated evidence of harm. Those who received these treatments had elevations of their liver function tests and heart rhythm abnormalities. These findings hold despite the retraction of an observational study showing similar results.
Additionally, when used in combination with remdesivir – an experimental antiviral – hydroxychloroquine has been shown to be associated with worse outcomes and more side effects.
But what about in mildly ill patients not requiring hospitalization? There was no benefit found in a randomized double-blind placebo-controlled trial of 400 patients, the majority of whom were given the drug within one day of symptoms.
Some randomized controlled studies have yet to report their findings on hydroxychloroquine in non-hospitalized patients, with the use of zinc (which has some evidence in the treatment of the common cold, another ailment that can be caused by coronaviruses). And studies have yet to come out regarding whether hydroxychloroquine can prevent people from getting sick before they are even exposed. But the preponderance of the evidence from studies designed specifically to find benefit for treating COVID-19 does not support its use outside of a research setting.
Today – even with some studies (including those with zinc) still ongoing – if a patient asked me to prescribe them hydroxychloroquine for any severity or stage of illness, with or without zinc, with or without azithromycin, I would refrain. I would explain that, based on the evidence from clinical trials that has been amassed, there is no reason to believe that it will alter the course of illness for the better.
What has been occurring is a continual shifting of goalposts with each negative hydroxychloroquine study. Those in favor of the drug protest that a trial did not include azithromycin or zinc or wasn't given at the right time to the right patients. While there may be biological plausibility to treating illness early or combining treatments with zinc, it can only be definitively shown in a randomized, controlled prospective study.
The bottom line: A study that only looks at past outcomes in one group of patients – even when well conducted – is at most hypothesis generating and cannot be used as the sole basis for a new treatment paradigm.
Some may argue that there is no time to wait for definitive studies, but no treatment is benign. The risk/benefit ratio is not the same for every possible use of the drug. For example, hydroxychloroquine has a long record of use in rheumatoid arthritis and systemic lupus (whose patients are facing shortages because of COVID-19 related demand). But the risk of side effects for many of these patients is worth taking because of the substantial benefit the drug provides in treating those conditions.
In COVID-19, however, the disease apparently causes cardiac abnormalities in a great many cases, even mild ones – a situation that should prompt caution when using any drugs with known effects on the cardiac system, drugs like hydroxychloroquine and azithromycin.
My Own Experience
It is not the case that every physician was biased against this drug from the start. Indeed, most of us wanted it to be shown to be beneficial, as it was a generic drug that was widely available and very familiar. In fact, early in the pandemic I prescribed it to hospitalized patients on two occasions per a hospital protocol. However, it is impossible for me as a sole clinician to know whether it worked, was neutral, or was harmful. In recent days, however, I have found the hydroxychloroquine talk to have polluted the atmosphere. One recent patient was initially refusing remdesivir, a drug proven in large randomized trials to have effectiveness, because he had confused it with hydroxychloroquine.
Moving On to Other COVID Treatments: What a Treatment Should Do
The story of hydroxychloroquine brings into sharper focus what we are actually looking for in a COVID-19 treatment. In short, we are looking for a medication that can decrease symptoms, decrease complications, hasten recovery, decrease hospitalizations, decrease contagiousness, decrease deaths, and prevent infection. While a single antiviral is unlikely to accomplish all of these, fulfilling even just one is important.
For example, remdesivir hastens recovery and dexamethasone decreases mortality. Definitive results of the use of convalescent plasma and immunomodulating drugs such as siltuximab, baricitinib, and anakinra (for use in the cytokine storms characteristic of severe disease) are still pending, as are the trials with monoclonal antibodies.
While it was crucial that the medical and scientific community definitively answer the questions surrounding the use of chloroquine and hydroxychloroquine in the treatment of COVID-19, it is time to face the facts and accept that their use for the treatment of this disease is not likely to be beneficial. Failing to recognize the reality of the situation runs the risk of crowding out other more promising treatments and creating animosity where none should exist.
Awash in a fluid finely calibrated to keep it alive, a human eye rests inside a transparent cubic device. This ECaBox, or Eyes in a Care Box, is a one-of-a-kind system built by scientists at Barcelona’s Centre for Genomic Regulation (CRG). Their goal is to preserve human eyes for transplantation and related research.
In recent years, scientists have learned to transplant delicate organs such as the liver, lungs or pancreas, but eyes are another story. Even when preserved at the standard transplant temperature of 4 degrees Celsius, they last for 48 hours at most. That's one reason why transplanting the whole eye isn't possible—only the cornea, the dome-shaped outer layer of the eye, can withstand the procedure. The retina, the layer at the back of the eyeball that turns light into electrical signals, which the brain converts into images, is extremely difficult to transplant because it's packed with nerve tissue and blood vessels.
These challenges also make it tough to research transplantation. “This greatly limits their use for experiments, particularly when it comes to the effectiveness of new drugs and treatments,” said Maria Pia Cosma, a biologist at Barcelona’s Centre for Genomic Regulation (CRG), whose team is working on the ECaBox.
Eye transplants are desperately needed, but they're nowhere in sight. About 12.7 million people worldwide need a corneal transplant, but only one in 70 of them will get one. The shortfalls are global. Eye banks in the United Kingdom are around 20 percent below the level needed to supply hospitals, while Indian eye banks, which need at least 250,000 corneas per year, collect only around 45,000 to 50,000 donor corneas (and of those, 60 to 70 percent are successfully transplanted).
As for retinas, it's impossible currently to put one into the eye of another person. Artificial devices can be implanted to restore the sight of patients suffering from severe retinal diseases, but the number of people around the world with such “bionic eyes” is less than 600, while in America alone 11 million people have some type of retinal disease leading to severe vision loss. Add to this an increasingly aging population, commonly facing various vision impairments, and you have a recipe for heavy burdens on individuals, the economy and society. In the U.S. alone, the total annual economic impact of vision problems was $51.4 billion in 2017.
Even if you grow tissues in a petri dish into organoids that mimic the function of the human eye, you will not get the physiological complexity of the structure and metabolism of the real thing, according to Cosma. She is a member of a scientific consortium that includes researchers from major institutions in Spain, the U.K., Portugal, Italy and Israel. The consortium has received about $3.8 million from the European Union to pursue innovative eye research. Her team's goal is to give hope to at least 2.2 billion people across the world afflicted with a vision impairment and 33 million who go through life with avoidable blindness.
Their method? Resuscitating cadaveric eyes for at least a month.
“We proposed to resuscitate eyes, that is to restore the global physiology and function of human explanted tissues,” Cosma said, referring to living tissues extracted from the eye and placed in a medium for culture. Their ECaBox is an ex vivo biological system, in which eyes taken from dead donors are placed in an artificial environment, designed to preserve the eye’s temperature and pH levels, deter blood clots, and remove the metabolic waste and toxins that would otherwise spell their demise.
Scientists work on resuscitating eyes in the lab of Maria Pia Cosma.
Courtesy of Maria Pia Cosma.
“One of the great challenges is the passage of the blood in the capillary branches of the eye, what we call long-term perfusion,” Cosma said. Capillaries are an intricate network of very thin blood vessels that transport blood, nutrients and oxygen to cells in the body’s organs and systems. To maintain the garland-shaped structure of this network, sufficient amounts of oxygen and nutrients must be provided through the eye’s circulation and microcirculation. “Our ambition is to combine perfusion of the vessels with artificial blood," along with using a synthetic form of vitreous, the gel-like fluid that lets in light and supports the eye's round shape, Cosma said.
The scientists use this novel setup with the eye submersed in its medium to keep the organ viable, so they can test retinal function. “If we succeed, we will ensure full functionality of a human organ ex vivo. It will be the first intact human model of the eye capable of exploring and analyzing regenerative processes ex vivo,” Cosma added.
A rapidly developing field of regenerative medicine aims to stimulate the body's natural healing processes and restore or replace damaged tissues and organs. But for people with retinal diseases, regenerative medicine progress has been painfully slow. “Experiments on rodents show progress, but the risks for humans are unacceptable,” Cosma said.
The ECaBox could change that calculus. “We will test emerging treatments while reducing animal research, and greatly accelerate the discovery and preclinical research phase of new possible treatments for vision loss at significantly reduced costs,” Cosma explained. Much less time and money would be wasted during the drug discovery process. Their work may even make it possible to transplant the entire eyeball for those who need it.
“It is a very exciting project,” said Sanjay Sharma, a professor of ophthalmology and epidemiology at Queen's University, in Kingston, Canada. “The ability to explore and monitor regenerative interventions will increasingly be of importance as we develop therapies that can regenerate ocular tissues, including the retina.”
But is the world ready for eye transplants? “People are a bit weird or very emotional about donating their eyes as compared to other organs,” Cosma said. The shortage of eye donors has many causes. Concerns include disfigurement and healthcare professionals’ fear that the conversation about eye donation will upset the departed person’s relatives because of cultural or religious considerations. As just one example, Sharma noted the paucity of eye donations in his home country, Canada.
Yet, experts like Sharma stress the importance of these donations for both the recipients and their family members. “It allows them some psychological benefit in a very difficult time,” he said. So why are global eye banks suffering? Is it because the eyes are the windows to the soul?
Seemingly, no sacred religious text or holy book prohibits the practice of eye donation. In fact, most major religions of the world permit and support organ transplantation and donation, and by extension eye donation, because they unequivocally see it as an “act of neighborly love and charity.” In Hinduism, the concept of eye donation aligns with the Hindu principle of daan or selfless giving, where individuals donate their organs or body after death to benefit others and contribute to society. In Islam, eye donation is a form of sadaqah jariyah, a perpetual charity, as it can continue to benefit others even after the donor's death.
Meanwhile, Buddhist masters teach that donating an organ gives another person the chance to live longer and practice dharma, the universal law and order, more meaningfully; they also dismiss misunderstandings of the type “if you donate an eye, you’ll be born without an eye in the next birth.” And Christian teachings emphasize the values of love, compassion, and selflessness, all compatible with organ donation, eye donation included; besides, those who will have a house in heaven will get a whole new body without imperfections and limitations.
The explanation for people’s resistance may lie in what Deepak Sarma, a professor of Indian religions and philosophy at Case Western Reserve University in Cleveland, calls “street interpretation” of religious or spiritual dogmas. Consider the mechanism of karma, which is about the causal relation between previous and current actions. “Maybe some Hindus believe there is karma in the eyes and, if the eye gets transplanted into another person, they will have to have that karmic card from now on,” Sarma said. “Even if there is peculiar karma due to an untimely death–which might be interpreted by some as bad karma–then you have the karma of the recipient, which is tremendously good karma, because they have access to these body parts, a tremendous gift,” Sarma said. The overall accumulation is that of good karma: “It’s a beautiful kind of balance,” Sarma said.
With that said, Sarma believes it is a fallacy to personify or anthropomorphize the eye, which doesn’t have a soul, and stresses that the karma attaches itself to the soul and not the body parts. But for scholars like Omar Sultan Haque—a psychiatrist and social scientist at Harvard Medical School, investigating questions across global health, anthropology, social psychology, and bioethics—the hierarchy of sacredness of body parts is entrenched in human psychology. You cannot equate the pinky toe with the face, he explained.
“The eyes are the window to the soul,” Haque said. “People have a hierarchy of body parts that are considered more sacred or essential to the self or soul, such as the eyes, face, and brain.” In his view, the techno-utopian transhumanist communities (especially those in Silicon Valley) have reduced the totality of a person to a mere material object, a “wet robot” that knows no sacredness or hierarchy of human body parts. “But for the Jews, Christians, and Muslims who believe in the physical resurrection of the body that will be made new in an afterlife, the [already existing] body is sacred since it will be the basis of a new refashioned body in an afterlife,” Haque said. “You cannot treat the body like any old material artifact, or old chair or ragged cloth, just because materialistic, secular ideologies want so,” he continued.
For Cosma and her peers, however, the very definition of what is alive or not is a bit semantic. “As soon as we die, the electrophysiological activity in the eye stops,” she said. “The goal of the project is to restore this activity as soon as possible before the highly complex tissue of the eye starts degrading.” Cosma’s group doesn’t yet know how long they will be able to keep the eyes alive and well in the ECaBox, but the consensus is that the longer the better. Hopefully, the taboos and fears around eye donation will dissipate around the same time.
As Our AI Systems Get Better, So Must We
As the power and capability of our AI systems increase by the day, the essential question we now face is what constitutes peak human. If we stay where we are while the AI systems we are unleashing continually get better, they will meet and then exceed our capabilities in an ever-growing number of domains. But while some technology visionaries like Elon Musk call for us to slow down the development of AI systems to buy time, this approach alone will simply not work in our hyper-competitive world, particularly when the potential benefits of AI are so great and our frameworks for global governance are so weak. In order to build the future we want, we must also become ever better humans.
The list of activities we once saw as uniquely human where AIs have now surpassed us is long and growing. First, AI systems could beat our best chess players, then our best Go players, then our best champions of multi-player poker. They can see patterns far better than we can, generate medical and other hypotheses most human specialists miss, predict and map out new cellular structures, and even generate beautiful, and, yes, creative, art.
A recent paper by Microsoft researchers analyzing the significant leap in capabilities in OpenAI’s latest model, GPT-4, asserted that the algorithm can “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting.” Calling this functionality “strikingly close to human-level performance,” the authors conclude it “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
The concept of AGI has been around for decades. In its common use, the term suggests a time when individual machines can do many different things at a human level, not just one thing like playing Go or analyzing radiological images. Debating when AGI might arrive, a favorite pastime of computer scientists for years, has now become outdated.
We already have AI algorithms and chatbots that can do lots of different things. Based on the generalist definition, in other words, AGI is essentially already here.
Unfettered by the evolved capacity and storage constraints of our brains, AI algorithms can access nearly all of the digitized cultural inheritance of humanity since the dawn of recorded history and have increasing access to growing pools of digitized biological data from across the spectrum of life.
With these ever-larger datasets, rapidly increasing computing and memory power, and new and better algorithms, our AI systems will keep getting better faster than most of us can today imagine. These capabilities have the potential to help us radically improve our healthcare, agriculture, and manufacturing, make our economies more productive and our development more sustainable, and do many important things better.
Soon, they will learn how to write their own code. Like human children, in other words, AI systems will grow up. But even that doesn’t mean our human goose is cooked.
Just like dolphins and dogs, these alternate forms of intelligence will be uniquely theirs, not a lesser or greater version of ours. There are lots of things AI systems can't do and will never be able to do because our AI algorithms, for better and for worse, will never be human. Our embodied human intelligence is its own thing.
Our human intelligence is uniquely ours, based on the capacities we have developed in our 3.8-billion-year journey from single-cell organisms to us. Our brains and bodies represent continuous adaptations of earlier models, which is why our skeletal systems look like those of lizards and our brains like those of most other mammals, with some extra cerebral cortex mixed in. Human intelligence isn’t just some type of disembodied function but the inextricable manifestation of our evolved physical reality. It includes our sensory analytical skills and all of our animal instincts, intuitions, drives, and perceptions. Disembodied machine intelligence is something different than what we have evolved and possess.
Because of this, some linguists including Noam Chomsky have recently argued that AI systems will never be intelligent as long as they are just manipulating symbols and mathematical tokens without any inherent understanding. Nothing could be further from the truth. Anyone interacting with even first-generation AI chatbots quickly realizes that while these systems are far from perfect or omniscient and can sometimes be stupendously oblivious, they are surprisingly smart and versatile and will get more so… forever. We have little idea even how our own minds work, so judging AI systems based on their output is relatively close to how we evaluate ourselves.
Anyone not awed by the potential of these AI systems is missing the point. AI’s newfound capacities demand that we work urgently to establish norms, standards, and regulations at all levels from local to global to manage the very real risks. Pausing our development of AI systems now doesn’t make sense, however, even if it were possible, because we have no sufficient ways of uniformly enacting such a pause, no plan for how we would use the time, and no common framework for addressing global collective challenges like this.
But if all we feel is a passive awe for these new capabilities, we will also be missing the point.
Human evolution, biology, and cultural history are not just some kind of accidental legacy, disability, or parlor trick, but our inherent superpower. Our ancestors outcompeted rivals for billions of years to make us so well suited to the world we inhabit and helped build. Our social organization at scale has made it possible for us to forge civilizations of immense complexity, engineer biology and novel intelligence, and extend our reach to the stars. Our messy, embodied, intuitive, social human intelligence is roughly mimicable by AI systems but, by definition, never fully replicable by them.
Once we recognize that both AI systems and humans have unique superpowers, the essential question becomes what each of us can do better than the other and what humans and AIs can best do in active collaboration. We still don't know. The future of our species will depend upon our ability to safely, dynamically, and continually figure that out.
As we do, we'll learn that many of our ideas and actions are made up of parts, some of which will prove essentially human and some of which can be better achieved by AI systems. Those in every walk of work and life who most successfully identify the optimal contributions of humans, of AIs, and of the two together – who build systems and workflows empowering humans to do human things, machines to do machine things, and humans and machines to work together in ways that maximize the respective strengths of each – will be the champions of the 21st century across all fields.
The dawn of the age of machine intelligence is upon us. It’s a quantum leap equivalent to the domestication of plants and animals, industrialization, electrification, and computing. Each of these revolutions forced us to rethink what it means to be human, how we live, and how we organize ourselves. The AI revolution will happen more suddenly than these earlier transformations but will follow the same general trajectory. Now is the time to aggressively prepare for what is fast heading our way, including by active public engagement, governance, and regulation.
AI systems will not replace us, but, like these earlier technology-driven revolutions, they will force us to become different humans as we co-evolve with our technology. We will never reach peak human in our ongoing evolutionary journey, but we’ve got to manage this transition wisely to build the type of future we’d like to inhabit.
Alongside our ascending AIs, we humans still have a lot of climbing to do.