“Coming Back from the Dead” Is No Longer Science Fiction
Last year, there were widespread reports of a 53-year-old Frenchman who had suffered a cardiac arrest and "died," but was then resuscitated back to life 18 hours after his heart had stopped.
This was thought to have been possible in part because his body had cooled progressively and naturally after his heart stopped, through exposure to the outside cold. The medical team who revived him were reportedly "stupefied" that they had been able to bring him back to life, particularly since he had not even suffered brain damage.
Interestingly, this man represents one of a growing number of extraordinary cases in which people who would otherwise be declared dead have now been revived. It is a testament to the incredible impact of resuscitation science -- a science that is providing opportunities to literally reverse death, and in doing so, shedding light on the age-old question of what happens when we die.
Death: Past and Present
Throughout history, the boundary between life and death was marked by the moment a person's heart stopped, breathing ceased, and brain function shut down. A person became motionless, lifeless, and was deemed irreversibly dead. This is because once the heart stops beating, blood flow stops and oxygen is cut off from all the body's organs, including the brain. Consequently, within seconds, breathing stops and brain activity comes to a halt. Since the cessation of the heart literally occurs in a "moment," the philosophical notion of a specific point in time of "irreversible" death still pervades society today. The law, for example, relies on "time of death," which corresponds to when the heart stops beating.
The advent of cardiopulmonary resuscitation (CPR) in the 1960s was revolutionary, demonstrating that the heart could potentially be restarted after it had stopped, and that what had been a clear black-and-white line was, in some people, reversible. What was once called death, the ultimate end point, was now widely called cardiac arrest, and it became a starting point.
From then on, it was only if somebody had requested not to be resuscitated or when CPR was deemed to have failed that people would be declared dead by "cardiopulmonary criteria." Biologically, cardiac arrest and death by cardiopulmonary criteria are the same process, albeit marked at different points in time depending on when a declaration of death is made.
Clearly, contrary to many people's perceptions, cardiac arrest is not a heart attack; it is the final step in death irrespective of cause, whether it be a stroke, a heart attack, a car accident, an overwhelming infection or cancer. This is how roughly 95 percent of the population are declared dead.
The only exception is the small proportion of people who may have suffered catastrophic brain injuries, but whose hearts can be artificially kept beating for a period of time on life-support machines. These people can be legally declared dead based on brain death criteria before their hearts have stopped. This is because the brain can die either from oxygen starvation after cardiac arrest or from massive trauma and internal bleeding. Either way, the brain dies hours, or possibly longer, after these injuries have taken place, not just minutes.
A Profound Realization
What has become increasingly clear is that the apparent irreversibility of death as we know it may not necessarily reflect true irretrievable cellular damage inside the body. This is consistent with a mounting understanding: it is only after a person actually dies that the cells in the body start to undergo their own process of death. Intriguingly, this process is something that can now be manipulated through medical intervention. Being cold is one of the factors that slows down the rate of cellular decay. The 53-year-old Frenchman's case and the other recent cases of resuscitation after prolonged periods of time illustrate this new understanding.
Last week's earth-shattering announcement by neuroscientist Dr. Nenad Sestan and his team out of Yale, published in the prestigious scientific journal Nature, provides further evidence that a time gap exists between actual death and cellular death in cadavers. In this seminal study, these researchers were able to restore partial function in pig brains four hours after their heads were severed from their bodies. These results follow from the pioneering work in 2001 of geneticist Fred Gage and colleagues from the Salk Institute, also published in Nature, which demonstrated the possibility of growing human brain cells in the laboratory by taking brain biopsies from cadavers in the mortuary up to 21 hours post-mortem.
The once black-and-white line between life and death is now blurrier than ever. Some people may argue this means these humans and pigs weren't truly "dead." However, that is like saying the people who were guillotined during the French Revolution were also not dead. Clearly, that is not the case. They were all dead. The problem is not death; it's our reliance on an outdated philosophical, rather than biological, notion of death.
But the distinction between medical and biological irreversibility may not matter much from a pragmatic perspective today. If medical interventions do not exist at a given time or place, then of course death cannot be reversed.
It is crucial, though, to distinguish the medical from the biological sense: when "irreversible" loss of function arises because adequate treatment was unavailable, a person could potentially be brought back in the future, once an alternative therapy becomes available, or even today, if he or she dies in a location where novel treatments can slow the rate of cell death. But when truly irreversible loss of function arises in the biological sense, no treatment will ever be able to reverse the process, whether today, tomorrow, or in a hundred years.
Probing the "Grey Zone"
Today, thanks to modern resuscitation science, death can no longer be considered an absolute moment but rather a process that can be reversed even many hours after it has taken place. How many hours? We don't really know.
One of the wider implications of our medical advances is that we can now study what happens to the human mind and consciousness after people enter the "grey zone" -- the time after the heart stops but before irreversible, irretrievable cell damage occurs -- and are then brought back to life. Millions have been successfully revived, and many have reported experiencing a unique, universal, and transformative mental state.
Were they "dead"? Yes, according to all the criteria we have ever used. But they were able to be brought back before their "dead" bodies had reached the point of permanent, irreversible cellular damage. This reflects the period of death for all of us. So rather than a "near-death experience," I prefer a new term to describe these cases -- "an actual-death experience." These survivors' unique experiences provide eyewitness testimony of what we are all likely to experience when we die.
Such an experience reportedly includes seeing a warm light, the presence of a compassionate perfect individual, deceased relatives, a review of their lives, a judgment of their actions and intentions as they pertain to their humanity, and in some cases a sensation of seeing doctors and nurses working to resuscitate them.
Are these experiences compatible with hallucinations or illusions? No -- in part because these people have described real, verifiable events, which, by definition, are not hallucinations, and in part because their experiences are not compatible with the confused and delirious memories that characterize oxygen deprivation.
For instance, it is hard to classify a structured meaningful review of one's life and one's humanity as hallucinatory or illusory. Instead, these experiences represent a new understanding of the overall human experience of death. As an intensive care unit physician for more than 10 years, I have seen numerous cases where these reports have been corroborated by my colleagues. In short, these survivors have been known to come back with reports of full consciousness, with lucid, well-structured thought processes and memory formation.
The challenge for us scientifically is understanding how this is possible at a time when all our science tells us the brain shuts down. The fact that these experiences occur is a paradox and suggests the undiscovered entity we call the "self," "consciousness," or "psyche" -- the thing that makes us who we are -- may not become annihilated at the point of so-called death.
At New York University, the State University of New York, and across 20 hospitals in the U.S. and Europe, we have brought together a new multi-disciplinary team of experts across many specialties, including neurology, cardiology, and intensive care. Together, we hope to improve cardiac arrest prevention and treatment, as well as to address the impact of new scientific discoveries on our understanding of what happens at death.
One of our first studies, Awareness during Resuscitation (AWARE), published in the medical journal Resuscitation in 2014, confirmed that some cardiac arrest patients report a perception of awareness without recall; others report detailed memories and experiences; and a few report full auditory and visual awareness and consciousness of their experience, from a time when brain function would be expected to have ceased.
While you probably have some opinion or belief about this based upon your own philosophical, religious, or cultural background, you may not realize that exploring what happens when we die is now a subject that science is beginning to investigate.
There is no question more intriguing to humankind. And for the first time in our history, we may finally uncover some real answers.
In The Fake News Era, Are We Too Gullible? No, Says Cognitive Scientist
One of the oddest political hoaxes of recent times was Pizzagate, in which conspiracy theorists claimed that Hillary Clinton and her 2016 campaign chief ran a child sex ring from the basement of a Washington, DC, pizzeria.
Millions of believers spread the rumor on social media, abetted by Russian bots; one outraged netizen stormed the restaurant with an assault rifle and shot open what he took to be the dungeon door. (It actually led to a computer closet.) Pundits cited the imbroglio as evidence that Americans had lost the ability to tell fake news from the real thing, putting our democracy in peril.
Such fears, however, are nothing new. "For most of history, the concept of widespread credulity has been fundamental to our understanding of society," observes Hugo Mercier in Not Born Yesterday: The Science of Who We Trust and What We Believe (Princeton University Press, 2020). In the fifth century BCE, he points out, the historian Thucydides blamed Athens' defeat by Sparta on a demagogue who hoodwinked the public into supporting idiotic military strategies; Plato extended that argument to condemn democracy itself. Today, atheists and fundamentalists decry one another's gullibility, as do climate-change accepters and deniers. Leftists bemoan the masses' blind acceptance of the "dominant ideology," while conservatives accuse those who do revolt of being duped by cunning agitators.
What's changed, all sides agree, is the speed at which bamboozlement can propagate. In the digital age, it seems, a sucker is born every nanosecond.
The Case Against Credulity
Yet Mercier, a cognitive scientist at the Jean Nicod Institute in Paris, thinks we've got the problem backward. To fight disinformation more effectively, he suggests, humans need to stop believing in one thing above all: our own gullibility. "We don't credulously accept whatever we're told—even when those views are supported by the majority of the population, or by prestigious, charismatic individuals," he writes. "On the contrary, we are skilled at figuring out who to trust and what to believe, and, if anything, we're too hard rather than too easy to influence."
He bases those contentions on a growing body of research in neuropsychiatry, evolutionary psychology, and other fields. Humans, Mercier argues, are hardwired to balance openness with vigilance when assessing communicated information. To gauge a statement's accuracy, we instinctively test it from many angles, including: Does it jibe with what I already believe? Does the speaker share my interests? Has she demonstrated competence in this area? What's her reputation for trustworthiness? And, with more complex assertions: Does the argument make sense?
This process, Mercier says, enables us to learn much more from one another than do other animals, and to communicate in a far more complex way—key to our unparalleled adaptability. But it doesn't always save us from trusting liars or embracing demonstrably false beliefs. To better understand why, leapsmag spoke with the author.
How did you come to write Not Born Yesterday?
In 2010, I collaborated with the cognitive scientist Dan Sperber and some other colleagues on a paper called "Epistemic Vigilance," which laid out the argument that evolutionarily, it would make no sense for humans to be gullible. If you can be easily manipulated and influenced, you're going to be in major trouble. But as I talked to people, I kept encountering resistance. They'd tell me, "No, no, people are influenced by advertising, by political campaigns, by religious leaders." I started doing more research to see if I was wrong, and eventually I had enough to write a book.
With all the talk about "fake news" these days, the topic has gotten a lot more timely.
Yes. But on the whole, I'm skeptical that fake news matters very much. And all the energy we spend fighting it is energy not spent on other pursuits that may be better ways of improving our informational environment. The real challenge, I think, is not how to shut up people who say stupid things on the internet, but how to make it easier for people who say correct things to convince others.
"History shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion."
You start the book with an anecdote about your encounter with a con artist several years ago, who scammed you out of 20 euros. Why did you choose that anecdote?
Although I'm arguing that people aren't generally gullible, I'm not saying we're completely impervious to attempts at tricking us. It's just that we're much better than we think at resisting manipulation. And while there's a risk of trusting someone who doesn't deserve to be trusted, there's also a risk of not trusting someone who could have been trusted. You miss out on someone who could help you, or from whom you might have learned something—including figuring out who to trust.
You argue that in humans, vigilance and open-mindedness evolved hand-in-hand, leading to a set of cognitive mechanisms you call "open vigilance."
There's a common view that people start from a state of being gullible and easy to influence, and get better at rejecting information as they become smarter and more sophisticated. But that's not what really happens. It's much harder to get apes than humans to do anything they don't want to do, for example. And research suggests that over evolutionary time, the better our species became at telling what we should and shouldn't listen to, the more open to influence we became. Even small children have ways to evaluate what people tell them.
The most basic is what I call "plausibility checking": if you tell them you're 200 years old, they're going to find that highly suspicious. Kids pay attention to competence; if someone is an expert in the relevant field, they'll trust her more. They're likelier to trust someone who's nice to them. My colleagues and I have found that by age 2 ½, children can distinguish between very strong and very weak arguments. Obviously, these skills keep developing throughout your life.
But you've found that even the most forceful leaders—and their propaganda machines—have a hard time changing people's minds.
Throughout history, there's been this fear of demagogues leading whole countries into terrible decisions. In reality, these leaders are mostly good at reading the crowd and figuring out what people want to hear. They're not really influencing [the masses]; they're surfing on pre-existing public opinion. We know from a recent study, for instance, that if you match cities in which Hitler gave campaign speeches in the late '20s through early '30s with similar cities in which he didn't give campaign speeches, there was no difference in vote share for the Nazis. Nazi propaganda managed to make Germans who were already anti-Semitic more likely to express their anti-Semitism or act on it. But Germans who were not already anti-Semitic were completely immune to the propaganda.
So why, in totalitarian regimes, do people seem so devoted to the ruler?
It's not a very complex psychology. In these regimes, the slightest show of discontent can be punished by death, or by you and your whole family being sent to a labor camp. That doesn't mean propaganda has no effect, but you can explain people's obedience without it.
What about cult leaders and religious extremists? Their followers seem willing to believe anything.
Prophets and preachers can inspire the kind of fervor that leads people to suicidal acts or doomed crusades. But history shows that the audience's state of mind and material conditions matter more than the leader's powers of persuasion. Only when people are ready for extreme actions can a charismatic figure provide the spark that lights the fire.
Once a religion becomes ubiquitous, the limits of its persuasive powers become clear. Every anthropologist knows that in societies that are nominally dominated by orthodox belief systems—whether Christian or Muslim or anything else—most people share a view of God, or the spirit, that's closer to what you find in societies that lack such religions. In the Middle Ages, for instance, you have records of priests complaining of how unruly the people are—how they spend the whole Mass chatting or gossiping, or go on pilgrimages mostly because of all the prostitutes and wine-drinking. They continue pagan practices. They resist attempts to make them pay tithes. It's very far from our image of how much people really bought the dominant religion.
"The mainstream media is extremely reliable. The scientific consensus is extremely reliable."
And what about all those wild rumors and conspiracy theories on social media? Don't those demonstrate widespread gullibility?
I think not, for two reasons. One is that most of these false beliefs tend to be held in a way that's not very deep. People may say Pizzagate is true, yet that belief doesn't really interact with the rest of their cognition or their behavior. If you really believe that children are being abused, then trying to free them is the moral and rational thing to do. But the only person who did that was the guy who took his assault weapon to the pizzeria. Most people just left one-star reviews of the restaurant.
The other reason is that most of these beliefs actually play some useful role for people. Before any ethnic massacre, for example, rumors circulate about atrocities having been committed by the targeted minority. But those beliefs aren't what's really driving the phenomenon. In the horrendous pogrom of Kishinev, Moldova, 100 years ago, you had these stories of blood libel—a child disappeared, typical stuff. And then what did the Christian inhabitants do? They raped the [Jewish] women, they pillaged the wine stores, they stole everything they could. They clearly wanted to get that stuff, and they made up something to justify it.
Where do skeptics like climate-change deniers and anti-vaxxers fit into the picture?
Most people in most countries accept that vaccination is good and that climate change is real and man-made. These ideas are deeply counter-intuitive, so the fact that scientists were able to get them across is quite fascinating. But the environment in which we live is vastly different from the one in which we evolved. There's a lot more information, which makes it harder to figure out who we can trust. The main effect is that we don't trust enough; we don't accept enough information. We also rely on shortcuts and heuristics—coarse cues of trustworthiness. There are people who abuse these cues. They may have a PhD or an MD, and they use those credentials to help them spread messages that are not true and not good. Mostly, they're affirming what people want to believe, but they may also be changing minds at the margins.
How can we improve people's ability to resist that kind of exploitation?
I wish I could tell you! That's literally my next project. Generally speaking, though, my advice is very vanilla. The mainstream media is extremely reliable. The scientific consensus is extremely reliable. If you trust those sources, you'll go wrong in a very few cases, but on the whole, they'll probably give you good results. Yet a lot of the problems that we attribute to people being stupid and irrational are not entirely their fault. If governments were less corrupt, if the pharmaceutical companies were irreproachable, these problems might not go away—but they would certainly be minimized.
“Virtual Biopsies” May Soon Make Some Invasive Tests Unnecessary
At his son's college graduation in 2017, Dan Chessin felt "terribly uncomfortable" sitting in the stadium. The bouts of pain persisted, and after months of monitoring, a urologist took biopsies of suspicious areas in his prostate.
"In my case, the biopsies came out cancerous," says Chessin, 60, who underwent robotic surgery for intermediate-grade prostate cancer at University Hospitals Cleveland Medical Center.
Although he needed a biopsy, as most patients today do, advances in radiologic technology may make such invasive measures unnecessary in the future. Researchers are developing better imaging techniques and algorithms—a form of computer science called artificial intelligence, in which machines learn and execute tasks that typically require human brain power.
This innovation may enhance diagnostic precision and promptness. But it also brings ethical concerns to the forefront of the conversation, highlighting the potential for invasion of privacy, unequal patient access, and less physician involvement in patient care.
A National Academy of Medicine Special Publication, released in December, emphasizes that setting industry-wide standards for AI's use in patient care is essential to its responsible and transparent implementation as the industry grapples with voluminous quantities of data. The technology should be viewed as a tool to supplement decision-making by highly trained professionals, not to replace it.
MRI--a test that uses powerful magnets, radio waves, and a computer to take detailed images inside the body--has become highly accurate in detecting aggressive prostate cancer, but its reliability is more limited in identifying low and intermediate grades of malignancy. That's why Chessin opted to have his prostate removed rather than take the chance of missing anything more suspicious that could develop.
His urologist, Lee Ponsky, says AI's most significant impact is yet to come. He hopes University Hospitals Cleveland Medical Center's collaboration with research scientists at its academic affiliate, Case Western Reserve University, will lead to the invention of a virtual biopsy.
A five-year grant from the National Cancer Institute is funding the project, launched in 2017, to develop a tool that combines MRI with computerized analysis to support more accurate detection and grading of prostate cancer. Such a tool would be "the closest to a crystal ball that we can get," says Ponsky, professor and chairman of the Urology Institute.
In situations where AI has guided diagnostics, radiologists' interpretations of breast, lung, and prostate lesions have improved by as much as 25 percent, says Anant Madabhushi, a biomedical engineer and director of the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve, who is collaborating with Ponsky. "AI is very nascent," Madabhushi says, estimating that fewer than 10 percent of niche academic medical centers have used it. "We are still optimizing and validating the AI and virtual biopsy technology."
In October, several North American and European professional organizations of radiologists, imaging informaticists, and medical physicists released a joint statement on the ethics of AI. "Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future," reads the statement, published in the Journal of the American College of Radiology. "The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes."
The statement's lead author, radiologist J. Raymond Geis, says "there's no question" that machines equipped with artificial intelligence "can extract more information than two human eyes" by spotting very subtle patterns in pixels. Yet such nuances are "only part of the bigger picture of taking care of a patient," says Geis, a senior scientist with the American College of Radiology's Data Science Institute. "We have to be able to combine that with knowledge of what those pixels mean."
Setting ethical standards is high on all physicians' radar because the intricacies of each patient's medical record are factored into the computer's algorithm, which, in turn, may be used to help interpret other patients' scans, says radiologist Frank Rybicki, vice chair of operations and quality at the University of Cincinnati's department of radiology. Although obtaining patients' informed consent in writing is currently necessary, ethical dilemmas arise if and when patients have a change of heart about the use of their private health information. Removing an individual's data may be possible for some algorithms but not others, Rybicki says.
The information is de-identified to protect patient privacy. Using it to advance research is akin to analyzing human tissue removed in surgical procedures with the goal of discovering new medicines to fight disease, says Maryellen Giger, a University of Chicago medical physicist who studies computer-aided diagnosis in cancers of the breast, lung, and prostate, as well as bone diseases. Physicians who become adept at using AI to augment their interpretation of imaging will be ahead of the curve, she says.
As with other new discoveries, patient access and equality come into play. While AI appears to "have potential to improve over human performance in certain contexts," an algorithm's design may result in greater accuracy for certain groups of patients, says Lucia M. Rafanelli, a political theorist at The George Washington University. This "could have a disproportionately bad impact on one segment of the population."
Overreliance on new technology also poses concern when humans "outsource the process to a machine." Over time, they may cease developing and refining the skills they used before the invention became available, says Chloe Bakalar, a visiting research collaborator at Princeton University's Center for Information Technology Policy.
"AI is a paradigm shift with magic power and great potential."
Striking the right balance in the rollout of the technology is key. Rushing to integrate AI in clinical practice may cause harm, whereas holding back too long could undermine its ability to be helpful. Proper governance becomes paramount. "AI is a paradigm shift with magic power and great potential," says Ge Wang, a biomedical imaging professor at Rensselaer Polytechnic Institute in Troy, New York. "It is only ethical to develop it proactively, validate it rigorously, regulate it systematically, and optimize it as time goes by in a healthy ecosystem."