Don’t fear AI, fear power-hungry humans
Story by Big Think
We live in strange times, when the technology we depend on the most is also that which we fear the most. We celebrate cutting-edge achievements even as we recoil in fear at how they could be used to hurt us. From genetic engineering and AI to nuclear technology and nanobots, the list of awe-inspiring, fast-developing technologies is long.
However, this fear of the machine is not as new as it may seem. Technology has a longstanding alliance with power and the state. The dark side of human history can be told as a series of wars whose victors are often those with the most advanced technology. (There are exceptions, of course.) Science, and its technological offspring, follows the money.
This fear of the machine seems to be misplaced. The machine has no intent: only its maker does. The fear of the machine is, in essence, the fear we have of each other — of what we are capable of doing to one another.
How AI changes things
Sure, you would reply, but AI changes everything. With artificial intelligence, the machine itself will develop some sort of autonomy, however ill-defined. It will have a will of its own. And if that will reflects anything that seems human, it will not be benevolent. With AI, the claim goes, the machine will somehow know what it must do to get rid of us. It will threaten us as a species.
Well, this fear is also not new. Mary Shelley wrote Frankenstein in 1818 to warn us of what science could do if it served the wrong calling. In the case of her novel, Dr. Frankenstein’s call was to win the battle against death — to reverse the course of nature. Granted, any cure of an illness interferes with the normal workings of nature, yet we are justly proud of having developed cures for our ailments, prolonging life and increasing its quality. Science can achieve nothing more noble. What messes things up is when the pursuit of good is confused with that of power. In this distorted scale, the more powerful the better. The ultimate goal is to be as powerful as gods — masters of time, of life and death.
Back to AI, there is no doubt the technology will help us tremendously. We will have better medical diagnostics, better traffic control, better bridge designs, and better pedagogical animations to teach in the classroom and virtually. But we will also have better winnings in the stock market, better war strategies, and better soldiers and remote ways of killing. This grants real power to those who control the best technologies. It increases the take of the winners of wars — those fought with weapons, and those fought with money.
A story as old as civilization
The question is how to move forward. This is where things get interesting and complicated. We hear over and over again that there is an urgent need for safeguards, for controls and legislation to deal with the AI revolution. Great. But if these machines are essentially functioning in a semi-black box of self-teaching neural nets, how exactly are we going to make safeguards that are sure to remain effective? How are we to ensure that the AI, with its unlimited ability to gather data, will not come up with new ways to bypass our safeguards, the same way that people break into safes?
The second question is that of global control. As I wrote before, overseeing new technology is complex. Should countries create a World Mind Organization that controls the technologies that develop AI? If so, how do we organize this planet-wide governing board? Who should be a part of its governing structure? What mechanisms will ensure that governments and private companies do not secretly break the rules, especially when to do so would put the most advanced weapons in the hands of the rule breakers? They will need those, after all, if other actors break the rules as well.
As before, the countries with the best scientists and engineers will have a great advantage. A new international détente will emerge in the mold of the nuclear détente of the Cold War. Again, we will fear destructive technology falling into the wrong hands. This can happen easily. AI machines will not need to be built at an industrial scale, as nuclear capabilities were, and AI-based terrorism will be a force to reckon with.
So here we are, afraid of our own technology all over again.
What is missing from this picture? It continues to illustrate the same destructive pattern of greed and power that has defined so much of our civilization. The failure it reveals is moral, and only we can change it. We define civilization by the accumulation of wealth, and this worldview is killing us. The project of civilization we invented has become self-cannibalizing. As long as we do not see this, and we keep following the same route we have trodden for the past 10,000 years, it will be very hard to legislate the technology to come and to ensure such legislation is followed. Unless, of course, AI helps us become better humans, perhaps by teaching us how stupid we have been for so long. This sounds far-fetched, given who this AI will be serving. But one can always hope.
This article originally appeared on Big Think, home of the brightest minds and biggest ideas of all time.
Harvard Researchers Are Using a Breakthrough Tool to Find the Antibodies That Best Knock Out the Coronavirus
In the 1995 medical thriller Outbreak, scientists seeking a cure for a deadly infectious disease extract antibodies from the virus's original host—an African monkey.
The antibodies prevent the monkeys from getting sick, so doctors use these antibodies to make the therapeutic serum for humans. With SARS-CoV-2, the original hosts might be bats or pangolins, but scientists don't have access to either, so they are turning to the humans who beat the virus.
Patients who recovered from COVID-19 are valuable reservoirs of viral antibodies and may help scientists develop efficient therapeutics, says Stephen J. Elledge, professor of genetics and medicine at Harvard Medical School in Boston. Studying the structure of the antibodies floating in their blood can help scientists understand what these patients' immune systems did right to kill the pathogen.
When viruses invade the body, the immune system builds antibodies against them. The antibodies work like Velcro strips—they use special spots on their surface called paratopes to cling to the specific spots on the viral shell called epitopes. Once the antibodies circulating in the blood find their "match," they cling on to the virus and deactivate it.
But that process is far from simple. The epitopes and paratopes are built of various peptides that have complex shapes, are folded in specific ways, and may carry an electrical charge that repels certain molecules. Only when all of these parameters match can an antibody get close enough to a viral particle to deactivate it.
So the immune system forges many different antibodies with varied parameters in hopes that some will work. "When a person is infected, the immune system makes antibodies kind of blindly," Elledge says. "It's doing a shotgun approach. It's not sure which ones will work, but it knows once it's made a good one that works."
Elledge and his team want to take the guesswork out of the process. They are using their home-built tool VirScan to comb through the blood samples of recovered COVID-19 patients to see what parameters the efficient antibodies should have. First developed in 2015, VirScan has a library of epitopes found on the shells of viruses known to afflict humans, akin to a database of criminals' mug shots maintained by the police.
Originally, VirScan was meant to reveal which pathogens a person overcame throughout a lifetime, and could identify over 1,000 different strains of viruses and bacteria. When the team ran blood samples against VirScan's library, the tool would pick out all the "usual suspects." And unlike traditional blood tests called ELISAs, which can only detect one pathogen at a time, VirScan can detect all of them at once. Now, the team has updated VirScan with the SARS-CoV-2 "mug shot" and is beginning to test which antibodies from the recovered patients' blood will bind to it.
Obtaining blood samples was a challenge that caused some delays. "So far most of the recovered patients have been in China and those samples are hard to get," Elledge says. It also takes a person five to 10 days to develop antibodies, so the blood must be drawn at the right time during the illness. If a person is asymptomatic, it's hard to pinpoint the right moment. "We just got a couple of blood samples, so we are testing now," he says. The team hopes to get some results very soon.
Elucidating the structure of efficient antibodies can help create therapeutics for COVID-19. "VirScan is a powerful technology to study antibody responses," says Harvard Medical School professor Dan Barouch, who also directs the Center for Virology and Vaccine Research. "A detailed understanding of the antibody responses to COVID-19 will help guide the design of next-generation vaccines and therapeutics."
For example, scientists can synthesize antibodies to specs and give them to patients as medicine. Once vaccines are designed, medics can use VirScan to see if those vaccinated against COVID-19 generate the necessary antibodies.
Knowing which antibodies bind best can also help fine-tune vaccines. Sometimes, viruses cause the immune system to generate antibodies that don't deactivate it. "We think the virus is trying to confuse the immune system; it is its business plan," Elledge says—so those unhelpful antibodies shouldn't be included in vaccines.
More importantly, VirScan can also tell which people have developed immunity to SARS-CoV-2 and can return to their workplaces and businesses, which is crucial to restoring the economy. Knowing one's immunity status is especially important for doctors working on the frontlines, Elledge notes. "The resistant ones can intubate the sick."
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.
As countries around the world combat the coronavirus outbreak, governments that already operated sophisticated surveillance programs are ramping up the tracking of their citizens.
Countries like China, South Korea, Israel, Singapore and others are closely monitoring citizens to track the spread of the virus and prevent further infections, and policymakers in the United States have proposed similar steps. These shifts in policy have civil liberties defenders alarmed, as history has shown that expansions of surveillance tend to stick around after an emergency is over.
In China, where the virus originated and surveillance is already ubiquitous, the government has taken measures like having people scan a QR code and answer questions about their health and travel history to enter their apartment building. The country has also increased the tracking of cell phones, encouraged citizens to report people who appear to be sick, utilized surveillance drones, and developed facial recognition that can identify someone even if they're wearing a mask.
In Israel, the government has begun tracking people's cell phones without a court order under a program that was initially meant to counter terrorism. Singapore has also been closely tracking people's movements using cell phone data. In South Korea, the government has been monitoring citizens' credit card and cell phone data and has heavily utilized facial recognition to combat the spread of the coronavirus.
Here at home, the United States government and state governments have been using cell phone data to determine where people are congregating. White House senior adviser Jared Kushner's task force to combat the coronavirus outbreak has proposed using cell phone data to track coronavirus patients. Cities around the nation are also using surveillance drones to enforce social distancing orders. Companies like Apple and Google that work closely with the federal government are currently developing systems to track Americans' cell phones.
All of this might sound acceptable if you're worried about containing the outbreak and getting back to normal life, but as we saw when the Patriot Act was passed in 2001 in the wake of the 9/11 terrorist attacks, expansions of the surveillance state can persist long after the emergency that seemed to justify them.
Jay Stanley, senior policy analyst with the ACLU Speech, Privacy, and Technology Project, says that this public health emergency requires bold action, but he worries that actions may be taken that will infringe on our privacy rights.
"This is an extraordinary crisis that justifies things that would not be justified in ordinary times, but we, of course, worry that any such things would be made permanent," Stanley says.
Stanley notes that the 9/11 situation was different from this current situation because we still face the threat of terrorism today, and we always will. The Patriot Act was a response to that threat, even if it was an extreme response. With this pandemic, it's quite possible we won't face something like this again for some time.
"We know that for the last seven or eight decades, we haven't seen a microbe this dangerous become a pandemic, and it's reasonable to expect it's not going to be happening for a while afterward," Stanley says. "We do know that when a vaccine is produced and is produced widely enough, the COVID crisis will be over. This does, unlike 9/11, have a definitive ending."
The ACLU released a white paper last week outlining the problems with using location data from cell phones and how policymakers should proceed when they discuss the usage of surveillance to combat the outbreak.
"Location data contains an enormously invasive and personal set of information about each of us, with the potential to reveal such things as people's social, sexual, religious, and political associations," they wrote. "The potential for invasions of privacy, abuse, and stigmatization is enormous. Any uses of such data should be temporary, restricted to public health agencies and purposes, and should make the greatest possible use of available techniques that allow for privacy and anonymity to be protected, even as the data is used."
Sara Collins, policy counsel at the digital rights organization Public Knowledge, says that one of the problems with the current administration is that there's not much transparency, so she worries surveillance could be increased without the public realizing it.
"You'll often see the White House come out with something—that they're going to take this action or an agency just says they're going to take this action—and there's no congressional authorization," Collins says. "There's no regulation. There's nothing there for the public discourse."
Collins says it's almost impossible to protect against infringements on people's privacy rights if you don't actually know what kind of surveillance is being done and at what scale.
"I think that's very concerning when there's no accountability and no way to understand what's actually happening," Collins says. "The first thing you need to combat pervasive surveillance is to know that it's occurring."
We should also be worried about corporate surveillance, Collins says, because the tech companies that keep track of our data work closely with the government and do not have a good track record when it comes to protecting people's privacy. She suspects these companies could use the coronavirus outbreak to defend the kind of data collection they've been engaging in for years.
Collins stresses that any increase in surveillance should be transparent and short-lived, and that there should be a limit on how long people's data can be kept. Otherwise, she says, we're risking an indefinite infringement on privacy rights. Her organization will be keeping tabs as the crisis progresses.
It's not that we shouldn't avail ourselves of modern technology to fight the pandemic. Indeed, once lockdown restrictions are gradually lifted, public health officials must increase their ability to isolate new cases and trace, test, and quarantine contacts.
But tracking the entire populace "Big Brother"-style is not the ideal way out of the crisis. Last week, for instance, a group of policy experts, including former FDA Commissioner Scott Gottlieb, published recommendations for how to achieve containment. They emphasized the need for widespread diagnostic and serologic testing as well as rapid case-based interventions, among other measures, and they, too, were wary of pervasive measures to follow citizens.
The group wrote: "Improved capacity [for timely contact tracing] will be most effective if coordinated with health care providers, health systems, and health plans and supported by timely electronic data sharing. Cell phone-based apps recording proximity events between individuals are unlikely to have adequate discriminating ability or adoption to achieve public health utility, while introducing serious privacy, security, and logistical concerns."
The bottom line: Any broad increases in surveillance should be carefully considered before we go along with them out of fear. The Founders knew that privacy is integral to freedom; that's why they wrote the Fourth Amendment to protect it, and that right shouldn't be thrown away because we're in an emergency. Once you lose a right, you don't tend to get it back.