The Case for an Outright Ban on Facial Recognition Technology
[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "Do you think the use of facial recognition technology by the police or government should be banned? If so, why? If not, what limits, if any, should be placed on its use?"]
In a surprise appearance at the tail end of Amazon's much-hyped annual product event last month, CEO Jeff Bezos casually told reporters that his company is writing its own facial recognition legislation.
It seems that when you're the wealthiest human alive, there's nothing strange about your company, the largest in the world profiting from the spread of face surveillance technology, writing the rules that govern it.
But if lawmakers and advocates fall into Silicon Valley's trap of "regulating" facial recognition and other forms of invasive biometric surveillance, that's exactly what will happen.
Industry-friendly regulations won't fix the dangers inherent in widespread use of face scanning software, whether it's deployed by governments or for commercial purposes. The use of this technology in public places and for surveillance purposes should be banned outright, and its use by private companies and individuals should be severely restricted. As artificial intelligence expert Luke Stark wrote, it's dangerous enough that it should be outlawed for "almost all practical purposes."
Like biological or nuclear weapons, facial recognition poses such a profound threat to the future of humanity and our basic rights that any potential benefits are far outweighed by the inevitable harms.
We live in cities and towns with an exponentially growing number of always-on cameras, installed in everything from cars to children's toys to Amazon's police-friendly doorbells. The use of computer algorithms to analyze massive databases of footage and photographs could render human privacy extinct. It's a world where nearly everything we do, everywhere we go, everyone we associate with, and everything we buy — or look at and even think of buying — is recorded and can be tracked and analyzed at a mass scale for unimaginably awful purposes.
Biometric tracking enables the automated and pervasive monitoring of an entire population. There's ample evidence that this type of dragnet mass data collection and analysis is not useful for public safety, but it's perfect for oppression and social control.
Law enforcement defenders of facial recognition often state that the technology simply lets them do what they would be doing anyway: compare footage or photos against mug shots, driver's licenses, or other databases, but faster. And they're not wrong. But the speed and automation enabled by artificial intelligence-powered surveillance fundamentally changes the impact of that surveillance on our society. Being able to do something exponentially faster, and using significantly less human and financial resources, alters the nature of that thing. The Fourth Amendment becomes meaningless in a world where private companies record everything we do and provide governments with easy tools to request and analyze footage from a growing, privately owned panopticon.
Tech giants like Microsoft and Amazon insist that facial recognition will be a lucrative boon for humanity, as long as there are proper safeguards in place. This disingenuous call for regulation is straight out of the same lobbying playbook that telecom companies have used to attack net neutrality and Silicon Valley has used to scuttle meaningful data privacy legislation. Companies are calling for regulation because they want their corporate lawyers and lobbyists to help write the rules of the road, to ensure those rules are friendly to their business models. They're trying to skip the debate about what role, if any, a technology this uniquely dangerous should play in a free and open society. They want to rush ahead to the discussion about how we roll it out.
Facial recognition is spreading very quickly. But backlash is growing too. Several cities have already banned government entities, including police and schools, from using biometric surveillance. Others have local ordinances in the works, and there's state legislation brewing in Michigan, Massachusetts, Utah, and California. Meanwhile, there is growing bipartisan agreement in the U.S. Congress to rein in government use of facial recognition. We've also seen significant backlash to facial recognition growing in the U.K., within the European Parliament, and in Sweden, which recently banned its use in schools following a fine under the General Data Protection Regulation (GDPR).
At least two frontrunners in the 2020 presidential campaign have backed a ban on law enforcement use of facial recognition. Many of the largest music festivals in the world responded to Fight for the Future's campaign and committed to not use facial recognition technology on music fans.
There has been widespread reporting on the fact that existing facial recognition algorithms exhibit systemic racial and gender bias, and are more likely to misidentify people with darker skin, or who are not perceived by a computer to be a white man. Critics are right to highlight this algorithmic bias. Facial recognition is being used by law enforcement in cities like Detroit right now, and the racial bias baked into that software is doing harm. It's exacerbating existing forms of racial profiling and discrimination in everything from public housing to the criminal justice system.
But the companies that make facial recognition assure us this bias is a bug, not a feature, and that they can fix it. And they might be right. Face scanning algorithms for many purposes will improve over time. But facial recognition becoming more accurate doesn't make it less of a threat to human rights. This technology is dangerous when it's broken, but at a mass scale, it's even more dangerous when it works. And it will still disproportionately harm our society's most vulnerable members.
We need spaces that are free from government and societal intrusion in order to advance as a civilization. If technology makes it so that laws can be enforced 100 percent of the time, there is no room to test whether those laws are just. If the U.S. government had ubiquitous facial recognition surveillance 50 years ago when homosexuality was still criminalized, would the LGBTQ rights movement ever have formed? In a world where private spaces don't exist, would people have felt safe enough to leave the closet and gather, build community, and form a movement? Freedom from surveillance is necessary for deviation from social norms as well as to dissent from authority, without which societal progress halts.
Persistent monitoring and policing of our behavior breeds conformity, benefits tyrants, and enriches elites. Drawing a line in the sand around tech-enhanced surveillance is the fundamental fight of this generation. Lining up to get our faces scanned to participate in society doesn't just threaten our privacy, it threatens our humanity, and our ability to be ourselves.
[Editor's Note: Read the opposite perspective here.]
How to have a good life, based on the world's longest study of happiness
What makes for a good life? Such a simple question, yet we don't have great answers. Most of us try to figure it out as we go along, and many end up feeling like they never got to the bottom of it.
Shouldn't something so important be approached with more scientific rigor? In 1938, Harvard researchers began a study to fill this gap. Since then, they’ve followed hundreds of people over the course of their lives, hoping to identify which factors are key to long-term satisfaction.
Eighty-five years later, the Harvard Study of Adult Development is still going. And today, its directors, the psychiatrists Bob Waldinger and Marc Schulz, have published a book that pulls together the study’s most important findings. It’s called The Good Life: Lessons from the World’s Longest Scientific Study of Happiness.
In this podcast episode, I talked with Dr. Waldinger about life lessons that we can mine from the Harvard study and his new book.
Listen on Apple | Listen on Spotify | Listen on Stitcher | Listen on Amazon | Listen on Google
More background on the study
Back in the 1930s, the research began with 724 people. Some were first-year Harvard students paying full tuition, others were freshmen who needed financial help, and the rest were 14-year-old boys from inner-city Boston – white males only. Fortunately, the study team realized the error of their ways and expanded their sample to include the wives and daughters of the first participants. And Waldinger’s book focuses on the Harvard study findings that can be corroborated by evidence from additional research on the lives of people of different races and other minorities.
The study now includes over 1,300 relatives of the original participants, spanning three generations. Every two years, the participants have sent the researchers a filled-out questionnaire, reporting how their lives are going. At five-year intervals, the research team takes a peek at their health records and, every 15 years, the psychologists meet their subjects in person to check out their appearance and behavior.
But they don’t stop there. No, the researchers factor in multiple blood samples, DNA, images from body scans, and even the donated brains of 25 participants.
[Photo: Robert Waldinger, director of the Harvard Study of Adult Development. Credit: Katherine Taylor]
Dr. Waldinger is Clinical Professor of Psychiatry at Harvard Medical School, in addition to being Director of the Harvard Study of Adult Development. He got his M.D. from Harvard Medical School and has published numerous scientific papers. He’s a practicing psychiatrist and psychoanalyst, he teaches Harvard medical students, and, since that is clearly not enough to keep him busy, he’s also a Zen priest.
His book is a must-read if you’re looking for scientific evidence on how to design your life for more satisfaction, so that someday you can look back on it without regret. This episode was an amazing conversation in which Dr. Waldinger breaks down many of the clichés about the good life, making his advice real and tangible. We also get into what he calls “side-by-side” relationships, personality traits for the good life, and the downsides of being too strict about work-life balance.
Show links
- Bob Waldinger
- Waldinger's book, The Good Life: Lessons from the World's Longest Scientific Study of Happiness
- The Harvard Study of Adult Development
- Waldinger's Ted Talk
- Gallup report finding that people with good friends at work have higher engagement with their jobs
- The link between relationships and well-being
- Those with social connections live longer
The Friday Five: A new blood test to detect Alzheimer's
The Friday Five covers five stories in research that you may have missed this week. There are plenty of controversies and troubling ethical issues in science – and we get into many of them in our online magazine – but this news roundup focuses on scientific creativity and progress to give you a therapeutic dose of inspiration headed into the weekend.
Listen on Apple | Listen on Spotify | Listen on Stitcher | Listen on Amazon | Listen on Google
Here are the promising studies covered in this week's Friday Five:
- A blood test to detect Alzheimer's
- War vets can take their psychologist wherever they go
- Does intermittent fasting affect circadian rhythms?
- A new year's resolution for living longer
- 3-D printed eyes?