Bad Actors Getting Your Health Data Is the FBI’s Latest Worry
In February 2015, the health insurer Anthem revealed that criminal hackers had gained access to the company's servers, exposing the personal information of nearly 79 million patients. It's the largest known healthcare breach in history.
That year, the data of millions more would be compromised in one cyberattack after another on American insurers and other healthcare organizations. In fact, for the past several years, the number of reported data breaches has increased each year, from 199 in 2010 to 344 in 2017, according to a September 2018 analysis in the Journal of the American Medical Association.
The FBI's Edward You sees this as a worrying trend. He says hackers aren't just interested in your Social Security or credit card number; they're increasingly interested in stealing your medical information. For now, hackers use this information to create fake identities, file fraudulent insurance claims, and order and sell expensive drugs and medical equipment. But beyond that, a new kind of cybersecurity threat is around the corner.
Mr. You and others worry that the vast amounts of healthcare data being generated for precision medicine efforts could leave the U.S. vulnerable to cyber and biological attacks. In the wrong hands, this data could be used to exploit or extort an individual, discriminate against certain groups of people, make targeted bioweapons, or give another country an economic advantage.
Precision medicine, of course, is the idea that medical treatments can be tailored to individuals based on their genetics, environment, lifestyle or other traits. But to do that requires collecting and analyzing huge quantities of health data from diverse populations. One research effort, called All of Us, launched by the U.S. National Institutes of Health last year, aims to collect genomic and other healthcare data from one million participants with the goal of advancing personalized medical care.
Other initiatives are underway at academic institutions and healthcare organizations. Electronic medical records, genetic tests, wearable health trackers, mobile apps, and social media are all sources of valuable healthcare data that a bad actor could potentially use to learn more about an individual or group of people.
"When you aggregate all of that data together, that becomes a very powerful profile of who you are," Mr. You says.
A supervisory special agent in the biological countermeasures unit of the FBI's weapons of mass destruction directorate, Mr. You is tasked with imagining worst-case bioterror scenarios and figuring out how to prevent and prepare for them.
That used to mean focusing on threats like anthrax, Ebola, and smallpox—pathogens that could be used to intentionally infect people—"basically the dangerous bugs," as he puts it. In recent years, advances in gene editing and synthetic biology have given rise to fears that rogue, or even well-intentioned, scientists could create a virulent virus that's intentionally, or unintentionally, released outside the lab.
"If a foreign source, especially a criminal one, has your biological information, then they might have some particular insights into what your future medical needs might be and exploit that."
While Mr. You is still tracking those threats, he's been traveling around the country talking to scientists, lawyers, software engineers, cybersecurity professionals, government officials and CEOs about a new class of security threats: those posed by genetic and other biological data.
Emerging threats
One scenario Mr. You can imagine: nefarious actors using an individual's sensitive medical information to extort or blackmail that person.
"If a foreign source, especially a criminal one, has your biological information, then they might have some particular insights into what your future medical needs might be and exploit that," he says. For instance, "what happens if you have a singular medical condition and an outside entity says they have a treatment for your condition?" You could get talked into paying a huge sum of money for a treatment that ends up being bogus.
Or what if hackers got a hold of a politician or high-profile CEO's health records? Say that person had a disease-causing genetic mutation that could affect their ability to carry out their job in the future and hackers threatened to expose that information. These scenarios may seem far-fetched, but Mr. You thinks they're becoming increasingly plausible.
On a wider scale, Kavita Berger, a scientist at Gryphon Scientific, a Washington, D.C.-area life sciences consulting firm, worries that data from different populations could be used to discriminate against certain groups of people, like minorities and immigrants.
For instance, the advocacy group Human Rights Watch in 2017 flagged a concerning trend in China's Xinjiang region, an area with a history of government repression. Police there had purchased 12 DNA sequencers and were collecting and cataloging DNA samples from residents to build a national database.
"The concern is that this particular province has a huge population of the Muslim minority in China," Ms. Berger says. "Now they have a really huge database of genetic sequences. You have to ask, why does a police station need 12 next-generation sequencers?"
Also alarming is the possibility that large amounts of data from different groups of people could be used to create customized bioweapons if that data ends up in the wrong hands.
Eleonore Pauwels, a research fellow on emerging cybertechnologies at United Nations University's Centre for Policy Research, says new insights gained from genomic and other data will give scientists a better understanding of how diseases occur and why certain people are more susceptible to certain diseases.
"As you get more and more knowledge about the genomic picture and how the microbiome and the immune system of different populations function, you could get a much deeper understanding about how you could target different populations for treatment but also how you could eventually target them with different forms of bioagents," Ms. Pauwels says.
Economic competitiveness
Another reason hackers might want access to large genomic and other healthcare datasets is to give their country a leg up economically. Many large cyberattacks on U.S. healthcare organizations have been tied to Chinese hacking groups.
"This is a biological space race and we just haven't woken up to the fact that we're in this race."
"It's becoming clear that China is increasingly interested in getting access to massive data sets that come from different countries," Ms. Pauwels says.
A year after U.S. President Barack Obama announced the Precision Medicine Initiative in 2015 (later renamed All of Us), China followed suit, launching a 15-year, $9 billion precision health effort aimed at turning China into a global leader in genomics.
Chinese genomics companies, too, are expanding their reach outside of Asia. One company, WuXi NextCODE, which has offices in Shanghai, Reykjavik, and Cambridge, Massachusetts, has built an extensive library of genomes from the U.S., China and Iceland, and is now setting its sights on Ireland.
Another Chinese company, BGI, has partnered with Children's Hospital of Philadelphia and Sinai Health System in Toronto, and has also formed a collaboration with the Smithsonian Institution to sequence all species on the planet. BGI has built its own advanced genomic sequencing machines to compete with U.S.-based Illumina.
Mr. You says having access to all this data could lead to major breakthroughs in healthcare, such as new blockbuster drugs. "Whoever has the largest, most diverse dataset is truly going to win the day and come up with something very profitable," he says.
Some direct-to-consumer genetic testing companies with offices in the U.S., like Dante Labs, also use BGI to process customers' DNA.
Experts worry that China could race ahead of the U.S. in precision medicine because of Chinese laws governing data sharing. China currently prohibits the export of genetic data without explicit permission from the government. Mr. You says this creates an asymmetry in data sharing between the U.S. and China.
"This is a biological space race and we just haven't woken up to the fact that we're in this race," he said in January at an American Society for Microbiology conference in Washington, D.C. "We don't have access to their data. There is absolutely no reciprocity."
Protecting your data
While Mr. You has been stressing the importance of data security to anyone who will listen, the National Academies of Sciences, Engineering, and Medicine, which makes scientific and policy recommendations on issues of national importance, has commissioned a study on "safeguarding the bioeconomy."
In the meantime, Ms. Berger says organizations that deal with people's health data should assess their security risks and identify potential vulnerabilities in their systems.
As for what individuals can do to protect themselves, she urges people to think about the different ways they're sharing healthcare data—such as via mobile health apps and wearables.
"Ask yourself, what's the benefit of sharing this? What are the potential consequences of sharing this?" she says.
Mr. You also cautions people to think twice before taking consumer DNA tests. They may seem harmless, he says, but at the end of the day, most people don't know where their genetic information is going. "If your genetic sequence is taken, once it's gone, it's gone. There's nothing you can do about it."
Facial Recognition Can Reduce Racial Profiling and False Arrests
[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "Do you think the use of facial recognition technology by the police or government should be banned? If so, why? If not, what limits, if any, should be placed on its use?"]
Opposing facial recognition technology has become an article of faith for civil libertarians. Many who supported the bans in cities like San Francisco and Oakland have declared the technology to be inherently racist and abusive.
The greatest danger would be to categorically oppose this technology and pretend that it will simply go away.
I have spent my career as a criminal defense attorney and a civil libertarian, and I do not fear this technology. Indeed, I see it as a positive so long as it is appropriately regulated and controlled.
We are living in the beginning of a biometric age, in which technology uses our physical or biological characteristics for a variety of products and services. It holds great promise as well as great risk. The greatest danger, however, would be to categorically oppose this technology and pretend that it will simply go away.
This is an age driven as much by consumer demand as by government demand. Living in denial may be emotionally appealing, but it will only hasten the creation of a post-privacy world. If we do not address this emerging technology, our movements in public will increasingly result in instant recognition and even tracking. That is the type of fish-bowl society that strips away any expectation of privacy in our interactions and associations.
The biometrics field is expanding exponentially, largely due to the popularity of consumer products using facial recognition technology (FRT), from the iPhone's face-unlock feature to shopping apps that recognize customers.
But the privacy community is losing this battle because it is using the privacy rationales and doctrines forged in the earlier electronic surveillance periods. Just as generals are often accused of planning to fight the last war, civil libertarians can sometimes cling to past models despite their decreasing relevance in the current world.
I see FRT as having positive implications that are worth pursuing. When properly used, biometrics can actually enhance privacy interests and even reduce racial profiling by cutting down on false arrests and the warrantless "patdowns" allowed by the Supreme Court. Bans not only deny police a technology widely used by businesses, but return police to the highly flawed default of "eyeballing" suspects, a system with a considerably higher error rate than top FRT programs.
Officers are often wrong and stop a great number of suspects in the hopes of finding a wanted felon.
A study in Australia showed that passport officers who had taken photographs of subjects in ideal conditions nonetheless experienced high error rates when identifying them shortly afterward, including a 14 percent false acceptance rate. Currently, officers stop suspects based on their memory of a photograph seen days or weeks earlier. They are often wrong and stop a great number of suspects in the hopes of finding a wanted felon. The best FRT programs achieve astonishingly high accuracy in testing, though real-world implementation poses challenges that must be addressed.
One legitimate concern arose from early studies showing higher recognition error rates for certain groups, particularly African American women. An MIT study documenting those error rates prompted major improvements in the algorithms, as well as training changes, that greatly reduced the frequency of errors. The issue remains a concern, but there is nothing inherently racist about algorithms: they are sets of computer instructions that isolate and process data within the parameters and conditions set by their creators.
To be sure, there is room for improvement in some algorithms. Tests performed by the American Civil Liberties Union (ACLU) reportedly showed only an 80 percent accuracy rate when comparing mug shots to pictures of members of Congress using Amazon's "Rekognition" system. A later test showed the same 80 percent rate in the same comparison against members of the California Legislature.
However, different algorithms are available with differing levels of performance, and these products can be set to stricter or looser match thresholds. The fact is that the top algorithms tested by the National Institute of Standards and Technology achieve accuracy rates greater than 99 percent.
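To see what adjusting that threshold means in practice, here is a minimal sketch of embedding-based matching, a generic design with invented numbers rather than any vendor's actual system: a stricter threshold screens out a lookalike, at the risk of rejecting genuine matches captured in poor conditions.

```python
import math

# Minimal sketch of generic embedding-based face matching (invented
# numbers; not any specific vendor's system). A face image is reduced
# to a numeric embedding; two faces "match" when their cosine
# similarity clears a configurable threshold.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, candidate, threshold):
    return cosine_similarity(probe, candidate) >= threshold

# Hypothetical embeddings for illustration only.
probe = [0.9, 0.1, 0.4]            # face captured on camera
same_person = [0.88, 0.12, 0.41]   # another photo of the same face
lookalike = [0.8, 0.3, 0.35]       # a different, similar-looking person

for threshold in (0.90, 0.99):
    print(f"threshold {threshold}: "
          f"same person -> {is_match(probe, same_person, threshold)}, "
          f"lookalike -> {is_match(probe, lookalike, threshold)}")
# At 0.90 the lookalike is wrongly accepted; at 0.99 it is rejected,
# but overly strict thresholds can also reject true matches taken in
# poor lighting or at odd angles.
```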
The greatest threat of biometric technologies is to democratic values.
Assuming a top-performing algorithm is used, the result could be highly beneficial for civil liberties compared with the alternative of "eyeballing" suspects. Consider the Boston Marathon bombing, when police declared a "containment zone" and forced families into the street with their hands in the air.
The suspect, Dzhokhar Tsarnaev, moved around Boston and was ultimately found outside the "containment zone" once authorities abandoned the near-martial-law lockdown. He was caught on some surveillance systems but not identified. FRT can help law enforcement avoid time-consuming area searches and the questionable practice of forcing people out of their homes to physically examine them.
If we are to avoid a post-privacy world, we will have to redefine what we are trying to protect and reconceive how we hope to protect it. In my view, the greatest threat of biometric technologies is to democratic values. Authoritarian nations like China have made huge investments in FRT precisely because they know that the threat of recognition in public deters citizens from associating or interacting with protesters or dissidents. Recognition changes conduct. That chilling effect is what we have to worry about the most.
Conventional privacy doctrines do not offer much protection. The very concept of "public privacy" is treated as something of an oxymoron by courts. Public acts and associations are treated as lacking any reasonable expectation of privacy. In the same vein, the right to anonymity is not a strong avenue for protection. We are not living in an anonymous world anymore.
Consumers want products like FaceFind, which link their images with others across social media. They like "frictionless" transactions and authentications using faceprints. Despite the hyperbole in places like San Francisco, civil libertarians will not succeed in getting that cat to walk backwards.
The basis for biometric privacy protection should not be focused on anonymity, but rather obscurity. You will be increasingly subject to transparency-forcing technology, but we can legislatively mandate ways of obscuring that information. That is the objective of the Biometric Privacy Act that I have proposed in recent research. However, no such comprehensive legislation has passed through Congress.
The ability to spot fraudulent entries at airports or recognizing a felon in flight has obvious benefits for all citizens.
We also need to recognize that FRT has many beneficial uses. Biometric "smart guns" can reduce accidents and criminal misuse. New authentication systems using FRT and other biometric programs could reduce identity theft.
And, yes, FRT could help protect against unnecessary police stops or false arrests. Finally, and not insignificantly, this technology could stop serious crimes, from terrorist attacks to the capture of dangerous felons. The ability to spot fraudulent entries at airports or to recognize a felon in flight has obvious benefits for all citizens.
We can live and thrive in a biometric era. However, we will need to bring together civil libertarians with business and government experts if we are going to control this technology rather than have it control us.
[Editor's Note: Read the opposite perspective here.]
The Case for an Outright Ban on Facial Recognition Technology
[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "Do you think the use of facial recognition technology by the police or government should be banned? If so, why? If not, what limits, if any, should be placed on its use?"]
In a surprise appearance at the tail end of Amazon's much-hyped annual product event last month, CEO Jeff Bezos casually told reporters that his company is writing its own facial recognition legislation.
The use of computer algorithms to analyze massive databases of footage and photographs could render human privacy extinct.
It seems that when you're the wealthiest human alive, there's nothing strange about your company (the largest in the world profiting from the spread of face surveillance technology) writing the rules that govern it.
But if lawmakers and advocates fall into Silicon Valley's trap of "regulating" facial recognition and other forms of invasive biometric surveillance, that's exactly what will happen.
Industry-friendly regulations won't fix the dangers inherent in widespread use of face scanning software, whether it's deployed by governments or for commercial purposes. The use of this technology in public places and for surveillance purposes should be banned outright, and its use by private companies and individuals should be severely restricted. As artificial intelligence expert Luke Stark wrote, it's dangerous enough that it should be outlawed for "almost all practical purposes."
Like biological or nuclear weapons, facial recognition poses such a profound threat to the future of humanity and our basic rights that any potential benefits are far outweighed by the inevitable harms.
We live in cities and towns with an exponentially growing number of always-on cameras, installed in everything from cars to children's toys to Amazon's police-friendly doorbells. The use of computer algorithms to analyze massive databases of footage and photographs could render human privacy extinct. It's a world where nearly everything we do, everywhere we go, everyone we associate with, and everything we buy — or look at and even think of buying — is recorded and can be tracked and analyzed at a mass scale for unimaginably awful purposes.
Biometric tracking enables the automated and pervasive monitoring of an entire population. There's ample evidence that this type of dragnet mass data collection and analysis is not useful for public safety, but it's perfect for oppression and social control.
Law enforcement defenders of facial recognition often state that the technology simply lets them do what they would be doing anyway: compare footage or photos against mug shots, driver's licenses, or other databases, but faster. And they're not wrong. But the speed and automation enabled by artificial intelligence-powered surveillance fundamentally changes the impact of that surveillance on our society. Being able to do something exponentially faster, using significantly less human and financial resources, alters the nature of that thing. The Fourth Amendment becomes meaningless in a world where private companies record everything we do and provide governments with easy tools to request and analyze footage from a growing, privately owned panopticon.
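Back-of-the-envelope arithmetic, using invented but plausible figures, makes that change of scale concrete:

```python
# Invented, back-of-the-envelope figures comparing manual review with
# automated face matching against one probe image.

database_size = 10_000_000               # photos in a combined database
human_seconds_per_compare = 3            # assumed time to eyeball one pair
machine_compares_per_second = 5_000_000  # assumed server throughput

human_hours = database_size * human_seconds_per_compare / 3600
machine_seconds = database_size / machine_compares_per_second

print(f"Manual review: roughly {human_hours:,.0f} hours")          # ~8,333
print(f"Automated search: roughly {machine_seconds:.0f} seconds")  # ~2
# The comparison itself is the same one police already perform; what
# changes is that a years-long manual task collapses into seconds,
# making population-scale searches routine.
```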
Tech giants like Microsoft and Amazon insist that facial recognition will be a lucrative boon for humanity, as long as there are proper safeguards in place. This disingenuous call for regulation is straight out of the same lobbying playbook that telecom companies have used to attack net neutrality and Silicon Valley has used to scuttle meaningful data privacy legislation. Companies are calling for regulation because they want their corporate lawyers and lobbyists to help write the rules of the road, to ensure those rules are friendly to their business models. They're trying to skip the debate about what role, if any, a technology this uniquely dangerous should play in a free and open society. They want to rush ahead to the discussion about how we roll it out.
We need spaces that are free from government and societal intrusion in order to advance as a civilization.
Facial recognition is spreading very quickly. But backlash is growing too. Several cities have already banned government entities, including police and schools, from using biometric surveillance. Others have local ordinances in the works, and there's state legislation brewing in Michigan, Massachusetts, Utah, and California. Meanwhile, there is growing bipartisan agreement in U.S. Congress to rein in government use of facial recognition. We've also seen significant backlash to facial recognition growing in the U.K., within the European Parliament, and in Sweden, which recently banned its use in schools following a fine under the General Data Protection Regulation (GDPR).
At least two frontrunners in the 2020 presidential campaign have backed a ban on law enforcement use of facial recognition. Many of the largest music festivals in the world responded to Fight for the Future's campaign and committed to not use facial recognition technology on music fans.
There has been widespread reporting on the fact that existing facial recognition algorithms exhibit systemic racial and gender bias, and are more likely to misidentify people with darker skin, or who are not perceived by a computer to be a white man. Critics are right to highlight this algorithmic bias. Facial recognition is being used by law enforcement in cities like Detroit right now, and the racial bias baked into that software is doing harm. It's exacerbating existing forms of racial profiling and discrimination in everything from public housing to the criminal justice system.
But the companies that make facial recognition assure us this bias is a bug, not a feature, and that they can fix it. And they might be right. Face scanning algorithms for many purposes will improve over time. But facial recognition becoming more accurate doesn't make it less of a threat to human rights. This technology is dangerous when it's broken, but at a mass scale, it's even more dangerous when it works. And it will still disproportionately harm our society's most vulnerable members.
Persistent monitoring and policing of our behavior breeds conformity, benefits tyrants, and enriches elites.
We need spaces that are free from government and societal intrusion in order to advance as a civilization. If technology makes it so that laws can be enforced 100 percent of the time, there is no room to test whether those laws are just. If the U.S. government had had ubiquitous facial recognition surveillance 50 years ago, when homosexuality was still criminalized, would the LGBTQ rights movement ever have formed? In a world where private spaces don't exist, would people have felt safe enough to leave the closet, gather, build community, and form a movement? Freedom from surveillance is necessary both to deviate from social norms and to dissent from authority; without it, societal progress halts.
Persistent monitoring and policing of our behavior breeds conformity, benefits tyrants, and enriches elites. Drawing a line in the sand around tech-enhanced surveillance is the fundamental fight of this generation. Lining up to get our faces scanned to participate in society doesn't just threaten our privacy, it threatens our humanity, and our ability to be ourselves.
[Editor's Note: Read the opposite perspective here.]