Can AI help create “smart borders” between countries?
In 2016, border patrols in Greece, Latvia and Hungary received a prototype for an AI-powered lie detector to help screen asylum seekers. The detector, called iBorderCtrl, was funded by the European Commission in hopes of eventually mitigating refugee crises like the one that had peaked a year earlier, driven by the Syrian civil war.
iBorderCtrl, which analyzes micro-expressions in the face, received but one slice of the Commission’s €34.9 billion border control and migration management budget. Still in development is the more ambitious EUMigraTool, a predictive AI system that will process internet news and social media posts to estimate not only the number of migrants heading for a particular country, but also the “risks of tensions between migrants and EU citizens.”
Both iBorderCtrl and EuMigraTool are part of a broader trend: the growing digitization of migration-related technologies. Outside of the EU, in refugee camps in Jordan, the United Nations introduced iris scanning software to distribute humanitarian aid, including food and medicine. And in the United States, Customs and Border Protection has attempted to automate its services through an app called CBP One, which both travelers and asylum seekers can use to apply for I-94 forms, the arrival-departure record cards for people who are not U.S. citizens or permanent residents.
According to Koen Leurs, professor of gender, media and migration studies at Utrecht University in the Netherlands, we have arrived at a point where migration management has become so reliant on digital technology that the former can no longer be studied in isolation from the latter. Investigating this reliance for his new book, Digital Migration, Leurs came to the conclusion that applications like those mentioned above are more often than not a double-edged sword, presenting both benefits and drawbacks.
On the one hand, digital technology can make migration management more efficient and less labor intensive, enabling countries to process larger numbers of people in a time when global movement is on the rise due to globalization and political instability. Leurs also discovered that informal knowledge networks such as Informed Immigrant, an online resource that connects migrants to social workers and community organizers, have positively impacted the lives of their users. The same, Leurs notes, is true of platforms like Twitter, Facebook, and WhatsApp, all of which migrants use to stay in touch with each other as well as their families back home. “The emotional support you receive through social media is something we all came to appreciate during the COVID pandemic,” Leurs says. “For refugees, this had already been common knowledge for years.”
On the flip side, the automation of migration management – particularly through the use of AI – has drawn extensive criticism from human rights activists. Sharing their sentiment, Leurs attests that many so-called innovations are making life harder for migrants, not easier. There has been “a huge acceleration,” he says, in the way digital technologies “dehumanize people”: governments treat asylum seekers as test subjects for new inventions all along the borders of the developed world.
In Jordan, for example, refugees had to scan their irises in order to collect aid, prompting the question of whether such measures are ethical. Speaking to Reuters, Petra Molnar, a fellow at Harvard University’s Berkman Klein Center for Internet and Society, said that she was troubled by the fact that this experiment was done on marginalized people. “The refugees are guinea pigs,” she said. “Imagine what would happen at your local grocery store if all of a sudden iris scanning became a thing,” she pointed out. “People would be up in arms. But somehow it is OK to do it in a refugee camp.”
Artificial intelligence programs have also been scrutinized for their unreliability and opaque processing, and for the race and gender biases they pick up from training data. In 2019, a female reporter from The Intercept tested iBorderCtrl and, despite answering all questions truthfully, was accused by the machine of lying on four out of 16 answers. Had she been waiting at a checkpoint on the Greek or Latvian border, she would have been flagged for additional screening – a measure that could jeopardize her chance of entry. Because of these biases, and the negative press they attracted, iBorderCtrl did not move past its test phase.
In April 2021, not long after iBorderCtrl was shut down, the European Commission proposed the world’s first-ever legal framework for AI regulation: the Artificial Intelligence Act. The act, which is still being developed, promises to prevent potentially “harmful” AI practices from being used in migration management. In the most recent draft, approved by the European Parliament’s Civil Liberties and Internal Market committees, the ban included emotion recognition systems (like iBorderCtrl), predictive policing systems (like EUMigraTool), and biometric categorization systems (like iris scanners). The act also stipulates that AI must be subject to strict oversight and accountability measures.
While some worry the AI Act is not comprehensive enough, others wonder if it is in fact going too far. Indeed, many proponents of machine learning argue that, by placing a categorical ban on certain systems, governments will thwart the development of potentially useful technology. While facial recognition caused problems on the European border, it was helpful in Ukraine, where programs like those developed by software company Clearview AI are used to spot Russian spies, identify dead soldiers, and check movement in and out of war zones.
Instead of flat-out banning AI, why not strive to make it more reliable? “One of the most compelling arguments against AI is that it is inherently biased,” says Vera Raposo, an assistant professor of law at NOVA University in Lisbon specializing in digital law. “In truth, AI itself is not biased; it becomes biased due to human influence. It seems that complete eradication of biases is unattainable, but mitigation is possible. We can strive to reduce biases by employing more comprehensive and unbiased data in AI training and encompassing a wider range of individuals. We can also work on developing less biased algorithms, although this is challenging given that coders, being human, inherently possess biases of their own.”
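To make Raposo’s point about data concrete: one standard mitigation is to reweight training examples so that under-represented groups carry more weight during learning. Below is a minimal sketch of inverse-frequency weighting in Python; the records and group labels are invented for illustration.

```python
from collections import Counter

# Hypothetical training records: (features, group_label)
records = [
    ({"age": 34}, "group_a"),
    ({"age": 29}, "group_a"),
    ({"age": 41}, "group_a"),
    ({"age": 52}, "group_b"),  # under-represented group
]

counts = Counter(group for _, group in records)
n = len(records)

# Inverse-frequency weights: rarer groups get larger weights, so a model
# trained with these weights does not simply optimize for the majority group.
weights = [n / (len(counts) * counts[group]) for _, group in records]
print(weights)  # group_a examples weigh ~0.67 each; the group_b example weighs 2.0
```

Many learning libraries accept such per-example weights (scikit-learn, for instance, takes a `sample_weight` argument in its `fit` methods). As Raposo notes, this mitigates rather than eliminates bias, since skew can also hide in the features themselves.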
Transparency is another obstacle that needs to be overcome. Leurs points out that, in migration management, AI often functions as a “black box”: the migration officers operating it cannot follow its complex decision-making process and are therefore unable to scrutinize its results. One solution to this problem is to have law enforcement work closely with AI experts. Alternatively, machine learning could be limited to gathering and summarizing information, leaving the evaluation of that information to actual people, as in the sketch below.
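Here is a minimal sketch of that second arrangement, with the model confined to summarizing and a named human making the call. All function and field names are hypothetical; this is an illustration of the division of labor, not any agency’s actual system.

```python
from dataclasses import dataclass

@dataclass
class CaseSummary:
    applicant_id: str
    summary: str            # machine-generated digest of the documents
    model_confidence: float

def summarize_case(documents: list[str]) -> CaseSummary:
    """Stand-in for an ML step that only condenses information.
    It produces a digest; it never issues a decision."""
    digest = " / ".join(doc[:40] for doc in documents)
    return CaseSummary(applicant_id="A-1024", summary=digest, model_confidence=0.82)

def decide(summary: CaseSummary, officer: str) -> dict:
    """The decision is always made by a named human officer, so there is a
    person who can explain, and be held accountable for, the outcome."""
    return {
        "applicant_id": summary.applicant_id,
        "decided_by": officer,  # accountability lives here, not in the model
        "evidence_digest": summary.summary,
        "model_confidence_shown": summary.model_confidence,  # advisory only
    }

case = summarize_case(["Passport scan ...", "Interview transcript ..."])
print(decide(case, officer="Officer K. Janssen"))
```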
Raposo agrees AI is most effective when it enhances human performance rather than replacing it. On the topic of transparency, she does note that making an AI that is both sophisticated and easy to understand is a little bit like having your cake and eating it too. “In numerous domains,” she explains, “we might need to accept a reduced level of explainability in exchange for a high degree of accuracy (assuming we cannot have both).” Using healthcare as an analogy, she adds that “some medications work in ways not fully understood by either doctors or pharma companies, yet persist due to demonstrated efficacy in clinical trials.”
Leurs believes digital technologies used in migration management can be improved through a push for more conscientious research. “Technology is a poison and a medicine for that poison,” he argues, which is why new tech should be developed with its potential applications in mind. “Ethics has become a major concern in recent years. Increasingly, and particularly in the study of forced migration, researchers are posing critical questions like ‘what happens with the data that is gathered?’ and ‘who will this harm?’” In some cases, Leurs thinks, that last question may need to be reversed: we should be thinking about how we can actively disarm oppressive structures. “After all, our work should align with the interests of the communities it is going to affect.”
Have You Heard of the Best Sport for Brain Health?
The Friday Five covers five stories in research that you may have missed this week. There are plenty of controversies and troubling ethical issues in science – and we get into many of them in our online magazine – but this news roundup focuses on scientific creativity and progress to give you a therapeutic dose of inspiration headed into the weekend.
Listen on Apple | Listen on Spotify | Listen on Stitcher | Listen on Amazon | Listen on Google
Here are the promising studies covered in this week's Friday Five:
- Reprogram cells to a younger state
- Pick up this sport for brain health
- Do all mental illnesses have the same underlying cause?
- New test could diagnose autism in newborns
- Scientists 3D print an ear and attach it to a woman
Can blockchain help solve the Henrietta Lacks problem?
Science has come a long way since Henrietta Lacks, a Black woman from Baltimore, succumbed to cervical cancer at age 31 in 1951 -- only eight months after her diagnosis. Since then, research involving her cancer cells has advanced scientific understanding of the human papillomavirus, polio vaccines, medications for HIV/AIDS and in vitro fertilization.
Today, the World Health Organization reports that those cells are essential in mounting a COVID-19 response. But they were commercialized without the awareness or permission of Lacks or her family, who have filed a lawsuit against a biotech company for profiting from these “HeLa” cells.
While obtaining an individual's informed consent has become standard procedure before the use of tissues in medical research, many patients still don’t know what happens to their samples. Now, a new phone-based app is aiming to change that.
Through a pilot program initiated in October by researchers at the Johns Hopkins Berman Institute of Bioethics and the University of Pittsburgh’s Institute for Precision Medicine, tissue donors can now track what scientists do with their samples while safeguarding their privacy. The program uses blockchain technology to offer patients this opportunity through the University of Pittsburgh's Breast Disease Research Repository, while assuring that their identities remain anonymous to investigators.
A blockchain is a digital, tamper-proof ledger of transactions, duplicated and distributed across a network of computers. Whenever a transaction occurs with a patient’s sample, multiple stakeholders can track it while the owner’s identity remains encrypted. Special certificates called “nonfungible tokens,” or NFTs, represent patients’ unique samples on a trusted and widely used blockchain, reinforcing transparency.
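To make the tamper-proofing mechanism concrete, here is a toy Python sketch of a hash-chained ledger. It illustrates the principle only; it is not the pilot program’s actual implementation, and every name in it is invented. Each entry’s hash covers the previous entry’s hash, so altering any past record invalidates everything after it.

```python
import hashlib
import json
import time

class SampleLedger:
    """A toy, in-memory stand-in for a blockchain: a hash-chained ledger
    of transactions involving a de-identified tissue-sample token."""

    def __init__(self):
        self.chain = []

    def add_transaction(self, sample_token: str, action: str, actor: str):
        """Record an action against a sample token. The token stands in
        for the patient; no identifying data is stored on the chain."""
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {
            "sample_token": sample_token,  # opaque ID, akin to a minimal study identifier
            "action": action,
            "actor": actor,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # The hash covers the entry's contents plus the previous hash,
        # so editing any past entry breaks every hash after it.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(entry)

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

ledger = SampleLedger()
ledger.add_transaction("token-7f3a", "collected", "hospital biobank")
ledger.add_transaction("token-7f3a", "transferred", "research lab")
print(ledger.verify())  # True; altering any stored field makes this False
```

A real deployment would distribute copies of the ledger across many nodes and use an established blockchain rather than a single in-memory list; the point here is only the tamper-evidence the hash chain provides.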
“Healthcare is very data rich, but control of that data often does not lie with the patient,” said Julius Bogdan, vice president of analytics for North America at the Healthcare Information and Management Systems Society (HIMSS), a Chicago-based global technology nonprofit. “NFTs allow for the encapsulation of a patient’s data in a digital asset controlled by the patient.” He added that this technology enables a more secure and informed method of participating in clinical and research trials.
Before this technology, the de-identification of patients’ samples during biomedical research had the unintended consequence of preventing them from discovering what researchers find -- even if that data could benefit their health. A solution was urgently needed, said Marielle Gross, assistant professor of obstetrics, gynecology and reproductive science and bioethics at the University of Pittsburgh School of Medicine.
“A researcher can learn something from your bio samples or medical records that could be life-saving information for you, and they have no way to let you or your doctor know,” said Gross, who is also an affiliate assistant professor at the Berman Institute. “There’s no good reason for that to stay the way that it is.”
For instance, blockchain could be used to notify people if cancer researchers discover that they have certain risk factors. Gross estimated that less than half of breast cancer patients are tested for mutations in BRCA1 and BRCA2 — tumor suppressor genes that are important in combating cancer. With normal function, these genes help prevent breast, ovarian and other cells from proliferating in an uncontrolled manner. If researchers find mutations, it’s relevant for a patient’s and family’s follow-up care — and that’s a prime example of how this newly designed app could play a life-saving role, she said.
Liz Burton was one of the first patients at the University of Pittsburgh to opt for the app -- called de-bi, short for decentralized biobank -- before undergoing a mastectomy for early-stage breast cancer in November, after the cancer was detected on a routine mammogram. She often takes part in medical research and looks forward to tracking her tissues.
“Anytime there’s a scientific experiment or study, I’m quick to participate -- to advance my own wellness as well as knowledge in general,” said Burton, 49, a life insurance service representative who lives in Carnegie, Pa. “It’s my way of contributing.”
The pilot program raises the issue of what investigators may owe study participants, especially since certain populations, such as Black and Indigenous peoples, have historically been treated unethically in the name of science. “It’s a truly laudable effort,” Tamar Schiff, a postdoctoral fellow in medical ethics at New York University’s Grossman School of Medicine, said of the endeavor. “Research participants are beautifully altruistic.”
Lauren Sankary, a bioethicist and associate director of the neuroethics program at Cleveland Clinic, agrees that the pilot program provides increased transparency for study participants regarding how scientists use their tissues while acknowledging individuals’ contributions to research.
However, she added, “it may require researchers to develop a process for ongoing communication to be responsive to additional input from research participants.”
Peter H. Schwartz, professor of medicine and director of Indiana University’s Center for Bioethics in Indianapolis, said the program is promising, but he wonders what will happen if a patient has concerns about a particular research project involving their tissues.
“I can imagine a situation where a patient objects to their sample being used for some disease they’ve never heard about, or which carries some kind of stigma like a mental illness,” Schwartz said, noting that researchers would have to evaluate how to react. “There’s no simple answer to those questions, but the technology has to be assessed with an eye to the problems it could raise.”
As a result, researchers may need to factor in how much information to share with patients and how to explain it, Schiff said. There are also concerns that in tracking their samples, patients could tell others what they learned before researchers are ready to publicly release this information. However, Bogdan, the vice president of the HIMSS nonprofit, believes only a minimal study identifier would be stored in an NFT, not patient data, research results or any type of proprietary trial information.
Some patients may be confused by blockchain and reluctant to embrace it. “The complexity of NFTs may prevent the average citizen from capitalizing on their potential or vendors willing to participate in the blockchain network,” Bogdan said. “Blockchain technology is also quite costly in terms of computational power and energy consumption, contributing to greenhouse gas emissions and climate change.”
In addition, this nascent, groundbreaking technology is immature and vulnerable to data security flaws, disputes over intellectual property rights and privacy issues, though it does offer baseline protections to maintain confidentiality. To truly make a difference, blockchain must enable broad consent from patients, not just de-identification, said Robyn Shapiro, a bioethicist and founding attorney at Health Sciences Law Group near Milwaukee.
The Henrietta Lacks story is a prime example, Shapiro noted. During her treatment for cervical cancer at Johns Hopkins, Lacks’s tissue was de-identified (albeit not entirely, because her cell line, HeLa, bore her initials). After her death, those cells were replicated and distributed for important and lucrative research and product development purposes without her knowledge or consent.
Nonetheless, Shapiro thinks that the initiative by the University of Pittsburgh and Johns Hopkins has the potential to solve some of the ethical challenges involved in the research use of biospecimens. “Compared to the system that allowed Lacks’s cells to be used without her permission,” Shapiro said, “blockchain technology using nonfungible tokens that allow patients to follow their samples may enhance transparency, accountability and respect for persons who contribute their tissue and clinical data for research.”
Read more about laws that have denied people the rights to their own cells.