Can AI help create “smart borders” between countries?
In 2016, border patrols in Greece, Latvia and Hungary received a prototype for an AI-powered lie detector to help screen asylum seekers. The detector, called iBorderCtrl, was funded by the European Commission in the hope of eventually mitigating refugee crises like the one sparked by the Syrian civil war, which had peaked a year earlier.
iBorderCtrl, which analyzes facial micro-expressions, received but one slice of the Commission’s €34.9 billion border control and migration management budget. Still in development is the more ambitious EUMigraTool, a predictive AI system that will process internet news and social media posts to estimate not only the number of migrants heading for a particular country, but also the “risks of tensions between migrants and EU citizens.”
Both iBorderCtrl and EUMigraTool are part of a broader trend: the growing digitization of migration management. Outside of the EU, in refugee camps in Jordan, the United Nations introduced iris-scanning software to distribute humanitarian aid, including food and medicine. And in the United States, Customs and Border Protection has attempted to automate its services through an app called CBP One, which both travelers and asylum seekers can use to apply for I-94 forms, the arrival-departure record cards for people who are not U.S. citizens or permanent residents.
According to Koen Leurs, professor of gender, media and migration studies at Utrecht University in the Netherlands, we have arrived at a point where migration management has become so reliant on digital technology that the former can no longer be studied in isolation from the latter. Investigating this reliance for his new book, Digital Migration, Leurs came to the conclusion that applications like those mentioned above are more often than not a double-edged sword, presenting both benefits and drawbacks.
On the one hand, digital technology can make migration management more efficient and less labor intensive, enabling countries to process larger numbers of people at a time when global movement is on the rise due to globalization and political instability. Leurs also found that informal knowledge networks such as Informed Immigrant, an online resource that connects migrants to social workers and community organizers, have positively impacted the lives of their users. The same, Leurs notes, is true of platforms like Twitter, Facebook, and WhatsApp, all of which migrants use to stay in touch with each other as well as their families back home. “The emotional support you receive through social media is something we all came to appreciate during the COVID pandemic,” Leurs says. “For refugees, this had already been common knowledge for years.”
On the flip side, the automation of migration management – particularly through the use of AI – has drawn extensive criticism from human rights activists. Sharing their sentiment, Leurs attests that many so-called innovations are making life harder for migrants, not easier. There has been “a huge acceleration” in the way digital technologies “dehumanize people,” he says, and governments treat asylum seekers as test subjects for new inventions all along the borders of the developed world.
In Jordan, for example, refugees had to scan their irises in order to collect aid, prompting the question of whether such measures are ethical. Speaking to Reuters, Petra Molnar, a fellow at Harvard University’s Berkman Klein Center for Internet and Society, said she was troubled by the fact that this experiment was done on marginalized people. “The refugees are guinea pigs,” she said. “Imagine what would happen at your local grocery store if all of a sudden iris scanning became a thing. People would be up in arms. But somehow it is OK to do it in a refugee camp.”
Artificial intelligence programs have also been scrutinized for their unreliability, their opaque decision-making, and the race and gender biases they pick up from training data. In 2019, a female reporter from The Intercept tested iBorderCtrl and, despite answering all questions truthfully, was accused by the machine of lying four out of 16 times. Had she been waiting at a checkpoint on the Greek or Latvian border, she would have been flagged for additional screening – a measure that could jeopardize her chance of entry. Because of these biases, and the negative press they attracted, iBorderCtrl did not move past its test phase.
In April 2021, not long after iBorderCtrl was shut down, the European Commission proposed the world’s first-ever legal framework for AI regulation: the Artificial Intelligence Act. The act, which is still being developed, promises to prevent potentially “harmful” AI practices from being used in migration management. In the most recent draft, approved by the European Parliament’s Civil Liberties and Internal Market committees, the ban includes emotion recognition systems (like iBorderCtrl), predictive policing systems (like EUMigraTool), and biometric categorization systems (like iris scanners). The act also stipulates that AI must be subject to strict oversight and accountability measures.
While some worry the AI Act is not comprehensive enough, others wonder if it is in fact going too far. Indeed, many proponents of machine learning argue that, by placing a categorical ban on certain systems, governments will thwart the development of potentially useful technology. While facial recognition caused problems on the European border, it was helpful in Ukraine, where programs like those developed by software company Clearview AI are used to spot Russian spies, identify dead soldiers, and check movement in and out of war zones.
Instead of flat-out banning AI, why not strive to make it more reliable? “One of the most compelling arguments against AI is that it is inherently biased,” says Vera Raposo, an assistant professor of law at NOVA University in Lisbon specializing in digital law. “In truth, AI itself is not biased; it becomes biased due to human influence. It seems that complete eradication of biases is unattainable, but mitigation is possible. We can strive to reduce biases by employing more comprehensive and unbiased data in AI training and encompassing a wider range of individuals. We can also work on developing less biased algorithms, although this is challenging given that coders, being human, inherently possess biases of their own.”
Transparency is another obstacle that needs to be overcome. Leurs points out that, in migration management, AI often functions as a “black box”: the migration officers operating it cannot comprehend its complex decision-making process and thus cannot scrutinize its results. One solution to this problem is to have law enforcement work closely with AI experts. Alternatively, machine learning could be limited to gathering and summarizing information, leaving the evaluation of that information to actual people.
Raposo agrees AI is most effective when it enhances human performance rather than replacing it. On the topic of transparency, she does note that making an AI that is both sophisticated and easy to understand is a little bit like having your cake and eating it too. “In numerous domains,” she explains, “we might need to accept a reduced level of explainability in exchange for a high degree of accuracy (assuming we cannot have both).” Using healthcare as an analogy, she adds that “some medications work in ways not fully understood by either doctors or pharma companies, yet persist due to demonstrated efficacy in clinical trials.”
Leurs believes digital technologies used in migration management can be improved through a push for more conscientious research. “Technology is a poison and a medicine for that poison,” he argues, which is why new tech should be developed with its potential applications in mind. “Ethics has become a major concern in recent years. Increasingly, and particularly in the study of forced migration, researchers are posing critical questions like ‘what happens with the data that is gathered?’ and ‘who will this harm?’” In some cases, Leurs thinks, that last question may need to be reversed: we should be thinking about how we can actively disarm oppressive structures. “After all, our work should align with the interests of the communities it is going to affect.”
The Friday Five: Artificial DNA Could Give Cancer the Hook
The Friday Five covers five stories in research that you may have missed this week. There are plenty of controversies and troubling ethical issues in science – and we get into many of them in our online magazine – but this news roundup focuses on scientific creativity and progress to give you a therapeutic dose of inspiration headed into the weekend.
Here are the promising studies covered in this week's Friday Five:
- Artificial DNA gives cancer the hook
- This daily practice could improve relationships
- Can social media handle the truth?
- Injecting a gel could speed up recovery
- A blood pressure medicine for a long healthy life
9 Tips for Online Mental Health Therapy
Telehealth offers a vast improvement in access and convenience for all sorts of medical services, and online therapy for mental health is one of its most promising applications. With many online therapy options available, you can choose whatever works best for you. Yet many people are hesitant about using online therapy, and even those who do give it a try often don’t know how to make the most effective use of this treatment modality.
Why do so many feel uncertain about online therapy? A major reason stems from its novelty. Humans are creatures of habit, prone to falling for what behavioral scientists like myself call the status quo bias, a predisposition to stick to traditional practices and behaviors. Many people reject innovative solutions even when they would be helpful. Thus, while teletherapy was available long before the pandemic, and might have fit the needs of many potential clients, relatively few took advantage of this option.
Even when we do try new methodologies, we often don’t use them effectively, because we cling to the approaches that worked in previous situations. Scientists call this behavior functional fixedness. It’s the hammer-and-nail syndrome: “when all you have is a hammer, everything looks like a nail.”
These two mental blindspots, the status quo bias and functional fixedness, impact decision making in many areas of life. Fortunately, recent research has shown effective and pragmatic strategies to defeat these dangerous errors in judgment. The nine tips below will help you make the best decisions to get effective online therapy, based on the latest research.
Trust the science of online therapy
Extensive research shows that, for most patients, online therapy offers the same benefits as in-person therapy.
For instance, a 2014 study in the Journal of Affective Disorders reported that online treatment proved just as effective as face-to-face treatment for depression. A 2018 study, published in the Journal of Psychological Disorders, found that online cognitive behavioral therapy, or CBT, was just as effective as face-to-face treatment for major depression, panic disorder, social anxiety disorder, and generalized anxiety disorder. And a 2014 study in Behaviour Research and Therapy found that online CBT was effective in treating anxiety disorders and helped lower the cost of treatment.
During the forced teletherapy of COVID, therapists worried that those with serious mental health conditions would be less likely to convert to teletherapy. Yet research published in Counselling Psychology Quarterly has helped to alleviate that concern. It found that those with schizophrenia, bipolar disorder, severe depression, PTSD, and even suicidality converted to teletherapy at about the same rate as those with less severe mental health challenges.
Yet teletherapy may not be for everyone. For example, adolescents had the most varied response to teletherapy, according to a 2020 study in Family Process. Some adapted quickly and easily, while others found it awkward and anxiety-inducing. On the whole, children with trauma respond worse to online therapy, per a 2020 study in Child Abuse & Neglect. The treatment of mental health issues can sometimes require in-person interactions, such as the use of eye movement desensitization and reprocessing to treat post-traumatic stress disorder. And according to a 2020 study from the Journal of Humanistic Psychology, online therapy may not be as effective for those suffering from loneliness.
Leverage the strengths of online therapy
Online therapy is much more accessible than in-person therapy for those with a decent internet connection, webcam, mic, and digital skills. You don’t have to commute to your therapist’s office, wasting time and money. You can take much less medical leave from work, saving you money and hassle with your boss. And if you live in a sparsely populated area, online therapy could give you access to many specialized kinds of therapy that aren’t available locally.
Online options are also much quicker to start, compared to the long waiting lists for in-person therapy, and you have much more convenient scheduling options. You won’t have to worry about running into someone you know in the waiting room. Online therapy is easier to conceal from others, which reduces stigma, and many patients feel more comfortable and open to sharing in the privacy and comfort of their own home.
You can use a variety of communication tools suited to your needs at any given time. Video can be used to start a relationship with a therapist and have more intense and nuanced discussions, but can be draining, especially for those with social anxiety. Voice-only may work well for less intense discussions. Email offers a useful option for long-form, well-thought-out messages. Texting is useful for quick, real-time questions, answers, and reinforcement.
Plus, online therapy is often cheaper than in-person therapy. In the midst of COVID, many insurance providers have decided to cover online therapy.
Address the weaknesses
One weakness is the requirement for appropriate technology and skills to engage in online therapy. Another is the difficulty of forming a close therapeutic relationship with your therapist. You won’t be able to communicate nonverbal cues as fully, and the therapist will not be able to read you as well, so you will need to be more deliberate in how you express yourself.
Another important issue is that online therapy is subject to less government oversight compared to the in-person approach, which is regulated in each state, providing a baseline of quality control. As a result, you have to do more research on the providers that offer online therapy to make sure they’re reputable, use only licensed therapists, and have a clear and transparent pay structure.
Be intentional about advocating for yourself
Figure out what kind of goals you want to achieve. Consider how, within the context of your goals, you can leverage the benefits of online therapy while addressing the weaknesses. Write down and commit to achieving your goals. Remember, you need to be your own advocate, especially in the less regulated space of online therapy, so focus on being proactive in achieving your goals.
Develop your Hero’s Journey
Because online therapy can occur at various times of day through video calls, emails and texts, it might feel more open-ended and less organized, which can have advantages and disadvantages. One way to give it more structure is to ground these interactions in the story of your self-improvement. Our minds perceive the world through narratives. Create a story of how you’ll get from where you are to where you want to go, meaning your goals.
A good template to use is the Hero’s Journey. Start the narrative with where you are and what caused you to seek therapy. Write about the obstacles you will need to overcome and the kind of help you’ll need from a therapist in the process. Then describe the end state: how you will be better off after this journey, including what you will have learned.
Especially in online therapy, you need to be on top of things. Too many people let the therapist manage the treatment plan. As you pursue your hero’s journey, another way to organize for success is to take notes on your progress, and reevaluate how you’re doing every month with your therapist.
Identify your ideal mentor
Since it’s more difficult to be confident about the quality of service providers in an online setting, you should identify in advance the traits of your desired therapist. Every Hero’s Journey involves a mentor figure who guides the protagonist through this journey. So who’s your ideal mentor? Write out their top 10 characteristics, from most to least important.
For example, you might want someone who is:
- Empathetic
- Caring
- Good listener
- Logical
- Direct
- Questioning
- Non-judgmental
- Organized
- Curious
- Flexible
That’s my list. Depending on what challenge you’re facing and your personality and preferences, you should make your own. Then, when you are matched with a therapist, evaluate how well they fit your ideal list.
Fail fast
When you first match with a therapist, try to fail fast. That means, instead of focusing on getting treatment, focus on figuring out if the therapist is a good match based on the traits you identified above. That will enable you to move on quickly if they’re not, and it’s very much worth it to figure that out early.
Tell them your goals, your story, and your vision of your ideal mentor. Ask them whether they think they are a match, and what kind of a treatment plan they would suggest based on the information you provided. And observe them yourself in your initial interactions, focusing on whether they’re a good match. Often, you’ll find that your initial vision of your ideal mentor is incomplete, and you’ll learn through doing therapy what kind of a therapist is the best fit for you.
Choose a small but meaningful subgoal to work on first
This first subgoal should be meaningful and impactful for improving your mental health, but not a big stretch for you to achieve. It serves as a tool for evaluating whether the therapist is indeed a good fit for you, and it will also help you judge whether the treatment plan makes sense or needs to be revised.
Know when to wrap things up
As you approach the end of your planned work and you see you’re reaching your goals, talk to the therapist about how to wrap up rather than letting things drag on for too long. You don’t want to become dependent on therapy: it’s meant to be a temporary intervention. Some less scrupulous therapists will insist that therapy should never end and we should all stay in therapy forever, and you want to avoid falling for this line. When you reach your goals, end your therapy, unless you discover a serious new reason to continue it. Still, it may be wise to set up occasional check-ins once every three to six months to make sure you’re staying on the right track.