Essay

AI, surveillance and the privatisation of migration management

The rise of the border industrial complex

Artificial intelligence (AI) and digital surveillance technologies have become central to the global governance of mixed migration. From biometric registration inside refugee camps to algorithmic risk scoring at the border, these systems increasingly shape how states and international organisations define, control and regulate mobility. During these times of geopolitical turbulence, governments are embracing AI to render people on the move as knowable and manageable, framing migration as a data problem and relying on technical solutions to so-called migration crises. Yet this shift toward digital border governance raises urgent concerns about transparency, accountability and the human rights impacts of these unregulated and high-risk technologies. High-risk technologies used in migration are also deeply embedded in a system of power relations that reproduces structural inequalities, systemic racism and exclusion.

Governments are not the only powerful actors managing mobility in the digital age. Private companies are increasingly setting the agenda on how to innovate at the border. Since governments are often unable to develop technologies in-house, they rely on public-private partnerships to do so. When governments frame migration as a “problem” to solve, the private sector steps in and offers a technological solution, whether a robo-dog at the US-Mexico border or an AI lie detector in the EU. These so-called solutions are often military-grade, built on opaque algorithms and governed by very few human rights safeguards. They are also very lucrative, giving rise to a USD 70 billion global border industrial complex. This industry is also deeply tied to politics. For example, the day after the re-election of current US President Donald Trump, stocks of private prison and surveillance companies soared, as did those of Elon Musk’s Tesla; Musk suddenly had an open-door invitation to the Oval Office, along with other tech actors who are increasingly able to influence government policy.

Therefore, the introduction of AI-type tools does not merely streamline border control: it reconfigures the very logic of migration governance. These systems are often deployed without adequate legal safeguards or meaningful oversight, leaving people on the move exposed to opaque decision making and rights violations. Technologies initially developed for military use are now repurposed to surveil and sort mobile populations, often under the guise of humanitarian efficiency or national security. As a result, digital infrastructures are becoming central to how borders are enforced, movement is restricted and vulnerability is managed.

This essay examines how AI and digital surveillance are transforming migration governance across key global regions, including North America, Europe, the Gulf and East Africa. It explores the expansion of public-private partnerships and the growth of the border surveillance industry, the political role of misinformation and narrative construction, and the ways in which people on the move and human smugglers alike navigate these technologies. While the promise of more efficient governance is often highlighted, the legal, ethical and humanitarian risks of digital technologies are profound and growing in a climate that is curtailing human movement with increasing violence.

Tech-driven borders: the new infrastructure of migration control

How AI-driven systems are increasingly becoming a central part of migration management

From visa processing and asylum screening to deportation prioritisation and real-time border surveillance, AI-driven systems are being deployed to manage migration at every stage across multiple jurisdictions globally. These systems vary widely in complexity and legality but share a common logic: data-driven control of mobility under conditions of heightened securitisation and politicisation of migration.

In the United States, federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have adopted predictive analytics, facial recognition and machine learning tools to monitor people’s digital footprints and assess risk profiles, in addition to the growing physical surveillance infrastructure at both the US-Mexico and US-Canada borders, such as AI towers, drones and even robo-dogs. Private companies like Clearview AI and Palantir Technologies provide the platforms that scrape social media and public data to generate behavioural assessments used in detention and deportation decisions. These tools operate in legal grey zones, bypassing traditional requirements for judicial oversight or due process. During the second Trump presidency, there has been an exponential rise in automated surveillance, including the scraping of social media data to revoke visas, and a growing reliance on algorithmic decision making. Similar trends are visible in Canada, where automated decision systems have been used in immigration adjudication processes, raising concerns over transparency and algorithmic bias, while border surveillance infrastructure continues to grow.

In the European Union, the EU’s border force Frontex is a primary driver of technological development. Budgets have ballooned: the EU’s overall 2021-2027 financial framework increased funding for border policies by 94 percent (EUR 113.3 billion) compared to the previous budget cycle, including EUR 5.2 billion for the Border Management and Visa Instrument (BMVI) of the Integrated Border Management Fund, which covers technical equipment for Frontex. Frontex has partnered with major defence contractors and AI developers to enhance its surveillance infrastructure, including a special interest in predictive analytics for border interdictions. Reliance on interoperable databases and data infrastructure is also growing across the region, such as the ongoing development of the European Travel Information and Authorisation System (ETIAS), which consolidates biometric and travel data across multiple EU databases. The use of AI to assess travel risk and automate border screening processes is accelerating, despite evidence that such tools may exacerbate racial profiling and undermine procedural fairness. The EU also missed an opportunity to strengthen global governance of AI, including in migration. Heralded as the first regional attempt to govern AI and entering into force in August 2024 after years of negotiations, the EU’s Artificial Intelligence Act (AI Act) offers very limited protection for non-citizens and contains loopholes for migration-related AI applications. These loopholes include not recognising various migration-related applications (including biometric identification systems, fingerprint scanners and forecasting tools used to predict, interdict and curtail migration) as high risk, weakening what little governance and oversight mechanisms exist.
The act also allows for the expansion of large and interoperable databases and does not set guardrails against the exportation of surveillance systems developed by EU companies – even those which may be banned in the EU – to places like China and the Occupied Palestinian Territories.

In the Gulf States and the Middle East, biometric surveillance of migrant labourers is a central feature of migration control. Systems such as facial recognition, iris scans and digital ID cards are used to monitor movement, enforce curfews and restrict access to services. These technologies, often provided by Western and Chinese firms, are embedded in exploitative labour governance regimes like the Kafala sponsorship system, where surveillance functions as a tool of coercion and containment. Israel is another major player in the development of surveillance technology which is first tested out in Palestine and then exported for border enforcement to the European Union and the United States, including Israeli drones that patrol the Mediterranean and AI surveillance towers along the US-Mexico border, among other surveillance technologies.

In East Africa, another trend of digitisation and surveillance is unfolding through international humanitarian actors, with predictive analytics for humanitarian preparedness, such as the Foresight Model of the Danish Refugee Council, on the rise. These tools use big data to predict displacement in order to aid people as quickly and efficiently as possible. While predictive analytics in this space offer laudable solutions to complex problems, predicting human movement without adequate safeguards presents the risk of co-optation by other actors, such as border enforcement, which continually signals its appetite for developing similar predictive tools to use for interdictions. The United Nations High Commissioner for Refugees (UNHCR) has also implemented biometric registration programmes in Kenyan and Ugandan refugee camps, often in partnership with private firms. For example, the collaboration between UNHCR and the company IrisGuard in Jordanian refugee camps uses iris scans to manage aid distribution, but this raises concerns about consent, data ownership and surveillance creep. While presented as tools for efficiency and fraud prevention, these systems also consolidate sensitive biometric data without robust safeguards or meaningful oversight mechanisms.

What are the risks?

Across all these contexts, several critical risks emerge. Privacy is routinely undermined by the collection of sensitive biometric and behavioural data with minimal transparency. Free and informed consent is also threatened when people feel coerced into giving up their data in spaces with vast power differentials, like a refugee camp. Discrimination is often encoded into algorithmic systems, whether through biased training data or unaccountable decision-making logics. Procedural rights, such as the ability to challenge decisions or understand how data is being used, are frequently denied to people on the move, who are already structurally excluded from the legal protections available to citizens. The result is a global system where technological opacity compounds existing inequalities and systemic racism. This system reinforces hierarchies between who is allowed to enter and who is turned away, with the Global North enjoying increased mobility while the Majority World continues to be tracked, surveilled and, ultimately, excluded.

Furthermore, the global nature of these technologies creates a patchwork of accountability gaps. International data-sharing agreements and outsourced tech infrastructure weaken national oversight, while global migration institutions such as international humanitarian organisations often lack strong governance frameworks for AI procurement and deployment. As AI tools proliferate, they risk entrenching a digital border regime in which migration is managed less by law than by proprietary code and predictive algorithms.

Narratives and politics: AI, misinformation and the construction of a border crisis

AI technologies are not only deployed at physical borders. They also operate in digital spaces, such as social media platforms, where migration is politicised and public opinion is shaped. Social media, in particular, has played an increasingly powerful role in shaping elections and exacerbating anti-migrant sentiments, with politicians manipulating numbers and stoking fears of migration for political gains. For example, misinformation and disinformation along with hateful language have become the hallmarks of several far-right leaders, including US President Donald Trump, Hungary’s Prime Minister Viktor Orbán and others. Governments and private actors increasingly also use AI to monitor, interpret and influence online discourse around migration. This type of technological weaponisation expands the reach of border control into the realm of perception management, where mobile communities are often framed as threats and algorithms amplify xenophobic narratives. The convergence of AI, surveillance and disinformation is not a passive byproduct of the digital age: it has become a strategic feature of migration governance. AI is also actively shaping how migration is portrayed. Automated content recommendation systems used by platforms like Facebook (Meta), X (formerly Twitter) and TikTok have been shown to amplify anti-migrant rhetoric, particularly during periods of political tension. For example, in the run-up to the 2020 US election, AI-curated feeds on Meta platforms promoted posts associating people on the move with crime, disease and invasion – narratives that have no basis in fact but generate high engagement metrics.

In the United States, multiple federal agencies including the Department of Homeland Security (DHS) and ICE have also implemented AI-driven systems to monitor social media activity for so-called risk profiling. Tools provided by companies like Palantir scrape public data and generate behavioural risk assessments of individuals used in immigration adjudication and enforcement decisions. For example, since the launch of the Trump Administration’s AI-driven Catch and Revoke initiative, over 600 visas have been rescinded.

The effects of AI-fuelled misinformation are not only discursive; they also directly shape policy and enforcement. Algorithms help construct risk profiles not only for individuals, but for entire groups or migratory routes, reinforcing the notion of ‘dangerous flows’ that require preemptive control. Predictive models used by ICE and CBP to estimate future arrivals or ‘threats’ often rely on decontextualised data scraped from social media, weather patterns and economic indicators. These models have been critiqued for encoding racial and national biases as they routinely target migrants from Latin America and the Middle East for enhanced scrutiny.

This AI-driven securitisation extends to platform-moderation practices, where migration-related content is often over-moderated or removed based on automated classifiers. People on the move who use social media to document abuses at borders, share survival information or challenge state narratives are silenced by algorithmic moderation systems that conflate their content with extremism or illegal facilitation. As predictive profiling and misinformation combine, people on the move are increasingly treated, not as rights-bearing individuals to be protected, but as risks to be managed. The use of AI in narrative construction contributes to a securitised discourse which justifies invasive technologies, in turn reinforcing exclusionary narratives.

The role of privatisation and the growing border industrial complex

The global expansion of AI in migration governance has not been led solely by governments or humanitarian institutions. Increasingly, private technology companies are the architects, contractors and operators of border infrastructure. These firms play a central role in shaping how migration is understood, monitored and managed: through contracts that are often opaque, under-regulated and driven by profit rather than protection. Many of these players are active in developing technologies for the defence industry, used in wars and causing mass forced displacement. This creates a sort of perverse ‘circular economy’ where companies benefit twice: once from technologies that create displacement and then, a second time, from technologies that contain it.

Big Tech firms have become powerful geopolitical actors in migration governance. US-based companies like Palantir, Microsoft, Clearview AI, Anduril and Amazon are deeply embedded in the infrastructures of immigration enforcement. In Europe, a similar ecosystem has emerged. The EU’s border agency Frontex has increasingly outsourced AI development and biometric surveillance systems to major defence and IT contractors. These partnerships enable real-time drone surveillance, thermal imaging and facial recognition integration at external borders. While framed as “smart borders”, these technologies often facilitate pushbacks, automated risk screening and racialised profiling, raising questions about their compliance with international law.

The humanitarian sector has not been immune to this privatisation trend either. Public-private partnerships between UN agencies and tech companies now play a central role in refugee identification, aid distribution and data analytics. These arrangements often begin as pilot projects or innovation partnerships but quickly evolve into core infrastructure with long-term dependencies. The World Food Programme’s partnership with Palantir is a prominent example. Initially justified to enhance logistics and food security, the collaboration granted Palantir access to vast datasets on refugee movements and aid delivery, raising alarm over data ownership, consent and surveillance. The IOM has also commissioned a digital tool to speed up the removal of people from their host countries, contracting with private firms that have previously been used by the Bangladeshi Rapid Action Battalion, a US-sanctioned elite police unit also alleged to be a death squad.

These public-private partnerships reveal a deeper structural problem: the outsourcing of core governance functions to actors who are not bound by the same accountability frameworks as states or UN bodies. As procurement processes prioritise innovation and scalability over transparency and consent, tech firms are increasingly positioned as de facto migration policymakers. Their systems determine eligibility, shape risk profiles and mediate access to rights, with little-to-no external scrutiny. The over-reliance on private actors also weakens state and institutional accountability in several ways. First, it obscures responsibility: when a wrongful deportation or biometric data breach occurs, it is often unclear whether the state, the vendor or a subcontractor is at fault. Second, it entrenches asymmetries of power, as private companies gain leverage through proprietary technologies and exclusive access to sensitive data. Third, it promotes a model of governance that treats migration as a logistics problem to be optimised, rather than a human rights issue requiring care, deliberation and justice.

Despite growing awareness of these risks, regulatory mechanisms remain weak. Humanitarian procurement contracts are rarely made public, and data protection standards vary widely by country and agency. While the EU’s AI Act contains provisions on transparency and high-risk systems, its jurisdiction is limited, and exemptions for migration-related technologies remain troubling. In humanitarian contexts, few binding standards exist to ensure that private AI systems adhere to principles of consent, proportionality and redress. The rise of the multi-billion dollar border industrial complex must be understood as a shift in the political economy of migration.

Use of technologies by mobile communities: navigating surveillance and risk

While much of the discourse on AI in migration focuses on how states and institutions govern movement, mobile communities themselves are active users, developers and subverters of technology. People on the move increasingly rely on social media, digital tools and AI-enhanced platforms to find and plan routes, communicate with families, avoid detection and access services. These practices demonstrate, not only technological adaptation, but also the political agency of mobile communities navigating a world of intensified surveillance.

For example, people on the move often use encrypted messaging apps, GPS tools and crowd-sourced platforms to evade state surveillance or circumvent hostile enforcement environments. WhatsApp, Telegram and Signal remain vital channels for real-time route coordination and updates on border patrols or checkpoints. More recently, AI-powered tools such as real-time translation apps, chatbots for asylum assistance or location-aware warning systems have been used by people on the move to assess risks and organise their journeys.

In parallel, smuggling networks and human traffickers have integrated AI and digital technologies into their operations with increasing sophistication. Social media platforms, particularly TikTok, Facebook and YouTube, are used to advertise clandestine services and target potential clients, often using deceptive tactics or false safety assurances. Trafficking for labour or sexual exploitation has also evolved digitally. Recruiters use online job portals, informal apps and social media chat groups to lure vulnerable people, particularly women, under the pretence of legitimate employment opportunities. In regions like Southeast Asia and the Gulf, biometric surveillance is often repurposed, not to protect, but to entrench exploitative systems like the Kafala regime, which ties migrant workers to sponsors with near-total control.

Traffickers often use geolocation and messaging apps to track victims and prevent escape. Emerging digital technologies can therefore simultaneously empower people on the move to survive and resist, while also exposing them to heightened risks of exploitation. The same digital tools that allow people to navigate repression can also generate data trails which feed AI surveillance systems. Smugglers’ digital presence, meanwhile, is often weaponised by states to justify broader crackdowns which criminalise all forms of irregular movement, regardless of intent or necessity.

Moreover, discussions around technologies used in migration insufficiently consider the agency or digital literacy of mobile communities. Too often, people on the move are framed as passive subjects rather than active creators, users and participants in tech ecosystems. A more nuanced and inclusive model would recognise that mobile communities engage with technology in ways that reflect both resistance and necessity, and that these practices must inform how technologies in migration are designed and regulated.

Future developments: AI at a crossroads in migration governance

As AI systems become more deeply embedded in migration management, they present both the possibility for more humane governance and the risk of deeper exclusion. The coming years will determine whether AI and digital technologies used in migration serve to enhance rights and dignity or further entrench control and inequality. Unfortunately, the ecosystem seems to be trending towards the latter.

Governments and international agencies often present AI as a way to enhance efficiency, reduce bias and increase fairness in decision making. Tools such as AI-based asylum screening algorithms, digital ID verification systems and automated fraud detection mechanisms are marketed as objective and scalable. Yet numerous audits and rights assessments have shown that these tools frequently reproduce existing social and geopolitical hierarchies, especially when they are trained on biased or incomplete data. For example, predictive models used to determine refugee eligibility or prioritise deportation cases tend to focus on perceived risk factors tied to nationality, social media behaviour or prior travel patterns. While still relatively new in the asylum determination space, various EU member states have started to use dialect recognition to verify or obtain further information on people’s country or region of origin. Yet these use cases lack transparency in how decisions are made and how they can be challenged, especially for factors as malleable as people’s accents or tone of voice. Without meaningful appeal mechanisms or due process safeguards, AI becomes a tool for automated exclusion rather than enhanced protection.

Unfortunately, there is inadequate global governance to rein in the techno-solutionism which has taken over migration management. While, in 2023, the Office of the United Nations High Commissioner for Human Rights (OHCHR) issued guidance for a human rights-based approach to digital border governance (co-written by this author), these guardrails have not been meaningfully implemented. Regional attempts like the EU’s AI Act could have set global standards on the regulation of AI; instead, they are falling short on some of the most high-risk implementations, such as border tech, emboldening other regions like the United States to further dilute what little regulation exists. Globally, AI regulation remains fragmented, with little enforcement power.

Moreover, a rights-based AI governance framework requires more than regulatory checklists. It must also include participatory design, transparency and meaningful accountability. To date, there is little evidence that the perspectives of affected communities, especially people on the move, are meaningfully incorporated into the development or auditing of migration-related AI. Mobile communities and their lived expertise remain largely invisible in data governance processes, despite being among the most surveilled and risk-assessed populations in the world.

Some emerging efforts offer hope. Initiatives such as the Migration and Technology Monitor incubate community-led digital rights audits and human-centred design practices in refugee technologies, while participatory digital ID governance pilots are beginning to challenge top-down models of migration management technologies. Advocacy coalitions are also pushing to include migrant justice frameworks in global tech governance negotiations, such as the EU’s Protect Not Surveil project.

Ultimately, the future of AI in migration governance hinges, not only on the sophistication of its technical architecture, but also on the political choices and powerful narratives that shape how it is deployed. AI systems are not neutral tools. They are embedded with the values, assumptions and priorities of the institutions that design and implement them. When driven by securitised logics, commercial incentives or exclusionary politics, AI becomes a force multiplier for control, surveillance and discrimination. People on the move are turned into visible and trackable data subjects, while their rights are steadily eroded by opaque and unaccountable systems.

During times of geopolitical turmoil that challenge the bedrock of global migration governance, AI continues to transform the global mobility landscape. Unfortunately, the question is no longer whether AI and other digital technologies will be used but, rather, whose interests they will serve. Technologies can be tools for empowerment. They can strengthen protection, transparency and dignity, but only if their development, deployment and subsequent governance are reoriented towards human rights, participatory design and global justice. This approach requires more than regulatory reform. It demands a shift in who gets to define the problem, who builds the technology and who holds the power to decide what constitutes ethical use. People on the move must not be treated merely as end users or data points, but as co-creators of the systems that affect their lives.

Petra Molnar

Author

Petra Molnar is a lawyer and anthropologist. She is the Associate Director of the Refugee Law Lab at York University, where she leads the Migration and Technology Monitor Project. She is a Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society and the author of The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, her first book and a finalist for Canada’s Governor General’s Literary Award.



Mixed Migration Review '25
