The emerging digital nervous system: Technology, mixed migration, and human mobility across borders
The following essay was originally compiled for the Mixed Migration Review 2022 and has been reproduced here to reach a wider readership.
The essay’s author Jessica Bither is a Senior Expert on Migration at Robert Bosch Stiftung.
Digital technologies are re-shaping—and in some cases radically transforming—the management and control of human mobility and mixed migration across borders. This rapid evolution of automation and AI involves a wide range of actors with varying interests and motives. Alongside its promised benefits lie myriad privacy and protection risks. As stakeholders work on building digital infrastructure and developing standards, rules, and laws, it is essential that issues of human rights and dignity are not overlooked.
Introduction: our current crossroads
From digital transformation processes at national migration agencies and automated immigration service chatbots like the Finnish Kamu, to more complex machine-learning-based models that predict migration flows, algorithmic decision-making in visa processes, and refugees in Jordan buying groceries via iris scans, there is no area of our current migration and protection system left untouched by rapidly evolving technologies. The Russian invasion of Ukraine and the resulting massive human displacement have shown some of these technologies being applied in real time: launched by the Ukrainian government in 2020, the Diia wallet app, with 13 million registered users, has helped identify and register Ukrainian refugees in Europe, as well as send aid to those internally displaced. Institutions have gone digital, too: the European Union, for instance, for the first time developed a shared digital platform for EU member states to coordinate registrations in response to the war.
Thinking of tech and mixed migration brings to mind pictures of technologically enhanced walls or fences, namely those that are seen at physical land borders: the use of satellite imagery, unmanned drones, or infrared cameras meant to monitor the border or stop people from entering. But this is only part of the story.
A digital nervous system that manages human mobility across borders is emerging that is far more complex: it consists of different central processing centres (human and digital), as well as a peripheral nervous system. These sub-networks can be more or less directly tied to human mobility, but they give signals and information to the larger system. It is, accordingly, highly complex, involving decisions and actions from a variety of actors across different sectors, from government agencies to international organisations, to the private sector, tech companies, and civil society. It is also not fixed, but rather adaptable and highly sensitive to change. One area of the system may affect another instantaneously, or only when it is “activated” or triggered in a given migration context. Finally, its function is that of a digital filter, involving a highly calibrated sorting function allowing for selective mobility. Due to the exponential increase in the volume and speed of data-processing capabilities, it can operate increasingly removed from time and space. It is still very fragmented and uncoordinated, which also means that it is still shape-able, for now.
Three key nodes of this emerging system are central: ports of entry, where the border is “activated” and where data on individuals is verified, collected, and shared between actors; the rules that govern the rights of passage and access to and across countries, for example through algorithmic decision-making or predictive analytics as part of visa and screening processes; and digital IDs, or digital wallet-based applications that link individuals to this unevenly networked system.
There are new risks, potential harms, and human rights concerns that are already apparent, such as the potential for discrimination and bias (including according to race, ethnicity, or sexual orientation) or issues related to individual privacy or data protection, to name just a few. These risks could also exacerbate global mobility inequality, while the proliferation of certain technologies left unchecked, such as facial recognition or massive biometric databases, could undermine democratic norms and feed into growing digital authoritarianism. At the same time, digital technologies have the potential to empower individuals or communities across borders, have already given people access to identification and access to services where they previously had none (for example through biometrically-linked registration systems such as UNHCR’s PRIMES) and, as Covid vaccination apps rolled out in the EU during the pandemic have shown, can allow human mobility to resume more easily in emergency contexts.
Perhaps most importantly, the design of this digital nervous system is entirely dependent on human choice, with all the ethical complexities this entails in the migration space. In order to shape it, stakeholders concerned with managing mixed migration in a digital world will need to move away from asking only whether to use digital technologies to govern human mobility, towards asking which technologies we choose to use, how, according to which rules, and who gets to make them.
Checking our mindset: understanding the socio-technological context
The increasing use of technology is often presented or justified as making the management of human mobility more efficient or secure. Algorithms can speed up processes and procedures, and technically could make them more standardised. Biometrics, in turn, can make systems more secure and less susceptible to fraud or abuse. But employing digital technologies in the context of people moving across borders is neither neutral nor predetermined. To understand this socio-technological context is to understand that migration and protection spaces are highly political and the implications deeply contextual. Only then can we begin to distinguish beneficial and innovative ways of using technology to manage human mobility from those areas where potential harms should ring alarm bells. This is the mindset we need to bring to the emerging digital nervous system.
Boons and banes
Accordingly, the technology in and of itself in most cases cannot tell us whether we should use it, because this depends entirely on the purpose. Take satellite imagery: it can be used by coast guards or border agencies to save lives of migrants at sea, but also to intercept boats and ban access to territory, sometimes with deadly consequences. Remote video interviews, as adopted by many migration and asylum agencies during the pandemic, could be used for people to apply for asylum without having to go on a life-threatening journey, but they could also lay the infrastructure to allow governments to offshore their asylum processing more easily. Forecasting tools to better anticipate future displacement crises or migratory movements, in turn, could help international organisations or governments better prepare adequate humanitarian responses, or to increase border controls (or both). The mindset guiding the use of these technologies therefore must consider the specifics of the migration context as well as the technological backbone (the specifics of data sets or models) that was used to develop them.
Regarding the context of migration and human mobility, decisions made in this space will very often have a fundamental impact on individuals’ lives. Legally speaking, access to territory and the international protection system is still the prerogative of states. Partly due to this fact, decision-making in this area often lacks transparency and is frequently tied to national security concerns. It is characterised by a power imbalance between the authority granting access (most often, but not only, government authorities) and the migrant, refugee, traveller, or any other individual seeking to cross a border. Given this opaqueness and these power imbalances, the migration and refugee space has also served as a technological testing ground for certain digital technologies and tools.
Technological considerations, in turn, will depend on issues such as which data is collected for which purposes, potential data bias, or which type of machine-learning system is used (for example black box models). It matters whether systems like digital IDs operate on a centralised or decentralised digital infrastructure. Decentralised digital architectures and federated systems, for instance, could, in fact, rebalance power away from authorities and towards individuals, even in the migration and protection space. Only combining context with such technical inquiry will provide the necessary granularity to decide which technical standards, rules, and regulations can best support a human rights and civil liberties approach to managing human mobility on a global scale.
Ports of entry: points of activation, data exchange, and interoperability
One area that has been receiving increased scrutiny is technology and AI-based systems used at physical borders as a digital fortification of border walls. Examples include the increased use of satellite imagery, autonomous surveillance towers, heartbeat detectors, unmanned drones, and infrared cameras for observation or monitoring of actual borders. More dystopian recent examples include the testing of robotic dogs at the US-Mexico border. Digital fortification also happens in other physical infrastructure near the border, such as the newly digitally-fortified “controlled access centres” on the Greek islands. In this sense, these technological tools are an extension of long-standing government or border agency practices aimed at barring or deterring (irregular) entry. Critical scholars and civil society organisations point to the fact that such tools are being used by governments to undermine the right to territorial asylum, are part and parcel of a growing surveillance apparatus at borders and, by extension, of state-sanctioned use of surveillance technologies more broadly. In addition, rather than deterring people from crossing, they could lead people to take more dangerous journeys, increasing the risk of more migrant deaths.
Beyond land borders, technology is increasingly employed at airports, not only by governments but also for commercial reasons by airlines and airport logistics companies. Automated border control gates (ABC gates) match a travel document to the passenger and activate according to entry rules set elsewhere (see below). At the point of entry, the border “knows” you already (albeit a digitally imperfect copy of you).
These automated controls are closely linked with the exponential collection and use of biometrics, most prominently facial recognition technology, which is employed for verification (for example by matching passengers to machine-readable documents), to board airplanes, or as live facial recognition technology, say to screen against criminal watchlists. The pandemic has “Covid-accelerated” the deployment of these systems, in the words of a managing director of a biometrics and ID company. Pilot programs by airlines and airport providers now allow for facial recognition-only passage from baggage drop to boarding. Another more controversial application at ports of entry was the EU-funded research project iBorderCtrl. One part of the project tested an automated deception detection tool as part of a broader decision support system. Funded with €4.5 million, the project ran from 2016 to 2019 at border points in Hungary, Greece, and Latvia, but it never progressed beyond the testing stage, and serious doubts exist in the scientific community about the reliability of AI-based emotion-recognition technology in general.
Biometrics refers to measurable physiological or behavioural traits, like voice, fingerprints, retinal patterns, or facial thermograms, that can verify a person’s identity through their translation into unique data points. Biometrics can be used in automated access controls, criminal forensics, immigration procedures, social security, and surveillance, and can also be embedded in hardware, such as notebooks or cell phones. One important distinction is that between one-to-one (verification) and one-to-many (identification) systems.
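The verification/identification distinction can be made concrete with a small sketch. This is purely illustrative: real biometric systems use trained feature extractors and calibrated matchers, whereas here "templates" are toy feature vectors and the similarity function and threshold are invented for demonstration.

```python
# Illustrative sketch (not a real biometric system): the difference between
# one-to-one verification and one-to-many identification, using toy
# "templates" represented as feature vectors and an invented threshold.

def similarity(a, b):
    """Toy similarity score between two biometric templates (0..1)."""
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def verify(claimed_template, live_scan, threshold=0.9):
    """One-to-one: does the live scan match the claimed identity?"""
    return similarity(claimed_template, live_scan) >= threshold

def identify(database, live_scan, threshold=0.9):
    """One-to-many: who, if anyone, in the database matches the scan?"""
    return [name for name, tpl in database.items()
            if similarity(tpl, live_scan) >= threshold]

db = {"alice": [0.1, 0.9, 0.4], "bob": [0.8, 0.2, 0.6]}
scan = [0.12, 0.88, 0.41]
print(verify(db["alice"], scan))   # one-to-one check against a claimed ID
print(identify(db, scan))          # one-to-many search across the database
```

The sketch also shows why identification is the more privacy-sensitive operation: it requires searching every template in the database, whereas verification only ever compares against the single record the individual has claimed.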
A further example of this growing activation infrastructure is the US Trusted Traveller Program, which includes the Global Entry Program that individuals can join as pre-approved travellers. It also includes the many programmes put in place after 9/11 as part of the US Smart Borders Initiative, such as the Electronic System for Travel Authorization (ESTA), which facilitates visa-free travel to the US, and an Entry Exit System (EES) including biometric registration. On the EU level, the similar European Travel Information and Authorization System (ETIAS), an electronic pre-screening system for passengers from states eligible for visa-free travel to the Schengen area, as well as an Entry Exit System (EES) for third-country nationals, are both set to enter into force in 2022/23.
Data collection, sharing, and interoperability
More than just right-of-passage infrastructure, these systems are large data-collection generators that in turn are used to automate onward passage. Travel through these border technologies further relies on data conduits, data collection, and data sharing between different entities: governments, security agencies, airlines, airport logistics companies, and private technology companies. Among these data are notably the Advance Passenger Information (API) and Passenger Name Record (PNR) data collected by airlines. Information about which rules apply at which airport or crossing and according to which laws is not publicly available—aside from notices such as those at Amsterdam’s Schiphol airport indicating that photos created at ABC kiosks will only be stored for 24 hours.
Advance Passenger Information (API) and Passenger Name Record (PNR) data
Advance Passenger Information (API) is verified biographic information about travellers that corresponds with their official travel documents as well as travel route information. API data are collected and compiled by air carriers and transmitted to border control agencies of the destination country. While the passenger is in-flight, border control authorities carry out a pre-border identity check and screen the passenger for migration management and law enforcement purposes.
Passenger Name Record (PNR) data are unverified passenger information collected by air carriers for reservations and check-in. Depending on which data passengers share with air carriers, the PNR information can include dates and itinerary of travel, ticket information, contact details, travel agent, payment information, dietary preferences, seat number, and baggage information. PNR data might be shared with police authorities according to the respective regulations.
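To make the contrast tangible, a PNR record can be pictured as a plain data structure built from the fields listed above. All values below are invented for illustration; real PNR formats exchanged between carriers and authorities are considerably more complex.

```python
# A schematic PNR record as a plain data structure, based on the fields
# described above. Values are invented; this is not a real PNR format.

pnr_record = {
    "itinerary": ["AMS-JFK 2022-06-01"],
    "ticket_number": "000-0000000000",      # placeholder
    "contact": "traveller@example.org",     # placeholder
    "payment": "card ending 0000",          # placeholder
    "dietary_preference": "vegetarian",
    "seat": "14C",
    "baggage": {"checked": 1},
}

# Unlike API data, nothing here is verified against a travel document:
print(sorted(pnr_record.keys()))
```

Note how even this toy record contains fields (dietary preference, co-travellers on the itinerary, special assistance flags) from which sensitive attributes could be inferred, a risk the essay returns to below.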
Accordingly, the ease, speed, and reliability of screening within this infrastructure depend upon interoperability between data sets, databases, and entities. This can happen via interfaces between private and public entities, and between countries. The eu-LISA agency is creating an interoperable system of eight EU-based migration- and security-related systems, to be completed in 2022 or 2023. In February 2022, the US Department of Homeland Security reportedly notified some EU member states of a clause in its Enhanced Border Security Partnership that, starting in 2027, makes eligibility for the US visa waiver programme dependent upon US access to biometric police databases of partner countries. What makes this context unique—and thus a source of concern beyond those related to growing digital state surveillance by governments over their own citizens—is that the border is a place where states can legally collect data on non-citizens. Negotiations about the sharing of data for migration purposes also build upon existing state practices of linking political negotiations on migration management to other areas, such as visa liberalisation.
The use of such technologies can provide easier and more seamless travel for some and allow governments and airlines to deal with an ever-growing number of people moving across borders worldwide. But risks regarding the protection of civil liberties and human rights are becoming more apparent. Civil society and digital rights groups have made public some of these risks, such as the proliferation—not only at borders—of facial recognition technology and the potential for indiscriminate surveillance and racial discrimination. Other risks include issues of data privacy, or a de facto lack of informed consent. In the case of interoperability, the dangers include mission creep (where data collected for one purpose is used for another), or the potential for harm to certain communities. The collection and sharing of data can also inadvertently endanger individuals. For example, PNR data can reveal information about sexual orientation (if two people always travel together), health issues (if they require special assistance), or religion (dietary preferences).
There are also technical aspects to consider as this infrastructure is built further. Using facial recognition as a verification tool may not be as security-enhancing as promised: it is vulnerable, for example, to morphing techniques (where two images are superimposed, allowing someone to enter via another’s identity) and adversarial attacks (where a hacker uses data inputs to trick a facial recognition system into creating a match with a no-fly list). And any benefits or risks will depend on how such data is stored or shared, for how long, and how securely. Technically speaking, different app-based systems tied to individuals’ phones that contain biometric and cryptographic information could maintain privacy while allowing anyone crossing a border to choose more selectively which information to provide to which entity (see below). Sharing a single health data point—such as proof of vaccination via digital Covid passes—is one such example.
Automating the rules of passage: algorithmic decision making and risk screening
While the emerging digital nervous system of human mobility is activated at ports of entry, the rules that govern it are increasingly automated via varyingly complex algorithmic or machine-learning-based systems as well. Two key areas in this regard are visa processing and risk screening.
To be clear, visa processes themselves have never been neutral. Even before the digital era, risk assessments have been part of managing human mobility of groups and individuals. Ministries operate with white-list countries deemed safe for visa-free travel. Such assessments entail a high degree of discretion and secrecy by governments, consulates, and even individual case officers. Moreover, decisions are often based on multiple migration-related considerations that may be linked to political negotiations (for example, where visa rules are eased in return for cooperation on returns or migration management), economic factors (such as labour migration), or other issues (including family reunification). Automating parts of these processes, then, is to digitally codify a complex set of underlying normative and political assumptions and policies.
Immigration, Refugees and Citizenship Canada has been piloting an automated decision system since 2018 with temporary resident visa applications from China and India. The machine-learning-based algorithm automatically triages these applications into three categories based on their complexity. The system then automatically approves the eligibility portion of “the most straightforward” ones. Algorithms are also used in many countries to sort through online visa applications and to flag certain ones for review. The UK’s Home Office reportedly altered its “visa application streaming algorithm” after civil society organisations brought a court case alleging discrimination and bias, because nationality was one of the factors used to sort applications for student visas or visits into three separate “risk” categories.
This brings us to automation and data analytics used in risk-screening tools and indicators that play a role in visa-granting processes, among others. A very basic form of screening would be running visa applications against a criminal watchlist or another database that indicates an individual’s eligibility to enter a given country. But the machine-learning-based models in development aim to do more. For example, ETIAS foresees the inclusion of “risk indicators” in its automated screening process. A report by Deloitte and the European Commission states that “AI could support in selecting these indicators and possibly adapting them over time depending on the applicant.” The risks screened for are mainly irregular migration, security, and public health.
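The most basic form of screening mentioned above reduces to a lookup against a list. The sketch below is hypothetical: identifiers, list contents, and structure are invented, and real systems involve fuzzy name matching and multiple databases rather than an exact set lookup.

```python
# Minimal sketch of the most basic screening step described above: checking
# applications against a watchlist. Identifiers are invented, not real data.

watchlist = {"ID-1001", "ID-2042"}  # hypothetical watchlist entries

def basic_screen(applications):
    """Flag any application whose identifier appears on the watchlist."""
    return [a for a in applications if a["id"] in watchlist]

apps = [{"id": "ID-3000"}, {"id": "ID-2042"}]
print(basic_screen(apps))  # only the watchlisted application is flagged
```

The machine-learning-based systems the essay goes on to describe differ precisely in that they do not check against a fixed list, but compute risk indicators from patterns in the data, which is what makes their criteria so much harder to scrutinise.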
The US Customs and Border Protection agency flags passengers for additional screening as part of its Automated Targeting System and has a related Risk Prediction Program which uses internal data from government agencies and passenger data supplied by airlines (though how exactly the prediction operates, and which criteria or indicator data it uses has not been made public). The UN Office of Counter Terrorism has its own proprietary software available to member states—called goTravel—for the collection and analysis of API and PNR data.
The central questions in all these systems, one that is key to assessing any human rights implications or potential discrimination and bias, are: which proxies or data points do AI-powered automated programmes use to arrive at their assessment? And what type of action or consequence follows from that assessment?
Digital technologies and algorithmic-based systems can help immigration agencies and consulates speed up the processing of visa applications and in theory could allow many more people worldwide to travel and move with less bureaucracy and waiting times. If employed correctly, they could also make labour migration and recruitment far less arduous than currently is the case. However, there are a number of new important socio-technical questions to address.
Bias in, bias out
A more obvious risk is that of bias in the training data of algorithmic and machine-learning models. If developed unchecked, such models could simply reproduce the biases of previous visa case officers. There are also more indirect risks of discrimination against certain groups, such as those flagged by the EU’s Fundamental Rights Agency with regard to ETIAS: for example, when security risks are linked to criminal convictions in one country for acts related to a person’s LGBTQ+ status that are not offences in EU states, or when certain ethnic groups are flagged for “high in-migration risk.” There are, moreover, currently no clear rules that address options for appealing decisions or providing information to individuals who are flagged by these automated systems. Finally, one criticism of using advanced analytics and models in this area is that governments can obscure discriminatory or racist practices or a predisposition toward certain foreigners, migrants, or refugees behind seemingly “neutral” technological applications.
But again, technical nuances do matter when it comes to regulating the development and use of these sorting, scoring, and screening tools as part of a human mobility system. For example, it makes a difference whether an AI model uses triaging or scores. As the European Commission has noted: “a classifier is probably the most sensible approach, as training a regression model to predict some kind of score […] instead might be seen as attempting to directly predict a risk level.” Case officers may make different decisions depending on whether they see a numerical risk score or simply see that an application is in Category 2. Yet another issue is whether the models would draw information from other sources, such as social media, or from other countries’ assessments.
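The classifier-versus-score distinction can be sketched in a few lines. Everything here is hypothetical: the features, weights, and category thresholds bear no relation to any real visa system, and serve only to show how the same underlying model output can be surfaced either as a number or as a coarse triage category.

```python
# Hypothetical sketch of the classifier-vs-score distinction. Features and
# thresholds are invented; no real visa system is represented here.

def risk_score(application):
    """Regression-style output: a number between 0 and 1."""
    score = 0.0
    if not application.get("complete_documents"):
        score += 0.5
    if application.get("prior_refusal"):
        score += 0.3
    return min(score, 1.0)

def triage_category(application):
    """Classifier-style output: the officer sees only a category."""
    s = risk_score(application)
    if s < 0.2:
        return "Category 1"  # straightforward
    elif s < 0.6:
        return "Category 2"  # needs review
    return "Category 3"      # complex

app = {"complete_documents": True, "prior_refusal": True}
print(risk_score(app))       # the numeric output a regression model exposes
print(triage_category(app))  # the coarser label a classifier exposes
```

The design choice is consequential: the category hides the underlying number from the case officer, which is exactly why the Commission considers a classifier less likely to read as a direct prediction of risk.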
Other important nuances relate to whether systems automate only positive decisions, as in the Canadian pilot, or whether they are used to automatically deny visas. Canada thus far does not use “black box” algorithms (where decisions cannot be known or explained). The level of automation matters, too (decision-making systems are rarely fully automated).
The biggest risk of these systems is that they can scale existing socio-political inequalities and discriminatory practices related to human mobility. For instance, while citizens of South Korea or Germany enjoy visa-free entry to 190 countries worldwide, those holding passports from African states need visas to travel to all but about 20-25 percent of countries in the world, with the latter mostly being states neighbouring their own. Uncertainty continues to surround the impact on people “rejected” by these automated systems, those simply fearing they will be rejected, and those (such as political dissidents active on social media) worried that the digital processing of their biodata could lead to surveillance by other entities. But different models could also be designed to mitigate these risks, for example by exposing biases in past decisions and current models on visas or risk screening.
Designing the keys: Digital ID systems and wallet applications
Connecting us as individuals to the digital nervous system of human mobility are digital identities (digital IDs), the keys that determine our physical and virtual movement across the different nodes of the system. The type of digital ID will differ depending on the human mobility or mixed migration context, as well as in terms of technical design or how directly it relates to the actual movement across borders.
At a very basic level, a digital ID is a digital collection of data tied to an individual that is used for verification and authorisation and that then gives access to certain types of tools and services, in some cases serving as a foundational or legal ID. It is increasingly combined with biometric identifiers. It can be helpful to distinguish between foundational IDs, which serve as direct proof of legal identity, and functional IDs, which are used only in certain sectors or limited use cases, such as financial transactions. A digital ID is also the way we present ourselves, or are presented by others, in the digital space. A person is never the same as their digital identity.
Closely related to digital IDs are digital wallets, applications that allow users to share their verified identity credentials. A digital wallet can be narrowly defined as: “an electronic method of storing, managing, and exchanging money and/or identity credentials, often through the use of mobile phones”. In the context of migration and protection, digital wallets are most relevant for financial inclusion, and for cross-border recognition of some sort of credentials.
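The core mechanics of a wallet presenting a verified credential can be sketched briefly. This is a deliberately simplified sketch: a shared-secret HMAC stands in for the public-key signatures a real wallet system would use, and the issuer key, attribute names, and values are all invented.

```python
# Illustrative wallet-credential sketch. An HMAC with a shared secret stands
# in for real public-key signatures; all keys and values are invented. The
# point shown: an issuer signs each attribute separately, so the holder can
# present one verified claim (e.g. vaccination status) without revealing
# anything else in the wallet.

import hmac, hashlib, json

ISSUER_KEY = b"demo-issuer-key"  # placeholder; real systems use key pairs

def issue(attribute, value):
    """Issuer signs one attribute-value pair as a standalone credential."""
    payload = json.dumps({"attr": attribute, "value": value}).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def check(credential):
    """Verifier checks the signature without needing other wallet data."""
    expected = hmac.new(ISSUER_KEY, credential["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

wallet = {
    "vaccinated": issue("vaccinated", True),
    "passport_no": issue("passport_no", "X123"),  # invented value
}
# The holder discloses only the vaccination credential at the border:
print(check(wallet["vaccinated"]))
```

Because each credential verifies on its own, this design illustrates the selective disclosure the essay describes: the verifier learns one attribute and nothing about the wallet's other contents.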
Digital IDs have been promoted as a primary tool to reach Target 9 of Sustainable Development Goal 16: “By 2030, provide legal identity for all, including birth registration.” They are also hailed as effective tools for the delivery of public services while strengthening the transparency and targeting of resources and programmes. Accordingly, they can be tools for inclusion and recognition, although more critical voices point to the dangers of an increasing surveillance net when tied to large biometric databases.
In the humanitarian sector, the World Food Programme’s beneficiary and transfer management platform, SCOPE, and UNHCR’s Population Registration and Identity Management EcoSystem (PRIMES) are the two largest biometric databases in what are, in essence, digital ID management systems, with over 10 million entries between them. Apple has piloted verified driver’s licences and state IDs in the US that can be used at certain Transportation Security Administration pre-check screening points. (Apple also filed a patent in 2021 for digital passports.) The Known Traveler Digital ID System is a World Economic Forum and Accenture-led initiative for flight travellers currently being piloted in Canada and the Netherlands. It is based on blockchain technology to “cryptographically issue, revoke, and verify credential identifiers without the need of a centralised intermediary (like a certification authority)” and will run on mobile phones.
In addition, many countries are establishing national digital ID systems that can impact mixed migration settings in different ways. For instance, digital IDs are being developed to preserve the culture of diasporas across borders, like in the Rohingya Project, while wallets can preserve documents in digital lockers for those forced to flee, as illustrated by the Ukrainian Diia app that also holds passports or driver’s licences.
Big data, big risks
Recent examples show some real dangers inherent in the creation of large biometric databases that often go hand in hand with digital ID systems. In the autumn of 2021, for example, the Taliban gained access to some of the digital ID and payroll systems created by Western governments and international organisations—including biometric data, occupations, home addresses and names of relatives—which could be used to target opposition figures or Afghans who supported Western forces. In January 2022, a cyber-attack on the International Committee of the Red Cross hacked the personal data of over 515,000 people. In 2021, UNHCR was widely criticised for sharing the biometric data of Rohingya refugees with the government of Bangladesh.
Given how varied the usages and digital backends of these applications are, it is not possible to definitively argue for or against their use in the management of human mobility going forward. There is a need for a more granular, context-specific, and nuanced approach that looks at the potential risks related to security or data-access controls (as in the cases above), or the type of technology, for example blockchain, including privacy preserving features by design. If designed correctly, decentralised digital architectures and federated systems could move identity management and control away from public and private intermediaries back to the users and individuals, also as it relates to the migration and protection space.
Charting a path forward: Human mobility across borders in a digital world
Digital technologies that are the building blocks of the new digital nervous system of human mobility are often still developed in silos, with little regard for their potential risks and harms to individuals and societies. And even in cases where potential harms should be apparent, the lack of public transparency that often surrounds their development means there is a real risk that government agencies or other actors hide behind the veneer of seemingly technocratic processes without any accountability. Choices regarding technology and governance are needed to better support an emerging system that upholds human dignity, civil liberties, and fundamental human rights. Too often, actors do not yet deliberately consider technology in ways that enhance the prospect of greater and more equal mobility opportunities across countries and regions, or that widen fairer and faster access to international protection. There is a window to shape this emerging system, and migration and protection stakeholders can work on multiple points now in order to do so.
It will first be important to include migration and human mobility aspects in emerging standards, rules, and legislation for different technologies and AI-based applications. The current draft of the EU’s Artificial Intelligence Act, for example, places AI-based technologies in the migration and border management context in the highest risk category, which requires certain safeguards and governance tools to accompany their development and deployment. It will also be highly important to address aspects of cross-border mobility in data-sharing agreements between countries or sets of countries. The G7 ministers’ meeting in June 2022 called for states to work towards a “trusted free flow of data.” The EU and the US agreed on a transatlantic Data Privacy Framework, while the United States established a separate Global Cross-Border Privacy Rules Forum, both in early 2022. It is important to holistically address the issue of migration and human mobility as an emerging system within these frameworks.
Relatedly, policy tools emerging for the governance of technologies in general should be adapted to the migration space. These would include algorithmic impact assessments, transparency rules, and regulatory or independent oversight bodies. The ETIAS regulation already includes an oversight body that will be involved in the selection and creation of risk indicators, though its ultimate role is still being determined. It would also mean making more deliberate choices about which type of technological design should be used in a given migration setting, and where. Examples include decentralised digital ID systems, moratoria on certain uses (such as black-box models in visa decisions), and questions of information for, or access to recourse by, individuals.
Many players, many motives
A crucial step will be figuring out how best to collaborate across sectors, and creating the formats and spaces to do so. The complex web of actors from the private sector, tech companies, government agencies, international organisations, and civil society all have their own interests and rationales, which are often inconsistent even within sectors. Take the private sector: there are clear business interests at stake in the introduction of many of these technologies. The border security industry alone is projected to grow to a total of $65-68 billion by 2025, at an annual growth rate of between 7.2 and 8.6 percent. The biometrics and AI markets are accelerating as well, with the biometrics market alone projected to reach $65.3 billion by 2024. That said, involving private companies in the management of human mobility across borders is not in itself new: it predates the digital era and follows the trend of, for example, co-opting airlines and other transport providers through carrier sanctions. Newer actors include firms providing offshored visa processing. These firms are in essence “digital data brokers”, and it is often not clear under which jurisdiction they fall. The design and operation of this new digital nervous system will need to include these and other actors (such as tech companies, banks, etcetera) in some shape or form.
Civil society organisations have played an essential role in monitoring developments in a field that is often shrouded in secrecy. They have also highlighted the potentially grave implications of migration-related technologies for the broader proliferation of (state) surveillance and for human rights violations. Notably, certain groups are also using digital technologies themselves to better monitor and cover these issues. With a few exceptions, however, they have not yet been present in the debate over how standards and rules should actually be set, or over which technologies should be employed in the human mobility space, and how.
Governments and migration agencies, in turn, will have differing motivations, and differing knowledge bases, when it comes to employing technologies across the various areas of migration policy. Moreover, as the central guardians of the current international system of managing human mobility across borders, the way in which governments or groups of governments choose to cooperate and build a digital infrastructure for human mobility is entwined with geopolitical competition. Relevant here are questions of growing digital authoritarianism and democratic backsliding, for instance when practices employed in the area of migration end up eroding core rule-of-law principles. The way in which governments choose to work with technology in the migration space is inseparable from the trust of their citizens, and from trust and cooperation between like-minded governments and countries.
Ideally, these steps could create an international alliance on digital tech in human mobility across borders, a space for negotiations on which standards, rules, and guidelines will operate in this tech architecture, and where a political strategy guides the use of technology deliberately to advance a system of human mobility based on fairness and dignity.
Unlike its human counterpart, the digital nervous system discussed in this essay is neither natural nor naturally predetermined. But like the nervous system of the human body, it will be intricately interlinked in a complex web of signals, communication circuits, and processing centres (both human and digital). It will thus also include feedback loops, meaning that any decisions inscribed into the technological infrastructure, including rules, accountability features, and policy decisions, will set up a certain self-reinforcing trajectory. Making sure that new opportunities in the way we think about managing human mobility are realised in this emerging system, and that its feedback loops are grounded in human rights and dignity, is a matter that could not be more urgent.