The following interview was originally compiled for the Mixed Migration Review 2019 and has been reproduced here for this website’s readership.
Within a decade, machines are likely to be well over 100 times faster than they are today, and a million times faster 30 years from now, knocking humans off our most-intelligent-species pedestal and rendering half of us unemployable, predicts Calum Chace. The long-term outcome could be either utopia or extinction. In the meantime, artificial intelligence (AI) might well lead to less migration.
What will AI look like? Is it going to be ethereal? Ubiquitous? Where will it be situated?
The short answer is, yes, it will be ethereal. It will mostly be inside computers: it’s software. If software is eating the world, now AI is eating software.
Artificial intelligence mostly lives inside computers, and because you need vast computer power to develop or to deploy cutting-edge AI, those computers are usually in huge server banks somewhere with a lot of cheap power available, and somewhere cold as well because they get very hot. Advanced robots should be thought of as kind of the arms and legs and the eyes and ears of AI.
It is said that when “superintelligence” is developed, it will take off with exponential growth, doubling its capacity in shorter and shorter times. If and when this happens, will all the computers at that level you’re talking about be doing that at the same time, or will there be a monopoly somewhere?
We don’t know, but it’s important to realise that the exponential improvement in the power of computers is already happening. It’s been going on for a long time and it will continue for a long time. It’s very likely that we will develop machines which are as smart as we are, which have all the cognitive abilities of an adult human. Human-equivalent intelligence is called AGI, artificial general intelligence. Because machines can be improved, doubling their speed roughly every 18 months, they will go on to become superintelligent, and we will become the second-smartest species on the planet, which is an uncomfortable position currently held by chimpanzees, who have the good fortune not to know about it. We will know about it.
Whether that happens in a lot of machines at the same time, or in just one, probably depends mostly on the way it’s created. If it is still the case that you need cutting-edge, Google-scale computing power to create a superintelligence, then there’ll probably be one which arrives first, and then another one fairly shortly afterwards, and then a few.
But it might happen that there’s a huge amount of very powerful computers in the world, and somebody invents a clever trick which uses that computer power much more efficiently than was possible the day before. If that happens, we might get superintelligence appearing all over the place very quickly.
Could you describe the concept of “singularity”?
I think that in the next century, we’ll get two singularities. One of them is the technological singularity, that’s when you develop an AGI, and it goes on to become super-intelligent. But I think well before then, there’ll be another event, a hugely transformational event, which I call the economic singularity.
The word “singularity” comes from maths and physics, where it means a point in a process where a variable becomes infinite. The classic example is a black hole. At the centre of a black hole, the gravitational field becomes infinite. What happens is that the rules break down, everything changes. When applied to human affairs, it’s just a metaphor for the biggest kind of change you can have. It’s much bigger than disruption, it’s much bigger than a revolution.
The further-out singularity is the technological one; the nearer one is the economic singularity. I don’t know if this is going to happen, but I think it’s very likely. I also don’t know when it’s going to happen, but I think maybe in 30 years. We will have to accept that half the population or more is perpetually unemployable: there is nothing that this half of us can do for money which a machine can’t do cheaper, better and faster. The reason why I think that’s going to happen is because, assuming Moore’s law or something like it continues, the machines we will have in 10 years’ time will be 128 times more powerful than they are today. In 20 years’ time, they’ll be 8,000 times more powerful than they are today. In 30 years’ time, they’ll be a million times more powerful.
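The arithmetic behind these projections can be sketched as follows, assuming (as the interview does) that computing power doubles roughly every 18 months. Note that the interview’s figures are rounded informally; strict 18-month doubling gives roughly 100x after 10 years, 10,000x after 20, and 1,000,000x after 30.

```python
# Sketch of exponential growth under an assumed 18-month doubling period.
DOUBLING_PERIOD_YEARS = 1.5  # assumption taken from the interview

def growth_factor(years: float) -> float:
    """How many times more powerful machines would be after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 20, 30):
    print(f"In {horizon} years: ~{growth_factor(horizon):,.0f}x more powerful")
```

After 30 years the exponent is exactly 20 doublings, which is where the “a million times more powerful” figure comes from (2^20 is about 1.05 million).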
How will this impact employment and work in the future?
In the past, the automation of agriculture was mechanization: the machines took over muscle jobs. What we’ve got coming next is cognitive automation, where machines take over our intellectual jobs.
I think it’s a matter of time before AI replaces humans in virtually all of the jobs we currently do. Humans will have to retrain and re-skill more and more often, and more and more radically. We’re not currently good at that, we need to get much better at it.
But this business of technological unemployment isn’t going to happen tomorrow, it’s not going to happen in 10 years. As I say, it’s probably 30 years. There’s going to be a need for humans to make the ultimate decisions in governments and in companies for a long time, probably until we get to AGI.
But won’t AI also open up employment opportunities for humans?
We don’t know whether this AI revolution will go on to create all sorts of new jobs which, for some reason, even a machine a million times smarter than today’s ones would never be able to do. That’s not conceptually impossible, but it seems to me very unlikely.
I think that probably in about 30 years’ time, we will need an economy which does very well for the half of the population or more who can never do a job again, who can’t get paid for doing anything. But it doesn’t mean that they’re irrelevant as human beings. We will still at that point presumably be the only conscious entities on the planet, and that’s valuable. The economic singularity is this point, or this journey, towards mass unemployability, technological unemployment, and how we reshape our economy to cope with that.
In terms of existential threat to human life, which do you think is greater, AI or climate change?
AI. I think we shouldn’t be wary of using the term “existential risk.” If we create a superintelligence which doesn’t like us, or doesn’t understand us, or doesn’t give a damn about us, we’ll probably go extinct. If we didn’t care at all about chimpanzees, they would go extinct, because their future depends entirely on us.
If there’s a superintelligence on this planet, unless it leaves, and there’s no reason to think it just would leave, then our future could be very perilous if it doesn’t like us, or if it doesn’t understand us very well. The biggest job for humanity this century is to make sure the first superintelligence does like us a lot, and understand us very well.
Do you think we’re sleepwalking into this future, this inevitable future, as you and many other people say? Sam Harris calls our fearlessness of AI our “failure of intuition”.
I think most humans are blithely unaware of what’s going on. What is worrying is that our political leaders have no clue what’s happening. But collectively there are quite a lot of people who are aware of it, and quite a lot of people who are working on how to make sure we get a good outcome. The people we need working on that problem are very smart people who understand AI very well, and have the time and resources to figure out a good solution. That is happening; there’s a decent number of organisations working on it. Superintelligence is also quite a few decades away. We’ve got time to solve this problem.
Many are saying the most viable harnessing of AI for good is the technical and biological merging, sort of a cyborg future civilisation. Can you elaborate on this?
That is something which I think is most likely to happen after the technological singularity. I think the technology to enable really pervasive and intimate brain-computer interface is going to be so complex and so hard to achieve, we’re probably going to need superintelligence to help us do it. I don’t think we’re likely to get that in the next 10, 20, 30, even 50 years.
But once we have a superintelligence on this planet, humans can either watch this thing become more and more powerful, more and more impressive, and we can get more and more depressed by our relative puniness as we watch this god evolving, or we can join it. To me, the second option is infinitely better. I think it’s probably the only survivable option as well. Now that means uploading our minds into computers, and merging with the computers. I think it is our best option once superintelligence arrives. But that’s a long way in the future. Most AI researchers think it will happen sometime this century, or next century. Perhaps in 70 years. It’s a long way off.
Do you think global inequality may be increased by AI?
First of all, I would make a controversial observation, which is that inequality is not worse today than it was 20, 30 or 50 years ago. Throughout human history, until the beginning of the Industrial Revolution, if you were a king or a baron you lived okay; everybody else was dirt poor. The Gini coefficient, which is the best measure of inequality in a society, has actually remained reasonably stable over the last 20 or 30 years. That’s not the conventional wisdom, but it’s what the data shows.
I don’t think we’re in a world where the tech giants are hoovering up all the money, and everybody else is getting poor. That is simply not happening, and I don’t think it will happen either, partly because that’s not what technology does. What technology does is to make labour-saving devices, and devices and software which improve our lives, cheaper and cheaper, and available to everybody.
If you go to a train station in Kampala or Buenos Aires, people are glued to their smartphones just like they are in London and New York. Technology does disseminate around the world surprisingly fast.
Will the roll-out of AI cause more migration?
I think there’s a good chance it will lead to less migration. The reason is that if countries are on the whole well-governed, AI should make everybody richer. Generally speaking, people migrate when they’re desperate, and it is the people with the most intelligence, the most resources and the most drive who migrate, which is bad news for the country they leave and good news for the country they arrive in.
If in the society that they’re leaving, it’s possible to get rich, it’s possible to do interesting things with your life, then they don’t leave. AI should make that more possible. AI successfully deployed in countries where governance is not disastrously bad should lead to less migration, not more.
When we get to the economic singularity, the issue of migration becomes slightly moot, because what’s the point of going from one place where there are no jobs to another where there are no jobs, when you’re rich in either place anyway?
In the meantime, with the churn, things could get very hairy. But this anti-immigration sentiment comes and goes in cycles, it’s not a straight-line curve from one place to another.
What about AI’s involvement with the securitisation of borders? Can you envisage a greater role for AI in that, preventing people moving?
Absolutely. Face recognition technology, and the ability to track where migration flows are happening, and to predict where people will try to cross borders, should make the border controls job easier. It would be nice if AI also improved the science of economics, and made it easier for people to understand how beneficial immigration is for the new host country. Maybe that could happen too. Then it wouldn’t be so resisted.
Can you see application of AI to assist refugees?
For sure, AI can improve any process. Knowing where the refugees are going to turn up, working out how to get to them the resources that they need, that can all be enormously enhanced by AI. Of course, it would be much better to stop the problem which turns people into refugees in the first place.
Are you a pessimist, an optimist, dystopian, or a utopian?
A very wise man said that both optimism and pessimism are forms of bias: they are deliberately not accepting reality. So you should try not to be either, but I think that’s very boring. I am temperamentally an optimist. I think technology carries enormous dangers, but overall it produces great benefits. I can see the possibility of a world, as a result of these two singularities, individually and together, that is absolutely wonderful. A world in which humans don’t have to work, and we can do whatever we want to do. We can all be like comfortably-off retired people, or we can all be like aristocrats and have a really great life.
Then in the future, we can merge with the superintelligence and become god-like. These are almost unbelievably wonderful outcomes. I think they’re possible. There are possible outcomes which are disastrous. If we mess up either of the singularities, it could go very badly wrong for us, and the technological singularity, in particular, could make us go extinct if we mess it up.