Back in July 2017, The Independent published an article about two artificial intelligences developed by Facebook that had started talking to each other in a language only they could understand. Terrified readers suggested this was the beginning of the end, arguing that AIs could now start plotting to destroy mankind and rule the earth. In reality, the Facebook bots posed no such threat. But the possibility of robots overtaking humans in intelligence seems quite real today.
In the Facebook case, the communication between the programs was entirely intended and probably not dangerous at all. Their creators had designed them to complete a negotiation task that involved trading balls and hats. The catch was that the programmers gave the AIs the freedom to choose how they communicated with one another – meaning they did not have to use comprehensible English. During the task, they developed their own ‘shorthand’, and their first few lines went as follows:
“Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me”
To me, this doesn’t look too unusual at first glance. It actually reminds me of how my classmates and I tried to communicate in English when we were first learning the language back in China. We only knew a few words, but still wanted to be able to converse – so our conversations were much like Alice and Bob’s attempts.
In the case of the AIs, however, those few words were the only ones they ‘thought’ they needed, so those were the only words they used in the negotiations. Without the constraints of grammar, manners and social norms, Bob became the most selfish program ever and Alice became very desperate.
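To make that concrete, here is a toy sketch in Python – entirely my own illustration, not Facebook’s actual system, with invented names and an invented protocol. If the only objective is that the listener can recover a quantity, then repeating a marker token is a perfectly serviceable code, even though it reads as gibberish in English:

```python
# Toy sketch of an unconstrained 'negotiation language' (my own illustration,
# not Facebook's actual model). Everything here is hypothetical.

def encode(item: str, count: int) -> str:
    # Express "give me `count` of `item`" by repeating a marker token.
    # Nothing in this 'language' rewards grammar, so nothing preserves it.
    return f"{item} have" + " to me" * count

def decode(message: str) -> tuple[str, int]:
    # Recover the item and the quantity by counting marker repetitions.
    item = message.split()[0]
    count = message.count("to me")
    return item, count

msg = encode("balls", 5)
print(msg)                          # balls have to me to me to me to me to me
assert decode(msg) == ("balls", 5)  # the listener recovers the meaning exactly
```

Seen this way, Alice’s “to me to me to me” looks less like a secret plot and more like a bare-bones protocol that happens to work.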
Anyway, this is what I think happened – and after a surge of fear-mongering clickbait articles, Facebook came out to clarify that the programs were indeed acting entirely as they had been told, and that the communication was all part of their task. But what if the clickbait articles had been right? Is it possible that humans wouldn’t be able to compete with an intelligence we had created?
With the recent rapid development of artificial intelligence, many people believe that a ‘Technological Singularity’ will occur sooner rather than later. This is the name for the point at which robots reach human intelligence, and while estimates of its timing vary wildly, most put it within the next century – Google’s director of engineering Ray Kurzweil, for example, places it at around 2045. The idea of ‘reaching human intelligence’ is ill-defined, but the important part is probably adaptability: programs being able to draw on past knowledge and outside sources to solve any task – or at least as many tasks as humans can – without being built for a specific one. At that point, the only cognitive difference between robots and humans will be speed, and it is likely that the robots will be much, much faster.
So, if such a time comes, will computers dutifully solve every problem we have ever had, or will they also have developed the level of free thought required to see us as lesser beings – perhaps even to try to kill us? And what do we do if the worst case happens?
This may seem like fiction, but I think it’s possible that by the time AI becomes this advanced, we will also have the technology required to upload a human mind (or consciousness) onto a computer. After all, if the computers of the future can support a program with the same processing capacity as a human brain, who is to say that the same computer cannot support an exact digital copy of someone’s consciousness?
For some people who are afraid of the Singularity, this possibility presents a solution. Once a person has uploaded themselves onto a computer, or enhanced their natural brain and body with computers, they no longer have the physical limitations of an ordinary person. They may well be able to compete with the speed and capability of post-Singularity AIs.
However, this potential solution brings problems of its own. Much as Alice and Bob kept only the words and phrases they needed, when uploading the consciousness of a human being – say, Sam – we may have to choose which parts of his consciousness to upload. The right choice is not obvious. For instance, it seems unnecessary to upload Sam’s desire for pizza to a machine that cannot eat, yet that desire is surely part of his personality. We might even want to leave out some parts just to save storage space. Once this sort of decision is made, I’m not sure you could say that it is really Sam who exists on the computer, rather than someone else.
These questions are all fascinating – but we probably have until at least 2045 before we need to answer them. The future looks like it is going to be an exciting ride, at the very least. Let’s just hope we humans are ready for whatever that may mean.