
Humanness in AI: the Turing Test and a technology based on deception

Jennivine Chen discusses the ethical issues surrounding modern-day AI chatbot technologies.

For much of the 20th century, the possibility of a ‘computational machine’ being sentient, or in effect human, remained a distant philosophical hypothesis. Yet these discussions apply now more than ever, at a time when increasingly integrated technology is blurring the line between reality and the virtual world.

The idea of human-like computing has been central to the field of Artificial Intelligence ever since Alan Turing conceived of the ‘computational machine’ in his 1936 paper. In 1950, Turing proposed the following ‘imitation game’ – now commonly known as the Turing Test – to explore the essence of being human. The game is played by three agents: two contestants, A and B, one a human and the other a machine, and a human interrogator, C. Without seeing the contestants, C may put any question to A and B, to which they respond via typewritten communication, and from the answers C must distinguish the human from the machine. The questions may be of the following kind: “Will A and B please tell me what they think of this Shakespearean sonnet?”, or “What is the result of 29345 times 5387?”. The machine’s objective is to cause the interrogator to mistakenly conclude that it is the human; the person’s objective is to help the interrogator identify the machine.
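Turing’s set-up can be made concrete with a short sketch. The code below is only a toy rendering of the protocol, not anything from Turing’s paper; every function name is a hypothetical stand-in.

```python
import random

def imitation_game(ask, guess, human_reply, machine_reply, n_questions=5):
    """Toy rendering of Turing's set-up. All four callables are
    hypothetical stand-ins: `ask` produces the interrogator's next
    question, `guess` names the label it believes hides the machine,
    and the two `*_reply` functions answer as the contestants."""
    # Hide the contestants behind randomly assigned labels, so the
    # interrogator cannot rely on position.
    labels = ["A", "B"]
    random.shuffle(labels)
    contestants = {labels[0]: human_reply, labels[1]: machine_reply}

    transcript = []
    for _ in range(n_questions):
        question = ask(transcript)  # e.g. "What is 29345 times 5387?"
        answers = {label: contestants[label](question) for label in ("A", "B")}
        transcript.append((question, answers))

    verdict = guess(transcript)  # "A" or "B"
    # The machine wins if the interrogator picks the wrong contestant.
    return contestants[verdict] is not machine_reply

# A trivial demo with scripted stand-ins: a perfect imitation leaves
# the interrogator guessing blindly, so the machine wins half the time.
fooled = imitation_game(
    ask=lambda transcript: "What do you think of this Shakespearean sonnet?",
    guess=lambda transcript: random.choice(["A", "B"]),
    human_reply=lambda q: "It moves me deeply.",
    machine_reply=lambda q: "It moves me deeply.",
)
print("Machine fooled the interrogator:", fooled)
```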

In his paper, Turing presented the ‘imitation game’ as a sufficient condition for genuine intelligence in a machine. In other words, he believed that if the machine were able to answer any such enquiries well enough to fool the interrogator, then we should admit that it has achieved genuine intelligence. He also noted that machine intelligence may work in ways vastly different from our own, so the Turing Test should not be treated as a necessary condition for machine intelligence.

Since then, developments in chatbot technology have exposed issues and ‘loopholes’ in Turing’s formulation of machine intelligence. As early as 1966, one of the earliest natural language processing programmes, ELIZA, appeared to display characteristics sufficient to pass the Turing Test; but upon closer inspection, the proclaimed chatbot therapist was no more than an act of trickery, built on techniques such as pattern matching and brute-force regurgitation.
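To see why this counts as trickery rather than understanding, consider a minimal ELIZA-style responder. The rules below are illustrative inventions, not Joseph Weizenbaum’s original script; the point is only the mechanism.

```python
import re

# Illustrative ELIZA-style rules: each regex maps to a canned response
# template. These are toy examples, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]

FALLBACK = "Please, go on."  # catch-all when no pattern matches

def respond(utterance: str) -> str:
    """Return the first matching canned response, with no understanding
    of what the words actually mean."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
print(respond("The weather is nice"))   # Please, go on.
```

The program never models what ‘anxious’ means: it rewrites surface patterns and falls back to a stock phrase, which is precisely the gap between sounding human and being intelligent.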

So perhaps chatbot AI isn’t quite the genuine machine intelligence Turing had in mind, but passing the Turing Test is no doubt still an important first step towards achieving it. As such, we ought to be more than careful to resolve the ethical dilemmas that arise at every step on the way towards machine intelligence. And because these systems lack genuine semantic understanding of the language they output, AI ethics is particularly difficult to navigate when it comes to chatbot technology.

In 2016, Microsoft launched its own chatbot, affectionately nicknamed ‘Tay’. Tay was a Twitter bot that was supposed to interact with users and learn from those interactions. Despite Tay’s friendly and positive debut, internet trolls quickly spotted a way to exploit her. To manipulate her algorithm, they began bombarding the chatbot with language filled with misogyny, racism, and other hateful content to see if she would imitate them. She did. Within 24 hours, Tay descended from posting tweets such as “humans are super cool” to generating antisemitic hate speech.

More recently, Alexa, Amazon’s voice assistant, made headlines after it told a 10-year-old girl to touch a live plug with a penny. The suggestion came after the girl asked Alexa for “a challenge to do”; in this instance, the dangerous challenge was one that had begun circulating on TikTok and other social media platforms about a year earlier.

While these ethical violations were resolved by their respective tech companies as soon as they were found, subtler nuances in AI design can also have a huge impact on ethical considerations. Intelligent or not, the convenience that chatbot assistants bring us cannot be denied. Recent research from the University of Florida revealed that, whether the virtual assistant on an online retail platform was a well-implemented AI or a real person, high scores of ‘perceived humanness’ led to greater consumer trust in the company.

However, this brings into question the ethicality of a technology based on deception. Google’s voice assistant, Duplex, sparked controversy in 2018 when it fooled shop owners by successfully booking restaurant reservations and hair salon appointments in a distinctly human-sounding voice. The technology itself may be harmless enough, and the live audience at the tech launch certainly felt ‘in on the joke’, but, equally, the voice assistant could have achieved the same goals in more honest and transparent ways. Had the Google developers instead tested the hypothesis ‘is this technology better than preceding versions, or just as good as a human caller?’, they would not have had to deceive people in the experiment.

As technologies continue to evolve and develop, there may come a day when the genuine machine intelligence Turing envisioned is achieved. But ultimately, despite the theatrical value of chatbots passing the Turing Test, I believe the most important criterion for any technological advancement should be improved performance and better user experience, rather than the ability to deceive.

Image: geralt / CC Public Domain Certification via Pixabay
