As the headlines have been dominated by the World Cup recently, one of the news stories that fell by the wayside last week was the story of chatbot Eugene Goostman passing the Turing Test. If you aren't familiar with it, the Turing Test is a method for testing artificial intelligence posited by Alan Turing.
Alan Turing was a British mathematician who helped crack the Enigma code during World War II, in addition to being considered one of the most important pioneers of artificial intelligence. To simplify his test, Turing suggested that if a human participant could not tell whether simulated behaviour was that of an AI or a human, the AI had passed the test. The version of the test conducted last week set two conditions:
1. The participant has five minutes to interact with the AI.
2. At least 30% of respondents must be fooled by the AI into thinking it is a real human.
The reason this news particularly interests us is how the test is conducted: language is the primary indicator participants use to judge whether or not they are speaking to an AI. In the most recent test, participants had a typed chat conversation with Eugene Goostman. However, they were told that Eugene Goostman was a 13-year-old Ukrainian boy. As we said in our earlier post on the Turing Test, a human who does not speak English as their first language may be mistaken for an AI by respondents.
The reverse can also be true. If participants are told that they are speaking to a 13-year-old Ukrainian boy, they are likely to be more lenient when judging the AI's responses, attributing misunderstandings, odd answers, and incorrect grammar to the boy's age and mother tongue rather than to failings of the AI.
While we have no doubt that Eugene Goostman is an impressive piece of programming, we can't help but think that these results are exaggerated. Had participants believed they were talking to a native English speaker, would he have passed the Turing Test?
What do you think? Genuine AI or skewed results? Tell us your thoughts in the comments below.