Every year or so, Apple announces a new version of one of its products, claiming it is better than its predecessor. This year, the ‘4S’ that Apple trotted out featured ‘Siri’, a voice-driven assistant that users could talk to as though it were a flesh-and-blood personal assistant.
Behind the scenes, Siri is the result of decades of artificial intelligence (AI) research funded by the US Defense Department and conducted at Stanford University’s contract research centre. Apple acquired Siri in April 2010 and continued to develop it, focusing on understanding human conversations, interpreting context within them and, finally, turning requests into actionable tasks.
For all those tasks, Siri brims with AI. It even offers witty answers to many rhetorical or nonsensical questions, which makes it seem as though Siri has a personality. In reality, it doesn’t.
What would it be like if Siri actually had a personality, though? What if she were more human?
To find out, we tracked down Bruce Wilcox, a veteran AI programmer who crafts rather intelligent ‘chatbots’ that have been known to fool humans on occasion.
Wilcox’s bots Suzette and Rosette won the well-regarded Loebner Prize in 2010 and 2011 respectively by fooling human judges into thinking they, too, were human. The contest is based on the Turing Test, long considered the gold standard of AI.
We pitched the same set of questions to Rosette and Siri to understand the differences between the two.