So just how should you speak to a chatbot? Cast your mind back to Tay, the chatbot built by Microsoft. She was shut down after posting offensive and unsuitable content on her Twitter account. Hardly surprising, really, considering she was trained on a corpus of Twitter posts and dialogues: a perfect example of the hunter becoming the hunted.
The apparent ubiquity of chatbots in the customer service sector is proving beneficial to the companies deploying them, but less convenient for users. The majority of conversational agents are built on a retrieval-based model: the system holds a repository of predefined responses and selects the most appropriate one based on the input text and the conversational context. The fallback behaviour can be limited to as few as three utterances before the agent gives up. Let’s look at an opening exchange to see how this works:
‘Hello, what can I do for you today?’
> No response, delayed response from user, or the chatbot is unable to interpret the user input.
‘I missed that, say that again?’
> No response, delayed response from user, or the chatbot is unable to interpret the user input.
‘Sorry, can you say that again?’
> No response, delayed response from user, or the chatbot is unable to interpret the user input.
‘Sorry, I can’t help.’
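The three-strike pattern above can be sketched in a few lines of Python. This is a minimal illustration of a retrieval-based fallback loop, not any real chatbot framework: every name, keyword, and canned reply here is hypothetical, and a production agent would score candidate responses against context rather than match keywords.

```python
# Minimal sketch of a retrieval-based fallback loop.
# All names, keywords, and replies are illustrative assumptions,
# not the API of any real chatbot platform.

RESPONSES = {
    "greeting": "Hello, what can I do for you today?",
    "reprompt": ["I missed that, say that again?",
                 "Sorry, can you say that again?"],
    "give_up": "Sorry, I can't help.",
}

def match_intent(user_input):
    """Toy retrieval step: return a predefined reply only for inputs we recognise."""
    known = {"order": "Let me look up your order.",
             "refund": "I can start a refund for you."}
    if user_input:
        for keyword, reply in known.items():
            if keyword in user_input.lower():
                return reply
    return None  # no response, or nothing in the repository matched

def converse(turns):
    """Run the three-utterance fallback over a list of user turns."""
    transcript = [RESPONSES["greeting"]]
    misses = 0
    for turn in turns:
        reply = match_intent(turn)
        if reply:
            transcript.append(reply)
            misses = 0  # a successful match resets the strike count
        else:
            if misses < len(RESPONSES["reprompt"]):
                transcript.append(RESPONSES["reprompt"][misses])
                misses += 1
            else:
                transcript.append(RESPONSES["give_up"])
                break  # third strike: the agent gives up
    return transcript
```

Feeding this loop three uninterpretable turns reproduces the exchange above exactly: greeting, two reprompts, then the give-up message. The point of the sketch is how little room the user has: anything outside the repository counts as a strike.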
This leads me to believe that we as users need to learn how to speak to an automated conversational agent before deciding what we want from it. If we don’t respond, or respond with indecipherable discourse, we are expecting a machine to handle input that humans would also struggle to interpret. While considerable research and development is under way in the field of intelligent conversational agents, we are still a long way from agents that can interpret our utterances and commands well enough to become an integral part of mainstream customer service interfaces.