Tag Archives: conversational agents

From RALL to chatbots

I began the year with a strong desire to continue my research into RALL, and while that is still the case, my research has led me to investigate the benefits and pedagogical potential of using chatbot teachers to assist in language learning.

The research examines the use of a speech-to-speech interface as the language-learning tool, designed with the specific intention of promoting oral interaction in English. The computer (chatbot) will assume the role of conversational partner, allowing the learner to practice conversing in English. A retrieval-based model will be used to select appropriate output from predefined responses. This model will then be mapped onto a gamification framework to ensure an interesting and engaging interactional experience.
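To make the retrieval-based idea concrete, here is a minimal sketch of how such a model might select a response. All names and the keyword-overlap scoring are hypothetical simplifications; a real system would use richer matching over input text and context.

```python
# Minimal sketch of a retrieval-based response model (all names hypothetical).
# The chatbot scores each predefined response against the learner's input
# and returns the best match, rather than generating new text.

RESPONSES = {
    "hello hi greetings": "Hello! What would you like to talk about today?",
    "weather rain sunny": "It's a lovely day, isn't it? Do you like sunny weather?",
    "food eat lunch dinner": "What did you have for lunch today?",
}

FALLBACK = "Sorry, can you say that again?"

def score(trigger_words: str, user_input: str) -> int:
    """Count overlaps between the input and a response's trigger words."""
    words = set(user_input.lower().split())
    return len(words & set(trigger_words.split()))

def select_response(user_input: str) -> str:
    """Pick the predefined response whose trigger words best match the input."""
    best = max(RESPONSES, key=lambda k: score(k, user_input))
    if score(best, user_input) == 0:
        return FALLBACK  # nothing matched: fall back to a repair prompt
    return RESPONSES[best]
```

The key design point is that every possible output already exists in the repository; the model only chooses among them, which keeps responses safe and predictable at the cost of flexibility.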

Speech is one of the most powerful forms of human communication; hence, I intend to add to current research in human-computer interaction by improving speech interaction between learners and the conversational agent (the chatbot), in order to better simulate human-human speech interaction.

So just how should you speak to a chatbot?

So just how should you speak to a chatbot? Cast your mind back to Tay, the chatbot built by Microsoft. She was shut down on the grounds of inappropriateness after posting offensive and unsuitable content on her Twitter account. Hardly surprising, really, considering she was built using a corpus of Twitter posts and dialogues: a perfect example of the hunter becoming the hunted.

The apparent ubiquity of chatbots in the customer service sector is proving to be somewhat beneficial to the companies using them, but less convenient for users. The majority of conversational agents are built using a retrieval-based model, which selects the most appropriate reply from a repository of predefined responses, based on the input text and context. The output could be limited to as few as three utterances per response. Let’s look at an opening turn to see how this works:

‘Hello, what can I do for you today?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘I missed that, say that again?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘Sorry, can you say that again?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘Sorry, I can’t help.’
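The exchange above can be sketched as a simple repair loop: the bot reprompts on missing or uninterpretable input, then gives up after three attempts. This is an illustrative simplification, not how any particular product is implemented, and the function names are hypothetical.

```python
# Sketch of the repair loop behind the exchange above (names hypothetical).
# The bot has a fixed budget of prompts; uninterpretable or absent input
# burns through them until the conversation is abandoned.

REPROMPTS = [
    "Hello, what can I do for you today?",
    "I missed that, say that again?",
    "Sorry, can you say that again?",
]
GIVE_UP = "Sorry, I can't help."

def run_dialogue(get_input) -> str:
    """Prompt the user up to three times; give up if nothing usable arrives.

    `get_input` is any callable that takes the bot's prompt and returns the
    user's reply (or an empty string for silence / a timeout).
    """
    for prompt in REPROMPTS:
        reply = get_input(prompt)
        if reply and reply.strip():  # interpretable input: continue the conversation
            return f"You said: {reply}"
    return GIVE_UP  # all attempts exhausted
```

Seen this way, the bot’s apparent rudeness is just a budget running out: nothing in the loop distinguishes a silent user from one whose speech it simply failed to parse.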

This leads me to believe that we, as users, need to learn how to speak to an automated conversational agent before determining what we want from it. If we don’t respond, or respond with indecipherable discourse, then we are expecting a machine to manage a task that humans would also struggle to interpret. While considerable research and development is being carried out in the field of intelligent conversational agents, we are still a long way from their becoming an integral part of mainstream customer service interfaces, able to interpret our utterances and commands in line with our expectations.