My current investigation into speech interfaces as language-learning partners is revealing that one of the main problems companies face when developing these tools is that the tools do not behave as their designers intend. The product often lacks the layers of programming required to build a bot that can fully simulate human-human interaction.
This does not come as a surprise to me, because I firmly believe that until we fully understand the human brain, we cannot expect to replicate its behaviour in software. I believe we are still quite far from that stage, so I find efforts to build an AI tool that can fully replicate the human brain and fully simulate human-human interaction akin to herding cats: to some extent it can be done, but not quite as we’d like or expect.
From my own experience of building a conversational bot, I appreciate the intricacies of the programming required to build a tool that behaves as we would like. It is time-consuming, arduous, and extremely challenging, to say the least. This is why, I presume, the English language learning landscape is not flooded with such tools.
I am currently examining learner reactions to spoken digital interface interaction, trying to understand how learners respond and what specifically makes them respond in different ways. My hope is that a better understanding of user discourse will provide some insight into the characteristics an effective chat interface requires.