My current investigation into speech interfaces as language-learning partners is revealing that one of the main problems companies face when developing these tools is that the tools do not behave as intended. The product often lacks the layers of programming necessary to design a bot that can fully simulate human-human interaction.
This does not come as a surprise to me, because I firmly believe that until we reach a stage where we fully understand the human brain, we cannot replicate it in a machine. I believe we are still quite far from that stage, so I find efforts to build an AI tool that fully replicates the human brain and fully simulates human-human interaction akin to herding cats. To some extent it can be done, but not quite as we’d like or expect.
From my own experience of building a conversational bot, I appreciate the intricacy of the programming required to build a tool that acts as we would like. It is time-consuming, arduous, and extremely challenging, to say the least. This, I presume, is why the English language learning landscape is not flooded with such tools.
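To give a sense of why such a bot's output feels finite, here is a toy sketch (entirely hypothetical, not the actual research tool) of the kind of hand-coded rule matching a simple chat interface relies on. Every exchange the bot can handle must be anticipated in advance; anything else falls through to a fallback.

```python
# Toy rule-based chat responder (hypothetical illustration).
# Each trigger phrase and reply is hand-written, which is why
# the bot's conversational range is strictly finite.

RULES = {
    "hello": "Hi there! What would you like to practise today?",
    "how are you": "I'm a program, so always fine. How are you?",
    "bye": "Goodbye! Keep practising.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def respond(utterance: str) -> str:
    """Return the reply for the first trigger found in the utterance,
    or the fallback when no rule matches."""
    text = utterance.lower().strip()
    for trigger, reply in RULES.items():
        if trigger in text:
            return reply
    return FALLBACK
```

Each additional conversational layer a designer wants (intent detection, context tracking, repair strategies) multiplies this hand-crafting effort, which is part of what makes the programming so arduous.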
I am currently examining learner reactions to spoken digital interface interaction, trying to understand how learners respond and what specifically makes them respond in different ways. My hope is that a better understanding of user discourse will provide insights into the characteristics an effective chat interface requires.
According to The Oxford Dictionary, intelligence is the ability to learn, understand and think in a logical way about things, and the ability to do this well. Emotional intelligence, otherwise known as emotional quotient (EQ), is the ability to manage and understand emotions. I am drawing a parallel between AI and EQ because I strongly believe there are high expectations regarding the level of intelligence machines, robots and autonomous agents are required to have, yet as humans we have seemingly low expectations of each other's EQ. Yes, I am comparing EQ to AI, but if AI is a simulation of human intelligence in machines, then that simulation must also include emotional intelligence.
This point relates to my current research, which uses an AI tool to investigate its capacity for interactional conversation with humans. I have tried to design such a tool myself; the outcome was a chat interface with limited capacity to understand oral input, and its output was slow and finite. While the many virtual assistants now available, such as Siri and Alexa, show that programming a more effective tool is clearly possible, I question whether our expectations of their capabilities are unreasonable. If humans can lack EQ and are often unable to empathise with others or communicate effectively, why do we expect intelligent autonomous systems to be able to do this?
We are very far removed from fully understanding the human brain, and until we do, I think we need to be realistic about the potential capabilities of AI.