
Less is more: the argument in defence of HCI for speaking skills

Less is more, or is it?

I was taught from a young age that the wise man is the one who observes and says very little. For foreign language learners, however, I think it is quite the opposite: the more they try to speak and express themselves orally, the more they practise and learn about oral interaction.

My current research investigates the oral output prompted by interaction with an autonomous agent, and surprisingly I am not finding that this output differs from that of human–human interaction. There are days when participants are motivated and enthusiastic to interact, and others when they provide monosyllabic answers.

Where I’m going with this is that investigating learners interacting with a digital tool has shown me that in the classroom I often expect learners to perform constantly, and I feel frustrated when they don’t willingly produce output on request. I am learning that deliberate practice is perhaps not an effective method of language learning, and that adopting a more laissez-faire approach may be more appropriate.

So, on the one hand we need learners to speak as often as possible, but on the other hand we can’t expect them always to be willing to speak. For me this highlights the value of human–computer interaction (HCI) for language learning and suggests that we should lean more heavily on autonomous agents for speaking practice. They provide limitless opportunities, never tire, and can be used when learners want to speak, not when they have to.

AI vs EQ

According to the Oxford Dictionary, intelligence is the ability to learn, understand and think in a logical way about things, and the ability to do this well. Emotional intelligence, otherwise known as emotional quotient (EQ), is the ability to understand and manage emotions. I am drawing a parallel between AI and EQ because I strongly believe there are high expectations of the level of intelligence that machines, robots and autonomous agents are required to have, yet, as humans, we have seemingly low expectations of each other’s EQ. Yes, I am comparing EQ to AI, but if AI is a simulation of human intelligence in machines, then that intelligence should also include emotional intelligence.

The point I am making relates to my current research, which uses an AI tool to investigate its capacity for conversational interaction with humans. I have tried to design such a tool myself, and the outcome was a chat interface with a limited capacity to understand oral input, whose output was also slow and finite. While the many virtual assistants now available, such as Siri and Alexa, show that programming a more effective tool is clearly possible, I question whether our expectations of their capabilities are perhaps unreasonable. If humans can lack EQ and are often unable to empathise with others or communicate effectively, why do we expect intelligent autonomous systems to be able to do so?
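
To give a rough sense of what I mean by “finite”, here is a small sketch in Python. It is purely hypothetical, not the tool I actually built, but it illustrates the core limitation of a simple rule-based chat interface: a fixed set of keywords mapped to pre-written replies, so the conversation quickly runs out of road.

```python
# A minimal, hypothetical sketch of a rule-based chat responder.
# Real spoken-dialogue tools add speech recognition and synthesis on top,
# but the underlying limitation is the same: a fixed set of patterns and replies.

RULES = {
    "hello": "Hello! How are you today?",
    "weather": "It looks sunny outside. Do you like this weather?",
    "bye": "Goodbye! Well done for practising your speaking.",
}

FALLBACK = "Sorry, I didn't understand. Could you say that again?"

def respond(utterance: str) -> str:
    """Return a canned reply if any keyword matches, otherwise a fallback."""
    text = utterance.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK

if __name__ == "__main__":
    print(respond("Hello there"))        # matches "hello" -> canned greeting
    print(respond("What a lovely day"))  # no keyword matched -> fallback reply
```

However rich the rule set, the learner can only ever hear one of the replies the designer wrote in advance, which is a long way from the open-ended interaction we expect of a human interlocutor.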

We are very far removed from fully understanding the human brain, and until we do, I think we need to be realistic about the potential capabilities of AI.