Category Archives: AI

Enthusiasm to learn is emotionally driven

Enthusiasm can be displayed in different ways, and it can also be present in a learner who shows no visible signs of being enthusiastic: they are simply enjoying the learning and keen to learn.

My current research investigates how learners demonstrate enthusiasm when interacting with a speech recognition interface, through both linguistic and non-linguistic features. The dataset I am using clearly shows that the psychological state of learners affects their enthusiasm, and therefore their language output and capacity to engage in learning, more than any other factor. While this came as a surprise, it aligns with theories of motivation and learning, which hold that positive emotional (and hence psychological) states favour learning, while negative emotional states (anxiety, stress, depression) can adversely affect it.

I’ve spent a lot of time with humanoid robots, speech recognition interfaces, and autonomous agents, and despite their degree of humanness, there is something decidedly safe for me about interacting with a non-conscious being. Maybe that is why Weizenbaum’s research was so successful! The non-judgemental attributes of a machine make users feel comfortable interacting, so they get more out of the learning experience. This is something I am still investigating, but Buddy, the robot in the image above, aims to understand the mood of the user and then respond accordingly. So empathy is now going beyond human…
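To make the idea of mood-adaptive responding concrete, here is a toy sketch of the pattern: estimate the user's mood, then pick a reply accordingly. The word lists and replies are entirely my own illustration, not how Buddy (or any real system) actually works; real systems use far richer signals than a word lexicon.

```python
# Illustrative only: a toy lexicon-based mood estimate. The word lists and
# canned replies below are invented for this sketch, not taken from any product.
POSITIVE = {"great", "happy", "fun", "love", "enjoy"}
NEGATIVE = {"tired", "stressed", "anxious", "sad", "hate"}

def estimate_mood(utterance: str) -> str:
    """Crude mood estimate: count positive vs negative words."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def adapt_response(utterance: str) -> str:
    """Choose a reply whose tone matches the estimated mood."""
    mood = estimate_mood(utterance)
    if mood == "negative":
        return "That sounds tough. Shall we try something easier today?"
    if mood == "positive":
        return "Great energy! Ready for a new challenge?"
    return "Okay. What shall we work on?"
```

Even this crude version shows the design choice involved: the agent's empathy is only as good as its mood estimate, which is why understanding the user's state is the hard part.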

AI vs EQ

According to the Oxford Dictionary, intelligence is the ability to learn, understand and think in a logical way about things, and the ability to do this well. Emotional intelligence, otherwise known as emotional quotient (EQ), is the ability to manage and understand emotions. I draw a parallel between AI and EQ because I strongly believe there are high expectations of the level of intelligence that machines, robots and autonomous agents must have, yet as humans we seem to have low expectations of each other's EQ. Yes, I am comparing EQ to AI, but if AI is a simulation of human intelligence in machines, then that simulation should also include emotional intelligence.

The point I am making relates to my current research, which uses an AI tool to investigate its capacity for interactional conversation with humans. I have tried to design such a tool myself; the outcome was a chat interface with limited capacity to understand spoken input, and its output was slow and finite. While the many virtual assistants now available, such as Siri and Alexa, show that programming a more effective tool is clearly possible, I question whether our expectations of their capabilities are unreasonable. If humans can lack EQ, and are often unable to empathise with others or communicate effectively, why do we expect intelligent autonomous systems to do so?
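The "finite" feel of a hand-built chat interface is easy to see in code. Below is a minimal rule-based responder of the kind I describe (the patterns and replies are invented for illustration, not my actual tool): anything outside its few hand-written patterns falls through to a canned apology, which is exactly the limitation a learner runs into.

```python
import re

# A handful of hand-written patterns. Everything the rules don't cover falls
# to FALLBACK, which is what makes simple rule-based bots feel "finite".
RULES = [
    (re.compile(r"\bhow are you\b", re.I),
     "I'm doing well, thanks. How are you?"),
    (re.compile(r"\b(hello|hi|hey)\b", re.I),
     "Hello! What would you like to practise today?"),
    (re.compile(r"\b(bye|goodbye)\b", re.I),
     "Goodbye! Keep practising."),
]

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def respond(utterance: str) -> str:
    """Return the first matching canned reply, or the fallback."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return FALLBACK
```

Every utterance the designer did not anticipate gets the same fallback, so the conversation collapses quickly; this is the gap that modern neural open-domain chatbots are trying to close.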

We are very far removed from fully understanding the human brain, and until we do, I think we need to be realistic with the potential capabilities of AI.

Don’t make me laugh!

The role of humour in conversation

We know it feels good to laugh, and there is scientific evidence that humour in general can decrease emotional distress and anxiety in stressful situations (Nijholt, 2003; Szabo, 2003). In addition to the feel-good factor, several studies argue that another function of humour is to create solidarity among participants in a conversation and to promote a sense of trust and interpersonal attraction (Attardo, 2017; Tsakona & Chovanec, 2018; Glen & Holt, 2007; Nijholt, 2003).

Chatbots and humour

Previous investigations into the role of humour in human–computer interaction with conversational agents (Adiwardana et al., 2020; Nijholt, 2003) emphasise that users consider an agent to be more human-like when it uses humour, which makes them feel more comfortable and positive about the interaction. It has been observed that humour can help make natural language interfaces more appealing and appear ‘friendlier’ to the user (Nijholt, 2003), which in the case of a foreign language learner could reduce anxiety and improve language output.

Fryer et al. (2020, p. 8) observe that “chatbots are designed as communicators”, and previous studies into the use of chatbots (Adiwardana et al., 2020; Fryer et al., 2020; Westerman et al., 2019) establish that the common priority of chatbot designers is for communication between humans and conversational agents to simulate human–human interaction. It is argued that the deficit in the conversational ability of interactional tools is compensated for by human-like qualities such as humour (Clark et al., 2019). This tendency, combined with the surge in AI research, has had a profound effect on the design of chatbots in recent years, as software architects strive to create agents that can emulate emotion and humour in their interactions.

Even if they wore Crocs?

I now understand why, when chatting with a bot last week, it asked me the following question and gave the following response:

Chatbot: Could you date someone if they had a terrible sense of fashion?

Kat: Yes

Chatbot: Even if they wore Crocs?

So next time you are chatting with a social chatbot, see if they try to interject humour and jokes, and think about how this makes you feel.
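The Crocs exchange hints at a simple design pattern: condition a quip on the user's previous answer. A toy sketch of that idea follows; the quips and the dictionary-lookup design are my own invention, not how any real social chatbot is built.

```python
import random
from typing import Optional

# Hypothetical example data: quips keyed to a learner's previous answer.
# The Crocs exchange above suggests the bot conditions its follow-up on "yes".
FOLLOW_UPS = {
    "yes": ["Even if they wore Crocs?", "Even socks with sandals?"],
    "no": ["Fair enough, fashion comes first!"],
}

def humorous_follow_up(answer: str) -> Optional[str]:
    """Return a quip that fits the user's answer, or None if no quip applies."""
    options = FOLLOW_UPS.get(answer.strip().lower())
    return random.choice(options) if options else None
```

Returning None when no quip applies matters: a bot that forces a joke at the wrong moment undermines exactly the trust and solidarity the studies above describe.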

References:

Adiwardana, D., Luong, M.-T., So, D. R., Hall, J., Fiedel, N., Thoppilan, R., Yang, Z., Kulshreshtha, A., Nemade, G., Lu, Y., & Le, Q. V. (2020). Towards a Human-like Open-Domain Chatbot. Google Research, Brain Team. arXiv:2001.09977v3

Attardo, S. (2017). Humor in Language. Oxford Research Encyclopedia of Linguistics online. https://doi.org/10.1093/acrefore/9780199384655.013.342

Clark, L., Pantidi, N., Cooney, O., Doyle, P., Garaialde, D., Edwards, J., Spillane, B., Gilmartin, E., Murad, C., Munteanu, C., Wade, V., & Cowan, B. R. (2019). What makes a good conversation? Challenges in designing truly conversational agents. Conference on Human Factors in Computing Systems – Proceedings, pp. 1–12.

Fryer, L., Coniam, D., Carpenter, R., & Lapusneanu, D. (2020). Bots for language learning now: Current and future directions. Language Learning & Technology, 24(2), pp. 8–22.

Nijholt, A. (2003). Humor and Embodied Conversational Agents. http://doc.utwente.nl/41392/

Szabo, A. (2003). The acute effect of humor and exercise on mood and anxiety. Journal of Leisure Research, 35(2), pp. 152–162.

Tsakona, V., & Chovanec, J. (2018). Investigating the dynamics of humor: Towards a theory of interactional humor. In V. Tsakona & J. Chovanec (Eds.), The Dynamics of Interactional Humor: Creating and Negotiating Humor in Everyday Encounters. John Benjamins, pp. 1–26.

What does it mean to be human?

With the surge of interest and investment into AI, the question at the forefront of my mind is ‘What does it mean to be human?’ The apparent obsession with AI is to replicate human intelligence on all levels, but the problem I have with this is that I don’t think we fully understand what it means to be human. I think it is impossible to reproduce human ‘intelligence’ without first appreciating the complexities of the human brain. Hawkins (2004) argues that the primary reason we have been unable to successfully build a machine that thinks exactly like a human, is our lack of knowledge about the complex functioning of cerebral activity, and how the human brain is able to process information without thinking.

This is why the work of Hiroshi Ishiguro, the creator of both Erica and Geminoid, interests me so much. Ishiguro's motivation for creating android robots is to better understand humans, in order to build better robots, which can in turn help humans. I met Erica in 2016, and the experience made me realise that we are perhaps pursuing goals of human replication that are unnecessary. Besides, which model of human should be used as the blueprint for androids and humanoid robots? Don't get me wrong, I am fascinated by Ishiguro's creation of Erica.

My current research focuses on speech dialogue systems and human–computer interaction (HCI) for language learning, which I intend to develop so it can be mapped onto an anthropomorphic robot for the same purposes. Research demonstrates that one specific reason non-human interactive agents are successful in language learning is that they disinhibit learners and therefore promote interaction, especially amongst those with special educational needs.

The attraction of humanoid robots and androids for me, therefore, is not necessarily how representative they are of humans, but the affordances of their non-human aspects, such as being non-judgemental. In my opinion, we need more Ericas in the world.

What does 2020 mean for Ed Tech?

A new year AND a new decade, so what does 2020 mean for Ed Tech? Twenty years ago we were getting to grips with communicating via email. Ten years ago iPhones had already been around for three years, but their price bracket pitched them out of reach for the majority of mobile phone users. So here we are in 2020 with driverless car technology being widely tried and tested, and with China witnessing the birth of the third gene-edited baby. So where does this leave language learning and tech, and what is in store for the near future?

Where we are now

Apps, apps, apps… With the gaming community reaching a population of 2.5 billion globally in 2019 (statista.com), it is no surprise that apps are an attractive option for learning English. The default options tend to be Babbel, Duolingo and Memrise, but there is a plethora of options to choose from. Some fun apps I have recently experimented with are ESLA for pronunciation, TALK for speaking and listening, and EF Hello.

In the classroom, however, the digital landscape can be quite different. Low-resource contexts and reluctance from teaching professionals to incorporate tech into the learning environment can mean that opportunities for learners to connect with others and seek information are not available. Even in some of the most highly tech-penetrated societies, 19th-century rote-based learning and high-stakes testing approaches are favoured.

Predictions for the future

Does educational technology have all the answers we need to improve the language output of ESL learners globally? No, probably not. However, society has been so dramatically altered by the impact of technology in almost every facet of our lives that it would be rather odd, I feel, to reject it in teaching and learning environments.

In higher education the main concerns are data privacy and ethics, given the exposure of student data to digital areas such as the cloud. Yet chatbots are starting to be integrated to answer students' university-related FAQs. Both the Differ and Hubert chatbots are being researched for their potential to improve qualitative student interaction and feedback.

Kat’s predictions

In all honesty, I think it is a tough call to gauge where we will be with Ed Tech in ten years. Data privacy is a considerable issue when incorporating elements of AI into learning contexts. It is less of an issue with VR and AR, which underpins their comparative proliferation in teaching and learning. I feel that VR and AR will continue to mature and provide a more full-bodied learning experience when using VLEs, though this may be a slightly more complex paradigm than some are able or prepared to employ.

I still firmly believe that reflective practice, in which learners use recorded audio or video of their own language production, is a solid foundation for learning. While this doesn't mean the introduction of a big pioneering tech tool, it highlights the continued relevance of a reliable one. In the same way, I continue to use WhatsApp, WeChat and Line to share learning content with learners and to encourage them to interact with each other and with other learning communities.

AI: a new currency or the next industrial revolution?

A question that has been on my lips recently is whether AI is set to be the next industrial revolution, or a new currency of the future.

AI Past

The industrial revolution, as its name denotes, revolutionised modern industry and manufacturing as we know them today. When the internet emerged in the late 1980s, it seemed unimaginable that less than 30 years later wireless connections and digital devices would have such a pervasive presence in society. New inventions come and go, and technological innovations are created whether they succeed or not, but in most cases they are shaped by the demands of people.

The origins of AI date back to Turing's computational machine, more commonly known as the Turing machine, conceived in 1936. The term itself was coined later, in 1955, by McCarthy, who defined AI as "the science and engineering of making intelligent machines, especially intelligent computer programs" (McCarthy, 2001, p. 2); in other words, trying to understand human intelligence by using computers.

AI Present

During the last 80 years, advances in AI technology have reached astounding levels. AI has clearly had a profound impact on society, to the extent that it has become a tool in all aspects of life: from banking and email pop-ups to 'personalised' product recommendations, intelligent personal assistants such as Siri and Alexa, and chatbots.

AI Future

Academic and business investigation and reporting in the field of AI both consider it one of the biggest influences on the future of the market and society. Predicted revenues from AI are unprecedented, resulting in extensive funding and investment from private companies and governments, which highlights the significance of AI in society. China has recently announced it is building a $2.1 billion industrial park for AI research. The past year has witnessed an increasing number of nations realising the importance of AI in shaping the economics of the future; some even consider it a currency. Bitcoin, stand aside: AI is the new currency.