Don’t make me laugh!

The role of humour in conversation

We know it feels good to laugh, and there is scientific support for the argument that humour can decrease emotional distress and anxiety in stressful situations (Nijholt, 2003; Szabo, 2003). In addition to the feel-good factor, several studies argue that another function of humour is to create solidarity among participants in a conversation and to promote a sense of trust and interpersonal attraction (Attardo, 2017; Tsakona & Chovanec, 2018; Glen & Holt, 2007; Nijholt, 2003).

Chatbots and humour

Previous investigations into the role of humour in human–computer interaction with conversational agents (Adiwardana et al., 2020; Nijholt, 2003) emphasise that users consider an agent to be more human-like when it uses humour, and that this makes them feel more comfortable and positive about the interaction. It has been observed that humour can help make natural language interfaces more appealing and appear ‘friendlier’ to the user (Nijholt, 2003), which in the case of a foreign language learner could reduce anxiety and improve language output.

It has been argued that “chatbots are designed as communicators” (Fryer et al., 2020, p. 8), with previous studies into the use of chatbots (Adiwardana et al., 2020; Fryer et al., 2020; Westerman et al., 2019) establishing that the common priority of chatbot designers is for communication between humans and conversational agents to simulate human-human interaction. It is argued that the deficit in the conversational ability of interactional tools is compensated for by human-like qualities such as humour (Clark et al., 2019). This tendency, combined with the surge in AI research, has had a profound effect on the design of chatbots in recent years, as software architects strive to create agents that can emulate emotion and humour in their interactions.

Even if they wore Crocs?

I now understand why, when I was chatting with a bot last week, it randomly asked me the following question and gave the following response:

Chatbot: Could you date someone if they had a terrible sense of fashion?

Kat: Yes

Chatbot: Even if they wore Crocs?

So next time you are chatting with a social chatbot, see if it tries to interject humour and jokes, and think about how this makes you feel.

References:

Adiwardana, D., Luong, M.-T., So, D. R., Hall, J., Fiedel, N., Thoppilan, R., Yang, Z., Kulshreshtha, A., Nemade, G., Lu, Y., & Le, Q. V. (2020). Towards a Human-like Open-Domain Chatbot. Google Research, Brain Team. arXiv:2001.09977v3

Attardo, S. (2017). Humor in Language. Oxford Research Encyclopedia of Linguistics online. https://doi.org/10.1093/acrefore/9780199384655.013.342

Clark, L., Pantidi, N., Cooney, O., Doyle, P., Garaialde, D., Edwards, J., Spillane, B., Gilmartin, E., Murad, C., Munteanu, C., Wade, V., & Cowan, B. R. (2019). What makes a good conversation? Challenges in designing truly conversational agents. Conference on Human Factors in Computing Systems – Proceedings, pp. 1–12

Fryer, L., Coniam, D., Carpenter, R., & Lapusneanu, D. (2020). Bots for language learning now: Current and future directions. Language Learning & Technology, 24(2), pp. 8–22

Nijholt, A. (2003). Humor and Embodied Conversational Agents. http://doc.utwente.nl/41392/

Szabo, A. (2003). The acute effect of humor and exercise on mood and anxiety. Journal of Leisure Research, 35(2), pp. 152–162

Tsakona, V., & Chovanec, J. (2018). Investigating the dynamics of humor: Towards a theory of interactional humor. In V. Tsakona & J. Chovanec (Eds.), The Dynamics of Interactional Humor: Creating and Negotiating Humor in Everyday Encounters. John Benjamins, pp. 1–26

Teaching & learning in the Covid-19 era

There is a reason why The Open University is still going strong some 51 years after it was founded in 1969: it was created for the specific purpose of distance learning, and it based all its principles on sound pedagogy to reach the learning objectives it set out.

Distance learning and online teaching and learning are nothing new in today’s technology-rich society, so why is it proving such a challenge to find effective learning solutions in a world engulfed by physical confinement within the four walls of our homes? The answer, quite simply, is that there is a major difference between online learning and emergency remote teaching (ERT).

Online courses have been specifically designed to be delivered online, in a self-paced learning mode, to learners who have never met, and may never meet, the tutor delivering the course. Expectations rest with the learners themselves: the more they are prepared to put into a course, the more they will get out of it. They may be passive learners who ‘lurk’ in the background, scrolling through the course forums and abstaining from any synchronous interaction, or they may be active learners who contribute to threads in the forum and are keen to participate in live sessions held with the tutor and others on the course. The motivation for choosing a distance course may stem from geographical location or from work and/or family commitments, so online learning generally offers flexibility with very few time or location commitments.

ERT refers to courses that were developed for face-to-face instruction but which, through force majeure, have been transferred to online delivery. The intention is for the same content to be covered and completed in the same time frame as with live delivery, and for the learning objectives to be met regardless of the change in medium. What I am hearing from colleagues, and experiencing myself, is that this is most definitely not the case, and I think this needs to be given consideration when re-designing courses for online delivery.

Oftentimes, less is more, and in the Covid-19 emergency remote teaching and learning contexts that many of us find ourselves in, the role of empathy and compassion for our learners is increasingly important. We are all suffering from imposter syndrome, anxiety, and social and family pressures that are sapping our motivation, strength, self-worth and productivity. So I think we need to lessen our expectations of our learners and offer more support. The 3 things that this global pandemic has taught me with respect to teaching and learning online are:

1: Increase the task completion time.

2: Don’t be disappointed if what was on the agenda is not completed.

3: Lower your expectations.

4: Take lots of breaks and reward yourself often.

Okay, I know I said 3, but number 4 is really important. While it pains me to admit this, we are not robots (yet), so we need to factor in the human side of this rather odd situation we are all living in.

Is EdTech trying to reinvent the wheel?

I attended the Digital Learning Colloquium at Cambridge last week, and it gave a fascinating insight into the future landscape of EdTech, painted by a broad spectrum of attendees from different backgrounds: product development, research, academia, consultancy, product design, and the odd ELT teacher and trainer.

While there were clear threads of discussion regarding the normalisation, in ten years’ time, of the tech we are using today (AR and VR, to name a couple), one clear question springs to my mind: is EdTech trying to reinvent the wheel?

My opinion regarding the use of EdTech for teaching and learning is the same as for any activity a teacher or learner engages in: it must rest on sound pedagogical reasoning. For me, it is not so much what is being done to learn something as the rationale for how it reaches the learning objective. If an activity which incorporates an AR app really does improve the learning outcomes, or facilitates reaching the pedagogical goal of the lesson, then I’m all for it. I do, however, strongly believe that a lot of products and tools are simply trying to tap into the multi-billion-dollar industry that EdTech has become.

Penny Ur (1996) claimed that there is a difference between a teacher with twenty years’ experience and one with one year’s experience repeated twenty times. I wholeheartedly agree, because I believe that teaching professionals need to learn, adapt and grow along with their experience, teaching context, and learner needs. So, yes, EdTech could well be part of this growth and development as a teacher, but just because a tool looks good doesn’t mean it actually is. The tool needs to achieve the learning goal that has been set, whether by motivating learners or by improving interaction; but, I reiterate, the main motivation for using any tool, digital or not, should be pedagogical grounds, and the tool must be exploited effectively.

The talk I gave looked at 3 simple tools I use in the classroom to promote interaction and provide learning solutions to some of the problems I encounter with learners in specific contexts. The tools were: Padlet; IM apps (WhatsApp and WeChat); and dictaphone apps on smartphones. Gone are the days of recording ourselves on a TDK C90 cassette to hear how we sound when we speak a foreign language, but the practice itself is highly effective. The modern-day version is a dictaphone app, which I regularly incorporate into my lessons, encouraging learners to record themselves out of class, play the recordings back, and identify action points to work on in their pronunciation and speaking skills. I use IM apps for a range of collaborative tasks (more information to come in future posts!), and Padlet as a visual live collaborative tool both inside and beyond the classroom.

So, that said, the literature has been telling us for years what good pedagogical practice is; we just need to stick with that and map it onto current language learning contexts.

Ur, P. (1996). A Course in Language Teaching: Practice and Theory. Cambridge: Cambridge University Press

What does it mean to be human?

With the surge of interest and investment in AI, the question at the forefront of my mind is ‘What does it mean to be human?’ The apparent obsession of AI is to replicate human intelligence on all levels, but the problem I have with this is that I don’t think we fully understand what it means to be human. I think it is impossible to reproduce human ‘intelligence’ without first appreciating the complexities of the human brain. Hawkins (2004) argues that the primary reason we have been unable to build a machine that thinks exactly like a human is our lack of knowledge about the complex functioning of cerebral activity, and about how the human brain is able to process information without conscious effort.

This is the reason why the work of Hiroshi Ishiguro, the creator of both Erica and Geminoid, interests me so much. Ishiguro’s motivation for creating android robots is to better understand humans, in order to build better robots, which can in turn help humans. I met Erica in 2016, and the experience made me realise that we are perhaps pursuing goals of human replication that are unnecessary. Besides, which model of human should be used as the blueprint for androids and humanoid robots? Don’t get me wrong, I am fascinated by Ishiguro’s creation of Erica.

My current research focuses on speech dialogue systems and human-computer interaction (HCI) for language learning, which I intend to develop so it can be mapped onto an anthropomorphic robot for the same purposes. Research demonstrates that one of the specific reasons non-human interactive agents are successful in language learning is that they disinhibit learners and therefore promote interaction, especially amongst those with special educational needs.

The attraction of humanoid robots and androids, for me, is therefore not necessarily how representative they are of humans, but rather the affordances of their non-human aspects, such as being non-judgemental. In my opinion, we need more Ericas in the world.

What does 2020 mean for Ed Tech?

A new year AND a new decade, so what does 2020 mean for Ed Tech? Twenty years ago, we were getting to grips with communicating via email. Ten years ago, iPhones had already been around for three years, but their price bracket pitched them out of reach of the majority of mobile phone users. Here we are in 2020, with driverless car technology being widely tried and tested, and with China witnessing the birth of the third gene-edited baby. So where does this leave language learning and tech, and what is in store for the near future?

Where we are now

Apps, apps, apps… With the gaming community reaching a population of 2.5 billion globally in 2019 (statista.com), it is no surprise that apps are an attractive option for learning English. The default options tend to be Babbel, Duolingo and Memrise, but there is a plethora of options to choose from. Some fun apps I have recently experimented with are ESLA for pronunciation, TALK for speaking and listening, and EF Hello.

In the classroom, however, the digital landscape can be quite different. Low-resource contexts and reluctance from teaching professionals to incorporate tech into the learning environment can mean that opportunities for learners to connect with others and seek information are not available. Even in some of the most highly tech-penetrated societies, 19th-century rote-based learning and high-stakes testing approaches are favoured.

Predictions for the future

Does educational technology have all the answers we need to improve the language output of ESL learners globally? No, probably not. However, society has been so dramatically altered by the impact of technology in almost every facet of our lives that it would be rather odd, I feel, to reject it in teaching and learning environments.

In higher education, the main concerns are data privacy and ethics with exposure to digital areas such as the cloud. Yet chatbots are starting to be integrated to support students with university-related FAQs. Both the Differ and Hubert chatbots are being researched for their potential to improve qualitative student interaction and feedback.

Kat’s predictions

In all honesty, I think it is a tough call to gauge where we will be with Ed Tech over the next ten years. Data privacy is a considerable issue when incorporating elements of AI into learning fields; it is less of an issue with VR and AR, which underpins their proliferation in teaching and learning. I feel that VR and AR will continue to mature and provide a more full-bodied learning experience when using VLEs, though this may be a more complex paradigm than some are able or prepared to employ.

I still firmly believe that reflective practice, with learners using recorded audio or video of their own language production, is a solid foundation. So while this doesn’t mean the introduction of a big pioneering tech tool, it highlights the relevance of recording as a reliable learning tool. In the same way, I continue to use WhatsApp, WeChat and Line to share learning content with learners and to encourage them to interact with each other and with other learning communities.

Hello, can I help you?

A Facebook Messenger bot can be set up and working within an hour. It is no wonder, then, that text-to-text chatbots have replaced automated customer service answering machines in many sectors of industry.

The chatbot can be programmed with a training corpus of customer service complaints, in the form of recognisable input data, together with possible solution phrases. The algorithms then use keyword identification to identify the issue and match it with a suitable response. Given the many experiences of miscommunication with lackadaisical customer service telephone operators, I feel this is a perfect use of chatbot technology.
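To make the keyword-matching idea concrete, here is a minimal sketch in Python. The trigger keywords, responses and fallback message are all invented for illustration; a production bot would use a proper NLU pipeline rather than plain substring checks.

```python
# Minimal keyword-matching customer-service bot (illustrative only).
# Each rule pairs trigger keywords with a canned solution phrase.
RULES = [
    ({"refund", "money back"}, "I can help with refunds. Could you give me your order number?"),
    ({"delivery", "shipping", "late"}, "Sorry your order is delayed. Let me check its status."),
    ({"password", "login"}, "You can reset your password via the 'Forgot password' link."),
]
FALLBACK = "Sorry, I didn't catch that. Could you rephrase your question?"

def reply(user_input: str) -> str:
    """Return the first canned response whose keywords appear in the input."""
    text = user_input.lower()
    for keywords, response in RULES:
        if any(keyword in text for keyword in keywords):
            return response
    return FALLBACK

print(reply("My delivery is late!"))     # matches the delivery rule
print(reply("Do you sell gift cards?"))  # no match, so the fallback fires
```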

I have been experimenting with building a bespoke chatbot for my own research purposes, so I can confirm that the practice is considerably more complex than the theory of providing an interactional partner for learners of English as a second language. The models and frameworks used for customer-service chatbots proved impossible to adapt to my case. I tried the Dialogflow framework provided by Google, which, surprisingly, produced rather disappointing results.
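For anyone curious what trying Dialogflow involves in practice, the sketch below shows the basic shape of sending one text query to a Dialogflow agent using Google’s Python client, as I understand that API; the project and session IDs are placeholders, and the agent itself (its intents and training phrases) must already have been configured in the Dialogflow console.

```python
# Sketch: sending one user utterance to a Dialogflow agent (v2 API).
# Requires: pip install google-cloud-dialogflow, plus GCP credentials.
from google.cloud import dialogflow

def detect_intent(project_id: str, session_id: str, text: str,
                  language_code: str = "en") -> str:
    """Return the agent's fulfilment text for a single user utterance."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

# Hypothetical usage; "my-project" and "learner-42" are placeholders.
print(detect_intent("my-project", "learner-42", "Can we practise ordering food?"))
```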

I feel the fear of a digital world where machines take over from humans is somewhat premature, as there is still a lot of development needed in order to iron out the creases of chatbot technology.

Disinhibition and Human Computer Interaction

For some reason, when we are learning a foreign language, we feel intimidated about speaking it. We fear we will be laughed at, won’t say the right thing or won’t be understood, or we simply lack the confidence to put a voice to the words floating around in our brains forming utterances.

Whatever the reason, it is clear that inhibition to speak is a common problem among language learners. So I am investigating strategies to disinhibit learners and to give them confidence in oral interaction by having them practise speaking with a computer, so that they then have the confidence to interact with humans.

Human-computer interaction (HCI) as a way to practise English conversation offers several advantages over practising with a human. The main motivations are:

  • inhibition is low, because learners know they are interacting with a machine that will not judge their performance unless asked to do so;
  • the environment is low-anxiety, which promotes confidence, because there is no human waiting for the next turn;
  • interaction can continue for as long as the learner wants to practise;
  • computers do not lose their patience, or tire of conversing or of repeating the same conversation pattern.

I therefore strongly believe that HCI is a promising solution for learner disinhibition. Updates on experiments carried out with chatbots to further this research will follow…

Man or machine?

Man or machine? That is the question! There is an endless flow of information being pushed onto our screens about the danger of robots and machines taking over the world. Martin Ford’s Rise of the Robots (2015) presents a blatantly bleak view of automation and the ‘threat of a jobless future’ due to the advances of technology.

When it comes to automated customer service agents, I am sure we all have long-winded stories of negative experiences. On the flip side, however, I have also had my share of less-than-favourable customer service experiences with humans. While there is evidence of the frustrations of not being able to interact with a human to resolve customer service issues, there is considerably more evidence supporting the view that the human was unable to resolve the query and that a chatbot could have dealt with the matter more than adequately, in a considerably shorter time frame (Xu et al., 2017). Chatbots are also consistently patient and polite; they remain unruffled by rude customers, high traffic, or repeated requests for the same information, and they never tire (McNeal & Newyear, 2013).

I think there is a time and a place for everything. But given the lack of patience and the expectation of immediacy that humans bring to the service sector nowadays, I think chatbots are a good option for quick enquiries and for resolving routine ‘problems’.


From RALL to chatbots

I began the year with a strong desire to continue my research into RALL (robot-assisted language learning), and while that is still the case, my research has led me to investigate the benefits and pedagogical potential of using chatbot teachers to assist in language learning.

The research examines the use of a speech-to-speech interface as the language-learning tool, designed with the specific intention of promoting oral interaction in English. The computer (chatbot) will assume the role of conversational partner, allowing the learner to practise conversing in English. A retrieval-based model will be used to select appropriate output from predefined responses. This model will then be mapped onto a gamification framework to ensure an interesting and engaging interactional experience.
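As a rough illustration of what a retrieval-based model does, the sketch below scores each predefined response against the learner’s utterance using simple word overlap and returns the best match. This is a toy stand-in with invented candidate responses; a real system would use a far larger repository and a more sophisticated similarity measure.

```python
# Toy retrieval-based response selection: pick the predefined response
# whose trigger pattern shares the most words with the learner's input.
# The patterns and responses here are invented for illustration.
RESPONSES = [
    ("hello hi hey", "Hello! What would you like to talk about today?"),
    ("weekend plans holiday", "That sounds fun! What are your plans for the weekend?"),
    ("food eat restaurant", "I love talking about food. What did you eat today?"),
]

def tokenise(text: str) -> set[str]:
    """Lower-case an utterance and split it into a set of words."""
    return set(text.lower().split())

def select_response(user_input: str) -> str:
    """Return the response whose pattern best overlaps the input words."""
    words = tokenise(user_input)
    best_score, best_response = 0, "Interesting! Tell me more."
    for pattern, response in RESPONSES:
        score = len(words & tokenise(pattern))
        if score > best_score:
            best_score, best_response = score, response
    return best_response

print(select_response("hi there"))                       # greeting response
print(select_response("i want to eat at a restaurant"))  # food response
```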

Speech is one of the most powerful forms of communication between humans; hence, it is my intention to add to current research in the human-computer interaction research field to improve speech interaction between learners and the conversational agent (the chatbot) in order to simulate human-human speech interaction.

So just how should you speak to a chatbot?

So just how should you speak to a chatbot? Cast your mind back to Tay, the chatbot built by Microsoft. She was shut down on the grounds of inappropriateness because she was posting offensive and unsuitable content on her Twitter account. Hardly surprising, really, considering she was built using a corpus of Twitter posts and dialogues; a perfect example of the hunter becoming the hunted.

The apparent ubiquity of chatbots in the customer service sector is proving to be somewhat beneficial to the companies using them, but less convenient for users. The majority of conversational agents are built using a retrieval-based model, which selects the most appropriate reply from a repository of predefined responses, based on the input text and context. The output could be limited to as few as three utterances per response. Let’s look at an opening turn to see how this works (a minimal sketch of the underlying loop follows the example):

‘Hello, what can I do for you today?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘I missed that, say that again?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘Sorry, can you say that again?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘Sorry, I can’t help.’
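The ‘three strikes and you’re out’ pattern above can be captured in a few lines. This is a hypothetical sketch of such a reprompt loop, not the code of any particular product; understood() stands in for whatever intent-recognition check a real system performs.

```python
# Sketch of a three-attempt reprompt loop, as in the exchange above.
PROMPTS = [
    "Hello, what can I do for you today?",
    "I missed that, say that again?",
    "Sorry, can you say that again?",
]

def understood(user_input: str) -> bool:
    """Placeholder for a real intent-recognition check."""
    return bool(user_input.strip())  # here: any non-empty reply counts

def converse() -> None:
    """Prompt up to three times, then give up, mirroring the dialogue above."""
    for prompt in PROMPTS:
        user_input = input(prompt + "\n> ")
        if understood(user_input):
            print("(Input recognised; hand off to the dialogue manager.)")
            return
    print("Sorry, I can't help.")

converse()
```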

This leads me to believe that we as users need to learn how to speak to an automated conversational agent before determining what we want from it. If we don’t respond, or respond using undecipherable discourse, then we are expecting a machine to manage a task that humans would also have problems interpreting. While considerable research and development is being carried out in the field of intelligent conversational agents, we are still a long way from their becoming an integral part of mainstream customer service interfaces, able to interpret our utterances and commands to the best of our expectations.