ChatGPT in an HE context

With the current popularity of ChatGPT and the looming development of Artificial General Intelligence (AGI), how is this affecting the English language learner?

The advent of ChatGPT has had a significant impact on language learners worldwide. There are also significant consequences for teachers in higher education (HE) contexts who evaluate learners on the basis that the work they submit is original.

ChatGPT is trained on extensive data trawled from the internet. Note the use of the present tense: the underlying models are periodically retrained and fine-tuned on new data, including text that internet users openly share on the web, although the model does not learn continuously in real time. Even so, it is tempting to think of ChatGPT as a living and growing organism that is constantly gaining information and becoming stronger. It all sounds rather like science fiction, and if we think back to many of the sci-fi television series of the 80s, we would be right in thinking so.

ChatGPT will be a year old next month, and it is already leaving its AI footprint on diverse industries, including pharmaceuticals, wealth management, and bioweaponry. My primary concern is its use in English language learning, most notably academic English in HE contexts, because that is one of my areas of expertise.

The problems all begin with academic integrity, or rather with its breach: plagiarism. HE learners are clearly guided towards using one of the acknowledged citation and referencing systems, for example the Harvard system, to reference all the academic source texts they refer to in their academic writing. This means they must both cite ideas AND acknowledge their sources; in the Harvard system, an in-text citation such as (Smith, 2020, p. 15) is paired with a full entry in the reference list. Done properly, this keeps their work original by clearly separating borrowed ideas from their own, and it ensures that the criticality of the ideas presented aligns with the citations, producing a sound piece of academic writing. If, however, ChatGPT is used, the citations may be sound and the writing flawless, but the ideas tend to be hollow. This is where students are being caught out.

So, my message here is that while ChatGPT may have many strengths, supporting students with academic writing is, in my honest opinion, not one of them.

What can we learn from the ELIZA effect?

Weizenbaum’s experiments with ELIZA showed that when we know we are not being judged, we are happy to talk about almost anything, and even to divulge personal information. The ELIZA effect, as it came to be known, describes our tendency as humans to assume that the behaviour of computers is analogous to that of humans. Created as a mock psychotherapist, ELIZA provided a disinhibited, low-anxiety environment for patients to talk about their problems, with patients assuming that the computer programme genuinely understood them rather than responding, as it actually did, through simple pattern matching.
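
To make ‘pattern matching’ concrete, here is a minimal sketch in Python of the kind of keyword-and-template reflection ELIZA performed. The rules below are invented for illustration and are far simpler than Weizenbaum’s original DOCTOR script, which also swapped pronouns (my → your) before echoing the user’s words back:

```python
import re

# Invented ELIZA-style reflection rules: each pairs a regex with a
# response template that echoes the user's own words back.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."  # a neutral prompt keeps the patient talking

def respond(utterance: str) -> str:
    """Return the first matching reflection, or the neutral fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I feel anxious about the exam"))
# -> Why do you feel anxious about the exam?
```

No understanding is involved anywhere in this loop, yet the reflected questions are enough to keep a conversation going, which is precisely the effect Weizenbaum observed.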

The ELIZA model has been repeatedly emulated in chatbot apps that provide virtual friendship and emotional support, such as Woebot, Replika, and Wysa. These therapy bots aim to help people combat depression and loneliness, and to feel they have ‘someone’ to turn to. This suggests that our willingness to communicate (WTC) is enhanced when the interlocutor we are conversing with is unable to judge us.

This leads me to the main argument of this post. Humans appear to feel more comfortable communicating with chatbots, which to date do not possess the AI capacity to fully understand and interpret human emotions; the fear of being judged or losing face is therefore drastically reduced. In the language learning classroom, we should consequently try to create a relaxed environment that facilitates learning and promotes WTC, so that learners feel more comfortable interacting orally and more confident expressing their ideas. So while machines endeavour to hone their AI skills to perfectly emulate human behaviour, maybe we as teaching practitioners should try to emulate machine behaviour: encouraging a non-judgemental environment in the language learning classroom that gives learners the confidence to speak and interact, especially online, where learners appear more reluctant to speak up.

Our expectations of digital language learning partners

My current investigation into speech interfaces as language-learning partners is revealing that one of the main problems companies face when developing these tools is that the tools do not behave as their designers would like. The products often lack the layers of programming necessary to fully simulate human-human interaction.

This does not come as a surprise to me, because I firmly believe that until we fully understand the human brain, we cannot build a machine that replicates it. We are still quite far from that stage, so I find efforts to build an AI tool that can fully replicate the human brain and fully simulate human-human interaction akin to herding cats. To some extent it can be done, but not quite as we would like or expect.

From my own experience of building a conversational bot, I appreciate the intricacy of the programming required to build a tool that behaves as we would like. It is time-consuming, arduous, and extremely challenging, to say the least. This is why, I presume, the English language learning landscape is not flooded with such tools.
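
To give a flavour of those layers, here is a toy Python sketch of just two of them: a crude intent-matching layer and a fallback layer. The intents and canned responses are invented for illustration; a tool that behaves as we would like needs many more layers (context tracking, error repair, turn-taking) on top of this:

```python
from difflib import get_close_matches
from typing import Optional

# Invented intents and canned responses for a language-practice bot.
INTENTS = {
    "greeting": ["hello", "hi there", "good morning"],
    "weather_smalltalk": ["how is the weather", "is it raining"],
    "farewell": ["bye", "goodbye", "see you later"],
}
RESPONSES = {
    "greeting": "Hello! What would you like to practise today?",
    "weather_smalltalk": "I never get outside, but I hear it's lovely.",
    "farewell": "Goodbye! Come back whenever you want to talk.",
}

def classify(utterance: str) -> Optional[str]:
    """Intent layer: fuzzy-match the utterance against example phrases."""
    text = utterance.lower().strip("?!. ")
    for intent, examples in INTENTS.items():
        if get_close_matches(text, examples, n=1, cutoff=0.6):
            return intent
    return None

def reply(utterance: str) -> str:
    """Response layer, with a fallback when no intent matches."""
    intent = classify(utterance)
    # Real systems need far richer fallbacks to avoid conversational dead ends.
    return RESPONSES.get(intent, "Sorry, could you say that another way?")

print(reply("Hi there!"))           # matches the greeting intent
print(reply("Do you like Crocs?"))  # falls through to the fallback
```

Even this toy version shows why the work is arduous: every utterance the designer has not anticipated lands in the fallback, and each new conversational ability means another layer of rules or training.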

I am currently examining learner reactions to spoken digital interface interaction, trying to understand how learners respond and what specifically makes them respond in different ways. My hope is that a better understanding of user discourse will provide insights into the characteristics an effective chat interface requires.

Enthusiasm to learn is emotionally driven

Enthusiasm can be displayed in different ways. It can also be present in a learner who shows no visible signs of being enthusiastic: someone who is simply enjoying the learning and keen to learn.

My current research investigation focuses on how learners demonstrate their enthusiasm when interacting with a speech recognition interface, through both linguistic and non-linguistic features. The dataset I am using clearly demonstrates that the psychological state of learners affects their enthusiasm, and therefore their language output and capacity to engage in learning, more than any other factor. While this came as a surprise, it aligns with motivation theory, which holds that positive emotional, and hence psychological, states favour learning, while negative emotional states (anxiety, stress, depression) can adversely affect it.

I’ve spent a lot of time with humanoid robots, speech recognition interfaces, and autonomous agents, and despite their degree of humanness, there is something decidedly safe for me about interacting with a non-conscious being. Maybe that is why Weizenbaum’s research was so successful! The non-judgemental attributes of a machine make the user feel comfortable to interact, and therefore they get more out of the learning experience. This is something I am still investigating, but Buddy, the robot in the image above, aims to understand the mood of the user and then respond accordingly. So empathy is now going beyond the human…

Less is more: the argument in defence of HCI for speaking skills

Less is more, or is it?

I was taught from a young age that the wise man is the one who observes and says very little. For foreign language learners, however, I think it is quite the opposite: the more they try to speak and express themselves orally, the more they can practise and the more they learn about oral interaction.

My current research is investigating the oral output prompted by interacting with an autonomous agent, and, surprisingly, I am finding that the output does not vary much from that of human-human interaction. There are days when participants are motivated and enthusiastic to interact, and others when they provide monosyllabic answers.

Where I’m going with this is that investigating learners interacting with a digital tool has shown me that in the classroom I often expect learners to perform constantly, and feel frustrated when they don’t willingly provide output on request. I am learning that deliberate practice is perhaps not an effective method of language learning, and that a more laissez-faire approach may be more appropriate.

So, on the one hand we need learners to speak as often as possible, but on the other hand we can’t expect them to always be willing to speak. For me this highlights the value of human-computer interaction (HCI) for language learning and demonstrates that we should lean more heavily on autonomous agents for speaking practice. They provide limitless opportunities, never tire, and can be used when learners want to speak, not when they have to.

AI vs EQ

According to The Oxford Dictionary, intelligence is the ability to learn, understand and think in a logical way about things, and the ability to do this well. Emotional intelligence, otherwise known as emotional quotient (EQ), is the ability to manage and understand emotions. I draw a parallel between AI and EQ because I strongly believe there are high expectations of the level of intelligence that machines, robots and autonomous agents are required to have, yet as humans we have seemingly low expectations of each other’s EQ. Yes, I am comparing EQ to AI, but if AI is a simulation of human intelligence in machines, then that simulation must also include emotional intelligence.

The point I am making relates to my current research, which uses an AI tool to investigate its capacity for interactional conversation with humans. I have tried to design such a tool myself, and the outcome was a chat interface with a limited capacity to understand oral input, whose output was also slow and finite. While programming a more effective tool is clearly possible, as the many virtual assistants now available, such as Siri and Alexa, demonstrate, I question whether our expectations of their capabilities are unreasonable. If humans can lack EQ, and are often unable to empathise with others or communicate effectively, why do we expect intelligent autonomous systems to be able to do this?
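
For context, here is a rough Python sketch of the kind of pipeline a home-grown spoken chat interface sits on, written with the open-source SpeechRecognition package. This is illustrative only, not my original tool; the point is that every stage adds delay, which is partly why such tools feel slow and finite:

```python
# pip install SpeechRecognition (plus PyAudio for microphone access)
import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_once() -> str:
    """Capture one utterance from the microphone and return a transcript."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibration pause
        audio = recognizer.listen(source)            # recording time
    try:
        # Network round-trip to a hosted recogniser: another source of delay.
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        # The 'limited capacity to understand oral input' in practice.
        return ""
```

Before any ‘understanding’ happens at all, the learner has already waited through calibration, recording and a network round-trip, so it is little wonder that expectations of human-like responsiveness are disappointed.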

We are very far removed from fully understanding the human brain, and until we do, I think we need to be realistic with the potential capabilities of AI.

Don’t make me laugh!

The role of humour in conversation

We know it feels good to laugh, and it has been argued scientifically that humour can decrease emotional distress and anxiety in stressful situations (Nijholt, 2003; Szabo, 2003). In addition to the feel-good factor, several studies argue that another function of humour is to create solidarity among participants in a conversation and to promote a sense of trust and interpersonal attraction (Attardo, 2017; Glenn & Holt, 2007; Nijholt, 2003; Tsakona & Chovanec, 2018).

Chatbots and humour

Previous investigations into the role of humour in human–computer interaction with conversational agents (Adiwardana et al., 2020; Nijholt, 2003) emphasise that users consider an agent more human-like when it uses humour, which makes them feel more comfortable and positive about the interaction. It has been observed that humour can make natural language interfaces more appealing and ‘friendlier’ to the user (Nijholt, 2003), which in the case of a foreign language learner could reduce anxiety and improve language output.

It has been argued that “chatbots are designed as communicators” (Fryer et al., 2020, p. 8), with previous studies into the use of chatbots (Adiwardana et al., 2020; Fryer et al., 2020; Westerman et al., 2019) establishing that the common priority of chatbot designers is for communication between humans and conversational agents to simulate human-human interaction. It is argued that the deficit in the conversational ability of interactional tools is compensated for by human-like qualities such as humour (Clark et al., 2019). This tendency, combined with the surge in AI research, has had a profound effect on the design of chatbots in recent years, as software architects strive to create agents able to emulate emotion and humour in their interactions.
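
As a toy illustration of how a designer might bolt scripted humour onto a chatbot of this kind, here is a hypothetical Python sketch; the follow-ups and trigger logic are invented, and simply mirror the kind of exchange recounted below:

```python
import random
from typing import Optional

# Invented humorous follow-ups keyed to a user's short answer.
HUMOROUS_FOLLOWUPS = {
    "yes": ["Even if they wore Crocs?"],
    "no": ["Not even for free pizza?"],
}

def maybe_joke(user_answer: str, probability: float = 0.3) -> Optional[str]:
    """Occasionally return a scripted humorous follow-up, else None."""
    options = HUMOROUS_FOLLOWUPS.get(user_answer.lower().strip("!. "))
    if options and random.random() < probability:
        return random.choice(options)  # keep the humour occasional
    return None
```

The probability parameter reflects a design choice the literature above hints at: humour works when it is an occasional, well-timed interjection rather than a constant stream of jokes.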

Even if they wore Crocs?

I now understand why, when I was chatting with a bot last week, it asked me the following question out of the blue and then followed up on my reply:

Chatbot: Could you date someone if they had a terrible sense of fashion?

Kat: Yes

Chatbot: Even if they wore Crocs?

So next time you are chatting with a social chatbot, see if it tries to inject humour and jokes, and think about how that makes you feel.

References:

Adiwardana, D., Luong, M.-T., So, D. R., Hall, J., Fiedel, N., Thoppilan, R., Yang, Z., Kulshreshtha, A., Nemade, G., Lu, Y., & Le, Q. V. (2020). Towards a Human-like Open-Domain Chatbot. Google Research, Brain Team. arXiv:2001.09977v3

Attardo, S. (2017). Humor in Language. Oxford Research Encyclopedia of Linguistics. https://doi.org/10.1093/acrefore/9780199384655.013.342

Clark, L., Pantidi, N., Cooney, O., Doyle, P., Garaialde, D., Edwards, J., Spillane, B., Gilmartin, E., Murad, C., Munteanu, C., Wade, V., & Cowan, B. R. (2019). What makes a good conversation? Challenges in designing truly conversational agents. Conference on Human Factors in Computing Systems – Proceedings, pp. 1–12.

Fryer, L., Coniam, D., Carpenter, R., & Lapusneanu, D. (2020). Bots for language learning now: Current and future directions. Language Learning & Technology, 24(2), pp. 8–22.

Nijholt, A. (2003). Humor and Embodied Conversational Agents. http://doc.utwente.nl/41392/

Szabo, A. (2003). The acute effect of humor and exercise on mood and anxiety. Journal of Leisure Research, 35(2), pp. 152–162.

Tsakona, V. & Chovanec, J. (2018). Investigating the dynamics of humor: Towards a theory of interactional humor. In Tsakona, V. & Chovanec, J. (Eds.), The Dynamics of Interactional Humor: Creating and Negotiating Humor in Everyday Encounters. John Benjamins, pp. 1–26.

Teaching & learning in the Covid-19 era

There is a reason why The Open University is still going strong some 51 years after it was founded in 1969: it was created for the specific purpose of distance learning, and it based its practice on sound pedagogy designed to meet the learning objectives it set out.

Distance learning and online teaching and learning are nothing new in today’s technology rich society, so why is it proving such a challenge to find effective learning solutions in a world engulfed by physical confinement within the four walls of our homes? The answer is quite simply that there is a major difference between online learning and emergency remote teaching (ERT).

Online courses are specifically designed to be delivered online, in a self-paced learning mode, to learners who have not met, and maybe never will meet, the tutor delivering the course. Expectations can only come from the learners themselves: the more they are prepared to put into a course, the more they will get out of it. They may be passive learners who ‘lurk’ in the background, scrolling through the course forums and abstaining from any synchronous interaction, or active learners who contribute to threads in the forum and are keen to participate in live sessions held with the tutor and others on the course. The motivation for choosing a distance course is often geographical location or work and/or family commitments, so online learning generally offers flexibility, with very few commitments of time or place.

ERT refers to courses that were developed for face-to-face instruction but, through force majeure, have been transferred to online delivery. The intention is for the same content to be covered and completed in the same time frame as it would with live delivery, and for the learning objectives to be met regardless of the change in medium. What I am hearing from colleagues, and experiencing myself, is that this is most definitely not the case, and I think it needs to be taken into consideration when re-designing courses for online delivery.

Often, less is more, and in the Covid-19 emergency remote teaching and learning contexts that many of us find ourselves in, the role of empathy and compassion for our learners is increasingly important. We are all suffering imposter syndrome, anxiety, and social and family pressures that sap our motivation, strength, self-worth and productivity. So I think we need to lower our expectations of our learners and offer more support. The 3 things that this global pandemic has taught me with respect to teaching and learning online are:

1: Increase the task completion time.

2: Don’t be disappointed if what was on the agenda is not completed.

3: Lower your expectations.

4: Take lots of breaks and reward yourself often.

Okay, I know I said 3, but number 4 is really important. While it pains me to admit this, we are not robots (yet) so we need to factor in the human side to this rather odd situation we are all living in.

Is EdTech trying to reinvent the wheel?

I attended the Digital Learning Colloquium at Cambridge last week, and it was a fascinating insight into the future landscape of EdTech painted by a broad spectrum of attendees from different backgrounds: product development, research, academia, consultants, product design, and the odd ELT teacher and trainer.

While there were clear threads of discussion about which of the technologies we use today, AR and VR to name a couple, will be normalised in ten years’ time, one clear question springs to my mind: is EdTech trying to reinvent the wheel?

My opinion regarding the use of EdTech for teaching and learning is the same as for any activity a teacher or learner engages in: it needs sound pedagogical reasoning. For me, it is not so much what is being done to learn something as the rationale for how it reaches the learning objective. If an activity that incorporates an AR app really does improve learning outcomes, or facilitates reaching the pedagogical goal of the lesson, then I’m all for it. I do, however, strongly believe that a lot of products and tools are simply trying to tap into the multi-billion-dollar industry that EdTech has become.

Penny Ur (1996) claimed that there is a difference between a teacher with twenty years’ experience and one with one year’s experience repeated twenty times. I wholeheartedly agree, because I believe teaching professionals need to learn, adapt and grow along with their experience, teaching context, and learner needs. So, yes, EdTech could well be part of this growth and development as a teacher, but just because a tool looks good doesn’t mean it actually is. The tool needs to achieve the learning goal that has been set, whether by motivating learners or improving interaction; but I reiterate, the main motivation for using any tool, digital or not, should be pedagogical, and the tool must be exploited effectively.

The talk I gave looked at three simple tools I use in the classroom to promote interaction and provide learning solutions to some of the problems I encounter with learners in specific contexts: Padlet; IM apps (WhatsApp and WeChat); and dictaphone apps on smartphones. Gone are the days of recording ourselves on a TDK C90 cassette to hear how we sound when we speak a foreign language, but the practice remains highly effective. The modern-day version is a dictaphone app, which I regularly incorporate into my lessons, encouraging learners to record themselves out of class, play it back, and identify action points to work on in their pronunciation and speaking skills. I use IM apps for a range of collaborative tasks (more information to come in future posts!), and Padlet as a live visual collaborative tool both inside and beyond the classroom.

So, that said, the literature has been telling us for years what good pedagogical practice is; we just need to stick with that and map it onto current language learning contexts.

Ur, P. (1996). A course in Language Teaching: Practice and Theory. Cambridge University Press, Cambridge.

What does it mean to be human?

With the surge of interest and investment in AI, the question at the forefront of my mind is ‘What does it mean to be human?’ The apparent obsession of AI is to replicate human intelligence on all levels, but the problem I have with this is that I don’t think we fully understand what it means to be human. I think it is impossible to reproduce human ‘intelligence’ without first appreciating the complexities of the human brain. Hawkins (2004) argues that the primary reason we have been unable to build a machine that thinks exactly like a human is our lack of knowledge about the complex functioning of cerebral activity, and about how the human brain is able to process information without conscious effort.

This is why the work of Hiroshi Ishiguro, the creator of both Erica and Geminoid, interests me so much. Ishiguro’s motivation for creating android robots is to better understand humans, in order to build better robots, which can in turn help humans. I met Erica in 2016, and the experience made me realise that we are perhaps pursuing goals of human replication that are unnecessary. Besides, which model of human should be used as the blueprint for androids and humanoid robots? Don’t get me wrong: I am fascinated by Ishiguro’s creation of Erica.

My current research focuses on speech dialogue systems and human-computer interaction (HCI) for language learning, which I intend to develop so it can be mapped onto an anthropomorphic robot for the same purposes. Research demonstrates that one specific reason non-human interactive agents are successful in language learning is that they disinhibit learners and therefore promote interaction, especially among those with special educational needs.

The attraction of humanoid robots and androids for me, therefore, is not necessarily how representative they are of humans, but rather the affordances of the non-human aspects they have, such as being non-judgemental. In my opinion, we need more Ericas in the world.