Tag Archives: Eliza

What can we learn from the ELIZA effect?

Weizenbaum’s experiments with ELIZA suggested that when we know we aren’t being judged, we are happy to talk about almost anything and even divulge personal information. The ELIZA effect, as it became known, describes our tendency as humans to presume that the behaviour of computers is analogous to that of humans. Created as a psychotherapy chatbot, ELIZA provided a disinhibited, low-anxiety environment for patients to talk about their problems, with patients assuming that the computer programme was responding with genuine understanding rather than through the simple pattern matching it actually used.
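To make that “pattern matching” concrete, here is a minimal, illustrative sketch of the kind of keyword-and-template matching ELIZA relied on. This is not Weizenbaum’s actual script (the real programme was far more elaborate, and also swapped pronouns such as “my” → “your”); the rules and wording below are invented for illustration.

```python
import re

# Illustrative ELIZA-style rules (invented for this sketch, not from
# Weizenbaum's script): each rule pairs a regex with a response template
# that reuses part of the user's own utterance.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# When no rule matches, fall back to a content-free prompt that keeps
# the patient talking -- the trick behind much of ELIZA's plausibility.
FALLBACK = "Please, go on."

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect the matched fragment back, minus trailing punctuation.
            return template.format(match.group(1).strip(" .!?"))
    return FALLBACK

print(respond("I feel anxious about my exams."))
# → Why do you feel anxious about my exams?
```

The programme has no model of what “anxious” means; it simply reflects the user’s words back, yet the first rule that fires can feel uncannily like understanding.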

The ELIZA model has been repeatedly emulated in chatbot apps that provide virtual friendships and emotional support, such as Woebot, Replika, and Wysa. These therapy bots aim to help people combat depression and loneliness, and to feel they have ‘someone’ to turn to. This suggests that our willingness to communicate (WTC) is enhanced when the interlocutor we are conversing with is unable to judge us.

This leads me to the main argument of this post. It would appear that humans feel more comfortable communicating with chatbots, which to date do not possess the AI capacities to fully understand and interpret human emotions; the fear of being judged or losing face is therefore drastically reduced. In the language learning classroom, we should try to create a relaxed environment that facilitates learning and promotes WTC, so that learners feel more comfortable interacting orally and more confident expressing their ideas. So while machines endeavour to hone their AI skills to perfectly emulate human behaviour, perhaps we as teaching practitioners should try to emulate machine behaviour instead: fostering a non-judgemental classroom environment that gives learners the confidence to speak and interact, especially in online environments, where learners appear more reluctant to speak up.

Turn taking and chatbots

Turn taking is a natural part of conversation that we subconsciously engage in so that the discourse flows. Here is an example:

A: “Good morning”

B: “Morning. How are you? Good weekend?”

A: “Yes thanks, and you? How was Brighton?”

For the Cambridge main suite speaking exams, candidates are assessed on their turn-taking ability under the criterion of ‘Interactive Communication’. In other words, this means the candidates’ ability to:

  • Interact with the other candidate easily and effectively.
  • Listen to the other candidate and answer in a way that makes sense.
  • Start a discussion and keep it going with their partner/s.
  • Think of new ideas to add to the discussion.

Along with the onslaught of technological advances came advances in automated responses from portable digital devices. These conversational agents, or dialogue systems, are capable of single interactions or of up to six task-oriented turns. An example of such a dialogue agent would be Siri, and an example of a task-oriented interaction would be: “Siri, call Dad”.

Chatbots are not a ‘new’ invention per se. ELIZA, created between 1964 and 1966 at MIT, was a natural language processing computer programme that demonstrated the same characteristics as today’s chatbots, but on a less sophisticated scale and with less complex interaction. The aim of chatbot builders is to create natural language processing programmes that replicate human-human interaction by enabling more turns and therefore extended conversations.

The interesting challenge then becomes how to use each turn as a springboard for the next, ensuring that each one prompts a pre-programmed response, so that the user does not receive a generic message like “I’m sorry, but I’m not sure what you mean by that” when expressing a specific request or a turn the system does not recognise. More about chatbots soon!
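A hedged sketch of how such turn routing might work: the bot matches each user turn against a set of pre-programmed intents and only falls back to the generic message when nothing matches. The intent names, keywords, and responses below are invented for illustration, not taken from any particular chatbot platform.

```python
import re

# Hypothetical intents and keyword triggers (invented for this sketch).
INTENTS = {
    "greeting": ["hello", "hi", "good morning"],
    "farewell": ["bye", "goodbye", "see you"],
}

# One pre-programmed response per intent; the greeting echoes the
# turn-taking example from earlier in the post.
RESPONSES = {
    "greeting": "Morning. How are you? Good weekend?",
    "farewell": "Goodbye, speak soon!",
}

# The generic fallback the designer wants the user to see as rarely
# as possible.
FALLBACK = "I'm sorry, but I'm not sure what you mean by that."

def classify(utterance):
    """Return the first intent whose keyword appears as a whole
    word/phrase in the utterance, or None if no intent matches."""
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        for kw in keywords:
            if re.search(r"\b" + re.escape(kw) + r"\b", text):
                return intent
    return None

def reply(utterance):
    intent = classify(utterance)
    return RESPONSES.get(intent, FALLBACK)

print(reply("Good morning!"))
# → Morning. How are you? Good weekend?
```

Every unmatched turn lands on `FALLBACK`, which is exactly the dead end the post describes: the builder’s job is to anticipate enough intents, and phrase responses that invite a next turn, so that the conversation keeps its springboard quality.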