
What does 2020 mean for Ed Tech?

A new year AND a new decade, so what does 2020 mean for Ed Tech? Twenty years ago we were getting to grips with communicating via email. Ten years ago iPhones had already been around for three years, but their price bracket put them out of reach of the majority of mobile phone users. Here we are in 2020 with driverless car technology being widely tried and tested, and with China witnessing the birth of the third gene-edited baby. So where does this leave language learning and tech, and what is in store for the near future?

Where we are now

Apps, apps, apps… With the 2019 gaming community reaching a population of 2.5 billion globally (statista.com), it is no surprise that apps are an attractive option for learning English. The default options tend to be Babbel, Duolingo and Memrise, but there is a plethora of options to choose from. Some recent fun apps I have experimented with are ESLA for pronunciation, TALK for speaking and listening, and EF Hello.

In the classroom, however, the digital landscape can be quite different. Low-resource contexts and reluctance from teaching professionals to incorporate tech into the learning environment can mean that opportunities for learners to connect with others and seek information are not available. Even in some of the most highly tech-penetrated societies, 19th-century rote-based learning and high-stakes testing approaches are favoured.

Predictions for the future

Does educational technology have all the answers we need to improve the language output of ESL learners globally? No, probably not. However, society has been so dramatically altered by the impact of technology in almost every facet of our lives that it would be rather odd, I feel, to reject it in teaching and learning environments.

In higher education the main concern is data privacy and ethics around exposure to digital areas such as the cloud. Yet chatbots are starting to be integrated to support students with university-related FAQs. Both the Differ and Hubert chatbots are being researched for their potential to improve qualitative student interaction and feedback.

Kat’s predictions

In all honesty, I think it is a tough call to gauge where we will be with Ed Tech during the next ten years. Data privacy is a considerable issue when incorporating elements of AI into learning fields. This is not an issue with VR and AR, which underpins their proliferation in teaching and learning. I feel that VR and AR will continue to mature and provide a more full-bodied learning experience when using VLEs. This may, however, be a slightly more complex paradigm than some may be able or prepared to employ.

I still firmly believe that reflective practice is a solid foundation for learners using recorded audio or visual content of their language production. While this doesn’t mean the introduction of a big pioneering tech tool, it highlights recording’s relevance as a reliable learning tool. In the same way, I continue to use WhatsApp, WeChat and Line to share learning content with learners and encourage them to interact with each other and with other learning communities.

Hello, can I help you?

Facebook Messenger bots can be set up and working within an hour. It is no wonder, then, that text-to-text chatbots have replaced automated customer service answer machines in many sectors of industry.

The chatbot can be programmed with a training corpus of customer service complaints, in the form of recognisable input data and possible solution phrases. The algorithms then use keyword identification to identify the issue and match it with a suitable response. Given the many experiences of miscommunication with lackadaisical customer service telephone operators, I feel this is a perfect use of chatbot technology.
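To make the keyword-matching idea concrete, here is a minimal sketch in Python; the intents, trigger keywords and canned responses are invented for illustration, not taken from any real deployment.

```python
# Minimal keyword-matching retrieval bot: each "intent" pairs trigger keywords
# with a canned solution phrase, and the best-scoring intent wins.
INTENTS = {
    "delivery": {
        "keywords": {"late", "delivery", "shipping", "arrived"},
        "response": "Sorry about the delay. Can you give me your order number?",
    },
    "refund": {
        "keywords": {"refund", "money", "charged", "cancel"},
        "response": "I can help with refunds. Which item would you like to return?",
    },
}

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"


def reply(user_text: str) -> str:
    words = set(user_text.lower().split())
    # Score each intent by how many of its keywords appear in the input.
    scores = {name: len(words & intent["keywords"]) for name, intent in INTENTS.items()}
    best = max(scores, key=scores.get)
    return INTENTS[best]["response"] if scores[best] > 0 else FALLBACK


if __name__ == "__main__":
    print(reply("My delivery is three days late"))
    print(reply("asdfgh"))  # no keywords matched, so the fallback is returned
```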

I have been experimenting with building a bespoke chatbot for my own research purposes, so I can confirm that the practice is considerably more complex than the theory of providing an interactional partner for learners of English as a second language. The models and frameworks of customer-service chatbots were not possible to adapt to my case. I tried using the Dialogflow framework provided by Google, which surprisingly produced rather disappointing results.
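For context, sending a single user utterance to a Dialogflow agent from Python looks roughly like the sketch below. It is based on the google-cloud-dialogflow v2 client; the project and session IDs are placeholders, and the exact calls may vary between library versions.

```python
# Rough sketch: send one text utterance to a Dialogflow agent and read back
# the matched intent and its fulfilment text.
from google.cloud import dialogflow


def detect_intent(project_id: str, session_id: str, text: str, language: str = "en") -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code=language)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    print("Matched intent:", result.intent.display_name)
    return result.fulfillment_text


# Example call with placeholder IDs:
# print(detect_intent("my-gcp-project", "learner-123", "Can we practise ordering food?"))
```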

I feel the fear of a digital world where machines take over from humans is somewhat premature, as there is still a lot of development needed in order to iron out the creases of chatbot technology.

Disinhibition and Human Computer Interaction

For some reason, when we are learning a foreign language, we feel intimidated about speaking it. We fear we will be laughed at, won’t say the right thing, won’t be understood, or we simply lack the confidence to put a voice to the words floating around in our brains forming utterances.

It is clear that, for whatever reason, inhibition about speaking is a common problem among language learners. So I am investigating strategies to disinhibit learners by having them engage with a computer to practise speaking, giving them the confidence for oral interaction with humans.

Human-computer interaction (HCI) to practise English conversation offers several advantages compared to practising with a human. The main motivations are:

  • low inhibition, because learners know they are interacting with a machine that will not judge their performance unless asked to do so;
  • a low-anxiety environment that promotes confidence, because there is no human waiting for the next turn;
  • interaction for as long as the learner wants to practise;
  • patience: computers do not tire of conversing or of repeating the same conversation pattern.

I therefore strongly believe that HCI is a promising solution for learner disinhibition. Updates on experiments carried out with chatbots to fulfil this research to follow…

Man or machine?

Man or machine? That is the question! There is an endless flow of information being pushed onto our screens about the danger of robots and machines taking over the world. Martin Ford’s Rise of the Robots (2015) presents a blatantly bleak view of automation and the ‘threat of a jobless future’ due to the advances of technology.

When it comes to automated customer service agents, I am sure we all have long-winded stories of negative experiences. On the flip side, however, I have also had my share of less-than-favourable customer service experiences with humans. While there is evidence of the frustrations of not being able to interact with a human to resolve customer service issues, there is considerably more evidence supporting the view that the human was unable to resolve the query, and that a chatbot could have more than adequately dealt with the matter in a considerably shorter time frame (Xu et al., 2017). Chatbots are also consistently patient and polite; they remain unruffled by rude customers, high traffic, or repeated requests for the same information, and they never tire (McNeal & Newyear, 2013).

I think there is a time and a place for everything. But given the shrinking patience and the expectation of immediacy that humans now bring to the service sector, I think chatbots are a good option for quick enquiries and for resolving systematic ‘problems’.


From RALL to chatbots

I began the year with a strong desire to continue my research into RALL, and while that is still the case, my research has led me to investigate the benefits and pedagogical potential of using chatbot teachers to assist in language learning.

The research examines the use of a speech-to-speech interface as the language-learning tool, designed with the specific intention of promoting oral interaction in English. The computer (chatbot) will assume the role of conversational partner, allowing the learner to practice conversing in English. A retrieval-based model will be used to select appropriate output from predefined responses. This model will then be mapped onto a gamification framework to ensure an interesting and engaging interactional experience.
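As a toy illustration of the retrieval-based idea, rather than the actual system, response selection can be framed as scoring the learner’s utterance against the prompts attached to a set of predefined responses and returning the best-matching one; the prompt-response pairs below are invented.

```python
# Toy retrieval-based selection: score the learner's utterance against stored
# prompts with TF-IDF cosine similarity and return the paired response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

PAIRS = [
    ("hello how are you", "I'm fine, thanks! What did you do at the weekend?"),
    ("what is your favourite food", "I love noodles. What about you?"),
    ("goodbye see you later", "Bye! Shall we practise again tomorrow?"),
]

prompts = [prompt for prompt, _ in PAIRS]
vectorizer = TfidfVectorizer()
prompt_vectors = vectorizer.fit_transform(prompts)


def select_response(utterance: str, threshold: float = 0.2) -> str:
    vec = vectorizer.transform([utterance])
    sims = cosine_similarity(vec, prompt_vectors)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        # Nothing close enough in the repository, so fall back.
        return "Sorry, I'm not sure what you mean by that."
    return PAIRS[best][1]


print(select_response("hi, how are you doing?"))  # closest to the greeting prompt
```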

Speech is one of the most powerful forms of communication between humans; hence, it is my intention to add to current research in the human-computer interaction research field to improve speech interaction between learners and the conversational agent (the chatbot) in order to simulate human-human speech interaction.

So just how should you speak to a chatbot?

So just how should you speak to a chatbot? Cast your mind back to Tay, the chatbot built by Microsoft: she was shut down on the grounds of inappropriateness because she was posting offensive and unsuitable content on her Twitter account. Hardly surprising, really, considering she was built using a corpus of Twitter posts and dialogues, a perfect example of the hunter becoming the hunted.

The apparent ubiquity of chatbots in the customer service sector is proving to be somewhat beneficial to the companies using them, but less convenient for users. The majority of conversational agents are built using a retrieval-based model, which replies from a repository of predefined responses, selecting the most appropriate one based on the input text and context. The output could be limited to as few as three utterances per response. Let’s look at an opening turn to see how this works:

‘Hello, what can I do for you today?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘I missed that, say that again?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘Sorry, can you say that again?’

> No response, delayed response from user, or the chatbot is unable to interpret the user input.

‘Sorry, I can’t help.’
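Behind the scenes, this dead-end behaviour often amounts to nothing more than a re-prompt loop: the bot cycles through a short list of fallback utterances and abandons the conversation when they run out. A minimal sketch follows; the fallback wording mirrors the example above, and everything else is invented.

```python
# Sketch of the dead-end exchange above: the bot re-prompts a fixed number of
# times and gives up once its fallback utterances are exhausted.
FALLBACK_PROMPTS = [
    "Hello, what can I do for you today?",
    "I missed that, say that again?",
    "Sorry, can you say that again?",
]
GIVE_UP = "Sorry, I can't help."


def converse(user_turns, understand):
    """Walk through the fallback prompts; stop as soon as a turn is understood."""
    for prompt, turn in zip(FALLBACK_PROMPTS, user_turns):
        print("BOT:", prompt)
        reply = understand(turn)
        if reply is not None:
            print("BOT:", reply)
            return
    print("BOT:", GIVE_UP)


# A stub that never understands anything reproduces the exchange above.
converse(["...", "...", "..."], lambda text: None)
```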

This leads me to believe that we as users need to learn how to speak to an automated conversational agent before determining what we want from it. If we don’t respond, or respond using indecipherable discourse, then we are expecting a machine to manage a task that humans would also have problems interpreting. While considerable research and development is being carried out in the field of intelligent conversational agents, we are still a long way from them becoming an integral part of mainstream customer service interfaces able to interpret our utterances and commands in line with our expectations.

Turn taking and chatbots

Turn taking is a natural part of conversation that we subconsciously engage in so that the discourse flows. Here is an example:

A: “Good morning”

B: “Morning. How are you? Good weekend?”

A: “Yes thanks, and you? How was Brighton?”

For the Cambridge main suite speaking exams, candidates are assessed on their turn-taking ability under the ‘Interactive Communication’ criterion. In other words, this means the candidates’ ability to:

  • interact with the other candidate easily and effectively;
  • listen to the other candidate and answer in a way that makes sense;
  • start a discussion and keep it going with their partner/s;
  • think of new ideas to add to the discussion.

Along with the onslaught of technological advances came advances in automated responses from portable digital devices. These conversational agents, or dialogue systems, are capable of single interactions or of up to six task-oriented turns. An example of such a dialogue agent is Siri, and an example of a task-oriented interaction would be: “Siri call Dad”.
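A single task-oriented turn like “Siri call Dad” can be thought of as intent recognition plus slot extraction. Here is a toy sketch of that idea; the command patterns are invented, and this is not how Siri is actually implemented.

```python
import re

# Toy intent + slot extraction for single-turn, task-oriented commands.
PATTERNS = {
    "call": re.compile(r"\bcall (?P<contact>\w+)", re.IGNORECASE),
    "set_timer": re.compile(r"\btimer for (?P<minutes>\d+) minutes?", re.IGNORECASE),
}


def parse_command(text: str):
    for intent, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            return intent, match.groupdict()
    return "unknown", {}


print(parse_command("Siri call Dad"))              # ('call', {'contact': 'Dad'})
print(parse_command("Set a timer for 10 minutes")) # ('set_timer', {'minutes': '10'})
```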

Chatbots are not a ‘new’ invention per se. ELIZA, created between 1964 and 1966 at MIT, was a natural language processing computer programme that demonstrated the same characteristics as today’s chatbots, but on a less sophisticated scale and with less complex interaction. The aim of chatbot builders is to create natural language processing programmes that replicate human-human interaction by enabling more turns and therefore extended conversations.

The interesting challenge then becomes how to use each turn as a springboard for the next, and how to ensure that each one prompts a response that has been pre-programmed, so that the user does not receive a generic message like “I’m sorry, but I’m not sure what you mean by that” when expressing a specific request or a turn that is not recognised. More about chatbots soon!
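In the meantime, here is a toy sketch of that springboard idea as a tiny dialogue state machine, where the response to each turn also sets the state that constrains what the bot expects next; the states, keywords and replies are all invented for illustration.

```python
# Toy dialogue state machine: each matched turn returns a reply plus the next
# state, so every turn "springboards" the one that follows.
FLOW = {
    "start": {
        "weekend": ("Nice! What did you do at the weekend?", "weekend_reply"),
        "food": ("Let's talk about food. What did you eat today?", "food_reply"),
    },
    "weekend_reply": {
        "beach": ("The beach sounds great. Did you swim?", "start"),
        "home": ("A quiet weekend at home, then. Did you watch anything good?", "start"),
    },
    "food_reply": {
        "noodles": ("Noodles again! Are they your favourite?", "start"),
    },
}
UNRECOGNISED = "I'm sorry, but I'm not sure what you mean by that."


def respond(state: str, user_text: str):
    for keyword, (reply, next_state) in FLOW.get(state, {}).items():
        if keyword in user_text.lower():
            return reply, next_state
    return UNRECOGNISED, state  # stay in the same state and wait for a recognisable turn


state = "start"
for turn in ["Ask me about my weekend", "I went to the beach", "blargh"]:
    reply, state = respond(state, turn)
    print("BOT:", reply)
```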

IM: words in the air or recorded forever?

IM: words in the air or recorded forever? How many times have you misread a text or IM, or been misinterpreted yourself? Spoken-style discourse in written form leaves itself open to misinterpretation because suprasegmental features are not apparent, and neither is body language.

When we speak face-to-face with others, we are careful about what we say for fear of misinterpretation, or of offending the person or people we are conversing with. We therefore communicate our message carefully and diplomatically, using body language and features of connected speech to express ourselves.

Ironically, though, when we message others using one of the plethora of online messaging apps and services, or a mobile phone service provider’s texting service, we often send the message before we have had time to re-read it. The immediacy that instant messaging has brought to the global messaging community seems to have changed the way we communicate.

The written word is recorded, and the spoken word is ephemeral discourse in the air, yet we pay attention to the ephemeral and not the recorded!

Just what do we expect from Chatbots?

Chatbots are the future of conversational intelligence and can be used to simulate human conversation. But just what do we expect from chatbots? On the one hand are those who firmly believe intelligent systems will dissipate the element of human interaction in years to come. On the other hand, others revel in the delights of giving Siri instructions to challenge her intelligence and gauge the level of response.

Personally, I feel that the benefits of intelligent systems (chatbots) outweigh the disadvantages, but I am convinced that the advantages will depend on our behaviour and our receptiveness to their merits. AI cynics were delighted when Microsoft’s Tay was manipulated into demonstrating bad behaviour. At last there was proof to substantiate the argument that AI poses severe dangers.

Users of Alexa were slightly disturbed by her random outbursts of laughter, to the extent that her code was re-written to disable the reaction unless explicitly requested, and to avoid reacting to false positives that try to trick her. This all leads to the question of the level of humanness we expect from intelligent systems and chatbots, or, more to the point, the level of humanness we, as ‘humans’, are comfortable accepting from ‘machines’.

Reflective video

Reflective video is an element of reflective practice I have waxed lyrical about for a long time. Being able to see ourselves and analyse how we come across when we communicate shows whether we are capable of transmitting a clear and succinct message that learners and/or attendees at a workshop or seminar can comprehend.

I was asked to make some ‘short’ videos about the future of teaching and learning, about whether technology influences and shapes how we communicate, and therefore about the teaching and learning of languages.

Here is a trial run of one of the videos – it was supposed to be 2.5 – 3 minutes long. Reflective practice note to self – get to the point and make your message clear.