Here are the slides from the ‘Boosting learner vocabulary for Cambridge exams’ workshop I gave at Oxford Tefl, Barcelona yesterday. Thank you to all of you who attended; there was a great turnout.
Recent talks with colleagues working in the UK public education sector about SATs (Standard Assessment Tests), and my own experience tutoring on pre-sessional courses, have given me first-hand insight into the exhaustive measures some institutions employ to ‘promote’ learning through continuous formative assessment. The term they have coined is ‘assessment for learning’. Experience has shown me that the learning gains are limited compared with the time taken to prepare for the testing, the testing process itself, and of course the marking and feedback sessions.
A typical writing test might involve learners being given several extracts from source texts to read and make notes on a week before the actual test. On test day, those notes are not permitted in the room and a new set of notes is handed out with a question to analyse. In my humble opinion, while learners will have read the texts and so understand them more deeply than if they were seeing them for the first time, the test is in fact an evaluation of memory, with learners desperately digging deep to retrieve the points they deemed worthy of remembering. The question is analysed in groups, a draft plan is drawn up individually, and finally a 90-minute test is undertaken. Learners are notably exhausted after a testing process that has essentially been drawn out over an entire week.
As an experiment, I tried an alternative approach in which I gave learners four short extracts. In pairs, each learner read two different texts and made notes. The notes were swapped with their partner, who used them as a springboard to understand the two texts they had not read. Each learner then read the texts alongside their partner’s notes to check whether all the key themes had been identified. A group discussion was held, a question was given, and learners wrote a short piece of discourse within a 40-minute limit, answering the question with reference to the key themes identified previously and citing as necessary. When the writing was completed, learners exchanged scripts with a peer, who reviewed them for content, accuracy in answering the question, coherence, cohesion, stance, and argumentation. Another group discussion was held, at which point I also participated with language support and academic guidance. This ‘think tank’ approach appeared to be effective, and the feedback I received from the class it was tested with was positive, including comments such as “I learnt from my friends so it helped me feel confident to write” and “she was able to notice some additional points I didn’t see”.
After spending time with Nao (SoftBank Robotics) in February, I am not in the slightest bit surprised that one of his many skills is the ability to write any word he is asked to, spelling the word as he writes. Through speech recognition programming the robot is able to perform many tasks, but writing is a particularly powerful tool for helping those with literacy difficulties, and of course those wanting to learn a language. It is another interesting feature that will support my current research.
I recently read an interesting article about listening: “Listening for needles in haystacks: how lecturers introduce key terms” (Martinez, R., Adolphs, S., and Carter, R., ELT Journal, vol. 67, issue 3 (2013), pp. 313–323). As a teacher and trainer for the Cambridge exams, I meet teachers and learners alike who comment on the complexity of the listening part of the exams and ask for strategies to help them develop stronger listening skills and produce better outcomes in the exam. While I naturally suggest high exposure to audio texts through the plethora of free podcasts and radio programmes available on the internet, I have become aware that what exam candidates really need are strategies to identify where the answers are in the audio script and how to recognise them.
Here are a few ideas that I have tried recently that I have found work for specific parts of the Cambridge First, Cambridge Advanced, and Cambridge Proficiency exams.
All the options offered in a multiple-choice question are often mentioned in the audio text, which can distract candidates from selecting the correct answer. Identifying conjunctions like ‘but’ helps learners notice when information is being contrasted and the right answer is being given. The same exercise can be done with ‘because’ and ‘besides’.
To practise this in class, you could play practice tests to learners and ask them to stand up when they hear ‘but’, and sit down when they hear it again. This highlights the importance of conjunctions, and becoming aware of their use in audio texts enables learners to pick out the correct information. You could also record your own audio texts, ask students to record theirs, or play short audio extracts to practise this and introduce a fun kinaesthetic activity into the classroom.
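For teachers who prepare their own materials from transcripts, this marker-spotting idea can even be sketched in a few lines of code. The snippet below is purely illustrative (the function name and marker list are my own, not from any exam board tool): it pulls out the sentences of a transcript that contain a contrast or reason conjunction, so they can be highlighted before class.

```python
import re

# Hypothetical starter list of discourse markers worth highlighting.
CONTRAST_MARKERS = ["but", "because", "besides", "however", "although"]

def find_marker_sentences(transcript, markers=CONTRAST_MARKERS):
    """Return the sentences of a transcript that contain any of the
    given discourse markers, i.e. the places where information is
    being contrasted or justified."""
    # Naive sentence split on end punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    hits = []
    for sentence in sentences:
        words = {w.lower() for w in re.findall(r"[a-zA-Z']+", sentence)}
        if words & set(markers):
            hits.append(sentence)
    return hits

sample = ("The museum was popular. But visitor numbers fell last year. "
          "It closed early because of repairs.")
for s in find_marker_sentences(sample):
    print(s)
```

This is only a rough sketch; a whole-word check like this avoids false matches inside longer words (e.g. ‘butter’), which simple string search would not.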
The listening extracts often contain topics or lexis that candidates are unfamiliar with, so a good way of helping learners is to take questions from the exam and use them as class discussion points.
Give learners the question and allow them time to think about language they may hear or that could be used in a discussion of the topic; they can also look up new lexis they would like to use, thereby broadening their lexical range. Then allow them a limited time to discuss the topic and practise the lexis. This is an effective way of using an integrated skills approach, with learners focusing on the lexis in the listening questions through a speaking activity. In the same way, the listening questions from Part 1 can be used as starting points for classroom speaking, extending learning and helping learners identify question-type patterns. For example:
Why is he/she talking to…?
Who is he/she talking to…?
What is he/she doing?
How does he/she feel?
Another useful strategy to share with learners is that the verbs used in the listening extract often indicate the tense. For example, ‘warn about…’ refers to something in the future, which could help learners rule out options that refer to the past.
Automation: friend or foe?
The debate regarding automation is becoming increasingly charged as technology continues to permeate ever more sectors of society. While users may scoff at self-checkout tills in supermarkets, I’m not entirely convinced that shop assistants in the UK can offer a better service. Reflecting on a recent trip, I felt even more alienated as the people “serving” me looked utterly perplexed when challenged to engage in conversation beyond stating the price and asking which method of payment I would like to use. I usually leave the till disappointed and question whether a robot would in fact be capable of offering me better customer service, because it would be programmed to do so.
A recent white paper published by the Association for Advancing Automation (www.a3automate.org, April 2017) puts forward several arguments regarding career sustainability and changing job titles as tasks evolve and shift more heavily towards automation. Automation is nothing new; society has been relying on machines since as far back as the industrial revolution. What is new, however, is the way society needs to adapt and implement the appropriate changes to build the skills required to support evolving technological advances.
Many argue that robots will deprive manual labourers of their employment opportunities; I would argue very differently and believe there is room for both manual and automated labour. Unimate, the industrial robot first put to work by General Motors in 1961, was considered a welcome relief from the heavy-duty lifting and welding that the blue-collar workers who had previously carried it out deemed unpleasant and dangerous. In today’s society many of the most advanced robots continue to be those designed for industrial purposes, as automation seems to provide an attractive technological solution to rising labour costs in societies like China, South Korea and Japan where there is still a strong emphasis on production. Many in fact see a clear correlation between automation and manufacturing, and claim it could save the manufacturing industry in China.
Robots have also had a considerable impact on white-collar jobs, or knowledge workers. Robots that replace white-collar workers have woven their way into society in many contexts. In some societies, autonomous humanoid robots are already replacing shop assistants and bank tellers, which demonstrates the societal changes and trend towards using robots to replace human workers in white-collar jobs. While the predicted abundance of robots in society, and the effect they will have on the human labour force in white-collar jobs, is perceived as a threat by many, I do not share that view. Automated machines have been integrated into our lives without a second thought, providing quick solutions in many contexts. Long gone are the days of queuing at the bank during banking hours to withdraw cash, or queuing to buy a train ticket. These machines are considered unobtrusive and their existence goes unchallenged, yet they are replacing white-collar workers. When the machine takes on a humanoid form, however, the convenience is often perceived as a threat. Maybe this is due to a lack of confidence that humans are able to carry out a task as efficiently as a robot; to return to the beginning of this post, maybe that explains the apparent decline in customer service skills nowadays.
Twenty years ago we could never have imagined the impact of digital technologies on society. Maybe we need to embrace the automation age and consider the career prospects on offer as new careers and industries based around automation continue to grow. Instead of creating a skills gap, perhaps we should consider training options that embrace automation and the changes it has brought to our society, irrespective of the sector we work in. Research and development investment in technology, including automation, will continue. I prefer to make the necessary changes to be prepared for what comes next, and to be served by humanoid robot shop assistants that are guaranteed to smile, courteously ask if everything is okay, and offer further help. But that is just me personally.
To tech, or not to tech? That is the question!
At seminars, conferences, and workshops recently, my attention has once again been drawn to the technology debate. I was asked why I blog and how often, if I feel it is important to have an online presence, and how exactly I harness technology for teaching, training and learning purposes.
To borrow Bax’s term ‘normalisation’, for me personally technology is so embedded in my day-to-day life that I use it automatically, without giving it a second thought, just as I do a pair of glasses.
The bottom line is that I like tech, so I have an inherent curiosity to explore what it can do for me, whether it can facilitate my teaching practice, and, more specifically, how. This is strongly reflected in my teaching and training, and I endeavour to transmit it at talks, in the classroom, and in online teaching and training contexts. I understand that digital tools will not appeal to everyone, but if fear is present, the watershed between technophobe and tech user will only become more ingrained. That is not to say that digital tools can answer all our pedagogical goals and challenges, but my message here is that if we don’t even give them an opportunity and experiment, we will never discover what tech could do for us and, more importantly, our learners, whether they themselves are millennials, screenagers, digital natives or non-tech users.
So, in answer to the questions above, I blog when I have time and when I have ideas and/or reflections I would like to share. I feel an online presence supports who I am and what I do. It enables people to gain a sense of the development ideas I am interested in and I feel I am contributing to the online community that so many of us take for granted when curating resources and ideas. As I have previously said, I like to experiment with technology, so I share any new ideas I discover or learn from others in my teaching and training contexts as and where appropriate.
To tech, or not to tech? The answer is up to you!
Lately, I have been questioning the human-robot relationship, the natural reactions we as humans have towards humanoid robots, and the value of learning human values from robots.
While perfectly aware that these creations are not living beings, recent interactions with Erica and Nao have made me realise that whether the interaction is with an android or a humanoid, my emotional reactions towards them are the same.
When touching Erica’s hand I was careful to place my hand on hers gently, refraining from any sudden movements that might startle her, just as I would with a human whose hand I was taking for the first time. Interacting with Nao, I was careful not to take his hand too firmly in mine as I walked him along the worktop, for fear of hurting or damaging him.
What is this inherent ‘care’ that the human brain automatically takes on when interacting with humanoid robots? Research carried out by Hiroshi Ishiguro demonstrates that human interaction patterns with androids parallel those with humans, and the evidence suggests it is the ‘humanness’ of the robot that provokes this subconscious reaction.
I have recently become extremely interested in Ishiguro’s research into human responses to android robots. Using the Total Turing Test, it was possible to show that subjects were unable to identify android robots when flash-exposed to them for one or two seconds while tasked with remembering the colour of the clothing the android was wearing. For me this demonstrates that the brain believes what it sees, but is also influenced by what it wants to see. With respect to language learning and the use of androids, studies have demonstrated that the lack of emotion in androids supports learning in individuals with autism, because the androids do not respond emotionally to the subjects they are interacting with. This highlights important parallels with inhibition in language learning and the subconscious facial gestures teachers often make in response to learner performance. One raised eyebrow is enough for learners to sense that something they said was incorrect, and they will react by losing their train of thought, pausing for correction, or stopping what they were saying altogether. Remove the teacher’s facial gesture from the equation and the learner will probably continue to speak. Perhaps androids can offer a different solution to this problem.
This week I have been talking about the best strategies for developing turn-taking skills for the Cambridge exams. It has been an interesting insight on many levels, but specifically into how other teaching professionals deal with the challenge of mismatched pairs, when candidates have different levels of proficiency. Differentiation is a very common issue in the classroom and not one that I feel is given enough consideration. While we can do our best to mix students when carrying out different activities, what I have really come to realise is that less proficient students need extra support and scaffolding to help them progress, so regularly changing their partner in class isn’t necessarily going to solve the problem.
I often suggest to both teachers and learners that voice recordings made with mobile phone dictaphone apps can be a very quick and easy route to confidence building. By making short recordings and listening to themselves, learners become familiar with how their voice sounds and feel less inhibited about speaking. I then suggest that by playing back short recordings several times, it is possible to identify weaknesses in pronunciation and the features of connected speech, and also to check whether the requirements of the exam task have been met. Part 3 of the FCE, CAE and CPE is designed to evaluate candidates’ turn-taking skills by assessing a two-way conversation between them. Candidates are assessed on their ability to maintain a conversation, suggest ideas, and respond appropriately to the other candidate’s ideas. All of this needs to be achieved within a time limit of two minutes, in addition to fulfilling the requirements of lexical resource and a varied grammatical range. By timing and recording their practice runs, students are able to refine and focus their answers in order to meet the task requirements.
The final stage of Part 3 in the speaking test is a decision-making question in which the candidates need to reach a decision based on the discussion that has just taken place. A common error is for candidates to decide which option is the most ‘important’, the most ‘urgent’, has the ‘greatest effect’ and so on during the first stage, which often results in them repeating themselves during the second stage. To avoid this, I encourage learners to answer only the question in the diagram during the first stage and nothing more; I have found this to be the best strategy. Again, timing and recording themselves has proven invaluable, especially given the one-minute time limit.
I have just returned from my annual trip to Japan, which has proved to be extremely insightful. I had the great pleasure of meeting Prof Ishiguro in Osaka and the opportunity to see some of his current research in action.
Ishiguro: Through his research, it is possible to gain a sense of Ishiguro’s motivation for creating android robots. He argues that society itself is responsible for shaping humans, so by using a combination of computers, motors, and sensors he is able to create androids capable of mimicking humans. The result is synergistic androids that, with exposure to language and human-robot interaction (HRI), are able to develop a personality, making them as human as any other being that depends on exposure to language, society, others and interaction to shape who they are and who they become. In addition, robotic research enables us to gain further insights into the activities of the human brain, and therefore a greater understanding of cognitive neuroscience: robots reflect the activity of the human mind in a way that permits this understanding.
Robots in Japan: Japanese citizens openly accept robots and autonomous systems into their society, so they don’t feel the need to distinguish between them and humans. Robots are considered beings like any other, and take an active part in society in theatre productions and as caregivers, companions and shop assistants.
Erica: Erica, one of Ishiguro’s projects, designed as a research platform for an autonomous conversational android, uses voice recognition software to interact with humans. Unfortunately my Japanese is not proficient enough for me to have interacted with her successfully myself, but here is a short clip of her talking with one of Ishiguro’s research students.
Intelligent microphones: Ishiguro is also working on intelligent microphones that would permit scheduled turn-taking among robots, thereby relieving the pressure on humans to partake in the interaction. From a pedagogical perspective this is a very interesting development for language training and for the education and treatment of people with communication disorders such as autism.
Geminoid: When Ishiguro was asked about his students’ reactions to learning with Geminoid, the responses were all positive. Japanese communication etiquette is an inherent part of the country’s culture. By teaching his classes via a tele-operated android doppelgänger, Ishiguro confirms that students feel less intimidated about asking questions and extending their enquiry, which they might not otherwise do with the professor himself. Ishiguro also confirms positive learning outcomes in European contexts (with 13 different nationalities) and is working alongside several European companies to bring these positive learning outcomes to a wider variety of contexts and nationalities.
The current goal for me is to get my Japanese to a proficient enough level to be able to reap the rewards of HRI myself.