Category Archives: Robots

Enthusiasm to learn is emotionally driven

Enthusiasm can be displayed in different ways. It can also be present in a learner who shows no visible signs of being enthusiastic: they are simply enjoying the learning and keen to learn.

My current research investigates how learners demonstrate their enthusiasm when interacting with a speech recognition interface, through both linguistic and non-linguistic features. The dataset I am using clearly demonstrates that the psychological state of learners impacts their enthusiasm, and therefore their language output and capacity to engage in learning, more than any other factor. While this came as a surprise, it aligns with theories of motivation and learning, which hold that positive emotional (and hence psychological) states favour learning, while negative emotional states (anxiety, stress, depression) can adversely affect it.

I’ve spent a lot of time with humanoid robots, speech recognition interfaces, and autonomous agents, and despite their degree of humanness, there is something decidedly safe for me about interacting with a non-conscious being. Maybe that is why Weizenbaum’s research was so successful! The non-judgemental attributes of a machine make the user feel comfortable interacting, and therefore they get more out of the learning experience. This is something I am still investigating, but Buddy, the robot in the image above, aims to understand the mood of the user and then respond accordingly. So empathy is now going beyond the human…

What does it mean to be human?

With the surge of interest and investment in AI, the question at the forefront of my mind is ‘What does it mean to be human?’ The apparent obsession of AI is to replicate human intelligence on all levels, but the problem I have with this is that I don’t think we fully understand what it means to be human. I think it is impossible to reproduce human ‘intelligence’ without first appreciating the complexities of the human brain. Hawkins (2004) argues that the primary reason we have been unable to build a machine that thinks exactly like a human is our lack of knowledge about the complex functioning of the brain, and how it is able to process information without conscious thought.

This is why the work of Hiroshi Ishiguro, the creator of both Erica and Geminoid, interests me so much. Ishiguro’s motivation for creating android robots is to better understand humans, in order to build better robots, which can in turn help humans. I met Erica in 2016, and the experience made me realise that we are perhaps pursuing goals of human replication that are unnecessary. Besides, which model of human should be used as the blueprint for androids and humanoid robots? Don’t get me wrong, I am fascinated by Ishiguro’s creation of Erica.

My current research focuses on speech dialogue systems and human–computer interaction (HCI) for language learning, which I intend to develop so it can be mapped onto an anthropomorphic robot for the same purposes. Research demonstrates that one of the specific reasons non-human interactive agents are successful in language learning is that they disinhibit learners and therefore promote interaction, especially amongst those with special educational needs.

The attraction of humanoid robots and androids for me, therefore, is not necessarily how representative they are of humans, but the affordances of their non-human aspects, such as being non-judgemental. In my opinion, we need more Ericas in the world.

Reflections 2017

Reflections of 2017: The debate regarding the dangers of spending too many hours glued to an electronic device continues to bubble. The unknown abyss and potential of AI in its many guises continues to be explored. Fears of a robot-controlled world continue to rise. What will 2018 bring?!

Personally, I find all of the above extremely exciting. Do I use my phone too much? I know I work too much, and because 80% of my work is online, I am obliged to use a digital device. This is part of the natural shift in the plethora of work that has been created as geographical borders are transcended by cyberspace and the power of technology, telecommunications, and IT. Just as technology is shaped by the society that uses it, tech very much shapes society and the way we interact and go about our day-to-day lives. I view technological developments as portals to opportunities that can be enhanced or were not previously available, especially in a teaching and learning context (whether that be English or dancing to Michael Jackson’s Thriller!).

I will continue to explore how ed tech can support language learning this year, as I delve deeper into the AI and machine learning chasm. I will also wonder whether, had smoking not been banned in pubs and bars, smartphones would still be the go-to company we choose as we sit alone sipping a coffee, contemplating the week or waiting for a friend.

Learn to dance Thriller with NAO

Nao (SoftBank Robotics) – Robots that write

After spending time with Nao (SoftBank Robotics) in February, I am not in the slightest bit surprised that one of his many skills is the ability to write any word he is asked to, spelling the word aloud as he writes. Through speech recognition programming, the robot is able to perform many tasks, but writing is a profound tool that can help those with literacy difficulties, and of course those wanting to learn a language. Another interesting feature that will support my current research.
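The letter-by-letter behaviour described above can be sketched in a few lines. This is a toy illustration of my own, not the actual NAOqi API: each character of a recognised word is paired with a hypothetical stroke command, mirroring the way the robot speaks each letter as it draws it.

```python
# Toy sketch (my own illustration, not the NAOqi API): turning a
# recognised word into a letter-by-letter "say and write" sequence,
# the way Nao spells a word aloud as he writes it.

def spell_and_write(word):
    """Yield one (spoken letter, stroke command) pair per character."""
    for letter in word.upper():
        # On a real robot each letter would map to a stored stroke path;
        # here the "stroke" is just a placeholder string.
        yield letter, "draw_stroke(" + letter + ")"

steps = list(spell_and_write("nao"))
for spoken, stroke in steps:
    print(spoken, "->", stroke)
```

On the real robot, the recognised word would come from the speech recognition layer and each stroke would drive the arm motors; the point here is simply the pairing of spoken spelling with written output.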

Automation: friend or foe?

The debate regarding automation is becoming increasingly charged as technology continues to permeate ever more sectors of society. While some users scoff at self-checkout tills in supermarkets, I’m not entirely convinced that shop assistants in the UK can offer a better service. Reflecting on a recent trip, I felt even more alienated as the people “serving” me looked utterly perplexed when challenged to engage in conversation beyond stating the price and asking which method of payment I would like to use. I usually leave the till disappointed and question whether a robot would in fact be capable of offering me better customer service, because it would be programmed to do so.

A recent white paper published by the Association for Advancing Automation (www.a3automate.org, April 2017) puts forward several arguments regarding career sustainability and changing job titles as tasks evolve and shift more heavily towards automation. Automation is nothing new; society has relied on machines since as far back as the Industrial Revolution. What is new, however, is the way society needs to adapt, acquiring the skills required to support the technological advances as they evolve.

Many argue that robots will deprive manual labourers of their employment opportunities; I would argue very differently and believe there is room for both manual and automated labour. Unimate, the industrial robot first installed on a General Motors assembly line in 1961, was considered a welcome relief from the heavy-duty lifting and welding work that was deemed unpleasant and dangerous by the blue-collar workers who had previously carried it out. In today’s society many of the most advanced robots continue to be those designed for industrial purposes, as automation seems to provide an attractive technological solution to increasing labour costs in societies like China, South Korea and Japan, where there is still a strong emphasis on production. Many in fact see a clear correlation between automation and manufacturing and claim it could save the manufacturing industry in China.

Robots have also had a considerable impact on white-collar jobs and knowledge workers, and have woven their way into society in many contexts. In some societies, autonomous humanoid robots are already replacing shop assistants and bank tellers, which demonstrates the societal trend towards using robots to replace human workers in white-collar jobs. While the predicted abundance of robots in society, and the effect they will have on the white-collar labour force, is perceived as a threat by many, I do not share that view.

Automated machines have been integrated into our lives without a second thought, providing quick solutions in many contexts. Long gone are the days of queuing at the bank during banking hours to withdraw cash, or queuing to buy a train ticket. These machines are considered unobtrusive and their existence is not challenged, yet they are replacing white-collar workers. When the machine takes on a humanoid form, however, the convenience is often perceived as a threat. Maybe this is due to a lack of confidence that humans are able to carry out a task as efficiently as a robot and, to return to the beginning of this post, maybe that explains the apparent decline in customer service skills nowadays.

Twenty years ago we could never have imagined the impact of digital technologies on society. Maybe we need to embrace the automation age and consider the career prospects that arise as new careers and industries based around automation continue to grow. Instead of creating a skills gap, perhaps we should consider training options that embrace automation and the changes it has brought to our society, irrespective of the sector we work in. Research and development investment in technology, including automation, will continue. I prefer to be making the necessary changes to be prepared for what comes next, and to be served by humanoid robot shop assistants that are guaranteed to smile, be courteous, ask if everything is okay and offer further help. But that is just me personally.

The value of learning human values from robots

Lately, I have been questioning the human–robot relationship, the natural reactions we as humans have towards humanoid robots, and the value of learning human values from robots.

While I am perfectly aware that these are not living beings, recent interactions with Erica and Nao have made me realise that whether the interaction is with an android or a humanoid, my emotional reactions towards them are the same.

When touching Erica’s hand I was careful to place my hand on hers gently, refraining from any sudden movements that might startle her, just as I would with a human whose hand I was placing mine upon for the first time. Interacting with Nao, I was careful not to take his hand too firmly in mine as I walked him along the worktop, for fear of hurting or damaging him.

What is this inherent ‘care’ that the human brain automatically takes on when interacting with humanoid robots? Research by Hiroshi Ishiguro demonstrates that human interaction patterns with androids parallel those with humans, and the evidence suggests that it is the ‘humanness’ of the robot that provokes this subconscious reaction.

Android robots for language learning

I have recently become extremely interested in research carried out by Ishiguro regarding human responses to android robots. Using the Total Turing Test, he was able to determine that subjects could not identify android robots when flash-exposed to them for one or two seconds while given the task of remembering the colour of the clothing the android was wearing. For me this demonstrates that the brain believes what it sees, but is also influenced by what it wants to see.

With respect to language learning, studies have demonstrated that the lack of emotion in androids supports learning in individuals with autism, because the androids do not respond emotionally to the subjects they are interacting with. This highlights important parallels with inhibition in language learning and the subconscious facial gestures teachers often make in response to learner performance. One raised eyebrow is enough for a learner to become aware that something they said was incorrect, and they will react to this directly by losing their train of thought, pausing for correction, or stopping what they were saying altogether. Remove the teacher’s facial gesture from this equation and the learner will probably continue to speak. Perhaps androids can offer a different solution to this problem.

Androids – Erica – Ishiguro – Geminoid

I have just returned from my annual trip to Japan, which has proved to be extremely insightful. I had the great pleasure of meeting Prof Ishiguro in Osaka and the opportunity to see some of his current research in action.

Ishiguro: Through his research, it is possible to gain a sense of Ishiguro’s motivation for creating android robots. He argues that society itself is responsible for shaping humans; therefore, by using a combination of computers, motors, and sensors he is able to create androids that are capable of mimicking humans. Synergistic androids are created that, with exposure to language and human–robot interaction (HRI), are able to develop a personality, making them as human as any other being that depends on exposure to language, society, others and interaction to shape who they are and who they become. In addition, robotic research enables us to gain further insights into the activities of the human brain, and therefore a greater understanding of cognitive neuroscience: robots reflect the activity of the human mind, which permits this understanding.

Robots in Japan: Japanese citizens openly accept robots and autonomous systems into their society and feel no need to distinguish between them and humans. Robots are considered beings, just like any other being, and take an active part in society in theatre productions, as caregivers, companions and shop assistants.

Erica: Erica, one of Ishiguro’s projects, designed as a research platform for an autonomous conversational android, uses voice recognition software to interact with humans. Unfortunately, my Japanese is not proficient enough for me to have interacted with her successfully myself, but here is a short clip of her talking with one of Ishiguro’s research students.

Intelligent microphones: Ishiguro is also working on intelligent microphones that would permit scheduled turn-taking among robots, thereby relieving the pressure on humans to partake in interaction. From a pedagogical perspective this is a very interesting development for language training and for the treatment and education of people with communication disorders such as autism.
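To make the idea concrete, here is a minimal sketch of what scheduled turn-taking might look like, assuming a simple round-robin policy (my own illustration, not Ishiguro’s actual system): a coordinator grants the conversational floor to one participant at a time, so nobody has to compete for a chance to speak.

```python
# Minimal round-robin sketch of scheduled turn-taking (an assumed
# policy for illustration, not Ishiguro's actual system).
from itertools import cycle

class TurnScheduler:
    """Grants the conversational floor to one participant at a time."""

    def __init__(self, participants):
        self._order = cycle(participants)

    def next_turn(self):
        # Only the participant returned here is "live"; in a deployed
        # system the intelligent microphones would keep the rest muted.
        return next(self._order)

sched = TurnScheduler(["robot_A", "robot_B", "human"])
turns = [sched.next_turn() for _ in range(4)]
print(turns)  # the floor cycles round: robot_A, robot_B, human, robot_A
```

Because the human’s turn arrives on a predictable schedule rather than through competitive interruption, the pressure to seize the floor disappears, which is precisely the pedagogical appeal.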

Geminoid: When asked about the reaction of his students to learning with Geminoid, Ishiguro reported that responses were all positive. Japanese communication etiquette is an inherent part of the country’s culture; by teaching his classes via a tele-operated android doppelgänger, Ishiguro confirms that students feel less intimidated about asking questions and extending their enquiry, which they might not otherwise do with the professor himself. Ishiguro also reports positive learning outcomes in European contexts (with 13 different nationalities) and is working alongside several European companies to bring these outcomes to a wider variety of contexts and nationalities.

The current goal for me is to get my Japanese to a proficient enough level to be able to reap the rewards of HRI myself.

Enhancing Emotional Facial Expressiveness on NAO: Pluggable Eyebrows

Apparently we say a lot with our eyes. I woke this morning to a short video, which you can watch here, of Nao, one of my favourite robots, who now has the option of pluggable eyebrows. Enhancing facial expressiveness is believed to improve emotional expression, so Nao is now able to express anger, sadness and delight more effectively. Nao was the first humanoid robot created by Aldebaran Robotics (now SoftBank Robotics) in 2006 and has been continuously evolving since its release onto the market. It stands at only 58 cm tall and was designed as an interactive companion robot. While the new pluggable eyebrow option may not seem a revelation to many, it is yet another step towards giving humanoid robots a more human-like guise by providing them with a means of expressing emotion.

The first robot

I recently found out that the first robot was not in fact invented by the Japanese, as I had presumed, but by Leonardo da Vinci in 1515! Here is a clip of a modern-day replica of the robot. I find it fascinating to think that robotics dates back this far; in fact, da Vinci sketched plans for a humanoid robot as early as 1495.

Image: Leonardo’s Robots, Mario Taddei, p. 189

Da Vinci’s mechanical lion was presented as the star gift in a pageant in honour of the new King of France in 1515. He also designed a mechanical knight, able to bend its legs, move its arms and hands, turn its head and open its mouth. It also had ‘talking’ capabilities, created using an internal automatic drum roll, and is often claimed to be the first ‘programmable’ computer.