
Nao (SoftBank Robotics) – Robots that write

After spending time with Nao (SoftBank Robotics) in February, I am not in the slightest bit surprised that one of his many skills is the ability to write any word he is asked, spelling the word aloud as he writes. Through speech recognition programming, the robot is able to perform many tasks, but writing is a profound tool that can help those with literacy skill deficiencies, and of course those wanting to learn a language. It is another interesting feature that will support my current research.

The value of learning human values from robots

Lately, I have been questioning the human-robot relationship, the natural reactions we as humans have towards humanoid robots, and the value of learning human values from robots.

While I am perfectly aware that these are not living beings, my recent interactions with Erica and Nao have made me realise that, whether the interaction is with an android or a humanoid, my emotional reactions towards them are the same.

When touching Erica’s hand, I was careful to place my hand on hers gently, refraining from any sudden movements that might startle her, just as I would with a human whose hand I was holding for the first time. Interacting with Nao, I was careful not to take his hand too firmly in mine as I walked him along the worktop, for fear of hurting or damaging him.

What is this inherent ‘care’ that the human brain automatically takes on when interacting with humanoid robots? Research carried out by Hiroshi Ishiguro demonstrates that human interaction patterns with androids parallel those with humans, and the evidence suggests that it is the ‘humanness’ of the robot that provokes this subconscious reaction.

 

Androids – Erica – Ishiguro – Geminoid

I have just returned from my annual trip to Japan, which has proved to be extremely insightful. I had the great pleasure of meeting Prof Ishiguro in Osaka and the opportunity to see some of his current research in action.

Ishiguro: Through his research, it is possible to gain a sense of Ishiguro’s motivation for creating android robots. He argues that society itself is responsible for shaping humans; therefore, by using a combination of computers, motors and sensors, he is able to create androids capable of mimicking humans. The result is synergistic androids that, with exposure to language and human-robot interaction (HRI), are able to develop a personality, making them as human as any other being that depends on exposure to language, society, others and interaction to shape who they are and who they become. In addition, robotic research enables us to gain further insight into the activities of the human brain, and therefore a greater understanding of cognitive neuroscience: in this way, robots reflect the activity of the human mind, which permits this understanding.

Robots in Japan: Japanese citizens openly accept robots and autonomous systems into their society, so they do not feel the need to distinguish between them and humans. Robots are considered beings just like any other, and take an active part in society: in theatre productions, and as caregivers, companions and shop assistants.

Erica: Erica, one of Ishiguro’s projects, designed as a research platform for an autonomous conversational android, uses voice recognition software to interact with humans. Unfortunately, my Japanese is not proficient enough for me to have interacted with her successfully myself, but here is a short clip of her talking with one of Ishiguro’s research students.

Intelligent microphones: Ishiguro is also working on intelligent microphones that would permit scheduled turn-taking among robots, thereby relieving the pressure on humans to partake in the interaction. From a pedagogical perspective this is a very interesting development for language training and for the treatment and education of humans with communication disorders such as autism.
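Ishiguro’s actual microphone system is not described in detail here, but as a rough illustration of what ‘scheduled turn-taking’ could look like in code, the sketch below rotates the floor among robot speakers and only hands it to the human when they explicitly ask for it. The class and names are hypothetical, not part of any real system.

```python
# A rough, illustrative sketch of scheduled turn-taking between conversational
# agents. This is NOT Ishiguro's system; it simply shows the idea of a
# scheduler deciding who speaks next, so a human participant is never
# forced to fill the silence.

from itertools import cycle


class TurnScheduler:
    def __init__(self, robots, human="human"):
        self.robot_order = cycle(robots)  # round-robin over the robot speakers
        self.human = human

    def next_speaker(self, human_wants_turn=False):
        # The human is offered the floor only when they signal for it;
        # otherwise the robots keep the conversation going among themselves.
        if human_wants_turn:
            return self.human
        return next(self.robot_order)


if __name__ == "__main__":
    scheduler = TurnScheduler(["robot_A", "robot_B"])
    # Simulate six turns; the human asks to speak on the fourth.
    for turn, wants_turn in enumerate([False, False, False, True, False, False]):
        print(f"turn {turn}: {scheduler.next_speaker(human_wants_turn=wants_turn)}")
```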

Geminoid: When asked about the reaction of his students to learning with Geminoid, the responses were all positive. Japanese communication etiquette is an inherent part of the country’s culture, and Ishiguro confirms that, when he teaches his classes via a tele-operated android doppelgänger, students feel less intimidated about asking questions and extending their enquiry, which they might not otherwise do with the professor himself. Ishiguro also reports positive learning outcomes in European contexts (with 13 different nationalities) and is working alongside several European companies to bring these outcomes to a wider variety of contexts and nationalities.

My current goal is to get my Japanese to a proficient enough level to be able to reap the rewards of HRI myself.

The first robot

I recently found out that the first robot was not in fact invented by the Japanese, as I had presumed, but by Leonardo da Vinci in 1515! Here is a clip of a modern-day replica of the robot. I find it fascinating to think that robotics dates back this far; in fact, da Vinci sketched plans for a humanoid robot as early as 1495.

Image: Leonardo’s Robots, a book by Mario Taddei (page 189).

Da Vinci’s mechanical lion was presented as the star gift in a pageant in honour of the new king of France in 1515. He also designed a mechanical knight, able to bend its legs, move its arms and hands, turn its head and open its mouth. It also had ‘talking’ capabilities, created using an internal automatic drum roll, and it is often claimed to be the first ‘programmable’ computer.

Speech synthesis, voice recognition and humanoid robots

Speech synthesis, or the artificial production of human speech, had been around long before the Daleks on Doctor Who. Apparently, the first speech-generating device was prototyped in the UK in 1960, in the shape of a sip-and-puff typewriter controller, the POSSUM. Wolfgang von Kempelen preceded all of this with a speaking machine built from leather and wood, which had great significance in the early study of phonetics. Today, text-to-speech computers and synthesisers are widely used by those with speech impediments to facilitate communication.
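To give a sense of how accessible text-to-speech has become, here is a minimal sketch using the off-the-shelf pyttsx3 Python library (just one of many options; the spoken phrase is only a placeholder):

```python
# Minimal text-to-speech sketch using the off-the-shelf pyttsx3 library
# (pip install pyttsx3). The phrase below is just a placeholder.
import pyttsx3

engine = pyttsx3.init()          # picks up the platform's default speech engine
engine.setProperty("rate", 150)  # speaking rate in words per minute
engine.say("Hello, I can read any sentence you give me.")
engine.runAndWait()              # block until the utterance has been spoken
```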

Speech-to-text systems became more prominent thanks to the IBM Tangora, a voice-activated typewriter which held a remarkable 20,000-word vocabulary by the mid-1980s. Nowadays speech-to-text has advanced phenomenally, with the Dragon Dictation iOS software being a highly favoured choice. Our world is increasingly dominated by voice automation, from customer service menus on the phone to personal assistants like Siri. Voice and speech recognition have also been used for identification purposes by banks since 2014.
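For the speech-to-text side, a minimal sketch along the same lines, assuming the open-source SpeechRecognition Python library and its hook into Google’s free web recogniser, might look something like this:

```python
# Minimal speech-to-text sketch using the SpeechRecognition library
# (pip install SpeechRecognition pyaudio). It listens on the default
# microphone and sends the audio to Google's free web recogniser.
import speech_recognition as sr

recogniser = sr.Recognizer()
with sr.Microphone() as source:
    recogniser.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Say something...")
    audio = recogniser.listen(source)

try:
    print("You said:", recogniser.recognize_google(audio, language="en-GB"))
except sr.UnknownValueError:
    print("Sorry, I could not understand that.")
except sr.RequestError as err:
    print("Recognition service error:", err)
```

Which corpus and which accents such a recogniser handles well depends entirely on the service behind it, which is exactly the question raised below.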

I’m curious how these systems work, how they are programmed, what corpus is used and which accents are taken into consideration. Why? Because robots fascinate me, and I wonder if it will be possible to “humanize” digital voices to such an extent that humanoid robots will appear more human than ever because of their voice production and recognition capabilities. It seems a far cry from the days of Speak & Spell, the kids’ speech synthesiser of the 80s, but it is looking increasingly probable as advances in AI develop.

Developments have gone as far as Hiroshi Ishiguro’s Geminoid HI-1 android prototype humanoid robot. Ishiguro is a roboticist at Osaka University, Japan, who created a Geminoid robot in 2010 that is a life-size replica of himself. He used silicone rubber, pneumatic actuators, powerful electronics, and hair from his own scalp.

Geminoid is basically a doppelgänger droid controlled by a motion-capture interface. It can imitate Ishiguro’s body and facial movements, and it can reproduce his voice in sync with his motion and posture. Ishiguro hopes to develop the robot’s human-like presence to such a degree that he could use it to teach classes remotely, lecturing from home while the Geminoid interacts with his classes at Osaka University.

You can see a demonstration of Geminoid here.