I have recently become extremely interested in research carried out by Ishiguro on human responses to android robots. Using the Total Turing Test, it was found that subjects were unable to identify android robots when exposed to them for only one or two seconds and given the task of remembering the colour of the clothing the android was wearing. For me, this demonstrates that the brain believes what it sees but is also influenced by what it wants to see.

With respect to language learning, studies have demonstrated that the lack of emotion in androids supports learning in individuals with autism, because the androids do not respond emotionally to the learners they interact with. This highlights an important parallel with inhibition in language learning and the subconscious facial gestures teachers often make in response to learner performance. One raised eyebrow is enough for learners to become aware that something they said was incorrect, and they will react directly by losing their train of thought, pausing to self-correct, or stopping altogether. Remove the teacher's facial gesture from the equation and the learner will probably continue to speak. Perhaps androids can offer a different solution to this problem.