CROSS-MODAL BINDINGS
In our day-to-day activities, on hearing one’s name the immediate reaction is to turn the head and look in that direction. This response binds three modalities: (1) audition, hearing the name; (2) proprioceptive feedback from the joints as the head turns; and (3) vision, to see the source uttering the name, with attention focused on the event. In the same way, sound, vision and proprioception can help a robot develop internal states that identify an action, an object or its own self by sound. Cross-modality is regarded as a superior functionality and a mark of conscious behavior, and it has served both as a model and as a tool for various research groups.
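The binding described above can be sketched in its simplest form: events from different modalities are associated when they co-occur within a short time window. This is only a hypothetical illustration; the event labels, timestamps and window size are all invented for the example.

```python
# Hypothetical sketch: binding events from different modalities when they
# co-occur within a short time window, as in turning toward a heard name.
# Event times are in seconds; labels and the window size are illustrative.

def bind_events(audio_events, visual_events, window=0.3):
    """Pair each audio event with any visual event within `window` seconds."""
    bindings = []
    for a_time, a_label in audio_events:
        for v_time, v_label in visual_events:
            if abs(a_time - v_time) <= window:
                bindings.append((a_label, v_label, a_time))
    return bindings

audio = [(1.00, "name_heard"), (4.50, "clap")]
visual = [(1.10, "face_detected"), (8.00, "door_opens")]
print(bind_events(audio, visual))  # [('name_heard', 'face_detected', 1.0)]
```

A real robot would of course derive such events from perception pipelines rather than hand-written lists, but the principle of temporal co-occurrence as the binding cue is the same.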
Experiments on Cog with a tambourine, associating the three modes, enabled Cog to ‘see the sound of a tambourine’ and to identify the tambourine by its rhythm. Extending such ideas allows the robot to identify its own bodily rhythm and states of self-awareness. Here, the robot’s response can be correlated to the cognitive capabilities of a 10-12-month-old infant.
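Identification by rhythm can be illustrated with a toy example: extract the inter-onset intervals of a heard sound and match them against stored rhythm signatures. This is a hypothetical sketch, not Cog’s actual mechanism; the signatures, onset times and tolerance are made up.

```python
# Hypothetical sketch: identifying an object by its sound rhythm, in the
# spirit of Cog associating the tambourine with its beat pattern.
# The rhythm "signatures" (inter-onset intervals, in seconds) are invented.

def inter_onset_intervals(onsets):
    """Convert a list of onset times into the gaps between successive beats."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def identify(onsets, signatures, tol=0.05):
    """Return the name of the first signature matching the heard rhythm."""
    ioi = inter_onset_intervals(onsets)
    for name, sig in signatures.items():
        if len(sig) == len(ioi) and all(abs(x - y) <= tol for x, y in zip(ioi, sig)):
            return name
    return None

signatures = {"tambourine": [0.25, 0.25, 0.5], "drum": [0.5, 0.5, 0.5]}
heard = [0.0, 0.26, 0.50, 1.01]  # onset times of the heard sound
print(identify(heard, signatures))  # tambourine
```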
At Yale, Nico, a humanoid robot, is programmed to play a drum in a performance with human drummers at the direction of a human conductor. This is done by integrating three modalities: visual, auditory and proprioceptive. Nico’s vision detects and monitors the arm motion of the human drummer, and it also detects the drum beats and the ictus movement from both the human and the robot. Nico can follow a human drummer or take instructions from a human conductor and deliver precise synchronization with the human performers. It can adapt to brief changes in rhythm and tempo like a seasoned drummer; such adaptation suggests internal states of self-awareness. Similar experiments at Hertfordshire on KASPAR, a humanoid robot with a child-like appearance, have confirmed the social functions of such cross-modal bindings.
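The tempo-following behavior described for Nico can be sketched as a simple proportional correction: the robot nudges its own strike period toward each inter-beat interval it observes. This is a minimal illustration under assumed numbers, not Nico’s actual controller; the gain and tempos are invented.

```python
# Hypothetical sketch: a drummer that adapts its strike period toward the
# tempo it observes, loosely analogous to Nico following a human drummer.
# The gain and the tempo values are illustrative; no real robot API is used.

def adapt_tempo(period, observed_intervals, gain=0.5):
    """Nudge the strike period toward each observed inter-beat interval."""
    for interval in observed_intervals:
        period += gain * (interval - period)  # proportional correction
    return period

# Start at 120 bpm (0.5 s between strikes) and follow a drummer
# who has slowed to 100 bpm (0.6 s between strikes).
period = adapt_tempo(0.5, [0.6, 0.6, 0.6, 0.6])
print(round(period, 3))  # 0.594 -- converging toward 0.6
```

A real system would also correct phase, not just period, so that strikes land on the beat rather than merely at the right rate.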