Mathias Schmerling, BCCN Berlin

Visuomotor Coordination on a Humanoid Robot Using Slowness Learning

A fundamental skill for both humans and humanoid robots to acquire is the ability to exercise coordinated control over their bodies and the environment, as it is believed to be a prerequisite for the development of more complex cognitive and social skills. This type of sensorimotor coordination, exemplified by the learning of reaching movements, is formalized in the literature by the concept of internal models. Internal models are assumed to serve as an interface between the agent's high-dimensional motor system and its sensory state by predicting the sensory effects of actions and by inferring the actions necessary to achieve desired effects. However, abstracting a meaningful sensory state, e.g. the hand position of the agent, from the high-dimensional raw sensory information usually available to such an agent is challenging and is itself an instance of representation learning.
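As a rough illustration of the forward-model idea only (not the specific method of the thesis), an internal forward model can be sketched as a mapping that predicts the next sensory state from the current state and motor command. The toy data, dimensions, and names below are hypothetical:

```python
import numpy as np

# Hypothetical toy data: sensory states s_t and motor commands a_t
rng = np.random.default_rng(0)
T, Ds, Da = 500, 3, 2                               # time steps, state dim, action dim
A_true = rng.normal(size=(Ds + Da, Ds))             # unknown "body + environment" dynamics
sa = rng.normal(size=(T, Ds + Da))                  # stacked [state, action] pairs
s_next = sa @ A_true + 0.01 * rng.normal(size=(T, Ds))

# Forward model: least-squares fit predicting s_{t+1} from (s_t, a_t)
A_hat, *_ = np.linalg.lstsq(sa, s_next, rcond=None)

def predict_effect(state, action):
    """Predict the sensory consequence of executing `action` in `state`."""
    return np.concatenate([state, action]) @ A_hat
```

An inverse model would run the mapping the other way, inferring the action needed to reach a desired sensory state; both directions presuppose a compact sensory representation such as the one addressed next.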

In contrast to previous publications, in which representations were hand-crafted by the experimenter, in the present work an encoding of the raw visual input was learned autonomously and in an unsupervised manner using slowness learning. It was shown that 1) Incremental Slow Feature Analysis can learn representations of the visual input that encode highly informative aspects of the environment and that 2) these representations provide a basis for the successful learning of internal models on a humanoid robot. Ultimately, the humanoid robot was thus shown to acquire coordinated reaching skills autonomously from raw pixel information alone.
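To illustrate the slowness principle behind Slow Feature Analysis (the thesis uses an incremental variant on raw visual input; the batch linear version and the toy signal below are a simplified, assumed sketch), linear SFA can be implemented by whitening the input time series and then keeping the projections whose temporal derivative has minimal variance:

```python
import numpy as np

def linear_sfa(x, n_features=2):
    """Minimal batch linear SFA: find projections of the time series x
    (shape T x D) whose outputs vary as slowly as possible over time,
    subject to unit variance and decorrelation. Assumes full-rank input."""
    x = x - x.mean(axis=0)                      # center
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    whitener = eigvec / np.sqrt(eigval)         # whitening matrix
    z = x @ whitener                            # whitened signal, unit covariance
    dz = np.diff(z, axis=0)                     # temporal derivative (finite differences)
    deigval, deigvec = np.linalg.eigh(np.cov(dz, rowvar=False))
    w = whitener @ deigvec[:, :n_features]      # smallest eigenvalues = slowest directions
    return x @ w, w

# Toy usage: recover a slowly varying signal mixed into faster ones
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(17 * t)
x = np.stack([slow + 0.5 * fast, fast - 0.3 * slow], axis=1)
y, _ = linear_sfa(x, n_features=1)              # y approximates the slow sine (up to sign/scale)
```

Applied to a stream of camera images, the same principle favours features that change slowly over time, which tend to track stable properties of the scene rather than pixel-level fluctuations.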

Additional Information

Master's thesis defence in the Master Computational Neuroscience programme

Organized by

Verena Hafner / Robert Martin
