Project Coordinators

Prof. Dr. Thomas Schack
Prof. Dr. Helge Ritter

Responsible Investigators

Dr. Jonathan Maycock

Single and Dyadic Visuo-Haptic Task

IP 24 2014/01/01 - 2016/07/31 203.000 Euro

How do humans acquire a new manual skill that requires the organization of a sequence of rapid sensorimotor actions, each of which is characterized by a delicate coordination of tactile, kinesthetic and visual sensing? In this project we introduce a physical, visuo-haptic and bi-manual maze task to investigate this question in a realistic setting involving a sequential coordination of vision and haptic control. The task consists of moving and tilting a two-dimensional maze such that a rolling sphere passes a configuration of obstacles and reaches a goal position. Building on prior work on a highly advanced measurement set-up that integrates kinematic motion capture, finger contact force measurement and gaze tracking, we will record, analyse and model the structure and progression of learning across the different stages of acquisition of this exemplary and demanding type of bi-manual task.
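The basic physics of the task can be illustrated with a minimal sketch. The model below is an illustrative assumption, not the project's actual measurement or analysis model: it treats the sphere as a sliding point mass on the tilted board, so its acceleration along each tilt axis is g·sin(θ); a genuinely rolling solid sphere would accelerate more slowly (by a factor of 5/7), and friction and obstacle collisions are omitted.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def simulate_ball(tilt_x, tilt_y, duration, dt=0.001,
                  pos=(0.0, 0.0), vel=(0.0, 0.0)):
    """Euler-integrate a point mass on a board tilted by (tilt_x, tilt_y)
    radians about its two axes. Friction, rolling inertia and obstacle
    collisions are deliberately omitted in this sketch."""
    x, y = pos
    vx, vy = vel
    ax = G * math.sin(tilt_x)  # constant acceleration induced by each tilt
    ay = G * math.sin(tilt_y)
    for _ in range(int(duration / dt)):
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return (x, y), (vx, vy)

# Tilting the board by 2 degrees about one axis for half a second
# already displaces the ball by a few centimetres.
final_pos, final_vel = simulate_ball(math.radians(2.0), 0.0, 0.5)
```

Even this crude model shows why the task demands rapid, anticipatory control: small, brief tilts produce displacements that accumulate quadratically in time.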

We will study how humans decompose the entire task into an initial sequence of simpler, distinguishable "moves", elucidate the characteristics and determinants of these "moves", and relate them to Basic Action Concepts (BACs) of the SDA-M (structural dimensional analysis of mental representation). We expect that useful characterizations can be obtained from an optimization perspective, analyzing how observed strategies afford simplifications at the sensorimotor level (for instance, temporarily "stabilizing" the sphere in a corner, or choosing between low- vs. high-compliance force feedback laws) and at the planning/sequencing level (for instance, choosing geometrically simple path segments). As a next step, we can then characterize how BACs and their transitions change during learning. Subjects will be provided with replicas of the maze to train for repeated pre-specified periods at home; training duration will be monitored by a built-in timer that is activated while the subjects hold the handles of the maze.
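One simple way to obtain candidate "moves" from recorded kinematics is sketched below, under the assumption that move boundaries coincide with low-speed phases of the sphere or hands. This is only an illustrative heuristic; the actual segmentation criteria, and their relation to BACs, are precisely the research questions of the project. The function name, threshold and minimum-length parameters are hypothetical.

```python
def segment_moves(speeds, threshold=0.05, min_len=3):
    """Split a 1-D speed profile (one sample per time step) into
    candidate 'moves': maximal runs of consecutive samples above the
    speed threshold. Runs shorter than min_len samples are discarded
    as noise. Returns half-open index intervals [start, end)."""
    moves = []
    start = None
    for i, s in enumerate(speeds):
        if s > threshold and start is None:
            start = i                      # move onset
        elif s <= threshold and start is not None:
            if i - start >= min_len:
                moves.append((start, i))   # move offset
            start = None
    if start is not None and len(speeds) - start >= min_len:
        moves.append((start, len(speeds)))  # profile ends mid-move
    return moves

# Two bursts of movement separated by a near-stationary phase:
profile = [0.0, 0.2, 0.3, 0.25, 0.01, 0.0, 0.1, 0.3, 0.2, 0.0]
print(segment_moves(profile))  # → [(1, 4), (6, 9)]
```

Such a segmentation yields discrete units whose transition statistics can then be compared, for example, with the BAC structures elicited via SDA-M.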

By connecting ideas and methods from movement science with algorithmic concepts from robotics and machine learning, we will develop models that contribute to an in-depth understanding of computational and representational aspects of the underlying learning strategies and provide guidance for the realization of comparable learning capabilities in robots. As a further, rather novel feature, we will include experiments on dyadic skill learning, involving two cooperating subjects. The task's structure, requiring a sequential combination of skilled "moves" towards a goal embedded in a natural bi-manual coordination task involving touch and vision, makes it an ideal and rich paradigm for studying task learning at the level where subsymbolic sensorimotor control and symbolic task sequencing meet. Moreover, by modifying maze and obstacle configurations we can tailor the task to pinpoint specific research questions at the computational, control and cognitive levels.
