The abilities to control one’s own actions in a goal-directed way and to understand the goals and intentions behind the actions of others are important aspects of the agentive self. Both aspects develop during infancy and depend on the agentive experience acquired so far. It has been hypothesized that cognitive representations of one’s own actions are used both to plan one’s own actions and, in part, to understand the actions of others. However, when mapping observed motions onto such cognitive action representations, challenging problems of correspondence, perspective, and motor inference need to be solved. Although a critical role of the mirror neuron system (MNS) is assumed, the actual encodings and computational processes involved, as well as their ontogenetic development, remain elusive.
The planned project seeks to fill this explanatory gap by combining insights and further experimental evaluations from developmental psychology with machine-learning-oriented cognitive modeling. This interdisciplinary collaboration promises benefits in both directions: developmental psychology will gain a functional, computational model of the cognitive development of action understanding, while machine-learning and cognitive-systems research will profit from the identification of inductive biases that foster the emergence of action understanding. In eye-tracking and EEG studies with infants, driven by the modeling efforts, the project will assess in further detail which cues and cue combinations of agency (e.g., human visual appearance, self-propelledness, production of salient action effects, own action experience) are most relevant for infants’ ability to anticipate the goals of observed actions. The computational models will combine our current biological-motion model with our theory of event-predictive cognition. The current model predicts, for example, that perceptually highlighting the final goal will support anticipatory gaze during action observation. By modeling the concrete scenarios, we will also generate more specific behavioral predictions.
We expect to answer critical questions in developmental and cognitive science. For example, for which types of observed actions will eye-tracking- and EEG-derived signals of MNS activity be detectable? Do infants’ own action experiences, or their observations of others’ actions, influence subsequent action understanding at different ages? Can agency-cue augmentations facilitate the learning of computational models of action understanding? Overall, the project will shed further light on the development of the agentive self, the MNS, and the resulting social competencies.
Investigators: Prof. Dr. Birgit Elsner, Prof. Dr. Martin Butz (Eberhard Karls Universität Tübingen), Dr. Maurits Adam
Funded by: German Research Foundation, DFG (SPP 2134, Project 3)
We know that children are skilled at breaking information down into more manageable pieces. To do this, children need to be able to chunk units of information together, instead of dealing with each piece of information individually. This allows children to process information more quickly and efficiently. For example, rather than processing the syllables "BAY" and "BEE" as separate units, children learn that they belong together and form the word "baby". We are interested in how children learn which units belong together, and which units should be kept separate. We want to take the novel approach of examining whether children chunk other types of information in the same way as they chunk speech. To do this, we want to examine how children perceive sequences of actions, so that we can understand whether children chunk these sequences in the same way as they do sentences.
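One common way to formalize this kind of chunking is via transitional probabilities between adjacent units: transitions that occur reliably (like "BAY" → "BEE") are kept within a chunk, while unreliable transitions mark chunk boundaries. The following is a purely illustrative sketch of that idea, not the project’s actual model; the function name, threshold, and toy syllable stream are our own assumptions for demonstration:

```python
from collections import Counter

def chunk_by_transitional_probability(syllables, threshold=0.5):
    """Split a syllable stream into chunks wherever the transitional
    probability P(next | current) drops below the threshold."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])

    chunks, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / first_counts[a]
        if tp < threshold:          # unreliable transition -> chunk boundary
            chunks.append(current)
            current = []
        current.append(b)
    chunks.append(current)
    return chunks

# "BAY" is always followed by "BEE", so the pair is never split apart;
# "BEE" is followed by a different syllable each time, so boundaries
# fall after it (hypothetical toy stream).
stream = ["BAY", "BEE", "GO", "BAY", "BEE", "TA", "BAY", "BEE", "KU"]
print(chunk_by_transitional_probability(stream))
```

On this toy stream, every chunk containing "BAY" also contains the following "BEE", mirroring how a learner might extract "baby" as a unit from continuous speech.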
This project focuses on the influence of social-pragmatic cues on infant action processing and production. One part of the project deals with how emotions can impact infants’ own behavior regulation. In imitation studies, infants of different age groups watch a model who consecutively performs two actions on a novel object, each accompanied by a different emotional display (positive vs. negative). We are interested in whether infants regulate their own imitative behavior as a function of these emotional displays.
Another part of this project focuses on when and how infants come to understand that verbal cues can refer not only to objects or persons but also to mental states, such as action intentions. In imitation studies, infants of different age groups are faced with an adult who first announces a certain object-directed action and then performs an action that either matches or mismatches the prior announcement. We are especially interested in how infants deal with the incongruent situation, that is, when there is a conflict between the speech cue and the action cue. Which cue do they favour when they are given the opportunity to act on the object themselves? To complement this research, we also run EEG and eye-tracking studies to identify indicators of infants’ conflict detection.
Communication with infants is often multimodal: in the case of action learning, we often accompany the actions we show them with fitting verbal descriptions. This may help infants to direct their attention to relevant aspects of the demonstration and to identify important new information.
While social-emotional cues (eye contact, smiling, onomatopoeia) have been shown to influence imitation behavior, evidence for an influence of the specific semantic content of verbal cues on action learning in infants is still scarce.
We are specifically interested in how infants learn that one object can be used for one action but not for another, and how verbal cues that the model uses during the action demonstration influence these object–action associations. We further want to investigate how this influence changes during early infancy. To answer these research questions, we are conducting an imitation study.
Furthermore, we want to investigate the neural correlates of object–action association learning. To this end, we plan to conduct a study using electroencephalography (EEG).
Investigators: Prof. Dr. Birgit Elsner, Léonie Trouillet
Funded by: Deutsche Forschungsgemeinschaft, DFG (FOR 2253, TP 3)
Humans use tools in many ways – not only in the workshop or the garden, but also for eating, speaking on the phone, or writing. Not only do infants begin using tools very early in development, they are also remarkably quick at learning how to handle them. In a series of studies, we investigate how two-year-old infants use tools: Which learning strategies do they apply, which information do they rely on, and what impact do adults have as role models? Furthermore, we are interested in whether infants are able to transfer acquired knowledge about tool use to other situations and, if so, how. This kind of transfer is likely modulated by the original learning context as well as by the characteristics of the tools. Since age and prior experience with tools might also affect the transfer, pre-school children are included in our research as well.