MC3 – Embodied Symbol Emergence

Lecturers: Malte Schilling and Michael Spranger
Fields: Robotics / Autonomous systems / Neurobiology / Artificial Intelligence / Developmental Artificial Intelligence / Symbol Emergence

Content

Symbols are the bedrock of human cognition. They play a role in planning, but are also crucial to understanding and modeling language. Given their importance for human cognition, they are likely also vital for implementing similar abilities in software agents and robots.

The course will approach symbols from two integrated perspectives. The first is the embodied perspective: we look at how internal models emerge through interaction with the environment and at their role in sensorimotor behavior. The first two lectures of the course concentrate on the emergence of internal models and grounded symbols in simple animals and agents, showing how interaction with an environment requires internal models and how these models are structured. Robots are used to demonstrate how effective the discussed mechanisms are.

The second perspective is that symbols are also socially constructed. In particular, we will focus on language and how it is grounded both in embodiment and in social interaction. This is the topic of the third and fourth lectures. The third lecture investigates the emergence of grounded names and categories (and the terms for them) in social interactions between robots. The fourth lecture focuses on compositionality, that is, how embodied categories interact in larger phrases or sentences and in grammar.

Lecture 1: Embodied systems

Embodied systems: sophisticated behaviors do not necessarily require internal models. There are many examples of relatively simple animals (for example insects) that perform complex behaviors. The first lecture focuses on behavior-based robots that simply react to their environment without internal models. Crucially, such reactive control can lead to complex and adaptive behavior even though the agent does not rely on internal representations; instead, the system exploits its relation to the environment.
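
To illustrate the idea, the following minimal sketch (not part of the course material) shows a reactive controller in the spirit of a Braitenberg vehicle: two light sensors drive two wheel motors directly, and light-seeking behavior emerges without any internal model. All names and parameter values are illustrative assumptions.

    # Minimal reactive controller in the spirit of a Braitenberg vehicle:
    # sensor readings are mapped directly to motor commands, with no
    # internal model or memory of the environment.

    def reactive_step(left_light, right_light, base_speed=0.2, gain=0.8):
        """Crossed coupling: the brighter the light on one side, the faster
        the opposite wheel turns, which steers the agent toward the light."""
        left_motor = base_speed + gain * right_light
        right_motor = base_speed + gain * left_light
        return left_motor, right_motor

    # Example: light is stronger on the right, so the left wheel speeds up
    # and the agent turns right; the orientation behavior emerges purely
    # from the coupling between agent and environment.
    print(reactive_step(left_light=0.1, right_light=0.9))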

Lecture 2: Grounded internal models

Grounded internal models serve a function for the system itself first, but their flexibility allows them to be recruited for additional tasks; one example is the use of internal body models in perception. This lecture introduces internal models, how they co-evolve in the service of a specific behavior, and how flexible models can be recruited for higher-level tasks such as perception or cognition. The session combines case studies from neuroscience, psychology and behavioral science with modeling approaches to internal models in robotics. Sharing such internal models in a population of agents provides a step towards symbolic systems and communication.
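
To make the idea of a model serving a behavior first and being recruited for perception later more concrete, here is a minimal sketch under strongly simplifying assumptions (a one-joint arm with made-up dynamics and noise values, not a model discussed in the lecture): a forward model predicts the sensory consequence of a motor command, and the same prediction is then blended with a noisy sensory reading.

    import numpy as np

    # Toy forward model of a one-joint arm: given the current joint angle and
    # a motor command, predict the next angle. The same model is "recruited"
    # for perception by fusing its prediction with a noisy sensor reading.

    def forward_model(angle, command, dt=0.1):
        # assumed dynamics: the command is simply a joint velocity
        return angle + dt * command

    def perceive(predicted, sensed, sensor_weight=0.3):
        # crude fusion of prediction and measurement: the internal model
        # sharpens perception rather than replacing it
        return (1.0 - sensor_weight) * predicted + sensor_weight * sensed

    rng = np.random.default_rng(0)
    angle_true = angle_est = 0.0
    for step in range(50):
        command = 1.0                                    # constant motor command
        angle_true = forward_model(angle_true, command)  # the world moves
        prediction = forward_model(angle_est, command)   # efference copy
        sensed = angle_true + rng.normal(0.0, 0.2)       # noisy proprioception
        angle_est = perceive(prediction, sensed)

    print(f"true angle: {angle_true:.2f}, estimate: {angle_est:.2f}")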

Lecture 3: Symbol emergence in robot populations

The lecture will examine the emergence of grounded, shared lexical languages in populations of robots. Lexical languages consist of single-word (or in some cases multi-word) expressions. We show how such systems emerge in referential games and, in particular, how internal representations become shared across agents through communication. The lecture will cover (proper) naming and the categorization of objects, for instance by color, and will introduce important concepts such as symbol grounding, discussing them from the viewpoint of language emergence.
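
A minimal naming-game sketch can illustrate how a shared lexicon emerges from repeated referential interactions. The code below is an illustrative toy (agents simply adopt the speaker's word on failure), not one of the models presented in the lecture.

    import random

    # Toy naming game: agents play repeated speaker/hearer rounds about
    # shared objects and align their lexicons, so that a population-wide
    # name per object tends to emerge.

    OBJECTS = ["obj_a", "obj_b", "obj_c"]

    class Agent:
        def __init__(self):
            self.lexicon = {}                      # object -> preferred word

        def name(self, obj):
            if obj not in self.lexicon:            # invent a word if needed
                self.lexicon[obj] = "w%04d" % random.randrange(10000)
            return self.lexicon[obj]

    def play_round(speaker, hearer, obj):
        word = speaker.name(obj)
        if hearer.lexicon.get(obj) == word:
            return True                            # communicative success
        hearer.lexicon[obj] = word                 # alignment: adopt the word
        return False

    random.seed(1)
    population = [Agent() for _ in range(10)]
    for game in range(2000):
        speaker, hearer = random.sample(population, 2)
        play_round(speaker, hearer, random.choice(OBJECTS))

    # After many games the population has typically converged on one word
    # per object.
    print({obj: {a.lexicon.get(obj, "?") for a in population} for obj in OBJECTS})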

Lecture 4: Compositional Language

Human language is compositional: the meaning of a phrase depends on its constituents as well as on the grammatical relations between them. For instance, projective categories such as “front”, “back”, “left” and “right” can be used adjectivally or prepositionally, and the different syntactic usage signals a different conceptualization. This lecture will focus on compositional representations of language meaning, how they relate to syntax, and how such systems might emerge in populations of agents.
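
The contrast between adjectival and prepositional use can be made concrete with a small sketch. The scene, the simple "frontness" measure and all coordinates below are illustrative assumptions, not the formalism used in the lecture; the point is only that the same projective category is composed with different reference objects depending on the grammatical construction.

    import math

    # Toy compositional semantics for the projective category "front":
    # the category is a function of an object and a reference point, and
    # the grammatical construction decides which reference point is used.

    def frontness(obj, reference, orientation):
        """Degree to which obj lies in front of reference, given the
        reference's facing direction (a unit vector)."""
        dx, dy = obj[0] - reference[0], obj[1] - reference[1]
        norm = math.hypot(dx, dy) or 1.0
        return (dx * orientation[0] + dy * orientation[1]) / norm

    scene = {"block-1": (1.0, 0.0), "block-2": (-1.0, 0.5), "box": (0.0, 2.0)}
    speaker_pos, speaker_dir = (0.0, -2.0), (0.0, 1.0)   # speaker faces +y

    # "the front block": adjectival use, conceptualized relative to the speaker
    adjectival = max(["block-1", "block-2"],
                     key=lambda o: frontness(scene[o], speaker_pos, speaker_dir))

    # "the block in front of the box": prepositional use with an explicit
    # landmark; here the box is assumed to face the speaker (the -y direction)
    prepositional = max(["block-1", "block-2"],
                        key=lambda o: frontness(scene[o], scene["box"], (0.0, -1.0)))

    print(adjectival, prepositional)   # the two construals pick different blocks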

Objectives

The course will give an introduction to computational models of symbol emergence through sensorimotor behavior and social construction. These models can be run in simulation or on real robots. Participants will be introduced to the field of Embodied Cognition, with an overview of interdisciplinary results from neuroscience, psychology, computer science, linguistics and robotics.

Literature

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2016). Building Machines That Learn and Think Like People. Behavioral and Brain Sciences, 1–101. https://doi.org/10.1017/S0140525X16001837

Lectures 1–2

Dickinson, M. H., Farley, C. T., Full, R. J., Koehl, M. A. R., Kram, R., & Lehman, S. (2000). How Animals Move: An Integrative View. Science, 288(5463), 100–106. https://doi.org/10.1126/science.288.5463.100

Ijspeert, A. J. (2014). Biorobotics: Using robots to emulate and investigate agile locomotion. Science, 346(6206), 196–203. https://doi.org/10.1126/science.1254486

Gallese, V., & Lakoff, G. (2005). The Brain’s concepts: The role of the Sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3–4), 455–479. https://doi.org/10.1080/02643290442000310

Lectures 3–4

Steels, L. (2008). The symbol grounding problem has been solved. So what’s next? In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.

Steels, L. (2015). The Talking Heads Experiment: Origins of Words and Meanings (Computational Models of Language Evolution, Vol. 1). Berlin: Language Science Press.

Spranger, M. (2016). The Evolution of Grounded Spatial Language. Language Science Press.

Lecturers

Dr. Malte Schilling

Malte Schilling is a Responsible Investigator at the Center of Excellence for ‘Cognitive Interaction Technology’ in Bielefeld. His work concentrates on internal models, their grounding in behavior and their application in higher-level cognitive functions such as planning ahead or communication. Before that, he was a postdoc at the ICSI in Berkeley, where he did research on the connection between linguistic and sensorimotor representations. He received his PhD in Biology from Bielefeld University in 2010 for work on decentralized, biologically inspired minimal cognitive systems. He studied Computer Science at Bielefeld University and finished his Diploma in 2003 with a thesis on knowledge-based systems for virtual environments.

Dr. Michael Spranger

Michael Spranger received a PhD in Computer Science from the Vrije Universiteit in Brussels (Belgium) in 2011. During his PhD he was a researcher at Sony CSL Paris (France). He then worked in the R&D department of Sony Corporation in Tokyo (Japan) for almost two years and is currently a researcher at Sony Computer Science Laboratories Inc. (Tokyo, Japan). Michael is a roboticist by training with extensive experience in research on and construction of autonomous systems, including robot perception, world modeling and behavior control. After his undergraduate degree he fell in love with the study of language and has since worked on different language domains, from action language and posture verbs to time, tense, determination and spatial language. His work focuses on artificial language evolution, machine learning for NLP (and its applications), developmental language learning, computational cognitive semantics and construction grammar.

Affiliations: Bielefeld University and Sony