MC2 – Symbolic Reasoning within Connectionist Systems

Lecturer: Klaus Greff
Fields: Artificial Intelligence / Neural Networks; draws upon Theoretical Neuroscience and Cognitive Psychology


Our brains effortlessly organize our perception into objects, which they use to compose flexible mental models of the world. Objects are so fundamental to our thinking, and our brains so good at forming them from raw perception, that it is hard to notice anything special happening at all. Yet perceptual grouping is far from trivial and has puzzled neuroscientists, psychologists, and AI researchers alike.

Current neural networks show impressive capabilities in learning perceptual tasks but struggle with tasks that require a symbolic understanding. This ability to form high-level symbolic representations from raw data is, I believe, going to be a key ingredient of general AI.

During this course, I will try to share my fascination with this important but often neglected topic. 

Within the context of neural networks, we will discuss the key challenges and how they may be addressed. Our main focus will be the so-called Binding Problem and how it prevents current neural networks from effectively dealing with multiple objects in a symbolic fashion.

After a general overview in the first session, the remaining lectures will explore three different aspects of the problem in depth:

Session 2 (Representation) focuses on the challenges regarding distributed representations of multiple objects in artificial neural networks and the brain.

Session 3 (Segregation) is about splitting raw perception into objects, and we will discuss what they even are in the first place.

Session 4 (Composition) will bring things back together and show how different objects can be related and composed into complex structures.


  • Develop an appreciation for the subtleties of object perception.
  • Understand the importance of symbol-like representations in neural networks and how they relate to generalization.
  • Become familiar with the binding problem and its three aspects: representation, segregation, and composition.
  • Get an overview of the challenges and available approaches for each subproblem.


The course is a non-technical, high-level overview, so only basic familiarity with neural networks is assumed. Optional background material:


Klaus Greff studied Computer Science at the University of Kaiserslautern and is currently a PhD candidate under the supervision of Prof. Jürgen Schmidhuber. His main research interest revolves around the unsupervised learning of symbol-like representations in neural networks (the subject of this course).

Previously, Klaus has worked on Recurrent Neural Networks and the training of very deep neural networks, and he is also the maintainer of the popular experiment-management framework Sacred.

Affiliation: IDSIA