Lecturer: Afsaneh Fazly Fields: Machine Learning, Cognitive Modelling, Language Acquisition
Content
In session 1, we cover the basics of several mapping (association) problems, including theoretically important challenges such as the acquisition of word meanings in young children, as well as applied settings such as learning multimodal or multilingual representations.
Session 2 focuses on the early approaches applied to a mapping problem, including symbolic and probabilistic methods.
Session 3 covers more recent techniques (linear transformations and deep learning) in the context of several mapping problems, such as learning multimodal and multilingual mappings.
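The linear-transformation idea mentioned for Session 3 can be illustrated in a few lines. The sketch below uses invented toy embeddings, not real data: it learns a matrix W that maps source-language word vectors onto their target-language counterparts by least squares, in the spirit of the cross-lingual embedding methods surveyed in the literature below.

```python
import numpy as np

# Toy aligned embeddings (hypothetical): rows of X and Y are translation
# pairs in 2-dimensional source and target spaces, respectively.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))                    # source-language vectors
W_true = np.array([[0.0, 1.0], [-1.0, 0.0]])   # unknown "true" rotation
Y = X @ W_true                                 # target-language vectors

# Least-squares estimate of W minimizing ||X W - Y||_F.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# A new source vector is mapped close to its target-space counterpart.
y_pred = np.array([1.0, 2.0]) @ W
```

In practice the mapping is trained on a seed dictionary of translation pairs and then used to translate the remaining vocabulary by nearest-neighbour search in the target space.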
Objectives
The objective is to cover three different approaches applied to the same problem of learning mappings across modalities (e.g., learning the meanings of words, learning mappings between audio/words and image/video segments, learning multilingual representations, etc.).
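As a concrete, deliberately simplified illustration of cross-situational learning, the sketch below accumulates word-referent co-occurrence counts over ambiguous scenes. It is a toy example, not the model of any particular paper in the reading list.

```python
from collections import defaultdict

# Each scene pairs an utterance (a set of words) with a set of candidate
# referents; the learner never observes which word maps to which referent.
scenes = [
    ({"ball", "red"}, {"BALL", "RED"}),
    ({"ball", "blue"}, {"BALL", "BLUE"}),
    ({"cup", "red"}, {"CUP", "RED"}),
]

counts = defaultdict(lambda: defaultdict(float))
for words, referents in scenes:
    for w in words:
        for r in referents:
            counts[w][r] += 1.0          # accumulate co-occurrence evidence

def best_meaning(word):
    """Referent that co-occurred most often with `word` across scenes."""
    return max(counts[word], key=counts[word].get)
```

Although every single scene is ambiguous, aggregating across scenes disambiguates: "ball" ends up paired with BALL and "red" with RED.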
Literature
J.M. Siskind (1995). Grounding Language in Perception. Artificial Intelligence Review, 8:371-391, 1995. [LINK]
J.M. Siskind (1996). A Computational Study of Cross-Situational Techniques for Learning Word-to-Meaning Mappings. Cognition, 61(1-2):39-91, October/November 1996. Also appeared in Computational Approaches to Language Acquisition, M.R. Brent, ed., Elsevier, pp. 39-91, 1996. [LINK]
Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakers’ referential intentions to model early cross-situational word learning. Psychological Science, 20, 579-585. [LINK]
Fazly, A., Alishahi, A., & Stevenson, S. (2010). A probabilistic computational model of cross-situational word learning. Cognitive Science: A Multidisciplinary Journal, 34(6): 1017-1063. [LINK]
Tadas Baltrusaitis, Chaitanya Ahuja, and Louis-Philippe Morency (2017). Multimodal Machine Learning: A Survey and Taxonomy. [LINK]
Zhang, Y., Chen, C.H., & Yu, C. (2019). Mechanisms of Cross-situational Learning: Behavioral and Computational Evidence. Advances in child development and behavior. [LINK]
Sebastian Ruder, Ivan Vulić, Anders Søgaard (2019). A Survey of Cross-lingual Word Embedding Models. Journal of Artificial Intelligence Research 65: 569-631. [LINK]
Lecturer
Afsaneh Fazly is a Research Director at the Samsung Toronto AI Centre and an Adjunct Professor at the Computer Science Department of the University of Toronto in Canada. Afsaneh has extensive experience in both academia and industry, publishing award-winning papers and building strong teams that solve real-world problems. Afsaneh's research draws on many subfields of AI, including Computational Linguistics, Cognitive Science, Computational Vision, and Machine Learning. Afsaneh strongly believes that solving many of today's real-world problems requires an interdisciplinary approach that can bridge the gap between machine intelligence and human cognition.
Before joining Samsung Research, Afsaneh worked at several Canadian companies as Research Director, where she helped build and lead teams of outstanding scientists and engineers solving a diverse set of AI problems. Prior to that, Afsaneh was a Research Scientist and Course Instructor at the University of Toronto, from which she also received her PhD. Afsaneh lives in Toronto with her husband and two young children. Her main hobbies these days are reading and spending time with her family.
Lecturer: Malte Schilling and Michael Spranger Fields: Robotics / Autonomous systems / Neurobiology / Artificial Intelligence / Developmental Artificial Intelligence / Symbol Emergence
Content
Symbols are the bedrock of human cognition. They play a role in planning, but are also crucial to understanding and modeling language. Since they are so important for human cognition, they are likely also vital for implementing similar abilities in software agents and robots.
The course will focus on symbols from two integrated perspectives. On the one hand, we look at the emergence of internal models through interaction with the environment and their role in sensorimotor behavior. This is the embodied perspective. The first two lectures of the course concentrate on the emergence of internal models and grounded symbols in simple animals and agents, and show how interaction with an environment requires internal models and how these are structured. Here we use robots to show how effective the discussed mechanisms are.
The second perspective is that symbols can also be socially constructed. In particular, we will focus on language and how it is grounded in embodiment but also in social interaction. This will be the topic of the third and fourth lectures. We first investigate the emergence of grounded names and categories (and their terms) in social interactions between robots. The final lecture will focus on compositionality, that is, the interaction of embodied categories in larger phrases or sentences and grammar.
Lecture 1: Embodied systems
Embodied systems: sophisticated behaviors do not necessarily require internal models. There are many examples of relatively simple animals (for example, insects) that are able to perform complex behaviors. In the first lecture we focus on behavior-based robots that simply react to their environment without internal models. Crucially, these reactive mechanisms can lead to complex and adaptive behavior even though the agent does not rely on internal representations. Instead, the system exploits its relation to the environment.
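A classic way to make this point concrete is a Braitenberg-style vehicle. The sketch below (illustrative only) computes motor commands directly from sensor readings, with no internal model or memory of the environment.

```python
# Crossed excitatory wiring: each light sensor drives the opposite wheel,
# so the vehicle turns toward a light source -- purely reactive control.
def reactive_step(left_sensor, right_sensor):
    left_motor = 1.0 * right_sensor    # right sensor drives left wheel
    right_motor = 1.0 * left_sensor    # left sensor drives right wheel
    return left_motor, right_motor

# Light stronger on the right: the left wheel spins faster and the robot
# turns right, toward the light.
lm, rm = reactive_step(0.2, 0.8)
```

Seemingly goal-directed behavior (light seeking) emerges from the coupling between a two-line controller and the environment.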
Lecture 2: Grounded internal models
Grounded internal models first serve a function for the system itself, but the flexibility of these models allows them to be recruited for additional tasks. An example is the use of internal body models in perception. The second part of the course introduces internal models, how they co-evolve in the service of a specific behavior, and how flexible models can be recruited for higher-level tasks such as perception or cognition. The session will consist of case studies from neuroscience, psychology, and behavioral science, as well as modeling approaches to internal models in robotics. Sharing such internal models in a population of agents provides a step towards symbolic systems and communication.
Lecture 3: Symbol emergence in robot populations
The lecture will examine the emergence of grounded, shared lexical language in populations of robots. Lexical languages consist of single-word (or in some cases multi-word) expressions. We show how such systems emerge in referential games. In particular, we focus on how internal representations become shared across agents through communication. The lecture will cover (proper) naming and categorization of objects, for instance using color. The lecture will introduce important concepts such as symbol grounding and discuss them from the viewpoint of language emergence.
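A minimal version of such a referential game is the naming game. The sketch below is a deliberate simplification of the experiments discussed here: it shows how a shared name for one object can emerge when hearers align with speakers.

```python
import random

random.seed(0)

class Agent:
    def __init__(self):
        self.name = None                 # this agent's word for the object

    def speak(self):
        if self.name is None:            # invent a word if none is known yet
            self.name = f"w{random.randint(0, 999)}"
        return self.name

    def listen(self, word):
        self.name = word                 # align with the speaker

agents = [Agent() for _ in range(3)]
# Round-robin interactions for a deterministic illustration; real language
# games pair speakers and hearers at random.
for t in range(6):
    speaker, hearer = agents[t % 3], agents[(t + 1) % 3]
    hearer.listen(speaker.speak())

shared = {a.name for a in agents}        # a single shared name remains
```

Real naming-game models add competing words, lexicon scores, and grounding in perception, but the alignment dynamics are the same.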
Lecture 4: Compositional Language
Human language is compositional: the meaning of a phrase depends on its constituents but also on the grammatical relations between them. For instance, projective categories such as "front", "back", "left", and "right" can be used as adjectives or prepositionally, and different syntactic usage signals a different conceptualization. This lecture will focus on compositional representations of language meaning, how they are related to syntax, and how such systems might emerge in populations of agents.
Objectives
The course will give an introduction to computational models of symbol emergence through sensorimotor behavior and social construction. These models can be run in simulation or on real robots. Participants will be introduced to the field of Embodied Cognition – providing an overview on interdisciplinary results from neuroscience, psychology, computer science, linguistics and robotics.
Literature
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2016). Building Machines That Learn and Think Like People. Behavioral and Brain Sciences, 1-101. https://doi.org/10.1017/S0140525X16001837
Lecture 1-2
Dickinson, M. H., Farley, C. T., Full, R. J., Koehl, M. a. R., Kram, R., & Lehman, S. (2000). How Animals Move: An Integrative View. Science, 288(5463), 100–106. https://doi.org/10.1126/science.288.5463.100
Ijspeert, A. J. (2014). Biorobotics: Using robots to emulate and investigate agile locomotion. Science, 346(6206), 196–203. https://doi.org/10.1126/science.1254486
Gallese, V., & Lakoff, G. (2005). The Brain’s concepts: The role of the Sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3–4), 455–479. https://doi.org/10.1080/02643290442000310
Lecture 3-4
Steels, L. (2008). The symbol grounding problem has been solved. So what's next? In M. de Vega (Ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.
Steels, L. (2015). The Talking Heads Experiment: Origins of Words and Meanings. Volume 1 of Computational Models of Language Evolution. Language Science Press, Berlin, DE.
Spranger, M. (2016). The Evolution of Grounded Spatial Language. Language Science Press.
Lecturer
Malte Schilling is a Responsible Investigator at the Center of Excellence for 'Cognitive Interaction Technology' in Bielefeld. His work concentrates on internal models, their grounding in behavior, and their application in higher-level cognitive functions such as planning ahead or communication. Before that, he was a PostDoc at ICSI in Berkeley, doing research on the connection between linguistic and sensorimotor representations. He received his PhD in Biology from Bielefeld University in 2010, working on decentralized, biologically inspired minimal cognitive systems. He studied Computer Science at Bielefeld University and finished his Diploma in 2003 with a thesis on knowledge-based systems for virtual environments.
Michael Spranger received a PhD from the Vrije Universiteit in Brussels (Belgium) in 2011 (in Computer Science). For his PhD he was a researcher at Sony CSL Paris (France). He then worked in the R&D department of Sony Corporation in Tokyo (Japan) for almost 2 years. He is currently a researcher at Sony Computer Science Laboratories Inc (Tokyo, Japan). Michael is a roboticist by training with extensive experience in research on and construction of autonomous systems including research on robot perception, world modeling and behavior control. After his undergraduate degree he fell in love with the study of language and has since worked on different language domains from action language and posture verbs to time, tense, determination and spatial language. His work focuses on artificial language evolution, machine learning for NLP (and applications), developmental language learning, computational cognitive semantics and construction grammar.
In this course we will teach you how to juggle. Juggling is a motor activity that requires many different skills. Obviously, you need to learn the movement pattern and practice a lot to get the reward: being able to juggle! Learning such specific movement patterns requires highly complex electrical and chemical circuitry in the brain, which is becoming a more and more important field of neuroscience. Juggling seems to encourage nerve fiber growth, so scientists believe it not only promotes brain fitness in general but could also help with debilitating illnesses.
Nevertheless, learning to juggle requires attention, focus, concentration, and persistence. As every juggler would agree, the key to success is repetition. We will teach juggling in a mainly practical way. While training, you can feel constant progress independently of your previous skill level.
In the last session you will also get an introduction to site swap, a mathematical description of juggling patterns that you can notate, calculate with, and, for example, feed into a juggling simulator.
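The core of site swap notation can be captured in a few lines. In the standard validity test (sketched below), a sequence of throw heights is juggleable if no two throws land on the same beat, and the number of balls equals the average throw height.

```python
# Validity test: throw i lands on beat (i + p[i]) mod n; a pattern is
# juggleable iff all landing beats are distinct.
def is_valid_siteswap(pattern):
    n = len(pattern)
    landings = {(i + throw) % n for i, throw in enumerate(pattern)}
    return len(landings) == n

def num_balls(pattern):
    """Average throw height = number of balls (for valid patterns)."""
    return sum(pattern) // len(pattern)

# [3] is the 3-ball cascade; [4, 4, 1] is a classic 3-ball pattern;
# [4, 3, 2] is not juggleable (two balls would land on the same beat).
```

A juggling simulator only needs this check plus the landing schedule to animate any valid pattern.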
Session: Basic introduction to juggling and the neuroscience behind it
Session: How to learn juggling most effectively
Session: Common mistakes and how to avoid them
Session: Site swap – a mathematical description of juggling patterns
Objectives
In this course you will learn to juggle with 3 balls, you will learn how to avoid common mistakes when practicing, how to improve effectively also when practicing on your own. Apart from the basic 3-ball-cascade you will learn additional simple patterns and get an introduction to advanced tricks and techniques.
All sessions are mainly practical training of juggling.
Lecturer
Susan Wache studied Cognitive Science at the University of Osnabrück. She worked in the research group feelSpace, which investigates human senses, and in 2015 co-founded the startup feelSpace, which develops and sells naviBelts, tactile navigation devices especially for the visually impaired.
Julia Wache studied Cognitive Science in Vienna and Potsdam. She finished her PhD in Trento, working on emotion recognition via physiological signals and on mental effort in the context of using tactile belts for orientation. In parallel, she participated in the EIT Digital doctoral program to learn entrepreneurial skills. In 2016 she joined the feelSpace GmbH.
Together, the sisters started juggling and performing over 20 years ago and have given courses for different audiences on various occasions.
Lecturer: Sao Mai Nguyen Fields: Machine learning, robot learning, reinforcement learning, goal babbling, active imitation learning
Content
This course will provide an overview of research in machine learning and robotics on artificial curiosity. Also referred to as intrinsic motivation, this stream of algorithms, inspired by theories of developmental psychology, allows artificial agents to learn more autonomously, especially in stochastic high-dimensional environments, for redundant tasks, and for multi-task, life-long, or curriculum learning. The course will cover the following topics:
Basics of reinforcement learning
Curiosity-driven exploration
Goal babbling
Intrinsic motivation for imitation learning
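One common formalisation of intrinsic motivation uses the prediction error of a learned forward model as the reward signal. The sketch below is a toy illustration of that idea; the environment and update rule are invented for the example, not taken from any specific paper in the reading list.

```python
# Deterministic toy environment: each action has a fixed outcome.
def outcome(action):
    return {0: 1.0, 1: -2.0}[action]

model = {0: 0.0, 1: 0.0}   # forward-model prediction per action
alpha = 0.5                # learning rate

errors = []
for step in range(20):
    a = step % 2                                  # alternate both actions
    err = abs(outcome(a) - model[a])              # surprise = intrinsic reward
    errors.append(err)
    model[a] += alpha * (outcome(a) - model[a])   # improve the forward model
```

The intrinsic reward decays as the forward model improves, which is exactly what drives a curious agent onward to situations it has not yet mastered.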
Objectives
The students will learn about the different uses of intrinsic motivation for motor control and see several illustrations of the application and implementation of intrinsically motivated exploration algorithms for motor control by embodied agents. They will also understand the importance of data sampling, exploration, and the selection of information sources for robot learning. Finally, they will gain practical experience with a simple robotic simulation setup.
Literature
J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010. https://doi.org/10.1109/TAMD.2010.2056368
G. Baldassarre. What are intrinsic motivations? a biological perspective. In Development and Learning (ICDL), 2011 IEEE International Conference on, volume 2, pages 1–8. IEEE, 2011. https://doi.org/10.1109/DEVLRN.2011.6037367
J. Gottlieb and P.-Y. Oudeyer. Towards a neuroscience of active sampling and curiosity. Nature Reviews Neuroscience, 19(12):758–770, 2018. https://doi.org/10.1038/s41583-018-0078-0
P.-Y. Oudeyer. The New Science of Curiosity, chapter Computational Theories of Curiosity-Driven Learning. NOVA, 02 2018. https://arxiv.org/abs/1802.10546
Lecturer
Nguyen Sao Mai specialises in robot learning, especially cognitive developmental learning. She is currently an associate professor at the U2IS Lab at ENSTA Paris, France, after a few years at IMT Atlantique. She received a PhD in computer science in 2013 for her studies on combining curiosity-driven exploration and socially guided exploration for multi-task learning and curriculum learning. She holds a master's degree in computer science from Ecole Polytechnique and a master's degree in adaptive machine systems from Osaka University. She coordinated the KERAAL experiment, funded by the European Union through the project ECHORD++, which proposed an intelligent tutoring humanoid robot for physical rehabilitation. She is currently an associate editor of IEEE TCDS and co-chair of the Task Force "Action and Perception" of the IEEE Technical Committee on Cognitive and Developmental Systems.
Lecturer: Katharina Krämer Fields: Psychology / Developmental Psychology, Social Psychology, and Clinical Psychology
Content
This course is intended for all participants (psychologists and non-psychologists alike) who are curious about the human mind and its functions. And that is basically what psychology is about: psychological research investigates the role of mental functions in individual and social behaviour and explores the physiological and biological processes that underlie cognitive functions and behaviours. As a social science, psychology aims to understand individuals and groups by establishing general principles and researching specific cases, using quantitative and qualitative research methods.
During this introduction to psychology we will get to know the major schools of thought, including behaviourism and cognitive psychology as well as psychoanalysis and psychodynamic psychology. Psychological research encompasses many subfields and includes different approaches to the study of mental processes and behaviour. In this course, we will focus in particular on the sub-disciplines of developmental psychology, social psychology, and clinical psychology. Thereby, we will explore the following questions:
How does the human mind develop through the life span? How do people come to perceive, understand, and act within the world? How do these processes change as people age?
How do humans think about each other? How do they relate to each other? What are the influences of others on an individual’s behaviour? How do people form beliefs, attitudes, and stereotypes about other people?
How and why do mental disorders develop? How can we prevent mental disorders and psychologically based distress? How can we promote subjective well-being and personal development?
Objectives
To get an overview of the different sub-disciplines of psychology and psychological research methods
To get a broad idea how the human mind works, how people function and what motivates and explains their behaviour
To understand how psychological knowledge can be applied to the assessment and treatment of mental health problems
Literature
As there are many excellent textbooks on psychology and its sub-disciplines, this is just a small selection if you want to do some background reading.
Aronson, E., Wilson, T.D. & Akert, R.M. (2013). Social Psychology (8th Edition). Pearson.
Barlow, D.H. (2014). The Oxford Handbook of Clinical Psychology. Oxford University Press.
Gerrig, R.J., Zimbardo, P., Svartdal, F., Brennan, T., Donaldson, R. & Archer, T. (2012). Psychology and Life (19th Edition). Pearson.
Slater, A. & Bremner, J.G. (2017). An Introduction to Developmental Psychology (3rd Edition). The British Psychological Society and Wiley.
Lecturer
Katharina Krämer is a psychologist and psychoanalytic psychotherapist. She works as a professor of psychology at the Rheinische Fachhochschule Köln, Germany, and as a psychotherapist at the Department of Psychiatry at the University Hospital Cologne, Germany. In 2014, Katharina Krämer received her doctoral degree from the University of Cologne, Germany, for a thesis investigating the perception of dynamic nonverbal cues in cross-cultural psychology and high-functioning autism. She works with patients with different mental disorders, focusing on adult patients with autism. Her research interests include the application of Mentalization-Based Group Therapy with patients with autism and the vocational integration of patients with autism.
This course is intended for non-machine learners with little to no prior knowledge. It will provide many examples as well as accompanying exercises and limit the number of formulae to a bare minimum, while instead maximizing the number of meaningful images. In more detail, the course will cover the following topics.
Session 1: Basics of optimization (What is a mathematical optimization problem? How do we model the world in optimization? How do we solve optimization problems?), basics of probability theory (What are distributions, joint and conditional probabilities, and Bayes’ rule? How do we maximize probabilities?), and linear regression from a geometric and a risk minimization perspective
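As a taste of Session 1, the sketch below (toy numbers throughout) shows Bayes' rule on a two-hypothesis example and linear regression solved in closed form by least squares.

```python
import numpy as np

# Bayes' rule, P(h|d) = P(d|h) P(h) / P(d), with hypothetical rates.
p_h = 0.01                                     # prior
p_d_given_h = 0.9                              # likelihood
p_d = p_d_given_h * p_h + 0.05 * (1 - p_h)     # evidence
p_h_given_d = p_d_given_h * p_h / p_d          # posterior (about 0.15)

# Linear regression: fit y = w*x + b by least squares on noise-free data.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                              # generated by a known line
X = np.column_stack([x, np.ones_like(x)])      # design matrix [x, 1]
w, b = np.linalg.lstsq(X, y, rcond=None)[0]    # recovers w = 2, b = 1
```

The geometric view of the same fit: the least-squares solution projects y onto the plane spanned by the columns of the design matrix.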
Session 2: Machine learning from the perspective of distances for classification (how do I put things into known categories?), clustering (how do I discover new categories of things?), regression (how do I infer an unknown variable from a known one, based on examples?), and dimensionality reduction (how do I simplify data that is too big to process?)
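The distance-based view of classification from Session 2 can be shown with a nearest-centroid classifier, a toy sketch with invented points:

```python
import numpy as np

# Two labelled clusters of training points (hypothetical data).
train = {
    "A": np.array([[0.0, 0.0], [0.0, 1.0]]),
    "B": np.array([[5.0, 5.0], [5.0, 6.0]]),
}
centroids = {label: pts.mean(axis=0) for label, pts in train.items()}

def classify(x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

Clustering (e.g. k-means) uses the same distance machinery, but discovers the centroids itself instead of computing them from labelled data.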
Session 3: Neural-network-based learning (What is an artificial neural network? What are popular components? What kind of models can I build? How do I learn such models?) and the problems of generalization (When can learning fail? How do I prevent that? How can hackers attack my model?)
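The mechanics behind neural-network training can be seen in a single neuron. The sketch below (toy data, plain gradient descent) trains one logistic unit on a linearly separable one-dimensional problem:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 1.0])       # label 1 for x >= 2

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = sigmoid(w * X + b)               # forward pass
    grad_w = np.mean((p - y) * X)        # cross-entropy gradient wrt w
    grad_b = np.mean(p - y)              # cross-entropy gradient wrt b
    w -= lr * grad_w                     # gradient-descent update
    b -= lr * grad_b

preds = (sigmoid(w * X + b) > 0.5).astype(float)
```

A deep network stacks many such units and computes the gradients by backpropagation, but the update rule is the same.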
Session 4: Reinforcement learning (What is it and how do I do it?) and algorithmic fairness (What does it mean to be fair, and what role do risk, reward, and curiosity play?)
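As a taste of Session 4, the sketch below runs tabular Q-learning on a toy corridor world, invented for the example: the agent learns that always moving right reaches the rewarding goal state.

```python
import numpy as np

n_states, n_actions = 4, 2             # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s2 == n_states - 1          # goal: rightmost state
    return s2, (1.0 if done else 0.0), done

for _ in range(200):                   # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])   # Q-learning update
        s = s2

greedy = [int(np.argmax(Q[s])) for s in range(n_states - 1)]
```

After training, the greedy policy chooses "right" in every non-terminal state, because the discounted value of reaching the goal dominates.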
Objectives
becoming familiar with key concepts from machine learning (e.g. risk minimization, exploration versus exploitation, priors and posteriors, generalization)
achieving a high-level understanding of how the most popular machine learning methods work and which method can be used for which application (e.g. when not to use deep learning methods)
de-mystifying machine learning (it is just a collection of methods with certain assumptions)
optionally, becoming able to apply some machine learning methods in Python on your own data (exercises)
Lecturer
Benjamin Paassen received their doctoral degree in 2019 from Bielefeld University, Germany, on the topic of 'Metric Learning for Structured Data'. Prior work has focused on machine learning algorithms to support applications in computer science education and hand prosthesis research, but has also included research on discrimination in video game culture and in machine learning. Research interests include interpretable machine learning, metric learning, transfer learning, and fairness.