RC1 – Sort-it by Intuity: Knowledge Representation in Action

Lecturer: Alexandra Kirsch
Fields: Artificial Intelligence

Content

Intuity combines strategy, creativity, design, science, and technology in one place. We deal with numerous interconnected and ever-changing pieces of information, for instance snippets from a requirements workshop with users, customers or product owners of a new service, product or software application. This is why we set out to build our own tool for exploring, consolidating and structuring knowledge.
From a user perspective, Sort-it helps to solve complex problems. Under the hood, Sort-it implements a novel knowledge representation approach that combines classical frame-like knowledge representation with cognitive models of human categorization.
Sort-it is a prime example demonstrating that AI is more than data science and that AI techniques are only as powerful as their application to real-world user demands.
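As a loose illustration of the two ingredients named above (a hypothetical toy sketch, not Sort-it's actual implementation), a frame stores knowledge as named slots, while prototype-based categorization, in the spirit of the cognitive models discussed by Lakoff, grades category membership by similarity rather than by strict definitions:

```python
# Toy sketch (NOT Sort-it's implementation): frames as slot-value records,
# categorized by graded similarity to a prototype.

def similarity(frame, prototype):
    """Fraction of prototype slots that the frame matches (a crude graded measure)."""
    matches = sum(1 for slot, value in prototype.items() if frame.get(slot) == value)
    return matches / len(prototype)

# Frames are plain slot-value records here.
bird_prototype = {"flies": True, "lays_eggs": True, "has_feathers": True}
robin = {"flies": True, "lays_eggs": True, "has_feathers": True}
penguin = {"flies": False, "lays_eggs": True, "has_feathers": True}

# A robin is a more typical bird than a penguin, yet both remain category members:
# membership is graded, not all-or-nothing.
robin_score = similarity(robin, bird_prototype)      # 1.0
penguin_score = similarity(penguin, bird_prototype)  # 2/3
```

The point of the sketch is only the combination: classical slot-based structure, but category boundaries that come from similarity to a prototype instead of necessary-and-sufficient conditions.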

Literature

  • https://www.intuity.de/en/blog/2020/wie-man-mehr-aus-workshops-herausholt/
  • George Lakoff: Women, Fire and Dangerous Things. University of Chicago Press, 1987
  • Herbert A. Simon: The Architecture of Complexity. Proceedings of the American Philosophical Society, Vol. 106, No. 6 (Dec. 12, 1962), pp. 467–482

Lecturer

Dr. Alexandra Kirsch

Alexandra Kirsch received her PhD in computer science at TU München. She gathered experience as a business consultant before returning to TU München as a junior research group leader in the cluster of excellence “Cognition for Technical Systems”. She was a Carl-von-Linde Junior Fellow of the Institute for Advanced Study of TU München and a member of the Young Scholars Programme of the Bavarian Academy of Sciences and Humanities, where she benefited from broad interdisciplinary exchange. She was appointed assistant professor at the University of Tübingen for the area of Human-Computer Interaction and Artificial Intelligence. Since 2018 Alexandra Kirsch has been exploring and prototyping applications that combine Artificial Intelligence with User Experience Design at Intuity Media Lab.

Affiliation: Intuity Media Lab GmbH
Homepage: https://www.intuity.de/, https://www.alexkirsch.de/

Pan1 – Industry Panel

Moderator: Noa Tamir

When hearing about transitioning to a data science job, one is often faced with a list of technical skills, from programming packages and frameworks to applied statistics topics and data visualization. But making the transition to data science is not only about technical competencies. It is valuable to understand the context and environment in which you will be working, the required soft skills, and the culture of potential workplaces. Join our panelists to hear about their personal experiences so far, their understanding of the gap between recent graduates’ expectations and the job itself, as well as some insights into the job market one needs to navigate to get there.

Moderator

Noa Tamir

Noa has not only made the transition to data science, but has since supported many others in the first steps of their careers. As a former Director of Data Science and Team Lead, she hired and trained talented academics. Noa is currently an independent consultant and teaches an M.Sc. Data Science Lab at HTW Berlin, a university of applied sciences.

Panelists

Dr. Marielle Dado

Dr. Marielle Dado completed a PhD in Applied Cognitive Sciences from the University of Duisburg-Essen, as part of the “User-Centred Social Media” Interdisciplinary Research Training Group (“Graduiertenkolleg”) of the German Research Foundation (“Deutsche Forschungsgemeinschaft”). A psychologist and educator by training, she has been working as a data scientist since 2018 and is currently focusing on data integrity and governance within organizations.

Kat Rasch

PhD in Computer Science | Freelance data scientist / researcher / teacher | interested in exploring the breaking points of AI systems (aka hacking the AI)

Rob Cao

After studying Theoretical Physics, I completed a PhD in Neuroscience at the University of Magdeburg, followed by a postdoc at the Gatsby Computational Neuroscience Unit (UCL), during which I gradually strayed away from Neuroscience and towards ML/AI. This eventually led me to a Data Science position at GroupM Data & Technologies, where I have now been working for a year.

IC21 – Towards Learning Compositional Conceptual Structures from Sensorimotor Experiences

Evening lecture sponsored by the German Society for Cognitive Science.

Lecturer: Martin Butz
Fields: Cognitive Science

Content

Our minds learn conceptual structures from sensorimotor experiences. We intuitively know how objects behave based on physics. We perceive things as entities with particular distinctive properties, and other animals and humans as agentive. Meanwhile, we become able to recombine these conceptual structures in compositionally meaningful ways. This ability allows us to plan, reason, and (partially) solve problems that we have never encountered before. Cognitive science is still searching for an explanation of how humans learn such compositional conceptual structures from sensorimotor experiences.

In this talk, I present evidence and insights from my own research suggesting that our brains tend to develop event-predictive, generative models from encountered sensorimotor experiences, actively exploring them to foster further development. Interestingly, these models may also have set the stage for language development and thus for the emergence of even more abstract cognitive abilities.

Lecturer

Prof. Martin Butz

Martin Butz has been Professor for Cognitive Modeling at the University of Tübingen since 2011. He studied computer science and psychology at Würzburg University from 1995 to 2001. In 2004 he finished his PhD in computer science, with a focus on rule-based evolutionary online learning systems, at the University of Illinois at Urbana-Champaign. Since the very beginning of his research career, he has collaborated with psychologists, neuroscientists, machine learners, and roboticists and has attempted to integrate their respective disciplinary perspectives into an overarching theory of the mind, cognition, and behavior.

Affiliation: University of Tübingen

IC9 – How to know

Lecturer: Celeste Kidd
Fields: Developmental psychology, cognitive science, (and a tiny bit of neuroscience)

Content

This talk will discuss Kidd’s research about how people come to know what they know. The world is a sea of information too vast for any one person to acquire entirely. How then do people navigate the information overload, and how do their decisions shape their knowledge and beliefs? In this talk, Kidd will discuss research from her lab about the core cognitive systems people use to guide their learning about the world—including attention, curiosity, and metacognition (thinking about thinking). The talk will discuss the evidence that people play an active role in their own learning, starting in infancy and continuing through adulthood. Kidd will explain why we are curious about some things but not others, and how our past experiences and existing knowledge shape our future interests. She will also discuss why people sometimes hold beliefs that are inconsistent with evidence available in the world, and how we might leverage our knowledge of human curiosity and learning to design systems that better support access to truth and reality.

Objectives

I hope to introduce students to the approach of combining computational models with behavioural experiments in order to develop robust theories of the systems that govern human cognition, especially attention, curiosity, and learning. We will take a very high-level conceptual approach to these topics, and I also hope students will leave understanding something useful about how people solve the problem of sampling from the world in order to understand something profound about it. I hope students will leave with a better understanding about how a person’s past experiences and expectations combine in a way that influences their subsequent sampling decisions and beliefs.    

Literature

Optional reading: Kidd, C., & Hayden, B. Y. (2015). The psychology and neuroscience of curiosity. Neuron, 88(3), 449–460. https://www.cell.com/neuron/fulltext/S0896-6273(15)00767-9

Lecturer

Prof. Celeste Kidd

Celeste Kidd is an Assistant Professor of Psychology at the University of California, Berkeley. Her lab investigates learning and belief formation using a combination of computational models and behavioural experiments. She earned her PhD in Brain and Cognitive Sciences at the University of Rochester, then held brief visiting fellow positions at Stanford’s Center for the Study of Language and Information and MIT’s Department of Brain and Cognitive Sciences before starting her own lab. Her research has been funded by the National Science Foundation, the Jacobs Foundation, the Templeton Foundation, the Human Frontiers Science Program, Google, and the Berkeley Center for New Media. Kidd also advocates for equity in educational opportunities, work for which she was named one of TIME Magazine’s 2017 Persons of the Year as one of the “Silence Breakers”.

Affiliation: University of California, Berkeley
Website: www.kiddlab.com


IC15 – How art creates meaning and what we can learn about this for human-centric AI

Lecturer: Luc Steels
Fields: Artificial Intelligence

Content

Artificial Intelligence keeps making great strides and its vitality is remarkable. But, as AI scientists, we constantly have to ask ourselves whether we are on the right track towards our fundamental goal, which is to understand intelligence, of which human intelligence is the most magnificent example in nature, and build useful artifacts based on this understanding. In this talk I will argue that a crucial component of human intelligence, which AI keeps avoiding and circumventing, is MEANING and UNDERSTANDING. I will suggest three steps to start tackling this area.

The first step is to come to grips with what meaning and understanding are and to recognize that we are not confronting it enough today in AI. To do this I will look at examples of art, both music and painting.

The second step is to identify fundamental processes and data structures that we need to model the production of art works, particularly the creative part of it. We also need to identify the processes and data structures for the interpretation and experience of art works, which also requires considerable creativity.

The third step is to start experimenting, partly using all the tools available in our AI toolkit, from deep learning to knowledge graphs, and complementing that with hand-made additions to fill gaps. From such basic research we can then start to get a clearer view on how we can push our field further. There is still so much to do and discover!

Literature

  • Steels, L. (2020) Personal Dynamic Memories are Necessary to Deal with Meaning and Understanding in Human-Centric AI. In: Saffiotti, A, L. Serafini and P. Lukowicz (eds). Proceedings of the First International Workshop on New Foundations for Human-Centered AI (NeHuAI) Co-located with 24th European Conference on Artificial Intelligence (ECAI 2020) CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073) Vol-2659.
  • Sinem, A. and L. Steels (2021) Identifying centres of interest in paintings using alignment and edge detection. Case studies on works by Luc Tuymans. In: International Workshop on Fine Art Pattern Extraction and Recognition (FAPER 2020). Proceedings of the International Conference on Pattern Recognition (ICPR) Part III. LNCS 12663. Springer Verlag, Berlin.
  • Steels, L. (2021) From audio signals to musical meaning. In: Miranda, E. (ed.) Handbook of Artificial Intelligence for Music: Foundations, Advanced Approaches, and Developments for Creativity. The MIT Press, Cambridge Ma. [in press]

Lecturer

Prof. Luc Steels

Luc Steels is currently an ICREA research fellow at the Institute for Evolutionary Biology (UPF-CSIC) in Barcelona. He studied linguistics at the University of Antwerp (BE) and computer science at MIT (US) and became a professor of Artificial Intelligence (AI) at the University of Brussels (VUB) in 1983, where he founded the VUB AI Lab. In 1996 Steels founded the Sony Computer Science Laboratory in Paris. Steels has been active in many areas of AI: knowledge representation and knowledge-based systems, behavior-based robotics and artificial life, language evolution, and digital community memories. At the moment he is focused on questions of meaning and understanding in relation to creativity and art and on the early history of AI.

Affiliation: Catalan Institute for Research and Advanced Studies (ICREA)
Homepage: https://www.icrea.cat/Web/ScientificStaff/luc-steels-539

PC3 – The AI Go Tournament

Lecturer: Benjamin Paaßen
Fields: Artificial Intelligence/Machine Learning

Content

Go is an ancient and deceptively simple game: Two players alternate at placing a stone somewhere on a 19 x 19 grid. Whenever a group of opposing stones is surrounded, it is removed. And whoever has more stones left at the end of the game wins (in simplified Chinese rules). But despite its simplicity, Go is exceptionally hard to master. Any move can have highly chaotic repercussions a hundred moves later.
As such, it was celebrated as a breakthrough when Google DeepMind released AlphaGo, an AI that beat the world champion Lee Sedol four games to one.
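The capture rule described above can be sketched in a few lines of Python (a toy illustration only, not the course's starter code; the suicide and ko rules are omitted): a group of connected same-colored stones is removed as soon as it has no adjacent empty points (liberties) left.

```python
# Toy illustration of the Go capture rule on a small square board.
# 0 = empty, 1 = black, 2 = white. Suicide and ko rules are NOT handled.

def group_and_liberties(board, r, c):
    """Flood-fill the group containing (r, c); return its stones and liberty count."""
    color = board[r][c]
    n = len(board)
    group, liberties, stack = set(), set(), [(r, c)]
    while stack:
        y, x = stack.pop()
        if (y, x) in group:
            continue
        group.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < n and 0 <= nx < n:
                if board[ny][nx] == 0:
                    liberties.add((ny, nx))        # empty neighbor = liberty
                elif board[ny][nx] == color:
                    stack.append((ny, nx))         # same color: part of the group
    return group, len(liberties)

def place_stone(board, r, c, color):
    """Place a stone, then remove any adjacent opposing group left without liberties."""
    board[r][c] = color
    n = len(board)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = r + dy, c + dx
        if 0 <= ny < n and 0 <= nx < n and board[ny][nx] not in (0, color):
            group, libs = group_and_liberties(board, ny, nx)
            if libs == 0:
                for gy, gx in group:               # capture: clear the dead group
                    board[gy][gx] = 0

# Example: a lone white stone at (2, 2) on a 5x5 board, already touched by three
# black stones, is captured when black fills its last liberty at (2, 3).
board = [[0] * 5 for _ in range(5)]
board[2][2] = 2
for y, x in ((1, 2), (3, 2), (2, 1)):
    board[y][x] = 1
place_stone(board, 2, 3, 1)   # white group at (2, 2) now has zero liberties
```

Flood fill plus liberty counting is also exactly the primitive a 5 x 5 tournament AI needs for generating legal moves.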

In this course, we will attempt a much more modest goal: programming an AI which learns to play the game of Go on a 5 x 5 board, where games are short and the chaos more tame. In the introductory session, we will consider the problem in more detail and get a short primer on reinforcement learning. Then, each participant (you can also form groups) starts to program their AI. We will check in again one week after the start and at the deadline after two weeks. At that point, all AIs need to be submitted as Python programs. The submitted AIs will compete against each other in a tournament, and the results will be announced on the last evening of IK.

There will be a limit of 30 participants for this course to keep things manageable. A list for registration will be made available at the conference.

Link to the source code

Literature

  • Wikipedia (2021). Basic rules of go. https://en.wikipedia.org/wiki/Go_(game)#Basic_rules
  • DeepMind (2020). AlphaGo – The Movie. https://www.youtube.com/watch?v=WXuK6gekU1Y
  • Silver, D., Schrittwieser, J., Simonyan, K. et al. (2017). Mastering the game of Go without human knowledge. Nature 550, 354-359. https://doi.org/10.1038/nature24270
  • Brunskill, E. (2019). Reinforcement Learning | Winter 2019 | Lecture 1 – Introduction. https://www.youtube.com/watch?v=FgzM3zpZ55o&list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u

Lecturer

Dr. Benjamin Paassen

Benjamin Paassen received their doctoral degree in 2019 from Bielefeld University, Germany, on the topic of ‘Metric Learning for Structured Data’. They spent an adventurous pandemic year 2020 at the University of Sydney, using neural networks to process students’ computer programs and support their programming skills. They are now affiliated with the Humboldt-University of Berlin, continuing to work on improving education with machine learning. Other research interests include hand prosthesis research, fairness in machine learning, and video game culture.

Affiliation: Humboldt-University of Berlin
Homepage: https://bpaassen.gitlab.io/

PC2 – Mindfulness as a method to explore your mind-wandering with curiosity

Lecturer: Marieke van Vugt
Fields: Cognitive science/Psychology

Content

In the first session, we will introduce the methods of mindfulness and discuss how mindfulness differs from mind-wandering. Contrary to popular belief, mindfulness is not the opposite of mind-wandering; rather, cultivating mindfulness involves becoming better friends with your mind so that you learn to become less stuck in thought processes. We will also review conceptual models of mindfulness and mind-wandering together with some research underpinnings. In addition, we will introduce the first- and third-person perspectives on studying the mind and the basics of microphenomenology. We will also start a small experiment with our own mindfulness practice, which we will analyse in the last session of the course.
In the second session, we will continue our practice of mindfulness, and review research findings on the effects of mindfulness on cognitive function and brain activity.
In the third session, we will continue our practice of mindfulness. We will place mindfulness in the context of different meditation practices, discussing similarities and differences. We will also discuss in general how we can study mindfulness scientifically and how to do so rigorously.
In the fourth session, apart from practicing mindfulness, we will discuss the findings of our little experiments. There will also be ample space for questions and additional topics to discuss.

Literature

  • Tang, Y. Y., Hölzel, B. K., & Posner, M. I. (2015). The neuroscience of mindfulness meditation. Nature Reviews Neuroscience, 16(4), 213. https://www.nature.com/articles/nrn3916
  • Vago, D. R., & David, S. A. (2012). Self-awareness, self-regulation, and self-transcendence (S-ART): a framework for understanding the neurobiological mechanisms of mindfulness. Frontiers in human neuroscience, 6, 296. https://www.frontiersin.org/articles/10.3389/fnhum.2012.00296
  • Petitmengin, C., van Beek, M., Bitbol, M., Nissou, J. M., & Roepstorff, A. (2018). Studying the experience of meditation through micro-phenomenology. Current Opinion in Psychology. https://www.sciencedirect.com/science/article/pii/S2352250X18301908

Lecturer

Marieke van Vugt
Dr. van Vugt

Dr. van Vugt is an assistant professor at the University of Groningen in the Netherlands, working in the department of artificial intelligence. She obtained her PhD in model-based neuroscience from the University of Pennsylvania, then worked as a postdoc at Princeton University before moving to the University of Groningen. In her lab, she focuses on understanding the cognitive and neural mechanisms underlying decision making, mind-wandering and meditation by means of EEG, behavioural studies and computational modeling. In some slightly outside-the-box research, she also records the brain waves of Tibetan monks and dancers.

Affiliation: University of Groningen
Homepage: https://mkvanvugt.wordpress.com

PC1 – Dynamical Systems: a Navigation Guide

Lecturer: Herbert Jaeger
Fields: Complex systems modeling, general

Content

This is a crash course (4 sessions of 2 hrs each) on dynamical systems. It is meant to be introductory, understandable for the IK audience, not only for mathematicians. Whenever in the wider sciences of cognitive processes (neuroscience, AI, robotics, …) one wants to model cognitive processes, one should model them as what they are: processes, that is, as dynamical systems. So there is no escape: you MUST learn about them.
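As a small taste of the subject (my own illustration, not taken from the course slides), here is arguably the simplest interesting dynamical system, the logistic map x_{t+1} = r · x_t · (1 − x_t): one parameter, one state variable, yet its long-run behavior ranges from a stable fixed point to full-blown chaos.

```python
# The logistic map: a one-dimensional discrete-time dynamical system.
# For moderate r the state settles on a fixed point; near r = 4 it is chaotic,
# i.e. extremely sensitive to the initial condition.

def trajectory(r, x0, steps):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

settled = trajectory(2.5, 0.1, 200)[-1]   # converges to the fixed point 1 - 1/r = 0.6
chaotic = trajectory(4.0, 0.1, 200)[-1]   # bounces around [0, 1] without settling
```

That a single quadratic update rule produces such qualitatively different regimes is one reason dynamical systems theory is a labyrinth of subfields rather than one tidy tutorial.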

Literature

  • No literature to pre-read – the course slides contain references to a wide variety of writings from diverse niches in the labyrinth of dynamical systems theories (plural!). I don’t know of any single tutorial text that tries to do the impossible (which I try in this course), namely to give a comprehensive introduction. Existing tutorials (e.g. at https://dsweb.siam.org/) only each cover special subfields.

Lecturer

Herbert Jaeger
Prof. Herbert Jaeger

Herbert Jaeger is full Professor for Computing in Cognitive Materials at the Rijksuniversiteit Groningen (RUG) and head of the MINDS group “Modeling Intelligent Dynamical Systems”. He studied mathematics and psychology at the University of Freiburg and obtained his PhD in Computer Science (Artificial Intelligence) at the University of Bielefeld in 1994. After a 5-year postdoctoral fellowship at the German National Research Center for Computer Science (Sankt Augustin, Germany) he headed the “Intelligent Dynamical Systems” group at the Fraunhofer Institute for Autonomous Intelligent Systems AIS (Sankt Augustin, Germany). In 2003 he was appointed Associate Professor for Computational Science at Jacobs University Bremen, where he stayed until he moved to RUG in 2019. Jaeger is one of several lucky independent co-discoverers of the “reservoir computing” principle for training recurrent neural networks, but has other interests as well (among them, good teaching).

Affiliation: University of Groningen, NL
Homepage: https://www.ai.rug.nl/minds/

IC22 – Data-driven dynamical models for neuroscience and neuroengineering

Lecturer: Bing Brunton
Fields: Computational neuroscience / Neuroengineering, Data Science

Content

Discoveries in modern neuroscience are increasingly driven by quantitative understanding of complex data. The work in my lab lies at an emerging, fertile intersection of computation and biology. I develop data-driven analytic methods that are applied to, and are inspired by, neuroscience questions. Projects in my lab explore neural computations in diverse organisms. We work with theoretical collaborators on developing methods, and with experimental collaborators studying insects, rodents, and primates. The common theme in our work is the development of methods that leverage the escalating scale and complexity of neural and behavioural data to find interpretable patterns.

Lecturer

Bing Brunton is the Washington Research Foundation Innovation Associate Professor of Neuroengineering in the Department of Biology. She joined the University of Washington in 2014 as part of the Provost’s Initiative in Data-Intensive Discovery to build an interdisciplinary research program at the intersection of biology and data science. She also holds appointments in the Paul G. Allen School of Computer Science & Engineering and the Department of Applied Mathematics. Her training spans biology, biophysics, molecular biology, neuroscience, and applied mathematics (B.S. in Biology from Caltech in 2006, Ph.D. in Neuroscience from Princeton in 2012). Her group develops data-driven analytic methods that are applied to, and are inspired by, neuroscience questions. The common thread in this work is the development of methods that leverage the escalating scale and complexity of neural and behavioural data to find interpretable patterns. She has received the Alfred P. Sloan Research Fellowship in Neuroscience (2016), the UW Innovation Award (2017), and the AFOSR Young Investigator Program award (2018) for her work on sparse sensing with wing mechanosensory neurons.

Affiliation: University of Washington
Homepage: www.bingbrunton.com
@bingbrunton

IC3 – A short introduction to Bayesian descriptions of information processing in the brain

Lecturer: Chris Mathys
Fields: Cognitive neuroscience, computational modelling

Content

Assuming that the brain is an organ of prediction is one of the most fruitful approaches to understanding what it does and how it works. In this short introduction, we will look at how we can describe the brain’s activity as reflecting updates to predictions in response to new information. Beyond that, we will see that actions too can be understood as being driven by predictions. But how do we update predictions when our environment changes? If we want our predictions to be accurate and our uncertainty about them realistic, we need to observe the rules of probability. This means that our belief updates can be described in terms of Bayesian inference, and so can the brain’s neural activity, which we will see in several examples.
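As a toy illustration of such a belief update (my own example, not taken from the lecture), Bayes' rule says how a prior prediction should be revised when a new observation arrives: the posterior is the prior weighted by how well each hypothesis explains the observation.

```python
# Minimal Bayesian belief update for a binary hidden state.
# posterior = prior * likelihood / evidence, where the evidence normalizes
# over both hypotheses.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(state = True | observation) given P(state = True) = prior."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Example: prior belief that rain is coming is 0.3. Dark clouds are observed;
# they occur with probability 0.8 if rain is coming and 0.2 otherwise.
posterior = bayes_update(0.3, 0.8, 0.2)   # belief rises from 0.3 to about 0.63
```

Repeating this update as observations stream in is the basic picture behind describing neural activity as prediction updating; hierarchical models such as the one in the Mathys et al. (2011) reference extend it to learning under changing environments.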

Literature

  • Mathys, C., Daunizeau, J., Friston, K. J., & Stephan, K. E. (2011). A Bayesian foundation for individual learning under uncertainty. Frontiers in Human Neuroscience, 5, 39. https://doi.org/10.3389/fnhum.2011.00039
  • Iglesias, S., Mathys, C., Brodersen, K. H., Kasper, L., Piccirelli, M., den Ouden, H. E. M., & Stephan, K. E. (2013). Hierarchical Prediction Errors in Midbrain and Basal Forebrain during Sensory Learning. Neuron, 80(2), 519–530. https://doi.org/10.1016/j.neuron.2013.09.009

Lecturer

Chris Mathys

Chris Mathys is Associate Professor of Cognitive Science at Aarhus University. He originally trained as a physicist and has a PhD in Information Technology from ETH Zurich.

Affiliation: Interacting Minds Centre, Aarhus University
Homepage: https://chrismathys.com