IC18 – Implicit coordination in Multi-Agent Pathfinding

Lecturer: Bernhard Nebel
Fields: Artificial Intelligence/Planning

Content

When using a decentralized planning approach in a cooperative multi-agent system, you can either make use of explicit coordination, i.e., communication and negotiation during the planning phase, or rely on implicit coordination, i.e., completely decentralized planning with runtime coordination. We will study such an implicit coordination regime in the setting of multi-agent pathfinding. In this setting it is usually assumed that planning is performed centrally and that the destinations of the agents are common knowledge. We will drop both assumptions and analyze under which conditions the agents can be guaranteed to reach their respective destinations using implicitly coordinated plans without communication. Furthermore, we will analyze the computational costs associated with such a coordination regime. As it turns out, guarantees can be given provided the agents are of a certain type; however, the implied computational costs are quite severe.

Literature

  • Thomas Bolander, Thorsten Engesser, Robert Mattmüller and Bernhard Nebel. Better Eager Than Lazy? How Agent Types Impact the Successfulness of Implicit Coordination. In Proceedings of the Sixteenth Conference on Principles of Knowledge Representation and Reasoning (KR18), pp. 445-453. 2018. https://gki.informatik.uni-freiburg.de/papers/bolander-etal-kr18.pdf
  • Bernhard Nebel, Thomas Bolander, Thorsten Engesser and Robert Mattmüller. Implicitly Coordinated Multi-Agent Path Finding under Destination Uncertainty: Success Guarantees and Computational Complexity. Journal of Artificial Intelligence Research 64, pp. 497-527. 2019. https://gki.informatik.uni-freiburg.de/papers/nebel:et:al:jair-19.pdf
  • Roni Stern, Nathan R. Sturtevant, Ariel Felner, Sven Koenig, Hang Ma, Thayne T. Walker, Jiaoyang Li, Dor Atzmon, Liron Cohen, T. K. Satish Kumar, Roman Barták and Eli Boyarski. Multi-Agent Pathfinding: Definitions, Variants, and Benchmarks. In Proceedings of the Symposium on Combinatorial Search (SoCS 2019), pp. 151-159. 2019. https://www.aaai.org/ocs/index.php/SOCS/SOCS19/paper/view/18341/17457

Lecturer

Prof. Bernhard Nebel

Bernhard Nebel received his first degree in Computer Science (Dipl.-Inform.) from the University of Hamburg in 1980 and his Ph.D. (Dr. rer. nat.) from Saarland University in 1989. Between 1982 and 1993 he worked on different AI projects at the University of Hamburg, the Technical University of Berlin, ISI/USC, IBM Germany, and the German Research Center for AI (DFKI). From 1993 to 1996 he held an Associate Professor position (C3) at the University of Ulm. Since 1996 he has been a professor at Albert-Ludwigs-Universität Freiburg and head of the research group on Foundations of Artificial Intelligence. His research interests are in knowledge representation and reasoning and AI planning, and in how techniques from these areas can be applied in robotics. His achievements range from winning the RoboCup competition and building an autonomous football table to solving theoretical problems in the area of planning. His current research interest is mostly in multi-agent pathfinding, contributing to the theoretical foundations in this area and applying it in real-world scenarios such as automating the ground traffic on airports. Bernhard Nebel is an EurAI Fellow and an AAAI Fellow. He is also an elected member of the German National Academy of Sciences Leopoldina and the Academia Europaea. In 2019, he was named one of the 10 formative researchers of German AI.

Affiliation: Albert-Ludwigs-Universität Freiburg
Homepage: https://gki.informatik.uni-freiburg.de/~nebel/

IC23 – Considering the human in developing AI systems in an anti-Black environment

Lecturer: Christopher L. Dancy
Fields: Artificial Intelligence, Black Studies, Cognitive Science

Content

How can we develop AI systems that do not enact and enable existing anti-Blackness? What does it mean to consider anti-Blackness within the context of the design, development, and deployment of AI systems? I will approach these questions from a cognitive science and Black studies perspective. We will consider and discuss how the continual enacting of the “Man” genre of the human (see Wynter, 2003, 2015) fuels anti-Blackness in our AI systems, and we will use a hybrid cognitive architecture as a vehicle for thinking about the design of, and interaction with, those systems.

Literature

  • Dancy, C. L. (2019). A hybrid cognitive architecture with primal affect and physiology. IEEE Transactions on Affective Computing. doi:10.1109/TAFFC.2019.2906162
  • Wynter, S. (2003). Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, After Man, Its Overrepresentation – An Argument. CR: The New Centennial Review, 3(3), 257-337.
  • Wynter, S., & McKittrick, K. (2015). Unparalleled Catastrophe for Our Species? Or, to Give Humanness a Different Future: Conversations. In K. McKittrick (Ed.), Sylvia Wynter: On being human as praxis (pp. 9-89). Durham, NC, USA: Duke University Press.
  • Cave, S. (2020). The Problem with Intelligence: Its Value-Laden History and the Future of AI. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 29–35.

Lecturer

Dr. Christopher L. Dancy

Dr. Christopher L. Dancy is currently an Assistant Professor of Computer Science at Bucknell University. He received a Ph.D. in Information Sciences and Technology (with a focus on Cognitive Science and Artificial Intelligence) from Penn State, University Park, as well as a B.S. in Computer Science from Penn State, University Park. He has research and teaching interests in AI and cognitive science, and related interests in computational physiology, affective neuroscience, and emotion theory. Dr. Dancy also has research interests in AI & Society, particularly as it relates to anti-Blackness. He uses these perspectives for thinking about and creating computational agents in software, using these agents for behavior simulation, theory exploration, HCI, and systems engineering purposes, as well as to study agents and AI systems in the context of existing social structures.

Affiliation: Department of Computer Science, Bucknell University
Homepage: https://eg.bucknell.edu/~cld028/

IC16 – Testing for Neural Correlates of Consciousness

Lecturer: Sascha Benjamin Fink
Fields: Philosophy, Neuroscience, Psychology

Content

The search for neural correlates of consciousness (NCCs) is at the heart of the contemporary science of consciousness. What an NCC is supposed to be has varied over the course of the last 30 years, but most researchers want more from NCCs than a record of a merely statistical relation between neural and conscious goings-on: most search for where consciousness, this fundamentally subjective phenomenon, has its footing in the objective, natural world. Any NCC can therefore be given a stronger reading: THIS is what makes a neural event come with consciousness. But such stronger, more general hypotheses ought to be tested. In this presentation, I sketch some of the historical development that got us here and discuss four basic kinds of tests (the Which-, When-, What-, and How-Test), as well as their respective advantages and shortcomings.

Literature

  • Chalmers, David J. (2000). What is a neural correlate of consciousness? In Thomas Metzinger (ed.), Neural Correlates of Consciousness: Empirical and Conceptual Questions. MIT Press. pp. 17–39.
  • Fink, Sascha Benjamin (2016). A Deeper Look at the “Neural Correlate of Consciousness”. Frontiers in Psychology 7.
  • Fink, Sascha Benjamin (Ed.) (2020). The Neural Correlates of Consciousness. Special issue of Philosophy and the Mind Sciences.
  • Aru, J., Bachmann, T., Singer, W., & Melloni, L. (2012). Distilling the neural correlates of consciousness. Neuroscience & Biobehavioral Reviews, 36(2), 737-746.
  • Metzinger, T. (Ed.). (2000). Neural correlates of consciousness: Empirical and conceptual questions. MIT press.
  • Noë, A., & Thompson, E. (2004). Are there neural correlates of consciousness?. Journal of Consciousness Studies, 11(1), 3-28.
  • Block, N. (2005). Two neural correlates of consciousness. Trends in cognitive sciences, 9(2), 46-52.
  • Crick, F., & Koch, C. (1990). Towards a neurobiological theory of consciousness. Seminars in the Neurosciences, 2, 263-275.

Lecturer

Jun.-Prof. Dr. Sascha Benjamin Fink received his PhD at the University of Osnabrück’s Institute of Cognitive Science and is currently Juniorprofessor for Neurophilosophy at Otto-von-Guericke-Universität Magdeburg. He works on foundational issues in the neuroscience of consciousness, transformative experiences, the psychology of vagueness, and the role of paradoxes in the sciences.

Affiliation: Otto-von-Guericke-Universität Magdeburg; Center for Behavioral Brain Sciences; Graduate School “Extrospection”
Homepage: www.finks.de

IC11 – Your Wit Is My Command: Automating Humour with Computational Creativity

Lecturer: Tony Veale
Fields: Artificial Intelligence

Content

Until quite recently, AI was a scientific discipline defined more by its portrayal in science fiction than by its actual technical achievements. Real AI systems are now catching up to their fictional counterparts, and are as likely to be seen in news headlines as on the big screen. Yet as AI outperforms people on tasks that were once considered yardsticks of human intelligence, one area of human experience still remains unchallenged by technology: our sense of humour.
This is not for want of trying, as this course will show. The true nature of humour has intrigued scholars for millennia, but AI researchers can now go one step further than philosophers, linguists and psychologists once could: by building computer systems with a sense of humour, capable of appreciating the jokes of human users or even of generating their own, AI researchers can turn academic theories into practical realities that amuse, explain, provoke and delight.

Objectives
This course will use the ideas and achievements of AI to explore what it means to have a sense of humour, and moreover, to understand what it is to not have one. It will challenge the archetype of the humourless machine in popular culture, to celebrate what science fiction gets right and to learn from what it gets wrong. It will make a case for the necessity of a computational understanding of humour, to better understand ourselves and to better construct machines that are more flexible, more understanding, and more willing to laugh at their own limitations.

Course content
The course will comprise four lectures, which will explore the following topics.
Newspaper personal columns are routinely filled with people seeking partners with a good sense of humour (GSOH), with many rating this as highly as physical fitness or physical appearance. Yet what does it mean to have a sense of humour? Conversely, what does it mean to have NO sense of humour, and how might we imbue a humourless machine with a capacity for wit and a flair for the absurd? We begin by unpacking these questions, to suggest some initial answers and models.

So, for example, what would it mean for a computer to have a numeric humour setting, as in the case of the robot TARS in the film Interstellar? Can a machine’s sense of humour be reduced to a single number or parameter setting? Is humour a modular ability? Can it be gifted to computers as a bolt-on unit like Commander Data’s “humour chip” in Star Trek, or is it an emergent phenomenon that arises from complex interactions among all our other faculties? Might humour emerge naturally within complex AI systems that were never explicitly programmed for it, as in the mischievous supercomputer Mike in Robert Heinlein’s The Moon Is a Harsh Mistress or the sarcastic droid K2SO in Rogue One: A Star Wars Story?

This course will survey and critique the competing humour theories that scholars have championed through the ages, enlarging on recurring themes (incongruity, relief, superiority) while considering the amenability of each to computational modelling. What is it that these theories are really explaining, and which comes closest to capturing the elusive essence of humour?

The centrality of incongruity in modern theories demands that this concept be given a special focus. So we will unpack its many meanings to show how our understanding of incongruity can be as multifaceted as the idea of humour itself. Popular myths about the brittleness of machines in the face of the incongruous and the unexpected will be unpicked and debunked as we explore how machines might deliberately seek out and invent incongruities of their own.

But computational humour is still in its infancy, and it is no coincidence that the mode of humour for which machines show the greatest aptitude is the one humans embrace at a very early age: puns. Puns vary in wit and sophistication, but the simplest require only an ear for sound similarity and a disregard for the consequences of replacing a word with its phonetic doppelgänger. The challenge for AI systems is to progress, as children do, from these simple beginnings to modes of ever greater conceptual sophistication.
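
To make that pun recipe concrete, here is a deliberately naive sketch, assuming nothing beyond Python's standard library. It is not a system from the course: difflib's string similarity is used as a crude stand-in for a real phonetic model (which would normally consult a pronunciation dictionary), and the function name, topic words and example phrase are all invented for illustration.

```python
# Naive pun-by-substitution sketch: an ear for (string) similarity plus a
# disregard for the consequences of swapping in a sound-alike word.
import difflib

def naive_pun(phrase, topic_words, cutoff=0.75):
    """Swap each word for a similar-sounding topic word, if one is close enough."""
    out = []
    for word in phrase.split():
        match = difflib.get_close_matches(word.lower(), topic_words, n=1, cutoff=cutoff)
        out.append(match[0] if match else word)  # keep the word if nothing sounds alike
    return " ".join(out)

# Steering a stock phrase towards a (hypothetical) food theme:
print(naive_pun("a friend in need is a friend indeed", ["fried", "fish", "squid"]))
# -> "a fried in need is a fried indeed"
```

A real pun generator would, of course, also have to check that the substitution lands where the original word is still recoverable and the clash of meanings is actually amusing, which is where the conceptual sophistication mentioned above begins.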

To do so, is it possible to capture the essence of jokes in a mathematical formula, much as physicists have done for electromagnetism and gravity? Do jokes have quantifiable features that we and our machines can intuitively appreciate? Can we build statistical models to characterize the signature qualities of a humorous artefact, so that machines can learn to tell funny from unfunny for themselves? And what do these measurable qualities say about humour and about us?
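
As a rough sketch of what such a statistical model might look like in practice (this is not a method taught in the course), funny-versus-unfunny can be framed as ordinary supervised text classification. Everything below is illustrative: the tiny labelled examples are invented, and a serious attempt would need thousands of human-rated texts and far richer features than word counts.

```python
# Minimal "funny vs. unfunny" classifier sketch: bag-of-words features plus
# logistic regression. The toy corpus below is made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I used to be a banker but I lost interest",        # pun -> funny
    "Velcro: what a rip-off",                            # pun -> funny
    "The meeting has been moved to three o'clock",       # plain -> unfunny
    "Please remember to submit the quarterly report",    # plain -> unfunny
]
labels = [1, 1, 0, 0]  # 1 = funny, 0 = unfunny (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new text by how much it resembles the "funny" class.
print(model.predict_proba(["I'm reading a book about anti-gravity: impossible to put down"]))
```

Whatever the learned weights end up capturing, they are at best a proxy for the signature qualities the paragraph above asks about, which is precisely what makes the question interesting.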

Finally, however we slice it, conflict sits at the heart of humour, whether it is a conflict in meaning, attitude, expectation or perspective. Double acts personalize this conflict by recognizing the different roles a comic can play. Computers can likewise play multiple roles in the creation of humour, from rigid “straight man” to absurdist provocateur, in double acts with humans and with other machines. So we will explore the ways in which smart machines can contribute to the social emergence of humour, either as embodied robots or as disembodied software.

Literature

  • Computational humour studies is an established field that has produced a range of academic books, from Victor Raskin’s Semantic Mechanisms of Humor (1985, one of the first) to the more recent Primer of Humor Research (with chapters from computational humorists). Non-computational humour researchers, such as Elliott Oring, have also written accessible books on humour, such as Engaging Humor, while the computer scientist Graeme Ritchie has written a pair of well-received academic books on humour. Comedians and comedy professionals have also written some noteworthy books on humour, with individual chapters that focus on computational humour or that offer algorithmic insights into the author’s own comedy production strategies. Toplyn’s Comedy Writing for Late-Night TV offers a beginner’s guide to humour production that is frequently schematic in style. The Naked Jape, by Jimmy Carr and Lucy Greeves, considers humour more broadly, but also offers a chapter on computational models and the people who build them. I will quote from each of these sources as needed.

Lecturer

Prof. Dr. Tony Veale

Tony Veale is an associate professor in the School of Computer Science at UCD (University College Dublin), Ireland. He has worked in AI research for 25 years, in academia and in industry, with a special emphasis on humour and linguistic creativity. He is the author of the 2012 monograph Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity (Bloomsbury), co-author of the 2016 textbook Metaphor: A Computational Perspective (Morgan & Claypool), and co-author of 2018’s Twitterbots: Making Machines That Make Meaning (MIT Press). He led the European Commission’s coordination action on Computational Creativity (named PROSECCO) and has collaborated on international research projects with an emphasis on computational humour and imagination, such as the EC’s What-If Machine (WHIM) project. He runs a website dedicated to explaining AI with humour at RobotComix.com. He is active in the field of Computational Creativity and is currently the elected chair of the international Association for Computational Creativity (ACC).

Affiliation: University College Dublin
Homepage: http://Afflatus.UCD.ie

PrfC3 – Curiosity, Risk, and Reward in the Academic Job Search

Lecturer: Emily King
Fields: Soft skills

Content

So you’ve decided that you want a career or at least a job in academia; what’s next?  Imposter Syndrome is something that makes applying to lots of interesting jobs seem like a risk; when should you take the time to apply?  How can you leverage curiosity to broaden your job search? This single session course will cover some of the basics of the academic job search: how to decide whether to apply for a particular job and then how to make your application pop.  As the cover letter is the first thing that most hiring committees see, we will focus on how to make yours strong.  CVs, research statements, teaching portfolios, and interview questions will also be touched on.

This class will be discussion-based.  Please come with your questions! Also, if you want to submit a cover letter to be (constructively!) critiqued by the participants, please send it to me over email.

Objectives

The participants should leave with a better idea of how to choose where to apply and then how to successfully apply.  (Reward!)

Literature

N/A

Lecturer

Emily King is a professor of mathematics at Colorado State University, reigning IK Powerpoint Karaoke champion, an avid distance runner, and a lover of slow food / craft beer / third wave coffee.  Her research interests include algebraic and applied harmonic analysis, signal and image processing, data analysis, and frame theory.  In layman’s terms, she looks for the best building blocks to represent data, images, and even theoretical mathematical objects to better understand them.  She also has a tattoo symbolizing most of her favorite classes of mathematical objects.  If you are curious, you should ask her about it over a beer.

Affiliation: Colorado State University
Website: https://www.math.colostate.edu/~king/

MC5 – Low Complexity Modeling in Data Analysis and Image Processing

Lecturer: Emily King
Fields: Mathematical methods, data analysis, machine learning, image processing, harmonic analysis

Content

Are you curious about how to extract important information from a data set?  Very likely, you will be rewarded if you use some sort of low complexity model in your analysis and processing.  A low complexity model is a representation of data which is in some sense much simpler than what the original format of the data would suggest. For example, every time you take a picture with a phone, about 80% of the data is discarded when the image is saved as a JPEG file.  The JPEG compression algorithm works because discrete cosine functions yield a low complexity model for natural images whose approximation errors largely escape human perception. As another example, linear bottlenecks, pooling, pruning, and dropout are all ways of enforcing a low complexity model on neural networks to prevent overfitting. Some benefits of low complexity models include the following (a small numerical sketch of the JPEG idea follows the list):

  • Approximating data via a low complexity model often highlights overall structure of the data set or key features.
  •  Appropriately reducing the complexity of data as a pre-processing step can speed up algorithms without drastically affecting the outcome.
  • Reducing the complexity of a system during a training task can prevent overfitting.
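
Here, as promised above, is a minimal numpy sketch of the low complexity idea behind JPEG. It is a simplification, not the actual codec: a 1-D test signal stands in for the 8×8 pixel blocks JPEG transforms, and coefficients are simply thresholded by magnitude rather than quantized perceptually.

```python
# Express a signal in an orthonormal discrete cosine basis, keep only the
# largest coefficients, and reconstruct: a low complexity approximation.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows = cosine frequencies)."""
    k = np.arange(n)[:, None]   # frequency index
    m = np.arange(n)[None, :]   # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)     # scale first row so that C @ C.T = identity
    return C

n = 64
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.cos(2 * np.pi * 7 * t)

C = dct_matrix(n)
coeffs = C @ signal                          # analysis: project onto cosine basis
keep = 8                                     # low complexity model: 8 of 64 numbers
small = np.argsort(np.abs(coeffs))[:-keep]   # indices of the smallest coefficients
coeffs_sparse = coeffs.copy()
coeffs_sparse[small] = 0.0                   # discard roughly 87% of the data
approx = C.T @ coeffs_sparse                 # synthesis: reconstruct from few terms

print("relative error:", np.linalg.norm(signal - approx) / np.linalg.norm(signal))
```

Keeping 8 of 64 coefficients is exactly the kind of "discard most of the data" step the JPEG example describes; for a signal that is well matched to the cosine basis, the reconstruction error stays small.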

The course will begin with an introduction to applied harmonic analysis, touching on pertinent topics from linear algebra, Fourier analysis, time-frequency analysis, and wavelet/shearlet analysis.  Then an overview of low complexity models will be given, followed by specific discussions of:

  • Linear dimensionality reduction (principal component analysis, Johnson-Lindenstrauss embeddings)
  • Sparsity and low rank assumptions (LASSO, l^p norms, k-means clustering, dictionary learning)
  • Nonlinear dimensionality reduction / manifold learning (Isomap, Locally Linear Embedding, local PCA)
  • Low complexity models in neural networks (linear bottlenecks, pooling, pruning, dropout, generative adversarial networks, Gaussian mean width)

Objectives

The course aims to provide participants with a good understanding of basic concepts and applications of both classical mathematical tools, like the Fourier or wavelet transform, and more cutting-edge methods, like dropout in neural networks.  A variety of applications and algorithms will be presented.  Participants should finish the course with a clearer idea of when and how to use various approaches in data analysis and image processing.

Literature

The linear algebra chapter of the Deep Learning textbook by Goodfellow, Bengio, and Courville (MIT Press):

http://www.deeplearningbook.org/contents/linear_algebra.html

Lecturer

Emily King is a professor of mathematics at Colorado State University, reigning IK Powerpoint Karaoke champion, an avid distance runner, and a lover of slow food / craft beer / third wave coffee.  Her research interests include algebraic and applied harmonic analysis, signal and image processing, data analysis, and frame theory.  In layman’s terms, she looks for the best building blocks to represent data, images, and even theoretical mathematical objects to better understand them.  She also has a tattoo symbolizing most of her favorite classes of mathematical objects.  If you are curious, you should ask her about it over a beer.

Affiliation: Colorado State University
Website: https://www.math.colostate.edu/~king/


MC1 – Applications of Bayesian Inference and the Free Energy Principle

Lecturer: Christoph Mathys
Fields: Bayesian inference, free energy principle, active inference, computational neuroscience, time series

Content

We will start with a look at the fundamentals of Bayesian inference, model selection, and the free energy principle. We will then look at ways to reduce Bayesian inference to simple prediction adjustments based on precision-weighted prediction errors. This will provide a natural entry point to the field of active inference, a framework for modelling and programming the behaviour of agents negotiating their continued existence in a given environment. Under active inference, an agent uses Bayesian inference to choose its actions such that they minimize the free energy of its model of the environment. We will look at how an agent can infer the state of the environment and its own internal control states in order to generate appropriate actions.
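
To make the "precision-weighted prediction error" idea concrete, here is a minimal sketch of the simplest case only: a Gaussian belief updated by noisy Gaussian observations. It is not the hierarchical Gaussian filter or the active-inference machinery covered in the course, and the numbers below are arbitrary illustrations.

```python
# Conjugate Gaussian case: the exact Bayesian update reduces to adjusting the
# prediction by a precision-weighted prediction error.
def update_belief(mu, pi_prior, x, pi_obs):
    """Update a Gaussian belief N(mu, 1/pi_prior) after observing x with precision pi_obs."""
    prediction_error = x - mu                      # how far off the prediction was
    learning_rate = pi_obs / (pi_prior + pi_obs)   # relative precision = weight of the error
    mu_new = mu + learning_rate * prediction_error
    pi_new = pi_prior + pi_obs                     # beliefs get more precise with more data
    return mu_new, pi_new

# A noisy quantity observed repeatedly; the belief converges towards it.
mu, pi = 0.0, 1.0                                  # initial belief: mean 0, precision 1
for x in [1.2, 0.8, 1.1, 0.9, 1.0]:
    mu, pi = update_belief(mu, pi, x, pi_obs=4.0)
    print(f"belief mean {mu:.3f}, precision {pi:.1f}")
```

The course builds on exactly this structure: hierarchies of such updates, with the precisions themselves inferred, and actions chosen to minimize the free energy of the agent's model.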

Objectives

  • To understand the reduction of Bayesian inference to precision-weighting of prediction errors
  • To understand the free energy principle and the modelling framework of active inference
  • To know the principles of Bayesian inference and model selection, and to understand their application to a given data set

Literature

  • Friston, K. J., Daunizeau, J., & Kiebel, S. J. (2009). Reinforcement Learning or Active Inference? PLoS ONE, 4(7), e6421.
  • Mathys, C., Lomakina, E.I., Daunizeau, J., Iglesias, S., Brodersen, K.H., Friston, K.J., & Stephan, K.E. (2014). Uncertainty in perception and the Hierarchical Gaussian Filter. Frontiers in Human Neuroscience, 8, 825.
  • Mathys, C., Daunizeau, J., Friston, K.J., & Stephan, K.E. (2011). A Bayesian foundation for individual learning under uncertainty. Frontiers in Human Neuroscience, 5, 39.
  • Friston, K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293–301.

Lecturer

Christoph Mathys is Associate Professor of Cognitive Science at Aarhus University. Originally a theoretical physicist, he worked in the IT industry for several years before doing a PhD in information technology at ETH Zurich and a master’s degree in psychology and psychopathology at the University of Zurich. During his graduate studies, he developed the hierarchical Gaussian filter (HGF), a generic hierarchical Bayesian model of inference in volatile environments. Based on this, he develops and maintains the HGF Toolbox, a Matlab-based free software package for the analysis of behavioural and neuroimaging experiments. His research focus is on the hierarchical message passing that supports inference in the brain, and on failures of inference that lead to psychopathology.

Affiliation: SISSA
Website: https://chrismathys.com


ET1 – How to Know

Lecturer: Celeste Kidd
Fields: Developmental psychology, cognitive science (and a tiny bit of neuroscience)

Content

This evening lecture will discuss Kidd’s research about how people come to know what they know. The world is a sea of information too vast for any one person to acquire entirely. How then do people navigate the information overload, and how do their decisions shape their knowledge and beliefs? In this talk, Kidd will discuss research from her lab about the core cognitive systems people use to guide their learning about the world—including attention, curiosity, and metacognition (thinking about thinking). The talk will discuss the evidence that people play an active role in their own learning, starting in infancy and continuing through adulthood. Kidd will explain why we are curious about some things but not others, and how our past experiences and existing knowledge shape our future interests. She will also discuss why people sometimes hold beliefs that are inconsistent with evidence available in the world, and how we might leverage our knowledge of human curiosity and learning to design systems that better support access to truth and reality.

Objectives

I hope to introduce students to the approach of combining computational models with behavioural experiments in order to develop robust theories of the systems that govern human cognition, especially attention, curiosity, and learning. We will take a very high-level conceptual approach to these topics, and I also hope students will leave understanding something useful about how people solve the problem of sampling from the world in order to understand something profound about it. I hope students will leave with a better understanding about how a person’s past experiences and expectations combine in a way that influences their subsequent sampling decisions and beliefs.    

Literature

Optional reading: Kidd, C., & Hayden, B. Y. (2015). The psychology and neuroscience of curiosity. Neuron, 88(3), 449-460.

https://www.cell.com/neuron/fulltext/S0896-6273(15)00767-9

Lecturer

Celeste Kidd is an Assistant Professor of Psychology at the University of California, Berkeley. Her lab investigates learning and belief formation using a combination of computational models and behavioural experiments. She earned her PhD in Brain and Cognitive Sciences at the University of Rochester, then held brief visiting fellow positions at Stanford’s Center for the Study of Language and Information and MIT’s Department of Brain and Cognitive Sciences before starting up her own lab. Her research has been funded by the National Science Foundation, the Jacobs Foundation, the Templeton Foundation, the Human Frontier Science Program, Google, and the Berkeley Center for New Media. Kidd also advocates for equity in educational opportunities, work for which she was named one of TIME Magazine’s 2017 Persons of the Year as one of the “Silence Breakers”.

Affiliation: University of California, Berkeley
Website: www.kiddlab.com

 

PrfC2 – Ethics in Science and Good Scientific Practice

Lecturer: Hans-Joachim Pflüger
Fields: Science in general, Neuroscience in particular

Content

This course will focus on aspects of ethics in science and on the fact that “scientific curiosity” is not without limits, although ethical standards may vary across cultures. We shall focus on the relatively new field of neuroethics. Another important topic will be good scientific practice: its implications for how science is performed and the kinds of rules that apply. Following these rules of good scientific practice is a key requirement for scientists worldwide, as well as for any autonomous artificially intelligent system, and is thus one of the pillars of trust in scientific results. The participants will be confronted with examples of fraud in science, ethical problems and decisions, as well as good scientific practice. The outcomes of such ethical decisions and questions around good scientific practice will be discussed.

  1. Ethics in (Neuro)Science
  2. Fraud in Science: Is it a problem? Introduction to a few exemplary cases. Task for individual participants or groups (depending on the number of attendees): an internet search for cases of fraud, offenders, and reasons for committing fraud.
  3. Good Scientific Practice: Are there obligatory rules for performing experiments, keeping records, and storing data? Are there any laws that have to be considered?
  4. Publishing and Final Discussion: What are hybrid journals and open access journals, and what is the future of publishing? Are there also “dark sides” to open access?

Objectives

The course should teach participants the rules of good scientific practice, how to implement them in their own work and how to teach others. The participants will also be made aware of ethical problems that may arise with designing and carrying out experiments on animals or humans.

Literature

  • Ahlgrim et al. (2019). Prodromes and Preclinical Detection of Brain Diseases: Surveying the Ethical Landscape of Predicting Brain Health. eNeuro, 6(4), ENEURO.0439-18.2019, 1–11.
  • Greely et al. (2018). Neuroethics Guiding Principles for the NIH BRAIN Initiative. The Journal of Neuroscience, 38(50), 10586–10588.
  • Global Neuroethics Summit Delegates, Rommelfanger et al. (2018). Neuroethics Questions to Guide Ethical Research in the International Brain Initiatives. Neuron, 100, 19–36.
  • Ramos et al. (2018). Neuroethics and the NIH BRAIN Initiative. Journal of Responsible Innovation, 5(1), 122–130. doi:10.1080/23299460.2017.1319035
  • Kreutzberg, G.W. (2004). The Rules of Good Science. EMBO Reports, 5, 330–332.

Lecturer

Hans-Joachim Pflüger is a retired professor of Functional Neuroanatomy/Neurobiology at Freie Universität Berlin who is still active in basic neurobiological research, where he is interested in the neuronal basis of locomotory behaviour in insects, including pattern generation, sensory reflex circuits and neuromodulation. He studied Biology and Chemistry in Stuttgart, obtained his doctoral degree from the University of Kaiserslautern and was a postdoc in Cambridge, UK. He was an assistant professor at the Universities of Bielefeld and Konstanz before his habilitation in Konstanz in 1985. He moved to Freie Universität Berlin in 1987 and spent several sabbaticals in Tucson (Univ. of Arizona) and Tempe (Arizona State Univ.), both USA, and in Christchurch (Univ. of Canterbury), New Zealand. He was a visiting scientist at Ben Gurion University, Beer Sheva, Israel, and received the Ernst Bresslau Guest Professorship of the Universität zu Köln. He has served on several DFG reviewing panels, was treasurer and president of the German Neuroscience Society (NWG), and was also treasurer for both FENS (Federation of European Neuroscience Societies) and IBRO (International Brain Research Organization).

Affiliation: Freie Universität Berlin
Website: https://www.bcp.fu-berlin.de/biologie/arbeitsgruppen/neurobiologie/ag_pflueger/mitarbeiter/leiter/pflueger/index.html


MC2 – Symbolic Reasoning within Connectionist Systems

Lecturer: Klaus Greff
Fields: Artificial Intelligence / Neural Networks; draws upon Theoretical Neuroscience and Cognitive Psychology

Content

Our brains effortlessly organize our perception into objects, which they use to compose flexible mental models of the world. Objects are fundamental to our thinking, and our brains are so good at forming them from raw perception that it is hard to notice anything special happening at all. Yet perceptual grouping is far from trivial and has puzzled neuroscientists, psychologists and AI researchers alike.

Current neural networks show impressive capacities in learning perceptual tasks but struggle with tasks that require a symbolic understanding. This ability to form high-level symbolic representations from raw data, I believe, is going to be a key ingredient of general AI. 

During this course, I will try to share my fascination with this important but often neglected topic. 

Within the context of neural networks, we will discuss the key challenges and how they may be addressed. Our main focus will be the so-called Binding Problem and how it prevents current neural networks from effectively dealing with multiple objects in a symbolic fashion.
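
As a tiny illustration of why this matters (my own toy example, not material from the course), consider encoding a scene by simply adding the distributed feature vectors of its objects: the information about which colour belongs to which shape is lost in the sum, which is one classic way of stating the binding problem.

```python
# Toy "superposition" illustration: summing feature vectors of two objects
# makes different scenes indistinguishable.
import numpy as np

# simple distributed features: [red, blue, square, circle]
red    = np.array([1, 0, 0, 0])
blue   = np.array([0, 1, 0, 0])
square = np.array([0, 0, 1, 0])
circle = np.array([0, 0, 0, 1])

scene_a = (red + square) + (blue + circle)   # a red square and a blue circle
scene_b = (blue + square) + (red + circle)   # a blue square and a red circle

print(scene_a, scene_b, np.array_equal(scene_a, scene_b))  # identical -> ambiguity
```

Any workable solution has to keep track of which features belong together; the three aspects explored in the sessions below are different facets of exactly that problem.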

After a general overview in the first session, the next lectures will explore in depth three different aspects of the problem:

Session 2 (Representation) focuses on the challenges regarding distributed representations of multiple objects in artificial neural networks and the brain.

Session 3 (Segregation) is about splitting raw perception into objects, and we will discuss what objects even are in the first place.

Session 4 (Composition) will bring things back together and show how different objects can be related and composed into complex structures. 

Objectives

  • Develop an appreciation for the subtleties of object perception.
  • Understand the importance of symbol-like representations in neural networks and how they relate to generalization.
  • Become familiar with the binding problem and its three aspects: representation, segregation, and composition.
  • Get an overview of the challenges and available approaches for each subproblem.

Literature

The course is a non-technical high-level overview, so only basic familiarity with neural networks is assumed. Optional background material: 

Lecturer

Klaus Greff studied computer science at the University of Kaiserslautern and is currently a PhD candidate under the supervision of Prof. Jürgen Schmidhuber. His main research interest revolves around the unsupervised learning of symbol-like representations in neural networks (the content of this course).

Previously, Klaus worked on recurrent neural networks and the training of very deep neural networks; he is also the maintainer of the popular experiment management framework Sacred.

Affiliation: IDSIA
Website: https://people.lu.usi.ch/greffk/publications.html