RC1 – Homeostatically driven behavioral architectures: How to model biological organisms throughout their life cycle

Lecturer: Panagiotis Sakagiannis
Fields: Behavioral modeling, Systems Neuroscience, Robotics, Computational Ecology

Content

Why do organisms behave? When do they take risks, and when do rewards matter to them? What is the nervous system’s role in a successful life cycle, and how does it relate to its evolutionary origins? In this course, we adopt a behavioral modeler’s view, integrating insights from systems neuroscience, ecological energetics, and layered robotic architectures in order to sketch a framework for dynamic mechanistic models of biological behavior. We address the advantages and shortcomings of region-specific, biologically realistic neurocomputational models, of agent-based ecological simulations, and of optimality-driven intelligent artificial agents, and we discuss ways of combining these powerful computational tools with a focus on the homeostasis of the persisting individual. Nested behaviors, recurrent neural networks, and entangled spatiotemporal scales are our main modeling challenges. An intensively studied organism, the Drosophila fruit fly larva, will serve as our model agent for the whole course.
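To make this concrete, below is a minimal toy sketch in Python of a homeostatically driven behavioral loop, loosely in the spirit of Dynamic Energy Budget models: an internal energy reserve decays with metabolic cost, and the agent switches between exploring and feeding so as to keep the reserve within a viable band. The rates, thresholds, and behavioral labels are illustrative assumptions, not course material.

    # Toy homeostatic behavioral loop: all parameters are invented for
    # illustration; a real DEB-style model tracks reserve dynamics in far
    # greater mechanistic detail.
    def step(reserve, behavior, food_found, dt=0.1):
        """Advance the agent one time step; return (reserve, behavior)."""
        reserve -= 0.5 * dt                  # baseline metabolic cost
        if behavior == "feed" and food_found:
            reserve += 2.0 * dt              # assimilation while feeding
        # Homeostatic switching rule: feed when depleted, explore when sated.
        if reserve < 0.3:
            behavior = "feed"
        elif reserve > 0.8:
            behavior = "explore"
        return reserve, behavior

    reserve, behavior = 0.6, "explore"
    for t in range(50):
        reserve, behavior = step(reserve, behavior, food_found=(t % 5 == 0))
    print(f"final reserve={reserve:.2f}, behavior={behavior}")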

Objectives

Participants will benefit from an introduction to the diverse scientific fields studying behavior and homeostasis, along with their computational tools. Philosophical debates on the normativity of behavior and on mechanistic explanation will be touched upon in the face of pressing modeling decisions. The valuable interaction between modelers and experimentalists will be highlighted. Finally, the delicate balance between detail and abstraction in behavioral modeling will be discussed interactively.

Literature

[1] S. A. L. M. Kooijman, Dynamic Energy Budget Theory for Metabolic Organisation, 3rd ed. Cambridge, UK: Cambridge University Press, 2010.
[2] T. J. Prescott, P. Redgrave, and K. Gurney, “Layered control architectures in robots and vertebrates,” Adaptive Behavior, vol. 7, no. 1, pp. 99–127, 1999.
[3] M. J. Almeida-Carvalho et al., “The Ol1mpiad: Concordance of behavioural faculties of stage 1 and stage 3 Drosophila larvae,” J. Exp. Biol., vol. 220, no. 13, pp. 2452–2475, 2017.
[4] A. Campos-Candela, M. Palmer, S. Balle, A. Álvarez, and J. Alós, “A mechanistic theory of personality-dependent movement behaviour based on dynamic energy budgets,” Ecol. Lett., vol. 22, no. 2, pp. 213–232, 2019.
[5] W. Bechtel and A. Abrahamsen, “Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science,” Stud. Hist. Philos. Sci. Part A, vol. 41, no. 3, pp. 321–333, 2010.

Lecturer

Panagiotis Sakagiannis:
Transitions across scientific fields are signs of both uneasiness and curiosity. In my case, a dual path can be traced: medicine and clinical neurology on one side, mathematics and computational neuroscience on the other, and, most recently, a PhD in insect behavioral modeling. Always seeking the broad picture when confronted with biological detail, and the operationally useful formalization when attending philosophical debates, I remain agnostic as to my true inclination.

RC2 – Can patterns of word usage tell us what lemon and moon have in common? Analyzing the semantic content of distributional semantic models

Lecturer: Pia Sommerauer
Fields: Computational linguistics, cognitive linguistics

Content

Can the patterns of textual contexts in which words appear tell you (or your model) that both a lemon and the moon are described as yellow and round, but differ with respect to (almost) everything else? In other words: how much information about concepts is encoded in patterns of word usage (i.e., distributional data)?

In this course, I will take stock of what we know about the semantic content encoded in data-derived meaning representations (e.g., Word2Vec), which are commonly used in Natural Language Processing and cognitive modeling (e.g., metaphor interpretation).

I will focus on how we can find out whether (and what) semantic knowledge they represent, beyond a general sense of semantic word similarity and relatedness. Drawing on methods from the area of neural network interpretability, I will discuss how we can “diagnose” semantic knowledge to find out whether a model can in fact distinguish flying from non-flying birds, or tell you what lemons and the moon have in common.
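As a concrete illustration, here is a minimal Python sketch of a diagnostic classifier probe, one common methodology in this area (the word list, placeholder vectors, and property labels below are assumptions for the example, not the course's actual setup): a simple classifier is trained to predict a semantic property from word vectors, and above-chance accuracy on held-out words suggests the property is encoded in the vectors.

    # Probe whether a semantic property (here: "can fly") is linearly
    # decodable from word vectors.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Placeholder 50-d vectors; in practice these would come from a trained
    # model, e.g. gensim's KeyedVectors.load_word2vec_format(...).
    words = ["sparrow", "eagle", "penguin", "ostrich", "robin", "kiwi"]
    X = rng.normal(size=(len(words), 50))
    y = np.array([1, 1, 0, 0, 1, 0])    # 1 = flying bird, 0 = non-flying

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=2,
                                              random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out probe accuracy:", probe.score(X_te, y_te))

With random placeholder vectors the accuracy hovers around chance; the interesting question is whether it rises above chance for real distributional vectors.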

Objectives

  • Become familiar with linguistic theories of the semantics encoded in linguistic context, and with what we can expect from it
  • Understand how distributional word representations are created, evaluated, and used (with practical examples; a minimal sketch follows this list)
  • Understand why distributional word representations provide rich information for machine learning systems, but at the same time do not allow for straightforward semantic interpretation
  • Understand the challenges of diagnostic methods and how they can be dealt with
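As a minimal sketch of how such representations are created (assuming the gensim library; the toy corpus below is a placeholder), a Word2Vec model can be trained in a few lines:

    # Train a tiny Word2Vec model on a toy corpus (gensim >= 4.0 API).
    from gensim.models import Word2Vec

    sentences = [["the", "lemon", "is", "yellow", "and", "round"],
                 ["the", "moon", "is", "yellow", "and", "round"],
                 ["the", "lemon", "tastes", "sour"]]
    model = Word2Vec(sentences=sentences, vector_size=50, window=2,
                     min_count=1, epochs=50, seed=0)
    # Each word now has a dense vector; similarity reflects shared contexts.
    print(model.wv.similarity("lemon", "moon"))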

Literature


Lecturer

Pia Sommerauer is a PhD student at the Computational Lexicology and Terminology Lab at Vrije Universiteit Amsterdam. Her research focuses on the type of semantic information captured by distributional representations of word meaning, and on whether such representations could be used for semantic reasoning. Together with her supervisors Antske Fokkens and Piek Vossen, she has authored papers on this topic at venues specializing in lexical semantics and model interpretability.

Website: https://piasommerauer.github.io/

RC3 – Representing Uncertainties in Artificial Neural Networks

Lecturer: Kai Standvoss
Fields: Computational Neuroscience, Artificial Intelligence

Content

Tracking uncertainty is a key capacity of an intelligent system that must interact successfully with a changing environment. Representations of uncertainty are needed to optimally weigh sensory evidence against prior expectations, to adjust learning rates accordingly, and, importantly, to trade off exploitation against exploration. Uncertainty is thus a crucial component of curiosity- and reward-driven behavior. Additionally, calibrated uncertainty estimates matter both for human interaction and for reliable artificial systems. However, it is not yet well understood how uncertainties are tracked in the brain.
Bayesian views on Deep Learning offer a way to specify distributions over model parameters and to learn generative models of the data-generating process. Thereby, different levels and kinds of uncertainty can be represented. In this course, we will discuss different Bayesian methods for tracking uncertainty in neural networks and speculate about possible links to neuroscience.
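For illustration, here is a minimal PyTorch sketch of one widely used method, Monte Carlo dropout (cf. Kendall & Gal, 2017, in the literature list below). The architecture and number of forward passes are arbitrary choices for this example: dropout is kept active at prediction time, and the spread over repeated stochastic forward passes serves as an uncertainty estimate.

    # Monte Carlo dropout: keep dropout stochastic at test time and treat
    # the spread over repeated forward passes as predictive uncertainty.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(),
                        nn.Dropout(p=0.2),       # stays active below
                        nn.Linear(64, 1))

    x = torch.linspace(-2.0, 2.0, 20).unsqueeze(1)
    net.train()                                  # .train() keeps dropout on
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(100)])  # 100 MC passes
    mean = samples.mean(dim=0)                   # predictive mean
    std = samples.std(dim=0)                     # predictive uncertainty
    print(mean.squeeze()[:3], std.squeeze()[:3])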

Objectives

The objective of this course is to discuss the relevance of uncertainty for intelligent systems and its relationship to neural information processing. Participants will get an overview of Bayesian methods for estimating uncertainty in Deep Neural Networks. After the course, participants should have the resources to choose the right tools for specific research questions or applications that require explicit uncertainty estimates. Open questions in the literature will be discussed.

Literature

  • Knill, D. C., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719.
  • Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
  • Kendall, A., & Gal, Y. (2017). What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems (pp. 5574–5584).
  • Standvoss, K., & Grossberger, L. (2019). Uncertainty through sampling: The correspondence of Monte Carlo dropout and spiking in artificial neural networks. In 2019 Conference on Cognitive Computational Neuroscience.

Lecturer

Kai Standvoss

Kai obtained his bachelor’s degree in Cognitive Science from the University of Osnabrück. He then studied Cognitive Neuroscience and Artificial Intelligence at the Donders Institute for Brain, Cognition and Behaviour, where he became interested in the representation of uncertainty and worked on a deep learning model of visual attention guided by uncertainty minimization. He is currently pursuing a PhD at the Einstein Center for Neurosciences Berlin, where he investigates visual metacognition.