RC3 – Representing Uncertainties in Artificial Neural Networks

Lecturer: Kai Standvoss
Fields: Computational Neuroscience, Artificial Intelligence

Content

Tracking uncertainty is a key capacity of any intelligent system that must interact successfully with a changing environment. Representations of uncertainty are needed to optimally weigh sensory evidence against prior expectations, to adjust learning rates accordingly, and, importantly, to trade off exploitation and exploration. Uncertainty is thus a crucial component of curiosity- and reward-driven behavior. In addition, calibrated uncertainty estimates matter both for human-machine interaction and for building reliable artificial systems. However, it is not yet well understood how uncertainty is tracked in the brain.
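A classic illustration of weighing evidence against expectations (cf. Knill & Pouget, 2004): given a Gaussian prior $\mathcal{N}(\mu_0, \sigma_0^2)$ over a stimulus $s$ and a noisy measurement $x$ with Gaussian likelihood of variance $\sigma^2$, the posterior mean is the precision-weighted average

$$\hat{s} \;=\; \frac{\sigma^2\,\mu_0 + \sigma_0^2\,x}{\sigma_0^2 + \sigma^2},$$

so the less reliable source (the one with larger variance) receives proportionally less weight. Combining cues optimally in this way requires the system to represent the uncertainty of each source.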
Bayesian views on Deep Learning offer a way to specify distributions over model parameters and to learn generative models of the data-generating process. In this way, different levels and kinds of uncertainty, such as aleatoric and epistemic uncertainty (Kendall & Gal, 2017), can be represented. In this course, we will discuss different Bayesian methods to track uncertainties in neural networks and speculate about possible links to neuroscience.
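To give a flavor of these methods, the sketch below illustrates one of them, Monte Carlo Dropout (cf. Standvoss & Grossberger, 2019): dropout is kept active at test time, and the spread over repeated stochastic forward passes serves as an uncertainty estimate. The architecture, dropout rate, and dummy data are illustrative placeholders.

    import torch
    import torch.nn as nn

    # A toy regression network with a dropout layer.
    model = nn.Sequential(
        nn.Linear(10, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(64, 1),
    )

    def mc_dropout_predict(model, x, n_samples=100):
        # Keep dropout stochastic at test time by staying in train mode.
        model.train()
        with torch.no_grad():
            preds = torch.stack([model(x) for _ in range(n_samples)])
        # Predictive mean and a simple uncertainty estimate
        # (standard deviation across the stochastic passes).
        return preds.mean(dim=0), preds.std(dim=0)

    x = torch.randn(5, 10)  # dummy inputs
    mean, std = mc_dropout_predict(model, x)

A large standard deviation across samples flags inputs on which the sampled sub-networks disagree, i.e., high epistemic uncertainty.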

Objectives

The objective of this course is to discuss the relevance of uncertainty for intelligent systems and its relationship to neural information processing. Participants will gain an overview of Bayesian methods for estimating uncertainty in Deep Neural Networks.
After the course, participants should be able to choose appropriate tools for specific research questions or applications that require explicit uncertainty estimates.
Open questions in the literature will be discussed.

Literature

  • Knill, D. C., & Pouget, A. (2004). The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12), 712–719.
  • Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv preprint arXiv:1312.6114.
  • Kendall, A., & Gal, Y. (2017). What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems (pp. 5574–5584).
  • Standvoss, K., & Grossberger, L. (2019). Uncertainty through Sampling: The Correspondence of Monte Carlo Dropout and Spiking in Artificial Neural Networks. In 2019 Conference on Cognitive Computational Neuroscience.

Lecturer

Kai Standvoss

Kai obtained his bachelor’s degree in Cognitive Science from the University of Osnabrück. He then studied Cognitive Neuroscience and Artificial Intelligence at the Donders Institute for Brain, Cognition and Behaviour, where he became interested in the representation of uncertainty and worked on a deep learning model of visual attention guided by uncertainty minimization. He is currently pursuing a PhD at the Einstein Center for Neurosciences Berlin, where he investigates visual metacognition.