Lecturer: Udo Ernst
Fields: Neurobiology / Robotics
Content
The visual system of higher mammals is a complex neural machinery which efficiently solves sophisticated computational problems on a massively parallel stream of information originating in dynamic environments. This is only possible by being highly flexible, i.e. by adapting visual processing to sensory, behavioral, and cognitive contexts. Flexibility also makes our visual system (still) superior to computer vision, in which state-of-the-art deep convolutional networks may perform near error-free object recognition, but fail to adapt to novel situations or break down under adversarial attacks.
In my presentation, I will discuss different examples of flexibility in the visual system in the context of three major principles: configuration, coordination and control. Configuration adapts circuits and networks to current behavioural needs, optimizing their function towards specific tasks or for performing specific computations more efficiently. The interplay between computational units is organized by coordination principles towards common goals, leading to interactions between multiple ‘players’, such as different visual areas in the brain, and to dynamical network changes on multiple time scales. Both configuration and coordination need control units to monitor and signal changes in the external and/or internal situation, and to initiate appropriate reaction mechanisms.
We will argue that it is necessary to combine different methodological approaches for understanding flexibility in vision: for example, electrophysiological studies to reveal mechanisms of flexibility, psychophysical investigations to characterize the impact of neural flexibility on function, and theoretical work to provide unifying frameworks and explanations for dynamics, mechanisms and function of flexibility.
The aim of our workshop is to implement different principles of flexibility in a computer simulation, and to make them work together. Participants will team up in small groups, each of which will first focus on one particular, simple aspect of flexibility, e.g. adapting to the ambient light, focusing attention on particular visual features, or detecting rapid changes in the environment. Our goal is to realize flexibility with appropriate neural mechanisms, in order to better understand how the brain might solve the corresponding task. In a second step, the different groups will put their solutions together and try to ‘coordinate’ them, i.e. to combine flexible processing on multiple levels in a meaningful manner.
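To give a concrete flavour of what such a group exercise could look like, here is a minimal toy sketch (not part of the course materials) of two such mechanisms in plain NumPy: a divisive gain control that slowly tracks the ambient light level, and a crude transient detector that flags rapid changes between frames. All names and parameters (tau, threshold) are illustrative choices, not the course’s actual implementation.

    import numpy as np

    def adapt_to_ambient_light(frame, ambient, tau=0.9, eps=1e-6):
        # Divisive gain control: normalize a grayscale frame by a slowly
        # updated (leaky) estimate of the ambient light level.
        ambient = tau * ambient + (1.0 - tau) * frame.mean()
        return frame / (ambient + eps), ambient

    def detect_rapid_change(prev_frame, frame, threshold=0.2):
        # Flag pixels whose luminance changed strongly between two frames
        # (a crude transient detector).
        return np.abs(frame - prev_frame) > threshold

    # Toy usage on random 'frames' standing in for webcam input
    rng = np.random.default_rng(0)
    ambient = 0.5
    prev = rng.random((64, 64))
    for t in range(10):
        frame = rng.random((64, 64)) * (0.5 + 0.05 * t)  # slowly brightening scene
        adapted, ambient = adapt_to_ambient_light(frame, ambient)
        changed = detect_rapid_change(prev, frame)
        prev = frame
        print(f"t={t}: ambient estimate {ambient:.3f}, changed pixels {changed.mean():.2%}")

The leaky running mean plays the role of a very simple control signal here: it monitors the ambient situation and reconfigures the gain of the processing stage accordingly.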
For testing your ideas, we will use our webcams or short movie sequences and investigate how well flexible neural processing works under different conditions – maybe you can even mount a webcam to your head, close your eyes, and try for yourself whether your artificial visual system can direct you safely towards the coffee machine in your home office, thereby avoiding all obstacles… :-)
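As a rough sketch of how a webcam stream could be wired into such a mechanism, the following loop reads frames and applies the same kind of luminance adaptation on the fly. It assumes the opencv-python package, which is only one possible choice and may differ from what the course repository recommends; press ‘q’ to quit.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)      # default webcam
    ambient, tau = 0.5, 0.95       # slow estimate of ambient brightness
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        ambient = tau * ambient + (1.0 - tau) * gray.mean()   # luminance adaptation
        adapted = np.clip(gray / (ambient + 1e-6), 0.0, 1.0)  # divisive gain control
        cv2.imshow("adapted view", adapted)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()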
Let’s see what complex and unexpected behaviours will emerge, and let’s be flexible! To participate in our workshop, you only need some programming knowledge, preferably in Python. In our course repository you will find more information about the Python packages required, installation guides, literature and other resources. We suggest that you install an appropriate Python distribution and editor prior to the course, and familiarize yourself with the most important features of these tools.
Check out the following link; information will be flexibly updated: https://www.neuro.uni-bremen.de/~nc/index.php/s/83JXmNcL8NSL2Ld
On this page you will also find information on how to contact us by e-mail if you have questions in advance.
Please bring a laptop; we will inform you in advance which program packages you need to install prior to the course. This course is configured to take place on-site, but we will try to be flexible and activate our control circuits for coordinating with one (small) external group of participants if necessary…
Lecturer
Dr. Udo Ernst studied Physics in Frankfurt and received his PhD in 1999 at the Max-Planck-Institute for Dynamics and Self-Organization in Göttingen. Since 2000, he has been working at the Institute for Theoretical Physics at the University of Bremen, with interim research stays at the University of Tsukuba (Japan), the Weizmann Institute (Israel), and the Ecole Normale Superieure (France). Having received the Bernstein Award in Computational Neuroscience in 2010, Dr. Ernst now leads the Computational Neurophysics Lab in Bremen. His research interests revolve around understanding collective dynamics in neural systems using data analysis, mathematical analysis, modelling and simulation, with particular interest in feature integration, criticality, and flexible information processing in the visual system.
Maik Schünemann is a PhD student at the Computational Neurophysics Lab in Bremen. He joined the lab after completing master’s studies in Mathematics, with a focus on dynamical systems and random processes, and in Neurosciences, with a focus on Computational Neuroscience. His research focuses on how attention establishes flexible and selective information processing in the visual system. In addition, he has participated both as a student and as a tutor in the G-Node Advanced Neural Data Analysis Course.
Affiliation: University of Bremen
Homepage: http://www.neuro.uni-bremen.de/content/dr-udo-ernst