IC12 – Robot models of Insect Navigation

Lecturer: Barbara Webb
Fields: Computational Neuroscience, Robotics, AI

Content

Insect navigation has been a focus of behavioural study for many years, and provides a striking example of cognitive complexity in a miniature brain. We have used computational modelling to bridge the gap from behaviour to neural mechanisms by relating the computational requirements of navigational tasks to the type of computation offered by invertebrate brain circuits. We have shown that visual memories of multiple views could be acquired by associative learning in the mushroom body neuropil and could allow insects to recapitulate long routes. We have also proposed a circuit in the central complex neuropil that integrates sky compass and optic flow information on an outbound path and can thus steer the animal directly home. The models are strongly constrained by neuroanatomy, and are tested in realistic agent and robot simulations.
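
As a concrete (and deliberately simplified) illustration of the path-integration idea described above, the sketch below accumulates a home vector from compass headings and optic-flow-derived speeds, then turns it into a steering signal. This is a plain vector-based toy, not the anatomically constrained central-complex circuit of Stone et al. (2017); the function names and the toy path are invented for illustration.

```python
import numpy as np

def integrate_outbound(headings, speeds, dt=1.0):
    """Accumulate a home vector from compass headings (rad) and
    optic-flow-derived speeds sampled along an outbound path."""
    home = np.zeros(2)
    for theta, v in zip(headings, speeds):
        home -= v * dt * np.array([np.cos(theta), np.sin(theta)])
    return home  # points from the current position back to the nest

def steering_command(current_heading, home_vector):
    """Angular error between current heading and the home direction
    (positive = counter-clockwise turn needed)."""
    desired = np.arctan2(home_vector[1], home_vector[0])
    return np.arctan2(np.sin(desired - current_heading),
                      np.cos(desired - current_heading))

# toy outbound path: roughly north-east, then bending east
headings = [0.8, 0.8, 0.4, 0.1]
speeds = [1.0, 1.0, 1.2, 0.9]
home = integrate_outbound(headings, speeds)
print(home, steering_command(0.1, home))
```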

Literature

  • Webb, B. (2020). Robots with insect brains. Science, 368(6488), 244-245.
  • Webb, B. (2019). The internal maps of insects. Journal of Experimental Biology, 222(Suppl 1).
  • Stone, T., Webb, B., Adden, A., Weddig, N. B., Honkanen, A., Templin, R., Wcislo, W., Scimeca, L., Warrant, E. & Heinze, S. (2017). An anatomically constrained model for path integration in the bee brain. Current Biology, 27(20), 3069-3085.
  • Ardin, P., Peng, F., Mangan, M., Lagogiannis, K., & Webb, B. (2016). Using an insect mushroom body circuit to encode route memory in complex natural environments. PLoS Computational Biology, 12(2), e1004683.

Lecturer

Barbara Webb

Barbara Webb joined the School of Informatics at the University of Edinburgh in May 2003. Previously she lectured at the University of Stirling (1999-2003), the University of Nottingham (1995-1998) and the University of Edinburgh (1993-1995). She received her Ph.D. (in Artificial Intelligence) from the University of Edinburgh in 1993, and her B.Sc. (in Psychology) from the University of Sydney in 1988. Her main research interest is in perceptual systems for the control of behaviour, through building computational and physical (robot) models of the hypothesised mechanisms. In particular she focuses on insect behaviours, as their smaller nervous systems may be easier to understand. Recent work includes study of some of the more complex capabilities of insects, including multimodal integration (in crickets and flies), navigation (in ants) and learning (in flies and maggots). She also has an interest in theoretical issues of methodology; in particular the problems of measurement, modeling and simulation.

Affiliation: University of Edinburgh
Homepage: https://homepages.inf.ed.ac.uk/bwebb/

IC10 – Minimal neural encoding of space

Lecturer: Jutta Kretzberg
Fields: Neuroscience

Content

Mechanoreception of pressure applied to the skin is the basis of the most direct perception of space. When the receptive field of a mechanoreceptor is stimulated, it fires action potentials. The classical idea of space encoding was a labeled line code, with each mechanoreceptor representing a tiny area of skin. Increasing pressure at that point in space is encoded by an increasing action potential frequency.
However: is this the most efficient way of encoding tactile stimuli?
For the tiny nervous system of the leech, which is evolutionarily optimized for a minimal number of cells and low energy consumption, we found a different strategy: each point on the skin is innervated by two mechanoreceptors with overlapping, spatially extended receptive fields. For each of these cells, the position of the stimulus in the receptive field AND the stimulus intensity influence two response features: the frequency AND the timing of the action potentials. Hence, for each individual cell, the representation of stimulus intensity in space is ambiguous. However, when the responses of only two cells are combined, the combination of stimulus location and intensity is encoded unambiguously over a much larger region than would be possible with a labeled line code.
Hence, the leech uses the smallest possible system for multiplexing: two cells representing two stimulus properties with two response features.
This is good news not only for computational neuroscientists, who rejoice in a biological system implementing a minimal encoding strategy. Since the mechanoreceptors of leeches and humans share surprising similarities, this finding might also be relevant, e.g., for the development of hand prostheses providing sensory input.
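
A toy numerical sketch of the multiplexing idea: two model cells with overlapping receptive fields whose firing rate and spike latency both depend on touch location and intensity. The Gaussian tuning and the latency formula are illustrative assumptions, not the measured leech response properties.

```python
import numpy as np

def response(cell_center, location, intensity):
    """Toy mechanoreceptor: rate and first-spike latency both depend on
    where the touch falls in the receptive field AND on how strong it is
    (tuning shapes are illustrative assumptions)."""
    overlap = np.exp(-0.5 * ((location - cell_center) / 30.0) ** 2)
    rate = intensity * overlap                    # spikes/s
    latency = 20.0 / (1.0 + intensity * overlap)  # ms
    return rate, latency

# two cells with overlapping, spatially extended receptive fields
centers = (-20.0, +20.0)   # positions along the skin (arbitrary units)

# a weak touch near cell A and a stronger touch far from it give cell A
# almost identical responses: ambiguous for a single cell ...
print(response(centers[0], location=-20, intensity=5))
print(response(centers[0], location=+15, intensity=10))

# ... but the joint (rate, latency) responses of BOTH cells differ, so the
# pair encodes location and intensity unambiguously.
for loc, amp in [(-20, 5), (15, 10)]:
    print([response(c, loc, amp) for c in centers])
```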

Literature

  • Kretzberg J, Pirschel F, Fathiazar E and Hilgen G (2016) Encoding of Tactile Stimuli by Mechanoreceptors and Interneurons of the Medicinal Leech. Front. Physiol. 7:506. doi: 10.3389/fphys.2016.00506
  • Pirschel, F., Hilgen, G., Kretzberg, J. (2018) Effects of Touch Location and Intensity on Interneurons of the Leech Local Bend Network. Scientific Reports 8:3046. DOI:10.1038/s41598-018-21272-6
  • Pirschel, F., Kretzberg, J. (2016) Multiplexed Population Coding of Stimulus Properties by Leech Mechanosensory Cells. Journal of Neuroscience 36(13):3636 –3647. DOI:10.1523/JNEUROSCI.1753-15.2016
  • Saal HP, Bensmaia SJ (2014) Touch is a team effort: interplay of submodalities in cutaneous sensibility. Trends Neurosci 37:689–697. http://dx.doi.org/10.1016/j.tins.2014.08.012

Lecturer

Jutta Kretzberg

Jutta Kretzberg studied computer science and biology at the University of Bielefeld, Germany. In her PhD in biology she modelled neuronal responses in the fly visual system. As a postdoc in San Diego, California, she also started to work experimentally on the leech tactile system. In 2004, Jutta Kretzberg became a Junior Professor at the University of Oldenburg, Germany, where she is now professor of computational neuroscience and head of the master’s program in neuroscience. As a member of the cluster of excellence Hearing4all, and having also worked on the vertebrate retina, her main research interest is neural coding in different sensory systems of vertebrates (including humans) and invertebrates. While juggling her family, teaching, research and administration duties, her favorite task is mentoring.

Affiliation: Oldenburg University, Germany
Homepage: https://uol.de/en/neurosciences/compneuro

IC19 – Spreading dynamics in neural networks – and of COVID-19

Lecturer: Viola Priesemann
Fields: Networks, neural information processing, COVID

Content

We introduce the spreading dynamics of activity in neural networks, and then show how it fosters information processing.
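
A minimal sketch of the kind of spreading dynamics meant here: a branching process in which each active unit triggers on average m further units in the next time step, on top of some external drive. The parameter values and the Poisson approximation are assumptions for illustration; the same formalism underlies estimates of how close cortical network activity is to criticality (see Wilting et al., 2018, below).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_branching(m, a_ext, steps=5000):
    """Minimal branching process with external drive: each active unit at
    time t triggers on average m units at t+1, plus a_ext externally driven
    units (Poisson approximation). m < 1 is subcritical, m = 1 critical,
    m > 1 gives runaway growth."""
    activity = np.zeros(steps, dtype=int)
    a = 0
    for t in range(steps):
        a = rng.poisson(m * a + a_ext)
        activity[t] = a
    return activity

# approaching the critical point m = 1 strongly amplifies the external input
for m in (0.8, 0.95, 0.99):
    acts = simulate_branching(m, a_ext=1.0)
    print(f"m = {m}: mean activity ~ {acts[1000:].mean():.1f}")
```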

Literature

  • Cramer et al., Nature Communications, 2020. https://www.nature.com/articles/s41467-020-16548-3
  • Contreras et al., 2020. https://arxiv.org/pdf/2011.11413
  • Contreras et al., Nature Communications, 2021. https://www.nature.com/articles/s41467-020-20699-8
  • Dehning et al., Science 2020. https://science.sciencemag.org/content/early/2020/05/14/science.abb9789
  • Wilting et al., Nature Communications, 2018. https://www.nature.com/articles/s41467-018-04725-4

Lecturer

Viola Priesemann

Dr. Viola Priesemann is a researcher at the Max Planck Institute for Dynamics and Self-Organization and teaches at the Georg-August University Göttingen. She studies spreading processes, self-organization and information processing in living and artificial networks. Since the COVID-19 outbreak, she has studied the spread of SARS-CoV-2, quantified the effectiveness of interventions, and developed containment strategies. Viola Priesemann is co-author of several position papers (e.g. of the National Academy Leopoldina), a Fellow of the Schiemann-Kolleg and a member of the Cluster of Excellence “Multiscale Bioimaging” at the Campus Göttingen.

Affiliation: Max Planck Institute for Dynamics and Self-Organization
Homepage: www.viola-priesemann.de

IC11 – Your Wit Is My Command: Automating Humour with Computational Creativity

Lecturer: Tony Veale
Fields: Artificial Intelligence

Content

Until quite recently, AI was a scientific discipline defined more by its portrayal in science fiction than by its actual technical achievements. Real AI systems are now catching up to their fictional counterparts, and are as likely to be seen in news headlines as on the big screen. Yet as AI outperforms people on tasks that were once considered yardsticks of human intelligence, one area of human experience still remains unchallenged by technology: our sense of humour.
This is not for want of trying, as this course will show. The true nature of humour has intrigued scholars for millennia, but AI researchers can now go one step further than philosophers, linguists and psychologists once could: by building computer systems with a sense of humour, capable of appreciating the jokes of human users or even of generating their own, AI researchers can turn academic theories into practical realities that amuse, explain, provoke and delight.

Objectives
This course will use the ideas and achievements of AI to explore what it means to have a sense of humour, and moreover, to understand what it is to not have one. It will challenge the archetype of the humourless machine in popular culture, to celebrate what science fiction gets right and to learn from what it gets wrong. It will make a case for the necessity of a computational understanding of humour, to better understand ourselves and to better construct machines that are more flexible, more understanding, and more willing to laugh at their own limitations.

Course content
The course will comprise four lectures, which will explore the following topics.
Newspaper personal columns are routinely filled with people seeking partners with a good sense of humour (GSOH), with many rating this as highly as physical fitness or physical appearance. Yet what does it mean to have a sense of humour? Conversely, what does it mean to have NO sense of humour, and how might we imbue a humourless machine with a capacity for wit and a flair for the absurd? We begin by unpacking these questions, to suggest some initial answers and models.

So, for example, what would it mean for a computer to have a numeric humour setting, as in the case of the robot TARS in the film Interstellar? Can a machine’s sense of humour be reduced to a single number or parameter setting? Is humour a modular ability? Can it be gifted to computers as a bolt-on unit like Commander Data’s “humour chip” in Star Trek, or is it an emergent phenomenon that arises from complex interactions among all our other faculties? Might humour emerge naturally within complex AI systems without explicitly being programmed to do so, as in the mischievous supercomputer Mike in Robert Heinlein’s The Moon Is a Harsh Mistress or in the sarcastic droid K2SO in Rogue One: A Star Wars Story?

This course will survey and critique the competing humour theories that scholars have championed through the ages, enlarging on recurring themes (incongruity, relief, superiority) while considering the amenability of each to computational modelling. What is it that these theories are really explaining, and which comes closest to capturing the elusive essence of humour?

The centrality of incongruity in modern theories demands that this concept be given a special focus. So we will unpack its many meanings to show how our understanding of incongruity can be as multifaceted as the idea of humour itself. Popular myths about the brittleness of machines in the face of the incongruous and the unexpected will be unpicked and debunked as we explore how machines might deliberately seek out and invent incongruities of their own.

But computational humour is still in its infancy, and it is no coincidence that the mode of humour for which machines show the greatest aptitude is the one that humans embrace at a very early age: puns. Puns vary in wit and sophistication, but the simplest require only an ear for sound similarity and a disregard for the consequences of replacing a word with its phonetic doppelgänger. The challenge for AI systems is to progress, as children do, from these simple beginnings to modes of ever greater conceptual sophistication.
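
To make that “simplest pun” strategy concrete, here is a toy sketch that swaps words for near-homophones drawn from a themed lexicon. It uses orthographic similarity as a crude stand-in for phonetic similarity (a real system would consult a pronunciation dictionary such as CMUdict); the fish-themed lexicon and the threshold are invented for illustration and are not part of any system discussed in the course.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude stand-in for phonetic similarity."""
    return SequenceMatcher(None, a, b).ratio()

def punnify(sentence, lexicon, threshold=0.75):
    """Replace words with near-homophones from a themed lexicon:
    the simplest pun strategy described above."""
    out = []
    for word in sentence.lower().split():
        candidates = [(similarity(word, w), w) for w in lexicon if w != word]
        score, best = max(candidates)
        out.append(best if score >= threshold else word)
    return " ".join(out)

# hypothetical fish-themed lexicon
lexicon = ["cod", "plaice", "herring", "eel", "bass"]
print(punnify("there is no place like home", lexicon))
# -> "there is no plaice like home"
```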

To do so, is it possible to capture the essence of jokes in a mathematical formula, much as physicists have done for electromagnetism and gravity? Do jokes have quantifiable features that we and our machines can intuitively appreciate? Can we build statistical models to characterize the signature qualities of a humorous artefact, so that machines can learn to tell funny from unfunny for themselves? And what do these measurable qualities say about humour and about us?

Finally, however we slice it, conflict sits at the heart of humour, whether it is a conflict in meaning, attitude, expectation or perspective. Double acts personalize this conflict by recognizing the different roles a comic can play. Computers can likewise play multiple roles in the creation of humour, from rigid “straight man” to absurdist provocateur, in double acts with humans and with other machines. So we will explore the ways in which smart machines can contribute to the social emergence of humour, either as embodied robots or as disembodied software.

Literature

  • Computational humour studies is an established field that has produced a range of academic books, from Victor Raskin’s Semantic Mechanisms of Humour (1985, one of the first) to the more recent Primer of Humour Research (with chapters from computational humorists). Non-computational humour researchers, such as Elliott Oring, have also written accessible books on humour, such as Engaging Humour, while the computer scientist Graeme Ritchie has written a pair of well-received academic books on humour. Comedians and comedy professionals have also written some noteworthy books on humour, with individual chapters that focus on computational humour or that offer algorithmic insights into the author’s own comedy production strategies. Toplyn’s Comedy Writing for Late-Night TV offers a beginner’s guide to humour production that is frequently schematic in style. The Naked Jape, by Jimmy Carr and Lucy Greeves, considers humour more broadly, but also offers a chapter on computational models and the people who build them. I will quote from each of the sources as needed.

Lecturer

Tony Veale

Tony Veale is an associate professor in the School of Computer Science at UCD (University College Dublin), Ireland. He has worked in AI research for 25 years, in academia and in industry, with a special emphasis on humour and linguistic creativity. He is the author of the 2012 monograph Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity (Bloomsbury), co-author of the 2016 textbook Metaphor: A Computational Perspective (Morgan & Claypool), and co-author of 2018’s Twitterbots: Making Machines That Make Meaning (MIT Press). He led the European Commission’s coordination action on Computational Creativity (named PROSECCO), and has collaborated on international research projects with an emphasis on computational humour and imagination, such as the EC’s What-If Machine (WHIM) project. He runs a website dedicated to explaining AI with humour at RobotComix.com. He is active in the field of Computational Creativity, and is currently the elected chair of the international Association for Computational Creativity (ACC).

Affiliation: University College Dublin
Homepage: http://Afflatus.UCD.ie

IC20 – Computer modeling and responsibility

Lecturer: Christiane Floyd (emer. University of Hamburg)
Fields: Computer Science and Society

Content

Modeling is pervasive in computing, even though there is no general definition of ‘computer modeling’. As some authors emphasise, each program embodies a model, though it may be implicit. For explicit modeling, the notion of a model is normally taken over from the natural, social, cognitive or technical sciences relevant to the specific area of interest. Common to all models in computing is that they are ‘operational’ – when implemented and executed in computer programs, they become effective: they serve to enable or constrain human action and communication, or even to have a direct impact on the real world.

Modeling inherently relies on ‘abstraction’ which involves reduction and decontextualisation. Human modellers – sometimes on their own, but more often working in teams and subject to collective interests – make choices in designing the model. These choices affect

  • the model base, i.e. which theories and concepts are suitable for modelling?
  • the model features, i.e. which objects, attributes, actions and relationships in the area of interest are relevant?
  • the model implementation, i.e. which technical platforms are appropriate and how are they used?
  • the methods for modeling, i.e. which techniques, tools and forms of organisation are used in design and implementation?

Design choices determine the basic properties of the operational system when it is implemented and embedded in socio-technical environments: 1) its model-controlled behaviour, 2) its model-induced perception, and 3) its model-enabled human-computer interaction.

Thus, modellers have a scope for choice, which is characteristic of all design. Choice means freedom (within external constraints), and freedom inherently comes with responsibility. As designers, we cannot escape this responsibility, but we can accept, acknowledge and reflect on it, making our choices transparent. We can develop value-based criteria for choices, communicate them to all stakeholders, discuss and agree on priorities, and reach joint decisions.

In taking responsibility for design, the technical quality of the operational system is important but not sufficient. Primary attention needs to be given to re-contextualization, i.e. to the embedding of the operational system in its socio-technical context. Therefore, responsible modellers cannot stay within their technical area of expertise only, but need to find ways of reaching out into the context and base their design decisions on careful anticipation of the operational system in use.

Lecturer

Prof. Dr. Christiane Floyd

Christiane Floyd is professor emerita of software engineering at the University of Hamburg and honorary professor at the Technical University of Vienna. Floyd obtained her doctorate in Mathematics at the University of Vienna. She gained experience in computing as a compiler developer at Siemens, Munich, as a research associate and instructor at Stanford University, and as a senior consultant for software development methods at Softlab, Munich. As head of the software engineering group at the Technical University of Berlin (1978-1991) and the University of Hamburg (1991-2008), she was the principal author of STEPS, a participatory and evolutionary approach to software development. Throughout her career, she pursued her interest in the philosophical foundations of computing and had a strong concern for the responsible use of computing technology. Since 2006 she has been committed to promoting the use of information and communication technologies for development in Ethiopia.

IC8 – Computational principles of gaze-stabilization during locomotion

Lecturer: Hans Straka
Fields: Experimental Neurobiology

Content

Continuous accurate perception of the visual world is a behavioral requirement during self-generated motion. All animals are confronted with the disruptive effects of locomotor activity on their ability to maintain stable images on the retina. This is due to the fact that self-motion is accompanied by head movements that cause retinal image displacement, with a resultant degradation of visual information processing. To stabilize gaze and to retain visual acuity during locomotion, retinal image drift is offset by counteractive eye and/or head adjustments. These offsetting motor reactions are classically attributed to the concerted action of visuo-vestibular and proprioceptive reflexes.

However, stereotyped, rhythmic locomotion has predictable consequences for image perturbations. This, in principle, allows efference copies of propulsive motor commands to be employed to directly initiate spatio-temporally adequate eye movements. Such eye-adjusting motor commands have been demonstrated in the amphibian Xenopus laevis. These signals are feed-forward replicas of the spinal central pattern generator output that produces the actual propulsive body movements. Spinal locomotor efference copies directly target horizontal extraocular motoneurons, consistent with the plane and direction of swimming-related head rotations.

The signals actively attenuate vestibulo-ocular reflexes, emphasizing the predominant role of intrinsic efference copies for gaze stabilization during self-motion. The suppressive influence of motor efference copies on vestibular signals occurs at the mechanosensory periphery. The resultant gain reduction in sensory signal encoding likely prevents overstimulation by adjusting the system to the increased stimulus magnitudes during locomotion. This leaves efference copy-evoked gaze-stabilizing eye movements as the dominant computational mechanism. Further suggestive evidence for a ubiquitous role of such signals in this context has been provided for quadrupedal and bipedal locomotion in terrestrial vertebrates, including humans.
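
The computational principle can be illustrated with a toy simulation (not a model of the actual Xenopus circuitry): rhythmic swimming produces a predictable sinusoidal head rotation; a delayed vestibulo-ocular reflex with gain below one compensates only partially, whereas a feed-forward efference copy of the locomotor rhythm can cancel the image motion in phase. All numbers (frequency, gain, delay) are assumed for illustration only.

```python
import numpy as np

dt, T = 0.001, 2.0                       # s
t = np.arange(0.0, T, dt)
f = 2.0                                  # swim-cycle frequency, Hz (assumed)
head = 10.0 * np.sin(2 * np.pi * f * t)  # head rotation, degrees

# vestibulo-ocular reflex: gain < 1, reacting after a short sensory delay
delay = int(0.04 / dt)
vor_eye = np.zeros_like(head)
vor_eye[delay:] = -0.8 * head[:-delay]

# efference copy: the locomotor CPG "knows" the rhythm, so the
# counter-rotating eye command is in phase with the head movement
efference_eye = -head

for name, eye in [("VOR only", vor_eye), ("efference copy", efference_eye)]:
    slip = head + eye                    # residual retinal image motion
    print(f"{name}: RMS retinal slip = {np.sqrt(np.mean(slip**2)):.2f} deg")
```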

Literature

  • Lambert F.M., Combes D., Simmers J. and Straka H. (2012) Gaze stabilization by efference copy signaling without sensory feedback during vertebrate locomotion. Curr. Biol. 22: 1649-1658.
  • Chagnaud B.P., Simmers J. and Straka H. (2012) Predictability of visual perturbation during locomotion: implications for corrective efference copy signaling. Biol. Cybern. 106: 669-679.
  • von Uckermann G., Le Ray D., Combes D., Straka H. and Simmers J. (2013) Spinal efference copy signaling and gaze stabilization during locomotion in juvenile Xenopus frogs. J. Neurosci. 33: 4253-4264.
  • Chagnaud B.P., Banchi R., Simmers J. and Straka H. (2015) Spinal corollary discharge modulates motion sensing during vertebrate locomotion. Nat. Comm. 6: 7982 doi: 10.1038/ncomms8982.
  • von Uckermann G., Lambert F.M., Combes D., Straka H. and Simmers J. (2016) Adaptive plasticity of retinal image stabilization during locomotion in developing Xenopus. J. Exp. Biol. 219: 1110-1121.
  • Straka H., Simmers J. and Chagnaud B.P. (2018) A new perspective on predictive motor signaling. Curr. Biol. 28: R232-R243.

Lecturer

Hans Straka

Hans Straka is Professor for Systemic Neurosciences at the Faculty of Biology of the LMU Munich. He studied biology at the LMU Munich and received his PhD from the same university. Starting with his postdoc, he became interested in the functional organization of the vestibular system, including its variable morphology as well as the ontogeny and phylogeny of this sensory system. Using a variety of animal models, he has studied over the past years, in the US, in France and currently in Munich, the respective contributions of cellular and neural network mechanisms to the sensory transformation of head/body motion-related signals into appropriate extraocular motor commands. Interactions with computational neuroscientists have resulted in a number of conceptual advances in gaze control and in computational models that bridge the gap between empirical experiments and theory.

Affiliation: Department Biology II, Ludwig-Maximilians-University Munich
Homepage: https://neuro.bio.lmu.de/members/systems_neuro_straka/straka_h/index.html

IC13 – Personalizing instruction and recognizing student misunderstandings using reinforcement learning

Lecturer: Anna Rafferty
Fields: Artificial Intelligence/Machine Learning

Content

Online educational technologies provide opportunities to monitor learners’ knowledge in real time and modify instruction based on learners’ responses. In this talk, I’ll give a brief overview of some of the ways that reinforcement learning has been used to achieve these goals, and then provide a more in-depth discussion of my own work using inverse reinforcement learning to make inferences about learners’ understanding. In this work, we are particularly focused on interpreting learners’ behavior in multi-step tasks, such as games or mathematical problem solving, and we combine ideas from machine learning and computational cognitive modeling. Our approach offers the potential to provide feedback about learners’ strategies and misunderstandings based on their pattern of interactions. Overall, the talk will argue that work in reinforcement learning for education has the potential to create smarter educational resources and that taking an interdisciplinary perspective suggests new insights and approaches.
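
As a minimal sketch of the inverse-planning idea, the snippet below models the learner as softmax-rational under each of a few candidate “understandings” and computes a posterior over them from the observed actions. The candidate understandings, their Q-values and the observed action sequence are invented toy values, not the diagnostic models from the papers below.

```python
import numpy as np

def softmax(q, beta=3.0):
    """Softmax choice rule: a noisily rational learner prefers higher-Q actions."""
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

# Hypothetical candidate "understandings": each assigns Q-values to two
# possible actions in three problem states (rows = states).
understandings = {
    "correct rule": np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]),
    "sign error":   np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]),
    "guessing":     np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]),
}

observed_actions = [1, 0, 1]             # what the learner actually did

posterior = {}
for name, q in understandings.items():
    # likelihood of the observed actions under a softmax-rational learner
    ll = np.prod([softmax(q[s])[a] for s, a in enumerate(observed_actions)])
    posterior[name] = ll
z = sum(posterior.values())
for name, p in posterior.items():
    print(f"P({name} | actions) = {p / z:.2f}")
```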

Literature

  • Rafferty, A. N., Jansen R. A., & Griffiths, T. L. (2020). Assessing Mathematics Misunderstandings via Bayesian Inverse Planning. Cognitive Science. DOI: 10.1111/cogs.12900
  • Rafferty, A. N., Jansen, R. A., & Griffiths, T. L. (2016) Using Inverse Planning for Personalized Feedback. Proceedings of the 9th International Conference on Educational Data Mining (pp. 472-477). http://tiny.cc/IRLFeedbackEDM2016

Lecturer

Anna Rafferty

Dr. Anna Rafferty earned her PhD from the University of California, Berkeley, and is currently an associate professor of computer science at Carleton College. Her work addresses questions at the intersection of machine learning, computational cognitive science, and education. She is particularly interested in developing automated strategies to provide effective feedback to students and in developing technologies that can both continuously improve instruction for students and provide valuable data for researchers to draw more general conclusions about the effectiveness of educational interventions. Dr. Rafferty has recently begun work emphasizing the importance of considering equitable impacts across students in educational technologies that have the potential for personalization.

Affiliation: Carleton College
Homepage: https://sites.google.com/site/annanrafferty/

IC2 – The gendered nature of gamer stereotypes and what we can do about it

Lecturer: Thekla Morgenroth
Fields: Psychology

Content

Female gamers are seen as atypical and often have their competence challenged in gaming spaces. We argue that this is partly driven by masculine gamer stereotypes and that exposure to female gamers has the potential to change them. We investigate the content of gamer stereotypes across two studies and find that they contain both negative aspects, such as lacking social skills, and positive aspects, such as being competent and agentic. Both studies demonstrate that gamer stereotypes are more similar to stereotypes of men and boys than those of women and girls. In Study 2 we further find evidence suggesting that exposure to a female gamer can change the negative association between female stereotypes and gamer stereotypes. We conclude that increasing the visibility of female gamers could potentially reduce the incompatibility between femininity and gaming and alleviate some of the issues female gamers currently face.

Literature

  • Blackburn, G., & Scharrer, E. (2019). Video game playing and beliefs about masculinity among male and female emerging adults. Sex Roles, 80(5-6), 310-324.
  • Paaßen, B., Morgenroth, T., & Stratemeyer, M. (2017). What is a true gamer? The male gamer stereotype and the marginalization of women in video game culture. Sex Roles, 76(7), 421-435.
  • Wasserman, J. A., & Rittenour, C. E. (2019). Who wants to play? Cueing perceived sex-based stereotypes of games. Computers in Human Behavior, 91, 252-262.

Lecturer

Thekla Morgenroth

Dr. Thekla Morgenroth received their PhD in Social and Organizational Psychology from the University of Exeter in 2015. Their research focuses on how and why people maintain social hierarchies with a specific focus on the barriers encountered by members of the LGBTQ+ community and women.

Affiliation: University of Exeter
Homepage: https://psychology.exeter.ac.uk/staff/profile/index.php?web_id=Thekla_Morgenroth

IC5 – Exploring your own mind

Lecturer: Marieke van Vugt
Fields: cognitive science, contemplative science

Content

During this talk, I will introduce the science of mind-wandering, and connect this to the topic of mindfulness. In the study of mind-wandering we empirically test how subjective experience influences objective measures such as EEG, eye-tracking and behaviour. In contrast, in mindfulness, we explore our own minds. How can you explore your own mind, and become more familiar with mind-wandering from the inside?

Literature

  • Huijser et al. (2020). Captivated by thought: “Sticky” thinking leaves traces of perceptual decoupling in task-evoked pupil size. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0243532

Lecturer

Marieke van Vugt

Dr. van Vugt is an assistant professor at the University of Groningen in the Netherlands, working in the department of artificial intelligence. She obtained her PhD in model-based neuroscience from the University of Pennsylvania, then worked as a postdoc at Princeton University before moving to the University of Groningen. In her lab, she focuses on understanding the cognitive and neural mechanisms underlying decision making, mind-wandering and meditation by means of EEG, behavioural studies and computational modeling. In some slightly outside-the-box research, she also records the brain waves of Tibetan monks and dancers.

Affiliation: University of Groningen
Homepage: https://mkvanvugt.wordpress.com