BC4 – Introduction to Brain–Computer Interfaces and Neuroadaptive Human–Computer Interaction

Lecturer: Thorsten O. Zander
Fields: Artificial Intelligence, Neuroscience

Content

Brain–computer interfaces (BCIs) extend the interaction space by translating neurophysiological activity into machine-relevant information. In this course, we treat BCIs as human–computer interfaces that add a direct information channel from the brain, rather than as “mind reading.” We structure the field by interaction function and intent: (i) active/reactive BCIs for direct control and communication, and (ii) passive BCIs that infer covert user state (e.g., workload, attention, error processing, surprise, affect-related responses) to enable neuroadaptive technology, that is, systems that adapt their behavior based on implicit neurophysiological evidence.

We build a coherent end-to-end view of the BCI pipeline: selecting target signals with neuroscientific motivation, acquiring EEG, establishing a design that supports inference, transforming signals into robust features, training and validating models, and integrating outputs into real-time applications. Throughout, we emphasize what typically breaks when moving from clean laboratory calibration to interactive, non-stationary, artifact-rich contexts. In particular, we contrast lab performance metrics with application-oriented evaluation, and we discuss typical failure modes such as artifact learning, overfitting to calibration, and hidden confounds.

The course uses EEG as the main modality because it remains the most widely deployed non-invasive option. We cover what EEG can and cannot represent, why context and task structure shape interpretation, and how time-, frequency-, and time–frequency-based perspectives map to neural rhythms and event-related responses.

Learning outcomes

After the course, participants will be able to:
– Distinguish active/reactive and passive BCI interaction modes and map them to HCI use cases.
– Explain the EEG measurement chain and the major determinants of signal quality and interpretability.
– Design BCI calibration and evaluation paradigms that reduce confounds and support generalization.
– Implement the core analysis logic from raw EEG to features and classification/regression outputs.
– Critically assess validation, identify common pitfalls, and choose evaluation strategies appropriate for real-world deployment.

Sessions

Session 1 — What is a BCI, and what is it for?

We define BCIs as systems that translate brain activity into machine-relevant information and situate them in human–computer interaction. We introduce a functional taxonomy: direct-control BCIs (active/reactive) versus passive BCIs for covert state assessment in neuroadaptive systems. Building on this, we discuss the practical consequences of design choices: user learning versus machine learning, calibration requirements, and the basic operating loop (calibration → model training → online inference → feedback/adaptation). We also introduce the notion of augmenting the information space of an interactive system by adding channels that represent user state and context.

Session 2 — EEG as a measurement and inference substrate

We cover EEG fundamentals needed for BCI work: sensors and impedances, referencing, reproducible placement (10–20/10–10), sampling, filtering, and physiological constraints on scalp-level observability. We connect measurement to experimental design: synchronous versus asynchronous paradigms, causal versus non-causal processing constraints, and how artifact structure (EOG/EMG/motion) can dominate apparent “BCI performance” if not controlled. We discuss how to design calibration tasks that produce valid labels for the target state, and how to reduce confounds using behavioral and peripheral measures.
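
The contrast between causal (online-capable) and non-causal (offline-only) processing can be made concrete with a small sketch. The signal and filter settings below are illustrative assumptions, not course material:

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

def bandpass(data, low, high, fs, causal=True, order=4):
    """Band-pass filter one EEG channel.

    causal=True uses lfilter (usable online, but introduces a phase delay);
    causal=False uses filtfilt (zero-phase, but needs future samples,
    so it is only valid for offline analysis).
    """
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return lfilter(b, a, data) if causal else filtfilt(b, a, data)

# Toy signal: a 10 Hz "alpha" rhythm plus a slow drift, sampled at 250 Hz.
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * t  # the drift mimics a slow artifact

online = bandpass(x, 8, 12, fs, causal=True)    # real-time capable
offline = bandpass(x, 8, 12, fs, causal=False)  # offline analysis only
```

`lfilter` uses only past samples and can run in real time at the cost of a phase delay; `filtfilt` runs the filter forwards and backwards for zero phase, which requires future samples and is therefore only valid offline.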

Session 3 — From EEG to features and models

We treat feature extraction as “setting a focus” on signal aspects relevant to the cognitive or affective target construct. We cover epoching/segmentation, time-domain and spectral representations, spatial filtering, and how features become machine-learning inputs. We then introduce classification/regression logic with an emphasis on separability, class imbalance, and generalization beyond calibration. We discuss why continuous state estimates can be more robust than hard thresholds in non-stationary contexts, and what that implies for system design and validation.
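
As a minimal sketch of this pipeline (synthetic single-channel data and illustrative parameters, not the course's actual materials), one can epoch the signal, compute log band-power features, and feed them to a linear classifier:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_epochs = 250, 60

def make_epoch(alpha_amp):
    """One 2-second, single-channel epoch: white noise plus a 10 Hz rhythm."""
    t = np.arange(0, 2, 1 / fs)
    return rng.normal(0, 1, t.size) + alpha_amp * np.sin(2 * np.pi * 10 * t)

# Two classes that differ only in alpha-band power (a common BCI target).
X_raw = [make_epoch(2.0) for _ in range(n_epochs)] + \
        [make_epoch(0.2) for _ in range(n_epochs)]
y = np.array([0] * n_epochs + [1] * n_epochs)

def band_power(epoch, lo=8, hi=12):
    """Log mean spectral power in a frequency band: the 'focus' on the signal."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)
    return np.log(np.mean(pxx[(f >= lo) & (f <= hi)]))

X = np.array([[band_power(e)] for e in X_raw])
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
```

The feature step deliberately discards almost everything except the band of interest; that is what makes the subsequent (here trivially simple) classification problem well-posed.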

Session 4 — Validation, failure modes, and neuroadaptive HCI case studies

We discuss how to verify whether a BCI meets the assumptions behind its design. We contrast (i) effect-driven validation (performance across calibration/test/application), (ii) data-driven diagnostics (quality indices, artifact sensitivity, distribution shift), and (iii) neuroscience-informed checks (plausible spatio-temporal patterns, condition contrasts). We conclude with passive BCI and neuroadaptive HCI case studies, including error-related and workload/attention-related signals, and we highlight boundary conditions for transfer from lab paradigms to interactive real-world use.
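
The "hidden confound" failure mode can be demonstrated on synthetic data: below, the only feature is a slow drift whose baseline differs between recording runs, so shuffled cross-validation looks excellent while run-wise validation collapses to chance. All names and numbers are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, KFold, GroupKFold

rng = np.random.default_rng(1)

def make_run(offset, n=50):
    """One recording run: n epochs of class 0, then n of class 1.

    The single feature is ONLY slow drift (think gel drying or impedance
    changes), with a baseline that differs between runs; it carries no
    class information that transfers across runs.
    """
    drift = offset + np.arange(2 * n) + rng.normal(0, 3, 2 * n)
    labels = np.array([0] * n + [1] * n)
    return drift.reshape(-1, 1), labels

X1, y1 = make_run(offset=0)
X2, y2 = make_run(offset=200)
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])
groups = np.array([0] * 100 + [1] * 100)  # run membership

clf = KNeighborsClassifier(5)
shuffled = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()
by_run = cross_val_score(clf, X, y, cv=GroupKFold(2), groups=groups).mean()
# Shuffled CV looks excellent because it exploits the drift confound;
# run-wise CV reveals that nothing generalizes across runs.
```

The design lesson: the evaluation split must mirror the intended deployment (new runs, new sessions, new users), or the reported performance measures the confound rather than the target state.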

Literature

  • Zander, T. O. (2012). Utilizing brain-computer interfaces for human-machine systems. Doctoral dissertation, Technische Universität Berlin. Available at: https://refubium.fu-berlin.de/handle/fub188/13536
  • Vidal, J. J. (1973). Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering.
  • Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002). Brain–computer interfaces for communication and control. Clinical Neurophysiology.
  • Wolpaw, J. R., & Wolpaw, E. W. (2012). Brain–Computer Interfaces: Principles and Practice.
  • Zander, T. O., & Kothe, C. (2011). Towards passive brain–computer interfaces: applying BCI technology to human–machine systems in general. Journal of Neural Engineering.
  • Zander, T. O., et al. (2016). Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity. PNAS.
  • Brouwer, A.-M., Zander, T. O., et al. (2015). Using neurophysiological signals that reflect cognitive or affective state: six recommendations to avoid common pitfalls. Frontiers in Neuroscience.
  • Zander, T. O., & Jatzev, S. (2012). Context-aware brain–computer interfaces: exploring the information space of user, technical system, and environment. Journal of Neural Engineering.
  • Blankertz, B., et al. (2007). The non-invasive Berlin brain–computer interface. NeuroImage.
  • Stern, J. (2013). Atlas of EEG Patterns (2nd ed.).
  • Muthukumaraswamy, S. D., Johnson, B. W., & McNair, N. A. (2004). Mu rhythm modulation… Cognitive Brain Research.
  • Maeder, C. L., et al. (2012). Pre-stimulus sensorimotor rhythms influence BCI

Lecturer

Prof. Dr. rer. nat. Thorsten O. Zander is Lichtenberg Professor for Neuroadaptive Human–Technology Interaction at Brandenburg University of Technology (BTU) Cottbus-Senftenberg. His research addresses neuroadaptive interaction using passive brain–computer interfaces, including implicit interaction with technology, cognitive exploration for the automated guidance of artificial intelligences, and the ethics of neuroadaptive technologies. He previously held postdoctoral positions at TU Berlin (Biological Psychology and Neuroergonomics) and the Max Planck Institute for Intelligent Systems in Tübingen, and served as a group leader at TU Berlin. He studied mathematics (with a focus on mathematical logic) at the University of Münster and earned his PhD at TU Berlin on applying brain–computer interfaces to human–machine systems. His work has been recognized with awards including the Raja Parasuraman Award (Best Senior Researcher, Neuroergonomics Society) and the Willumeit Foundation's Best Dissertation award.

Affiliation: BTU Cottbus-Senftenberg
Homepage: https://www.b-tu.de/fg-neuroadaptive-hci

SC5 – From Word to World: Bridging Language and Perceptual Reality 

Lecturer: Ivana Kajić, Philipp Wicke
Fields: Cognitive Science, Linguistics, Artificial Intelligence

Content

This course examines the relationship between language, perception, and intelligence, using recent developments in generative AI as a central case study. Moving from cognitive linguistics to multimodal machine learning systems, the course investigates how systems transition from text-based representations to models that increasingly integrate perception and action. Across four lectures, we move from theoretical foundations to technical architectures and finally to societal and industrial implications.

  • Lecture 1 introduces the conceptual foundations of the course. We explore the hypothesis that human thinking is deeply structured by language, examining linguistic universals, linguistic relativity, and the role of metaphor and conceptual framing. Language is presented not merely as a communicative tool but as a generative system that structures world models. This session establishes the idea that if human cognition is scaffolded by language, then language-trained AI systems offer a particularly revealing lens through which to rethink intelligence.
  • Lecture 2 shifts the focus to embodiment and perceptual grounding. We examine theories of embodied cognition and consider how bodily experience shapes conceptual systems. The lecture discusses how abstract thought is rooted in sensorimotor experience and presents language as an interface between pre-linguistic cognition and articulated reasoning. By contrasting embodied human cognition with predominantly text-trained AI systems, this session sharpens the central question of the course: can intelligence emerge from language alone, or does meaningful understanding require grounding in perception and action?
  • Lecture 3 explores the technical foundations of modern generative AI, moving from large language models (LLMs) to multimodal architectures. After reviewing the core principles of transformer-based language models, the lecture expands to vision–language models, multimodal training paradigms, and large-scale deployment techniques such as retrieval-augmented generation and in-context learning. The session highlights how these systems are developed in practice, the role of human data and alignment, and current challenges including interpretability and safety. By examining how AI systems increasingly integrate text and perception, we assess both their capabilities and structural limitations.
  • Lecture 4 turns to real-world applications and broader impact. Rather than focusing exclusively on speculative AGI narratives, this session highlights how AI is already shaping scientific research, industrial processes, and economic infrastructures. We examine examples from scientific discovery, energy optimization, manufacturing, and operations research, alongside ongoing debates around trust, labor, and human–AI interaction. Designed as an interactive and discussion-based session, this lecture also critically evaluates the gap between technological hype and practical implementation, offering a forward-looking yet grounded perspective on the future of multimodal and agentic systems.

Literature

  • Kajić, Ivana, et al. “Evaluating numerical reasoning in text-to-image models.” Advances in Neural Information Processing Systems 37 (2024): 42211-42224.
  • Kajić, Ivana, and Aida Nematzadeh. “Evaluating Visual Number Discrimination in Deep Neural Networks.” Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 45. No. 45. 2023.
  • Albuquerque, I., Ktena, I., Wiles, O., Kajić, I., Rannen-Triki, A., Vasconcelos, C., & Nematzadeh, A. (2025). Benchmarking Diversity in Image Generation via Attribute-Conditional Human Evaluation. arXiv preprint arXiv:2511.10547.
  • Evans, Vyvyan, and Melanie Green. Cognitive linguistics: An introduction. Routledge, 2018.
  • Wei, Jason, et al. “Emergent abilities of large language models.” arXiv preprint arXiv:2206.07682 (2022).
  • Boroditsky, Lera. “Does language shape thought?: Mandarin and English speakers’ conceptions of time.” Cognitive psychology 43.1 (2001): 1-22.
  • Wicke, Philipp, Wachowiak, Lennart. “Exploring Spatial Schema Intuitions in Large Language and Vision Models” ACL 2024 Findings. 
  • Wicke, Philipp, and Marianna Bolognesi. “Emoji-based semantic representations for abstract and concrete concepts.” Cognitive processing 21.4 (2020): 615-635.

Lecturer

Ivana Kajić is a Senior Research Scientist at Google DeepMind in Montréal, Canada. Her research applies methods and techniques from cognitive science to the analysis and characterization of the behavior of machine learning models. In particular, she designs evaluation protocols, benchmarks, and metrics to comprehensively understand the capabilities and limitations of large vision-language models, which in recent years have demonstrated strong performance across a variety of tasks. She completed her PhD thesis, “Computational Mechanisms of Language Understanding and Use in the Brain and Behaviour”, in 2020 at the University of Waterloo in Canada.

Affiliation: Google DeepMind
Homepage: www.ivanakajic.me

Philipp Wicke studied Cognitive Science in the B.Sc. programme at the University of Osnabrück. During these studies he interned at the Dauwels Lab at NTU Singapore in the field of neuroinformatics, and at the Creative Language Systems Lab at UCD Dublin, where he later wrote his dissertation on “Computational Storytelling as an Embodied Robot Performance with Gesture and Spatial Metaphor”. He was an assistant professor at LMU Munich at the Center for Language and Information Processing (CIS) and a Researcher in Residency at the Center for Advanced Studies (CAS). Philipp researches Natural Language Processing and teaches Artificial Intelligence at BTU Cottbus. He is the Lead AI Engineer at AURYAL, a Europe-based neuro-tech startup funded by the German Federal Agency for Disruptive Innovation (SPRIND).

Affiliation: BTU Cottbus, AURYAL GmbH
Homepage: www.phil-wicke.com

ET3: Empathy in humans, nonhuman animals and AI systems: Multidimensional cognitive profiles as a tool for comparative evaluations

Lecturer: Albert Newen
Fields: Philosophy of Mind

Content

Despite some recent criticism, empathy is still seen as the glue that binds us and holds societies together. What exactly is empathy (the definitional question)? Is it uniquely human, or do some nonhuman animals possess it as well (the distribution question)? And which type or quality of empathy is realized in different species (the quality question)? We suggest a new methodological approach to answer all three questions: a species-sensitive, multifactorial profile theory of empathy. This includes the claim that we cannot offer a strict definition of empathy, with necessary and sufficient conditions, that captures the rich variety of empathic phenomena. Instead, we develop a multifactorial characterization of five typical and testable dimensions which, together and in sufficient clustering, indicate a type of empathy; furthermore, each of the five general dimensions is characterized in detail by several features, which allows us to test the degree of realization of each feature. The degree of implementation of a dimension results from the degrees of realization of its features. The framework starts from a minimal condition and constrains it with testable cognitive dimensions, so that depending on the degree to which these dimensions are realized, we can ascribe a species-specific profile of empathy. In an additional step, this framework is used to discuss whether and how AI systems can be empathic, which we investigate by looking at the present performance of Large Language Models (LLMs). The profile account needs additional aspects to work out the commonalities and differences of LLMs compared to humans and animals.

Literature

  • Srinivasan, R., & San Miguel González, B. (2022). The role of empathy for artificial intelligence accountability. Journal of Responsible Technology, 9, 100021. https://doi.org/10.1016/j.jrt.2021.100021
  • Newen et al. (accepted): Animal Empathy Reconsidered: A Multidimensional Profile Account. (For a copy, send an email to albert.newen@rub.de.)
  • Preston, S. D., & de Waal, F. B. M. (2002). Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences, 25(1), 1–20.

Lecturer


Albert Newen is full professor of philosophy at the Ruhr-University Bochum (RUB), Germany. His central research areas are philosophy of mind and cognition. Furthermore, he has been the director of the interdisciplinary Center for Mind and Cognition at RUB since 2011. He was president of the German Society for Cognitive Science (2018–2020), and since 2017 he has been the speaker of an interdisciplinary Research Training Group (DFG-Graduiertenkolleg) on “Situated Cognition”.

Affiliation: Ruhr-Universität Bochum, Institut für Philosophie II
Homepage: https://www.pe.ruhr-uni-bochum.de/philosophie/ii/newen/index.html.de

PC3 – Building Intelligent LLM Applications: From Foundations to Autonomous Agents

Lecturer: Kerem Şenel
Fields: Artificial Intelligence, Machine Learning, Natural Language Processing

Content

This hands-on course takes you from transformer fundamentals to cutting-edge agentic AI systems. You’ll learn how large language models work under the hood, train and fine-tune your own models, and build autonomous agents that use tools, access external resources, and collaborate to solve complex tasks. Each session combines theory with practical implementation using industry-standard frameworks like Hugging Face, LangChain, and the Model Context Protocol.

Session 1: Foundations & Architecture
Understanding Transformers: The Engine Behind Modern AI

Discover how LLMs predict and generate text through self-attention mechanisms. We’ll demystify the transformer architecture—from tokenization to embeddings—and you’ll implement a tokenizer from scratch while visualizing how models “pay attention” to different parts of text.
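
As a preview of the kind of exercise involved, scaled dot-product self-attention fits in a few lines of NumPy (random weights, purely illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings. Each token's output is a
    weighted mix of all value vectors, with weights given by query-key
    similarity; the weight matrix is what attention visualizations show.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (seq_len, seq_len)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 4, 8
X = rng.normal(size=(seq_len, d))          # 4 toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each row of `attn` sums to one: it is the distribution over positions that a token "pays attention" to.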

Session 2: Training Fundamentals
Training Your Own Language Model

Learn what it takes to train an LLM: pre-training objectives, dataset curation, and scaling laws. You’ll fine-tune a real model (GPT-2 or TinyLlama) on custom data using Hugging Face tools, understanding the trade-offs between model size, compute, and performance.
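
The next-token prediction objective itself can be illustrated without any deep learning machinery. In this toy sketch a bigram count table stands in for the transformer; only the loss, the average negative log-likelihood of each next token, is the same idea as in real pre-training:

```python
import numpy as np
from collections import defaultdict

# Toy "pre-training": count character bigrams in a tiny corpus. A real LLM
# replaces this table with a transformer, but optimizes the same objective.
corpus = "to be or not to be that is the question "
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_char_probs(ch):
    """Predicted distribution over the next character, given the current one."""
    total = sum(counts[ch].values())
    return {c: n / total for c, n in counts[ch].items()}

def cross_entropy(text):
    """Average negative log-likelihood of each next character, in nats.

    (Unseen bigrams would need smoothing in a real model; this toy assumes
    the evaluated text only contains pairs seen in the corpus.)
    """
    nll = [-np.log(next_char_probs(a)[b]) for a, b in zip(text, text[1:])]
    return float(np.mean(nll))

loss = cross_entropy("to be ")
```

Lowering this loss on held-out text is exactly what pre-training optimizes, and per-token loss curves are also how scaling laws are measured.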

Session 3: Advanced Training & Alignment
Making Models Helpful: Instruction Tuning & RLHF

Explore how base models are transformed into helpful assistants through instruction tuning and alignment techniques like RLHF. You’ll fine-tune a model to follow instructions and experiment with advanced prompting techniques including chain-of-thought reasoning.

Session 4: Cutting-Edge Applications & Agentic Systems
Autonomous AI: Building Agents That Think and Act

Go beyond chatbots to autonomous agents that use tools, access databases, and collaborate on complex tasks. You’ll implement function calling, integrate the Model Context Protocol (MCP) to connect LLMs with external resources, and build multi-agent systems using LangGraph—culminating in an autonomous agent demo.
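
The core of function calling is simple: the model emits a structured tool call, the runtime executes it, and the result is fed back into the conversation. The sketch below is a hypothetical minimal dispatcher in plain Python; the tool names and JSON schema are illustrative, not the actual OpenAI, MCP, or LangGraph interfaces:

```python
import json

# A toy tool registry: each tool is a plain function the "agent" may invoke.
TOOLS = {
    "add": lambda a, b: a + b,
    "lookup_capital": lambda country: {"France": "Paris"}.get(country, "unknown"),
}

def run_tool_call(message: str) -> str:
    """Execute a JSON tool call like {"tool": "add", "args": {"a": 1, "b": 2}}.

    Returns a JSON string that would be appended to the model's context,
    letting it reason over the tool's result in the next turn.
    """
    call = json.loads(message)
    result = TOOLS[call["tool"]](**call["args"])
    return json.dumps({"tool": call["tool"], "result": result})

reply = run_tool_call('{"tool": "lookup_capital", "args": {"country": "France"}}')
```

Frameworks like LangGraph and protocols like MCP add schemas, discovery, and multi-step orchestration on top, but this call-execute-feed-back loop is the underlying pattern.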

Lecturer

Kerem Şenel received his PhD in Computer Science from LMU Munich in 2025, specializing in Natural Language Processing. During his doctoral research at the Center for Information and Language Processing (CIS), he explored diverse topics including interpretability, multilinguality, and evaluation of language models. He currently works as an IT consultant in industry, specializing in AI applications and solutions.

Affiliation: TNG Technology Consulting
Homepage: https://www.tngtech.com/en/

SC2 – At the intersection between Virtual Reality and Brain-Computer Interfaces

Lecturer: Léa Pillette
Fields: Brain-Computer Interface, Neurofeedback, Virtual Reality, Human-Computer Interaction, Neurology, Computer Science

Content

This course provides a comprehensive overview of how Virtual Reality (VR) intersects with Brain-Computer Interfaces (BCIs), neurotechnologies that introduce promising possibilities for interacting with digital devices solely through the acquisition and analysis of brain activity, typically measured using electroencephalography.

BCIs enhance VR applications in two key ways: by enabling direct control of virtual elements through mental commands, such as imagining hand movements to guide a virtual character, and by gathering real-time neural data to adapt and personalize the VR experience to the user's cognitive and emotional state. Conversely, VR platforms offer immersive environments that facilitate BCI user training and rehabilitation, creating tailored scenarios that improve brain activity modulation and learning outcomes.

The course is structured into four sessions: the first two cover the mutual benefits of integrating BCI with VR technologies. The final two sessions will focus on therapies that utilize VR-based and BCI-based approaches independently, as well as innovative interventions at the intersection of both technologies. These combined VR-BCI therapies harness neurofeedback and immersive environments to promote functional recovery, for instance in motor rehabilitation after stroke. This integrated approach provides patient-centered, adaptable, and motivating rehabilitation protocols that leverage real-time brain activity monitoring to enhance neuroplasticity and clinical outcomes.

Session 1 and 2: mutual benefits of integrating BCI with VR technologies
Sessions 3 and 4: state of the art on VR and BCI-based therapies

Literature

  • Drigas, A., & Sideraki, A. (2024). Brain neuroplasticity leveraging virtual reality and brain–computer interface technologies. Sensors, 24(17), 5725.
  • Kober, S. E., Wood, G., & Berger, L. M. (2024). Controlling virtual reality with brain signals: state of the art of using VR-based feedback in neurofeedback applications. Applied Psychophysiology and Biofeedback, 1-20.
  • Lotte, F., Faller, J., Guger, C., Renard, Y., Pfurtscheller, G., Lécuyer, A., & Leeb, R. (2012). Combining BCI with virtual reality: towards new applications and improved BCI. In Towards practical brain-computer interfaces: Bridging the gap from research to real-world applications (pp. 197-220). Berlin, Heidelberg: Springer Berlin Heidelberg.
  • Roc, A., Pillette, L., Mladenovic, J., Benaroch, C., N’Kaoua, B., Jeunet, C., & Lotte, F. (2021). A review of user training methods in brain computer interfaces based on mental tasks. Journal of Neural Engineering, 18(1), 011002.

Lecturer

Dr. Léa Pillette is a CNRS researcher and member of the Seamless team at IRISA, Rennes, France, since 2022. She obtained her PhD in computer science from the University of Bordeaux in 2019. Her research focuses on developing innovative methods to train individuals to regulate their brain activity, enabling more accessible and effective use of brain-computer interfaces for applications such as medical interventions and virtual world interactions.

Affiliation: Univ. Rennes, Inria, CNRS, IRISA, Rennes, France
Homepage: https://lea-pillette.ovh/

MC3 – Introduction to Robotics and Active Learning

Lecturer: Tim Tiedemann
Fields: Robotics, Machine Learning

Content

This course will try to build a bridge between machine learning and robotics.
– Session 1 will give an introduction to robotics, including robotic software frameworks and potential starting points for one's own robotic experiments.
– Session 2 will continue with bio-robotics (showing how biology gained new insights from robotics) and introduce (non-robotic) active learning.
– Session 3 will combine both robotics and active learning. We will also talk about the idea of embodiment.

Literature

  • Siciliano et al. (Eds.) (2016, 2024). Springer Handbook of Robotics. Springer.

Lecturer

Prof. Tim Tiedemann

Since 2016: Professor of Intelligent Sensing at the University of Applied Sciences Hamburg (HAW Hamburg, Hamburg, Germany). Main research interests: sensors and sensor data processing (including machine learning methods) and robotics.
2010–2016: Postdoc in the area of space and underwater robotics at the German Research Center for Artificial Intelligence (DFKI) in Bremen, Germany.
2009: Ph.D. in biorobotics, with a focus on the transfer of (neuro-)biological concepts to the robotic domain.
2003–2010: Research assistant at the Computer Engineering Group, Bielefeld University.
2003: Research assistant at the Cognitive Psychology Group, Bielefeld University.
2003: Diploma in computer science (i.e. Master's degree level), with a focus on robotics and neural networks, Bielefeld University, Bielefeld, Germany.

Affiliation: HAW Hamburg

SC4 – Introduction to Intelligent User Interfaces

Lecturer: Sven Mayer
Fields: Artificial Intelligence, Human-Computer Interaction, Human-AI Interaction

Content

The course Introduction to Intelligent User Interfaces (IUI) introduces participants to key concepts at the intersection of Human-Computer Interaction (HCI) and Artificial Intelligence (AI). It explores how methods from Machine Learning and AI can be transferred to the design of interactive systems that act intelligently, adapt to users, and support human goals. Emphasis is placed on a human-centered perspective that prioritizes usability, transparency, and user trust. Across four sessions, participants will gain a conceptual understanding of the foundations, design principles, and open challenges of intelligent user interfaces, preparing them to critically assess and discuss current and future developments in this field.
  • Session 1: Motivation and Introduction
  • Session 2: Machine Learning and Human-Computer Interaction basics
  • Session 3: Designing, Building, and Evaluating Human-AI Systems
  • Session 4: Human-Centered Challenges and Future Directions

Literature

  • Andy Field and Graham Hole (2002). How to Design and Report Experiments
  • Kasper Hornbæk, Per-Ola Kristensson, and Antti Oulasvirta (2025). Introduction to Human-Computer Interaction. Oxford University Press.
  • Course: Intelligent User Interfaces, https://iui-lecture.org/
  • Course: Practical Machine Learning, https://sven-mayer.com/pml/
  • Course: Human-Computer Interaction, https://hci-lecture.org/

Lecturer

Sven Mayer is a full professor of computer science at TU Dortmund University (Germany) and the Research Center Trustworthy Data Science and Security, where he heads the chair for Human-AI Interaction. His research focuses on Human-AI Interaction at the intersection of Human-Computer Interaction and Artificial Intelligence, with an emphasis on the next generation of computing systems. He uses artificial intelligence to design, build, and evaluate future human-centered interfaces. In particular, he envisions enabling humans, in collaboration with machines, to surpass what they could achieve alone. He focuses on areas such as augmented and virtual reality, mobile scenarios, and robotics.

Affiliation: TU Dortmund University
Homepage: https://sven-mayer.com/

PC5 – Bridging Realities of the Self: A Self-Experience Workshop

Lecturer: Katharina Krämer, Annekatrin Vetter, Sophia Reul
Fields: Psychology, Psychotherapy

Content

Where do apparently opposite qualities of experience show up in our inner lives? How do thinking and feeling fit together — and where don’t they? What shapes the connection between body and mind? How do we experience the outer world, and what do we experience within — and how can these be linked and integrated? What stays unconscious, and what becomes conscious?
These and similar questions are at the heart of our self-experience workshop. Based on experiential exercises drawing from psychoanalysis, humanistic psychology, and body-oriented approaches, participants are invited into a reflective space to explore self-awareness, perception, and communication. No prior knowledge is required — all you need is a little curiosity and a willingness to gently step beyond the edge of your comfort zone. Then self-experience can become a bridge to new realities of being and relating.

Lecturer

Katharina Krämer is a psychologist and analytic psychotherapist. She works as a professor for psychology at the Rheinische Hochschule Köln, University of Applied Sciences, Cologne, Germany, and as a lecturer and supervisor for psychotherapists in training. Additionally, she works as a psychotherapist in private practice. In 2014, Katharina Krämer received her doctoral degree from the University of Cologne, Germany, on a thesis investigating the perception of dynamic nonverbal cues in cross-cultural psychology and high-functioning autism. Her research interests include the application of Mentalization-Based Group-Therapy with patients with autism and the vocational integration of patients with autism.

Affiliation: Rheinische Hochschule Köln, University of Applied Sciences
Homepage: https://rh-koeln.de/dozierende/katharina-krmer


Annekatrin Vetter is a clinical psychologist and analytic psychotherapist. As a psychotherapist, she treats patients with different mental disorders in private practice. Additionally, she works as a lecturer and supervisor for psychotherapists in training and as a trainer for Coaches at Inscape – Coaching & Counselling, Cologne, Germany.

Affiliation: Praxis für Psychotherapie und Psychoanalyse, Supervision und Coaching Annekatrin Vetter, Cologne

Sophia Reul is a clinical psychologist and analytic psychotherapist. She works as a psychotherapist in private practice. In 2021, she received her doctoral degree from the Westfälische Wilhelms-Universität Münster, Germany, for a thesis investigating the impact of neuropsychological methods in the diagnosis of early dementia. Today, her research interests include the application of Mentalization-Based Group Therapy (MBT-G) with patients with autism.

Affiliation: Praxis für Psychotherapie und Psychoanalyse Sophia Reul, Kirchweidach (Bay.)

BC2 – Introduction to Theoretical Neuroscience

Lecturer: Terrence Stewart
Fields: Computational Neuroscience, Neuroscience, AI

Content

This course provides an overview of computational neuroscience, the science of creating computer simulations of neurons, groups of neurons, and different brain systems, and then comparing the results of these simulations to the behaviour of real brains. This lets us better understand how brains work, and it also has the potential to inspire new types of Artificial Intelligence systems.

We start by looking at individual neurons and their details, then move to the three major approaches to making large-scale models capable of producing detailed behaviour: Parallel Distributed Processing (PDP++/Emergent), Dynamic Neural Fields (DNF/Cedar), and the Neural Engineering Framework (NEF/Nengo). Python notebooks will be provided for hands-on examples.

Session 1: Individual neurons
Session 2: Many neurons in parallel (PDP++)
Session 3: Dynamic Neural Fields (DNF)
Session 4: The Neural Engineering Framework and Nengo
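
As a taste of Session 1, a leaky integrate-and-fire neuron, one of the simplest single-neuron models, can be simulated in a few lines of Python. Units and parameters below are normalized and illustrative, not taken from the course notebooks:

```python
import numpy as np

def simulate_lif(input_current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dv/dt = (I - v) / tau.

    Integrated with forward Euler; a spike is emitted and the voltage
    reset whenever v crosses threshold. Returns the voltage trace and
    a boolean spike train.
    """
    v, spikes, voltages = 0.0, [], []
    for I in input_current:
        v += dt * (I - v) / tau
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset
        else:
            spikes.append(False)
        voltages.append(v)
    return np.array(voltages), np.array(spikes)

# One simulated second of constant supra-threshold input: regular firing.
I = np.full(1000, 1.5)
v, spikes = simulate_lif(I)
rate = spikes.sum()  # spikes per simulated second
```

Sweeping the input current and plotting the resulting rate gives the neuron's tuning curve, the building block that frameworks like Nengo compose into large-scale functional models.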

Literature

  • Kriegeskorte, N., & Douglas, P. K. (2018). Cognitive computational neuroscience. Nature neuroscience, 21(9), 1148–1160. https://doi.org/10.1038/s41593-018-0210-5
  • Rumelhart, D., & McClelland, J. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. MIT Press, Cambridge, MA, USA
  • Schöner, G. (2023). Dynamical Systems Approaches to Cognition. In Sun, Ron (Ed.), The Cambridge Handbook of Computational Cognitive Sciences (2nd ed.). Cambridge University Press.
  • Stewart, T.C., & Eliasmith, C. (2014). Large-scale synthesis of functional spiking neural circuits. Proceedings of the IEEE, 102(5):881–898.

Lecturer

Terry Stewart is a Senior Research Officer at the National Research Council Canada, and Site Lead of the NRC-University Waterloo Collaboration Centre. His research includes large-scale brain simulation, cognitive modelling, energy-efficient neuromorphic computing, and AI safety.

Affiliation: National Research Council Canada

ET2 – Nanobrains on Microchips: a possibly possible answer for a possibly impossible question

Lecturer: Herbert Jaeger
Fields: all IK disciplines and some more

Content

In this talk I want to explain my love affair with a scientific field that has no name – yet. Or rather, it has too many names. The current most popular branding is ‘neuromorphic computing’. However, when you hear ‘natural computing’, ‘in-materio-computing’, ‘physical computing’, ‘unconventional computing’, ‘non-digital computing’, ‘fluent computing’ (I have a list of twenty more), it might mean the same thing – or something else. The idea behind all of this is to engineer novel kinds of microchips to get novel kinds of ‘computing’ directly out of nanoscale physics, just like real brains pull their cognition magic from neuronal biophysics, without the detour of digital simulation of neural dynamics. Current big-scale funding for this sort of research thrives on the hope to replace Gigawatt AI server farms with artificial microbrains that burn only 20 Watts, like our brains. And as a bonus, to get implantable neuro-implants that need no batteries, or principally un-hackable computing machines, or robustly self-adapting edge computing. Super fascinating. Alas, nobody seems to have a clue how to do it – yet.

Literature

  • H. Jaeger, B. Noheda, W.G. van der Wiel (2023): Toward a formal theory for computing machines made out of whatever physics offers. Nature Communications 14, 4911 (or the long version, 70 pages: https://arxiv.org/abs/2307.15408)

Lecturer

Herbert Jaeger studied mathematics and psychology in Freiburg (Germany), got his PhD in Computer Science / AI in Bielefeld (Germany), and then did a postdoc fellowship at the (then) German National Research Institute for Mathematics and Computer Science (GMD) in Sankt Augustin (Germany), where he subsequently founded the research unit on modeling intelligent dynamical systems (MINDS). From 2001 to 2019 he served as professor of Computing Science at Jacobs University Bremen (Germany), and since 2019 he has been Professor for Computing in Cognitive Materials at the University of Groningen. Current research focus: mathematical foundations for computing in non-digital physical substrates. Jaeger retired in 2025 and now has almost enough time for thinking. With only one exception (stupid flu), he has attended all IKs, and he has served in various functions for the IK community since its beginning in 1997.

Affiliation: University of Groningen
Homepage: https://www.ai.rug.nl/minds/