Lecturer: Thomas Eßmeyer Fields: Human Computer Interaction, Psychology
Content
The design of online content increasingly governs and disrupts our choices, while the truthfulness of that content becomes harder to assess. One root cause lies in the design of user interfaces, which are often driven by commercial incentives that conflict with users’ agency and best interests. In a time of generative AI and LLMs, where the design process is frequently offloaded to automated systems, it is all the more important to understand the ethical caveats of modern technologies before they leave our labs. This course will discuss how deceptive design and dark patterns shape our behaviour and expectations, with consequences ranging from frustration to actual harm. We will review the current state of the art in this research domain, take an excursion into regulatory protective measures, and discuss paths forward for developing human-centred technologies. Below is a preliminary structure for this course, which will be accompanied by interactive elements. This structure may be subject to slight changes:
Session 1 introduces the concept of dark patterns in the context of human-centred design. Session 2 discusses the cognitive mechanisms and biases at play and risks through generative AI and LLMs. Session 3 addresses both legal and organisational responsibilities when people are harmed. Session 4 covers ethical caveats for the development of user interfaces and what we can do better.
Literature
Colin M. Gray, Cristiana Teixeira Santos, Nataliia Bielova, and Thomas Mildner. 2024. An Ontology of Dark Patterns Knowledge: Foundations, Definitions, and a Pathway for Shared Knowledge-Building. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–22. https://doi.org/10.1145/3613904.3642436
Ana Caraban, Evangelos Karapanos, Daniel Gonçalves, and Pedro Campos. 2019. 23 Ways to Nudge: A Review of Technology-Mediated Nudging in Human-Computer Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3290605.3300733
Richard H. Thaler and Cass R. Sunstein. 2008. Nudge: improving decisions about health, wealth, and happiness. Yale University Press, New Haven.
Arunesh Mathur, Jonathan Mayer, and Mihir Kshirsagar. 2021. What Makes a Dark Pattern… Dark? Design Attributes, Normative Considerations, and Measurement Methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3411764.3445610
Lecturer
After successfully completing a BA in Digital Media and an MSc in Computer Science, Dr Thomas Eßmeyer (né Mildner) received his PhD from the University of Bremen. In his work, Thomas focuses on user wellbeing and countermeasures to unfair and deceptive design practices, often referred to as Dark Patterns.
Lecturer: Thorsten O. Zander Fields: Artificial Intelligence, Neuroscience
Content
Brain–computer interfaces (BCIs) extend the interaction space by translating neurophysiological activity into machine-relevant information. In this course, we treat BCIs as human–computer interfaces that add a direct information channel from the brain, rather than as “mind reading.” We structure the field by interaction function and intent: (i) active/reactive BCIs for direct control and communication, and (ii) passive BCIs that infer covert user state (e.g., workload, attention, error processing, surprise, affect-related responses) to enable neuroadaptive technology, that is, systems that adapt their behavior based on implicit neurophysiological evidence.
We build a coherent end-to-end view of the BCI pipeline: selecting target signals with neuroscientific motivation, acquiring EEG, establishing a design that supports inference, transforming signals into robust features, training and validating models, and integrating outputs into real-time applications. Throughout, we emphasize what typically breaks when moving from clean laboratory calibration to interactive, non-stationary, artifact-rich contexts. In particular, we contrast lab performance metrics with application-oriented evaluation, and we discuss typical failure modes such as artifact learning, overfitting to calibration, and hidden confounds.
The course uses EEG as the main modality because it remains the most widely deployed non-invasive option. We cover what EEG can and cannot represent, why context and task structure shape interpretation, and how time-, frequency-, and time–frequency-based perspectives map to neural rhythms and event-related responses.
Learning outcomes
After the course, participants will be able to: – Distinguish active/reactive and passive BCI interaction modes and map them to HCI use cases. – Explain the EEG measurement chain and the major determinants of signal quality and interpretability. – Design BCI calibration and evaluation paradigms that reduce confounds and support generalization. – Implement the core analysis logic from raw EEG to features and classification/regression outputs. – Critically assess validation, identify common pitfalls, and choose evaluation strategies appropriate for real-world deployment.
Sessions
Session 1 — What is a BCI, and what is it for?
We define BCIs as systems that translate brain activity into machine-relevant information and situate them in human–computer interaction. We introduce a functional taxonomy: direct-control BCIs (active/reactive) versus passive BCIs for covert state assessment in neuroadaptive systems. Building on this, we discuss the practical consequences of design choices: user learning versus machine learning, calibration requirements, and the basic operating loop (calibration → model training → online inference → feedback/adaptation). We also introduce the notion of augmenting the information space of an interactive system by adding channels that represent user state and context.
Session 2 — EEG as a measurement and inference substrate
We cover EEG fundamentals needed for BCI work: sensors and impedances, referencing, reproducible placement (10–20/10–10), sampling, filtering, and physiological constraints on scalp-level observability. We connect measurement to experimental design: synchronous versus asynchronous paradigms, causal versus non-causal processing constraints, and how artifact structure (EOG/EMG/motion) can dominate apparent “BCI performance” if not controlled. We discuss how to design calibration tasks that produce valid labels for the target state, and how to reduce confounds using behavioral and peripheral measures.
Session 3 — From EEG to features and models
We treat feature extraction as “setting a focus” on signal aspects relevant to the cognitive or affective target construct. We cover epoching/segmentation, time-domain and spectral representations, spatial filtering, and how features become machine-learning inputs. We then introduce classification/regression logic with an emphasis on separability, class imbalance, and generalization beyond calibration. We discuss why continuous state estimates can be more robust than hard thresholds in non-stationary contexts, and what that implies for system design and validation.
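The epoch-then-featurize step described above can be sketched in a few lines. The following is a minimal illustration, not a production pipeline: it cuts a continuous signal into event-locked epochs and computes spectral band power with a naive DFT (real toolchains would use optimized FFTs, spatial filters, and artifact handling). The signal, onsets, and band limits are hypothetical toy values.

```python
import math

def epoch(signal, onsets, pre, post):
    """Cut a continuous signal (list of samples) into fixed-length
    epochs around each event onset (given as sample indices)."""
    return [signal[o - pre:o + post] for o in onsets
            if o - pre >= 0 and o + post <= len(signal)]

def band_power(samples, fs, f_lo, f_hi):
    """Naive DFT-based power summed over the band [f_lo, f_hi] Hz."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(-2 * math.pi * k * i / n)
                     for i, x in enumerate(samples))
            im = sum(x * math.sin(-2 * math.pi * k * i / n)
                     for i, x in enumerate(samples))
            power += (re * re + im * im) / n
    return power

# Toy example: a 10 Hz "alpha-like" oscillation sampled at 100 Hz.
fs = 100
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(500)]
epochs = epoch(signal, onsets=[100, 200, 300], pre=50, post=50)
features = [band_power(e, fs, 8, 12) for e in epochs]
```

The resulting per-epoch band-power values are exactly the kind of feature vector that would be fed to a classifier or regressor in the next step of the pipeline.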
Session 4 — Validation, failure modes, and neuroadaptive HCI case studies
We discuss how to verify whether a BCI meets the assumptions behind its design. We contrast (i) effect-driven validation (performance across calibration/test/application), (ii) data-driven diagnostics (quality indices, artifact sensitivity, distribution shift), and (iii) neuroscience-informed checks (plausible spatio-temporal patterns, condition contrasts). We conclude with passive BCI and neuroadaptive HCI case studies, including error-related and workload/attention-related signals, and we highlight boundary conditions for transfer from lab paradigms to interactive real-world use.
Literature
Vidal, J. J. (1973). Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering.
Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002). Brain–computer interfaces for communication and control. Clinical Neurophysiology.
Wolpaw, J. R., & Wolpaw, E. W. (2012). Brain–Computer Interfaces: Principles and Practice.
Zander, T. O., & Kothe, C. (2011). Towards passive brain–computer interfaces: applying BCI technology to human–machine systems in general. Journal of Neural Engineering.
Brouwer, A.-M., Zander, T. O., et al. (2015). Six recommendations to avoid common pitfalls when using neurophysiological signals for cognitive/affective state inference. Frontiers in Neuroscience.
Zander, T. O., & Jatzev, S. Context-aware BCIs and the information space of user, technical system, and environment. Journal of Neural Engineering (early 2010s).
Blankertz, B., et al. (2007). The non-invasive Berlin brain–computer interface. NeuroImage.
Stern, J. (2013). Atlas of EEG Patterns (2nd ed.).
Muthukumaraswamy, S. D., Johnson, B. W., & McNair, N. A. (2004). Mu rhythm modulation… Cognitive Brain Research.
Maeder, C. L., et al. (2012). Pre-stimulus sensorimotor rhythms influence BCI
Lecturer
Prof. Dr. rer. nat. Thorsten O. Zander is Lichtenberg Professor for Neuroadaptive Human–Technology Interaction at Brandenburg University of Technology (BTU) Cottbus-Senftenberg, where he researches neuroadaptive interaction using passive brain–computer interfaces, including implicit interaction with technology, cognitive exploration for the automated guidance of artificial intelligences, and the ethics of neuroadaptive technologies. He previously held postdoctoral positions at TU Berlin (Biological Psychology and Neuroergonomics) and the Max Planck Institute for Intelligent Systems in Tübingen, and served as a group leader at TU Berlin. He studied mathematics (with a focus on mathematical logic) at the University of Münster and earned his PhD at TU Berlin on the overarching topic of applying brain–computer interfaces to human–machine systems. His work has been recognized with awards, including the Raja Parasuraman Award (Best Senior Researcher, Neuroergonomics Society) and the Best Dissertation award of the Willumeit Foundation.
Lecturer: Ivana Kajić, Philipp Wicke Fields: Cognitive Science, Linguistics, Artificial Intelligence
Content
This course examines the relationship between language, perception, and intelligence, using recent developments in generative AI as a central case study. Moving from cognitive linguistics to multimodal machine learning systems, the course investigates how systems transition from text-based representations to models that increasingly integrate perception and action. Across four lectures, we move from theoretical foundations to technical architectures and finally to societal and industrial implications.
Lecture 1 introduces the conceptual foundations of the course. We explore the hypothesis that human thinking is deeply structured by language, examining linguistic universals, linguistic relativity, and the role of metaphor and conceptual framing. Language is presented not merely as a communicative tool but as a generative system that structures world models. This session establishes the idea that if human cognition is scaffolded by language, then language-trained AI systems offer a particularly revealing lens through which to rethink intelligence.
Lecture 2 shifts the focus to embodiment and perceptual grounding. We examine theories of embodied cognition and consider how bodily experience shapes conceptual systems. The lecture discusses how abstract thought is rooted in sensorimotor experience and presents language as an interface between pre-linguistic cognition and articulated reasoning. By contrasting embodied human cognition with predominantly text-trained AI systems, this session sharpens the central question of the course: can intelligence emerge from language alone, or does meaningful understanding require grounding in perception and action?
Lecture 3 explores the technical foundations of modern generative AI, moving from large language models (LLMs) to multimodal architectures. After reviewing the core principles of transformer-based language models, the lecture expands to vision–language models, multimodal training paradigms, and large-scale deployment techniques such as retrieval-augmented generation and in-context learning. The session highlights how these systems are developed in practice, the role of human data and alignment, and current challenges including interpretability and safety. By examining how AI systems increasingly integrate text and perception, we assess both their capabilities and structural limitations.
Lecture 4 turns to real-world applications and broader impact. Rather than focusing exclusively on speculative AGI narratives, this session highlights how AI is already shaping scientific research, industrial processes, and economic infrastructures. We examine examples from scientific discovery, energy optimization, manufacturing, and operations research, alongside ongoing debates around trust, labor, and human–AI interaction. Designed as an interactive and discussion-based session, this lecture also critically evaluates the gap between technological hype and practical implementation, offering a forward-looking yet grounded perspective on the future of multimodal and agentic systems.
Literature
Kajić, Ivana, et al. “Evaluating numerical reasoning in text-to-image models.” Advances in Neural Information Processing Systems 37 (2024): 42211-42224.
Kajić, Ivana, and Aida Nematzadeh. “Evaluating Visual Number Discrimination in Deep Neural Networks.” Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 45. No. 45. 2023.
Albuquerque, I., Ktena, I., Wiles, O., Kajić, I., Rannen-Triki, A., Vasconcelos, C., & Nematzadeh, A. (2025). Benchmarking Diversity in Image Generation via Attribute-Conditional Human Evaluation. arXiv preprint arXiv:2511.10547.
Evans, Vyvyan, and Melanie Green. Cognitive linguistics: An introduction. Routledge, 2018.
Wei, Jason, et al. “Emergent abilities of large language models.” arXiv preprint arXiv:2206.07682 (2022).
Boroditsky, Lera. “Does language shape thought?: Mandarin and English speakers’ conceptions of time.” Cognitive psychology 43.1 (2001): 1-22.
Wicke, Philipp, and Lennart Wachowiak. “Exploring Spatial Schema Intuitions in Large Language and Vision Models.” Findings of the Association for Computational Linguistics: ACL 2024.
Wicke, Philipp, and Marianna Bolognesi. “Emoji-based semantic representations for abstract and concrete concepts.” Cognitive processing 21.4 (2020): 615-635.
Lecturer
Ivana Kajić is a Senior Research Scientist at Google DeepMind in Montréal, Canada. Her research interests include applying methods and techniques from cognitive science to the analysis and characterization of the behavior of machine learning models. Specifically, this includes designing evaluation protocols, benchmarks, and metrics to comprehensively understand the capabilities and limitations of large vision-language models, which in recent years have demonstrated strong performance across a variety of tasks. She completed her PhD thesis, “Computational Mechanisms of Language Understanding and Use in the Brain and Behaviour”, in 2020 at the University of Waterloo in Canada.
Philipp Wicke studied Cognitive Science in the B.Sc. programme at the University of Osnabrück. During these studies he interned at the Dauwels Lab at NTU Singapore in the field of neuroinformatics, and at the Creative Language Systems Lab at UCD Dublin, where he later wrote his dissertation on “Computational Storytelling as an Embodied Robot Performance with Gesture and Spatial Metaphor”. He was an assistant professor at LMU Munich at the Center for Language and Information Processing (CIS) and a Researcher in Residency at the Center for Advanced Studies (CAS). Philipp’s research focuses on Natural Language Processing, and he teaches Artificial Intelligence at BTU Cottbus. He is the Lead AI Engineer at AURYAL, a Europe-based neuro-tech startup funded by the German Federal Agency for Disruptive Innovation (SPRIND).
Despite some recent criticism, empathy is still seen as the glue that binds us and holds societies together. What exactly is empathy (the definitional question)? Is it uniquely human, or do some nonhuman animals possess empathy (the distribution question)? And which type or quality of empathy is realized in different species (the quality question)? We suggest a new methodological approach to answer all three questions, namely a species-sensitive, multifactorial profile theory of empathy. This includes the claim that we cannot offer a strict definition of empathy, with necessary and sufficient conditions, that captures the rich variety of empathic phenomena. Instead, we develop a multifactorial characterization of five typical and testable dimensions which together, in sufficient clustering, indicate a type of empathy; furthermore, each of the five general dimensions is characterized in detail by several features, which allows us to test the degree of realization of each feature. The degree of implementation of a dimension results from the degrees of realization of the relevant features. This framework is unfolded by starting with a minimal condition, which we then constrain with testable cognitive dimensions, such that, depending on the degree to which these dimensions are realized, we can ascribe a species-specific profile of empathy. In an additional step, this framework is used to discuss whether and how AI systems can be empathic. This is investigated by looking at the present performance of Large Language Models (LLMs). The profile account needs additional aspects to work out the commonalities and differences between LLMs and humans and animals.
Literature
Ramya Srinivasan and Beatriz San Miguel González (2022). The role of empathy for artificial intelligence accountability. Journal of Responsible Technology, 9, 100021.
Albert Newen is full professor of philosophy at the Ruhr-University Bochum (RUB), Germany. His central research areas are philosophy of mind and cognition. Furthermore, he is the director of the interdisciplinary Center for Mind and Cognition at RUB since 2011. He was president of the German Society for Cognitive Science (2018-2020) and since 2017, he is the speaker of an interdisciplinary Research Training Group (DFG-Graduiertenkolleg) on “Situated Cognition”.
This hands-on course takes you from transformer fundamentals to cutting-edge agentic AI systems. You’ll learn how large language models work under the hood, train and fine-tune your own models, and build autonomous agents that use tools, access external resources, and collaborate to solve complex tasks. Each session combines theory with practical implementation using industry-standard frameworks like Hugging Face, LangChain, and the Model Context Protocol.
Session 1: Foundations & Architecture Understanding Transformers: The Engine Behind Modern AI
Discover how LLMs predict and generate text through self-attention mechanisms. We’ll demystify the transformer architecture—from tokenization to embeddings—and you’ll implement a tokenizer from scratch while visualizing how models “pay attention” to different parts of text.
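To give a flavour of the tokenizer exercise mentioned above, here is a minimal word-level tokenizer sketch. It is deliberately simplified: the `SimpleTokenizer` class and its vocabulary are illustrative inventions, and real LLM tokenizers use subword schemes such as byte-pair encoding rather than whole words.

```python
class SimpleTokenizer:
    """Minimal word-level tokenizer: maps each known word to an
    integer id; unknown words share a single <unk> id (0)."""

    def __init__(self, corpus):
        words = sorted(set(corpus.split()))
        self.vocab = {"<unk>": 0}
        for w in words:
            self.vocab[w] = len(self.vocab)
        self.inverse = {i: w for w, i in self.vocab.items()}

    def encode(self, text):
        """Turn a string into a list of integer token ids."""
        return [self.vocab.get(w, 0) for w in text.split()]

    def decode(self, ids):
        """Turn a list of token ids back into a string."""
        return " ".join(self.inverse[i] for i in ids)

tok = SimpleTokenizer("the cat sat on the mat")
ids = tok.encode("the cat sat")
```

Encoding and decoding are inverse operations for in-vocabulary text; anything out of vocabulary collapses to `<unk>`, which is exactly the limitation subword tokenizers were designed to avoid.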
Session 2: Training Fundamentals Training Your Own Language Model
Learn what it takes to train an LLM: pre-training objectives, dataset curation, and scaling laws. You’ll fine-tune a real model (GPT-2 or TinyLlama) on custom data using Hugging Face tools, understanding the trade-offs between model size, compute, and performance.
Session 3: Advanced Training & Alignment Making Models Helpful: Instruction Tuning & RLHF
Explore how base models are transformed into helpful assistants through instruction tuning and alignment techniques like RLHF. You’ll fine-tune a model to follow instructions and experiment with advanced prompting techniques including chain-of-thought reasoning.
Session 4: Cutting-Edge Applications & Agentic Systems Autonomous AI: Building Agents That Think and Act
Go beyond chatbots to autonomous agents that use tools, access databases, and collaborate on complex tasks. You’ll implement function calling, integrate the Model Context Protocol (MCP) to connect LLMs with external resources, and build multi-agent systems using LangGraph—culminating in an autonomous agent demo.
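The decide-act-observe loop at the heart of such agents can be sketched without any framework. In the toy example below, the model call is stubbed out with a hand-written `fake_model`; in practice an LLM (via function calling or MCP) would choose the tool and its arguments. All names here are hypothetical.

```python
# Registry of callable tools the agent may invoke.
TOOLS = {
    "add": lambda a, b: a + b,
    "lookup": lambda key: {"capital_of_france": "Paris"}.get(key, "unknown"),
}

def fake_model(task):
    """Stand-in for an LLM: maps a task string to a (tool, args)
    decision. A real agent would get this from model output."""
    if task.startswith("sum"):
        _, a, b = task.split()
        return "add", (int(a), int(b))
    return "lookup", (task,)

def agent_step(task):
    """One iteration of the agent loop: decide -> act -> observe."""
    tool_name, args = fake_model(task)
    observation = TOOLS[tool_name](*args)
    return {"tool": tool_name, "observation": observation}

result = agent_step("sum 2 3")
```

Frameworks like LangGraph essentially wrap this loop with state management, branching, and multi-agent coordination, but the underlying control flow is the same.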
Lecturer
Kerem Şenel received his PhD in Computer Science from LMU Munich in 2025, specializing in Natural Language Processing. During his doctoral research at the Center for Information and Language Processing (CIS), he explored diverse topics including interpretability, multilinguality, and evaluation of language models. He currently works as an IT consultant in industry, specializing in AI applications and solutions.
This course provides a comprehensive overview of how Virtual Reality (VR) intersects with Brain-Computer Interfaces (BCIs), neurotechnologies that introduce promising possibilities for interacting with digital devices solely through the acquisition and analysis of brain activity, typically measured using electroencephalography.
BCIs enhance VR applications in two key ways: by enabling direct control of virtual elements through mental commands, such as imagining hand movements to guide a virtual character, and by gathering real-time neural data to adapt and personalize the VR experience to the user’s cognitive and emotional state. Conversely, VR platforms offer immersive environments that facilitate BCI user training and rehabilitation, creating tailored scenarios that improve brain activity modulation and learning outcomes.
The course is structured into four sessions: the first two cover the mutual benefits of integrating BCI with VR technologies. The final two sessions will focus on therapies that utilize VR-based and BCI-based approaches independently, as well as innovative interventions at the intersection of both technologies. These combined VR-BCI therapies harness neurofeedback and immersive environments to promote functional recovery, for instance in motor rehabilitation after stroke. This integrated approach provides patient-centered, adaptable, and motivating rehabilitation protocols that leverage real-time brain activity monitoring to enhance neuroplasticity and clinical outcomes.
Sessions 1 and 2: mutual benefits of integrating BCI with VR technologies. Sessions 3 and 4: state of the art on VR- and BCI-based therapies.
Literature
• Drigas, A., & Sideraki, A. (2024). Brain neuroplasticity leveraging virtual reality and brain–computer interface technologies. Sensors, 24(17), 5725.
• Kober, S. E., Wood, G., & Berger, L. M. (2024). Controlling virtual reality with brain signals: state of the art of using VR-based feedback in neurofeedback applications. Applied psychophysiology and biofeedback, 1-20.
• Lotte, F., Faller, J., Guger, C., Renard, Y., Pfurtscheller, G., Lécuyer, A., & Leeb, R. (2012). Combining BCI with virtual reality: towards new applications and improved BCI. In Towards practical brain-computer interfaces: Bridging the gap from research to real-world applications (pp. 197-220). Berlin, Heidelberg: Springer Berlin Heidelberg.
• Roc, A., Pillette, L., Mladenovic, J., Benaroch, C., N’Kaoua, B., Jeunet, C., & Lotte, F. (2021). A review of user training methods in brain computer interfaces based on mental tasks. Journal of Neural Engineering, 18(1), 011002.
Lecturer
Dr. Léa Pillette is a CNRS researcher and member of the Seamless team at IRISA, Rennes, France, since 2022. She obtained her PhD in computer science from the University of Bordeaux in 2019. Her research focuses on developing innovative methods to train individuals to regulate their brain activity, enabling more accessible and effective use of brain-computer interfaces for applications such as medical interventions and virtual world interactions.
Lecturer: Tim Tiedemann Fields: Robotics, Machine Learning
Content
This course builds a bridge between machine learning and robotics — one that does not use ML to solve robotics tasks, but the other way around! Further, the course shows where using robots can serve as a method to help find biological or cognitive-psychological insights.
– Session 1 will give an introduction first to bio-robotics, and afterwards to robotics in general — including robotic software frameworks and potential starting points for your own robotic experiments. Here, examples are shown where biology gained new insights from robotics (and the other way around, too).
– Session 2 will continue the introduction to robotics and begin the introduction to active learning with methods that are NOT active learning (but that could help solve problems you have with large data sets you need to label…).
– Session 3 will continue with active learning and combine robotics and active learning. We will also discuss the idea of embodiment here.
Literature
Siciliano et al. (Eds.) (2016, 2024): Springer Handbook of Robotics. Springer
Burr Settles: Active Learning, Morgan & Claypool Publishers, 2012
Robert (Munro) Monarch: Human-in-the-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI. Shelter Island, NY: Manning, 2021
Lecturer
Since 2016: Professor of Intelligent Sensing at the University of Applied Sciences Hamburg (HAW Hamburg, Hamburg, Germany). Main research interests are sensors and sensor data processing (including machine learning methods) and robotics.
2010–2016: Postdoc in the area of space and underwater robotics at the German Research Center for Artificial Intelligence (DFKI) in Bremen, Germany.
2009: Ph.D. in biorobotics, with a focus on the transfer of (neuro-)biological concepts to the robotic domain.
2003–2010: Research assistant at the Computer Engineering Group, Bielefeld University.
2003: Research assistant at the Cognitive Psychology Group, Bielefeld University.
2003: Diploma in computer science (i.e. Master’s degree level), with a focus on robotics and neural networks, at Bielefeld University, Bielefeld, Germany.
Lecturer: Sven Mayer Fields: Artificial Intelligence, Human-Computer Interaction, Human-AI Interaction
Content
The course Introduction to Intelligent User Interfaces (IUI) introduces participants to key concepts at the intersection of Human-Computer Interaction (HCI) and Artificial Intelligence (AI). It explores how methods from Machine Learning and AI can be transferred to the design of interactive systems that act intelligently, adapt to users, and support human goals. Emphasis is placed on a human-centered perspective that prioritizes usability, transparency, and user trust. Across four sessions, participants will gain a conceptual understanding of the foundations, design principles, and open challenges of intelligent user interfaces, preparing them to critically assess and discuss current and future developments in this field.
Session 1: Motivation and Introduction
Session 2: Machine Learning and Human-Computer Interaction Basics
Session 3: Designing, Building, and Evaluating Human-AI Systems
Session 4: Human-Centered Challenges and Future Directions
Literature
Andy Field and Graham Hole (2002). How to Design and Report Experiments
Kasper Hornbæk, Per-Ola Kristensson, and Antti Oulasvirta (2025). Introduction to Human-Computer Interaction. Oxford University Press.
Sven Mayer is a full professor of computer science at TU Dortmund University (Germany) and the Research Center Trustworthy Data Science and Security, where he heads the chair for Human-AI Interaction. His research focuses on Human-AI Interaction at the intersection of Human-Computer Interaction and Artificial Intelligence, with an emphasis on the next generation of computing systems. He uses artificial intelligence to design, build, and evaluate future human-centered interfaces. In particular, he envisions enabling humans, in collaboration with machines, to exceed what they could achieve alone. He focuses on areas such as augmented and virtual reality, mobile scenarios, and robotics.
Where do apparently opposite qualities of experience show up in our inner lives? How do thinking and feeling fit together — and where don’t they? What shapes the connection between body and mind? How do we experience the outer world, and what do we experience within — and how can these be linked and integrated? What stays unconscious, and what becomes conscious? These and similar questions are at the heart of our self-experience workshop. Based on experiential exercises drawing from psychoanalysis, humanistic psychology, and body-oriented approaches, participants are invited into a reflective space to explore self-awareness, perception, and communication. No prior knowledge is required — all you need is a little curiosity and a willingness to gently step beyond the edge of your comfort zone. Then self-experience can become a bridge to new realities of being and relating.
Katharina Krämer is a psychologist and analytic psychotherapist. She works as a professor for psychology at the Rheinische Hochschule Köln, University of Applied Sciences, Cologne, Germany, and as a lecturer and supervisor for psychotherapists in training. Additionally, she works as a psychotherapist in private practice. In 2014, Katharina Krämer received her doctoral degree from the University of Cologne, Germany, on a thesis investigating the perception of dynamic nonverbal cues in cross-cultural psychology and high-functioning autism. Her research interests include the application of Mentalization-Based Group-Therapy with patients with autism and the vocational integration of patients with autism.
Sophia Reul is a clinical psychologist and analytic psychotherapist. She works as a psychotherapist in private practice. In 2021, she received her doctoral degree from the Westfälische Wilhelms-University Münster, Germany, on a thesis investigating the impact of neuropsychological methods in diagnoses of early dementia. Today, her research interests include the application of Mentalization-Based Group-Therapy (MBT-G) with patients with autism.
Affiliation: Praxis für Psychotherapie und Psychoanalyse Sophia Reul, Kirchweidach (Bay.)
Annekatrin Vetter is a clinical psychologist and analytic psychotherapist. As a psychotherapist, she treats patients with different mental disorders in private practice. Additionally, she works as a lecturer and supervisor for psychotherapists in training and as a trainer for Coaches at Inscape – Coaching & Counselling, Cologne, Germany.
Affiliation: Praxis für Psychotherapie und Psychoanalyse, Supervision und Coaching Annekatrin Vetter, Cologne
Lecturer: Marius Klug Fields: Neuroscience, Psychology
Content
This course provides an introduction to psychophysiology — the study of how psychological phenomena are expressed in, and can be revealed through, physiological signals. Starting from the historical roots of the field and the basic anatomy of the nervous system, the course builds up to the practical measurement and analysis of peripheral and central physiological signals. Participants will learn how biosignals are recorded, digitized, and processed, how to design psychophysiological experiments that avoid common pitfalls, and how specific measures — electromyography, electrodermal activity, electrocardiography, and electroencephalography — relate to psychological states. No prior background in physiology or signal processing is assumed.
Session 1: Introduction, Nervous System, and Measurement. From the earliest attempts to diagnose lovesickness by pulse to Hans Berger’s first human EEG recording, this session traces how psychophysiology became a scientific discipline. It then covers the structural and functional organization of the nervous system and the anatomy of the brain relevant to psychophysiological measures. The second half introduces the fundamentals of biosignal measurement: electrode types and placement, bipolar versus unipolar recording, digitization, spectral analysis, and digital filtering.
Session 2: Psychophysiological Experimentation. This session teaches you to think like an experimental scientist. It covers experimental design as it applies specifically to psychophysiology. Starting from general principles — variables, operationalization, confounds — it addresses the particular challenges of psychophysiological inference, where the independent variable is a psychological state and the dependent variable a physiological measure. Topics include within- versus between-subjects designs and their trade-offs, serial dependency, continuous versus event-related analysis, epoching and ERP averaging, and the identification and control of external and internal artifacts.
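The logic of ERP averaging mentioned above is simple enough to demonstrate directly. The sketch below uses synthetic data (a hypothetical evoked-response template buried in Gaussian noise) to show how averaging time-locked epochs suppresses activity that is not consistent across trials.

```python
import random

def erp_average(epochs):
    """Average time-locked epochs sample-by-sample: activity that is
    consistent across trials survives, random noise averages out."""
    n = len(epochs)
    return [sum(e[t] for e in epochs) / n for t in range(len(epochs[0]))]

# Toy data: a fixed "evoked response" buried in trial-by-trial noise.
random.seed(0)
template = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]  # hypothetical ERP shape
epochs = [[s + random.gauss(0, 1.0) for s in template] for _ in range(200)]
erp = erp_average(epochs)
```

With 200 trials the noise on the average shrinks by a factor of roughly sqrt(200), so the peak of the template re-emerges clearly even though it is invisible in any single epoch.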
Session 3: Peripheral Physiology (EMG, EDA, ECG). This session covers three peripheral psychophysiological measures. Electromyography (EMG): the physiology of motor units, recording of surface EMG, and EMG signal processing. Electrodermal activity (EDA): eccrine sweat gland physiology and innervation, skin conductance measurement, and the distinction between tonic skin conductance level and phasic skin conductance responses. Electrocardiography (ECG): cardiac physiology, the ECG waveform, and the extraction of heart rate and heart rate variability, including spectral HRV analysis as a window into sympathetic–parasympathetic balance.
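The ECG measures described above reduce, at their core, to arithmetic on the intervals between successive R-peaks. As a minimal sketch (with hypothetical RR-interval values, and RMSSD chosen as one common time-domain HRV measure):

```python
import math

def heart_rate(rr_ms):
    """Mean heart rate in beats per minute from RR intervals (ms)."""
    return 60000.0 / (sum(rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences,
    a common time-domain heart rate variability measure."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]  # hypothetical RR intervals in ms
hr = heart_rate(rr)             # mean RR of 800 ms -> 75 bpm
hrv = rmssd(rr)
```

Spectral HRV analysis, as covered in the session, goes one step further by decomposing the RR-interval series into frequency bands rather than summarizing it with a single time-domain number.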
Session 4: Electroencephalography (EEG). This session covers the physiological origins of EEG, EEG electrode placement and measurement, and the major analysis approaches: frequency domain analysis (spectral power in canonical bands such as alpha, theta, and mu, with examples from motor imagery and workload paradigms), time domain analysis (event-related potentials, epoching, difference waves, and topographic mapping), time-frequency analysis (event-related spectral perturbation), and time-time analysis (ERP images).
Literature
Gramann, K., & Schandry, R. (2009). Psychophysiologie (4th ed.). Basel, Switzerland: Beltz.
Lecturer
Marius Klug studied cognitive science in Tübingen, where he first came into contact with EEG as a measurement method and with brain-computer interfaces. He subsequently earned his doctorate in the field of mobile brain research under Prof. Klaus Gramann at TU Berlin, working extensively with EEG analysis methods and with virtual reality as an experimental method. The focus of this research was the application of EEG in mobile contexts, the cleaning of the data, and their interpretation in conjunction with other measurements such as body and eye movements. The continuation of this research can now be found at BTU in the form of the practical use of psychophysiological measurement methods as an interface for real-time applications.