SC9 – Challenges and opportunities of incremental learning

Lecturers: Barbara Hammer, Fabian Hinder
Fields: Machine Learning

Content

Incremental learning refers to machine learning methods that learn continuously from a stream of training data rather than from a batch that is available a priori. It holds great promise for model personalization and for model adaptation in changing environments. Example applications are the personalization of wearables or the monitoring of critical infrastructure. Compared to classical batch learning, incremental learning faces two main challenges: the algorithmic problem of adapting a model efficiently and incrementally given limited memory resources, and the learning problem that the underlying input distribution might change within the stream, i.e. that drift occurs.

The course will be split into three main parts: (1) Fundamentals of incremental learning algorithms and their applications, dealing with prototypical algorithmic solutions and exemplary applications. (2) Drift detection, dealing with the question of what exactly is meant by drift, and with algorithms to locate drift in time. (3) Monitoring change, dealing with the question of how to locate drift in space and how to explain what exactly has caused the observed drift.
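
To make the stream-learning setting concrete, the following minimal sketch (not part of the course material; the synthetic stream, the window size and the error threshold are illustrative assumptions) updates a linear classifier incrementally with scikit-learn's partial_fit and flags possible drift with a naive sliding-window error heuristic:

  # Minimal sketch: incremental (stream) learning with a crude drift check.
  # Assumptions: synthetic 2-D stream whose labeling rule flips halfway through;
  # window size and error threshold are arbitrary illustration values.
  import numpy as np
  from sklearn.linear_model import SGDClassifier

  rng = np.random.default_rng(0)

  def stream(n_batches=200, batch_size=32):
      """Yield (t, X, y) batches; the labeling rule flips at the halfway point (drift)."""
      for t in range(n_batches):
          X = rng.normal(size=(batch_size, 2))
          w = np.array([1.0, -1.0]) if t < n_batches // 2 else np.array([-1.0, 1.0])
          y = (X @ w > 0).astype(int)
          yield t, X, y

  model = SGDClassifier()            # supports incremental updates via partial_fit
  classes = np.array([0, 1])
  recent_errors = []                 # sliding window of per-batch error rates
  window, threshold = 20, 0.3

  for t, X, y in stream():
      if t > 0:                      # "test-then-train": evaluate before updating
          err = float(np.mean(model.predict(X) != y))
          recent_errors.append(err)
          if len(recent_errors) > window:
              recent_errors.pop(0)
          # Naive drift heuristic: mean windowed error exceeds the threshold.
          if len(recent_errors) == window and np.mean(recent_errors) > threshold:
              print(f"batch {t}: possible drift (mean error {np.mean(recent_errors):.2f})")
              recent_errors.clear()
      model.partial_fit(X, y, classes=classes)   # constant-memory model update

Dedicated drift detectors, such as those covered in the surveys listed under Literature, replace this naive threshold rule with statistically grounded tests.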

Literature

  • João Gama, Indrė Žliobaitė, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. 2014. A survey on concept drift adaptation. ACM Comput. Surv. 46, 4, Article 44 (April 2014), 37 pages. https://doi.org/10.1145/2523813
  • Viktor Losing, Barbara Hammer, Heiko Wersing: Incremental on-line learning: A review and comparison of state of the art algorithms. Neurocomputing 275: 1261-1274 (2018)
  • Fabian Hinder, Valerie Vaquet, Barbara Hammer: One or two things we know about concept drift – a survey on monitoring in evolving environments.
    • Part A: detecting concept drift. Frontiers Artif. Intell. 7 (2024)
    • Part B: locating and explaining concept drift. Frontiers Artif. Intell. 7 (2024)
  • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer: Incremental permutation feature importance (iPFI): towards online explanations on data streams. Mach. Learn. 112(12): 4863-4903 (2023)

Lecturers

Prof. Barbara Hammer

Barbara Hammer chairs the Machine Learning research group at the Research Institute for Cognitive Interaction Technology (CITEC) at Bielefeld University. After completing her doctorate at the University of Osnabrück in 1999, she was Professor of Theoretical Computer Science at Clausthal University of Technology and a visiting researcher in Bangalore, Paris, Padua and Pisa. Her areas of specialisation include trustworthy AI, lifelong machine learning, and the combination of symbolic and sub-symbolic representations. She is a PI in the ERC Synergy Grant WaterFutures and in the DFG Transregio Constructing Explainability. Barbara Hammer has been active in the IEEE CIS as a member and chair of the Data Mining Technical Committee and of the Neural Networks Technical Committee. In 2024 she was elected as a review board member for Machine Learning of the German Research Foundation, and she represents computer science as a member of the selection committee for fellowships of the Alexander von Humboldt Foundation. She is a member of the Scientific Directorate of Schloss Dagstuhl. Further, she has been selected as a member of Academia Europaea.

Affiliation: Bielefeld University
Homepage: https://hammer-lab.techfak.uni-bielefeld.de/

Fabian Hinder

Fabian Hinder is a Ph.D. student in the Machine Learning group at Bielefeld University, Germany. He received his Master’s degree in mathematics from Bielefeld University in 2018. His research interests cover learning in non-stationary environments, concept drift detection, statistical learning theory, explainable AI, and foundations of machine learning.

Affiliation: Bielefeld University
Homepage: https://hammer-lab.techfak.uni-bielefeld.de/

SC10 – Uncertainty and Trustworthiness in Natural Language Processing

Lecturer: Barbara Plank
Fields: Artificial Intelligence, Natural Language Processing

Content

Despite the recent success of Natural Language Processing (NLP) driven by advances in large language models (LLMs), many challenges remain before NLP can be considered trustworthy. In this course, we will look at trustworthiness through the lens of uncertainty in language: uncertainty in inputs, uncertainty in outputs, and how models themselves deal with uncertainty.
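
As one simple operationalization of output uncertainty (an illustration only; the model choice and the entropy measure are assumptions, not course material), the predictive entropy of a language model's next-token distribution can be computed as follows:

  # Illustrative sketch: next-token predictive entropy as an uncertainty signal.
  # Assumes the Hugging Face transformers library and the small "gpt2" model.
  import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM

  tok = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  inputs = tok("The capital of France is", return_tensors="pt")
  with torch.no_grad():
      logits = model(**inputs).logits[0, -1]        # logits for the next token
  probs = torch.softmax(logits, dim=-1)
  entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
  print(f"next-token entropy: {entropy.item():.2f} nats")

Low entropy indicates a confident next-token prediction; high entropy means the model spreads probability mass over many possible continuations. Such scores are only one ingredient of trustworthiness, and whether they are well calibrated is itself a research question.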

Literature

Lecturer

Prof. Barbara Plank

Barbara Plank is Professor and co-director of the Center for Information and Language Processing at LMU Munich. She holds the Chair for AI and Computational Linguistics at LMU where she leads the MaiNLP research lab (Munich AI and NLP lab, pronounced “my NLP”). Her lab focuses on robust machine learning for Natural Language Processing with an emphasis on human-inspired and data-centric approaches.

Affiliation: LMU Munich
Homepage: https://bplank.github.io/

BC4 – Interfacing brain and body – Insights into opportunities, challenges and limitations from an engineering perspective

Lecturer: Thomas Stieglitz
Fields: Brain-Body Interface, Engineering, Society

Content

Neural engineering addresses the wet interface between electronic and biological circuits and systems. Stable and reliable functional interfaces to neuronal and muscular target structures are needed for chronic applications in neuroscientific experiments and, especially, in humans. Here, the focus will be on medical applications rather than on fictional scenarios of eternally storing the mind in the cloud. The physiological mechanisms as well as basic technical concepts will be introduced to lay a common foundation for informed decisions and discussions.

From a neural engineering point of view, the proper selection of substrate, insulation and electrode materials is of utmost importance to bring the interface into close contact with the neural target structures, minimize the foreign body reaction after implantation, and maintain functionality over the complete implantation period. Different materials and the associated manufacturing technologies will be introduced and assessed with respect to their strengths and weaknesses for different application scenarios. Design and development aspects from the first idea to first-in-human studies are presented, and challenges in translational research are discussed. Reliability data from long-term ageing studies and chronic experiments show the applicability of thin-film implants for stimulation and recording and of ceramic packages for electronics protection. Examples of sensory feedback after amputation trauma, vagal nerve stimulation to treat hypertension, and chronic recordings from the brain illustrate the opportunities and challenges of these miniaturized implants. System assembly and interfacing microsystems with robust cables and connectors is still a major challenge in translational research and in the transition of research results into medical products. Clinical translation raises questions and concerns when applications go beyond the treatment of serious medical conditions or rehabilitation purposes towards life-style applications.

The four sessions of this course, “Interfacing brain and body”, will cover (1) physiological and engineering aspects of technical interfaces to brain and body: fundamentals of optogenetics, recording of bioelectricity, and electrical stimulation, (2) neuroscientific and clinical applications of neural technology, (3) the challenges of neural implant longevity, and (4) ethical and societal considerations in neural technology use.

Literature

  • Cogan SF. Neural stimulation and recording electrodes. Annu Rev Biomed Eng. 10:275-309 (2008). DOI: 10.1146/annurev.bioeng.10.061807.160518
  • Hassler, C., Boretius, T., Stieglitz, T.: “Polymers for Neural Implants” J Polymer Science-Part B: Polymer Physics, 49 (1), 18-33 (2011). Erratum in: 49, 255 (2011); DOI: 10.1002/polb.22169
  • Alt, M.T., Fiedler, E., Rudmann, L., Ordonez, J.S., Ruther, P., Stieglitz, T. “Let there be Light – Optoprobes for Neural Implants”, Proceedings of the IEEE 105 (1), 101-138 (2017); DOI: 10.1109/JPROC.2016.2577518
  • Stieglitz, T.: Of man and mice: translational research in neurotechnology, Neuron, 105(1), 12-15 (2020). DOI: 10.1016/j.neuron.2019.11.030
  • Stieglitz, T.: Why Neurotechnologies? About the Purposes, Opportunities and Limitations of Neurotechnologies in Clinical Applications. Neuroethics, 14: 5-16 (2021), doi: 10.1007/s12152-019-09406-7
  • Jacob T. Robinson, Eric Pohlmeyer, Malte Gather, Caleb Kemere, John E. Kitching, George G. Malliaras, Adam Marblestone, Kenneth L. Shepard, Thomas Stieglitz, Chong Xie. Developing Next-Generation Brain Sensing Technologies—A Review. IEEE Sensors Journal, 18(22), pp. 10163-10175 (2019) DOI: 10.1109/JSEN.2019.2931159
  • Boehler, C., Carli, S., Fadiga, L., Stieglitz, T., Asplund, M.: Tutorial: Guidelines for standardized performance tests for electrodes intended for neural interfaces and bioelectronics. Nature Protocols, 15 (11), 3557-3578 (2020) https://doi.org/10.1038/s41596-020-0389-2
  • Hofmann UG, Stieglitz T. Why some BCI should still be called BMI. Nat Commun. 2024 Jul 23;15(1):6207. doi: 10.1038/s41467-024-50603-7

Lecturer

Prof. Thomas Stieglitz

Thomas Stieglitz was born in Goslar in 1965. He received a Diploma degree in electrical engineering from Technische Hochschule Karlsruhe, Germany, in 1993, and a PhD and a habilitation degree from the University of Saarland, Germany, in 1998 and 2002, respectively. In 1993, he joined the Fraunhofer Institute for Biomedical Engineering in St. Ingbert, Germany, where he established the Neural Prosthetics Group. Since 2004, he has been a full professor for Biomedical Microtechnology at the Albert-Ludwig-University Freiburg, Germany, in the Department of Microsystems Engineering (IMTEK) at the Faculty of Engineering. He currently serves IMTEK as managing director, is deputy spokesperson of the Cluster BrainLinks-BrainTools, a board member of the Intelligent Machine Brain Interfacing Technology (IMBIT) Center, and spokesperson of the university’s research profile “Signals for Life”. He further serves the university as a member of the senate and as co-spokesperson of the commission for responsibility in research, as well as the university medical center as an advisory board member. He was elevated to IEEE Fellow in 2022. Dr. Stieglitz is co-author of about 200 scientific journal articles and about 350 conference proceedings contributions and co-inventor of about 30 patents. He is co-founder and scientific advisory board member of the neurotech spin-offs CorTec and neuroloop. His research interests include neural interfaces and implants, biocompatible assembling and packaging, and brain-machine interfaces.

Affiliation: University of Freiburg
Homepage: https://www.imtek.de/professuren/bmt

MC4 – New bodies, new minds – A Practical Guide to Body Ownership Illusions

Lecturer: Andreas Kalckert
Fields: Cognitive neuroscience, Experimental psychology

Content

The introduction of body ownership illusion experiments has revolutionized our ability to explore and manipulate the subjective experience of our own bodies. Innovative paradigms like the rubber hand illusion and body swapping have greatly expanded our understanding of the cognitive and neural mechanisms that shape our sense of self. Due to their relatively low technical and practical demands, these paradigms have surged in popularity, leading to hundreds of studies. However, this rapid expansion has also resulted in a variety of experimental approaches, often lacking standardized methods—making it challenging to ensure comparability and replicability across studies.
In this methods course, we aim to tackle both the methodological and conceptual dimensions of these intriguing paradigms. We will dive deep into the existing literature, critically evaluating the theoretical and practical aspects of these experiments. We will also identify specific elements of experimental paradigms—such as stimulation protocols and measurement techniques—that are not only practically relevant but also crucial for refining our understanding of these illusions.
The course is designed to be both informative and interactive, consisting of two lectures and two practical sessions where we will explore body ownership illusions in both physical and virtual reality environments. By the end of the course, participants will have gained a clearer insight into the nuances and demands of these fascinating experiments and be equipped with a set of best practices for conducting replicable studies.

Literature

  • Ehrsson, H. H. (2019). Multisensory processes in body ownership. In Multisensory Perception: From Laboratory to Clinic (pp. 179–200). Elsevier. https://doi.org/10.1016/B978-0-12-812492-5.00008-5
  • Riemer, M., Trojan, J., Beauchamp, M., & Fuchs, X. (2019). The rubber hand universe: On the impact of methodological differences in the rubber hand illusion. Neuroscience and Biobehavioral Reviews, 104, 268–280. https://doi.org/10.1016/j.neubiorev.2019.07.008
  • Kilteni, K., Maselli, A., Kording, K. P., & Slater, M. (2015). Over my fake body: Body ownership illusions for studying the multisensory basis of own-body perception. Frontiers in Human Neuroscience, 9, 141. https://doi.org/10.3389/fnhum.2015.00141

Lecturer

Andreas Kalckert earned his PhD in Neuroscience from the Karolinska Institute in Sweden. He previously lectured in psychology at the University of Reading Malaysia and is now a senior lecturer in Cognitive Neuroscience at the University of Skövde, Sweden. His research focuses on the psychological and neuroscientific processes involved in the experience of the body, with a particular interest in the role of movement.

Affiliation: Department of Cognitive Neuroscience and Philosophy, University of Skövde, Sweden
Homepage: https://www.his.se/en/about-us/staff/andreas.kalckert/

PC2 – Hands-on Hardware: (No) Brain in Robots and Edge Computing? Put the brain on ’em!

Lecturer: Tim Tiedemann
Fields: Robotics, Sensor Data Processing, Edge Computing, Machine Learning

Content

(this course description is under construction)
In this practical course, we will touch hardware — robots and microcontrollers — and we will see: there is no brain inside! But as robots and lone small edge hardware out in the woods would really benefit from something like a brain, this course combines hardware and cognitive sciences. And as we are nice guys and gals, we will do it on our own: Put it on ’em! Put the brain on ’em!

As it is currently planned, different hardware will be on site and/or accessible:
– mobile wheel-based robot
– autonomous underwater vehicle (AUV)
– multiple microcontroller boards with different sensors
– (and as there need to be disappointments: systems in simulation)

We will try to implement different findings from the cognitive sciences (i.e. Biology and Cognitive Psychology) or Data Science on the (small!) systems (some were already implemented by the instructor in research projects, some are brand-new and unrevealed).

The course is planned as BYOD; detailed descriptions of which notebook/installation should be brought along to participate hands-on will be made available after the welcome event on Friday, 14th March. As a first preparation, you can already start with the installation/download of the tools (first links in the literature list below).

Literature

  • (optionally) Linux helps in many cases; “WSL” can sometimes do the job, but a VM or a parallel installation with Linux is much better. There are many distributions, and help can be given in the course. To start with, a “Long Term Support” (LTS) version of Ubuntu could be a good solution, e.g., Ubuntu 24.04 LTS or Ubuntu 20.04 LTS.
  • (optionally) We will learn about the robotics framework “ROS”. Installation help will be given in the practical sessions of the course. If you like, you can install it beforehand: ROS version “Noetic” (aka “ROS 1”) and optionally ROS 2, “Humble” or “Jazzy” (depending on the Linux version).
  • Install the Arduino Software (IDE), available for Windows, MacOS, and Linux: https://www.arduino.cc/en/Main/Software
  • Among others we will work with ELEGOO robots. Info and source code are available here: https://www.elegoo.com/blogs/arduino-projects/elegoo-smart-robot-car-kit-v4-0-tutorial
  • And another system will be the “Waveshare PiRacer”. Infos can be found here: https://www.waveshare.com/wiki/PiRacer_AI_Kit
  • (papers and book recommendations will follow…)

Lecturer

Prof. Tim Tiedemann

Tim Tiedemann studied computer science with a focus on robotics and neural networks at Bielefeld University, Bielefeld, Germany. After receiving his Diploma in computer science (i.e. at Master's degree level), he worked as a research assistant in the Cognitive Psychology Group and the Computer Engineering Group (both at Bielefeld University). In his Ph.D. studies in biorobotics, he focused on the transfer of (neuro-)biological concepts to the robotic domain. From 2010 to 2016 he worked as a postdoc in the area of space and underwater robotics at the German Research Center for Artificial Intelligence (DFKI) in Bremen, Germany. Since 2016 he has been professor of intelligent sensing at the University of Applied Sciences Hamburg (HAW Hamburg, Germany). His main research interests are sensors and sensor data processing (including machine learning methods) and robotics.

Affiliation: HAW Hamburg

SC6 – Mirroring, alerting, guiding, on-loading, off-loading. How can (and should) adaptive technology support learning and teaching?

Lecturer: Sebastian Strauß
Fields: Educational Psychology, Learning Analytics, Learning Sciences

Content

Educational technology has come a long way since the Teaching Machines from the 1920s. While some commercial educational software can arguably still be classified as Teaching Machines, research and development in the learning sciences and technology-enhanced learning have produced educational technologies that can facilitate different aspects of teaching and learning alike.

The objective of this course is to examine the concept of learning from the standpoint of educational psychology, with a particular focus on the potential of adaptive educational technology to facilitate learning and teaching processes. We will do so by conceptualizing learning and teaching contexts as a relationship between (usually) one teacher and a group of students. Within this relationship, educational technology can be leveraged as a tool to facilitate both learning and teaching. For example, educational technology can offer insight into learning and teaching processes (mirroring), draw inferences and offer diagnoses (alerting), or take automated actions on behalf of the learners or the teacher (guiding).

The course will examine the perspectives of both learners and their teachers. We will explore how educational technology can enhance learners’ cognitive and metacognitive abilities, beyond learning more efficiently. To this end, we first look at learning processes and learning outcomes in the context of school and higher education and discuss how they can be observed by humans (and machines).

From the perspective of the learners, we then explore educational technologies that can provide us with information about the development of our skills and our learning behavior. As an example, we will look at learning analytics dashboards. Further, we look at technologies that provide learning tasks, assess learning progress, and use this information to provide us with individualized support. Examples of such technologies are intelligent tutoring systems.

Taking the perspective of the teacher, we look at educational technologies that provide us with an overview of our learners’ progress, for example teacher dashboards. Such tools may enhance our teacher vision, by providing information that is usually difficult to gather and to aggregate. This information, in turn, can provide valuable insights into the learning of the entire class and individual students which allows us to provide them with the assistance that they need. Going one step further, teacher dashboards may also process the data further and alert us of challenges that our learners face, or even suggest instructional support for the learners.
At the same time, data analytics approaches can also focus on the teachers and their teaching. For example, tools allow teachers to observe and adapt their own teaching, which offers benefits for teacher professional development.

As we look at these different perspectives, we’ll explore the various levels of automation that educational technology can offer the classroom, as well as the challenges that result from the need for valid measurements of learning and teaching, from biased data sets, and from automation-induced complacency. Further, we will explore the consequences of (partially) offloading cognitive and metacognitive processes to automated systems. On the side of the students, this includes the question of which tasks should be on-loaded or off-loaded to foster learning. With respect to teachers, we will discuss the concepts of hybrid intelligence and human-AI co-orchestration in the classroom.

Literature

  • Baker, R.S., & Hawn, A. (2022) Algorithmic Bias in Education. International Journal of Artificial Intelligence in Education 32, 1052–1092. https://doi.org/10.1007/s40593-021-00285-9
  • Celik, I., Dindar, M., Muukkonen, H. et al. (2022). The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends 66, 616–630. https://doi.org/10.1007/s11528-022-00715-y
  • Doroudi, S (2023). The Intertwined Histories of Artificial Intelligence and Education. International Journal of Artificial Intelligence in Education 33, 885–928. https://doi.org/10.1007/s40593-022-00313-2
  • Eberle, J., Strauß, S., Nachtigall, V., & Rummel, N. (2024). Analyse prozessbezogener Verhaltensdaten mittels Learning Analytics: Aktuelle und zukünftige Bedeutung für die Unterrichtswissenschaft. Unterrichtswissenschaft, 1-13.
  • Holmes, W., Persson, J., Chounta, I. A., Wasson, B., & Dimitrova, V. (2022). Artificial intelligence and education: A critical view through the lens of human rights, democracy and the rule of law. Council of Europe.
  • Holstein, K., Aleven, V., Rummel, N. (2020). A Conceptual Framework for Human–AI Hybrid Adaptivity in Education. In: Bittencourt, I., Cukurova, M., Muldner, K., Luckin, R., Millán, E. (eds) Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163. Springer, Cham. https://doi.org/10.1007/978-3-030-52237-7_20
  • Molenaar, I. (2022). Towards hybrid human‐AI learning technologies. European Journal of Education, 57(4), 632-645.
  • Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620-631.
  • van Leeuwen, A., Strauß, S., & Rummel, N. (2023) Participatory design of teacher dashboards: navigating the tension between teacher input and theories on teacher professional vision. Frontiers in Artificial Intelligence. 6:1039739. doi: 10.3389/frai.2023.1039739
  • van Leeuwen, A., Rummel, N. (2019) Orchestration tools to support the teacher during student collaboration: a review. Unterrichtswissenschaft 47, 143–158. https://doi.org/10.1007/
  • Wise, A. F., & Shaffer, D. W. (2015). Why Theory Matters More than Ever in the Age of Big Data. Journal of Learning Analytics, 2(2), 5-13. https://doi.org/10.18608/jla.2015.22.2

Lecturer

Sebastian Strauß is a postdoctoral researcher at the Educational Psychology and Technology research group at the Ruhr-University Bochum (Germany). His core research focuses on collaborative learning in computer-supported settings. He is interested in how students learn and work together in small groups, how they adapt their interaction, how they acquire collaboration skills, and how we can use computer technology to facilitate collaboration. In this context, he is also interested in human-computer-collaboration. Recently, his research focus expanded to using fine-grained data about the learning process to assess and support individual learning.

Affiliation: Ruhr-University Bochum
Homepage: https://www.pe.ruhr-uni-bochum.de/erziehungswissenschaft/pp/team/strauss.html.de

MC3 – Augmented Thought: Language, Embodiment, and Agentic Language Models

Lecturer: Philipp Wicke

Content

This course examines the connection between language and thought, using the recent boom in large language models (LLMs) and AI-agents as a starting point. The course is organized into four lectures that move from theoretical foundations to practical implications and ethical considerations.

  • Lecture 1 introduces the core ideas behind the course. We explore the hypothesis that our thinking is largely based on language, looking at language universals and the concept of linguistic relativity. This session sets the stage by discussing how language might serve as the basis for what we recognize as intelligence.
  • Lecture 2 shifts the focus to the body’s role in shaping language and cognition. Here, we examine embodied cognition and consider how physical experience influences language conceptualization. The discussion presents the idea that language serves as an interface between raw cognitive processes and the articulation of thought—a process that is augmented in both human and machine learning, although in very different ways.
  • Lecture 3 delves (!) into the technical and conceptual details of LLMs and generative agents. This lecture outlines what LLMs are designed to achieve and highlights their limitations, especially in contrast to the human learning process. By exploring the gaps between language-based AI systems and embodied human cognition, we develop a clearer understanding of current AI capabilities.
  • Lecture 4 critically reviews the broader impact of agentic AI. We address how these technologies are already influencing society through issues like artificial intimacy, trust, and unexpected market dynamics. This session also examines ethical concerns and challenges the promise of fully agentic AI by considering where current implementations fall short and what this means for future development.

Together, these lectures offer an in-depth look at how language serves as a bridge between thought and action, and how both technological and human forms of intelligence can inform one another. The course is designed to engage students in discussions that are both technically rigorous and socially relevant, providing a balanced view of the promises and limitations of augmented thought in the age of AI.

(created using o3-mini with high reasoning effort and P. Wicke’s notes)

Literature

  • Evans, Vyvyan, and Melanie Green. Cognitive linguistics: An introduction. Routledge, 2018.
  • Boroditsky, Lera. “Does language shape thought?: Mandarin and English speakers’ conceptions of time.” Cognitive psychology 43.1 (2001): 1-22.
  • Wei, Jason, et al. “Emergent abilities of large language models.” arXiv preprint arXiv:2206.07682 (2022).
  • ImaniGooghari, Ayyoob, et al. “Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages.” arXiv preprint arXiv:2305.12182 (2023).
  • Wicke, Philipp, Wachowiak, Lennart. “Exploring Spatial Schema Intuitions in Large Language and Vision Models” ACL 2024 Findings. 
  • George, A. Shaji, et al. “The Allure of Artificial Intimacy: Examining the Appeal and Ethics of Using Generative AI for Simulated Relationships.” Partners Universal International Innovation Journal 1.6 (2023): 132-147.
  • Wicke, Philipp, and Marianna Bolognesi. “Emoji-based semantic representations for abstract and concrete concepts.” Cognitive processing 21.4 (2020): 615-635.

Lecturer

Philipp Wicke studied Cognitive Science in the B.Sc. programme at the University of Osnabrück. During these studies he interned at the Dauwels Lab at NTU Singapore in the field of neuroinformatics and at the Creative Language Systems Lab at UCD Dublin, where he later wrote his dissertation on “Computational Storytelling as an Embodied Robot Performance with Gesture and Spatial Metaphor” under the supervision of Tony Veale. In his current role at the Center for Information and Language Processing (CIS) at LMU, Philipp conducts research on Natural Language Processing and teaches programming in the B.A. and M.A. programmes in Computational Linguistics. He was Head of AI Applications of the AI for People Association and is an Associate Member of the Munich Center for Machine Learning (MCML) and the Munich Center of Linguistics (MCL). Recently, he has been awarded a Junior Researcher in Residence position at the Center for Advanced Studies (CAS).

Affiliation: Center for Information and Language Processing (CIS), LMU – Munich
Homepage: www.phil-wicke.com

MC2 – Social mechanisms and human compatible agents

Lecturer: Ralf Möller
Fields: Artificial Intelligence

Content

In the course we develop notions of human-compatibility in mechanisms in which humans and artificial agents interact (social mechanisms). Properties of social mechanisms are investigated in terms of AI alignment in general and assistance games in particular. Modeling formalisms needed to realise human-aligned agents are introduced.

Literature

  • Russell, S.: Human Compatible: AI and the Problem of Control. Allen Lane / Penguin Books, Random House, UK, 2019.
  • Stuart Russell and Peter Norvig. 2020. Artificial Intelligence: A Modern Approach (4th. ed.). Prentice Hall Press, USA.

Lecturer

Prof. Ralf Möller

Ralf Möller is Full Professor of Artificial Intelligence in Humanities and heads the Institute of Humanities-Centered AI (CHAI) at the Universität Hamburg. His main research area is artificial intelligence, in particular probabilistic relational modeling techniques and natural language technologies for information systems, as well as machine learning and data mining for the decision making of agents in social mechanisms. Ralf Möller is co-speaker of the Section for Artificial Intelligence of the German Informatics Society. He is also an affiliated professor at DFKI zu Lübeck, a branch of the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), which has several sites in Germany. DFKI is responsible for the technology transfer of AI research results into industry and society. Before joining the Universität Hamburg in 2024, Ralf Möller was Full Professor for Computer Science and headed the Institute of Information Systems at Universität zu Lübeck. In Lübeck he was also the head of the DFKI research department Stochastic Relational AI in Healthcare. Earlier in his career, Ralf Möller was Associate Professor for Computer Science at Hamburg University of Technology from 2003 to 2014. From 2001 to 2003 he was Professor at the University of Applied Sciences in Wedel, Germany. In 1996 he received the degree Dr. rer. nat. from the University of Hamburg, where he also successfully submitted his habilitation thesis in 2001. Professor Möller was co-organizer of several national and international workshops on humanities-centered AI as well as on description logics. He was also co-organizer of the European Lisp Symposium 2011. In 2019, he co-chaired the organization of the International Conference on Big Knowledge (ICBK 2019) in Beijing, and he co-organized the conference “Artificial Intelligence” (KI 2021) in Berlin with his colleagues Stefan Edelkamp and Elmar Rueckert. Prof. Möller was an Associate Editor of the Journal of Knowledge and Information Systems, a member of the Editorial Board of the Journal on Big Data Research, and a reviewer for Mathematical Reviews/MathSciNet.

Affiliation: University of Hamburg
Homepage: https://www.chai.uni-hamburg.de/~moeller

BC3 – Situated Affectivity and its applications

Lecturer: Achim Stephan
Fields: Philosophy (of emotions)

Content

The course offers (1) an introduction to affective phenomena such as emotions and moods in general; (2) it also introduces the key notions of situated affectivity such as 4E, user-resource interactions, mind shaping, mind invasion, and scaffolds; (3) next, we will apply these notions to the study of cases of mind invasion, from individuals to nations; (4) finally, we will explore whether we can trust our (own) emotions.

Literature

  • Stephan, Achim, Sven Walter & Wendy Wilutzky (2014). Emotions Beyond Brain and Body. Philosophical Psychology 27(1), 65-81.
  • Stephan, Achim (2017). Moods in Layers. Philosophia 45, 1481-1495. doi: 10.1007/s11406-017-9841-0
  • Stephan, Achim & Sven Walter (2020). Situated Affectivity. In: T. Szanto & H. Landweer (eds.) The Routledge Handbook of Phenomenology of Emotion. Abingdon: Routledge, pp. 299-311.
  • Coninx, Sabrina & Achim Stephan (2021). A Taxonomy of Environmentally Scaffolded Affectivity. Danish Yearbook of Philosophy 54, 38-64. doi: https://doi.org/10.1163/24689300-bja10019

Lecturer

Prof. Achim Stephan

Achim Stephan’s current research is mainly focused on human affectivity, particularly from a situated perspective. He was head of the Philosophy of Mind and Cognition group at the Institute of Cognitive Science at Osnabrück University (2001-2023) and co-speaker of the bi-local DFG-funded research training group on Situated Cognition (2017-2023). In his PhD thesis, he worked on meaning theoretic aspects in the psychoanalysis of Sigmund Freud (1988); his habilitation thesis covers various theories of emergence and their applications (1998). From 2012 to 2015, he was president of the German Society for Analytic Philosophy (GAP); from 2017 to 2020 he was president of the European Philosophical Society for the Study of Emotions (EPSSE).

Affiliation: Osnabrück University, Institute of Cognitive Science

MC1 – Neurons and the Dynamics of Cognition: How Neurons Compute

Lecturer: Andreas Stöckel
Fields: Computational Neuroscience / Neuromorphic Computing

Content

While the brain does perform some sort of computation to produce cognition, it is clear that this sort of computation is wildly different from that of traditional computers, and indeed also from that of traditional machine learning neural networks. In this course, we identify the type of computation that biological neurons are good at (in particular, dynamical systems), and show how to build large-scale neural models that realize basic aspects of cognition (sensorimotor control, memory, symbolic reasoning, action selection, learning, etc.). These models can either be made biologically realistic (to varying levels of detail) or mapped onto energy-efficient neuromorphic hardware.
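
As a minimal illustration (using the Nengo library, one software implementation of the Neural Engineering Framework; the network and its parameters are illustrative choices, not taken from the course), a recurrently connected population of spiking neurons can realize a simple dynamical system, here a one-dimensional integrator:

  # Illustrative sketch: a spiking neural integrator built with Nengo.
  # A brief input pulse is integrated and then held by the recurrent population.
  import nengo

  tau = 0.1  # synaptic time constant used to implement the dynamics

  with nengo.Network() as model:
      stim = nengo.Node(lambda t: 1.0 if t < 0.5 else 0.0)  # 0.5 s input pulse
      ens = nengo.Ensemble(n_neurons=200, dimensions=1)     # neurons encode x(t)
      # dx/dt = u: feed the input in scaled by tau and feed x back onto itself.
      nengo.Connection(stim, ens, transform=tau, synapse=tau)
      nengo.Connection(ens, ens, synapse=tau)                # recurrence holds the value
      probe = nengo.Probe(ens, synapse=0.01)                 # decoded, filtered output

  with nengo.Simulator(model) as sim:
      sim.run(1.5)

  print(sim.data[probe][-1])  # roughly 0.5: the integral of the 0.5 s pulse

Choosing connection weights so that a neural population implements a desired dynamical system is the same principle that underlies the larger cognitive models discussed in the course and their mapping onto neuromorphic hardware.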

Literature

  • Eliasmith, C. and Anderson, C. (2003). Neural engineering: Computation, representation, and dynamics in neurobiological systems. MIT Press, Cambridge, MA.
  • Eliasmith, C. et al., (2012). A large-scale model of the functioning brain. Science, 338:1202-1205.
  • Stöckel, A. et al., (2021). Connecting biological detail with neural computation: application to the cerebellar granule-Golgi microcircuit. Topics in Cognitive Science, 13(3):515-533.
  • Dumont, N. S.-Y. et al., (2023) Biologically-based computation: How neural details and dynamics are suited for implementing a variety of algorithms. Brain Sciences, 13(2):245, Jan 2023.

Lecturer

Dr. Andreas Stöckel

Andreas Stöckel received his PhD in computer science at the University of Waterloo, Canada, in 2021. During his PhD, his research focused on integrating biological detail into the Neural Engineering Framework, a method for constructing large-scale models of neurobiological systems. His work specifically focused on harnessing nonlinear synaptic interactions and temporal tuning as computational resources. Today, he is a senior research scientist at Applied Brain Research Inc., where he co-designed the TSP1 time-series processor, a low-power neural-network accelerator chip that utilizes some of the techniques that he investigated during his PhD.

Affiliation: Applied Brain Research
Homepage: https://compneuro.uwaterloo.ca/people/andreas-stoeckel.html