BC1 – Introduction to Machine Learning

Lecturer: Magda Gregorova
Fields: Machine Learning, Deep Learning

Content

In this introductory course we will cover the basics of machine learning (ML), targeting specifically the uninitiated audience. If you are not sure what machine learning actually is, if you have never trained an ML model, if for you deep learning means learning in deep sleep and ChatGPT is a result of dark magic, then this course is meant for you. The course will be organized in four sessions: (1) we will begin with the basic concepts of learning from data, reviewing the fundamental ideas and building blocks of machine learning; (2) we shall discuss some classical ML algorithms which are still the workhorses for solving many practical problems; (3) we shall explore the more modern deep learning approaches based on neural network models; and (4) we shall uncover some of the magic behind ChatGPT by discussing the concepts of deep generative modelling. The field is vast and fast-paced, combining mathematics with computer science while spicing it up with ideas from physics, neuroscience, and many other areas. While some math cannot be avoided, it is not my ambition to cover all the technicalities of ML. I will rather endeavor to help you build your own picture of the field based on a basic understanding of the underlying fundamental ideas, and to nurture your own intuition for data analysis, hoping you will become more comfortable and confident when exploring ML methods in your future work.
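To give a tiny taste of what "learning from data" means in practice (an illustrative sketch, not part of the course material), here is one of the simplest possible ML models: fitting a straight line to noisy observations by least squares.

```python
import numpy as np

# Illustrative sketch only: "learning from data" in its simplest form.
# We fit a straight line y = w*x + b to noisy observations by least squares.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # true w = 2.0, b = 0.5

# Design matrix with a bias column; solve min ||Xw - y||^2 in closed form.
X = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(w, b)  # estimates close to the true parameters despite the noise
```

The same template, with a richer model class and an iterative optimizer instead of a closed-form solution, underlies most of the methods covered in the course.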

Literature

  • Bishop, C. M. (2006). Pattern recognition and machine learning. Springer.
  • Murphy, K. P. (2012). Machine learning: A probabilistic perspective. MIT Press.
  • Hastie, T., Tibshirani, R., & Friedman, J. (2001). The Elements of Statistical Learning. Springer New York Inc.
  • Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms (1st ed.). Cambridge University Press. https://doi.org/10.1017/CBO9781107298019
  • MacKay, D. J. C. (2003). Information theory, inference and learning algorithms. Cambridge University Press.
  • Cover, T. M., & Thomas, J. A. (2006). Elements of Information Theory. John Wiley & Sons, Inc., Hoboken.
  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
  • Zhang, A., Lipton, Z. C., Li, M., & Smola, A. J. (2021). Dive into Deep Learning. ArXiv Preprint ArXiv:2106.11342.
  • Prince, S. J. D. (2023). Understanding deep learning. The MIT Press. http://udlbook.com
  • Tomczak, J. M. (2022). Deep Generative Modeling. Springer International Publishing. https://doi.org/10.1007/978-3-030-93158-2

Lecturer

Magda Gregorova comes from Prague, Czech Republic, where she obtained her Master's degree in Statistics (2001) from the University of Economics. She started her career as an applied statistician at the Czech National Bank, where she headed a technical unit on financial statistics and collaborated closely with the ECB and the IMF. After several years in banking she decided to pursue an international career and joined Eurocontrol, the European Organization for the Safety of Air Navigation based in Brussels, Belgium, as a statistical analyst and forecaster. She then moved to Geneva, Switzerland, where she obtained a PhD in machine learning in 2018 from the Computer Science Department of the University of Geneva. She continued as a post-doc in the Data Mining and Machine Learning group of the University of Applied Sciences of Western Switzerland. In 2021 Magda moved to Germany, where she obtained the research professorship for “Representation and Learning in Artificial Intelligence” at the Faculty of Computer Science and Business Information Systems of the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS). She is a founding member of the THWS research Center for Artificial Intelligence (CAIRO), which she led from its beginnings in 2022 until mid-2024. Her teaching activities are mainly within the international Master's program in AI, in the areas of deep learning and generative modelling. In her research she focuses on deep unsupervised learning methods for modelling complex high-dimensional distributions and data representations for downstream tasks (https://scholar.google.com/citations?user=68MKCOwAAAAJ&hl=en). In addition to her own research she regularly contributes to the machine learning community through reviewing service (ICML, ICLR, NeurIPS, etc.) and by active participation in outreach and educational events such as IK.

Affiliation: Technische Hochschule Würzburg-Schweinfurt
Homepage: https://fiw.thws.de/en/our-faculty/staff/person/prof-dr-magda-gregorova/

SC3 – Interfacing Spinal Motor Neurons in Humans for Highly Intuitive Neuromotor Interfaces

Lecturer: Alessandro Del Vecchio
Fields: AI, Neuroscience, BCI

Content

Spinal motor neurons represent the final gateway from neural intention to physical movement, making them crucial for any interface that aims to restore or augment motor function. In cases of spinal cord injury (SCI), paralysis of the hand muscles significantly impacts quality of life, as individuals lose the ability to perform fundamental tasks. However, our recent research demonstrates that even in individuals with motor complete SCI (C5–C6), the activity of spinal motor neurons remains accessible and task-modulatable. Using a minimally invasive electromyographic interface, we tested eight SCI individuals and identified distinct groups of motor units under voluntary control that could encode specific hand movements, from grasping to individual finger flexion and extension. By mapping these motor unit discharges to a virtual hand interface, we enabled participants to proportionally control multiple degrees of freedom, successfully matching various cued hand postures. These findings underscore the potential of wearable muscle sensors to access voluntarily controlled motor neurons in SCI populations, presenting a pathway to restore lost motor functions through assistive technologies.

Alongside this study, we explored the neural organization of motor unit activity in different muscle groups, focusing on the low-dimensional latent structures—or motor unit modes—that underlie the coordinated output of motor units. By applying factor analysis, we identified two primary motor unit modes that captured most of the variability in motor unit discharge rates across knee extensor and hand muscles. Interestingly, we observed a distinct pattern in the hand muscles, where motor unit modes were largely specific to individual muscles, whereas knee extensors displayed a more continuous distribution, with shared synaptic inputs leading to overlapping motor unit modes across muscle groups. Simulations with large populations of integrate-and-fire neurons confirmed the accuracy of these modes, shedding light on the common inputs that drive correlated activity in synergistic muscle groups.
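The idea of motor unit modes can be sketched in a few lines of code. The following is a synthetic illustration only (assumed toy setup, not the study's actual data or pipeline): smoothed discharge rates of 20 hypothetical motor units are generated from two shared latent drives, and factor analysis recovers two modes that capture most of the variability.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic illustration (assumed setup, not the study's data): 20 motor units
# whose smoothed discharge rates are driven by two shared latent inputs.
rng = np.random.default_rng(1)
T, n_units = 2000, 20
latent = rng.normal(size=(T, 2))            # two common synaptic drives
loading = rng.normal(size=(2, n_units))     # coupling of each unit to each drive
rates = latent @ loading + 0.1 * rng.normal(size=(T, n_units))  # + independent noise

# Factor analysis recovers two "motor unit modes" across the units.
fa = FactorAnalysis(n_components=2).fit(rates)
recon = fa.transform(rates) @ fa.components_ + fa.mean_
r2 = 1 - ((rates - recon) ** 2).sum() / ((rates - rates.mean(0)) ** 2).sum()
print(fa.components_.shape, r2)  # two modes over 20 units; high explained variance
```

In this toy case the modes span all units by construction; the empirical question addressed in the study is whether real modes are muscle-specific (as in the hand) or shared across muscles (as in the knee extensors).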

Building on these insights, we have now developed an open-source software platform that translates real-time EMG activity into controllable movement outputs. This software seamlessly integrates with both exoskeletons and prosthetics, allowing for precise and intuitive movement control that aligns with the user’s intent. With this tool, we can now bring intuitive neuromotor interfaces closer to clinical reality, offering individuals with SCI and other neuromuscular impairments a new level of interaction and independence.

Literature

Lecturer

Prof. Del Vecchio has led the n-squared lab (neuromuscular physiology and neural interfacing) at FAU since 2020, in the Department of AI in Biomedical Engineering. He is mainly interested in motor unit physiology, neuromotor interfaces, and machine learning.

Affiliation: FAU Erlangen-Nürnberg
Homepage: https://www.nsquared.tf.fau.de/

PC5 – Augmenting your career with a PhD?

Lecturer: Jutta Kretzberg
Fields: Personal development / career advice

Content

Are you a student? Are you thinking about a PhD? And maybe even about a career in academia?
Does “doing a PhD” sound like fun? Or rather like pain?
Many Master's students struggle with the decision whether a PhD would be the right choice for their career. And a considerable percentage of PhD students keep wondering whether their decision was right until they graduate (or even beyond).
There is no general advice on who should pursue a PhD. The decision for or against a PhD is a personal one – it depends on many factors, including your personality, your personal situation and the available job opportunities. The goal of this workshop is to help you develop a clearer personal perspective on this decision.

Session 1: External perspectives
In the first session, I will start with a brief overview of the different options for doing a PhD in Germany. After that, we will interactively explore the perspectives of different stakeholders: What do Master's students expect from doing a PhD? What do PhD supervisors expect from their PhD students? What do employers expect from applicants with a PhD versus a Master's degree? And what is the perspective of family and friends?

Session 2: Your personal perspective
In preparation for the second session, you will write cards with your hopes, neutral expectations, and fears concerning yourself doing a PhD. Teaming up with one of the other participants, both of you will cluster these cards into the categories “tasks / skills”, “topics / scientific questions”, “working environment”, and “personal factors”. Explaining your thoughts (and listening to the thoughts of your teammate) can sharpen your personal perspective and help you to identify core aspects of your career decisions.

Please note: This workshop consists of two sessions and will be offered twice, for a maximum of 20 participants each. Please register for one of the two workshop iterations via the list at ‘Glaskasten’ during IK.
The main target group of this workshop is Master's and advanced Bachelor's students. However, the method for developing your personal perspective is also applicable to further career steps. PhD students, PhD holders, and positive non-PhDs who are willing to share their perspectives are highly welcome!

Literature

Lecturer

Jutta Kretzberg is professor for Computational Neuroscience and head of the MSc program Neuroscience at University of Oldenburg. She studied applied computer science and biology at University of Bielefeld, where she also did her PhD in Biology. After being a postdoc (and having a baby) in San Diego, California, she came back to Germany to be a junior professor and became a professor some years (and another baby) later. Nowadays, while juggling her family, teaching, research and administration duties, her favorite task is mentoring.

Affiliation: University of Oldenburg, Germany
Homepage: https://uol.de/en/neurosciences/compneuro

ET2 – How can cognitive science help in understanding what artificial intelligence systems cannot yet do?

Lecturer: Constantin Rothkopf
Fields: Cognitive Science, Artificial Intelligence

Content

Recent advances in artificial intelligence based on deep learning have led to the discovery of new medical drugs, the development of new materials, and the optimization of fusion reactor designs. However, claims about fundamental limitations persist: unpredictable blunders, limited robustness, and lack of explainability. The talk will present recent examples and studies contributing to the current debate on what current AI systems can do and what they cannot yet do. A central topic will be how to leverage Cognitive Science to understand the properties of such AI systems. The systems discussed include large language models, neural network models of economic decision-making, and visual-language foundation models; the tasks considered range from the classic Bongard problems to sensorimotor control and planning under uncertainty to deontological ethical judgments. Topics will cover the anthropomorphization of AI systems, problems of data contamination and bias, Clever-Hans phenomena, inherent limitations of benchmarks, and fundamental limitations of evaluations and comparisons in terms of performance measures of behavior.

Literature

  • Schramowski, P., Turan, C., Andersen, N., Rothkopf, C. A., & Kersting, K. (2022). Large pre-trained language models contain human-like biases of what is right and wrong to do. Nature Machine Intelligence, 4(3), 258-268.
  • Binz, M., & Schulz, E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6), e2218523120.
  • Mitchell, M. (2023). How do we know how smart AI systems are? Science, 381(6654), eadj5957.
  • Valmeekam, K., Marquez, M., Sreedharan, S., & Kambhampati, S. (2023). On the planning abilities of large language models-a critical investigation. Advances in Neural Information Processing Systems, 36, 75993-76005.
  • Thomas, T., Straub, D., Tatai, F., Shene, M., Tosik, T., Kersting, K., & Rothkopf, C. A. (2024). Modelling dataset bias in machine-learned theories of economic decision-making. Nature Human Behaviour, 8(4), 679-691.
  • McCoy, R. T., Yao, S., Friedman, D., Hardy, M. D., & Griffiths, T. L. (2024). Embers of autoregression show how large language models are shaped by the problem they are trained to solve. Proceedings of the National Academy of Sciences, 121(41), e2322420121.
  • Wüst, A., Tobiasch, T., Helff, L., Singh Dhami, D., Rothkopf, C. A., Kersting, K. (2024). Bongard in wonderland: visual puzzles that still make AI go mad? Sys2-Reasoning, NeurIPS Workshops.

Lecturer

Constantin Rothkopf is a full professor (W3) at the Institute of Psychology in the Department of Human Sciences with a secondary appointment in the Department of Computer Science at the Technical University of Darmstadt. He is the founding director of the Center for Cognitive Science and a founding member as well as a member of the executive board of the Hessian Center for Artificial Intelligence (hessian.AI). He is also a member of the board of directors of the Center for Mind, Brain and Behavior (CMBB). He is a member of the European Laboratory for Learning and Intelligent Systems (ELLIS), a faculty member of the ELLIS Unit Darmstadt, and a member of the DAAD Konrad Zuse Schools of Excellence in Artificial Intelligence (ELIZA). He is currently co-speaker of the collaborative projects The Adaptive Mind and Whitebox. After obtaining a joint PhD in Brain & Cognitive Sciences and Computer Science at the Center for Visual Science at the University of Rochester, NY in 2009, he started a postdoc at the Frankfurt Institute for Advanced Studies (FIAS) working in the theoretical neuroscience group. In 2009 he started as a lecturer at the Goethe University, Frankfurt, and from 2010 to 2012 he was the principal investigator of the “beliefs, representations, and actions group” at FIAS. After a year as a substitute professor at the Institute of Cognitive Science at the University of Osnabrück, he started as an associate professor for “psychology of information processing” at the Institute of Psychology at the Technical University of Darmstadt in 2013. During the winter semester 2017 he was a visiting professor at the Department of Cognitive Science at the Central European University, Budapest. In 2022 he received an ERC Consolidator Grant from the European Research Council for his project ‘ACTOR’. During the summer semester 2023 he was a visiting professor at the Zuckerman Institute, Columbia University, New York, USA.

Affiliation: TU Darmstadt
Homepage: https://www.pip.tu-darmstadt.de

SC9 – Challenges and opportunities of incremental learning

Lecturer: Barbara Hammer, Fabian Hinder
Fields: Machine Learning

Content

Incremental learning refers to machine learning methods that learn continuously from a given stream of training data rather than from a batch available up front. It carries great promise for model personalization and for model adaptation in changing environments. Example applications are the personalization of wearables or the monitoring of critical infrastructure. In comparison to classical batch learning, incremental learning addresses two main challenges: How to solve the algorithmic problem of efficiently adapting a model incrementally given limited memory resources? And how to solve the learning problem that the underlying input distribution might change within the stream, i.e. that drift occurs?

The course will be split into three main parts: (1) Fundamentals of incremental learning algorithms and their applications, dealing with prototypical algorithmic solutions and exemplary applications. (2) Drift detection, dealing with the question of what exactly is referred to as drift, and with algorithms to locate drift in time. (3) Monitoring change, dealing with the question of how to locate drift in space and how to provide explanations of what exactly has caused the observed drift.
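The stream-learning setting described above can be sketched in a few lines (an assumed toy setup, not course material): a linear classifier is updated sample by sample under prequential (test-then-train) evaluation, and an abrupt concept drift halfway through the stream shows up as a spike in the error rate before the model re-adapts.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy sketch (assumed setup): incremental learning on a stream of 2-D points
# whose labelling rule flips halfway through -- an abrupt concept drift.
rng = np.random.default_rng(2)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])

errors = []
for t in range(4000):
    x = rng.normal(size=(1, 2))
    flip = t >= 2000                                   # drift at t = 2000
    y = np.array([int((x[0, 0] > 0) != flip)])
    if t > 0:                                          # prequential: test, then train
        errors.append(int(clf.predict(x)[0] != y[0]))
    clf.partial_fit(x, y, classes=classes)             # incremental update

before = np.mean(errors[1700:1900])    # error rate shortly before the drift
at_drift = np.mean(errors[2000:2200])  # error rate right after the drift
after = np.mean(errors[3700:3900])     # error rate once the model re-adapted
print(before, at_drift, after)
```

Monitoring such a windowed error rate is one of the simplest drift-detection signals; the course discusses more principled detectors and how to localize and explain the detected drift.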

Literature

  • João Gama, Indrė Žliobaitė, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. 2014. A survey on concept drift adaptation. ACM Comput. Surv. 46, 4, Article 44 (April 2014), 37 pages. https://doi.org/10.1145/2523813
  • Viktor Losing, Barbara Hammer, Heiko Wersing: Incremental on-line learning: A review and comparison of state of the art algorithms. Neurocomputing 275: 1261-1274 (2018)
  • Fabian Hinder, Valerie Vaquet, Barbara Hammer: One or two things we know about concept drift – a survey on monitoring in evolving environments.
    • Part A: detecting concept drift. Frontiers Artif. Intell. 7 (2024)
    • Part B: locating and explaining concept drift. Frontiers Artif. Intell. 7 (2024)
  • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer: Incremental permutation feature importance (iPFI): towards online explanations on data streams. Mach. Learn. 112(12): 4863-4903 (2023)

Lecturer

Barbara Hammer chairs the Machine Learning research group at the Research Institute for Cognitive Interaction Technology (CITEC) at Bielefeld University. After completing her doctorate at the University of Osnabrück in 1999, she was Professor of Theoretical Computer Science at Clausthal University of Technology and a visiting researcher in Bangalore, Paris, Padua and Pisa. Her areas of specialisation include trustworthy AI, lifelong machine learning, and the combination of symbolic and sub-symbolic representations. She is a PI in the ERC Synergy Grant WaterFutures and in the DFG Transregio Constructing Explainability. Barbara Hammer has been active in the IEEE CIS as member and chair of the Data Mining Technical Committee and the Neural Networks Technical Committee. She was elected as a review board member for Machine Learning of the German Research Foundation in 2024, and she represents computer science as a member of the selection committee for fellowships of the Alexander von Humboldt Foundation. She is a member of the Scientific Directorate of Schloss Dagstuhl. Further, she has been elected a member of Academia Europaea.

Affiliation: Bielefeld University
Homepage: https://hammer-lab.techfak.uni-bielefeld.de/

Fabian Hinder is a Ph.D. student in the Machine Learning group at Bielefeld University, Germany. He received his Master’s degree in mathematics from Bielefeld University in 2018. His research interests cover learning in non-stationary environments, concept drift detection, statistical learning theory, explainable AI, and foundations of machine learning.

Affiliation: Bielefeld University
Homepage: https://hammer-lab.techfak.uni-bielefeld.de/

SC10 – Uncertainty and Trustworthiness in Natural Language Processing

Lecturer: Barbara Plank
Fields: Artificial Intelligence, Natural Language Processing

Content

Despite the recent success of Natural Language Processing (NLP) driven by advances in large language models (LLMs), there are many challenges ahead to make NLP more trustworthy. In this course, we will look at trustworthiness by taking the lens of uncertainty in language, from uncertainty in inputs, in outputs and how models deal with uncertainty themselves.

Literature

Lecturer

Barbara Plank is Professor and co-director of the Center for Information and Language Processing at LMU Munich. She holds the Chair for AI and Computational Linguistics at LMU where she leads the MaiNLP research lab (Munich AI and NLP lab, pronounced “my NLP”). Her lab focuses on robust machine learning for Natural Language Processing with an emphasis on human-inspired and data-centric approaches.

Affiliation: LMU Munich
Homepage: https://bplank.github.io/

BC4 – Interfacing brain and body – Insights into opportunities, challenges and limitations from an engineering perspective

Lecturer: Thomas Stieglitz
Fields: Brain-Body Interface, Engineering, Society

Content

Neural engineering addresses the wet interface between electronic and biological circuits and systems. There is a need to establish stable and reliable functional interfaces to neuronal and muscular target structures, in neuroscientific experiments but especially in chronic applications in humans. Here, the focus will be laid on medical applications rather than on fictional scenarios of eternal storage of the mind in the cloud. The physiological mechanisms as well as basic technical concepts will be introduced to lay a common ground for informed decisions and discussions. From a neural engineering point of view, the proper selection of substrate, insulation and electrode materials is of utmost importance to bring the interface into close contact with the neural target structures, minimize the foreign body reaction after implantation, and maintain functionality over the complete implantation period. Different materials and associated manufacturing technologies will be introduced and assessed with respect to their strengths and weaknesses for different application scenarios. Different design and development aspects from the first idea to first-in-human studies are presented, and challenges in translational research are discussed. Reliability data from long-term ageing studies and chronic experiments show the applicability of thin-film implants for stimulation and recording and of ceramic packages for electronics protection. Examples of sensory feedback after amputation trauma, vagal nerve stimulation to treat hypertension, and chronic recordings from the brain display the opportunities and challenges of these miniaturized implants. System assembly and the interfacing of microsystems to robust cables and connectors remain a major challenge in translational research and in the transition of research results into medical products.
Clinical translation raises questions and concerns when applications go beyond the treatment of serious medical conditions or rehabilitation purposes towards life-style applications. The four sessions within this course “Interfacing brain and body” will cover (1) physiological and engineering aspects of technical interfaces to brain and body: fundamentals of optogenetics, recording of bioelectricity, and electrical stimulation, (2) neuroscientific and clinical applications of neural technology, (3) the challenges of neural implant longevity, and (4) ethical and societal considerations in neural technology use.

Literature

  • Cogan SF. Neural stimulation and recording electrodes. Annu Rev Biomed Eng. 10:275-309 (2008). DOI: 10.1146/annurev.bioeng.10.061807.160518
  • Hassler, C., Boretius, T., Stieglitz, T.: “Polymers for Neural Implants” J Polymer Science-Part B: Polymer Physics, 49 (1), 18-33 (2011). Erratum in: 49, 255 (2011); DOI: 10.1002/polb.22169
  • Alt, M.T., Fiedler, E., Rudmann, L., Ordonez, J.S., Ruther, P., Stieglitz, T. “Let there be Light – Optoprobes for Neural Implants”, Proceedings of the IEEE 105 (1), 101-138 (2017); DOI: 10.1109/JPROC.2016.2577518
  • Stieglitz, T.: Of man and mice: translational research in neurotechnology, Neuron, 105(1), 12-15 (2020). DOI: 10.1016/j.neuron.2019.11.030
  • Stieglitz, T.: Why Neurotechnologies? About the Purposes, Opportunities and Limitations of Neurotechnologies in Clinical Applications. Neuroethics, 14: 5-16 (2021), doi: 10.1007/s12152-019-09406-7
  • Jacob T. Robinson, Eric Pohlmeyer, Malte Gather, Caleb Kemere, John E. Kitching, George G. Malliaras, Adam Marblestone, Kenneth L. Shepard, Thomas Stieglitz, Chong Xie. Developing Next-Generation Brain Sensing Technologies—A Review. IEEE Sensors Journal, 18(22), pp. 10163-10175 (2019) DOI: 10.1109/JSEN.2019.2931159
  • Boehler, C., Carli, S., Fadiga, L., Stieglitz, T., Asplund, M.: Tutorial: Guidelines for standardized performance tests for electrodes intended for neural interfaces and bioelectronics. Nature Protocols, 15 (11), 3557-3578 (2020) https://doi.org/10.1038/s41596-020-0389-2
  • Hofmann UG, Stieglitz T. Why some BCI should still be called BMI. Nat Commun. 2024 Jul 23;15(1):6207. doi: 10.1038/s41467-024-50603-7

Lecturer

Thomas Stieglitz was born in Goslar in 1965. He received a Diploma degree in electrical engineering from Technische Hochschule Karlsruhe, Germany, in 1993, and a PhD and habilitation degree from the University of Saarland, Germany, in 1998 and 2002, respectively. In 1993, he joined the Fraunhofer Institute for Biomedical Engineering in St. Ingbert, Germany, where he established the Neural Prosthetics Group. Since 2004, he has been a full professor for Biomedical Microtechnology at the Albert-Ludwigs-University Freiburg, Germany, in the Department of Microsystems Engineering (IMTEK) at the Faculty of Engineering. He currently serves IMTEK as managing director, is deputy spokesperson of the Cluster BrainLinks-BrainTools, board member of the Intelligent Machine Brain Interfacing Technology (IMBIT) Center, and spokesperson of the research profile “Signals for Life” of the university. He further serves the university as a member of the senate and as co-spokesperson of the commission for responsibility in research, as well as the university medical center as an advisory board member. He was awarded IEEE Fellow in 2022. Dr. Stieglitz is co-author of about 200 scientific journal articles and about 350 conference proceedings presentations and co-inventor of about 30 patents. He is co-founder and scientific advisory board member of the neurotech spin-offs CorTec and neuroloop. His research interests include neural interfaces and implants, biocompatible assembling and packaging, and brain-machine interfaces.

Affiliation: University of Freiburg
Homepage: https://www.imtek.de/professuren/bmt

MC4 – New bodies, new minds – A Practical Guide to Body Ownership Illusions

Lecturer: Andreas Kalckert
Fields: Cognitive neuroscience, Experimental psychology

Content

The introduction of body ownership illusion experiments has revolutionized our ability to explore and manipulate the subjective experience of our own bodies. Innovative paradigms like the rubber hand illusion and body swapping have greatly expanded our understanding of the cognitive and neural mechanisms that shape our sense of self. Due to their relatively low technical and practical demands, these paradigms have surged in popularity, leading to hundreds of studies. However, this rapid expansion has also resulted in a variety of experimental approaches, often lacking standardized methods—making it challenging to ensure comparability and replicability across studies.
In this methods course, we aim to tackle both the methodological and conceptual dimensions of these intriguing paradigms. We will dive deep into the existing literature, critically evaluating the theoretical and practical aspects of these experiments. We will also identify specific elements of experimental paradigms—such as stimulation protocols and measurement techniques—that are not only practically relevant but also crucial for refining our understanding of these illusions.
The course is designed to be both informative and interactive, consisting of two lectures and two practical sessions where we will explore body ownership illusions in both physical and virtual reality environments. By the end of the course, participants will have gained a clearer insight into the nuances and demands of these fascinating experiments and be equipped with a set of best practices for conducting replicable studies.

Literature

  • Ehrsson, H. H. (2019). Multisensory processes in body ownership. In Multisensory Perception: From Laboratory to Clinic (pp. 179–200). Elsevier. https://doi.org/10.1016/B978-0-12-812492-5.00008-5
  • Riemer, M., Trojan, J., Beauchamp, M., & Fuchs, X. (2019). The rubber hand universe: On the impact of methodological differences in the rubber hand illusion. Neuroscience and Biobehavioral Reviews, 104(January), 268–280. https://doi.org/10.1016/j.neubiorev.2019.07.008
  • Kilteni, K., Maselli, A., Kording, K. P., & Slater, M. (2015). Over my fake body: Body ownership illusions for studying the multisensory basis of own-body perception. Frontiers in Human Neuroscience, 9(MAR). https://doi.org/10.3389/fnhum.2015.00141

Lecturer

Andreas Kalckert earned his PhD in Neuroscience from the Karolinska Institute in Sweden. He previously lectured in psychology at the University of Reading Malaysia and is now a senior lecturer in Cognitive Neuroscience at the University of Skövde, Sweden. His research focuses on the psychological and neuroscientific processes involved in the experience of the body, with a particular interest in the role of movement.

Affiliation: Department of Cognitive Neuroscience and Philosophy, University of Skövde, Sweden
Homepage: https://www.his.se/en/about-us/staff/andreas.kalckert/

PC2 – Hands-on Hardware: (No) Brain in Robots and Edge Computing? Put the brain on ’em!

Lecturer: Tim Tiedemann
Fields: Robotics, Sensor Data Processing, Edge Computing, Machine Learning

Content

(this course description is under construction)
In this practical course, we will touch hardware — robots and microcontrollers — and we will see: there is no brain inside! But as robots and lone small edge hardware out in the woods would really benefit from something like a brain, this course combines hardware and cognitive sciences. And as we are nice guys and gals, we will do it on our own: Put it on ’em! Put the brain on ’em!

As it is currently planned, different hardware will be on site and/or accessible:
– mobile wheel-based robot
– autonomous underwater vehicle (AUV)
– multiple microcontroller boards with different sensors
– (and as there need to be disappointments: systems in simulation)

We will try to implement different findings from the cognitive sciences (i.e. Biology and Cognitive Psychology) or Data Science on the (small!) systems (some were already implemented by the instructor in research projects, some are brand-new and unrevealed).

The course is planned as BYOD; detailed descriptions of what notebook/installation should be brought to the course to participate hands-on will follow.

Literature

  • (will follow soon)

Lecturer

Tim Tiedemann studied computer science with a focus on robotics and neural networks at Bielefeld University, Bielefeld, Germany. After receiving his Diploma in computer science (i.e. at Master's degree level), he worked as a research assistant at the Cognitive Psychology Group and at the Computer Engineering Group (both at Bielefeld University). In his Ph.D. studies in biorobotics he focused on the transfer of (neuro-)biological concepts to the robotic domain. From 2010 until 2016 he worked as a postdoc in the area of space and underwater robotics at the German Research Center for Artificial Intelligence (DFKI) in Bremen, Germany. Since 2016 he has been professor of intelligent sensing at the University of Applied Sciences Hamburg (HAW Hamburg, Hamburg, Germany). His main research interests are sensors and sensor data processing (including machine learning methods) and robotics.

Affiliation: HAW Hamburg

SC6 – Mirroring, alerting, guiding, on-loading, off-loading. How can (and should) adaptive technology support learning and teaching?

Lecturer: Sebastian Strauß
Fields: Educational Psychology, Learning Analytics, Learning Sciences

Content

Educational technology has come a long way since the Teaching Machines from the 1920s. While some commercial educational software can arguably still be classified as Teaching Machines, research and development in the learning sciences and technology-enhanced learning have produced educational technologies that can facilitate different aspects of teaching and learning alike.

The objective of this course is to examine the concept of learning from the standpoint of educational psychology, with a particular focus on the potential of adaptive educational technology to facilitate learning and teaching processes. We will do so by conceptualizing learning and teaching contexts as a relationship between (usually) one teacher and a group of students, with educational technology leveraged as a tool that supports both sides. For example, educational technology can offer insight into learning and teaching processes (mirroring), draw inferences and offer diagnoses (alerting), or take automated actions on behalf of the learners or the teacher (guiding).

The course will examine the perspectives of both learners and their teachers. We will explore how educational technology can enhance learners’ cognitive and metacognitive abilities, beyond merely making learning more efficient. To this end, we first look at learning processes and learning outcomes in the context of school and higher education and discuss how they can be observed by humans (and machines).

From the perspective of the learners, we then explore educational technologies that can provide us with information about the development of our skills and our learning behavior. As an example, we will look at learning analytics dashboards. Further, we look at technologies that provide learning tasks, assess the learning progress, and utilize this information to provide us with individualized support. Examples for such a technology are intelligent tutoring systems.

Taking the perspective of the teacher, we look at educational technologies that provide us with an overview of our learners’ progress, for example teacher dashboards. Such tools may enhance our professional vision as teachers by providing information that is usually difficult to gather and to aggregate. This information, in turn, can offer valuable insights into the learning of the entire class and of individual students, which allows us to provide them with the assistance they need. Going one step further, teacher dashboards may also process the data further and alert us to challenges that our learners face, or even suggest instructional support for the learners.
At the same time, data analytics approaches can also focus on the teachers and their teaching. For example, such tools can allow teachers to observe and adapt their own teaching, which offers benefits for teacher professional development.

As we look at these different perspectives, we’ll explore the various levels of automation that educational technology can offer in the classroom, and the challenges that result from the need for valid measurements of learning and teaching, from biased data sets, and from automation-induced complacency. Further, we will explore the consequences of (partially) offloading cognitive and metacognitive processes to automated systems. On the side of the students, this includes the question of which tasks should be on-loaded or off-loaded to foster learning. With respect to teachers, we will discuss the concepts of hybrid intelligence and human-AI co-orchestration in the classroom.

Literature

  • Baker, R. S., & Hawn, A. (2022). Algorithmic Bias in Education. International Journal of Artificial Intelligence in Education 32, 1052–1092. https://doi.org/10.1007/s40593-021-00285-9
  • Celik, I., Dindar, M., Muukkonen, H. et al. (2022). The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends 66, 616–630. https://doi.org/10.1007/s11528-022-00715-y
  • Doroudi, S. (2023). The Intertwined Histories of Artificial Intelligence and Education. International Journal of Artificial Intelligence in Education 33, 885–928. https://doi.org/10.1007/s40593-022-00313-2
  • Eberle, J., Strauß, S., Nachtigall, V., & Rummel, N. (2024). Analyse prozessbezogener Verhaltensdaten mittels Learning Analytics: Aktuelle und zukünftige Bedeutung für die Unterrichtswissenschaft. Unterrichtswissenschaft, 1-13.
  • Holmes, W., Persson, J., Chounta, I. A., Wasson, B., & Dimitrova, V. (2022). Artificial intelligence and education: A critical view through the lens of human rights, democracy and the rule of law. Council of Europe.
  • Holstein, K., Aleven, V., Rummel, N. (2020). A Conceptual Framework for Human–AI Hybrid Adaptivity in Education. In: Bittencourt, I., Cukurova, M., Muldner, K., Luckin, R., Millán, E. (eds) Artificial Intelligence in Education. AIED 2020. Lecture Notes in Computer Science, vol 12163. Springer, Cham. https://doi.org/10.1007/978-3-030-52237-7_20
  • Molenaar, I. (2022). Towards hybrid human‐AI learning technologies. European Journal of Education, 57(4), 632-645.
  • Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620-631.
  • van Leeuwen, A., Strauß, S., & Rummel, N. (2023). Participatory design of teacher dashboards: navigating the tension between teacher input and theories on teacher professional vision. Frontiers in Artificial Intelligence, 6:1039739. https://doi.org/10.3389/frai.2023.1039739
  • van Leeuwen, A., & Rummel, N. (2019). Orchestration tools to support the teacher during student collaboration: a review. Unterrichtswissenschaft 47, 143–158. https://doi.org/10.1007/
  • Wise, A. F., & Shaffer, D. W. (2015). Why Theory Matters More than Ever in the Age of Big Data. Journal of Learning Analytics, 2(2), 5-13. https://doi.org/10.18608/jla.2015.22.2

Lecturer

Sebastian Strauß is a postdoctoral researcher in the Educational Psychology and Technology research group at Ruhr-University Bochum (Germany). His core research focuses on collaborative learning in computer-supported settings. He is interested in how students learn and work together in small groups, how they adapt their interaction, how they acquire collaboration skills, and how we can use computer technology to facilitate collaboration. In this context, he is also interested in human-computer collaboration. Recently, his research focus has expanded to using fine-grained data about the learning process to assess and support individual learning.

Affiliation: Ruhr-University Bochum
Homepage: https://www.pe.ruhr-uni-bochum.de/erziehungswissenschaft/pp/team/strauss.html.de