SC9 – Challenges and opportunities of incremental learning

Lecturer: Barbara Hammer, Fabian Hinder
Fields: Machine Learning

Content

Incremental learning refers to machine learning methods which learn continuously from a stream of training data rather than from a batch that is available a priori. It carries great promise for model personalization and for model adaptation in changing environments; example applications are the personalization of wearables and the monitoring of critical infrastructure. In comparison to classical batch learning, incremental learning addresses two main challenges: the algorithmic problem of how to efficiently adapt a model incrementally given limited memory resources, and the learning problem that the underlying input distribution might change within the stream, i.e. that drift occurs.
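The constant-memory flavour of this setting can be illustrated with a minimal sketch (an illustration, not an algorithm from the course): a perceptron-style linear classifier that updates on each stream element as it arrives and never stores past samples, so memory use is independent of stream length. The data-generating rule below is hypothetical.

```python
import random

class OnlinePerceptron:
    """Minimal incremental linear classifier: constant memory, one update per sample."""

    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim   # weight vector
        self.b = 0.0           # bias term
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0 else -1

    def partial_fit(self, x, y):
        # Classic perceptron rule: update only on mistakes.
        if self.predict(x) != y:
            self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * y

random.seed(0)
model = OnlinePerceptron(dim=2)

# Hypothetical stream: points labelled by the sign of x0 + x1.
for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if x[0] + x[1] >= 0 else -1
    model.partial_fit(x, y)   # model adapts sample by sample

# Evaluate on fresh points from the same distribution.
correct = 0
for _ in range(500):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if x[0] + x[1] >= 0 else -1
    correct += (model.predict(x) == y)
accuracy = correct / 500
```

Each sample is seen once and discarded; the model state (here, three floats) is all that persists, which is what makes such methods viable under the memory constraints described above.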

The course will be split into three main parts: (1) Fundamentals of incremental learning algorithms and their applications, dealing with prototypical algorithmic solutions and exemplary applications. (2) Drift detection, dealing with the question of what exactly is referred to as drift, and with algorithms to locate drift in time. (3) Monitoring change, dealing with the question of how to locate drift in space and how to explain what exactly has caused the observed drift.
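To make the idea of locating drift in time concrete, here is a minimal sketch of a two-window drift test (a simplified illustration under assumed parameters, not one of the course's algorithms): the mean of a fixed reference window from the start of the stream is compared against the mean of a sliding recent window, and drift is flagged when the difference exceeds a threshold.

```python
import random

def detect_drift(stream, window=50, threshold=0.8):
    """Flag drift at the first time step where the recent-window mean
    deviates from the reference-window mean by more than `threshold`."""
    reference = stream[:window]
    ref_mean = sum(reference) / window
    for t in range(window, len(stream) - window + 1):
        recent = stream[t:t + window]
        if abs(sum(recent) / window - ref_mean) > threshold:
            return t  # first index at which drift is flagged
    return None       # no drift detected

random.seed(1)
# Hypothetical stream: mean 0 for 300 steps, then an abrupt shift to mean 2.
stream = ([random.gauss(0, 1) for _ in range(300)]
          + [random.gauss(2, 1) for _ in range(300)])
drift_at = detect_drift(stream)
```

Real drift detectors must additionally control the false-alarm rate and handle gradual or recurring drift, which is precisely where the statistical questions treated in parts (2) and (3) arise.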

Literature

  • João Gama, Indrė Žliobaitė, Albert Bifet, Mykola Pechenizkiy, and Abdelhamid Bouchachia. 2014. A survey on concept drift adaptation. ACM Comput. Surv. 46, 4, Article 44 (April 2014), 37 pages. https://doi.org/10.1145/2523813
  • Viktor Losing, Barbara Hammer, Heiko Wersing: Incremental on-line learning: A review and comparison of state of the art algorithms. Neurocomputing 275: 1261-1274 (2018)
  • Fabian Hinder, Valerie Vaquet, Barbara Hammer: One or two things we know about concept drift – a survey on monitoring in evolving environments.
  • Part A: detecting concept drift. Frontiers Artif. Intell. 7 (2024)
    • Part B: locating and explaining concept drift. Frontiers Artif. Intell. 7 (2024)
  • Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer: Incremental permutation feature importance (iPFI): towards online explanations on data streams. Mach. Learn. 112(12): 4863-4903 (2023)

Lecturer

Barbara Hammer chairs the Machine Learning research group at the Research Institute for Cognitive Interaction Technology (CITEC) at Bielefeld University. After completing her doctorate at the University of Osnabrück in 1999, she was Professor of Theoretical Computer Science at Clausthal University of Technology and a visiting researcher in Bangalore, Paris, Padua and Pisa. Her areas of specialisation include trustworthy AI, lifelong machine learning, and the combination of symbolic and sub-symbolic representations. She is PI in the ERC Synergy Grant WaterFutures and in the DFG Transregio Constructing Explainability. Barbara Hammer has been active at IEEE CIS as member and chair of the Data Mining Technical Committee and the Neural Networks Technical Committee. She was elected as a review board member for Machine Learning of the German Research Foundation in 2024, and she represents computer science as a member of the selection committee for fellowships of the Alexander von Humboldt Foundation. She is a member of the Scientific Directorate of Schloss Dagstuhl. Further, she has been selected as a member of Academia Europaea.

Affiliation: Bielefeld University
Homepage: https://hammer-lab.techfak.uni-bielefeld.de/

Fabian Hinder is a Ph.D. student in the Machine Learning group at Bielefeld University, Germany. He received his Master’s degree in mathematics from Bielefeld University in 2018. His research interests cover learning in non-stationary environments, concept drift detection, statistical learning theory, explainable AI, and foundations of machine learning.

Affiliation: Bielefeld University
Homepage: https://hammer-lab.techfak.uni-bielefeld.de/