ET4 – Biosignal Processing for Human-Machine Interaction

Lecturer: Tanja Schultz
Fields:

Content

Human interaction is a complex process involving modalities such as speech,
gestures, motion, and brain activity, which emit a wide range of biosignals that can be captured by a broad array of sensors. The processing and interpretation of these biosignals offer an inside perspective on human physical and mental activities and thus complement the traditional way of observing human interaction from the outside. As recent years have seen major advances in sensor technologies integrated into ubiquitous devices, and in machine learning methods to process and learn from the resulting data, the time is right to use the full range of biosignals to gain further insights into the process of human-machine interaction.

In my talk I will present ongoing research at the Cognitive Systems Lab (CSL), where we
explore interaction-related biosignals with the goal of advancing machine-mediated human
communication and human-machine interaction. Several applications will be described, such as Silent Speech Interfaces that rely on articulatory muscle movement captured by
electromyography to recognize and synthesize silently produced speech, as well as Brain
Computer Interfaces that use brain activity captured by electrocorticography to recognize
speech (brain-to-text) and directly convert electrocortical signals into audible speech (brain-to-speech). I will also describe the recording, processing and automatic structuring of human everyday activities based on multimodal high-dimensional biosignals within the framework of EASE, a collaborative research center on cognition-enabled robotics. This work aims to establish an open-source biosignals corpus for investigations on how humans plan and execute interactions with the aim of facilitating robotic mastery of everyday activities.
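
Biosignal pipelines like the ones sketched above (EMG for silent speech, ECoG for brain-to-text) typically begin with frame-based feature extraction: slicing the raw signal into short windows and summarizing each one. A minimal sketch in Python; the function name, window length, and toy signal are illustrative assumptions, not CSL's actual pipeline:

```python
import math

def rms_features(signal, win_len, hop):
    """Frame a 1-D signal into windows and return the root-mean-square
    (RMS) amplitude of each window -- a standard first step in
    EMG/biosignal processing before classification."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        window = signal[start:start + win_len]
        frames.append(math.sqrt(sum(x * x for x in window) / win_len))
    return frames

# Toy "EMG" trace: a quiet segment followed by a burst of muscle activity.
signal = [0.0] * 8 + [1.0, -1.0] * 4
features = rms_features(signal, win_len=4, hop=4)
print(features)  # low values for the quiet frames, high values for the burst
```

A classifier would then operate on such feature frames rather than on raw samples.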

Objectives

None

Literature

None

Lecturer

Dr. Tanja Schultz

Tanja Schultz received her diploma (1995) and doctoral degree (2000) in Informatics from the University of Karlsruhe and completed her Master's degree (1989) in Mathematics, Sports, and Educational Science at Heidelberg University, Germany.
Dr. Schultz is Professor for Cognitive Systems at the University of Bremen, Germany, and adjunct Research Professor at the Language Technologies Institute of Carnegie Mellon University, PA, USA. Since 2007 she has directed the Cognitive Systems Lab, where her research activities include multilingual speech recognition and the processing of biosignals for human-centered technologies and applications. Since 2019 she has been the spokesperson of Bremen's high-profile area "Minds, Media, Machines". Dr. Schultz is an Associate Editor of ACM Transactions on Asian Language Information Processing and serves on the Editorial Board of Speech Communication. She was President and an elected Board Member of ISCA, and a General Co-Chair of Interspeech 2006. She is a Fellow of ISCA and a member of the European Academy of Sciences and Arts. Dr. Schultz has received several awards, including the Alcatel-Lucent Award for Technical Communication, the PLUX Wireless Biosignals Award, the Allen Newell Medal for Research Excellence, and the Speech Communication Best Paper Awards in 2001 and 2015.

Affiliation: University of Bremen

PC4 – Curious Making, Taking Fabrication Risks and Crafting Rewards

Lecturer: Janis Meißner
Fields: Design, Human Computer Interaction

Content

This course is about getting hands-on and curious with electronics and different craft materials. Maker toolkits are a great way to get started with designing your own interactive sensor systems – but what if these designs could also integrate other (potentially more aesthetic) materials? E-textiles and paper circuits are good examples of how functional electronic systems can be recrafted with rewarding results. In principle, any everyday material can be used with a bit of thinking outside the (tool)box. Let's see what you will use to hack your ideas into being!

Course Outline:

After a brief intro to microcontrollers and programming them with the Arduino IDE, participants will design their own simple input-output systems and gradually re-craft the hardware in innovative ways using crafting materials such as paper, fabric, and paperclips. Participants who seek a little extra challenge are invited to work in small teams (2-4) to design an interactive artefact that combines their respective research interests.

The course is structured as follows:

Session 1: Introduction to microcontrollers, off-the-shelf components and self-paced experimenting with the help of tutorials

Session 2: Designing an input-output system with off-the-shelf components. Starting to explore how ready-made components can be re-made with crafts materials.

Session 3-4: Recrafting your system design with craft materials of your choice. Don’t forget to present your inventions to your course mates so that everyone can applaud your creative hacking genius! 🙂

Objectives

  • Learning the basics of programming electronics with microcontrollers
  • Learning the basics of how a selection of sensors and actuators work
  • Exploring alternative approaches to electronics beyond off-the-shelf components
  • Unleashing your creative hacking skills

Literature

Perner-Wilson, H., Buechley, L. & Satomi, M. (2011) ‘Handcrafting textile interfaces from a kit-of-no-parts’, in Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction – TEI ’11. New York, USA: ACM Press. p. 61. https://doi.org/10.1145/1935701.1935715

Posch, I. & Fitzpatrick, G. (2018) Integrating Textile Materials with Electronic Making. Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction – TEI ’18. 158–165. https://doi.org/10.1145/3173225.3173255

Meissner, J.L., Strohmayer, A., Wright, P. & Fitzpatrick, G. (2018) ‘A Schnittmuster for Crafting Context-Sensitive Toolkits’, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems – CHI ’18. New York, New York, USA: ACM Press. https://doi.org/10.1145/3173574.3173725

Lecturer

Janis Lena Meißner

Janis Lena Meißner is a doctoral trainee in Digital Civics at Open Lab, Newcastle University, and co-founder of fempower.tech, a group of intersectional feminists who aim to raise awareness of feminist issues in Human Computer Interaction. As maker technologies give individuals an opportunity to develop their own objects and tools, Janis is interested in exploring ways that these technologies can empower non-technical communities who lack access to infrastructures such as fablabs or makerspaces. In her research she has collaborated with groups as diverse as urban knitters, glass artists, quilting sex workers, makers with disabilities and members of a Men's Shed interested in combining their woodworking skills with 3D printing. Using a Participatory Action Research methodology and a portable makerspace for adapting tool(kit)s to the specific contexts of making, her aim is to develop a community-driven approach to Making that allows people to weave pre-existing crafting skills into their use of digital maker technologies.

Affiliation: Newcastle University
Websites: https://fempower.tech/ https://openlab.ncl.ac.uk/people/janis-lena-meissner/ https://twitter.com/janislena

IC4 – Introduction to Ethics in AI

Lecturer: Heike Felzmann
Fields: Ethics, AI

Content

The last few years have seen an explosion of societal uses of AI technologies, but at the same time widespread public scepticism and fear about their use have emerged. In response to these concerns, professional bodies and societal actors have recently published a wide range of guidance documents for good practice in AI. Both as researchers in AI and as consumers of AI, it is helpful to understand the ethical concepts and concerns associated with its use and to be familiar with some of these guidance documents, in order to reflect carefully on their ethical and social meaning, weigh their benefits and risks, and adapt one's practices accordingly.

This course provides a general introduction to emergent ethical issues in the field of AI. It will be suitable for anyone with an interest in reflecting on how AI impacts on contemporary life and society. Over the four sessions of the course we will introduce and reflect on ideas and practical applications related to the following topics:

  • Understanding privacy, consent and transparency
  • Automated decision-making, algorithmic biases, autonomous artificial agents and accountability for decisions by artificial agents
  • Assistance, surveillance, persuasion, and human replacement
  • Responsible design and implementation, trustworthiness, and AI for good

Objectives

The goal of the course is for participants to gain familiarity with core ethical concepts and concerns arising in the development and societal uses of AI, allowing participants to engage in a differentiated and informed manner with the societal debates on AI.

Literature

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press. (on Google Books)

HLEG on AI (2019) Ethics Guidelines for Trustworthy AI, https://ec.europa.eu/futurium/en/ai-alliance-consultation 

Nissenbaum, H. (2019). Contextual Integrity Up and Down the Data Food Chain. Theoretical Inquiries in Law, 20(1), 221-256. http://www7.tau.ac.il/ojs/index.php/til/article/download/1614/1715 (ignore the abstract, which is much more obscure than the rest of the article! Contextual integrity is a useful theory of privacy.)

Zuboff, S. (2019) The Age of Surveillance Capitalism: The fight for a human future at the frontier of power. (Youtube interviews with Zuboff might be a good introduction.)

Lecturer

Heike Felzmann is a lecturer in Ethics in the School of History and Philosophy at NUI Galway, Ireland. She works on ethics in information technologies (especially on healthcare robots and AI), research ethics, and general health care ethics. She has been part of several European projects, including H2020 MARIO on a care robot for patients with dementia, H2020 ROCSAFE on robot supported incident response, COST 16116 on robotic exoskeletons, COST RANCARE on rationing in nursing care, ITN DISTINCT on technology use in dementia care, ERASMUS PROSPERO on education on social robots for social care, and was the chair of the COST Action CHIPME on innovations in genomics for health. She has also had extensive experience with research ethics governance and research ethics training. She teaches ethics widely across disciplines and is looking forward to meeting the interdisciplinary audience at the IK.

Website: http://www.nuigalway.ie/our-research/people/humanities/heikefelzmann/

Affiliation: NUI Galway

FC13 – Hominum-ex-Machina: About Artificial and Real Intelligence

Lecturer: Markus Krause
Fields: Human Computer Interaction, Artificial Intelligence (actually: Advanced Statistical Analysis and Pattern Recognition), Human Computation

Content

Modern computational systems have amazing capabilities. They can detect a face or fingerprint in millions of samples, find a search term in a sea of billions of documents, and control the flow of trillions of dollars. Some of these abilities seem almost supernatural and even frightening. Yet our brains are still the architects of invention and might remain so for aeons to come. Understanding and utilising the difference between machine and human intelligence is one of the new frontiers of computer science. With the advent of the next AI winter, integrating human intervention into almost-autonomous systems will gain crucial importance in the near future.

In this course we aim to lift a bit of the mystic shroud that surrounds artificial intelligence. We will uncover its abilities, unveil its shortcomings, and even conjure a deep neural network from (almost) thin air. You do not need to be an experienced coder or mathematical genius. A basic understanding of Python and 8th-grade math skills are enough to follow the course and build your own "AI". After this hopefully disillusionary exercise we take a refreshing dive into reality. We will investigate real intelligence and how our brain's talent for strategic problem solving can fuse with the sheer calculation power of machines. We will explore how these socio-technical systems will shape the future, and the risks and pitfalls of the Hominum-ex-Machina.
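
As a taste of the "deep neural network from (almost) thin air" exercise, here is a sketch of the idea in NumPy: a tiny network trained by plain backpropagation on XOR, the classic mapping no single linear unit can learn. The architecture, seed, and learning rate are illustrative assumptions, not the course's actual material:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table as a four-example training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, randomly initialized.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    d_out = (out - y) * out * (1 - out)             # backprop through output sigmoid
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)  # backprop through hidden sigmoid
    W2 -= hidden.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hid;      b1 -= d_hid.sum(axis=0)

hidden = sigmoid(X @ W1 + b1)
out = sigmoid(hidden @ W2 + b2)
mse = float(np.mean((out - y) ** 2))
print(np.round(out.ravel(), 2), mse)  # predictions move toward 0, 1, 1, 0
```

About twenty lines of arithmetic: that is the whole "magic" of a small neural network.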

Objectives

Understanding the limitations of machine-based decision capabilities, the abilities that set humans apart from computers, and how human and machine abilities can fuse to form large-scale computational systems.

Literature

Interesting AI Papers:
Saxton et al.: Analysing Mathematical Reasoning Abilities of Neural Models. https://arxiv.org/pdf/1904.01557.pdf
Rumelhart et al.: Learning Internal Representations by Error Propagation
Krizhevsky et al.: ImageNet Classification with Deep Convolutional Neural Networks
Hochreiter & Schmidhuber: Long Short-Term Memory
Vaswani et al.: Attention Is All You Need

HComp Papers:
https://dl.acm.org/conference/chi
https://dl.acm.org/conference/cscw
https://www.aaai.org/Library/HCOMP/hcomp-library.php

First Paper about Human Computation and the inverse Turing test: http://www.wisdom.weizmann.ac.il/~naor/PAPERS/human.pdf

Book by Luis von Ahn and Edith Law on the basics of the inverted Turing test at work: https://www.google.com/books/edition/Human_Computation/bF7ePcj-cUMC?hl=en&gbpv=1&printsec=frontcover

A set of interesting papers to take crowdsourcing to a higher complexity level:
https://hci.stanford.edu/publications/2017/flashorgs/flash-orgs-chi-2017.pdf
https://hci.stanford.edu/publications/2017/crowdresearch/crowd-research-uist2017.pdf
https://www.mooqita.org/publications/empoweringhiddentalents.pdf

Brian Christian's account of participating in the annual Turing test competition: https://www.amazon.com/Most-Human-Artificial-Intelligence-Teaches/dp/0307476707

Lecturer

Dr. Markus Krause

Dr. Markus Krause is a computer scientist, professional game designer, and serial entrepreneur. He co-founded Mooqita, a Berkeley-based non-profit supporting students in finding the job they love. Mooqita uses a novel approach combining human and machine intelligence. Dr. Krause also co-founded Brainworks.ai, which develops a new neural cortex to use smartphones as diagnostic tools for online health care applications. He is also the primary investigator for the Mooqita project at the International Computer Science Institute at UC Berkeley and part of the advisory committee of the DAAD IFI. Dr. Krause earned his doctoral degree in computer science from the University of Bremen, Germany, and Carnegie Mellon University in Pittsburgh, USA.

Websites: https://www.mooqita.org/
http://brainworks.ai/
https://www.linkedin.com/in/markus-krause-3490b246/

MC4 – Learning Mappings via Symbolic, Probabilistic, and Connectionist Modeling

Lecturer: Afsaneh Fazly
Fields: Machine Learning, Cognitive Modelling, Language Acquisition

Content

In session 1, we cover the basics of several mapping (association) problems, including theoretically important challenges such as the acquisition of word meanings in young children, as well as applied settings such as learning multimodal or multilingual representations.

Session 2 focuses on the early approaches applied to a mapping problem, including symbolic and probabilistic methods.

Session 3 covers the more recent techniques (linear transformations and deep learning), in the context of several mapping problems, such as learning multimodal and multilingual mappings.
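
The cross-situational intuition behind these mapping problems can be sketched in a few lines: a learner that simply accumulates word-referent co-occurrence counts across ambiguous situations and normalizes them. This is a deliberately bare-bones toy, far simpler than the probabilistic models covered in the course; the data and names are invented for illustration:

```python
from collections import defaultdict

def cross_situational(pairs):
    """Accumulate word/referent co-occurrence counts over ambiguous
    situations and normalize each word's counts into association scores."""
    counts = defaultdict(lambda: defaultdict(float))
    for words, referents in pairs:
        for w in words:
            for r in referents:
                counts[w][r] += 1.0
    return {w: {r: c / sum(row.values()) for r, c in row.items()}
            for w, row in counts.items()}

# Each situation pairs an utterance with a set of candidate referents;
# only "ball" co-occurs with BALL every time it is heard.
pairs = [(["the", "ball"], {"BALL", "DOG"}),
         (["a", "ball"],   {"BALL", "CAT"}),
         (["the", "dog"],  {"DOG", "BALL"})]
assoc = cross_situational(pairs)
best = max(assoc["ball"], key=assoc["ball"].get)
print(best, assoc["ball"][best])  # "ball" is most strongly associated with BALL
```

No single situation disambiguates "ball", but the statistics across situations do, which is exactly the cross-situational effect the symbolic and probabilistic models formalize.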

Objectives

The objective is to cover three different approaches applied to the same problem of learning mappings across modalities (e.g., learning the meanings of words, learning mappings between audio/words and image/video segments, learning multilingual representations, etc.).

Literature

J.M. Siskind (1995). Grounding Language in Perception. Artificial Intelligence Review, 8:371-391, 1995. [LINK]

J.M. Siskind (1996). A Computational Study of Cross-Situational Techniques for Learning Word-to-Meaning Mappings. Cognition, 61(1-2):39-91, October/November 1996. Also appeared in Computational Approaches to Language Acquisition, M.R. Brent, ed., Elsevier, pp. 39-91, 1996. [LINK]

Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20, 579-585. [LINK]

Fazly, A., Alishahi, A., & Stevenson, S. (2010). A probabilistic computational model of cross-situational word learning. Cognitive Science: A Multidisciplinary Journal, 34(6): 1017–1063. [LINK]

Tadas Baltrusaitis, Chaitanya Ahuja, and Louis-Philippe Morency (2017). Multimodal Machine Learning: A Survey and Taxonomy. [LINK]

Zhang, Y., Chen, C.H., & Yu, C. (2019). Mechanisms of Cross-situational Learning: Behavioral and Computational Evidence. Advances in child development and behavior. [LINK]

Sebastian Ruder, Ivan Vulić, Anders Søgaard (2019). A Survey of Cross-lingual Word Embedding Models. Journal of Artificial Intelligence Research 65: 569-631. [LINK]

Lecturer

Dr. Afsaneh Fazly

Afsaneh Fazly is a Research Director at the Samsung Toronto AI Centre and an Adjunct Professor in the Computer Science Department of the University of Toronto in Canada. Afsaneh has extensive experience in both academia and industry, publishing award-winning papers and building strong teams that solve real-world problems. Afsaneh's research draws on many subfields of AI, including Computational Linguistics, Cognitive Science, Computational Vision, and Machine Learning. Afsaneh strongly believes that solving many of today's real-world problems requires an interdisciplinary approach that can bridge the gap between machine intelligence and human cognition.

Before joining Samsung Research, Afsaneh worked at several Canadian companies as Research Director, where she helped build and lead teams of outstanding scientists and engineers solving a diverse set of AI problems. Prior to that, Afsaneh was a Research Scientist and Course Instructor at the University of Toronto, where she also received her PhD from. Afsaneh lives in Toronto, with her husband and two young children. Afsaneh’s main hobby these days is reading and spending time with her family.

Affiliation: Samsung Toronto AI Centre


MC3 – Embodied Symbol Emergence

Lecturer: Malte Schilling and Michael Spranger
Fields: Robotics / Autonomous systems / Neurobiology / Artificial Intelligence / Developmental Artificial Intelligence / Symbol Emergence

Content

Symbols are the bedrock of human cognition. They play a role in planning, but are also crucial to understanding and modeling language. Since they are so important for human cognition, they are likely also vital for implementing similar abilities in software agents and robots.

The course will focus on symbols from two integrated perspectives. On the one hand, we look at the emergence of internal models through interaction with the environment and their role in sensorimotor behavior. This perspective is the embodied perspective. The first two lectures of the course concentrate on the emergence of internal models and grounded symbols in simple animals and agents and show how interaction with an environment requires internal models and how these are structured. Here we use robots to show how effective the discussed mechanisms are.

The second perspective is that symbols can also be socially constructed. In particular, we will focus on language and how it is grounded in embodiment but also in social interaction. This will be the topic of the third and fourth lecture. We first investigate the emergence of grounded names and categories (and their terms) in social interactions between robots. The fourth lecture will then focus on compositionality – that is, the interaction of embodied categories in larger phrases or sentences, and grammar.

Lecture 1: Embodied systems

Embodied systems: sophisticated behaviors do not necessarily require internal models. There are many examples of relatively simple animals (for example insects) that are able to perform complex behaviors. In the first lecture we focus on behavior-based robots that simply react to their environment without internal models. Crucially, these reactive behaviors can lead to complex and adaptive behavior even though the agent does not rely on internal representations. Instead, the system exploits its relation to the environment.

Lecture 2: Grounded internal models

Grounded internal models serve a function for the system first, but their flexibility allows them to be recruited for additional tasks. An example is the use of internal body models in perception. The second lecture introduces internal models, shows how they co-evolve in the service of a specific behavior, and how flexible models can be recruited for higher-level tasks such as perception or cognition. The session will consist of case studies from neuroscience, psychology and behavioral science, as well as modeling approaches to internal models in robotics. Sharing such internal models in a population of agents provides a step towards symbolic systems and communication.

Lecture 3: Symbol emergence in robot populations

The lecture will examine the emergence of grounded, shared lexical language in populations of robots. Lexical languages consist of single (or in some cases multi-word) expressions. We show how such systems emerge in referential games. In particular, we focus on how internal representations become shared across agents through communication. The lecture will cover (proper) naming and categorization of objects, for instance, using color. The lecture will introduce important concepts such as symbol grounding and discuss them from the viewpoint of language emergence.
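
The referential games of this lecture can be illustrated with a minimal naming-game simulation in the spirit of Steels' work: agents repeatedly play pairwise games about a single object, inventing, adopting, and pruning names until the population shares one. The population size, round count, and pruning rule below are illustrative assumptions, not the models used in the course:

```python
import random

def naming_game(n_agents=5, rounds=2000, seed=1):
    """Minimal naming game for a single object: a shared name emerges
    purely from repeated pairwise language games."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add("w%06d" % rng.randrange(10**6))  # invent a name
        name = rng.choice(sorted(inventories[speaker]))
        if name in inventories[hearer]:
            # success: both agents discard all competing names
            inventories[speaker] = {name}
            inventories[hearer] = {name}
        else:
            inventories[hearer].add(name)  # failure: hearer adopts the name
    return inventories

inventories = naming_game()
print(inventories)  # with enough games, every agent holds the same single name
```

No agent ever sees the whole population, yet a global convention emerges from local interactions; this is the sense in which the symbol is socially constructed.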

Lecture 4: Compositional Language

Human language is compositional – which means that the meaning of a phrase depends on its constituents but also on the grammatical relations between them. For instance, projective categories such as "front", "back", "left" and "right" can be used as adjectives or prepositionally. Different syntactic usage signals a different conceptualization. This lecture will focus on compositional representations of language meaning, how they are related to syntax, and how such systems might emerge in populations of agents.

Objectives

The course will give an introduction to computational models of symbol emergence through sensorimotor behavior and social construction. These models can be run in simulation or on real robots. Participants will be introduced to the field of Embodied Cognition – providing an overview on interdisciplinary results from neuroscience, psychology, computer science, linguistics and robotics.

Literature

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2016). Building Machines That Learn and Think Like People. Behav Brain Sci, 1–101. https://doi.org/10.1017/S0140525X16001837

Lecture 1-2

Dickinson, M. H., Farley, C. T., Full, R. J., Koehl, M. a. R., Kram, R., & Lehman, S. (2000). How Animals Move: An Integrative View. Science, 288(5463), 100–106. https://doi.org/10.1126/science.288.5463.100

Ijspeert, A. J. (2014). Biorobotics: Using robots to emulate and investigate agile locomotion. Science, 346(6206), 196–203. https://doi.org/10.1126/science.1254486

Gallese, V., & Lakoff, G. (2005). The Brain’s concepts: The role of the Sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3–4), 455–479. https://doi.org/10.1080/02643290442000310

Lecture 3-4

Steels, L. (2008). The symbol grounding problem has been solved. So what's next? In M. de Vega (ed.), Symbols and Embodiment: Debates on Meaning and Cognition. Oxford University Press.

Steels, L. (2015). The Talking Heads Experiment: Origins of Words and Meanings. Volume 1 of Computational Models of Language Evolution. Language Science Press, Berlin.

Spranger, M. (2016). The Evolution of Grounded Spatial Language. Language Science Press.

Lecturer

Dr. Malte Schilling

Malte Schilling is a Responsible Investigator at the Center of Excellence 'Cognitive Interaction Technology' in Bielefeld. His work concentrates on internal models, their grounding in behavior, and their application in higher-level cognitive functions such as planning ahead or communication. Before that, he was a postdoc at the ICSI in Berkeley, doing research on the connection of linguistic to sensorimotor representations. He received his PhD in Biology from Bielefeld University in 2010, working on decentralized, biologically inspired minimal cognitive systems. He studied Computer Science at Bielefeld University and completed his Diploma in 2003 with a thesis on knowledge-based systems for virtual environments.

Dr. Michael Spranger

Michael Spranger received a PhD in Computer Science from the Vrije Universiteit in Brussels (Belgium) in 2011. For his PhD he was a researcher at Sony CSL Paris (France). He then worked in the R&D department of Sony Corporation in Tokyo (Japan) for almost two years. He is currently a researcher at Sony Computer Science Laboratories Inc. (Tokyo, Japan). Michael is a roboticist by training with extensive experience in research on and construction of autonomous systems, including robot perception, world modeling and behavior control. After his undergraduate degree he fell in love with the study of language and has since worked on different language domains, from action language and posture verbs to time, tense, determination and spatial language. His work focuses on artificial language evolution, machine learning for NLP (and applications), developmental language learning, computational cognitive semantics and construction grammar.

Affiliation: Bielefeld University and Sony


PC3 – Juggling – experience your brain at work

Lecturer: Susan Wache & Julia Wache
Fields: Neurobiology

Content

In this course we will teach you how to juggle.

Juggling is a motor activity that requires many different abilities. Obviously, you need to learn the movement pattern and practice a lot to get the reward: being able to juggle! Learning such specific movement patterns relies on a highly complex electrical and chemical circuitry in the brain, which is becoming an increasingly important field of neuroscience. Juggling seems to encourage nerve fiber growth, and scientists therefore believe it not only promotes brain fitness in general but could also help with debilitating illnesses.

Nevertheless, learning to juggle requires attention, focus, concentration, and persistence. As every juggler would agree, the key to success is repetition. We will teach juggling mainly through practice. While training, you can feel constant progress regardless of your previous skill level.

In the last session you will also get an introduction to siteswap, a mathematical notation for juggling patterns that you can write down, calculate with, and, for example, feed into a juggling simulator.

  Session 1: Basic introduction to juggling and the neuroscience behind it
  Session 2: How to learn juggling most effectively
  Session 3: Common mistakes and how to avoid them
  Session 4: Siteswap – a mathematical description of juggling patterns
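
Siteswap, the topic of the last session, has a neat arithmetic core: a sequence of throw heights is juggleable exactly when all throws land on distinct beats, and the number of balls is the average throw height. A small Python sketch of that check (function names are illustrative):

```python
def is_valid_siteswap(pattern):
    """A sequence of throw heights is a valid (vanilla) siteswap iff the
    landing beats (height + position) mod length are all distinct."""
    n = len(pattern)
    landings = {(height + beat) % n for beat, height in enumerate(pattern)}
    return len(landings) == n

def num_balls(pattern):
    """For a valid pattern, the number of balls is the average throw height."""
    return sum(pattern) // len(pattern)

print(is_valid_siteswap([3, 3, 3]), num_balls([3, 3, 3]))  # True 3: the basic cascade
print(is_valid_siteswap([5, 3, 1]), num_balls([5, 3, 1]))  # True 3: a 3-ball trick
print(is_valid_siteswap([4, 3, 2]))                        # False: two balls would collide
```

This is the same test a juggling simulator applies before it will animate a pattern you feed it.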

Objectives

In this course you will learn to juggle with 3 balls, how to avoid common mistakes when practicing, and how to improve effectively even when practicing on your own. Apart from the basic 3-ball cascade, you will learn additional simple patterns and get an introduction to advanced tricks and techniques.

All sessions are mainly practical training of juggling.

Lecturer

Susan Wache studied Cognitive Science at the University of Osnabrück. She worked in the research group feelSpace, which investigates human senses, and in 2015 co-founded the startup feelSpace, which develops and sells naviBelts, tactile navigation devices especially for the visually impaired.

Julia Wache studied Cognitive Science in Vienna and Potsdam. She completed her PhD in Trento, working on emotion recognition from physiological signals and on mental effort in the context of using tactile belts for orientation. In parallel, she participated in the EIT Digital doctoral program to learn entrepreneurial skills. In 2016 she joined feelSpace GmbH.

The two sisters started juggling and performing together over 20 years ago and have given courses for different audiences on various occasions.

Affiliation: feelSpace GmbH

FC11 – Artificial curiosity for robot learning in interaction with its environment and teachers

Lecturer: Sao Mai Nguyen
Fields: Machine learning, robot learning, reinforcement learning, goal babbling, active imitation learning

Content

This course will provide an overview of research on artificial curiosity in machine learning and robotics. Also referred to as intrinsic motivation, this family of algorithms, inspired by theories of developmental psychology, allows artificial agents to learn more autonomously, especially in stochastic high-dimensional environments, for redundant tasks, and for multi-task, life-long, or curriculum learning. The course will cover the following topics:

  • Basis of reinforcement learning
  • Curiosity-driven exploration
  • Goal babbling
  • Intrinsic motivation for imitation learning
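
The flavor of curiosity-driven exploration can be sketched in a few lines: an agent that allocates practice to whichever task currently yields the most learning progress, i.e. the largest recent drop in its own prediction error. This is a deliberately simplified toy (two hand-coded tasks, a naive progress estimate), not any specific algorithm from the course:

```python
def learning_progress(errors, window=5):
    """Drop in average error between the two most recent windows:
    a naive estimate of the intrinsic reward."""
    if len(errors) < 2 * window:
        return float("inf")  # optimistic: first explore what we barely know
    recent = sum(errors[-window:]) / window
    older = sum(errors[-2 * window:-window]) / window
    return older - recent

def curious_agent(steps=100):
    # Task 0 still improves with practice; task 1 sits at its error floor.
    errors = {0: [], 1: []}
    practice = [0, 0]
    for _ in range(steps):
        # Choose the task with the highest estimated learning progress.
        task = max((0, 1), key=lambda t: learning_progress(errors[t]))
        practice[task] += 1
        err = 1.0 / (1 + practice[0]) if task == 0 else 0.5
        errors[task].append(err)
    return practice

print(curious_agent())  # the still-learnable task attracts most of the practice
```

The interesting property is that the agent needs no external reward: the change in its own competence is the reward, which is what steers it away from tasks that are already mastered (and, in richer variants, away from unlearnable noise).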

Objectives

The students will learn about the different uses of intrinsic motivation for motor control and see several illustrations of applications and implementations of intrinsically motivated exploration algorithms for motor control by embodied agents. They will also understand the importance of data sampling, exploration, and the selection of information sources for robot learning, and will gain practical experience with a simple robotic simulation setup.

Literature

  1. J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010. https://doi.org/10.1109/TAMD.2010.2056368
  2. G. Baldassarre. What are intrinsic motivations? a biological perspective. In Development and Learning (ICDL), 2011 IEEE International Conference on, volume 2, pages 1–8. IEEE, 2011. https://doi.org/10.1109/DEVLRN.2011.6037367
  3. J. Gottlieb and P.-Y. Oudeyer. Towards a neuroscience of active sampling and curiosity. Nature Reviews Neuroscience, 19(12):758–770, 2018. https://doi.org/10.1038/s41583-018-0078-0
  4. P.-Y. Oudeyer. The New Science of Curiosity, chapter Computational Theories of Curiosity-Driven Learning. NOVA, 02 2018. https://arxiv.org/abs/1802.10546

Lecturer

Nguyen Sao Mai

Nguyen Sao Mai specialises in robot learning, especially cognitive developmental learning. She is currently an associate professor at the U2IS Lab at Ensta Paris, France, after a few years at IMT Atlantique. She received her PhD in computer science in 2013, for her studies on combining curiosity-driven exploration and socially guided exploration for multi-task learning and curriculum learning. She holds a master's degree in computer science from Ecole Polytechnique and a master's degree in adaptive machine systems from Osaka University. She coordinated the KERAAL experiment, funded by the European Union through the ECHORD++ project, which proposed an intelligent tutoring humanoid robot for physical rehabilitation. She is currently an associate editor of IEEE TCDS and co-chair of the Task Force "Action and Perception" of the IEEE Technical Committee on Cognitive and Developmental Systems.

Affiliation: Ensta Paris
Website: http://nguyensmai.free.fr/