Lecturer: Stefan Kopp
Fields: Artificial Intelligence, Cognitive Science, Human-Agent/Robot Interaction
Content
With AI systems becoming more capable and widely deployed in our everyday life and work environments, the question of how human users can and should interact with intelligent systems becomes pivotal. The AAAI 20-year research roadmap (2019) pointed out that “meaningful interaction” between humans and AI should be a top research priority, encompassing questions of trust, transparency and responsibility, as well as online interaction, multiple interaction channels, and collaboration. Going beyond “promptification”, humans and AI systems should be able to engage in forms of interaction that mix instructions, question answering, explanations, argumentation, and negotiation — all embedded in a flexible dialog, often multimodal and potentially even interwoven with physical action. In this course we will discuss how such advanced forms of interaction between AI systems and humans can be realized in an efficient and robust way. What they all have in common is that they hinge on the observation that systems with a certain degree of competence and autonomy are readily perceived and treated as having agency. Correspondingly, the interaction is framed as human-agent interaction (HAI), i.e. interaction between two or more agentive entities, and applies to systems such as conversational personal assistants, socially assistive robots, conversational recommender systems, and autonomous vehicles.
The course will touch upon three questions: (1) What abilities do AI-based agents need in order to engage in robust and efficient HAI? (2) How can those abilities be modeled? (3) What are the possible effects on users? In discussing these questions we will focus on socio-cognitive core abilities for recognizing, reasoning about, and engaging in interactions with other social agents — what we call “Artificial Social Intelligence”. We start by learning about the basics of human-agent interaction and the emerging state of the art in modeling Artificial Social Intelligence in AI-based agents. We will then look at how robust and efficient HAI can be achieved through multimodal communication, in which participants continuously adjust their behavior based on their interlocutor’s language, gestures, and facial expressions. We will learn about modern approaches to creating AI agents that can understand and participate in these dynamic, multimodal dialogue interactions. Finally, we will discuss computational approaches to realizing “theory of mind” abilities such as recognizing or inferring another agent’s intentions, beliefs, emotions, or other mental states. Such abilities are indispensable for robust and efficient cooperation and have become a focus of much recent research on AI-based agents. For each topic, we will draw connections between insights from Cognitive Science, Psychology, and Linguistics and the corresponding modeling approaches in AI, by means of cognitive modeling, model-based machine learning, deep learning, or, more recently, LLMs.
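To make the idea of inferring another agent’s goals more concrete, the following minimal Python sketch illustrates Bayesian goal inference from observed actions under an assumed softmax-rational action model. The scenario, names, and parameters are invented for illustration and are not taken from the course materials or the cited papers (cf. Pöppel & Kopp, 2018, for satisficing Bayesian theory-of-mind models).

```python
import numpy as np

# Illustrative sketch: an observer watches an agent move along a 1-D corridor
# and updates a posterior over which of two candidate goals it is pursuing.
# Goal positions, the rationality parameter, and the softmax action model
# are assumptions made for this example only.

GOALS = {"left_door": 0, "right_door": 10}   # candidate goal positions
ACTIONS = (-1, +1)                           # step left / step right
BETA = 2.0                                   # assumed rationality of the observed agent

def action_likelihood(action, position, goal_pos, beta=BETA):
    """P(action | position, goal): softmax over how much each action reduces distance to the goal."""
    utilities = {a: -abs((position + a) - goal_pos) for a in ACTIONS}
    exp_u = {a: np.exp(beta * u) for a, u in utilities.items()}
    return exp_u[action] / sum(exp_u.values())

def infer_goal(observations, prior=None):
    """Posterior over goals after a sequence of (position, action) observations (Bayes' rule)."""
    posterior = dict(prior or {g: 1.0 / len(GOALS) for g in GOALS})
    for position, action in observations:
        for goal, goal_pos in GOALS.items():
            posterior[goal] *= action_likelihood(action, position, goal_pos)
        norm = sum(posterior.values())
        posterior = {g: p / norm for g, p in posterior.items()}
    return posterior

if __name__ == "__main__":
    # The agent starts at position 5 and repeatedly steps right;
    # posterior mass shifts toward "right_door" with every observation.
    print(infer_goal([(5, +1), (6, +1), (7, +1)]))
```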
Literature
- Lugrin, B., Pelachaud, C., & Traum, D. (2021). The Handbook on Socially Interactive Agents, Volume 2, pp. 77–111. ACM Press.
- Buschmeier, H., & Kopp, S. (2018). Communicative listener feedback in human–agent interaction: Artificial speakers need to be attentive and adaptive. Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018).
- Pöppel, J., & Kopp, S. (2018). Satisficing Models of Bayesian Theory of Mind for Explaining Behavior of Differently Uncertain Agents. Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018).
- Kopp, S., & Krämer, N. (2021). Revisiting Human-Agent Communication: The Importance of Joint Co-construction and Understanding Mental States. Frontiers in Psychology, 12, 1–15. https://doi.org/10.3389/fpsyg.2021.580955
- Gurney, N., Marsella, S., Ustun, V., & Pynadath, D. V. (2022). Operationalizing Theories of Theory of Mind: A Survey. In N. Gurney & G. Sukthankar (Eds.), Computational Theory of Mind for Human-Machine Teams (AAAI-FSS 2021). Lecture Notes in Computer Science, vol. 13775. Springer, Cham. https://doi.org/10.1007/978-3-031-21671-8_1
Lecturer
Stefan Kopp is professor of Computer Science and head of the “Social Cognitive Systems” working group at the Faculty of Technology at Bielefeld University. He obtained his PhD in Artificial Intelligence for research on intelligent multimodal agents. After a postdoc stay in the US and a research fellowship at the Center for Interdisciplinary Research (ZiF) in Bielefeld, he served as deputy coordinator of SFB 673 “Alignment in Communication”, principal investigator in the Cluster of Excellence “Cognitive Interaction Technology” (CITEC), and chairman of the German Cognitive Science Society (GK). Currently he is co-coordinator of the research center CITEC and a member of various other research centers and networks (TRR 318, CoAI, it’s OWL, CoR-Lab). His research interests center on the cognitive and interactive mechanisms of social interaction, communication, and cooperation, and on how such skills can be transferred to intelligent technical systems to enable new levels of human-technology cooperation.
Affiliation: Bielefeld University
Homepage: https://scs.techfak.uni-bielefeld.de