Lecturer: Markus Krause Fields: Human Computer Interaction, Artificial Intelligence (actually: Advanced Statistical Analysis and Pattern Recognition), Human Computation
Modern computational systems have amazing capabilities. They can detect a face or fingerprint in millions of samples, find a search term in a sea of billions of documents, and control the flow of trillions of dollars. Some of these abilities seem almost supernatural and even frightening. Yet our brains are still the architects of invention, and may remain so for aeons to come. Understanding and utilising the difference between machine and human intelligence is one of the new frontiers of computer science. With the advent of the next AI winter, integrating human intervention into almost-autonomous systems will gain crucial importance in the near future.
In this course we aim to lift a bit of the mystic shroud that surrounds artificial intelligence. We will uncover its abilities, unveil its shortcomings, and even conjure a deep neural network from (almost) thin air. You do not need to be an experienced coder or a mathematical genius: a basic understanding of Python and 8th-grade math skills are enough to follow the course and build your own “AI”. After this hopefully disillusionary exercise we take a refreshing dive into reality. We will investigate real intelligence and how our brain's talent for strategic problem solving can fuse with the sheer calculation power of machines. We will explore how these socio-technical systems will shape the future, and the risks and pitfalls of the Hominum-ex-Machina.
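The promise that basic Python and 8th-grade math suffice can be taken literally. As a taste of what conjuring a network from (almost) thin air might look like — a sketch for orientation, not course material, with all names and numbers illustrative — here is a 2-2-1 network trained by backpropagation to learn XOR, using nothing beyond multiplication, addition, and one squashing function:

```python
import math, random

random.seed(0)

# Tiny 2-2-1 network trained on XOR with plain backpropagation.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# Random starting weights: 2 hidden units, 1 output unit (last entry = bias).
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def epoch_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = epoch_loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Error signals: the derivative of sigmoid(z) is y * (1 - y).
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # Gradient-descent updates for output and hidden weights.
        for i in range(2):
            w_o[i] -= lr * d_y * h[i]
        w_o[2] -= lr * d_y
        for i in range(2):
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]

loss_after = epoch_loss()
print(loss_before, loss_after)
```

Nothing here goes beyond arithmetic and one exponential — which is the disillusionary point of the exercise.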
Understanding the limitations of machine-based decision capabilities, the abilities setting humans apart from computers, and how human and machine abilities can fuse to form large scale computational systems.
Interesting AI Papers:
Saxton et al.: Analysing Mathematical Reasoning Abilities of Neural Models. https://arxiv.org/pdf/1904.01557.pdf
Rumelhart et al.: Learning internal representations by error propagation
Krizhevsky et al.: ImageNet classification with deep convolutional neural networks
Hochreiter & Schmidhuber: Long short-term memory
Vaswani et al.: Attention is all you need
Dr. Markus Krause is a computer scientist, professional game designer, and serial entrepreneur. He co-founded Mooqita, a Berkeley-based non-profit supporting students in finding the job they love. Mooqita uses a novel approach combining human and machine intelligence. Dr. Krause also co-founded Brainworks.ai. Brainworks develops a new neural cortex to use smartphones as diagnostic tools for online health care applications. He is also the primary investigator for the Mooqita project at the International Computer Science Institute at UC Berkeley and part of the advisory committee to the DAAD IFI. Dr. Krause earned his doctoral degree in computer science from the University of Bremen, Germany, and Carnegie Mellon University in Pittsburgh, USA.
Lecturer: Sao Mai Nguyen Fields: Machine learning, robot learning, reinforcement learning, goal babbling, active imitation learning
This course will provide an overview of research on artificial curiosity in machine learning and robotics. Also referred to as intrinsic motivation, this stream of algorithms, inspired by theories of developmental psychology, allows artificial agents to learn more autonomously, especially in stochastic high-dimensional environments, for redundant tasks, and for multi-task, life-long or curriculum learning. The course will cover the following topics:
motivation for imitation learning
The students will learn about the different uses of intrinsic motivation for motor control, and see several illustrations of the application and implementation of intrinsically motivated exploration algorithms for motor control by embodied agents. They will also understand the importance of data sampling, exploration, and source-of-information selection for robot learning. Finally, they will gain practical experience with a simple robotic simulation setup.
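To make the idea of intrinsically motivated exploration concrete, here is a deliberately tiny sketch — not drawn from the course material, with every detail invented for illustration: an agent on a chain of states that always moves toward the neighbouring state with the highest count-based novelty bonus, one common stand-in for artificial curiosity.

```python
import math

# Count-based novelty exploration on a 1-D chain of states. The intrinsic
# reward for a state shrinks as it is revisited: bonus = 1 / sqrt(1 + visits).
N_STATES = 10
visits = [0] * N_STATES
pos = 0
visits[pos] = 1
trajectory = [pos]

def bonus(s):
    return 1.0 / math.sqrt(1 + visits[s])

for _ in range(20):
    neighbours = [s for s in (pos - 1, pos + 1) if 0 <= s < N_STATES]
    # Greedy with respect to the intrinsic bonus; ties go to the first neighbour.
    pos = max(neighbours, key=bonus)
    visits[pos] += 1
    trajectory.append(pos)

print(sorted(set(trajectory)))
```

Because novelty decays with each visit, the greedy agent sweeps the whole chain without any external reward — the essence of the autonomy argument above, stripped down to a dozen lines.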
G. Baldassarre. What are intrinsic motivations? a biological perspective. In Development and Learning (ICDL), 2011 IEEE International Conference on, volume 2, pages 1–8. IEEE, 2011. https://doi.org/10.1109/DEVLRN.2011.6037367
Nguyen Sao Mai specialises in robot learning, especially cognitive developmental learning. She is currently an associate professor at the U2IS Lab at ENSTA Paris, France, after a few years at IMT Atlantique. She received a PhD in computer science in 2013 for her studies on how to combine curiosity-driven exploration and socially guided exploration for multi-task learning and curriculum learning. She holds a master's degree in computer science from Ecole Polytechnique and a master's degree in adaptive machine systems from Osaka University. She coordinated the KERAAL experiment, funded by the European Union through the project ECHORD++, which proposed an intelligent tutoring humanoid robot for physical rehabilitation. She is currently an associate editor of IEEE TCDS and co-chair of the task force “Action and Perception” of the IEEE Technical Committee on Cognitive and Developmental Systems.
Insects are often thought to show only fixed ‘robotic’ behaviours but in fact exhibit substantial flexibility, from maggots exploring their world to find which odours signal risk or reward, to ants and bees discovering and efficiently navigating between food sources scattered over a large environment. Yet insects also have small brains, providing the promise that we may be able to understand and model these aspects of intelligent behaviour down to the single neuron level. This course will describe the current state of research in insect exploration, emphasising an explicitly mechanistic view of explanation: to understand a system, we should (literally) try to build it. The final lecture will reflect on this methodology of modelling and what we can learn by implementing biological explanations as robots.
Session 1: Exploration in maggots, and the role of the body in behaviour.
Session 2: The neural basis of risk and reward in insect learning.
Session 3: Expert insect navigators – how do they discover and remember key locations in their world?
Session 4: Satisfying our own curiosity: using robots as models
Understand the importance of linking brain, body and environment to explain behaviour.
Gain knowledge of current models of the neural mechanisms of exploration and learning in insects, and the key open questions.
Explore the role of (robot) models in scientific explanation
Barbara Webb completed a BSc in Psychology at the University of Sydney, then a PhD in Artificial Intelligence at the University of Edinburgh. Her PhD research on building a robot model of cricket sound localization was featured in Scientific American. This established her as a pioneer in the field of biorobotics – using embodied models to evaluate biological hypotheses of behavioural control. She has published influential review articles on this methodology in Behavioural and Brain Sciences, Nature, Trends in Neurosciences and Current Biology. In the last ten years the focus of her research has moved from basic sensorimotor control towards more complex insect behavioural capabilities, in the areas of associative learning and navigation. She held lectureships at the University of Nottingham and the University of Stirling before returning to a faculty position in the School of Informatics at Edinburgh in 2003. She was appointed to a personal chair as Professor of Biorobotics in 2010.
Curiosity has been described as an important driver for learning from infancy onwards. But what is curiosity? How has it been conceptualized, and how has its role in infant learning been identified and characterized? This course will describe the main theories of what curiosity is and how it affects behaviour, and how recent developmental research has studied curiosity in infants and children. Here I will address children’s active role in their learning and in their language development, as well as their preference for specific types of information. I will also touch on the role of play in infant and child development. Computational modelling can help us to develop theories of the mechanisms underlying curiosity-based exploratory behaviour, and I will discuss some of these models.
This course does not require any prior knowledge and all topics will be introduced gently.
At the end of this course you will be able to
describe the major theories of curiosity
explain how scientists conduct studies with infants
describe the research done with infants and children on curiosity-based learning
explain principles of computational modelling and some relevant models of curiosity-based learning
Bazhydai, M., Twomey, K. E., & Westermann, G. (in press). Exploration and curiosity. In J. B. Benson (Ed.), Encyclopedia of Infant and Early Childhood Development, 2nd ed.
Gottlieb, J., Oudeyer, P.-Y., Lopes, M., & Baranes, A. (2013). Information-seeking, curiosity, and attention: computational and neural mechanisms. Trends in Cognitive Sciences, 17(11), 585–593. http://doi.org/10.1016/j.tics.2013.09.001
Gert Westermann studied Computer Science in Braunschweig and Austin, TX, and received a PhD in Cognitive Science from the University of Edinburgh. After postdocs at the Sony Computer Science Lab in Paris and at Birkbeck College, London, he worked at Oxford Brookes University for several years and in 2011 joined Lancaster University as Professor of Psychology.
Gert is Director of the Leverhulme Trust Doctoral Scholarship Centre on Interdisciplinary Research in Infant Development which trains 22 PhD students on infancy research, and co-director of the ESRC International Centre for Language and Communicative Development which is a large-scale collaboration between the Universities of Manchester, Liverpool and Lancaster.
Gert’s research takes an interdisciplinary approach, combining looking time, pupil dilation, ERP, fNIRS, and behavioural studies with computational modelling to investigate early cognitive, social and language development in infancy, with a recent focus on curiosity-based learning.
Until quite recently, AI was a scientific discipline defined more by its portrayal in science fiction than by its actual technical achievements. Real AI systems are now catching up to their fictional counterparts, and are as likely to be seen in news headlines as on the big screen. Yet as AI outperforms people on tasks that were once considered yardsticks of human intelligence, one area of human experience still remains unchallenged by technology: our sense of humour.
This is not for want of trying, as this course will show. The true nature of humour has intrigued scholars for millennia, but AI researchers can now go one step further than philosophers, linguists and psychologists once could: by building computer systems with a sense of humour, capable of appreciating the jokes of human users or even of generating their own, AI researchers can turn academic theories into practical realities that amuse, explain, provoke and delight.
The course will comprise four lectures, which will explore the following topics.
Newspaper personal columns are routinely filled with people seeking partners with a good sense of humour (GSOH), with many rating this as highly as physical fitness or physical appearance. Yet what does it mean to have a sense of humour? Conversely, what does it mean to have NO sense of humour, and how might we imbue a humorless machine with a capacity for wit and a flair for the absurd? We begin by unpacking these questions, to suggest some initial answers and models.
So, for example, what would it mean for a computer to have a numeric humour setting, as in the case of the robot TARS in the film Interstellar? Can a machine’s sense of humour be reduced to a single number or parameter setting? Is humour a modular ability? Can it be gifted to computers as a bolt-on unit like Commander Data’s “humour chip” in Star Trek, or is it an emergent phenomenon that arises from complex interactions among all our other faculties? Might humour emerge naturally within complex AI systems without explicitly being programmed to do so, as in the mischievous supercomputer Mike in Robert Heinlein’s The Moon is a Harsh Mistress or in the sarcastic droid K2SO in Rogue One: A Star Wars Story?
This course will survey and critique the competing humour theories that scholars have championed through the ages, enlarging on recurring themes (incongruity, relief, superiority) while considering the amenability of each to computational modelling. What is it that these theories are really explaining, and which comes closest to capturing the elusive essence of humour?
The centrality of incongruity in modern theories demands that this concept be given a special focus. So we will unpack its many meanings to show how our understanding of incongruity can be as multifaceted as the idea of humour itself. Popular myths about the brittleness of machines in the face of the incongruous and the unexpected will be unpicked and debunked as we explore how machines might deliberately seek out and invent incongruities of their own.
But computational humour is still in its infancy, and it is no coincidence that the mode of humour for which machines show the greatest aptitude is the one humans embrace at a very early age: puns. Puns vary in wit and sophistication, but the simplest require only an ear for sound similarity and a disregard for the consequences of replacing a word with its phonetic doppelganger. The challenge for AI systems is to progress, as children do, from these simple beginnings to modes of ever greater conceptual sophistication.
To do so, is it possible to capture the essence of jokes in a mathematical formula, much as physicists have done for electromagnetism and gravity? Do jokes have quantifiable features that we and our machines can intuitively appreciate? Can we build statistical models to characterize the signature qualities of a humorous artifact, so that machines can learn to tell funny from unfunny for themselves? And what do these measurable qualities say about humour and about us?
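As a toy illustration of this statistical framing — emphatically not a serious humour model, with a micro-corpus invented purely for this sketch — one could train a bag-of-words Naive Bayes classifier to separate "funny" from "unfunny" text:

```python
import math
from collections import Counter

# A toy bag-of-words Naive Bayes "funny vs. unfunny" classifier.
# The micro-corpus is invented purely for illustration.
corpus = [
    ("why did the chicken cross the road", "funny"),
    ("i used to be a banker but i lost interest", "funny"),
    ("time flies like an arrow fruit flies like a banana", "funny"),
    ("the committee will meet on tuesday at noon", "unfunny"),
    ("please submit the quarterly report by friday", "unfunny"),
    ("the printer on the third floor is out of toner", "unfunny"),
]

counts = {"funny": Counter(), "unfunny": Counter()}
docs = Counter()
for text, label in corpus:
    docs[label] += 1
    counts[label].update(text.split())

vocab = set(w for c in counts.values() for w in c)

def log_posterior(text, label):
    lp = math.log(docs[label] / len(corpus))
    total = sum(counts[label].values())
    for w in text.split():
        # Laplace smoothing so unseen words don't zero out the product.
        lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return lp

def classify(text):
    return max(("funny", "unfunny"), key=lambda l: log_posterior(text, l))

print(classify("why did the fruit cross the road"))
print(classify("submit the report by noon"))
```

Such a classifier picks up only surface word statistics — which is precisely why the measurable qualities it learns say as much about our corpora as about humour itself.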
Finally, however we slice it, conflict sits at the heart of humour, whether it is a conflict in meaning, attitude, expectation or perspective. Double acts personalize this conflict by recognizing the different roles a comic can play. Computers can likewise play multiple roles in the creation of humour, from rigid “straight man” to absurdist provocateur, in double acts with humans and with other machines. So we will explore the ways in which smart machines can contribute to the social emergence of humour, either as embodied robots or as disembodied software.
This course will use the ideas and achievements of AI to explore what it means to have a sense of humour, and moreover, to understand what it is to not have one. It will challenge the archetype of the humorless machine in popular culture, to celebrate what science fiction gets right and to learn from what it gets wrong. It will make a case for the necessity of a computational understanding of humour, to better understand ourselves and to better construct machines that are more flexible, more understanding, and more willing to laugh at their own limitations.
Computational humour studies is an established field that has produced a range of academic books, from Victor Raskin’s Semantic Mechanisms of Humor (1985, one of the first) to the more recent Primer of Humour Research (with chapters from computational humorists). Non-computational humour researchers, such as Elliott Oring, have also written accessible books on humour, such as Engaging Humor, while the computer scientist Graeme Ritchie has written a pair of well-received academic books on humour. Comedians and comedy professionals have also written some noteworthy books on humour, with individual chapters that focus on computational humour or that offer algorithmic insights into the author’s own comedy production strategies. Toplyn’s Comedy Writing for Late-Night TV offers a beginner’s guide to humour production that is frequently schematic in style. Jimmy Carr and Lucy Greeves’ The Naked Jape considers humour more broadly, but also offers a chapter on computational models and the people who build them. I will quote from each of these sources as needed.
Tony Veale is an associate professor in the School of Computer Science at UCD (University College Dublin), Ireland. He has worked in AI research for 25 years, in academia and in industry, with a special emphasis on humour and linguistic creativity. He is the author of the 2012 monograph Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity (from Bloomsbury), co-author of the 2016 textbook, Metaphor: A Computational Perspective (Morgan Claypool), and co-author of 2018’s Twitterbots: Making Machines That Make Meaning (MIT Press). He led the European Commission’s coordination action on Computational Creativity (named PROSECCO), and collaborated on international research projects with an emphasis on computational humour and imagination, such as the EC’s What-If Machine (WHIM) project. He runs a website dedicated to explaining AI with humour at RobotComix.com. He is active in the field of Computational Creativity, and is currently the elected chair of the international Association for Computational Creativity (ACC).
In this focused course, we will cover some of the many ways in which stress and arousal can modulate behaviors related to curiosity, risk, and reward through the interactions between physiological, affective, and cognitive processes. We will use the ACT-R/Phi architecture to think about this from a perspective of interacting mind and body processes. The Project Malmo (Minecraft) environment will also be used to show how we might implement some of the theoretical accounts as simulated agents in a virtual environment.
Session 1: Theoretical Background
Session 2: Short Recap, Theoretical Background, and Cognitive Architectures (general)
Session 3: Short Recap, Cognitive Architectures (general), and ACT-R/Phi Background
Session 4: Short Recap, ACT-R/Phi, and Using Project Malmo with Cognitive Architectures to study interactions between arousal, curiosity, risk, and reward
Learn some theoretical background on connections between memory systems, stress, arousal, and curiosity
Learn some background on cognitive architectures
Learn about ACT-R/Phi
Learn about the Project Malmo Environment (Minecraft)
Learn about how one might create cognitive agents to run in Project Malmo
(Some sessions are meant to be hands-on, but we’ll work with what we can if some participants don’t have a computer!)
Alcaro, A., & Panksepp, J. (2011). The SEEKING mind: Primal neuro-affective substrates for appetitive incentive states and their pathological dynamics in addictions and depression. Neuroscience & Biobehavioral Reviews.
You could also read Panksepp & Biven (The Archaeology of Mind: Neuroevolutionary Origins of Human Emotions) as a supplement, but it is a book, so I would say it is a useful read if you find some of the above articles interesting.
Christopher L. Dancy received a B.S. in Computer Science in 2010 and a Ph.D. in Information Sciences and Technology, with a focus on artificial intelligence and cognitive science, in 2014, both from The Pennsylvania State University (University Park). He is an assistant professor of computer science at Bucknell University. His research involves the computational modeling of physiological, affective, and cognitive systems in humans. He studies how these systems interact and what these interactions mean for human-like intelligent behavior and for interaction between humans and artificially intelligent systems. His work has been funded by the National Science Foundation, US Office of Naval Research, US Army Research Lab, and the Social Science Research Council. Chris Dancy has previously chaired the Behavior Representation in Modeling and Simulation Society and is currently a member of ACM, AAAI, the Cognitive Science Society, the National Society of Black Engineers, IEEE SMC, and the IEEE Computer Society.
Lecturer: Lily FitzGibbon Fields: Cognitive, developmental and educational psychology; neuroscience
This course will provide an overview of research from a number of fields of psychology and neuroscience pertinent to the understanding of the motivational power of curiosity. In particular, we will discuss empirical findings from across the lifespan in the context of a reward learning framework of knowledge acquisition. We will consider where the subjective experiences of curiosity and interest fit into the model and how they might be differentiated. Finally, we will discuss and develop challenges, open questions, and testable predictions from the model, setting out a programme of work for the field. The aim of this final session is to generate and develop research ideas and foster new collaborations between course participants.
Session 1: Introduction to curiosity and interest
Session 2: A reward learning model of knowledge acquisition
Session 3: A lifespan perspective on information as reward
Session 4: Challenges, open questions and testable predictions
In this course, participants will gain an understanding of a new model of information acquisition and its power to integrate a previously divided literature and generate new predictions about the process of knowledge acquisition. Participants will also learn about methods from a large array of disciplines that can be applied to the empirical study of information as reward.
Lily FitzGibbon works as a postdoctoral researcher in the Motivation Science Lab at the University of Reading. She has a PhD in Psychology from the University of Sheffield and has worked as a postdoctoral researcher at the University of Birmingham and the University of Southern California. Her research focuses on the cognitive processes involved in decision making, including curiosity, risk processing, and emotional evaluation of actual and hypothetical outcomes.
Decision-making in human and animal societies often relies on a confidence heuristic – trusting the decisions made by confident individuals. This has the benefit of quick decision-making without having to explore risky options yourself. However, confidence is a good guide to decisions only if it reflects accuracy. When the trusted individuals are overconfident, the result is risky and often catastrophic decisions. Despite the possibility of these negative outcomes, overconfidence persists and is widespread. What, then, are the advantages of overconfidence? Adopting an evolutionary perspective reveals the individual and social rewards of overconfidence. It also helps us understand how we can make the most of confidence while avoiding the obvious costs of overconfidence.
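A toy expected-payoff calculation, in the spirit of evolutionary claim-the-resource models of overconfidence (all numbers and names invented for illustration), shows how an inflated self-assessment can pay when the contested resource is worth more than the cost of a failed claim:

```python
# An agent of ability a (uniform on [0, 1]) may claim a resource worth r;
# it wins with probability a, and a failed claim costs c. The true expected
# payoff of claiming is therefore r*a - c*(1-a).
r, c = 2.0, 1.0   # resource worth more than the cost of failure

def expected_payoff(threshold, steps=100_000):
    # The agent claims whenever its (possibly biased) self-assessment says
    # its ability exceeds `threshold`; average payoff over all abilities.
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) / steps   # midpoint grid over [0, 1]
        if a > threshold:
            total += r * a - c * (1 - a)
    return total / steps

accurate = expected_payoff(0.5)        # claims only when genuinely likely to win
overconfident = expected_payoff(0.35)  # inflated self-assessment: claims earlier
print(accurate, overconfident)
```

The seemingly "rational" threshold of 0.5 is in fact too cautious here: with r > c the payoff-maximising threshold is c/(r + c) ≈ 0.33, so a moderately overconfident bias earns more on average — a minimal version of the argument the course develops.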
Vivek Nityananda has a PhD in Animal Behaviour from the Indian Institute of Science, Bangalore. He has worked at the University of Minnesota, St Paul and Queen Mary University of London. He is currently a BBSRC David Phillips Fellow at Newcastle University and has previously been a Marie Curie Research Fellow, a Human Frontiers Science Program Fellow and a fellow of the Wissenschaftskolleg zu Berlin. He has researched communication and visual cognition in insects, overconfidence in humans and hearing in frogs. He is also a published author and illustrator and has worked to engage the public with research using comics, animation and theatre. He was awarded a public engagement fellowship from the Great North Museum, Newcastle and a Wellcome Trust Small Arts Award to support these efforts. He currently researches the ecology and evolution of sensory and cognitive behaviour and the evolution of overconfidence.
Lecturer: Marieke Van Vugt Fields: Neuroscience, cognitive science, psychology, contemplative science
In the first session, we will introduce the methods of mindfulness, and discuss how mindfulness differs from mind-wandering. Contrary to popular belief, mindfulness is not the opposite of mind-wandering, but rather the cultivation of mindfulness involves becoming better friends with your mind so that you learn to become less stuck in thought processes. We will also review conceptual models of mindfulness and mind-wandering together with some research underpinnings. In addition, we will introduce the first and third-person perspective on studying the mind and basics of microphenomenology. We will also start a small experiment with our own mindfulness practice, which we will analyse in the last session of the course.
In the second session, we will continue our practice of mindfulness, and review research findings on the effects of mindfulness on cognitive function and brain activity.
In the third session, we will continue our practice of mindfulness. We will place mindfulness in the context of different meditation practices, discussing similarities and differences. We will also discuss in general how we can study mindfulness scientifically and how to do so rigorously.
In the fourth session, apart from practicing mindfulness, we will discuss the findings of our little experiments. There will also be ample space for questions and additional topics to discuss.
Being familiar with mindfulness practices
Being familiar with research on mindfulness
Being familiar with the scientific study of mind-wandering
Being able to combine first- and third-person perspectives in research on mindfulness
Dr. Van Vugt is an assistant professor at the University of Groningen in the Netherlands, working in the department of artificial intelligence. She obtained her PhD in model-based neuroscience from the University of Pennsylvania, then worked as a postdoc at Princeton University before moving to the University of Groningen. In her lab, she focuses on understanding the cognitive and neural mechanisms underlying decision making, mind-wandering and meditation by means of EEG, behavioural studies and computational modeling. In some slightly outside-the-box research, she also records the brain waves of Tibetan monks and dancers.
Civilization designer Sid Meier defined video games as a series of interesting choices. Game-design aims to balance risk and reward for each choice made in a game, with the goal of creating compelling experiences that draw people in and keep them spellbound. In this course you will create your own game and explore how modifying formal game elements by applying psychological theory affects play experience.
Each session is a combination of a lecture (45min), applied game-design (30 min), and discussion (15 min). Knowledge about digital games is not required!
In session one, we will learn the basics of game-design, prototype a game, and discuss your experiences with the game.
In session two, we will discuss how risk and reward are represented in games and how risk/reward trade-offs require players to take action and make decisions. You will modify your game to actively explore the effects of risk and reward design on play experience.
In session three, we will dive into psychological theories of decision making, biases, and how games leverage our expectations to manipulate play experience. In the game-design session we will change the paradigm of play to explore a different approach to manipulating the outcome of decision moments and the resulting experience.
In session four, we will take a close look at digital games and how they approach risk and reward, and apply our knowledge about game-design, risk and reward, and psychological theories. We will break down the design decisions that create tension and recreate the different experiences using play cards.
Conjure your most playful analytical self to face new challenges and learn about how risk and reward are fundamental to game design.
1. Understand and apply the basics of game-design
2. Gain and leverage psychological knowledge on risk/reward mechanisms to modify play experiences
3. Learn about biases and their application in contemporary game-design; apply your knowledge to consciously manipulate experience
4. Synthesize what you learned by deconstructing digital games and reproducing their risk/reward mechanisms using play cards
Fullerton, T. (2018). Game design workshop: a playcentric approach to creating innovative games. AK Peters/CRC Press.
Weber, Elke U., and Eric J. Johnson. “Decisions under uncertainty: Psychological, economic, and neuroeconomic explanations of risk preference.” In Neuroeconomics, pp. 127-144. Academic Press, 2009.
Gutwin, Carl, Christianne Rooke, Andy Cockburn, Regan L. Mandryk, and Benjamin Lafreniere. “Peak-end effects on player experience in casual games.” In Proceedings of the 2016 CHI conference on human factors in computing systems, pp. 5608-5619. ACM, 2016.
Wuertz, Jason, Max V. Birk, and Scott Bateman. “Healthy Lies: The Effects of Misrepresenting Player Health Data on Experience, Behavior, and Performance.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, p. 319. ACM, 2019.
Max Birk is an Assistant Professor in the Department of Industrial Design at Eindhoven University of Technology. With an interdisciplinary background, Max draws from psychology, interaction design, data science, and game design, to investigate the effects of game-based design strategies on mental processes and design-induced behaviour change. His research contributes to games user research, digital health, and motivational interface design. He is interested in projects contributing to a healthy society, improving entertainment experiences, and developing tools and methods for researching interactive experiences.
Max’s research has been published in top international HCI venues, and he has contributed to research on player experience, individual differences in play, task adherence, crowdsourcing, and the intersection between video games and mental health. He has organized well-received workshops across the globe and led research projects spanning multiple continents. Max has collaborated with game designers in North America, Europe, and China, and has experience working with independent developers like AlienTrap and global tech companies like Tencent.