PrfC3 – Curiosity, Risk, and Reward in the Academic Job Search

Lecturer: Emily King
Fields: Soft skills

Content

So you’ve decided that you want a career or at least a job in academia; what’s next?  Imposter Syndrome is something that makes applying to lots of interesting jobs seem like a risk; when should you take the time to apply?  How can you leverage curiosity to broaden your job search? This single session course will cover some of the basics of the academic job search: how to decide whether to apply for a particular job and then how to make your application pop.  As the cover letter is the first thing that most hiring committees see, we will focus on how to make yours strong.  CVs, research statements, teaching portfolios, and interview questions will also be touched on.

This class will be discussion-based.  Please come with your questions! Also, if you want to submit a cover letter to be (constructively!) critiqued by the participants, please send it to me over email.

Objectives

The participants should leave with a better idea of how to choose where to apply and then how to successfully apply.  (Reward!)

Literature

N/A

Lecturer

Emily King is a professor of mathematics at Colorado State University, reigning IK Powerpoint Karaoke champion, an avid distance runner, and a lover of slow food / craft beer / third wave coffee.  Her research interests include algebraic and applied harmonic analysis, signal and image processing, data analysis, and frame theory.  In layman’s terms, she looks for the best building blocks to represent data, images, and even theoretical mathematical objects to better understand them.  She also has a tattoo symbolizing most of her favorite classes of mathematical objects.  If you are curious, you should ask her about it over a beer.

Affiliation: Colorado State University
Website: https://www.math.colostate.edu/~king/

MC5 – Low Complexity Modeling in Data Analysis and Image Processing

Lecturer: Emily King
Fields: Mathematical methods, data analysis, machine learning, image processing, harmonic analysis

Content

Are you curious about how to extract important information from a data set?  Very likely, you will be rewarded if you use some sort of low complexity model in your analysis and processing.  A low complexity model is a representation of data which is in some sense much simpler than what the original format of the data would suggest.  For example, every time you take a picture with a phone, about 80% of the data is discarded when the image is saved as a JPEG file.  The JPEG compression algorithm works because discrete cosine functions yield a low complexity model for natural images that tricks human perception.  Similarly, linear bottlenecks, pooling, pruning, and dropout are all ways of enforcing a low complexity model on neural networks to prevent overfitting.  Some benefits of low complexity models include:

  • Approximating data via a low complexity model often highlights overall structure of the data set or key features.
  •  Appropriately reducing the complexity of data as a pre-processing step can speed up algorithms without drastically affecting the outcome.
  • Reducing the complexity of a system during a training task can prevent overfitting.
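To make the keep-the-big-coefficients idea behind JPEG concrete, here is a toy one-dimensional sketch (my own illustration, using NumPy's FFT as a stand-in for the discrete cosine transform that JPEG actually uses): a signal is transformed, only the 20% largest-magnitude coefficients are kept, and the reconstruction from this sparse representation stays close to the original.

```python
import numpy as np

# Toy "natural" signal: a few smooth oscillations plus a little noise.
rng = np.random.default_rng(0)
n = 256
t = np.linspace(0, 1, n)
signal = (np.sin(2 * np.pi * 3 * t)
          + 0.5 * np.cos(2 * np.pi * 7 * t)
          + 0.05 * rng.standard_normal(n))

# Transform and keep only the 20% largest-magnitude coefficients,
# mirroring JPEG's idea of discarding most transform coefficients.
coeffs = np.fft.rfft(signal)
k = int(0.2 * coeffs.size)
keep = np.argsort(np.abs(coeffs))[-k:]
compressed = np.zeros_like(coeffs)
compressed[keep] = coeffs[keep]

# Reconstruct from the sparse coefficient vector.
approx = np.fft.irfft(compressed, n=n)
rel_error = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
print(f"kept {k}/{coeffs.size} coefficients, relative error {rel_error:.3f}")
```

Because most of the signal's energy lives in a handful of low frequencies, throwing away 80% of the coefficients changes the reconstruction only slightly.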

The course will begin with an introduction to applied harmonic analysis, touching on pertinent topics from linear algebra, Fourier analysis, time-frequency analysis, and wavelet/shearlet analysis.  Then an overview of low complexity models will be given, followed by specific discussions of

  • Linear dimensionality reduction (principal component analysis, Johnson-Lindenstrauss embeddings)
  • Sparsity and low rank assumptions (LASSO, l^p norms, k-means clustering, dictionary learning)
  • Nonlinear dimensionality reduction / manifold learning (Isomap, Locally Linear Embedding, local PCA)
  • Low complexity models in neural networks (linear bottlenecks, pooling, pruning, dropout, generative adversarial networks, Gaussian mean width)
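As a minimal sketch of the first item in this list, principal component analysis via the singular value decomposition, applied to synthetic data of my own devising: 500 points in R^10 that secretly lie near a 2-dimensional subspace, a low complexity structure that the singular value spectrum immediately exposes.

```python
import numpy as np

# Synthetic data: 500 points in R^10 generated from only 2 hidden factors,
# plus a small amount of noise -- i.e., data with low intrinsic dimension.
rng = np.random.default_rng(1)
latent = rng.standard_normal((500, 2))     # 2 hidden factors
mixing = rng.standard_normal((2, 10))      # embed the factors into 10 dims
data = latent @ mixing + 0.01 * rng.standard_normal((500, 10))

# PCA by SVD of the centered data matrix.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Two singular values dominate; the rest sit at the noise floor.
explained = s**2 / np.sum(s**2)
print("variance explained by first 2 components:", explained[:2].sum())

# Project onto the first two principal directions: a 10 -> 2 reduction.
reduced = centered @ Vt[:2].T
```

The two leading components capture essentially all of the variance, so the 2-dimensional projection preserves the structure of the data set while shrinking it fivefold.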

Objectives

The course aims to provide participants with a good understanding of basic concepts and applications of both classical mathematical tools like the Fourier or wavelet transform and more cutting edge methods like dropout in neural networks.  A variety of applications and algorithms will be presented.  Participants should finish the course with a clearer idea of when and how to use various approaches in data analysis and image processing.

Literature

The linear algebra chapter of the Deep Learning textbook by Goodfellow, Bengio, and Courville (MIT Press):

http://www.deeplearningbook.org/contents/linear_algebra.html

Lecturer

Emily King is a professor of mathematics at Colorado State University, reigning IK Powerpoint Karaoke champion, an avid distance runner, and a lover of slow food / craft beer / third wave coffee.  Her research interests include algebraic and applied harmonic analysis, signal and image processing, data analysis, and frame theory.  In layman’s terms, she looks for the best building blocks to represent data, images, and even theoretical mathematical objects to better understand them.  She also has a tattoo symbolizing most of her favorite classes of mathematical objects.  If you are curious, you should ask her about it over a beer.

Affiliation: Colorado State University
Website: https://www.math.colostate.edu/~king/


MC1 – Applications of Bayesian Inference and the Free Energy Principle

Lecturer: Christoph Mathys
Fields: Bayesian inference, free energy principle, active inference,
computational neuroscience, time series

Content

We will start with a look at the fundamentals of Bayesian inference, model selection, and the free energy principle. We will then look at ways to reduce Bayesian inference to simple prediction adjustments based on precision-weighted prediction errors. This will provide a natural entry point to the field of active inference, a framework for modelling and programming the behaviour of agents negotiating their continued existence in a given environment. Under active inference, an agent uses Bayesian inference to choose its actions such that they minimize the free energy of its model of the environment. We will look at how an agent can infer the state of the environment and its own internal control states in order to generate appropriate actions.
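To give a feel for the "precision-weighted prediction error" idea, here is a deliberately simplified sketch (a toy Gaussian example of my own, not the HGF itself): each observation moves the belief by a prediction error scaled by the ratio of sensory precision to total precision, exactly as in a one-step Kalman update with known observation noise.

```python
import numpy as np

rng = np.random.default_rng(2)

true_value = 1.5        # hidden quantity the agent tries to infer
obs_noise_sd = 0.5      # known observation noise

mu = 0.0                          # prior mean (the agent's prediction)
pi_prior = 1.0                    # prior precision (1 / prior variance)
pi_obs = 1.0 / obs_noise_sd**2    # sensory (likelihood) precision

for _ in range(50):
    y = true_value + obs_noise_sd * rng.standard_normal()
    prediction_error = y - mu
    # Precision-weighted learning rate: how much to trust this observation
    # relative to the current belief.
    learning_rate = pi_obs / (pi_prior + pi_obs)
    mu = mu + learning_rate * prediction_error
    pi_prior = pi_prior + pi_obs  # posterior precision becomes the next prior

print(f"posterior mean {mu:.2f} (true value {true_value})")
```

As the prior precision accumulates, the learning rate shrinks: early observations move the belief a lot, later ones only a little, which is the sense in which Bayesian inference reduces to precision-weighted prediction adjustments.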

Objectives

  • To understand the reduction of Bayesian inference to precision-weighting of
    prediction errors
  • To understand the free energy principle and the modelling framework of
    active inference
  • To know the principles of Bayesian inference and model selection, and to understand their application to a given data set.

Literature

  • Friston, K. J., Daunizeau, J., & Kiebel, S. J. (2009). Reinforcement Learning
    or Active Inference? PLoS ONE, 4(7), e6421.
  • Mathys, C., Lomakina, E.I., Daunizeau, J., Iglesias, S., Brodersen, K.H.,
    Friston, K.J., & Stephan, K.E. (2014). Uncertainty in perception and the
    Hierarchical Gaussian Filter. Frontiers in Human Neuroscience, 8:825.
  • Mathys, C., Daunizeau, J., Friston, K.J., Stephan, K.E., 2011. A Bayesian
    foundation for individual learning under uncertainty. Front. Hum. Neurosci. 5,
    39.
  • Friston, K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13(7), 293–301.

Lecturer

Christoph Mathys is Associate Professor of Cognitive Science at Aarhus University. Originally a theoretical physicist, he worked in the IT industry for several years before doing a PhD in information technology at ETH Zurich and a master’s degree in psychology and psychopathology at the University of Zurich. During his graduate studies, he developed the hierarchical Gaussian filter (HGF), a generic hierarchical Bayesian model of inference in volatile environments. Based on this, he develops and maintains the HGF Toolbox, a Matlab-based free software package for the analysis of behavioural and neuroimaging experiments. His research focus is on the hierarchical message passing that supports inference in the brain, and on failures of inference that lead to psychopathology.

Affiliation: Aarhus University
Website: https://chrismathys.com


ET1 – How to Know

Lecturer: Celeste Kidd
Fields: Developmental psychology, cognitive science, (and a tiny bit of neuroscience)

Content

This evening lecture will discuss Kidd’s research about how people come to know what they know. The world is a sea of information too vast for any one person to acquire entirely. How then do people navigate the information overload, and how do their decisions shape their knowledge and beliefs? In this talk, Kidd will discuss research from her lab about the core cognitive systems people use to guide their learning about the world—including attention, curiosity, and metacognition (thinking about thinking). The talk will discuss the evidence that people play an active role in their own learning, starting in infancy and continuing through adulthood. Kidd will explain why we are curious about some things but not others, and how our past experiences and existing knowledge shape our future interests. She will also discuss why people sometimes hold beliefs that are inconsistent with evidence available in the world, and how we might leverage our knowledge of human curiosity and learning to design systems that better support access to truth and reality.

Objectives

I hope to introduce students to the approach of combining computational models with behavioural experiments in order to develop robust theories of the systems that govern human cognition, especially attention, curiosity, and learning. We will take a very high-level conceptual approach to these topics, and I also hope students will leave understanding something useful about how people solve the problem of sampling from the world in order to understand something profound about it. I hope students will leave with a better understanding about how a person’s past experiences and expectations combine in a way that influences their subsequent sampling decisions and beliefs.    

Literature

Optional reading: Kidd, C., & Hayden, B. Y. (2015). The psychology and neuroscience of curiosity. Neuron, 88(3), 449–460.

https://www.cell.com/neuron/fulltext/S0896-6273(15)00767-9

Lecturer

Celeste Kidd is an Assistant Professor of Psychology at the University of California, Berkeley. Her lab investigates learning and belief formation using a combination of computational models and behavioural experiments. She earned her PhD in Brain and Cognitive Sciences at the University of Rochester, then held brief visiting fellow positions at Stanford’s Center for the Study of Language and Information and MIT’s Department of Brain and Cognitive Sciences before starting up her own lab. Her research has been funded by the National Science Foundation, the Jacobs Foundation, the Templeton Foundation, the Human Frontiers Science Program, Google, and the Berkeley Center for New Media. Kidd also advocates for equity in educational opportunities, work for which she was named one of TIME Magazine’s 2017 Persons of the Year as one of the “Silence Breakers”.

Affiliation: University of California, Berkeley
Website: www.kiddlab.com

 

PrfC2 – Ethics in Science and Good Scientific Practice

Lecturer: Hans-Joachim Pflüger
Fields: Science in general, Neuroscience in particular

Content

This course will focus on aspects of ethics in science, and on the fact that “scientific curiosity” is not without limits, although ethical standards may vary across cultures. We shall focus on the relatively new field of neuroethics. Another important topic will be good scientific practice, its implications for the performance of science, and the rules that apply. Following these rules of good scientific practice is among the key requirements for scientists worldwide, as well as for any autonomous artificial intelligent system, and is thus one of the pillars of trust in scientific results. The participants will be confronted with examples of fraud in science, ethical problems and decisions, as well as good scientific practice. The outcomes of such ethical decisions and questions around good scientific practice will be discussed.

Objectives

The course should teach participants the rules of good scientific practice, how to implement them in their own work and how to teach others. The participants will also be made aware of ethical problems that may arise with designing and carrying out experiments on animals or humans.

Literature

Kreutzberg, GW, The Rules of Good Science, EMBO reports, vol. 5, 330-332, 2004

Global Neuroethics Summit delegates, Rommelfanger, KS et al., Neuroethics questions to guide ethical research in the international brain initiatives.  Neuron 100:19-36, 2018

Lecturer

Hans-Joachim Pflüger is a retired professor of Functional Neuroanatomy/Neurobiology at Freie Universität Berlin who is still active in basic neurobiological research where he is interested in the neuronal basis of locomotory behaviour in insects such as pattern generation, sensory reflex circuits and neuromodulation. He studied Biology and Chemistry in Stuttgart, obtained his doctoral degree from the University of Kaiserslautern and was a postdoc in Cambridge/UK. He was an assistant professor at the Universities of Bielefeld and Konstanz before his habilitation in Konstanz in 1985. He moved to Freie Universität, Berlin in 1987 and spent several sabbaticals in Tucson (Univ. of Arizona), Tempe (Arizona State Univ.), both USA, and Christchurch (Univ. of Canterbury), New Zealand. He was a visiting scientist to Ben Gurion Univ., Beer Sheva, Israel and received the Ernst-Bresslau guest professor award of the Universität zu Köln. He has served on several DFG-reviewing panels, was treasurer and president of the German Neuroscience Society (NWG) and also treasurer for both FENS (Federation of European Neuroscience Societies) and IBRO (International Brain Research Organization). 

Affiliation: Freie Universität Berlin
Website: https://www.bcp.fu-berlin.de/biologie/arbeitsgruppen/neurobiologie/ag_pflueger/mitarbeiter/leiter/pflueger/index.html


MC2 – Symbolic Reasoning within Connectionist Systems

Lecturer: Klaus Greff
Fields: Artificial intelligence / neural networks; draws upon theoretical neuroscience and cognitive psychology

Content

Our brains effortlessly organize our perception into objects, which they use to compose flexible mental models of the world. Objects are so fundamental to our thinking, and our brains are so good at forming them from raw perception, that it is hard to notice anything special happening at all. Yet perceptual grouping is far from trivial and has puzzled neuroscientists, psychologists, and AI researchers alike.

Current neural networks show impressive capacities in learning perceptual tasks but struggle with tasks that require a symbolic understanding. This ability to form high-level symbolic representations from raw data, I believe, is going to be a key ingredient of general AI. 

During this course, I will try to share my fascination with this important but often neglected topic. 

Within the context of neural networks, we will discuss the key challenges and how they may be addressed. Our main focus will be the so-called Binding Problem and how it prevents current neural networks from effectively dealing with multiple objects in a symbolic fashion.

After a general overview in the first session, the next lectures will explore in-depth three different aspects of the problem:

Session 2 (Representation) focuses on the challenges regarding distributed representations of multiple objects in artificial neural networks and the brain.

Session 3 (Segregation) is about splitting raw perception into objects, and we will discuss what they even are in the first place.

Session 4 (Composition) will bring things back together and show how different objects can be related and composed into complex structures. 

Objectives

  • Develop an appreciation for the subtleties of object perception.
  • Understand the importance of symbol-like representations in neural networks and how they relate to generalization.
  • Become familiar with the binding problem and its three aspects: representation, segregation, and composition.
  • Get an overview of the challenges and available approaches for each subproblem.

Literature

The course is a non-technical high-level overview, so only basic familiarity with neural networks is assumed. Optional background material: 

Lecturer

Klaus Greff studied Computer Science at the University of Kaiserslautern and is currently a PhD candidate under the supervision of Prof. Jürgen Schmidhuber. His main research interest revolves around the unsupervised learning of symbol-like representations in neural networks (the content of this course).

Previously, Klaus has worked with Recurrent Neural Networks and the training of very deep neural networks, and is also the maintainer of the popular experiment management framework Sacred.

Affiliation: IDSIA
Website: https://people.lu.usi.ch/greffk/publications.html


ET2 – Data-Driven Dynamical Models for Neuroscience and Neuroengineering

Lecturer: Bing W. Brunton
Fields: Computational neuroscience, neuroengineering, data science

Content

Discoveries in modern neuroscience are increasingly driven by quantitative understanding of complex data. The work in my lab lies at an emerging, fertile intersection of computation and biology. I develop data-driven analytic methods that are applied to, and are inspired by, neuroscience questions. Projects in my lab explore neural computations in diverse organisms.  We work with theoretical collaborators on developing methods, and with experimental collaborators studying insects, rodents, and primates. The common theme in our work is the development of methods that leverage the escalating scale and complexity of neural and behavioural data to find interpretable patterns.

Lecturer

Bing Brunton is the Washington Research Foundation Innovation Associate Professor of Neuroengineering in the Department of Biology. She joined the University of Washington in 2014 as part of the Provost’s Initiative in Data-Intensive Discovery to build an interdisciplinary research program at the intersection of biology and data science. She also holds appointments in the Paul G. Allen School of Computer Science & Engineering and the Department of Applied Mathematics. Her training spans biology, biophysics, molecular biology, neuroscience, and applied mathematics (B.S. in Biology from Caltech in 2006, Ph.D. in Neuroscience from Princeton in 2012). Her group develops data-driven analytic methods that are applied to, and are inspired by, neuroscience questions. The common thread in this work is the development of methods that leverage the escalating scale and complexity of neural and behavioural data to find interpretable patterns. She has received the Alfred P. Sloan Research Fellowship in Neuroscience (2016), the UW Innovation Award (2017), and the AFOSR Young Investigator Program award (2018) for her work on sparse sensing with wing mechanosensory neurons.

Affiliation: University of Washington
Website: www.bingbrunton.com
Twitter: @bingbrunton

 

FC09 – Using Robot Models to Explore the Exploratory Behaviour of Insects

Lecturer: Barbara Webb
Fields: Behavioural biology, neuroscience, computational modelling, robotics

Content

Insects are often thought to show only fixed ‘robotic’ behaviours but in fact exhibit substantial flexibility, from maggots exploring their world to find which odours signal risk or reward, to ants and bees discovering and efficiently navigating between food sources scattered over a large environment. Yet insects also have small brains, providing the promise that we may be able to understand and model these aspects of intelligent behaviour down to the single neuron level. This course will describe the current state of research in insect exploration, emphasising an explicitly mechanistic view of explanation: to understand a system, we should (literally) try to build it. The final lecture will reflect on this methodology of modelling and what we can learn by implementing biological explanations as robots. 

Session 1: Exploration in maggots, and the role of the body in behaviour.

Session 2: The neural basis of risk and reward in insect learning.

Session 3: Expert insect navigators – how do they discover and remember key locations in their world?

Session 4: Satisfying our own curiosity: using robots as models 

Objectives

  1. Understand the importance of linking brain, body and environment to explain behaviour.
  2. Gain knowledge of current models of the neural mechanisms of exploration and learning in insects, and the key open questions. 
  3. Explore the role of (robot) models in scientific explanation

Literature

N/A

Lecturer

Barbara Webb completed a BSc in Psychology at the University of Sydney, then a PhD in Artificial Intelligence at the University of Edinburgh. Her PhD research on building a robot model of cricket sound localization was featured in Scientific American. This established her as a pioneer in the field of biorobotics – using embodied models to evaluate biological hypotheses of behavioural control. She has published influential review articles on this methodology in Behavioural and Brain Sciences, Nature, Trends in Neurosciences and Current Biology. In the last ten years the focus of her research has moved from basic sensorimotor control towards more complex insect behavioural capabilities, in the areas of associative learning and navigation. She held lectureships at the University of Nottingham and the University of Stirling before returning to a faculty position in the School of Informatics at Edinburgh in 2003. She was appointed to a personal chair as Professor of Biorobotics in 2010.

Affiliation: School of Informatics at Edinburgh
Website: http://blog.inf.ed.ac.uk/insectrobotics/


FC12 – The Development of Curiosity

Lecturer: Gert Westermann
Fields: Psychology, Neuroscience, Cognitive Science

Content

Curiosity has been described as an important driver for learning from infancy onwards. But what is curiosity? How has it been conceptualized, and how has its role in infant learning been identified and characterized? This course will describe the main theories of what curiosity is and how it affects behaviour, and how recent developmental research has studied curiosity in infants and children. Here I will address children’s active role in their learning and in their language development, as well as their preference for specific types of information. I will also touch on the role of play in infant and child development. Computational modelling can help us to develop theories of the mechanisms underlying curiosity-based exploratory behaviour, and I will discuss some of these models.

This course does not require any prior knowledge and all topics will be introduced gently.

Objectives

At the end of this course you will be able to

  • describe the major theories of curiosity 
  • explain how scientists conduct studies with infants
  • describe the research done with infants and children on curiosity-based learning
  • explain principles of computational modelling and some relevant models of curiosity-based learning 

Literature

  • Bazhydai, M., Twomey, K. E., & Westermann, G. (in press). Exploration and curiosity. In J. B. Benson (Ed.), Encyclopedia of Infant and Early Childhood Development, 2nd ed.
  • Gottlieb, J., Oudeyer, P.-Y., Lopes, M., & Baranes, A. (2013). Information-seeking, curiosity, and attention: computational and neural mechanisms. Trends in Cognitive Sciences, 17(11), 585–593. http://doi.org/10.1016/j.tics.2013.09.001
  • Kidd, C., & Hayden, B. Y. (2015). The Psychology and Neuroscience of Curiosity. Neuron, 88(3), 449–460. http://doi.org/10.1016/j.neuron.2015.09.010
  • Loewenstein, G. (1994). The psychology of curiosity: A review and reinterpretation. Psychological Bulletin, 116(1), 75–98. http://doi.org/10.1037/0033-2909.116.1.75

Lecturer

Gert Westermann studied Computer Science in Braunschweig and Austin, TX, and received a PhD in Cognitive Science from the University of Edinburgh. After postdocs at the Sony Computer Science Lab in Paris and at Birkbeck College, London, he worked at Oxford Brookes University for several years and in 2011 joined Lancaster University as Professor of Psychology.

Gert is Director of the Leverhulme Trust Doctoral Scholarship Centre on Interdisciplinary Research in Infant Development which trains 22 PhD students on infancy research, and co-director of the ESRC International Centre for Language and Communicative Development which is a large-scale collaboration between the Universities of Manchester, Liverpool and Lancaster. 

Gert’s research takes an interdisciplinary approach, combining looking time, pupil dilation, ERP, fNIRS, and behavioural studies with computational modelling to investigate the early cognitive, social and language development in infancy, with a recent focus on curiosity-based learning. 

Affiliation: Lancaster University
Website: https://www.lancaster.ac.uk/sci-tech/about-us/people/gert-westermann


FC02 – Your Wit Is My Command: Toward A Computational Understanding of Humour

Lecturer: Tony Veale
Fields: Artificial Intelligence, Computational Creativity

Content

Until quite recently, AI was a scientific discipline defined more by its portrayal in science fiction than by its actual technical achievements. Real AI systems are now catching up to their fictional counterparts, and are as likely to be seen in news headlines as on the big screen. Yet as AI outperforms people on tasks that were once considered yardsticks of human intelligence, one area of human experience still remains unchallenged by technology: our sense of humour.

This is not for want of trying, as this course will show. The true nature of humour has intrigued scholars for millennia, but AI researchers can now go one step further than philosophers, linguists and psychologists once could: by building computer systems with a sense of humour, capable of appreciating the jokes of human users or even of generating their own, AI researchers can turn academic theories into practical realities that amuse, explain, provoke and delight.

The course will comprise four lectures, which will explore the following topics.

Newspaper personal columns are routinely filled with people seeking partners with a good sense of humour (GSOH), with many rating this as highly as physical fitness or physical appearance. Yet what does it mean to have a sense of humour? Conversely, what does it mean to have NO sense of humour, and how might we imbue a humorless machine with a capacity for wit and a flair for the absurd? We begin by unpacking these questions, to suggest some initial answers and models.

So, for example, what would it mean for a computer to have a numeric humour setting, as in the case of the robot TARS in the film Interstellar? Can a machine’s sense of humour be reduced to a single number or parameter setting? Is humour a modular ability? Can it be gifted to computers as a bolt-on unit like Commander Data’s “humour chip” in Star Trek, or is it an emergent phenomenon that arises from complex interactions among all our other faculties? Might humour emerge naturally within complex AI systems without explicitly being programmed to do so, as in the mischievous supercomputer Mike in Robert Heinlein’s The Moon is a Harsh Mistress or in the sarcastic droid K2SO in Rogue One: A Star Wars Story?

This course will survey and critique the competing theories of humour that scholars have championed through the ages, enlarging on recurring themes (incongruity, relief, superiority) while considering the amenability of each to computational modelling. What is it that these theories are really explaining, and which comes closest to capturing the elusive essence of humour?

The centrality of incongruity in modern theories demands that this concept be given a special focus. So we will unpack its many meanings to show how our understanding of incongruity can be as multifaceted as the idea of humour itself. Popular myths about the brittleness of machines in the face of the incongruous and the unexpected will be unpicked and debunked as we explore how machines might deliberately seek out and invent incongruities of their own.

But computational humour is still in its infancy, and it is no coincidence that the mode of humour for which machines show the greatest aptitude is the one humans embrace at a very early age: puns. Puns vary in wit and sophistication, but the simplest require only an ear for sound similarity and a disregard for the consequences of replacing a word with its phonetic doppelganger. The challenge for AI systems is to progress, as children do, from these simple beginnings to modes of ever greater conceptual sophistication.

To do so, is it possible to capture the essence of jokes in a mathematical formula, much as physicists have done for electromagnetism and gravity? Do jokes have quantifiable features that we and our machines can intuitively appreciate? Can we build statistical models to characterize the signature qualities of a humorous artifact, so that machines can learn to tell funny from unfunny for themselves? And what do these measurable qualities say about humour and about us?

Finally, however we slice it, conflict sits at the heart of humour, whether it is a conflict in meaning, attitude, expectation or perspective. Double acts personalize this conflict by recognizing the different roles a comic can play. Computers can likewise play multiple roles in the creation of humour, from rigid “straight man” to absurdist provocateur, in double acts with humans and with other machines. So we will explore the ways in which smart machines can contribute to the social emergence of humour, either as embodied robots or disembodied software.

Objectives

This course will use the ideas and achievements of AI to explore what it means to have a sense of humour, and moreover, to understand what it is to not have one. It will challenge the archetype of the humorless machine in popular culture, to celebrate what science fiction gets right and to learn from what it gets wrong. It will make a case for the necessity of a computational understanding of humour, to better understand ourselves and to better construct machines that are more flexible, more understanding, and more willing to laugh at their own limitations.

Literature

Computational humour studies is an established field that has produced a range of academic books, from Victor Raskin’s Semantic Mechanisms of Humor (1985, one of the first) to the more recent Primer of Humour Research (with chapters from computational humorists). Non-computational humour researchers, such as Elliott Oring, have also written accessible books on humour, such as Engaging Humor, while the computer scientist Graeme Ritchie has written a pair of well-received academic books on humour. Comedians and comedy professionals have also written some noteworthy books on humour, with individual chapters that focus on computational humour or that offer algorithmic insights into the author’s own comedy production strategies. Toplyn’s Comedy Writing for Late-Night TV offers a beginner’s guide to humour production that is frequently schematic in style. Jimmy Carr and Lucy Greeves’ The Naked Jape considers humour more broadly, but also offers a chapter on computational models and the people who build them. I will quote from each of these sources as needed.

Lecturer

Tony Veale is an associate professor in the School of Computer Science at UCD (University College Dublin), Ireland. He has worked in AI research for 25 years, in academia and in industry, with a special emphasis on humour and linguistic creativity. He is the author of the 2012 monograph Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity (from Bloomsbury), co-author of the 2016 textbook Metaphor: A Computational Perspective (Morgan Claypool), and co-author of 2018’s Twitterbots: Making Machines That Make Meaning (MIT Press). He led the European Commission’s coordination action on Computational Creativity (named PROSECCO), and has collaborated on international research projects with an emphasis on computational humour and imagination, such as the EC’s What-If Machine (WHIM) project. He runs a website dedicated to explaining AI with humour at RobotComix.com. He is active in the field of Computational Creativity, and is currently the elected chair of the international Association for Computational Creativity (ACC).

Affiliation: University College Dublin

Website:

http://Afflatus.UCD.ie

http://RobotComix.com