Keynote lecture 7 – When Trust Meets Tech: Who has to be how flexible to make trustworthy AI happen?

Lecturer: Tarek R. Besold
Fields: Artificial Intelligence/Machine Learning

AI & Robotics


Trust and technology don’t always go hand in hand – be it because of scepticism when technological advances change familiar structures of everyday life, or because of actual (ab)uses of technology that infringe upon the rights of individuals or of society as a whole. With the widespread adoption of (currently mostly ML-driven) AI solutions in different domains of professional and private life, the concept of “trustworthy AI” has gained quite some popularity with general audiences, as well as with regulators, system producers/developers, and researchers.

In this talk we will have a look at some of the key questions relating to “trustworthy AI”, such as:
(1) Why is trust (or the lack thereof) an actual issue in the context of AI (and especially ML) systems?
(2) What are the corresponding theoretical and/or practical challenges?
(3) What are some regulatory tools and processes that can be deployed to govern AI and assure trustworthiness (to some degree)?
(4) What does all of this mean for the AI ecosystem and the different market participants?


Dr. Tarek R. Besold

Dr. Tarek R. Besold is Head of Strategic AI at DEKRA DIGITAL. Before taking up his role in the management team of the DEKRA AI Hub, which was founded in 2020, he held various positions as CTO at a Berlin deep-tech start-up, as Chief Science Officer of Telefonica’s Digital Health Moonshot in Barcelona, and as Lecturer/Assistant Professor in Data Science at City, University of London. Tarek completed his PhD in 2014 at the Institute for Cognitive Science in Osnabrück on topics at the interface between cognition and AI. He is chairman of the DIN standardization committee for Artificial Intelligence (NA 043-01-42 AA) and a member of the AI expert advisory board of Microsoft Germany.

Affiliation: DEKRA DIGITAL

Lecture Series 4 – The malleability of perception in ancient Buddhist thought and practice

Lecturer: Andrea Sangiacomo
Fields: Philosophy



Contemporary discussions in philosophy and cognitive science analyze experience by seeking to understand how it is possible for a conscious subject to get in touch with, know, and operate within a world outside of it. The notions of ‘consciousness,’ ‘mind,’ and ‘external world’ take center stage in these approaches, and a spectrum of different theoretical options is available to flesh out the relation between these concepts. For realist accounts, consciousness and world are two relatively independent (albeit related) domains of reality, while for idealist accounts, the world is just a projection of conscious activity itself. More recently, a new enactivist account suggested that conscious experience and the world are co-originated in their mutual interplay.

This short course explores the way in which early Buddhist philosophy navigated between the extremes of realism and idealism, by arriving at an understanding of experience that might fit the ‘enactivist’ approach. In particular, we will investigate how what is called ‘consciousness’ in today’s Western discussions is best understood in Buddhist thought in terms of ‘contact’ (Pali phassa), namely, the complex activity of discerning and parsing contents of experience which gives rise to subjective experience. For early Buddhist thought, conscious experience requires a basis in something that is different from and outside the sentient subject itself and that conditions the way in which consciousness works (hence strict idealism is rejected). However, the contents of conscious experience are not representations of what is in the ‘external world,’ but rather constructions conditioned by meaning (perception), conative drives, and feelings (hence strict realism is also rejected).
By underscoring this point, Buddhist texts draw attention to the way in which experience is not only constantly shaped by various factors and conditions, but also how a disciplined practice can allow one to steer this process at will and thus shape their own experience in specific ways, which might be conducive to the achievement of the soteriological goals that are central in early Buddhism (freedom from craving, and peace).


Dr. Andrea Sangiacomo

Andrea Sangiacomo is Associate Professor of Philosophy at the Faculty of Philosophy at the University of Groningen, where he currently teaches global hermeneutics and ancient Buddhist philosophy. His research interests include Western early modern philosophy and science, soteriological conceptions of selfhood in a cross-cultural perspective, and ancient Buddhist thought and practice.

Affiliation: University of Groningen

Practical Course 5 – Flexible Human-AI Interaction

Lecturer: Jan Smeddinck
Fields: Human-Computer Interaction, Machine Learning, Artificial Intelligence, Interaction Design

AI & Robotics


Machine learning (ML) and artificial intelligence (AI) services are having a growing impact on the way we live and work. The most prominent goal of contemporary AI is to support human decision making and action with intelligent services. Widely available ML and AI tools are increasingly enabling the design and development of automated processes that provide (potentially) deep integration of complex information, often with the capacity to respond autonomously, mimicking aspects of human cognition and behavior. However – even questionable marketing aside – the term “artificial intelligence” alone is prone to generating misunderstandings and bloated expectations, leading to bad user experiences or worse. In this context, the course will explore flexibility in human-AI interaction with a view to both the potential upsides and pitfalls. The talk for this course will introduce the foundations of critical and responsible design, development, and evaluation of AI technologies with a focus on human-AI interaction. It aims to provide participants with an intuition for utilizing – and critically evaluating the impact of – human-AI interaction concepts and technologies. The workshop elements will scaffold further critical discussion along hands-on ML/AI use-cases.


  • Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91.
  • Confalonieri, R., Coba, L., Wagner, B., & Besold, T. R. (2021). A historical perspective of explainable Artificial Intelligence. WIREs Data Mining and Knowledge Discovery, 11(1), e1391.
  • Dauvergne, P. (2020). AI in the Wild: Sustainability in the Age of Artificial Intelligence. The MIT Press.
  • Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer Nature.
  • Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
  • Hassenzahl, M., Borchers, J., Boll, S., Rosenthal-von der Pütten, A., & Wulf, V. (2021). Otherware: How to best interact with autonomous systems. Interactions, 28(1), 54–57.
  • Le, H. V., Mayer, S., & Henze, N. (2021). Deep learning for human-computer interaction. Interactions, 28(1), 78–82.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (n.d.). Machine Bias. ProPublica. Retrieved 7 November 2021.
  • O’Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Reprint edition). Crown.
  • Pfau, J., Smeddinck, J. D., & Malaka, R. (2020). The Case for Usable AI: What Industry Professionals Make of Academic AI in Video Games. In Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play (pp. 330–334). Association for Computing Machinery.
  • Sheridan, T. B. (2001). Rumination on automation, 1998. Annual Reviews in Control, 25, 89–97.
  • Shneiderman, B., & Maes, P. (1997). Direct manipulation vs. Interface agents. Interactions, 4(6), 42–61.
  • Swartz, L. (2003). Why People Hate the Paperclip: Labels, Appearance, Behavior, and Social Responses to User Interface Agents.
  • Thieme, A., Cutrell, E., Morrison, C., Taylor, A., & Sellen, A. (2020). Interpretability as a dynamic of human-AI interaction. Interactions, 27(5), 40–45.
  • Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083.
  • Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13.
  • Zimmerman, J., Oh, C., Yildirim, N., Kass, A., Tung, T., & Forlizzi, J. (2020). UX designers pushing AI in the enterprise: A case for adaptive UIs. Interactions, 28(1), 72–77.


Jan Smeddinck

Jan Smeddinck is currently a Principal Investigator at – and the Co-Director of – the Ludwig Boltzmann Institute for Digital Health and Prevention (LBI-DHP) in Salzburg, Austria. For the LBI-DHP, he leads research programme lines on digital technologies and data analytics. Prior to this appointment he was a Lecturer (Assistant Professor) in Digital Health at Open Lab and the School of Computing at Newcastle University in the UK. He also spent one year as a postdoc visiting research scholar at the International Computer Science Institute (ICSI) in Berkeley and retains an association with his PhD alma mater, the TZI Digital Media Lab at the University of Bremen in Germany. Building on his background in interaction design, serious games, web technologies, human computation, machine learning, and visual effects, he has found a home in the field of human-computer interaction (HCI), with a focus on digital health.

Affiliation: Ludwig Boltzmann Institute for Digital Health and Prevention

Practical Course 4 – FlexVision – Flexibility in Vision: Dynamics, Mechanisms and Function

Lecturer: Udo Ernst
Fields: Neurobiology / Robotics

Nervous system


The visual system of higher mammals is a complex neural machinery which efficiently solves sophisticated computational problems on a massively parallel stream of information originating in dynamic environments. This is only possible by being highly flexible, i.e. by adapting visual processing to sensory, behavioral, and cognitive contexts. Flexibility also makes our visual system (still) superior to computer vision, in which state-of-the-art deep convolutional networks may perform near error-free object recognition, but fail to adapt to novel situations or break down under adversarial attacks.

In my presentation, I will discuss different examples of flexibility in the visual system in the context of three major principles: configuration, coordination and control. Configuration adapts circuits and networks to current behavioural needs, optimizing their function towards specific tasks or for performing specific computations more efficiently. The interplay between computational units is organized by coordination principles towards common goals, leading to interactions between multiple ‘players’, such as different visual areas in the brain, and to dynamical network changes on multiple time scales. Both configuration and coordination need control units to monitor and signal changes in the external and/or internal situation, and to initiate appropriate reaction mechanisms.

We will argue that it is necessary to combine different methodological approaches for understanding flexibility in vision: for example, electrophysiological studies to reveal mechanisms of flexibility, psychophysical investigations to characterize the impact of neural flexibility on function, and theoretical work to provide unifying frameworks and explanations for dynamics, mechanisms and function of flexibility.

The aim of our workshop is to implement different principles of flexibility in a computer simulation, and to make them work together. Participants will team up in small groups, each of which will first focus on one particular, simple aspect of flexibility, e.g. adapting to ambient light, focusing attention on particular visual features, or detecting rapid changes in the environment. Our goal is to realize flexibility with appropriate neural mechanisms, to better understand how the brain might solve a corresponding task. In a second step, the different groups will put their solutions together and try to ‘coordinate’ them, i.e. to combine flexible processing on multiple levels in a meaningful manner.
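As a taste of the first aspect mentioned above, ambient-light adaptation can be sketched as divisive gain control in a few lines of NumPy. This is our own minimal illustration, not course material; the function name and the value of the semi-saturation constant are assumptions:

```python
import numpy as np

def adapt_to_ambient_light(frame, sigma=0.1):
    """Divisive gain control: normalize a frame by its mean luminance.

    frame: 2D array of pixel intensities in [0, 1].
    sigma: semi-saturation constant that prevents division by zero
           and sets the strength of the adaptation.
    """
    mean_luminance = frame.mean()
    return frame / (mean_luminance + sigma)

# The same scene under dim and bright illumination...
scene = np.random.default_rng(0).random((48, 64))
dim, bright = 0.2 * scene, 0.9 * scene

# ...produces nearly identical responses after adaptation:
r_dim = adapt_to_ambient_light(dim)
r_bright = adapt_to_ambient_light(bright)
print(np.corrcoef(r_dim.ravel(), r_bright.ravel())[0, 1])  # ≈ 1.0
```

Because both frames are rescaled versions of the same scene, the adapted responses become almost independent of the overall illumination level – exactly the kind of flexible processing the exercise targets.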

For testing your ideas, we will use our webcams or short movie sequences and investigate how well flexible neural processing works under different conditions – maybe you can even mount a webcam to your head, close your eyes, and try for yourself whether your artificial visual system can direct you safely towards the coffee machine in your home office, thereby avoiding all obstacles… :-))))

Let’s see what complex and unexpected behaviours will emerge, and let’s be flexible! To participate in our workshop, you only need some programming knowledge, preferably in Python. In our course repository you will find more information about the required Python packages, installation guides, literature and other resources. We suggest installing an appropriate Python distribution and editor before the course, and familiarizing yourself with the most important features of these tools.

Check out the following link; information will be flexibly updated:

On this page you will also find information on how to contact us by e-mail if you have questions in advance.

Please bring a laptop; we will inform you in advance which program packages you will have to install prior to the course. This course is configured to take place on-site, but we will try to be flexible and activate our control circuits for coordinating with one (small) external group of participants if necessary…


Dr. Udo Ernst

Dr. Udo Ernst studied Physics in Frankfurt and received his PhD in 1999 at the Max-Planck-Institute for Dynamics and Self-Organization in Göttingen. Since 2000, he has been working at the Institute for Theoretical Physics at the University of Bremen, with interim research stays at the University of Tsukuba (Japan), the Weizmann Institute (Israel), and the Ecole Normale Superieure (France). Having received the Bernstein Award in Computational Neuroscience in 2010, Dr. Ernst now leads the Computational Neurophysics Lab in Bremen. His research interests revolve around understanding collective dynamics in neural systems using data analysis, mathematical analysis, modelling and simulation, with particular interest in feature integration, criticality, and flexible information processing in the visual system.

Maik Schünemann

Maik Schünemann is a PhD student at the Computational Neurophysics Lab in Bremen. He joined the lab after completing master’s studies in Mathematics, with a focus on dynamical systems and random processes, and in Neurosciences, with a focus on Computational Neurosciences. His research focuses on how attention establishes flexible and selective information processing in the visual system. In addition, he has participated both as student and tutor in the G-Node Advanced Neural Data Analysis course.

Affiliation: University of Bremen

Practical Course 3 – Improvisation in dance, and beyond

Lecturer: Bettina Bläsing
Fields: Cognitive movement science / practical course



Improvisation (from Latin improvidere: not foreseeing) is a highly sophisticated human activity that draws on different forms of memory and cognitive meta-skills, increasing the individual’s flexibility and supporting adaptation under uncertain conditions. For the human mind, improvisation can also be a means of exploring and expanding the options to interact with the world, and a source of enjoyment and stimulation. In dance, improvisation is used for different purposes: as a choreographic tool, to inspire novel ideas in composition; in contemporary dance training, to support dancers’ movement experience and bodily creativity; or as artistic practice per se, in live improvisation performance. Dance and movement improvisation offer a multitude of tools and techniques that help to discover new ways of moving, interacting and communicating through the body. In this course we will use a range of these tools to explore and create. We will encounter unexpected tasks and problems, set and break rules, try to escape habits and enjoy wandering astray, making our way through our own danced stories. Starting from movement and dance improvisation tasks, we will enter other areas of life, including the academic, and watch out for novel ways of approaching old problems, embracing the unforeseeable.


Dr. Bettina Bläsing

Bettina Bläsing works as a lecturer in rehabilitation science at the Technical University Dortmund. She studied in Bielefeld, Münster and Edinburgh and received her doctorate in biology from Bielefeld University in 2004. As a postdoc, she worked at the Max Planck Institute for Evolutionary Anthropology and the Institute for Psychology at Leipzig University, as well as in the “Cognitive Interaction Technology” cluster of excellence and in the “Neurocognition and Movement” working group at Bielefeld University. In 2019 she received the venia legendi in sports science for her habilitation on memory, learning and expertise in dance. Her current focus in research and teaching includes memory processes, improvisation and multimodal perception of body and movement in (inclusive) dance.

Affiliation: TU Dortmund

Keynote Lecture 2 – Meditation as flexibility induction? Theory, findings and computational mechanisms

Lecturer: Fynn-Mathis Trautwein
Fields: Cognitive Neuroscience; Contemplative Science

Nervous system


The talk will present central theoretical concepts of (mindfulness) meditation as well as empirical findings regarding meditation-induced neural plasticity and effects on cognition, affect and the sense of self. These findings will be integrated by discussing potential computational mechanisms within the active inference framework. Finally, in line with the neurophenomenological research program, it will be explored how meditation can enrich our understanding of the mind not only as an object of study, but also as a tool of investigation.


  • Berkovich-Ohana, A., Dor-Ziderman, Y., Trautwein, F.-M., Schweitzer, Y., Nave, O., Fulder, S., & Ataria, Y. (2020). The hitchhiker’s guide to neurophenomenology – The case of studying self boundaries with meditators. Frontiers in Psychology, 11.
  • Dahl, C. J., Lutz, A., & Davidson, R. J. (2015). Reconstructing and deconstructing the self: Cognitive mechanisms in meditation practice. Trends in Cognitive Sciences, 19(9), 515–523.
  • Laukkonen, R. E., & Slagter, H. A. (2021). From many to (n)one: Meditation and the plasticity of the predictive mind. Neuroscience & Biobehavioral Reviews, 128, 199–217.


Dr. Fynn-Mathis Trautwein

Dr. Fynn-Mathis Trautwein investigates mental processes underlying attention, social cognition and the sense of self through the lens of meditation research. After studying psychology, he completed a PhD at the Max Planck Institute for Human Cognitive and Brain Sciences, where he was involved in a large-scale longitudinal mental training study. He then investigated neural mechanisms and phenomenological reports of deep meditative states at the University of Haifa. Currently he is a postdoc at the Department of Psychosomatic Medicine and Psychotherapy, Medical Center – University of Freiburg.

Affiliation: University of Freiburg

Hack2 – Workshop: How to Gather Town

Lecturer: Stefan Riegl
Fields: Architecture

I would like to give a hands-on workshop on how to create a custom space in Gather Town. Afterwards you should have a good grasp of how the parts of a Gather Town space work and you should have built a custom space for yourself.

The workshop will start on Tuesday, 18:30 CEST in the lecture hall of the VIK space.

Please note: This will be an IK-internal event. If you have questions in that regard (or if you cannot make it), feel free to get in contact with me and we can discuss details.

What is it all about

Some people have asked me questions like:

— How difficult is it to create a map or space in Gather Town?
— How long does it take to create a map like [cool map]?
— Can I integrate [cool content] in Gather Town?
— and so on

The workshop is my long-winded answer, such that you can answer those questions yourself afterwards. Based on your own experience. Because you built something. With hands. Your hands. It will be great.

What to expect

The workshop will probably have the following structure:

Part I: The basics
1) How a space works, the Map Maker, how to break things, good practices
2) Creating a map, method 1: Gather Town’s Map Maker
Part II: Advanced methods
3) Creating a map, method 2: external tools, esp. Tiled Map Editor
4) Automating recurring tasks
Part III: Do it yourself
a) Hands-on: Build your own space!
b) Questions and (hopefully) answers

Part II, which requires a bit more technical understanding, builds on Part I, which should be easy to follow along just like that. I intend to make it such that people can sit back and relax _or_ get their hands dirty, but are not forced to do both at the same time. Part III should really be about you creating universes and me shutting up, only talking when asked questions. (Let’s see how that goes.)

Each Part is supposed to be as short as possible and between parts there will be short breaks. Not sure how to estimate times, but I’ll try to fit Part I and II into 25min each, give or take. Very specific needs and more complicated questions might be postponed and addressed in Part III.
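To give a flavour of the automation step in Part II (item 4 above): Tiled stores each tile layer as a flat, row-major list of tile IDs in its JSON map format, so recurring edits like stamping out floors or walls can be scripted instead of clicked. The sketch below uses a deliberately toy map structure; real Tiled and Gather Town files carry many more fields, and the function name is our own:

```python
import json

def fill_rect(layer, map_width, x0, y0, w, h, tile_id):
    """Stamp a w-by-h rectangle of tile_id into a Tiled-style tile layer.

    Tiled keeps a layer's tiles as a flat row-major list in
    layer["data"]; tile ID 0 means 'empty'.
    """
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            layer["data"][y * map_width + x] = tile_id

# A toy 8x6 map with one empty tile layer:
tile_map = {
    "width": 8, "height": 6,
    "layers": [{"name": "floor", "data": [0] * 48}],
}
fill_rect(tile_map["layers"][0], tile_map["width"], 2, 1, 4, 3, tile_id=7)
print(json.dumps(tile_map["layers"][0]["data"]))
```

The same pattern scales to the genuinely recurring chores (borders, chairs, whole rooms), which is exactly where scripting beats redrawing tiles by hand.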

Here’s the cool thing

Come for whatever Part matches your interest and drop out into working on your own space when you’ve had enough of the talking.

Not interested in tech mumbo jumbo? Just listen in on Part I. You already worked with the Map Maker? Join for Part II. You wonder whether a certain idea could be realised for your specific classroom teaching needs? Shoot your questions at me in Part III.

For those interested I’m happy to build a “space hub” and connect it with all the spaces created on that evening. That’s cool because it invites people to explore and discover, without requesting links or sending emails. Showcasing work on the VIK’s Bunter Abend is an option, but details are to be discussed at the workshop.

Who’s talking

I’m not getting paid by Gather Town for advertisement (dang) and there might be smarter ways than how I do it to achieve the same results. Gather Town certainly is not the ultimate answer. However, I see great potential in this incarnation of virtual environments. I want to share the lessons I learned in the past months to empower people, if possible, such that you can use widely available, modern technology to improve e.g. social well-being or education, especially in the times we have. The stuff you need to get into a space is not rocket science, but imagination. (And your hands.)

I leave it to the reader to judge whether I’m qualified to talk about Gather Town space building. In my defence: I was not insignificantly involved in creating the space for the Virtual IK.

If you have any questions left, please let me know. A response will be shorter than this email, most probably.

Hack1 – Embellied Cognition Workshop: an in vivo step-by-step tutorial for cooking deconstructed Pizza Soup

Lecturer: Ronald Sladky
Fields: Cooknitive Science

We will meet on Thu, 8. April, 20:00 at the cafeteria buffet in the VIK space.

A list of required materials and resources can be found below the abstract.


Current archaeological consensus suggests that Hominini (H. erectus, H. sapiens) invented cooking at least 500,000 years ago (Pollard, 2015) and there is also evidence for cooking behavior in present-day chimpanzees (P. troglodytes) (Warneken & Rosati, 2015). Several studies have suggested that food consumption appears to have partial relevance for survival, i.e., for maintaining autopoiesis (Maturana & Varela, 1972) in order to consistently counteract the second law of thermodynamics (Schrödinger, 1944), most likely by minimizing (variational) free energy (Friston, 2010). Beyond that, it is known that consuming food is associated with reward and, depending on cultural and geographical factors, mostly positive affectivity, a phenomenon that we will call cooked meal consumption pleasure (CMCP). An important aspect of CMCP is that subjective experience is not entirely stimulus-driven, resulting in a significant inter-subjective variability (e.g., different dietary preferences and requirements, previous experiences), variability depending on socio-cultural and environmental contexts, and multi-modal sensory integration (Auvray & Spence, 2008). If CMCP (or anything else) exists, it must be a dynamical system (Jaeger, 2021) apparently implemented by hierarchical Markov blankets (Friston, Wiese, and Hobson, 2020).

Instances of successful CMCP are known to allow for foodborne mental space-time traveling (fMSTT). E.g., subjects reported previously that the smell and taste combination of Langos, cotton candy, pickles, and ice cream always reactivates childhood memories of trips to the fairground (Any Wiener et al., 1993). By using an optimized CMCP preparation method, we aim at eliciting the same form of fMSTT, i.e., mentally taking subjects back to Günne. The vehicle used in this study is one of IK’s most iconic dishes, i.e., Pizza Soup (PS), which is served as a first meal. Typical reactions by the participants span the full valence spectrum from ‘Oh ja, lecker.’ to ‘What the %$#@ is this?!’. Here we will ensure that reactions will be mostly positive by updating the PS base recipe and modularizing the components to allow for different dietary preferences (e.g., omnivore, herbivore, people with intolerances). This optimized processing workflow results in a deconstructed PS (dPS) (Derrida, 1967). Note that, in this context, optimized is not used as a testable proposition but as a declarative speech act (Searle & Vanderveken, 1985). dPS will implement PS’ main goals, yet entail different sensory experiences and, in so doing, improve overall CMCP while still allowing for fully-embellied fMSTT.

Required and recommended materials


  • 2 cans tomatoes (whole and peeled)
  • 1/8 liter olive oil
  • 2 cloves garlic
  • 1 teaspoon oregano
  • 2-4 slices good white bread (Ciabatta)
  • Salt, black pepper
  • Optional modifiers: Bay leaf, chili, soy sauce, sugar, baking soda


  • Good white bread (Ciabatta)
  • for carnivores: salami, Jalapeno chilis, grated cheese
  • for omnivores: pine nuts, basil, mozzarella
  • for herbivores: roasted tomatoes, olives

RC4 – VR as a tool

Lecturer: Tobias Wüstefeld
Fields: Design / Psychology


New tools are revolutionizing the way designers work in 3D, and no tool is neutral. With these new ways of working, new ways of seeing and new styles will emerge. VR (Virtual Reality) brings a new, experimental approach into the workflow of designers.


Tobias Wüstefeld

Based in the German city of Hamburg, Tobias Wüstefeld is an illustrator who loves creating miniature worlds. Each has its own look, emotion and general atmosphere, and is full of detail for the viewer to inspect and engage with. He has created several cover designs for Nature journals (Nature Methods, Nature Microbiology, Nature Cell Biology). For influences he looks well beyond the realms of art and illustration.


PC9 – Neural Engineering: Building Cognitive Models with Neurons

Lecturer: Terry Stewart
Fields: Computational Neuroscience


The Neural Engineering Framework (NEF) provides a general method for programming with neurons. This can be useful both for constructing models of particular biological systems and for taking advantage of the energy-efficient computation offered by neuromorphic hardware. In this course, we’ll introduce the basic ideas of the NEF, but the emphasis will be on hands-on modelling work using the Python software package Nengo. Nengo lets you quickly build and interact with these sorts of neuron models, and was used to construct Spaun, the first (and so far only) large-scale functional brain model capable of performing multiple tasks.

After the initial part of this course, where we introduce the tools and methodology, the course will transition to a more project-based format, in which we can work together to try building models based on the particular interests of the participants.
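To give a sense of the core idea before the hands-on sessions, here is a minimal NumPy sketch of the NEF’s encode/decode step. It is not the Nengo API, and all tuning parameters are chosen arbitrarily for illustration: a population of rectified-linear neurons encodes a scalar x, and linear decoders recovered by least squares read it back out.

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons, n_samples = 100, 200

# Each neuron gets a random gain, bias, and preferred direction (+1 or -1),
# giving rectified-linear tuning curves a_i(x) = max(0, gain * e * x + bias).
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

x = np.linspace(-1, 1, n_samples)             # represented values
activities = np.maximum(0, np.outer(x, gains * encoders) + biases)

# As in the NEF, linear decoders are found by least squares.
decoders, *_ = np.linalg.lstsq(activities, x, rcond=None)
x_hat = activities @ decoders

print(np.sqrt(np.mean((x - x_hat) ** 2)))     # small decoding error (RMSE)
```

The same two steps – nonlinear encoding into spiking (here, rate) activity and optimal linear decoding back out – are what Nengo automates for you, including decoding arbitrary functions of x rather than x itself.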


  • Eliasmith, C., et al. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202–1205. doi:10.1126/science.1225266


Dr. Terry Stewart

Terry is an Associate Research Officer at the National Research Council Canada. Before that, he was a post-doctoral research associate working with Chris Eliasmith at the Centre for Theoretical Neuroscience at the University of Waterloo. His first degree was in engineering, his master’s involved applying experimental psychology to simulated robots, and his Ph.D. was on cognitive modelling. So he self-identifies as a cognitive scientist. He is also a co-founder of Applied Brain Research, a research-based start-up company based around using low-power hardware (neuromorphic computer chips) and adaptive neural algorithms.

Affiliation: National Research Council Canada