FL3 – Philosophy of Artificial Consciousness

Lecturer: Wanja Wiese
Fields: Philosophy

Content

It is currently an open question to what extent advances in artificial intelligence (AI) can help to understand the mind (Buckner, 2023). A striking possibility is that, to the extent that AI models do capture mechanisms underlying our own cognitive capacities, implementations of such models in concrete applications will replicate aspects of the mind. This raises the question of whether at some point consciousness might emerge in such AI systems, and whether they would be capable of feeling pleasure and pain, or might even suffer (Metzinger, 2021). Evaluating these disconcerting possibilities is made difficult by the current uncertainty about consciousness:
• What is consciousness?
• How can we know which artificial systems are likely (or unlikely) to be conscious?
• What is the ethical significance of consciousness?
Answers to these questions should be based on interdisciplinary research. This talk provides an overview of some philosophical issues that need to be addressed by such efforts. In particular, the talk will cover the following topics:

Phenomenal consciousness, moral status and risk.
The most relevant notion of consciousness in this context is phenomenal consciousness. States of phenomenal consciousness have a phenomenal character, such as the experienced painfulness of feeling pain; there is something it is like to be in such states (Nagel, 1974); they are often associated with qualia (Tye, 2016). Phenomenal consciousness is particularly relevant, because many assume that the capacity for having valenced phenomenally conscious states gives a system at least some degree of moral status (Kriegel, 2019; Shepherd, 2024).

Uncertainty, due to a lack of consensus regarding metaphysical and empirical theories.
Whether one believes that consciousness in artificial systems is possible depends on fundamental metaphysical assumptions. For instance, biological naturalism (Searle, 2017), biological physicalism (Block, 1997, 2001), or certain extended or enactive approaches (Cosmelli & Thompson, 2010; Froese, 2017; Kirchhoff & Kiverstein, 2020) arguably imply that non-biological robots or computer simulations cannot instantiate consciousness. Conversely, computational functionalism, understood as a general theory of the mind (including consciousness), explicitly endorses the possibility of consciousness in artificial systems, including computer simulations (Butlin et al., 2023). Other metaphysical theories (such as panpsychism) are compatible with different views on artificial consciousness. A lack of consensus on fundamental metaphysical assumptions therefore increases the uncertainty surrounding artificial consciousness.
On the empirical front, there has also been marked progress; but since the science of consciousness is not yet mature (Wiese, 2018), different theories that entail diverging claims about artificial consciousness currently have some currency (Seth & Bayne, 2022).

Not all criteria for consciousness in human beings can be applied to artificial systems.
Insights about the biological mechanisms associated with consciousness in humans (Northoff & Lamme, 2020) and other animals cannot be applied to most artificial systems; other insights concern cognitive or behavioural capacities associated with consciousness. But these cannot serve as sufficient criteria for ascribing consciousness to artificial systems, because, for any proposed set of criteria, it may be possible to gerrymander an otherwise simple system that fulfills these criteria but is unlikely to be conscious—i.e., many criteria can be “gamed” (Birch, 2024; Shevlin, 2021). For instance, unless we already know that a system is conscious, self-reports are of little use: large language models might merely retrieve information contained in their training data. Conversely, some artificial systems might be conscious without being able to express this in any form of report.

Absence of evidence vs. evidence for the absence of consciousness.
In assessing whether an artificial system is conscious, we would like to avoid both over-ascribing consciousness (i.e., being confident that an actually non-conscious system is conscious) and under-ascribing consciousness (i.e., being confident that an actually conscious system is non-conscious). The second type of error can be avoided by focusing on positive indicators, i.e., evidence for the presence of consciousness in an artificial system. But an exclusive focus on positive indicators may create a tendency to overestimate a system’s capacity for consciousness. Without potential evidence for the absence of consciousness in artificial systems, it may also be more difficult to justify the belief that an artificial system is non-conscious.

Consciousness as a natural kind.
Uncertainty about consciousness stems not just from a lack of consensus on the nature of consciousness. Additional challenges arise from the possibility that artificial systems might not be able to instantiate the same kind of consciousness that is found in human beings (if any), because artificial systems differ in many respects from conscious living organisms. For one, phenomenal consciousness may not be a (single) natural kind (for discussion, see Bayne & Shea, 2020; Shea & Bayne, 2010; Taylor, 2023). If so, conditions for artificial consciousness might be distinct from conditions for human consciousness, and insights about biological consciousness could not simply be transferred to artificial systems.

Indeterminacy.
It has been suggested (Papineau, 2002) that the concept of phenomenal consciousness is indeterminate in the following sense (for discussion, see Simon, 2017): in the case of human beings, being conscious likely entails instantiating both some low-level properties L (e.g., certain neural properties) and high-level properties H (e.g., multiply-realisable functional properties), but there may be no fact of the matter whether the concept of phenomenal consciousness refers to L or to H. Hence, there might be non-human systems for which it would be indeterminate whether they are conscious. Some artificial systems might be such borderline cases: neither determinately non-conscious, nor determinately conscious.

Gradualism about consciousness and dimensions of consciousness.
It is often assumed that, although there may be levels of wakefulness, phenomenal consciousness itself does not come in degrees: either there is something it is like to be a given system, or there isn’t. But if certain empirical theories of consciousness are on the right track (e.g., integrated information theory), consciousness does come in degrees (Lee, 2023). If this is correct, some artificial systems might have a lower (or higher) degree of consciousness than human beings (Butlin et al., 2023). In that case, criteria for human-level degrees of consciousness could be inadequate, because an artificial system could have some (low) level of consciousness even if it only satisfies much weaker criteria. An opposing view suggests that there are not different levels, but just different types of global states of consciousness, such as ordinary wakefulness, dreaming, drowsiness, or minimally conscious states (for discussion, see Bayne, Seth, et al., 2020). Could artificial systems realise novel types of global states that are impossible for human beings?

Although many of the issues mentioned above may seem intractable, the talk will hopefully convey that there are promising strategies to address them. Addressing these issues is also required to fully assess the moral implications, risks, and benefits of creating potentially conscious artificial systems.

Literature

  • Bayne, T., Seth, A. K., & Massimini, M. (2020). Are There Islands of Awareness? Trends Neurosci, 43(1), 6–16. https://doi.org/10.1016/j.tins.2019.11.003
  • Bayne, T., & Shea, N. (2020). Consciousness, Concepts and Natural Kinds. Philosophical Topics, 48(1), 65–83. https://doi.org/10.5840/philtopics20204814
  • Birch, J. (2024). The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Oxford University Press.
  • Block, N. (1997). Biology versus computation in the study of consciousness. Behavioral and Brain Sciences, 20(1), 159–165. https://doi.org/10.1017/S0140525X97330052
  • Block, N. (2001). Paradox and cross purposes in recent work on consciousness. Cognition, 79(1–2), 197–220.
  • Buckner, C. J. (2023). From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. Oxford University Press.
  • Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S. M., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M. A. K., Schwitzgebel, E., Simon, J., & VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness (arXiv:2308.08708). arXiv. http://arxiv.org/abs/2308.08708
  • Cosmelli, D., & Thompson, E. (2010). Embodiment or Envatment?: Reflections on the Bodily Basis of Consciousness. In J. Stewart, O. Gapenne, & E. A. Di Paolo (Eds.), Enaction: Towards a new paradigm for cognitive science (pp. 361–385). MIT Press.
  • Froese, T. (2017). Life is Precious Because it is Precarious: Individuality, Mortality and the Problem of Meaning. In G. Dodig-Crnkovic & R. Giovagnoli (Eds.), Representation and Reality in Humans, Other Living Organisms and Intelligent Machines (pp. 33–50). Springer International Publishing. https://doi.org/10.1007/978-3-319-43784-2_3
  • Kirchhoff, M. D., & Kiverstein, J. (2020). Attuning to the World: The Diachronic Constitution of the Extended Conscious Mind. Frontiers in Psychology, 11. https://www.frontiersin.org/articles/10.3389/fpsyg.2020.01966
  • Kriegel, U. (2019). The Value of Consciousness. Analysis, 79(3), 503–520. https://doi.org/10.1093/analys/anz045
  • Lee, A. Y. (2023). Degrees of Consciousness. Noûs, 57(3), 553–575. https://doi.org/10.1111/nous.12421
  • Melloni, L., Mudrik, L., Pitts, M., Bendtz, K., Ferrante, O., Gorska, U., Hirschhorn, R., Khalaf, A., Kozma, C., Lepauvre, A., Liu, L., Mazumder, D., Richter, D., Zhou, H., Blumenfeld, H., Boly, M., Chalmers, D. J., Devore, S., Fallon, F., … Tononi, G. (2023). An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE, 18(2), e0268577. https://doi.org/10.1371/journal.pone.0268577
  • Metzinger, T. K. (2021). Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology. Journal of Artificial Intelligence and Consciousness, 8(1), 43–66. https://doi.org/10.1142/S270507852150003X
  • Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.
  • Negro, N. (2024). (Dis)confirming theories of consciousness and their predictions: Towards a Lakatosian consciousness science. Neuroscience of Consciousness, 2024(1), niae012. https://doi.org/10.1093/nc/niae012
  • Northoff, G., & Lamme, V. (2020). Neural signs and mechanisms of consciousness: Is there a potential convergence of theories of consciousness in sight? Neuroscience & Biobehavioral Reviews, 118, 568–587. https://doi.org/10.1016/j.neubiorev.2020.07.019
  • Papineau, D. (2002). Thinking about consciousness. Oxford University Press. https://doi.org/10.1093/0199243824.003.0008
  • Searle, J. R. (2017). Biological Naturalism. In The Blackwell Companion to Consciousness (pp. 327–336). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781119132363.ch23
  • Seth, A. K., & Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23(7), 439–452. https://doi.org/10.1038/s41583-022-00587-4
  • Shea, N., & Bayne, T. (2010). The Vegetative State and the Science of Consciousness. The British Journal for the Philosophy of Science, 61(3), 459–484. https://doi.org/10.1093/bjps/axp046
  • Shepherd, J. (2024). Sentience, Vulcans, and zombies: The value of phenomenal consciousness. AI & SOCIETY. https://doi.org/10.1007/s00146-023-01835-6
  • Shevlin, H. (2021). Non-human consciousness and the specificity problem: A modest theoretical proposal. Mind & Language, 36(2), 297–314. https://doi.org/10.1111/mila.12338
  • Simon, J. A. (2017). Vagueness and zombies: Why ‘phenomenally conscious’ has no borderline cases. Philosophical Studies, 174(8), 2105–2123. https://doi.org/10.1007/s11098-016-0790-4
  • Taylor, H. (2023). Consciousness as a natural kind and the methodological puzzle of consciousness. Mind & Language, 38(2), 316–335. https://doi.org/10.1111/mila.12413
  • Tye, M. (2016). Qualia. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 ed.). Metaphysics Research Lab.
  • Wiese, W. (2018). Toward a Mature Science of Consciousness. Frontiers in Psychology, 9, 693. https://doi.org/10.3389/fpsyg.2018.00693

Lecturer

PD Dr. Wanja Wiese received his PhD at Johannes Gutenberg University Mainz. He is currently a researcher and lecturer at Ruhr University Bochum. His research focuses on philosophy of (artificial) consciousness and AI.

Affiliation: Ruhr University Bochum
Homepage: https://homepage.ruhr-uni-bochum.de/wanja.wiese/