
A Brief Analysis on the Metaphysical Possibilities of Artificial General Intelligence

By:

Liu, David

Introduction

To address the metaphysical possibility of strong Artificial General Intelligence (AGI), we must first examine how AGI might possess contents of consciousness, how it may simulate these aspects, and why such simulations fall short of true consciousness. While AGI can simulate various aspects of consciousness, such as thoughts, intentions, and even a semblance of self-awareness, it fundamentally lacks the true essence of consciousness - subjective experience (qualia), genuine understanding, and self-awareness - making AGI metaphysically impossible to create.


Contents of Consciousness

The ‘contents of consciousness’ are the various mental states and experiences that comprise a conscious mind: thoughts, feelings, intentions, self-awareness, and subjective experiences, also known as qualia. To understand why AGI cannot possess these contents, we must first examine how AGI can potentially exhibit them - through simulation - and why such simulations fall short of true consciousness.

1. Thoughts and Feelings:

Functionalism holds that mental states - e.g., sadness, nostalgia, elation - are defined by their functional roles in the brain, that is, by the contributions those states make to individual behavior and to the overall functioning of the mind. Under this framework, AGI could replicate thoughts and feelings by performing functions identical to those of the human mind.

For instance, connectionist approaches take advantage of neural networks designed to mimic human mental processes and brain architecture, allowing AGI to express mental states such as emotions, cognitions, perceptions, and sensations. However, this replication is purely functional: AGI processes inputs and produces outputs that resemble human thoughts and feelings, but without any subjective experience. This corresponds to John Searle’s Chinese Room argument - the AGI follows syntactic rules without any accompanying subjective experience or true understanding of the content it is processing.

2. Qualia and Subjective Experience:

Qualia refer to the intrinsic, subjective experiences associated with consciousness, such as the experience of the senses. David Chalmers describes the ‘hard problem of consciousness’ as the challenge of explaining why and how physical processes in the brain give rise to subjective experience. Even if AGI could perfectly replicate the functional processes of the brain, this replication would not necessarily produce qualia.

Thomas Nagel argued that even if we could learn everything about the biology and behavior of a bat, we would still not know ‘what it is like’ to be a bat, since we are fundamentally human and would be trying to grasp a bat’s perception through human perceptions. Nagel’s example shows that consciousness is deeply tied to specific, subjective experiences - experiences that AGI, which can only process information but never truly understand it, will never possess.

3. Intentionality and Intrinsic Understanding:

Intentionality refers to the capacity of the mind to be directed toward something, such as forming intentions, goals, or desires. An AGI can be programmed to exhibit intentional behavior by being given specific tasks or objectives.

However, Searle’s Chinese Room argument still stands: the AGI does not understand what it is processing; it merely manipulates symbols and syntax without grasping the semantic content. ‘Understanding’ thus seems to be a fundamental characteristic that differentiates humans from AGI. Without understanding, an AGI’s actions are limited to the specific scenarios for which it was programmed or trained. It might perform well within those confines, but deviations from expected inputs leave it unable to adapt or respond; understanding lends flexibility and the ability to handle unforeseen circumstances. For example, a person who understands the principles of a language can generate new sentences and ideas, while an AGI trained on that one language without genuine understanding would struggle tremendously to ‘learn’ a second language.


Unities of Consciousness

The ‘unities of consciousness’ refer to the integration and coherence of various aspects of consciousness, such as sensory perceptions, cognitive processes, and emotional states. To address the metaphysical possibility of AGI achieving these unities of consciousness, we must identify how AGI might simulate these aspects and why such simulations fail to equate to true unity of consciousness.

1. Perspectival Unity:

Perspectival unity involves the integration of different contents of consciousness into a single, coherent experience from a unified perspective. For instance, when a person sees an object, hears a sound, and feels an emotion simultaneously, these experiences are not fragmented but are unified within one conscious perspective - the person's own subjective experience. Integrated Information Theory (IIT) suggests that consciousness arises from the integration of information within a system. According to IIT, AGI could theoretically achieve perspectival unity by processing and integrating vast amounts of data from various inputs, thus creating a unified ‘experience’ similar to a human’s.

However, an issue with this approach is that while AGI can integrate data to produce coherent outputs, this process does not involve any genuine subjective experience. The ‘unity’ in AGI is purely functional and lacks the phenomenological aspect - the ‘what it is like’ to experience something from that being’s perspective, human or otherwise. Without this subjective quality, the integration of data in AGI does not equate to true perspectival unity. This distinction is crucial because, in human consciousness, the unity of different sensory and cognitive experiences into a single perspective is intrinsically tied to subjective awareness, which AGI cannot replicate.

2. Self-awareness and Sense of Self:

Self-awareness involves the capacity to reflect on one's own mental states and to recognize oneself as a distinct entity with thoughts and feelings. Higher-Order Thought Theory states that self-awareness arises from thoughts about one’s own mental states (Stanford Encyclopedia of Philosophy, 2020). AGI can be programmed to exhibit behavior that mimics self-awareness, such as referring to itself or ‘reflecting’ on its performance.

However, we have already established that information processing does not equal understanding; AGI’s self-referential actions do not imply self-awareness. The sense of self in humans is defined by subjective experiences and qualia - elements that AGI, as mentioned previously, cannot possess.


Powers of Consciousness

1. The Power of Free Will:

Free will refers to the ability to make choices that are not determined by external factors or pre-existing conditions; it is a central power of consciousness that allows an individual to act autonomously and make decisions that reflect their beliefs. Free will is closely linked to moral responsibility, as it is assumed that individuals can be held accountable for their actions only if they have acted freely. Thus, by extension, if AGI possesses free will, it must be held morally responsible for its decisions.

However, the decision-making process in AGI is deterministic rather than genuinely free, since an AGI operates entirely within the confines of its programming and the data it processes. Humans, by contrast, possess meta-cognition, the ability to ‘think about their own thinking.’ This capacity for reflection introduces an unpredictability that AGI, operating within a closed loop of predetermined algorithms, lacks: AGI cannot deviate from the logical structure imposed by its code. Human free will involves the ability to choose against one’s inclinations, to reflect on one’s desires, and to act in ways that are not strictly determined by prior states. Thus, AGI cannot have genuine free will and cannot be held morally responsible for its decisions, since its behavior is determined by its programming.


Conclusion

In summary, while AGI can simulate aspects of consciousness like thoughts, intentions, and self-awareness through functional processes, it fundamentally lacks the subjective experiences, or qualia, that define true consciousness. The distinction between mere replication of behaviors and genuine understanding is the core of the argument against the metaphysical possibility of AGI achieving consciousness. This conclusion raises important questions about the ethical and philosophical implications of creating systems that mimic, but do not embody, the essence of conscious experience.

