Technology

The trap of language models: apparent consciousness, not real consciousness

In recent years, large language models (LLMs) such as ChatGPT have revolutionized human-machine interaction. At the same time, this innovation has sparked a philosophical and ethical debate about whether these systems possess any kind of consciousness. Are we dealing with machines that understand, or simply with tools that generate convincing words?

The illusion of consciousness and the ELIZA effect

The tendency to attribute consciousness to machines is not new. In 1966, the ELIZA program, designed to simulate a therapist, surprised users by generating responses that seemed to reflect understanding.

This phenomenon, known as the ELIZA effect, is amplified in today’s LLMs, which not only answer questions but also generate texts with narrative coherence, humor, and cultural references. However, these systems do not understand the concepts they express. Philosopher Douglas Hofstadter describes this ability as “superficial fluency”: the capacity to construct sentences without any reflection or real consciousness behind them.

In other words, what appears to be deep thought is only a linguistic illusion.

Consciousness: beyond language

Consciousness, as philosopher Thomas Nagel argues in his essay “What Is It Like to Be a Bat?”, involves a subjective point of view and an internal experience that machines cannot replicate. LLMs lack internal experiences, emotions, or intentionality. Although they can talk about love or fear, they do not feel or understand these concepts.

Furthermore, the philosopher John Searle illustrated this lack of understanding with his Chinese Room thought experiment. In it, a person with no knowledge of Chinese can answer questions in that language by following syntactic rules, but without ever understanding their meaning.

Similarly, LLMs generate text without semantic understanding or communicative intent.

Language models generate words, but they lack consciousness and intentionality.

The role of the body and experience in consciousness

Maurice Merleau-Ponty’s phenomenology emphasizes that consciousness is intrinsically linked to the body and the embodied experience of the world. According to this perspective, to claim that a system without a body and without experience can be conscious is to ignore the essential conditions of consciousness.

LLMs are systems without a body or a lived world. Although they can articulate phrases about beauty or suffering, they cannot experience what those words describe. This reinforces the idea that their apparent consciousness is only a projection of our human expectations.

The mirror trap: projecting humanity onto machines

The real problem lies not with machines, but with humans. As Hofstadter points out, we tend to project our own experiences onto machines, seeing consciousness where there are only words. This phenomenon, called the “mirror trap,” can have significant ethical and social consequences, such as developing emotional bonds with machines or legitimizing automated decisions based on simulations of empathy.

Continue your professional career

If you are interested in exploring the ethical and technological aspects of artificial intelligence, consider the Master’s Degree in Strategic Management with a Specialization in Information Technology. This program will prepare you to face ethical challenges in an increasingly digitalized world.

Source: The trap of large language models: seeing consciousness where there are only words – The Conversation
