
Write a PREreview

Observer State in Large Language Models: The Failure of AI Reasoning and Conceptual Logic

Published
Server
Preprints.org
DOI
10.20944/preprints202512.1073.v1

Large language models perform well on tasks that depend on surface patterns and linguistic continuation; however, they show consistent and well-documented failures when asked to carry out conceptual reasoning. The limitation is structural rather than incidental. Conceptual reasoning requires an observer-state, understood here as a computational vantage point rather than any claim about consciousness: a persistent position that can evaluate its own outputs, compress contradictions, maintain a stance, and assign meaning across contexts. Current large language models lack such a vantage point because their architecture is built around token prediction rather than conceptual anchoring. To examine this constraint, the paper develops a theoretical analysis together with a set of conceptually motivated predictions involving boundary crossing, stance maintenance, contradiction handling, and frame-shift reasoning. These predicted failure patterns align with limitations documented in prior work and are organized within the Observer Ceiling Model, a three-layer constraint system. The architectural layer reflects the absence of mechanisms that could support a persistent observer-state. The substrate layer reflects the dependence on regularities learned during training, which restricts a model’s ability to select contexts or initiate conceptual shifts. The policy layer reflects alignment and safety constraints that interrupt reasoning whenever perspective-taking or stance formation is attempted. Together, these layers form a structural ceiling on conceptual reasoning. The model clarifies why scaling and additional training do not resolve these limitations and why current large language models remain unsuitable for tasks that require conceptual integration, observer-anchored judgment, or meaning assigned across frames.

You can write a PREreview of Observer State in Large Language Models: The Failure of AI Reasoning and Conceptual Logic. A PREreview is a review of a preprint and can range from a few sentences to a lengthy report, similar to a peer-review report organized by a journal.

Before you start

We will ask you to log in with your ORCID iD. If you do not have an iD, you can create one.

What is an ORCID iD?

An ORCID iD is a unique identifier that distinguishes you from others with the same or a similar name.

Start now