A Caveat Regarding the Unfolding Argument: Implications of Plasticity for Computational Theories of Consciousness
- Server: bioRxiv
- DOI: 10.1101/2025.11.04.686457
The unfolding argument in the neuroscience of consciousness posits that causal structure cannot account for consciousness because any recurrent neural network (RNN) can be “unfolded” into a functionally equivalent feedforward neural network (FNN) with identical input-output behavior. Subsequent debate has focused on dynamical properties and on critiques from the philosophy of science. We examine a novel caveat to the unfolding argument for RNN systems whose connection weights are plastic. Through rigorous mathematical proofs, we demonstrate that plasticity negates the functional equivalence between RNN and FNN. Our proofs address history-dependent plasticity, dynamical-systems analysis, information-theoretic considerations, perturbational stability, complexity growth, and resource limitations. Neuronal systems with plasticity, history dependence, and complex temporal information encoding have features that no static FNN can capture. Our findings delineate limitations of the unfolding argument that apply if consciousness arises from temporally extended dynamic neural processes rather than from static input-output mappings. This work provides new constraints for theories of consciousness, with broader implications for computational neuroscience and the philosophy of mind.
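As a minimal illustration (our own sketch, not the paper's formal proofs), the Python snippet below contrasts a fixed-weight RNN, which an unfolded FNN with the same snapshot weights reproduces exactly, with a plastic RNN whose Hebbian-style weight update (an assumed rule, chosen only for illustration) makes its input-output map drift away from any static unfolding:

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot of the recurrent system's weights.
n = 4
W_rec = rng.normal(scale=0.5, size=(n, n))
W_in = rng.normal(scale=0.5, size=n)

def run_rnn(xs, W_rec, W_in, eta=0.0):
    """Run the RNN over a sequence; eta > 0 enables a Hebbian weight update."""
    W = W_rec.copy()
    h = np.zeros(n)
    for x in xs:
        h = np.tanh(W @ h + W_in * x)
        if eta:
            W += eta * np.outer(h, h)  # plasticity: weights become history-dependent
    return h

def run_unfolded_fnn(xs, W_rec, W_in):
    """'Unfolded' FNN: one feedforward layer per timestep, frozen snapshot weights."""
    h = np.zeros(n)
    for x in xs:  # each iteration corresponds to one static feedforward layer
        h = np.tanh(W_rec @ h + W_in * x)
    return h

seq = rng.normal(size=5)

# Without plasticity, the unfolded FNN reproduces the RNN exactly.
assert np.allclose(run_rnn(seq, W_rec, W_in, eta=0.0),
                   run_unfolded_fnn(seq, W_rec, W_in))

# With plasticity, the RNN's input-output map drifts away from the static FNN.
out_plastic = run_rnn(seq, W_rec, W_in, eta=0.1)
out_fnn = run_unfolded_fnn(seq, W_rec, W_in)
print(np.max(np.abs(out_plastic - out_fnn)))  # nonzero discrepancy
```

In miniature, this mirrors the abstract's claim: the unfolded FNN is equivalent only to a single weight snapshot of the recurrent system, and once the weights are history-dependent, that equivalence is transient.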