A Developmental Framework for Symbolic Cognition in Neural Learning Systems

Significance 

Artificial intelligence has become very good at perceptual tasks. Feed a network enough images, audio clips, or text, and it will learn to produce outputs that look competent. In certain narrow contexts, these systems now perform at a level that rivals human benchmarks. But that surface success hides a deeper issue that many practitioners quietly recognize: these models are powerful pattern matchers, yet they remain largely silent on how understanding itself takes shape. They predict well, but they do not explain themselves, and they offer no account of how raw perception might mature into abstract thought.

Most deep learning systems operate by locking in statistical associations extracted from large, carefully curated datasets. Once trained, the mapping is essentially frozen. It can interpolate, and sometimes extrapolate, but it does not grow. Meaning, symbols, and causal structure are assumed to exist upstream, embedded in labels or objectives supplied by the designer. Human cognition unfolds very differently. Early understanding is messy, embodied, and incomplete. Infants do not reason symbolically; they probe, fail, repeat, and gradually compress experience into more stable concepts. Psychologists have described this progression for decades, yet algorithmic learning frameworks have absorbed surprisingly little of that developmental perspective.

This disconnect becomes harder to ignore as artificial intelligence pushes toward more general capabilities. Architectural tweaks and optimization tricks cannot answer how symbols emerge, why some representations become explanatory, or how internal models acquire causal traction. Current systems remain dependent on externally imposed goals, which limits their capacity for genuine conceptual development. In that sense, today's intelligence is engineered rather than formed.
Closing this gap may require abandoning the idea that cognition is something we can simply train in one step, and instead treating it as something that must be allowed to develop.

To this end, a new research paper published in Neural Networks by Professor DongCai Zhao of the Key Laboratory of Green Fabrication and Surface Technology of Advanced Metal Materials at Anhui University of Technology develops a Cognitive Process and Information Processing Model that organizes artificial cognition into four developmental stages: perception, mapping, symbolization, and essential causality. The new framework integrates deep learning algorithms as functional components within a staged cognitive trajectory rather than as isolated predictors. Crucially, it demonstrates how symbolic reasoning and abstract causal understanding can emerge without explicit symbolic programming.

Professor DongCai Zhao's model divides cognition into four sequential stages, each associated with a distinct database and processing logic, allowing learning outcomes to accumulate rather than reset across experiences.

At the initial stage, cognition is driven solely by perception and intrinsic preference. Inputs are sensed and acted upon without retention, producing behavior but no reusable knowledge. This stage establishes the motivational and sensory foundations of learning, analogous to early exploratory behavior. Importantly, no symbols or abstractions are present, and each interaction remains isolated from future ones.

The second stage introduces mapping through repeated exposure. Here, deep learning algorithms associate perceptual inputs with consistent internal representations. Through iterative training, stable correspondences emerge between sensory patterns and locations in a representational space. The findings emphasize that this process mirrors conventional neural network training but remains fundamentally non-symbolic: knowledge exists only as learned associations, inseparable from perception and incapable of independent manipulation.

A qualitative shift occurs with the onset of symbolization. The model demonstrates how the outputs of learned mappings can themselves become symbolic entities, detached from immediate perception. Symbols inherit the relationships established during mapping, allowing sequences and associations to be recombined internally. A thought experiment involving a simple agent illustrates that once symbolization occurs, learning accelerates: training shifts from raw perception to symbol-symbol relationships, enabling rapid generalization and imaginative recombination.

The final stage concerns the emergence of essential causality. By repeatedly training over symbolic relationships, the agent begins to infer stable patterns that can be expressed as abstract relations. Mathematical regularities are presented as a canonical example: when symbolic variables consistently co-vary, predictive relationships emerge that can be formalized as functional expressions. At this point, cognition transitions from recognizing phenomena to articulating explanatory principles.

What emerges across the proposed stages is not a story about improved benchmarks or higher task accuracy, but something more subtle and arguably more important: a shift in how learning itself is structured. Knowledge no longer remains tethered to fleeting perceptual encounters. Instead, it stabilizes, gradually detaching from immediate sensory input and taking on a more abstract, durable form. At that point, reasoning becomes possible even in the absence of direct perception. This change in learning modality is the core contribution of the model. It suggests that deep learning algorithms, when placed within a developmental sequence rather than a single training loop, can support symbolic reasoning without relying on explicitly engineered symbolic rules. Learning, in this sense, is no longer confined to data ingestion alone, but extends to the reuse and transformation of internally generated representations.
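The staged trajectory can be conveyed with a deliberately simplified sketch. The code below is illustrative only, not the paper's implementation: a rounding step stands in for a trained mapping network, the discrete outputs play the role of symbols, and a least-squares fit over those symbols stands in for inferring an "essential" functional relation from co-varying symbolic variables. All function names (`map_percept`, `symbolize`, `infer_linear_relation`) and the data are hypothetical.

```python
# Illustrative sketch of the four-stage trajectory:
# perception -> mapping -> symbolization -> essential causality.
# Not the paper's implementation; all names and data are invented.

# Stage 2 (mapping): a trained network would assign each raw percept a
# stable internal representation; here, rounding stands in for that map.
def map_percept(value, bin_width=1.0):
    """Collapse a continuous percept onto a discrete representation."""
    return round(value / bin_width)

# Stage 3 (symbolization): mapped outputs become reusable tokens that
# can be manipulated without re-observing the raw input.
def symbolize(percepts):
    return [map_percept(v) for v in percepts]

# Stage 4 (essential causality): when two symbol streams co-vary
# consistently, express the regularity as a functional relation
# y = a*x + b via ordinary least squares over the symbols themselves.
def infer_linear_relation(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Noisy observations generated by a hidden rule y = 2x + 1.
raw_x = [0.9, 2.1, 2.95, 4.05, 5.1]
raw_y = [3.1, 4.9, 7.05, 8.9, 11.1]

sym_x = symbolize(raw_x)   # -> [1, 2, 3, 4, 5]
sym_y = symbolize(raw_y)   # -> [3, 5, 7, 9, 11]

a, b = infer_linear_relation(sym_x, sym_y)
print(f"inferred relation: y = {a:.1f}x + {b:.1f}")  # y = 2.0x + 1.0
```

The point of the toy example is the order of operations: the functional relation is recovered from the symbols, not from the raw percepts, mirroring the paper's claim that later-stage learning operates on internally generated representations rather than on fresh sensory data.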

The broader significance of Professor DongCai Zhao’s work lies in its reframing of artificial intelligence as a developmental process rather than a static optimization exercise. Much of the field has implicitly assumed that intelligence scales with larger datasets, deeper architectures, or more refined objective functions. This model quietly pushes back against that assumption. It argues that how information is organized over time and how representations are allowed to evolve may matter just as much as raw computational capacity. One of the most immediate implications concerns the status of symbols in artificial systems. Traditionally, symbols enter from the outside, imposed by designers or embedded in labels. Here, they arise from perceptual learning itself. That distinction matters. When symbols grow out of sensory experience, they retain a traceable lineage, linking abstraction back to observation. This has clear implications for explainable AI, where opacity often stems from representations that lack any interpretable grounding. A developmental trajectory makes reasoning easier to follow, not because it is simpler, but because its origins are visible.

Notably, the framework does not demand new algorithms. Instead, it reorganizes familiar tools within a cognitive hierarchy. This suggests that progress toward more general intelligence may depend less on architectural novelty than on how learning phases are staged and connected. Allowing systems to train on their own symbolic outputs marks a meaningful departure from conventional feed-forward pipelines. Beyond engineering, the work also reconnects artificial intelligence with long-standing questions in psychology and epistemology, particularly the transition from experience to explanation. By making that transition algorithmically explicit, the new study opens space for conversations that have largely fallen outside mainstream AI research. At the same time, it raises practical and ethical questions. Systems that generate their own abstractions may adapt in unforeseen ways, making early value alignment and intrinsic constraints not optional design choices, but foundational ones.

About the author

Zhao Dongcai, Professor, Anhui University of Technology

Research on the deposition technologies of metal films, oxide films, and hard films;
Research on the manufacturing technologies of physical vapor deposition equipment;
Interdisciplinary research at the intersection of artificial intelligence and cognitive science.

Reference

DongCai Zhao, Cognitive process and information processing model based on deep learning algorithms, Neural Networks, Volume 183, 2025, 106999.

