Self-Learning Neural Architectures Inspired by the Human Brain
Abstract
The human brain is capable of self-learning: it acquires knowledge and adapts to its environment as the need arises. Inspired by this capability, this study explores an Enhanced Self-Learning Neural Architecture (ESLNA) that simulates neuroplasticity, the brain's ability to change its structure and function based on prior experience, allowing the network to operate autonomously with a limited need for labeled data. Our approach combines hierarchical feature abstraction with dynamic synaptic updates, learned in a reinforcement-theoretic framework, to encourage efficient generalization across tasks. Unlike existing deep learning models, which rely on static weight updates, ESLNA enables real-time learning with minimal supervision through neuro-inspired self-adaptive mechanisms. We demonstrate empirically that this yields few-shot learning, transfer learning, and lifelong adaptation capabilities that surpass conventional neural architectures in robustness and efficiency. This work marks a milestone in brain-inspired AI, offering a general prototype for building more intelligent, flexible, and useful artificial systems.
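As a rough illustration of what "dynamic synaptic updates in a reinforcement-theoretic framework" can mean in practice, the sketch below implements a reward-modulated Hebbian rule: the usual Hebbian correlation term is gated by a scalar reward, so connections strengthen only when the outcome was rewarded. This is a minimal, generic sketch under our own assumptions; the function name `plasticity_step`, the learning rate, and the decay constant are illustrative and are not taken from the ESLNA paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def plasticity_step(W, x, reward, eta=0.05):
    """One self-adaptive weight update (illustrative, not the paper's rule).

    The Hebbian term (post-synaptic * pre-synaptic activity) is scaled by
    a scalar reward signal; a small decay term keeps the weights bounded.
    """
    y = np.tanh(W @ x)                   # post-synaptic activity
    dW = eta * reward * np.outer(y, x)   # reward-gated Hebbian term
    dW -= eta * 0.1 * W                  # weight decay for stability
    return W + dW

# Toy usage: reinforce the first output unit on a fixed input pattern.
W = rng.normal(scale=0.1, size=(3, 4))
x = np.array([1.0, 0.5, -0.5, 0.2])
y_init = np.tanh(W @ x).copy()

for _ in range(200):
    y = np.tanh(W @ x)
    reward = 1.0 if y[0] > 0 else -1.0   # crude scalar feedback signal
    W = plasticity_step(W, x, reward)
```

Because the reward depends only on the sign of the first output, the first row of `W` is steadily pushed along the input direction, so that unit's activation grows over training while the decay term prevents unbounded weights.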