The goal of the NeurAdapt project is to explore a new path in the design of deep Convolutional Neural Networks (CNNs), one that could enable a family of more efficient and adaptive models for any application that relies on the predictive capabilities of deep learning. Inspired by recent advances in the study of biological interneurons, which highlight the importance of inhibition and random connectivity for the encoding efficiency of neuronal circuits, we aim to investigate mechanisms that could impart similar qualities to artificial CNNs.
Established techniques such as Channel Gating, Channel Attention and calibrated dropout, each proposed independently and with different objectives, offer the tools to formulate a novel building block for CNN models that expands the functional diversity of the standard convolutional layer. By formulating a differentiable convolutional operator with additional mechanisms for competitive inhibition/excitation and for stochastic activation with a tunable probability, we pursue the hypothesis that optimizing the learning task will drive the model to create information-rich modes, in a process similar to the one observed in biological neural networks.
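To make the mechanism concrete, the following is a minimal toy sketch (not the project's actual operator) of a convolutional output passed through a gating branch in which channels compete via a softmax, so that strongly driven channels suppress the rest, and each channel is then kept or dropped stochastically. All names, shapes and the mixing matrix `w` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_channel_gate(x, w, temperature=1.0):
    """Toy forward pass of a stochastically gated feature map.

    x: feature map of shape (C, H, W), e.g. the output of a convolution.
    w: (C, C) mixing matrix of the gating branch (hypothetical parameters).
    Channels compete through a softmax over their pooled responses:
    one channel's gain comes at the expense of the others (a crude stand-in
    for excitation/inhibition). The result is a per-channel keep-probability
    from which a binary gate is sampled.
    """
    c = x.shape[0]
    pooled = x.mean(axis=(1, 2))             # global average pool, shape (C,)
    logits = w @ pooled                       # competitive channel interactions
    p = np.exp(logits / temperature)
    p = p / p.sum()                           # softmax: competition across channels
    keep_prob = np.clip(c * p, 0.0, 1.0)      # rescale so an average channel has prob ~1
    gate = rng.random(c) < keep_prob          # stochastic activation
    return x * gate[:, None, None], keep_prob

x = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((8, 8)) * 0.5
y, keep_prob = stochastic_channel_gate(x, w)
```

In a trained model the sampling step would need a differentiable relaxation (e.g. a straight-through or Gumbel-style estimator) so that gradients can shape the keep-probabilities; the sketch shows only the forward behavior.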
Furthermore, the stochastic nature of neuronal activity, if appropriately modeled and augmented with sparsity-inducing mechanisms, has the potential to enable the training of models with a parametrized level of sparsity, offering the capacity to control the inference-quality/complexity tradeoff on the fly, without any additional fine-tuning. Such functionality can impart new capabilities to information processing systems based on CNNs: inference with an adjustable level of thoroughness/speed can enable applications such as rapid content search in large media databases, energy-aware decision-making, and more.
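The on-the-fly tradeoff described above can be illustrated with a small hypothetical example: once per-channel keep-probabilities have been learned, inference at a chosen compute budget simply keeps the most reliable channels and skips the rest, with no retraining. The probabilities below are made-up numbers and the function name is an assumption, not project code.

```python
import numpy as np

def gate_at_budget(keep_prob, x, budget):
    """Deterministic inference at a run-time compute budget.

    keep_prob: learned per-channel keep-probabilities (illustrative values here).
    budget: fraction of channels to evaluate, tunable at inference time.
    Only the top `budget` fraction of channels (ranked by probability) stays
    active, so the same trained model trades thoroughness for speed on the fly.
    """
    c = len(keep_prob)
    k = max(1, int(round(budget * c)))
    active = np.argsort(keep_prob)[-k:]       # indices of the k most reliable channels
    mask = np.zeros(c)
    mask[active] = 1.0
    return x * mask[:, None, None], k

keep_prob = np.array([0.9, 0.1, 0.7, 0.4, 0.8, 0.2, 0.6, 0.3])
x = np.ones((8, 2, 2))
y_fast, k_fast = gate_at_budget(keep_prob, x, budget=0.25)  # low-budget pass
y_full, k_full = gate_at_budget(keep_prob, x, budget=1.0)   # full-capacity pass
```

In a real CNN the masked channels would be skipped entirely (saving compute), rather than multiplied by zero as in this sketch.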
The main outcome of the proposed project will be a new methodology for designing efficient deep CNN architectures, regardless of the specific task and target domain. Furthermore, NeurAdapt aims to create new knowledge regarding the dynamic behavior of excitation-inhibition mechanisms in feed-forward DNNs, their potential for further development and their respective limitations.
Development of a bio-inspired, resource-efficient approach for designing Deep Learning models