
Deep Neural Networks: Adding Neural Plasticity to Boost Computing Efficiency

Biologically inspired computing for efficient deep learning

Brain plasticity refers to the capacity of the nervous system to change its structure and function over a lifetime, in response to environmental diversity.

A basic drawback of deep learning networks is their computational complexity and the associated power consumption. Because of this, engineers in energy-sensitive fields such as IoT and mobile are reluctant to use them, and their adoption remains somewhat limited. Optimizing the computational efficiency of deep learning networks is therefore becoming increasingly important.

There have been various attempts to improve the efficiency of deep learning networks. Some are based on the development of new model structures (e.g. SqueezeNet), others on efficient pruning of the model, and others on optimizing the constituent operations of the network (e.g. the Winograd convolution algorithm).

At Irida Labs we followed a completely different approach: we tried to mimic Mother Nature, and specifically the human brain. The human brain uses around 15 kilocalories per hour, roughly the amount of energy a quad-core CPU uses in 20 minutes (about 20 watt-hours). So the brain draws about 3 times less power while supporting roughly 40,000 times more synapses than the CPU has transistors. One could argue, therefore, that the brain is over 100,000 times more energy efficient. Clearly, by observing the brain we can learn something…
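
As a rough back-of-envelope check of these figures (a sketch only: the synapse and transistor counts below are assumed order-of-magnitude values, not taken from the paper):

```python
# Back-of-envelope check of the brain-vs-CPU figures quoted above.
# Assumed (illustrative) counts: ~1e14 synapses, ~2.5e9 transistors in a quad-core CPU.

KCAL_TO_WH = 4184.0 / 3600.0               # 1 kcal = 4184 J, 1 Wh = 3600 J

brain_power_w = 15 * KCAL_TO_WH            # ~17 W, from "15 kilocalories per hour"
cpu_power_w = 20.0 / (20.0 / 60.0)         # ~60 W: about 20 Wh spent in 20 minutes

power_ratio = cpu_power_w / brain_power_w  # brain draws roughly 3x less power
unit_ratio = 1e14 / 2.5e9                  # ~40,000x more synapses than transistors

print(f"brain ≈ {brain_power_w:.0f} W, CPU ≈ {cpu_power_w:.0f} W")
print(f"overall efficiency factor ≈ {power_ratio * unit_ratio:,.0f}x")  # > 100,000x
```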

As for how it does this, the brain performs “switching” using two mechanisms:

  • Large molecules (proteins) change shape under electrostatic forces caused by a single atomic attachment (a single phosphate group from ATP, or a single neurotransmitter molecule docking with a receptor for a few milliseconds).
  • Charged atoms (mostly sodium ions) pass through tiny pores one at a time.

So our team devised a special mechanism to mimic the switching activity of the brain (patent pending). This mechanism uses a special module that learns to switch off part of the neural network (e.g. a convolution kernel), saving the time and energy of computing it.
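
The details of our module (the LKAM) are in the paper below. Purely to illustrate the concept, here is a minimal PyTorch-style sketch of a gating module that learns, from a layer's input, which convolution kernels can be switched off. The class names, the global-average-pool-plus-linear gate, and the sigmoid activation are illustrative assumptions, not the patented design:

```python
# Conceptual sketch only; the actual LKAM design is described in the paper.
import torch
import torch.nn as nn

class KernelSwitch(nn.Module):
    """Learns a per-kernel on/off gate for a convolution layer from its input."""
    def __init__(self, in_channels: int, num_kernels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # summarize the input feature map
        self.fc = nn.Linear(in_channels, num_kernels)  # one score per convolution kernel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.fc(self.pool(x).flatten(1))      # (batch, num_kernels)
        return torch.sigmoid(scores)                   # soft gates in [0, 1] for training

class GatedConv(nn.Module):
    """Convolution whose output channels are scaled (and potentially skipped) by the switch."""
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.switch = KernelSwitch(in_channels, out_channels)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = self.switch(x)                         # decide which kernels to keep
        y = self.conv(x)                               # in a real deployment, switched-off
        return y * gates[:, :, None, None]             # kernels would simply not be computed
```

During training the gates stay soft so gradients can flow; at inference they can be thresholded, and kernels whose gate is zero are never evaluated, which is where the time and energy savings come from.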

Download the full-length Paper

Continue reading to learn how we introduced the Learning Kernel-Activation Module (LKAM), our implementation of the learning switch module, the experiments we ran, and the resulting accuracy and method-compatibility findings.

This paper was published in March 2017.
