Brain plasticity refers to the capacity of the nervous system to change its structure and function over a lifetime, in response to changes in its environment.
A basic drawback of deep learning networks is their computational complexity and the associated power consumption. Because of this, engineers in energy-sensitive fields such as IoT and mobile are reluctant to use them, and their adoption remains somewhat limited. For this reason, optimizing the computational efficiency of deep learning networks is becoming increasingly important.
There have been various attempts to improve the efficiency of deep learning networks. Some are based on new model architectures (e.g. SqueezeNet), others on efficient pruning of the model, and others on optimizing the constituent operations of the network (e.g. the Winograd algorithm).
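To make the pruning idea a bit more concrete, here is a minimal sketch of magnitude-based weight pruning in Python/NumPy. It only illustrates the general principle of zeroing out the smallest weights; it is not the specific method used by any particular work mentioned above, and the matrix and sparsity level are made up for the example.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly the fraction `sparsity` of weights with the smallest magnitude.

    The intuition: small weights contribute little to the output, so removing
    them can save computation and storage with limited accuracy loss.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# Example: prune ~70% of a randomly initialised 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
print(magnitude_prune(w, sparsity=0.7))
```

In practice, pruning is usually followed by fine-tuning the remaining weights to recover accuracy, but the core operation is as simple as the sketch above.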
At Irida Labs, we followed a completely different approach: we tried to mimic mother nature, and specifically the human brain. The human brain uses around 15 kilocalories per hour, roughly the amount of energy a quad-core CPU consumes in 20 minutes (around 20 watt-hours). So the brain uses about 3 times less power while supporting roughly 40,000 times more synapses than the CPU has transistors. One could argue, therefore, that the brain is over 100,000 times more energy efficient. So, clearly, by observing the brain we can learn something…
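For the curious, here is a quick back-of-the-envelope check of that comparison in Python. The inputs (15 kcal/hour for the brain, 20 Wh per 20 minutes for a quad-core CPU, and a 40,000:1 synapse-to-transistor ratio) are simply the rough estimates quoted above, not measured values.

```python
# Back-of-the-envelope check of the brain-vs-CPU comparison.
# All inputs are the approximate figures quoted in the text, not measurements.

KCAL_TO_WH = 1.163  # 1 kcal = 1.163 watt-hours

brain_power_w = 15 * KCAL_TO_WH              # ~17.4 W average (15 kcal per hour)
cpu_power_w = 20 / (20 / 60)                 # ~60 W (20 Wh consumed in 20 minutes)

power_ratio = cpu_power_w / brain_power_w    # CPU draws ~3.4x more power
synapse_to_transistor_ratio = 40_000         # assumed ratio from the text

efficiency_ratio = power_ratio * synapse_to_transistor_ratio

print(f"Brain power: {brain_power_w:.1f} W")
print(f"CPU power:   {cpu_power_w:.1f} W")
print(f"Power ratio (CPU / brain): {power_ratio:.1f}x")
print(f"Rough efficiency advantage: {efficiency_ratio:,.0f}x")  # well over 100,000x
```

Running this gives an efficiency advantage of roughly 140,000x, which is where the "over 100,000 times" figure comes from.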