Document Type : Research Article
Electrical Engineering Department, Amirkabir University of Technology
Faculty of Mathematical Sciences, Shahid Beheshti University
Backpropagation is the most widely used algorithm for training conventional neural networks with deep architectures. Here we propose DS4NN, a temporal backpropagation method for deep spiking neural networks with one spike per neuron. We consider a convolutional spiking neural network consisting of simple non-leaky integrate-and-fire (IF) neurons, together with time-to-first-spike temporal coding, in which each neuron is allowed to fire at most once within a fixed time interval (here, the simulation duration). Together, these features reduce the computational cost and increase the speed of the network. We use a surrogate gradient at firing times to overcome the non-differentiability of spike times with respect to the membrane potential of spiking neurons, and, to prevent the emergence of dead neurons in deep layers, we propose a relative encoding scheme for determining the desired firing times. Evaluations on two classification tasks, the MNIST and Fashion-MNIST datasets, confirm the capability of DS4NN on deep SNN structures. It achieves accuracies of 99.3% (99.8%) and 91.6% (95.3%) on the testing (training) samples of MNIST and Fashion-MNIST, respectively, with an average of 1126 and 1863 spikes in the whole network. This shows that the proposed approach can make fast decisions with low computational cost and high accuracy.
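To make the abstract's core mechanism concrete, the following is a minimal sketch (not the paper's implementation) of a layer of non-leaky IF neurons under time-to-first-spike coding: each neuron integrates weighted input spikes over a discrete simulation window and emits at most one spike, at the first step its membrane potential crosses the threshold. All names and parameters (`threshold`, `T`, the sentinel value for silent neurons) are illustrative assumptions, not from the paper.

```python
import numpy as np

def if_first_spike_times(weights, input_spike_times, threshold=1.0, T=256):
    """Simulate one layer of non-leaky integrate-and-fire (IF) neurons with
    time-to-first-spike coding. Illustrative sketch only; parameter names
    are assumptions, not taken from DS4NN.

    weights           : (n_out, n_in) synaptic weight matrix
    input_spike_times : (n_in,) firing step of each input neuron; T if silent
    Returns (n_out,) first-spike steps; T marks neurons that never fire.
    """
    n_out, _ = weights.shape
    potential = np.zeros(n_out)      # membrane potentials, start at rest
    out_times = np.full(n_out, T)    # T acts as a "no spike" sentinel
    for t in range(T):
        # Inputs firing at this step inject their weights into each neuron.
        active = (input_spike_times == t)
        potential += weights[:, active].sum(axis=1)
        # Neurons crossing threshold for the first time fire their one spike.
        newly = (potential >= threshold) & (out_times == T)
        out_times[newly] = t
    return out_times
```

Under this coding, an earlier first-spike time signals stronger evidence, so a classifier can simply read out the earliest-firing output neuron; the surrogate-gradient step described in the abstract would then approximate the derivative of these (non-differentiable) spike times with respect to the membrane potential during training.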