
A Practical Tutorial on Spiking Neural Networks: Comprehensive Review, Models, Experiments, Software Tools, and Implementation Guidelines

Published
Server
Preprints.org
DOI
10.20944/preprints202509.2072.v1

Spiking Neural Networks (SNNs) provide a biologically inspired, event-driven alternative to Artificial Neural Networks (ANNs), with the potential to deliver competitive accuracy at substantially lower energy cost. This tutorial study offers a unified, practice-oriented assessment combining a critical review with standardized experiments. We benchmark a shallow Fully Connected Network (FCN) on MNIST and a deeper VGG7 architecture on CIFAR-10 across multiple neuron models (Leaky Integrate-and-Fire (LIF), Sigma-Delta, etc.) and input encodings (direct, rate, temporal, etc.) using supervised surrogate-gradient training, implemented with Intel Lava/SLAYER, SpikingJelly, Norse, and PyTorch. Empirically, we observe a consistent but tunable trade-off between accuracy and energy. On MNIST, Sigma-Delta neurons with rate or Sigma-Delta encodings reach 98.1% (ANN: 98.23%). On CIFAR-10, Sigma-Delta neurons with direct input achieve 83.0% at just 2 time steps (ANN: 83.6%). A GPU-based operation-count energy proxy indicates that many SNN configurations operate below the ANN energy baseline; some frugal codes minimize energy at the cost of accuracy, whereas accuracy-leaning settings (e.g., Sigma-Delta with direct or rate coding) narrow the performance gap while remaining energy-conscious, yielding up to 3-fold energy-efficiency gains over matched ANNs in our setup. Thresholds and the number of time steps are decisive: intermediate thresholds, combined with the shortest time window that still meets accuracy targets, typically maximize energy efficiency. We distill actionable design rules: choose the neuron/encoding pair by application goal (accuracy-critical vs. energy-constrained) and co-tune thresholds and time steps. Finally, we outline how event-driven neuromorphic hardware can amplify these savings through sparse, local, asynchronous computation, providing a practical playbook for embedded, real-time, and sustainable AI deployments.
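To make the neuron-model terminology concrete for readers new to SNNs, the discrete-time dynamics of the LIF neuron mentioned in the abstract can be sketched in a few lines of plain Python. This is a generic illustration of the standard LIF update (leaky integration, threshold comparison, soft reset), not the paper's implementation; the decay factor `beta` and `threshold` values are arbitrary assumptions chosen for demonstration.

```python
def lif_step(v, i_in, beta=0.9, threshold=1.0):
    """One discrete time step of a Leaky Integrate-and-Fire neuron.

    v: membrane potential carried over from the previous step
    i_in: input current at this step
    beta: membrane leak (decay) factor, 0 < beta < 1
    threshold: firing threshold; crossing it emits a spike
    """
    v = beta * v + i_in          # leaky integration of input
    spike = 1 if v >= threshold else 0
    if spike:
        v -= threshold           # soft reset: subtract the threshold
    return v, spike

def run_lif(inputs, beta=0.9, threshold=1.0):
    """Simulate a single LIF neuron over a sequence of input currents."""
    v, spikes = 0.0, []
    for i_in in inputs:
        v, s = lif_step(v, i_in, beta, threshold)
        spikes.append(s)
    return spikes

# A constant sub-threshold input still fires periodically once
# enough charge accumulates across time steps:
print(run_lif([0.6] * 5))  # → [0, 1, 0, 1, 0]
```

The spike count over the time window is the quantity behind the abstract's operation-count energy proxy: fewer spikes (e.g., from a higher threshold or shorter time window) mean fewer downstream synaptic operations, which is the lever the paper's threshold/time-step co-tuning exploits.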
