A mean-field approach to criticality in spiking neural networks for reservoir computing
Marzetti L.; Basti A.
2025-01-01
Abstract
Spiking Neural Networks (SNNs) exhibit their optimal information-processing capability at the edge of chaos, but tuning them to this critical regime in reservoir-computing architectures usually relies on costly trial-and-error or plasticity-driven adaptation. This work presents an analytical framework for configuring an SNN-based reservoir with a highly general topology in the critical regime. Specifically, we derive and solve a mean-field equation that governs the evolution of the average membrane potential in leaky integrate-and-fire neurons, and we provide an approximation for the critical point. This framework reduces the need for extensive online fine-tuning, offering a streamlined path to near-optimal network performance from the outset. Through extensive numerical experiments, we validate the theoretical predictions by analyzing the network's spiking dynamics and quantifying its computational capacity near criticality using the information-based Lempel-Ziv-Welch complexity. Finally, we explore self-organized quasi-criticality by implementing a local homeostatic learning rule for the synaptic weights, demonstrating that the network's dynamics remain close to the theoretical critical point. Beyond AI, our approach and findings have significant implications for computational neuroscience, providing a principled framework for quantitatively understanding how (neuro)biological networks exploit criticality for efficient information processing. The paper is accompanied by Python code that enables reproducibility of the findings.
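The abstract's pipeline — simulate leaky integrate-and-fire dynamics, then score the resulting spike train with Lempel-Ziv-Welch complexity — can be sketched in a few lines of plain Python. This is a minimal toy illustration, not the authors' accompanying code: the network size, coupling scale `g`, time constant `tau`, threshold `v_th`, and input drive `i_ext` below are all illustrative assumptions, and the network is a dense random-Gaussian reservoir rather than the paper's general topology.

```python
import random

def lzw_complexity(seq):
    """Count the phrases added to an LZW dictionary while parsing `seq`.
    More distinct phrases = richer, less compressible dynamics."""
    table = set(seq)            # initialise the dictionary with the alphabet
    w, count = "", 0
    for c in seq:
        wc = w + c
        if wc in table:
            w = wc              # extend the current phrase
        else:
            table.add(wc)       # new phrase discovered
            count += 1
            w = c
    return count + (1 if w else 0)   # flush the final phrase

def simulate_lif(n=50, steps=500, g=0.1, tau=20.0, v_th=1.0,
                 i_ext=0.1, seed=0):
    """Toy LIF reservoir: V <- V - V/tau + recurrent input + noisy drive,
    reset to 0 on spiking. Returns one symbol per step: '1' if any
    neuron fired, '0' otherwise (all parameters are illustrative)."""
    rng = random.Random(seed)
    w = [[g * rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    v = [0.0] * n
    spikes = [0] * n
    symbols = []
    for _ in range(steps):
        new_spikes = []
        for i in range(n):
            rec = sum(w[i][j] for j in range(n) if spikes[j])
            v[i] += -v[i] / tau + rec + i_ext * rng.random()
            if v[i] >= v_th:
                v[i] = 0.0       # reset after spike
                new_spikes.append(1)
            else:
                new_spikes.append(0)
        spikes = new_spikes
        symbols.append("1" if any(spikes) else "0")
    return "".join(symbols)

activity = simulate_lif()
print(lzw_complexity(activity))
```

Sweeping the coupling scale `g` and plotting the resulting LZW complexity is the kind of experiment that locates the complexity peak near the critical point; here the binary population signal is only a crude stand-in for the per-neuron spike rasters analyzed in the paper.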


