Improving Performance in Neural Networks by Dendrite-Activated Connection
Amato, Gianluca; Marchetti, Alessandro; Parton, Maurizio
2025-01-01
Abstract
We introduce a novel computational unit for neural networks featuring multiple biases, challenging the conventional perceptron structure. Designed to preserve uncorrupted information as it passes from one unit to the next, this unit applies the activation function later in the process and incorporates specialized biases for each unit. We posit this unit as an improved design for neural networks and support this with (1) empirical evidence across diverse datasets; (2) a class of functions where this unit utilizes parameters more efficiently; and (3) biological analogies suggesting closer mimicry of natural neural processing. Source code is available at https://github.com/CuriosAI/dac-dev.
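As a rough illustration of the idea described in the abstract, below is a minimal, hypothetical PyTorch sketch of a unit carrying one bias per incoming connection, assuming (as the abstract suggests) that the raw pre-activation is transmitted unchanged and the activation function is applied only at the receiving end, after a connection-specific bias is added. The class name, parameter shapes, and initialization are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

import torch
import torch.nn as nn

class MultiBiasLayer(nn.Module):
    """Sketch of a layer whose units each hold one bias per incoming
    connection. The raw pre-activations z of the previous layer are
    transmitted as-is; each receiving unit j adds its own bias b[j, i]
    to z[i], applies the activation, and only then forms the weighted sum."""

    def __init__(self, in_features: int, out_features: int, activation=torch.relu):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # One bias per (output unit, input connection) pair instead of one per unit.
        self.bias = nn.Parameter(torch.zeros(out_features, in_features))
        self.activation = activation

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, in_features) -- raw pre-activations from the previous layer.
        # Broadcast to (batch, out_features, in_features), shift by each
        # connection-specific bias, activate, then take the weighted sum.
        shifted = z.unsqueeze(1) + self.bias          # (batch, out, in)
        activated = self.activation(shifted)          # activation applied "late"
        return (activated * self.weight).sum(dim=-1)  # (batch, out_features)

if __name__ == "__main__":
    layer = MultiBiasLayer(in_features=8, out_features=4)
    z = torch.randn(16, 8)   # pre-activations from a hypothetical previous layer
    print(layer(z).shape)    # torch.Size([16, 4])

Compared with a standard perceptron, which computes phi(sum_i w_ji x_i + b_j) with a single bias per unit, this sketch computes sum_i w_ji phi(z_i + b_ji), so the bias count grows with the number of connections rather than the number of units.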


