2026
Journal article (Restricted)

Enhancing randomized recurrent neural networks with explainable attribution methods

Spinnato Francesco, Ceni Andrea, Cossu Andrea, Guidotti Riccardo, Gallicchio Claudio, Bacciu Davide

Keywords: Echo state networks; Explainable AI; Recurrent neural networks; Reservoir computing

Recurrent Neural Networks (RNNs) are well-suited for temporal data modeling but remain limited by the high computational cost of training. Randomized RNNs mitigate this issue by employing a fixed, randomly initialized recurrent layer combined with a simple, trainable output layer. To classify a given input sequence, randomized RNNs usually rely on the final reservoir state, which can be suboptimal when relevant temporal information is sparse or masked by noise. In this work, we investigate how explainable attribution methods can improve the performance of randomized RNNs on classification tasks. In particular, we adopt gradient-based attribution techniques to weight reservoir states according to their relevance to the final prediction. We theoretically justify the effectiveness of our approach through linear stability analysis, offering geometric intuition via an explainability-based estimate of the variability of the recurrent dynamics. Our experimental evaluation spans 30 binary and 10 multiclass time series classification tasks, comparing several randomized recurrent models. Results show that explainability-guided weighting can improve classification performance in noisy scenarios.
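The idea summarized in the abstract can be illustrated with a minimal sketch: a fixed random reservoir produces one state per time step, and instead of reading out only the last state, each state is weighted by a gradient-based relevance score. The sketch below assumes a linear readout, for which the gradient of the output with respect to a state equals the readout vector, so a gradient-times-input relevance per step is simply |w · h_t|. All dimensions, the random readout `w`, and the weighting scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
n_in, n_res, T = 1, 50, 100

# Fixed random reservoir (echo state network style), rescaled so the
# spectral radius is below 1, a common stability heuristic.
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def reservoir_states(u):
    """Run the fixed reservoir over an input sequence u of shape (T, n_in)."""
    h = np.zeros(n_res)
    states = []
    for u_t in u:
        h = np.tanh(W_in @ u_t + W @ h)
        states.append(h)
    return np.stack(states)  # shape (T, n_res)

# Toy sequence: an informative spike buried in noise, the scenario where
# the final state alone may miss the relevant information.
u = rng.normal(0.0, 0.05, (T, n_in))
u[40] += 1.0
H = reservoir_states(u)

# Assume a trained linear readout (random here, purely for illustration).
w = rng.standard_normal(n_res)

# Gradient-based attribution: with a linear readout, d(output)/d(h_t) = w,
# so gradient-times-input relevance per time step is |w . h_t|.
rel = np.abs(H @ w)
alpha = rel / rel.sum()  # normalize relevances into weights

# Attribution-weighted state, used in place of the usual last-state readout.
h_weighted = alpha @ H   # shape (n_res,)
h_last = H[-1]
```

In this toy setup, states around the informative spike receive larger weights, so `h_weighted` retains information that may have decayed out of `h_last` by the end of the sequence.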

Source: NEUROCOMPUTING, vol. 666


BibTeX entry
@article{oai:iris.cnr.it:20.500.14243/563281,
	title = {Enhancing randomized recurrent neural networks with explainable attribution methods},
	author = {Spinnato Francesco and Ceni Andrea and Cossu Andrea and Guidotti Riccardo and Gallicchio Claudio and Bacciu Davide},
	journal = {Neurocomputing},
	volume = {666},
	doi = {10.1016/j.neucom.2025.132318},
	year = {2026}
}