2020
Conference article  Open Access

Analyzing Forward Robustness of Feedforward Deep Neural Networks with LeakyReLU Activation Function Through Symbolic Propagation

Masetti G., Di Giandomenico F.

Deep Neural Network, LeakyReLU, Robustness

The robustness of Feedforward Deep Neural Networks (DNNs) is a relevant property to study, since it makes it possible to establish whether the classification performed by a DNN is vulnerable to small perturbations of the provided input; several verification approaches have been developed to assess the degree of such robustness. Recently, an approach has been introduced to evaluate forward robustness, based on symbolic computations and designed for the ReLU activation function. In this paper, that symbolic approach is generalized to the widely adopted LeakyReLU activation function. A preliminary numerical campaign, briefly discussed in the paper, shows interesting results.
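The abstract refers to symbolic propagation through layers with LeakyReLU activations. As a rough illustration only (not the authors' algorithm, whose details are in the paper), the sketch below propagates simple interval bounds through one affine layer followed by LeakyReLU(x) = x for x >= 0 and alpha*x otherwise; the layer sizes, alpha value, and helper names are hypothetical choices for this example.

```python
# Illustrative sketch: interval bound propagation through an affine layer
# followed by LeakyReLU, to see whether a small input perturbation could
# change the network's output ordering. Not the paper's symbolic method.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Elementwise bounds of W @ x + b for x in [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def leaky_relu_bounds(lo, hi, alpha=0.01):
    """LeakyReLU is monotone increasing for 0 < alpha < 1,
    so bounds map through it directly."""
    f = lambda z: np.where(z >= 0.0, z, alpha * z)
    return f(lo), f(hi)

# Toy usage: a 2-input, 3-unit layer and an epsilon-ball around an input point.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
x, eps = np.array([0.5, -0.2]), 0.05
lo, hi = affine_bounds(x - eps, x + eps, W, b)
lo, hi = leaky_relu_bounds(lo, hi)
print(lo, hi)  # output bounds; robustness holds if the predicted class cannot change
```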

Source: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 460–474, 14/09/2020


BibTeX entry
@inproceedings{oai:it.cnr:prodotti:446517,
	title = {Analyzing Forward Robustness of Feedforward Deep Neural Networks with LeakyReLU Activation Function Through Symbolic Propagation},
	author = {Masetti G. and Di Giandomenico F.},
	doi = {10.1007/978-3-030-65965-3_31},
	booktitle = {Joint European Conference on Machine Learning and Knowledge Discovery in Databases},
	pages = {460--474},
	year = {2020}
}