2022
Conference article  Open Access

Tuning neural ODE networks to increase adversarial robustness in image forensics

Caldelli R., Carrara F., Falchi F.

Keywords: Image forensics · Deep Learning · Neural ODE networks · Adversarial samples

Although deep-learning-based solutions are pervading many application sectors, doubts have arisen about their reliability and, above all, their security against threats that can mislead their decision mechanisms. In this work, we considered a particular kind of deep neural network, the Neural Ordinary Differential Equation (N-ODE) network, which has shown intrinsic robustness against adversarial samples when its tolerance parameter is properly tuned at test time. Its behaviour has never been investigated in image forensics tasks such as distinguishing between an original and an altered image. Following this direction, we demonstrate how tuning the tolerance parameter during the prediction phase can control and increase the N-ODE network's robustness against adversarial attacks. We performed experiments on basic image transformations used to generate tampered data, obtaining encouraging results in terms of adversarial rejection and preservation of the correct classification of pristine images.
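The mechanism the abstract relies on is that an N-ODE block is evaluated by an adaptive ODE solver whose tolerance can be changed at inference without retraining: a different tolerance yields a different numerical trajectory, which perturbs adversarial inputs more than pristine ones. The sketch below illustrates only the solver side of this idea with a toy 1-D dynamics function standing in for a learned network; the function, step-doubling scheme, and constants are illustrative assumptions, not the paper's actual architecture or solver.

```python
import math

def f(y):
    # Toy stand-in for a learned N-ODE dynamics f_theta (hypothetical;
    # the real model would be a neural network over feature maps).
    return math.tanh(1.5 * y) - 0.3 * y

def odeint_adaptive(y0, t1, tol):
    """Integrate dy/dt = f(y) from t=0 to t=t1 with step-doubling Euler.

    `tol` plays the role of the solver tolerance tuned at test time:
    looser values accept larger steps, so the numerical trajectory
    (and hence the block's output) changes without retraining.
    Returns (y_final, accepted_steps)."""
    t, y, h, steps = 0.0, y0, 0.1, 0
    while t < t1:
        h = min(h, t1 - t)
        # Compare one full Euler step with two half steps to
        # estimate the local truncation error.
        y_full = y + h * f(y)
        y_half = y + 0.5 * h * f(y)
        y_two = y_half + 0.5 * h * f(y_half)
        err = abs(y_two - y_full)
        if err <= tol:
            t += h
            y = y_two
            steps += 1
            h *= 2.0   # grow the step after a success
        else:
            h *= 0.5   # shrink the step and retry
    return y, steps

# A tight tolerance takes many small steps; a loose one takes few large
# steps, producing a (slightly) different final state for the same input.
y_tight, n_tight = odeint_adaptive(0.5, 4.0, tol=1e-6)
y_loose, n_loose = odeint_adaptive(0.5, 4.0, tol=1e-2)
print(n_tight, n_loose, abs(y_tight - y_loose))
```

In the paper's setting, the classification is run at the training tolerance and at a perturbed tolerance; adversarial samples, crafted against one specific trajectory, tend to flip their prediction under the change, while pristine images do not.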

Source: ICIP 2022 - IEEE International Conference on Image Processing, pp. 1496–1500, Bordeaux, France, 16-19/10/2022

Publisher: IEEE, New York, USA


BibTeX entry
@inproceedings{oai:it.cnr:prodotti:472365,
	title = {Tuning neural ODE networks to increase adversarial robustness in image forensics},
	author = {Caldelli R. and Carrara F. and Falchi F.},
	publisher = {IEEE, New York, USA},
	doi = {10.1109/icip46576.2022.9897662},
	booktitle = {ICIP 2022 - IEEE International Conference on Image Processing},
	pages = {1496--1500},
	address = {Bordeaux, France},
	year = {2022}
}

AI4Media
A European Excellence Centre for Media, Society and Democracy

