2019
Journal article, Open Access

Detecting adversarial inputs by looking in the black box

Carrara F., Falchi F., Amato G., Becarelli R., Caldelli R.

Keywords: Adversarial example, Deep neural networks, Image classification, Adversarial image detection, Representation learning

The astonishing and cryptic effectiveness of Deep Neural Networks comes with a critical vulnerability to adversarial inputs: samples maliciously crafted to confuse and hinder machine learning models. Insights into the internal representations learned by deep models can help to explain their decisions and estimate their confidence, enabling us to trace, characterise, and filter out adversarial attacks.
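As a minimal, hypothetical sketch of this general idea (assuming PyTorch and a pretrained ResNet-50; the chosen layer, input, and downstream detector are illustrative assumptions, not the authors' exact setup), internal activations can be captured with forward hooks and handed to a separate detector:

# Minimal sketch (illustrative only): capture internal activations of a
# pretrained CNN so a separate detector can score them for adversariality.
# The model, layer, and input below are assumptions, not the article's setup.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Flatten the intermediate feature map into one representation vector.
        activations[name] = output.detach().flatten(start_dim=1)
    return hook

# Register a hook on an intermediate block; which layer is most informative
# is an empirical question.
model.layer3.register_forward_hook(save_activation("layer3"))

x = torch.randn(1, 3, 224, 224)  # stand-in for a (possibly adversarial) image
with torch.no_grad():
    logits = model(x)

features = activations["layer3"]  # feed this vector to a downstream detector

A separate detector, for instance a nearest-neighbour model or a small classifier fitted on such feature vectors from clean images, can then flag inputs whose internal representation is atypical for the predicted class.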

Source: ERCIM news (2019): 16–17.

Publisher: ERCIM, Le Chesnay



BibTeX entry
@article{oai:it.cnr:prodotti:404617,
	title = {Detecting adversarial inputs by looking in the black box},
	author = {Carrara F. and Falchi F. and Amato G. and Becarelli R. and Caldelli R.},
	publisher = {ERCIM, Le Chesnay},
	journal = {ERCIM news},
	pages = {16--17},
	year = {2019}
}

ISTI Repository: Published version (Open Access)

Also available from: ercim-news.ercim.eu (Open Access)