2020
Journal article · Open Access

Detection of Face Recognition Adversarial Attacks

Massoli F. V., Carrara F., Amato G., Falchi F.

Keywords: Computer Science - Machine Learning; Adversarial detection; I.2.0; Computer Vision and Pattern Recognition; Software; Deep Learning; Face Recognition; Adversarial attacks; Adversarial biometrics; Computer Science - Computer Vision and Pattern Recognition; I.2.6; Signal Processing

Deep Learning methods have become state of the art for solving tasks such as Face Recognition (FR). Unfortunately, despite their success, it has been pointed out that these learning models are exposed to adversarial inputs - images to which an amount of noise imperceptible to humans is added in order to maliciously fool a neural network - thus limiting their adoption in sensitive real-world applications. While an enormous effort has been spent on training models that are robust against this type of threat, adversarial detection techniques have recently started to draw attention within the scientific community. The advantage of a detection approach is that it does not require re-training any model; thus, it can be added to any system. In this context, we present our work on adversarial detection in forensics, mainly focused on detecting attacks against FR systems in which the learning model is typically used only as a feature extractor. Thus, training a more robust classifier might not be enough to counteract the adversarial threats. In this frame, the contribution of our work is four-fold: (i) we test our proposed adversarial detection approach against classification attacks, i.e., adversarial samples crafted to fool an FR neural network acting as a classifier; (ii) using a k-Nearest Neighbor (k-NN) algorithm as a guide, we generate deep-feature attacks against an FR system based on a neural network acting as a feature extractor, followed by a similarity-based procedure which returns the query identity; (iii) we use the deep-feature attacks to fool an FR system on the 1:1 face verification task, and we show that they are more effective than classification attacks at evading this type of system; (iv) we use the detectors trained on the classification attacks to detect the deep-feature attacks, thus showing that such an approach generalizes to different classes of attacks.
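The 1:1 verification pipeline and the feature-space attack described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the deep CNN feature extractor is replaced by a fixed random linear map so that the gradient is available in closed form, and all names (`extract`, `verify`, `feature_space_attack`) and parameters (`eps`, `lr`, `steps`, the decision threshold) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": a fixed random linear map. In the paper this
# role is played by a deep CNN trained for face recognition; the linear map
# is used here only so that the gradient is available in closed form.
D_IN, D_FEAT = 64, 16
W = rng.standard_normal((D_FEAT, D_IN)) / np.sqrt(D_IN)

def extract(x):
    """Map an input (a flattened stand-in for a face image) to its deep features."""
    return W @ x

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(x_query, x_enrolled, threshold=0.5):
    """1:1 face verification: accept the claimed identity when the
    similarity between the two deep-feature vectors exceeds a threshold."""
    return cosine_sim(extract(x_query), extract(x_enrolled)) >= threshold

def feature_space_attack(x_src, x_tgt, eps=0.1, lr=0.05, steps=200):
    """Deep-feature attack: projected gradient descent that pushes the
    features of x_src + delta toward the features of x_tgt while keeping
    the perturbation inside an L-infinity ball of radius eps.
    For this linear extractor, the gradient of ||W(x + d) - f_tgt||^2
    with respect to d is 2 W^T (W(x + d) - f_tgt)."""
    f_tgt = extract(x_tgt)
    delta = np.zeros_like(x_src)
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ (x_src + delta) - f_tgt)
        delta = np.clip(delta - lr * grad, -eps, eps)  # step + projection
    return x_src + delta

# Two random stand-ins for face images of two different identities.
x_attacker = rng.standard_normal(D_IN)
x_victim = rng.standard_normal(D_IN)

x_adv = feature_space_attack(x_attacker, x_victim)
d_before = np.linalg.norm(extract(x_attacker) - extract(x_victim))
d_after = np.linalg.norm(extract(x_adv) - extract(x_victim))
print(f"feature distance to victim: {d_before:.3f} -> {d_after:.3f}")
print(f"max perturbation: {np.max(np.abs(x_adv - x_attacker)):.3f}")
```

With a real network the closed-form gradient is replaced by backpropagation through the extractor, but the idea is the same: the perturbation is optimized against the feature space rather than against class scores, which is why such attacks can evade similarity-based verification systems that a pure classification attack does not directly target.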

Source: Computer Vision and Image Understanding (Print) 202 (2020). doi:10.1016/j.cviu.2020.103103

Publisher: Academic Press, San Diego, United States of America




Project (via OpenAIRE): AI4EU, A European AI On Demand Platform and Ecosystem
BibTeX entry
@article{oai:it.cnr:prodotti:435199,
	title = {Detection of Face Recognition Adversarial Attacks},
	author = {Massoli F. V. and Carrara F. and Amato G. and Falchi F.},
	publisher = {Academic Press, San Diego, United States of America},
	doi = {10.1016/j.cviu.2020.103103},
	journal = {Computer vision and image understanding (Print)},
	volume = {202},
	year = {2020}
}