Massoli F. V., Falchi F., Amato G.
k-nearest neighbour, adversarial machine learning, deep learning, adversarial examples, machine learning
In the last decade, we have witnessed a renaissance of Deep Learning models. Nowadays, they are widely used in industrial as well as scientific fields, and notably, these models have reached super-human performance on specific tasks such as image classification. Unfortunately, despite their great success, it has been shown that they are vulnerable to adversarial attacks: images to which a specific amount of noise, imperceptible to human eyes, has been added to lead the model to a wrong decision. Typically, these malicious images are forged pursuing a misclassification goal. However, when considering the task of Face Recognition (FR), this principle might not be enough to fool the system. Indeed, in the context of FR, deep models are generally used merely as feature extractors, while the final recognition task is accomplished, for example, by similarity measurements. Thus, crafting adversarials that fool the classifier might not be sufficient to fool the overall FR pipeline. Starting from this observation, we proposed to use a k-Nearest Neighbour algorithm as guidance to craft adversarial attacks against an FR system. In our study, we showed how this kind of attack could be more threatening to an FR system than misclassification-based ones, considering both the targeted and untargeted attack strategies.
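The abstract does not spell out the attack objective, but the idea of steering the perturbation with a k-NN decision in feature space can be sketched. What follows is a minimal, hypothetical PGD-style sketch in PyTorch, assuming a feature extractor model that maps an image batch to embeddings, gallery embeddings gallery_feats with identity labels gallery_labels, and pixel values in [0, 1]; all names and hyper-parameters are illustrative and not the authors' implementation.

import torch

def knn_guided_attack(model, image, gallery_feats, gallery_labels,
                      target_label=None, k=5, eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style perturbation steered by a k-NN objective in feature space.

    Untargeted (target_label=None): push the embedding away from its k
    nearest gallery features. Targeted: pull it toward the k nearest
    gallery features of the chosen identity. Illustrative sketch only.
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        feat = model(adv.unsqueeze(0)).squeeze(0)               # embedding of the current adversarial
        dists = torch.cdist(feat.unsqueeze(0), gallery_feats).squeeze(0)
        if target_label is None:
            knn_d, _ = dists.topk(k, largest=False)             # current k nearest neighbours
            loss = -knn_d.mean()                                # minimizing pushes them away
        else:
            mask = gallery_labels == target_label
            assert mask.any(), "gallery must contain the target identity"
            knn_d, _ = dists[mask].topk(min(k, int(mask.sum())), largest=False)
            loss = knn_d.mean()                                 # minimizing pulls toward the target
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                     # gradient-sign step on the k-NN loss
            adv = image + (adv - image).clamp(-eps, eps)        # keep within the L-inf budget
            adv = adv.clamp(0, 1)                               # stay a valid image
    return adv

Passing a target_label corresponds to the targeted strategy mentioned in the abstract (impersonating a gallery identity), while leaving it None corresponds to the untargeted one (evading one's own nearest neighbours).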
Source: SEBD 2020, Italian Symposium on Advanced Database Systems, pp. 302–309, Villasimius, Sud Sardegna, Italy, 21–24 June 2020
@inproceedings{oai:it.cnr:prodotti:445014,
  title     = {KNN-guided Adversarial Attacks},
  author    = {Massoli F. V. and Falchi F. and Amato G.},
  booktitle = {SEBD 2020. Italian Symposium on Advanced Database Systems},
  pages     = {302--309},
  address   = {Villasimius, Sud Sardegna, Italy},
  year      = {2020}
}