Conference paper, 2020, ENG
Massoli F.V.; Falchi F.; Amato G.
CNR-ISTI, Pisa, Italy; CNR-ISTI, Pisa, Italy; CNR-ISTI, Pisa, Italy
In the last decade, we have witnessed a renaissance of Deep Learning models. Nowadays, they are widely used in industrial as well as scientific fields, and notably, these models have reached super-human performance on specific tasks such as image classification. Unfortunately, despite their great success, it has been shown that they are vulnerable to adversarial attacks: images to which a specific amount of noise, imperceptible to human eyes, has been added in order to lead the model to a wrong decision. Typically, these malicious images are forged with a misclassification goal in mind. However, when considering the task of Face Recognition (FR), this principle might not be enough to fool the system. Indeed, in the context of FR, deep models are generally used merely as feature extractors, while the final recognition task is accomplished, for example, by similarity measurements. Thus, crafting adversarials that fool the classifier might not be sufficient to fool the overall FR pipeline. Starting from this observation, we proposed to use a k-Nearest Neighbour algorithm as guidance to craft adversarial attacks against an FR system. In our study, we showed how this kind of attack can be more threatening for an FR system than misclassification-based ones, considering both the targeted and untargeted attack strategies.
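The sketch below is a minimal illustration of the idea summarised in the abstract, not the authors' implementation: it perturbs a probe image so that its deep-feature embedding moves away from (untargeted) or towards (targeted) its k nearest neighbours in a gallery of enrolled faces. All names and values here (FeatureExtractor, knn_guided_attack, epsilon, alpha, k, the 112x112 input size) are hypothetical placeholders.

```python
# Hedged sketch: kNN-guided adversarial perturbation against a
# feature-extraction-based FR pipeline (illustrative, not the paper's code).
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Stand-in for a deep FR backbone mapping images to unit embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, dim))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

def knn_guided_attack(model, image, gallery, target_idx=None,
                      epsilon=8 / 255, alpha=1 / 255, steps=40, k=5):
    """Iterative, gradient-sign attack on the embedding space.

    Untargeted (target_idx=None): push the probe embedding away from its
    k nearest gallery neighbours. Targeted: pull it towards the gallery
    embedding at target_idx.
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        emb = model(adv)                        # (1, dim) probe embedding
        dists = torch.cdist(emb, gallery)       # (1, N) distances to gallery
        if target_idx is None:
            knn = dists.topk(k, largest=False).values
            loss = -knn.mean()                  # minimising pushes probe away
        else:
            loss = dists[0, target_idx]         # minimising pulls probe closer
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()     # descend on the chosen loss
            adv = image + (adv - image).clamp(-epsilon, epsilon)
            adv = adv.clamp(0, 1).detach()      # keep a valid, bounded image
    return adv

# Usage: gallery holds (normalised) embeddings of enrolled identities.
model = FeatureExtractor()
gallery = nn.functional.normalize(torch.randn(100, 128), dim=-1)
probe = torch.rand(1, 3, 112, 112)
adv_probe = knn_guided_attack(model, probe, gallery)  # untargeted variant
```

Because the loss is defined on distances in embedding space rather than on classifier logits, the perturbation directly targets the similarity-based decision stage of the FR pipeline, which is the point the abstract makes about misclassification-only attacks being insufficient.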
SEBD 2020, Italian Symposium on Advanced Database Systems, pp. 302–309, Villasimius, Sud Sardegna, Italy, 21–24 June 2020
k-nearest neighbour, adversarial machine learning, deep learning, adversarial examples, machine learning
Massoli Fabio Valerio, Amato Giuseppe, Falchi Fabrizio
ISTI – Istituto di scienza e tecnologie dell'informazione "Alessandro Faedo"
ID: 445014
Year: 2020
Type: Conference paper
Creation: 2021-02-16 09:49:38.000
Last update: 2021-02-25 15:11:48.000
External IDs
CNR OAI-PMH: oai:it.cnr:prodotti:445014
Scopus: 2-s2.0-85090902159