TY - GEN
T1 - Are image-agnostic universal adversarial perturbations for face recognition difficult to detect?
AU - Agarwal, Akshay
AU - Singh, Richa
AU - Vatsa, Mayank
AU - Ratha, Nalini
N1 - Publisher Copyright: © 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - The high performance of deep neural network (DNN) based systems has attracted many applications in object recognition and face recognition. However, researchers have also demonstrated that these systems are highly sensitive to adversarial perturbations and hence tend to be unreliable and lack robustness. While most research on adversarial perturbations focuses on image-specific attacks, image-agnostic universal perturbations have recently been proposed; these learn the adversarial pattern over the training distribution and have a broader impact on real-world security applications. Such adversarial attacks can have a compounding effect on face recognition, where these visually imperceptible attacks can cause mismatches. To defend against adversarial attacks, sophisticated detection approaches are prevalent, but most existing approaches do not focus on image-agnostic attacks. In this paper, we present a simple but efficient approach based on pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine as the classifier, to detect image-agnostic universal perturbations. We also present evaluation metrics, namely the adversarial perturbation class classification error rate, the original class classification error rate, and the average classification error rate, to estimate the performance of adversarial perturbation detection algorithms. Experimental results on multiple databases and different DNN architectures show that it is not necessary to build complex detection algorithms; rather, simpler approaches can yield higher detection rates and lower error rates for image-agnostic adversarial perturbations.
AB - The high performance of deep neural network (DNN) based systems has attracted many applications in object recognition and face recognition. However, researchers have also demonstrated that these systems are highly sensitive to adversarial perturbations and hence tend to be unreliable and lack robustness. While most research on adversarial perturbations focuses on image-specific attacks, image-agnostic universal perturbations have recently been proposed; these learn the adversarial pattern over the training distribution and have a broader impact on real-world security applications. Such adversarial attacks can have a compounding effect on face recognition, where these visually imperceptible attacks can cause mismatches. To defend against adversarial attacks, sophisticated detection approaches are prevalent, but most existing approaches do not focus on image-agnostic attacks. In this paper, we present a simple but efficient approach based on pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine as the classifier, to detect image-agnostic universal perturbations. We also present evaluation metrics, namely the adversarial perturbation class classification error rate, the original class classification error rate, and the average classification error rate, to estimate the performance of adversarial perturbation detection algorithms. Experimental results on multiple databases and different DNN architectures show that it is not necessary to build complex detection algorithms; rather, simpler approaches can yield higher detection rates and lower error rates for image-agnostic adversarial perturbations.
UR - https://www.scopus.com/pages/publications/85065444304
U2 - 10.1109/BTAS.2018.8698548
DO - 10.1109/BTAS.2018.8698548
M3 - Conference contribution
T3 - 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems, BTAS 2018
BT - 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems, BTAS 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th IEEE International Conference on Biometrics Theory, Applications and Systems, BTAS 2018
Y2 - 22 October 2018 through 25 October 2018
ER -