Abstract
Face-swapping DeepFakes have become an escalating societal concern. To counter them, we investigate a proactive defense framework that prevents individuals from being victimized in DeepFake videos. The core idea is to contaminate the inputs of DeepFake models by disrupting face detectors, based on the observation that most DeepFake techniques rely on face detectors to automatically extract victim faces. Once a face detector malfunctions, faces are no longer correctly extracted, impairing the training or synthesis stages of DeepFake models. To this end, we describe FacePoison, a strategy that fools face detectors by adding dedicated adversarial perturbations to video frames. Building on it, we introduce VideoFacePoison, an extended strategy that efficiently propagates FacePoison across video frames rather than applying it to each frame individually, significantly reducing computational overhead while retaining favorable attack performance. The framework is validated on five face detectors, and extensive experiments against eleven DeepFake models demonstrate the effectiveness of disrupting face detection to hinder DeepFake generation.
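To illustrate the two ideas in the abstract, here is a minimal, hypothetical sketch (not the paper's actual FacePoison or VideoFacePoison algorithm): a one-step sign-gradient (FGSM-style) perturbation that lowers a toy, linear "detector score", followed by naive reuse of that perturbation on a similar neighboring frame instead of recomputing it. All names, the linear score, and the budget `eps` are illustrative assumptions.

```python
import numpy as np

def detector_score(frame: np.ndarray, w: np.ndarray) -> float:
    """Toy stand-in for a face detector's confidence (linear in the frame).

    Real detectors are deep networks; a linear score keeps the sketch tiny
    while preserving the 'gradient attack' structure.
    """
    return float(w.ravel() @ frame.ravel())

def fgsm_perturb(frame: np.ndarray, w: np.ndarray, eps: float = 0.03) -> np.ndarray:
    """One sign-gradient step that lowers the toy detection score.

    For this linear score, the gradient w.r.t. the frame is just `w`,
    so we step against sign(w) within an L-infinity budget of `eps`.
    """
    return np.clip(frame - eps * np.sign(w), 0.0, 1.0)

rng = np.random.default_rng(0)
frame = rng.uniform(0.1, 0.9, size=(8, 8))   # a "video frame", pixels in [0, 1]
w = rng.standard_normal((8, 8))              # toy detector weights

poisoned = fgsm_perturb(frame, w)
delta = poisoned - frame                     # the adversarial perturbation

# Propagation idea (VideoFacePoison flavor): reuse `delta` on a similar
# neighboring frame instead of recomputing a fresh perturbation per frame.
next_frame = np.clip(frame + rng.normal(0.0, 0.01, frame.shape), 0.0, 1.0)
next_poisoned = np.clip(next_frame + delta, 0.0, 1.0)

print("clean score:   ", detector_score(frame, w))
print("poisoned score:", detector_score(poisoned, w))
print("propagated score drop:",
      detector_score(next_frame, w) - detector_score(next_poisoned, w))
```

In this toy setup, the perturbation provably lowers the linear score, and reusing it on a nearby frame still lowers that frame's score, which is the intuition behind propagating perturbations across video frames rather than attacking each frame from scratch.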
| Original language | English |
|---|---|
| Pages (from-to) | 7010-7024 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Dependable and Secure Computing |
| Volume | 22 |
| Issue number | 6 |
| DOIs | |
| State | Published - 2025 |
Keywords
- DeepFake defense
- face detection
- multimedia forensics
Title: Hiding Faces in Plain Sight: Defending DeepFakes by Disrupting Face Detection