TAFIM: Targeted Adversarial Attacks against Facial Image Manipulations

Shivangi Aneja1, Lev Markhasin2, Matthias Nießner1

1Technical University of Munich
2Sony Europe RDC Stuttgart

Method Overview

We propose a novel approach to protect facial images from multiple image manipulation models simultaneously. Our method generates quasi-imperceptible perturbations with a learned neural network; when added to real images, these perturbations force the face manipulation models to produce a predefined manipulation target as output (a uniformly white/blue image in this case). Whereas existing methods require an image-specific optimization, we leverage a neural network to encode the generation of image-specific perturbations, which is several orders of magnitude faster and enables real-time applications. In addition, the generated perturbations are robust to JPEG compression.
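To make the idea concrete, below is a minimal PyTorch-style sketch of the training objective: a small network produces a bounded, image-specific perturbation, and the loss pushes the manipulation model's output on the protected image toward the predefined target. The architecture, the names `PerturbationNet` and `protection_loss`, and the budget `eps` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationNet(nn.Module):
    """Illustrative network mapping an image to an image-specific
    perturbation (architecture is an assumption, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # bounded in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

def protection_loss(pert_net, manipulation_model, image, target, eps=0.03):
    """One training step's objective for a batch of images in [0, 1]."""
    # Generate a quasi-imperceptible, image-specific perturbation.
    delta = eps * pert_net(image)               # ||delta||_inf <= eps
    protected = (image + delta).clamp(0, 1)     # embed it in the image
    # Drive the manipulation model toward the predefined target
    # (e.g. a uniformly colored image) instead of the actual manipulation.
    manipulated = manipulation_model(protected)
    return F.mse_loss(manipulated, target)
```

At inference time, protecting an image is then a single forward pass, e.g. `protected = (image + eps * pert_net(image)).clamp(0, 1)`, which is what makes real-time use feasible compared to per-image optimization.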

Abstract

Face image manipulation methods, despite having many beneficial applications in computer graphics, can also raise concerns by affecting an individual's privacy or by spreading disinformation. In this work, we propose a proactive defense that prevents face manipulation from happening in the first place. To this end, we introduce a novel data-driven approach that produces image-specific perturbations which are embedded in the original images. The key idea is that these protected images prevent face manipulation by causing the manipulation model to produce a predefined manipulation target (a uniformly colored output image in our case) instead of the actual manipulation. Compared to traditional adversarial attacks that optimize a noise pattern for each image individually, our generalized model requires only a single forward pass, thus running orders of magnitude faster and allowing for easy integration in image processing stacks, even on resource-constrained devices such as smartphones. In addition, we leverage a differentiable compression approximation, making the generated perturbations robust to common image compression. We further show that a single generated perturbation can simultaneously prevent multiple manipulation methods.
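One way to train for compression robustness is to place a differentiable stand-in for JPEG between the protected image and the manipulation model. The sketch below uses a straight-through estimator around a real JPEG codec; this is a common substitute and not necessarily the differentiable approximation used in the paper, and the helper name `jpeg_straight_through` is hypothetical.

```python
import io
import torch
from PIL import Image
import torchvision.transforms.functional as TF

def jpeg_straight_through(image, quality=75):
    """Apply real JPEG compression in the forward pass while letting
    gradients flow through unchanged (straight-through estimator).
    `image` is a (B, 3, H, W) float tensor in [0, 1]."""
    compressed = []
    for img in image:
        buf = io.BytesIO()
        TF.to_pil_image(img.clamp(0, 1).cpu()).save(
            buf, format="JPEG", quality=quality)
        buf.seek(0)
        compressed.append(TF.to_tensor(Image.open(buf)).to(image.device))
    compressed = torch.stack(compressed)
    # Forward pass yields the compressed pixels; the backward pass treats
    # the operation as the identity with respect to `image`.
    return image + (compressed - image).detach()
```

During training, the manipulation model would then be fed `jpeg_straight_through(protected)` instead of `protected`, so the perturbation is optimized to survive compression before it reaches the manipulation model.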

Video

Results

Comparison with Baselines
Robustness to Compression
Multiple Manipulation Models

BibTeX

If you find this work useful for your research, please consider citing:

@article{aneja2021tafim,
    title={{TAFIM}: Targeted {A}dversarial {A}ttacks against {F}acial {I}mage {M}anipulations},
    author={Shivangi Aneja and Lev Markhasin and Matthias Nie{\ss}ner},
    journal={arXiv preprint arXiv:2112.09151},
    year={2021}
}