TAFIM: Targeted Adversarial Attacks against Face Image Manipulations

ECCV 2022

1Technical University of Munich
2Sony Europe RDC Stuttgart

Method Overview

We propose a novel approach to protect facial images from several image manipulation models simultaneously. Our method generates quasi-imperceptible perturbations using a learned neural network. These perturbations, when added to real images, force the face manipulation models to produce a predefined manipulation target as output. Unlike existing methods that require an image-specific optimization, we leverage a neural network to encode the generation of image-specific perturbations, which is several orders of magnitude faster and can be used in real-time applications. In addition, our generated perturbations are robust to JPEG compression.
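The protection step described above can be sketched as a single forward pass: the learned network maps an image to a bounded perturbation, which is added to the image before sharing. The sketch below is illustrative only; `perturbation_model`, the `epsilon` bound, and the toy stand-in network are assumptions for demonstration, not the paper's actual architecture or values.

```python
import numpy as np

def protect_image(image, perturbation_model, epsilon=8 / 255):
    """Add a quasi-imperceptible, image-specific perturbation.

    `perturbation_model` stands in for the learned network from the
    paper: any callable mapping an image in [0, 1] to a perturbation
    of the same shape. The epsilon bound is illustrative.
    """
    delta = perturbation_model(image)
    # Bound the perturbation so it stays quasi-imperceptible.
    delta = np.clip(delta, -epsilon, epsilon)
    # Keep the protected image in the valid intensity range.
    return np.clip(image + delta, 0.0, 1.0)

# Toy stand-in for the learned network (NOT the paper's model):
rng = np.random.default_rng(0)
toy_model = lambda img: 0.05 * np.tanh(rng.standard_normal(img.shape))

image = rng.random((64, 64, 3))
protected = protect_image(image, toy_model)
```

Because protection is a single forward pass rather than a per-image optimization loop, it can run in real time, e.g. inside a camera app's processing stack.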

ABSTRACT

Face image manipulation methods, despite having many beneficial applications in computer graphics, can also raise concerns when they affect an individual's privacy or spread disinformation. In this work, we propose a proactive defense that prevents face manipulation from happening in the first place. To this end, we introduce a novel data-driven approach that produces image-specific perturbations which are embedded in the original images. The key idea is that these protected images prevent face manipulation by causing the manipulation model to produce a predefined manipulation target (a uniformly colored output image in our case) instead of the actual manipulation. In addition, we leverage a differentiable compression approximation, making the generated perturbations robust to common image compression. To protect against multiple manipulation methods simultaneously, we further propose a novel attention-based fusion of manipulation-specific perturbations. Compared to traditional adversarial attacks that optimize noise patterns for each image individually, our generalized model only needs a single forward pass, thus running orders of magnitude faster and allowing for easy integration in image processing stacks, even on resource-constrained devices like smartphones.
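To protect one image against several manipulation models at once, the abstract describes an attention-based fusion of manipulation-specific perturbations. A minimal sketch of that idea: per-pixel attention weights (a softmax over the manipulation models) blend the individual perturbations into one. All names, shapes, and the softmax formulation here are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def fuse_perturbations(perturbations, attention_logits):
    """Attention-weighted fusion of manipulation-specific perturbations.

    `perturbations`: shape (M, H, W, C), one perturbation per
    manipulation model; `attention_logits`: per-pixel logits of shape
    (M, H, W, 1). Shapes are illustrative, not taken from the paper.
    """
    weights = softmax(attention_logits, axis=0)  # sum to 1 over models
    # Convex combination per pixel: the fused perturbation stays within
    # the range spanned by the individual perturbations.
    return np.sum(weights * perturbations, axis=0)

rng = np.random.default_rng(1)
perts = 0.03 * rng.standard_normal((3, 8, 8, 3))  # 3 manipulation models
logits = rng.standard_normal((3, 8, 8, 1))
fused = fuse_perturbations(perts, logits)
```

Since the weights form a convex combination at every pixel, the fused perturbation never exceeds the magnitude bounds of the individual ones, preserving imperceptibility while covering multiple attack targets.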

VIDEO

RESULTS

Comparison with Baselines

Robustness to Compression

Multiple Manipulation Models

BibTeX

If you find this work useful for your research, please consider citing:


@InProceedings{aneja2022tafim,
  author    = "Aneja, Shivangi and Markhasin, Lev and Nie{\ss}ner, Matthias",
  title     = "TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations",
  booktitle = "Computer Vision -- ECCV 2022",
  year      = "2022",
  publisher = "Springer Nature Switzerland",
  address   = "Cham",
  pages     = "58--75",
  isbn      = "978-3-031-19781-9"
}