NOTICE:

The Jainlab has transitioned to a new website format as of August 2022. For current information, please visit https://faculty.eng.ufl.edu/jain
Protecting Facial Privacy Through Face Swapping

Abstract

This project develops privacy mechanisms for clinical video observation sessions, focusing on approaches that protect the facial identity of the child under observation while retaining the gaze and expression information that is critical for diagnostic assessments.
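To see why face swapping is attractive here, it helps to contrast it with the naive privacy baseline of blurring the facial region. The sketch below box-blurs a hypothetical face bounding box in a grayscale frame; this is not the project's method (face swapping replaces the face rather than obscuring it), and it illustrates exactly the limitation that motivates this work: blurring destroys the gaze and expression cues that diagnostic assessments need.

```python
# Naive privacy baseline (NOT the project's face-swap approach):
# box-blur a detected face region in a grayscale frame.
# The frame and bounding box below are hypothetical.

def blur_region(frame, box, k=1):
    """Box-blur the pixels inside box = (top, left, bottom, right).

    frame is a 2D list of grayscale values; returns a new frame.
    k is the blur radius, so the kernel is (2k+1) x (2k+1).
    """
    h, w = len(frame), len(frame[0])
    top, left, bottom, right = box
    out = [row[:] for row in frame]
    for y in range(top, bottom):
        for x in range(left, right):
            # Average the neighborhood, clamped to the frame borders.
            ys = range(max(0, y - k), min(h, y + k + 1))
            xs = range(max(0, x - k), min(w, x + k + 1))
            vals = [frame[yy][xx] for yy in ys for xx in xs]
            out[y][x] = sum(vals) // len(vals)
    return out

# Hypothetical 4x4 frame with a bright 2x2 "face" at rows/cols 1-2.
frame = [[0, 0, 0, 0],
         [0, 255, 255, 0],
         [0, 255, 255, 0],
         [0, 0, 0, 0]]
blurred = blur_region(frame, (1, 1, 3, 3))
```

After blurring, the face region's pixels are averaged with their surroundings, so identity is harder to recover but so is any fine-grained expression signal, which is why the project pursues face swapping instead.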

SIGGRAPH Spotlight: Episode 39 – Deepfakes
For the latest episode of SIGGRAPH Spotlight, SIGGRAPH 2021 Technical Papers Chair Sylvain Paris (fellow, Adobe Research) is joined by a group of the computer graphics industry’s best and brightest to tackle the subject of deepfakes — from the history of deepfake tech through to how it’s being used today. Press play to hear insight in two parts from Chris Bregler (sr. staff scientist, Google AI), Eakta Jain (assistant professor, University of Florida), and Matthias Nießner (professor, TU München) [Part 1 – 0:00:34], and from ctrl shift face (independent artist) [Part 2 – 0:52:55].
Practical Digital Disguises: Leveraging Face Swaps to Protect Patient Privacy
"Practical Digital Disguises: Leveraging Face Swaps to Protect Patient Privacy", Ethan Wilson, Frederick Shic, Jenny Skytta, Eakta Jain, arXiv preprint arXiv:2204.03559 (2022)
  • Paper (PDF) (2.61 MB)
  • Bibtex entry:
    @misc{wilson2022digitaldisguises,
    doi = {10.48550/ARXIV.2204.03559},
    url = {https://arxiv.org/abs/2204.03559},
    author = {Wilson, Ethan and Shic, Frederick and Skytta, Jenny and Jain, Eakta},
    keywords = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Practical Digital Disguises: Leveraging Face Swaps to Protect Patient Privacy},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
    }
The Uncanniness of Face Swaps
"The Uncanniness of Face Swaps", Ethan Wilson, Aidan Persaud, Nicholas Esposito, Sophie Joerg, Frederick Shic, Rohit Patra, Jenny Skytta, Eakta Jain, Journal of Vision, 2022. (in press)
Abstract: Face swapping algorithms, popularly known as "deep fakes", generate synthetic faces whose movements are driven by an actor's face. To create face swaps, users construct training datasets consisting of the two faces being applied and replaced. Despite the availability of public code bases, creating a compelling, convincing face swap remains an art rather than a science because of the parameter tuning involved and the unclear consequences of parameter choices. In this paper, we investigate the effect of different dataset properties and how they influence the uncanny, eerie feeling viewers experience when watching face swaps. In one experiment, we present participants with video from the FaceForensics++ Deep Fake Detection dataset. We ask them to score the clips on bipolar adjective pairs previously designed to measure the uncanniness of computer-generated characters and faces within three categories: humanness, eeriness, and attractiveness. We find that responses to face swapped clips are significantly more negative than to unmodified clips. In another experiment, participants are presented with video stimuli of face swaps generated using deepfake models that have been trained on deficient data. These deficiencies include low resolution images, lowered numbers of images, deficient/mismatched expressions, and mismatched poses. We find that mismatches in resolution, expression, and pose and deficient expressions all induce a higher negative response compared to using an optimal training dataset. Our experiments indicate that face swapped videos are generally perceived to be more uncanny than original videos, but certain dataset properties can increase the effect, such as image resolution and quality characteristics including expression/pose match-ups between the two faces. These insights on dataset properties could be directly used by researchers and practitioners who work with face swapping to act as a guideline for higher-quality dataset construction. The presented methods additionally open up future directions for perceptual studies of face swapped videos.
  • Poster (PDF)
  • Bibtex entry:
    (on the way)
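The scoring procedure described in the abstract above, averaging bipolar adjective pair ratings into three category indices, can be sketched as follows. The pair-to-category mapping and the example ratings are illustrative placeholders, not the study's actual instrument.

```python
# Hypothetical sketch of index scoring: each clip is rated on bipolar
# adjective pairs, and pairs are averaged into three category indices
# (humanness, eeriness, attractiveness). The specific adjective pairs
# below are invented for illustration.

CATEGORIES = {
    "humanness": ["artificial-natural", "synthetic-real"],
    "eeriness": ["reassuring-eerie", "ordinary-creepy"],
    "attractiveness": ["ugly-attractive"],
}

def category_scores(ratings):
    """Average per-pair ratings (e.g. on a 1-7 scale) into category indices."""
    return {
        cat: sum(ratings[pair] for pair in pairs) / len(pairs)
        for cat, pairs in CATEGORIES.items()
    }

# One participant's hypothetical ratings for a single clip.
ratings = {
    "artificial-natural": 3, "synthetic-real": 4,
    "reassuring-eerie": 6, "ordinary-creepy": 5,
    "ugly-attractive": 4,
}
scores = category_scores(ratings)
```

Comparing such per-category indices between face-swapped and unmodified clips is the kind of analysis the abstract describes, with the paper's finding being significantly more negative responses for the swapped clips.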
Annotation System For Aiding Automatic Face Detectors
"Annotation System For Aiding Automatic Face Detectors", Ethan Wilson, Jenny Skytta, Frederick Shic, Eakta Jain, University of Florida Technical Report, IR00011535, 2021.
  • Paper (PDF)
  • Bibtex entry:
    @techreport{wilson2021annotation,
    title = {Annotation System For Aiding Automatic Face Detectors},
    author = {Ethan Wilson and Jenny Skytta and Frederick Shic and Eakta Jain},
    year = {2021},
    institution = {University of Florida},
    number = {IR00011535}
    }
Benchmarking Face Detectors
"Benchmarking Face Detectors", Ethan Wilson, Jenny Skytta, Frederick Shic, Eakta Jain, University of Florida Technical Report, IR00011536, 2021.
  • Paper (PDF)
  • Bibtex entry:
    @techreport{wilson2021benchmarking,
    title = {Benchmarking Face Detectors},
    author = {Ethan Wilson and Jenny Skytta and Frederick Shic and Eakta Jain},
    year = {2021},
    institution = {University of Florida},
    number = {IR00011536}
    }