Improving Video Deepfake Detection: A DCT-Based Approach with Patch-Level Analysis



IS&T International Symposium on Electronic Imaging Science and Technology, 2024



Luca Guarnera, Salvatore Manganello, Sebastiano Battiato
Department of Mathematics and Computer Science, University of Catania, Italy
luca.guarnera@unict.it, sebastiano.battiato@unict.it














Proposed approach: (a) For each patch $a \in A$ of the I-frames of the video $V$, the DCT is calculated and the $\beta$ components of the 63 AC coefficients are extracted. (b) The final feature vectors $p_V^a$ of video $V$ are used by the various classifiers to solve the Real vs. Deepfake task and to identify the most discriminative regions.
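
As an illustration of step (a), the sketch below computes the 8x8 block DCT of a patch and estimates one $\beta$ value per AC frequency as the scale of a zero-mean Laplacian, the statistical model commonly assumed for AC DCT coefficients. The function names (block_dct, beta_features), the use of scipy, and the omission of zig-zag ordering are assumptions made for illustration, not the authors' implementation.

import numpy as np
from scipy.fftpack import dct

def block_dct(patch):
    """Split a grayscale patch into 8x8 blocks and return their 2-D DCTs."""
    h, w = patch.shape
    h, w = h - h % 8, w - w % 8          # drop incomplete border blocks
    blocks = (patch[:h, :w]
              .reshape(h // 8, 8, w // 8, 8)
              .swapaxes(1, 2)
              .reshape(-1, 8, 8)
              .astype(np.float64))
    return dct(dct(blocks, axis=1, norm='ortho'), axis=2, norm='ortho')

def beta_features(patch):
    """Return a 63-dim vector: one Laplacian scale (beta) per AC frequency."""
    coeffs = block_dct(patch).reshape(-1, 64)   # one row of 64 DCT coefficients per block
    ac = coeffs[:, 1:]                          # discard the DC coefficient
    return np.mean(np.abs(ac), axis=0)          # ML estimate of the zero-mean Laplacian scale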



ABSTRACT


A new algorithm for the detection of deepfakes in digital videos is presented. Only the I-frames are extracted, which makes computation and analysis faster than in approaches described in the literature. To identify the discriminative regions within individual video frames, the entire frame, background, face, eyes, nose, mouth, and face frame are analyzed separately. From the Discrete Cosine Transform (DCT), the $\beta$ components of the AC coefficients are extracted and used as input to standard classifiers. Experimental results show that the eye and mouth regions are the most discriminative and are able to determine the nature of the video under analysis.
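
The surrounding pipeline can be sketched as follows, assuming ffmpeg is available on the system and that the per-region $\beta$ vectors come from a routine such as beta_features above; the helper name extract_iframes and the choice of a linear SVM are illustrative assumptions, not the authors' code.

import subprocess
from pathlib import Path

def extract_iframes(video_path, out_dir):
    """Dump only the I-frames of a video as PNG images using ffmpeg."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    pattern = str(Path(out_dir) / "iframe_%04d.png")
    subprocess.run(
        ["ffmpeg", "-i", str(video_path),
         "-vf", "select=eq(pict_type\\,I)",   # keep intra-coded (I) frames only
         "-vsync", "vfr",                     # one output image per selected frame
         pattern],
        check=True)
    return sorted(Path(out_dir).glob("iframe_*.png"))

# The per-region beta vectors can then feed a standard classifier, e.g. a linear SVM:
# from sklearn.svm import LinearSVC
# clf = LinearSVC().fit(X_train, y_train)   # X: beta feature vectors, y: real/deepfake labels
# predictions = clf.predict(X_test)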







Cite:
@inproceedings{guarnera2024improving,
   author = {Guarnera, Luca and Manganello, Salvatore and Battiato, Sebastiano},
   title = {Improving Video Deepfake Detection: A DCT-Based Approach with Patch-Level Analysis},
   year = {2024},
   booktitle = {IS\&T International Symposium on Electronic Imaging Science and Technology},
   volume = {36},
   number = {4},
   doi = {10.2352/EI.2024.36.4.MWSF-333}
}