17th ACM SAC 2025 - Track
Innovations in Multimedia Forensics and Biometric Security
March 31 - April 4, 2025 — Sicily, Italy
co-located with the 40th ACM/SIGAPP Symposium on Applied Computing

Call for Papers

Important Dates

  • Submission of regular papers: September 20, 2024
  • Notification of paper acceptance/rejection: October 30, 2024
  • Camera-ready copies of accepted papers: November 29, 2024
  • Author registration due date: December 6, 2024

Overview

Digital content is widely regarded as an accurate reflection of reality and is frequently used as legal evidence. However, the widespread availability of user-friendly editing tools raises concerns about the integrity and authenticity of such content. Recent advances in deep learning, particularly in generative models based on Generative Adversarial Networks (GANs) and Diffusion Models, have facilitated the creation and manipulation of hyper-realistic digital content known as Deepfakes. Although these tools can be used creatively, their potential for malicious use is alarming: they can be used to spread misinformation by impersonating trusted individuals or to tarnish the reputation of public figures by placing them in defamatory contexts.
The field of multimedia forensics focuses on developing algorithms to detect manipulated data using state-of-the-art technologies. A significant challenge for these algorithms is generalization from the training dataset to real-world data. Other hurdles include ensuring the interpretability of detection results and attributing generated images to specific generator networks or image/video processing tools. In addition, since attackers may apply adversarial perturbations to generated images to evade detection, forensic algorithms must also take such counter-forensic methods into account.
The rapidly evolving landscape of Deepfake technology necessitates ongoing development and refinement of detection methods. The interdisciplinary nature of this research is reflected in the proposed special track, which encompasses a wide range of topics. These include Deepfake detection in images, video, and audio, presenting a comprehensive approach to combating illicit uses. Tasks related to Deepfake model recognition and attribution offer innovative perspectives in this field. Adversarial forensics, which intersects with technology, legal, and investigative domains, further enriches the discourse by highlighting the vulnerabilities of machine learning models.
Moreover, the incorporation of new multimodal datasets for Deepfake creation and detection signifies a paradigm shift in research methodologies, acknowledging the intricate interplay between various data sources.
Since manipulated content can be maliciously used to evade biometric recognition systems, the proposed special track acknowledges the importance of advancing research in biometric recognition to counteract these potential threats.
The proposed track is driven by urgent motivations and interdisciplinary aspects, setting itself apart through its innovative approach. It pushes the frontiers of research and exploration in the dynamic field of Deepfake technology, emphasizing the need for advanced detection mechanisms and a comprehensive understanding of this rapidly evolving threat landscape. Additionally, modern large language models (LLMs) can play a crucial role in this domain, offering enhanced capabilities for pattern recognition, data analysis, and the interpretation of complex forensic evidence, thereby contributing to more robust and effective solutions in digital forensics and biometric authentication.

Topics of Interest

Topics of interest include (but are not limited to):

  • Digital Forensics and Analysis
  • Multimedia Forensics
  • Deepfake Detection/Creation
  • Multimodal Deepfake Detection/Creation
  • Deepfake Model Recognition or Deepfake Attribution
  • Adversarial Forensics
  • Generative models
  • Multimodal datasets for Digital Forensics
  • Deepfakes and Biometrics
  • Ethics in Data Synthesis and Manipulations
  • Biometrics

Submission Guidelines

We invite the following types of papers:

  • Original research papers on any topic in the intersection of AI, Machine Learning or Deep Learning with security, privacy, or related areas.
  • Position and open-problem papers discussing the relationship of AI, Machine Learning or Deep Learning to Deepfakes. Submitted papers of this type may not substantially overlap with papers that have been published previously or that are simultaneously submitted to a journal or conference/workshop proceedings.
  • Systematization-of-knowledge papers, which should distill the AI, Machine Learning or Deep Learning contributions of a previously published series of Deepfake papers.

Author Kit

Submission Rules
Submissions must be in English and properly anonymized.

Submission Site

Submission link:

All accepted submissions will be presented at the track as posters. Accepted papers will be selected for presentation as spotlights based on their review score and novelty. Nonetheless, all accepted papers should be considered as having equal importance and will be included in the ACM SAC proceedings.

One author of each accepted paper is required to attend the track and present the paper for it to be included in the proceedings.

For any questions, please contact one of the organizers at luca.guarnera@unict.it, alessandro.ortis@unict.it, or giulia.orru@unica.it.

Committee

Track Chairs

Steering Committee

  • TBD

Program Committee

  • TBD