Workshops

Accepted Workshops

  1. Title – 2nd International Workshop on Synthetic Data for Face and Gesture Analysis (SD-FGA)

Authors – Vitomir Struc (University of Ljubljana); Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences); Fadi Boutros (Fraunhofer IGD); Naser Damer (Fraunhofer Institute for Computer Graphics Research IGD and TU Darmstadt); Deepak Jain (Dalian University of Technology)

Overview – This workshop delves into the diverse applications of synthetic data in face and gesture analysis, exploring its transformative role in facial and gesture-based AI. Attendees will gain insight into how synthetic datasets can be leveraged to train AI models to recognize faces, detect emotions, and analyze gestures without relying on extensive real-world data. Offering an ethical and scalable alternative to traditional data collection, synthetic data has emerged as a powerful resource for overcoming data limitations and improving model performance. The workshop will provide a collaborative platform for practitioners and researchers to explore novel methods and challenges in the application of synthetic data, and to discuss its potential to shape future advances in computer vision.
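
To ground the idea, here is a minimal sketch of training a recognition backbone entirely on synthetic identities. The `synthetic_faces/` directory layout, the ResNet-50 backbone, and all hyperparameters are illustrative assumptions, not a method endorsed by the workshop.

```python
# Minimal sketch: training a face-recognition backbone purely on synthetic
# identities. Paths and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder layout: synthetic_faces/<identity_id>/<image>.png
transform = transforms.Compose([
    transforms.Resize((112, 112)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("synthetic_faces/", transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A standard backbone stands in for a dedicated face-recognition network.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, identities in loader:  # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), identities)
    loss.backward()
    optimizer.step()
```

Since every label comes from the generation process, annotation is free; the open research question the workshop targets is how well such models transfer to real faces.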

  2. Title – 3rd Workshop on Learning with Few or Without Annotated Face, Body and Gesture Data

Authors – Maxime Devanne (Université de Haute Alsace); Mohamed Daoudi (IMT Nord Europe / CRIStAL (UMR 9189))

Overview – For more than a decade, Deep Learning has been successfully employed for vision-based face, body and gesture analysis, at both static and dynamic granularities. This is largely due to the development of effective deep architectures and the release of sizeable datasets. However, one of the main limitations of Deep Learning is that it requires large-scale annotated datasets to train efficient models. Gathering such face, body or gesture data and annotating it can be very time-consuming and laborious. This is particularly the case in areas where experts from the field are required, as in the medical domain, where crowdsourcing may not be suitable. In addition, currently available face and/or gesture datasets cover a limited set of categories, which makes adapting trained models to novel categories far from straightforward. Finally, while most available datasets focus on classification problems with discretized labels, many scenarios require continuous annotations, which significantly complicates the annotation process. The goal of this 3rd edition of the workshop is to explore approaches that overcome these limitations by learning from few annotated data, transferring knowledge from similar domains or problems, generating new data, or drawing on the community to gather novel large-scale annotated datasets.
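
As one concrete instance of learning from few annotated samples, the sketch below freezes a pretrained backbone and fits only a small classification head. The gesture-class count, tensor shapes, and random stand-in data are placeholders.

```python
# Sketch of transfer learning when only a handful of gesture labels exist:
# freeze a pretrained backbone and fit a lightweight classification head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False          # keep pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # e.g., 5 gesture classes

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-ins for a tiny labeled set: 25 images, 5 classes.
few_shot_batch = torch.randn(25, 3, 224, 224)
labels = torch.randint(0, 5, (25,))

for _ in range(100):                     # many passes over the tiny set
    optimizer.zero_grad()
    loss = criterion(backbone(few_shot_batch), labels)
    loss.backward()
    optimizer.step()
```

Training only the head keeps the number of learned parameters small enough that a few dozen labeled examples can suffice, which is exactly the regime this workshop studies.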

  3. Title – Towards Trustworthy Facial Affect Analysis: Advancing Insights of Fairness, Explainability, and Safety (TrustFAA)

Authors – Jiaee Cheong (University of Cambridge); Yang Liu (University of Oulu); Harold Soh (National University of Singapore); Hatice Gunes (University of Cambridge)

Overview – With the increasing prevalence and deployment of Emotion AI-powered facial affect analysis (FAA) tools, concerns about the trustworthiness of these systems have become more prominent. This first workshop on “Towards Trustworthy Facial Affect Analysis: Advancing Insights of Fairness, Explainability, and Safety (TrustFAA)” aims to bring together researchers investigating different challenges related to trustworthiness—such as interpretability, uncertainty, biases, and privacy—across various facial affect analysis tasks, including macro-/micro-expression recognition and facial action unit detection, related applications such as pain and depression detection, and human-robot interaction and collaboration. In alignment with FG2025’s emphasis on ethics, as demonstrated by the inclusion of an Ethical Impact Statement requirement for this year’s submissions, this workshop supports FG2025’s efforts by encouraging research, discussion, and dialogue on trustworthy FAA.
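
A simple way to make the fairness concern concrete is to compare a classifier's behavior across demographic subgroups. The toy sketch below computes per-group accuracy and a demographic parity gap on placeholder arrays, not real evaluation data.

```python
# Illustrative fairness probe for a facial-expression classifier: compare
# accuracy and positive-prediction rates across demographic subgroups.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # expression present?
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # model predictions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    pos_rate = y_pred[mask].mean()
    print(f"group {g}: accuracy={acc:.2f}, positive rate={pos_rate:.2f}")

# Demographic parity difference: gap in positive-prediction rates.
rates = [y_pred[group == g].mean() for g in np.unique(group)]
print("demographic parity difference:", max(rates) - min(rates))
```

Gaps of this kind, measured on affect-recognition tasks rather than toy arrays, are one of the trustworthiness signals the workshop invites work on.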

  4. Title – Foundation and Generative Models in Automatic Face and Gesture Recognition Workshop

Authors – Hatef Otroshi Shahreza (Idiap Research Institute); Vitomir Struc (University of Ljubljana); Zhen Lei (NLPR, CASIA, China); Arun Ross (Michigan State University); Sébastien Marcel (Idiap Research Institute)

Overview – Recent developments in foundation and generative models have revolutionized AI, creating enormous opportunities in different fields, including face and gesture recognition. Foundation models (such as CLIP, GPT, etc.) enable robust feature extraction and transfer learning. In addition, generative models allow synthetic data generation, privacy-preserving learning, and advanced data augmentation techniques. Foundation and generative models are reshaping the field by improving accuracy, robustness, and interpretability in automatic face and gesture recognition. This workshop aims to bring together researchers to discuss state-of-the-art advancements, applications, and challenges in the application of foundation and generative models to face and gesture recognition. The workshop will foster discussions that inspire innovation and address challenges in real-world applications of these advanced models.
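
Since the overview names CLIP explicitly, here is a minimal sketch of using it as a frozen, zero-shot classifier for a gesture image via the Hugging Face `transformers` API. The checkpoint, image path, and text prompts are illustrative assumptions.

```python
# Sketch: a foundation model (CLIP) as a frozen zero-shot gesture classifier.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("gesture.jpg")                    # hypothetical input
prompts = ["a photo of a waving hand", "a photo of a thumbs up"]

inputs = processor(text=prompts, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity doubles as a zero-shot classifier.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```

No task-specific training is involved: recognition reduces to comparing the image embedding against text-prompt embeddings, which is the transfer-learning property the overview highlights.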

  5. Title – Second Workshop on Interdisciplinary Applications of Biometrics and Identity Science (InterID 2025)

Authors – Tempestt Neal (University of South Florida); Shaun Canavan (University of South Florida); Patrick Flynn (University of Notre Dame) 

Overview – Biometric recognition generally involves person identification or identity verification using well-known biometric modalities, such as face, fingerprint, and voice. Researchers in the field investigate and develop novel biometric datasets, data collection methodologies, sensors, feature extraction approaches, and matching models (e.g., machine or deep learning techniques) to enhance accuracy, reduce misidentification errors, improve data quality, and fuse multimodal data sources. Beyond these core technical advances, recent applications in fields like medical sciences, mental health, and transportation have demonstrated the potential of biometrics to drive interdisciplinary innovation. For example, biometric technologies have been used to monitor changes in behavioral outcomes following interventions or to detect physical or psychological states. These applications highlight the unique value of biometrics as a specialized tool within broader interdisciplinary contexts, where biometric-specific processes—such as identity-focused sensing, multimodal biometric signal correlation, or domain-specific customization of recognition algorithms—are tailored to address challenges beyond traditional person identification.
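
One biometric-specific process mentioned above, multimodal biometric signal correlation, is often realized as score-level fusion. The toy sketch below normalizes and combines face and voice matcher scores; all values and weights are placeholders.

```python
# Toy sketch of score-level fusion across two biometric modalities.
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so modalities are comparable."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

face_scores  = min_max_normalize([0.62, 0.91, 0.40])   # per-candidate
voice_scores = min_max_normalize([0.55, 0.80, 0.35])

# Weighted-sum fusion; weights would normally be tuned on validation data.
fused = 0.6 * face_scores + 0.4 * voice_scores
print("best match index:", int(fused.argmax()))
```

The same fusion pattern carries over to the interdisciplinary settings the workshop targets, where the combined signal may indicate a physical or psychological state rather than an identity.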

  6. Title – 1st International Workshop on Foundation and Multimodal Large Language Models for Face and Gesture Recognition

Authors – Tahar Chettaoui (Fraunhofer IGD); Fadi Boutros (Fraunhofer IGD); Peter Peer (University of Ljubljana); Ruben Tolosana (Universidad Autónoma de Madrid); Ruben Vera-Rodriguez (Universidad Autónoma de Madrid); Naser Damer (Fraunhofer Institute for Computer Graphics Research IGD and TU Darmstadt)

Overview – The field of face and gesture recognition has recently experienced a transformative shift with the rise of foundation models and multimodal large language models (LLMs), which offer unprecedented capabilities to process and integrate multimodal data (e.g., text, images, video, and audio) in a unified framework. This workshop aims to explore the implications and potential uses of these models specifically for face and gesture recognition tasks. As LLMs increasingly support multimodal functionalities, they provide a promising avenue to advance the field beyond traditional techniques, facilitating richer, contextually aware, and potentially more accurate recognition systems, alongside other key aspects of the field such as explainability. This workshop will foster collaboration among researchers interested in advancing multimodal LLMs for face and gesture recognition, encouraging interdisciplinary insights and new research that leverages these models for tasks such as real-time emotion recognition, social behavior analysis, and advanced biometrics.
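
As a small illustration of querying a vision-language model about facial affect, the sketch below uses a visual-question-answering pipeline as a lightweight stand-in for a full multimodal LLM. The model choice, image path, and question are assumptions for illustration.

```python
# Minimal sketch: asking a vision-language model about facial affect.
# A VQA pipeline stands in for a full multimodal LLM.
from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

answer = vqa(image="face.jpg",            # hypothetical face image
             question="What emotion is this person expressing?")
print(answer)   # e.g., [{'answer': 'happy', 'score': ...}]
```

A production system would more likely prompt a larger multimodal LLM, but the interface shape, an image plus a natural-language query, is the same, and it is this shape that opens the contextual and explainability opportunities the overview describes.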