AMFG2023@ICCV 2023 : The 11th IEEE International Workshop on Analysis and Modeling of Faces and Gestures

Paris
Event Date: October 2, 2023 (one-day workshop)
Submission Deadline: August 11, 2023
Notification of Acceptance: September 01, 2023
Camera Ready Version Due: September 15, 2023


Call for Papers

Rapid advances have been made in face, gesture, and cross-modality (e.g., voice and face) technologies, driven by deep learning (dating back to AlexNet in 2012) and large-scale labeled datasets. Continued progress has pushed renowned public databases toward saturation, calling for ever more challenging image collections to be compiled. In practice, and widely in applied research, using off-the-shelf deep learning models has become the norm, as numerous pre-trained networks (e.g., VGG-Face, ResNet) are available for download and readily deployed to new, unseen data. This convenience, however, has kept many open questions hidden from view. Theoretically, what makes today's neural networks so discriminative remains unclear; to most practitioners, and even researchers, they act as a black box. More troublesome is the absence of tools to quantitatively and qualitatively characterize existing deep models, tools that could yield greater insight into these all-too-familiar black boxes. With the frontier moving forward at an unprecedented rate, challenges such as high variation in illumination, pose, and age now confront us, and state-of-the-art deep learning models often fail when faced with them, owing to the difficulty of modeling structured data and visual dynamics.

Alongside work on conventional face recognition, research spans cross-modality learning (such as face and voice), gestures in imagery and video motion, and several other tasks. This line of work has attracted attention from industry and academic researchers across many domains, and there has also been a push to advance these technologies for social-media-based applications. Regardless of the exact domain and purpose, the following capabilities must be satisfied: face and body tracking (e.g., facial expression analysis, face detection, gesture recognition); lip reading and voice understanding; face and body characterization (e.g., behavioral understanding, emotion recognition); face, body, and gesture characteristic analysis (e.g., gait, age, gender, ethnicity recognition); group understanding via social cues (e.g., kinship, non-blood relationships, personality); and visual sentiment analysis (e.g., temperament, arrangement). The ability to build effective models for such visual understanding therefore has significant value in both the scientific community and the commercial market, with applications spanning human-computer interaction, social media analytics, video indexing, visual surveillance, and internet vision. Researchers have made significant progress on many of these problems, especially given the off-the-shelf, cost-efficient vision products now available, e.g., Intel RealSense, SHORE, and Affdex. Nonetheless, serious challenges remain, and they only amplify under the unconstrained imaging conditions captured by different sources focused on non-cooperative subjects. It is these latter challenges that especially interest us, as we seek to bring together cutting-edge techniques and recent advances in deep learning to solve them in the wild.

This one-day serial workshop (AMFG2023) provides a forum for researchers to review the recent progress of recognition, analysis, and modeling of face, body, and gesture while embracing the most advanced deep learning systems available for face and gesture analysis, particularly under an unconstrained environment like social media and across modalities like face-to-voice. The workshop includes up to two keynotes and peer-reviewed papers (oral and poster). Original high-quality contributions are solicited on the following topics:
  • Novel deep models, deep learning surveys, or comparative studies for face/gesture recognition;
  • Data-driven or physics-based generative models for faces, poses, and gestures;
  • Deep learning for internet-scale soft biometrics and profiling: age, gender, ethnicity, personality, kinship, occupation, beauty ranking, and fashion classification by facial or body descriptor;
  • Deep learning for detection and recognition of faces and bodies with large 3D rotation, illumination change, partial occlusion, unknown/changing background, and aging (i.e., in the wild), especially recognition robust to large 3D rotation;
  • Motion analysis, tracking, and extraction of face and body models captured from several non-overlapping views;
  • Face, gait, and action recognition in low-quality (e.g., blurred) or low-resolution video from fixed or mobile device cameras;
  • AutoML for face and gesture analysis;
  • Social/psychological studies that aid in understanding computational modeling and in building better automated face and gesture systems with interactive features;
  • Multimedia learning models involving faces and gestures (e.g., voice, wearable IMUs, and face);
  • Trustworthy learning for face and gesture analysis, e.g., fairness, explainability, and transparency;
  • Other applications involving face and gesture analysis.


Summary

AMFG2023@ICCV 2023 : The 11th IEEE International Workshop on Analysis and Modeling of Faces and Gestures will take place in Paris. It is a one-day event held on Monday, October 2, 2023.

Submissions for this Workshop are due by August 11, 2023. Authors can expect notification of acceptance by September 1, 2023. Upon acceptance, authors should submit the camera-ready version of the manuscript on or before September 15, 2023 via the official website of the Workshop.

Please check the official event website for possible changes before making any travel arrangements. Events are generally strict with their deadlines, so it is advisable to confirm all dates on the official website.

Other Details of the AMFG2023@ICCV 2023

  • Short Name: AMFG2023@ICCV 2023
  • Full Name: The 11th IEEE International Workshop on Analysis and Modeling of Faces and Gestures
  • Timing: 09:00 AM-06:00 PM (expected)
  • Fees: Check the official website of AMFG2023@ICCV 2023
  • Event Type: Workshop
  • Website Link: https://web.northeastern.edu/smilelab/amfg2023/
  • Location/Address: Paris



