STMUS 2025 : International Workshop on Secure and Trustworthy Machine Unlearning Systems (co-located with ESORICS) Toulouse, France
Event Date: September 25, 2025 - September 26, 2025
Submission Deadline: June 29, 2025
Notification of Acceptance: July 26, 2025
Camera Ready Version Due: August 8, 2025

Call for Papers
Machine Unlearning (MU) is an emerging and promising technology that addresses the need for safe AI systems to comply with privacy regulations and safety requirements by removing undesired knowledge from AI models. As AI integration deepens across various sectors, the capability to selectively forget and eliminate knowledge from trained models, without retraining from scratch, provides significant advantages. This not only aligns with important data protection principles such as the "Right To Be Forgotten" (RTBF) but also enhances AI by removing undesirable, unethical, and even harmful memory from AI models.
However, the development of machine unlearning systems introduces complex security challenges. For example, when unlearning services are integrated into Machine Learning as a Service (MLaaS), multiple participants are involved, e.g., model developers, service providers, and users. Adversaries may exploit vulnerabilities in unlearning systems to attack ML models, e.g., by injecting backdoors, degrading model utility, or exploiting information leakage. Such attacks can be mounted by crafting unlearning requests, poisoning unlearning data, or reconstructing data and inferring membership using knowledge obtained from the unlearning process. Unlearning systems are therefore susceptible to a range of threats, risks, and attacks that could lead to misuse, resulting in potential privacy breaches and data leakage. The intricacy of these vulnerabilities calls for sophisticated strategies for threat identification, risk assessment, and the implementation of robust security measures to guard against both internal and external attacks. Despite its significance, there remains a widespread lack of comprehensive understanding and consensus among the research community, industry stakeholders, and government agencies regarding methodologies and best practices for implementing secure and trustworthy machine unlearning systems. This gap underscores the need for greater collaboration and knowledge exchange to develop practical and effective mechanisms that ensure the safe and ethical use of machine unlearning techniques.

Topics include, but are not limited to:

1. Architectures and algorithms for efficient machine unlearning.
2. Security vulnerabilities and threats specific to machine unlearning.
3. Strategies to manage vulnerabilities in machine unlearning systems.
4. Machine unlearning for large-scale AI models, e.g., large language models and multi-modal large models.
5. Evaluation of machine unlearning effectiveness, including metrics and testing methodologies.
6. Machine unlearning for data privacy and public trust.
7. Machine Unlearning as a Service.
8. Machine unlearning in distributed systems, e.g., federated unlearning.
9. Real-world applications and case studies of unlearning for AI systems.
Summary
STMUS 2025, the International Workshop on Secure and Trustworthy Machine Unlearning Systems (co-located with ESORICS), will take place in Toulouse, France. It is a two-day event, starting on Thursday, September 25, 2025 and concluding on Friday, September 26, 2025. STMUS 2025 falls under areas including machine learning, artificial intelligence, AI testing, and AI management. Submissions to the workshop are due by June 29, 2025, and authors can expect notification of acceptance by July 26, 2025. Upon acceptance, authors should submit the final version of the manuscript on or before August 8, 2025 via the official workshop website. Please check the official event website for possible changes before making any travel arrangements; events are generally strict with their deadlines, so it is advisable to verify all deadlines there.