Machine Unlearning (MU) enables AI systems to selectively forget data, supporting privacy principles such as the Right to Be Forgotten and improving safety by removing harmful or unethical content. As its adoption grows, MU also introduces security challenges, including backdoor attacks, data leakage, and malicious unlearning requests.
This workshop aims to bring together researchers, industry experts, and policymakers to explore the underlying technologies, address emerging threats, and develop secure, ethical, and standardised approaches to machine unlearning.
Venue information: Toulouse, France (co-located with ESORICS 2025)
Machine Unlearning (MU) is an emerging and promising technology that addresses the need for safe AI systems to comply with privacy regulations and safety requirements by removing undesired knowledge from AI models. As AI integration deepens across various sectors, the ability to selectively forget and eliminate knowledge from trained models, without retraining them from scratch, provides significant advantages. This not only aligns with important data protection principles such as the “Right to Be Forgotten” (RTBF) but also enhances AI safety by removing undesirable, unethical, and even harmful memorised content from AI models.
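To make the underlying idea concrete, the sketch below illustrates one widely studied family of approximate unlearning: taking gradient *ascent* steps on the data to be forgotten rather than retraining from scratch. This is a minimal illustration under stated assumptions, not a method endorsed by the workshop; the model, data loader, and hyperparameters (SimpleNet, forget_loader, lr, steps) are hypothetical, and practical systems pair such updates with utility-preservation terms and verification.

```python
# Minimal sketch of approximate unlearning via gradient ascent (assumes PyTorch).
# All names here (SimpleNet, forget_loader) are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):
    """Toy classifier standing in for any trained model."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

def unlearn_by_gradient_ascent(model, forget_loader, lr=1e-3, steps=1):
    """Push the model away from the forget set by maximising its loss
    on those samples (i.e., negating the usual gradient-descent step)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, y in forget_loader:
            opt.zero_grad()
            loss = -F.cross_entropy(model(x), y)  # negated loss => ascent
            loss.backward()
            opt.step()
    return model

# Dummy forget set so the sketch runs end to end.
forget_x = torch.randn(32, 10)
forget_y = torch.randint(0, 2, (32,))
model = unlearn_by_gradient_ascent(SimpleNet(), [(forget_x, forget_y)])
```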
However, the development of machine unlearning systems introduces complex security challenges. For example, when unlearning services are integrated into Machine Learning as a Service (MLaaS), multiple participants are involved, e.g., model developers, service providers, and users. Adversaries might exploit vulnerabilities in unlearning systems to attack ML models, e.g., by injecting backdoors, degrading model utility, or exploiting information leakage. This can be achieved by crafting unlearning requests, poisoning unlearning data, or reconstructing data and inferring membership using knowledge obtained from the unlearning process. Unlearning systems are therefore susceptible to a range of threats, risks, and attacks whose misuse could result in privacy breaches and data leakage. The intricacy of these vulnerabilities calls for sophisticated strategies for threat identification, risk assessment, and the implementation of robust security measures against both internal and external attacks.
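As one concrete instance of the information-leakage risk described above, an adversary with black-box access to the model before and after an unlearning request can score membership by the drop in the model's confidence on a target sample, in the spirit of unlearning-based membership inference attacks. The sketch below is a hypothetical illustration only; the model handles and decision threshold are assumptions, not a fixed attack recipe.

```python
# Hedged sketch: membership inference from pre-/post-unlearning model outputs
# (assumes PyTorch; model snapshots and the 0.2 threshold are hypothetical).
import torch
import torch.nn.functional as F

@torch.no_grad()
def membership_score(model_before, model_after, x, y):
    """Confidence drop on (x, y) across the unlearning update.
    A large drop suggests (x, y) was among the unlearned (member) data."""
    p_before = F.softmax(model_before(x), dim=-1)[0, y].item()
    p_after = F.softmax(model_after(x), dim=-1)[0, y].item()
    return p_before - p_after

# Hypothetical usage with two model snapshots and one candidate sample:
# score = membership_score(snapshot_before, snapshot_after, x.unsqueeze(0), y)
# predicted_member = score > 0.2  # threshold is deployment-specific (assumed)
```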
Despite its significance, there remains a widespread lack of comprehensive understanding and consensus among the research community, industry stakeholders, and government agencies regarding methodologies and best practices for implementing secure and trustworthy machine unlearning systems. This gap underscores the need for greater collaboration and knowledge exchange to develop practical and effective mechanisms that ensure the safe and ethical use of machine unlearning techniques.
This workshop aims to bring together leading researchers, industry experts, and policymakers to address the critical challenges and opportunities in machine unlearning. Our specific goals are:
(i) To foster a deeper understanding of the underlying technologies behind machine unlearning.
(ii) To critically examine the threats and risks within machine unlearning systems, discussing potential security breaches and their mitigations.
(iii) To explore innovative solutions that enhance the security and efficacy of unlearning systems.
(iv) To formulate strategies that address the legal and ethical implications of deploying machine unlearning technologies.
(v) To encourage collaborative efforts that will set the foundation for standardised practices in the application of machine unlearning.
General Chair:
PC Co-Chairs:
Program Committee:
Name | Organisation
Abdelmalek Benzekri | IRIT/UPS Toulouse
Alessandro Erba | KIT
Alessio Mora | University of Bologna
Betty Mayeku | Kibabii University
Carlo Mazzocca | University of Salerno
Dieter Gollmann | Hamburg University of Technology
Elif Bilge Kavun | University of Passau
Emil Lupu | Imperial College
Fabio Martinelli | IIT-CNR
Florian Lemmerich | University of Passau
Jaideep Vaidya | Rutgers University
Joachim Posegga | University of Passau
Johannes Kinder | LMU Munich
Jun Zhao | Nanyang Technological University
Konrad Rieck | TU Berlin
Kwok-Yan Lam | Nanyang Technological University
Minxin Du | Hong Kong Polytechnic University (PolyU)
Nora Boulahia-Cuppens | Polytechnique Montréal
Romain Laborde | IRIT
Sayan Biswas | EPFL
Saeedeh Momtaz | Amirkabir University of Technology
Shuo Wang | Shanghai Jiao Tong University
Sven Dietrich | City University of New York
Shynar Mussiraliyeva | Al-Farabi Kazakh National University
Vincent Nicomette | LAAS/CNRS
Xingliang Yuan | University of Melbourne
Yinzhi Cao | Johns Hopkins University
Ziyao Liu | Nanyang Technological University
Submitted papers must follow the LNCS template from the time they are submitted.
All submissions must be written in English.
Only PDF files will be accepted.
Submissions must not substantially overlap with papers already published or simultaneously submitted to a journal or another conference with proceedings.
Submissions are not anonymous.
Submissions not meeting these guidelines risk rejection without consideration of their merits.
Full papers: Maximum of 16 pages in LNCS format, including bibliography and well-marked appendices.
Short papers: Maximum of 8 pages in LNCS format, including bibliography.
Note: Committee members are not required to read appendices; papers should be intelligible without them.
Submissions must be uploaded via EasyChair:
https://easychair.org/conferences/?conf=esorics2025
Accepted papers are scheduled to be published by Springer as part of the ESORICS Proceedings under the LNCS series.
Authors of accepted papers must agree to the Springer LNCS copyright terms and guarantee that their papers comply with them.
At least one author of each accepted paper must register and present their work at the workshop.
Papers without a presenting author will not be included in the proceedings.
Contact: dtc-esorics@ntu.edu.sg