Visiting Researcher Talk: Dr Feng Liu | 22 January 2026
Talk Title
Statistics as a Compass for AI Security
Speaker
Dr Feng Liu
About the Speaker
Dr Feng Liu is a machine learning researcher with research interests in statistically trustworthy machine learning. He is currently an ARC DECRA Fellow, a Senior Lecturer (equivalent to a US Associate Professor) at The University of Melbourne, Australia, and a Visiting Scientist at RIKEN-AIP, Japan. He has served as an Area Chair for AISTATS, ICLR, ICML, and NeurIPS, and as a senior program committee (SPC) member for AAAI and IJCAI. He has received the Australasian AI Emerging Research Award from the Australian Computer Society, the Discovery Early Career Researcher Award (DECRA) from the Australian Research Council, the Outstanding Paper Award at NeurIPS 2022, the Best Paper Award at the AAAI 2025 Workshop CoLoRAI, the Best Student Paper Award at FUZZ-IEEE 2019, and the Best Paper Runner-up Award at ECIS 2023.
Description
As AI systems become pervasive, AI researchers and practitioners face new challenges ranging from adversarial attacks to data privacy leaks. This talk argues that many such AI risks can be addressed more effectively by adopting a statistical perspective. We first demonstrate how a two-sample statistical test based on the Maximum Mean Discrepancy (MMD) [1] can be adapted to detect adversarial examples by measuring subtle distributional disparities [2]. Building on this detection capability, we introduce a two-pronged defense approach that not only flags adversarial inputs but also purifies them, significantly improving model robustness without sacrificing accuracy [3].
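To make the idea concrete, the sketch below shows the basic ingredient behind this line of work: an unbiased MMD^2 estimate between a clean reference batch and a suspect batch, turned into a detection decision via a permutation test. This is a minimal illustration with a plain Gaussian kernel, not the learned deep kernel of [1] or the adversarial-aware statistic of [2]; the batch sizes, bandwidth, and toy data are assumptions for demonstration only.

```python
# Minimal sketch: Gaussian-kernel MMD^2 as a two-sample statistic, plus a
# permutation test of "clean batch and suspect batch share a distribution".
# Not the deep-kernel test of [1] or the method of [2]; illustrative only.
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    # k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    # Unbiased MMD^2 estimator: within-sample diagonal terms are excluded.
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

def permutation_test(X, Y, n_perm=200, bandwidth=1.0, seed=0):
    # p-value for the null hypothesis that X and Y come from the same
    # distribution, obtained by shuffling the pooled sample.
    rng = np.random.default_rng(seed)
    observed = mmd2_unbiased(X, Y, bandwidth)
    pooled = np.vstack([X, Y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        Xp, Yp = pooled[perm[:len(X)]], pooled[perm[len(X):]]
        if mmd2_unbiased(Xp, Yp, bandwidth) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Toy usage: a clean feature batch vs. a slightly shifted "suspect" batch.
clean = np.random.default_rng(1).normal(0.0, 1.0, size=(100, 16))
suspect = np.random.default_rng(2).normal(0.3, 1.0, size=(100, 16))
stat, p = permutation_test(clean, suspect)
print(f"MMD^2 = {stat:.4f}, permutation p-value = {p:.3f}")
```

A small p-value indicates a distributional disparity between the two batches; in the adversarial-detection setting, the suspect batch would be model inputs under scrutiny rather than synthetic Gaussian data.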
In the latter part of the talk, we shift focus to data privacy, revealing how distributional analysis can uncover hidden usage of unauthorized training data in generative AI models. Even when direct memorization is removed via model distillation, the statistical "fingerprint" of the original dataset remains detectable, a finding that suggests membership inference attacks should evolve from single-instance checks to distribution-level scrutiny [4]. Through these case studies, the talk underscores the need for the security research community to rethink AI safety from a statistical perspective, showing how rigorous distributional testing can both fortify models against attacks and expose subtle privacy risks.
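The following sketch illustrates the distribution-level shift argued in [4], rather than the paper's actual attack: instead of scoring one candidate record, compare the distribution of a generator's outputs against a suspected training set and against an independent reference set using the same MMD statistic. All datasets, shapes, and the stand-in "generator" here are hypothetical.

```python
# Hypothetical illustration of distribution-level membership evidence:
# which dataset is the generator's output distribution closer to?
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    # Simple (biased) Gaussian-kernel MMD^2 between two samples.
    def k(A, B):
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d / (2 * bandwidth**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
candidate_train = rng.normal(0.0, 1.0, size=(200, 8))  # suspected training data
reference = rng.normal(0.5, 1.2, size=(200, 8))         # data the model never saw
# Stand-in for samples from a distilled generator trained on candidate_train:
generated = candidate_train + rng.normal(0.0, 0.2, size=candidate_train.shape)

print(f"MMD^2 to candidate set: {mmd2(generated, candidate_train):.4f}")
print(f"MMD^2 to reference set: {mmd2(generated, reference):.4f}")
# A markedly smaller discrepancy to the candidate set is distribution-level
# evidence that the generator (or its teacher) was trained on that data,
# even if no individual record is memorized verbatim.
```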
[1] Learning Deep Kernels for Non-parametric Two Sample Test. ICML 2020.
[2] Maximum Mean Discrepancy is Aware of Adversarial Attacks. ICML 2021.
[3] One Stone, Two Birds: Enhancing Adversarial Defense Through the Lens of Distributional Discrepancy. ICML 2025.
[4] Membership Inference Attack Should Move On to Distributional Statistics for Distilled Generative Models. ICML 2025 Workshop on Reliable and Responsible Foundation Models.