About
I will be joining Tongyi Lab, Alibaba Group, as a research scientist. I obtained my Ph.D. in Artificial Intelligence from the College of Computer Science and Technology, Zhejiang University (ZJU), China, supervised by Prof. Chao Wu. I was a visiting researcher at the University of Cambridge under the supervision of Prof. Nicholas Lane, and I am honored to have worked with Prof. Tao Lin at Westlake University/EPFL. I am the Associate Chair of FedKDD 2025 (the Federated Learning Workshop of KDD 2025).
During my Ph.D., I also earned a minor MBA degree in the Technology-based (BEST) program from the School of Management, ZJU. Before my Ph.D., I received my bachelor’s degree in Engineering from ZJU, along with an honorary undergraduate degree from Chu Kochen Honors College of ZJU and a minor degree in Public Affairs (social governance).
My main research interests lie in the following areas.

Large language models and foundation models:
i. Model editing and memory management for large language models (LLMs).
ii. LLM agents and reasoning.
iii. Parametric understanding of LLMs (localization, merging, scaling, pruning, stitching, unlearning, editing, etc.).
iv. Text-to-model generation by diffusion transformers (DiTs).
v. Vision-language (V-L) representation and understanding of multi-modal foundation models.

Trustworthy deep learning:
i. Privacy-preserving federated learning (FL): efficient & robust algorithm design, and understanding of generalization, personalization, and training dynamics.
ii. Mechanistic interpretability of neural networks: weight decay, loss landscape, permutation invariance, linear mode connectivity, etc.
iii. Socio-technical issues brought by collaborative learning.
iv. Responsible and trustworthy AI.
Contact
Email: zexi.li[at]zju.edu.cn / tomleeze[at]gmail.com
WeChat and Phone: (+86) 18868104540
Selected Publications
- [NeurIPS 2024] WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models
Peng Wang*, Zexi Li*, Ningyu Zhang#, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, and Huajun Chen#
- [Preprint] Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning?
Zexi Li*, Xiangzhu Wang*, William F. Shen, Meghdad Kurmanji, Xinchi Qiu, Dongqi Cai, Chao Wu#, and Nicholas D. Lane#
- [KDD 2025] FedGuCci: Making Local Models More Connected in Landscape for Federated Learning
Zexi Li*, Jie Lin*, Zhiqi Li*, Didi Zhu, Tao Shen, Tao Lin#, Chao Wu#, and Nicholas D. Lane
- [ICML 2023] Revisiting Weighted Aggregation in Federated Learning with Neural Networks
Zexi Li, Tao Lin#, Xinyi Shang, and Chao Wu#
- [Patterns, Cell Press] Can We Share Models If Sharing Data Is Not an Option?
Zexi Li, Feng Mao#, and Chao Wu#
- [ICCV 2023] No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier
Zexi Li, Xinyi Shang, Rui He, Tao Lin#, and Chao Wu#
- [IEEE Transactions on Big Data] Towards Effective Clustered Federated Learning: A Peer-to-peer Framework with Adaptive Neighbor Matching
Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li, Zhimeng Zhang, Yongheng Wang#, and Chao Wu#
Recent News
- [2025.07] I will be joining Tongyi Lab, Alibaba Group, as a research scientist.
- [2025.06] I passed my Ph.D. defense and graduated from the College of Computer Science and Technology, Zhejiang University.
- [2025.06] Our paper “You Are Your Own Best Teacher: Achieving Centralized-level Performance in Federated Learning under Heterogeneous and Long-tailed Data” has been accepted by ICCV 2025!
- [2025.05] I am happy to share our new preprint “Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning?”! We build a bridge between the LLM editing and unlearning communities and find that, to some extent, editing methods are strong baselines for LLM unlearning.
- [2025.05] Our paper “FedGuCci: Making Local Models More Connected in Landscape for Federated Learning” has been accepted by KDD 2025!
- [2025.04] Our paper “Towards Universal Personalization in Federated Learning via Collaborative Foundation Generative Models” has been accepted by IEEE Transactions on Mobile Computing! The paper will be released soon.
- [2025.03] I will be the Associate Chair of FedKDD 2025, International Joint Workshop on Federated Learning for Data Mining and Graph Analytics, Co-located with the 31st ACM SIGKDD Conference (KDD 2025). Submissions are welcome!
- [2024.09] I am happy to share that WISE, a model editor for lifelong model editing of large language models, has been accepted to NeurIPS 2024!
- [2024.05] Two papers were accepted by KDD 2024, and one paper was accepted by ICML 2024.
Academic Service
- Invited Reviewer: TKDE, IJCV, TMM, Machine Learning, AISTATS 2024, CVPR 2024, ICML 2024/2025, NeurIPS 2024/2025, ICLR 2024/2025, KDD 2025, ICCV 2025.
- Associate Chair of FedKDD 2025, International Joint Workshop on Federated Learning for Data Mining and Graph Analytics, Co-located with the 31st ACM SIGKDD Conference (KDD 2025).