About me
I am a CS PhD student at UIUC, advised by Prof. Tong Zhang and Prof. Huan Zhang. Previously, I earned my bachelor’s degree from the Department of Automation at Tsinghua University and my master’s degree from the Department of Computer Science and Engineering (CSE) at HKUST. My research interests lie in deep reinforcement learning (RL) and the application of RL algorithms to Large Language Models (LLMs). I am fortunate to have been working closely with Prof. Chongjie Zhang (Washington University in St. Louis), Dr. Lei Han (Tencent AI Lab), and Prof. Meng Fang (University of Liverpool).
Currently, I am actively researching ways to improve the robustness and generalization of deep reinforcement learning, and to enhance the trustworthiness of LLMs. Feel free to contact me by email if you are interested in discussion or collaboration.
News
- 🎉 (2024.5) Rewards-in-Context (RiC) was accepted to ICML 2024! Thanks to my co-authors!
- 🎉 (2024.5) GOPlan was accepted to Transactions on Machine Learning Research (TMLR)!
- 🎉 (2024.1) Robust IQL was accepted to ICLR 2024 as a spotlight paper!
Selected Publications
RL for LLMs
Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs. Preprint, 2024.
Rui Yang, Ruomeng Ding, Yong Lin, Huan Zhang, Tong Zhang.
- TL;DR: Enhancing the generalization ability of reward models for LLMs via text-generation regularization.
Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment. International Conference on Machine Learning (ICML) 2024.
Rui Yang $^*$, Xiaoman Pan $^*$, Feng Luo $^*$, Shuang Qiu $^*$, Han Zhong, Dong Yu, Jianshu Chen.
- TL;DR: Efficient and scalable multi-objective alignment of foundation models through multi-reward conditional SFT and inference-time adaptation.
Robust Offline RL
Towards Robust Offline Reinforcement Learning under Diverse Data Corruption. International Conference on Learning Representations (ICLR) 2024. $\color{red}{\text{(Spotlight)}}$
Rui Yang $^*$, Han Zhong $^*$, Jiawei Xu $^*$, Amy Zhang, Chongjie Zhang, Lei Han, Tong Zhang.
- TL;DR: A state-of-the-art robust offline RL method against data corruption, built on robust value learning and moderate pessimism.
Corruption-Robust Offline Reinforcement Learning with General Function Approximation. Neural Information Processing Systems (NeurIPS) 2023.
Chenlu Ye $^*$, Rui Yang $^*$, Quanquan Gu, Tong Zhang.
- TL;DR: A provably robust offline RL method against reward and dynamics corruption in offline data, based on uncertainty reweighting.
RORL: Robust Offline Reinforcement Learning via Conservative Smoothing. Neural Information Processing Systems (NeurIPS) 2022. $\color{red}{\text{(Spotlight)}}$
Rui Yang $^*$, Chenjia Bai $^*$, Xiaoteng Ma, Zhaoran Wang, Chongjie Zhang, Lei Han.
- TL;DR: A robust offline RL method against test-time observation perturbations, based on pessimism and local smoothing.
Goal-conditioned RL
What Is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL? International Conference on Machine Learning (ICML) 2023.
Rui Yang, Yong Lin, Xiaoteng Ma, Hao Hu, Chongjie Zhang, Tong Zhang.
- TL;DR: We study the unseen-goal generalization ability of offline GCRL and propose techniques to enhance OOD goal generalization.
Rethinking Goal-conditioned Supervised Learning and Its Connection to Offline RL. International Conference on Learning Representations (ICLR), 2022.
Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, Chongjie Zhang.
- TL;DR: An efficient supervised-learning-based offline GCRL method with three effective weighting techniques.
GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models. Transactions on Machine Learning Research (TMLR) 2024.
Mianchu Wang $^*$, Rui Yang $^*$, Xi Chen, Hao Sun, Meng Fang, Giovanni Montana.
- TL;DR: Pretraining a prior policy via an advantage-weighted CGAN and leveraging reanalysis with model-based planning for policy improvement.
Experiences
Internship at Tencent AI Lab
Internship at Meituan Financial Service Group
Services
Conference Reviewer: ICML (22, 24), ICLR (24, 25), NeurIPS (22, 23 $\color{red}{\text{Top Reviewer}}$, 24), ICRA (23), AAMAS (24).
Journal Reviewer: IEEE Robotics and Automation Letters (RA-L), IEEE Transactions on Artificial Intelligence (TAI), Machine Learning.
Teaching Assistant: COMP 4211 Machine Learning, HKUST; COMP 1021 Introduction to Computer Science, HKUST.
Others
In my leisure time, I enjoy sports such as running, table tennis, and swimming. During my time at Tsinghua University, I was an amateur long-distance runner. In 2019, I completed a half marathon (21.0975 km) in 1 h 30 min and a full marathon (42.195 km) in 3 h 36 min.