Runtian Zhai

翟润天

PhD Student

Fourth-year PhD
Machine Learning
Computer Science Department (CSD)
School of Computer Science (SCS)
Carnegie Mellon University (CMU)

Email: rzhai at cmu dot edu
Office: GHC 5105


Bio [CV]
I am a fourth-year PhD student at CMU CSD, co-advised by Zico Kolter and Pradeep Ravikumar. I am broadly interested in statistical learning theory and in developing more white-box learning algorithms and architectures that come with provable guarantees. Currently, I am investigating the mathematical structure of representation learning, including how it generalizes with large models and finite samples, and how it represents real-world signals with low-dimensional features. Related topics include kernel methods, dimensionality reduction, and semi-supervised learning. I am also working on out-of-distribution (OOD) generalization, that is, how a model can generalize to a test distribution that differs from the training distribution.
I received my Bachelor's degrees in computer science and applied mathematics (double degree) from Peking University, where I was advised by Liwei Wang. I visited UCLA in the summer of 2019, working with Cho-Jui Hsieh. In the summer of 2022, I worked at Amazon Alexa AI in Sunnyvale as an applied scientist intern. From September 2019 to June 2020, I was a full-time research intern in the machine learning group at Microsoft Research Asia (MSRA) in Beijing.
Service
Peer review:
  • ICLR 2023, 2024
  • AISTATS 2023, 2024
  • NeurIPS 2022, 2023
  • ICML 2022, 2023
  • ICCV 2023
  • SDM 2024
  • KDD 2023
  • JMLR
  • NeurIPS workshops: M3L'23, R0-FoMo'23, ML-Safety'22, TSRML'22
  • ICML workshop: PODS'22
Teaching:
  • CMU 10-701: Introduction to Machine Learning Fall 2022 (Head TA)
News
  • Two papers accepted by NeurIPS 2023.
  • One new preprint on arXiv. (Link)
  • Two papers at ICLR 2023 workshops.
  • One paper accepted by ICLR 2023.
Links