Oscar Leong
Assistant Professor
Department of Statistics and Data Science
University of California, Los Angeles
About me
I am an Assistant Professor in the Department of Statistics and Data Science at UCLA. Previously, I was a von Kármán Instructor at Caltech in the Computing + Mathematical Sciences department, hosted by Venkat Chandrasekaran, where I also worked closely with Katie Bouman and the Computational Cameras group. I completed my PhD in Computational and Applied Mathematics at Rice University under the supervision of Paul Hand and was an NSF Graduate Research Fellow. I received my undergraduate degree in Mathematics from Swarthmore College.
My research interests lie in the mathematics of data science, inverse problems, machine learning, and optimization. Much of my work concerns solving signal recovery problems with approaches inspired by deep learning and uses tools from high dimensional probability, convex and star geometry, random matrix theory, and optimization to develop provable recovery guarantees.
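To give a flavor of the kind of signal recovery problem described above, here is a minimal, illustrative sketch (not code from any of the papers below): given linear measurements y = A x of an unknown signal x, one can search over the latent space of a generative model G by minimizing ||A G(z) - y||^2 in z. The toy generator, measurement operator, and all variable names are hypothetical stand-ins for a trained network and a real forward model.

# Illustrative sketch only: recover a signal from linear measurements y = A x*
# by optimizing over the latent code z of a generative model G,
# i.e. minimize ||A G(z) - y||^2. The "generator" here is a small,
# randomly initialized MLP standing in for a trained network.
import torch

torch.manual_seed(0)
latent_dim, signal_dim, num_measurements = 16, 100, 40

# Stand-in generator: a fixed two-layer network (weights frozen).
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, signal_dim),
)
for p in G.parameters():
    p.requires_grad_(False)

# Ground-truth signal in the range of G, and a Gaussian measurement operator.
z_true = torch.randn(latent_dim)
x_true = G(z_true)
A = torch.randn(num_measurements, signal_dim) / num_measurements ** 0.5
y = A @ x_true

# Recover by gradient descent on the latent code.
z = torch.randn(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = torch.sum((A @ G(z) - y) ** 2)
    loss.backward()
    opt.step()

rel_err = torch.norm(G(z) - x_true) / torch.norm(x_true)
print(f"relative reconstruction error: {rel_err.item():.3f}")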
Family: my wife, Wani, is in developmental psychology. We welcomed our first daughter, Gemma, in May of 2023.
Selected papers
Optimal regularization + star geometry
The Star Geometry of Critic-Based Regularizer Learning (with Eliza O'Reilly and Yong Sheng Soh)
NeurIPS 2024. [arXiv]
Optimal Regularization for a Data Source (with Eliza O'Reilly, Yong Sheng Soh, and Venkat Chandrasekaran)
Foundations of Computational Mathematics 2024+. [arXiv]
Deep generative models + inverse problems
Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching (with Yasi Zhang, Peiyu Yu, Yaxuan Zhu, Yingshan Chang, Feng Gao, and Ying Nian Wu)
NeurIPS 2024. [arXiv]
Compressive Phase Retrieval: Optimal Sample Complexity with Deep Generative Priors (with Paul Hand and Vladislav Voroninski)
Communications on Pure and Applied Mathematics 2024. [arXiv] [journal]
Phase Retrieval Under a Generative Prior (with Paul Hand and Vladislav Voroninski)
NeurIPS 2018. Oral Presentation (0.6% of submissions). [arXiv]
Unsupervised learning from corruption
Discovering Structure From Corruption for Unsupervised Image Reconstruction (with Angela Gao, He Sun, and Katie Bouman)
IEEE Transactions on Computational Imaging 2023. [arXiv] [ICASSP] [project page] [5-min video]
News
October 2024: Our paper on optimal regularization has been accepted to Foundations of Computational Mathematics!
September 2024: Two papers were accepted to NeurIPS 2024! The first on the star geometry of critic-based regularizer learning, joint with Eliza and Yong Sheng, and the second on flow matching priors for linear inverse problems, led by my student Yasi.
September 2024: I gave a talk at the Allerton Conference in the Optimization session on optimal regularization using star geometry. Thanks to Richard and Salar for the invitation!
September 2024: I gave a talk in Edward Castillo's DMIC Lab AI Medical Imaging Seminar on generative networks for inverse problems without ground-truth data. Thanks to Jorge for the invitation!
September 2024: New preprint out with Eliza and Yong Sheng on using star geometry to better understand neural network-based adversarial regularizers!
November 2023: Honored to have been elected as a member of the IEEE Computational Imaging Technical Committee (CI TC).
November 2023: I gave a talk in the Math Machine Learning seminar, joint with MPI MiS and UCLA, on generative networks for inverse problems.
September 2023: Our paper on solving inverse problems without explicit priors by exploiting common structure has been accepted to IEEE Transactions on Computational Imaging!
February 2023: Our paper on establishing optimal sample complexity in phase retrieval with deep generative priors has been accepted to Communications on Pure and Applied Mathematics!
February 2023: Two papers accepted to ICASSP 2023, one on Rohun's SURF project and the other with Angela, He, and Katie on image reconstruction without explicit priors!
December 2022: New preprint out with Eliza, Yong Sheng, and Venkat on characterizing optimal regularizers for a data distribution!
December 2022: Honored to have been awarded an MGB-SIAM Early Career Fellowship.
November 2022: Rohun's SURF project on denoising priors for phase retrieval has been uploaded to arXiv as a preprint!
October 2022: Honored to have been selected as a Rising Star in Data Science by UChicago.
July 2022: I gave a talk at ICCOPT on new work with Yong Sheng and Venkat on our variational analysis of learning convex regularizers. A preprint will be posted soon!
May 2022: I presented during the CMX student/postdoc seminar on some new, exciting work with Angela, He, and Katie on learning image models directly from noisy data! Please see our webpage for more info.
February 2022: Mateo Díaz, Yong Sheng Soh, and I are organizing a pair of sessions on "Convex and nonconvex methods for matrix factorization problems" at ICCOPT 2022!
December 2021: I will be serving as a local arrangements chair for ICCP 2022!
October 2021: I will be presenting in person (first time in a while!) during Caltech's CMI seminar on using deep generative models in inverse problems.
September 2021: I'll be virtually presenting our generative prior work at the "Generative Regularization Approaches for Inverse Problems" Minisymposium at the IFIP TC7 Conference.
May 2021: Our paper on analyzing subgradient descent for phase retrieval using non-Lipschitz matrix concentration was accepted to Communications in Mathematical Sciences.
April 2021: I successfully defended my dissertation! Excited to announce I'll be joining Caltech in the Department of Computing + Mathematical Sciences as a von Kármán Instructor in the Fall.
Talks
Here is a video of my oral presentation at NeurIPS 2018 on our Deep Phase Retrieval paper, along with a link to a 3-minute summary of our work: