Andre Wibisono
Yale University
andre.wibisono [at] yale [dot] edu
• CPSC 486/586: Probabilistic Machine Learning (Spring 2024, Spring 2023)
• CPSC/ECON 365: Algorithms (Fall 2023, Spring 2022)
• CPSC 481/581: Introduction to Machine Learning (Fall 2021)
• CPSC 661: Sampling Algorithms in Machine Learning (Spring 2021)
• Jun-Kun Wang (postdoc 2021-2023, now at UCSD)
• Jiaming Liang (postdoc 2022-2023, now at University of Rochester)
• A symplectic analysis of alternating mirror descent. Jonas Katona, Xiuyuan Wang, Andre Wibisono. arXiv preprint arXiv:2405.03472, 2024.
• On independent samples along the Langevin diffusion and the Unadjusted Langevin Algorithm. Jiaming Liang, Siddharth Mitra, Andre Wibisono. arXiv preprint arXiv:2402.17067, 2024.
• Optimal score estimation via empirical Bayes smoothing. Andre Wibisono, Yihong Wu, Kaylee Yingxi Yang. COLT (Conference on Learning Theory) 2024.
• Fast sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin Algorithm. Vishwak Srinivasan, Andre Wibisono, Ashia Wilson. COLT (Conference on Learning Theory) 2024.
• Extragradient Type Methods for Riemannian Variational Inequality Problems. Zihao Hu, Guanghui Wang, Xi Wang, Andre Wibisono, Jacob Abernethy, Molei Tao. AISTATS (Artificial Intelligence and Statistics) 2024.
• Learning Exponential Families from Truncated Samples. Jane Lee, Andre Wibisono, Manolis Zampetakis. NeurIPS (Neural Information Processing Systems) 2023.
• On a Class of Gibbs Sampling over Networks. Bo Yuan, Jiaojiao Fan, Jiaming Liang, Andre Wibisono, Yongxin Chen. COLT (Conference on Learning Theory) 2023.
• Towards Understanding GD with Hard and Conjugate Pseudo-labels for Test-Time Adaptation. Jun-Kun Wang, Andre Wibisono. ICLR (International Conference on Learning Representations) 2023.
• Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time. Jun-Kun Wang, Andre Wibisono. ICLR (International Conference on Learning Representations) 2023.
• Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization. Jun-Kun Wang, Andre Wibisono. ICLR (International Conference on Learning Representations) 2023.
• Convergence in KL Divergence of the Inexact Langevin Algorithm with Application to Score-based Generative Models. Kaylee Yingxi Yang, Andre Wibisono. arXiv preprint arXiv:2211.01512, 2022.
• Alternating Mirror Descent for Constrained Min-Max Games. Andre Wibisono, Molei Tao, Georgios Piliouras. NeurIPS (Neural Information Processing Systems) 2022.
• Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Lojasiewicz Functions when the Non-Convexity is Averaged-Out. Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, Bin Hu. ICML (International Conference on Machine Learning) 2022.
• Improved analysis for a proximal algorithm for sampling. Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono. COLT (Conference on Learning Theory) 2022.
• The Mirror Langevin Algorithm Converges with Vanishing Bias. Ruilin Li, Molei Tao, Santosh S. Vempala, Andre Wibisono. ALT (Algorithmic Learning Theory) 2022.
• Last-iterate convergence rates for min-max optimization. Jacob Abernethy, Kevin Lai, Andre Wibisono. ALT (Algorithmic Learning Theory) 2021.
• Fast Convergence of Fictitious Play for Diagonal Payoff Matrices. Jacob Abernethy, Kevin Lai, Andre Wibisono. SODA (Symposium on Discrete Algorithms) 2021.
• Proximal Langevin Algorithm: Rapid convergence under isoperimetry. Andre Wibisono. arXiv preprint arXiv:1911.01469, 2019.
• Rapid convergence of the Unadjusted Langevin Algorithm: Isoperimetry suffices. Santosh Vempala, Andre Wibisono. NeurIPS (Neural Information Processing Systems) 2019.
• Accelerating Rescaled Gradient Descent: Fast optimization of smooth functions. Ashia Wilson, Lester Mackey, Andre Wibisono. NeurIPS (Neural Information Processing Systems) 2019.
• Convexity of mutual information along the Ornstein-Uhlenbeck flow. Andre Wibisono, Varun Jog. ISITA (International Symposium on Information Theory and Its Applications) 2018.
• Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. Andre Wibisono. COLT (Conference on Learning Theory) 2018.
• Convexity of mutual information along the heat flow. Andre Wibisono, Varun Jog. ISIT (International Symposium on Information Theory) 2018.
• Information and estimation in Fokker-Planck channels. Andre Wibisono, Varun Jog, Po-Ling Loh. ISIT (International Symposium on Information Theory) 2017.
• A variational perspective on accelerated methods in optimization. Andre Wibisono, Ashia Wilson, Michael Jordan. Proceedings of the National Academy of Sciences, 113: E7351-E7358, 2016.
• Optimal rates for zero-order convex optimization: the power of two function evaluations. John Duchi, Michael Jordan, Martin Wainwright, Andre Wibisono. IEEE Transactions on Information Theory, 61(5): 2788-2806, May 2015.
• A Hadamard-type lower bound for symmetric diagonally dominant positive matrices. Christopher Hillar, Andre Wibisono. Linear Algebra and its Applications, 472: 135-141, 2015.
• Convexity of reweighted Kikuchi approximation. Po-Ling Loh, Andre Wibisono. NIPS (Neural Information Processing Systems) 2014.
• How to hedge an option against an adversary: Black-Scholes pricing is minimax optimal. Jacob Abernethy, Peter Bartlett, Rafael Frongillo, Andre Wibisono. NIPS (Neural Information Processing Systems) 2013.
• Streaming variational Bayes. Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia Wilson, Michael Jordan. NIPS (Neural Information Processing Systems) 2013.
• Maximum entropy distributions on graphs. Christopher Hillar, Andre Wibisono. arXiv preprint arXiv:1301.3321, 2013.
• Inverses of symmetric, diagonally dominant positive matrices and applications. Christopher Hillar, Shaowei Lin, Andre Wibisono. arXiv preprint arXiv:1203.6812, 2013.
• Finite sample convergence rates of zero-order stochastic optimization methods. John Duchi, Michael Jordan, Martin Wainwright, Andre Wibisono. NIPS (Neural Information Processing Systems) 2012.
• Minimax option pricing meets Black-Scholes in the limit. Jacob Abernethy, Rafael Frongillo, Andre Wibisono. STOC (Symposium on Theory of Computing) 2012.
• Variational and Dynamical Perspectives on Learning and Optimization. PhD in Computer Science, University of California, Berkeley, May 2016.
• Maximum Entropy Distributions on Graphs. MA in Statistics, University of California, Berkeley, May 2013.
• Generalization and Properties of the Neural Response. MEng in Computer Science, Massachusetts Institute of Technology, June 2010.