Finite-Time Last-Iterate Convergence for Learning in Multi-Player Games

Abstract

We study the question of last-iterate convergence rate of the extragradient algorithm by Korpelevich [1976] and the optimistic gradient algorithm by Popov [1980] in multi-player games. We show that both algorithms with constant step-size have a last-iterate convergence rate of $O(1/\sqrt{T})$ to a Nash equilibrium in terms of the gap function in smooth monotone games, where each player's action set is an arbitrary convex set. Previous results only study the unconstrained setting, where each player's action set is the entire Euclidean space. Our results address an open question raised in several recent works, which ask for the last-iterate convergence rate of either the extragradient or the optimistic gradient algorithm in the constrained setting. Our convergence rates for both algorithms are tight and match the lower bounds by Golowich et al. At the core of our results lies a new notion – the tangent residual, which we use to measure the proximity to a Nash equilibrium. We use the tangent residual (or a modification of the tangent residual) as the potential function in our analysis of the extragradient algorithm (or the optimistic gradient algorithm).
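To make the setting concrete, below is a minimal, illustrative Python sketch (not the paper's code) of the constrained extragradient update with a constant step size: each iteration takes an extrapolation step and an update step, both projected back onto the players' convex action sets. The operator `F`, the step size `eta`, the simplex projection, and the matching-pennies example are hypothetical choices made for illustration only.

```python
import numpy as np

def project_simplex(x):
    """Euclidean projection onto the probability simplex (one example of a convex action set)."""
    u = np.sort(x)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(x)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(x + theta, 0.0)

def extragradient(F, z0, project, eta=0.1, T=1000):
    """Run T extragradient iterations with constant step size eta and return the last iterate."""
    z = z0
    for _ in range(T):
        w = project(z - eta * F(z))   # extrapolation step at the current iterate
        z = project(z - eta * F(w))   # update step using the operator evaluated at w
    return z

# Hypothetical example: a two-player zero-sum matrix game (a special monotone game),
# where F stacks both players' gradients and each player's action set is the simplex.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching-pennies payoff matrix

def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([A @ y, -A.T @ x])

def project(z):
    return np.concatenate([project_simplex(z[:2]), project_simplex(z[2:])])

z_T = extragradient(F, z0=np.array([1.0, 0.0, 1.0, 0.0]), project=project)
```

Under these assumptions, the last iterate `z_T` approaches the uniform-mixing Nash equilibrium of the matrix game; the paper's result bounds the gap function of such last iterates at rate $O(1/\sqrt{T})$ for general smooth monotone games with convex constraints.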

Publication
The 36th Conference on Neural Information Processing Systems (NeurIPS Oral)
Yang Cai
Associate Professor
