Computer Science Theory Seminar

Saeid Hajizadeh
UIC
TBA
Abstract: Minimax optimization has recently received renewed attention due to modern applications in machine learning, robust optimization, and reinforcement learning. The scale of these applications naturally leads to the use of first-order methods. However, the nonconvexities and nonconcavities present in these problems prevent the application of typical Gradient Descent-Ascent, which is known to diverge even on bilinear problems. In this talk, we review some current methods for dealing with these nonconvexities in the presence of sufficient interaction between the two competing agents. Recently, it was shown that the Proximal Point Method (PPM) converges linearly for such a family of nonconvex-nonconcave problems. We then study the convergence of a damped version of the Extra-Gradient Method (EGM), which avoids potentially costly proximal computations and relies only on gradient evaluations. We show that EGM converges linearly for smooth minimax optimization problems satisfying the same nonconvex-nonconcave condition needed by PPM.
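
As a toy illustration of the divergence and convergence phenomena mentioned in the abstract (this sketch is not from the talk; the bilinear objective f(x, y) = x*y and the step size eta are assumptions chosen purely for demonstration), the Python snippet below shows plain Gradient Descent-Ascent spiraling away from the saddle point of a bilinear problem, while the extra-gradient update, using only gradient evaluations, contracts toward it.

import math

# Toy bilinear problem f(x, y) = x * y with saddle point at (0, 0).
# grad_x f = y (x descends), grad_y f = x (y ascends).
def grad(x, y):
    return y, x

eta = 0.1  # step size, chosen only for illustration

# Gradient Descent-Ascent: each step multiplies the distance to the
# saddle by sqrt(1 + eta^2) > 1, so the iterates spiral outward.
x, y = 1.0, 1.0
for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - eta * gx, y + eta * gy
print("GDA distance to saddle:", math.hypot(x, y))  # grows

# Extra-Gradient Method: extrapolate one step, then update the original
# point with the gradient taken at the extrapolated point; the distance
# to the saddle now contracts by sqrt(1 - eta^2 + eta^4) < 1 per step.
x, y = 1.0, 1.0
for _ in range(100):
    gx, gy = grad(x, y)
    xe, ye = x - eta * gx, y + eta * gy  # extrapolation step
    gx, gy = grad(xe, ye)
    x, y = x - eta * gx, y + eta * gy    # update from the original point
print("EGM distance to saddle:", math.hypot(x, y))  # shrinks
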
Wednesday February 2, 2022 at 4:00 PM in 636 SEO