Result: Stochastic Optimization under Hidden Convexity.
In this work, we consider constrained stochastic optimization problems with hidden convexity, i.e., problems that admit a convex reformulation via a nonlinear (but invertible) map \(c(\cdot)\). A number of nonconvex problems, ranging from optimal control, revenue, and inventory management to convex reinforcement learning, admit such a hidden convex structure. Unfortunately, in the majority of the applications considered, the map \(c(\cdot)\) is unavailable or implicit, so directly solving the convex reformulation is not possible. On the other hand, stochastic gradients with respect to the original variable are often easy to obtain. Motivated by these observations, we examine basic projected stochastic (sub)gradient methods for solving such problems under hidden convexity. We provide the first sample complexity guarantees for global convergence in both the smooth and nonsmooth settings. Additionally, in the smooth setting, we strengthen our results to last-iterate convergence in terms of the function value gap using a momentum variant of projected stochastic gradient descent.
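The abstract names the method but not its details; as a rough illustration only, the following minimal Python sketch runs projected stochastic gradient descent with momentum on a toy hidden-convex problem. Here \(f(x) = g(c(x))\) with \(g(u) = (u - 0.5)^2\) convex and \(c(x) = x^3\) invertible, so \(f\) is nonconvex in \(x\) but convex in \(u = c(x)\). The toy objective, the box constraint set, the momentum form, and all parameter names are illustrative assumptions, not taken from the paper.

import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^d (a simple stand-in
    # for the constraint set; the paper's actual set is not given here).
    return np.clip(x, lo, hi)

def projected_sgd_momentum(grad_oracle, x0, steps, lr, beta, lo, hi):
    # Projected stochastic gradient descent with a heavy-ball-style
    # momentum buffer m: gradient step on x, then projection.
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = grad_oracle(x)
        m = beta * m + (1.0 - beta) * g
        x = project_box(x - lr * m, lo, hi)
    return x

# Toy hidden-convex objective: f(x) = (x**3 - 0.5)**2, nonconvex in x.
rng = np.random.default_rng(0)

def grad_oracle(x):
    # Unbiased stochastic gradient of f: f'(x) = 6 x^2 (x^3 - 0.5), plus noise.
    return 6.0 * x**2 * (x**3 - 0.5) + 0.1 * rng.standard_normal(x.shape)

x_hat = projected_sgd_momentum(grad_oracle, x0=[0.9], steps=5000,
                               lr=0.005, beta=0.9, lo=-1.0, hi=1.0)
print(x_hat)  # expect roughly 0.5 ** (1/3) ≈ 0.794, the global minimizer

In this toy run the final iterate itself approaches the global minimizer, loosely mirroring the last-iterate guarantee the abstract describes for the momentum variant; the sketch makes no claim about matching the paper's rates or assumptions.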