Accelerated Objective Gap and Gradient Norm Convergence for Gradient Descent via Long Steps.
This work considers gradient descent for L-smooth convex optimization with stepsizes larger than the classic regime where descent can be ensured. The stepsize schedules considered are similar to, but differ slightly from, the recent silver stepsizes of Altschuler and Parrilo. For one of our stepsize sequences, we prove an O(1/N^{1.2716...}) convergence rate in terms of objective gap decrease, and for the other, we show the same rate of decrease for the squared gradient norm. The first result improves on the recent result of Altschuler and Parrilo by a constant factor, while the second result improves on the exponent of the prior best squared-gradient-norm convergence guarantee of O(1/N). Funding: B. Grimmer's work was supported in part by the Air Force Office of Scientific Research [Award FA9550-23-1-0531]. [ABSTRACT FROM AUTHOR]
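As an illustration of the kind of long-step gradient descent the abstract describes, the following is a minimal NumPy sketch. It assumes the Altschuler–Parrilo-style closed form h_t = 1 + rho^(nu(t)-1), with rho = 1 + sqrt(2) the silver ratio and nu(t) the 2-adic valuation of t; the two schedules actually analyzed in the paper differ slightly from this pattern, and all function and variable names here are hypothetical, not taken from the paper.

```python
import numpy as np

def silver_stepsizes(n):
    """Silver-style stepsize schedule (in units of 1/L).

    Assumed closed form h_t = 1 + rho**(nu(t) - 1), rho = 1 + sqrt(2),
    with nu(t) the 2-adic valuation of t. This mimics the
    Altschuler-Parrilo silver pattern for illustration only; the
    schedules analyzed in the paper differ slightly from it.
    """
    rho = 1.0 + np.sqrt(2.0)
    steps = []
    for t in range(1, n + 1):
        nu = (t & -t).bit_length() - 1  # 2-adic valuation of t
        steps.append(1.0 + rho ** (nu - 1))
    return np.array(steps)

def gradient_descent_long_steps(grad, x0, L, n):
    """Plain gradient descent x_{k+1} = x_k - (h_k / L) * grad(x_k)
    for an L-smooth convex objective, using the long-step schedule."""
    x = np.asarray(x0, dtype=float)
    for h in silver_stepsizes(n):
        x = x - (h / L) * grad(x)
    return x

# Toy usage: least squares f(x) = 0.5 * ||A x - b||^2 with L = ||A^T A||_2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
L = np.linalg.norm(A.T @ A, 2)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
x_N = gradient_descent_long_steps(lambda x: A.T @ (A @ x - b),
                                  np.zeros(20), L, n=127)
print(0.5 * np.linalg.norm(A @ x_N - b) ** 2
      - 0.5 * np.linalg.norm(A @ x_star - b) ** 2)  # objective gap f(x_N) - f*
```

Note that several of these stepsizes exceed the classic bound of 2/L under which monotone descent is guaranteed, which is exactly the regime the abstract refers to.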