Proximal Algorithms for Smoothed Online Convex Optimization with Predictions

Abstract

We consider a smoothed online convex optimization (SOCO) problem with predictions, where the learner has access to a finite lookahead window of time-varying stage costs but incurs a switching cost for changing its action at each stage. Building on the Alternating Proximal Gradient Descent (APGD) framework, we develop Receding Horizon Alternating Proximal Descent (RHAPD) for proximable, non-smooth, strongly convex stage costs, and RHAPD-Smooth (RHAPD-S) for non-proximable, smooth, strongly convex stage costs. In addition to outperforming gradient descent-based algorithms while maintaining comparable runtime complexity, our proposed algorithms also apply to a wider range of problems. We provide theoretical upper bounds on the dynamic regret achieved by the proposed algorithms, which decay exponentially with the length of the lookahead window. The performance of the presented algorithms is demonstrated empirically via numerical experiments on non-smooth regression, dynamic trajectory tracking, and economic power dispatch problems.
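To make the setting concrete, the following is a minimal one-dimensional toy sketch of the receding-horizon idea described above, not the paper's RHAPD algorithm itself: each stage cost is taken to be the non-smooth (but proximable) loss f_t(x) = |x - theta_t|, the switching cost is quadratic with weight `lam`, and a few exact coordinate-wise proximal sweeps are run over the lookahead window before committing the first action. All names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def soft(z, tau):
    # Soft-thresholding: the proximal operator of tau * |.|
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def rh_alternating_prox(thetas, W=3, lam=1.0, passes=5, x0=0.0):
    """Toy receding-horizon proximal sweep (illustrative, not RHAPD).

    At each stage t, the learner sees the targets theta_t..theta_{t+W-1},
    runs `passes` Gauss-Seidel sweeps of exact per-coordinate prox updates
    over the window, commits the first action, and slides the window.
    """
    thetas = np.asarray(thetas, dtype=float)
    T = len(thetas)
    x_prev = x0
    actions = []
    for t in range(T):
        window = thetas[t:t + W]
        n = len(window)
        x = np.full(n, x_prev)
        for _ in range(passes):
            for k in range(n):
                left = x_prev if k == 0 else x[k - 1]
                if k < n - 1:
                    # Interior point: switching costs to both neighbors combine
                    # into one quadratic centered at their average, weight 2*lam.
                    c, w = 0.5 * (left + x[k + 1]), 2.0 * lam
                else:
                    c, w = left, lam
                # Exact minimizer of |x - theta_k| + (w/2)(x - c)^2
                x[k] = window[k] + soft(c - window[k], 1.0 / w)
        actions.append(x[0])
        x_prev = x[0]
    return np.array(actions)
```

With a small switching weight `lam`, the soft-threshold zeroes out the pull toward the previous action and the learner tracks the targets exactly; with a large `lam`, the actions move only slowly away from the initial point, illustrating the tension between stage costs and switching costs that SOCO formalizes.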
