Distributed proximal-gradient method for convex optimization with inequality constraints
DOI:
https://doi.org/10.21914/anziamj.v56i0.7489
Keywords:
distributed algorithm, proximal-gradient method, exact penalty function method, convex optimization
Abstract
We consider a distributed optimization problem over a multi-agent network, in which the sum of several local convex objective functions is minimized subject to global convex inequality constraints. We first transform the constrained optimization problem into an unconstrained one using the exact penalty function method. The transformed problem has fewer variables and a simpler structure than the problems handled by existing distributed primal–dual subgradient methods for constrained distributed optimization. Exploiting the special structure of this problem, we then propose a distributed proximal-gradient algorithm over a time-changing connectivity network, and establish a convergence rate that depends on the number of iterations, the network topology and the number of agents. Although the transformed problem is nonsmooth by nature, our method still achieves a convergence rate of \(\mathcal{O}(1/k)\) after \(k\) iterations, which is faster than the \(\mathcal{O}(1/\sqrt{k})\) rate of existing distributed subgradient-based methods. Simulation experiments on a distributed state estimation problem illustrate the excellent performance of the proposed method.
doi:10.1017/S1446181114000273
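For illustration only, the following is a minimal NumPy sketch of the kind of distributed proximal-gradient iteration the abstract describes: a consensus step over the network, a local gradient step on each agent's smooth objective, and a proximal step on the nonsmooth exact-penalty term. It is not the authors' implementation; it assumes a fixed doubly stochastic mixing matrix (the paper allows a time-changing network), local least-squares objectives, and a simple nonnegativity constraint whose exact penalty has a closed-form proximal operator.

# Minimal sketch of a distributed proximal-gradient iteration (not the authors' code).
# Assumed setting: each agent i holds f_i(x) = 0.5 * ||A_i x - b_i||^2, the global
# constraint is x >= 0, and the exact penalty is h(x) = rho * sum_j max(0, -x_j).
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim = 5, 3
A = [rng.standard_normal((4, dim)) for _ in range(n_agents)]    # local data matrices
x_true = np.abs(rng.standard_normal(dim))                       # feasible target point
b = [Ai @ x_true + 0.01 * rng.standard_normal(4) for Ai in A]   # local measurements

rho, step = 10.0, 0.02                                          # penalty weight, step size

# Doubly stochastic mixing matrix for a ring network with self-loops
# (a fixed-topology stand-in for the paper's time-changing network).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

def grad_f(i, x):
    """Gradient of the local smooth term f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

def prox_penalty(v, t):
    """Prox of t * rho * sum_j max(0, -x_j), computed coordinate-wise."""
    out = v.copy()
    shift = v < -t * rho          # penalty gradient active: shift up by t*rho
    out[(v < 0) & ~shift] = 0.0   # small negatives are set to zero
    out[shift] = v[shift] + t * rho
    return out

X = np.zeros((n_agents, dim))      # row i is agent i's current estimate
for k in range(2000):
    X_mix = W @ X                  # consensus (averaging) step over the network
    for i in range(n_agents):
        y = X_mix[i] - step * grad_f(i, X_mix[i])   # local gradient step
        X[i] = prox_penalty(y, step)                # prox step on the penalty term

print("consensus estimate:", X.mean(axis=0))
print("target            :", x_true)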
Published
2015-01-15
Issue
Section
Articles for Printed Issues