
Convergence of Distributed Gradient-Tracking-Based Optimization Algorithms with Random Graphs

WANG Jiexiang · FU Keli · GU Yu · LI Tao   

  WANG Jiexiang
    School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China.
    Email: wjx16820138@outlook.com
  FU Keli · GU Yu · LI Tao (Corresponding author)
    Key Laboratory of Pure Mathematics and Mathematical Practice, School of Mathematical Sciences, East China Normal University, Shanghai 200241, China.
    Email: zzufukeli@163.com; jessicagyrrr@126.com; tli@math.ecnu.edu.cn
  Online: 2021-08-25; Published: 2021-08-10

WANG Jiexiang · FU Keli · GU Yu · LI Tao. Convergence of Distributed Gradient-Tracking-Based Optimization Algorithms with Random Graphs[J]. Journal of Systems Science and Complexity, 2021, 34(4): 1438-1453.

Abstract: This paper studies distributed convex optimization over a multi-agent system, where each agent has access only to a local cost function that is convex and has Lipschitz continuous gradients. The goal of the agents is to cooperatively minimize the sum of the local cost functions. The underlying communication networks are modelled by a sequence of random and balanced digraphs, which are required to be neither spatially nor temporally independent, nor to follow any particular distribution. The authors use a distributed gradient-tracking-based optimization algorithm to solve the optimization problem. In the algorithm, each agent maintains an estimate of the optimal solution and an estimate of the average of all the local gradients; both estimates are updated by a combination of a consensus step and a gradient-tracking step. The authors prove that the algorithm converges to the optimal solution at a geometric rate, provided that the conditional graphs are uniformly strongly connected, the global cost function is strongly convex, and the step-sizes do not exceed certain upper bounds.
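
The update described in the abstract, in which each agent mixes neighbours' values (consensus) while tracking the average gradient, can be sketched in a simplified setting. The snippet below is a minimal illustration of a generic gradient-tracking iteration (in the DIGing style), not the paper's exact algorithm: it uses a fixed, balanced 3-node graph and quadratic local costs, whereas the paper treats random time-varying digraphs. The weight matrix, step size, and cost functions are all hypothetical choices for illustration.

```python
import numpy as np

# Local quadratic costs f_i(x) = 0.5 * (x - b_i)^2; the global minimizer
# of sum_i f_i is the mean of b, here 3.0.
b = np.array([1.0, 2.0, 6.0])

def grad(x):
    # Stack of local gradients: grad f_i(x_i) = x_i - b_i.
    return x - b

# Doubly stochastic (balanced) mixing matrix for a 3-node graph.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

alpha = 0.1            # step size, assumed below the stability threshold
x = np.zeros(3)        # each agent's estimate of the optimal solution
y = grad(x)            # each agent's estimate of the average gradient

for _ in range(300):
    x_new = W @ x - alpha * y           # consensus step plus gradient step
    y = W @ y + grad(x_new) - grad(x)   # track the average of local gradients
    x = x_new

print(np.round(x, 4))  # all three agents agree on a value near 3.0
```

Because W is doubly stochastic, the average of the y-variables equals the average of the current local gradients at every iteration, which is what lets each agent descend along an estimate of the global gradient rather than only its own.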