Paper: Distributed learning with non-smooth objective functions

August 13th, 2020

We develop a new distributed algorithm to solve a learning problem with non-smooth objective functions when data are distributed over a multi-agent network.

We employ a zeroth-order method to minimize the associated augmented Lagrangian in the primal domain, using the alternating direction method of multipliers (ADMM) to develop the proposed algorithm, named distributed zeroth-order based ADMM (D-ZOA).

Unlike most existing algorithms for non-smooth optimization, which rely on calculating subgradients or proximal operators, D-ZOA only requires function values to approximate gradients of the objective function. Convergence of D-ZOA to the centralized solution is confirmed via theoretical analysis and simulation results.
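To illustrate the core idea, here is a minimal sketch of a two-point zeroth-order gradient estimator of the kind such methods rely on: the gradient is approximated purely from function evaluations along random directions. This is an illustrative sketch, not the paper's exact estimator; the function name `zo_gradient` and the parameters `mu` (smoothing radius) and `num_dirs` (number of random directions) are our own choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages directional finite differences along random Gaussian
    directions; only evaluations of f are required, no subgradients
    or proximal operators. (Illustrative sketch.)
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x, dtype=float)
    fx = f(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        # Directional finite difference times the direction vector.
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_dirs

# Example on a non-smooth objective: f(x) = ||x||_1.
# Away from the kinks, a subgradient is sign(x).
f = lambda x: np.abs(x).sum()
x = np.array([1.0, -2.0, 3.0])
g = zo_gradient(f, x, num_dirs=500, rng=0)
```

With enough random directions, the estimate concentrates around a (sub)gradient of the objective, which is what allows a method like D-ZOA to proceed without ever computing subgradients explicitly.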

Cristiano Gratton, Naveen K. D. Venkategowda, Reza Arablouei, Stefan Werner. Distributed Learning with Non-Smooth Objective Functions. European Signal Processing Conference. Jan 2021.

Download the full paper here.

For more information, contact us.

