Alternating Direction Method Of Multipliers (ADMM)

This post is about ADMM and the ideas that build up to it.

Sections: Lagrangian relaxation, Dual ascent, Dual decomposition, Augmented Lagrangians, Method of multipliers.

Lagrangian relaxation

Assume that we have the following optimization problem: \(\min f(x)\), where \(g(x) = b\) is our constraint.

How can we solve this problem easily? What if we could fold the constraint into the objective itself? That would be convenient, and Lagrangian relaxation is exactly the tool that lets us do it.

In mathematical optimization, Lagrangian relaxation is used to simplify hard constrained convex optimization problems. It adds a cost term that penalizes constraint violation, so that optimizing the relaxed objective approximates the solution of the original problem.

The Lagrangian relaxation is formed as follows:

$$ \min_x \; f(x) + \lambda \, (g(x) - b) $$

Here \(\lambda\) is called the Lagrange multiplier.

I am not going to go through the proof here.
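To see the mechanics on a toy problem (a made-up example, not from the original post), take \(f(x) = x^2\) with the constraint \(x = 1\), so \(g(x) = x\) and \(b = 1\):

$$ L(x, \lambda) = x^2 + \lambda (x - 1), \qquad \frac{\partial L}{\partial x} = 2x + \lambda = 0 \;\Rightarrow\; x = -\frac{\lambda}{2}. $$

Substituting back gives the dual function \(d(\lambda) = -\frac{\lambda^2}{4} - \lambda\); maximizing it yields \(\lambda = -2\) and hence \(x = 1\), which is exactly the constrained optimum.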

Dual ascent

Assume that we have the following equality-constrained convex optimization problem:

$$ \underset{x \in \mathbb{R}^n}{\min} \; f(x) \quad \text{s.t. } Ax = b. $$

Then the Lagrangian of this problem can be written like this:

$$ L(x,y) = f(x) + y^T (Ax - b). $$

Dual ascent then alternates two steps: minimize the Lagrangian over \(x\), and take a gradient ascent step on the dual variable \(y\) using the constraint residual:

$$ x^{k+1} = \underset{x}{\arg\min}\; L(x, y^k), \qquad y^{k+1} = y^k + \alpha^k (Ax^{k+1} - b), $$

where \(\alpha^k > 0\) is a step size.
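As a minimal sketch of these two updates (a toy quadratic problem of my own, not from the post), assume \(f(x) = \frac{1}{2} x^T P x + q^T x\) with \(P\) positive definite, so the \(x\)-minimization step has a closed form:

```python
import numpy as np

# Dual ascent sketch for: min 1/2 x^T P x + q^T x  s.t.  Ax = b.
# P, q, A, b are made-up toy data, not from the original post.
np.random.seed(0)
n, m = 5, 2
P = 2.0 * np.eye(n)            # positive definite quadratic term
q = np.random.randn(n)
A = np.random.randn(m, n)
b = np.random.randn(m)

y = np.zeros(m)                # dual variable y^k
alpha = 0.1                    # step size, assumed small enough to converge

for k in range(500):
    # x-update: grad_x L = P x + q + A^T y = 0  =>  x = -P^{-1}(q + A^T y)
    x = np.linalg.solve(P, -(q + A.T @ y))
    # y-update: gradient ascent on the dual using the residual Ax - b
    y = y + alpha * (A @ x - b)

print("constraint residual:", np.linalg.norm(A @ x - b))
```

Note that the \(y\)-update only needs the residual \(Ax - b\); when \(f\) is separable, this structure is what makes dual decomposition possible.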

Dual decomposition

Augmented lagrangians

Method of multipliers

My name is Majidov Ikhtiyor. I am an AI/math enthusiast.
