Dual Linear Programs
This is just a quick summary of some proofs and derivations that I needed to do for some lab work review, really really simple stuff. This is more of a place for me to make sure that I have absolutely no holes in my understanding and less of a resource for others.
Canonical Form Linear Program:
We can describe a general form of a linear optimization problem as follows, in what is called the "Canonical Form". The decision variable $x \in \mathbb{R}^n$ forms an objective with a cost vector $c \in \mathbb{R}^n$ and is constrained by a constraint matrix $A \in \mathbb{R}^{m \times n}$ and vector $b \in \mathbb{R}^m$, where $\mathbb{R}$ stands for the real numbers.

$$\min_{x} \; c^\top x \quad \text{subject to} \quad Ax \ge b, \; x \ge 0$$

The constraint $x \ge 0$ is standard for removing impossible solutions in physical problems and also for the derivations later.
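As a concrete instance (a toy example of my own, not taken from any lab data), a two-variable canonical-form program looks like:

```latex
% Toy canonical-form LP: minimize a linear cost over a half-plane
% intersected with the nonnegative orthant.
\min_{x \in \mathbb{R}^2} \;
\begin{bmatrix} 1 \\ 2 \end{bmatrix}^{\top} x
\quad \text{subject to} \quad
\begin{bmatrix} 1 & 1 \end{bmatrix} x \ge 2,
\qquad x \ge 0
```

Here $c = (1, 2)$, $A = \begin{bmatrix} 1 & 1 \end{bmatrix}$, and $b = 2$; the optimum is $x^* = (2, 0)$ with value $2$, since the cheaper coordinate absorbs the whole constraint.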
Lagrangian Form:
Let’s say we wanted to convey this optimization without explicit constraints and instead handle them within the function itself. We can do this with the Lagrangian as follows, where $\lambda \in \mathbb{R}^m$ with $\lambda \ge 0$.

$$L(x, \lambda) = c^\top x + \lambda^\top (b - Ax), \qquad \min_{x \ge 0} \; \max_{\lambda \ge 0} \; L(x, \lambda)$$

In general terms, $L(x, \lambda) = f(x) + \sum_i \lambda_i g_i(x)$, where $f(x)$ is the objective of the problem and each $g_i(x)$ describes a functional constraint (in this case $g(x) = b - Ax$, ignoring $x \ge 0$) such that $x$ satisfies the respective constraint when $g_i(x) \le 0$.
Since we have $g(x) = b - Ax$, for the maximum over $\lambda \ge 0$ to not collapse to infinity we need $b - Ax \le 0$, which naturally gives rise to the constraint $Ax \ge b$. Since the maximum of $\lambda^\top (b - Ax)$ would be $0$ when multiplied with a non-positive term, whenever the constraint is satisfied this leaves the original optimization problem $\min_{x \ge 0} c^\top x$. This idea also naturally gives rise to the necessity of having $\lambda \ge 0$, since without it we would have degenerate maximums both when the constraint is satisfied and when it is not.
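This collapse-or-blow-up behavior is easy to see numerically. Below is a minimal sketch on a toy single-constraint LP of my own choosing ($c = (1, 2)$, $A = [1 \;\; 1]$, $b = 2$, so the constraint is $x_1 + x_2 \ge 2$): for a feasible $x$ the inner maximum sits at $\lambda = 0$, while for an infeasible $x$ the Lagrangian grows without bound in $\lambda$.

```python
# Toy LP (my own numbers): min [1,2]^T x  s.t.  x1 + x2 >= 2, x >= 0.
# For a fixed x, max over lam >= 0 of L(x, lam) is c^T x when Ax >= b,
# and +infinity when the constraint is violated.

def lagrangian(x, lam, c=(1.0, 2.0), a=(1.0, 1.0), b=2.0):
    """L(x, lam) = c^T x + lam * (b - a^T x) for a single constraint."""
    slack = b - (a[0] * x[0] + a[1] * x[1])
    return c[0] * x[0] + c[1] * x[1] + lam * slack

feasible = (2.0, 1.0)    # satisfies x1 + x2 >= 2
infeasible = (0.5, 0.5)  # violates it

# For the feasible point, increasing lam never helps: max is at lam = 0.
vals_feas = [lagrangian(feasible, lam) for lam in (0.0, 1.0, 10.0)]
# For the infeasible point, L grows without bound as lam grows.
vals_infeas = [lagrangian(infeasible, lam) for lam in (0.0, 1.0, 10.0)]

print(vals_feas)    # [4.0, 3.0, -6.0] -> max at lam = 0 equals c^T x
print(vals_infeas)  # [1.5, 2.5, 11.5] -> increasing without bound
```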
Lagrangian Dual Form:
If we flip the $\min$ and $\max$ in the problem, we can derive the dual form of the problem.

$$\max_{\lambda \ge 0} \; \min_{x \ge 0} \; L(x, \lambda)$$

Duality is a very malleable concept in mathematics, being used to define a way that two structures can be paired with some level of intrinsic symmetry. In our case, this is done by turning our minimization problem, our original problem known as the primal, into a maximization problem, known as the dual.
The function below is often known as the Lagrange Dual Function and will have a couple of notable properties proven later.

$$g(\lambda) = \min_{x \ge 0} \; L(x, \lambda) = \min_{x \ge 0} \; (c - A^\top \lambda)^\top x + b^\top \lambda$$
We can use the same reasoning of avoiding degenerate minimums for this minimization to prove that we need $c - A^\top \lambda \ge 0$, since $g(\lambda)$ would collapse to $-\infty$ otherwise. The same reasoning that if $c - A^\top \lambda \ge 0$ and $x \ge 0$ then the minimum is attained at $x = 0$, leaving $g(\lambda) = b^\top \lambda$, can also be used as previously to derive a constraint-based optimization problem for the dual problem, along with the fact that we know the transpose of a scalar is equal to the scalar ($\lambda^\top b = b^\top \lambda$).

$$\max_{\lambda} \; b^\top \lambda \quad \text{subject to} \quad A^\top \lambda \le c, \; \lambda \ge 0$$
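The "minimum collapses to $b^\top \lambda$" step can be checked numerically. This sketch uses a toy single-constraint LP of my own ($c = (1, 2)$, $A = [1 \;\; 1]$, $b = 2$) and a dual-feasible $\lambda$, approximating the inner minimum over a grid of nonnegative $x$:

```python
# Toy LP (my own numbers): c = [1, 2], A = [[1, 1]], b = [2].
# g(lam) = min over x >= 0 of (c - A^T lam)^T x + b^T lam.
# When c - A^T lam >= 0, every term in x is nonnegative, so the
# minimum is hit at x = 0 and g(lam) = b^T lam exactly.

def inner(x, lam, c=(1.0, 2.0), a=(1.0, 1.0), b=2.0):
    """(c - A^T lam)^T x + b^T lam for a single-constraint LP."""
    reduced = (c[0] - a[0] * lam, c[1] - a[1] * lam)
    return reduced[0] * x[0] + reduced[1] * x[1] + b * lam

lam_feasible = 0.5  # A^T lam = [0.5, 0.5] <= c = [1, 2], lam >= 0

# Grid of sample points in the nonnegative orthant, including x = 0.
samples = [(i * 0.5, j * 0.5) for i in range(5) for j in range(5)]
g_approx = min(inner(x, lam_feasible) for x in samples)

print(g_approx)  # 1.0, which is exactly b^T lam = 2 * 0.5
```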
This problem may not seem significant at all, but we can prove that the values of the primal and dual are linked.
Weak Duality:
Weak Duality guarantees that the solution to any dual problem will always act as a lower bound for the solution to the primal problem. In the case of the simple linear programs here the benefit of optimizing over the dual form instead of the primal may seem restricted to efficiency, but in harder optimization schemes the dual often has a much better optimization landscape. For our cases with Linear Programs, it is often used when a problem has many more decision variables than it does constraints, since the dual has decision variables in line with the number of constraints instead.
For our basic linear program, we can trivially prove weak duality with the bounds of each problem. Since we know that $Ax \ge b$ and $\lambda \ge 0$, we also know that $\lambda^\top A x \ge \lambda^\top b$, which we can combine with the fact that $A^\top \lambda \le c$ and $x \ge 0$ to prove the following.

$$c^\top x \ge (A^\top \lambda)^\top x = \lambda^\top A x \ge \lambda^\top b = b^\top \lambda$$
Since we have $c^\top x \ge b^\top \lambda$ for every feasible $x$ and $\lambda$, we know that $\min_x c^\top x \ge \max_\lambda b^\top \lambda$, proving weak duality for this problem.
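The inequality chain above can be spot-checked in a few lines. This sketch samples feasible points of both problems for a toy LP of my own ($c = (1, 2)$, constraint $x_1 + x_2 \ge 2$, whose dual constraint $A^\top \lambda \le c$ reduces to $\lambda \le 1$) and confirms that every dual value lower-bounds every primal value:

```python
import itertools

# Weak duality on a toy LP (my own numbers): c = [1, 2], A = [[1, 1]], b = [2].
# Primal: min c^T x  s.t.  x1 + x2 >= 2, x >= 0.
# Dual:   max 2*lam  s.t.  lam <= 1 (from A^T lam <= c), lam >= 0.
c = (1.0, 2.0)
primal_points = [(2.0, 0.0), (1.0, 1.0), (0.0, 3.0), (5.0, 5.0)]  # all primal feasible
dual_points = [0.0, 0.25, 0.5, 1.0]                               # all dual feasible

# Every dual objective value lower-bounds every primal objective value.
weak_duality_holds = all(
    c[0] * x[0] + c[1] * x[1] >= 2.0 * lam
    for x, lam in itertools.product(primal_points, dual_points)
)
print(weak_duality_holds)  # True
```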
We can also prove this for the general Lagrangian case by proving that $g(\lambda)$ is always less than or equal to the primal optimum $p^* = f(x^*)$. This can be proven below using the solution of the primal $x^*$, which we know satisfies $g_i(x^*) \le 0$ for all $i$ since it has to satisfy the constraints.
Since $x^*$ has to satisfy all of the constraints, we know that $g_i(x^*) \le 0$, which combined with $\lambda \ge 0$ gives $\sum_i \lambda_i g_i(x^*) \le 0$ and removes the constraint term from the Lagrangian with the same reasoning as before. By definition of the minimum, we know that the following is satisfied, thus proving weak duality for all dual problems.

$$g(\lambda) = \min_{x} \; L(x, \lambda) \le L(x^*, \lambda) = f(x^*) + \sum_i \lambda_i g_i(x^*) \le f(x^*) = p^*$$
Strong Duality:
Strong Duality is a condition of an optimization problem that guarantees that the value of the solution to the dual problem will give exactly the value of the primal's solution. This does not hold for all primal-dual pairs, but we can prove that it holds for our specific case of linear programs, which means that there exists some $\lambda^*$ with $A^\top \lambda^* \le c$, $\lambda^* \ge 0$ and $b^\top \lambda^* = p^*$.
This can be proven by setting up a linear system that would find such a $\lambda^*$, assuming that no such $\lambda^*$ exists, and then finding a contradiction using Farkas' Lemma, specifically the Inequality Form, which states that exactly one of the following is true for a matrix $M$ and a vector $d$.
- There exists $z$ such that $Mz \le d$
- There exists $u \ge 0$ such that $M^\top u = 0$ and $d^\top u < 0$
Since we want a $\lambda^*$ that satisfies the constraints, we know that it needs to satisfy both $A^\top \lambda \le c$, $\lambda \ge 0$ to be feasible and $b^\top \lambda \ge p^*$. This lets us set up a system of inequalities as follows.

$$\begin{bmatrix} A^\top \\ -I \\ -b^\top \end{bmatrix} \lambda \le \begin{bmatrix} c \\ 0 \\ -p^* \end{bmatrix}$$
If we assume that this system has no solutions, then by Farkas' Lemma we know that there has to be some vector $u = (x, \mu, t) \ge 0$ such that $Ax - \mu - tb = 0$ and $c^\top x - t\,p^* < 0$, which can be rewritten as $Ax \ge tb$ and $c^\top x < t\,p^*$. We can break this up into cases based on the value of $t$ to show that this is not possible.
In the case where $t > 0$, we can show a contradiction by dividing by $t$, giving $A(x/t) \ge b$ and $c^\top (x/t) < p^*$. The first inequality shows that $x/t$ is feasible for the primal (note $x/t \ge 0$), which combined with the second would show that there is a feasible point that leads to a value strictly less than $p^*$, which leads to a contradiction because we defined $p^*$ as the minimum.
In the case where $t = 0$, this implies that $Ax \ge 0$ and $c^\top x < 0$. We can add this point to the optimal $x^*$ to get $A(x^* + x) \ge b$ and $c^\top (x^* + x) < p^*$, which means that $x^* + x$ would be a feasible solution leading to a lower value than $p^*$, which again leads to a contradiction.
Since this leads to an impossible situation, we know that there must exist some $\lambda^*$ that satisfies both inequalities. Since this satisfies $b^\top \lambda^* \ge p^*$, and we have already proven $b^\top \lambda^* \le p^*$ from weak duality, we know that $b^\top \lambda^* = p^*$, thus proving strong duality for this linear program pair.
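To see the zero duality gap concretely, here is a sketch on a toy LP of my own ($c = (1, 2)$, constraint $x_1 + x_2 \ge 2$, dual constraint $\lambda \le 1$), solved by enumerating the candidate vertices rather than by any solver:

```python
# Strong duality on a toy LP (my own numbers, solved by vertex enumeration):
# Primal: min x1 + 2*x2  s.t.  x1 + x2 >= 2, x >= 0  -> p* = 2 at x = (2, 0).
# Dual:   max 2*lam      s.t.  lam <= 1, lam >= 0    -> d* = 2 at lam = 1.
c = (1.0, 2.0)

# An LP optimum sits at a vertex; here the candidates are where the
# constraint boundary x1 + x2 = 2 meets the coordinate axes.
p_star = min(c[0] * x[0] + c[1] * x[1] for x in [(2.0, 0.0), (0.0, 2.0)])

# Dual vertices: the endpoints of the feasible interval [0, 1] for lam.
d_star = max(2.0 * lam for lam in [0.0, 1.0])

print(p_star, d_star)  # 2.0 2.0 -> the duality gap is zero
```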