
Duality

Suppose that we are given a linear programming problem of the form

$$
\begin{array}{ll}
\text{minimize} & \boldsymbol{c}^{\top} \boldsymbol{x} \\
\text{subject to} & \boldsymbol{A}\boldsymbol{x} \geq \boldsymbol{b} \\
& \boldsymbol{x} \geq \boldsymbol{0}.
\end{array}
$$

We refer to the above as the primal problem. We define the corresponding dual problem as

$$
\begin{array}{ll}
\text{maximize} & \boldsymbol{\lambda}^{\top} \boldsymbol{b} \\
\text{subject to} & \boldsymbol{\lambda}^{\top} \boldsymbol{A} \leq \boldsymbol{c}^{\top} \\
& \boldsymbol{\lambda} \geq \boldsymbol{0}.
\end{array}
$$

We refer to the variable $\boldsymbol{\lambda} \in \mathbb{R}^m$ as the dual vector. Note that the cost vector $\boldsymbol{c}$ in the primal has moved to the constraints in the dual, while the vector $\boldsymbol{b}$ on the right-hand side of $\boldsymbol{A}\boldsymbol{x} \geq \boldsymbol{b}$ becomes part of the cost in the dual. Thus, the roles of $\boldsymbol{b}$ and $\boldsymbol{c}$ are reversed. The form of duality defined above is called the symmetric form of duality.

Note that the dual of the dual problem is the primal problem. There is also an asymmetric form of duality, in which the primal is in standard form (minimize $\boldsymbol{c}^{\top}\boldsymbol{x}$ subject to $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$, $\boldsymbol{x} \geq \boldsymbol{0}$) and the dual is to maximize $\boldsymbol{\lambda}^{\top}\boldsymbol{b}$ subject to $\boldsymbol{\lambda}^{\top}\boldsymbol{A} \leq \boldsymbol{c}^{\top}$, with $\boldsymbol{\lambda}$ unrestricted in sign; the results below apply to both forms.
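As a quick numerical illustration of the symmetric primal–dual pair, one can solve both problems and compare the optimal values. This is only a sketch with hypothetical data (the particular $\boldsymbol{c}$, $\boldsymbol{A}$, $\boldsymbol{b}$ below are chosen for illustration and are not part of the development above), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for a small symmetric-form pair:
# primal:  minimize 2*x1 + 3*x2  s.t.  x1 + x2 >= 4,  x1 + 2*x2 >= 6,  x >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])

# linprog expects A_ub @ x <= b_ub, so negate A x >= b; bounds default to x >= 0.
primal = linprog(c, A_ub=-A, b_ub=-b)

# dual:  maximize lambda^T b  s.t.  lambda^T A <= c^T,  lambda >= 0,
# i.e. minimize -b^T lambda with A^T lambda <= c.
dual = linprog(-b, A_ub=A.T, b_ub=c)

print(primal.fun, -dual.fun)  # the two optimal values coincide (both 10.0 here)
```

For this data the primal optimum is attained at $\boldsymbol{x}=(2,2)$ and the dual optimum at $\boldsymbol{\lambda}=(1,1)$, with common value 10, consistent with the duality theorem proved below.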


Properties of Dual Problems

Lemma 1 (Weak Duality Lemma)

Suppose that $\boldsymbol{x}$ and $\boldsymbol{\lambda}$ are feasible solutions to the primal and dual LP problems, respectively (in either the symmetric or asymmetric form). Then, $\boldsymbol{c}^{\top}\boldsymbol{x} \geq \boldsymbol{\lambda}^{\top}\boldsymbol{b}$.

Proof

We prove this lemma only for the asymmetric form of duality; the proof for the symmetric form involves only a slight modification. Because $\boldsymbol{x}$ and $\boldsymbol{\lambda}$ are feasible, we have $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$, $\boldsymbol{x} \geq \boldsymbol{0}$, and $\boldsymbol{\lambda}^{\top}\boldsymbol{A} \leq \boldsymbol{c}^{\top}$. Postmultiplying both sides of the inequality $\boldsymbol{\lambda}^{\top}\boldsymbol{A} \leq \boldsymbol{c}^{\top}$ by $\boldsymbol{x} \geq \boldsymbol{0}$ yields $\boldsymbol{\lambda}^{\top}\boldsymbol{A}\boldsymbol{x} \leq \boldsymbol{c}^{\top}\boldsymbol{x}$. But $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$; hence, $\boldsymbol{\lambda}^{\top}\boldsymbol{b} \leq \boldsymbol{c}^{\top}\boldsymbol{x}$.

The weak duality lemma states that a feasible solution to either problem yields a bound on the optimal cost of the other problem: the cost in the dual is never above the cost in the primal. In particular, the optimal cost of the dual is less than or equal to the optimal cost of the primal, that is, "maximum $\leq$ minimum." Hence, if the cost of one of the problems is unbounded, then the other problem has no feasible solution. In other words, if "minimum $=-\infty$" or "maximum $=+\infty$," then the feasible set in the other problem must be empty.
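The inequality can be checked concretely. The sketch below uses hypothetical data (a small symmetric-form pair and two arbitrarily chosen feasible points, introduced purely for illustration): any feasible primal cost dominates any feasible dual value.

```python
import numpy as np

# Hypothetical symmetric-form pair:  min c^T x  s.t.  A x >= b,  x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])

x = np.array([5.0, 1.0])     # primal-feasible: A @ x = [6, 7] >= b, x >= 0
lam = np.array([1.5, 0.0])   # dual-feasible: lam @ A = [1.5, 1.5] <= c, lam >= 0

assert np.all(A @ x >= b) and np.all(x >= 0)
assert np.all(lam @ A <= c) and np.all(lam >= 0)
assert c @ x >= lam @ b      # weak duality: 13.0 >= 6.0
```

Neither point is optimal, yet the dual value (6) already bounds the primal optimum from below, and the primal cost (13) bounds the dual optimum from above.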

Theorem 1

Suppose that $\boldsymbol{x}_0$ and $\boldsymbol{\lambda}_0$ are feasible solutions to the primal and dual, respectively (in either symmetric or asymmetric form). If $\boldsymbol{c}^{\top}\boldsymbol{x}_0 = \boldsymbol{\lambda}_0^{\top}\boldsymbol{b}$, then $\boldsymbol{x}_0$ and $\boldsymbol{\lambda}_0$ are optimal solutions to their respective problems.

Proof

Let $\boldsymbol{x}$ be an arbitrary feasible solution to the primal problem. Because $\boldsymbol{\lambda}_0$ is a feasible solution to the dual, the weak duality lemma gives $\boldsymbol{c}^{\top}\boldsymbol{x} \geq \boldsymbol{\lambda}_0^{\top}\boldsymbol{b}$. So, if $\boldsymbol{c}^{\top}\boldsymbol{x}_0 = \boldsymbol{\lambda}_0^{\top}\boldsymbol{b}$, then $\boldsymbol{c}^{\top}\boldsymbol{x}_0 = \boldsymbol{\lambda}_0^{\top}\boldsymbol{b} \leq \boldsymbol{c}^{\top}\boldsymbol{x}$. Hence, $\boldsymbol{x}_0$ is optimal for the primal.

On the other hand, let $\boldsymbol{\lambda}$ be an arbitrary feasible solution to the dual problem. Because $\boldsymbol{x}_0$ is a feasible solution to the primal, the weak duality lemma gives $\boldsymbol{c}^{\top}\boldsymbol{x}_0 \geq \boldsymbol{\lambda}^{\top}\boldsymbol{b}$. Therefore, if $\boldsymbol{c}^{\top}\boldsymbol{x}_0 = \boldsymbol{\lambda}_0^{\top}\boldsymbol{b}$, then $\boldsymbol{\lambda}^{\top}\boldsymbol{b} \leq \boldsymbol{c}^{\top}\boldsymbol{x}_0 = \boldsymbol{\lambda}_0^{\top}\boldsymbol{b}$. Hence, $\boldsymbol{\lambda}_0$ is optimal for the dual.

We can interpret Theorem 1 as follows. The primal seeks to minimize its cost, and the dual seeks to maximize its cost. Because the weak duality lemma states that "maximum $\leq$ minimum," each problem "seeks to reach the other." When their costs are equal for a pair of feasible solutions, both solutions are optimal, and we have "maximum = minimum." It turns out that the converse of Theorem 1 is also true; that is, whenever both problems have optimal solutions, "maximum = minimum" holds. In fact, we can prove an even stronger result, known as the duality theorem.

Theorem 2 (Duality Theorem)

If the primal problem (either in symmetric or asymmetric form) has an optimal solution, then so does the dual, and the optimal values of their respective objective functions are equal.

Proof

We first prove the result for the asymmetric form of duality. Assume that the primal has an optimal solution. Then, by the fundamental theorem of LP, there exists an optimal basic feasible solution. As is our usual notation, let $\boldsymbol{B}$ be the matrix of the corresponding $m$ basic columns, $\boldsymbol{D}$ the matrix of the $n-m$ nonbasic columns, $\boldsymbol{c}_{\boldsymbol{B}}$ the vector of elements of $\boldsymbol{c}$ corresponding to the basic variables, $\boldsymbol{c}_{\boldsymbol{D}}$ the vector of elements of $\boldsymbol{c}$ corresponding to the nonbasic variables, and $\boldsymbol{r}_{\boldsymbol{D}}$ the vector of reduced cost coefficients. Then, by the theorem presented in the LP section,

$$
\begin{gather*}
\boldsymbol{r}_{\boldsymbol{D}}^{\top}=\boldsymbol{c}_{\boldsymbol{D}}^{\top}-\boldsymbol{c}_{\boldsymbol{B}}^{\top} \boldsymbol{B}^{-1} \boldsymbol{D} \geq \boldsymbol{0}^{\top}.
\end{gather*}
$$

Hence,

$$
\begin{gather*}
\boldsymbol{c}_{\boldsymbol{B}}^{\top}\boldsymbol{B}^{-1}\boldsymbol{D}\leq \boldsymbol{c}_{\boldsymbol{D}}^{\top}.
\end{gather*}
$$

Define

$$
\begin{gather*}
\boldsymbol{\lambda}^{\top} = \boldsymbol{c}_{\boldsymbol{B}}^{\top}\boldsymbol{B}^{-1}.
\end{gather*}
$$

Then,

$$
\begin{gather*}
\boldsymbol{c}_{\boldsymbol{B}}^{\top}\boldsymbol{B}^{-1}\boldsymbol{D} = \boldsymbol{\lambda}^{\top}\boldsymbol{D} \leq \boldsymbol{c}_{\boldsymbol{D}}^{\top}.
\end{gather*}
$$

We claim that $\boldsymbol{\lambda}$ is a feasible solution to the dual. To see this, assume for convenience (and without loss of generality) that the basic columns are the first $m$ columns of $\boldsymbol{A}$. Then,

$$
\begin{gather*}
\boldsymbol{\lambda}^{\top} \boldsymbol{A}=\boldsymbol{\lambda}^{\top}[\boldsymbol{B}, \boldsymbol{D}]=\left[\boldsymbol{c}_B^{\top}, \boldsymbol{\lambda}^{\top} \boldsymbol{D}\right] \leq\left[\boldsymbol{c}_B^{\top}, \boldsymbol{c}_D^{\top}\right]=\boldsymbol{c}^{\top}.
\end{gather*}
$$

Hence, $\boldsymbol{\lambda}^{\top}\boldsymbol{A} \leq \boldsymbol{c}^{\top}$, and thus $\boldsymbol{\lambda}^{\top}=\boldsymbol{c}_B^{\top}\boldsymbol{B}^{-1}$ is feasible. We claim that $\boldsymbol{\lambda}$ is also an optimal feasible solution to the dual. To see this, note that

$$
\begin{gather*}
\boldsymbol{\lambda}^{\top} \boldsymbol{b}=\boldsymbol{c}_B^{\top} \boldsymbol{B}^{-1} \boldsymbol{b}=\boldsymbol{c}_B^{\top} \boldsymbol{x}_B.
\end{gather*}
$$

But $\boldsymbol{c}_B^{\top}\boldsymbol{x}_B$ is the cost of the optimal basic feasible solution (its nonbasic components are zero), so $\boldsymbol{\lambda}^{\top}\boldsymbol{b}$ equals the optimal primal cost. Thus, by Theorem 1, $\boldsymbol{\lambda}$ is optimal.

We now prove the symmetric case. First, we convert the primal problem for the symmetric form into the equivalent standard form by adding surplus variables:

$$
\begin{align*}
\text{minimize} \quad & \left[\boldsymbol{c}^{\top}, \boldsymbol{0}^{\top}\right]\left[\begin{array}{l}\boldsymbol{x} \\ \boldsymbol{y}\end{array}\right] \\
\text{subject to} \quad & [\boldsymbol{A},-\boldsymbol{I}]\left[\begin{array}{l}\boldsymbol{x} \\ \boldsymbol{y}\end{array}\right]=\boldsymbol{b} \\
& \left[\begin{array}{l}\boldsymbol{x} \\ \boldsymbol{y}\end{array}\right] \geq \boldsymbol{0}.
\end{align*}
$$

Note that $\boldsymbol{x}$ is optimal for the original primal problem if and only if $\left[\boldsymbol{x}^{\top},(\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b})^{\top}\right]^{\top}$ is optimal for the primal in standard form. The dual to the primal in standard form is equivalent to the dual to the original primal in symmetric form. Therefore, the result above for the asymmetric case also applies to the symmetric case. This completes the proof.
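The construction $\boldsymbol{\lambda}^{\top}=\boldsymbol{c}_{\boldsymbol{B}}^{\top}\boldsymbol{B}^{-1}$ used in the proof can be traced numerically. The sketch below uses hypothetical data: the primal min $2x_1+3x_2$ subject to $x_1+x_2\geq 4$, $x_1+2x_2\geq 6$, $\boldsymbol{x}\geq\boldsymbol{0}$, converted to standard form with surplus variables, where the optimal basic feasible solution is assumed to have $x_1, x_2$ basic.

```python
import numpy as np

# Standard-form data [A, -I][x; y] = b after adding surplus variables y
# to the hypothetical primal: min 2x1+3x2,  x1+x2>=4,  x1+2x2>=6,  x>=0.
A_std = np.array([[1.0, 1.0, -1.0,  0.0],
                  [1.0, 2.0,  0.0, -1.0]])
c_std = np.array([2.0, 3.0, 0.0, 0.0])
b = np.array([4.0, 6.0])

basis = [0, 1]               # assumed optimal basis: x1 and x2 basic
B = A_std[:, basis]          # basic columns
cB = c_std[basis]

lam = cB @ np.linalg.inv(B)  # lambda^T = c_B^T B^{-1}

xB = np.linalg.solve(B, b)                   # basic variables: x = (2, 2)
assert np.all(lam @ A_std <= c_std + 1e-9)   # lambda is dual-feasible
assert np.isclose(lam @ b, cB @ xB)          # equal costs => both optimal (Theorem 1)
```

Here the recovered dual vector is $\boldsymbol{\lambda}=(1,1)$, and $\boldsymbol{\lambda}^{\top}\boldsymbol{b}=\boldsymbol{c}_B^{\top}\boldsymbol{x}_B=10$, exactly as the proof predicts.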

Theorem 3 (Complementary Slackness Condition)

The feasible solutions x\boldsymbol{x} and λ\boldsymbol{\lambda} to a dual pair of problems (either in symmetric or asymmetric form) are optimal if and only if:

  1. $(\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top} \boldsymbol{A})\boldsymbol{x}=0$.
  2. $\boldsymbol{\lambda}^{\top}(\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b})=0$.

Proof

We first prove the result for the asymmetric case. Note that condition 2 holds trivially for this case, since $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$. Therefore, we consider only condition 1.

$\Rightarrow$: If the two solutions are optimal, then by Theorem 2, $\boldsymbol{c}^{\top}\boldsymbol{x}=\boldsymbol{\lambda}^{\top}\boldsymbol{b}$. Because $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{b}$, we also have $(\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top}\boldsymbol{A})\boldsymbol{x}=0$.

$\Leftarrow$: If $(\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top}\boldsymbol{A})\boldsymbol{x}=0$, then $\boldsymbol{c}^{\top}\boldsymbol{x}=\boldsymbol{\lambda}^{\top}\boldsymbol{A}\boldsymbol{x}=\boldsymbol{\lambda}^{\top}\boldsymbol{b}$. Therefore, by Theorem 1, $\boldsymbol{x}$ and $\boldsymbol{\lambda}$ are optimal.

We now prove the result for the symmetric case.

$\Rightarrow$: We first show condition 1. If the two solutions are optimal, then by Theorem 2, $\boldsymbol{c}^{\top}\boldsymbol{x}=\boldsymbol{\lambda}^{\top}\boldsymbol{b}$. Because $\boldsymbol{A}\boldsymbol{x} \geq \boldsymbol{b}$ and $\boldsymbol{\lambda} \geq \boldsymbol{0}$, we have

$$
\begin{align*}
\left(\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top} \boldsymbol{A}\right) \boldsymbol{x} & =\boldsymbol{c}^{\top} \boldsymbol{x}-\boldsymbol{\lambda}^{\top} \boldsymbol{A} \boldsymbol{x} \\
& = \boldsymbol{\lambda}^{\top} \boldsymbol{b}-\boldsymbol{\lambda}^{\top} \boldsymbol{A} \boldsymbol{x} \\
& = \boldsymbol{\lambda}^{\top}(\boldsymbol{b}-\boldsymbol{A} \boldsymbol{x}) \leq 0.
\end{align*}
$$

On the other hand, since $\boldsymbol{\lambda}^{\top}\boldsymbol{A} \leq \boldsymbol{c}^{\top}$ and $\boldsymbol{x} \geq \boldsymbol{0}$, we have $(\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top}\boldsymbol{A})\boldsymbol{x} \geq 0$. Hence, $(\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top}\boldsymbol{A})\boldsymbol{x}=0$. To show condition 2, note that since $\boldsymbol{A}\boldsymbol{x} \geq \boldsymbol{b}$ and $\boldsymbol{\lambda} \geq \boldsymbol{0}$, we have $\boldsymbol{\lambda}^{\top}(\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b}) \geq 0$. On the other hand, using $\boldsymbol{c}^{\top}\boldsymbol{x}=\boldsymbol{\lambda}^{\top}\boldsymbol{b}$ together with $\boldsymbol{\lambda}^{\top}\boldsymbol{A} \leq \boldsymbol{c}^{\top}$ and $\boldsymbol{x} \geq \boldsymbol{0}$, we have $\boldsymbol{\lambda}^{\top}(\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b})=\boldsymbol{\lambda}^{\top}\boldsymbol{A}\boldsymbol{x}-\boldsymbol{c}^{\top}\boldsymbol{x}=(\boldsymbol{\lambda}^{\top}\boldsymbol{A}-\boldsymbol{c}^{\top})\boldsymbol{x} \leq 0$. Hence, $\boldsymbol{\lambda}^{\top}(\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b})=0$.

$\Leftarrow$: Combining conditions 1 and 2, we get $\boldsymbol{c}^{\top}\boldsymbol{x}=\boldsymbol{\lambda}^{\top}\boldsymbol{A}\boldsymbol{x}=\boldsymbol{\lambda}^{\top}\boldsymbol{b}$. Hence, by Theorem 1, $\boldsymbol{x}$ and $\boldsymbol{\lambda}$ are optimal.

Note that if $\boldsymbol{x}$ and $\boldsymbol{\lambda}$ are feasible solutions for the dual pair of problems, we can write condition 1, that is, $(\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top}\boldsymbol{A})\boldsymbol{x}=0$, as "$x_i>0$ implies that $\boldsymbol{\lambda}^{\top}\boldsymbol{a}_i=c_i$, $i=1,\ldots,n$," where $\boldsymbol{a}_i$ denotes the $i$th column of $\boldsymbol{A}$; that is, for any component of $\boldsymbol{x}$ that is positive, the corresponding constraint for the dual must hold with equality at $\boldsymbol{\lambda}$. Also, observe that the statement "$x_i>0$ implies that $\boldsymbol{\lambda}^{\top}\boldsymbol{a}_i=c_i$" is equivalent to "$\boldsymbol{\lambda}^{\top}\boldsymbol{a}_i<c_i$ implies that $x_i=0$." A similar representation can be written for condition 2.

Consider the asymmetric form of duality. Recall that for the case of an optimal basic feasible solution $\boldsymbol{x}$, $\boldsymbol{r}^{\top}=\boldsymbol{c}^{\top}-\boldsymbol{\lambda}^{\top}\boldsymbol{A}$ is the vector of reduced cost coefficients. Therefore, in this case, the complementary slackness condition can be written as $\boldsymbol{r}^{\top}\boldsymbol{x}=0$.
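As a numerical sanity check of Theorem 3, the sketch below uses hypothetical data: the symmetric pair min $2x_1+3x_2$ subject to $x_1+x_2\geq 4$, $x_1+2x_2\geq 6$, $\boldsymbol{x}\geq\boldsymbol{0}$, whose optimal primal and dual solutions are assumed known. Both slackness quantities vanish at optimality:

```python
import numpy as np

# Hypothetical symmetric-form pair and its (assumed) optimal solutions.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])
x_opt = np.array([2.0, 2.0])     # optimal primal vertex (assumed)
lam_opt = np.array([1.0, 1.0])   # optimal dual vertex (assumed)

slack1 = (c - lam_opt @ A) @ x_opt    # condition 1: (c^T - lam^T A) x
slack2 = lam_opt @ (A @ x_opt - b)    # condition 2: lam^T (A x - b)
assert np.isclose(slack1, 0.0) and np.isclose(slack2, 0.0)
```

For this data both primal constraints are active at $\boldsymbol{x}=(2,2)$ and both dual constraints are active at $\boldsymbol{\lambda}=(1,1)$, so each condition holds componentwise, not just in aggregate.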