Karush–Kuhn–Tucker conditions

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a saddle point, i.e. a global maximum (minimum) over the domain of the choice variables and a global minimum (maximum) over the multipliers, which is why the Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.[1]

The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951.[2] Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.[3][4]

Nonlinear optimization problem

Consider the following nonlinear minimization or maximization problem:

Optimize $f(x)$
subject to $g_i(x) \leq 0$ for $i = 1, \ldots, m$, and $h_j(x) = 0$ for $j = 1, \ldots, \ell$,

where $x \in X$ is the optimization variable chosen from a convex subset $X$ of $\mathbb{R}^n$, $f$ is the objective or utility function, $g_i$ ($i = 1, \ldots, m$) are the inequality constraint functions and $h_j$ ($j = 1, \ldots, \ell$) are the equality constraint functions. The numbers of inequalities and equalities are denoted by $m$ and $\ell$ respectively. Corresponding to the constrained optimization problem one can form the Lagrangian function

$$L(x, \mu, \lambda) = f(x) + \mu^\top \mathbf{g}(x) + \lambda^\top \mathbf{h}(x) = f(x) + \sum_{i=1}^{m} \mu_i g_i(x) + \sum_{j=1}^{\ell} \lambda_j h_j(x),$$

where $\mathbf{g}(x) = (g_1(x), \ldots, g_m(x))^\top$ and $\mathbf{h}(x) = (h_1(x), \ldots, h_\ell(x))^\top$. The Karush–Kuhn–Tucker theorem then states the following.

Theorem. If $(x^*, \mu^*)$ is a saddle point of $L(x, \mu)$ in $x \in X$, $\mu \geq 0$, then $x^*$ is an optimal vector for the above optimization problem. Suppose that $f(x)$ and $g_i(x)$, $i = 1, \ldots, m$, are convex in $x$ and that there exists $x_0 \in X$ such that $\mathbf{g}(x_0) < 0$. Then with an optimal vector $x^*$ for the above optimization problem there is associated a non-negative vector $\mu^*$ such that $(x^*, \mu^*)$ is a saddle point of $L(x, \mu)$.[5]

Since the idea of this approach is to find a supporting hyperplane on the feasible set $\Gamma = \{x \in X : g_i(x) \leq 0, \ i = 1, \ldots, m\}$, the proof of the Karush–Kuhn–Tucker theorem makes use of the hyperplane separation theorem.[6]
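
The saddle-point property can be checked numerically for small problems. Below is a minimal sketch in Python with NumPy for a hypothetical one-dimensional example (the problem data and grid are illustrative, not from the cited sources): minimizing $f(x) = x^2$ subject to $1 - x \leq 0$ has optimum $x^* = 1$ with multiplier $\mu^* = 2$, and $L(x, \mu) = x^2 + \mu(1 - x)$ satisfies both saddle inequalities on a sampled grid.

```python
import numpy as np

# Hypothetical example: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# The optimum is x* = 1 with KKT multiplier mu* = 2.
def lagrangian(x, mu):
    return x**2 + mu * (1.0 - x)

x_star, mu_star = 1.0, 2.0
xs = np.linspace(-2.0, 3.0, 501)    # sweep over the primal variable
mus = np.linspace(0.0, 10.0, 501)   # sweep over the multiplier (mu >= 0)

# Saddle point: L(x*, mu) <= L(x*, mu*) <= L(x, mu*) for all x and all mu >= 0.
assert np.all(lagrangian(x_star, mus) <= lagrangian(x_star, mu_star) + 1e-12)
assert np.all(lagrangian(xs, mu_star) >= lagrangian(x_star, mu_star) - 1e-12)
print("saddle-point inequalities hold on the sampled grid")
```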

The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.[7]
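
As an illustration of this interpretation, for an equality-constrained quadratic program the KKT conditions form a linear system, so a single linear solve is itself a "KKT method". A minimal sketch with hypothetical problem data (Python with NumPy; the matrices below are illustrative), for minimizing $\tfrac{1}{2} x^\top Q x + c^\top x$ subject to $Ax = b$:

```python
import numpy as np

# Hypothetical problem data: minimize 0.5 x^T Q x + c^T x  subject to  A x = b.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite Hessian
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])               # single equality constraint x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
# Stationarity (Q x + c + A^T lambda = 0) and primal feasibility (A x = b)
# stacked into one symmetric linear system.
kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
print("x* =", x, " lambda* =", lam)        # x* = [0, 1], lambda* = [2]
print("stationarity residual:", Q @ x + c + A.T @ lam)  # ~0
```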

Necessary conditions

Suppose that the objective function $f \colon \mathbb{R}^n \to \mathbb{R}$ and the constraint functions $g_i \colon \mathbb{R}^n \to \mathbb{R}$ and $h_j \colon \mathbb{R}^n \to \mathbb{R}$ are continuously differentiable at a point $x^* \in \mathbb{R}^n$. If $x^*$ is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants $\mu_i$ ($i = 1, \ldots, m$) and $\lambda_j$ ($j = 1, \ldots, \ell$), called KKT multipliers, such that the following four groups of conditions hold:

[Figure: inequality constraint diagram for optimization problems]

Stationarity
For minimizing $f(x)$: $\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*) = 0$
For maximizing $f(x)$: $-\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*) = 0$
Primal feasibility
$g_i(x^*) \leq 0$ for $i = 1, \ldots, m$, and $h_j(x^*) = 0$ for $j = 1, \ldots, \ell$
Dual feasibility
$\mu_i \geq 0$ for $i = 1, \ldots, m$
Complementary slackness
$\mu_i g_i(x^*) = 0$ for $i = 1, \ldots, m$

The last condition is sometimes written in the equivalent form: $\sum_{i=1}^{m} \mu_i g_i(x^*) = 0.$
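
The four groups of conditions are straightforward to verify numerically at a candidate point. A minimal sketch (Python with NumPy) for a hypothetical two-variable example, minimizing $(x_1 - 1)^2 + (x_2 - 2)^2$ subject to $x_1 + x_2 - 2 \leq 0$, whose solution works out to $x^* = (0.5, 1.5)$ with $\mu^* = 1$:

```python
import numpy as np

# Hypothetical example: minimize (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 2 <= 0.
def grad_f(x):  return np.array([2*(x[0] - 1), 2*(x[1] - 2)])
def g(x):       return x[0] + x[1] - 2.0
def grad_g(x):  return np.array([1.0, 1.0])

x_star, mu_star = np.array([0.5, 1.5]), 1.0
tol = 1e-9

stationarity = np.allclose(grad_f(x_star) + mu_star * grad_g(x_star), 0, atol=tol)
primal_feas  = g(x_star) <= tol
dual_feas    = mu_star >= 0
comp_slack   = abs(mu_star * g(x_star)) <= tol
print(stationarity, primal_feas, dual_feas, comp_slack)  # True True True True
```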

In the particular case $m = 0$, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.

If some of the functions are non-differentiable, subdifferential versions of the KKT conditions are available.[8]

Matrix representation

The necessary conditions can be written with Jacobian matrices of the constraint functions. Let $\mathbf{g} \colon \mathbb{R}^n \to \mathbb{R}^m$ be defined as $\mathbf{g}(x) = (g_1(x), \ldots, g_m(x))^\top$ and let $\mathbf{h} \colon \mathbb{R}^n \to \mathbb{R}^\ell$ be defined as $\mathbf{h}(x) = (h_1(x), \ldots, h_\ell(x))^\top$. Let $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_m)^\top$ and $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_\ell)^\top$. Then the necessary conditions can be written as:

Stationarity
For maximizing $f(x)$: $\nabla f(x^*) - D\mathbf{g}(x^*)^\top \boldsymbol{\mu} - D\mathbf{h}(x^*)^\top \boldsymbol{\lambda} = 0$
For minimizing $f(x)$: $\nabla f(x^*) + D\mathbf{g}(x^*)^\top \boldsymbol{\mu} + D\mathbf{h}(x^*)^\top \boldsymbol{\lambda} = 0$
Primal feasibility
$\mathbf{g}(x^*) \leq 0$ and $\mathbf{h}(x^*) = 0$
Dual feasibility
$\boldsymbol{\mu} \geq 0$
Complementary slackness
$\boldsymbol{\mu}^\top \mathbf{g}(x^*) = 0$

where $D\mathbf{g}(x^*)$ and $D\mathbf{h}(x^*)$ denote the Jacobian matrices of $\mathbf{g}$ and $\mathbf{h}$ evaluated at $x^*$.
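
In this matrix form the KKT system can be evaluated as a single residual vector. A minimal sketch (Python with NumPy), assuming hypothetical user-supplied callables grad_f, g, h, Dg and Dh for the problem data, using the minimization convention; the sign conditions $\mathbf{g}(x) \leq 0$ and $\boldsymbol{\mu} \geq 0$ are checked separately since they are inequalities rather than equations:

```python
import numpy as np

def kkt_residual(x, mu, lam, grad_f, g, h, Dg, Dh):
    """Stack the equation parts of the KKT conditions (minimization
    convention) into one vector.  A candidate (x, mu, lam) satisfies the
    KKT conditions when every entry is ~0 and, in addition, g(x) <= 0 and
    mu >= 0 hold componentwise."""
    stationarity = grad_f(x) + Dg(x).T @ mu + Dh(x).T @ lam
    return np.concatenate([
        stationarity,             # stationarity:            = 0
        h(x),                     # primal feasibility (eq): = 0
        np.atleast_1d(mu @ g(x))  # complementary slackness: = 0
    ])
```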

Regularity conditions (or constraint qualifications)

One can ask whether a minimizer point $x^*$ of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer $x^*$ of a function $f(x)$ in an unconstrained problem has to satisfy the condition $\nabla f(x^*) = 0$. For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions. Some common examples for conditions that guarantee this are tabulated in the following, with the LICQ the most frequently used one:

Linearity constraint qualification (LCQ): If $g_i$ and $h_j$ are affine functions, then no other condition is needed.

Linear independence constraint qualification (LICQ): The gradients of the active inequality constraints and the gradients of the equality constraints are linearly independent at $x^*$.

Mangasarian–Fromovitz constraint qualification (MFCQ): The gradients of the equality constraints are linearly independent at $x^*$ and there exists a vector $d \in \mathbb{R}^n$ such that $\nabla g_i(x^*)^\top d < 0$ for all active inequality constraints and $\nabla h_j(x^*)^\top d = 0$ for all equality constraints.[9]

Constant rank constraint qualification (CRCQ): For each subset of the gradients of the active inequality constraints and the gradients of the equality constraints, the rank in a vicinity of $x^*$ is constant.

Constant positive linear dependence constraint qualification (CPLD): For each subset of gradients of active inequality constraints and gradients of equality constraints, if the subset of vectors is linearly dependent at $x^*$ with non-negative scalars associated with the inequality constraints, then it remains linearly dependent in a neighborhood of $x^*$.

Quasi-normality constraint qualification (QNCQ): If the gradients of the active inequality constraints and the gradients of the equality constraints are linearly dependent at $x^*$ with associated multipliers $\lambda_j$ for equalities and $\mu_i \geq 0$ for inequalities, then there is no sequence $x_k \to x^*$ such that $\lambda_j \neq 0 \Rightarrow \lambda_j h_j(x_k) > 0$ and $\mu_i \neq 0 \Rightarrow \mu_i g_i(x_k) > 0$.

Slater's condition (SC): For a convex problem (i.e., assuming minimization, $f$ and $g_i$ are convex and $h_j$ is affine), there exists a point $x$ such that $h_j(x) = 0$ and $g_i(x) < 0$ for all $i$.

It can be shown that

LICQ ⇒ MFCQ ⇒ CPLD ⇒ QNCQ

and

LICQ ⇒ CRCQ ⇒ CPLD ⇒ QNCQ

(and the converses are not true), although MFCQ is not equivalent to CRCQ.[10] In practice weaker constraint qualifications are preferred since they apply to a broader selection of problems.
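
LICQ, the most frequently used qualification, is also easy to test numerically: stack the gradients of the equality constraints and of the active inequality constraints as rows of a matrix and check that it has full row rank. A minimal sketch (Python with NumPy; the gradient vectors below are hypothetical):

```python
import numpy as np

def licq_holds(grad_h_list, grad_g_active_list):
    """Check LICQ at a point: the gradients of the equality constraints
    and of the *active* inequality constraints must be linearly
    independent, i.e. the stacked matrix has full row rank."""
    rows = grad_h_list + grad_g_active_list
    if not rows:
        return True                      # no active constraints: LICQ is vacuous
    G = np.vstack(rows)
    return np.linalg.matrix_rank(G) == G.shape[0]

# Hypothetical gradients at a candidate point x*:
grad_h = [np.array([1.0, 1.0, 0.0])]           # one equality constraint
grad_g_active = [np.array([0.0, 1.0, -1.0])]   # one active inequality constraint
print(licq_holds(grad_h, grad_g_active))        # True: rank 2 = 2 rows
```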

Sufficient conditions

In some cases, the necessary conditions are also sufficient for optimality. In general, however, the necessary conditions are not sufficient, and additional information is required, such as the second-order sufficient conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains the name.

The necessary conditions are sufficient for optimality if the objective function of a maximization problem is a concave function, the inequality constraints are continuously differentiable convex functions and the equality constraints are affine functions. Similarly, if the objective function of a minimization problem is a convex function, the necessary conditions are also sufficient for optimality.

It was shown by Martin in 1985 that the broader class of functions in which the KKT conditions guarantee global optimality is the class of so-called Type 1 invex functions.[11][12]

Second-order sufficient conditions

For smooth, non-linear optimization problems, a second order sufficient condition is given as follows.

The solution $(x^*, \mu^*, \lambda^*)$ found in the above section is a constrained local minimum if for the Lagrangian,

$$L(x, \mu, \lambda) = f(x) + \sum_{i=1}^{m} \mu_i g_i(x) + \sum_{j=1}^{\ell} \lambda_j h_j(x),$$

then,

$$s^\top \nabla^2_{xx} L(x^*, \mu^*, \lambda^*)\, s \geq 0,$$

where $s \neq 0$ is a vector satisfying the following,

$$\nabla g_i(x^*)^\top s = 0, \qquad \nabla h_j(x^*)^\top s = 0,$$

where only those active inequality constraints $g_i(x)$ corresponding to strict complementarity (i.e. where $\mu_i > 0$) are applied. The solution is a strict constrained local minimum in the case the inequality is also strict.
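
This test can be carried out numerically by restricting the Hessian of the Lagrangian to the subspace of vectors $s$ orthogonal to the relevant constraint gradients and checking definiteness there. A minimal sketch (Python with NumPy and SciPy; the Hessian and constraint gradient below are hypothetical):

```python
import numpy as np
from scipy.linalg import null_space

def sosc_holds(hess_L, active_grads, strict=True, tol=1e-9):
    """Test s^T (nabla^2_xx L) s > 0 for all s != 0 satisfying grad^T s = 0
    for every equality constraint and every strictly complementary active
    inequality constraint (stacked as the rows of active_grads)."""
    A = np.atleast_2d(np.asarray(active_grads))
    Z = null_space(A)                    # basis of {s : A s = 0}
    if Z.size == 0:
        return True                      # only s = 0 satisfies the constraints
    reduced = Z.T @ hess_L @ Z           # Hessian restricted to the subspace
    eigs = np.linalg.eigvalsh(reduced)
    return np.all(eigs > tol) if strict else np.all(eigs >= -tol)

# Hypothetical data: Lagrangian Hessian and one active constraint gradient.
H = np.array([[2.0, 0.0], [0.0, -1.0]])  # indefinite on all of R^2 ...
a = np.array([[0.0, 1.0]])               # ... but s must satisfy s2 = 0
print(sosc_holds(H, a))                  # True: reduced Hessian is [2] > 0
```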

If $s^\top \nabla^2_{xx} L(x^*, \mu^*, \lambda^*)\, s = 0$, the third-order Taylor expansion of the Lagrangian should be used to verify whether $x^*$ is a local minimum. The minimization of $f(x_1, x_2) = (x_2 - x_1^2)(x_2 - 3x_1^2)$ is a good counterexample; see also Peano surface.

Economics

Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example,[13] consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting $Q$ be the quantity of output produced (to be chosen), $R(Q)$ be sales revenue with a positive first derivative and with a zero value at zero output, $C(Q)$ be production costs with a positive first derivative and with a non-negative value at zero output, and $G_{\min}$ be the positive minimal acceptable level of profit, then the problem is a meaningful one if the revenue function levels off so it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is

Minimize $-R(Q)$
subject to $G_{\min} \leq R(Q) - C(Q)$ and $Q \geq 0$,

and the KKT conditions are

$$\frac{dR}{dQ}(1 + \mu) - \mu \frac{dC}{dQ} \leq 0, \qquad Q \geq 0, \qquad Q\left[\frac{dR}{dQ}(1 + \mu) - \mu \frac{dC}{dQ}\right] = 0,$$

$$R(Q) - C(Q) - G_{\min} \geq 0, \qquad \mu \geq 0, \qquad \mu\left[R(Q) - C(Q) - G_{\min}\right] = 0.$$

Since $Q = 0$ would violate the minimum profit constraint, we have $Q > 0$, and hence the third condition implies that the first condition holds with equality. Solving that equality gives

$$\frac{dR}{dQ} = \frac{\mu}{1 + \mu} \frac{dC}{dQ}.$$

Because it was given that $dR/dQ$ and $dC/dQ$ are strictly positive, this equation along with the non-negativity condition on $\mu$ guarantees that $\mu$ is positive, and so the revenue-maximizing firm operates at a level of output at which marginal revenue $dR/dQ$ is less than marginal cost $dC/dQ$, a result that is of interest because it contrasts with the behavior of a profit-maximizing firm, which operates at a level at which they are equal.
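
To make the result concrete, the following sketch (Python with NumPy/SciPy) assumes hypothetical functional forms that are not taken from the cited text: $R(Q) = 10Q - Q^2$, $C(Q) = Q^2 + Q$ and $G_{\min} = 10$. The profit constraint binds at the optimum, and marginal revenue indeed comes out below marginal cost (here $Q^* = 2.5$, $MR = 5 < MC = 6$, $\mu = 5$):

```python
from scipy.optimize import brentq

# Hypothetical functional forms (illustrative only, not from the cited text):
R  = lambda Q: 10*Q - Q**2        # revenue: R(0) = 0, R' > 0 for Q < 5
C  = lambda Q: Q**2 + Q           # cost: C' > 0
dR = lambda Q: 10 - 2*Q
dC = lambda Q: 2*Q + 1
G_min = 10.0                      # minimum acceptable profit

# At the optimum the profit constraint binds: R(Q) - C(Q) = G_min.
# Revenue is increasing here, so take the larger root of the binding equation.
profit_gap = lambda Q: R(Q) - C(Q) - G_min
Q_star = brentq(profit_gap, 2.25, 5.0)    # bracket around the larger root

# Recover mu from the solved stationarity equality dR/dQ = (mu/(1+mu)) dC/dQ.
mu = dR(Q_star) / (dC(Q_star) - dR(Q_star))
print(f"Q* = {Q_star:.3f}, MR = {dR(Q_star):.3f} < MC = {dC(Q_star):.3f}, mu = {mu:.3f}")
```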

Value function

If we reconsider the optimization problem as a maximization problem with constant inequality constraints:

$$\text{maximize } f(x) \quad \text{subject to} \quad g_i(x) \leq a_i, \; h_j(x) = 0.$$

The value function is defined as

$$V(a_1, \ldots, a_m) = \sup_{x \in X} \{\, f(x) : g_i(x) \leq a_i \ (i = 1, \ldots, m), \; h_j(x) = 0 \ (j = 1, \ldots, \ell) \,\},$$

so the domain of $V$ is $\{ a \in \mathbb{R}^m : \text{for some } x \in X, \; g_i(x) \leq a_i, \ i = 1, \ldots, m \}$.

Given this definition, each coefficient $\mu_i$ is the rate at which the value function increases as $a_i$ increases. Thus if each $a_i$ is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our function $f$. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.
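
The shadow-price interpretation is easy to check numerically. A minimal sketch (Python; the problem is a hypothetical one-variable example): for maximizing $f(x) = -(x - 2)^2$ subject to $x \leq a$ with $a < 2$, the constraint binds, so $x^* = a$, $V(a) = -(a - 2)^2$, the multiplier is $\mu = 2(2 - a)$, and a central finite difference of $V$ matches it:

```python
# Hypothetical problem: maximize f(x) = -(x - 2)^2  subject to  x <= a, a < 2.
# The constraint binds, so x* = a and V(a) = -(a - 2)^2.
V  = lambda a: -(a - 2.0)**2
mu = lambda a: 2.0 * (2.0 - a)          # KKT multiplier of x - a <= 0

a, h = 1.0, 1e-6
dV_da = (V(a + h) - V(a - h)) / (2*h)   # central finite-difference estimate
print(f"dV/da ~ {dV_da:.6f}, mu = {mu(a):.6f}")  # both ~ 2.0
```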

Generalizations

With an extra multiplier $\mu_0 \geq 0$, which may be zero (as long as $(\mu_0, \mu, \lambda) \neq 0$), in front of $\nabla f(x^*)$, the KKT stationarity conditions turn into

$$\mu_0 \nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*) = 0,$$

which are called the Fritz John conditions. This optimality condition holds without constraint qualifications, and it is equivalent to the optimality condition "KKT or (not-MFCQ)". For example, in minimizing $f(x) = x$ subject to $x^2 \leq 0$, the only feasible point is $x^* = 0$; no KKT multiplier exists there (stationarity would require $1 + \mu \cdot 0 = 0$), but the Fritz John conditions hold with $\mu_0 = 0$ and $\mu = 1$.

The KKT conditions belong to a wider class of the first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.

References

  1. Tabak, Daniel; Kuo, Benjamin C. (1971). Optimal Control by Mathematical Programming. Englewood Cliffs, NJ: Prentice-Hall. pp. 19–20. ISBN 0-13-638106-5.
  2. Kuhn, H. W.; Tucker, A. W. (1951). "Nonlinear programming". Proceedings of 2nd Berkeley Symposium. Berkeley: University of California Press. pp. 481–492. MR 0047303.
  3. W. Karush (1939). Minima of Functions of Several Variables with Inequalities as Side Constraints (M.Sc. thesis). Dept. of Mathematics, Univ. of Chicago, Chicago, Illinois.
  4. Kjeldsen, Tinne Hoff (2000). "A contextualized historical analysis of the Kuhn-Tucker theorem in nonlinear programming: the impact of World War II". Historia Math. 27 (4): 331–361. doi:10.1006/hmat.2000.2289. MR 1800317.
  5. Walsh, G. R. (1975). "Saddle-point Property of Lagrangian Function". Methods of Optimization. New York: John Wiley & Sons. pp. 39–44. ISBN 0-471-91922-5.
  6. Kemp, Murray C.; Kimura, Yoshio (1978). Introduction to Mathematical Economics. New York: Springer. pp. 38–44. ISBN 0-387-90304-6.
  7. Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. p. 244. ISBN 0-521-83378-7. MR 2061575.
  8. Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. ISBN 978-0691119151. MR 2199043.
  9. Dimitri Bertsekas (1999). Nonlinear Programming (2 ed.). Athena Scientific. pp. 329–330. ISBN 9781886529007.
  10. Rodrigo Eustaquio; Elizabeth Karas; Ademir Ribeiro. Constraint Qualification for Nonlinear Programming (PDF) (Technical report). Federal University of Parana.
  11. Martin, D. H. (1985). "The Essence of Invexity". J. Optim. Theory Appl. 47 (1): 65–76. doi:10.1007/BF00941316. S2CID 122906371.
  12. Hanson, M. A. (1999). "Invexity and the Kuhn-Tucker Theorem". J. Math. Anal. Appl. 236 (2): 594–604. doi:10.1006/jmaa.1999.6484.
  13. Chiang, Alpha C. Fundamental Methods of Mathematical Economics, 3rd edition, 1984, pp. 750–752.

Further reading

  • Andreani, R.; Martínez, J. M.; Schuverdt, M. L. (2005). "On the relation between constant positive linear dependence condition and quasinormality constraint qualification". Journal of Optimization Theory and Applications. 125 (2): 473–485. doi:10.1007/s10957-004-1861-9. S2CID 122212394.
  • Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover. ISBN 0-486-43227-0.
  • Boltyanski, V.; Martini, H.; Soltan, V. (1998). "The Kuhn–Tucker Theorem". Geometric Methods and Optimization Problems. New York: Springer. pp. 78–92. ISBN 0-7923-5454-0.
  • Boyd, S.; Vandenberghe, L. (2004). "Optimality Conditions" (PDF). Convex Optimization. Cambridge University Press. pp. 241–249. ISBN 0-521-83378-7.
  • Kemp, Murray C.; Kimura, Yoshio (1978). Introduction to Mathematical Economics. New York: Springer. pp. 38–73. ISBN 0-387-90304-6.
  • Rau, Nicholas (1981). "Lagrange Multipliers". Matrices and Mathematical Programming. London: Macmillan. pp. 156–174. ISBN 0-333-27768-6.
  • Nocedal, J.; Wright, S. J. (2006). Numerical Optimization. New York: Springer. ISBN 978-0-387-30303-1.
  • Sundaram, Rangarajan K. (1996). "Inequality Constraints and the Theorem of Kuhn and Tucker". A First Course in Optimization Theory. New York: Cambridge University Press. pp. 145–171. ISBN 0-521-49770-1.

