Search results
Dimitri Panteli Bertsekas (Greek: Δημήτρης Παντελής Μπερτσεκάς; born 1942, Athens) is an applied mathematician, electrical engineer, and computer scientist, and a McAfee Professor in the Department of Electrical Engineering and Computer Science of the School of Engineering at the Massachusetts Institute of Technology (MIT).
The 1971 Ph.D. thesis by Dimitri P. Bertsekas (Proposition A.22) [3] proves a more general result, which does not require that φ(·, z) be differentiable (here φ(x, z) is the inner function and f(x) = max_{z ∈ Z} φ(x, z) is the resulting max-function). Instead it assumes that φ(·, z) is an extended real-valued closed proper convex function for each z in the compact set Z, that int(dom f), the interior of the effective domain of f, is nonempty, and that φ is continuous on the set int(dom f) × Z.
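The smooth special case behind this result (Danskin's theorem) is easy to illustrate numerically: for f(x) = max_{z ∈ Z} φ(x, z) with Z finite, the partial derivative of φ with respect to x at any maximizing z gives a (sub)gradient of f. The sketch below shows only that special case, not Proposition A.22 itself; the function phi and the discretized set Z are hypothetical choices.

import numpy as np

# Minimal sketch of the smooth Danskin case: a (sub)gradient of
# f(x) = max_{z in Z} phi(x, z) is d/dx phi(x, z*) at a maximizer z*.
# phi(x, z) = -(x - z)^2 and the grid for Z are illustrative choices.

Z = np.linspace(-1.0, 1.0, 201)          # finite surrogate for the compact set Z

def phi(x, z):
    return -(x - z) ** 2

def dphi_dx(x, z):
    return -2.0 * (x - z)

def f_and_subgradient(x):
    values = phi(x, Z)
    z_star = Z[np.argmax(values)]        # a maximizer over Z
    return values.max(), dphi_dx(x, z_star)

x = 1.7
val, g = f_and_subgradient(x)
# Finite-difference check that g matches the derivative of the max-function.
h = 1e-6
fd = (f_and_subgradient(x + h)[0] - f_and_subgradient(x - h)[0]) / (2 * h)
print(val, g, fd)                        # g and fd should agree closely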
To ensure that the global maximum of a non-linear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets. This is the significance of the Karush–Kuhn–Tucker (KKT) conditions: they provide necessary conditions for identifying local optima of non-linear programming problems.
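For reference, here is a standard statement of the KKT conditions for a problem written as minimize f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0; the sign conventions are one common textbook choice, not taken from the snippet above.

\[
\begin{aligned}
\nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) &= 0 && \text{(stationarity)}\\
g_i(x^*) \le 0, \qquad h_j(x^*) &= 0 && \text{(primal feasibility)}\\
\mu_i &\ge 0 && \text{(dual feasibility)}\\
\mu_i \, g_i(x^*) &= 0 && \text{(complementary slackness)}
\end{aligned}
\]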
Suppose we are given a linear programming problem, with ... Bertsekas, Dimitri P. (1999). Nonlinear Programming (2nd ed.). Athena Scientific.
Some special cases of nonlinear programming have specialized solution methods: if the objective function is concave (for a maximization problem) or convex (for a minimization problem) and the constraint set is convex, then the program is called convex, and general methods from convex optimization can be used in most cases.
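As a small illustration of that convex case, the sketch below minimizes a convex quadratic over an affine (hence convex) constraint with a general-purpose NLP routine. The particular objective, constraint, and use of scipy.optimize.minimize with SLSQP are illustrative choices, not something prescribed by the text above.

import numpy as np
from scipy.optimize import minimize

# Minimal sketch: convex quadratic objective over a convex (affine) feasible
# set, which general-purpose solvers handle reliably.

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2   # convex quadratic

constraints = [
    # 'ineq' constraints require fun(x) >= 0, so this encodes x0 + x1 <= 1.
    {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]},
]

result = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x, result.fun)   # expect roughly [1.0, -0.5] with objective near 0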
Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a global maximum or minimum over the domain of the choice variables and a global minimum (maximum) over the multipliers.
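A concrete equality-only example of the Lagrange-function idea (a standard textbook-style computation, not drawn from the sources cited here): maximize f(x, y) = xy subject to x + y = 1.

\[
L(x, y, \lambda) = xy + \lambda(1 - x - y), \qquad
\frac{\partial L}{\partial x} = y - \lambda = 0, \qquad
\frac{\partial L}{\partial y} = x - \lambda = 0, \qquad
x + y = 1,
\]

so x = y = λ = 1/2 and the constrained maximum is f = 1/4. The KKT system stated earlier reduces to exactly these equations when there are no inequality constraints.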
Bertsekas. "Details on Lagrange multipliers" (PDF). athenasc.com (slides / course lecture). Non-Linear Programming. — Course slides accompanying text on nonlinear optimization; Wyatt, John (7 April 2004) [19 November 2002]. "Legrange multipliers, constrained optimization, and the maximum entropy principle" (PDF). www-mtl.mit.edu.
Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective, but the augmented Lagrangian method adds yet another term designed to mimic a Lagrange multiplier.
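A minimal sketch of this idea for a single equality constraint c(x) = 0: each outer iteration minimizes the augmented Lagrangian f(x) + λ c(x) + (μ/2) c(x)^2 in x, then updates the multiplier estimate by λ ← λ + μ c(x). The toy problem, penalty value, and iteration count below are hypothetical choices, not a production implementation.

import numpy as np
from scipy.optimize import minimize

# Method-of-multipliers sketch for:  min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0.
# The known solution is x = [0.5, 0.5] with multiplier -1.

def f(x):
    return x[0] ** 2 + x[1] ** 2

def c(x):
    return x[0] + x[1] - 1.0

lam, mu = 0.0, 10.0           # multiplier estimate and penalty parameter
x = np.zeros(2)

for _ in range(10):
    # Inner step: unconstrained minimization of the augmented Lagrangian in x.
    aug = lambda x: f(x) + lam * c(x) + 0.5 * mu * c(x) ** 2
    x = minimize(aug, x, method="BFGS").x
    lam += mu * c(x)          # first-order multiplier update

print(x, lam, c(x))           # expect x ~ [0.5, 0.5], lam ~ -1, c(x) ~ 0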