In 1994, Boyd, Laurent El Ghaoui, Eric Feron, and Ragu Balakrishnan authored the book Linear Matrix Inequalities in System and Control Theory. [15] Around 1999, he and Lieven Vandenberghe developed a PhD-level course and wrote the book Convex Optimization to introduce and apply convex optimization to other fields. [13]
Convex quadratically constrained quadratic programs can also be formulated as SOCPs by reformulating the objective function as a constraint. [4] Semidefinite programming subsumes SOCPs, since the SOCP constraints can be written as linear matrix inequalities (LMIs), so an SOCP can be reformulated as an instance of a semidefinite program. [4]
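As a hedged illustration (not taken from the cited sources), the sketch below uses the CVXPY modeling package, which performs this kind of conic reformulation internally; here the constraint is written directly in second-order cone form, and the data A, b, c are arbitrary placeholders.

```python
# Minimal SOCP sketch with CVXPY (assumed available); data are placeholders.
import numpy as np
import cvxpy as cp

np.random.seed(0)
A = np.random.randn(3, 2)
b = np.random.randn(3)
c = np.array([1.0, 2.0])

x = cp.Variable(2)
# Second-order cone constraint: ||A x + b||_2 <= c^T x + 5
constraints = [cp.norm(A @ x + b, 2) <= c @ x + 5]
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(prob.status, x.value)
```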
The following problem classes are all convex optimization problems, or can be reduced to convex optimization problems via simple transformations. [7]: chpt.4 [10]
[Figure: A hierarchy of convex optimization problems. LP: linear programming, QP: quadratic programming, SOCP: second-order cone programming, SDP: semidefinite programming, CP: conic optimization.]
According to Boyd and Vandenberghe, which is considered a standard reference, a convex optimization problem has three additional requirements compared to a general optimization problem, namely 1) the objective function must be convex (in the case of minimization), 2) the inequality constraint functions must be convex, and 3) the equality constraint functions must be affine.
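To make the three requirements concrete, here is a minimal sketch, assuming CVXPY is available and using made-up data, with a convex objective, a convex inequality constraint, and an affine equality constraint.

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [1.0, -1.0]])   # placeholder data
b = np.array([1.0, 0.0])

x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x))   # 1) convex objective
constraints = [
    cp.norm(x, 2) <= 2.0,                    # 2) convex inequality constraint
    A @ x == b,                              # 3) affine equality constraint
]
cp.Problem(objective, constraints).solve()
print(x.value)
```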
In convex optimization, a linear matrix inequality (LMI) is an expression of the form $A(y) := A_0 + y_1 A_1 + y_2 A_2 + \cdots + y_m A_m \succeq 0$, where $y = [y_i,\ i = 1, \dots, m]$ is a real vector, $A_0, A_1, \dots, A_m$ are symmetric matrices, and $\succeq$ is a generalized inequality meaning $A(y)$ is a positive semidefinite matrix belonging to the positive semidefinite cone $\mathbb{S}_+$ in the subspace of symmetric matrices $\mathbb{S}$.
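As an illustrative sketch (not part of the quoted definition), the following NumPy snippet checks whether $A(y)$ is positive semidefinite for a given $y$ by testing its smallest eigenvalue; the matrices are made-up symmetric placeholders.

```python
import numpy as np

def lmi_holds(y, A0, As, tol=1e-9):
    """Return True if A(y) = A0 + sum_i y[i] * As[i] is positive semidefinite."""
    Ay = A0 + sum(yi * Ai for yi, Ai in zip(y, As))
    return np.linalg.eigvalsh(Ay).min() >= -tol

# Placeholder symmetric matrices and a test point y.
A0 = np.eye(2)
As = [np.array([[0.0, 1.0], [1.0, 0.0]])]
print(lmi_holds([0.5], A0, As))   # True: eigenvalues of [[1, .5], [.5, 1]] are 0.5 and 1.5
```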
Young discovered the similarities between fast LP algorithms and Raghavan's method of pessimistic estimators for derandomization of randomized rounding algorithms; Klivans and Servedio linked boosting algorithms in learning theory to proofs of Yao's XOR Lemma; Garg and Khandekar defined a common framework for convex optimization problems that ...
Conic optimization is a subfield of convex optimization that studies problems consisting of minimizing a convex function over the intersection of an affine subspace and a convex cone. The class of conic optimization problems includes some of the best-known classes of convex optimization problems, namely linear and semidefinite programming.
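As a hedged example (not from the source), linear programming fits this conic template with the nonnegative orthant as the cone; the sketch below solves a tiny placeholder LP with SciPy's linprog, assuming SciPy is available.

```python
from scipy.optimize import linprog

# LP as a conic problem: minimize c^T x over {x : A_eq x = b_eq} intersected
# with the nonnegative orthant cone R^n_+.
c = [1.0, 2.0]            # placeholder objective
A_eq = [[1.0, 1.0]]
b_eq = [1.0]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.x)              # expected: [1, 0]
```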
In convex analysis, a non-negative function $f : \mathbb{R}^n \to \mathbb{R}_+$ is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality $f(\theta x + (1-\theta) y) \geq f(x)^{\theta} f(y)^{1-\theta}$ for all $x, y \in \operatorname{dom} f$ and $0 < \theta < 1$.
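As a hedged numerical illustration (not from the source), the sketch below spot-checks this inequality for the standard Gaussian density, a classic log-concave function, at randomly sampled points.

```python
import numpy as np

def gaussian(x):
    """Standard normal density, a well-known log-concave function."""
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)
theta = rng.uniform(0.01, 0.99, size=100)

lhs = gaussian(theta * x + (1 - theta) * y)
rhs = gaussian(x) ** theta * gaussian(y) ** (1 - theta)
print(np.all(lhs >= rhs - 1e-12))   # expected: True for a log-concave density
```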