enow.com Web Search

Search results

  1. Subgradient method - Wikipedia

    en.wikipedia.org/wiki/Subgradient_method

    When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent. Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions.
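
    As a rough illustration of the basic iteration x_{k+1} = x_k − α_k g_k (my sketch, not part of the article excerpt), the following Python uses f(x) = ‖x‖₁ as a nonsmooth objective and a diminishing step size:

    ```python
    import numpy as np

    def subgradient_method(f, subgrad, x0, steps=500):
        """Minimize a convex (possibly nondifferentiable) f by the basic
        subgradient iteration x_{k+1} = x_k - alpha_k * g_k with diminishing
        steps alpha_k = 1/(k+1). The best iterate is kept, since subgradient
        steps need not decrease f monotonically."""
        x = np.asarray(x0, dtype=float)
        best_x, best_f = x.copy(), f(x)
        for k in range(steps):
            g = subgrad(x)
            x = x - g / (k + 1)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        return best_x

    # Illustrative nonsmooth objective: f(x) = ||x||_1, with sign(x) as one valid subgradient.
    x_best = subgradient_method(lambda x: np.abs(x).sum(), np.sign, x0=[3.0, -2.0])
    ```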

  2. Subderivative - Wikipedia

    en.wikipedia.org/wiki/Subderivative

    Rigorously, a subderivative of a convex function f : I → ℝ at a point x₀ in the open interval I is a real number c such that f(x) − f(x₀) ≥ c(x − x₀) for all x in I. By the converse of the mean value theorem, the set of subderivatives at x₀ for a convex function is a nonempty closed interval [a, b], where a and b are the one-sided limits a = lim_{x→x₀⁻} (f(x) − f(x₀))/(x − x₀) and b = lim_{x→x₀⁺} (f(x) − f(x₀))/(x − x₀).
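
    A worked instance of this definition (illustrative, not quoted from the article): for f(x) = |x| the two one-sided limits at x₀ = 0 give the whole interval [−1, 1].

    ```latex
    \begin{gather*}
    a = \lim_{x \to 0^{-}} \frac{|x| - |0|}{x - 0} = -1, \qquad
    b = \lim_{x \to 0^{+}} \frac{|x| - |0|}{x - 0} = 1, \\
    \text{so the subderivatives of } f(x) = |x| \text{ at } x_0 = 0
    \text{ form the interval } [a, b] = [-1, 1].
    \end{gather*}
    ```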

  3. Convex function - Wikipedia

    en.wikipedia.org/wiki/Convex_function

    The function f(x) = x² has f″(x) = 2 > 0, so f is a convex function. It is also strongly convex (and hence strictly convex too), with strong convexity constant 2. The function f(x) = x⁴ has f″(x) = 12x² ≥ 0, so f is a convex function. It is strictly convex, even though the second derivative is not strictly positive at all points.
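
    These claims can be checked directly (my computation, using the usual convention that m-strong convexity means f(x) − (m/2)x² is convex):

    ```latex
    \begin{gather*}
    f(x) = x^2:\quad f''(x) = 2 > 0, \qquad
    f(x) - \tfrac{2}{2}x^2 \equiv 0 \text{ is convex, so } f \text{ is strongly convex with constant } 2; \\
    g(x) = x^4:\quad g''(x) = 12x^2 \ge 0 \text{ with } g''(0) = 0,
    \text{ yet } g \text{ is strictly convex}.
    \end{gather*}
    ```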

  4. Frank–Wolfe algorithm - Wikipedia

    en.wikipedia.org/wiki/Frank–Wolfe_algorithm

    The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, [1] reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. [2]
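
    A minimal sketch of the conditional gradient iteration (my example, assuming a quadratic objective over the probability simplex and the standard step size 2/(k+2), none of which come from the excerpt):

    ```python
    import numpy as np

    def frank_wolfe_simplex(grad, x0, steps=200):
        """Frank-Wolfe (conditional gradient) sketch over the probability simplex.
        The linear minimization oracle over the simplex returns the vertex e_i
        with the smallest gradient coordinate; the iterate then moves toward it."""
        x = np.asarray(x0, dtype=float)
        for k in range(steps):
            g = grad(x)
            s = np.zeros_like(x)
            s[np.argmin(g)] = 1.0            # vertex returned by the linear oracle
            gamma = 2.0 / (k + 2.0)          # standard diminishing step size
            x = (1 - gamma) * x + gamma * s  # convex combination, so x stays feasible
        return x

    # Illustrative objective: f(x) = ||x - b||^2 restricted to the simplex.
    b = np.array([0.2, 0.5, 0.3])
    x = frank_wolfe_simplex(lambda x: 2 * (x - b), x0=np.ones(3) / 3)
    ```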

  5. Moreau envelope - Wikipedia

    en.wikipedia.org/wiki/Moreau_envelope

    Since the subdifferential of a proper, convex, lower semicontinuous function on a Hilbert space is inverse to the subdifferential of its convex conjugate, we can conclude that the maximizer of the above expression determines the minimizer in the primal formulation, and vice versa.
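
    For context, these are the standard primal definitions of the Moreau envelope and the associated proximal operator on a Hilbert space H; the excerpt's "above expression" presumably refers to a dual reformulation of this minimization, which is not shown in the snippet:

    ```latex
    \begin{gather*}
    \operatorname{env}_{\gamma f}(x) = \inf_{v \in \mathcal{H}}
      \Bigl( f(v) + \tfrac{1}{2\gamma}\,\lVert v - x \rVert^{2} \Bigr), \qquad
    \operatorname{prox}_{\gamma f}(x) = \operatorname*{arg\,min}_{v \in \mathcal{H}}
      \Bigl( f(v) + \tfrac{1}{2\gamma}\,\lVert v - x \rVert^{2} \Bigr).
    \end{gather*}
    ```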

  6. Mirror descent - Wikipedia

    en.wikipedia.org/wiki/Mirror_descent

    We are also given a differentiable convex function h : ℝⁿ → ℝ, α-strongly convex with respect to the given norm. This is called the distance-generating function, and its gradient ∇h : ℝⁿ → ℝⁿ is known as the mirror map.
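
    A minimal sketch (my example): taking the negative entropy h(x) = Σᵢ xᵢ log xᵢ as the distance-generating function on the probability simplex, which is 1-strongly convex with respect to the ℓ₁ norm, the mirror descent update reduces to the exponentiated-gradient rule:

    ```python
    import numpy as np

    def mirror_descent_simplex(grad, x0, lr=0.1, steps=200):
        """Mirror descent sketch with the negative-entropy distance-generating
        function on the probability simplex. The mirror map is (up to a constant)
        log x, so the dual-space step becomes a multiplicative update, and the
        Bregman projection back onto the simplex is a simple normalization."""
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = x * np.exp(-lr * grad(x))   # step taken in the mirror (dual) space
            x = x / x.sum()                 # Bregman (KL) projection onto the simplex
        return x

    # Illustrative objective: linear cost <c, x> over the simplex.
    c = np.array([0.3, 0.1, 0.6])
    x = mirror_descent_simplex(lambda x: c, x0=np.ones(3) / 3)
    ```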

  7. Convex optimization - Wikipedia

    en.wikipedia.org/wiki/Convex_optimization

    Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, [1] whereas mathematical optimization is in general NP-hard. [2 ...
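
    The standard form of such a problem, stated here for context (not quoted from the excerpt):

    ```latex
    \[
    \begin{aligned}
    \text{minimize}\quad   & f_{0}(x) \\
    \text{subject to}\quad & f_{i}(x) \le 0, \quad i = 1, \dots, m, \\
                           & a_{j}^{\mathsf{T}} x = b_{j}, \quad j = 1, \dots, p,
    \end{aligned}
    \qquad \text{with } f_{0}, \dots, f_{m} \text{ convex}.
    \]
    ```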

  8. Clarke generalized derivative - Wikipedia

    en.wikipedia.org/wiki/Clarke_generalized_derivative

    Note that the Clarke generalized gradient is set-valued; that is, at each x, the function value ∂f(x) is a set. More generally, given a Banach space X and a subset Y ⊂ X, the Clarke generalized directional derivative and generalized gradients are defined as above for a locally Lipschitz continuous ...
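
    For context, these are the standard definitions being referred to, for a locally Lipschitz function f (stated here, not quoted from the excerpt):

    ```latex
    \begin{gather*}
    f^{\circ}(x; v) = \limsup_{y \to x,\; t \downarrow 0} \frac{f(y + t v) - f(y)}{t}, \qquad
    \partial f(x) = \bigl\{ \xi \in X^{*} : \langle \xi, v \rangle \le f^{\circ}(x; v)
      \ \text{for all } v \in X \bigr\}.
    \end{gather*}
    ```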