enow.com Web Search

Search results

  1. Mathematical optimization - Wikipedia

    en.wikipedia.org/wiki/Mathematical_optimization

    The function f is variously called an objective function, criterion function, loss function, cost function (minimization), [8] utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution.
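
    As a small illustration of these terms (a sketch with assumed choices, not code from the article), the snippet below treats f(x) = (x - 3)² as the objective function and uses SciPy's general-purpose minimizer to locate an optimal solution; the objective, the starting point x0, and the use of scipy.optimize.minimize are all assumptions made for the example.

    from scipy.optimize import minimize

    # Objective (cost) function to be minimized: f(x) = (x - 3)^2.
    def objective(x):
        return (x[0] - 3.0) ** 2

    # Any starting guess will do for this convex objective; x0 = 0 is assumed here.
    result = minimize(objective, x0=[0.0])
    print(result.x)    # optimal solution, approximately [3.]
    print(result.fun)  # objective value at the optimum, approximately 0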

  2. Optimization problem - Wikipedia

    en.wikipedia.org/wiki/Optimization_problem

    f : ℝⁿ → ℝ is the objective function to be minimized over the n-variable vector x; gᵢ(x) ≤ 0 are called inequality constraints; hⱼ(x) = 0 are called equality constraints; and m ≥ 0 and p ≥ 0. If m = p = 0, the problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem.
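
    To make the standard form concrete, the sketch below (an illustration with assumed data, not taken from the article) minimizes f(x) = x₁² + x₂² subject to one inequality constraint g₁(x) = 1 - x₁ - x₂ ≤ 0 and one equality constraint h₁(x) = x₁ - x₂ = 0, i.e. m = 1 and p = 1. SciPy's SLSQP solver expects inequality constraints in the form "fun(x) ≥ 0", so g₁ is passed with its sign flipped.

    from scipy.optimize import minimize

    # Objective: f(x) = x1**2 + x2**2 (assumed for illustration).
    def f(x):
        return x[0] ** 2 + x[1] ** 2

    constraints = [
        # Inequality constraint g1(x) = 1 - x1 - x2 <= 0,
        # rewritten as x1 + x2 - 1 >= 0 to match SciPy's convention.
        {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},
        # Equality constraint h1(x) = x1 - x2 = 0.
        {"type": "eq", "fun": lambda x: x[0] - x[1]},
    ]

    result = minimize(f, x0=[2.0, 0.0], method="SLSQP", constraints=constraints)
    print(result.x)  # approximately [0.5, 0.5]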

  3. Constraint (mathematics) - Wikipedia

    en.wikipedia.org/wiki/Constraint_(mathematics)

    In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint.

  4. Linear programming - Wikipedia

    en.wikipedia.org/wiki/Linear_programming

    For this feasibility problem with the zero function as its objective function, if there are two distinct solutions, then every convex combination of the solutions is a solution. The vertices of the polytope are also called basic feasible solutions. The reason for this choice of name is as follows. Let d denote the number of variables.
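
    As a sketch of such a feasibility problem (the particular constraints are assumptions made for this example, not taken from the article), the code below passes a zero objective to SciPy's linear-programming routine and then checks that a convex combination of two feasible points is again feasible.

    import numpy as np
    from scipy.optimize import linprog

    # Feasibility problem: zero objective, constraints x1 + x2 <= 4, x1 - x2 <= 2, x >= 0.
    c = np.zeros(2)                       # the zero function as objective
    A_ub = np.array([[1.0, 1.0],
                     [1.0, -1.0]])
    b_ub = np.array([4.0, 2.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.status == 0)                # True: a feasible point was found

    # Any convex combination of two feasible solutions is itself a solution.
    p, q = np.array([0.0, 0.0]), np.array([2.0, 2.0])
    blend = 0.5 * p + 0.5 * q
    print(bool(np.all(A_ub @ blend <= b_ub)))  # True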

  5. Loss function - Wikipedia

    en.wikipedia.org/wiki/Loss_function

    In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization, a problem that Ragnar Frisch highlighted in his Nobel Prize lecture. [4]

  6. Feasible region - Wikipedia

    en.wikipedia.org/wiki/Feasible_region

    Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because a problem with a convex objective function to be minimized over a convex feasible set is generally easier to solve, and any local optimum is also a global optimum.

  7. Non-linear least squares - Wikipedia

    en.wikipedia.org/wiki/Non-linear_least_squares

    Consider a set of m data points (x₁, y₁), (x₂, y₂), …, (xₘ, yₘ), and a curve (model function) ŷ = f(x, β), that in addition to the variable x also depends on n parameters β = (β₁, β₂, …, βₙ), with m ≥ n. It is desired to find the vector β of parameters such that the curve fits best the given data in the least squares sense, that is, the sum of squares S = ∑ rᵢ² is minimized, where the residuals (in-sample prediction errors) rᵢ are ...
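
    The sketch below (an illustration under assumed data and an assumed exponential model, not code from the article) fits ŷ = β₁·exp(β₂·x) to a handful of points by minimizing the sum of squared residuals with SciPy's least-squares routine.

    import numpy as np
    from scipy.optimize import least_squares

    # Assumed data points (x_i, y_i) and model y_hat = beta1 * exp(beta2 * x).
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.0, 4.4, 6.7, 9.8])

    def residuals(beta):
        # r_i = y_i - f(x_i, beta): the in-sample prediction errors.
        return y - beta[0] * np.exp(beta[1] * x)

    fit = least_squares(residuals, x0=[1.0, 0.1])
    print(fit.x)                 # fitted parameter vector beta
    print(np.sum(fit.fun ** 2))  # minimized sum of squares S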

  8. Graph of a function - Wikipedia

    en.wikipedia.org/wiki/Graph_of_a_function

    Given a function f : X → Y from a set X (the domain) to a set Y (the codomain), the graph of the function is the set [4] G(f) = {(x, f(x)) : x ∈ X}, which is a subset of the Cartesian product X × Y. In the definition of a function in terms of set theory, it is common to identify a function with its graph, although, formally, a function is formed by the triple consisting of its domain, its codomain and its graph.
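
    As a small worked example (an assumption-laden sketch, not from the article), the code below builds the graph of f(x) = x² over a finite domain as the set of pairs (x, f(x)) and checks that it is a subset of a Cartesian product containing it.

    # Graph of f(x) = x**2 over a small finite domain X.
    X = {-2, -1, 0, 1, 2}

    def f(x):
        return x * x

    graph = {(x, f(x)) for x in X}
    print(graph)  # the five pairs (x, x**2), in arbitrary set order

    # The graph is a subset of the Cartesian product X x Y.
    Y = {f(x) for x in X}
    cartesian_product = {(x, y) for x in X for y in Y}
    print(graph <= cartesian_product)  # True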