The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. [9] [10] A further generalization for a function between Banach spaces is the Fréchet derivative.
More generally, for a function of n variables $f(x_1, \ldots, x_n)$, also called a scalar field, the gradient is the vector field: $\nabla f = \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right) = \frac{\partial f}{\partial x_1}\mathbf{e}_1 + \cdots + \frac{\partial f}{\partial x_n}\mathbf{e}_n$, where $\mathbf{e}_i$ ($i = 1, 2, \ldots, n$) are mutually orthogonal unit vectors. As the name implies, the gradient is proportional to, and points in the direction of, the function's most rapid (positive) change.
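As a concrete illustration (a minimal sketch, not part of the quoted articles), the snippet below approximates each component $\partial f/\partial x_i$ of the gradient by central finite differences; the helper name `gradient_fd`, the example function f, and the step size h are arbitrary choices for this example.

```python
import numpy as np

def gradient_fd(f, x, h=1e-6):
    """Approximate the gradient of a scalar field f at the point x
    using central finite differences along each coordinate direction."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h                      # step along the i-th unit vector
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

# Example scalar field f(x, y) = x**2 * y + sin(y); its exact gradient is
# (2*x*y, x**2 + cos(y)).
f = lambda p: p[0]**2 * p[1] + np.sin(p[1])
print(gradient_fd(f, [1.0, 2.0]))     # ~ [4.0, 1.0 + cos(2.0)]
```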
When m = 1, that is, when $f : \mathbb{R}^n \to \mathbb{R}$ is a scalar-valued function, the Jacobian matrix reduces to a row vector; this row vector of all first-order partial derivatives of f is the transpose of the gradient of f, i.e. $\mathbf{J}_f = (\nabla f)^{\mathsf T}$.
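To make that relation concrete (again a sketch, with the helper `jacobian_fd` and the test function chosen for illustration), the code below builds a finite-difference Jacobian and shows that for a scalar-valued f it comes out as a 1 × n row vector, the transpose of the gradient:

```python
import numpy as np

def jacobian_fd(F, x, h=1e-6):
    """Approximate the Jacobian of a map F: R^n -> R^m at x via central
    finite differences; entry (i, j) approximates dF_i/dx_j."""
    x = np.asarray(x, dtype=float)
    F0 = np.atleast_1d(F(x))
    J = np.zeros((F0.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (np.atleast_1d(F(x + e)) - np.atleast_1d(F(x - e))) / (2 * h)
    return J

# Scalar-valued case (m = 1): the 1 x n Jacobian is the gradient transposed.
f = lambda p: p[0]**2 * p[1]
x = np.array([1.0, 2.0])
print(jacobian_fd(f, x))       # shape (1, 2), ~ [[4.0, 1.0]] == (grad f)^T
```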
The gradient theorem states that if the vector field F is the gradient of some scalar-valued function (i.e., if F is conservative), then F is a path-independent vector field (i.e., the integral of F over any piecewise-differentiable curve depends only on its endpoints). This theorem has a powerful converse: any path-independent vector field on an open, path-connected domain is the gradient of some scalar-valued function.
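The path-independence claim can be checked numerically. The sketch below (an illustration only; the potential f, the two curves, and the helper `line_integral_gradient` are chosen for the example) integrates $\nabla f$ along two different curves joining the same endpoints and compares the results with $f(b) - f(a)$:

```python
import numpy as np

def line_integral_gradient(grad_f, path, n=2000):
    """Approximate the line integral of the vector field grad_f along a
    parametrized curve path: [0, 1] -> R^2 using midpoint sums over segments."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([path(s) for s in t])
    mids = 0.5 * (pts[:-1] + pts[1:])       # segment midpoints
    dxs = np.diff(pts, axis=0)              # segment displacement vectors
    return sum(np.dot(grad_f(m), d) for m, d in zip(mids, dxs))

# Potential f(x, y) = x**2 * y, so F = grad f = (2*x*y, x**2).
f = lambda p: p[0]**2 * p[1]
grad_f = lambda p: np.array([2 * p[0] * p[1], p[0]**2])

a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
straight = lambda s: a + s * (b - a)            # straight segment from a to b
curved = lambda s: np.array([s, 2 * s**3])      # a different curve from a to b
print(line_integral_gradient(grad_f, straight))  # ~ f(b) - f(a) = 2
print(line_integral_gradient(grad_f, curved))    # ~ 2 as well: path-independent
```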
For example, in physics, the electric field is the negative vector gradient of the electric potential. The directional derivative of a scalar function f(x) of the space vector x in the direction of the unit vector u (represented in this case as a column vector) is defined using the gradient as follows: $\nabla_{\mathbf{u}} f(\mathbf{x}) = \nabla f(\mathbf{x})^{\mathsf T}\,\mathbf{u}$.
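A short sketch of that definition (the function, point, and direction are illustrative choices): compute $\nabla f(\mathbf{x}) \cdot \mathbf{u}$ with an analytic gradient and compare it with a one-dimensional finite difference of f along u.

```python
import numpy as np

# Directional derivative of f at x in the unit direction u:
# D_u f(x) = grad f(x) . u  (analytic gradient used here).
f = lambda p: p[0]**2 * p[1] + p[1]**2
grad_f = lambda p: np.array([2 * p[0] * p[1], p[0]**2 + 2 * p[1]])

x = np.array([1.0, 2.0])
u = np.array([3.0, 4.0]) / 5.0             # a unit vector

d_analytic = grad_f(x) @ u                 # grad f(x) . u
h = 1e-6
d_numeric = (f(x + h * u) - f(x - h * u)) / (2 * h)   # 1-D difference along u
print(d_analytic, d_numeric)               # both ~ 6.4
```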
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function can be a scalar or a vector (that is, the dimension of the domain can be 1 or greater than 1).
Let f(v) be a vector-valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the matrix collecting the first-order partial derivatives $\partial f_i / \partial v_j$; depending on the layout convention, this matrix (the Jacobian) or its transpose is what is called the gradient of a vector function.
the gradient reconstruction $\nabla_D : X_{D,0} \to L^2(\Omega)^d$ is a linear mapping which reconstructs, from an element of $X_{D,0}$, a "gradient" (vector-valued function) over $\Omega$. This gradient reconstruction must be chosen such that $\Vert \nabla_D \cdot \Vert_{L^2(\Omega)^d}$ is a norm on $X_{D,0}$.
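As a hedged one-dimensional illustration of the idea (the concrete scheme below is an assumption for this example, not the article's definition): take the discrete space to be nodal values on a uniform mesh of [0, 1] with zero boundary values, and reconstruct a piecewise-constant "gradient" by differencing neighbouring nodes. The resulting $L^2$ quantity vanishes only for the zero element of that space, so it is a norm there.

```python
import numpy as np

# 1-D sketch: discrete unknowns are interior nodal values u_1..u_{N-1} on a
# uniform mesh with u_0 = u_N = 0 (playing the role of X_{D,0}); the
# reconstructed gradient is piecewise constant, (u_{i+1} - u_i) / h per cell.
def reconstruct_gradient(u_interior, N):
    h = 1.0 / N
    u = np.concatenate(([0.0], u_interior, [0.0]))  # zero boundary values
    return np.diff(u) / h                           # one value per mesh cell

def discrete_norm(u_interior, N):
    """L^2(0,1) norm of the reconstructed gradient; it vanishes only when all
    interior values are zero, hence a norm on the discrete space."""
    h = 1.0 / N
    g = reconstruct_gradient(u_interior, N)
    return np.sqrt(np.sum(g**2) * h)

N = 8
u = np.sin(np.pi * np.arange(1, N) / N)   # nodal values of sin(pi x)
print(discrete_norm(u, N))                # ~ pi / sqrt(2), the norm of (sin(pi x))'
```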