enow.com Web Search

Search results

  1. Taylor series - Wikipedia

    en.wikipedia.org/wiki/Taylor_series

    The function e^(−1/x^2) is not analytic at x = 0: its Taylor series is identically 0, although the function is not. If f(x) is given by a convergent power series in an open disk centred at b in the complex plane (or an interval in the real line), it is said to be analytic in this region.
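
    For reference, the standard example behind this snippet can be written out explicitly (a worked restatement added here, not text from the linked article):

        f(x) = \begin{cases} e^{-1/x^2}, & x \neq 0 \\ 0, & x = 0 \end{cases},
        \qquad f^{(n)}(0) = 0 \ \text{for all } n \ge 0,
        \qquad \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n \equiv 0 \neq f(x) \ \text{for } x \neq 0.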

  2. Taylor's theorem - Wikipedia

    en.wikipedia.org/wiki/Taylor's_theorem

    for some polynomial p_k of degree 2(k − 1). The function tends to zero faster than any polynomial as x → 0, so f is infinitely differentiable and f^(k)(0) = 0 for every positive integer k. The above results all hold in this case:
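
    Concretely, for a function of the type the snippet refers to, e.g. f(x) = e^(−1/x^2) for x ≠ 0 and f(0) = 0, the derivatives away from the origin have the form sketched below (a standard computation, added for context):

        f^{(k)}(x) = \frac{p_k(x)}{x^{3k}}\, e^{-1/x^2} \quad (x \neq 0), \qquad \deg p_k = 2(k-1),

    and since e^{-1/x^2} vanishes faster than any power of x as x → 0, each f^{(k)}(x) → 0, which gives f^{(k)}(0) = 0 by induction.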

  3. Curvature invariant - Wikipedia

    en.wikipedia.org/wiki/Curvature_invariant

    In Riemannian geometry and pseudo-Riemannian geometry, curvature invariants are scalar quantities constructed from tensors that represent curvature. These tensors are usually the Riemann tensor, the Weyl tensor, the Ricci tensor and tensors formed from these by the operations of taking dual contractions and covariant differentiations.
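
    Two of the most commonly used curvature invariants, written out for reference (standard definitions, not text from the linked article):

        R = g^{ab} R_{ab} \quad \text{(Ricci scalar)}, \qquad
        K = R_{abcd} R^{abcd} \quad \text{(Kretschmann scalar)}.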

  4. Chebyshev polynomials - Wikipedia

    en.wikipedia.org/wiki/Chebyshev_polynomials

    For any given n ≥ 1, among the polynomials of degree n with leading coefficient 1 (monic polynomials), f(x) = T_n(x) / 2^(n−1) is the one whose maximal absolute value on the interval [−1, 1] is minimal. This maximal absolute value is 1/2^(n−1), and |f(x)| reaches this maximum exactly n + 1 times, at x = cos(kπ/n) for k = 0, …, n.
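
    A quick numerical check of the stated bound, as a Python sketch (NumPy assumed; the choice n = 5 is arbitrary):

        import numpy as np

        n = 5
        x = np.linspace(-1.0, 1.0, 200001)
        Tn = np.cos(n * np.arccos(x))              # T_n via T_n(cos t) = cos(n t)
        f = Tn / 2 ** (n - 1)                      # monic: the leading coefficient of T_n is 2^(n-1)
        print(np.abs(f).max(), 1 / 2 ** (n - 1))   # both ~ 0.0625 for n = 5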

  5. Tensor sketch - Wikipedia

    en.wikipedia.org/wiki/Tensor_sketch

    With this method, we only apply the general tensor sketch method to order 2 tensors, which avoids the exponential dependency in the number of rows. It can be proved [15] that combining c dimensionality reductions like this only increases ε by a factor of √c.
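
    To make the order-2 building block concrete, here is a small NumPy sketch of the usual count-sketch/FFT construction for sketching an outer product x ⊗ y (illustrative only; plain random arrays stand in for the hash families used in the analysis):

        import numpy as np

        rng = np.random.default_rng(0)

        def make_count_sketch(d, m):
            h = rng.integers(0, m, size=d)           # bucket assignment [d] -> [m]
            s = rng.choice([-1.0, 1.0], size=d)      # random signs
            def sketch(x):
                out = np.zeros(m)
                np.add.at(out, h, s * x)             # out[h[i]] += s[i] * x[i]
                return out
            return sketch

        def tensor_sketch(x, y, C1, C2):
            # The sketch of x ⊗ y is the circular convolution of the two count
            # sketches, computed as a pointwise product in Fourier space.
            return np.fft.ifft(np.fft.fft(C1(x)) * np.fft.fft(C2(y))).real

        d, m = 1000, 4096
        C1, C2 = make_count_sketch(d, m), make_count_sketch(d, m)
        x, y = rng.standard_normal(d), rng.standard_normal(d)
        approx = tensor_sketch(x, y, C1, C2) @ tensor_sketch(x, y, C1, C2)
        exact = (x @ x) * (y @ y)                    # <x⊗y, x⊗y> = <x,x> <y,y>
        print(approx, exact)                         # close up to sketching error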

  6. Horner's method - Wikipedia

    en.wikipedia.org/wiki/Horner's_method

    In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. [1]
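
    The scheme itself is just a fold over the coefficients; a minimal Python version (coefficients given in decreasing order of degree; the example values are arbitrary):

        def horner(coeffs, x):
            # Evaluate c[0]*x^n + c[1]*x^(n-1) + ... + c[n] using n multiplications.
            result = 0
            for c in coeffs:
                result = result * x + c
            return result

        print(horner([2, -6, 2, -1], 3))   # 2x^3 - 6x^2 + 2x - 1 at x = 3 -> 5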

  7. Finite strain theory - Wikipedia

    en.wikipedia.org/wiki/Finite_strain_theory

    The deformation gradient F, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e., F = RU = VR, where the tensor R is a proper orthogonal tensor, i.e., R^(−1) = R^T and det R = +1, and ...
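
    For a concrete feel for F = RU, here is a small NumPy sketch computing the right polar decomposition through the SVD (illustrative only; the matrix F below is made up, and det F is assumed positive so that R comes out proper orthogonal):

        import numpy as np

        def polar_decompose(F):
            # SVD F = W diag(S) Vt gives R = W Vt (proper orthogonal for det F > 0)
            # and U = V diag(S) Vt (symmetric positive definite), so F = R U.
            W, S, Vt = np.linalg.svd(F)
            R = W @ Vt
            U = Vt.T @ np.diag(S) @ Vt
            return R, U

        F = np.array([[1.2, 0.3, 0.0],
                      [0.1, 0.9, 0.2],
                      [0.0, 0.1, 1.1]])
        R, U = polar_decompose(F)
        print(np.allclose(F, R @ U), np.allclose(R.T @ R, np.eye(3)))   # True True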

  8. Polynomial evaluation - Wikipedia

    en.wikipedia.org/wiki/Polynomial_evaluation

    The polynomial given by Strassen has very large coefficients, but by probabilistic methods, one can show there must exist even polynomials with coefficients just 0's and 1's such that the evaluation requires at least Ω(√(n / log n)) multiplications. [10] For other simple polynomials, the complexity is unknown.