enow.com Web Search

Search results

  1. Linear recurrence with constant coefficients - Wikipedia

    en.wikipedia.org/wiki/Linear_recurrence_with...

    In mathematics (including combinatorics, linear algebra, and dynamical systems), a linear recurrence with constant coefficients [1][2] (also known as a linear recurrence relation or linear difference equation) sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence.
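
    As a concrete illustration of the definition, the sketch below iterates a sequence from its recurrence coefficients; the Fibonacci recurrence F(n) − F(n−1) − F(n−2) = 0 is used as the test case, and the function name is only illustrative.

    ```python
    # Minimal sketch: iterate x[n] = coeffs[0]*x[n-1] + ... + coeffs[k-1]*x[n-k],
    # i.e. a linear recurrence with constant coefficients written in solved form.
    def iterate_linear_recurrence(coeffs, initial, n_terms):
        seq = list(initial)
        while len(seq) < n_terms:
            seq.append(sum(c * seq[-i - 1] for i, c in enumerate(coeffs)))
        return seq

    # Fibonacci: F(n) - F(n-1) - F(n-2) = 0, i.e. F(n) = F(n-1) + F(n-2)
    print(iterate_linear_recurrence([1, 1], [0, 1], 10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
    ```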

  2. Matrix difference equation - Wikipedia

    en.wikipedia.org/wiki/Matrix_difference_equation

    A matrix difference equation is a difference equation in which the value of a vector (or sometimes, a matrix) of variables at one point in time is related to its own value at one or more previous points in time, using matrices. [1] [2] The order of the equation is the maximum time gap between any two indicated values of the variable vector. For ...
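
    A minimal NumPy sketch of the simplest (first-order) case x_t = A x_{t-1}; the matrix A and the initial vector are arbitrary illustrative values, not taken from the article.

    ```python
    import numpy as np

    # First-order matrix difference equation: x_t = A @ x_{t-1}.
    # (An equation such as x_t = A @ x_{t-1} + B @ x_{t-3} would have order 3,
    # the maximum time gap between the indicated values.)
    A = np.array([[0.9, 0.1],
                  [0.2, 0.7]])
    x = np.array([1.0, 0.0])

    trajectory = [x]
    for _ in range(5):
        x = A @ x              # the value at time t depends on the value at t-1 via A
        trajectory.append(x)

    print(np.stack(trajectory))
    ```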

  3. Numerical solution of the convection–diffusion equation

    en.wikipedia.org/wiki/Numerical_solution_of_the...

    In this method, the basic shape function is modified to obtain the upwinding effect. This method is an extension of the Runge–Kutta discontinuous Galerkin method to a convection–diffusion equation. For time-dependent equations, a different kind of approach is followed. The finite difference scheme has an equivalent in the finite element method (Galerkin method ...
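
    The snippet above concerns finite element upwinding; as a rough sketch of the same upwinding idea in its simplest finite difference form, the code below steps u_t + c u_x = D u_xx explicitly on a periodic grid, biasing the convective difference toward the upstream side. All parameters are illustrative.

    ```python
    import numpy as np

    # Explicit upwind/central scheme for u_t + c*u_x = D*u_xx on a periodic grid.
    # Upwinding: the convective term uses the one-sided difference biased toward
    # the side the flow comes from (here c > 0, so a backward difference).
    c, D = 1.0, 0.01                 # illustrative convection speed and diffusivity
    nx, dx, dt = 100, 0.01, 1e-4
    x = np.arange(nx) * dx
    u = np.exp(-((x - 0.5) ** 2) / 0.005)        # initial Gaussian bump

    for _ in range(1000):
        u_up = (u - np.roll(u, 1)) / dx          # backward (upwind) difference for u_x
        u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (-c * u_up + D * u_xx)

    print(float(u.max()))
    ```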

  4. Alternating-direction implicit method - Wikipedia

    en.wikipedia.org/wiki/Alternating-direction...

    In numerical linear algebra, the alternating-direction implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems theory and control, [1] and can be formulated to construct solutions in a memory-efficient, factored form.
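
    A minimal NumPy sketch of an ADI-style iteration for the Sylvester equation A X + X B = C with a single fixed shift; practical ADI solvers choose a sequence of shifts from the spectra of A and B, and the matrices and shift used here are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
    B = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
    C = rng.standard_normal((n, n))

    X = np.zeros((n, n))
    I = np.eye(n)
    p = q = 3.0                  # illustrative shift near the geometric mean of the spectra
    for _ in range(30):
        # "A direction" half step: (A + p I) X = C - X (B - p I)
        X = np.linalg.solve(A + p * I, C - X @ (B - p * I))
        # "B direction" half step: X (B + q I) = C - (A - q I) X
        X = np.linalg.solve((B + q * I).T, (C - (A - q * I) @ X).T).T

    print(np.linalg.norm(A @ X + X @ B - C))     # residual shrinks toward zero
    ```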

  5. Finite difference - Wikipedia

    en.wikipedia.org/wiki/Finite_difference

    In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for f′(x + h/2) and f′(x − h/2) and applying a central difference formula for the derivative of f′ at x, we obtain the central difference approximation of the second derivative of f:
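
    The snippet is cut off before the formula it introduces; the standard second-order central difference it refers to is

    $$
    f''(x) \approx \frac{f'\!\left(x + \tfrac{h}{2}\right) - f'\!\left(x - \tfrac{h}{2}\right)}{h}
           \approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^{2}} .
    $$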

  6. Difference Equations: From Rabbits to Chaos - Wikipedia

    en.wikipedia.org/wiki/Difference_Equations:_From...

    Other books on similar topics include A Treatise on the Calculus of Finite Differences by George Boole, Introduction to Difference Equations by S. Goldberg, [5] Difference Equations: An Introduction with Applications by W. G. Kelley and A. C. Peterson, An Introduction to Difference Equations by S. Elaydi, Theory of Difference Equations: An Introduction by V. Lakshmikantham and D. Trigiante ...

  7. Crank–Nicolson method - Wikipedia

    en.wikipedia.org/wiki/Crank–Nicolson_method

    The Crank–Nicolson stencil for a 1D problem. The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For linear equations, the trapezoidal rule is equivalent to the implicit midpoint method—the simplest example of a Gauss–Legendre implicit Runge–Kutta method—which also has the property of being a geometric integrator.
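
    A minimal Crank–Nicolson sketch for the 1D heat equation u_t = α u_xx with zero Dirichlet boundaries, showing the trapezoidal averaging of the explicit and implicit spatial operators; grid size, time step, and initial condition are illustrative.

    ```python
    import numpy as np

    # Crank–Nicolson for u_t = alpha * u_xx: (I - r/2 L) u^{n+1} = (I + r/2 L) u^n,
    # i.e. the trapezoidal rule applied to the semi-discrete heat equation.
    alpha, nx, dt = 1.0, 50, 0.001               # illustrative parameters
    dx = 1.0 / (nx + 1)                          # nx interior points on (0, 1)
    r = alpha * dt / dx**2

    L = (np.diag(-2.0 * np.ones(nx)) +
         np.diag(np.ones(nx - 1), 1) +
         np.diag(np.ones(nx - 1), -1))           # discrete Laplacian (zero Dirichlet BCs)

    I = np.eye(nx)
    A = I - 0.5 * r * L                          # implicit half of the trapezoidal rule
    B = I + 0.5 * r * L                          # explicit half

    x = np.linspace(dx, 1.0 - dx, nx)
    u = np.sin(np.pi * x)                        # initial condition

    for _ in range(200):
        u = np.linalg.solve(A, B @ u)            # one second-order time step

    print(float(u.max()))                        # decays roughly like exp(-pi**2 * alpha * t)
    ```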

  8. Finite-difference time-domain method - Wikipedia

    en.wikipedia.org/wiki/Finite-difference_time...

    The novelty of Kane Yee's FDTD scheme, presented in his seminal 1966 paper, [2] was to apply centered finite difference operators on staggered grids in space and time for each electric and magnetic vector field component in Maxwell's curl equations. The descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym were ...
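
    A rough 1D reduction of Yee's staggered-grid leapfrog idea (not the full 3D scheme of the 1966 paper): E is stored on integer grid points, H on the half-integer points between them, and each field is updated from the spatial difference of the other. Units are normalized and the source term is illustrative.

    ```python
    import numpy as np

    # 1D FDTD sketch in normalized units (c = 1, dz = dt = 1, the "magic" time step).
    # E lives on integer points, H between them; the two updates are half a time
    # step apart, so the scheme leapfrogs in both space and time.
    nz, nt = 200, 400
    E = np.zeros(nz)
    H = np.zeros(nz - 1)

    for n in range(nt):
        H -= E[1:] - E[:-1]                              # update H from the difference of E
        E[1:-1] -= H[1:] - H[:-1]                        # update E from the difference of H
        E[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)    # soft Gaussian source at the center

    print(float(np.abs(E).max()))
    ```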