Search results

  1. State-space representation - Wikipedia

    en.wikipedia.org/wiki/State-space_representation

    The state space or phase space is the geometric space in which the axes are the state variables. The system state can be represented as a vector, the state vector. If the dynamical system is linear, time-invariant, and finite-dimensional, then the differential and algebraic equations may be written in matrix form.
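
    The matrix form mentioned here can be sketched numerically; a minimal forward-Euler simulation is shown below, where the matrices A, B, C, D, the step size, and the initial state are illustrative assumptions rather than values from the article.

        import numpy as np

        # Illustrative LTI system: x_dot = A x + B u,  y = C x + D u
        A = np.array([[0.0, 1.0],
                      [-2.0, -0.5]])   # state matrix
        B = np.array([[0.0],
                      [1.0]])          # input matrix
        C = np.array([[1.0, 0.0]])     # output matrix
        D = np.array([[0.0]])          # feedthrough matrix

        def euler_step(x, u, dt=0.01):
            """Advance the state vector one step with forward Euler (a rough sketch)."""
            return x + dt * (A @ x + B @ u)

        x = np.array([[1.0], [0.0]])   # initial state vector
        u = np.array([[0.0]])          # zero input
        for _ in range(100):
            x = euler_step(x, u)
        y = C @ x + D @ u              # output equation evaluated at the final step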

  2. Examples of Markov chains - Wikipedia

    en.wikipedia.org/wiki/Examples_of_Markov_chains

    To see the difference, consider the probability for a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state, and the next roll of the dice. It does not depend on how things got to their current state.
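
    As a toy illustration of that property, a hypothetical board game can be stepped with a function that looks only at the current square and a fresh roll; the 10-square circular board below is invented for the sketch and is not the game from the article.

        import random

        BOARD_SIZE = 10  # hypothetical circular board

        def next_square(current_square):
            """The next state depends only on the current square and the next
            roll of the dice, not on how the piece got there (Markov property)."""
            roll = random.randint(1, 6)
            return (current_square + roll) % BOARD_SIZE

        square = 0
        for _ in range(20):
            square = next_square(square)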

  3. Markov chain - Wikipedia

    en.wikipedia.org/wiki/Markov_chain

    The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and ...
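
    A minimal sketch of those three ingredients, assuming a made-up two-state weather chain (the states and probabilities below are not from the article):

        import numpy as np

        states = ["sunny", "rainy"]              # state space
        P = np.array([[0.9, 0.1],                # transition matrix: row i holds the
                      [0.5, 0.5]])               # next-state probabilities from state i
        initial = np.array([1.0, 0.0])           # initial distribution over the state space

        rng = np.random.default_rng(0)
        i = rng.choice(len(states), p=initial)   # draw the initial state
        trajectory = [states[i]]
        for _ in range(10):
            i = rng.choice(len(states), p=P[i])  # a next state always exists, by construction
            trajectory.append(states[i])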

  4. Dynamical system - Wikipedia

    en.wikipedia.org/wiki/Dynamical_system

    At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state.
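
    In discrete time the evolution rule is simply a function from the current state to the next one; the logistic map below is a standard textbook example, used here only as an illustration.

        def evolve(x, r=3.7):
            """Evolution rule of the logistic map: the future state is a
            function of the current state alone."""
            return r * x * (1.0 - x)

        state = 0.2            # a point in the state space [0, 1]
        orbit = [state]
        for _ in range(50):
            state = evolve(state)
            orbit.append(state)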

  5. Markov decision process - Wikipedia

    en.wikipedia.org/wiki/Markov_decision_process

    Example of a simple MDP with three states (green circles) and two actions (orange circles), with two rewards (orange arrows). A Markov decision process is a 4-tuple (S, A, P_a, R_a), where: S is a set of states called the state space. The state space may be discrete or continuous, like the set of real numbers.
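
    One way to render that 4-tuple (S, A, P_a, R_a) in code is sketched below; the three states, two actions, probabilities, and rewards echo the layout of the figure described above but are invented for the example.

        # Hypothetical MDP in the form of the 4-tuple (S, A, P_a, R_a).
        S = ["s0", "s1", "s2"]   # state space (discrete here; it may also be continuous)
        A = ["a0", "a1"]         # action set

        # P[(s, a)] maps each reachable next state to its transition probability.
        P = {
            ("s0", "a0"): {"s0": 0.5, "s2": 0.5},
            ("s0", "a1"): {"s2": 1.0},
            ("s1", "a0"): {"s0": 0.7, "s1": 0.1, "s2": 0.2},
            ("s1", "a1"): {"s1": 0.95, "s2": 0.05},
            ("s2", "a0"): {"s0": 0.4, "s2": 0.6},
            ("s2", "a1"): {"s0": 0.3, "s1": 0.3, "s2": 0.4},
        }

        # R[(s, a)] is the immediate reward for taking action a in state s.
        R = {(s, a): 0.0 for s in S for a in A}
        R[("s1", "a0")] = 5.0    # two nonzero rewards, mirroring the caption
        R[("s2", "a1")] = -1.0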

  6. State space (computer science) - Wikipedia

    en.wikipedia.org/wiki/State_space_(computer_science)

    If the size of the state space is finite, calculating the size of the state space is a combinatorial problem. [4] For example, in the Eight queens puzzle, the state space can be calculated by counting all possible ways to place 8 pieces on an 8x8 chessboard. This is the same as choosing 8 positions without replacement from a set of 64, or "64 choose 8".
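
    That count, "64 choose 8", can be checked directly:

        from math import comb

        # Number of ways to choose 8 of the 64 squares for the pieces.
        eight_queens_states = comb(64, 8)
        print(eight_queens_states)   # 4426165368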

  7. Full state feedback - Wikipedia

    en.wikipedia.org/wiki/Full_state_feedback

    System in open-loop. If the closed-loop dynamics can be represented by the state space equation (see State space (controls)) ẋ = Ax + Bu, with output equation y = Cx + Du, then the poles of the system transfer function are the roots of the characteristic equation given by |sI - A| = 0.
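
    Since the roots of det(sI - A) = 0 are exactly the eigenvalues of A, the poles can be sketched numerically; the matrix A below is an arbitrary illustration, not taken from the article.

        import numpy as np

        # Poles of x_dot = A x + B u, y = C x + D u are the roots of
        # det(sI - A) = 0, i.e. the eigenvalues of A.
        A = np.array([[0.0, 1.0],
                      [-6.0, -5.0]])

        poles = np.linalg.eigvals(A)
        print(poles)   # roots of s^2 + 5s + 6 = (s + 2)(s + 3), i.e. -2 and -3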

  8. Phase space - Wikipedia

    en.wikipedia.org/wiki/Phase_space

    The phase space of a physical system is the set of all possible physical states of the system when described by a given parameterization. Each possible state corresponds uniquely to a point in the phase space. For mechanical systems, the phase space usually consists of all possible values of the position and momentum parameters.
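
    For a one-dimensional mechanical system the phase-space point is just a (position, momentum) pair; the harmonic oscillator below, with unit mass and stiffness and a symplectic-Euler update, is the usual illustration (all parameters are assumptions for the sketch).

        m, k = 1.0, 1.0              # assumed mass and spring constant

        def step(q, p, dt=0.01):
            """Advance a phase-space point (q, p) of a harmonic oscillator
            one symplectic-Euler step."""
            p = p - dt * k * q       # dp/dt = -dH/dq
            q = q + dt * p / m       # dq/dt =  dH/dp
            return q, p

        q, p = 1.0, 0.0              # a single point in phase space
        trajectory = [(q, p)]
        for _ in range(1000):
            q, p = step(q, p)
            trajectory.append((q, p))   # traces out (approximately) an ellipse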