enow.com Web Search

Search results

  1. Markov decision process - Wikipedia

    en.wikipedia.org/wiki/Markov_decision_process

    The "Markov" in "Markov decision process" refers to the underlying structure of state transitions that still follow the Markov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty.

  2. Matrix analytic method - Wikipedia

    en.wikipedia.org/wiki/Matrix_analytic_method

    Such models are often described as M/G/1 type Markov chains because they can describe transitions in an M/G/1 queue.[3][4] The method is a more complicated version of the matrix geometric method and is the classical solution method for M/G/1 chains.
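
    The structure such methods exploit is a transition matrix whose blocks repeat along the diagonal and only step down one level at a time. The sketch below assembles a truncated example with invented scalar "blocks" for brevity; a real M/G/1-type model would use matrices derived from the queue being studied.

      import numpy as np

      A = [0.3, 0.4, 0.3]   # A_0, A_1, A_2: repeating blocks (scalars here for brevity)
      B = [0.6, 0.3, 0.1]   # B_0, B_1, B_2: boundary-level blocks
      levels = 6            # truncation level, for display only

      P = np.zeros((levels, levels))
      P[0, :len(B)] = B                      # boundary row: B_0 B_1 B_2 0 ...
      for i in range(1, levels):             # later rows repeat A_0 A_1 A_2, shifted right
          for k, a in enumerate(A):
              if i - 1 + k < levels:
                  P[i, i - 1 + k] = a
      print(P)                               # upper block-Hessenberg: the level drops by at most one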

  3. Markov chain - Wikipedia

    en.wikipedia.org/wiki/Markov_chain

    In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.
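
    A minimal simulation of the idea, using an invented two-state weather chain: at every step the next state is drawn from a distribution that depends only on the current state.

      import random

      states = ["sunny", "rainy"]
      P = {"sunny": {"sunny": 0.8, "rainy": 0.2},   # next-state distribution depends
           "rainy": {"sunny": 0.4, "rainy": 0.6}}   # only on the current state

      state = "sunny"
      path = [state]
      for _ in range(10):
          state = random.choices(states, weights=[P[state][s] for s in states])[0]
          path.append(state)
      print(" -> ".join(path))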

  4. Continuous-time Markov chain - Wikipedia

    en.wikipedia.org/wiki/Continuous-time_Markov_chain

    Another discrete-time process that may be derived from a continuous-time Markov chain is the δ-skeleton: the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
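
    In matrix terms, the δ-skeleton's one-step transition matrix is the matrix exponential exp(δQ) of the chain's generator. A small sketch with an invented two-state generator:

      import numpy as np
      from scipy.linalg import expm

      Q = np.array([[-1.0,  1.0],    # generator (rate) matrix of the continuous-time chain (invented)
                    [ 2.0, -2.0]])
      delta = 0.5                    # observation interval

      P_delta = expm(delta * Q)      # one-step transition matrix of the δ-skeleton
      print(P_delta)                 # rows sum to 1; X(0), X(δ), X(2δ), ... evolve by this matrix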

  5. Markov Chains and Mixing Times - Wikipedia

    en.wikipedia.org/wiki/Markov_Chains_and_Mixing_Times

    A family of Markov chains is said to be rapidly mixing if the mixing time is a polynomial function of some size parameter of the Markov chain, and slowly mixing otherwise. This book is about finite Markov chains, their stationary distributions and mixing times, and methods for determining whether Markov chains are rapidly or slowly mixing.[1][4]
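
    What the mixing time measures can be seen numerically: the total variation distance between the t-step distribution and the stationary distribution shrinks as the chain runs. The three-state transition matrix below is an invented example.

      import numpy as np

      P = np.array([[0.5, 0.25, 0.25],
                    [0.2, 0.6,  0.2 ],
                    [0.3, 0.3,  0.4 ]])

      w, v = np.linalg.eig(P.T)                     # stationary distribution: left eigenvector for eigenvalue 1
      pi = np.real(v[:, np.argmin(np.abs(w - 1))])
      pi /= pi.sum()

      mu = np.array([1.0, 0.0, 0.0])                # start in state 0
      for t in range(1, 11):
          mu = mu @ P
          print(t, 0.5 * np.abs(mu - pi).sum())     # total variation distance to stationarity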

  6. Uniformization (probability theory) - Wikipedia

    en.wikipedia.org/wiki/Uniformization...

    In probability theory, the uniformization method (also known as Jensen's method[1] or the randomization method[2]) is a method for computing transient solutions of finite-state continuous-time Markov chains by approximating the process with a discrete-time Markov chain.[2]
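
    A rough sketch of the computation, with an invented two-state generator: pick a uniformization rate γ at least as large as the fastest exit rate, form the discrete-time matrix P = I + Q/γ, and sum Poisson-weighted powers of P to approximate the transient distribution.

      import numpy as np

      Q = np.array([[-3.0,  3.0],          # generator of the continuous-time chain (invented)
                    [ 1.0, -1.0]])
      t = 0.7                              # time at which the transient distribution is wanted
      gamma = np.max(np.abs(np.diag(Q)))   # uniformization rate
      P = np.eye(2) + Q / gamma            # the approximating discrete-time chain

      pi0 = np.array([1.0, 0.0])
      pi_t = np.zeros(2)
      weight = np.exp(-gamma * t)          # Poisson(gamma*t) probability of 0 jumps
      term = pi0.copy()
      for n in range(60):                  # truncate the series once the weights are negligible
          pi_t += weight * term
          term = term @ P
          weight *= gamma * t / (n + 1)
      print(pi_t)                          # approximates the transient distribution at time t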

  7. Markov model - Wikipedia

    en.wikipedia.org/wiki/Markov_model

    The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property indicates that the distribution for this variable depends only on the distribution of the previous state.
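
    Stated with a transition matrix (invented here), this means the distribution at the next step is determined entirely by the current distribution:

      import numpy as np

      P = np.array([[0.9, 0.1],
                    [0.5, 0.5]])
      mu = np.array([1.0, 0.0])     # distribution of the state at time 0
      for t in range(5):
          mu = mu @ P               # next distribution depends only on the previous one
          print(t + 1, mu)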

  8. Markov reward model - Wikipedia

    en.wikipedia.org/wiki/Markov_reward_model

    In probability theory, a Markov reward model or Markov reward process is a stochastic process which extends either a Markov chain or continuous-time Markov chain by adding a reward rate to each state. An additional variable records the reward accumulated up to the current time.[1]
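
    A sketch of the discrete-time case with invented numbers: each state carries a reward, and the expected accumulated reward is built up by propagating the state distribution step by step.

      import numpy as np

      P = np.array([[0.7, 0.3],     # transition matrix (invented)
                    [0.4, 0.6]])
      r = np.array([2.0, 0.5])      # reward collected per step in each state (invented)
      mu = np.array([1.0, 0.0])     # initial distribution

      total = 0.0
      for _ in range(20):           # expected reward accumulated over 20 steps
          total += mu @ r
          mu = mu @ P
      print(total)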