enow.com Web Search

Search results

  1. Markov decision process - Wikipedia

    en.wikipedia.org/wiki/Markov_decision_process

    The "Markov" in "Markov decision process" refers to the underlying structure of state transitions that still follow the Markov property. The process is called a "decision process" because it involves making decisions that influence these state transitions, extending the concept of a Markov chain into the realm of decision-making under uncertainty.

  2. Markov model - Wikipedia

    en.wikipedia.org/wiki/Markov_model

    A Markov decision process is a Markov chain in which state transitions depend on the current state and an action vector that is applied to the system. Typically, a Markov decision process is used to compute a policy of actions that will maximize some utility with respect to expected rewards.
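
    One common way to compute such a policy is value iteration; here is a self-contained, purely illustrative Python sketch (the toy transition and reward numbers are ours, not from the article):

    ```python
    # Toy MDP: P[s][a] is a list of (probability, next_state, reward) triples.
    # All numbers below are illustrative.
    P = {
        0: {"a": [(1.0, 1, 0.0)], "b": [(1.0, 0, 0.1)]},
        1: {"a": [(0.5, 0, 1.0), (0.5, 1, 0.0)], "b": [(1.0, 1, 0.2)]},
    }
    gamma = 0.9  # discount factor

    def q(s, a, V):
        """Expected immediate reward plus discounted value of taking a in s."""
        return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

    V = {s: 0.0 for s in P}
    for _ in range(200):  # repeated Bellman optimality updates
        V = {s: max(q(s, a, V) for a in P[s]) for s in P}

    # Greedy policy: in each state, pick the action with the highest Q-value.
    policy = {s: max(P[s], key=lambda a, s=s: q(s, a, V)) for s in P}
    print(V, policy)
    ```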

  3. Automated planning and scheduling - Wikipedia

    en.wikipedia.org/wiki/Automated_planning_and...

    Discrete-time Markov decision processes (MDPs) are planning problems with: durationless actions, nondeterministic actions with probabilities, full observability, maximization of a reward function, and a single agent. When full observability is replaced by partial observability, planning corresponds to a partially observable Markov decision process (POMDP).

  4. Hidden Markov model - Wikipedia

    en.wikipedia.org/wiki/Hidden_Markov_model

    [Figure 1: probabilistic parameters of a hidden Markov model (example). X: states; y: possible observations; a: state transition probabilities; b: output probabilities.] In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement (where each item from the urn is returned to the original urn before the next step). [7]
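
    A small, purely illustrative Python sketch of those parameters (the matrices below are hypothetical): the hidden state X evolves by the transition probabilities a, while only the emissions y, drawn from the output probabilities b, are observed:

    ```python
    import random

    states = ["X1", "X2"]                     # hidden states
    symbols = ["y1", "y2", "y3"]              # possible observations
    a = {"X1": [0.7, 0.3], "X2": [0.4, 0.6]}  # state transition probabilities
    b = {"X1": [0.5, 0.4, 0.1],               # output (emission) probabilities
         "X2": [0.1, 0.3, 0.6]}

    def generate(n, state="X1"):
        """Emit n observations; the X sequence itself stays hidden."""
        out = []
        for _ in range(n):
            out.append(random.choices(symbols, weights=b[state])[0])
            state = random.choices(states, weights=a[state])[0]
        return out

    print(generate(5))  # e.g. ['y1', 'y3', 'y2', 'y1', 'y3']
    ```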

  5. Category:Markov processes - Wikipedia

    en.wikipedia.org/wiki/Category:Markov_processes

    This category is for articles about the theory of Markov chains and processes, and associated processes. See Category:Markov models for models for specific applications that make use of Markov processes.

  6. Markovian arrival process - Wikipedia

    en.wikipedia.org/wiki/Markovian_arrival_process

    The Markov-modulated Poisson process (MMPP) is one in which m Poisson processes are switched between by an underlying continuous-time Markov chain. [8] If each of the m Poisson processes has rate λ_i and the modulating continuous-time Markov chain has an m × m transition rate matrix R, then the MAP representation is:
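
    The snippet is cut off at this point; reconstructing from the standard definition of an MMPP (so this completion is ours, not text quoted from the page):

    ```latex
    % D_1 collects the arrival rates; D_0 is what remains of the generator R.
    D_1 = \operatorname{diag}(\lambda_1, \ldots, \lambda_m), \qquad
    D_0 = R - D_1 .
    ```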

  7. Partially observable Markov decision process - Wikipedia

    en.wikipedia.org/wiki/Partially_observable...

    A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state.
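
    Because the state is hidden, a POMDP agent typically maintains a belief (a probability distribution over states) and updates it by Bayes' rule after each action and observation. A hypothetical Python sketch (the T and O tables below are illustrative, not from the article):

    ```python
    # T[s][a][s2]: transition probability; O[s2][a][o]: observation probability.
    # A hypothetical two-state example; all numbers are illustrative.
    T = {0: {"go": {0: 0.3, 1: 0.7}}, 1: {"go": {0: 0.1, 1: 0.9}}}
    O = {0: {"go": {"beep": 0.8, "buzz": 0.2}},
         1: {"go": {"beep": 0.1, "buzz": 0.9}}}

    def update_belief(b, a, o):
        """Bayes update: b'(s2) is proportional to O(o | s2, a) * sum_s T(s2 | s, a) * b(s)."""
        unnorm = {s2: O[s2][a][o] * sum(T[s][a][s2] * b[s] for s in b) for s2 in T}
        total = sum(unnorm.values())
        return {s2: p / total for s2, p in unnorm.items()}

    print(update_belief({0: 0.5, 1: 0.5}, "go", "beep"))  # belief shifts toward state 0
    ```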

  8. Subshift of finite type - Wikipedia

    en.wikipedia.org/wiki/Subshift_of_finite_type

    A common object of study is the Markov measure, which is an extension of a Markov chain to the topology of the shift. A Markov chain is a pair (P, π) consisting of the transition matrix, an n × n matrix P = (p_ij) for which all p_ij ≥ 0 and Σ_j p_ij = 1 for every i, together with a probability vector π satisfying πP = π.
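
    For concreteness, the extension the snippet alludes to assigns to each cylinder set a product of transition probabilities; this formula is our reconstruction of the standard Markov measure, not text quoted from the page:

    ```latex
    % Markov measure of the cylinder set fixing coordinates x_0, x_1, \ldots, x_k
    % (\pi is the stationary probability vector, P the transition matrix):
    \mu[x_0, x_1, \ldots, x_k] \;=\; \pi_{x_0}\, p_{x_0 x_1}\, p_{x_1 x_2} \cdots p_{x_{k-1} x_k} .
    ```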