enow.com Web Search

Search results

  1. Markov decision process - Wikipedia

    en.wikipedia.org/wiki/Markov_decision_process

    As in discrete-time Markov decision processes, in continuous-time Markov decision processes the agent aims to find an optimal policy that maximizes the expected cumulative reward. The only difference from the standard case is that, because the time variable is continuous, the sum is replaced by an integral:
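
    The snippet breaks off before the formula. One standard way to write the two objectives (notation assumed here, not quoted from the article), with discount factor 0 <= gamma < 1, is:

      discrete time:    \max_\pi \, \mathbb{E}_\pi \Big[ \sum_{t=0}^{\infty} \gamma^{t} \, r(s_t, a_t) \Big]
      continuous time:  \max_\pi \, \mathbb{E}_\pi \Big[ \int_{0}^{\infty} \gamma^{t} \, r(s(t), a(t)) \, dt \Big]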

  2. Stochastic game - Wikipedia

    en.wikipedia.org/wiki/Stochastic_game

    The ingredients of a stochastic game are: a finite set of players I; a state space S (either a finite set or a measurable space (S, 𝒮)); for each player i, an action set A^i (either a finite set or a measurable space (A^i, 𝒜^i)); a transition probability P from S × A, where A = ∏_i A^i is the set of action profiles, to S, where P(B | s, a) is the probability that the next state is in B given the current state s and the current action profile a; and ...
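
    As a concrete reading of these ingredients, here is a minimal container for a finite two-player stochastic game in Python; the field names and array layout are assumptions for illustration, not taken from the article.

      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class StochasticGame:
          n_states: int                 # finite state space S
          n_actions: tuple[int, int]    # action set sizes, one per player
          P: np.ndarray                 # P[s, a1, a2, s2]: transition probability
          g: np.ndarray                 # g[i, s, a1, a2]: payoff to player i

          def check(self) -> None:
              # each (state, action profile) row must be a probability distribution
              assert np.allclose(self.P.sum(axis=-1), 1.0)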

  3. Markov model - Wikipedia

    en.wikipedia.org/wiki/Markov_model

    A Markov decision process is a Markov chain in which state transitions depend on the current state and an action vector that is applied to the system. Typically, a Markov decision process is used to compute a policy of actions that will maximize some utility with respect to expected rewards.
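
    To make "compute a policy of actions that will maximize some utility" concrete, here is a minimal value-iteration sketch for a finite MDP; the array layout and discount factor are assumptions for illustration.

      import numpy as np

      def value_iteration(P, R, gamma=0.9, tol=1e-8):
          # P[s, a, s2]: transition probabilities; R[s, a]: expected rewards
          n_states, n_actions, _ = P.shape
          V = np.zeros(n_states)
          while True:
              Q = R + gamma * (P @ V)        # Q[s, a]: one-step lookahead values
              V_new = Q.max(axis=1)
              if np.max(np.abs(V_new - V)) < tol:
                  return Q.argmax(axis=1), V_new   # greedy policy, state values
              V = V_new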

  4. Multiscale decision-making - Wikipedia

    en.wikipedia.org/wiki/Multiscale_decision-making

    Multiscale decision-making, also referred to as multiscale decision theory (MSDT), is an approach in operations research that combines game theory, multi-agent influence diagrams (in particular, dependency graphs), and Markov decision processes to solve multiscale challenges [1] in sociotechnical systems. MSDT considers interdependencies within ...

  5. Markov reward model - Wikipedia

    en.wikipedia.org/wiki/Markov_reward_model

    In probability theory, a Markov reward model or Markov reward process is a stochastic process that extends either a discrete-time or a continuous-time Markov chain by adding a reward rate to each state. An additional variable records the reward accumulated up to the current time. [1]
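
    A minimal sketch of the discrete-time case, with an assumed two-state chain: a reward rate is attached to each state, and one extra variable accumulates reward over time, as the snippet describes.

      import numpy as np

      rng = np.random.default_rng(0)
      P = np.array([[0.9, 0.1],        # row-stochastic transition matrix
                    [0.5, 0.5]])
      reward = np.array([1.0, -2.0])   # reward rate attached to each state

      state, total = 0, 0.0            # `total` is the accumulated-reward variable
      for _ in range(1000):
          total += reward[state]
          state = rng.choice(2, p=P[state])
      print(total)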

  6. Partially observable Markov decision process - Wikipedia

    en.wikipedia.org/wiki/Partially_observable...

    A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state.
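
    Because the state is hidden, a POMDP agent typically maintains a belief (a probability distribution over states) and updates it by Bayes' rule after each action and observation. A sketch, with assumed model arrays P[a][s, s2] (transitions) and O[a][s2, o] (observation probabilities):

      import numpy as np

      def belief_update(b, a, o, P, O):
          predicted = b @ P[a]                # predict: sum_s b(s) * Pr(s2 | s, a)
          corrected = predicted * O[a][:, o]  # reweight by Pr(o | s2, a)
          return corrected / corrected.sum()  # renormalize to a distribution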

  7. Model-free (reinforcement learning) - Wikipedia

    en.wikipedia.org/wiki/Model-free_(reinforcement...

    In reinforcement learning (RL), a model-free algorithm is an algorithm that does not estimate the transition probability distribution (or the reward function) associated with the Markov decision process (MDP), [1] which, in RL, represents the problem to be solved. The transition probability distribution (or transition model) and the reward ...
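
    Tabular Q-learning is the textbook example of such a model-free algorithm: it improves action values directly from sampled transitions and never estimates P or R. A sketch, with an assumed toy five-state chain as the environment:

      import numpy as np

      rng = np.random.default_rng(0)
      n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps = 0.1, 0.95, 0.1

      def step(s, a):                             # toy environment, assumed
          s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
          return s2, (1.0 if s2 == n_states - 1 else 0.0)

      s = 0
      for _ in range(5000):
          a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
          s2, r = step(s, a)
          # model-free update: uses only the sampled (s, a, r, s2) transition
          Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
          s = 0 if s2 == n_states - 1 else s2     # restart after reaching the goal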

  8. Discrete-time Markov chain - Wikipedia

    en.wikipedia.org/wiki/Discrete-time_Markov_chain

    [Figure: a Markov chain with two states, A and E.] In probability, a discrete-time Markov chain (DTMC) is a sequence of random variables, known as a stochastic process, in which the value of the next variable depends only on the value of the current variable, and not on any variables in the past.
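
    A sketch of such a chain over the two states A and E (transition probabilities assumed for illustration): each step samples the next state from a distribution that depends only on the current state.

      import numpy as np

      states = ["A", "E"]
      P = np.array([[0.6, 0.4],     # from A: Pr(next=A), Pr(next=E)
                    [0.7, 0.3]])    # from E: Pr(next=A), Pr(next=E)

      rng = np.random.default_rng(1)
      s, path = 0, []
      for _ in range(10):
          path.append(states[s])
          s = int(rng.choice(2, p=P[s]))  # depends only on the current state
      print(" -> ".join(path))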