enow.com Web Search

Search results

  1. Martingale (probability theory) - Wikipedia

    en.wikipedia.org/wiki/Martingale_(probability...

    A convex function of a martingale is a submartingale, by Jensen's inequality. For example, the square of the gambler's fortune in the fair coin game is a submartingale (which also follows from the fact that X_n^2 − n is a martingale). Similarly, a concave function of a martingale is a supermartingale.
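
    A quick numerical check of this (a minimal sketch, assuming the fair coin game means independent ±1 bets starting from fortune 0): a simulation shows E[X_n] staying near 0 while E[X_n^2] grows like n, consistent with X_n being a martingale and X_n^2 − n having constant mean.

        # Minimal sketch: estimate E[X_n] and E[X_n^2] - n by Monte Carlo for the
        # fair coin game, modelled here as independent +/-1 bets from fortune 0.
        import random

        def simulate(n_steps, n_paths=100_000):
            mean_x = [0.0] * (n_steps + 1)
            mean_x2 = [0.0] * (n_steps + 1)
            for _ in range(n_paths):
                x = 0
                for n in range(1, n_steps + 1):
                    x += random.choice((-1, 1))      # one fair bet
                    mean_x[n] += x
                    mean_x2[n] += x * x
            return ([m / n_paths for m in mean_x],
                    [m / n_paths for m in mean_x2])

        mx, mx2 = simulate(20)
        for n in (5, 10, 20):
            print(n, round(mx[n], 3), round(mx2[n] - n, 3))   # both near 0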

  2. Martingale difference sequence - Wikipedia

    en.wikipedia.org/wiki/Martingale_difference_sequence

    By construction, this implies that if Y_t is a martingale, then X_t = Y_t − Y_{t−1} will be an MDS, hence the name. The MDS is an extremely useful construct in modern probability theory because it implies much milder restrictions on the memory of the sequence than independence, yet most limit theorems that hold for an independent sequence will also hold for an MDS.
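
    A minimal sketch of the construction (assuming the fair ±1 coin game above as the underlying martingale Y_t): the differences X_t = Y_t − Y_{t−1} should have mean zero and be uncorrelated with the past, which a short simulation can confirm.

        # Minimal sketch: for the fair-coin martingale Y_t, the differences
        # X_t = Y_t - Y_{t-1} form an MDS, so their mean and their correlation
        # with the past (here, with Y_{t-1} itself) should both be near zero.
        import random

        n_paths, t = 200_000, 10
        sum_x, sum_x_past = 0.0, 0.0
        for _ in range(n_paths):
            y_prev = sum(random.choice((-1, 1)) for _ in range(t - 1))  # Y_{t-1}
            x = random.choice((-1, 1))                                  # X_t
            sum_x += x
            sum_x_past += x * y_prev
        print(sum_x / n_paths, sum_x_past / n_paths)   # both near 0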

  3. Stochastic process - Wikipedia

    en.wikipedia.org/wiki/Stochastic_process

    Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is ...
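
    A minimal sketch of the Markov property (using a hypothetical two-state chain, not an example from the article): estimating P(next = 0 | previous, current) from a long simulated run shows it depends only on the current state.

        # Minimal sketch: two-state Markov chain with hypothetical transition
        # probabilities. Conditioning additionally on the previous state does not
        # change the estimated transition probabilities (the Markov property).
        import random

        P = {0: 0.9, 1: 0.5}                 # P[state] = prob of moving to state 0

        def step(state):
            return 0 if random.random() < P[state] else 1

        counts = {}                          # (prev, cur) -> (visits, next==0 count)
        prev, cur = 0, 0
        for _ in range(500_000):
            nxt = step(cur)
            visits, zeros = counts.get((prev, cur), (0, 0))
            counts[(prev, cur)] = (visits + 1, zeros + (nxt == 0))
            prev, cur = cur, nxt

        for (p, c), (visits, zeros) in sorted(counts.items()):
            print(f"P(next=0 | prev={p}, cur={c}) ~ {zeros / visits:.3f}")
        # Estimates agree for equal `cur` (about 0.9 for cur=0, 0.5 for cur=1),
        # whatever `prev` was.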

  4. Stochastic game - Wikipedia

    en.wikipedia.org/wiki/Stochastic_game

    The ingredients of a stochastic game are: a finite set of players I; a state space S (either a finite set or a measurable space (S, 𝒮)); for each player i, an action set A^i (either a finite set or a measurable space (A^i, 𝒮^i)); a transition probability P from S × A, where A = A^1 × ⋯ × A^I is the set of action profiles, to S, where P(S′ | s, a) is the probability that the next state is in S′ given the current state s and the current action profile a; and ...
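
    A minimal sketch of these ingredients in the finite case (hypothetical names and a toy instance, not taken from the article):

        # Minimal sketch: the finite-case ingredients of a stochastic game bundled
        # into one (hypothetical) data structure, with a toy two-player instance.
        from dataclasses import dataclass

        @dataclass
        class StochasticGame:
            players: list      # finite set of players I
            states: list       # finite state space S
            actions: dict      # player -> finite action set A^i
            transition: dict   # (state, action profile) -> {next state: probability}
            payoff: dict       # (state, action profile) -> {player: payoff}

        game = StochasticGame(
            players=["P1", "P2"],
            states=["s0", "s1"],
            actions={"P1": ["a", "b"], "P2": ["c", "d"]},
            transition={("s0", ("a", "c")): {"s0": 0.3, "s1": 0.7}},   # toy, partial
            payoff={("s0", ("a", "c")): {"P1": 1.0, "P2": -1.0}},
        )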

  5. Optional stopping theorem - Wikipedia

    en.wikipedia.org/wiki/Optional_stopping_theorem

    If it is known that the expected time at which the walk ends is finite (say, from Markov chain theory), the optional stopping theorem predicts that the expected stopping position equals the initial position a. Solving a = p·m + (1 − p)·0 for the probability p that the walk reaches m before 0 gives p = a/m.
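
    A minimal simulation check of the p = a/m conclusion (assuming the fair ±1 walk started at a and stopped at 0 or m):

        # Minimal sketch: fair +/-1 walk started at a, stopped at 0 or m.
        # The frequency of reaching m first should be close to a/m.
        import random

        def hits_m_first(a, m):
            x = a
            while 0 < x < m:
                x += random.choice((-1, 1))
            return x == m

        a, m, trials = 3, 10, 200_000
        freq = sum(hits_m_first(a, m) for _ in range(trials)) / trials
        print(freq, a / m)    # both close to 0.3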

  6. Martingale representation theorem - Wikipedia

    en.wikipedia.org/wiki/Martingale_representation...

    The martingale representation theorem can be used to establish the existence of a hedging strategy. Suppose that (M_t)_{0≤t<∞} is a Q-martingale process, whose volatility σ_t is always non-zero.
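
    For context, the standard statement behind this (paraphrased, under the usual square-integrability and Brownian-filtration assumptions) is that such a martingale can be written as a stochastic integral against the driving Brownian motion B_t:

        M_t = M_0 + ∫_0^t φ_s dB_s   for some predictable process φ_s,

    and, roughly, the hedging strategy is read off by matching this integrand against the non-zero volatility σ_t of the traded asset.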

  7. Stopping time - Wikipedia

    en.wikipedia.org/wiki/Stopping_time

    Example of a stopping time: a hitting time of Brownian motion. The process starts at 0 and is stopped as soon as it hits 1. In probability theory, in particular in the study of stochastic processes, a stopping time (also Markov time, Markov moment, optional stopping time or optional time [1]) is a specific type of “random time”: a random variable whose value is interpreted as the time at ...
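
    A minimal sketch of the caption's example (a discretized approximation, with an arbitrarily chosen step size): simulate a Brownian-like path from 0 and record the first time it reaches 1; the decision to stop at time t uses only the path up to t, which is what makes it a stopping time.

        # Minimal sketch: approximate hitting time of level 1 for a Brownian-like
        # path started at 0, using Gaussian increments of variance dt.
        import random

        def hitting_time(level=1.0, dt=0.001, t_max=50.0):
            t, x = 0.0, 0.0
            while x < level and t < t_max:
                x += random.gauss(0.0, dt ** 0.5)   # increment ~ N(0, dt)
                t += dt
            return t if x >= level else None        # None: not hit before t_max

        print([hitting_time() for _ in range(10)])  # spread out, heavy-tailed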

  8. Concentration inequality - Wikipedia

    en.wikipedia.org/wiki/Concentration_inequality

    Markov's inequality. Let X be a random variable that is non-negative ... The random variable ... is a special case of a martingale, and ... Hence, the general form ...
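
    A minimal numerical check of Markov's inequality P(X ≥ a) ≤ E[X]/a (using a hypothetical non-negative distribution, here exponential with mean 1):

        # Minimal sketch: compare the empirical tail P(X >= a) with the Markov
        # bound E[X]/a for a non-negative sample.
        import random

        xs = [random.expovariate(1.0) for _ in range(200_000)]
        mean = sum(xs) / len(xs)
        for a in (1.0, 2.0, 4.0):
            tail = sum(x >= a for x in xs) / len(xs)
            print(f"P(X >= {a}) ~ {tail:.4f} <= {mean / a:.4f}")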