In mathematics, logarithmic Sobolev inequalities are a class of inequalities involving the norm of a function f, its logarithm, and its gradient. These inequalities were discovered and named by Leonard Gross, who established them in dimension-independent form, [1] [2] in the context of constructive quantum field theory. Similar results were obtained independently by other mathematicians, and many variations on such inequalities are known.
The inequality expressing this fact has constants that do not involve the dimension of the space and, thus, the inequality holds in the setting of a Gaussian measure on an infinite-dimensional space. It is now known that logarithmic Sobolev inequalities hold for many different types of measures, not just Gaussian measures.
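For concreteness, here is the Gaussian form of the inequality as it is commonly stated (a standard formulation, supplied here rather than quoted from the excerpt above): for the standard Gaussian measure $\gamma$ on $\mathbb{R}^n$ and any sufficiently smooth $f$,

\[
\int_{\mathbb{R}^n} f^2 \log \frac{f^2}{\|f\|_{L^2(\gamma)}^2} \, d\gamma \;\le\; 2 \int_{\mathbb{R}^n} |\nabla f|^2 \, d\gamma .
\]

The constant 2 does not depend on $n$, which is exactly the dimension-independence described above and what allows the inequality to pass to infinite-dimensional Gaussian settings.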
Gross was one of the initiators of the study of logarithmic Sobolev inequalities, which he discovered in 1967 for his work in constructive quantum field theory and published later in two foundational papers [11] [12] in which he established these inequalities for the bosonic and fermionic cases. The inequalities were named by Gross, who established them in dimension-independent form, a key feature for applications to infinite-dimensional settings.
In mathematics, and in particular in mathematical analysis, the Gagliardo–Nirenberg interpolation inequality is a result in the theory of Sobolev spaces that relates the $L^p$-norms of different weak derivatives of a function through an interpolation inequality.
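The inequality in question, in its common form (a standard statement, supplied here for context): for $u : \mathbb{R}^n \to \mathbb{R}$ and integers $0 \le j < m$,

\[
\|D^j u\|_{L^p(\mathbb{R}^n)} \;\le\; C \,\|D^m u\|_{L^r(\mathbb{R}^n)}^{\alpha}\, \|u\|_{L^q(\mathbb{R}^n)}^{1-\alpha},
\]

where

\[
\frac{1}{p} = \frac{j}{n} + \alpha\left(\frac{1}{r} - \frac{m}{n}\right) + \frac{1-\alpha}{q},
\qquad \frac{j}{m} \le \alpha \le 1,
\]

with the usual exceptional case excluded (when $1 < r < \infty$ and $m - j - n/r$ is a nonnegative integer, $\alpha = 1$ is not allowed).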
For example, the approach based on "upper gradients" leads to the Newtonian–Sobolev space of functions. Thus, it makes sense to say that a space "supports a Poincaré inequality". It turns out that whether a space supports any Poincaré inequality, and, if so, the critical exponent for which it does, is tied closely to the geometry of the space.
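For orientation, the usual definition (standard in analysis on metric spaces, supplied here rather than taken from the excerpt): a metric measure space $(X, d, \mu)$ supports a $p$-Poincaré inequality if there exist constants $C > 0$ and $\lambda \ge 1$ such that for every ball $B$ of radius $r$, every integrable function $u$, and every upper gradient $g$ of $u$,

\[
\frac{1}{\mu(B)} \int_{B} |u - u_B| \, d\mu \;\le\; C\, r \left( \frac{1}{\mu(\lambda B)} \int_{\lambda B} g^{p} \, d\mu \right)^{1/p},
\qquad u_B := \frac{1}{\mu(B)} \int_B u \, d\mu .
\]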
where the equality on the left represents integration by parts, and the inequality to the right is a Sobolev inequality. In the latter, equality is attained for the function $\sin \pi x$, implying that the constant $-\pi^2$ is the best possible.
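The displayed chain that this sentence refers to is not part of the excerpt; a plausible reconstruction, for smooth $u$ with $u(0) = u(1) = 0$, is

\[
\int_0^1 u\, u'' \, dx \;=\; -\int_0^1 (u')^2 \, dx \;\le\; -\pi^2 \int_0^1 u^2 \, dx,
\]

where the first step is integration by parts (the boundary term vanishes) and the second uses $\int_0^1 (u')^2 \, dx \ge \pi^2 \int_0^1 u^2 \, dx$, a one-dimensional Poincaré/Wirtinger-type inequality with equality at $u(x) = \sin \pi x$.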
The log sum inequality can be used to prove inequalities in information theory. Gibbs' inequality states that the Kullback–Leibler divergence is non-negative, and equal to zero precisely when its two arguments are equal. [3] One proof uses the log sum inequality, as sketched below.
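A sketch of that proof (a standard argument, with notation assumed here): the log sum inequality says that for non-negative numbers $a_1,\dots,a_n$ and $b_1,\dots,b_n$,

\[
\sum_{i=1}^n a_i \log \frac{a_i}{b_i} \;\ge\; \Big(\sum_{i=1}^n a_i\Big) \log \frac{\sum_i a_i}{\sum_i b_i},
\]

with equality if and only if $a_i / b_i$ is constant. Taking $a_i = p_i$ and $b_i = q_i$ for probability distributions $p$ and $q$, so that $\sum_i p_i = \sum_i q_i = 1$, gives

\[
D_{\mathrm{KL}}(p \,\|\, q) = \sum_i p_i \log \frac{p_i}{q_i} \;\ge\; 1 \cdot \log \frac{1}{1} = 0,
\]

and the equality condition forces $p_i / q_i$ to be the constant 1, i.e. $p = q$.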
Bennett's inequality, an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount.
Bhatia–Davis inequality, an upper bound on the variance of any bounded probability distribution.
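For reference, the usual statements of these two bounds (standard forms, supplied here): if $X_1,\dots,X_n$ are independent with $\mathbb{E}[X_i] = 0$, $|X_i| \le a$ almost surely, and $\sigma^2 = \frac{1}{n}\sum_{i} \operatorname{Var}(X_i)$, then Bennett's inequality reads

\[
\mathbb{P}\!\left(\sum_{i=1}^n X_i > t\right) \;\le\; \exp\!\left(-\frac{n\sigma^2}{a^2}\, h\!\left(\frac{a t}{n \sigma^2}\right)\right),
\qquad h(u) = (1+u)\log(1+u) - u ;
\]

and the Bhatia–Davis inequality states that for a random variable $X$ with mean $\mu$ taking values in $[m, M]$,

\[
\operatorname{Var}(X) \;\le\; (M - \mu)(\mu - m).
\]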