enow.com Web Search

Search results

  1. Kolmogorov–Smirnov test - Wikipedia

    en.wikipedia.org/wiki/KolmogorovSmirnov_test

    Illustration of the Kolmogorov–Smirnov statistic: the red line is a model CDF, the blue line is an empirical CDF, and the black arrow is the K–S statistic. In statistics, the Kolmogorov–Smirnov test (also K–S test or KS test) is a nonparametric test of the equality of continuous (or discontinuous, see Section 2.2), one-dimensional probability distributions.
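    As a quick illustration of the statistic this result describes, here is a minimal Python sketch (not from the article; the normal sample and seed are invented) that computes the one-sample K–S distance against a standard normal model CDF and checks it against scipy.stats.kstest.

    ```python
    # Minimal sketch of the one-sample K-S statistic D_n = sup_x |F_n(x) - F(x)|,
    # computed against a standard normal model CDF (sample below is made up).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=1.0, size=200)    # hypothetical sample

    xs = np.sort(x)
    n = xs.size
    F = stats.norm.cdf(xs)                           # model CDF at the sorted points
    d_plus = np.max(np.arange(1, n + 1) / n - F)     # ECDF steps up just after each point
    d_minus = np.max(F - np.arange(0, n) / n)
    D = max(d_plus, d_minus)

    # SciPy's built-in test should agree on the statistic and also give a p-value.
    res = stats.kstest(x, "norm")
    print(D, res.statistic, res.pvalue)
    ```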

  2. Two-sample hypothesis testing - Wikipedia

    en.wikipedia.org/wiki/Two-sample_hypothesis_testing

    In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant.
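    A hedged sketch of such a two-sample comparison, here using SciPy's two-sample Kolmogorov–Smirnov test (the samples and their shift are made up for illustration):

    ```python
    # Two-sample test on two independently drawn samples; a small p-value
    # suggests the difference between the populations is statistically significant.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample_a = rng.normal(0.0, 1.0, size=150)   # drawn from population A
    sample_b = rng.normal(0.3, 1.0, size=180)   # drawn from population B

    stat, p_value = stats.ks_2samp(sample_a, sample_b)
    print(stat, p_value)
    ```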

  3. Lilliefors test - Wikipedia

    en.wikipedia.org/wiki/Lilliefors_test

    The Lilliefors test is a normality test based on the Kolmogorov–Smirnov test. It is used to test the null hypothesis that data come from a normally distributed population when the null hypothesis does not specify which normal distribution, i.e., it does not specify the expected value and variance of the distribution.[1]
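    A minimal sketch of the test in practice, assuming the statsmodels package and its lilliefors function are available (the sample below is invented):

    ```python
    # Lilliefors idea: the normal mean and variance are estimated from the data,
    # so the plain K-S reference distribution would be too lenient.
    import numpy as np
    from statsmodels.stats.diagnostic import lilliefors

    rng = np.random.default_rng(2)
    x = rng.normal(5.0, 2.0, size=100)           # mean and variance unknown to the test

    stat, p_value = lilliefors(x, dist="norm")   # estimates mean/variance internally
    print(stat, p_value)
    ```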

  4. Normality test - Wikipedia

    en.wikipedia.org/wiki/Normality_test

    Kolmogorov–Smirnov test: this test only works if the mean and the variance of the normal distribution are assumed known under the null hypothesis; Lilliefors test: based on the Kolmogorov–Smirnov test, adjusted for when the mean and variance are also estimated from the data; Shapiro–Wilk test; and Pearson's chi-squared test.
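    The following sketch contrasts two of these options (sample and parameters are invented): a K–S test whose null hypothesis fully specifies the mean and variance, versus a Shapiro–Wilk test that leaves them unspecified.

    ```python
    # K-S requires the null to fix mean and variance; Shapiro-Wilk tests
    # normality with the parameters left unspecified.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.normal(10.0, 3.0, size=250)

    # K-S against the fully specified N(10, 3^2) null:
    ks = stats.kstest(x, "norm", args=(10.0, 3.0))

    # Shapiro-Wilk: "is the data normal?" with mean/variance unspecified.
    sw = stats.shapiro(x)

    print(ks.statistic, ks.pvalue)
    print(sw.statistic, sw.pvalue)
    ```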

  5. Probability integral transform - Wikipedia

    en.wikipedia.org/wiki/Probability_integral_transform

    Specifically, the probability integral transform is applied to construct an equivalent set of values, and a test is then made of whether a uniform distribution is appropriate for the constructed dataset. Examples of this are P–P plots and Kolmogorov–Smirnov tests.
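    A small sketch of the idea, assuming the model being checked is an exponential with a made-up rate: the model CDF is applied to the data, and the transformed values are tested for uniformity with a K–S test.

    ```python
    # Probability integral transform: if X ~ Exp(lambda), then F(X) should be
    # Uniform(0, 1), which a K-S test against the uniform can check.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    lam = 1.5
    x = rng.exponential(scale=1.0 / lam, size=300)

    u = stats.expon.cdf(x, scale=1.0 / lam)     # probability integral transform
    res = stats.kstest(u, "uniform")            # test the transformed values for uniformity
    print(res.statistic, res.pvalue)
    ```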

  6. Minimum-distance estimation - Wikipedia

    en.wikipedia.org/wiki/Minimum-distance_estimation

    Most theoretical studies of minimum-distance estimation, and most applications, make use of "distance" measures which underlie already-established goodness of fit tests: the test statistic used in one of these tests is used as the distance measure to be minimised. Below are some examples of statistical tests that have been used for minimum ...
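    One hedged example of the approach, using the K–S statistic as the distance to be minimised over an exponential scale parameter (the sample, the search bracket, and the choice of K–S as the distance are assumptions for illustration):

    ```python
    # Minimum-distance estimation sketch: choose the exponential scale that
    # minimises the K-S distance between the model CDF and the empirical CDF.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(5)
    x = rng.exponential(scale=2.0, size=400)

    def ks_distance(scale):
        # One-sample K-S statistic of the data against Exp(scale), used as the distance.
        return stats.kstest(x, "expon", args=(0.0, scale)).statistic

    result = optimize.minimize_scalar(ks_distance, bounds=(0.1, 10.0), method="bounded")
    print(result.x)   # minimum-distance estimate of the scale parameter
    ```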

  7. Goodness of fit - Wikipedia

    en.wikipedia.org/wiki/Goodness_of_fit

    N = the sample size. The resulting value can be compared with a chi-square distribution to determine the goodness of fit. The chi-square distribution has (k − c) degrees of freedom, where k is the number of non-empty bins and c is the number of estimated parameters (including location and scale parameters and shape parameters) for the ...
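    A worked instance of the computation with invented die-roll counts; since no parameters are estimated in this example, the statistic is referred to a chi-square distribution with k − 1 degrees of freedom.

    ```python
    # Pearson's chi-square goodness-of-fit statistic for k = 6 bins (counts invented).
    import numpy as np
    from scipy import stats

    observed = np.array([18, 22, 16, 25, 20, 19])         # counts in k = 6 bins
    expected = np.full(6, observed.sum() / 6)              # fair-die model

    chi2 = np.sum((observed - expected) ** 2 / expected)   # Pearson's statistic
    p_value = stats.chi2.sf(chi2, df=len(observed) - 1)    # k - 1 dof here

    # scipy.stats.chisquare does the same; ddof would subtract any estimated parameters.
    print(chi2, p_value, stats.chisquare(observed, expected))
    ```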

  8. Confidence and prediction bands - Wikipedia

    en.wikipedia.org/wiki/Confidence_and_prediction...

    Confidence bands can be constructed around estimates of the empirical distribution function. Simple theory allows the construction of point-wise confidence intervals, but it is also possible to construct a simultaneous confidence band for the cumulative distribution function as a whole by inverting the Kolmogorov–Smirnov test, or by using non-parametric likelihood methods.
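    A simple sketch of such a simultaneous band (sample invented), using the Dvoretzky–Kiefer–Wolfowitz bound as a stand-in for inverting the Kolmogorov–Smirnov test:

    ```python
    # Simultaneous confidence band around the empirical CDF via the DKW inequality.
    import numpy as np

    rng = np.random.default_rng(6)
    x = np.sort(rng.normal(size=100))
    n = x.size
    alpha = 0.05

    ecdf = np.arange(1, n + 1) / n                    # empirical CDF at the sorted points
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))    # DKW half-width for a 95% band

    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    # (x[i], lower[i]) .. (x[i], upper[i]) bounds the true CDF simultaneously
    # over all x with probability at least 1 - alpha.
    print(eps, lower[:3], upper[:3])
    ```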