enow.com Web Search

Search results

  1. Multiple comparisons problem - Wikipedia

    en.wikipedia.org/wiki/Multiple_comparisons_problem

    The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by H1, H2, ..., Hm. Using a statistical test, we reject the null hypothesis if the test is declared significant, and we do not reject the null hypothesis if the test is non-significant.
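
    As a rough illustration of why this matters, the short Python sketch below (not from the article) computes the family-wise error rate 1 - (1 - α)^m, the chance of at least one false rejection when m true null hypotheses are each tested independently at level α.

        # Minimal sketch (illustrative, not from the article): how the chance of
        # at least one Type I error grows with the number m of true null
        # hypotheses when each test is run independently at alpha = 0.05.
        alpha = 0.05
        for m in (1, 5, 10, 20, 100):
            fwer = 1 - (1 - alpha) ** m   # P(at least one false rejection)
            print(f"m = {m:3d}  family-wise error rate ~ {fwer:.3f}")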

  2. F-test - Wikipedia

    en.wikipedia.org/wiki/F-test

    A common use is testing the hypothesis that a data set in a regression analysis follows the simpler of two proposed linear models that are nested within each other. If the F-test leads to rejection of the null hypothesis, so that the factor under study has an impact on the dependent variable, multiple-comparison testing can then be conducted using data from the already completed F-test. [1]
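
    A hedged sketch of such a nested-model F-test, using NumPy/SciPy on made-up data; the models, sample size, and seed here are illustrative assumptions, not from the article.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 50
        x = rng.normal(size=n)
        y = 2.0 + 1.5 * x + rng.normal(size=n)        # toy data

        X_reduced = np.column_stack([np.ones(n)])     # intercept-only model
        X_full = np.column_stack([np.ones(n), x])     # intercept + slope

        def rss(X, y):
            # residual sum of squares from an ordinary least-squares fit
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return resid @ resid

        rss_r, rss_f = rss(X_reduced, y), rss(X_full, y)
        df_num = X_full.shape[1] - X_reduced.shape[1]  # extra parameters in full model
        df_den = n - X_full.shape[1]
        F = ((rss_r - rss_f) / df_num) / (rss_f / df_den)
        p = stats.f.sf(F, df_num, df_den)
        print(F, p)                                    # reject the simpler model if p < alpha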

  3. Holm–Bonferroni method - Wikipedia

    en.wikipedia.org/wiki/Holm–Bonferroni_method

    A hypothesis is rejected at level α if and only if its adjusted p-value is less than α. In the earlier example using equal weights, the adjusted p-values are 0.03, 0.06, 0.06, and 0.02. This is another way to see that, using α = 0.05, only hypotheses one and four are rejected by this procedure.
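
    The adjustment itself is short to sketch in Python. The raw p-values below are an assumption chosen to reproduce the adjusted values quoted above; the snippet does not list them.

        import numpy as np

        def holm_adjust(pvals):
            # Holm step-down adjustment: multiply the i-th smallest raw p-value
            # by (m - i + 1), then enforce monotonicity and cap at 1.
            p = np.asarray(pvals, dtype=float)
            m = p.size
            order = np.argsort(p)                     # ascending raw p-values
            adj = np.empty(m)
            running_max = 0.0
            for rank, idx in enumerate(order):
                running_max = max(running_max, (m - rank) * p[idx])
                adj[idx] = min(1.0, running_max)
            return adj

        # assumed raw p-values, in hypothesis order
        print(holm_adjust([0.01, 0.04, 0.03, 0.005]))  # -> [0.03 0.06 0.06 0.02]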

  4. Regression analysis - Wikipedia

    en.wikipedia.org/wiki/Regression_analysis

    Simple linear regression and multiple regression using least squares can be done in some spreadsheet applications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized.
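
    As a minimal illustration, a multiple regression by least squares can also be done directly in NumPy; the data below are made up.

        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 2))                  # two predictors
        y = 1.0 + 0.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

        A = np.column_stack([np.ones(len(X)), X])      # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares
        print(coef)                                    # roughly [1.0, 0.5, -2.0]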

  5. Omnibus test - Wikipedia

    en.wikipedia.org/wiki/Omnibus_test

    In multiple regression, the omnibus test is an ANOVA F-test on all the coefficients, which is equivalent to the F-test on the multiple correlation R². The omnibus F-test is an overall test of model fit, so failure to reject the null hypothesis implies that the proposed linear model does not fit the data significantly better than a model with no predictors.
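
    A hedged sketch of that overall F-test, computed from R² for a toy least-squares fit; the data, dimensions, and seed are illustrative assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n, k = 80, 3                                   # observations, predictors
        X = rng.normal(size=(n, k))
        y = 0.5 + X @ np.array([1.0, 0.0, -0.7]) + rng.normal(size=n)

        A = np.column_stack([np.ones(n), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)   # multiple R^2

        F = (r2 / k) / ((1 - r2) / (n - k - 1))        # H0: all slope coefficients are zero
        p = stats.f.sf(F, k, n - k - 1)
        print(r2, F, p)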

  6. Multivariate statistics - Wikipedia

    en.wikipedia.org/wiki/Multivariate_statistics

    The Inverse-Wishart distribution is important in Bayesian inference, for example in Bayesian multivariate linear regression. Additionally, Hotelling's T-squared distribution is a multivariate distribution, generalising Student's t-distribution, that is used in multivariate hypothesis testing.
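
    A rough sketch of a one-sample Hotelling's T-squared test, the multivariate analogue of the one-sample t-test; the data and hypothesised mean below are made up for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        X = rng.normal(loc=[0.2, -0.1], scale=1.0, size=(40, 2))   # n x p sample
        mu0 = np.zeros(2)                                          # hypothesised mean vector

        n, p = X.shape
        diff = X.mean(axis=0) - mu0
        S = np.cov(X, rowvar=False)                                # sample covariance matrix
        t2 = n * diff @ np.linalg.solve(S, diff)                   # Hotelling's T^2
        F = (n - p) / (p * (n - 1)) * t2                           # T^2 rescaled to an F statistic
        pval = stats.f.sf(F, p, n - p)
        print(t2, F, pval)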

  7. Tukey's range test - Wikipedia

    en.wikipedia.org/wiki/Tukey's_range_test

    The value qs is the sample's test statistic. (The notation |x| means the absolute value of x: the magnitude of x with the sign set to +, regardless of the original sign of x.) This qs test statistic can then be compared to a q value for the chosen significance level α from a table of the studentized range distribution.
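
    A sketch of that comparison on toy, equal-sized groups; it assumes scipy.stats.studentized_range is available (SciPy 1.7 or later) in place of a printed table.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        groups = [rng.normal(loc=m, size=12) for m in (0.0, 0.3, 1.0)]   # k equal-sized groups

        k = len(groups)
        n_per = len(groups[0])
        means = np.array([g.mean() for g in groups])
        df_within = k * (n_per - 1)
        ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_within
        se = np.sqrt(ms_within / n_per)                 # standard error of a group mean

        q_s = (means.max() - means.min()) / se          # sample studentized range statistic
        q_crit = stats.studentized_range.ppf(0.95, k, df_within)   # critical value, alpha = 0.05
        print(q_s, q_crit, q_s > q_crit)                # True means the extreme means differ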

  8. Goldfeld–Quandt test - Wikipedia

    en.wikipedia.org/wiki/Goldfeld–Quandt_test

    A parametric test for equal variance can be visualized by indexing the data by some variable, removing data points in the center, and comparing the mean deviations of the left and right sides. In statistics, the Goldfeld–Quandt test checks for heteroscedasticity in regression analyses. It does this by dividing a dataset into two parts or groups ...
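
    A hedged sketch of that procedure on made-up data whose variance grows with the ordering variable; the sample size and the fraction dropped from the center are illustrative choices.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        n = 90
        x = np.sort(rng.uniform(0, 10, size=n))
        y = 1.0 + 0.8 * x + rng.normal(scale=0.2 + 0.1 * x)        # noise variance grows with x

        def ols_rss(x_part, y_part):
            # residual sum of squares and residual degrees of freedom of an OLS fit
            A = np.column_stack([np.ones_like(x_part), x_part])
            beta, *_ = np.linalg.lstsq(A, y_part, rcond=None)
            r = y_part - A @ beta
            return r @ r, len(x_part) - A.shape[1]

        drop = n // 5                                              # omit the central fifth
        lo, hi = (n - drop) // 2, (n + drop) // 2
        rss1, df1 = ols_rss(x[:lo], y[:lo])                        # low-x half
        rss2, df2 = ols_rss(x[hi:], y[hi:])                        # high-x half

        F = (rss2 / df2) / (rss1 / df1)                            # expected > 1 if variance rises with x
        p = stats.f.sf(F, df2, df1)
        print(F, p)                                                # small p suggests heteroscedasticity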