enow.com Web Search

Search results

  2. Spearman–Brown prediction formula - Wikipedia

    en.wikipedia.org/wiki/Spearman–Brown_prediction...

    The Spearman–Brown prediction formula, also known as the Spearman–Brown prophecy formula, is a formula relating psychometric reliability to test length; it is used by psychometricians to predict the reliability of a test after changing the test length. [1] The method was published independently by Spearman (1910) and Brown (1910). [2][3]
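
The snippet describes the prediction formula only in words. As a minimal sketch (the function name `spearman_brown` is my own), the standard formula ρ* = nρ / (1 + (n − 1)ρ) can be coded as:

```python
def spearman_brown(rho: float, n: float) -> float:
    """Predicted reliability of a test lengthened by factor n,
    given current reliability rho (Spearman-Brown prophecy formula)."""
    return n * rho / (1 + (n - 1) * rho)

# Doubling (n=2) a test whose reliability is 0.6:
print(spearman_brown(0.6, 2))  # 0.75
```

Note that n can also be fractional, predicting the reliability of a shortened test.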

  3. Reliability (statistics) - Wikipedia

    en.wikipedia.org/wiki/Reliability_(statistics)

    Reliability theory shows that the variance of obtained scores is simply the sum of the variance of true scores plus the variance of errors of measurement. [7] This equation suggests that test scores vary as the result of two factors: (1) variability in true scores and (2) variability due to errors of measurement.
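
This decomposition implies the classical definition of reliability as the ratio of true-score variance to observed-score variance. A tiny worked example with made-up variances:

```python
# Hypothetical variance components (illustrative numbers only).
var_true = 9.0    # variance of true scores
var_error = 3.0   # variance of errors of measurement
var_obs = var_true + var_error  # observed-score variance, per the decomposition

# Classical-test-theory reliability: share of observed variance due to true scores.
reliability = var_true / var_obs
print(reliability)  # 0.75
```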

  4. Kuder–Richardson formulas - Wikipedia

    en.wikipedia.org/wiki/Kuder–Richardson_formulas

    The name of this formula stems from the fact that it is the twentieth formula discussed in Kuder and Richardson's seminal paper on test reliability. [1] It is a special case of Cronbach's α, computed for dichotomous scores. [2][3] It is often claimed that a high KR-20 coefficient (e.g., > 0.90) indicates a homogeneous test. However, like ...
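
The snippet names KR-20 but truncates before the formula. A minimal sketch of the standard computation, KR-20 = k/(k−1) · (1 − Σpq / σ²), for a persons × items matrix of 0/1 scores (function name and sample data are illustrative):

```python
def kr20(scores):
    """KR-20 for a persons x items matrix of dichotomous (0/1) scores."""
    n = len(scores)        # number of persons
    k = len(scores[0])     # number of items
    # Proportion answering each item correctly, and sum of p*q.
    p = [sum(row[j] for row in scores) / n for j in range(k)]
    pq = sum(pj * (1 - pj) for pj in p)
    # Population variance of total scores.
    totals = [sum(row) for row in scores]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    return k / (k - 1) * (1 - pq / var_total)

data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]  # 4 persons, 3 items
print(kr20(data))  # 0.75
```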

  5. Cronbach's alpha - Wikipedia

    en.wikipedia.org/wiki/Cronbach's_alpha

    Cronbach's alpha (Cronbach's α), also known as tau-equivalent reliability (ρT) or coefficient alpha, is a reliability coefficient and a measure of the internal consistency of tests and measures. [1][2][3] It was named after the American psychologist Lee Cronbach. Numerous studies warn against using Cronbach's alpha ...
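
A sketch of coefficient alpha from the standard formula α = k/(k−1) · (1 − Σσ²_item / σ²_total); the function name and sample data are my own, and the population-variance convention is one common choice:

```python
def cronbach_alpha(scores):
    """Coefficient alpha for a persons x items score matrix (list of lists)."""
    n = len(scores)
    k = len(scores[0])

    def pvar(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)  # population variance

    item_vars = sum(pvar([row[j] for row in scores]) for j in range(k))
    total_var = pvar([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

data = [[3, 3, 2], [2, 2, 2], [1, 2, 1], [0, 1, 0]]  # 4 persons, 3 items
print(round(cronbach_alpha(data), 3))
```

For dichotomous 0/1 data this reduces to KR-20, matching the special-case relationship noted in the previous result.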

  6. Inter-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Inter-rater_reliability

    In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must ...
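
One common statistic for quantifying agreement between two raters is Cohen's kappa (not named in the snippet, but a standard choice); a minimal sketch with illustrative data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' category labels over the same items."""
    n = len(a)
    cats = set(a) | set(b)
    # Observed agreement: fraction of items both raters labeled identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

rater_a = [1, 1, 0, 0]
rater_b = [1, 0, 0, 0]
print(cohens_kappa(rater_a, rater_b))  # 0.5
```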

  7. Psychological statistics - Wikipedia

    en.wikipedia.org/wiki/Psychological_statistics

    Split-half reliability (Spearman–Brown prophecy) and Cronbach's alpha are popular estimates of this reliability. [5] (D) Parallel Form Reliability: It is an estimate of consistency between two different instruments of measurement. The inter-correlation between two parallel forms of a test or scale is used as an estimate of parallel-form reliability.
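
A sketch of split-half reliability with the Spearman–Brown correction, assuming an odd/even item split (one common convention; the function names and data are illustrative):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)

def split_half_reliability(scores):
    """Split-half reliability for a persons x items matrix:
    correlate odd-item and even-item half-scores, then apply the
    Spearman-Brown correction for full test length (n = 2)."""
    odd = [sum(row[0::2]) for row in scores]
    even = [sum(row[1::2]) for row in scores]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

data = [[2, 3, 2, 3], [1, 2, 1, 1], [3, 3, 3, 2], [0, 1, 0, 1]]
print(round(split_half_reliability(data), 3))
```

The correction is needed because each half is only half the length of the full test, so the raw half-to-half correlation understates full-test reliability.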

  8. Stanford–Binet Intelligence Scales - Wikipedia

    en.wikipedia.org/wiki/Stanford–Binet...

    The Stanford–Binet Intelligence Scales (or more commonly the Stanford–Binet) is an individually administered intelligence test that was revised from the original Binet–Simon Scale, developed by Alfred Binet and Théodore Simon. It is in its fifth edition (SB5), which was released in 2003. It is a cognitive-ability and intelligence ...

  9. Generalizability theory - Wikipedia

    en.wikipedia.org/wiki/Generalizability_theory

    Generalizability theory, or G theory, is a statistical framework for conceptualizing, investigating, and designing reliable observations. It is used to determine the reliability (i.e., reproducibility) of measurements under specific conditions. It is particularly useful for assessing the reliability of performance assessments.