enow.com Web Search

Search results

  1. Intra-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Intra-rater_reliability

    In statistics, intra-rater reliability is the degree of agreement among repeated administrations of a diagnostic test performed by a single rater. [1][2] Intra-rater reliability and inter-rater reliability are aspects of test validity.
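
    As a minimal illustration of the idea in this snippet, the same rater can score the same cases twice, and intra-rater agreement can be summarised as the fraction of identical scores. The sketch below (Python) uses invented labels; note that raw percent agreement ignores matches expected by chance, which is the gap Cohen's kappa in the next result addresses.

        # Percent agreement between two administrations scored by one rater.
        def percent_agreement(first, second):
            """Fraction of items scored identically on both passes."""
            if len(first) != len(second):
                raise ValueError("rating lists must be the same length")
            return sum(a == b for a, b in zip(first, second)) / len(first)

        # The same rater scores 8 cases twice, a week apart (invented labels).
        pass_1 = ["pos", "neg", "neg", "pos", "pos", "neg", "pos", "neg"]
        pass_2 = ["pos", "neg", "pos", "pos", "pos", "neg", "pos", "neg"]
        print(percent_agreement(pass_1, pass_2))  # 0.875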

  2. Cohen's kappa - Wikipedia

    en.wikipedia.org/wiki/Cohen's_kappa

    Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items. [1] It is generally thought to be a more robust measure than simple percent agreement calculation, as κ takes into account the possibility of the agreement ...
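
    The chance correction the snippet alludes to is kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the agreement expected from each rater's label frequencies alone. A minimal sketch with invented labels (assumes p_e < 1):

        from collections import Counter

        # Cohen's kappa for two raters labelling the same items.
        def cohens_kappa(rater_a, rater_b):
            n = len(rater_a)
            # Observed agreement: fraction of items both raters labelled the same.
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            # Chance agreement from each rater's marginal label frequencies.
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
            return (p_o - p_e) / (1 - p_e)

        a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
        b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
        print(cohens_kappa(a, b))  # 0.5: agreement halfway between chance and perfect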

  3. Fleiss' kappa - Wikipedia

    en.wikipedia.org/wiki/Fleiss'_kappa

    Fleiss' kappa is a generalisation of Scott's pi statistic, [2] a statistical measure of inter-rater reliability. [3] It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances. [4]
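
    Unlike Cohen's kappa, Fleiss' kappa works from a subjects-by-categories table of rating counts rather than paired labels, so it handles any fixed number of raters per subject. A sketch of the standard computation, with an invented 4 x 3 table rated by 6 raters:

        # Fleiss' kappa: counts[i][j] = number of raters who put subject i
        # in category j; every row must sum to the same rater count n.
        def fleiss_kappa(counts):
            N, k = len(counts), len(counts[0])
            n = sum(counts[0])  # raters per subject
            # Mean per-subject agreement P_bar.
            p_bar = sum(
                (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
            ) / N
            # Chance agreement from overall category proportions.
            p_e = sum((sum(row[j] for row in counts) / (N * n)) ** 2 for j in range(k))
            return (p_bar - p_e) / (1 - p_e)

        table = [[6, 0, 0], [2, 3, 1], [0, 5, 1], [1, 1, 4]]
        print(round(fleiss_kappa(table), 3))  # 0.365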

  4. Reliability (statistics) - Wikipedia

    en.wikipedia.org/wiki/Reliability_(statistics)

    Measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions. [4] This includes intra-rater reliability. Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used. This allows inter-rater reliability to ...

  5. Intraclass correlation - Wikipedia

    en.wikipedia.org/wiki/Intraclass_correlation

    Single measures: even though more than one measure is taken in the experiment, reliability is applied to a context where a single measure of a single rater will be performed; Average measures: the reliability is applied to a context where measures of k raters will be averaged for each subject. Consistency or absolute agreement:
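
    The single-measures versus average-measures distinction in this snippet is easiest to see in the one-way random-effects ICC, where the average-measures form is just the single-measures form stepped up to k ratings. A hand-rolled sketch with invented scores; a real analysis would typically use a dedicated routine (for example, pingouin's intraclass_corr):

        # One-way random-effects ICC from an n-subjects x k-raters grid.
        def icc_oneway(scores):
            n, k = len(scores), len(scores[0])
            grand = sum(sum(row) for row in scores) / (n * k)
            row_means = [sum(row) / k for row in scores]
            # Between-subjects and within-subjects mean squares (one-way ANOVA).
            ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
            ms_within = sum(
                (x - row_means[i]) ** 2 for i, row in enumerate(scores) for x in row
            ) / (n * (k - 1))
            single = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
            average = (ms_between - ms_within) / ms_between
            return single, average

        ratings = [[9, 8, 9], [5, 6, 5], [7, 7, 8], [3, 2, 3], [6, 5, 6]]
        single, average = icc_oneway(ratings)
        print(round(single, 3), round(average, 3))  # 0.938 0.978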

  6. Psychological statistics - Wikipedia

    en.wikipedia.org/wiki/Psychological_statistics

    (A) Inter-Rater reliability: Inter-Rater reliability is an estimate of agreement between independent raters. This is most useful for subjective responses. Cohen's Kappa, Krippendorff's Alpha, Intra-Class correlation coefficients, Correlation coefficients, Kendall's concordance coefficient, etc. are useful statistical tools. (B) Test-Retest ...
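
    Of the tools listed, Kendall's concordance coefficient W has a particularly compact form when m raters rank n items with no ties: W = 12 * S / (m^2 * (n^3 - n)), where S is the squared deviation of each item's rank sum from the mean rank sum. A sketch with invented ranks:

        # Kendall's W for m raters each ranking the same n items (no ties).
        def kendalls_w(rankings):
            m, n = len(rankings), len(rankings[0])
            # Rank sum per item, and its deviation from the expected mean sum.
            rank_sums = [sum(r[j] for r in rankings) for j in range(n)]
            mean_sum = m * (n + 1) / 2
            s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
            return 12 * s / (m * m * (n**3 - n))

        # 3 raters rank 4 items (1 = best); ranks are invented.
        ranks = [[1, 2, 3, 4], [2, 1, 3, 4], [1, 3, 2, 4]]
        print(round(kendalls_w(ranks), 3))  # 0.778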

  7. Generalizability theory - Wikipedia

    en.wikipedia.org/wiki/Generalizability_theory

    For example, a soft drink company might be interested in assessing the quality of a new product through use of a consumer rating scale. By employing a D study, it would be possible to estimate how the consistency of quality ratings would change if consumers were asked 10 questions instead of 2, or if 1,000 consumers rated the soft drink instead ...
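
    The projection a D study makes can be sketched in the simplest one-facet case: given a person variance component and a residual variance component from a G study, the generalizability coefficient for a design averaging over n conditions is var_person / (var_person + var_residual / n). The variance components below are invented purely to show the shape of the curve:

        # D-study style projection for a one-facet person x condition design.
        def projected_coefficient(var_person, var_residual, n_conditions):
            return var_person / (var_person + var_residual / n_conditions)

        var_p, var_res = 0.50, 1.20  # invented variance components
        for n in (2, 10, 50):
            print(n, round(projected_coefficient(var_p, var_res, n), 3))
        # Prints 0.455 at n=2, 0.806 at n=10, 0.954 at n=50: more
        # questions (or raters) per subject, steadier ratings.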

  8. Q methodology - Wikipedia

    en.wikipedia.org/wiki/Q_methodology

    Q methodology is a research method used in psychology and in social sciences to study people's "subjectivity"—that is, their viewpoint. Q was developed by psychologist William Stephenson. It has been used both in clinical settings for assessing a patient's progress over time (intra-rater comparison) and in research settings to examine ...
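
    The intra-rater comparison mentioned here is often operationalised as a simple correlation between the same person's Q-sort at two time points. A minimal sketch with an invented nine-statement sort on a -2..+2 grid:

        # Pearson correlation between two Q-sorts by the same person.
        def pearson_r(x, y):
            n = len(x)
            mx, my = sum(x) / n, sum(y) / n
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sx = sum((a - mx) ** 2 for a in x) ** 0.5
            sy = sum((b - my) ** 2 for b in y) ** 0.5
            return cov / (sx * sy)

        sort_t1 = [-2, -1, -1, 0, 0, 0, 1, 1, 2]  # intake (invented)
        sort_t2 = [-2, -1, 0, -1, 0, 1, 1, 0, 2]  # follow-up (invented)
        print(round(pearson_r(sort_t1, sort_t2), 3))  # 0.833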