enow.com Web Search

Search results

  2. Intra-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Intra-rater_reliability

    In statistics, intra-rater reliability is the degree of agreement among repeated administrations of a diagnostic test performed by a single rater.[1][2] Intra-rater reliability and inter-rater reliability are aspects of test validity.
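A minimal sketch of the idea in this snippet (the function name and data are illustrative, not drawn from the article): simple percent agreement between two scoring passes by the same rater. Kappa-type statistics go further by also correcting for chance agreement.

```python
def intra_rater_agreement(first_pass, second_pass):
    """Percent agreement between two administrations of the same test
    scored by one rater (illustrative; does not correct for chance)."""
    assert len(first_pass) == len(second_pass)
    # Count items where the rater gave the same score both times.
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)
```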

  3. Cohen's kappa - Wikipedia

    en.wikipedia.org/wiki/Cohen's_kappa

    Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. It is defined as κ = (p_o − p_e) / (1 − p_e), where p_o is the relative observed agreement among raters, and p_e is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly selecting each category.
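The formula in this snippet can be sketched directly (the function name and example labels are my own; p_e is computed from each rater's marginal category frequencies):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two raters
    classifying the same N items. Assumes p_e < 1."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # p_o: observed agreement, the fraction of items where raters match.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # p_e: chance agreement, summing the product of each rater's
    # marginal proportion per category.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement yields κ = 1, while agreement no better than chance yields κ = 0.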

  4. Reliability (statistics) - Wikipedia

    en.wikipedia.org/wiki/Reliability_(statistics)

    This includes intra-rater reliability. Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used. This allows inter-rater reliability to be ruled out. When dealing with forms, it may be termed parallel-forms reliability.[6]

  5. Fleiss' kappa - Wikipedia

    en.wikipedia.org/wiki/Fleiss'_kappa

    Fleiss' kappa is a generalisation of Scott's pi statistic,[2] a statistical measure of inter-rater reliability.[3] It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances.[4]
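A minimal sketch of Fleiss' kappa as referenced here (the input layout and function name are assumptions; the statistic itself follows the standard definition for a fixed number of raters per subject):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an N x k table, where counts[i][j] is the
    number of raters assigning subject i to category j. Every row must
    sum to the same number of raters n, with n >= 2."""
    N = len(counts)            # number of subjects
    n = sum(counts[0])         # raters per subject
    k = len(counts[0])         # number of categories
    # p_j: proportion of all assignments falling in category j.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # P_i: per-subject agreement, the fraction of agreeing rater pairs.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N       # mean observed agreement
    P_e = sum(p * p for p in p_j)  # expected agreement by chance
    return (P_bar - P_e) / (1 - P_e)
```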

  6. Psychological statistics - Wikipedia

    en.wikipedia.org/wiki/Psychological_statistics

    (A) Inter-rater reliability: Inter-rater reliability is an estimate of agreement between independent raters. This is most useful for subjective responses. Cohen's kappa, Krippendorff's alpha, intra-class correlation coefficients, correlation coefficients, Kendall's coefficient of concordance, etc. are useful statistical tools. (B) Test-Retest ...

  7. Rating (clinical trials) - Wikipedia

    en.wikipedia.org/wiki/Rating_(clinical_trials)

    Because the evaluation is subjective, this can result in both inter-rater and intra-rater reliability issues.[4] When conducting clinical trials, ensuring rating consistency is important but can prove quite difficult to achieve.

  8. Generalizability theory - Wikipedia

    en.wikipedia.org/wiki/Generalizability_theory

    In contrast, an example of a relative, or norm-referenced, decision would be when the individual's test score is used to either (a) determine relative standing as compared to his/her peers (i.e. a child's score on a reading subtest is used to determine which reading group he/she is placed in), or (b) make intra-individual comparisons (i.e ...

  9. Q methodology - Wikipedia

    en.wikipedia.org/wiki/Q_methodology

    Q methodology is a research method used in psychology and in social sciences to study people's "subjectivity"—that is, their viewpoint. Q was developed by psychologist William Stephenson. It has been used both in clinical settings for assessing a patient's progress over time (intra-rater comparison), as well as in research settings to ...