enow.com Web Search

Search results

  1. Reliability (statistics) - Wikipedia

    en.wikipedia.org/wiki/Reliability_(statistics)

    Inter-rater reliability assesses the degree of agreement between two or more raters in their appraisals. For example, a person gets a stomach ache and different doctors all give the same diagnosis. [5]: 71 Test-retest reliability assesses the degree to ...

  2. External validity - Wikipedia

    en.wikipedia.org/wiki/External_validity

    For example, increasing the number of bystanders has been found to inhibit helping behaviour with many kinds of people, including children, university students, and future ministers; [25] in Israel; [26] in small towns and large cities in the U.S.; [27] in a variety of settings, such as psychology laboratories, city streets, and subway trains ...

  3. Validity scale - Wikipedia

    en.wikipedia.org/wiki/Validity_scale

    A validity scale, in psychological testing, is a scale used in an attempt to measure reliability of responses, for example with the goal of detecting defensiveness, malingering, or careless or random responding.

  4. Psychological statistics - Wikipedia

    en.wikipedia.org/wiki/Psychological_statistics

    Split-half reliability (Spearman-Brown prophecy) and Cronbach's alpha are popular estimates of this reliability. [5] (D) Parallel Form Reliability: It is an estimate of consistency between two different instruments of measurement. The inter-correlation between two parallel forms of a test or scale is used as an estimate of parallel form reliability.

  5. Discriminant validity - Wikipedia

    en.wikipedia.org/wiki/Discriminant_validity

    In psychology, discriminant validity tests whether concepts or measurements that are not supposed to be related are actually unrelated. Campbell and Fiske (1959) introduced the concept of discriminant validity within their discussion on evaluating test validity.

  6. Inter-rater reliability - Wikipedia

    en.wikipedia.org/wiki/Inter-rater_reliability

    In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.

  7. Generalizability theory - Wikipedia

    en.wikipedia.org/wiki/Generalizability_theory

    Generalizability theory, or G theory, is a statistical framework for conceptualizing, investigating, and designing reliable observations. It is used to determine the reliability (i.e., reproducibility) of measurements under specific conditions.

  8. Psychological testing - Wikipedia

    en.wikipedia.org/wiki/Psychological_testing

    The reliability of observers in direct observational research can be evaluated using Cohen's kappa. The Parent-Child Interaction Assessment-II (PCIA) [33] is an example of a direct observation procedure that is used with school-age children and parents. The parents and children are video recorded playing at a make-believe zoo.
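
Several of the reliability statistics named in these results can be illustrated with short, self-contained sketches. All of the following are minimal Python examples on made-up data, not implementations taken from the cited articles. Result 1's snippet is cut off, but test-retest reliability is typically estimated as the correlation between scores from two administrations of the same test:

```python
import numpy as np

# Made-up scores for six people who took the same test twice.
time1 = np.array([12.0, 15.0, 9.0, 20.0, 14.0, 11.0])
time2 = np.array([13.0, 14.0, 10.0, 19.0, 15.0, 10.0])

# Test-retest reliability: Pearson correlation between administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability (Pearson r) = {r:.3f}")
```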
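
Result 4 names split-half reliability (stepped up with the Spearman-Brown prophecy formula) and Cronbach's alpha. A sketch of both on a made-up respondents-by-items matrix:

```python
import numpy as np

# Made-up item scores: 8 respondents x 4 items.
X = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [5, 4, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 2],
], dtype=float)
k = X.shape[1]

# Split-half: correlate odd-item and even-item half scores, then apply
# the Spearman-Brown prophecy formula, r_sb = 2r / (1 + r), to estimate
# the reliability of the full-length test.
odd = X[:, 0::2].sum(axis=1)
even = X[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
r_sb = 2 * r_half / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / var(total score)).
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

print(f"split-half r = {r_half:.3f}, Spearman-Brown = {r_sb:.3f}, alpha = {alpha:.3f}")
```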
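
For discriminant validity (result 5), the basic check is that measures of supposedly unrelated constructs correlate weakly; Campbell and Fiske compare such cross-construct correlations against convergent correlations in a multitrait-multimethod matrix. A toy check on simulated scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scale scores for two constructs that theory says are unrelated.
construct_a = rng.normal(size=100)
construct_b = rng.normal(size=100)

# Discriminant validity is supported when this correlation is near zero
# (and clearly lower than each construct's convergent correlations).
r_ab = np.corrcoef(construct_a, construct_b)[0, 1]
print(f"cross-construct r = {r_ab:.3f}")
```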
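
The simplest index behind inter-rater reliability (result 6) is percent agreement: the share of items two raters code identically. It ignores agreement expected by chance, which is what Cohen's kappa (result 8, sketched below) corrects for:

```python
# Two raters' codes for the same ten observations (made up).
rater1 = ["yes", "no", "yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

# Percent agreement: proportion of identically coded items.
agreement = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
print(f"percent agreement = {agreement:.0%}")
```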
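
Generalizability theory (result 7) partitions score variance into components. For the simplest fully crossed persons-by-raters design, the components fall out of ANOVA mean squares; the design and formulas below are the standard one-facet case, applied to made-up ratings:

```python
import numpy as np

# Made-up G-study data: 5 persons each scored by the same 3 raters.
X = np.array([
    [7.0, 6.0, 7.0],
    [4.0, 5.0, 4.0],
    [8.0, 8.0, 9.0],
    [5.0, 4.0, 5.0],
    [6.0, 7.0, 6.0],
])
n_p, n_r = X.shape
grand = X.mean()
person_means = X.mean(axis=1)
rater_means = X.mean(axis=0)

# ANOVA mean squares for the crossed persons x raters design.
ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
resid = X - person_means[:, None] - rater_means[None, :] + grand
ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

# Variance components and the generalizability coefficient for a mean
# over n_r raters: G = var_p / (var_p + var_pr / n_r).
var_pr = ms_pr                    # person x rater interaction + error
var_p = (ms_p - ms_pr) / n_r      # universe-score (person) variance
g = var_p / (var_p + var_pr / n_r)
print(f"var_p = {var_p:.3f}, var_pr,e = {var_pr:.3f}, G = {g:.3f}")
```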
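
Result 8 mentions Cohen's kappa for evaluating observer reliability. Kappa rescales observed agreement by the agreement expected from each observer's marginal code frequencies, kappa = (p_o - p_e) / (1 - p_e); a sketch on made-up codes:

```python
from collections import Counter

# Two observers' codes for the same ten observed behaviors (made up).
obs1 = ["play", "talk", "play", "idle", "talk", "play", "idle", "play", "talk", "play"]
obs2 = ["play", "talk", "idle", "idle", "talk", "play", "play", "play", "talk", "play"]
n = len(obs1)

# Observed agreement.
p_o = sum(a == b for a, b in zip(obs1, obs2)) / n

# Chance agreement from the two observers' marginal code frequencies.
c1, c2 = Counter(obs1), Counter(obs2)
p_e = sum(c1[code] * c2[code] for code in set(c1) | set(c2)) / n ** 2

kappa = (p_o - p_e) / (1 - p_e)
print(f"Cohen's kappa = {kappa:.3f}")
```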