In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.
Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The definition of kappa is κ = (p_o − p_e) / (1 − p_e), where p_o is the relative observed agreement among raters, and p_e is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly selecting each category.
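A minimal sketch of this calculation, assuming two raters' labels are given as plain Python lists of equal length (the function name and example data are illustrative, not from any particular library):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e) for two equally long label lists."""
    n = len(rater_a)
    # p_o: relative observed agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # p_e: chance agreement from each rater's own marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Example: two raters classifying 10 items into "yes"/"no"
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "no"]
print(cohens_kappa(a, b))
```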
Researchers can help foster higher inter-observer reliability if they clearly define the constructs they are interested in measuring. If inter-observer reliability is low, it is likely that the construct being observed is too ambiguous and that the observers are each applying their own interpretations.
Fleiss' kappa is a generalisation of Scott's pi statistic, [2] a statistical measure of inter-rater reliability. [3] It is also related to Cohen's kappa statistic and Youden's J statistic which may be more appropriate in certain instances. [4]
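A sketch of Fleiss' kappa under the usual assumptions: a fixed number of raters per subject, and an input count matrix where counts[i][j] is how many raters assigned subject i to category j (the matrix layout and example values are illustrative):

```python
def fleiss_kappa(counts):
    N = len(counts)        # number of subjects
    n = sum(counts[0])     # raters per subject (assumed constant)
    k = len(counts[0])     # number of categories
    # proportion of all assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # per-subject agreement P_i
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Example: 4 subjects, 3 raters each, 2 categories
counts = [[3, 0], [2, 1], [1, 2], [0, 3]]
print(fleiss_kappa(counts))
```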
Inter-observer variability refers to systematic differences among the observers — for example, one physician may consistently score patients at a higher risk level than other physicians. Intra-observer variability refers to deviations of a particular observer's score on a particular patient that are not part of a systematic difference.
Kendall's W (also known as Kendall's coefficient of concordance) is a non-parametric statistic for rank correlation. It is a normalization of the statistic of the Friedman test, and can be used for assessing agreement among raters and in particular inter-rater reliability.
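A sketch of Kendall's W, assuming each of m raters supplies a complete ranking (1..n, no ties) of the same n items; the input format and example rankings are illustrative:

```python
def kendalls_w(rankings):
    m = len(rankings)      # number of raters
    n = len(rankings[0])   # number of items ranked
    # total rank received by each item across raters
    R = [sum(r[i] for r in rankings) for i in range(n)]
    R_mean = m * (n + 1) / 2
    S = sum((Ri - R_mean) ** 2 for Ri in R)   # sum of squared deviations
    return 12 * S / (m ** 2 * (n ** 3 - n))

# Example: 3 raters ranking the same 4 items
rankings = [
    [1, 2, 3, 4],
    [2, 1, 3, 4],
    [1, 3, 2, 4],
]
print(kendalls_w(rankings))
```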
Validity studies can also be performed. One study looked at the interobserver reliability of interviewing and coding. Two researchers independently interviewed DES participants and coded their experiences. They compared these codes to see if they matched and found high reliability. [20] DES samples can also be checked with other observables.
Scott's pi (named after William A. Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and various measures are used to assess the extent of agreement between the annotators, one of which is Scott's pi.
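A sketch of Scott's pi for two annotators, assuming their labels are given as plain lists of nominal categories (names and data are illustrative). Unlike Cohen's kappa, the expected agreement is computed from the pooled category proportions of both annotators rather than from each annotator's separate marginals:

```python
from collections import Counter

def scotts_pi(coder_a, coder_b):
    n = len(coder_a)
    # observed agreement
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected agreement from pooled category proportions
    pooled = Counter(coder_a) + Counter(coder_b)
    p_e = sum((count / (2 * n)) ** 2 for count in pooled.values())
    return (p_o - p_e) / (1 - p_e)

# Example: two annotators coding 6 text segments
a = ["pos", "neg", "pos", "pos", "neg", "pos"]
b = ["pos", "neg", "neg", "pos", "neg", "pos"]
print(scotts_pi(a, b))
```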