Fleiss' kappa

Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. This contrasts with other kappas, such as Cohen's kappa, which only work when assessing the agreement between no more than two raters, or the intra-rater reliability of one appraiser versus themself. The measure calculates the degree of agreement in classification over that which would be expected by chance. There is no generally agreed-upon measure of significance, although guidelines have been given.

Fleiss' kappa can be used only with binary or nominal-scale ratings. No version is available for ordered-categorical ratings.

Introduction

Fleiss' kappa is a generalisation of Scott's pi statistic, a statistical measure of inter-rater reliability. It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances. Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings to a fixed number of items. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. It is important to note that whereas Cohen's kappa assumes the same two raters have rated a set of items, Fleiss' kappa specifically allows that although there are a fixed number of raters (e.g., three), different items may be rated by different individuals (Fleiss, 1971, p. 378). That is, Item 1 is rated by Raters A, B, and C; but Item 2 could be rated by Raters D, E, and F.

Agreement can be thought of as follows: if a fixed number of people assign categorical ratings to a number of items, then the kappa will give a measure for how consistent the ratings are. The kappa, κ, can be defined as

    \kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}    (1)

The factor 1 - P̄_e gives the degree of agreement that is attainable above chance, and P̄ - P̄_e gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then κ = 1. If there is no agreement among the raters (other than what would be expected by chance) then κ ≤ 0.

An example of the use of Fleiss' kappa may be the following: consider fourteen psychiatrists who are asked to look at ten patients. Each psychiatrist gives one of possibly five diagnoses to each patient. These are compiled into a matrix, and Fleiss' kappa can be computed from this matrix (see example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance.
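
As a concrete sketch of this compilation step (the function ratings_to_counts, its arguments, and the toy diagnoses below are illustrative assumptions, not part of Fleiss' method), per-item lists of ratings can be turned into the matrix of counts in Python as follows:

    from collections import Counter

    def ratings_to_counts(ratings, categories):
        """Compile raw ratings into an N x k matrix of counts.

        ratings    -- list of lists; ratings[i] holds the category chosen by
                      each rater for item i (the raters need not be the same
                      people from one item to the next).
        categories -- ordered list of the k possible categories.
        Returns a list of N rows, each containing the k counts n_ij.
        """
        counts = []
        for item_ratings in ratings:
            tally = Counter(item_ratings)
            counts.append([tally.get(c, 0) for c in categories])
        return counts

    # Three raters per item and five diagnostic categories (toy data, not the
    # worked example below).
    counts = ratings_to_counts(
        [["depression", "depression", "neurosis"],
         ["neurosis", "neurosis", "neurosis"]],
        categories=["depression", "neurosis", "psychosis", "other", "none"],
    )
    # counts == [[2, 1, 0, 0, 0], [0, 3, 0, 0, 0]]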

Equations

Let N be the total number of subjects, let n be the number of ratings per subject, and let k be the number of categories into which assignments are made. The subjects are indexed by i = 1, ..., N and the categories are indexed by j = 1, ..., k. Let n_ij represent the number of raters who assigned the i-th subject to the j-th category.

First calculate p_j, the proportion of all assignments which were made to the j-th category:

    p_j = \frac{1}{N n} \sum_{i=1}^{N} n_{ij}, \qquad \sum_{j=1}^{k} p_j = 1    (2)

Now calculate P_i, the extent to which raters agree for the i-th subject (i.e., compute how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs):

    P_i = \frac{1}{n(n-1)} \sum_{j=1}^{k} n_{ij}(n_{ij} - 1) = \frac{1}{n(n-1)} \left[ \left( \sum_{j=1}^{k} n_{ij}^2 \right) - n \right]    (3)

Now compute P̄, the mean of the P_i's, and P̄_e, which go into the formula for κ:

    \bar{P} = \frac{1}{N} \sum_{i=1}^{N} P_i = \frac{1}{N n (n-1)} \left( \sum_{i=1}^{N} \sum_{j=1}^{k} n_{ij}^2 - N n \right)    (4)

    \bar{P}_e = \sum_{j=1}^{k} p_j^2    (5)
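
A minimal Python sketch of these equations follows; it assumes the counts n_ij are supplied as a plain list of lists with one row per subject and that every subject receives the same number n of ratings (the function name fleiss_kappa is an illustrative choice):

    def fleiss_kappa(counts):
        """Compute Fleiss' kappa from an N x k matrix of counts n_ij.

        counts[i][j] is the number of raters who assigned subject i to
        category j; every row must sum to the same number of raters n.
        """
        N = len(counts)      # number of subjects
        k = len(counts[0])   # number of categories
        n = sum(counts[0])   # ratings per subject

        # Equation (2): proportion of all assignments made to category j.
        p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]

        # Equation (3): agreement P_i for each subject.
        P = [(sum(n_ij * n_ij for n_ij in row) - n) / (n * (n - 1))
             for row in counts]

        # Equations (4) and (5): mean observed agreement and chance agreement.
        P_bar = sum(P) / N
        P_e_bar = sum(p_j * p_j for p_j in p)

        # Equation (1).
        return (P_bar - P_e_bar) / (1 - P_e_bar)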

Worked example

            1      2      3      4      5      P_i
1           0      0      0      0     14    1.000
2           0      2      6      4      2    0.253
3           0      0      3      5      6    0.308
4           0      3      9      2      0    0.440
5           2      2      8      1      1    0.330
6           7      7      0      0      0    0.462
7           3      2      6      3      0    0.242
8           2      5      3      2      2    0.176
9           6      5      2      1      0    0.286
10          0      2      2      3      7    0.286
Total      20     28     39     21     32
p_j     0.143  0.200  0.279  0.150  0.229

Table of values for computing the worked example

In this example, fourteen raters (n = 14) assign ten "subjects" (N = 10) to a total of five categories (k = 5). The categories are presented in the columns, while the subjects are presented in the rows. Each cell lists the number of raters who assigned the indicated (row) subject to the indicated (column) category.

Data

See the table above.

N = 10, n = 14, k = 5

Sum of all cells = 140
Sum of P_i = 3.780

Calculations

The value p_j is the proportion of all assignments (N × n, here 140) that were made to the j-th category. For example, taking the first column,

    p_1 = \frac{0 + 0 + 0 + 0 + 2 + 7 + 3 + 2 + 6 + 0}{140} = 0.143

And taking the second row,

    P_2 = \frac{1}{14(14 - 1)} \left( 0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14 \right) = 0.253

In order to calculate κ, we need to know the sum of the P_i,

    \sum_{i} P_i = 1.000 + 0.253 + \cdots + 0.286 = 3.780

Over the whole sheet,

    \bar{P} = \frac{3.780}{10} = 0.378

    \bar{P}_e = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213

    \kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210
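
Assuming the fleiss_kappa sketch from the Equations section above, the same result can be reproduced directly from the table:

    table = [
        [0, 0, 0, 0, 14],
        [0, 2, 6, 4, 2],
        [0, 0, 3, 5, 6],
        [0, 3, 9, 2, 0],
        [2, 2, 8, 1, 1],
        [7, 7, 0, 0, 0],
        [3, 2, 6, 3, 0],
        [2, 5, 3, 2, 2],
        [6, 5, 2, 1, 0],
        [0, 2, 2, 3, 7],
    ]
    print(round(fleiss_kappa(table), 3))   # prints 0.21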

Interpretation

Landis and Koch (1977) gave the following table for interpreting κ values. This table is, however, by no means universally accepted. They supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful, as the number of categories and subjects will affect the magnitude of the value. The kappa will be higher when there are fewer categories.

κ              Interpretation
< 0            Poor agreement
0.01 – 0.20    Slight agreement
0.21 – 0.40    Fair agreement
0.41 – 0.60    Moderate agreement
0.61 – 0.80    Substantial agreement
0.81 – 1.00    Almost perfect agreement
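
If the labels are nevertheless to be applied programmatically, a minimal sketch (the name interpret_kappa is an illustrative assumption) simply encodes the thresholds above and inherits all of their caveats:

    def interpret_kappa(kappa):
        """Map a kappa value to the Landis and Koch (1977) label.

        These thresholds are a rule of thumb, not a universally accepted
        standard.
        """
        if kappa < 0:
            return "Poor agreement"
        if kappa <= 0.20:
            return "Slight agreement"
        if kappa <= 0.40:
            return "Fair agreement"
        if kappa <= 0.60:
            return "Moderate agreement"
        if kappa <= 0.80:
            return "Substantial agreement"
        return "Almost perfect agreement"

    print(interpret_kappa(0.210))   # prints Fair agreement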

Notes

  1. Fleiss, J. L. (1971) pp. 378–382
  2. Scott, W. (1955) pp. 321–325
  3. Powers, D. M. W. (2011)
  4. Powers, D. M. W. (2012)
  5. Landis, J. R. and Koch, G. G. (1977) pp. 159–174
  6. Gwet, K. L. (2014, chapter 6)
  7. Sim, J. and Wright, C. C. (2005) pp. 257–268

References

  • Fleiss, J. L. (1971) "Measuring nominal scale agreement among many raters." Psychological Bulletin, Vol. 76, No. 5, pp. 378–382
  • Gwet, K. L. (2014) Handbook of Inter-Rater Reliability (4th Edition). (Gaithersburg: Advanced Analytics, LLC) ISBN 978-0970806284
  • Landis, J. R. and Koch, G. G. (1977) "The measurement of observer agreement for categorical data" in Biometrics. Vol. 33, pp. 159–174
  • Powers, D. M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies 2 (1): 37–63.
  • Powers, David M. W. (2012). "The Problem with Kappa". Conference of the European Chapter of the Association for Computational Linguistics (EACL2012) Joint ROBUS-UNSUP Workshop.
  • Scott, W. (1955). "Reliability of content analysis: The case of nominal scale coding." Public Opinion Quarterly, Vol. 19, No. 3, pp. 321–325.
  • Sim, J. and Wright, C. C. (2005) "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements" in Physical Therapy. Vol. 85, No. 3, pp. 257–268

Further reading

  • Fleiss, J. L. and Cohen, J. (1973) "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability" in Educational and Psychological Measurement, Vol. 33, pp. 613–619
  • Fleiss, J. L. (1981) Statistical methods for rates and proportions. 2nd ed. (New York: John Wiley) pp. 38–46
  • Gwet, K. L. (2008) "Computing inter-rater reliability and its variance in the presence of high agreement", British Journal of Mathematical and Statistical Psychology, Vol. 61, pp. 29–48
