Explaining Low Kappa Value
FORUM QUESTION: BarbaraGrimes - 05 Feb 2010 - 15:44
I have a set of data where 3 reviewers classified 20 subjects.
The client is interested in agreement among the reviewers.
The possible categories were 1,2 and 3.
Every subject was rated 3 by all the reviewers except for 2 individuals, each of whom received a rating of 1 from one rater. (Out of 60 ratings, 58 were 3 and 2 were 1.)
The kappa value is only 0.03.
How do I explain this to the client?
Should we be reporting a different statistic?
FORUM ANSWER: PeterBacchetti - 07 Feb 2010 - 18:04
Kappa measures how much better the observed agreement is than the agreement expected by chance given the marginal totals. In this case the marginal totals produce such high chance agreement that there is essentially nothing left for kappa to measure, so even near-perfect raw agreement yields a kappa near zero.
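To make this concrete, here is a small sketch computing Fleiss' kappa (a common choice for three or more raters; the original post does not say which kappa variant was used, and different variants give slightly different values, but all come out near zero for these data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from per-subject category counts.

    counts: list of rows, one per subject; each row gives how many
    raters assigned each category to that subject.
    """
    N = len(counts)          # number of subjects
    n = sum(counts[0])       # raters per subject
    k = len(counts[0])       # number of categories

    # Observed agreement: average over subjects of the proportion
    # of agreeing rater pairs.
    p_bar = sum(
        sum(c * (c - 1) for c in row) / (n * (n - 1))
        for row in counts
    ) / N

    # Chance agreement from the marginal category proportions.
    total = N * n
    p_j = [sum(row[j] for row in counts) / total for j in range(k)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# The data described above: 18 subjects rated 3 by all three raters,
# and 2 subjects rated (1, 3, 3). Columns are categories 1, 2, 3.
data = [[0, 0, 3]] * 18 + [[1, 0, 2]] * 2
print(fleiss_kappa(data))
```

Running this shows the mechanism directly: the observed agreement is about 0.933, but the chance agreement implied by the marginals is about 0.936, so kappa is essentially zero (slightly negative here, since observed agreement falls just below chance).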
To evaluate the reproducibility of the rating method, the investigators should obtain data where there is more variation in the items to be rated, with closer to a third of the ratings falling in each category.
Kappa is an abstract quantity, and the actual tallies and estimated agreement rates may be more useful for some purposes. In this case, however, even the high raw agreement rate (58 of 60 ratings identical) is misleading, because the lack of variation among the subjects means the data say little about how the raters would perform on a more heterogeneous sample.