For a deeper understanding of the Kappa value and of the difference and relationship between reliability and validity, we recommend reading the following article:
Viera and Garrett (2005) – Understanding interobserver agreement: the kappa statistic.
If a measurement or physical examination test has a continuous outcome, such as blood pressure or weight, the intra-class correlation coefficient (ICC) is used to express intra- or inter-rater reliability.
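As a minimal sketch of how these two coefficients are computed in practice (not taken from the article; the rater names and measurement values below are made up for illustration), Cohen's kappa can be obtained from scikit-learn, and a one-way random-effects ICC(1,1) can be computed directly from its ANOVA definition:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# --- Kappa: two raters classifying the same 8 findings (hypothetical data) ---
rater_a = ["positive", "positive", "negative", "negative",
           "positive", "negative", "negative", "positive"]
rater_b = ["positive", "negative", "negative", "negative",
           "positive", "negative", "positive", "positive"]
kappa = cohen_kappa_score(rater_a, rater_b)

# --- ICC(1,1): n subjects each measured by k raters (continuous outcome) ---
def icc_one_way(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1); rows = subjects, columns = raters."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-subjects and within-subjects mean squares from a one-way ANOVA
    msb = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Systolic blood pressure measured twice in 5 patients (made-up values)
bp = np.array([[120, 122], [135, 133], [118, 121], [142, 140], [128, 127]])
print(f"kappa = {kappa:.2f}, ICC(1,1) = {icc_one_way(bp):.2f}")
```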
Values for the Kappa (κ) as well as for the ICC are categorized and labeled according to Landis and Koch (1977):
| Kappa (κ) or ICC | Interpretation |
|------------------|----------------|
| 0.81 – 1.00 | Almost perfect |
| 0.61 – 0.80 | Substantial |
| 0.41 – 0.60 | Moderate |
| 0.21 – 0.40 | Fair |
| < 0.21 | Slight |

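As a small illustration of how these cut-offs are applied (a hypothetical helper, not part of the article), a coefficient can be mapped to its label from the table above:

```python
def interpret_agreement(value: float) -> str:
    """Map a kappa or ICC value to the Landis and Koch (1977) label above."""
    if value >= 0.81:
        return "Almost perfect"
    if value >= 0.61:
        return "Substantial"
    if value >= 0.41:
        return "Moderate"
    if value >= 0.21:
        return "Fair"
    return "Slight"

print(interpret_agreement(0.72))  # -> "Substantial"
```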