Inter-rater reliability refers to statistical measures for assessing the extent of agreement among two or more raters (also called "judges" or "observers"). Synonyms include inter-rater agreement, inter-observer agreement and inter-rater concordance.
In this course, you will learn the basics of inter-rater reliability and how to compute the main statistical measures for assessing it. These measures, illustrated with a short R sketch after the list, include:
- Cohen’s Kappa: can be used for two nominal or two ordinal variables, but is most appropriate for two nominal variables. It counts only strict (exact) agreements between observers.
- Weighted Kappa: should be used for two ordinal variables only. It gives credit for partial (near-miss) agreement.
- Light’s Kappa: the average of Cohen’s Kappa computed over all pairs of raters, used when there are more than two categorical variables (i.e., more than two raters).
- Fleiss Kappa: for two or more categorical variables (nominal or ordinal).
- Intraclass correlation coefficient (ICC): for continuous or ordinal data.
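As a quick preview of what these computations can look like in practice, here is a minimal sketch using the CRAN package `irr`. The data frames `diagnoses` and `scores` are invented for illustration (one column per rater) and are not part of the course material; the course itself may use different data and functions.

```r
# Minimal sketch, assuming the CRAN package "irr" is installed.
library(irr)

# Hypothetical ordinal ratings from two raters (1 = mild ... 3 = severe)
diagnoses <- data.frame(
  rater1 = c(1, 3, 2, 1, 3, 2),
  rater2 = c(1, 2, 2, 1, 3, 3)
)

# Cohen's Kappa: strict (all-or-nothing) agreement between the two raters
kappa2(diagnoses, weight = "unweighted")

# Weighted Kappa: gives partial credit to near-misses on the ordinal scale
kappa2(diagnoses, weight = "equal")

# With three or more rater columns, Light's and Fleiss' Kappa apply:
#   kappam.light(ratings)   # average of all pairwise Cohen's Kappas
#   kappam.fleiss(ratings)  # Fleiss' Kappa

# Intraclass correlation coefficient for continuous ratings
scores <- data.frame(r1 = c(4.1, 3.5, 5.0, 2.2),
                     r2 = c(4.3, 3.4, 4.8, 2.5))
icc(scores, model = "twoway", type = "agreement", unit = "single")
```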
You will also learn how to visualize the agreement between raters. The course presents the basic principles of these analyses and provides examples in R.
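As one possible way to visualize agreement between two raters (an illustration, not necessarily the approach taken in the course), the sketch below uses `agreementplot()` from the CRAN package `vcd` to draw a Bangdiwala agreement chart from a square contingency table; the example ratings are the same made-up values as above.

```r
# Possible visualization, assuming the CRAN package "vcd" is installed;
# the ratings are the made-up two-rater example from the previous sketch.
library(vcd)

rater1 <- c(1, 3, 2, 1, 3, 2)
rater2 <- c(1, 2, 2, 1, 3, 3)

# Cross-tabulate the two raters into a square contingency table
tab <- table(rater1, rater2)

# Bangdiwala agreement chart: dark squares mark exact agreement,
# lighter rectangles the maximum attainable agreement per category
agreementplot(tab)
```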
Related Book
Inter-Rater Reliability Essentials: Practical Guide in R