cohen_kappa_score

sklearn.metrics.cohen_kappa_score(y1, y2, *, labels=None, weights=None, sample_weight=None, zero_division='warn')

Compute Cohen’s kappa: a statistic that measures inter-annotator agreement.

This function computes Cohen’s kappa [1], a score that expresses the level of agreement between two annotators on a classification problem. It is defined as

\[\kappa = (p_o - p_e) / (1 - p_e)\]

where \(p_o\) is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and \(p_e\) is the expected agreement when both annotators assign labels randomly. \(p_e\) is estimated using a per-annotator empirical prior over the class labels [2].
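
As a concrete illustration of \(p_o\) and \(p_e\), the following minimal NumPy sketch reconstructs the statistic directly from the formula above (an illustration, not a reproduction of scikit-learn's internal implementation):

import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y1 = ["negative", "positive", "negative", "neutral", "positive"]
y2 = ["negative", "positive", "negative", "neutral", "negative"]

# Contingency table of the two annotators, normalized to joint probabilities.
p = confusion_matrix(y1, y2).astype(float)
p /= p.sum()

p_o = np.trace(p)                    # observed agreement ratio
p_e = p.sum(axis=1) @ p.sum(axis=0)  # chance agreement from per-annotator priors
kappa = (p_o - p_e) / (1 - p_e)

assert np.isclose(kappa, cohen_kappa_score(y1, y2))  # both give 0.6875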

Read more in the User Guide.

Parameters:
y1 : array-like of shape (n_samples,)

Labels assigned by the first annotator.

y2 : array-like of shape (n_samples,)

Labels assigned by the second annotator. The kappa statistic is symmetric, so swapping y1 and y2 doesn’t change the value.

labels : array-like of shape (n_classes,), default=None

List of labels to index the matrix. This may be used to select a subset of labels. If None, all labels that appear at least once in y1 or y2 are used.

weights : {‘linear’, ‘quadratic’}, default=None

Weighting type to calculate the score. None means no weighting, so every disagreement counts equally; “linear” weights each disagreement by the absolute difference between the (sorted) label indices; “quadratic” weights it by the squared difference, which penalizes distant disagreements on an ordinal scale more heavily. A sketch of how these weights enter the statistic follows this parameter list.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

zero_division : {“warn”, 0.0, 1.0, np.nan}, default=”warn”

Sets the return value when there is a zero division. This is the case when both labelings y1 and y2 exclusively contain the 0 class (e.g. [0, 0, 0, 0]), or when both are empty. If set to “warn”, 0.0 is returned, but a warning is also raised.

Added in version 1.6.
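
The weights option only changes how disagreements are counted. In the standard weighted-kappa formulation, a disagreement between label indices i and j is penalized by \(|i - j|\) (“linear”) or \((i - j)^2\) (“quadratic”), and the statistic becomes \(\kappa_w = 1 - \sum_{ij} w_{ij} O_{ij} / \sum_{ij} w_{ij} E_{ij}\), where \(O\) is the observed contingency table and \(E\) the table expected by chance. The sketch below re-derives the weighted statistic from this textbook definition and checks it against cohen_kappa_score; it is an illustration under these assumptions, not scikit-learn’s internal code:

import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def weighted_kappa(y1, y2, weights=None):
    """Illustrative re-derivation of (weighted) Cohen's kappa from its definition."""
    cm = confusion_matrix(y1, y2).astype(float)         # observed contingency table O
    idx = np.arange(cm.shape[0])
    if weights is None:                                  # plain 0/1 disagreement indicator
        w = (idx[:, None] != idx[None, :]).astype(float)
    elif weights == "linear":                            # penalty = |i - j|
        w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    elif weights == "quadratic":                         # penalty = (i - j) ** 2
        w = (idx[:, None] - idx[None, :]).astype(float) ** 2
    else:
        raise ValueError(f"Unknown weights: {weights!r}")
    expected = np.outer(cm.sum(axis=1), cm.sum(axis=0)) / cm.sum()  # chance table E
    return 1.0 - (w * cm).sum() / (w * expected).sum()

# Ordinal ratings from two hypothetical annotators.
y1 = [0, 1, 2, 2, 1]
y2 = [0, 1, 1, 2, 2]
for w in (None, "linear", "quadratic"):
    assert np.isclose(weighted_kappa(y1, y2, w), cohen_kappa_score(y1, y2, weights=w))

Quadratic weighting is the usual choice for ordinal ratings, since confusing adjacent grades costs far less than confusing distant ones.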

Returns:
kappa : float

The kappa statistic, which is a number between -1 and 1. The maximum value means complete agreement; zero or lower means chance agreement.

References

[1] J. Cohen (1960). “A coefficient of agreement for nominal scales”. Educational and Psychological Measurement 20(1):37-46.

[2] R. Artstein and M. Poesio (2008). “Inter-coder agreement for computational linguistics”. Computational Linguistics 34(4):555-596.

Examples

>>> from sklearn.metrics import cohen_kappa_score
>>> y1 = ["negative", "positive", "negative", "neutral", "positive"]
>>> y2 = ["negative", "positive", "negative", "neutral", "negative"]
>>> cohen_kappa_score(y1, y2)
np.float64(0.6875)
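
A further illustration (not part of the original example set; the ordinal labels below are made up, and outputs are rounded for display): weighted variants treat near-misses more leniently, and zero_division controls the degenerate case where both annotators use a single class.

>>> y1 = [0, 1, 2, 2, 1]
>>> y2 = [0, 1, 1, 2, 2]
>>> round(float(cohen_kappa_score(y1, y2)), 3)
0.375
>>> round(float(cohen_kappa_score(y1, y2, weights="linear")), 3)
0.5
>>> round(float(cohen_kappa_score(y1, y2, weights="quadratic")), 3)
0.643
>>> float(cohen_kappa_score([0, 0, 0], [0, 0, 0], zero_division=0.0))
0.0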