
sklearn.metrics.hamming_loss

sklearn.metrics.hamming_loss(y_true, y_pred, classes=None)

Compute the average Hamming loss.

The Hamming loss is the fraction of labels that are incorrectly predicted.
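For a single sample, one way to write this (a sketch consistent with the definition above; \hat{y}_j denotes the predicted value for the j-th label) is

L_{\mathrm{Hamming}}(y, \hat{y}) = \frac{1}{n_{\text{labels}}} \sum_{j=0}^{n_{\text{labels}} - 1} 1(\hat{y}_j \ne y_j)

where 1(\cdot) is the indicator function.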

Parameters:

y_true : array-like or list of labels or label indicator matrix

Ground truth (correct) labels.

y_pred : array-like or list of labels or label indicator matrix

Predicted labels, as returned by a classifier.

classes : array, shape = [n_labels], optional

Integer array of labels.

Returns:

loss : float or int

The average Hamming loss between the elements of y_true and y_pred.

Notes

In multiclass classification, the Hamming loss corresponds to the Hamming distance between y_true and y_pred, which is equivalent to the subset zero_one_loss function.
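
A minimal doctest-style illustration of this equivalence, reusing the arrays from the Examples section (zero_one_loss is assumed to be called with its default normalize=True):

>>> from sklearn.metrics import hamming_loss, zero_one_loss
>>> y_true = [2, 2, 3, 4]
>>> y_pred = [1, 2, 3, 4]
>>> hamming_loss(y_true, y_pred) == zero_one_loss(y_true, y_pred)
True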

In multilabel classification, the Hamming loss is different from the subset zero-one loss. The zero-one loss considers the entire set of labels for a given sample incorrect if it does not entirely match the true set of labels. The Hamming loss is more forgiving in that it penalizes only the individual labels that are wrong.

The Hamming loss is upper-bounded by the subset zero-one loss. When normalized over samples, the Hamming loss is always between 0 and 1.
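
The following sketch illustrates both points on the indicator matrix from the Examples section: neither row of y_pred matches its row of y_true exactly, so the subset zero-one loss is 1.0, while the Hamming loss only charges for the three wrong labels out of four:

>>> import numpy as np
>>> from sklearn.metrics import hamming_loss, zero_one_loss
>>> y_true = np.array([[0.0, 1.0], [1.0, 1.0]])
>>> y_pred = np.zeros((2, 2))
>>> hamming_loss(y_true, y_pred)
0.75
>>> zero_one_loss(y_true, y_pred)
1.0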

References

[R161] Grigorios Tsoumakas, Ioannis Katakis. Multi-Label Classification: An Overview. International Journal of Data Warehousing & Mining, 3(3), 1-13, July-September 2007.
[R162] Wikipedia entry on the Hamming distance (https://en.wikipedia.org/wiki/Hamming_distance)

Examples

>>> from sklearn.metrics import hamming_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> hamming_loss(y_true, y_pred)
0.25

In the multilabel case with binary indicator format:

>>> import numpy as np
>>> hamming_loss(np.array([[0.0, 1.0], [1.0, 1.0]]), np.zeros((2, 2)))
0.75

and with a list of labels format:

>>> hamming_loss([(1, 2), (3, )], [(1, 2), tuple()])  # doctest: +ELLIPSIS
0.166...