sklearn.metrics.hinge_loss

sklearn.metrics.hinge_loss(y_true, pred_decision, pos_label=None, neg_label=None)

Average hinge loss (non-regularized)

Assuming the labels in y_true are encoded as +1 and -1, whenever a prediction mistake is made the margin, margin = y_true * pred_decision, is negative (since the signs disagree), so 1 - margin is always greater than 1. The cumulative hinge loss, the sum over samples of max(0, 1 - margin), is therefore an upper bound on the number of mistakes made by the classifier.
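
A minimal sketch of the computation behind this bound, assuming NumPy (hinge_loss_manual is a hypothetical helper written for illustration, not part of scikit-learn):

>>> import numpy as np
>>> def hinge_loss_manual(y_true, pred_decision):
...     # margin = y * f(x); the per-sample loss max(0, 1 - margin)
...     # is at least 1 whenever the margin is negative (a mistake)
...     margins = np.asarray(y_true) * np.asarray(pred_decision)
...     return np.mean(np.maximum(0.0, 1.0 - margins))
>>> hinge_loss_manual([1, -1], [-0.5, 2.0])  # both predictions wrong
2.25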

Parameters

y_true : array, shape = [n_samples]

True target, consisting of two distinct integer values. The positive label must be greater than the negative label.

pred_decision : array, shape = [n_samples] or [n_samples, n_classes]

Predicted decisions, as output by decision_function (floats).

Returns

loss : float

References

[R163] Wikipedia entry on the hinge loss (https://en.wikipedia.org/wiki/Hinge_loss)

Examples

>>> from sklearn import svm
>>> from sklearn.metrics import hinge_loss
>>> X = [[0], [1]]
>>> y = [-1, 1]
>>> est = svm.LinearSVC(random_state=0)
>>> est.fit(X, y)
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
     intercept_scaling=1, loss='l2', multi_class='ovr', penalty='l2',
     random_state=0, tol=0.0001, verbose=0)
>>> pred_decision = est.decision_function([[-2], [3], [0.5]])
>>> pred_decision  
array([-2.18...,  2.36...,  0.09...])
>>> hinge_loss([-1, 1, 1], pred_decision)  
0.30...
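
The reported value can be checked by hand from the max(0, 1 - margin) formula (a sketch assuming NumPy and the approximate decision values shown above):

>>> import numpy as np
>>> margins = np.array([-1, 1, 1]) * pred_decision  # y * f(x)
>>> np.mean(np.maximum(0, 1 - margins))
0.30...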