
sklearn.metrics.accuracy_score

sklearn.metrics.accuracy_score(y_true, y_pred, normalize=True)

Accuracy classification score.

In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.

Parameters:

y_true : array-like or list of labels or label indicator matrix

Ground truth (correct) labels.

y_pred : array-like or list of labels or label indicator matrix

Predicted labels, as returned by a classifier.

normalize : bool, optional (default=True)

If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.

Returns:

score : float

If normalize == True, return the fraction of correctly classified samples (float); otherwise, return the number of correctly classified samples (int).

The best performance is 1 with normalize == True and the number of samples with normalize == False (see the Examples below).

Notes

In binary and multiclass classification, this function is equal to the jaccard_similarity_score function.
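For instance, assuming a scikit-learn version that still provides jaccard_similarity_score (later releases replace it with jaccard_score), the two functions agree on multiclass input:

>>> from sklearn.metrics import accuracy_score, jaccard_similarity_score
>>> jaccard_similarity_score([0, 1, 2, 3], [0, 2, 1, 3])
0.5
>>> accuracy_score([0, 1, 2, 3], [0, 2, 1, 3])
0.5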

Examples

>>> import numpy as np
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2
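
For comparison, a perfect prediction scores 1.0, or the total number of samples with normalize=False:

>>> accuracy_score([1, 2, 3], [1, 2, 3])
1.0
>>> accuracy_score([1, 2, 3], [1, 2, 3], normalize=False)
3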

In the multilabel case with binary indicator format:

>>> accuracy_score(np.array([[0.0, 1.0], [1.0, 1.0]]), np.ones((2, 2)))
0.5
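
The same subset-accuracy value can be reproduced by hand with a row-wise exact-match test (a minimal NumPy sketch of the definition, not the library's implementation):

>>> y_true = np.array([[0, 1], [1, 1]])
>>> y_pred = np.ones((2, 2))
>>> np.all(y_true == y_pred, axis=1).mean()
0.5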

and with a list of labels format:

>>> accuracy_score([(1, ), (3, )], [(1, 2), tuple()])
0.0