
sklearn.feature_selection.RFECV

class sklearn.feature_selection.RFECV(estimator, step=1, cv=None, scoring=None, loss_func=None, estimator_params={}, verbose=0)
Feature ranking with recursive feature elimination and cross-validated
selection of the best number of features.
Parameters :

estimator : object

A supervised learning estimator with a fit method that updates a coef_ attribute that holds the fitted parameters. Important features must correspond to high absolute values in the coef_ array.

For instance, this is the case for most supervised learning algorithms such as Support Vector Classifiers and Generalized Linear Models from the svm and linear_model modules.

step : int or float, optional (default=1)

If greater than or equal to 1, then step corresponds to the (integer) number of features to remove at each iteration. If within (0.0, 1.0), then step corresponds to the percentage (rounded down) of features to remove at each iteration.

cv : int or cross-validation generator, optional (default=None)

If int, it is the number of folds. If None, 3-fold cross-validation is performed by default. Specific cross-validation objects can also be passed, see sklearn.cross_validation module for details.

scoring : string, callable or None, optional, default: None

A string (see model evaluation documentation) or a scorer callable object / function with signature scorer(estimator, X, y).

estimator_params : dict

Parameters for the external estimator. Useful for doing grid searches.

verbose : int, default=0

Controls verbosity of output.
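
As an illustrative sketch of how these parameters combine (the values below are arbitrary choices, not defaults), a selector can be configured to drop a fraction of the features at each iteration, use a fixed number of folds, and score candidate subsets with a named metric:

>>> from sklearn.feature_selection import RFECV
>>> from sklearn.svm import SVR
>>> # remove roughly 10% of the features at each iteration, use 5-fold CV,
>>> # and score each candidate feature subset with the "r2" metric
>>> selector = RFECV(SVR(kernel="linear"), step=0.1, cv=5, scoring="r2")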


Examples

The following example shows how to recover the 5 informative features in the Friedman #1 dataset, whose identity is not known a priori.

>>> from sklearn.datasets import make_friedman1
>>> from sklearn.feature_selection import RFECV
>>> from sklearn.svm import SVR
>>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
>>> estimator = SVR(kernel="linear")
>>> selector = RFECV(estimator, step=1, cv=5)
>>> selector = selector.fit(X, y)
>>> selector.support_ 
array([ True,  True,  True,  True,  True,
        False, False, False, False, False], dtype=bool)
>>> selector.ranking_
array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5])

Attributes

n_features_ : int

The number of selected features with cross-validation.

support_ : array of shape [n_features]

The mask of selected features.

ranking_ : array of shape [n_features]

The feature ranking, such that ranking_[i] corresponds to the ranking position of the i-th feature. Selected (i.e., estimated best) features are assigned rank 1.

grid_scores_ : array of shape [n_subsets_of_features]

The cross-validation scores, such that grid_scores_[i] corresponds to the CV score of the i-th subset of features.

estimator_ : object

The external estimator fit on the reduced dataset.
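
Continuing the Friedman #1 example above, the fitted attributes can be read off directly; the shapes noted in the comments follow from the 10 input features:

>>> n_opt = selector.n_features_       # 5 features selected by cross-validation
>>> mask = selector.support_           # boolean mask of shape (10,)
>>> ranks = selector.ranking_          # rank 1 marks a selected feature
>>> cv_scores = selector.grid_scores_  # one CV score per feature subset evaluated
>>> final_svr = selector.estimator_    # the SVR refit on the selected features only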

Methods

decision_function(X) Reduce X to the selected features and then compute the decision function of the underlying estimator.
fit(X, y) Fit the RFE model and automatically tune the number of selected features.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
get_support([indices]) Get a mask, or integer index, of the features selected.
inverse_transform(X) Reverse the transformation operation.
predict(X) Reduce X to the selected features and then predict using the underlying estimator.
predict_proba(X) Reduce X to the selected features and then predict class probabilities using the underlying estimator.
score(X, y) Reduce X to the selected features and then return the score of the underlying estimator.
set_params(**params) Set the parameters of this estimator.
transform(X) Reduce X to the selected features.

__init__(estimator, step=1, cv=None, scoring=None, loss_func=None, estimator_params={}, verbose=0)
fit(X, y)

Fit the RFE model and automatically tune the number of selected features.

Parameters :

X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Training vector, where n_samples is the number of samples and n_features is the total number of features.

y : array-like, shape = [n_samples]

Target values (integers for classification, real numbers for regression).
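
A minimal sketch, reusing X, y and the linear SVR from the example above; fit returns the selector itself, so the call can be chained:

>>> selector = RFECV(SVR(kernel="linear"), step=1, cv=5).fit(X, y)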

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters :

X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

Returns :

X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.
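
For instance, applied to the 50 x 10 Friedman #1 data from the example above, where 5 features are selected:

>>> X_new = selector.fit_transform(X, y)
>>> X_new.shape    # only the selected features remain
(50, 5)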

get_params(deep=True)

Get parameters for this estimator.

Parameters :

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns :

params : mapping of string to any

Parameter names mapped to their values.
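
For example, the constructor arguments of the selector built above can be read back:

>>> params = selector.get_params()
>>> params["step"], params["cv"]   # the values passed to the constructor
(1, 5)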

get_support(indices=False)

Get a mask, or integer index, of the features selected

Parameters :

indices : boolean (default False)

If True, the return value will be an array of integers, rather than a boolean mask.

Returns :

support : array

An index that selects the retained features from a feature vector. If indices is False, this is a boolean array of shape [# input features], in which an element is True iff its corresponding feature is selected for retention. If indices is True, this is an integer array of shape [# output features] whose values are indices into the input feature vector.
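
For the fitted selector from the Friedman #1 example, both forms look as follows:

>>> mask = selector.get_support()             # boolean mask, identical to selector.support_
>>> idx = selector.get_support(indices=True)  # array([0, 1, 2, 3, 4]) for that example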

inverse_transform(X)

Reverse the transformation operation

Parameters :

X : array of shape [n_samples, n_selected_features]

The input samples.

Returns :

X_r : array of shape [n_samples, n_original_features]

X with columns of zeros inserted where features would have been removed by transform.
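
A short round-trip sketch using the fitted selector from the example above:

>>> X_sel = selector.transform(X)                # shape (50, 5): the selected columns
>>> X_back = selector.inverse_transform(X_sel)   # shape (50, 10): zero columns where features were removed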

predict(X)

Reduce X to the selected features and then predict using the underlying estimator.

Parameters :

X : array of shape [n_samples, n_features]

The input samples.

Returns :

y : array of shape [n_samples]

The predicted target values.
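
With the fitted selector from the example above:

>>> y_pred = selector.predict(X)   # predictions from the SVR refit on the 5 selected features
>>> y_pred.shape
(50,)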

score(X, y)

Reduce X to the selected features and then return the score of the underlying estimator.

Parameters :

X : array of shape [n_samples, n_features]

The input samples.

y : array of shape [n_samples]

The target values.
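
For the SVR-based selector above, this is the regressor's usual R^2 score computed on the reduced feature set:

>>> r2 = selector.score(X, y)   # coefficient of determination of the underlying SVR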

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns :

self :
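
A brief sketch; the nested estimator__C form assumes the wrapped estimator exposes a C parameter, as the SVR in the example above does:

>>> selector = selector.set_params(step=2, verbose=1)  # set_params returns the estimator itself
>>> selector = selector.set_params(estimator__C=10.0)  # nested syntax reaches the wrapped SVR's C
>>> # changed settings only take effect on the next call to fit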
transform(X)

Reduce X to the selected features.

Parameters :

X : array of shape [n_samples, n_features]

The input samples.

Returns :

X_r : array of shape [n_samples, n_selected_features]

The input samples with only the selected features.
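
Using the fitted selector from the Friedman #1 example:

>>> X_reduced = selector.transform(X)   # shape (50, 5): only the 5 selected features are kept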
