
3.2.3.1.6. sklearn.linear_model.ElasticNetCV

class sklearn.linear_model.ElasticNetCV(l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=1)

Elastic Net model with iterative fitting along a regularization path

The best model is selected by cross-validation.

Parameters :

l1_ratio : float, optional

float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. This parameter can be a list, in which case the different values are tested by cross-validation and the one giving the best prediction score is used. Note that a good choice of list of values for l1_ratio is often to put more values close to 1 (i.e. Lasso) and fewer close to 0 (i.e. Ridge), as in [.1, .5, .7, .9, .95, .99, 1]. (A usage sketch appears after this parameter list.)

eps : float, optional

Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.

n_alphas : int, optional

Number of alphas along the regularization path

alphas : numpy array, optional

List of alphas where to compute the models. If None, alphas are set automatically.

precompute : True | False | 'auto' | array-like

Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the choice is made automatically. The Gram matrix can also be passed as an argument.

max_iter : int, optional

The maximum number of iterations

tol : float, optional

The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.

cv : integer or cross-validation generator, optional

If an integer is passed, it is the number of folds (default 3). Specific cross-validation objects can be passed; see the sklearn.cross_validation module for the list of possible objects.

verbose : bool or integer

Amount of verbosity.

n_jobs : integer, optional

Number of CPUs to use during the cross validation. If -1, use all the CPUs. Note that this is used only if multiple values for l1_ratio are given.
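A minimal usage sketch (the synthetic data from make_regression is illustrative, not part of this reference):

from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

X, y = make_regression(n_samples=200, n_features=20, noise=1.0, random_state=0)

# Several l1_ratio candidates, weighted toward 1 as suggested above.
model = ElasticNetCV(l1_ratio=[.1, .5, .7, .9, .95, .99, 1], cv=3, n_jobs=-1)
model.fit(X, y)

print(model.alpha_)     # penalty strength selected by cross-validation
print(model.l1_ratio_)  # l1/l2 mix selected by cross-validation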

See also

enet_path, ElasticNet

Notes

See examples/linear_model/lasso_path_with_crossvalidation.py for an example.

To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
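For example, a small sketch of preparing X this way (the random data is illustrative):

import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.RandomState(0)
# Fortran-ordered, float64 input lets fit() work without an internal copy.
X = np.asfortranarray(rng.randn(100, 10), dtype=np.float64)
y = rng.randn(100)

ElasticNetCV(cv=3).fit(X, y)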

The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. More specifically, the optimization objective is:

1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to:

a * L1 + b * L2

for:

alpha = a + b and l1_ratio = a / (a + b).
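For example (the values of a and b are illustrative):

from sklearn.linear_model import ElasticNet

# Desired separate penalties: a for L1, b for L2.
a, b = 1.0, 0.25
enet = ElasticNet(alpha=a + b, l1_ratio=a / (a + b))  # alpha=1.25, l1_ratio=0.8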

Attributes

alpha_ : float

The amount of penalization chosen by cross validation.

l1_ratio_ : float

The compromise between l1 and l2 penalization chosen by cross validation.

coef_ : array, shape = (n_features,) | (n_targets, n_features)

Parameter vector (w in the cost function formula).

intercept_ : float | array, shape = (n_targets,)

Independent term in the decision function.

mse_path_ : array, shape = (n_l1_ratio, n_alpha, n_folds)

Mean square error for the test set on each fold, varying l1_ratio and alpha. (See the sketch after this list.)
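A short sketch of reading these attributes after fitting (the synthetic data is illustrative):

from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

X, y = make_regression(n_samples=150, n_features=10, noise=2.0, random_state=0)
model = ElasticNetCV(l1_ratio=[.5, .9, 1], n_alphas=50, cv=3).fit(X, y)

print(model.alpha_, model.l1_ratio_)  # values selected by cross-validation
print(model.mse_path_.shape)          # (n_l1_ratio, n_alpha, n_folds) = (3, 50, 3)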

Methods

decision_function(X) Decision function of the linear model.
fit(X, y) Fit linear model with coordinate descent
get_params([deep]) Get parameters for this estimator.
path(X, y[, l1_ratio, eps, n_alphas, ...]) Compute Elastic-Net path with coordinate descent
predict(X) Predict using the linear model
score(X, y) Returns the coefficient of determination R^2 of the prediction.
set_params(**params) Set the parameters of this estimator.
__init__(l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, fit_intercept=True, normalize=False, precompute='auto', max_iter=1000, tol=0.0001, cv=None, copy_X=True, verbose=0, n_jobs=1)
decision_function(X)

Decision function of the linear model.

Parameters :

X : {array-like, sparse matrix}, shape = (n_samples, n_features)

Samples.

Returns :

C : array, shape = (n_samples,)

Returns predicted values.

fit(X, y)

Fit linear model with coordinate descent

The fit is performed over a grid of alphas, and the best alpha is selected by cross-validation.

Parameters :

X : {array-like, sparse matrix}, shape (n_samples, n_features)

Training data. Pass directly as float64, Fortran-contiguous data to avoid unnecessary memory duplication

y : array-like, shape (n_samples,) or (n_samples, n_targets)

Target values

get_params(deep=True)

Get parameters for this estimator.

Parameters :

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns :

params : mapping of string to any

Parameter names mapped to their values.

static path(X, y, l1_ratio=0.5, eps=0.001, n_alphas=100, alphas=None, precompute='auto', Xy=None, fit_intercept=True, normalize=False, copy_X=True, coef_init=None, verbose=False, return_models=False, **params)

Compute Elastic-Net path with coordinate descent

The Elastic Net optimization function is:

1 / (2 * n_samples) * ||y - Xw||^2_2
+ alpha * l1_ratio * ||w||_1
+ 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

Parameters :

X : {array-like, sparse matrix}, shape (n_samples, n_features)

Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication

y : ndarray, shape = (n_samples,)

Target values

l1_ratio : float, optional

float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.

eps : float

Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3

n_alphas : int, optional

Number of alphas along the regularization path

alphas : ndarray, optional

List of alphas where to compute the models. If None, alphas are set automatically.

precompute : True | False | 'auto' | array-like

Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', the choice is made automatically. The Gram matrix can also be passed as an argument.

Xy : array-like, optional

Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.

fit_intercept : bool

Fit or not an intercept. WARNING : will be deprecated in 0.16

normalize : boolean, optional, default False

If True, the regressors X will be normalized before regression. WARNING : will be deprecated in 0.16

copy_X : boolean, optional, default True

If True, X will be copied; else, it may be overwritten.

coef_init : array, shape (n_features, ) | None

The initial values of the coefficients.

verbose : bool or integer

Amount of verbosity

return_models : boolean, optional, default False

If True, the function will return a list of models. Setting it to False changes the function output to the values of the alphas and the coefficients along the path. Returning the model list will be removed in version 0.16.

params : kwargs

keyword arguments passed to the coordinate descent solver.

Returns :

models : a list of models along the regularization path

(Is returned when return_models is set to True (the default).)

alphas : array, shape (n_alphas + 1,)

The alphas along the path where models are computed. (Is returned, along with coefs, when return_models is set to False.)

coefs : array, shape (n_features, n_alphas + 1)

Coefficients along the path. (Is returned, along with alphas, when return_models is set to False.)

dual_gaps : array, shape (n_alphas + 1,)

The dual gaps at the end of the optimization for each alpha. (Is returned, along with alphas, when return_models is set to False.)

Notes

See examples/plot_lasso_coordinate_descent_path.py for an example.

Deprecation Notice: Setting return_models to False will make the Lasso Path return an output in the style used by lars_path. This will become the norm as of version 0.15. Leaving return_models set to True will let the function return a list of models as before.
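As a rough sketch of the tuple-style output, using the module-level enet_path function (the same computation as this static method); this assumes a version where the return_models=False behavior is in effect, and the synthetic data is illustrative:

from sklearn.datasets import make_regression
from sklearn.linear_model import enet_path

X, y = make_regression(n_samples=100, n_features=8, noise=1.0, random_state=0)

# Tuple-style output: path alphas, coefficients at each alpha, and the
# final dual gaps; no model list is built.
alphas, coefs, dual_gaps = enet_path(X, y, l1_ratio=0.5, n_alphas=50)

print(alphas.shape)  # (50,)
print(coefs.shape)   # (8, 50), i.e. (n_features, n_alphas)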

predict(X)

Predict using the linear model

Parameters :

X : {array-like, sparse matrix}, shape = (n_samples, n_features)

Samples.

Returns :

C : array, shape = (n_samples,)

Returns predicted values.

score(X, y)

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). Best possible score is 1.0, lower values are worse.

Parameters :

X : array-like, shape = (n_samples, n_features)

Test samples.

y : array-like, shape = (n_samples,)

True values for X.

Returns :

score : float

R^2 of self.predict(X) w.r.t. y.
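A quick check of this definition against score (the synthetic data is illustrative):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

X, y = make_regression(n_samples=100, n_features=5, noise=3.0, random_state=0)
model = ElasticNetCV(cv=3).fit(X, y)

y_pred = model.predict(X)
u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
assert np.isclose(model.score(X, y), 1 - u / v)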

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The former have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns : self
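For example, a minimal sketch with a pipeline (the step names 'scale' and 'enet' are arbitrary choices here):

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import ElasticNetCV

pipe = Pipeline([('scale', StandardScaler()), ('enet', ElasticNetCV(cv=3))])
pipe.set_params(enet__l1_ratio=0.7)  # the <component>__<parameter> form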