8.7.2.5. sklearn.feature_extraction.text.TfidfTransformer¶
- class sklearn.feature_extraction.text.TfidfTransformer(norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)¶
Transform a count matrix to a normalized tf or tf–idf representation
Tf means term-frequency while tf–idf means term-frequency times inverse document-frequency. This is a common term-weighting scheme in information retrieval that has also found good use in document classification.
The goal of using tf–idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus.
In the SMART notation used in IR, this class implements several tf–idf variants: tf is “n” (natural) by default and “l” (logarithmic) when sublinear_tf=True; idf is “t” when use_idf=True and “n” (none) otherwise; normalization is “c” (cosine) when norm='l2' and “n” (none) when norm=None.
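A minimal usage sketch, assuming a toy corpus and a CountVectorizer to produce the raw counts (the corpus, variable names and printed values below are illustrative, not prescribed by this class):

    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

    # Toy corpus used only for illustration.
    corpus = [
        "the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs are friends",
    ]

    # Build a raw term-count matrix, then re-weight it with tf-idf.
    counts = CountVectorizer().fit_transform(corpus)
    tfidf = TfidfTransformer(norm='l2', use_idf=True).fit_transform(counts)

    print(tfidf.shape)                 # (n_documents, n_features)
    print(tfidf.toarray().round(2))    # l2-normalized rows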
Parameters : norm : ‘l1’, ‘l2’ or None, optional
Norm used to normalize term vectors. None for no normalization.
use_idf : boolean, optional
Enable inverse-document-frequency reweighting.
smooth_idf : boolean, optional
Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions.
sublinear_tf : boolean, optional
Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
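To make the interplay of these parameters concrete, here is a small re-implementation sketch of the weighting they control, using the commonly documented formulas (the exact formula in a given scikit-learn release may differ, e.g. in how smoothing is applied; treat this as an illustration rather than the implementation):

    import numpy as np

    def toy_tfidf(counts, use_idf=True, smooth_idf=True, sublinear_tf=False):
        counts = np.asarray(counts, dtype=float)
        n_samples = counts.shape[0]

        tf = counts.copy()
        if sublinear_tf:
            # sublinear_tf: replace tf with 1 + log(tf) for nonzero counts.
            tf[tf > 0] = 1.0 + np.log(tf[tf > 0])

        if use_idf:
            # Document frequency; assumes every term occurs in at least one document.
            df = (counts > 0).sum(axis=0)
            if smooth_idf:
                # Add one to document frequencies (and to n_samples), as if an
                # extra document contained every term exactly once.
                idf = np.log((1.0 + n_samples) / (1.0 + df)) + 1.0
            else:
                idf = np.log(n_samples / df) + 1.0
            tf = tf * idf

        # norm='l2': scale every row to unit Euclidean length.
        norms = np.linalg.norm(tf, axis=1, keepdims=True)
        return tf / np.where(norms == 0, 1.0, norms)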
References
[Yates2011] R. Baeza-Yates and B. Ribeiro-Neto (2011). Modern Information Retrieval. Addison Wesley, pp. 68–74.
[MSR2008] C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press, pp. 121–125.
Methods
fit(X[, y])  Learn the idf vector (global term weights)
fit_transform(X[, y])  Fit to data, then transform it
get_params([deep])  Get parameters for the estimator
set_params(**params)  Set the parameters of the estimator.
transform(X[, copy])  Transform a count matrix to a tf or tf–idf representation
- __init__(norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)¶
- fit(X, y=None)¶
Learn the idf vector (global term weights)
Parameters : X : sparse matrix, [n_samples, n_features]
A matrix of term/token counts.
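For example, fitting on a count matrix learns one global idf weight per feature; recent scikit-learn releases expose these as the idf_ attribute (the attribute name and toy counts are assumptions for this sketch):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfTransformer

    # Toy count matrix: 3 documents, 4 terms (illustrative values).
    X = np.array([[3, 0, 1, 0],
                  [2, 0, 0, 1],
                  [0, 1, 1, 1]])

    transformer = TfidfTransformer().fit(X)
    print(transformer.idf_)   # one idf weight per feature, shared by all documents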
- fit_transform(X, y=None, **fit_params)¶
Fit to data, then transform it
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters : X : numpy array of shape [n_samples, n_features]
Training set.
y : numpy array of shape [n_samples]
Target values.
Returns : X_new : numpy array of shape [n_samples, n_features_new]
Transformed array.
Notes
This method just calls fit and transform consecutively, i.e., it is not an optimized implementation of fit_transform, unlike other transformers such as PCA.
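Concretely, the result matches fitting and transforming in two steps (the toy count matrix below is illustrative):

    import numpy as np
    from sklearn.feature_extraction.text import TfidfTransformer

    X = np.array([[3, 0, 1, 0],
                  [2, 0, 0, 1],
                  [0, 1, 1, 1]])

    a = TfidfTransformer().fit_transform(X)        # one call
    b = TfidfTransformer().fit(X).transform(X)     # two calls, same result
    print(np.allclose(a.toarray(), b.toarray()))   # True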
- get_params(deep=True)¶
Get parameters for the estimator
Parameters : deep : boolean, optional
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns : params : mapping of string to any
Parameter names mapped to their values.
- set_params(**params)¶
Set the parameters of the estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object (see the sketch below).
Returns : self
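For instance, inside a Pipeline the transformer's parameters are addressed with this double-underscore syntax (the step names 'counts' and 'tfidf' are choices made for this sketch):

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

    pipe = Pipeline([('counts', CountVectorizer()),
                     ('tfidf', TfidfTransformer())])

    # Nested parameters use the <component>__<parameter> form.
    pipe.set_params(tfidf__use_idf=False, tfidf__sublinear_tf=True)
    print(pipe.get_params()['tfidf__use_idf'])   # False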
- transform(X, copy=True)¶
Transform a count matrix to a tf or tf–idf representation
Parameters : X : sparse matrix, [n_samples, n_features]
A matrix of term/token counts.
Returns : vectors : sparse matrix, [n_samples, n_features]
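A short sketch of the intended workflow, assuming the counts come from a CountVectorizer fitted on the training corpus (corpus and names are illustrative): fit learns the idf vector on training counts, and transform re-weights counts from unseen documents with that same vector:

    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

    train = ["the cat sat", "the dog sat", "the dog barked"]
    test = ["the cat barked"]

    vectorizer = CountVectorizer().fit(train)
    transformer = TfidfTransformer().fit(vectorizer.transform(train))

    # Re-use the idf weights learned on the training corpus.
    X_test = transformer.transform(vectorizer.transform(test))
    print(X_test.shape)   # (1, n_features) -- vocabulary fixed by the training set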