quantificationlib.multiclass.friedman module¶
Quantifier based on Mixture Estimation proposed by Friedman
- class FriedmanME(estimator_test=None, estimator_train=None, distance='L2', tol=1e-05, verbose=0)[source]¶
Bases:
UsingClassifiers
Multiclass Mixture Estimation method proposed by Friedman
This class works in two different ways:
Two estimators are used to classify the examples of the training set and the testing set in order to compute the (probabilistic) confusion matrix of both sets. The estimators may already be trained
Alternatively, the predictions for the examples can be provided directly to the fit/predict methods. This is useful for synthetic/artificial experiments
The idea in both cases is to guarantee that all methods based on distribution matching use exactly the same predictions when this kind of quantifier is compared with others that also employ an underlying classifier (for instance, CC/PCC). In the first case, the estimators are trained only once and can be shared by several quantifiers of this kind. A usage sketch is given in the Examples below
- Parameters:
estimator_train (estimator object (default=None)) – An estimator object implementing fit and predict_proba. It is used to classify the examples of the training set and to compute the confusion matrix
estimator_test (estimator object (default=None)) – An estimator object implementing fit and predict_proba. It is used to classify the examples of the testing set and to obtain the confusion matrix of the testing set. For some experiments both estimators could be the same
distance (str, representing the distance function (default='L2')) – It is the name of the distance used to compute the difference between the mixture of the training distribution and the testing distribution
tol (float, (default=1e-05)) – The precision of the solution when search is used to compute the prevalence
verbose (int, optional, (default=0)) – The verbosity level. The default value, zero, means silent mode
- estimator_train¶
Estimator used to classify the examples of the training set
- Type:
estimator
- estimator_test¶
Estimator used to classify the examples of the testing bag
- Type:
estimator
- predictions_train_¶
Predictions of the examples in the training set
- Type:
ndarray, shape (n_examples, n_classes) (probabilistic estimator)
- predictions_test_¶
Predictions of the examples in the testing bag
- Type:
ndarray, shape (n_examples, n_classes) (probabilistic estimator)
- needs_predictions_train¶
It is True because FriedmanME quantifiers need to estimate the training distribution
- Type:
bool, True
- probabilistic_predictions¶
This means that predictions_train_/predictions_test_ contain probabilistic predictions
- Type:
bool, True
- distance¶
The name of the distance function used
- Type:
str (default='L2')
- tol¶
The precision of the solution when search is used to compute the prevalence
- Type:
float
- classes_¶
Class labels
- Type:
ndarray, shape (n_classes, )
- y_ext_¶
Repmat of the true labels of the training set. When CV_estimator is used with averaged_predictions=False, predictions_train_ will have a larger dimension (factor = n_repetitions * n_folds of the underlying CV) than y. In other cases, y_ext_ == y. y_ext_ is used in the fit method whenever the true labels of the training set are needed, instead of y
- Type:
ndarray, shape (len(predictions_train_), 1)
- train_prevs_¶
Prevalence of each class in the training set
- Type:
ndarray, shape (n_classes, )
- train_distrib_¶
Each column is the representation of the training examples of the corresponding class: for each row class, it contains the percentage of examples of the column class whose probability of belonging to the row class is greater than or equal to the prevalence of the row class in the training set
- Type:
ndarray, shape (n_classes, n_classes)
- test_distrib_¶
Percentage of examples in the testing bag whose probability of belonging to each class is greater than or equal to the prevalence of that class in the training set
- Type:
ndarray, shape (n_classes_, 1)
- G_, C_, b_¶
These variables are precomputed in the fit method and are used to solve the optimization problem with quadprog.solve_qp. See the compute_l2_param_train function
- Type:
variables of different kinds for defining the optimization problem
- problem_¶
This attribute is set to None in the fit() method. With such a model, the first time a testing bag is predicted this attribute will contain the corresponding cvxpy Problem object (if that library is used, i.e., in the case of 'L1' and 'HD'). For the remaining testing bags, this object is reused to allow a warm start, which speeds up the solving process.
- Type:
a cvxpy Problem object
- mixtures_¶
Contains the mixtures for all the prevalences in the range [0, 1] step=0.01. This speeds up the prediction for a collection of testing bags
- Type:
ndarray, shape (101, n_quantiles)
- verbose¶
The verbosity level
- Type:
int
Notes
Notice that at least one of estimator_train/predictions_train and estimator_test/predictions_test must not be None. If both are None, a ValueError exception will be raised. If both are not None, predictions_train/predictions_test are used
References
Jerome H. Friedman. Class counts in future unlabeled samples. Presentation at MIT CSAIL Big Data Event, 2014.
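Examples
A minimal usage sketch (not part of the original documentation). The synthetic dataset and the LogisticRegression settings are illustrative assumptions; the same estimator object is shared for the training and testing roles, which the class explicitly allows:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

from quantificationlib.multiclass.friedman import FriedmanME

# Illustrative synthetic data: a labeled training set and an unlabeled testing bag.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6, random_state=0)
X_train, y_train = X[:1000], y[:1000]
X_test = X[1000:]                              # testing bag (labels are ignored)

clf = LogisticRegression(max_iter=1000)        # must implement predict_proba
me = FriedmanME(estimator_train=clf, estimator_test=clf, distance='L2')
me.fit(X_train, y_train)

prevalences = me.predict(X_test)               # ndarray, shape (n_classes, )
print(prevalences)
```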
- fit(X, y, predictions_train=None)[source]¶
This method performs the following operations: 1) fits the estimators for the training set and the testing set (if needed), and 2) computes predictions_train_ (probabilities) if needed. Both operations are performed by the fit method of its superclass. Then, the method computes the training distribution of the ME method suggested by Friedman: the distribution of a class contains the percentage of the training examples of that class whose probability of belonging to each class is greater than or equal to the prevalence of that class in the training set. Finally, the method computes all the parameters needed by quadprog to solve the optimization problem that do not depend on the testing distribution
- Parameters:
X (array-like, shape (n_examples, n_features)) – Data
y (array-like, shape (n_examples, )) – True classes
predictions_train (ndarray, shape (n_examples, n_classes)) – Predictions of the training set
- Raises:
ValueError – When estimator_train and predictions_train are both None
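The training representation computed by this method can be illustrated with a short sketch. The function below is illustrative only (it is not the library's internal code) and assumes probabilistic predictions preds of shape (n_examples, n_classes) and true labels y:

```python
import numpy as np

def friedman_train_distrib(preds, y, classes):
    # Sketch of the ME training representation (illustrative, not the library code).
    # train_distrib[r, c] = fraction of training examples of class c whose predicted
    # probability of belonging to class r is >= the training prevalence of class r.
    n_classes = len(classes)
    prevalences = np.array([np.mean(y == c) for c in classes])   # train_prevs_
    train_distrib = np.zeros((n_classes, n_classes))
    for c_idx, c in enumerate(classes):
        preds_c = preds[y == c]                                   # examples of class c
        for r_idx in range(n_classes):
            train_distrib[r_idx, c_idx] = np.mean(preds_c[:, r_idx] >= prevalences[r_idx])
    return prevalences, train_distrib
```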
- predict(X, predictions_test=None)[source]¶
Predict the class distribution of a testing bag
First, predictions_test_ are computed (if needed, i.e., when the predictions_test parameter is None) by the super().predict() method.
After that, the method computes the distribution of the FriedmanME method for the testing bag, that is, the percentage of examples in the testing bag whose probability of belonging to each class is greater than or equal to the prevalence of that class in the training set.
Finally, the prevalences are computed by solving the resulting optimization problem
- Parameters:
X (array-like, shape (n_examples, n_features)) – Testing bag
predictions_test (ndarray, shape (n_examples, n_classes) (default=None)) –
They must be probabilities (the estimator used must have a predict_proba method)
If predictions_test is not None, they are copied into predictions_test_ and used. If predictions_test is None, predictions for the testing examples are computed using the predict method of estimator_test (it must be an actual estimator)
- Raises:
ValueError – When estimator_test and predictions_test are both None
- Returns:
prevalences – Contains the predicted prevalence for each class
- Return type:
ndarray, shape (n_classes, )
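The prediction step for the 'L2' distance can be sketched with quadprog, the solver mentioned above (quadprog.solve_qp minimizes 1/2 p^T G p - a^T p subject to C^T p >= b, with the first meq constraints treated as equalities). The helper name and the small ridge added to keep G positive definite are assumptions for illustration, not the library's exact formulation:

```python
import numpy as np
import quadprog

def friedman_predict_l2(preds_test, train_distrib, train_prevs):
    # Illustrative sketch of FriedmanME prediction with the 'L2' distance.
    n_classes = train_distrib.shape[0]

    # Testing representation: fraction of bag examples whose probability for each
    # class is >= the training prevalence of that class.
    test_distrib = np.array([np.mean(preds_test[:, r] >= train_prevs[r])
                             for r in range(n_classes)])

    # L2 mixture matching: minimize ||train_distrib @ p - test_distrib||^2
    # subject to sum(p) == 1 and p >= 0, posed as a quadratic program.
    G = train_distrib.T @ train_distrib + 1e-10 * np.eye(n_classes)  # ridge keeps G PD
    a = train_distrib.T @ test_distrib
    C = np.vstack([np.ones(n_classes), np.eye(n_classes)]).T         # sum-to-one, p >= 0
    b = np.array([1.0] + [0.0] * n_classes)
    prevalences = quadprog.solve_qp(G, a, C, b, meq=1)[0]
    return np.clip(prevalences, 0.0, 1.0)
```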
- set_fit_request(*, predictions_train='$UNCHANGED$')¶
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to fit.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
predictions_train (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the predictions_train parameter in fit.
self (FriedmanME) –
- Returns:
self – The updated object.
- Return type:
object
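A minimal sketch of how this request could be set (illustrative; routing only has an effect when the quantifier is used inside a scikit-learn meta-estimator that forwards fit parameters, and the classifier choice is an assumption):

```python
import sklearn
from sklearn.linear_model import LogisticRegression

from quantificationlib.multiclass.friedman import FriedmanME

sklearn.set_config(enable_metadata_routing=True)   # routing is off by default

me = FriedmanME(estimator_train=LogisticRegression(), estimator_test=LogisticRegression())
# Ask any wrapping meta-estimator to forward 'predictions_train' to fit().
me.set_fit_request(predictions_train=True)
```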
- set_predict_request(*, predictions_test='$UNCHANGED$')¶
Request metadata passed to the predict method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
True: metadata is requested, and passed to predict if provided. The request is ignored if metadata is not provided.
False: metadata is not requested and the meta-estimator will not pass it to predict.
None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
predictions_test (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the predictions_test parameter in predict.
self (FriedmanME) –
- Returns:
self – The updated object.
- Return type:
object