PoolQueryStrategy#
- class skactiveml.base.PoolQueryStrategy(missing_label=nan, random_state=None)[source]#
Bases: QueryStrategy
Base class for all pool-based active learning query strategies in scikit-activeml.
- Parameters:
- missing_label : scalar or string or np.nan or None, default=np.nan
Value to represent a missing label.
- random_state : int or RandomState instance or None, default=None
Controls the randomness of the estimator (see the usage sketch below).
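Both parameters are only stored by this base class and forwarded by concrete subclasses. The following is a minimal usage sketch, assuming the concrete strategy skactiveml.pool.UncertaintySampling and the classifier skactiveml.classifier.ParzenWindowClassifier (neither is part of this base class); it only illustrates how missing_label and random_state are passed through.

```python
import numpy as np
from skactiveml.pool import UncertaintySampling
from skactiveml.classifier import ParzenWindowClassifier
from skactiveml.utils import MISSING_LABEL

# Small pool with mostly missing labels; missing labels are np.nan.
X = np.random.RandomState(0).rand(20, 2)
y = np.full(20, MISSING_LABEL)
y[:3] = [0, 1, 0]

clf = ParzenWindowClassifier(classes=[0, 1], missing_label=MISSING_LABEL)
qs = UncertaintySampling(missing_label=MISSING_LABEL, random_state=42)

# The strategy fits clf on the labeled part of (X, y) and returns the
# indices of the most informative unlabeled samples.
query_idx = qs.query(X=X, y=y, clf=clf, batch_size=1)
```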
Methods
- get_metadata_routing()
Get metadata routing of this object.
- get_params([deep])
Get parameters for this estimator.
- query(*args, **kwargs)
Determines the query for active learning based on input arguments.
- set_params(**params)
Set the parameters of this estimator.
- get_metadata_routing()#
Get metadata routing of this object.
Please check the User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)#
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- abstract query(*args, **kwargs)#
Determines the query for active learning based on input arguments.
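Concrete strategies must implement this method. Below is a minimal, illustrative sketch of a subclass (a toy uniform-random sampler, not the library's built-in RandomSampling); it assumes skactiveml.utils.is_unlabeled and sklearn.utils.check_random_state, and simplifies the utilities to a flat array rather than the per-batch-step utilities many built-in strategies return.

```python
import numpy as np
from sklearn.utils import check_random_state

from skactiveml.base import PoolQueryStrategy
from skactiveml.utils import is_unlabeled


class ToyRandomSampling(PoolQueryStrategy):
    """Toy pool-based strategy that queries unlabeled samples at random."""

    def query(self, X, y, batch_size=1, return_utilities=False):
        X, y = np.asarray(X), np.asarray(y)
        random_state = check_random_state(self.random_state)
        # Random utilities for unlabeled samples, -inf for labeled ones,
        # so already labeled samples are never selected.
        utilities = random_state.random_sample(len(X))
        utilities[~is_unlabeled(y, missing_label=self.missing_label)] = -np.inf
        query_indices = np.argsort(-utilities)[:batch_size]
        return (query_indices, utilities) if return_utilities else query_indices
```

With this in place, ToyRandomSampling(random_state=0).query(X, y, batch_size=2) returns the indices of two unlabeled samples from the pool.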
- set_params(**params)#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
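As a brief illustration of how these inherited scikit-learn-style accessors behave on a concrete strategy (here assuming skactiveml.pool.UncertaintySampling), parameters can be inspected with get_params and updated with set_params after construction:

```python
from skactiveml.pool import UncertaintySampling

qs = UncertaintySampling(random_state=0)

# get_params returns the constructor arguments as a dict.
print(qs.get_params())

# set_params updates them in place and returns the estimator itself.
qs.set_params(random_state=42)
print(qs.get_params()["random_state"])  # 42
```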
Examples using skactiveml.base.PoolQueryStrategy#
Batch Active Learning by Diverse Gradient Embedding (BADGE)
Batch Bayesian Active Learning by Disagreement (BatchBALD)
Fast Active Learning by Contrastive UNcertainty (FALCUN)
Batch Density-Diversity-Distribution-Distance Sampling (4DS)
Density-Diversity-Distribution-Distance Sampling (4DS)
Monte-Carlo Expected Error Reduction (EER) with Log-Loss
Monte-Carlo Expected Error Reduction (EER) with Misclassification-Loss
Query-by-Committee (QBC) with Kullback-Leibler Divergence
Querying Informative and Representative Examples (QUIRE)
Uncertainty Sampling with Expected Average Precision (USAP)
Regression based Kullback Leibler Divergence Maximization
Regression Tree Based Active Learning (RT-AL) with Diversity Selection
Regression Tree Based Active Learning (RT-AL) with Random Selection
Regression Tree Based Active Learning (RT-AL) with Representativity Selection