- Optimized search spaces for classification and regression tasks
- Support for multiple evaluation metrics: accuracy, ROC-AUC, F1, RMSE, MAE
- Proper handling of categorical features via automatic encoding
- Compatible with both TabPFN and TabPFN-client backends
- Implements scikit-learn’s estimator interface for seamless pipeline integration
- Built-in validation and stratification for reliable performance estimation
- Configurable search algorithms: TPE (Bayesian) or Random Search
## Getting Started
Install the `hpo` extension:
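A typical install uses the extras syntax of the TabPFN extensions package; the exact package and extra name below are assumptions, so check them against the project's install docs:

```shell
# Assumed package/extra names -- verify against the project's own
# installation instructions before running.
pip install "tabpfn-extensions[hpo]"
```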
## Supported Metrics
| Metric | Description |
|---|---|
| `accuracy` | Classification accuracy (proportion of correct predictions) |
| `roc_auc` | Area under the ROC curve (binary or multiclass) |
| `f1` | F1 score (harmonic mean of precision and recall) |
| `rmse` | Root mean squared error (regression) |
| `mse` | Mean squared error (regression) |
| `mae` | Mean absolute error (regression) |
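The metrics above follow their standard definitions. For reference, here are minimal plain-Python implementations of a few of them; these are illustrative only and are not the extension's internal scoring code:

```python
import math

def accuracy(y_true, y_pred):
    # Proportion of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: square root of the mean squared residual.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: average absolute residual.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```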
## Supported Models
| Model | Description |
|---|---|
| `TunedTabPFNClassifier` | TabPFN classifier with automatic hyperparameter tuning and categorical handling. |
| `TunedTabPFNRegressor` | TabPFN regressor with automatic tuning for continuous prediction tasks. |
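Because both models implement the scikit-learn estimator interface, they drop straight into standard tooling such as `Pipeline` or `cross_val_score`. The sketch below uses a stand-in classifier so it runs anywhere; the commented-out import path and `n_trials` argument for the tuned model are assumptions to verify against your installed version:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression  # stand-in estimator
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A tuned TabPFN model can replace the stand-in one-for-one, e.g.:
#   from tabpfn_extensions.hpo import TunedTabPFNClassifier  # assumed path
#   clf = TunedTabPFNClassifier(n_trials=50)                 # assumed argument
clf = LogisticRegression(max_iter=5000)

pipe = Pipeline([("impute", SimpleImputer()), ("model", clf)])
pipe.fit(X_train, y_train)
score = pipe.score(X_test, y_test)
```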
## How it Works
Under the hood, the HPO system:

- Splits your data into train and validation sets, with optional stratification.
- Samples a candidate configuration from the TabPFN hyperparameter space.
- Trains a TabPFN model with those parameters.
- Evaluates it using the chosen metric.
- Updates its belief model via TPE (Tree-structured Parzen Estimator).
- Repeats this process for `n_trials` iterations, selecting the configuration with the best score.