Fine-tuning lets you adapt TabPFN’s pretrained foundation models to your own datasets. It updates the pretrained transformer parameters via gradient descent on a user-provided dataset, which retains TabPFN’s learned priors while aligning the model more closely with the target data distribution. You can fine-tune both classifiers and regressors. Fine-tuning helps especially when:
  • Your data represents an edge case or niche distribution not well covered by TabPFN’s priors.
  • You want to specialize the model for a single domain (e.g., healthcare, finance, IoT sensors).
Recommended setup: Fine-tuning requires GPU acceleration for efficient training.
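
Before starting a long fine-tuning run, you can verify that a GPU is visible to PyTorch, which TabPFN is built on. This is a generic PyTorch check, not part of the TabPFN API:

    import torch

    # Use a GPU if PyTorch can see one; fall back to CPU otherwise.
    # Fine-tuning on CPU works but is considerably slower.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(f"Fine-tuning will run on: {device}")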

Getting Started

The fine-tuning workflow is the same for classifiers and regressors, and the fine-tuned models expose the same interface as the standard TabPFNClassifier and TabPFNRegressor classes.
  1. Prepare your dataset: Load and split your data into train and test sets (a complete end-to-end sketch follows this list).
  2. Configure your model: Initialize a FinetunedTabPFNClassifier or FinetunedTabPFNRegressor with your desired fine-tuning hyperparameters.
    finetuned_clf = FinetunedTabPFNClassifier(
        device="cuda",       # fine-tuning is much faster on a GPU
        epochs=30,           # number of passes of the fine-tuning loop
        learning_rate=1e-5,  # a small learning rate helps preserve the pretrained priors
    )
    
  3. Run fit on your train set: This runs the fine-tuning training loop for the specified number of epochs.
    finetuned_clf.fit(X_train, y_train)
    
  4. Make predictions with the fine-tuned model:
    y_pred_proba = finetuned_clf.predict_proba(X_test)
    

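Putting the steps together, here is a minimal end-to-end sketch for a classifier; the same pattern applies to FinetunedTabPFNRegressor. The import path and the example dataset are assumptions for illustration; check your installed TabPFN package for the exact module.

    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # NOTE: assumed import path; verify against your installed package version.
    from tabpfn import FinetunedTabPFNClassifier

    # 1. Prepare the dataset: load and split into train and test sets.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # 2. Configure the model with the fine-tuning hyperparameters.
    finetuned_clf = FinetunedTabPFNClassifier(
        device="cuda",
        epochs=30,
        learning_rate=1e-5,
    )

    # 3. Run the fine-tuning training loop on the train set.
    finetuned_clf.fit(X_train, y_train)

    # 4. Predict with the fine-tuned model and evaluate.
    y_pred_proba = finetuned_clf.predict_proba(X_test)
    print("Test ROC AUC:", roc_auc_score(y_test, y_pred_proba[:, 1]))
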
GitHub Examples

See more examples and fine-tuning utilities in our TabPFN GitHub repository.