The Interpretability Extension adds SHAP (SHapley Additive exPlanations) support to quantify the contribution of each input feature to an individual prediction. This can be used to:
  • See which features drive model predictions.
  • Compare feature importance across samples.
  • Debug unexpected model behavior.

Shapley Values

SHAP values explain a single prediction by attributing the prediction’s deviation from the baseline (mean prediction) to individual features. They provide a consistent, game-theoretic measure of feature influence: each SHAP value represents the marginal contribution of a feature, averaged across all possible feature combinations.
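
For reference, this is the standard Shapley value definition from cooperative game theory (not an API detail of this extension). For a model f, feature set F, and feature i, the attribution is

\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}\bigl(x_{S \cup \{i\}}\bigr) - f_S\bigl(x_S\bigr) \right]

The values are additive: summed over all features, they recover the difference between the model’s prediction for x and the baseline (mean) prediction.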

Getting Started

Install the interpretability extension:
pip install "tabpfn-extensions[interpretability]"
Then use SHAP with any trained TabPFN model. The example below uses TabPFNClassifier; a TabPFNRegressor can be used analogously (see the regression sketch after the code).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from tabpfn_extensions import TabPFNClassifier, interpretability

# Load example dataset
data = load_breast_cancer()
X, y = data.data, data.target
feature_names = data.feature_names
n_samples = 50  # number of test samples to explain with SHAP

# Split data (fixed seed for reproducibility)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42
)

# Initialize and train model
clf = TabPFNClassifier()
clf.fit(X_train, y_train)

# Calculate SHAP values
shap_values = interpretability.shap.get_shap_values(
    estimator=clf,
    test_x=X_test[:n_samples],
    attribute_names=feature_names,
    algorithm="permutation",
)

# Create visualization
fig = interpretability.shap.plot_shap(shap_values)
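
The regression workflow is analogous. Below is a minimal sketch, assuming TabPFNRegressor is exported from tabpfn_extensions alongside TabPFNClassifier and is accepted by get_shap_values unchanged; the dataset and sample count are illustrative.

from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

from tabpfn_extensions import TabPFNRegressor, interpretability

# Regression dataset (illustrative choice)
data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.5, random_state=42
)

# Fit the regressor and explain a subset of test predictions
reg = TabPFNRegressor()
reg.fit(X_train, y_train)

shap_values = interpretability.shap.get_shap_values(
    estimator=reg,
    test_x=X_test[:50],
    attribute_names=data.feature_names,
    algorithm="permutation",
)
fig = interpretability.shap.plot_shap(shap_values)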

Core Functions

Function          Description
get_shap_values   Calculates SHAP values for the provided model and data subset.
plot_shap         Generates an interactive visualization showing feature contributions for each prediction.
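
To compare feature importance across samples (the second use case listed above), the per-sample SHAP values can be aggregated into a global ranking. This is a sketch continuing the classifier example, and it assumes shap_values either is, or exposes via a .values attribute, a NumPy array with one row per explained sample and one column per feature.

import numpy as np

# Extract the raw attribution array (works whether get_shap_values returns a
# plain array or an Explanation-like object with a `.values` attribute).
values = np.asarray(getattr(shap_values, "values", shap_values))

# For classifiers the array may carry an extra class dimension; keep one class.
if values.ndim == 3:
    values = values[..., -1]

# Mean absolute SHAP value per feature gives a simple global importance score.
mean_abs = np.abs(values).mean(axis=0)
for name, score in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")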

Google Colab Example

Check out our Google Colab for a demo.