- See which features drive model predictions.
- Compare feature importance across samples.
- Debug unexpected model behavior.
- Communicate insights to stakeholders clearly and visually.
## Shapley Values
SHAP values explain a single prediction by attributing the prediction's deviation from the baseline (the mean prediction) to individual features. They provide a consistent, game-theoretic measure of feature influence: each SHAP value is a feature's marginal contribution to the prediction, averaged across all possible feature combinations.
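For reference, this is the standard game-theoretic formulation (not specific to this extension): the Shapley value of feature $i$ averages its marginal contribution over all subsets of the remaining features, and the values sum back to the prediction's deviation from the baseline.

```latex
% Shapley value of feature i for model f and input x
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
         \frac{|S|!\,(|F|-|S|-1)!}{|F|!}
         \left[ f_x(S \cup \{i\}) - f_x(S) \right]

% Local accuracy: contributions sum to the deviation from the mean prediction
f(x) = \mathbb{E}[f(X)] + \sum_{i=1}^{|F|} \phi_i
```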
## Getting Started

Install the `interpretability` extension:
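If the extension is distributed via the `tabpfn-extensions` package (package name and extra are assumptions here; check your distribution), installation would look something like:

```bash
pip install "tabpfn-extensions[interpretability]"
```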
SHAP values can be calculated for both classification and regression tasks. The extension supports CPU and GPU execution transparently.
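A minimal sketch of computing SHAP values for a TabPFN classifier follows; the module path `tabpfn_extensions.interpretability` and the keyword names are assumptions based on the functions documented in the next section, not a confirmed API.

```python
# A sketch only: the module path and keyword names below are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from tabpfn import TabPFNClassifier  # TabPFNRegressor follows the same pattern
from tabpfn_extensions import interpretability

data = load_breast_cancer()
feature_names = list(data.feature_names)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=42
)

# The fitted model decides whether inference runs on CPU or GPU; the SHAP
# call is identical for classification and regression models.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)

shap_values = interpretability.shap.get_shap_values(
    estimator=clf,
    test_x=X_test[:50],        # explain a small subset to keep runtime low
    attribute_names=feature_names,
)
```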
## Core Functions
| Function | Description |
|---|---|
| `get_shap_values` | Calculates SHAP values for the provided model and data subset. |
| `plot_shap` | Generates an interactive visualization showing feature contributions for each prediction. |
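Continuing the sketch above, visualizing the computed values might look like this (again assuming the `interpretability.shap` module path):

```python
# plot_shap is assumed to accept the values returned by get_shap_values
# and render the per-prediction contribution plot.
interpretability.shap.plot_shap(shap_values)
```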
## Advanced Use Cases
- Per-sample explainability: inspect individual predictions to see why the model produced a specific output.
- Global interpretability: aggregate SHAP values across samples to rank features by importance (see the sketch after this list).
- Feature interaction analysis: identify features that amplify or offset contributions.
- Model comparison: use SHAP to compare explanations between TabPFN variants.
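A minimal sketch of the global-interpretability case from the list above, reusing `shap_values` and `feature_names` from the earlier example. The array layout is an assumption; adjust the reduction if your output has a different shape.

```python
import numpy as np

# Global interpretability sketch: rank features by mean absolute SHAP value.
# getattr handles both a plain array and an Explanation-style object that
# stores the array on a .values attribute.
values = np.asarray(getattr(shap_values, "values", shap_values))
if values.ndim == 3:                  # e.g. (n_samples, n_features, n_classes)
    values = np.abs(values).mean(axis=-1)

global_importance = np.abs(values).mean(axis=0)
ranking = sorted(zip(feature_names, global_importance),
                 key=lambda pair: pair[1], reverse=True)

for name, score in ranking[:10]:
    print(f"{name:30s} {score:.4f}")
```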