- See which features drive model predictions.
- Compare feature importance across samples.
- Debug unexpected model behavior.


## Getting Started
Install the `interpretability` extension:
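A typical install would look like the following; the exact package name and extra are assumptions, so verify them against the project's install instructions:

```shell
# Assumed package name and extra -- check the project's README for the exact spelling
pip install "tabpfn-extensions[interpretability]"
```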
The examples below use a `TabPFNClassifier`; a `TabPFNRegressor` can be used analogously.
## Core Functions
| Function | Description |
|---|---|
| `get_shap_values` | Calculates SHAP values for the provided model and data subset. |
| `plot_shap` | Generates an interactive visualization showing feature contributions for each prediction. |
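As a refresher on what the returned SHAP values represent, here is a minimal, self-contained sketch that is **not** the extension's implementation: for a linear model, Shapley values have a closed form, `phi_i = w_i * (x_i - mu_i)` against a background mean `mu`, and the attributions sum to the difference between the model's prediction for the instance and its prediction for the background.

```python
# Illustrative only: exact Shapley values for a linear model f(x) = w . x,
# assuming independent features. phi_i = w_i * (x_i - mu_i), where mu is the
# mean of the background data. The attributions satisfy the efficiency
# property: sum(phi) == f(x) - f(mu).

def linear_shapley(w, x, mu):
    """Exact Shapley values for a linear model with independent features."""
    return [wi * (xi - mui) for wi, xi, mui in zip(w, x, mu)]

w = [2.0, -1.0, 0.5]   # toy model weights
x = [1.0, 3.0, 2.0]    # instance to explain
mu = [0.0, 1.0, 2.0]   # background (mean) feature values

phi = linear_shapley(w, x, mu)

def f(v):
    return sum(wi * vi for wi, vi in zip(w, v))

# Efficiency property: attributions account for the full prediction shift.
assert abs(sum(phi) - (f(x) - f(mu))) < 1e-9
print(phi)  # [2.0, -2.0, 0.0]
```

The same additivity holds for the values `get_shap_values` returns: per sample, the feature contributions sum to the prediction's deviation from the expected model output.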
## Google Colab Example
Check out our Google Colab for a demo.