TabPFN is a transformer pre-trained on billions of synthetic datasets to "learn the learning process." Instead of re-optimizing weights for every new dataset, TabPFN encodes inductive biases, priors, and optimization strategies and applies them to your data via in-context learning. That means one forward pass → high-quality predictions in seconds.

Why teams choose TabPFN

Accurate predictions in seconds

TabPFN-2.5 matches the performance of tuned model ensembles while training near-instantly.

No re-training required

Skip repeated training loops. Simply update the context and TabPFN performs zero-shot inference.

Familiar interface

Plug into any workflow with the familiar scikit-learn interface or through the Prior Labs API.

Robust in the real world

Handles missing values, outliers, categorical & text features natively.

Interpretable

Returns calibrated probabilities and integrates SHAP for explainable outcomes.

Get Started