What GPUs does TabPFN require at minimum?
At minimum, TabPFN requires an NVIDIA T4 GPU to run efficiently. For best performance, we recommend A100 or H100 GPUs. CPU and MPS (Apple Silicon) execution are supported experimentally, but GPUs provide the best throughput for production workloads and larger tables.
What are TabPFN’s practical limits for context size and features?
TabPFN is optimized for small- to medium-sized tabular contexts, since it performs in-context learning using transformer attention. See our Models page for an overview of capabilities.
How are text features handled?
In the local package version, text features are encoded as categoricals without considering their semantic meaning. Our API automatically detects text features and incorporates their semantic meaning into the prediction.
How are date features handled?
In the local package version, date features are encoded as categoricals without considering their semantic meaning. Our API automatically detects date features and creates an optimized embedding.
How reproducible are TabPFN results across runs and devices?
With a fixed seed and in the same environment, TabPFN inference is deterministic. Across different hardware configurations (CPU, GPU, MPS), small numerical differences are expected.
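As a stand-in for a full inference pipeline, the fixed-seed pattern can be sketched with NumPy (this is an illustration of seeded determinism, not TabPFN's internal code):

```python
import numpy as np

def seeded_scores(seed: int) -> np.ndarray:
    # Stand-in for a seeded inference run: a fixed seed makes
    # every draw (and hence every downstream result) repeatable.
    rng = np.random.default_rng(seed)
    return rng.normal(size=5)

# Same seed, same environment -> bitwise-identical results.
a = seeded_scores(42)
b = seeded_scores(42)
assert np.array_equal(a, b)
```

Across devices (e.g. CPU float kernels versus GPU ones), comparisons like this should instead use a tolerance such as `np.allclose`.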
Does TabPFN handle missing values natively?
Yes. TabPFN’s estimators can handle missing values internally, including pd.NA, without requiring manual imputation.
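A minimal sketch of preparing data with missing values: the frame below needs no imputation step before being handed to an estimator (the fit call itself is shown only as a comment, since it requires the `tabpfn` package to be installed):

```python
import pandas as pd

# Feature frame with explicit missing values; no manual
# imputation is needed before fitting a TabPFN estimator.
X = pd.DataFrame({
    "age": [25, pd.NA, 40, 31],
    "income": [50_000, 62_000, pd.NA, 48_000],
})
y = pd.Series([0, 1, 1, 0])

print(X.isna().sum().sum())  # -> 2 missing cells, left as-is
# TabPFNClassifier().fit(X, y) would accept this frame directly
# (assuming the local `tabpfn` package is installed).
```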
My run is 'slow' on thousands of rows - what controls the fit/predict trade-off?
By default, TabPFN performs most of its fitting inside the predict() step (that is how it simulates Bayesian posterior inference), so latency scales roughly with the number of training rows. For fast inference, we can create a tree-based or small MLP-based model that yields almost the same accuracy as TabPFN. Contact sales@priorlabs.ai to access this solution.
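The trade-off can be illustrated with a toy in-context learner (not TabPFN's actual code): fit() only stores the training rows, and all the computation happens at predict() time, so predict cost grows with the size of the stored training set.

```python
import numpy as np

class Lazy1NN:
    """Toy in-context learner: fit() is just storage."""

    def fit(self, X, y):
        # Nothing is learned up front; training data is kept as context.
        self.X_ = np.asarray(X, dtype=float)
        self.y_ = np.asarray(y)
        return self

    def predict(self, X):
        # All computation happens here, and it scales with the
        # number of stored training rows -- the same shape of
        # trade-off TabPFN's in-context inference exhibits.
        X = np.asarray(X, dtype=float)
        d = ((X[:, None, :] - self.X_[None, :, :]) ** 2).sum(-1)
        return self.y_[d.argmin(axis=1)]

model = Lazy1NN().fit([[0.0], [10.0]], [0, 1])
print(model.predict([[1.0], [9.0]]))  # -> [0 1]
```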
What are the API rate limits?
The current API rate limits are 100 million cells per day, where a “cell” is computed as:

These limits automatically reset every 24 hours. If you require further credits, please fill out this form.
How do I know which TabPFN model version am I using in the TabPFN API Client?
After making predictions with a TabPFN model using the client, you can access model-level metadata, such as the exact model version used for inference. Each estimator exposes a .last_meta attribute containing this information:
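A sketch of the access pattern, assuming the `tabpfn_client` package and a configured API token (the exact keys inside .last_meta are not specified here, so the sketch simply prints the whole mapping):

```python
def print_model_version(X_train, y_train, X_test):
    # Requires the `tabpfn_client` package and a valid API token;
    # the import is kept local so this sketch loads without it.
    from tabpfn_client import TabPFNClassifier

    clf = TabPFNClassifier()
    clf.fit(X_train, y_train)
    clf.predict(X_test)

    # `.last_meta` is populated after a prediction and holds
    # model-level metadata such as the model version used.
    print(clf.last_meta)
```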