import os, json, requests

# Define your dataset path
train_path = "train.csv"

# Get your API key from the environment
api_key = os.getenv("PRIORLABS_API_KEY")
headers = {"Authorization": f"Bearer {api_key}"}

# Upload your training dataset to /v1/fit
payload = {
    "task": "classification",
    "schema": {
        "target": "churn",
        "description": "Customer churn dataset",
    },
}
files = {
    "data": (None, json.dumps(payload), "application/json"),
    "dataset_file": (train_path, open(train_path, "rb")),
}
fit_response = requests.post(
    "https://api.priorlabs.ai/v1/fit",
    headers=headers,
    files=files,
)
model_id = fit_response.json().get("model_id")
print(f"Model trained: {model_id}")

Example response:

{
  "model_id": "123e4567-e89b-12d3-a456-426614174000",
  "task": "classification"
}

Uploads and fits a TabPFN model on your training data. The API automatically handles preprocessing and stores a reference to your trained context (not the model weights). You can use either a single dataset file (with the target column included) or separate feature and label files.
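The separate-file variant can be sketched as follows. The `features_file` and `labels_file` part names come from the parameter reference on this page; the `split_target` helper is purely illustrative, and the assumption that the JSON `data` part keeps the same shape as in the single-file example has not been verified against the API.

```python
import csv
import io
import json
import os


def split_target(csv_text: str, target: str):
    """Split one CSV string into (features_csv, labels_csv) by the target column."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header = rows[0]
    t = header.index(target)

    def to_csv(table):
        return "\n".join(",".join(r) for r in table) + "\n"

    features = [[c for i, c in enumerate(r) if i != t] for r in rows]
    labels = [[r[t]] for r in rows]
    return to_csv(features), to_csv(labels)


def fit_separate_files(train_path: str, target: str) -> str:
    """POST features and labels as two multipart files to /v1/fit (sketch)."""
    import requests  # imported here so the helper above stays dependency-free

    with open(train_path) as f:
        features_csv, labels_csv = split_target(f.read(), target)
    # Payload shape mirrors the single-file example above (an assumption).
    payload = {"task": "classification", "schema": {"target": target}}
    files = {
        "data": (None, json.dumps(payload), "application/json"),
        "features_file": ("features.csv", features_csv),
        "labels_file": ("labels.csv", labels_csv),
    }
    headers = {"Authorization": f"Bearer {os.getenv('PRIORLABS_API_KEY')}"}
    resp = requests.post("https://api.priorlabs.ai/v1/fit", headers=headers, files=files)
    resp.raise_for_status()
    return resp.json()["model_id"]
```

Splitting client-side keeps the two uploads guaranteed row-aligned, since both come from the same source file.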
Authorization (header): Bearer token for authentication, obtained after signing up and generating an API key.

data: A JSON string defining the training configuration.

Supported systems:
- "preprocessing": applies skrub preprocessing.
- "text": adds text embeddings for text columns.
Default: ["preprocessing", "text"].

Supported config parameters:
- n_estimators (int, 1-10): number of ensemble estimators.
- softmax_temperature (float): temperature for softmax scaling.
- average_before_softmax (bool): average before softmax.
- ignore_pretraining_limits (bool): ignore pretraining limits.
- random_state (int): random seed for reproducibility.

File upload options:
Option 1 (dataset_file): CSV file containing both features (X_train) and labels (y_train). Use this when you have all data in a single file.
Option 2 (features_file): CSV file containing only feature columns (X_train). Must be used together with labels_file.
Option 3 (labels_file): CSV file containing only the target/label column (y_train). Must be used together with features_file.
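Putting the parameter reference together, a training configuration that tunes the ensemble might look like the sketch below. The parameter names come from the reference above, but the exact nesting under `config` and `systems` keys is an assumption, not confirmed API schema.

```python
import json

# Hypothetical payload layout: parameter names are documented above,
# but the "systems"/"config" nesting is an assumption.
payload = {
    "task": "classification",
    "schema": {"target": "churn", "description": "Customer churn dataset"},
    "systems": ["preprocessing", "text"],  # the documented default
    "config": {
        "n_estimators": 8,               # must be in the documented 1-10 range
        "softmax_temperature": 0.9,
        "average_before_softmax": False,
        "ignore_pretraining_limits": False,
        "random_state": 42,              # for reproducibility
    },
}

# Serialized form, as it would be sent in the multipart "data" field.
data_part = json.dumps(payload)
```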
Model fitted successfully; returns a model ID for later prediction calls.
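Because `fit_response.json().get("model_id")` silently yields `None` when the fit fails, a small guard (illustrative, not part of the API) makes failures surface early before the ID is used in later calls:

```python
def extract_model_id(response_json: dict) -> str:
    """Return the model_id from a /v1/fit response, failing loudly if missing."""
    model_id = response_json.get("model_id")
    if not model_id:
        raise ValueError(f"fit did not return a model_id: {response_json!r}")
    return model_id


# Using the documented success response:
ok = {"model_id": "123e4567-e89b-12d3-a456-426614174000", "task": "classification"}
```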