
Lakehouse API

POST /query

Execute SQL queries and retrieve results in streaming JSONL format.

Request:

curl https://api.altertable.ai/query \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"statement": "SELECT event, COUNT(*) AS count FROM altertable.main.events GROUP BY event"
}'

Request Body:

{
"statement": "SELECT ... your SQL query"
}
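
SQL text often contains quotes that are awkward to escape inside an inline JSON string. As a minimal sketch (assuming jq is installed), you can have jq build the request body and pipe it to curl:

# jq handles the JSON escaping; curl reads the body from stdin (@-)
jq -n --arg sql 'SELECT event, COUNT(*) AS count FROM altertable.main.events GROUP BY event' \
'{statement: $sql}' \
| curl https://api.altertable.ai/query \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/json" \
--data-binary @-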

Response Format:

Responses use JSONL (JSON Lines) format for efficient streaming:

  1. First line: Query metadata (execution time, etc.)
  2. Second line: Column names and types
  3. Remaining lines: Result rows, one JSON object per line

Example Response:

{"statement": "SELECT ... your SQL query", "session_id": "1234567890", ...}
[{"name": "event", "type": "VARCHAR"}, {"name": "count", "type": "BIGINT"}]
["signup", 42]
["login", 123]
...
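
Because every line is a complete JSON value, rows can be processed as they stream in, before the full result set has arrived. A minimal sketch, assuming jq is installed (-N disables curl's output buffering; the first two lines carry metadata and the schema, so rows start at line 3):

curl -sN https://api.altertable.ai/query \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{"statement": "SELECT event, COUNT(*) AS count FROM altertable.main.events GROUP BY event"}' \
| tail -n +3 \
| jq -c '{event: .[0], count: .[1]}'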

POST /upload

Upload data files to create or update tables in your lakehouse. Supports CSV, JSON, and Parquet formats with multiple insertion modes.

Request:

curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=csv&mode=create" \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/octet-stream" \
--data-binary @data.csv

Query Parameters:

  • catalog (required): Name of the catalog to upload data to
  • schema (required): Name of the schema within the catalog
  • table (required): Name of the table to create or insert into
  • format (required): File format - csv, json, or parquet
  • mode (required): Upload mode - create, append, upsert, or overwrite
  • primary_key (required for upsert mode): Name of the primary key column used to match rows

Request Body:

Binary file data in the specified format:

  • CSV: Comma-separated values with header row
  • JSON: JSON array of objects or JSONL (one JSON object per line)
  • Parquet: Apache Parquet columnar format (most efficient)
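
For example, with format=json both of the following payloads describe the same two rows (the column names are illustrative):

[{"id": 1, "event": "signup"}, {"id": 2, "event": "login"}]

{"id": 1, "event": "signup"}
{"id": 2, "event": "login"}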

Upload Modes:

  • create: Create a new table with the uploaded data (fails if table already exists)
  • append: Append the uploaded data to an existing table (preserves existing data)
  • upsert: Update existing rows and insert new ones based on primary key (requires primary_key parameter)
  • overwrite: Drop the existing table and recreate it with the uploaded data (replaces all data)

Response:

Returns 200 OK on successful upload. The endpoint accepts files up to 100 GB in size.
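
The error body format is not documented here, so a client can rely on the HTTP status code alone. A minimal sketch using curl's --fail flag, which makes curl exit non-zero on HTTP error statuses:

# --fail suppresses the response body on error and sets a non-zero exit code
if curl --fail -s -o /dev/null \
"https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=csv&mode=append" \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/octet-stream" \
--data-binary @data.csv
then
echo "upload succeeded"
else
echo "upload failed" >&2
fi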

Example: Upload CSV file

curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=csv&mode=create" \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/octet-stream" \
--data-binary @users.csv

Example: Upload JSON file

curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=events&format=json&mode=append" \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/octet-stream" \
--data-binary @events.json

Example: Upsert with primary key

curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=parquet&mode=upsert&primary_key=id" \
-H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
-H "Content-Type: application/octet-stream" \
--data-binary @updates.parquet