Lakehouse API
POST /query
Execute SQL queries and retrieve results in streaming JSONL format.
Request:
curl https://api.altertable.ai/query \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"statement": "SELECT event, COUNT(*) AS count FROM altertable.main.events GROUP BY event"}'
Request Body:
{"statement": "SELECT ... your SQL query"}
Response Format:
Responses use JSONL (JSON Lines) format for efficient streaming:
- First line: Query metadata (execution time, etc.)
- Second line: Column names and types
- Remaining lines: Result rows, one JSON object per line
Example Response:
{"statement": "SELECT ... your SQL query", "session_id": "1234567890", ...}[{"name": "event", "type": "VARCHAR"}, {"name": "count", "type": "BIGINT"}]["signup", 42]["login", 123]...
POST /upload
Upload data files to create or update tables in your lakehouse. Supports CSV, JSON, and Parquet formats with multiple insertion modes.
Request:
curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=csv&mode=create" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @data.csv
Query Parameters:
- catalog (required): Name of the catalog to upload data to
- schema (required): Name of the schema within the catalog
- table (required): Name of the table to create or insert into
- format (required): File format - csv, json, or parquet
- mode (required): Upload mode - create, append, upsert, or overwrite
- primary_key (optional): Primary key column name (required for upsert mode)
Request Body:
Binary file data in the specified format:
- CSV: Comma-separated values with header row
- JSON: JSON array of objects or JSONL (one JSON object per line)
- Parquet: Apache Parquet columnar format (most efficient)
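For illustration, the same two hypothetical rows expressed as a CSV file with a header row and as JSONL:

users.csv:

id,name,age
1,Alice,30
2,Bob,25

users.json (JSONL):

{"id": 1, "name": "Alice", "age": 30}
{"id": 2, "name": "Bob", "age": 25}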
Upload Modes:
- create: Create a new table with the uploaded data (fails if the table already exists)
- append: Append the uploaded data to an existing table (preserves existing data)
- upsert: Update existing rows and insert new ones based on the primary key (requires the primary_key parameter)
- overwrite: Drop the existing table and recreate it with the uploaded data (replaces all data)
Response:
Returns 200 OK on successful upload. The endpoint accepts files up to 100 GB in size.
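Assuming failed uploads return a non-2xx status, curl's --fail flag is a simple way to make scripts notice errors instead of silently continuing (the file, catalog, and table names below are illustrative):

curl --fail "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=csv&mode=create" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @users.csv \
  || echo "upload failed"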
Example: Upload CSV file
curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=csv&mode=create" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @users.csv
Example: Upload JSON file
curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=events&format=json&mode=append" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @events.json
Example: Upsert with primary key
curl "https://api.altertable.ai/upload?catalog=my_catalog&schema=public&table=users&format=parquet&mode=upsert&primary_key=id" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @updates.parquet
POST /append
Append data to a table using a simple, fire-and-forget API. Designed for streaming analytics data without managing batches, offsets, or pipelines.
Request:
curl "https://api.altertable.ai/append?catalog=my_catalog&schema=public&table=users" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "age": 30}'
Query Parameters:
- catalog (required): Name of the catalog to append data to
- schema (required): Name of the schema within the catalog
- table (required): Name of the table to append to
Request Body:
The request body can be either a single JSON object or an array of JSON objects:
Single object:
{"name": "Alice","age": 30,}
Batch of objects:
[
  {"name": "Alice", "age": 30},
  {"name": "Bob", "age": 25}
]
Response:
Returns 200 OK with a JSON response indicating success:
{"ok": true,"error_code": null}
Key Features:
- Automatic schema inference: The schema is automatically inferred from your JSON data
- Automatic table creation: Tables are created automatically if they don't exist
- Automatic schema migration: New columns are added automatically when you send data with additional fields
- Fire-and-forget: Data is accepted immediately and processed asynchronously
- Queryable in seconds: Data becomes available for querying within seconds of being sent
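As a sketch of the end-to-end flow, the snippet below appends an event and queries it back shortly afterwards; the table name is illustrative, and the few-second wait reflects the "queryable in seconds" behavior rather than a guaranteed bound:

# Append an event (accepted immediately, processed asynchronously)
curl "https://api.altertable.ai/append?catalog=my_catalog&schema=public&table=events" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"event": "signup", "user_id": 123}'

# Wait briefly for ingestion, then read the row back via /query
sleep 5
curl https://api.altertable.ai/query \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"statement": "SELECT * FROM my_catalog.public.events WHERE user_id = 123"}'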
Schema Inference:
The endpoint automatically infers types from your JSON data:
- Numbers → BIGINT (integers) or DOUBLE (floats)
- Strings → VARCHAR
- Booleans → BOOLEAN
- Arrays/Objects → JSON (stored as JSON strings)
- RFC3339 timestamps → TIMESTAMP (with microsecond precision)
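For example, a single appended object exercising each rule; the inferred column types in the comments follow the mapping above, and the table and field names are illustrative:

# Payload fields and the types they would be inferred as:
#   user_id BIGINT, score DOUBLE, name VARCHAR, active BOOLEAN,
#   tags JSON, created_at TIMESTAMP
curl "https://api.altertable.ai/append?catalog=my_catalog&schema=public&table=scores" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"user_id": 123, "score": 4.5, "name": "Alice", "active": true, "tags": ["a", "b"], "created_at": "2025-12-30T10:00:00Z"}'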
Example: Single object
curl "https://api.altertable.ai/append?catalog=my_catalog&schema=public&table=events" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"event": "signup", "user_id": 123, "timestamp": "2025-12-30T10:00:00Z"}'
Example: Batch of objects
curl "https://api.altertable.ai/append?catalog=my_catalog&schema=public&table=events" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[
    {"event": "signup", "user_id": 123, "timestamp": "2025-12-30T10:00:00Z"},
    {"event": "login", "user_id": 456, "timestamp": "2025-12-30T10:01:00Z"}
  ]'
Example: Schema evolution
Send data with new fields, and the table schema is automatically updated:
# First append - creates the table with name and age
curl "https://api.altertable.ai/append?catalog=my_catalog&schema=public&table=users" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "age": 30}'

# Later append - automatically adds the "email" column
curl "https://api.altertable.ai/append?catalog=my_catalog&schema=public&table=users" \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Bob", "age": 25, "email": "bob@example.com"}'
The email column is automatically added to the table. Existing rows will have NULL for the new column.
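Querying the table afterwards shows the new column, with NULL for rows that were inserted before it existed (a sketch reusing the /query endpoint above):

curl https://api.altertable.ai/query \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"statement": "SELECT name, age, email FROM my_catalog.public.users"}'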
Manual Schema Creation:
While automatic schema inference is convenient, you can still create tables manually using SQL if you need fine-tuned control over types, constraints, or defaults:
CREATE TABLE my_catalog.main.events (
  event VARCHAR,
  user_id VARCHAR,
  created_at TIMESTAMP
);
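Such a statement can be submitted through the /query endpoint, assuming DDL statements are accepted there, for example:

curl https://api.altertable.ai/query \
  -H "Authorization: Bearer $ALTERTABLE_BASIC_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"statement": "CREATE TABLE my_catalog.main.events (event VARCHAR, user_id VARCHAR, created_at TIMESTAMP)"}'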
When you use /append with a manually created table, it respects your existing schema and only adds new columns when they don't exist. Automatic inference is a convenience, not a requirement.