Today, we are introducing Altertable Workers in early access: a way to run the Altertable data plane inside your own cloud, while keeping altertable.ai as the control plane.
The old bargain
For the last decade, cloud data products have offered the same deal: send us the data, we run the compute, you use the UI. That model made adoption easy and worked well enough when the primary job was scheduled ingestion, dashboards, and human-driven analysis.
It breaks when data becomes an active input into products, operations, and agents. The problem is no longer storage or visualization; it is topology. The data is in one place, the runtime is somewhere else, and every new use case adds another network path, another permission boundary, another source of latency, and another bill that grows with usage.
These are symptoms of one architectural assumption: the provider owns the data plane. In the AI era, that default is too rigid.
BYOC is hard to ship
Bring your own cloud (BYOC), the control-plane/data-plane split, is not a new idea. The hard part is shipping it without creating a second product. Most platforms' runtimes are too large, too coupled to their own cloud, or too dependent on assumptions that disappear inside a customer cluster. They end up with half-products: different code paths, delayed features, heavier operations.
Altertable is different because the unit of execution is already a worker: small enough to run close to your data, powerful enough to execute real analytical workloads, and integrated enough to remain part of the same product.
How Altertable Workers work
altertable.ai is the control plane: catalog, authentication, connection configuration, UI, billing, audit, metadata, and orchestration. Altertable Workers are the data plane: source connectivity, query execution, Parquet reads and writes, object storage access, local cache, and result streaming.
A self-hosted worker is installed with Helm or as a Docker container. During installation it receives a one-time enrollment token, phones home to altertable.ai, and appears online in the UI. The worker fetches its configuration, connects to sources over your private network, and executes work inside your environment.
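The enrollment step above can be sketched as a one-time token exchange. Everything in this snippet (function names, fields, values) is hypothetical and purely illustrative of the semantics described, not Altertable's actual API:

```python
# Illustrative sketch of one-time enrollment; all names and fields here are
# hypothetical, not Altertable's real API. The point is the token semantics:
# a token is consumed exactly once, and configuration flows back to the worker.

def enroll(token: str, issued_tokens: set) -> dict:
    """Exchange a one-time enrollment token for a worker configuration."""
    if token not in issued_tokens:
        raise PermissionError("unknown or already-used enrollment token")
    issued_tokens.discard(token)  # consumed: the token cannot be replayed
    # In the real flow the control plane would return source and orchestration
    # settings; this dict is a stand-in.
    return {"worker_id": "wrk-example", "sources": [], "poll_interval_s": 30}

issued = {"tok-one-time"}
config = enroll("tok-one-time", issued)   # first use: the worker comes online
# enroll("tok-one-time", issued)          # a second use would raise PermissionError
```

Because the worker initiates the exchange, all traffic is outbound from your network, which is what makes the next paragraph possible.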
The important part is what does not have to happen. Your database does not need a public IP. You do not need a bastion or a vendor-facing endpoint. Your object storage credentials do not leave your infrastructure. Raw data does not round-trip through the control plane.
Under the hood, the worker is a Rust binary built around DuckDB. It speaks to the control plane, executes analytical work locally, and operates on data in object storage. The runtime is intentionally small; a BYOC data plane only works if the thing you ship is something customers can actually run: understandable, observable, upgradeable, and boring enough to fit existing infrastructure practices.
Currently, communication between clients and altertable.ai, and between altertable.ai and workers, runs over Arrow Flight SQL. Yesterday, the DuckDB team announced Quack, a native client-server protocol for DuckDB. We are very excited to adopt it; Quack will make this layer simpler and more performant by letting DuckDB instances talk to each other directly.
What this changes
Security asks for less. In the traditional model, the provider asks you to open network paths, allowlist IPs, export data, and trust an external runtime. With a self-hosted worker, the architecture inverts: credentials stay local, object storage stays customer-controlled, and raw data never enters Altertable's data path.
Compute becomes yours. Scaling is an infrastructure choice, not a vendor conversation. You choose the machines, memory, and isolation model. This matters especially for AI workloads; agents are exploratory by nature, and if every step is metered through a vendor-owned data plane, curiosity becomes a budget item.
One runtime, many placements. A hosted worker is a worker we provision. A self-hosted worker is a worker you run. Same control plane, same UI, same runtime. Hybrid setups are natural: managed workers for shared exploration, a self-hosted worker inside a production VPC for data that cannot leave your environment.
Early access
Altertable Workers are available in early access.
If your production data sits behind private networks, if raw data cannot leave your infrastructure, or if your AI workflows need customer-controlled compute, we should talk.
Book a demo and we will show you how to bring the Altertable data plane into your own cloud.