
Speed Shapes Understanding

Speed isn't just a petabyte-scale luxury: it's the difference between insight and inertia. We've been deep in TPC-H benchmarks, tuning our analytical engine. Because when AI agents run continuously, performance isn't a vanity metric: it's economics.


Most people think data performance only matters at petabyte scale. But anyone who’s ever waited for a dashboard to load—or for a query to return—knows better. Whether you’re debugging a retention dip or surfacing an anomaly for an AI agent, seconds define flow. Speed isn’t a luxury; it’s the difference between insight and inertia.

At Altertable, we’ve spent the past weeks deep in TPC-H benchmarks (an industry-standard test for decision-support systems) tuning our analytical engine and chasing every millisecond. Not because we love benchmarks for their own sake, but because real-time understanding depends on them.

The Engine That Never Sleeps

Our SQL engine is built for motion. We rely on Trino for distributed workloads and increasingly on DuckDB for fast, in-process analytics: two engines that push what’s possible in modern query execution. Every optimization matters: smarter joins, vectorized processing, adaptive caching...

And we don’t just consume open source; we upstream our improvements. Our recent post, NetQuack: 4,000× Faster Analytics, details how we reworked a few URL functions we've been relying on for web-like analytics. These contributions ripple beyond us, strengthening the ecosystem we build on.

Built Like a Race Car

SQL speed isn’t just about code; it’s about what that code runs on. Analytical workloads are bursty: quiet for hours, then spiking as a heavy query hits. We design for that elasticity, so we can scale up fast without burning idle cycles.

But we didn’t go the usual route of AWS EC2 or Google Compute Engine. We’ve been there before. At Algolia, we learned how much control, predictability, and efficiency come from running bare-metal infrastructure tuned for real workloads. So that’s what we’re doing again: high-frequency CPUs, generous RAM, and NVMe SSDs that keep data flowing even when it can’t all fit in memory.

No noisy neighbors. No over-provisioned clusters waiting for queries that never come. Just raw, predictable throughput. Because the fastest query is the one that never waits.

Distance Still Matters

In a lakehouse world, performance doesn’t stop at compute: it travels through the network. Data has to move fast and stay close. We’re testing Amazon S3, Cloudflare R2, and Hetzner Object Storage to balance latency, durability, and cost. With smart caching and distribution, our agents can pull what they need instantly, not seconds later.
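The caching idea is simple to sketch: serve reads from fast local disk when possible, and pay the network round trip to object storage only on a miss. Here is a minimal read-through cache in Python, with a hypothetical `fetch_from_object_store` standing in for a real S3/R2/Hetzner client call:

```python
import os
import tempfile

# Local cache directory; in production this would live on NVMe.
CACHE_DIR = tempfile.mkdtemp(prefix="query-cache-")

def fetch_from_object_store(key: str) -> bytes:
    # Hypothetical stand-in for an S3/R2/Hetzner GET request.
    return f"remote bytes for {key}".encode()

def read_through(key: str) -> bytes:
    """Serve from the local cache; fall back to object storage on a miss."""
    path = os.path.join(CACHE_DIR, key.replace("/", "_"))
    if os.path.exists(path):               # cache hit: local disk, no network
        with open(path, "rb") as f:
            return f.read()
    data = fetch_from_object_store(key)    # cache miss: one network round trip
    with open(path, "wb") as f:            # populate the cache for next time
        f.write(data)
    return data

first = read_through("warehouse/orders.parquet")   # miss: fetched remotely
second = read_through("warehouse/orders.parquet")  # hit: served from disk
```

Real systems layer eviction, consistency checks, and prefetching on top, but the payoff is the same: hot data answers at disk latency instead of network latency.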

The Real Goal

All this work points to something bigger: waking up idle data.
Today, an estimated 95% of company data sits dormant: warehoused, unqueried, waiting for someone to ask the right question. We’re flipping that dynamic.

Our platform’s AI agents continuously model, test, and surface insights before anyone asks. But to make that vision viable, the foundation must be blazingly fast and cost-efficient. When agents run continuously, inefficiency multiplies; performance isn’t a vanity metric: it’s economics.

We know what great looks like: we've benchmarked extensively with high-end CPUs and variable core counts to understand exactly where the performance ceiling is, and where the competition (especially Snowflake) stands.

Because the future of data isn’t about asking faster questions: it’s about your data answering first.

Sylvain Utard, Co-Founder & CEO at Altertable

Seasoned leader in B2B SaaS and B2C. Scaled 100+ teams at Algolia (1st hire) & Sorare. Passionate about data, performance and productivity.
