OCTOBER 7, 2025

3 MIN READ

SYLVAIN UTARD

Speed Shapes Understanding

Speed isn't just a luxury: it's the difference between insight and inertia. We've been deep in TPC-H benchmarks, tuning our analytical engine for AI agents.

Most people think data performance only matters at petabyte scale. But anyone who's ever waited for a dashboard to load—or for a query to return—knows better. Whether you're debugging a retention dip or surfacing an anomaly for an AI agent, seconds define flow. Speed isn't a luxury; it's the difference between insight and inertia.

At Altertable, we've spent the past weeks deep in TPC-H benchmarks (an industry-standard test for decision-support systems) tuning our analytical engine and chasing every millisecond. Not because we love benchmarks for their own sake, but because real-time understanding depends on them. (Learn more: our lakehouse architecture and query engine choices)
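For readers who want to reproduce this kind of run locally, DuckDB bundles a tpch extension that generates the benchmark dataset and ships the standard queries; a minimal sketch (scale factor and query number are illustrative):

```sql
-- Generate TPC-H data and run query 1 inside DuckDB
INSTALL tpch;
LOAD tpch;
CALL dbgen(sf = 1);   -- scale factor 1, roughly 1 GB of data
PRAGMA tpch(1);       -- execute TPC-H query 1
```

Turning on `.timer on` in the DuckDB CLI gives a quick, if unscientific, feel for where an engine spends its milliseconds.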

The Engine That Never Sleeps

Our SQL engine is built for motion. We rely on Trino for distributed workloads and increasingly on DuckDB for fast, in-process analytics: two engines that push the limits of modern query execution. Every optimization matters: smarter joins, vectorized processing, adaptive caching. (Read more: our full technical stack)
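As an illustration of what in-process analytics looks like, here is a typical DuckDB rollup; the file `events.parquet` and the `ts` column are hypothetical, and `EXPLAIN ANALYZE` exposes the vectorized operators and their timings:

```sql
-- Profile a daily-rollup query over a (hypothetical) Parquet file
EXPLAIN ANALYZE
SELECT date_trunc('day', ts) AS day,
       count(*)              AS events
FROM read_parquet('events.parquet')
GROUP BY day
ORDER BY day;
```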

And we don't just consume open source, we upstream our improvements. Our recent post, NetQuack: 4,000× Faster Analytics, details how we reworked a few URL functions we've been relying on for web-like analytics. These contributions ripple beyond us, strengthening the ecosystem we build on. (Related: our 17 upstream contributions across DuckDB, Trino, ClickHouse, and more)
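To give a flavor of the URL functions in question, here is a sketch using the NetQuack community extension (the install syntax is standard DuckDB; the function names reflect NetQuack's documentation, so check the extension's README for the current list):

```sql
-- Load NetQuack from the DuckDB community repository
INSTALL netquack FROM community;
LOAD netquack;

-- Parse URL components without regular expressions
SELECT extract_domain('https://shop.example.co.uk/cart?item=42') AS domain,
       extract_path('https://shop.example.co.uk/cart?item=42')   AS path;
```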

Built Like a Race Car

SQL speed isn’t just about code; it’s about what that code runs on. Analytical workloads are bursty: quiet for hours, then spiking as a heavy query hits. We design for that elasticity, so we can scale up fast without burning idle cycles.

But we didn’t go the usual route of AWS EC2 or Google Cloud Compute. We’ve been there before. At Algolia, we learned how much control, predictability, and efficiency come from running bare-metal infrastructure tuned for real workloads. So that’s what we’re doing again: high-frequency CPUs, generous RAM, and NVMe SSDs that keep data flowing even when it can’t all fit in memory.

No noisy neighbors. No over-provisioned clusters waiting for queries that never come. Just raw, predictable throughput. Because the fastest query is the one that never waits.

Distance Still Matters

In a lakehouse world, performance doesn’t stop at compute: it travels through the network. Data has to move fast and stay close. We’re testing Amazon S3, Cloudflare R2, and Hetzner Object Storage to balance latency, durability, and cost. With smart caching and distribution, our agents can pull what they need instantly, not seconds later.
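Because R2 and Hetzner both expose S3-compatible APIs, one query path can cover all three providers; a sketch with DuckDB's httpfs extension, where the bucket, endpoint, and credentials are placeholders:

```sql
INSTALL httpfs;
LOAD httpfs;

-- Placeholder credentials and endpoint: R2 and Hetzner are S3-compatible
CREATE SECRET lake (
    TYPE s3,
    KEY_ID 'my-key-id',
    SECRET 'my-secret',
    ENDPOINT 'my-account.r2.cloudflarestorage.com'
);

SELECT count(*) FROM read_parquet('s3://my-bucket/events/*.parquet');
```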

The Real Goal

All this work points to something bigger: waking up idle data.
Today, 95% of company data sits dormant: warehoused, unqueried, waiting for someone to ask the right question. We're flipping that dynamic with a data operating system where AI agents work continuously.

Our platform's AI agents continuously model, test, and surface insights before anyone asks. But to make that vision viable, the foundation must be blazingly fast and cost-efficient. When agents run continuously, inefficiency multiplies; performance isn't a vanity metric: it's economics.

We know what great looks like: we've benchmarked extensively with high-end CPUs and variable core counts to understand exactly where the performance ceiling is, and where the competition (especially Snowflake) stands.

Because the future of data isn’t about asking faster questions: it’s about your data answering first.

Sylvain Utard, Co-Founder & CEO at Altertable

Seasoned leader in B2B SaaS and B2C. Scaled 100+ teams at Algolia (1st hire) & Sorare. Passionate about data, performance and productivity.

Related Articles

- From Task Executors to Outcome Owners (Kevin Granger, January 28, 2026; AI Agents, Product, Culture). How AI is transforming data analyst, data engineer, and data scientist roles from task execution to strategic ownership, and what skills matter most in the AI era.
- Lessons from Search (Sylvain Utard, January 13, 2026; Performance, Architecture, Engineering). Real-time analytics systems face the same small-file problem that search engines solved decades ago. DuckLake's new tiered compaction primitives bring battle-tested merge strategies to streaming analytics.
- Stop Batching Analytics (Sylvain Utard, December 30, 2025; Analytics, Architecture, Performance). Why we're forcing analytics through complex batch pipelines when append-only data should work like logs.
- NetQuack 4000x Faster (Sylvain Utard, September 30, 2025; Performance, Engineering). We rewrote the NetQuack DuckDB extension, replacing regex with character parsing: 4000x faster, from 37 seconds down to 0.012 seconds.
- Rethinking the Lakehouse (Yannick Utard, July 30, 2025; Architecture, Performance, Data Stack). Why we're leaning into Apache Iceberg and why DuckDB is emerging as our real-time query engine of choice.