OCTOBER 7, 2025

3 MIN READ

SYLVAIN UTARD

Speed Shapes Understanding

Speed isn't just a luxury: it's the difference between insight and inertia. We've been deep in TPC-H benchmarks, tuning our analytical engine for AI agents.

Most people think data performance only matters at petabyte scale. But anyone who's ever waited for a dashboard to load—or for a query to return—knows better. Whether you're debugging a retention dip or surfacing an anomaly for an AI agent, seconds define flow. Speed isn't a luxury; it's the difference between insight and inertia.

At Altertable, we've spent the past weeks deep in TPC-H benchmarks (an industry-standard test for decision-support systems) tuning our analytical engine and chasing every millisecond. Not because we love benchmarks for their own sake, but because real-time understanding depends on them. (Learn more: our lakehouse architecture and query engine choices)
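To give a feel for the shape of this kind of benchmarking work (a minimal sketch, not our actual tooling: TPC-H proper runs 22 standard decision-support queries over a generated dataset, and we run them on Trino and DuckDB, not SQLite), here is a toy harness in Python that times a query over repeated runs and reports the median:

```python
import sqlite3
import statistics
import time

# Toy stand-in for a TPC-H-style aggregation; the table and query are
# illustrative, not part of the actual benchmark suite.
QUERY = "SELECT status, SUM(price) FROM orders GROUP BY status ORDER BY status"

def setup() -> sqlite3.Connection:
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (status TEXT, price REAL)")
    con.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("open", 10.0), ("open", 5.0), ("shipped", 7.5)],
    )
    return con

def bench(con: sqlite3.Connection, sql: str, repeats: int = 5) -> float:
    """Run the query several times; return the median wall-clock seconds."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        con.execute(sql).fetchall()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

con = setup()
median_s = bench(con, QUERY)
rows = con.execute(QUERY).fetchall()
print(rows)  # [('open', 15.0), ('shipped', 7.5)]
```

Taking the median rather than the mean keeps a single cold-cache outlier from skewing the result, which matters when you're chasing milliseconds.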

The Engine That Never Sleeps

Our SQL engine is built for motion. We rely on Trino for distributed workloads and increasingly on DuckDB for fast, in-process analytics: two engines that push the boundaries of modern query execution. Every optimization matters: smarter joins, vectorized processing, adaptive caching... (Read more: our full technical stack)
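To give a feel for why vectorized processing pays off, here is a toy Python sketch (not engine code): operating on a whole column batch at once amortizes per-row overhead, the same principle real engines exploit with tight SIMD loops over columnar data:

```python
# Row-at-a-time: per-row dispatch overhead on every value.
def sum_rows(rows: list[dict]) -> float:
    total = 0.0
    for row in rows:
        total += row["price"]
    return total

# Batch-at-a-time: one call over the whole column; the per-value work
# happens in a tight inner loop (a stand-in for SIMD in a real engine).
def sum_batch(price_column: list[float]) -> float:
    return sum(price_column)

rows = [{"price": 1.5}, {"price": 2.5}, {"price": 6.0}]
prices = [r["price"] for r in rows]
print(sum_rows(rows), sum_batch(prices))  # 10.0 10.0
```

The results are identical; the difference is where the loop runs. In a vectorized engine that inner loop is compiled, cache-friendly code over contiguous column data.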

And we don't just consume open source; we upstream our improvements. Our recent post, NetQuack: 4,000× Faster Analytics, details how we reworked a few URL functions we've been relying on for web-like analytics. These contributions ripple beyond us, strengthening the ecosystem we build on. (Related: our 17 upstream contributions across DuckDB, Trino, ClickHouse, and more)
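For a sense of what this class of URL functions does (the function names below are illustrative, not NetQuack's actual API; the real work happens inside DuckDB), here is a pure-Python equivalent using only the standard library:

```python
from urllib.parse import urlsplit

def extract_host(url: str) -> str:
    """Return the hostname portion of a URL (illustrative helper)."""
    return urlsplit(url).hostname or ""

def extract_path(url: str) -> str:
    """Return the path portion of a URL, without query or fragment."""
    return urlsplit(url).path

url = "https://blog.altertable.ai/posts/speed?ref=hn"
print(extract_host(url))  # blog.altertable.ai
print(extract_path(url))  # /posts/speed
```

Applied per row across billions of URLs, the per-call cost of functions like these dominates, which is why reworking them inside the engine yields such outsized speedups.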

Built Like a Race Car

SQL speed isn’t just about code; it’s about what that code runs on. Analytical workloads are bursty: quiet for hours, then spiking as a heavy query hits. We design for that elasticity, so we can scale up fast without burning idle cycles.

But we didn’t go the usual route of AWS EC2 or Google Compute Engine. We’ve been there before. At Algolia, we learned how much control, predictability, and efficiency come from running bare-metal infrastructure tuned for real workloads. So that’s what we’re doing again: high-frequency CPUs, generous RAM, and NVMe SSDs that keep data flowing even when it can’t all fit in memory.

No noisy neighbors. No over-provisioned clusters waiting for queries that never come. Just raw, predictable throughput. Because the fastest query is the one that never waits.

Distance Still Matters

In a lakehouse world, performance doesn’t stop at compute: it travels through the network. Data has to move fast and stay close. We’re testing Amazon S3, Cloudflare R2, and Hetzner Object Storage to balance latency, durability, and cost. With smart caching and distribution, our agents can pull what they need instantly, not seconds later.

The Real Goal

All this work points to something bigger: waking up idle data.
Today, 95% of company data sits dormant: warehoused, unqueried, waiting for someone to ask the right question. We're flipping that dynamic with a data operating system where AI agents work continuously.

Our platform's AI agents continuously model, test, and surface insights before anyone asks. But to make that vision viable, the foundation must be blazingly fast and cost-efficient. When agents run continuously, inefficiency multiplies; performance isn't a vanity metric: it's economics.

We know what great looks like: we've benchmarked extensively with high-end CPUs and variable core counts to understand exactly where the performance ceiling is, and where the competition (especially Snowflake) stands.

Because the future of data isn’t about asking faster questions: it’s about your data answering first.


Sylvain Utard

Co-Founder & CEO

Seasoned leader in B2B SaaS and B2C. Scaled 100+ teams at Algolia (1st hire) & Sorare. Passionate about data, performance and productivity.
