JUNE 17, 2025

3 MIN READ

SYLVAIN UTARD

Relearning the Craft


AI-first development changed everything—prompts are code, green CI means nothing. Here's how we're adapting and relearning.


For over two decades, I've honed the craft of building software. I came up through the ranks of waterfall projects, learned to ship under the pressure of agile sprints, and adopted the best of lean practices: small releases, rapid feedback, shipping to production multiple times a day. I loved the sense of control. Your CI was green? You could sleep at night. Your tests passed? Your code was good. There was a rhythm, a safety net, a set of rules you could master.

Then came AI.

Specifically, large language models. Suddenly, all those comforting rules cracked.

The first shock? A green CI could mean nothing. You could ship seemingly perfect code, unit tests passing, non-regression checks spotless, and still watch the app behave in surprising ways. Why? Because the heart of the system was no longer deterministic logic. It was text. Dynamic, contextual, probabilistic text. You can try to tame this with parameters like temperature (which controls how "creative" or "random" the model's responses are), but even that's not a silver bullet: it's just another knob to turn in an increasingly complex system.

Prompts became the new source code. Not just their content, but their structure, tone, even punctuation. The difference between a helpful AI and a hallucinating one often hinged on a stray comma or an ambiguous word. And prompts, unlike functions, don't throw syntax errors. They fail silently, fuzzily. You only know something's off when a user does.
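One way to cope with that silent, fuzzy failure mode is to treat prompts like versioned source code and run assertion-based evals against every change. Here's a minimal sketch in Ruby: `FakeModel` and its canned reply are hypothetical stand-ins for a real LLM client, and the JSON contract is just an illustration.

```ruby
require "json"

# Prompts fail fuzzily, so we pin them down with eval-style assertions.
# The prompt asks for a machine-checkable JSON shape we can test against.
PROMPT_V2 = <<~PROMPT
  You are a support assistant. Answer in JSON with keys
  "answer" (string) and "confidence" (0.0-1.0). Question: %{question}
PROMPT

class FakeModel
  # A real client would call an API; here we return a canned reply
  # so the harness itself stays deterministic and testable.
  def complete(_prompt)
    '{"answer": "Reset it from the settings page.", "confidence": 0.9}'
  end
end

def eval_prompt(model, template, question)
  raw = model.complete(format(template, question: question))
  parsed = JSON.parse(raw)        # silent failures surface here as exceptions
  failures = []
  failures << "missing answer"   unless parsed["answer"].is_a?(String)
  failures << "confidence range" unless (0.0..1.0).cover?(parsed["confidence"])
  { parsed: parsed, failures: failures }
end

result = eval_prompt(FakeModel.new, PROMPT_V2, "How do I reset my password?")
```

The point isn't the specific schema; it's that a stray comma in the prompt now breaks a test instead of a user session.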

To develop AI-first features, we had to rewire our approach. Our internal dev setup now includes an MCP (Model Context Protocol) server: a kind of local AI control tower that feeds our apps real (but read-only) data from production so we can see how the AI behaves in context. We tinker with prompts, tools, data structures. We pair LLMs with helpers like OpenWebUI, Windmill.dev, and a fair amount of custom Ruby code (shoutout to @paolino for his work on ruby_llm).

Debugging isn't about stack traces anymore. It's about asking: "Why did the AI interpret it that way?" You're not chasing memory leaks, you're investigating intent.

Observability has become central. Every prompt, every response, every model call is logged, annotated, rewatched like game tape. Feedback loops are sacred. Did the AI pick the right tool? Did it parse the user's question correctly? Did it respond helpfully, clearly, ethically? You build telemetry not just for performance, but for behavior.
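What does "telemetry for behavior" look like in practice? A minimal sketch, assuming nothing about our real pipeline: every model call becomes a structured event you can filter and replay. The field names and verdict labels here are illustrative, not an actual schema.

```ruby
require "json"
require "time"

# Behavioral telemetry sketch: capture every model call as a structured
# event so it can be annotated and rewatched like game tape.
class BehaviorLog
  def initialize
    @events = []
  end

  def record(prompt:, response:, tool: nil, verdict: nil)
    @events << {
      at: Time.now.utc.iso8601,
      prompt: prompt,
      response: response,
      tool: tool,        # which tool the model picked, if any
      verdict: verdict   # human or automated label: :helpful, :wrong_tool, ...
    }
  end

  # The "game tape": calls worth rewatching are anything not labeled helpful.
  def flagged
    @events.reject { |e| e[:verdict] == :helpful }
  end

  # One JSON object per line, ready to ship to whatever store you use.
  def to_jsonl
    @events.map { |e| JSON.generate(e) }.join("\n")
  end
end

log = BehaviorLog.new
log.record(prompt: "Summarize Q3 churn", response: "...", tool: "sql_runner", verdict: :helpful)
log.record(prompt: "Refund policy?", response: "...", tool: "sql_runner", verdict: :wrong_tool)
```

The verdict field is the interesting part: it's where performance telemetry ends and behavioral telemetry begins.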

And the pace! Every few weeks, a new model drops. Faster, smarter, cheaper. Switching from GPT-4 to Claude 3 Opus to Gemini 2.5 isn't just a version bump; it's like hiring a new team member with different instincts. You need migration strategies, testbeds, fallback paths. It's dependency management, but on existential steroids.
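A fallback path can be as simple as an ordered chain that records why each model failed before trying the next. A sketch under obvious assumptions: each "client" is just a lambda, and the model names are made up; in practice these would wrap real provider SDKs.

```ruby
# Model-migration sketch: try models in preference order, fall through on
# errors, and keep the failure reasons for observability.
class ModelChain
  def initialize(clients)
    @clients = clients # [[name, callable], ...] in preference order
  end

  def ask(prompt)
    errors = {}
    @clients.each do |name, client|
      begin
        return { model: name, answer: client.call(prompt) }
      rescue StandardError => e
        errors[name] = e.message # record, then fall through to the next model
      end
    end
    raise "all models failed: #{errors}"
  end
end

flaky  = ->(_p) { raise "rate limited" } # hypothetical primary model
stable = ->(p)  { "echo: #{p}" }         # hypothetical fallback model

chain = ModelChain.new([["shiny-new-model", flaky], ["trusted-old-model", stable]])
result = chain.ask("hello")
```

The same chain doubles as a testbed: point the first slot at the new model, keep the old one behind it, and watch the error map to see how often you actually fall back.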

It's disorienting. It's delightful. It breaks every rule I once held dear.

But it also feels like 2005 again: when web 2.0 was exploding and you had to learn fast or get left behind. I'm 20 years into my career, and I'm learning harder than ever. AI isn't just a new toolset. It's a new terrain. And if you're building in it, you need to let go of the old playbook.

The code still matters. But now, the conversation does too. And that changes everything.

If you're excited by this kind of challenge, and if you share our hunger to learn and our core values of purposeful craftsmanship, ownership, transparency, and collective growth, we're hiring. Let's build this future together.


Sylvain Utard, Co-Founder & CEO at Altertable


Seasoned leader in B2B SaaS and B2C. Scaled 100+ teams at Algolia (1st hire) & Sorare. Passionate about data, performance and productivity.

