JUNE 17, 2025

3 MIN READ

SYLVAIN UTARD

Relearning the Craft

AI-first development changed everything—prompts are code, green CI means nothing. Here's how we're adapting and relearning.


For over two decades, I've honed the craft of building software. I came up through the ranks of waterfall projects, learned to ship under the pressure of agile sprints, and adopted the best of lean practices: small releases, rapid feedback, shipping to production multiple times a day. I loved the sense of control. Your CI was green? You could sleep at night. Your tests passed? Your code was good. There was a rhythm, a safety net, a set of rules you could master.

Then came AI.

Specifically, large language models. Suddenly, all those comforting rules cracked.

The first shock? A green CI could mean nothing. You could ship seemingly perfect code, with unit tests passing and non-regression checks spotless, and still watch the app behave in surprising ways. Why? Because the heart of the system was no longer deterministic logic. It was text. Dynamic, contextual, probabilistic text. You can try to tame this with parameters like temperature (which controls how "creative" or "random" the model's responses are), but even that's not a silver bullet: it's just another knob to turn in an increasingly complex system.
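To see why temperature is just a knob and not a fix, here's a minimal sketch of what it does mathematically: scaling the model's raw token scores before turning them into probabilities. The logits and token names are made up for illustration.

```ruby
# Temperature-scaled sampling over raw token scores (logits).
# Lower temperature sharpens the distribution (more deterministic);
# higher temperature flattens it (more "creative"). Values are made up.

def softmax_with_temperature(logits, temperature)
  scaled = logits.map { |l| l / temperature }
  max = scaled.max                       # subtract max for numeric stability
  exps = scaled.map { |l| Math.exp(l - max) }
  sum = exps.sum
  exps.map { |e| e / sum }
end

# Hypothetical next-token scores for "The capital of France is ..."
logits = { "Paris" => 5.0, "Lyon" => 3.0, "banana" => 0.5 }

cold = softmax_with_temperature(logits.values, 0.2)
hot  = softmax_with_temperature(logits.values, 2.0)
```

At temperature 0.2 nearly all the probability mass lands on the top token; at 2.0 the tail tokens gain real probability. Even at very low temperature the output is still a sample from a distribution, which is exactly why a green CI can't vouch for it.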

Prompts became the new source code. Not just their content, but their structure, tone, even punctuation. The difference between a helpful AI and a hallucinating one often hinged on a stray comma or an ambiguous word. And prompts, unlike functions, don't throw syntax errors. They fail silently, fuzzily. You only know something's off when a user does.
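If prompts are source code, one mitigation is to test them like source code: render templates in CI and assert on their structure, so a stray edit fails a build instead of a user. A minimal sketch; the template and its wording are hypothetical, not our actual prompts.

```ruby
# Treat prompt templates like source code: render them in tests and
# assert on structure, so silent prompt drift gets caught in CI.
# This template and its checks are hypothetical examples.

SUPPORT_PROMPT = <<~PROMPT
  You are a support assistant for %{product}.
  Answer only from the provided context. If the answer is not in the
  context, say "I don't know."

  Context:
  %{context}
PROMPT

def render_prompt(template, vars)
  template % vars   # Ruby's named %{} substitution
end

prompt = render_prompt(SUPPORT_PROMPT,
                       product: "Altertable",
                       context: "Docs go here.")
```

This doesn't catch a model misreading an ambiguous word, but it does catch the mechanical failures: a deleted guardrail sentence, a broken placeholder, an accidentally reordered section.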

To develop AI-first features, we had to rewire our approach. Our internal dev setup now includes an MCP (Model Context Protocol) server: a kind of local AI control tower that feeds our apps real (but read-only) data from production so we can see how the AI behaves in context. We tinker with prompts, tools, data structures. We pair LLMs with helpers like OpenWebUI, Windmill.dev, and a fair amount of custom Ruby code (shoutout to @paolino for his work on ruby_llm).
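The "read-only production data" idea can be sketched in a few lines: tools the model may call are registered explicitly, and anything that looks like a write is rejected before it runs. This is a toy illustration of the principle, not our MCP server; all names are hypothetical.

```ruby
# Toy sketch of a read-only tool layer for an AI assistant: tools are
# registered explicitly, and write-shaped queries are rejected up front.

class ReadOnlyToolbox
  WRITE_PATTERN = /\b(insert|update|delete|drop|alter|truncate)\b/i

  def initialize
    @tools = {}
  end

  def register(name, &handler)
    @tools[name] = handler
  end

  def call(name, query)
    raise ArgumentError, "unknown tool: #{name}" unless @tools.key?(name)
    raise ArgumentError, "write rejected: #{query}" if query.match?(WRITE_PATTERN)
    @tools[name].call(query)
  end
end

toolbox = ReadOnlyToolbox.new
toolbox.register("sql") { |q| "ran: #{q}" }   # a real handler would hit a read replica
```

A keyword denylist is deliberately crude; in practice you'd enforce this at the database layer with a read-only replica or role, and the tool layer is just a second line of defense.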

Debugging isn't about stack traces anymore. It's about asking: "Why did the AI interpret it that way?" You're not chasing memory leaks, you're investigating intent.

Observability has become central. Every prompt, every response, every model call is logged, annotated, rewatched like game tape. Feedback loops are sacred. Did the AI pick the right tool? Did it parse the user's question correctly? Did it respond helpfully, clearly, ethically? You build telemetry not just for performance, but for behavior.
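The "game tape" idea boils down to a structured log of every model call, with room for human annotation after the fact. A minimal sketch, assuming a simple in-memory log flushed as JSONL; the shape of the entries is illustrative, not a specific library's API.

```ruby
# Behavioral telemetry sketch: record every model call with its prompt,
# response, and metadata, then annotate entries during review.

require "json"
require "time"

class ModelCallLog
  def initialize
    @entries = []
  end

  def record(model:, prompt:, response:, tool_used: nil)
    @entries << {
      at: Time.now.utc.iso8601,
      model: model,
      prompt: prompt,
      response: response,
      tool_used: tool_used,
      feedback: nil          # filled in later by human review
    }
    @entries.size - 1        # index, so reviewers can annotate this entry
  end

  def annotate(index, feedback)
    @entries[index][:feedback] = feedback
  end

  def to_jsonl
    @entries.map(&:to_json).join("\n")
  end
end
```

Logging prompts alongside responses is the crucial part: a response alone tells you what the model said, but only the pair tells you why.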

And the pace! Every few weeks, a new model drops. Faster, smarter, cheaper. Switching from GPT-4 to Claude 3 Opus to Gemini 2.5 isn't just a version bump; it's like hiring a new team member with different instincts. You need migration strategies, testbeds, fallback paths. It's dependency management, but on existential steroids.

It's disorienting. It's delightful. It breaks every rule I once held dear.

But it also feels like 2005 again: when web 2.0 was exploding and you had to learn fast or get left behind. I'm 20 years into my career, and I'm learning harder than ever. AI isn't just a new toolset. It's a new terrain. And if you're building in it, you need to let go of the old playbook.

The code still matters. But now, the conversation does too. And that changes everything.

If you're excited by this kind of challenge, and if you share our hunger to learn and our core values of purposeful craftsmanship, ownership, transparency, and collective growth, we're hiring. Let's build this future together.

Sylvain Utard, Co-Founder & CEO at Altertable


Seasoned leader in B2B SaaS and B2C. Scaled 100+ teams at Algolia (1st hire) & Sorare. Passionate about data, performance and productivity.

