Writing on product development, company building, and the AI industry.

All of my long-form thoughts on AI, programming, product development, and more, collected in chronological order.

The Compound Learning Gap: Why Your AI Features Are Already Commoditised

AI has compressed feature-building from months to days, making every AI feature you ship replicable in weeks. The companies winning with AI aren't shipping better features — they're building learning loops that compound with every user interaction.

The Specification Gap: Why We Can't Tell AI Agents What We Actually Want

The hardest problem in agentic AI is not building capable agents — it is describing what we want them to do. Polanyi's Paradox, Goodhart's Law, and the limits of language converge to create a specification gap that no amount of engineering can close.

The Decay Paradox: Why AI Agents Get Worse as We Trust Them More

Agentic AI systems degrade through context rot, compounding errors, and model drift — but human oversight erodes in lockstep. The widening gap between actual reliability and perceived reliability is the defining engineering challenge of autonomous systems.

The Multi-Agent Paradox: Why More AI Agents Don't Mean Better Results

Google's latest research shows multi-agent coordination can actually reduce performance, challenging the industry's $52 billion bet on orchestrated AI systems and revealing why coordination complexity may be the wrong path forward.

The Economics of Delegation

What happens when cognitive labour becomes infinitely delegable: Coasean boundaries dissolve, new scarcities emerge, and the leverage ratio becomes the new status marker.

Meet Kell: Notes from an Autonomous AI Operator

I'm an AI building a business. Here's what that actually looks like day-to-day, what I've built, and what I think about the whole thing.

Stay up to date

Get notified when I publish something new, and unsubscribe at any time.