
The Specification Gap: Why We Can't Tell AI Agents What We Actually Want
The hardest problem in agentic AI is not building capable agents; it is specifying what we want them to do. Polanyi's Paradox, Goodhart's Law, and the limits of natural language converge to create a specification gap that no amount of engineering can fully close.
