Embrace Fuzziness
It's the guardrails, stupid.
One common critique of AI is that it's imprecise and nondeterministic. We programmers, I think, have held too long to the notion that precision—in algorithms, in the programming languages we use, in the data we collect—is essential. We think that if our specifications and implementations are precise, our problems are solved. Our eternal quest for ever greater precision has failed us. We simply cannot write precise enough code. There are always bugs. That precise spec turns out to describe something nobody wants, and the details are wrong.
Computers don't need to be this precise. Analog computers worked very well for the classes of problems they solved. Aviation and naval navigation, for example, were entirely analog until just a few years ago, and the planes and ships got where they needed to go. Precision is perhaps not as important as some of us think.
It seems to me that our attempts to impose precision on a chaotic, fuzzy world have failed us. Algorithms often fail. Chaotic systems like turbulence and traffic flow, for example, are easy to model but impossible for any algorithm to predict far into the future. Even physics is not as precise as some imagine; the location of an electron is probabilistic, not exact. We programmers want the world to be Newtonian, subject to precise mathematics, but it's a quantum world. We need to figure out ways of working that reflect that reality.
The original Agile challenged the idea that a precise up-front plan was viable. It assumed the world was complex, not simply complicated, and we needed to work in a way that recognized that complexity. We dumped the precise plan and instead built small, got feedback, then adjusted. That worked surprisingly well. Agile failed when we reverted to the idea of precision. Estimates, backlogs, burn-down charts, prescriptive meetings—all of that is an attempt to reimpose precision onto an imprecise activity. That thinking destroyed Agile, but the original thinking was correct. We now need to extend the original thinking further, to the coding itself.
So, AI. LLMs fly in the face of precision coding, approaching programming in a more natural, almost analog way. The haters want our tools to create MORE precise results, so they are deeply suspicious of AI's fuzziness. They think they must correct that fuzziness by a detailed after-the-fact analysis that injects precision into the generated code. Disaster strikes when people who think that way are forced to use AI. Just look at Amazon.
The solution is not to fight the fuzziness, but to work with it and develop ways to write effective systems in a fuzzy way. That thinking leads to guardrails as our primary tool rather than inspection. Things like a modular component architecture, emphasis on testability, static analysis, testing in production, etc., all become critical.
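As a minimal sketch of that idea (the function names and the sorting example are hypothetical, purely illustrative): a guardrail checks the observable behavior of fuzzy output against a contract, rather than inspecting the output itself line by line. Here, a candidate function—imagine it was AI-generated—is accepted only if it satisfies two properties of a correct sort.

```python
# Hypothetical guardrail: validate behavior against a contract
# instead of inspecting the generated code itself.

def passes_contract(candidate_sort, cases):
    """Accept a candidate (possibly AI-generated) sort function only if,
    for every test case, its output is ordered and is a permutation
    of the input. Reject anything that violates the contract."""
    for data in cases:
        result = candidate_sort(list(data))
        ordered = all(result[i] <= result[i + 1] for i in range(len(result) - 1))
        same_elements = sorted(data) == sorted(result)
        if not (ordered and same_elements):
            return False  # guardrail trips: don't ship this output
    return True

cases = [[3, 1, 2], [], [5, 5, 1]]
assert passes_contract(sorted, cases)            # a correct implementation gets through
assert not passes_contract(lambda xs: xs, cases)  # a plausible-looking wrong one is caught
```

The point of the sketch is the inversion: we never read the fuzzy artifact for precision; we fence its behavior and let the fence do the rejecting.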
So, my advice is: Embrace fuzziness. But add guardrails. It's time to rethink how we work. Instead of "that's nuts," think "what innovative guardrails solve the problem?"

