Entering the probabilistic era of software
Gian Segato’s essay, Building AI Products In The Probabilistic Era, offers a clear perspective on the fundamental changes AI is bringing to software development. He argues that we are moving from a ‘classical’ deterministic world to a ‘quantum’ probabilistic one, a shift that requires a complete rethinking of how we build, manage, and measure software products.
From deterministic funnels to infinite fields
Traditionally, software has been deterministic. We build products that map known inputs to expected results: F(x) -> y. An action x reliably produces an outcome y. Our entire industry, from engineering Service Level Objectives (SLOs) to product management conversion funnels, is built on the premise of predictable, countable, and reliable outcomes.
Gian argues that generative AI breaks this model. We now have a function, F'(?), that accepts a near-infinite range of open-ended inputs and produces a probability distribution of possible outputs. The input is unknown, and the output is stochastic.
In moving to an AI-first world, we transitioned from funnels to infinite fields.
This change means our products can now succeed in ways we never imagined and fail in ways we never intended.
The shift from engineering to empiricism
This new reality demands a different mindset. The classical engineering approach of adding constraints to ensure 100% reliability can “nerf” the model, destroying the very emergent intelligence that makes it powerful.
Instead, Gian suggests a move towards empiricism, where we act more like scientists than traditional engineers. This involves forming hypotheses, testing them rigorously, and accepting that we do not have perfect knowledge of the system.
With AI products, the classical assumptions of full knowledge and control no longer hold. These models are discovered, not engineered. There is a deep unknowability about them that is both powerful and scary.
This approach requires a willingness to fundamentally rethink and even rebuild systems when a new, more capable model is released, as demonstrated by Replit’s complete product re-architecture in three weeks to leverage a new model’s capabilities.
My perspective
Gian’s essay provides a valuable framework for understanding what many of us are experiencing with the rise of generative AI. It feels like we are only at the beginning of grasping the implications of this shift, and his articulation of the move from deterministic to probabilistic systems rings true.
I see a direct parallel in how large corporations must adapt. Implementing these new models is not just about adding a new tool; it requires a complete overhaul of processes and how we define value.
However, this transition also introduces a significant tension, particularly in a corporate environment. While AI excels in ambiguity, many business functions rely on determinism. For tasks like KPI reporting or financial tracking, an 80% or 90% probability of being correct is not sufficient; the answer must be 100% right. The same applies to security, where a system that is 99% secure is, in practice, a critical failure.
This highlights a crucial challenge: how do we harness the power of probabilistic AI for creative and exploratory tasks while maintaining the deterministic integrity required for critical business operations? Finding that balance will be key for full value creation with these new tools.
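One common pattern for striking that balance, sketched here as a hypothetical example rather than anything proposed in the essay, is to wrap the probabilistic model call in a deterministic validation gate: the model is free to generate, but only output that passes strict, fully deterministic checks ever reaches a critical system such as a KPI report. All names (`call_model`, `validate_report`, `kpi_pipeline`) and the JSON schema are invented for illustration.

```python
import json

def call_model(prompt: str) -> str:
    # Stand-in for a probabilistic LLM call; a real one would be stochastic.
    return '{"metric": "revenue", "value": 1250.0}'

def validate_report(raw: str) -> dict:
    """Deterministic checks: valid JSON, required fields, types, ranges."""
    data = json.loads(raw)  # raises JSONDecodeError on malformed output
    if not isinstance(data.get("metric"), str):
        raise ValueError("missing or non-string 'metric' field")
    value = data.get("value")
    if not isinstance(value, (int, float)) or value < 0:
        raise ValueError("missing, non-numeric, or negative 'value' field")
    return data

def kpi_pipeline(prompt: str) -> dict:
    raw = call_model(prompt)
    try:
        return validate_report(raw)  # accept only fully valid output
    except (ValueError, json.JSONDecodeError):
        # Reject rather than guess: fall back to a deterministic path.
        raise RuntimeError("model output rejected by validation gate")
```

The creative, exploratory work stays on the probabilistic side of the gate; the business-critical consumer only ever sees output that has passed checks which are themselves 100% deterministic.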