In a previous piece, I argued that large language models are not enterprise architecture. Judging by the response, that argument is hard to dismiss. The harder question is what comes next: “if not this, then what?”
It’s the right question. Because the problem was never that AI doesn’t work. It clearly does. The problem is that we tried to place it in the wrong layer.
We didn’t fail at AI. We failed at where we put it.
Over the last two years, companies have invested tens of billions into generative AI. The result is not ambiguity. It’s clarity.
A growing body of research, including a widely cited MIT study, shows that around 95% of enterprise generative AI initiatives fail to deliver measurable business impact, despite widespread adoption.
This is not because the models don’t work; it’s because they were inserted into organizations as tools, not as systems. We tried to bolt intelligence onto workflows. What we need are systems where intelligence is the workflow.
Large language models are, by design, stateless: each interaction starts from scratch unless we artificially reconstruct context.
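That statelessness is easy to see in code. The sketch below is illustrative only: `call_model` is a hypothetical stand-in for a real chat-completion endpoint, used to show that the “memory” lives entirely on the client, which must resend the full conversation on every turn.

```python
# Minimal sketch of client-side context reconstruction around a stateless model.
# `call_model` is hypothetical; it only reports how much context it received,
# which makes the statelessness visible.

def call_model(messages):
    # A real endpoint would generate a reply from `messages`.
    return f"reply based on {len(messages)} messages of context"

history = []  # the conversation state lives here, not in the model

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # the full history is resent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("What is our refund policy?")
second = chat("And for enterprise customers?")
# The second turn only "remembers" the first because we resent it ourselves.
```

If the client drops `history`, the model has no recollection of earlier turns; everything an enterprise experiences as “memory” is a layer someone had to build around the model.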
