Polished output is becoming cheap. That shifts the real source of advantage away from production and toward selection, taste, and strategic clarity.
12 March 2026
Intelligent Rails | Where AI meets financial infrastructure
By Omar Al-Bakri
Disclosure: This newsletter is opinion and analysis, not investment, legal, or financial advice.
TL;DR
- AI is making productive-looking behaviour dramatically cheaper.
- That is operationally useful, but it also makes weak strategy easier to disguise.
- The bottleneck is shifting from production to judgment: what matters, what gets ignored, and what deserves action.
- The companies that benefit most from AI will not be the ones that generate the most output. They will be the ones that become more selective.
In This Issue
- Why AI changes the economics of looking competent
- Where commercial teams misread automation
- Why leadership teams are especially vulnerable
- A practical distinction: compression versus abdication
- What operators should do now
One of the more persistent habits in ambitious organisations is mistaking motion for progress.
The calendar fills. More systems appear. More reporting gets generated. More outbound gets sent. More summaries circulate. Everyone can point to activity, and activity creates the reassuring impression that something important is happening.
Sometimes that is true. Often it is not.
What matters in the current AI cycle is not just what the tools can do. It is what they change about the economics of appearing effective.
For a long time, polished output was expensive. A strong memo required effort. A market map required hours. A decent proposal took real preparation. A coherent internal brief implied somebody had done the thinking.
That implication is breaking.
AI can now generate more writing, more research, more outreach drafts, more strategic framing, and more internal artefacts than most teams can properly evaluate. The production constraint is falling fast.
That sounds like pure upside. It is not.
When output becomes cheap, the penalty for weak judgment rises.
The New Risk Is False Confidence
The danger is not that teams will become less productive. Many will become materially more productive in a narrow operational sense.
The danger is that they will confuse increased throughput with improved direction.
That confusion is expensive. It keeps weak ideas alive longer than they deserve. It makes poor prioritisation harder to spot. It allows organisations to substitute polished artefacts for hard choices. And it creates just enough apparent momentum for everyone to postpone the more uncomfortable question: are we doing the right thing, or are we simply doing more things?
This is why I remain sceptical of AI being framed primarily as a productivity story.
Productivity matters. But productivity without judgment often just means acceleration in the wrong direction.
Commercial Teams Will Feel This First
This is already visible across sales, partnerships, and business development.
Give a weak commercial team better automation and it rarely becomes a strong one. It usually becomes a faster version of the same underlying confusion. More prospects get touched. More sequences go out. More dashboards get updated. More research notes appear in the CRM.
Yet the real problems stay in place.
The positioning is still vague. Qualification is still weak. Account selection is still poor. Value articulation is still generic. The team still does not know when to push, when to hold back, or when an opportunity is structurally not worth pursuing.
A team can now generate account research, outbound sequences, call summaries, qualification notes, and follow-up drafts at near-zero marginal cost. But if it is targeting the wrong accounts with generic positioning, all AI has done is industrialise waste.
That matters because most revenue organisations do not fail from insufficient activity. They fail from pursuing the wrong accounts, with the wrong story, at the wrong moment, then misreading motion as pipeline quality.
AI can improve the motion. It does not automatically improve the judgment.
Leadership Has the Same Problem at a Higher Altitude
Leadership teams are vulnerable for the same reason, only with more leverage behind the mistake.
A misaligned executive team can now generate cleaner narratives at speed. Plans read better. Strategy papers sound more coherent. Internal updates become more polished. Market commentary becomes easier to produce.
But if the underlying thinking is weak, the organisation does not become sharper. It becomes better at telling itself an elegant story.
That is a dangerous capability.
One of the hidden risks of this cycle is that it narrows the visible gap between organisations that think clearly and those that merely present themselves well. When every team can generate competent-seeming output, discernment matters more, not less.
Compression Versus Abdication
The most useful distinction I have found is simple.
Use AI for compression.
Use it to collapse administrative drag, tighten preparation, accelerate synthesis, reduce repetition, and make routine execution cheaper.
Do not use it for abdication.
Do not use it to avoid choosing. Do not use it to avoid judgment. Do not use it to make weak strategy look sophisticated. And do not let the presence of a polished artefact trick the organisation into thinking the hard part has already been done.
That line will matter more over the next few years than most teams currently appreciate.
The real value in many businesses still sits in capabilities AI does not replace cleanly: timing, taste, commercial instinct, conflict navigation, prioritisation under uncertainty, and the ability to tell signal from noise before the market makes it obvious.
If those capabilities are weak, more automation does not solve the problem. It scales the consequences.
What Good Operators Should Do Now
The practical question is not whether to use the tools. Of course you should.
The better question is where judgment lives once the production bottleneck disappears.
Three things matter.
First, identify where output has been mistaken for value. In most organisations, entire categories of work survive because they look industrious rather than because they materially improve decisions.
Second, define the decisions that actually create economic value. Not the visible process around them. The decisions themselves. Which accounts to pursue. Which markets to ignore. Which product bets deserve resources. Which risks are acceptable. Which trade-offs are real.
Third, treat AI as leverage for the right people, not as a substitute for the judgment layer. The best use of these systems is to make strong operators faster and clearer. The worst use is to let weak operators produce convincing noise at scale.
The Real Divide
I increasingly think the meaningful divide will not be between companies that adopted AI and companies that did not.
It will be between companies that used AI to become sharper and companies that used AI to become louder.
Those are not the same thing.
The former will remove drag, improve speed, and protect decision quality. The latter will generate a flood of competent-looking artefacts while drifting further from reality.
That is the productivity trap.
AI is reducing the cost of producing work. It is not reducing the importance of knowing what work is worth doing.
If anything, it is making that distinction more valuable.
A Useful Test
If the tooling disappeared tomorrow, would your team still know how to tell a good opportunity from a bad one?
If the answer is no, the problem is not the tooling.
It is the judgment layer.
AI will not eliminate bad judgment. It will disguise it more effectively and scale it more cheaply.
That is why judgment is becoming the premium capability.