The Convergence Problem: Rethinking the 2028 Global Intelligence Crisis

AI makes automation trivial, but without productive imperfection, differentiation disappears.

Citrini Research's "2028 Global Intelligence Crisis" essay dropped in late February 2026 and immediately moved markets. Within days, Citadel Securities published a formal rebuttal. Paul Krugman compared the episode to Orson Welles' War of the Worlds broadcast. Reuters ran a column urging readers to "deflate the AI doom bubble." Mike Konczal formalized distribution critiques using DSGE models. Noah Smith called the scenario "just a scary story." Casey Newton at Platformer noted how unusual it was for a major market-maker to feel compelled to rebut a Substack post.

I held off from writing about it for a few weeks. Large parts of the Citrini argument lean on macroeconomics and labor dynamics that sit outside my expertise. I wanted to read the full range of reactions before saying anything about the parts I do know well: agent capabilities, SaaS, and what it takes to run a real business on top of infrastructure that keeps shifting under you.

The discourse has settled now, and I think there's a problem embedded in the thesis that most of the commentary has missed.

The core narrative goes like this. Intelligence becomes abundant, agents coordinate work, and software builds itself, all of it operating faster than humans can follow. The Citrini scenario maps this through specific transmission channels. Agentic coding starts at the top of the chain by compressing SaaS margins, which forces a buy-side repricing of mid-market software. As intermediation businesses get hollowed out by agents, the labor market starts shedding white-collar roles, and that's where mortgage underwriting begins to weaken. Private credit losses propagate into insurers from there. By the time fiscal capacity becomes the visible problem, tax receipts have already declined while transfer needs are rising. To be clear, I'm just summarizing the Citrini framing here. I haven't independently validated each link in the chain. Some channels are more plausible than others, and the scenario's force comes from stacking them sequentially. No single claim does the work on its own.

Most of the follow-on discussion stayed anchored to a shared assumption: more intelligence produces more output, which produces more value.

That assumption is doing a lot of heavy lifting, and I'm skeptical of it.

AI is accelerating how we build software, and it is also compressing what we build toward sameness. As the cost of building approaches zero, optimization becomes the default behavior of every team. Workflows get tightened up and rough edges get sanded down across the board. Each individual change looks like progress, but in aggregate they collapse the variation that makes markets interesting.

The Bottleneck Moved

AI is genuinely good at removing friction from production. Writing and analysis tasks get measurably faster, and I see it every day in my own work.

Production was always the most visible bottleneck, but it was never the only one. The harder parts of execution have always lived elsewhere. Aligning incentives across teams is its own job, separate from production. So is navigating regulation, and so is the constant work of deciding what not to build. The Citadel Securities rebuttal made a version of this point by emphasizing S-curves and infrastructure constraints. Ethan Mollick called the scenario "hard science fiction," useful for scenario building but not a fully plausible path. Despite exponential capability gains, most organizations have changed very little so far. They already struggle to operationalize the tools they have, and intelligence alone doesn't fix that.

So as AI removes friction from production, the bottleneck doesn't disappear; it just moves. The new bottleneck sits with the people and processes downstream of all that production, the ones who have to make sense of what the model just produced and fold it into how the organization actually works. That shift changes the calculus for a lot of things we take for granted, starting with SaaS.

SaaS Was Never Just About Software

One of the more grounded parts of the Citrini scenario describes what happens to SaaS when agentic coding tools can rapidly replicate mid-market functionality. The essay frames this as an early catalyst, where procurement leverage shifts and margins compress as large portions of the market reprice.

Even skeptics largely agree on this point. The Citadel authors and Noah Smith dismiss the broader scenario but still concede that parts of the software stack are exposed; what's actually contested is how far the pressure propagates beyond the initial software repricing, not whether the first-order effect is real.

That framing misses something more structural. The deeper pressure on SaaS is a convergence problem, and pricing is just a downstream symptom. If software can be generated on demand from the same underlying models, then the differentiators SaaS products relied on (implementation speed, feature depth, workflow design) start to collapse toward the same baseline. Different vendors begin to ship products with the same shape.

B Capital's analysis gets closer to this, framing the shift as a capital allocation problem. They focus less on predicting the macro outcome and more on observing where AI is already compressing labor into software today. In those areas, the pattern is consistent: as generation gets easier, variation disappears.

Push the logic further and you get a natural question. If AI can replicate product features in hours and orchestrate the workflows around them, why would companies keep buying SaaS at all? The common answer is that they won't. They'll just build everything themselves.

That answer misunderstands what SaaS actually provides. A SaaS product centralizes responsibility along with functionality. When you adopt one, you're outsourcing a stack of obligations including compliance, security, uptime, and a support path when things break. You're buying accountability. Replacing the vendor with AI-generated internal systems doesn't remove that responsibility. You inherit all of it, fully, and that creates a kind of complexity most of the 2028 narratives gloss over.

The harder question is who owns the consequences when it breaks, and once you stop paying a vendor, the answer becomes you.

AI redistributes complexity; it never actually removes it. The old engineering work of writing code and integrating systems gets replaced by new work managing agent behavior, prompt drift, auditability, and failure modes you don't fully understand. Granularity matters here. Adoption won't be uniform across an organization; it will vary by task and liability regime. Back-office automation might move fast while regulated decisioning barely moves at all. A slow average can hide fast local collapses, which is exactly how financial contagion often starts.

This is where the "build everything ourselves" narrative starts to fall apart.

The Illusion of Infinite Throughput

A lot of the optimism around AI assumes that increasing output automatically increases value, with each new feature or automation flowing through to the bottom line. But the actual moment of value creation comes later, when something gets adopted.

Companies already struggle to fully use the SaaS tools they pay for today. They can't onboard teams fast enough or fold systems into workflows smoothly enough. Human bandwidth is the bottleneck.

Everett Rogers documented this decades ago in his diffusion of innovations research. Adoption follows S-curves governed by human factors like awareness and trust. Production speed has very little to do with it. Decades of subsequent work in implementation science have formalized what enterprise technology teams already experience daily: the binding constraint on enterprise tech adoption is organizational absorptive capacity.

Customer adaptation rate is the real constraint, and it tracks human comprehension. The speed of the engineering team and the intelligence of the model are second-order at best. You can ship continuously, but if your customers (or your own teams) can't keep up with the changes well enough to operationalize them, all you've done is increase noise.

Push that to its logical extreme and you get something worse than slow adoption: unbounded change velocity with no absorption layer. Picture a SaaS product that ships hundreds of meaningful changes a week, driven by constant AI iteration. At first glance that sounds like progress. In practice it produces a different kind of failure mode. The work of integrating change into a team doesn't scale with model performance; it scales with human attention and trust. Stability becomes the scarce resource, and innovation becomes the surplus.
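A toy back-of-the-envelope sketch makes the mismatch concrete. The numbers are assumptions for illustration, not measurements of any real product: suppose a team ships far more meaningful changes per week than its customers can genuinely absorb.

```python
# Toy model of change velocity vs. absorption capacity.
# Both numbers are illustrative assumptions, not measurements.

SHIPPED_PER_WEEK = 200    # changes a heavily AI-assisted team might ship
ABSORBED_PER_WEEK = 20    # changes a customer team can understand, trust, and operationalize

backlog = 0
for week in range(1, 13):
    backlog += SHIPPED_PER_WEEK                   # everything shipped lands on the pile
    backlog -= min(backlog, ABSORBED_PER_WEEK)    # absorption is capped by human attention
    print(f"week {week:2d}: unabsorbed changes = {backlog}")

# After a quarter the unabsorbed backlog is over 2,000 changes:
# production grew, but the rate at which output converts to value never moved.
```

The specific numbers don't matter. The point is that the gap compounds whenever production velocity exceeds absorption capacity, and absorption capacity doesn't scale with model performance.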

The Citrini essay's own framework supports this reading even though I don't think the authors intended it that way. Their concept of "Ghost GDP," where output and profits rise in the national accounts while money velocity and household spending fall, is a macro version of the same mismatch. Production metrics improve while the human systems that convert output into value can't keep up.

As economics, the concept has real problems. Several critics pointed out that the accounting identities don't hold, and they're right. As a metaphor, though (and I want to be explicit that I'm using it as a metaphor and not as an economic model), it captures something important about the gap between production capacity and adoption capacity. That gap is what I see playing out in enterprise software every day, and it's more useful than most of the rebuttals acknowledged.

The Convergence Problem

Most of the discussion around the 2028 thesis focuses on acceleration. Very little of it explores what happens after everything has accelerated.

I want to bridge two ideas here that might seem contradictory. Earlier I argued that AI introduces new layers of complexity around agent behavior and failure modes that don't map to existing operational playbooks. That's real. Locally, within any given organization, AI adoption creates divergence and surfaces new operational challenges.

There's a countervailing force, though. As organizations wrestle with that local complexity, they collapse onto the same solution space, gravitating toward identical frameworks and architectural patterns. Complexity increases locally while optimization pressure reduces variance globally. Individual companies face novel internal challenges, but the strategies they converge on to solve those challenges start to look structurally similar.

That's the convergence problem. Three things have to be true at once for it to kick in, and modern AI systems happen to make all three true together. The training data is largely shared across vendors. The optimization objectives end up looking similar from one team to the next, because the metrics that matter are the same ones everyone else is also chasing. And the feedback loops are tight enough that small differences get sanded out quickly. Put those conditions in place and AI systems converge directionally, if not perfectly. Workflows and decision-making patterns start to narrow toward the same local maxima.

More formally:

convergence = f(shared data, shared incentives, fast iteration loops)

This is observable already in domains where all those inputs are present. In programmatic advertising, AI agents all optimize for click-through rate against the same signals, iterating in real time. The ads themselves converge in messaging and emotional triggers as a result. The pattern extends well beyond advertising.
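Here's a minimal simulation of those three conditions, to show the mechanism rather than prove it. Everything in it is an illustrative assumption (the objective, the step size, the number of agents), not a model of any real ad platform: independent agents hill-climb a shared, quantifiable objective against a shared signal with tight feedback, and the spread across their strategies collapses.

```python
import random
import statistics

# Minimal sketch of the convergence conditions: N agents optimize the same
# quantifiable objective against shared data, with fast A/B-style feedback.
# All parameters here are illustrative assumptions.

random.seed(0)
SHARED_OPTIMUM = 0.7             # the "best" strategy implied by the shared data
N_AGENTS, ROUNDS, STEP = 12, 50, 0.2

def feedback(strategy: float) -> float:
    """Noisy score: closer to the shared optimum means a higher click-through proxy."""
    return -abs(strategy - SHARED_OPTIMUM) + random.gauss(0, 0.02)

# Agents start with genuinely different strategies (messaging, targeting, tone).
strategies = [random.random() for _ in range(N_AGENTS)]

for r in range(ROUNDS):
    for i, current in enumerate(strategies):
        candidate = current + random.uniform(-STEP, STEP)   # try a small variation
        if feedback(candidate) > feedback(current):          # tight feedback loop
            strategies[i] = candidate                        # keep whatever scores better
    if r % 10 == 0:
        print(f"round {r:2d}: spread across agents = {statistics.pstdev(strategies):.3f}")

# The spread shrinks round over round: nobody coordinated, but shared data,
# a shared objective, and fast iteration pull every agent toward the same place.
```

Remove any one of the three conditions (give each agent private data, a different objective, or slow and ambiguous feedback) and the spread stops collapsing, which is exactly the boundary the next few paragraphs map.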

The rule of thumb here is straightforward. Convergence pressure scales up as inputs become more shared across the industry. It accelerates further once objectives are quantifiable enough to be A/B tested directly. The strongest pressure shows up where feedback loops are tight enough to penalize small deviations in real time. That gives us a map of where to expect convergence.

Convergence pressure is strongest in pricing optimization, feature prioritization, content marketing, procurement strategy, and operational workflows. Those are domains where every team is optimizing toward the same kinds of metrics with the same kinds of training data, and the iteration cycles are fast enough that any given experiment gets decisive feedback inside a quarter. If every company uses AI to optimize procurement against the same market data, you end up with identical supplier relationships and cost structures across an industry. Supply chains start to homogenize. The diversity of supplier relationships that once provided resilience against disruption narrows. When a shock hits, everyone is exposed to the same concentrated risk because everyone optimized the same way.

The pressure runs much weaker in places like brand, distribution networks, proprietary data assets, regulatory positioning, and organizational culture. A company's brand isn't a function you can optimize against a shared dataset. Distribution advantages get built through relationships and physical infrastructure that can't be replicated by running the same model. Regulatory positioning depends on jurisdiction-specific knowledge and institutional relationships that don't generalize. The common thread across all of these is that the inputs are asymmetric and the feedback loops are too slow or ambiguous for AI to compress them into a shared optimum.

That boundary determines where the real competitive risk lives. AI increases pressure toward convergence without guaranteeing it as a dominant equilibrium. Markets rarely converge completely, because input asymmetries and differing incentive structures keep some variance alive. Even with similar models, data asymmetry and distribution channels still create space for differentiation. In the domains where convergence pressure is strongest, however, the effects are already visible and accelerating.

Consider content and go-to-market strategy. If positioning, messaging, and sales playbooks all get optimized against the same corpus, companies don't just build the same products; they tell the same story about them. Some critics of the Citrini piece raised exactly this concern. They asked whether AI would unlock creativity or just amplify existing patterns at scale. That's the right question. If AI primarily reinforces what already works, then widespread adoption doesn't increase efficiency so much as standardize it.

Michael Porter warned about precisely this dynamic in his work on competitive strategy. When competitors all adopt the same best practices, they converge on operational effectiveness while losing strategic positioning. In evolutionary biology, the same pattern shows up as the Red Queen effect, where you have to keep running faster just to stay in the same place. AI turbocharges the treadmill. The treadmill is still a treadmill, though, and you don't pull ahead by running on it. You just run faster alongside everyone else.

Temporal Lock-In

A world of abundant intelligence puts best practices and continuous optimization within everyone's reach at the same time. The end state on the other side of that looks less like a stable monopoly and more like a permanent state of churn: high-frequency competition where leadership is temporary and barriers to entry stop holding.

This produces a structural shift in how customer relationships work. Lock-in becomes temporal. You stay because switching hasn't happened yet, with no real friction holding you in place. Agents can migrate workflows quickly and reconstitute systems on demand. The durability that SaaS companies have relied on starts to fade. Their products still work; switching costs simply stopped providing meaningful friction. What looked like advantage was inertia, and once inertia fades, similar products become interchangeable.

The Limits of Optimization

If full optimization drives convergence, the question shifts from how to optimize faster to where to resist optimization entirely.

Several critiques of the Citrini piece framed governance and regulation as friction that would eventually be automated away. That may be true in some domains. In others, especially where liability is involved, human oversight persists by design. We choose not to automate it, even where we could. The broader commentary ecosystem captured this well: politics isn't downstream of economics in AI adoption. Regulatory and labor policy can change the speed and direction of the transition, and policy inertia is one of the Citrini essay's hidden load-bearing assumptions.

There's a more fundamental reason friction matters, though. Markets don't thrive on perfect optimization. They thrive on asymmetry and imperfect decisions. That's the soil that real differentiation grows in, and it's where brands and creative work come from too.

The important distinction here is between productive imperfection and waste. Waste is a duplicated process nobody needs. Productive imperfection is an approval workflow that forces a team to articulate why a decision matters, or a human review that catches the thing the model optimized away because it didn't fit the objective function. Those are control points and accountability layers, and the difference between waste and productive imperfection is exactly the kind of judgment AI is bad at making.

Consider what happened when Knight Capital deployed faulty code to its automated trading system in August 2012. The system executed erroneous trades that cost the firm $440 million in 45 minutes. The system was plenty optimized; what it lacked was effective friction. Once the bad orders started flowing, there was no staged rollout to limit exposure and no reliable way to stop them. Knight Capital went from profitable to fighting for survival in less than an hour because the speed of execution outpaced the system's ability to contain it.

Now scale that dynamic to every business process in an organization. AI speeds up every decision an organization makes, including the wrong ones. Without intentional friction, the places where you slow down to verify before acting, you don't get faster progress. You get faster failure.
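To make the distinction between waste and productive imperfection a little more concrete, here's a minimal sketch of what intentional friction can look like in an automated pipeline. It's a hypothetical illustration, not a description of how Knight Capital's systems worked or should have worked: an exposure cap that stops executing and queues work for a human once a threshold is crossed.

```python
from dataclasses import dataclass, field
from typing import Any, List

# Sketch of "intentional friction": an exposure cap that halts an automated
# pipeline and routes further work to human review instead of executing at
# full speed. All names and thresholds are illustrative assumptions.

def execute(action: Any) -> str:
    """Stand-in for the fast, automated path (place an order, ship a change)."""
    return f"executed: {action}"

@dataclass
class FrictionGate:
    max_exposure: float                      # most cost one window may accumulate unreviewed
    exposure: float = 0.0
    halted: bool = False
    held: List[Any] = field(default_factory=list)

    def submit(self, action: Any, cost: float) -> str:
        # The slow path: once the cap is hit, nothing executes until a person looks.
        if self.halted or self.exposure + cost > self.max_exposure:
            self.halted = True
            self.held.append(action)
            return f"held for review: {action}"
        self.exposure += cost
        return execute(action)               # the fast path, used only under the cap

    def human_review(self) -> List[Any]:
        """A person inspects the held work and deliberately resets the window."""
        held, self.held = self.held, []
        self.halted, self.exposure = False, 0.0
        return held

gate = FrictionGate(max_exposure=10_000)
for i, cost in enumerate([4_000, 5_000, 3_000, 2_000]):
    print(gate.submit(f"automated action {i}", cost))
# The third action breaches the cap, so it and everything after it wait for a human.
```

The gate is deliberately dumb. Its value isn't intelligence; it's that it refuses to let execution speed exceed the organization's ability to verify what just happened.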

There's a counterargument worth addressing directly. Doesn't AI also enable hyper-personalization, which should increase divergence instead of reducing it? If every customer gets a tailored experience, aren't we moving toward more variety?

On the surface, yes. AI-driven personalization can produce enormous variation in outputs, with the recommendations and interface elements differing for every user. Personalized outputs and convergent systems aren't mutually exclusive, though. They tend to coexist. The underlying optimization strategies and decision frameworks converge even as the customer-facing layer diversifies. Netflix and Spotify don't differentiate through their recommendation algorithms, which are structurally similar. They differentiate through content libraries, brand, and distribution. The personalization layer creates the appearance of divergence while the systems underneath it homogenize.

The convergence risk plays out in what companies become internally, well below the surface customers actually see. If every organization's internal operations converge on the same optimized patterns, surface-level personalization doesn't protect against the deeper structural vulnerabilities of correlated failure modes and synchronized supply chain exposure, both of which erode the operational diversity that makes markets resilient.

This applies to consumer systems too, though the calculus is different. Enterprise systems optimize for efficiency constrained by governance. Consumer systems operate on identity and expression. People want experiences as well as outcomes. We may automate routine decisions like which delivery service we use or when to reorder printer ink, but we're far less likely to automate how we present ourselves to other people. That's a hard boundary on how far intelligence abundance reshapes behavior. Some things remain intentionally human.

If AI pushes every system toward the same optimal path, the differences that make markets healthy collapse. What you get on the other side is equilibrium dressed up as innovation: a flat, frictionless market where everyone is equally capable and nobody stands out.

The Real Risk Is Flattening

The 2028 Global Intelligence Crisis report frames the future as an intelligence supply shock. Most responses focused on job displacement and the speed of productivity change. Those are real concerns, but there's a quieter risk underneath all of them: what happens when everything starts to work the same way?

The Citrini scenario isn't going to play out as written. The full doom loop requires too many extreme assumptions to line up at once, and there are real counterforces, including policy intervention and the inertia of enterprise adoption. The sector-level disruptions the essay describes are credible, though. SaaS repricing and intermediation compression don't need the whole economy to crater to be real, and they could arrive faster than most incumbents expect. The harder question, and the one that keeps me up at night, is what happens in the gap between disruption and adjustment.

Even the "boom case" scenarios, like the companion piece by Michael Bloch that argues AI surplus gets redistributed as a deflationary dividend, acknowledge that timing matters. The surplus might be real in the long run, but the short-run distribution lag (much like Engels' Pause in the Industrial Revolution) could still generate real instability.

AI will make building software trivial, and it will make parts of running a business easier. The harder work of judgment and differentiation remains. If everything becomes equally optimized, those are the things that determine which companies are still standing in five years.

The real risk is flattening, and that's a different problem from disruption. Markets run on variation as much as they run on automation, and possibly more. When the cost of building approaches zero, optimization becomes the default. Every workflow gets improved in the same direction. When everything gets better in the same way, the system doesn't actually improve. It converges.

That's the paradox.

In a world where AI removes every bottleneck to production, the bottleneck that remains is the one we started with: human absorption. The ability to judge, and to choose what not to optimize. The advantage goes to whoever understands most clearly what should stay slow, regardless of who builds fastest.

Without the small imperfections that force divergence, differentiation gets harder until eventually it disappears. The problem AI creates is a lack of differentiation, even as it solves the lack of intelligence.