We’re Already Living in a Post-Serverless World

This isn’t really about serverless. It’s about what happens when infrastructure stops asking humans to guess the future.

For more than a decade, “serverless” has been a debate.
Is it real? Is it ready? Is it too limiting? Too expensive? Too opaque?

Those questions made sense in 2014. They make less sense today.

Not because we’ve outgrown serverless. But because it quietly stopped being the point.

Serverless Was Never the Destination

Serverless was always a means to an end.

The goal wasn’t functions or managed databases or scale-to-zero pricing models. The goal was to stop spending our limited time thinking about infrastructure that didn’t meaningfully differentiate our products.

Early serverless didn’t fully deliver on that promise. It came with tradeoffs that were hard to ignore: cold starts, missing primitives, awkward workflows, and a long list of caveats that architects learned to recite by heart.

So we built compensating systems. We mixed in servers. We added provisioned databases, cache clusters, and NAT Gateways. We kept explaining, again and again, why serverless was good, if only you understood its constraints.

That era is ending.

The Paper Cuts Are Healing

Over the last few years, and especially the last few months, the sharp edges have started to disappear. Not all at once. Not with a single announcement. But steadily.

Some of this progress is subtle, but it’s tangible. Lambda Managed Instances changed the at-scale execution model, significantly reducing the pressure to prematurely replatform or "graduate" from serverless. Durable Functions reduced the need for external orchestrators that bifurcate business logic. Aurora DSQL can now provision and scale in seconds, not minutes, removing one of the biggest friction points that previously pushed teams toward always-on, usually over-provisioned infrastructure. Even streaming, long-lived connections, and stateful patterns no longer feel like architectural outliers.
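
To make the orchestration point concrete, here’s a minimal sketch of the durable-execution idea in TypeScript. The DurableContext, step, and sleep names are hypothetical stand-ins, not any vendor’s actual API; what they illustrate is a workflow that reads as one ordinary function while the runtime checkpoints its progress.

```typescript
// Hypothetical, illustrative interface (not any vendor's actual API).
// The runtime checkpoints each step's result, so a crash or restart
// resumes the workflow instead of re-running completed work.
interface DurableContext {
  step<T>(name: string, fn: () => Promise<T>): Promise<T>; // run once, persist the result
  sleep(ms: number): Promise<void>; // durable wait: no compute held while suspended
}

async function fulfillOrder(ctx: DurableContext, orderId: string) {
  // Business logic stays in one function instead of being split between
  // application code and an external orchestrator's state-machine definition.
  const payment = await ctx.step("charge", () => chargeCard(orderId));
  await ctx.sleep(1_000);
  await ctx.step("ship", () => createShipment(orderId, payment.receiptId));
}

// Stand-ins so the sketch is self-contained.
async function chargeCard(orderId: string) {
  return { receiptId: `rcpt-${orderId}` };
}
async function createShipment(orderId: string, receiptId: string) {
  console.log(`shipping ${orderId} with ${receiptId}`);
}

// Toy in-memory context so the sketch actually runs; a real runtime would
// persist step results durably and replay them on recovery.
const memo = new Map<string, unknown>();
const ctx: DurableContext = {
  async step<T>(name: string, fn: () => Promise<T>): Promise<T> {
    if (!memo.has(name)) memo.set(name, await fn());
    return memo.get(name) as T;
  },
  sleep: (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms)),
};

fulfillOrder(ctx, "o-123").catch(console.error);
```

The in-memory context only mimics the semantics, but the shift is visible: retries, checkpoints, and waits become runtime concerns instead of a second codebase living in an orchestration service.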

None of these changes are revolutionary on their own. But together, they’re transformative.

They don’t make serverless perfect. And none of this means the tradeoffs are gone.

Cold starts still exist. Stateful serverless is still fragmented and network-dependent. Debugging distributed, event-driven systems can be harder than debugging well-understood application stacks. And for many large or regulated organizations, the inertia of existing platforms (e.g., Kubernetes, VMs, long-lived services) hasn’t magically disappeared. In plenty of environments, serverless is still a conscious choice, debated and defended. That skepticism isn’t wrong. It reflects where many teams actually are today, not where the marketing slide says they should be.

But for many of us, these improvements make serverless reliable enough to fade into the background.

When the Default Disappears

Here’s the strange part: as serverless finally starts to deliver on its original promise, we’re talking about it less.

That’s not a failure. It’s a signal.

The most successful platforms fade into the background. We don’t debate TCP/IP. We rarely argue about HTTP servers. We don’t ask whether managed storage is “real infrastructure.”

Serverless is heading in the same direction. For startups, greenfield projects, and cloud-native SaaS, it’s increasingly the default assumption.

And once something is assumed, we stop naming it.

The Shift Up the Stack

At the same time, the center of gravity in software is moving fast. Really fast.

AI-driven systems, agentic workflows, real-time data pipelines, and adaptive products don’t always map cleanly to static infrastructure models. They demand flexibility. They demand experimentation. They demand the ability to change direction without rewriting the foundation every time.

That’s where serverless stops being a category and starts being table stakes.

Not because every workload runs on functions. But because the expectations around infrastructure have changed.

Provisioning speed matters.
Operational drag matters.
Reversibility matters.

Post-Serverless Doesn’t Mean “After”

A “post-serverless world” doesn’t mean serverless is over.

It means we’ve internalized its lessons.

We’ve learned that:

  • Infrastructure should adapt to reality, not the other way around
  • Paying for idle capacity is usually a smell
  • Most systems benefit from starting small and growing into complexity
  • The fastest teams are the ones with the fewest irreversible decisions

Those ideas don’t belong to a product category anymore. They belong to how modern systems are built.

What Comes Next

The next shift isn’t about which infrastructure you choose.
It’s about who (or what) does the choosing.

We’re already seeing the early signals. Adaptive infrastructure that responds to load and behavior. Self-provisioning runtimes that spin up just enough capability at the right moment. Infrastructure from Code models that describe intent instead of topology. Vercel recently announced their vision of self-driving infrastructure: systems that automatically optimize themselves based on observed behavior.
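
To show what “intent instead of topology” can look like, here’s a toy Infrastructure-from-Code sketch. Everything in it is hypothetical and illustrative, not any real framework’s API: the program declares which events exist and who reacts to them, and a toolchain would be free to derive the bus, queues, retries, and scaling from those declarations.

```typescript
// Hypothetical, illustrative helpers (not a real framework's API). The
// in-memory bus only mimics the semantics: an Infrastructure-from-Code
// toolchain would read declarations like these and provision the actual
// resources itself.
type Handler<T> = (payload: T) => Promise<void>;
const subscribers = new Map<string, Handler<unknown>[]>();

// Intent: "something reacts to this event." Which managed queue or
// subscription backs it is the platform's problem, not declared here.
function onEvent<T>(name: string, handler: Handler<T>) {
  const existing = subscribers.get(name) ?? [];
  existing.push(handler as Handler<unknown>);
  subscribers.set(name, existing);
}

// Intent: "this event happened." No broker, topic ARN, or endpoint is named.
async function emit<T>(name: string, payload: T) {
  for (const handler of subscribers.get(name) ?? []) {
    await handler(payload);
  }
}

// Application code reads as pure business intent.
onEvent<{ id: string }>("order.created", async (order) => {
  console.log(`sending confirmation for order ${order.id}`);
});

emit("order.created", { id: "o-42" }).catch(console.error);
```

The map is only a stand-in; the design choice being illustrated is that topology is derived from the program rather than maintained beside it.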

These aren’t isolated ideas. They’re all pointing in the same direction.

AI infrastructure accelerates this transition. Non-deterministic workloads don’t just scale differently; they behave differently. They force systems to observe themselves, reason about tradeoffs in real time, and continuously rebalance cost, performance, and correctness. Static infrastructure can’t keep up with that pace.

The logical conclusion isn’t just infrastructure that configures itself. It’s infrastructure that understands why it exists. Systems that infer intent, observe outcomes, and adapt continuously. All without requiring humans to pre-decide capacity, execution models, or even which services should be involved. At that point, infrastructure becomes a decision engine, not a deployment artifact.

That’s the real endgame. Not self-driving in the sense of automation alone, but autonomous in the sense of awareness and adaptation.

In that world, choosing infrastructure looks a lot like choosing a compiler today. Important, for sure, but rarely a daily concern.

That’s the autonomous cloud. And it doesn’t arrive in a single release or rebrand. It emerges gradually, as layers of decision-making move from humans to software. The transition will be uneven. Some domains will resist it longer. Humans will remain in the loop, increasingly focused on intent, oversight, and constraints rather than execution.

The momentum is already there. Capital, talent, and attention are flowing toward systems that can adapt faster than humans can predict. Not because anyone declared it so, but because the economics and complexity leave little alternative.

In that sense, we’re already living in a post-serverless world.

Not because serverless won.
But because it stopped asking humans to guess the future.