
Corey Quinn, Cloud Economist (and perpetual thorn in AWS’s side), recently published a post titled The Unfulfilled Promise of Serverless. Twitter reacted as we would expect, with plenty of folks feeling vindicated, others professing their staunch disagreement, and perhaps even a few now questioning their life (and technology) choices. My take is that he’s not wrong, but he’s also not entirely right.
Let me start by saying I’m a big fan of Corey’s (seriously, this video about AppConfig might be the funniest thing he’s ever created). However, much of Corey’s brand is predicated on creating controversy with his unique blend of snark and clever shitposting (the Twitterverse wants what the Twitterverse wants). So while his hot take on serverless might resonate with many of us, it’s important to understand both its motivation and its missing context.
In order for a promise to be “unfulfilled,” we must understand both what was promised and when delivery was promised. While this might sound like a semantic argument, it points to an important piece of context: we’ve barely scratched the surface of what’s possible with serverless technology. I agree with Corey that Werner’s “just write business logic” claim conveniently ignored the mountain of configuration that came with it, but it also put a stake in the ground and set the vision. To the best of my knowledge, no one ever gave a timeline to achieve everything that was promised, but what I’ve seen over the last seven years are massive investments by public cloud providers, established players, and startups laying the groundwork that brings us closer to that vision. No, we’re not there yet, but that just means we have more work to do.
Below I’ve tried to add some context to the main points Corey made in his post. As I said, many of his points resonate with me, but hopefully this brings some comfort to those of you that were disheartened by his message. For those of you that have yet to “buy in” to serverless, I simply ask that you keep an open mind. The vision for serverless and its current reality are very different, and like with most things, we’re likely all trying to get to the same place.
Serverless, but not potable
Yes, potable, you read that right. The (lack of) portability with serverless is less about lock-in, and more about the lack of consensus around implementation standards. Sure, if you’re AWS, keeping customers in your ecosystem by providing native services with first-class integrations (even if they’re as terrible as Cognito) makes it harder for them to look elsewhere and even harder for them to migrate away later. But if I take my cynical-colored glasses off for a moment, there’s also a greater than zero percent chance that some of the amazing engineers and product managers that work behind the curtain at AWS are actually trying to “innovate.”
For me, there is a major disconnect between AWS and most of the other major cloud providers when it comes to their approach to serverless. AWS took the same security and isolation properties of hardware virtualization technology and built them into Firecracker. Why? So that they could optimize the execution of microVMs by moving them as close to the metal as possible. What did just about everyone else do? They built their serverless compute solutions in containers running on top of Kubernetes, just about as far away from the metal as you can get.
Kelsey Hightower recently tweeted that “Serverless based on containers is the future.” Really? I guess that’s true if we’re prepared to give up on the optimizations already possible by letting the platform be the orchestrator rather than requiring 30 layers of abstraction just to execute a few lines of code. Even if we go down the container route, the “portability” problem is only partially solved. As Forrest Brazeal has pointed out, choosing “cloud-agnostic” technology often means going with the “lowest common denominator.” In other words, regardless of where you “port” your serverless compute, your (wise) reliance on CSP-specific services will make migrating to other providers the bigger hurdle. This is true for all of “cloud,” by the way, and isn’t just a serverless phenomenon.
So back to my original point. I see “lock-in” as merely an excuse for people’s unwillingness to buy into the uncertainty. Sure, there is no competitor to AWS Step Functions, but that’s because this service is ahead of its time—not because state machines haven’t been a thing since the dawn of computing, but because other cloud providers have a short-sighted vision for serverless. Customers that are willing to go “all in” on AWS’s vision might lose portability, but what they gain in capabilities (at least in my opinion) far outweighs any potential drawbacks. It’s not about portability; it’s the fact that too many people want to reach for the familiar and are unwilling to drink the Kool Aid.
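To make the Step Functions point concrete: the service lets you express a workflow as a declarative state machine rather than as orchestration code. Below is a minimal Amazon States Language (ASL) definition, written here as a Python dict for readability. The Lambda ARN and state names are placeholders, not a real deployment.

```python
import json

# A minimal Amazon States Language (ASL) state machine: one Task state
# that invokes a (hypothetical) Lambda function with retries, then a
# terminal Succeed state. The ARN below is a placeholder.
state_machine = {
    "Comment": "Hypothetical order-processing workflow",
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}
            ],
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}

# This JSON document is what you would hand to Step Functions; the
# service handles execution, state transitions, retries, and history.
print(json.dumps(state_machine, indent=2))
```

The retry policy, error handling, and state transitions all live in the definition itself, which is exactly the kind of capability that’s hard to “port” to a provider without an equivalent service.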
The imperceptible value fallacy
On a recent episode of his podcast, Corey so eloquently stated that “serverless sucks,” to which Ant Stanley, co-founder of A Cloud Guru/ServerlessConf/ServerlessDays and arguably one of the earliest of early adopters of serverless, agreed. It’s not the serverless technology that sucks, but rather the complexity we’ve created around it. As Ant explains, “I think folks have focused far too much on the technical aspects of serverless, and what is serverless and not serverless, or how you deploy something, or how you monitor something, observability, instead of going back to basics and first principles of what is this thing? Why should you do it? How do you do it? And how do we make that easy? There’s no real focus on user experience and adoption for inexperienced folks.”
Corey’s experience with serverless isn’t an outlier. AWS has come a long way with CloudWatch and its related tools, but the fact that an entirely new industry was required to help you understand and observe your serverless applications (Lumigo, Epsagon, IOpipe, Thundra, Dashbird, etc.) should have been the canary in the coal mine for widespread adoption. Most of these tools have either been acquired or have pivoted to focus on more than just serverless because, as Corey said, they faced the unfortunate reality that a pure serverless market just wasn’t valuable enough.
Serverless adoption (including the use of “serverless services” that Corey casually dismisses) continues to grow year over year. But even though more than 50% of the “thousands of companies in Datadog’s customer base” are using some form of FaaS, that figure is only based on whether “they ran at least five distinct functions in a given month.” While we don’t have the underlying data, this suggests that FaaS makes up just a portion of their cloud workloads. So maybe we don’t have full buy-in at this point, but plenty of cloud companies are embracing the serverless paradigm and extracting some value from it.
However, there is a major problem with partial buy-in: total cost of ownership (TCO). The TCO of serverless (or at least the promise of it) is significantly lower than that of setting up and maintaining traditional compute (yes, even if the compute charges are higher). But the incremental cost of adding workloads to an existing Kubernetes cluster or cloud environment supported by an experienced Ops team is minimal. For a startup (or even a small team within a larger org), going “serverless first” should be a no-brainer. Unfortunately, the value of those customers probably won’t move the needle, hence the feeling that it “doesn’t feel like it’s solving an expensive problem.”
Who isn’t going to own the upgrades?
I had Ant Stanley on the Serverless Chats podcast not long after his interview with Corey, and when I asked about the complexity issue, he also pointed out that the number of choices with serverless creates more cognitive load on developers when making decisions.
“I would love to see a simplification of serverless going, ‘Hey, if you want a relational database, this is it. If you want a NoSQL database system, this is it. If you want to run synchronous workloads, this is where you do it. If you want asynchronous workloads, this is where you do it,’ and give one option. And here’s your development workflow, and one set of tooling to make that work. But I think we’re way off that.”
While Ant’s vision of serverless simplicity would certainly make things easier, others have posited ideas that get us much closer to the ultimate serverless vision. Shawn @SWYX Wang recently articulated his vision of The Self Provisioning Runtime. He says, “If the Platonic ideal of Developer Experience is a world where you ‘Just Write Business Logic’, the logical endgame is a language+infrastructure combination that figures out everything else.”
At the time Shawn published his article in late August, he was unaware of Serverless Cloud, the project that my team has been working on at Serverless, Inc. for nearly a year. The goal was to achieve almost exactly what Shawn was describing: a platform that would interpret your code and deploy the necessary infrastructure to best support it. It’s what we call Infrastructure from Code at Serverless Cloud. Others are doing similar work, like Dark Lang and Lambdragon, along with the projects mentioned in Shawn’s post.
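To illustrate the idea behind Infrastructure from Code, here is a toy sketch: scan application source for the APIs it uses and infer the cloud resources it needs. The `http.get`/`data.set` calls and the inference rules are purely illustrative; this is not Serverless Cloud’s actual implementation.

```python
import re

# Hypothetical application source using an imaginary serverless SDK:
# an HTTP route plus a key-value data call.
APP_SOURCE = """
http.get('/orders/:id', handler)
data.set('order:123', {'status': 'shipped'})
"""

def infer_resources(source: str) -> set:
    """Inspect source code and return the set of resources it implies."""
    resources = set()
    # An HTTP route implies an API endpoint backed by a function.
    if re.search(r"\bhttp\.(get|post|put|delete)\(", source):
        resources.add("api-gateway")
        resources.add("function")
    # A data call implies a managed key-value store.
    if re.search(r"\bdata\.(get|set)\(", source):
        resources.add("key-value-store")
    return resources

print(infer_resources(APP_SOURCE))
# → {'api-gateway', 'function', 'key-value-store'}
```

A real platform would do this with static analysis or runtime interception rather than regexes, but the principle is the same: the code itself becomes the source of truth for the infrastructure.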
But we’re now at the point where the technological groundwork painstakingly laid over the last seven years is ready for the next major step towards the ultimate vision. The complexity added by configuration was an intermediate step that told the cloud what was needed to run our code. The near future is a world that no longer requires that step. So, yes, there are plenty of things still required for serverless computing to reach its full potential, but we’re a lot closer to “just write business logic” than you might think. And when anyone can fully express their application with just their code, then the true potential of serverless will finally be met.
Did you like this post? 👍 Do you want more? 🙌 Follow me on Twitter or check out some of the projects I’m working on.
Agree with you that as a startup, particularly with limited resources, serverless can be a huge win in terms of allowing focus on value-add instead of the undifferentiated heavy lifting of infrastructure.
We did find that AWS Amplify and AppSync got us pretty close to ‘Just Write Business Logic’ (well, plus a lot of front-end UI code, but we would need that anyway). The backend was pretty wonderful, with business logic maintained as Lambdas cleanly attached and managed via AppSync GraphQL mutations/schema, and all the CRUD stuff done almost completely automatically by AppSync.
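For readers unfamiliar with the pattern the commenter describes, a direct Lambda resolver for AppSync looks roughly like this: AppSync invokes the function with the GraphQL field name and arguments, and the handler dispatches to the right piece of business logic. The `getOrder`/`createOrder` resolvers here are stand-ins, not the commenter’s real schema.

```python
# Sketch of an AWS AppSync direct Lambda resolver. AppSync sends the
# resolved field name in event["info"]["fieldName"] and the GraphQL
# arguments in event["arguments"]; the handler routes between them.

def get_order(args):
    # Placeholder business logic: would normally read from a data source.
    return {"id": args["id"], "status": "shipped"}

def create_order(args):
    # Placeholder: would normally persist and return the new record.
    return {"id": "order-1", "status": "created"}

# Map GraphQL field names (hypothetical schema) to business logic.
RESOLVERS = {
    "getOrder": get_order,
    "createOrder": create_order,
}

def handler(event, context):
    field = event["info"]["fieldName"]
    return RESOLVERS[field](event["arguments"])

# Example of the invocation shape AppSync would send:
event = {"info": {"fieldName": "getOrder"}, "arguments": {"id": "42"}}
print(handler(event, None))
# → {'id': '42', 'status': 'shipped'}
```

The appeal is that each resolver stays a small, testable function while AppSync owns the GraphQL layer, auth, and the CRUD plumbing.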
What do you say to someone who is confused by the multiple variations/implementations of serverless that exist?
That’s the million-dollar question. I think starting out with an abstraction is the best way to get going, but I also think there are limitations to this approach. Architect, Amplify, Serverless Cloud, etc. will get you most of the way there, but at some point, depending on your use case, you’ll need to dive into the primitives. I will say this: I don’t think “containers” are the future of serverless; they’re just another intermediate step to placate the current masses. So serverless will likely continue to mean lots of different things to different people, but I’d be very surprised if bloated Kubernetes-like stacks win the war.
The “just write business logic” term is marketing fluff that unfortunately resonated all too well with developers. They overpromised and underdelivered, surprise surprise 🙂
I think we need to slightly readjust our expectations of what serverless provides. It’s still the best possible infrastructure in my view. The way I see it, the point of serverless is to provide an abstraction layer between the business logic and the cloud. This layer means that we as developers don’t need to waste time maintaining the infrastructure, scaling it in and out, ensuring it’s fault tolerant, and making sure the underlying “hardware” resources are optimally used, thus giving us a better price point.
With serverless we are still building, architecting and deploying infrastructure, but once it’s up and running, that’s where our responsibilities stop and are taken over by the cloud vendor.
I believe that with this perspective, serverless makes much more sense. If you start your project with that mindset, you’ll have a pleasant experience with serverless. If you approach it from the “just write business logic” angle, well, it won’t be as pretty, I can tell you that. Just make sure your budget can compensate for it 🙂
Great analysis of the current end-user interpretation of Serverless. I am sure, and sincerely hope, that everyone who understands serverless agrees with the end goal. But now that we have Knative 1.0, do you think we are one step closer to “just write business logic”?
Every little bit helps, but as long as container orchestration systems are still in the mix, I think we’re not making any real gains.
Well.
Is Serverless a silver bullet that solves all the problems? No. But it’s a powerful approach that a modern architect must consider when designing an application.
Is Serverless supposed to be “easy” to implement or deploy? No. But it’s probably the only way to create real cloud-native applications.
We’ll continue to have both Serverless and containers forever, as it is meaningless to think of this as a competition between the two worlds (there are a lot of people today who moved to the cloud by buying virtual machines, so…).
Serverless is definitely still something for people who like to go where eagles dare, as it is missing standards and is far from opinionated enough to be accessible to people coming from more traditional technologies.
In any case Serverless is here to stay.
I disagree with this statement: “Kelsey Hightower recently tweeted that “Serverless based on containers is the future.” Really? I guess that’s true if we’re prepared to give up on the optimizations already possible by letting the platform be the orchestrator rather than requiring 30 layers of abstraction just to execute a few lines of code.”
The statement focuses primarily on the FaaS aspect of serverless, but the same reasoning can be stretched to its other aspects too. So why stop there? Doing so can easily lead to the point of view that serverless is not the way to go because it lacks “optimizations.”
I think what we miss when we focus on the “container” part of Kelsey’s argument is that (at least in AWS) the compute is not portable. That is, how can I take a workload from ECS and easily migrate it to Lambda, and vice versa? This was a blind spot for me until I started using GCP more frequently. Having an abstraction like a container as the packaging format makes it effortless to deploy the workload to any kind of compute (Kubernetes, Cloud Functions, Cloud Run, Compute Engine). I think that was the essence of Kelsey’s comment.

Kubernetes is a technological dead end FOR MOST PEOPLE.
If you’re GCP or AWS, providing compute as an end product, then you absolutely need a tool like K8s. For the rest of us it is just a way to virtualise a data centre; the only thing it gets rid of is the bare metal and all the other complexity remains. I believe that the likes of Kelsey Hightower realise this but still want to reserve a place for what is an amazing system, IF you’re a cloud provider.
With serverless we will realise compute as the fourth utility. Like electricity, we will not have to concern ourselves with the equivalent of generators, transformers, substations and miles of wiring. We’ll just need to flick the switch on the wall to allow us to do what we actually want to achieve, such as boil the kettle.