The side effect of increasing developer responsibility

Posted in #serverless, #cloud

It wasn't that long ago that the vast majority of developers didn't worry much about infrastructure. Sure, there were plenty of us configuring Linux servers and setting up the occasional MySQL database, but that certainly wasn't the norm. If you worked in a larger organization, your code was likely checked into Perforce or Subversion, and then magically ended up in production (sometimes days, weeks, or even months later). For many, this is probably still how they ship code.

During the late 1990s, web hosting companies sprang up and allowed a new breed of "web developer" to push code directly to servers accessible from the Internet. This mostly involved FTPing files to a single web server, often shared, with almost zero ability to configure anything beyond a few Apache overrides in a .htaccess file. When cPanel became widely available, developers suddenly had a bit more control: one-click installs of MySQL and PHP opened up an explosion of new use cases to a rapidly growing generation of developers.

Then came Drupal and WordPress, opening up web publishing to an entirely new population. The search wars were raging, with Yahoo! and Google vying for the top spot (Lycos and AltaVista were already history at this point). Friendster and MySpace introduced a new kind of online social interaction, driving massive engagement and even more demand. A proliferation of data centers followed as more and more companies shifted their resources to grow their businesses on the backbone of the Internet. Computing became global, and the days of the single server were over.

If you wanted to be a serious player, you needed to buy (lots of) servers, rent colocation space, pay for a large amount of mostly unused bandwidth, license software, and become an expert in operations. This was business-critical work, staffed with specialists whose job was to keep the servers running. It wasn't easy and it wasn't cheap, but the world had changed. There were plenty of WordPress and Ruby on Rails sites still out there (heck, there still are), but those looking to grow a business beyond a hobby were learning that high availability was the new standard, and they were falling behind.

Renting dedicated servers was nothing new, but when AWS launched EC2 in 2006, the overhead of running your own servers virtually disappeared. "Elastic compute" rolled up thousands of dollars in monthly costs into a per-hour charge. When I moved my hosting company from a colocation facility to AWS in 2009, my monthly cost dropped by over 80%. Managing EC2 instances wasn't particularly easy at first, but it enabled the next generation of hyper-scalers to build without limits, taking advantage of the massive growth of the nascent mobile web sparked by the success of the iPhone.

Cloud architectures were still relatively simple: autoscaling VMs behind a load balancer, with a few key services like S3 and SQS filling in the gaps. Developers were still mostly throwing code over the wall. Then CloudFormation (and others like Puppet and Chef) kicked off a new wave of automation, spinning up VMs and infrastructure with easily repeatable workflows. Then came the deluge of cloud services: DynamoDB, Kinesis, Lambda, API Gateway, Step Functions, and more. We moved beyond multi-VM to multi-service. Applications changed from horizontally scaled VMs connected to a database cluster to multiple services stitched together.

It happened over the course of a few years, but suddenly developers were no longer just being asked to use the AWS SDK to connect to a DynamoDB table or install the Kinesis Client Library. They were being asked to deploy and configure those resources, and to set the permissions needed to connect to them, too. Whether you were writing monoliths or SOAs or microservices, your code was no longer deployed to a cluster of servers, but split into smaller parts that needed to know where they were running, how and from where they would be invoked, the permissions they needed, and the specific format they were required to return.
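To make that concrete, here's a minimal sketch of what one of those "smaller parts" might look like (the TABLE_NAME variable, the orders table, and the route are all made up for illustration): a Python Lambda handler that has to read its configuration from the environment, rely on an IAM role defined somewhere else for its DynamoDB permissions, and hand back the exact response envelope API Gateway's proxy integration expects.

```python
import json
import os

import boto3  # AWS SDK for Python

# Where am I running? The table name is injected at deploy time, not written in code.
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])


def handler(event, context):
    # How am I invoked? This assumes an API Gateway proxy integration,
    # which also dictates the shape of the response below.
    order_id = event["pathParameters"]["orderId"]

    # What am I allowed to do? The function's IAM role (defined elsewhere)
    # must grant dynamodb:GetItem on this table, or this call fails at runtime.
    item = table.get_item(Key={"pk": order_id}).get("Item")

    # What format must I return? API Gateway's proxy integration requires this envelope.
    return {
        "statusCode": 200 if item else 404,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item or {"error": "not found"}),
    }
```

Almost none of that is business logic. It's glue that only exists because of where, and how, the code runs.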

As soon as developers began to get comfortable with that, there was another shift. Now, instead of writing code, you needed to start encapsulating business logic and processes into cloud-specific configurations, connecting services together directly rather than using the familiar instruction sets we all have so deeply ingrained. Configuration over code, because, yes, your code is a liability and the cloud can do it better than you.
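Here's a rough sketch of what that shift can look like in practice (the state machine name, table name, and role ARN are all hypothetical): the same "save an order" step expressed not as handler code at all, but as a Step Functions definition that writes to DynamoDB through a direct service integration, deployed with a single SDK call.

```python
import json

import boto3  # AWS SDK for Python

# The "logic" is now a document, not a function. Step Functions calls
# DynamoDB itself via a direct service integration; no application code runs.
definition = {
    "StartAt": "SaveOrder",
    "States": {
        "SaveOrder": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:putItem",
            "Parameters": {
                "TableName": "orders",
                "Item": {
                    "pk": {"S.$": "$.orderId"},  # pulled from the input via JSONPath
                    "status": {"S": "NEW"},
                },
            },
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="save-order",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/save-order-role",  # placeholder role
)
```

The trade-off is the one described above: less code to own, but the behavior now lives in a cloud-specific format that you still have to learn, version, and debug.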

Over the course of 15 years or so, developers became responsible for far more than just business logic. Infrastructure and cloud architecture, build and deployment pipelines, instrumentation and observability, security and compliance, plus a whole lot more now fall directly on the shoulders of the modern-day "cloud developer." I don't think it's possible for one person to master all of these disciplines. I'm not even sure they can effectively be spread across a team of developers without really good communication and strict review processes in place. And even then.

We're at an inflection point. The explosion of new tools and platforms trying to abstract away this complexity and reduce the cognitive load on already overtaxed developers should be a flashing neon sign. I'm also not alone in thinking that AWS can't solve this. Tools like AWS Application Composer and Amazon CodeCatalyst are all well and good, but all they do is hide a layer of complexity that you're still ultimately responsible for. AWS is really great at operational excellence at the service level, but only if you've properly wired everything together.

I'm not sure anyone has found a good answer to this just yet, nor do I think a silver bullet can exist. But given the expanded set of responsibilities the average cloud developer now carries, I don't blame us for seeking something better.
