Why I believe "serverless" is the future
Introduction
Hello! In this blog post I'll try to answer the question of why I think serverless is the future. I'll go over the things that make it great, and also mention a few things that are still challenging and could be improved.
What makes serverless computing great
So, let's start with a list of things that make serverless computing great in my opinion!
Reduced operational overhead
The first thing that comes to mind when I think about serverless architectures is the way they help you reduce operational overhead. You don't have to provision virtual machines, you don't need to think about the underlying OS, and you don't need to think about security patches for your infrastructure. It's all taken care of by the provider, such as Amazon Web Services or Google Cloud. In turn, you don't need as many dedicated "DevOps" people, because there's way less to manage in terms of infrastructure. It also makes it so much easier to spin up additional environments of your application for purposes such as testing or development.
Scalability out of the box
Another great benefit of serverless architectures is their ability to scale out of the box. If your application has a sudden spike in usage, it automatically scales to accommodate the demand. Of course, there are some caveats and some external systems might become a bottleneck, but in general, scaling the application tier is way easier with serverless.
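To make that caveat a bit more concrete, here's a minimal CDK sketch (the names and values are made up for illustration, not from a real project). Scaling of the function itself is handled entirely by the platform, and if a downstream dependency can't scale as freely, you can simply cap the function's concurrency:

```typescript
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ScalingStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The platform scales concurrent executions up and down automatically;
    // there is no auto-scaling group or instance count to manage here.
    new lambda.Function(this, 'OrdersHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'orders.handler',
      code: lambda.Code.fromAsset('dist'),
      timeout: Duration.seconds(10),
      // Optional: cap concurrency so a downstream dependency (e.g. a
      // relational database) doesn't become the bottleneck during a spike.
      reservedConcurrentExecutions: 50,
    });
  }
}
```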
Empowers developers
This is one of the most important points in my opinion. Thanks to amazing development tools like AWS CDK, Serverless Framework, Architect, or SST, developers can build and deploy their applications quicker than ever. In my opinion, it's especially powerful for frontend developers, who can now use a bunch of managed services from AWS, like Cognito, AppSync, and Step Functions, all defined in TypeScript with CDK, to build full-fledged applications while writing minimal additional backend code.

Another great thing is something I already mentioned: the ability to very easily spin up additional environments of your application for testing or development. Since serverless applications almost always rely on some kind of infrastructure-as-code solution, spinning up a new environment for testing is often a matter of running the same command you use to deploy your production application. The ability to test and develop your changes in an environment that is a copy of your production environment can help you catch bugs that would be impossible to catch if you were just running your code locally, maybe with some mocked services.
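As a rough sketch of what that looks like in practice (the stack, construct names, and stage values here are hypothetical), a CDK app can read a stage name from the CLI and deploy the same stack definition as many times as you like:

```typescript
// bin/app.ts
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as cognito from 'aws-cdk-lib/aws-cognito';
import { Construct } from 'constructs';

class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // A managed user pool; no auth servers for us to run or patch.
    new cognito.UserPool(this, 'Users', {
      selfSignUpEnabled: true,
      signInAliases: { email: true },
    });
  }
}

const app = new App();
// "stage" is read from CLI context, e.g. `cdk deploy -c stage=dev`.
// Running the same command with a different stage gives you a separate,
// production-like copy of the whole environment.
const stage = app.node.tryGetContext('stage') ?? 'dev';
new ApiStack(app, `MyApp-${stage}`);
```

With this setup, `cdk deploy -c stage=prod` and `cdk deploy -c stage=test` produce two independent, production-like copies of the same infrastructure from the exact same code.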
Event-driven nature
The last thing I want to mention is the fact that serverless architectures are almost always event-driven architectures as well. This helps a lot with decomposing your application into smaller pieces based on their responsibility. It also makes it easier for other applications to react to emitted events, for example by integrating with a centralized event bus like AWS EventBridge.
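For illustration, here's a small CDK sketch (the bus name, event source, and handler are invented for this example) of a rule on a central EventBridge bus that routes one kind of event to a dedicated function:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class EventsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // A centralized bus that other applications can publish to as well.
    const bus = new events.EventBus(this, 'AppBus', { eventBusName: 'app-events' });

    const onOrderPlaced = new lambda.Function(this, 'OnOrderPlaced', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'onOrderPlaced.handler',
      code: lambda.Code.fromAsset('dist'),
    });

    // Producers only need to emit an "OrderPlaced" event; this rule decides
    // which piece of our application reacts to it.
    new events.Rule(this, 'OrderPlacedRule', {
      eventBus: bus,
      eventPattern: { source: ['shop.orders'], detailType: ['OrderPlaced'] },
      targets: [new targets.LambdaFunction(onOrderPlaced)],
    });
  }
}
```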
What could be improved and what can currently be seen as a challenge
Of course, as with everything, there are some things that could be improved and can currently be seen as challenges.
Testing, development, and debugging
This one is tricky, as development with serverless often requires a change of mindset from testing locally to testing in the cloud. Since serverless architectures often depend on multiple managed services, it's very hard to simulate them locally. Still, there are great projects such as LocalStack, Serverless Offline, or Architect that help with testing your serverless projects locally. Other projects like SST take a hybrid approach, where your local code is invoked and integrated with cloud-based services. There are also projects like Serverless Cloud that are fully cloud-based. I personally believe that cloud-based development is the future, but the tooling around it can still improve greatly, and projects like SST and Serverless Cloud are going in the right direction.
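As an example of the local-emulation approach, here's a hypothetical Jest test that points the AWS SDK at LocalStack's default endpoint instead of real AWS (the bucket name and object are made up, and the bucket is assumed to have been created beforehand):

```typescript
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  region: 'us-east-1',
  endpoint: 'http://localhost:4566', // point the SDK at LocalStack instead of AWS
  forcePathStyle: true,              // required for LocalStack's S3 emulation
  credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
});

test('stores the uploaded report', async () => {
  // Assumes the "reports" bucket already exists in the LocalStack instance.
  await s3.send(new PutObjectCommand({ Bucket: 'reports', Key: 'daily.json', Body: '{"ok":true}' }));

  const result = await s3.send(new GetObjectCommand({ Bucket: 'reports', Key: 'daily.json' }));
  expect(await result.Body?.transformToString()).toContain('"ok":true');
});
```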
Performance
Serverless computing is not suitable for every kind of application. If you need really low latency or GPU acceleration, you might need to consider other solutions. Latency caused by cold starts is not the problem it used to be, but it can still be a deal-breaker if you're building an application where every millisecond counts. Also, if you need GPU acceleration, for example for ML workloads, you'd probably be better served by containers or virtual machines that offer such capabilities. While these things can be seen as limitations today, I encourage you to watch the serverless space closely, as I wouldn't be surprised if in a year or two we see even better performance or GPU acceleration in services such as AWS Lambda.
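If cold starts are the main concern, one common mitigation (sketched here with made-up names, and it does come at an additional cost) is to keep a number of execution environments warm with provisioned concurrency:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class LowLatencyStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const fn = new lambda.Function(this, 'CheckoutHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'checkout.handler',
      code: lambda.Code.fromAsset('dist'),
    });

    // Keep a few execution environments warm at all times so latency-sensitive
    // requests never pay the cold-start penalty.
    new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: fn.currentVersion,
      provisionedConcurrentExecutions: 5,
    });
  }
}
```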
Handling long-running applications
Another limitation of serverless architectures is their ability (or lack thereof) to run long-running workloads. A service like AWS Lambda has a maximum execution time of 15 minutes, which might not be enough for some use cases. Luckily, there are alternatives. Solutions like AWS Fargate or Google Cloud Run let you run serverless containers. Recently, Google Cloud Functions (2nd gen) increased the maximum execution timeout to 60 minutes, and I wouldn't be surprised to see AWS Lambda do the same thing in the upcoming months. There are also services like AWS Step Functions that let you decompose long-running tasks into smaller steps.
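Here's a rough CDK sketch of that last idea (the pipeline, function names, and timeouts are purely illustrative): each step is a short-lived Lambda function, while the Step Functions state machine coordinates the overall, much longer workflow.

```typescript
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';
import { Construct } from 'constructs';

export class ReportPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Small helper for the individual steps; each one stays well within
    // Lambda's 15-minute limit on its own.
    const step = (name: string) =>
      new tasks.LambdaInvoke(this, `${name}Task`, {
        lambdaFunction: new lambda.Function(this, name, {
          runtime: lambda.Runtime.NODEJS_18_X,
          handler: `${name.toLowerCase()}.handler`,
          code: lambda.Code.fromAsset('dist'),
          timeout: Duration.minutes(5),
        }),
        outputPath: '$.Payload',
      });

    // The state machine coordinates the whole workflow, so the end-to-end
    // pipeline is no longer bound by a single function's timeout.
    new sfn.StateMachine(this, 'NightlyReport', {
      definition: step('Extract').next(step('Transform')).next(step('Load')),
      timeout: Duration.hours(2),
    });
  }
}
```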
Summary
In this blog post I went over the benefits and challenges associated with serverless computing. I believe that the trend of reducing the need for "manual" infrastructure management will gain more and more popularity in the coming years. I especially like how it enables more people to quickly build and deploy cloud-native applications and lets you focus on developing your product rather than on managing the underlying servers. I'm really excited about products such as Serverless Cloud, where you don't even have to define your infrastructure separately, as it is "smart" enough to determine the needed infrastructure directly from the code you write.
Thanks for reading and see you next time! 👋