
Introduction to Serverless

“Serverless” services, or Functions as a Service (FaaS), move the level of concern radically away from infrastructure and towards custom application code by removing any notion of “servers” or “nodes” altogether. Application code is packaged as functions that trigger on events, for example incoming HTTP calls, and you concentrate on adding value, not tweaking the stack. Your stack can effortlessly scale to massive levels of concurrency, which makes FaaS ideal for fluctuating workloads.

With FaaS, you concentrate on adding value, not maintaining infrastructure (original image by Kimmo Brunfeldt).

All the major cloud providers offer it, with reasonable feature parity: Google Cloud Platform (GCP) has Cloud Functions, Amazon Web Services (AWS) has Lambdas, and Microsoft Azure has Azure Functions, to name the obvious three. No matter your integrations, preferred stack or vendor affinity, in a few clicks you can be deploying stateless services from the comfort of your browser.

This approach is an easy way to get started, but the major drawback is that it easily leads to bespoke environments that are undocumented, except for whatever kind of view the particular cloud vendor’s console offers. Such setups are hard to replicate, changes are hard to review, there is no support for reverting, and so on.

Many vendors have their own methods for scripted resource creation, the most well-known being Amazon’s CloudFormation. However, with CloudFormation comes the need to manage all aspects of the setup manually, including execution roles for your functions, log storage, and a pile of other things that are necessary just to get to “Hello World”. Such fine-grained control is definitely warranted in some scenarios, but when you want to start iterating on your solution quickly, you would like to streamline the basics.
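To make that contrast concrete, here is a sketch of the boilerplate a hand-written CloudFormation template needs before a single function can respond; the resource names, bucket and key are hypothetical:

```yaml
# Sketch: the minimum plumbing CloudFormation wants for one Lambda.
# All names below (HelloExecutionRole, my-deploy-bucket, ...) are made up.
Resources:
  HelloExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
  HelloLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /aws/lambda/hello
  HelloFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: handler.hello
      Runtime: nodejs6.10
      Role: !GetAtt HelloExecutionRole.Arn
      Code: { S3Bucket: my-deploy-bucket, S3Key: hello.zip }
```

And this still leaves packaging the code, uploading it to the bucket, and driving the stack update to you.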

Enter Serverless, promising to be a provider-agnostic deployment tool (and a FaaS of its own, currently in beta). “Focus on your application, not your infrastructure” is the tagline. Sounds like Heroku, so definitely promising!

Serverless services are defined in “serverless.yml” files, and here is the canonical basic AWS Node.js example, after removing the copious helpful comments:

  service: my-service

  provider:
    name: aws
    runtime: nodejs6.10

  functions:
    hello:
      handler: handler.hello

Combine this with a “handler.js” file which exports a “hello” function, and you have a working, albeit minimal, Serverless service:

module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };
  callback(null, response);
};
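Since the handler is a plain function, you can exercise it locally before deploying anything. Here is a minimal sketch (the event payload is made up) that calls a handler of this shape the way the Lambda runtime would:

```javascript
// Sketch: invoking a Lambda-style callback handler locally.
// The handler mirrors the one above; 'event' stands in for the trigger payload.
const hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello from a local run', input: event }),
  };
  callback(null, response);
};

// Call it as the Lambda runtime would: (event, context, callback).
hello({ path: '/hello' }, {}, (err, response) => {
  if (err) throw err;
  console.log(response.statusCode); // 200
  console.log(JSON.parse(response.body).input); // { path: '/hello' }
});
```

This is no substitute for testing against the real API gateway, but it gives a fast feedback loop for the function logic itself.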

All that is needed to deploy is AWS credentials stored in any of the usual places (credentials with an AdministratorAccess policy; more on this worrisome requirement in a later post) and a serverless deploy command. The result is a CloudFormation stack that includes an S3 deployment bucket, a Lambda execution role, a log group and the Lambda itself, which you can then test with serverless invoke --function hello. Serverless generates about 150 lines of CloudFormation template code from those few lines of YAML.
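The whole cycle from the command line is just the two commands mentioned above:

```shell
# Deploy the stack described by serverless.yml to the default stage and region
serverless deploy

# Invoke the deployed function and print its response
serverless invoke --function hello
```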

To add an API gateway and allow HTTP calls to your function, you would amend the function definition to the following:

  functions:
    hello:
      handler: handler.hello
      events:
        - http: GET hello

After a redeploy, Serverless would then spit out a URL for calling the function. Not bad in terms of simplicity.
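Calling the function is then an ordinary HTTP request; the URL below is a made-up example of the shape Serverless prints:

```shell
# Hypothetical endpoint URL; yours will be printed in the deploy output
curl https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/hello
```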

There are lots of community plugins available to enhance and extend the workflow, e.g. for running DynamoDB locally, or simulating an API gateway for offline testing. These are just NPM modules that you install as dev dependencies and register in serverless.yml.
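For instance, serverless-offline (API gateway simulation) and serverless-dynamodb-local (a local DynamoDB) are two such plugins; after an npm install --save-dev of each, registering them is a one-liner apiece in serverless.yml (treat this as a sketch, as each plugin has its own further configuration options):

```yaml
plugins:
  - serverless-offline
  - serverless-dynamodb-local
```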

Things get a bit more murky when provider-specific resources such as DynamoDB tables come into play. Here Serverless understandably just throws in the towel, and you’re back to writing CloudFormation resource definitions, inlined inside “serverless.yml”.
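As a sketch of what such an inlined definition looks like, a minimal table under the resources section might read as follows; the table name and key schema are made up for illustration:

```yaml
# Plain CloudFormation, embedded in serverless.yml under resources:
resources:
  Resources:
    TodosTable:                     # hypothetical table
      Type: AWS::DynamoDB::Table
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```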

Even if you eschew provider-specific resources, however, you will not be able to change cloud providers with a flick of the wrist. By default, Serverless does nothing to abstract away the assumptions made about Node.js code organisation, for example. On AWS you specify your function handler with dotted “module.function” notation, but GCP has altogether different conventions for finding your code, and the same specification will simply not deploy at all. On the other hand, switching providers mid-project is a really low-probability risk, and what you really want is to get moving quickly so that you can validate the thing you are building. For that, Serverless is a nice tool: it lets you get going with FaaS quickly while keeping your deployment configuration as code.

Next, I will look a bit deeper into the overly broad permissions problem I alluded to above, and see if I can’t solve it in a nice way so that a minimum of extra tooling is required.


  • Ilkka Poutanen
    Senior Specialist