Introduction to Serverless (FaaS)


Serverless computing, serverless functions and FaaS are buzzwords that have appeared in recent years. In this article, we'll take a look at what they mean and how we can leverage them in applications, using a fun example.


One of the most important "quirks" that we need to remember is that even though we call Serverless "serverless", there are still servers involved. Serverless is not some magical application design pattern that avoids using servers entirely; rather, it is a model where we, the developers, do not need to worry about managing them.


Let's start by discussing the basic terms around Serverless computing in general. The overview below is based on a slide that I delivered at the Download Event in Bergamo, Italy earlier this year.


Infrastructure As A Service - this term refers to services such as Amazon's Elastic Compute Cloud (AWS EC2) or Amazon's Virtual Private Cloud (VPC). The key takeaway here is that with IaaS, everything has to be managed by us. In EC2 we set up the server and we manage it; Amazon only provides us with the infrastructure.


Software As A Service - this service provides us with some service functionality - either free or paid. Such services include Stripe and PayPal for integrating online payment capabilities into an application - and let's not forget Slack as well. We are responsible for integrating these services, but we know very little about how these services work behind the scenes.


Platform As A Service - such services provide us with an entire platform where we can deploy and run an application, but we do not need to worry about the underlying infrastructure. One good example is Heroku - we can deploy an app to a Dyno (a Heroku specific term) and scale these up or down as we see fit.


Backend As A Service - services that provide us with a highly available backend to achieve a particular task, usually via dedicated SDKs and APIs. Such services include DynamoDB from Amazon and Firebase from Google.

BaaS is also one of the services that are considered to be Serverless. Why? As far as we are concerned, there is no server management involved.


Function As A Service - a service that allows the invocation of a given function. This is useful for multiple reasons, cost being one of them. Think of FaaS as a way to implement some custom logic or to orchestrate other web services. But more on this later.

FaaS is the other service that is considered to be Serverless.

So now we have an idea about the various components in the Serverless space.

Pros and Cons

Serverless computing has a few benefits, but it also comes with a few negative side-effects that we need to take into consideration:

Pros include:

  • No "always-on" pricing model ("Pay As You Go" pricing model is used instead)
  • Computational power is calculated based on the load and usage
  • Framework/language/environment-independent invocation
  • Horizontal scalability via ephemeral containers
  • Virtualisation - no heavy-weight configuration is required

The term ephemeral in the above context refers to containers that are short-lived: a container (think of it as a process) starts up, executes, returns data or does some computation, and then shuts down (and gets destroyed).

To sum up, FaaS allows us to concentrate on developing the function rather than on DevOps and related tasks, and it's a cost-effective solution: we pay only for the computational time used to invoke and run a function. Compare this with an EC2 instance, which has to be on all the time and where we pay for the entire server.

Cons include:

  • Application state is difficult to maintain
  • No variable and data-sharing
  • Long-running functions are "cut short"
  • Potential startup latency

There are some obvious drawbacks to FaaS as well. Since functions are executed in ephemeral containers, variable sharing is almost impossible. These containers also have an execution limit, so long-running functions cannot be executed. Finally, if we don't run a function for a period, its container may become stale (the infrastructure determines this), and therefore we may face a longer-than-usual startup time.

Serverless Framework

The Serverless Framework is an open source initiative released by Serverless, Inc. It's a great framework that allows the creation of serverless functions in an unobtrusive way, and it is provider agnostic, which means that we can create functions against AWS, Microsoft Azure or IBM OpenWhisk, among others.


We'll take a look at using the Serverless Framework to create a function as a service running on AWS (utilising AWS Lambda). The function we'll create will use Unsplash and Cloudinary. The choice of the Serverless Framework is two-fold. First, it's something that I had wanted to try for a long time, and second, it provides us with an easy way to deploy applications not only to AWS but also to other providers such as Microsoft Azure, IBM Bluemix and Oracle Cloud, to mention a few.

Often there are situations when we are looking for a sample image, and we wish to store that somewhere. For example a picture of a cat, a house, a business person, you name it, but most likely a cat. 😉

During this example project, we'll create a Serverless Function that will take an argument - such as "cat" - and will take a single image from Unsplash and upload it to Cloudinary's Media Library. We will walk through how to set up the Serverless Framework, followed by how to configure it and put together the various pieces of this project.

Getting started

To get started, please install the Serverless Framework:

npm i -g serverless

Then log in to the Serverless platform:

serverless login

Note that you will need to set up your credentials for AWS. This article does not cover this, but please refer to the AWS Credential Setup Guide from Serverless, Inc.

Once logged in, we are ready to create a new service - we'll leverage the Node.js template:

serverless create -t aws-nodejs

This will create two files:

- handler.js
- serverless.yml

handler.js is the Node.js file holding the actual function that we'll invoke, and serverless.yml contains information about the service.

Open up serverless.yml and add the app and tenant lines shown below (the service entry is already generated by the template):

service: aws-nodejs
app: XXX
tenant: XXX

Replace the XXX with your app and tenant values.

At this point we are ready to deploy this service by executing the following command:

serverless deploy

To verify that the service has been deployed, we can invoke the function (by default called hello):

serverless invoke -f hello -l

If we have done everything correctly, we should see a message body returned to us.

Updating the handler

Okay, so we can update the handler function. We'll need to rename it and write some code. Here's what we have:

'use strict';

const https = require('https');
const cloudinary = require('cloudinary');

function _getImage(term) {
  // Call the Unsplash "random photo" endpoint with the given search term
  const url = `https://api.unsplash.com/photos/random?query=${term}&client_id=${process.env.UNSPLASH_API_KEY}`;
  return new Promise((resolve, reject) => {
    const request = https.get(url, response => {
      if (response.statusCode < 200 || response.statusCode > 299) {
        reject(new Error('Failed to load page, status code: ' + response.statusCode));
        return;
      }
      const body = [];
      response.on('data', chunk => body.push(chunk));
      response.on('end', () => resolve(body.join('')));
    });
    request.on('error', err => reject(err));
  });
}

module.exports.image = async (event, context, callback) => {
  cloudinary.config({
    cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
    api_key: process.env.CLOUDINARY_API_KEY,
    api_secret: process.env.CLOUDINARY_API_SECRET
  });
  if (event.queryStringParameters && event.queryStringParameters.term) {
    const term = event.queryStringParameters.term;
    return _getImage(term).then(response => {
      const r = JSON.parse(response);
      // Pass the full-size Unsplash URL to the Cloudinary uploader
      return cloudinary.v2.uploader.upload(r.urls.full, (error, result) => {
        if (error) {
          return callback(error);
        }
        return callback(null, {
          statusCode: 200,
          body: JSON.stringify({
            message: `Access image at ${result.secure_url}`
          })
        });
      });
    });
  }
  // No search term provided in the query string
  return callback(null, {
    statusCode: 400,
    body: JSON.stringify({ message: 'Please specify a "term" query parameter.' })
  });
};

Now, this is a lot of code, so let's review the important parts of it.

Note that all the process.env.* values are explained later on in this article.

_getImage() is a function that goes out to Unsplash and returns a random image based on the term that we pass in as the argument.

Please note that a valid Unsplash API key is required for this function to work, so please register for one.

The second part of our code - starting with module.exports.image is the FaaS code that will be executed on AWS. In there we set up the Cloudinary credentials.

Please also sign up to Cloudinary to receive an API key.

Once these credentials are set up, we check for the existence of query parameters, and if they exist, we call the previously mentioned _getImage() function.

That function returns a random image; we then take the full URL of that image from Unsplash, pass it to the Cloudinary Node.js Uploader, and if the upload was successful, we return the Cloudinary-specific URL to access the image.
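To illustrate what r.urls.full refers to, here is a heavily trimmed, hypothetical sketch of an Unsplash "random photo" payload; the real response contains many more fields and different values:

```javascript
// Hypothetical, trimmed Unsplash response, for illustration only
const sampleResponse = JSON.stringify({
  id: 'abc123',
  urls: {
    raw: 'https://images.unsplash.com/photo-abc123',
    full: 'https://images.unsplash.com/photo-abc123?q=85',
    regular: 'https://images.unsplash.com/photo-abc123?q=80&w=1080'
  }
});

// The handler parses the body and hands urls.full to the Cloudinary uploader
const r = JSON.parse(sampleResponse);
console.log(r.urls.full);
```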

Managing dependencies

As we can see from the code above, there's a dependency on the Cloudinary Node.js API. The question is, of course, how can we tell the Serverless platform about this dependency so that it gets installed on AWS?

The answer is simple; we need to have a package.json file in place with the desired dependencies listed at the same location where we have the rest of our files:

{
  "name": "serverless",
  "version": "1.0.0",
  "description": "",
  "main": "handler.js",
  "dependencies": {
    "cloudinary": "^1.11.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}

Managing secrets

We also saw that many secrets are being utilised here, and we shouldn't be exposing those in our code. Using the serverless.yml configuration file, we can set up environment variables that we wish to use and access those via process.env:

Let's extend our serverless.yml file with the following content (of course replace the values with proper values):

provider:
  name: aws
  runtime: nodejs8.10
  environment:
    CLOUDINARY_API_KEY: 'cloudinary-api-key'
    CLOUDINARY_API_SECRET: 'cloudinary-api-secret'
    CLOUDINARY_CLOUD_NAME: 'cloudinary-cloud-name'
    UNSPLASH_API_KEY: 'unsplash-api-key'

API Gateway

So far we have only created a service, but there's no way to access it via a REST call. For us to be able to invoke the function that we created, we need to create an API Gateway.

Since we are using the Serverless Framework, we don't need to do anything special; we simply update our serverless.yml configuration file to include the details of the API Gateway. Let's append the following to serverless.yml:

functions:
  image:
    handler: handler.image
    events:
      - http:
          path: image
          method: get

We define that if someone accesses the /image endpoint with an HTTP GET method, the request should resolve to handler.image - that is, the module.exports.image function that we have already defined.
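To see how that GET request reaches our code, here is a heavily trimmed, hypothetical sketch of the API Gateway proxy event the handler receives for a request like GET /image?term=cat; real events carry many more fields:

```javascript
// Trimmed, hypothetical API Gateway proxy event for GET /image?term=cat
const event = {
  path: '/image',
  httpMethod: 'GET',
  queryStringParameters: { term: 'cat' }
};

// This mirrors the guard at the top of module.exports.image
const term = event.queryStringParameters && event.queryStringParameters.term;
console.log(term);
```

API Gateway's Lambda proxy integration translates the query string into the queryStringParameters object, which is exactly what the handler checks before calling _getImage().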


After making all these changes, we are ready to deploy (or instead, re-deploy):

serverless deploy


During the previous deploy step, we received some Service Information about the deployed stack, including the endpoints that were created.

Notice the endpoints - that's what we can use to invoke our FaaS. Let's try it out!

Note that the URL of the service and the response will always be different, since they reflect your own setup.

In the end, we should not only see a URL returned, but if we log in to our Cloudinary Media Library, we should also see the image there.


In this article, we have reviewed the basics of Serverless computing, and we also created a fun example utilising a FaaS solution to upload a random image from a royalty-free image provider to Cloudinary.