Scalable Node App In Scalable Cloud Infrastructure (IaaS), part 1


Node.js (or io.js) has been a trending programming environment, platform, system, or whichever name is used out there, in the current web backend development industry. It has gained enormous popularity thanks to its familiar JavaScript syntax and the fact that it runs on Google's open-source V8 engine, the same engine used in the popular Chrome browser.


Node is event-driven and has non-blocking I/O, making it powerful and efficient. Thanks to its popularity, the Node ecosystem now offers one of the largest collections of modules and packages, which developers can easily install and use in their applications.

Yes! Node is good. It's not perfect, but it comes very close…

Let's think from a business owner's and enterprise perspective. I can utilize existing developers who are already familiar with the syntax. The enormous number of open-source libraries available through the npm package ecosystem gives me fast time-to-market and lower development and integration costs. Its efficiency and performance save me money on infrastructure, and pre-built images and environments let me launch and deploy Node-based applications to the cloud rapidly with near-zero capital expenditure. Best of all, I can start with low-cost servers and lightweight applications, then scale up to massive volumes as the business grows. Awesome!

This article is about running a scalable Node application in a scalable cloud infrastructure.

Business Examples

Several companies have decided to rewrite their code with Node.js for better performance. One example is LinkedIn, which achieved a 10x reduction in the number of servers needed to host its social business networking platform.

GoDaddy, on the other hand, has stated that Node allows it to easily build high-quality applications and makes unit testing, integration testing, and building REST APIs easier. GoDaddy has also reported handling the same load with only 10% of the hardware, which aligns with LinkedIn's experience.

PayPal has publicly announced that its productivity increased thanks to “Node.js and an all-JavaScript development stack”, and Netflix, the online movie streaming company, has stated that Node.js has shortened its development time-to-market.

Other well-known companies (and websites) using Node.js today include:

  • eBay
  • Uber
  • Dow Jones
  • Flickr (flickr.com)
  • Groupon (groupon.com)
  • Wall Street Journal (wsj.com)
  • Today.com
  • Outbrain.com
  • Paytm.com
  • Onedio.com
  • Mobile.de
  • Coursera.org
  • Yellowpages.com


In this blog post, we'll dive into a few of the steps required to make your Node application scalable, both from an application architecture and from a server infrastructure perspective. This article is part 1 of my blog series on the topic.

Step Into Details

Node.js Application Cluster

Node is single-threaded, which limits its ability to scale up with the hardware: using additional CPU cores requires Node's built-in clustering capabilities, more specifically the Node.js Cluster API. This allows us to build an application that easily scales up with the instance container size.

Further, Node.js is based on the Chrome V8 engine, which originally had a hard memory limit of about 1.5-1.7 GB on 64-bit machines. This limitation has since been removed, as long as you configure the Node application to use the additional memory it needs (and that is available); more on this a bit later.
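As a quick preview, the V8 heap limit can typically be raised with Node's --max-old-space-size flag (the value is in megabytes; the 4096 below is just an illustrative value):

node --max-old-space-size=4096 app.js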

Enabling clustering at the Node.js application level enables concurrency within the application, speeds it up dramatically, and reduces the risk of a single point of failure at the application level.

The Cluster module is fairly easy to pick up, especially if you are already used to working with Node.js. It allows you to create a small network of separate processes (workers) which can all share and serve the same server ports, coordinated by the main Node process (master).

The master process is in charge of creating the workers and controlling them. That is pretty much all I recommend doing on the master side, with some exceptions for generic initialisation and common setup, before the workers can lift off. The workers are spawned using the fork() method of Node's child_process module, allowing the master and workers to share handles and communicate via inter-process communication (IPC). The workload, i.e. incoming connections, is distributed among the workers in a round-robin fashion by default.

To implement the Node.js Cluster module in your application, you pretty much separate the master process code (initiating, controlling) from the worker code (doing the actual work). Here is an example:

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  // Fork one worker per available CPU core
  var osNumOfCPUs = require('os').cpus().length;
  for (var i = 0; i < osNumOfCPUs; i++) {
    cluster.fork();
  }
} else {
  // Each worker serves the same shared port (8080)
  http.createServer(function(req, res) {
    res.end('Hello from worker process: ' + process.pid);
  }).listen(8080);
}

In the code, the master process creates as many worker processes as there are CPU cores, as reported by Node's os module. The example creates a simple web server, listening on port 8080, that responds to every incoming request with the ID (PID) of the worker process that handled it.


You could test this code by running it as a simple Node application, e.g. node app.js (store the above code in a file named app.js), and then accessing http://127.0.0.1:8080. When a request is received, it is distributed to an available worker, which processes it.
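For example, a couple of requests with curl should show responses coming from different workers (the PIDs below are purely illustrative):

$ curl http://127.0.0.1:8080
Hello from worker process: 4321
$ curl http://127.0.0.1:8080
Hello from worker process: 4322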

Let's add a bit more intelligence to it and re-spawn a worker if it dies unexpectedly.

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  // Fork one worker per available CPU core
  var osNumOfCPUs = require('os').cpus().length;
  for (var i = 0; i < osNumOfCPUs; i++) {
    cluster.fork();
  }
  cluster.on('online', function(worker) {
    console.log('Worker process ' + worker.process.pid + ' is online!');
  });
  cluster.on('exit', function(worker, code, signal) {
    console.log('Worker process ' + worker.process.pid + ' has died with code: ' + code + ', and signal: ' + signal);
    console.log('We don\'t want that, so let\'s start a new worker..');
    // Replace the dead worker to keep the pool at full strength
    cluster.fork();
  });
  console.log('Master started ' + osNumOfCPUs + ' workers.');
} else {
  http.createServer(function(req, res) {
    res.end('Hello from worker process: ' + process.pid);
  }).listen(8080);
}

What was added in this example is the listening for worker events ('online' and 'exit'), so we know when a worker has come online and is ready to serve requests, and when a worker has died. If a worker process dies suddenly, we spin up a new worker to ensure the desired number of workers is always running and serving the master as required. No single-process risk of failure!

You can actually listen for these events at both the worker level and the cluster level, and add the necessary handling for each situation, as sketched below. 🙂
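For instance, here is a minimal sketch of attaching the listener to an individual worker instead of the whole cluster; note that the worker-level 'exit' event passes only the exit code and signal, not the worker itself:

var cluster = require('cluster');

if (cluster.isMaster) {
  var worker = cluster.fork();
  // Listen for 'exit' on this specific worker, not on the cluster
  worker.on('exit', function(code, signal) {
    console.log('This particular worker exited with code: ' + code + ', signal: ' + signal);
  });
} else {
  // Worker exits immediately so the master sees the event
  process.exit(0);
}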

Inter-process communication between the master and the individual worker processes is established by adding message listeners to each process. Note which side each listener lives on: the master listens on the worker object, while a worker listens on its own process object. On the master side, you can use the following code to listen for messages from a worker:

worker.on('message', function(message) {
  console.log(message);
});

And the following on the worker side, to listen for messages from the master:

process.on('message', function(message) {
  console.log(message);
});

Messages can be strings or JSON objects. To send a message from the master to a specific worker, you can use the following code:

worker.send('Saludos from master!');

And similarly, to send a message from a worker to the master:

process.send('Hello. This is a greeting from the worker: ' + process.pid);

In Node.js, messages are generic and have no specific type. It is therefore highly recommended to send messages as JSON objects containing some additional information about the message: type, sender, message content, and so on. An example could be:

worker.send({
  type: 'message',
  from: 'master',
  data: {
    // the data/message delivered between processes
  }
});
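To put the pieces together, here is a minimal sketch of a structured round trip between the master and a worker; the message fields (type, from, data) follow the convention above and are purely illustrative:

var cluster = require('cluster');

if (cluster.isMaster) {
  var worker = cluster.fork();
  // Master listens for structured replies from this worker
  worker.on('message', function(message) {
    console.log('Master received:', message);
  });
  worker.send({ type: 'greeting', from: 'master', data: { text: 'Saludos!' } });
} else {
  // Worker listens for messages from the master and replies in kind
  process.on('message', function(message) {
    process.send({ type: 'reply', from: 'worker-' + process.pid, data: { received: message.data } });
  });
}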

That's it for this blog article. Please feel free to leave me a comment, and check out the full source code examples in the following public GitHub repository: https://github.com/mattivilola/scalable-node-app

In the next blog post, part 2 of “running a scalable Node application in a scalable cloud infrastructure”, I will go deeper into managing and controlling Node.js processes and sub-processes, and the control messages between master and worker processes.

In addition, I will discuss host-level considerations for Node applications, optimisations, and the relationships between other host components and the Node application, and how they support a fully scalable application infrastructure.