PayApi Goes Crypto – Extended Support, e.g. Bitcoin, Ethereum, Litecoin, Ripple

Matti Business, CryptoCurrency, Development, News, Payments

Crypto-currencies, crypto-payments, blockchain-based virtual coins.. It is all about crypto-based currencies nowadays, and they are gaining more and more momentum and popularity.

New crypto-currencies are launched every month, the most popular ones are attracting more people, and market capitalization is increasing.

Today we are happy to join the crowd with extended support for crypto payment routing through the PayApi secure online and mobile payments service. The initial version is being tested for production deployment in the coming days/weeks. Already now, our users and integration clients can start testing the service without any changes to their existing integrations: just log in to your backoffice dashboard and configure the new payment method with your details. For more information, please do not hesitate to contact us. 🙂

In addition, we are also preparing our PayApi Escrow Service for potential new coin launches, ICOs and pre-ICOs, as needed. Please contact me for more information if you find this option interesting.

But now, let’s try an example coin payment. You may use the following button to donate some Ethereum to me. 😉 I’ve done this test button integration using the fastest non-coding method: the Webshop Easy Payments endpoint.

0.001 ETH – Pay Now


Please note!
This example uses our staging environment, which may or may not be up and running depending on our development cycle. If the button does not work, please try again later.

Please also note that we are working on API upgrades/changes to support crypto-currencies in a better way, with accuracy to tens of decimal places.

Supported Crypto Currencies

Earlier, PayApi, with its partners, was able to route Bitcoin payments. Now, once we have completed the testing and finalization of the new and upcoming crypto-currency gateway integration, we can support a total of 10 crypto-currencies, with a very easy and fast option to add support for more than 50 additional crypto-currencies (altcoins)!

The first testing and commercial integrations will be done with the crypto-currencies that have the biggest market capitalization, including:

  • BTC – Bitcoin
  • BCH – Bitcoin Cash
  • LTC – Litecoin
  • ETH – Ethereum
  • ETC – Ethereum Classic
  • XRP – Ripple
  • XEM – NEM
  • DASH – Dash
  • NEO – NEO
  • XMR – Monero
It takes no longer than 5 minutes to set up and configure the new payment gateway, without any technical integration work. Go ahead and check it out now in the PayApi staging environment. The production/live release will follow later!

Probably The Fastest Payment Integration In The World

Matti Business, Cloud, Payments, Tech

As the famous Carlsberg advertises with their headline, “Probably The Best Beer In The World”, I have now found the same slogan for any website out there that wants to start selling products and accepting payments from its clients.

This is…

Probably The Fastest Payment Integration In The World

Let’s elaborate a bit…

Probably… This is “probably” because I haven’t conducted any thorough research or hired a bunch of analysts to survey the magnitude of payment services available globally. However, based on my experience and the evaluation we have done at PayApi, this is clearly the fastest and easiest payment integration. It really cannot get any easier or faster, and it fits any website, content management system (CMS) or webshop platform because of the easy syntax it has.

The Fastest… As slightly linked to the previous point, it’s fast.. It’s darn fast, and you can add it to any HTML-based website. The only thing you need to do is add a couple of details about the product you sell, plus your PayApi subscription API public ID, as HTML metadata in your web page. And that’s it. See the exact syntax and meta-tags in the API documentation; an illustrative sketch follows.
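To make this concrete, here is a sketch only: the tag names below are hypothetical and not PayApi’s actual syntax, so please check the API documentation for the real meta-tags.

<!-- hypothetical tag names for illustration; see payapi.io/apidoc for the real syntax -->
<meta name="payapi-public-id" content="your-subscription-public-id">
<meta name="payapi-product-title" content="Example Product">
<meta name="payapi-product-price" content="9.90">
<meta name="payapi-product-currency" content="EUR">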

Payment Integration… Payments, purchases, transactions, credit card charges, crypto-currencies, PayPal.. These are the keywords explaining what it is: an extensive payment service that integrates different payment gateways and methods into one easy-to-use, single integration.

In The World… The single integration works anywhere in the world, any time. It’s a PaaS offering, and you can define how the payments are handled and which payment methods you want to use. The offering covers payment methods that support clients and consumers globally.

This is (still)…

Probably The Fastest (and Easiest) Payment Integration (for Any Website or Webshop) In The World

Please note that this blog entry is PayApi oriented and clearly a promotion for one of the services PayApi offers. I am the CEO and Co-Founder of the company and more than happy to discuss any details with you.

Please feel free to contact me if you have any questions. 🙂

UPDATE 2016-08-26: I got challenged on how fast this integration actually is, so I demonstrated it by adding these meta-tags into this blog post while the timer was running.. It took me exactly 5:04 to integrate a payment into this blog post from scratch, without any programming. Not bad for adding payment support to any web page with a few simple meta tags.. Right? 🙂

Steps in my demonstration were:
1) open https://payapi.io/apidoc and navigate to the WebShop Easy Payments API reference,
2) copy the meta-tags and embed them into the web page,
3) whitelist the webpage domain in the PayApi backoffice for the subscription publicId used, and finally
4) add the buy button to open the webshop API with a URL-encoded URL (see below; a JavaScript encoding sketch follows the demonstration). For the encoding I used this tool.

Fast Buy Button

5) As an extra step, by using a QR code generator tool, I created also a QR code to easily link purchasing to a QR code reader on your mobile app. You can try it out below.

UPDATE 2020: this QR code is not working anymore; it points to an old account!

static_qr_code_without_logo_matti-vilola-blog_20160826
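As a side note on step 4 above, the URL encoding can also be done with a line of JavaScript instead of an online tool. In this sketch the endpoint URL and parameter name are made up for illustration; encodeURIComponent is the point:

// Node.js or a browser console
var productTitle = 'My Blog Donation';
var buyUrl = 'https://staging.payapi.io/webshop?product=' + encodeURIComponent(productTitle);
console.log(buyUrl);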

– – –

DISCLAIMER: “Probably” is an advertising slogan and a known brand and trademark of Carlsberg. The image of this blog entry is based on Carlsberg’s ‘Probably’ advertisement during UEFA EURO 2016.

Scalable Node App In Scalable Cloud Infrastructure (IaaS), part 3. IaaS

Matti Cloud, Development, Tech

This is part 3 of the Scalable Node App in Scalable Cloud Infrastructure article series. If you are looking for part 1, please check it out here; for part 2, check it out here instead. 🙂

In this blog, I’ll go through scalable infrastructure considerations with Node applications. This means IaaS (Infrastructure-as-a-Service) type services, typically provided by big players such as Amazon, Microsoft and Google, among smaller ones.

So.. We’ll jump right into it. This blog article will be shorter than the previous ones: instead of going too deep into the offerings, options and configurations of each major IaaS provider, I’ll go briefly through all of them and then jump into more detail with one specific vendor.

Infrastructure-as-a-Service

The main reason why companies, smaller and larger enterprises alike, are heading towards cloud-based infrastructure, Infrastructure-as-a-Service (IaaS) a.k.a. cloud computing and cloud virtual servers, is really the flexibility it brings to current IT management.

Some companies look for cost savings, and some are keen on transferring their CapEx to OpEx, which means moving from several hundreds of thousands in capital investments in IT equipment, servers and related gear to a flexible, monthly pay-as-you-go service from big vendors that do it better than the companies themselves.

Security concerns around public cloud services, or personal data/privacy concerns, are often the reasons for not moving into public cloud services, which in my opinion is a very old-fashioned point of view. The security and handling of privacy aspects are often in better shape with a credible IaaS provider. See my previous blog post on Cloud Security for further details.

As credible IaaS providers, I would highlight a few of the biggest and main players in this area.

Amazon Web Services

Amazon has been the #1 cloud service provider in recent years, and it still is. It has the biggest service portfolio and the strongest track record and history in providing such services.

amazon_cloud

Amazon Web Services (AWS) include pure server hosting and cloud computing as well as disk space and networking services. The infrastructure portfolio includes:

  • Compute services: virtual servers, containers, 1-click web app deployments, event-driven compute functions, auto scaling, load balancing
  • Storage and content services: object storage, CDN, block storage, file system storage, archive storage, data transport, integrated storage
  • Database (well, not really part of IaaS but rather PaaS, Platform-as-a-Service; listed here because AWS’s own website lists it as part of the offering): relational, database migration, NoSQL, caching, data warehouse
  • Networking: virtual private cloud, direct connections, load balancing, DNS

Amazon states that they have over a million active customers in 190 countries, and they offer a 12-month free-of-charge trial to gain good experience and knowledge of their services. Some limitations apply, but it is really sufficient for test-driving the services, setting up a few Node application servers and hosting your application. They have data centers all over the world.

Recommendation: Cloud leader. Cost-efficient. Recommended.

Microsoft Azure

Microsoft’s answer to cloud computing needs is their Azure cloud computing platform and infrastructure. They’ve created it for building, deploying, and managing applications and services through their global network of Microsoft-managed datacenters.

microsoft_azure

The Azure infrastructure portfolio includes:

  • Compute services: virtual machines, virtual machine scale sets, cloud services (why is this listed on their website?), batch, RemoteApp, service fabric, container service
  • Data & Storage services (partly PaaS): SQL database, document database (NoSQL), Redis cache, storage (blobs, tables, queues, files and disks), StorSimple, search, data warehouse, SQL server stretch database
  • Networking services: virtual network, ExpressRoute, traffic manager, load balancer, DNS, VPN Gateway, application gateway, CDN

There are many other services as well in the PaaS and SaaS areas, and like Amazon, Microsoft also offers $200 worth of free credits to get started, as well as some other development-related resources and supportive services.

Recommendation: Solid performer. Recommended!

Google Cloud Platform

The giant networking and internet company Google has increased its investments in enterprise offerings in recent years. This can really be seen in the growing number of offerings in the area of cloud services. Google’s path from SaaS provider to enterprise-class IaaS (and PaaS) provider has been interesting to follow.

google_cloud

The Google Cloud Platform has services built on top of Google’s core network and infrastructure. They offer the following infrastructure portfolio:

  • Compute services: compute engine, app engine, container engine, container registry, event-based microservices, load balancing and auto-scaling
  • Storage and Database services (partly PaaS): cloud storage, cloud bigtable, cloud datastore, cloud SQL
  • Networking: cloud virtual network, cloud load balancing, cloud CDN, cloud interconnect, DNS

Google’s marketing slogan is to “let innovators innovate and let coders, well, just code”, which describes their objectives quite nicely. Google Cloud Platform frees you from the overhead of managing infrastructure, provisioning servers and configuring networks. Google joins Amazon and Microsoft in offering free trials to test-drive their cloud platform (worth $300).

Recommendation: Improving rapidly. Highly Recommended!

Oracle Cloud

Oracle, one of the big IT companies and the database company, launched its public cloud offering, called Oracle Cloud, last year (nowadays also offering private cloud services running on top of their state-of-the-art hardware, such as Exalytics).

oracle_logo

Oracle, led by Larry Ellison, is moving into the cloud computing business heavily and fast. They reacted late but have gained significant growth in their cloud business within the first 2 years. It’s going to be really interesting to see how Oracle can compete with Amazon, Microsoft and Google. Their cloud portfolio in relation to infrastructure includes:

  • Compute services: dedicated compute and compute
  • Storage services: storage capacity, archive storage, shared file storage, storage file appliance
  • Network services: site-to-site VPN, FastConnect, VPN for compute
  • Cloud machine: cloud services in enterprise own data center

The actual infrastructure offering is not as wide as the competition’s at the moment; Oracle is clearly playing catch-up and has a stronger offering in its PaaS and SaaS services. The strong argument for Oracle is the full-stack integration between its different cloud services, as well as its strong installed base of on-premise software. Oracle also offers a free trial for its compute and storage cloud services.

Recommendation: Challenger. Follow up and test it out in the near term

IBM Cloud

I was not planning to include IBM as one of the major cloud providers in the beginning. However, looking at their cloud investments and offering, I realized that it is actually already a huge cloud player nowadays. For the avoidance of doubt: I have the least experience with IBM’s cloud offerings and thus have to look at their offering from a consumer point of view, using their public documentation.

ibm

IBM Cloud is a high-performing, flexible and scalable cloud infrastructure built on top of SoftLayer solutions. IBM states that their infrastructure is secure, scalable and flexible, providing customized enterprise solutions that have made IBM Cloud the hybrid cloud market leader.. Interesting! Let’s look at what their infrastructure portfolio includes:

  • Compute services: virtual servers
  • Storage services: block storage, file storage, object storage, backup
  • Networking services: load balancer, network appliances, direct link, domain services, CDN

IBM offers a 1-month free trial for using the SoftLayer cloud platform. This is quite a short period of time compared to the other players; however, IBM also has offers for getting started with their different PaaS services, which are pretty nice ones.

Recommendation: No experience, to be considered

Example with Google Cloud

Let’s look at some practical examples using Google Cloud computing services. I am currently running more than 50 virtual servers, of which about 40 run on the Google Cloud platform.. Why? Well, simply because I’ve seen great improvements in their platform and it’s the easiest way for me to centralize my server maintenance and backend development under one platform. I am 100% sure that, for instance, Amazon and Azure can provide the same features.

Google’s administration dashboard is built with the latest web UX in mind. It’s responsive, and they also provide a simple mobile application to check logs, SSH into a server, or see your billing and server status.

Google allows hosting and 1-click deployments of several Linux distributions as well as Windows servers (additional license costs apply). For running Node.js applications, I usually run the latest Debian Linux distribution, e.g. Debian 8.3 as of today.

blog_scalable-node-iaas_gcloud_dashboard

The smallest instance type, called f1-micro, costs about USD 5 per month and is more than capable of running a Node application with reasonable load. My backend applications usually use f1-micro or g1-small (about USD 15 per month) instance types with 10 GB of disk space: standard or SSD depending on the I/O activity.

For heavy-traffic API entry nodes or bigger database entities, I run the next-level n1-standard-1 or n1-highcpu-4 instance types for higher capacity, the latter already having 4 vCPUs in use. For scalability and growth, an individual instance can be grown/modified to have up to 32 vCPUs as of today.

The normal backend configuration includes load balancing and a cluster of backend nodes, called Google instance groups, with automated scaling based on criteria such as CPU utilization; a command-line sketch of such a setup follows the screenshot below.

google_cloud_instances
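As a rough sketch of that instance-group setup (the names, zone and thresholds are illustrative, and the instance template is assumed to already exist):

# create a managed instance group of Node backends from an existing template
gcloud compute instance-groups managed create node-backend \
    --zone europe-west1-b --template node-backend-template --size 2

# scale the group automatically based on CPU utilization
gcloud compute instance-groups managed set-autoscaling node-backend \
    --zone europe-west1-b --min-num-replicas 2 --max-num-replicas 8 \
    --target-cpu-utilization 0.65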

My latest project includes a secure online and mobile payment system, an integration of multiple payment gateways with smart routing and intelligent anti-fraud operations. This project is called PayApi and it’s running completely on top of Google Cloud platform. The scalability is built with automated clusters and flexible NodeJS backend architecture.

With the simplest setup we have today, we can process and receive millions of payments on a daily basis, and all deployments are executed automatically using chatops running on a dedicated IT management server with secure VPN access and strict access controls.

It’s a really fun and very flexible way to control our NodeJS backend servers with today’s IaaS providers. Scrap your dedicated hardware and start using public cloud services today! 🙂


That’s it for this blog article series. If you have any questions, please contact me using this form.

Thanks for reading!

Scalable Node App In Scalable Cloud Infrastructure (IaaS), part 2. Advanced control of Node processes and host considerations.

Matti Cloud, Development, Tech

This is the article part 2 of the Scalable Node App in Scalable Cloud Infrastructure. If you are looking for part 1, please check it out here.

In this 2nd part of the article, we cover in a bit more depth the management and control of Node.js processes, sub-processes and control messages between master and worker processes, mostly at a theoretical level but with generic examples as well.

Another topic covers some host-level considerations and optimizations, and how they support a fully scalable application infrastructure.

Master-Sub Process Architecture

To start with, let’s recap some of the principles of master-subprocess communication in Node applications. The concept in question here is utilizing the Node Cluster module. This functionality allows us to bypass Node’s single-thread limitation on the hardware and use additional CPU cores for scaling up.

By using a Node master and sub-processes, we allow processing of the same activities in multiple processes, utilizing the full capabilities of the underlying hardware, whether the hosting environment is physical or virtual. To make the system co-operate smoothly, we need to establish communication between the processes, such as sending control messages between master and sub-process. An example with a message from master to subprocess is seen below:

worker.send('Saludos from master!');

And an example from a subprocess (worker) to master:

process.send('Hello. This is a greeting from the worker: ' + process.pid);

It’s important to notice that message event callbacks are handled asynchronously and there isn’t any defined order of execution.

Another important design aspect is to keep the master process simple and short; this minimizes risk, which matters because the master process runs all the time. The master will handle restarting and managing worker sub-processes as necessary. Of course, we also need to make sure the master process keeps running even if some random problem occurs and crashes it: this can be done easily with Forever and forever-service.
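For example, a sketch of that setup (app.js and the service name are placeholders here, and forever-service’s option syntax may vary by version):

# restart the master automatically if it crashes
forever start app.js
forever list

# register the app as an init/system service so it survives reboots
forever-service install myapp --script app.js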

For good control when managing workers from the master, we need to implement proper handling of control messages from master to worker and vice versa; this ensures the integrity and health of the workers and allows proper shutdown of workers for maintenance updates etc.

A good pattern is to restart your workers by first sending them a controlled shutdown message and then, if they did not terminate safely, forcing a kill. This is useful, for example, when upgrading your software without any downtime while running multiple workers on one host.

An example of this kind of control message could be the following:

workers[workerId].send({type: 'shutdown', from: 'master'});

Then monitor for this message and safely shut down the worker:

process.on('message', function(message) {
  if (message.type === 'shutdown') {
    process.exit(0);
  }
});

My recommendation would be to wrap this up for all workers and create the necessary functions for handling the different control messages from both the master’s and the workers’ perspective. This also allows monitoring whether a worker shut down cleanly or whether we have to force it with SIGKILL. An example follows:

var cluster = require('cluster');

function doRestartWorkers() {
  var workerId, workerIds = [];
  // collect the IDs of all currently running workers
  for (workerId in cluster.workers) {
    workerIds.push(workerId);
  }
  workerIds.forEach(function(wid) {
    // ask the worker to shut down gracefully
    // (matches the 'shutdown' handler shown above)
    cluster.workers[wid].send({
      type: 'shutdown',
      from: 'master'
    });
    setTimeout(function() {
      // if the worker is still alive after 5 seconds, force it to exit
      if (cluster.workers[wid]) {
        cluster.workers[wid].kill('SIGKILL');
      }
    }, 5000);
  });
}

The example gets the IDs of all the running workers from the cluster module and then sends each one a shutdown control message. If a worker is still alive after 5 seconds, the master forces the shutdown by sending the SIGKILL signal.

Node.js code example

Host Considerations

My normal Node application host typically runs the following set of applications, programs and packages:

  • Base image: Linux Debian 8.3 Jessie
  • Backend platform with Node v4.x LTS
  • No-SQL database with MongoDB v3.x
  • Easy API routing with Express
  • Front-end proxy with NGINX web server
  • Process management with Forever and Forever-service

In addition, I often run more than one Node application per server. These applications may be completely independent, serving on different ports, or they may support each other by providing supporting microservices.

Among these applications, I usually have a main master application and then supporting processes that often do a lot of background processing and non-critical/non-real-time activities. Such processes I set to a lower priority with Linux nice levels, allowing the main master application to get best-effort treatment at the front-end level. A small sketch of this follows.
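For illustration (the script name and nice values here are arbitrary):

# start a background worker with a lower scheduling priority
# (a higher nice value means a lower priority)
nice -n 10 node background-worker.js

# or lower the priority of an already-running process by its PID
renice -n 10 -p 12345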

NGINX as a front-end

NGINX is a good HTTP operating system for modern web applications. It’s a high-performance, efficient HTTP processing engine handling desktop, mobile and API traffic equally well before switching and routing each request to the correct service. I am using NGINX to route traffic to my Node application processes and to do SSL transport decryption before traffic reaches the application level: this is done both for simplicity and added security, and to minimize complexity, thus allowing better scalability and performance. A minimal configuration sketch follows.
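Here is roughly what such a front-end configuration looks like (the server name, certificate paths and Node port are illustrative):

server {
  listen 443 ssl;
  server_name example.com;

  ssl_certificate     /etc/nginx/ssl/example.com.crt;
  ssl_certificate_key /etc/nginx/ssl/example.com.key;

  location / {
    # TLS is terminated here; plain HTTP is forwarded to the Node process
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}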

NGINX is well known as a high-performance load balancer, cache and web server, powering 40% of the busiest websites in the world. However, you still need to consider some optimizations with it and do some tuning.

NGINX

Remember that when trying out these recommendations and configurations, a good rule is to change one setting at a time and set it back to the default value if the change does not improve performance.

The Backlog Queue. These settings relate to connections and how they are queued. If you have a high rate of incoming connections and you are getting uneven levels of performance, then changing these settings can help (these and the related kernel settings below are collected into a sysctl.conf sketch after the Ephemeral Ports list):

  • net.core.somaxconn – The maximum number of connections that can be queued for acceptance by NGINX. The default is often very low and that’s usually acceptable, but it can be worth increasing it if your website experiences heavy traffic
  • net.core.netdev_max_backlog – The rate at which packets are buffered by the network card before being handed off to the CPU. Increasing the value can improve performance on machines with a high amount of bandwidth

File Descriptors. These are operating system resources used to represent connections and open files, among other things. NGINX can use up to two file descriptors per connection. For a system serving a large number of connections, the following settings might need to be adjusted:

  • fs.file-max – The system-wide limit for file descriptors
  • nofile – The user file descriptor limit, set in the /etc/security/limits.conf file

Ephemeral Ports. When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral, port. You might want to change these settings:

  • net.ipv4.ip_local_port_range – The start and end of the range of port values. If you see that you are running out of ports, increase the range (common: 1024 to 65000)
  • net.ipv4.tcp_fin_timeout – The time a port must be inactive before it can be reused for another connection. The default is often 60 seconds, but it’s usually safe to reduce it to 30, or even 15 seconds
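Collected into one place, a sketch of the kernel settings above in /etc/sysctl.conf could look like the following (the values are illustrative starting points, not recommendations; apply them with sysctl -p):

# /etc/sysctl.conf - illustrative values only
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 65536
fs.file-max = 200000
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30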

NOTE on the following configurations: some directives can impact performance. The following directives are usually safe to adjust on your own; I do not recommend changing any of the other settings, and for any change you make, please be aware that negative impacts may occur.

Worker Processes. NGINX can run multiple worker processes, each capable of processing a large number of simultaneous connections. You can control the number of worker processes and how they handle connections with the following directives (collected into a configuration sketch after the list):

  • worker_processes – The number of NGINX worker processes (the default is 1). In most cases, running one worker process per CPU core works well, and we recommend setting this directive to auto to achieve that. There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O
  • worker_connections – The maximum number of connections that each worker process can handle simultaneously. The default is 512, but most systems have enough resources to support a larger number. The appropriate setting depends on the size of the server and the nature of the traffic, and can be discovered through testing. The maximum number supported by the core system can be found with ulimit:
    ulimit -n
  • use epoll – You can also use epoll, which is a scalable I/O event notification mechanism to trigger on events and make sure that I/O is utilized to the best of its ability.
  • multi_accept on – You can utilize multi_accept in order for a worker to accept all new connections at one time
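Put together, a sketch of these worker-level directives in nginx.conf could look like this (the values are illustrative; worker_connections, use and multi_accept belong in the events context):

worker_processes auto;   # one worker per CPU core

events {
  worker_connections 1024;   # simultaneous connections per worker
  use epoll;                 # scalable I/O event notification on Linux
  multi_accept on;           # accept all pending connections at once
}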

Keepalive Connections. Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed to open and close connections. NGINX terminates all client connections and creates separate and independent connections to the upstream servers. The following directives relate to client keepalives:

  • keepalive_requests – The number of requests a client can make over a single keepalive connection. The default is 100, but a much higher value can be especially useful for testing with a load-generation tool or getting a high number of requests from individual instances
  • keepalive_timeout – How long an idle keepalive connection remains open.

The following directive relates to upstream keepalives (a combined configuration sketch follows the list):

  • keepalive – The number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value.
    To enable keepalive connections to upstream servers you must also include the following directives in the configuration:

    proxy_http_version 1.1;
    proxy_set_header Connection "";
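A minimal sketch combining the client and upstream keepalive directives (the upstream name, port and values are illustrative):

upstream node_backend {
  server 127.0.0.1:8080;
  keepalive 32;              # idle keepalive connections per worker process
}

server {
  keepalive_requests 1000;   # requests allowed over one client keepalive connection
  keepalive_timeout 65s;     # how long an idle client connection stays open

  location / {
    proxy_pass http://node_backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
  }
}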

Access Logging. Logging every request consumes both CPU and I/O cycles, and one way to reduce the impact is to enable access-log buffering. With buffering, instead of performing a separate write operation for each log entry, NGINX buffers a series of entries and writes them to the file together in a single operation.

To enable access-log buffering, include the buffer=size parameter to the access_log directive; NGINX writes the buffer contents to the log when the buffer reaches the size value.
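As an illustrative example (the log path and sizes are arbitrary; the optional flush parameter additionally forces a write after the given interval):

access_log /var/log/nginx/access.log combined buffer=32k flush=5s;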

Sendfile. The operating system’s sendfile() system call copies data from one file descriptor to another, often achieving zero-copy, which can speed up TCP data transfers. To enable NGINX to use it, include the sendfile directive in the http context or a server or location context.

Limits. You can set various limits that help prevent clients from consuming too many resources, which can adversely affect the performance of your system as well as user experience and security. The following are some of the relevant directives, with a combined sketch after the list:

  • limit_conn and limit_conn_zone – Limit the number of client connections NGINX accepts, for example from a single IP address
  • limit_rate – Limits the rate at which responses are transmitted to a client, per connection (so clients that open multiple connections can consume this amount of bandwidth for each connection). This helps ensure more even quality of service for all clients
  • limit_req and limit_req_zone – Limit the rate of requests being processed by NGINX, which has the same benefits as setting limit_rate. They can also improve security, especially for login pages, by limiting the request rate to a value reasonable for human users but too slow for programs (such as bots in a DDoS attack)
  • max_conns parameter to the server directive in an upstream configuration block – Sets the maximum number of simultaneous connections accepted by a server in an upstream group
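A sketch of how a few of these limits fit together (the zone names, sizes and rates are illustrative only):

# the shared memory zones are defined in the http context
limit_conn_zone $binary_remote_addr zone=peraddr:10m;
limit_req_zone  $binary_remote_addr zone=login:10m rate=2r/s;

server {
  location / {
    limit_conn peraddr 20;          # max 20 concurrent connections per client address
    limit_rate 500k;                # throttle each connection to 500 KB/s
  }
  location /login/ {
    limit_req zone=login burst=5;   # slow down request bursts against the login page
  }
}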

Some additional features worth mentioning include caching and compression. These are not strictly about tuning and performance optimization, but they are good to consider.. 🙂

Caching. By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve the response time to clients while at the same time dramatically reducing the load on the backend servers. For instructions on how to do that, please check the NGINX Content Caching guidance.

Some quick tips I would still like to share regarding static content serving, if any exists in your Node app web server. If your site serves static assets (such as CSS/JavaScript/images), NGINX can cache information about these files for a short period of time. The example below tells NGINX to cache up to 1000 file entries, dropping entries that haven’t been accessed within 20 seconds, re-validating cached entries every 30 seconds, and only caching files that have been accessed at least 5 times:

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 5;
open_file_cache_errors off;

Caching can also be done based on location, such as in following example:

location ~* \.(woff|eot|ttf|svg|mp4|webm|jpg|jpeg|png|gif|ico|css|js)$ {
  expires 365d;
}

Compression. Compressing responses sent to clients can greatly reduce their size, so they use less network bandwidth. However, because compressing uses CPU resources, you should only use it when needed and only for objects that are not already compressed.
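For illustration, a conservative gzip setup could look like this (the types and threshold are example choices; already-compressed formats such as images are deliberately left out):

gzip on;
gzip_types text/plain text/css application/javascript application/json;
gzip_min_length 1024;   # skip small responses where compression overhead isn't worth it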

That’s it for the blog article, part 2.. In the next article, part 3, I’ll go through considerations when using scalable cloud infrastructure (IaaS) services for running Node applications, with examples from Google Cloud.


Scalable Node App In Scalable Cloud Infrastructure (IaaS), part 1

Matti Cloud, Development, Tech

Node.js (or io.js) has been a trending programming environment, platform, system, or whichever name is used out there, in the current web backend development industry. It’s been gaining enormous popularity due to its common “Java-type-of” syntax, JavaScript, running on top of Google’s open-source V8 engine used in the popular Chrome browser.

nodejs scalability

Node is event-driven and has non-blocking I/O, making it powerful and efficient. Part of the benefit of the Node ecosystem, thanks to its popularity, is that it nowadays has one of the largest collections of modules and packages that developers can easily install and use in their applications.

Yes! Node is good. It’s not perfect, but very near to that…

Let’s think from the business owner and enterprise perspective. I can utilize existing resources who are familiar with the syntax. The enormous number of open source libraries in its ‘npm’ package infrastructure provides me fast time-to-market and lower development and integration costs. Efficiency and performance save me money in infrastructure, and pre-built images and environments let me launch/deploy Node-based applications to the cloud rapidly with near-zero capital expenditures. And best of all, I can start with low-cost servers and lightweight applications, which I can build and scale up to massive volumes as the business grows. Awesome!

This article is about running a scalable Node application in a scalable cloud infrastructure.

Business Examples

Several companies have decided to re-write their code for better performance using Node.js. Examples of such companies include LinkedIn, who gained a 10x reduction in the number of servers needed to host their social business networking platform.

GoDaddy, on the other hand, has stated that Node allows them to easily build high-quality applications, enabling easier unit and integration testing as well as REST APIs. In addition, they’ve stated that they can handle the same load with only 10% of the hardware, which aligns with LinkedIn’s experience.

PayPal has publicly announced that their productivity has increased thanks to “Node.js and an all JavaScript development stack”, and Netflix, the online movie streaming company, has stated that their development time-to-market has improved with the help of Node.js.

Other examples of known companies (and websites) that use Node.js nowadays include at least:

  • eBay
  • Uber
  • Dow Jones
  • Flickr (flickr.com)
  • Groupon (groupon.com)
  • Wall Street Journal (wsj.com)
  • Today.com
  • Outbrain.com
  • Paytm.com
  • Onedio.com
  • Mobile.de
  • Coursera.org
  • Yellowpages.com

Companies using Node.js

In this blog post, we’ll dive into few steps that are required to make your Node application scalable both from application architecture and from server infrastructure perspective. The blog article is part 1 of my blog article series in relation to this topic.

Step Into Details

Node.js Application Cluster

Node is single-threaded, which limits its ability to scale up with the hardware: using additional CPU cores requires the built-in clustering capabilities, more specifically the Node.js Cluster API. This allows us to build an application that easily scales up with the instance container size.

Further, Node.js is based on the Chrome V8 engine, which originally had a hard memory limit of about 1.5-1.7 GB on 64-bit machines. This has since been changed and the hard limitation has been removed, as long as you configure the Node application to use the additional amount of memory needed (and available), as sketched below.
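As a side note, a common way to do that configuration is V8’s max-old-space-size flag (a real flag; the value below is just an example):

node --max-old-space-size=4096 app.js   # allow the V8 old-space heap to grow to roughly 4 GB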

Enabling clustering at the Node.js application level enables application concurrency, speeds the application up dramatically, and reduces the risk of a single point of failure at the application level.

The Cluster module is fairly easy to pick up, especially if you are already used to working with Node.js. The Cluster module allows you to create a small network of separate processes (workers) which can share and serve the same server ports with the main Node process (master).

The master process is in charge of creating the workers and controlling them. That is pretty much all I recommend doing on the master side, with some exceptions for generic initialisation and common set-up before the workers can lift off. The workers are spawned using the fork() method of Node’s ChildProcess module, allowing master and workers to share server handles and communicate via inter-process communication. The workload (incoming connections) is distributed among the workers in a round-robin approach by default.

To implement the Node.js Cluster module in your application, you pretty much separate the master process code (initiating, controlling) from the workers code (doing the actual work). Here is an example:

var cluster = require('cluster'); 
var http = require('http'); 

if (cluster.isMaster) {
  var osNumOfCPUs = require('os').cpus().length;
  for (var i = 0; i < osNumOfCPUs; i++) {
    cluster.fork();
  }
} else {
  http.createServer(function(req, res) {
    res.end('Hello from worker process: ' + process.pid);
  }).listen(8080);
}

In the code, the master process creates as many worker processes as there are CPU cores, as reported by the Node.js OS module. This example creates a simple web server, listening on port 8080, that responds to all incoming requests with the worker’s process ID (PID).

Node.js Number of CPU cores

You can test this code by running it as a simple Node application, e.g. node --debug app.js (store the above code in the app.js file), and then accessing http://127.0.0.1:8080 on your localhost. When a request is received, it is distributed to an available worker, which processes the request.

Let’s add a bit more intelligence into it – re-spawn the process if it dies improperly.

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) { 
  var osNumOfCPUs = require('os').cpus().length;
  for (var i = 0; i < osNumOfCPUs; i++) { 
    cluster.fork();
  }
  cluster.on('online', function(worker) {
    console.log('Worker process ' + worker.process.pid + ' is online!');
  });
  cluster.on('exit', function(worker, code, signal) {
    console.log('Worker process ' + worker.process.pid + ' has died with code: ' + code + ', and signal: ' + signal);
    console.log('We don\'t want that, so let\'s start a new worker..');
    cluster.fork();
  });
  console.log('Master started ' + osNumOfCPUs + ' workers.');
} else { 
  http.createServer(function(req, res) {
    res.end('Hello from worker process: ' + process.pid);
  }).listen(8080);
}

What was added in this example is listening for worker events (‘online’ and ‘exit’), so we know when a worker has come online and is ready to serve requests, and when a worker has died. If a worker process dies suddenly, we spin off a new worker to ensure that the required number of workers is always running, each one serving the master as needed. No single-process risk of failure!

You can actually listen for these events at both the worker and the cluster level and add the necessary functionality to handle the situation. 🙂

The inter-process communication between individual Node processes and between master and worker processes is established by adding a message listener to each process. You can use the following code on the master side to listen for messages from a worker:

worker.on('message', function(message) {
  console.log(message);
});

And the following on the worker side:

process.on('message', function(message) {
  console.log(message);
});

Messages can be strings or JSON objects. To send a message from the master to a specific worker, you can use the following code:

worker.send('Saludos from master!');

And similarly, to send a message from a worker to the master:

process.send('Hello. This is a greeting from the worker: ' + process.pid);

In Node.js, messages are generic and are not of any specific type. This means it is highly recommended to send messages as JSON objects that contain some additional information about the message: type, sender, message content, etc. An example could be:

worker.send({
  type: 'message',
  from: 'master',
  data: {
    // the data/message delivered between processes
  }
});

This is it for this blog article.. Please feel free to leave me a comment and check the full source code examples in the following public Github repository: https://github.com/mattivilola/scalable-node-app

In the next blog, part 2 of “running a scalable node application in a scalable cloud infrastructure”, I will go deeper into the management and control of Node.js processes, sub-processes and control messages between master and worker processes.

In addition, I will discuss Node application host-level considerations and optimisations, the relations between other host components and the Node application, and how they support a fully scalable application infrastructure.

Saludos from MWC 2016, Barcelona

Matti Business, Mobile, News, Tech

Greetings from Mobile World Congress 2016!

Mobile World Congress 2016 main entrance (south)

I’ve spent the first day at #MWC and things are pretty nice. Samsung is betting heavily on their new Galaxy launch, and we saw a nice VR (virtual reality) theater built into Hall 3, where dozens of people were able to enjoy the 4D theater with their VR glasses on. Awesome!

We are also moving some nice things forward with big players: promoting and hosting the new social sharing and shopping experience application/platform, now also known as the FNGR app. Stay tuned for more videos and nice materials coming up in relation to this. 🙂

FNGR ME at #MWC16

I will continue my exhibition visit tomorrow and until the end of MWC16 on Thu 25th. The agenda involves meetings with many companies from a sales and partner-management perspective, as well as preparation for the upcoming launch in China, the US and then Europe.

Btw. I also saw Android in real life. Quite well promoted @mwc16. This gardener was making Android bushes in the space between exhibition halls.. 😉



Connected Enterprises, Oracle Cloud Platform-As-A-Service (PaaS) and more at Reykjavik, Iceland

Matti Business, Cloud, OnPremise, Security, Tech

I went on a business trip this week to Reykjavik, Iceland. It was my first time in Iceland: land of volcanoes and big ice floes, and a highly sophisticated and modern environment. It’s a really cold country, and I was freezing when walking outside.. But nevertheless, I like it.. I like it a lot! 🙂

The reason for visiting Reykjavik was Oracle Day – I was invited as a speaker regarding Connected Enterprises and Oracle Cloud Platform services and products. Of course, I was more than happy to take this opportunity and travel there for this unique event.

On my way, I learned a few facts and details about Iceland. They are actually quite interesting, and thus I wanted to share them here as well:

  • Reykjavik is the capital of Iceland, with some 120 000 – 150 000 people living there. The metropolitan area includes several connected towns and thus covers some 200 000 people altogether.
  • Iceland altogether has some 320 000 people, so roughly 2/3 of them live in the Reykjavik area.
  • You can drink pure, clean water straight from the tap – the infrastructure is very high quality
  • There are unique natural hot springs, the geysers and extraordinary lava fields.
  • Most Icelanders speak English.
  • The largest glacier in Europe is located in Iceland.
  • 95% of all homes are connected to Internet.
  • The heating is mostly done using hot water, which the country has plenty of due to its geothermal characteristics
  • There are several active volcanoes, and earthquakes occur continuously

Northern lights, aurora borealis, Iceland

But getting back to the agenda, I did a presentation and acted as a speaker regarding two specific topics: Connected Enterprises and Oracle Cloud Platform Services.

Connected Enterprises was all about integration between different applications, systems, endpoints and APIs, data, and even devices. The importance of enterprise integrations is growing; they are getting more complex and the amount of data is increasing rapidly. No wonder integration has been the 2nd biggest concern for enterprises not moving into the cloud.. And no wonder integrations take 2/3 of enterprise mobile application development time, based on studies. During the session, we spoke about SaaS (Software-as-a-Service) application integrations and other cloud integrations, we spoke about on-premise integrations, and we considered when to do integrations in which environment. Or when do hybrid integrations make the most sense..?

Matti Vilola as a technology speaker at Oracle Day in Reykjavik

Another topic I went through was Oracle’s current cloud offering in relation to platform services, Platform-as-a-Service (PaaS) products. This really is all about the broad offering that Oracle has. We also spoke about other cloud services and how they all make sense when used together to cover different enterprise needs and requirements. Not to forget cloud security: steps that can make public cloud services safer than your on-premise systems! And of course, we also touched on some services in the security area, such as anti-fraud services.

Oracle PaaS Matti, the IT-person slide

It was really nice to meet some Reykjavik “business Icelanders”, and this was an exciting speaker opportunity for me. I really appreciated the invitation and the possibility – Thanks, Oracle Iceland and Denmark team! 🙂

Blue Sky Over Iceland

Enterprise Mobility – Mobile Cloud – Webcast on December 8th 2015

Matti Cloud, Development, Mobile, Tech, Webcasts

Another webcast regarding Enterprise Mobility will be broadcasted on December 8th 2015.

Please do not hesitate to join if you are interested. The webcast is free of charge and will focus on enterprise mobility aspects, including an overview of Oracle’s mobility offering: Mobile Cloud Service (MCS).

Registration is required. Please register here.

Webcast details

Build Your Mobile Strategy—Not Just Your Mobile Apps.

Overview of Oracle Mobile Cloud Service:

  • A platform that understands the challenges of moving enterprise data to mobile in a secure, scalable, elegant fashion, one that makes it easy to do things right.
  • A set of APIs and declarative tools that can help you move away from the tactical and unite all lines of business along a well-defined strategy, to get enterprise data out of the back end and into a set of robust and appealing B2C or B2E mobile applications, while at the same time addressing each team member’s top-of-mind concerns.
  • A platform that enables development models driven by mobile app developers (“outside in”) and by service developers (“inside out”) simultaneously.

Join us to learn what’s new in the mobile area from Oracle, understand Oracle’s Mobile Backend as a Service, and see examples of how to use it.

I am your speaker as a Senior Sales Consultant for Oracle.

The presentation will be provided in English. Welcome! 🙂

Easy Tools To Analyze Your Social Profile Value

Matti Social

Today, social selling is among other selling and co-operation channels and increasingly important media to connect, communicate and network with your customers, partners and peers.

I recently spent some more time analyzing my social profiles, mainly LinkedIn and Twitter, but also checking my Facebook, Instagram and Google+ profiles.

Here are my social selling scores as of today, based on different tools.

LinkedIn Social Selling Index (SSI, business.linkedin.com/sales-solutions/social-selling/the-social-selling-index)

LinkedIn measures your social selling efforts, naturally by using your LinkedIn profile, network and activities in this business-oriented networking platform. The score is a value between 0 and 100, determined based on 4 main aspects:

  1. Establish your professional brand: how well is your profile done, and how customer-centric is it? Publish meaningful posts on the LinkedIn platform
  2. Find the right people: use LinkedIn, or better, their premium Sales Navigator service, to identify better contacts and prospects using LinkedIn data and search tools
  3. Engage with insights: discover and share news and updates to identify and find new connections and grow your network
  4. Build relationships: improve your network by connecting and establishing trust with decision makers; the quality of your network and connections

As a reference, I am posting my LinkedIn SSI score, social selling index:

Social Selling Index MattiVilola_20151106_overview

Twitter Analytics (analytics.twitter.com)

Twitter Analytics provides you insight on how you are doing with your Twitter accounts. It provides details on your tweets’ engagement, clicks, retweets, favorites, replies, and more.

On a one-page dashboard you can see a chart of the last 28 days, the number of impressions your tweets have received, and contextual information such as whether your impressions were higher or lower than in the previous 28-day period. Impressions refer to the number of times a user sees your tweet.

In addition, you can see a stream of tweets with metrics for each month, along with your best-performing and most-engaging tweets. This gives you a good summary of how your social media publishing and sharing on Twitter performs over time and how your improvement activities change the results.

My Twitter analytics results attached:

Twitter Analytics account overview for mvilola_20151106_overview

On average I seem to get some 10k impressions per month, with 50-200 new followers monthly. This is with my current effort and activity on Twitter, and it works well in my case.


Klout Score (klout.com/corp/score)

The Klout Score is a performance indicator of your social media influence. It is a value between 0 and 100: the more influential you are, the higher your Klout Score.

Klout defines influence as follows: “Influence is the ability to drive action. When you share something on social media or in real life and people respond, that’s influence.”

Klout MattiVilola_20151109_overview

I am using Klout to monitor my social networking progress over time.


There are more similar tools available on the internet, such as SocialHunt, SocialCount, SumAll and others.

Go and check your score now! 🙂

Black Market Prices – The Hidden Data Economy (report from McAfee)

Matti Business, Cloud, News, Security

The Marketplace for Stolen Digital Information
http://www.mcafee.com/us/resources/reports/rp-hidden-data-economy.pdf

The report from McAfee states the same thing that was discussed in last month’s Cloud Security webcast: I can sell your credit card details for 20 up to 40 euros.

One of the most common reasons for online cyberattacks is economic, and the report from McAfee interestingly provides insight into the black-market value of different data entities, supporting this threat.. Or actually, the reasons for doing this, for selling your data: credit card details, account information, personal information/identity, etc.

Interesting!