How to implement load balancing with Nginx

In this edition of server talks, we will see how to implement load balancing with Nginx as the load balancer.

First, install Nginx with the command below:

sudo apt-get install nginx

Now we will open the Nginx config and change it so that Nginx acts as a load balancer. You can find the default config at /etc/nginx/sites-enabled/default. You can add multiple config files in the sites-enabled folder for different servers running on different ports.

upstream loadbalancer  {
  server 192.168.0.13:8000;
  server 192.168.0.13:8001;
  server 192.168.0.13:8002;
}
server {
  location / {
    proxy_pass  http://loadbalancer;
  }
}
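For completeness, here is a sketch of what the full /etc/nginx/sites-enabled/default could look like. The listen port 80 is an assumption (the original snippet omits it, and 80 is Nginx's default); adjust IPs and ports to your setup.

```nginx
# Sketch of /etc/nginx/sites-enabled/default -- IPs and ports are
# examples; replace them with your own backends.
upstream loadbalancer {
  server 192.168.0.13:8000;
  server 192.168.0.13:8001;
  server 192.168.0.13:8002;
}

server {
  listen 80;                          # port the load balancer itself listens on
  location / {
    proxy_pass http://loadbalancer;   # forward every request to the upstream group
  }
}
```

After editing, `sudo nginx -t` checks the config syntax, and reloading Nginx applies it.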

Each of these backend servers serves a single HTML page: the first says server 1, the second server 2, and the third server 3.

The config for the three backend servers looks like this:

server {
        listen   8000;
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
}

Just change the port and root location for each one. Everything here runs on the local system, with each backend on a different port.
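For clarity, a sketch of the three backend server blocks side by side. The roots for servers 2 and 3 are assumptions; point each one at the directory holding that server's HTML page.

```nginx
server {
  listen 8000;
  root /usr/share/nginx/www;    # serves the page that says "server 1"
  index index.html index.htm;
}

server {
  listen 8001;
  root /usr/share/nginx/www2;   # assumed path; serves "server 2"
  index index.html index.htm;
}

server {
  listen 8002;
  root /usr/share/nginx/www3;   # assumed path; serves "server 3"
  index index.html index.htm;
}
```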

Now let's hit localhost (with a browser or curl) and see what comes back.

On the first attempt the page from server 1 came back, the second attempt returned server 2, and the third returned server 3.
So clearly Nginx is using round robin by default. Let's try changing the algorithm. For that, update the upstream block as below:

upstream loadbalancer  {
  server 192.168.0.13:8000 weight=1;
  server 192.168.0.13:8001 weight=2;
  server 192.168.0.13:8002 weight=4;
}

According to this, server 2 will get twice the traffic of server 1, and server 3 will get twice the traffic of server 2 and four times that of server 1. This algorithm is weighted round robin.
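Weighted round robin is not the only option: the Nginx upstream module also ships with least_conn and ip_hash. A sketch of both, using the same backends as above:

```nginx
# Send each request to the server with the fewest active connections.
upstream loadbalancer {
  least_conn;
  server 192.168.0.13:8000;
  server 192.168.0.13:8001;
  server 192.168.0.13:8002;
}

# Or pin each client to one backend based on the client's IP address,
# which gives a simple form of session stickiness.
upstream loadbalancer_sticky {
  ip_hash;
  server 192.168.0.13:8000;
  server 192.168.0.13:8001;
  server 192.168.0.13:8002;
}
```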

Now suppose a server goes down; that would cause problems, so we add the max_fails parameter. It tells Nginx that after this many failed attempts the node should be considered inactive. And if the server later recovers, we want Nginx to retry it; for that we use the fail_timeout parameter.

upstream loadbalancer  {
  server 192.168.0.13:8000 weight=1 max_fails=3 fail_timeout=50s;
  server 192.168.0.13:8001 weight=2;
  server 192.168.0.13:8002 weight=4;
}

The above configuration means that if 3 requests to server 1 fail within the 50-second window, Nginx marks the node as inactive. After 50 seconds it will try the node again to see if it is back up.
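Related to failure handling, the upstream module also supports a backup parameter: a server marked backup receives traffic only when the primary servers are unavailable. A sketch (the fourth backend here is an assumed spare, not part of the original setup):

```nginx
upstream loadbalancer {
  server 192.168.0.13:8000 weight=1 max_fails=3 fail_timeout=50s;
  server 192.168.0.13:8001 weight=2;
  server 192.168.0.13:8002 weight=4;
  server 192.168.0.13:8003 backup;   # assumed spare; used only when the others are down
}
```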

So this is how you can use your Nginx server as a load balancer with different algorithms.

Interesting? Share and subscribe.


Gaurav Yadav

Gaurav is a cloud infrastructure engineer, full-stack web developer, and blogger. Sportsperson at heart who loves football. He loves working on scale and is always keen to learn new tech. Experienced with CI/CD, distributed cloud infrastructure, build systems, and a lot of SRE stuff.
