Scale Docker Horizontally with Nginx Load Balancing

Mark Shust (Instructor)
Published 7 years ago
Updated 5 years ago

Node.js apps built with Docker cannot scale horizontally by themselves. In this lesson we will spawn multiple Node.js processes/containers and move incoming request handling to an Nginx proxy that load balances across them, properly scaling our Node.js app.

Instructor: [00:00] Let's make a directory called nodejs that contains our app files. Within that, we will create a simple Node.js app that responds, "Hello world from server," followed by the name of our server, which we will define from an environment variable.
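For reference, a minimal sketch of such an app might look like the following; the file name index.js and the SERVER_NAME variable name are assumptions for illustration, not the exact code from the video.

```js
// index.js -- minimal HTTP server for the lesson's app (illustrative sketch).
// The SERVER_NAME environment variable name is an assumption.
const http = require('http');

const server = http.createServer((req, res) => {
  // Respond with the greeting plus this container's server name.
  res.end(`Hello world from server ${process.env.SERVER_NAME}\n`);
});

// Listen on port 8000, the port referenced later in the Nginx upstream block.
server.listen(8000);
```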

[00:16] Let's then create a Dockerfile that simply kicks off our Node.js process. Then we will build our app image with "docker build -t app-nodejs .". Let's start two Node.js processes. We'll start the first server with a server name of chicken, and give the container the name chicken so we can reference it later.
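A Dockerfile along these lines would do the job; the base image tag and working directory are assumptions:

```dockerfile
# nodejs/Dockerfile -- illustrative sketch; base image and paths are assumptions
FROM node:alpine
WORKDIR /app
COPY index.js .
CMD ["node", "index.js"]
```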

[00:41] We'll do something similar with our second server, but with the name steak for both. Note that Nginx will be handling our external requests, so we do not need to bind any ports of the app containers back to the host.
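The two containers might be started like this; note the absence of any -p flag, so no ports are published to the host (the SERVER_NAME variable name is the same assumption as above):

```sh
# Start both app containers with distinct server names and container names.
docker run -d --name chicken -e SERVER_NAME=chicken app-nodejs
docker run -d --name steak -e SERVER_NAME=steak app-nodejs
```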

[00:57] Our containers will only be accessible from other Docker containers. Since these containers will not be directly accessible from a public network, this also adds an additional level of security to our app. Let's create a new nginx directory in the root of our project and enter it.

[01:15] In this directory, we will create a new file to contain our Nginx configuration, named nginx.conf. The purpose of our Nginx process is to load balance requests to our Node.js processes. Let's create a server block with a location block inside it for the root path, /.

[01:34] Within this block, define a proxy_pass directive, followed by http://, followed by any arbitrary value. We'll use app here, followed by a semicolon. What we're telling Nginx to do here is to listen at the root path of our web server, and pass all requests through a proxy named app.

[01:54] Let's go ahead and define that proxy. Create a new upstream block, followed by the name of our proxy, app. Next, we will follow it with a line starting with server, followed by the name of our first server, chicken, and the port our Node.js app listens on, 8000.

[02:11] We will repeat the line again, but this time with the steak server. The upstream block we define here tells Nginx which servers to proxy requests to. We can define as many lines here as we want. Nginx will balance requests across the servers in this group using a round-robin method. You can even assign weights to the servers with the weight option.
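Putting those pieces together, the nginx.conf might look like the following; the listen directive and the commented weight example are illustrative, and the file is assumed to be included inside the http context (e.g. as /etc/nginx/conf.d/default.conf):

```nginx
# nginx.conf -- illustrative sketch of the config described above
upstream app {
    # One line per app container; Nginx round-robins requests between them.
    server chicken:8000;   # optionally: server chicken:8000 weight=2;
    server steak:8000;
}

server {
    listen 80;

    location / {
        # Pass every request at the root path to the "app" upstream group.
        proxy_pass http://app;
    }
}
```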

[02:34] Next, let's create an Nginx Docker image that just copies over our nginx.conf file to the default configuration file location. Let's build this image, and name it app-nginx. The final step is to start our Nginx container, and map port 8080 on the host to port 80 on the container.
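The Nginx image could be built from a Dockerfile like this; the destination path is an assumption based on the official nginx image, which loads any *.conf file placed in /etc/nginx/conf.d/:

```dockerfile
# nginx/Dockerfile -- illustrative sketch
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
```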

[02:58] We will then use the --link option to link our chicken and steak containers, making them accessible from within the Nginx container. If we use curl to hit our Nginx server on port 8080, we will see that Nginx is properly routing requests to our chicken and steak Node.js servers in a round robin fashion.
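The final run and test might look like this; the container name proxy is an assumption, and the output shown assumes the app sketch above:

```sh
# Run Nginx, publish host port 8080 to container port 80, and link the app containers.
docker run -d --name proxy -p 8080:80 --link chicken --link steak app-nginx

# Consecutive requests are answered by each backend in turn (round robin).
curl localhost:8080   # Hello world from server chicken
curl localhost:8080   # Hello world from server steak
```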

Stefan Stolniceanu
~ 6 years ago

I know that docker-compose has the scale option, which can be used to scale a container very easily, but you'll need to manually update the nginx conf to use the new containers. Is there any way to do this automatically?

Another question: I used to put the Load Balancer in the same docker-compose file as the node app. Is that a good practice?

Mark Shust (instructor)
~ 6 years ago

Good question. It appears Docker still does not offer native service discovery. There is a long-standing issue to implement it, however it doesn't appear to be gaining development traction. I stumbled upon this post -- http://blog.gaiterjones.com/docker-mono-host-service-scaling-and-dynamic-load-balancing-with-nginx/ -- which pretty much provides service discovery within nginx for load balancing. Another solution would be to mount the Docker socket for reading within containers, however this has security implications. You could also code a custom solution on the software side, even though this is not ideal.

Regarding Docker Compose, you absolutely can, and should, have nginx and node within the same compose definition. Hope this helps.
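For what it's worth, a minimal Compose definition along those lines might look like the sketch below; the service names and build paths are assumptions. On a Compose network the app containers are reachable by service name, so no --link flags are needed.

```yaml
# docker-compose.yml -- illustrative sketch, not the exact setup from the lesson
version: "3"
services:
  chicken:
    build: ./nodejs
    environment:
      - SERVER_NAME=chicken
  steak:
    build: ./nodejs
    environment:
      - SERVER_NAME=steak
  proxy:
    build: ./nginx
    ports:
      - "8080:80"
    depends_on:
      - chicken
      - steak
```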
