Getting Node.js to run inside of Docker is relatively simple. Getting Node.js to run inside of Docker while following recommended best practices takes some planning. In this lesson, I'll show you how to get a simple Node.js web server running in Docker on your local workstation while adhering to those best practices.
The starting point for running Node.js on Docker is to create our Dockerfile, in the same directory as our application code. The first line specifies the image I'm going to use as my base. I'm using the official Node.js image, pinned to version 4.4.7.
You might be tempted to put latest here instead of a specific version so that your app always gets the newest features, but then you might end up with a different Node version in your Docker image than the one you tested against on your development system.
That can make issues incredibly difficult to troubleshoot when they result from mismatched Node versions. Pin the version and be specific. In the lesson description, I said I was going to show you how to do this according to best practices. That's not something I just made up.
There's an actual doc in the Node.js GitHub repo called "Docker and Node.js Best Practices." In it, we can see that it recommends not running our Node app as the root user. This limits the access the Node.js process has in the event that your application is compromised.
We'll do that by running the useradd command with a few configuration options. --user-group creates a group with the same name as the user account, --create-home creates a home directory for the user, and --shell /bin/false prevents anyone from logging in as this user and obtaining a shell.
Finally, I'll specify the user account name itself, which I'm going to call nodejs. Next, I'm going to create an environment variable called HOME and set it to the home directory that useradd created for that account.
Since I'm creating environment variables, I'll create another one called NODE_ENV and set it to production, as recommended by the best practices guide. Next, I'm going to create the Docker Compose file, docker-compose.yml.
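Assembled so far, the Dockerfile might look like this sketch (the username nodejs and the pinned version follow the lesson; adjust to your own app):

```dockerfile
# Pin a specific Node.js version rather than "latest"
FROM node:4.4.7

# Create an unprivileged user with no login shell,
# per the Node.js Docker best practices guide
RUN useradd --user-group --create-home --shell /bin/false nodejs

# HOME points at the directory useradd created for the nodejs user
ENV HOME=/home/nodejs
ENV NODE_ENV=production
```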
It's a YAML file that defines the services, networks, and volumes for our container. I'm going to define my service by giving it the name app. Here again, I have the opportunity to slip in some of the recommended best practices for running Node.js on Docker.
In a larger-scale environment, it's likely that our app is going to be running alongside other containers on the same host. For that reason, we want to limit the amount of memory our Node app can consume from the host, so I'm going to limit the memory to 300 MB and the swap to 1 GB.
Next, I'll add the build option and set it to a single dot, which tells Compose to use the current directory as the build context. Networking is also defined in my Compose file, and I need to expose the port that my Node.js app is listening on.
Remember that we're running as a non-privileged user, so we can't use port 80, because only root is allowed to bind to ports below 1024. That's not a problem for us, though, because this is going to be one of several Node.js containers providing high availability for my application.
I'll use a load balancer in front of them; it can listen on ports 80 and 443 and handle my SSL certs, then forward requests on to my Node servers on whatever ports they're listening on.
If I take a look at the server.js file, I can see that it's listening on port 3000. Back in my Docker Compose file, ports is an array, so I'll start the entry with a - and specify the mapping. This configuration publishes port 3000 on the Docker host and maps it to port 3000 inside the container.
Last, I need to add a volume to my container to map in the current directory. Again, this is an array, so the entry starts with a -. I map it as .:/home/nodejs/app, where the . represents the current directory where the Docker Compose file is located, the : is a separator, and /home/nodejs/app is the path inside the container where I want that directory mounted.
It's important to understand here that this . is a relative file path referring to the local directory on the machine running Docker. Since I'm running Docker on the same workstation that I'm developing my code on, this is going to work just fine, and it has the added benefit of allowing the code changes I make locally to be available immediately in the Docker container.
If you're deploying to a remote server, though, you'll need to either specify a path on that server where the Node files are located and use that instead of the ., or bundle the Node.js files into your container using the COPY instruction.
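Putting those pieces together, the docker-compose.yml might look like this sketch (Compose v1 syntax, with the service name app as in the lesson):

```yaml
app:
  build: .
  # Protect other containers on the host from a runaway Node process
  mem_limit: 300m
  memswap_limit: 1g
  ports:
    # host port 3000 -> container port 3000
    - "3000:3000"
  volumes:
    # Bind-mount the local source tree into the container
    - .:/home/nodejs/app
```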
Returning to the Dockerfile: at this point, it might seem pretty logical to switch to the nodejs user we defined, set our working directory to the app folder in the home directory, and then launch our Node app by passing server.js as an argument to the node command.
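That straightforward ending would look something like this sketch:

```dockerfile
# Switch to the unprivileged user, move into the app directory,
# and launch the server directly with node
USER nodejs
WORKDIR /home/nodejs/app
CMD ["node", "server.js"]
```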
This wouldn't be horrible, but we can actually improve on it. In my application directory, I have package.json and npm-shrinkwrap.json files. package.json specifies my application's module dependencies. This application uses the npm module faker, so it has to be installed for my application to work.
npm-shrinkwrap.json locks down the versions of my dependencies, plus the versions of their dependencies, ensuring that the same versions I'm testing against locally are the ones that get installed. What if I copied package.json and npm-shrinkwrap.json into the app folder inside the container, and then ran npm install?
As you might expect, this creates the node_modules folder inside the container. Thanks to the way Docker caches layers during the build process, it checks the checksums of package.json and npm-shrinkwrap.json; if it has already built this image with those same files, it reuses the cached layer.
This means that once you've built and deployed an image the first time, subsequent Docker builds can reuse that cached layer, making deploys much faster since you aren't waiting for npm install to finish.
You can continue to leverage that cached layer until you change package.json or npm-shrinkwrap.json. Even then, you only have to wait for npm install to finish once before you can leverage those speed benefits over and over again.
There is a bit of a gotcha here. When the package.json and npm-shrinkwrap.json files are copied into the container during the build process, it's done as root, which means our nodejs user wouldn't be able to access them.
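One fix, sketched below, is to chown the copied files to the nodejs user right after the COPY, before switching users and installing dependencies (paths follow the lesson's layout):

```dockerfile
# Copy only the dependency manifests so the npm install
# layer below can be cached until they change
COPY package.json npm-shrinkwrap.json /home/nodejs/app/

# COPY runs as root, so hand ownership to the nodejs user
RUN chown -R nodejs:nodejs /home/nodejs/*

USER nodejs
WORKDIR /home/nodejs/app
RUN npm install

CMD ["node", "server.js"]
```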
Right after we copy them in, I'm going to run the chown command with the recursive flag, specifying nodejs as the user and group, and apply that to everything in the home directory. There's one other caveat, due to the way Docker bind mounts work: we won't be able to access the node_modules folder inside the container unless we also mount it as a volume. In my volumes definition inside the Docker Compose file, I can define it as a volume so it's available inside the container.
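One common way to express that in the Compose file is an anonymous volume at the node_modules path, so the bind mount of the source tree doesn't shadow the modules installed during the build (a sketch):

```yaml
  volumes:
    - .:/home/nodejs/app
    # Anonymous volume keeps the container's node_modules
    # visible despite the bind mount above
    - /home/nodejs/app/node_modules
```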
We can now build our image with the docker-compose build command. While that build is running, let's check how we did against our best practices guide. The first recommendation is setting the NODE_ENV environment variable to production, and we did that.
Then it recommends running as a non-root user, which we did when we created our nodejs user with the useradd command. It recommends limiting memory and swap to protect other containers running on the same host; we defined that in our Docker Compose file.
It also recommends launching the app by calling the node command directly. The reason is that if you use npm start as defined in package.json, npm launches an extra process inside the container that isn't really necessary. Calling node directly is more efficient.
The docker run section we can ignore, because we didn't use the docker run command. Finally, there are security recommendations on how to analyze your running container. We're going to table those for now, because they're the subject of a whole other lesson.
The build completed successfully. I can launch the container with the docker-compose up command. I can see it's listening, and if I switch to my browser and go to localhost:3000, it works and returns my response.
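The full build-and-run sequence, assuming docker-compose is installed on your workstation, is just:

```shell
# Build the image from the Dockerfile via Compose
docker-compose build

# Start the app service (add -d to run detached)
docker-compose up

# In another terminal, verify the response
curl http://localhost:3000
```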