Getting Node.js to run inside of Docker is relatively simple. Getting Node.js to run inside of Docker while following recommended best practices takes some planning. In this lesson, I'll show you how to get a simple Node.js web server running in Docker on your local workstation while adhering to best practices.
[00:00] The starting point for running Node.js on Docker is to create our Dockerfile. That's created in the same directory as our application code. The first line specifies the image I'm going to use as my base, and I'm using the official Node.js image, specifying version 4.4.7.
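Assuming the official node image from Docker Hub, that first line is simply:

```dockerfile
# Pin the base image to an exact version rather than "latest"
FROM node:4.4.7
```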
[00:22] You might be tempted to put latest here instead of a specific version, making sure that your app always gets the hottest, latest features, but you might end up with a different version on your Docker image than what you have tested against on your development system.
[00:38] This can make it incredibly difficult to troubleshoot issues that turn out to be caused by differing Node versions. Pin the version and be specific. In the lesson description, I said I was going to show you how to do this according to best practices. That's not something I just made up.
[00:56] There's an actual doc on the Node.js GitHub repo called "Docker and Node.js Best Practices." In here, we can see that it recommends not running our Node app as the root user. This is done to limit the access that the user running Node.js has in the event that your application is compromised.
[01:18] We'll do that by running the useradd command, and I'm going to supply a couple of configuration options to it. --user-group will create a user group with the same name as the user account. --create-home is going to create the home directory for this user, and --shell set to /bin/false is going to prevent anyone from logging in as this user and obtaining a shell.
[01:47] Finally, I'll specify the user account name itself, which I'm going to call nodejs. Next, I'm going to create an environment variable called HOME and set it equal to the home directory created by the useradd command.
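As a sketch, those two steps look like this in the Dockerfile (the nodejs user name and /home/nodejs path match the volume paths used later in the lesson):

```dockerfile
# Unprivileged user with a matching group, a home directory, and no login shell
RUN useradd --user-group --create-home --shell /bin/false nodejs

# Home directory created by useradd above
ENV HOME=/home/nodejs
```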
[02:02] Since I'm creating environment variables, I'm going to create another one called NODE_ENV and set it to the value production, as recommended by the best practices guide. Next, I'm going to create the Docker Compose file. That's docker-compose.yml.
[02:19] It's a YAML file that defines the services, networks, and volumes for our container. I'm going to define my service by giving it the name app. Here, again, I have the opportunity to slip in some of the recommended best practices for running Node.js on Docker.
[02:35] In a larger-scale environment, it's likely that our app is going to be running alongside other containers on the same host. For that reason, we want to limit the amount of memory our Node app can consume from the host, so I'm going to limit the running memory to 300 meg, and limit the swap memory to 1 gig.
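In the classic Compose file format this lesson uses, those limits look something like the following sketch (the exact values and key names are assumptions; later Compose file versions place them under different keys):

```yaml
app:
  mem_limit: 300m      # cap the container's memory
  memswap_limit: 1g    # cap memory plus swap
```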
[02:56] Next, I'll add the build option and define it with a single dot, which tells Compose and Docker to build from the current directory. Networking is also defined in my Compose file, and I need to expose the ports that my Node.js app is listening on.
[03:15] Remember that we're running as a non-privileged user, so we can't use port 80, because only root is allowed to bind to ports below 1024. That's not a problem for us, though, because this is going to be one of several Node.js containers that provide high availability for my application.
[03:35] I'll use a load balancer in front of them, and it can listen on port 80 as well as port 443 and handle my SSL certs. Then it can forward those requests on to my Node servers on whatever ports they're listening on.
[03:50] If I take a look at the server.js file, I can see that it's listening on port 3000. I can go back to my Docker Compose file, and ports is an array, so I'll specify that with a -, and then expose port 3000. This configuration says to expose port 3000 on the Docker host and map it to port 3000 inside the container.
[04:17] Last, I need to add a volume to my container. I'm going to map the current directory. Again, this is an array, so we start with a -. I map this as .:/home/nodejs/app, where the . represents the current directory where the Docker Compose file is located, the : is a separator, and /home/nodejs/app is the file path inside the container where I want that mapped.
[04:46] It's important to understand here that this . is a relative file path referring to the local directory on the machine running Docker. Since I'm running Docker on the same workstation that I'm developing my code on, this is going to work just fine, and it has the added benefit of allowing the code changes I make locally to be available immediately in the Docker container.
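Putting the build, ports, and volume settings together, the Compose file so far looks roughly like this sketch (memory limits from earlier omitted for brevity):

```yaml
app:
  build: .                   # build from the Dockerfile in the current directory
  ports:
    - "3000:3000"            # host port 3000 -> container port 3000
  volumes:
    - .:/home/nodejs/app     # local project directory mounted into the container
```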
[05:08] If you're deploying to a remote server, though, you'll need to either specify a path on that server where the Node files are located and use that instead of the ., or bundle the Node.js files into your container using the COPY command.
[05:23] Returning to the Dockerfile, at this point it might seem pretty logical to switch to the nodejs user we defined, set our working directory to the app folder in the home directory, and then launch our Node app by running the node command with server.js as an argument.
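As a sketch, that straightforward approach adds these lines to the Dockerfile:

```dockerfile
USER nodejs                  # drop root privileges
WORKDIR $HOME/app            # work out of the app folder in the home directory
CMD ["node", "server.js"]    # launch node directly with server.js as an argument
```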
[05:48] This wouldn't be horrible, but we can actually improve on it. If you look at my application directory, I have my package.json and npm-shrinkwrap.json files. package.json specifies the module dependencies of my application. In this application, I'm using the npm module faker, so it has to be installed for my application to work.
[06:09] npm-shrinkwrap.json locks down the versions of my dependencies plus the versions of their dependencies, ensuring that the same versions I'm testing against here locally are the ones that get installed. What if I copied package.json and npm-shrinkwrap.json into the app folder inside the container? Once I did that, what if I ran npm install?
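As a sketch, that idea looks like this in the Dockerfile (note the trailing slash on the destination directory, which a comment below points out is required when copying more than one file):

```dockerfile
# Copy only the dependency manifests so Docker can cache this layer
COPY package.json npm-shrinkwrap.json $HOME/app/

WORKDIR $HOME/app
RUN npm install
```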
[06:37] As you might expect, this will create the node_modules folder inside the container. Thanks to the way that Docker caches layers during the build process, it will actually check the checksum of package.json and npm-shrinkwrap.json. If it's already built this image with those same files, it will reuse the cached copy.
[06:59] This means that once you've built and deployed an image the first time, subsequent Docker builds can reuse that cached image, making deploys much faster since you aren't stuck waiting on npm install to finish.
[07:13] You can continue to leverage that cache layer up until you change your package.json or npm-shrinkwrap.json. Even then, you only have to wait for that build, and npm install, to finish once before you can leverage those speed benefits over and over again.
[07:32] There is a bit of a gotcha here. When the package.json and npm-shrinkwrap.json files are copied into the container during the build process, it's done as root, which means that our nodejs user wouldn't be able to access them.
[07:49] Right after we copy them in, I'm going to run the change owner (chown) command with the recursive flag, specifying nodejs as the user and group, and apply that to everything in the home directory.
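As a sketch, that ownership fix in the Dockerfile looks something like this (the exact target path is an assumption):

```dockerfile
# The COPY above ran as root; hand ownership to the nodejs user
RUN chown -R nodejs:nodejs $HOME/*
```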
[08:09] There's one other caveat, and that's because of the way Docker works: we won't be able to access the node_modules folder unless we mount it as a volume. In my volumes definition inside the Docker Compose file, I can define it as a volume so it's available inside the container.
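In the Compose file, that means adding a second, anonymous volume for the node_modules path inside the container (a sketch; as discussed in the comments below, this volume lives only on the Docker host):

```yaml
app:
  volumes:
    - .:/home/nodejs/app            # project code from the local directory
    - /home/nodejs/app/node_modules # container-only volume for installed modules
```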
[08:27] We can now build our image with the docker-compose build command. While that build is running, let's check and see how we did against our best practices guide. The first thing recommended is setting a NODE_ENV environment variable to production, and we did that.
[08:44] Then it recommends running as a non-root user, which we did when we created our nodejs user with the useradd command. It recommends that we limit the memory and the swap to protect other containers running on that same host. We defined that in our Docker Compose file.
[09:03] It also recommends launching Node.js by calling the node command directly. The reason is that if you use the npm start command defined in package.json, it launches an extra process inside the container that's not really necessary. It's more efficient this way.
[09:21] The docker run section we can ignore, because we didn't use the docker run command. Finally, there are the security recommendations on how to analyze your running container. We're going to table that for now, because it's the subject of a whole other lesson.
[09:37] The build completed successfully. I can launch the container with the docker-compose up command. I can see it's listening, and if I switch to my browser and go to localhost port 3000, it works and returns my response.
You are correct. The node_modules folder on my local workstation has modules compiled for OS X, but my Docker host is running Linux. The new volume was needed to provide a location for the Docker host to place the npm modules compiled for its OS (Linux, in this case). The second volume is created locally on the Docker host. Since it contains the npm modules compiled specifically for its OS, they aren't of any use outside of the Docker host, so I didn't expose it.
Hi Will,
I needed to add RUN mkdir $HOME/app after ENV NODE_ENV=production in my Dockerfile. I don't know if it's related to the Docker version or something else, but without it I had a missing-directory error when Docker ran npm install during the build.
Best !
Nice catch! It may be that, or an oversight by the author when recording the lesson! :-D
EDIT: Service 'app' failed to build: When using COPY with more than one source file, the destination must be a directory and end with a /
Need to replace COPY package.json npm-shrinkwrap.json $HOME/app with COPY package.json npm-shrinkwrap.json $HOME/app/
Yup, that was a typo on the screen during recording. The git repo with the lesson has the corrected trailing slash on the command. I'll update the lesson to correctly show that as well. Thanks!
Hi, I'm at 5:25, and I'm not following you when you describe how and why we should amend the "." volume path on the remote host. Are you able to explain this a bit more, please? Is this a path within the Docker container, and if so, do I need to mkdir in the Dockerfile to create that folder? Is that where the app will be installed in the container?
Hi, thanks for posting your question!
In the line `.:/home/nodejs/app`, we are telling Docker to create a volume in the container. The syntax to do this is `[source directory]:[destination directory]`, or stated another way, `[take this stuff]:[and put it here]`. So in our example, the `.` refers to the current local directory. If I'm in the directory `/home/foo`, the `.` will refer to the contents of the `foo` directory. If I'm in `/home/bar`, the `.` will refer to the contents of the `bar` directory. The end result in our example is that the contents of my application are created in the Docker container at `/home/nodejs/app`.
Hope that answers your question!
I am getting a permissions error when I attempt to chown on a mac.
chown: changing ownership of '/home/nodejs/app/.gitignore': Operation not permitted
chown: changing ownership of '/home/nodejs/app/package.json': Operation not permitted
chown: changing ownership of '/home/nodejs/app/server.js': Operation not permitted
chown: changing ownership of '/home/nodejs/app': Operation not permitted
Any ideas?
I'm still a little confused as to why you needed to create a new volume for `/home/nodejs/app/node_modules`. You've already mounted the entire project directory, which contains the `node_modules` you have installed locally. Why can't the container just use those? Also, you didn't specify a "matching" directory when you mounted the 2nd volume. What is this doing? From what I understand, this is creating a directory locally that only the Docker engine has access to (well, for the most part).