In this lesson, we add a Docker configuration to our project. In the Dockerfile we specify the different layers of our Docker image. We use a pretty standard Dockerfile and rely on the build and start scripts in our package.json to do the actual work.
We create a shortcut called docker:build to quickly build an image.
When building the image, we see that the build context being sent to the Docker daemon is huge. By adding a .dockerignore file we can exclude files from being sent. We add dist and node_modules.
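The resulting .dockerignore file is just these two lines:

```
dist
node_modules
```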
Other scripts we add are docker:run and docker:push.
The code of the Dockerfile can be found below:
FROM node:14-alpine
WORKDIR /workspace
COPY package.json yarn.lock /workspace/
RUN yarn
COPY . .
RUN yarn build
CMD ["yarn", "start"]
Bram Borggreve: [0:00] Before building Docker containers, we want to make sure that the DOCKER_BUILDKIT environment variable is set on your system. I've done this by adding export DOCKER_BUILDKIT=1 to my ~/.zshrc.
[0:10] Let's start out by deleting our dist folder. When we now run nx build api --prod, we can see that a new build gets generated in dist/apps/api.
[0:22] Our main.ts got compiled down to a main.js. From the terminal, we can now run node dist/apps/api/main.js, and this will start our compiled build.
[0:32] We get an address in use error because we already have the development server listening on port 3000. To work around this, we can use environment variable PORT=8000 before starting our server. We can see that the server now listens to port 8000. Let's stop the server for now.
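The server reads its port from the environment before falling back to the default. A minimal sketch of that pattern (resolvePort is an illustrative name, not part of the actual api code):

```typescript
// Fall back to the dev port 3000 when PORT is unset, so running with
// PORT=8000 sidesteps the address-in-use error from the dev server.
function resolvePort(env: Record<string, string | undefined>): number {
  return Number(env.PORT ?? 3000);
}
```

With this in place, PORT=8000 node dist/apps/api/main.js listens on 8000 while the dev server keeps port 3000.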
[0:49] In our project root we create a Dockerfile with a capital D. On the first line, we add FROM and pass in node:14-alpine. This is the base image that we'll be working from.
[1:01] We set the container's WORKDIR to /workspace and add a copy command. We COPY package.json and yarn.lock to the workspace folder and RUN yarn to install all the dependencies. Doing it this way we make sure that we only have to rebuild this part of the image when package.json or yarn.lock are changed.
[1:19] Next up, we COPY the whole source code and RUN yarn build. The last thing we do is add a CMD to run yarn start.
[1:27] This is just your standard-issue Dockerfile and can be used with generally any Node.js project. We only need to make sure that yarn build and yarn start do what we expect.
[1:37] Let's open package.json and change the build script to nx build api --prod. The start script is going to be changed to node dist/apps/api/main. Let's also add a script for building our Docker image. I'm going to call it docker:build and have it execute docker build . for current working dir and give the image a tag. I'm going to call it coursus/api.
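The scripts section of package.json might look roughly like this after these changes (coursus/api is the image tag used in the lesson; the rest of package.json is omitted):

```json
{
  "scripts": {
    "build": "nx build api --prod",
    "start": "node dist/apps/api/main",
    "docker:build": "docker build . -t coursus/api"
  }
}
```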
[2:00] When we now run this command, we see that the container gets built, but the step of transferring the build context copies over a lot of data. This is because it's trying to copy over node_modules, etc.
[2:13] Let's add a file called .dockerignore in the root of our project and add the dist and node_modules folders to this file. When we now run docker build again, we see that context is way smaller and the container gets built.
[2:27] Next up, we're going to add a script to quickly run our container and make it execute docker run -it, passing in the name of the image. When we execute it, we see that the container runs, and it uses port 3000. The reason it doesn't interfere with our dev server is that this port 3000 is only accessible from inside Docker. When we run docker ps, we see that there is no reference to any open ports.
[2:51] Let's add the parameter -p 8000:3000 to our docker run script and start it again. This maps our local port 8000 to port 3000 inside the container. When we run docker ps, we can see the mapping.
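The two docker run variants discussed above can be sketched as follows (image name coursus/api as in the lesson):

```shell
# Port 3000 stays inside the container: reachable from other containers only
docker run -it coursus/api

# Map host port 8000 to container port 3000 (the order is host:container)
docker run -it -p 8000:3000 coursus/api

# Verify the mapping in the PORTS column
docker ps
```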
[3:06] Let's run curl localhost:8000/api to get the output of the API. We can fire off GraphQL queries by using the gq command, passing it the GraphQL endpoint and submitting a query.
[3:19] The last thing we're going to do is add a docker:push script and make it run docker push with the name of the image. After running docker:push we see the image appear on Docker Hub. You can now use your API on your favorite cloud provider.
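Pushing assumes you are logged in to Docker Hub and that the image tag starts with your Docker Hub username (coursus in this lesson):

```shell
docker login
docker push coursus/api
```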