Export Services with Docker Port Binding

Mark Shust

By default, Docker is locked down and no ports are exposed. We’ll cover how to expose ports when building your image with a Dockerfile, and how to export that port as a service when running containers.

Instructor: [00:00] Let's take a look at a simple Node.js app. All we are doing is responding "Hello, World!" to every request, and then starting the web server on port 8000.
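
The source file isn't shown verbatim in the transcript, but a server like the one described might look like this minimal sketch (index.js, assuming Node's built-in http module):

```js
// index.js — minimal sketch of the server described above
const http = require('http');

// Respond "Hello, World!" to every request
const server = http.createServer((req, res) => {
  res.end('Hello, World!');
});

// Start the web server on port 8000
server.listen(8000);
```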

[00:11] The Dockerfile is just as simple: it copies over the index.js file and starts Node. Let's build this into a Docker image named foo, and then run it with docker run -d foo.
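
The exact Dockerfile isn't shown in the transcript; a sketch along these lines would do the same job (the node:alpine base image tag is an assumption):

```dockerfile
# Dockerfile — sketch; node:alpine is an assumed base image
FROM node:alpine
COPY index.js /index.js
CMD ["node", "/index.js"]
```

```sh
# Build the image, tagging it "foo", then run it detached
docker build -t foo .
docker run -d foo
```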

[00:29] If we try to access this address and port with curl, we will get a connection refused response. This is because Docker locks down all ports by default, just like a firewall. We need to open port 8000 on the container and make it publicly accessible.
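
Assuming the app listens on port 8000, the failed request from the host looks something like this:

```sh
curl localhost:8000
# curl: (7) Failed to connect to localhost port 8000: Connection refused
```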

[00:45] Let's start up another container, but this time also passing a -p flag. This is how we tell Docker which port we want to access on the container, and where on the host we want to access it. The host port comes first, followed by the container port. If we try to access this again with curl, we will get our "Hello, World!" response.
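
A sketch of that run, assuming we publish container port 8000 on the same host port:

```sh
# -p HOST:CONTAINER — host port first, then container port
docker run -d -p 8000:8000 foo

curl localhost:8000
# Hello, World!
```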

[01:09] Note that we can also map the container port to a different port on the host. This is useful when running multiple containers at the same time that all listen on the same port internally.
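
For example, the same container port could be published on host port 8010 instead (8010 is the port the Compose example below also uses):

```sh
# Map container port 8000 to host port 8010
docker run -d -p 8010:8000 foo

curl localhost:8010
# Hello, World!
```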

[01:21] If using Compose, you can do the same thing by defining the ports option. It accepts multiple values following the same format as the -p flag, with the host port coming first, followed by the container port. Let's test this out by starting our app with Compose, then testing this new 8010 port with curl.
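
A docker-compose.yml along these lines would match what's described (the service name "foo" and the file version are assumptions):

```yaml
# docker-compose.yml — sketch; service name "foo" is assumed
version: "3"
services:
  foo:
    build: .
    ports:
      - "8010:8000"   # host port first, then container port
```

```sh
docker-compose up -d
curl localhost:8010
# Hello, World!
```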