The end is nigh. Your application is almost ready, and you will need to deploy it. And with the deployment, the endless tweaking to get everything running on your production servers.
If only there were a way to test everything first. That is where containers will come in to save the day. With containers, you not only run your code in an environment similar to the production server, but you also run your code in precisely the same environment.
In this talk, attendees will learn how to create container-friendly applications and how to use containers to share their code with their team.
Joel Lord: [0:00] Let's talk about containers. By now, you've probably heard about them, but if you're watching this talk, there's a good chance that, well, you're not using them, and why is that?
[0:11] For me, when I heard about them, I was a software engineer, and I thought they would get in my way. I wanted to be as productive as possible, and I didn't want those new container technologies to get in the way of my productivity.
[0:22] Now, as I started looking into them, I saw how they can not only be very useful, but can help me be even more productive. This is what we're going to see today.
[0:34] To do this, I'm going to spend a lot of time inside my terminal, which is right behind me, so let me get out of the way and immediately jump into this. In order to access those slides, I'll use my terminal where we'll be spending most of the time today. Containers for software developers. How can you use them to make your life easier?
[0:56] First of all, let me introduce myself. Hi, my name is Joel. I currently work as a developer advocate for the Red Hat OpenShift platform. I'm also an Egghead author, so you can find some of my courses on Egghead. I have a course on authentication, as well as one on containers.
[1:11] I am based in Ottawa, Canada, and if you ever have any questions, Twitter is always a good way to reach me. joel__lord is my Twitter handle.
[1:20] Let's talk about why you should use containers in the first place. Well, it's really about having that same environment everywhere. You're reproducing exactly the same environment for development, for production, for testing, everything is exactly the same.
[1:37] It's a great tool for onboarding. As a consultant in software engineering for many years, I often had to jump into a new team. Every time I jumped into a new team, I had to set up my whole machine.
[1:48] I've got to install a new version of Node.js, and install npm, or install an Apache server, depending on what project I'm working on, and all of that setup takes a lot of time. Using containers, you can make that a lot quicker.
[2:02] It's also great for open-source contributions. If you have an open-source project, you should definitely look into containerization. It makes it easier for potential contributors to be able to jump into your project and be able to help you out right away.
[2:16] It's also great for testing. If you run unit tests, or if you connect to a database and input a lot of garbage into it, containers will make your life a lot easier, because you can just take the container down and restart it, and it will start fresh with a brand-new version of your application, in a very reproducible environment, every time.
[2:37] Also, if you deal with a manual testing team, then you can just send the whole environment as one big package, one big container, and they can test in the exact same environment as you are developing in.
[2:51] There are a few things it won't help you with. It won't necessarily help you with networking or DNS routing, so that's an additional thing you'll need to look into when you're dealing with containers. If you have many of them, there are tools to help you with that, but it can get a little bit tricky there.
[3:07] Same thing with scaling. A lot of people associate containers with scalability; that is not exactly the case. It is a good step in the right direction. It definitely makes your life easier when you want to scale, but you'll need tools like Kubernetes to help you with both the networking component and the scaling, and that is outside the scope of this presentation.
[3:27] What is a container? A container is a standard unit of software that packages up code and all of its dependencies, so the application runs quickly and reliably from one computing environment to another. It is a lightweight standalone executable package of software that includes everything needed to run an application.
[4:27] If you've changed anything in your setup, or accidentally changed anything, that's not a problem: just shut it down, start it again, and it starts fresh every time. That is not so great if you want to persist data, so if you have a database, you'll need another mechanism to persist that data. Apart from that, most of the time, it's actually great.
[4:48] You might think it's just like a VM. It is not. In a nutshell, when you have a virtual machine, you have an infrastructure that runs an operating system, and then you have a hypervisor. The hypervisor is in charge of delegating and allocating some resources of your system to various virtual machines that run on your infrastructure.
[5:10] Then for each one of those virtual machines, you have another operating system, which has everything you need to run your applications. There's a lot of overhead. When you're looking at containers, you have your infrastructure, you have your host operating system, and if you're running Linux, the containers run directly as processes in your Linux operating system.
[5:32] If you have a Windows or a Mac, then you will need an additional tool that will create a virtual machine for you. Oftentimes, that is Docker. Where does Docker fit into the whole container scene? Docker is one of the various tools to run containers on your machine.
[5:49] It was used as the basis for the OCI, the Open Container Initiative. Docker made containers popular, and it became a standard, so the other runtimes use the same structure. Podman is an alternative that you can use on Linux, which is a little bit more optimized. It doesn't use a daemon in the background to run all of your containers. They run directly as processes in your operating system, making them much faster.
[6:13] Let's take a look at containers in practice. As I mentioned, when I started looking into containers, I was a bit afraid that they would get in my way, and that was until I had a good use case to try them out. That was when one of my friends came to me and said, "Hey, can you help me with this PHP code?"
[6:55] I had heard about those containers and I figured, maybe that's a good time to try them out. This is what I did, basically. Let me try this out inside my slides inside of my terminal, because why not? What I did here was to use docker run. Docker run is a command to start a container. I wanted to run that in detached mode.
[7:17] I mounted some files, so I made sure that the files in my file system are available inside my container as well. Let's mount that to /var/www/html. If you're familiar with Apache, you might recognize that. Let's make sure that we map some ports as well. I'll tell Docker that any incoming request on port 8080 on my machine goes to port 80 inside the container.
[7:45] What else do I need? A --rm to clean this up. --name, we'll call it test. This will be a PHP 7.4 server with Apache. This is what I tried, boom, done. Apparently, I have an Apache server running. Let's try this out. Curl localhost:8080, and there it is. It started. It was able to serve my page. If you don't believe me, let me just open up VS Code, and there it is.
[8:16] I can go to my index.php, and we'll change this to "Hello, Egghead." Then I can do my curl localhost:8080 again, and you can see "Hello, Egghead." How cool is that? I was able to almost instantly start a full Apache server preconfigured with PHP, everything with just a single line of CLI. How cool is that?
[8:49] I went really fast there, so this is the command that I used. I used docker run. Docker run is the command that you would use to start a container. I told it to use the PHP 7.4 Apache image.
[9:03] If you're not familiar with Docker, you should look at hub.docker.com. They have a list of images that you can use; they're all publicly available. Look for official images, ideally. If you search for something like PHP, you will find a page documenting all of the information about that image.
[9:25] I can do a search for PHP here. I have my PHP Docker image, and you can see all the versions that are supported. If you want to test out the latest version, 8.0.3, you can use that image. You can see here that there are a bunch of variants as well. When there's that -apache, that's a variant. -alpine is a common one; you'll see that often.
[9:46] In my case, I've used that -apache, so that is a preconfigured Apache server. Without that, it would only be the executable that would be installed. You can find all the information about how to use it. There's a lot of stuff in here. Take a look at the documentation. It's always a great resource.
[10:05] The other thing that I did was to map some ports, so that is very important. My Apache server inside my container runs on port 80, but I want any incoming requests on my machine on port 8080 to be redirected to that port 80 inside the container. I do that by using the -p flag.
[10:22] I've also mounted the volumes. I made sure that all the files on my file system were available inside that container in /var/www/html, which is the folder that Apache uses by default to serve files.
[10:37] There are a few other flags that I've used. -d to run it in detached mode, so in the background. --rm removes the container after it is stopped. Once you start using containers, you'll see that they can take up a lot of disk space, so using --rm is a good little tip there.
[10:52] --name will let you name your container. You don't need to name them; it will assign a random ID as well as a random name. But if you start scripting, you'll need a name to be able to find them in your system. This is the command that I've just typed: docker run to start that container. -d to run it in detached mode. --rm to clean it up afterwards. --name to name it; the example uses myPHP, but I've used test.
[11:17] -v to mount a volume, so the files in my file system are now mounted as /var/www/html inside the container. I've also mapped port 8080 on my machine to port 80 inside the container. Finally, I told it which image to use; that was PHP 7.4 Apache.
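Put together, the command from the demo can be sketched like this. The snippet only assembles and echoes the command (a dry run, so it works without Docker installed); pipe the output to sh to actually start the container, which pulls php:7.4-apache from Docker Hub.

```shell
# The docker run command from the demo, assembled as a string and echoed.
#   -d      run detached, in the background
#   --rm    remove the container once it stops
#   --name  give the container a predictable name
#   -v      mount the current directory as Apache's web root
#   -p      forward host port 8080 to port 80 inside the container
cmd="docker run \
  -d \
  --rm \
  --name test \
  -v $(pwd):/var/www/html \
  -p 8080:80 \
  php:7.4-apache"

# Dry run: print the command instead of executing it.
echo "$cmd"
```

To run it for real, replace the final echo with `eval "$cmd"`, then `curl localhost:8080` as in the demo.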
[11:40] I now have my Docker container running. If I do docker ps, I will see all of the containers that I have running. I should probably have cleaned that up beforehand, but you can see here that this container is the one that I've just created. It has the ID 85ea, or the name, test.
[11:56] I can stop my container. I can do docker stop 85, using just the first few characters of the ID, or I can use the name, test, and that will stop my container. While I'm there, I might as well stop everything, so that we don't run into any issues later on as we move through this presentation.
[12:17] What if I wanted to use just the PHP image? PHP is a neat language. It's nice to script in PHP. Sometimes I'll use it, but I don't want to install PHP. I don't have it installed on my machine. I can use an image that has the PHP executable on it, and it will start that executable, run the script, and that's it. Then it will terminate and destroy that container afterwards.
[12:42] To do that, I can use docker run, same thing, and mount my volume. Then, I'll use the PHP 7.4 image. Last is the executable, or command, to run once the container is started. In this case, it's going to run the command php /app/cli.php, which is a small CLI script that I use to count the number of files in a folder.
[13:07] You can also use the -w flag to specify a working directory if that makes your life easier. Docker run, -v, current working directory /samples/php, I believe. Let's mount that to /app. We won't run it in detached mode because we want to see the results. I'm going to use PHP 7.4, and then we're going to execute php /app/cli.php, and this should count the number of files.
[13:41] There we go. There are 20 files. If I open up the scripts here again, you can see this is the script that was executed, and I didn't have to install PHP on my machine. As you can see, PHP is not there.
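That one-off run can be sketched the same way (a dry run that echoes the command; the /samples/php path and cli.php script follow the demo above).

```shell
# One-off container: no -d because we want to see the output, --rm cleans
# up afterwards, and the trailing "php /app/cli.php" overrides the image's
# default command. Dry run: the command is echoed, not executed.
cmd="docker run --rm \
  -v $(pwd)/samples/php:/app \
  php:7.4 \
  php /app/cli.php"
echo "$cmd"
```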
[13:58] If you want to run a Node.js server, it's very similar. Docker run, same command. -v, mount your files into that container. Map your ports, so port 8080 to port 3000. Then use the base image that you want, node 14, or 16, which is about to get released, and then the command that you want to execute once the container is started. In this case, it would be node /app.
[14:24] I have it scripted somewhere. Let me find my scripts here. There you go. script/nodeserver.sh, and it started my Node.js server. I can do curl localhost:3000, and there we go. I've got my Node.js server running and serving this Express application with a single route that says, "Hello world."
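The script presumably wraps a command like this (a dry-run sketch; the container name mynode is made up, and the port mapping follows the description above, host port 8080 forwarded to the Express app's port 3000).

```shell
# Node.js server in a container: mount the app, forward a host port to
# the Express app's port 3000, and override the command with "node /app".
# The name "mynode" is a made-up example. Dry run: echoed, not executed.
cmd="docker run -d --rm --name mynode \
  -v $(pwd):/app \
  -p 8080:3000 \
  node:14 \
  node /app"
echo "$cmd"
```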
[14:51] Let's stop this container again. I'm not sure what it was called. Docker ps to see the running containers, and we'll just stop a34. That's it. That container is now stopped. Without installing Node.js, I was able to run a Node.js application and stop it afterwards.
[15:12] No more [inaudible] on my machine. We've fixed that problem. We finally solved it, because now if I ship my whole application, I'm going to ship an Apache server preconfigured with the exact versions of everything, so everybody will run exactly the same thing. It fixes most of the problems. Most of them.
[15:31] You didn't see it there, but I was getting an error, something about a file not found. I asked my friend, "Hey, what's the problem there?" They told me I had to unhide the file C:/temp/log.txt. Ah, really? A hardcoded path. Don't do that, come on. How do you access those types of values that will change from one environment to another?
[15:53] This is where environment variables come into play. An environment variable lets you store a value that is specific to an environment. Things such as the file path for your log files, for example, or the base URL for your API. It will most likely change from your development environment to your production environment.
[16:13] Hopefully, you're not using the same one for development, testing, and production, but that API will change, and you'll want to change that base URL, so you'll have to use environment variables to do that. To pass an environment variable to your container, you use the -e flag with key=value, and you now have access to that environment variable inside your container.
[16:36] To access it from a PHP script, you can use the $_ENV superglobal. That will give you access to that value. In Node.js, you can use process.env.baseURL, in this case, which will give you the value of the baseURL environment variable.
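As a sketch, here is what passing a variable looks like, with the two ways of reading it back noted in comments (BASE_URL is a made-up example variable; a dry run, the command is echoed rather than executed).

```shell
# Pass an environment variable into a container with -e KEY=value.
# Inside the container, the code reads it back:
#   PHP:     $_ENV['BASE_URL']
#   Node.js: process.env.BASE_URL
cmd="docker run --rm \
  -e BASE_URL=https://api.example.com \
  node:14 \
  node -e \"console.log(process.env.BASE_URL)\""
echo "$cmd"
```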
[16:54] What does that look like in practice? Once again, I have that scripted somewhere. Node ENV, I think it is. There it is. I've run a docker run with -e name=world in this case, and I've used the image node 8 here, because why not? You can see that it outputted, "Hello world."
[17:19] If I run this with another environment variable, that's what happens. Just one line higher. We missed it. Let's make that a little bit smaller. .sh, and there it is. The environment variable name=Joel gives "Hello Joel," and then we have -e name=world, "Hello world." We can also see that the first one here used Node 8, and we can see that it was executed with version 8.17.
[17:49] I've outputted the node version that executed this script here. We can see that if you need to test on different versions, that will make your life a lot easier.
[18:00] Now, environment variables are also used to configure specific containers, so you just have to look at the documentation. As an example, MySQL will take a bunch of parameters, such as the root password, the user, and the password for that user, in order to be able to start.
[18:17] You can specify all of those as environment variables, and then, your MySQL server will start with those variables. You can also specify an entry point on certain specific images, so you should once again look at the documentation.
[18:30] For MySQL, as an example, any file in the /docker-entrypoint-initdb.d/ folder will be executed when it finds anything in there. Any .sh file will be executed inside the Linux operating system, and any .sql file will be applied to your database.
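A sketch of such a MySQL start command (a dry run, echoed rather than executed; the credentials and database name are placeholders, and mysql:8.0 is an assumed image tag). The MYSQL_* variables and the init folder behavior are documented on the image's Docker Hub page.

```shell
# Start MySQL configured entirely through environment variables, and
# mount a local ./init folder whose .sql and .sh files run on first
# startup. Credentials here are placeholders. Dry run: echoed only.
cmd="docker run -d --rm --name mysql \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  -e MYSQL_DATABASE=conferences \
  -e MYSQL_USER=user \
  -e MYSQL_PASSWORD=secret \
  -v $(pwd)/init:/docker-entrypoint-initdb.d \
  -p 3306:3306 \
  mysql:8.0"
echo "$cmd"
```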
[18:51] Let's take a look at some more scripts here. This is what I would use to start a MySQL server. I can specify all the environment variables. Let's take this and paste it in here. There you go. I started a MySQL server. That's it. That's all I needed. Actually, let's stop this server; I'll stop my MySQL and use another script here.
[19:23] I have this one which also, in addition to using those environment variables, uses the entry point to create and pre-populate my MySQL database. Now that I have this, let's take a look at another script that I have here. Another docker run command that I can run. Let's take a look now at my browser. There it is, localhost:8888, and there was a surprise for you. That's phpMyAdmin.
[19:54] Ever tried to install and configure phpMyAdmin? That's a lot of trouble, but there it is. With one single Docker command, I was able to use it. I can see that my MySQL is there, and it is pre-populated with this conference table, which I've just added as part of my initialization script.
[20:12] Cool, but there's a lot more to Docker. There's a lot more than just the CLI, because that's a lot of commands to remember. Maybe you'll want to create your own images. For that, you can use a Dockerfile. A Dockerfile is, literally, a file called Dockerfile. It has the same options as the command line, and it's a standardized set of commands that you can use to build your own images for sharing.
[20:34] You will use the Dockerfile to create your own image that includes your source code, and then you'll be able to share that image with your testing team or to your production server and build stacks that can then be shared.
[20:47] A Dockerfile for a PHP project starts from a base image. You always start from a base image. In this case, start from PHP 7.4 Apache, then just copy your source code to the /var/www/html folder. That's it. You've got a container. It has all of your source code in it. If anybody uses that container without mounting a folder that overwrites your actual source code, it'll run.
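That two-step Dockerfile might look like this (a sketch, assuming the php:7.4-apache base image):

```dockerfile
# Start from the preconfigured Apache + PHP base image
FROM php:7.4-apache

# Bake the source code into the image at Apache's default web root
COPY . /var/www/html
```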
[21:11] A Dockerfile for a Node.js project? No problem. Here's what it looks like. From node 14, expose the ports that you want to use in your application, copy all of your source code, change your working directory, and run npm install to install all the node modules for this specific environment.
[21:28] The command, once that container is started, is npm start. It will run this, and there you go. You've got your Express server running.
[21:38] Now, there are ways to optimize all of that. I won't go into the details, but if you want to make better use of the cache, you'll probably want to copy the package.json first, install all the dependencies, and then, only at the end, copy the source code. That way, the layers that never change, or rarely change, are always reused.
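The cache-friendly ordering just described might look like this (a sketch; the exposed port and the npm start script are assumptions):

```dockerfile
FROM node:14

WORKDIR /app

# Copy only the dependency manifest first: this layer, and the npm
# install below it, stay cached as long as package.json doesn't change
COPY package*.json ./
RUN npm install

# The source code changes often, so copy it last
COPY . .

EXPOSE 3000
CMD ["npm", "start"]
```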
[22:00] Now you'll want to compile those images, so you'll use the docker build command to do that. Docker build, specifying where your Dockerfile is. You will also want to give it a name, or neither you nor other people will be able to use it. Use the -t flag to specify a tag.
[22:17] Your container might be running as root, so that is one thing that you'll need to keep in mind. If you want to know more about that, you can watch my Egghead course, which explains how to avoid running as root. Docker build creates a full image. That image can then be shared through a registry, and the image is ready to be shared with your team or to be deployed.
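The build step itself is one command (a dry run, echoed rather than executed; myuser/myapp is a placeholder repository name):

```shell
# Build an image from the Dockerfile in the current directory and tag it
# with -t so it can be pushed to a registry later. Dry run: echoed only.
cmd="docker build -t myuser/myapp:1.0 ."
echo "$cmd"
```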
[22:36] To share on a registry, well, you can share on Docker Hub, which is the one that I use most of the time. It's available, and you can put up as many public images as you want, though there's a rate limit on downloads that you'll need to keep in mind.
[22:49] If you want to have your own registry that you will use on your own infrastructure, you can use Quay, which is an open source project. Google, Azure, IBM, they all have their own registries, so just use the one that is closest to your servers.
[23:02] To push to that registry, use docker push, the same syntax as you're already used to with Git. Docker push with the name of your image, and docker pull to download an image from the registry. There are a lot more useful commands that I haven't shown here. Docker ps to list the containers. You have seen me use it.
[23:21] Docker ps -a will list all the containers. If you don't use that --rm flag, you'll see that there are a lot of them there. Docker stop to stop your containers from running. Docker rm to remove a container. Docker tag to rename an image. That's all nice in theory, but things don't always work as they should. When you get started with containers, it can be a little bit tricky.
[23:45] There are a few tricks that you can use. Docker logs will let you see the logs from inside that container. Just give it the name of the container, and -f if you want to follow those logs, so you can see what's going on in real time. Docker exec lets you run a command inside that container. Docker exec, the name of your container, and the command to be executed.
[24:07] Note that those commands are actually run inside the container. If you have a Windows machine running a container, and you want to list the files in a specific folder, you will still need to use /bin/ls, because that container is actually running Linux. You can't use dir to see the contents of a folder or the files in a folder.
[24:26] If you want to log in to your container to see what's going on, you can use /bin/bash to open up a bash session inside that container. Make sure to use the -it flag to make it interactive, and you'll be able to log in to that container and see what's going on.
[24:44] To copy files to or from your container, docker cp is a command that can be very useful. Say you want to take a look at a configuration file that is being used.
[24:53] Docker cp, the name of the container, the name of the file that you want copied, and the destination where you want it. So I can take a look at the configuration file for my Apache server to see if there are any tweaks that need to be made.
[25:06] To copy to the container, it's the opposite: the name of the file or the path of the source, then the name of the container, a colon, and the destination. Debugging: docker logs to see the logs, docker exec to run commands or log in to your container, docker cp to copy files to and from your container.
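Collected together, the debugging commands from this section can be sketched like this (a dry run, echoed rather than executed; "test" is the container name used earlier, and the Apache config path is an assumption based on the Debian-based image):

```shell
# Debugging helpers, echoed rather than executed (dry run).
logs_cmd="docker logs -f test"                        # follow the logs in real time
exec_cmd="docker exec test /bin/ls /var/www/html"     # run a command inside
shell_cmd="docker exec -it test /bin/bash"            # interactive shell inside
cp_from="docker cp test:/etc/apache2/apache2.conf ."  # copy a file out
cp_to="docker cp ./apache2.conf test:/etc/apache2/"   # copy a file back in

echo "$logs_cmd"
echo "$exec_cmd"
echo "$shell_cmd"
echo "$cp_from"
echo "$cp_to"
```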
[25:25] Here's another thing. If you have multiple containers, when you start using them more and more, you might have different things such as PHP and phpMyAdmin and MySQL, all running. If you have multiple containers, you might want to start looking into docker compose.
[25:40] Docker compose is another tool that is in addition to Docker, which will take a YAML file that describes all of the containers that you want running, and it will start all of them with one single command. In this case, you can see that it overflows a little bit. You can see that I have two services or two containers running.
[26:01] I'll have a container for my database that will use the MySQL image, and then I have my web server, which uses the PHP 7.4 Apache image, and I describe the volumes and how they're mounted. Basically, you can use exactly the same things as in a docker run command, but in a YAML format.
[26:17] Definitely take a look into that. That is what will make your life so much easier. When you share that, you just share that YAML file, and everybody is able to use a single command called docker compose up, and all of the containers are started and ready to be used.
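A docker-compose.yml for the two services described above might look like this (a sketch; the password and database name are placeholders):

```yaml
version: "3"
services:
  # The MySQL database, configured through environment variables and
  # pre-populated by the scripts in ./init
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: conferences
    volumes:
      - ./init:/docker-entrypoint-initdb.d
  # The PHP/Apache web server, serving the current directory
  web:
    image: php:7.4-apache
    ports:
      - "8080:80"
    volumes:
      - .:/var/www/html
```

With this file in place, docker compose up starts both containers with one command.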
[26:32] Should you use containers in your day-to-day life? Yes, absolutely. That is what I've discovered. It makes your life a lot easier, so definitely. If you're a software developer, look into it. You should definitely use it. If you are doing any testing, you should definitely look into them because it will also make your life easier.
[26:50] By starting from scratch every time, you're not worried about breaking anything inside that container. You just discard it afterwards. If you need to share with another team, just create that image, share with the rest of the team. Everything will be reproducible.
[27:05] Finally, if you have an open source project, yes, please use it. It makes the life of potential contributors so much easier. I once had to contribute to a project, a PHP project. I didn't want to install all of those Apache servers and PHP and all that stuff, but they had a Docker Compose file.
[27:24] I was able to do docker compose up. Everything was spun up. I immediately had access to a database pre-seeded with some information, test data. I was able to make that small change and submit a PR. If it hadn't been for that Docker Compose file, I would probably have just filed an issue and hoped that someone would fix it.
[27:46] If you have an open source project, feel free to reach out to me on Twitter, @joel__lord. I'll be more than happy to help you out to containerize your application.
[27:55] If you want to find out more information, Easy URL to /containers, or just check out my course on Egghead that explains exactly how to take an application, decompose it into multiple little services, and run all of those inside their own containers.