Configuring AWS Elastic Load Balancers

Instructor: Will Button

In this lesson, you will learn how to set up an Elastic Load Balancer for your Node.js servers. You will also learn how to configure the load balancer with servers in different availability zones, using Node.js servers that do not require root access to bind to port 80 or 443 and are not publicly accessible from the internet. You will learn how to use health checks to ensure your servers are removed from service when they stop responding properly and are automatically added back to the load balancer when they are healthy again. Finally, I show you how to understand the metrics provided by the ELB so you can troubleshoot problems with both your backend servers and the ELB itself.

[00:02] I have two Node.js servers, and I want to point out a couple of things about them before we get started. First, you can see that they're both in different availability zones. That means that in Amazon they're in different physical data centers, which gives us a little bit of disaster recovery capability. Second, node is not running as root on these servers, which prevents it from binding to port 80, so instead node is binding to port 3000.
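For reference, here's a minimal sketch (TypeScript, using Node's built-in http module) of the kind of server in play here. It binds to port 3000, an unprivileged port, so it needs no root access; the response text is just an illustration.

```ts
// minimal-server.ts -- a minimal sketch of the Node.js servers in this lesson.
// Ports below 1024 require root; port 3000 does not.
import * as http from "http";

const PORT = 3000;

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from a private Node.js instance\n");
});

server.listen(PORT, () => {
  console.log(`Node.js server listening on port ${PORT}`);
});
```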

[00:27] Finally, neither of these servers has a public IP. The only way they can be accessed is through my VPC. You can see the private IPs right there.

[00:37] These things could be potential show-stoppers, but I'm going to show you how we're going to use the Elastic Load Balancer to make sure that they aren't. Let's get started by creating the load balancer: select Load Balancers from the EC2 menu and click "Create." I'll give it a name, and I want to create it inside my VPC.

[00:57] I'm not going to check the box that says "Create an internal load balancer," because that would make it accessible only within my VPC, and I want this to be publicly accessible. Earlier in the lesson, I mentioned that Node.js is running on port 3000. Since this is going to be our website, we want the ELB to be listening on port 80.

[01:17] Whenever the ELB receives a request on port 80, it needs to forward it to port 3000 on the Node.js instances.

[01:27] We also want to listen on HTTPS, so that's port 443, and again we want it to forward the request to port 3000 on the Node.js servers. The cool part is that our Node.js servers can listen on any port, while the ELB listens on the standard HTTP and HTTPS ports and does the port translation for us.
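If you'd rather script this than click through the console, here's a sketch of the same listener setup using the AWS SDK for JavaScript (v3) classic ELB client. The load balancer name, region, subnet IDs, security group ID, and certificate ARN are all placeholders; the Listeners array is where the 80-to-3000 and 443-to-3000 port translation is expressed.

```ts
// create-elb.ts -- a sketch, not the exact console wizard. All names, IDs,
// and ARNs below are placeholders.
import {
  ElasticLoadBalancingClient,
  CreateLoadBalancerCommand,
} from "@aws-sdk/client-elastic-load-balancing";

const elb = new ElasticLoadBalancingClient({ region: "us-east-1" });

async function main() {
  const { DNSName } = await elb.send(new CreateLoadBalancerCommand({
    LoadBalancerName: "my-elb",
    Listeners: [
      // Inbound HTTP on 80 -> port 3000 on the Node.js instances
      { Protocol: "HTTP", LoadBalancerPort: 80, InstanceProtocol: "HTTP", InstancePort: 3000 },
      // Inbound HTTPS on 443 -> port 3000; the ELB terminates TLS
      {
        Protocol: "HTTPS",
        LoadBalancerPort: 443,
        InstanceProtocol: "HTTP",
        InstancePort: 3000,
        SSLCertificateId: "arn:aws:iam::123456789012:server-certificate/my-cert",
      },
    ],
    Subnets: ["subnet-private-b", "subnet-private-c"], // the "Private B" / "Private C" subnets
    SecurityGroups: ["sg-www"],                        // the WWW group created in the next step
  }));
  console.log(`Load balancer created: ${DNSName}`);
}

main().catch(console.error);
```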

[01:49] Next, we need to select our subnets, and our Node.js servers live in my subnets named "Private B" and "Private C." We can now assign our security groups. We're going to create a new security group and call it WWW. What you see on the screen here is the protocol, either TCP or UDP, followed by the port range that the rule refers to, which can be either a single port, as we've done here, or a range.

[02:15] The source allows you to specify the source IPs that are permitted by this security group. In the case of 0.0.0.0/0, it means anywhere on the internet. We enabled HTTPS on our load balancer, so we need to provide an SSL certificate. I've provided the private key and public key certificate for my cert. We're prompted to choose a cipher, and we can choose the SSL protocols and ciphers that we allow through our load balancer.
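The equivalent security group can be sketched with the EC2 API. The VPC ID is a placeholder; the two ingress rules mirror what's on screen: TCP 80 and TCP 443 from 0.0.0.0/0.

```ts
// create-www-sg.ts -- a sketch of the WWW security group.
import {
  EC2Client,
  CreateSecurityGroupCommand,
  AuthorizeSecurityGroupIngressCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

async function main() {
  const { GroupId } = await ec2.send(new CreateSecurityGroupCommand({
    GroupName: "WWW",
    Description: "Public HTTP/HTTPS access to the load balancer",
    VpcId: "vpc-0123456789abcdef0", // placeholder
  }));

  // TCP 80 and TCP 443 from 0.0.0.0/0 -- anywhere on the internet.
  await ec2.send(new AuthorizeSecurityGroupIngressCommand({
    GroupId,
    IpPermissions: [
      { IpProtocol: "tcp", FromPort: 80, ToPort: 80, IpRanges: [{ CidrIp: "0.0.0.0/0" }] },
      { IpProtocol: "tcp", FromPort: 443, ToPort: 443, IpRanges: [{ CidrIp: "0.0.0.0/0" }] },
    ],
  }));

  console.log(`Security group ${GroupId} created`);
}

main().catch(console.error);
```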

[02:45] I'm just going to stick with the default, the predefined security policy provided by the ELB. Then we'll select "Configure health check."

[02:52] What the health check does is continually poll the Node.js servers that we've added to the load balancer. We need to provide a valid URL; in our case, the root URL itself returns a valid response, and of course on port 3000. It defaults to a 5-second timeout and checks every 30 seconds.

[03:12] If one of the servers fails two checks, it's removed from service and marked as unhealthy.

[03:18] Once it starts responding again, it has to pass 10 checks before it can be added back to the load balancer. 10 checks at a 30-second interval is 300 seconds, or five minutes. That seems a little long to me, so I'm going to drop it down to three. It's time to add our instances, so I'm going to select both of my Node.js instances.
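The same health check settings can be expressed through the API. This sketch assumes the placeholder load balancer name from earlier; the Target string "HTTP:3000/" means poll the root URL on port 3000 over HTTP.

```ts
// configure-health-check.ts -- the health check values chosen in this lesson.
import {
  ElasticLoadBalancingClient,
  ConfigureHealthCheckCommand,
} from "@aws-sdk/client-elastic-load-balancing";

const elb = new ElasticLoadBalancingClient({ region: "us-east-1" });

async function main() {
  await elb.send(new ConfigureHealthCheckCommand({
    LoadBalancerName: "my-elb", // placeholder
    HealthCheck: {
      Target: "HTTP:3000/",   // poll the root URL on port 3000
      Timeout: 5,             // seconds to wait for a response
      Interval: 30,           // seconds between checks
      UnhealthyThreshold: 2,  // 2 failures -> removed from service
      HealthyThreshold: 3,    // 3 passes -> added back (lowered from the default 10)
    },
  }));
}

main().catch(console.error);
```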

[03:40] One final check to confirm everything's set correctly, and then we'll create it. Our load balancer is created, so let's go take a look at it.
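Registering the instances can also be done through the API. The instance IDs below are placeholders for the two Node.js servers.

```ts
// register-instances.ts -- attach both Node.js instances to the ELB.
import {
  ElasticLoadBalancingClient,
  RegisterInstancesWithLoadBalancerCommand,
} from "@aws-sdk/client-elastic-load-balancing";

const elb = new ElasticLoadBalancingClient({ region: "us-east-1" });

async function main() {
  await elb.send(new RegisterInstancesWithLoadBalancerCommand({
    LoadBalancerName: "my-elb", // placeholder
    Instances: [
      { InstanceId: "i-0aaaa1111aaaa1111" }, // Node.js server in one AZ (placeholder)
      { InstanceId: "i-0bbbb2222bbbb2222" }, // Node.js server in the other AZ (placeholder)
    ],
  }));
}

main().catch(console.error);
```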

[03:48] On the Description tab, you see the DNS name. One of the common problems I see with ELBs is that people look for the IP address of the elastic load balancer to use, and you shouldn't use the IP address; you should always use this DNS name. The reason is that AWS, in the background, is load balancing your load balancer, meaning that if the specific hardware for your load balancer fails, they automatically roll over to hardware that's working.

[04:17] They use the DNS name to do that. If you point to the IP address that this resolves to and it fails, your load balancer will appear to stop working. But if you always use the DNS name, it will roll with you no matter what happens to the backend configuration.
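You can watch this happen yourself: resolving the ELB's DNS name (the name below is a placeholder) returns the current set of IPs, and that set changes over time, which is exactly why hard-coding an IP breaks. In your own DNS, point a CNAME at the ELB's DNS name rather than an A record at its current addresses.

```ts
// resolve-elb.ts -- shows why you use the DNS name: the IPs behind it change.
import { resolve4 } from "dns/promises";

async function main() {
  // Placeholder; use the DNS name from your ELB's Description tab.
  const dnsName = "my-elb-1234567890.us-east-1.elb.amazonaws.com";
  const addresses = await resolve4(dnsName);
  // Run this again later and the list may differ as AWS swaps hardware.
  console.log(addresses);
}

main().catch(console.error);
```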

[04:34] We can also see the status of both of our instances and our port configuration on this tab. Selecting the Instances tab, we can see both of our instances with a little more detail, as well as the configured health check.
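The same per-instance status is also available through the API via DescribeInstanceHealth; a quick sketch, again assuming the placeholder load balancer name:

```ts
// instance-health.ts -- the status shown on the Instances tab, via the API.
import {
  ElasticLoadBalancingClient,
  DescribeInstanceHealthCommand,
} from "@aws-sdk/client-elastic-load-balancing";

const elb = new ElasticLoadBalancingClient({ region: "us-east-1" });

async function main() {
  const { InstanceStates } = await elb.send(new DescribeInstanceHealthCommand({
    LoadBalancerName: "my-elb", // placeholder
  }));
  for (const s of InstanceStates ?? []) {
    // State is "InService" or "OutOfService"; Description explains why.
    console.log(`${s.InstanceId}: ${s.State} (${s.Description})`);
  }
}

main().catch(console.error);
```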

[04:46] I'm on the Monitoring tab here, and I'm going to scroll up so that we can see some of our metrics. The HTTP 200s are the 200 responses generated by the backend node instances. The next two charts are the HTTP 400 and 500 errors. These are the errors generated by the node instances behind the load balancer.

[05:08] On the second row, you see the ELB 400 and 500 errors. These will be the 400 and 500 errors generated by the ELB itself.

[05:17] It could be generating those because there's a problem with your back end instances or it could be generating them because there's a problem with the ELB itself. We have to do a little troubleshooting sometimes to figure out which is the accurate scenario.

[05:31] Key takeaway from this -- on the top row, the HTTP 400 and 500 errors are the ones being generated by your backend instances behind the load balancer. The ELB 400 and 500 errors are the errors being generated by the ELB itself.
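To pull those same numbers programmatically, classic ELBs publish metrics to CloudWatch under the AWS/ELB namespace: backend-generated errors are HTTPCode_Backend_4XX and HTTPCode_Backend_5XX, while ELB-generated errors are HTTPCode_ELB_4XX and HTTPCode_ELB_5XX. A sketch that sums the last hour of 5xx counts (the load balancer name is a placeholder):

```ts
// elb-5xx-metrics.ts -- separate backend 5xx errors from ELB-generated 5xx.
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({ region: "us-east-1" });

async function sum5xx(metricName: string): Promise<number> {
  const { Datapoints } = await cw.send(new GetMetricStatisticsCommand({
    Namespace: "AWS/ELB",                  // classic ELB metric namespace
    MetricName: metricName,
    Dimensions: [{ Name: "LoadBalancerName", Value: "my-elb" }], // placeholder
    StartTime: new Date(Date.now() - 60 * 60 * 1000),            // last hour
    EndTime: new Date(),
    Period: 300,                           // 5-minute buckets
    Statistics: ["Sum"],
  }));
  return (Datapoints ?? []).reduce((total, d) => total + (d.Sum ?? 0), 0);
}

async function main() {
  console.log("Backend 5xx:", await sum5xx("HTTPCode_Backend_5XX")); // from your instances
  console.log("ELB 5xx:    ", await sum5xx("HTTPCode_ELB_5XX"));     // from the ELB itself
}

main().catch(console.error);
```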