Perform Load Tests on an API Server Using Apache JMeter

Instructor: Will Button

The last step before deploying your new API server into production is to load test it. Load testing allows you to better understand its performance characteristics and forecast load and capacity. Fortunately, load testing is incredibly easy, and I'll show you exactly how to create a load test plan, test the response from the API server to ensure it is responding correctly, and scale your test up to simulate as many users as needed.

Instructor: [0:01] The first step in load testing our application is to download JMeter. You can do so from the JMeter website, and you're also going to need Java because it's a Java-based application.

[0:12] Once it's downloaded, you can launch it by switching to the folder that it's in and running bin/jmeter. That's going to launch the JMeter UI and start us off with a basic test plan. We'll change the name of our test plan to ToDo API test.
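A rough sketch of that launch, assuming a Unix-like shell and whatever apache-jmeter folder name your download actually extracted to:

    # launch the JMeter GUI from the extracted download folder
    cd apache-jmeter-5.6        # substitute the folder name for your version
    bin/jmeter                  # on Windows, use bin\jmeter.bat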

[0:28] Then I'm going to right-click on the test plan and hit Add. From the Config Element menu, choose User Defined Variables. This is going to allow us to set up some defaults and then just refer to them by variable name within the different steps of the test.

[0:43] I'm going to create a variable named host, with its value set to localhost. I'll create one called port and set its value to 10010, which is the port number that my API server is running on. Then I'm going to add an HTTP Header Manager, which again comes from the Config Element menu.

[1:11] Then here, I'm going to set the Content-Type header to application/json. That's going to ensure that the Content-Type header is sent in all of the HTTP requests that we send.

[1:22] I'm also going to add HTTP Request Defaults. Here, I'm going to use the variables that we defined for the server name and port number. You refer to them using the dollar-sign, curly-bracket notation and the name of the variable that you created, which for us is host for the server name and port for the port number. We'll specify our protocol as HTTP.
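Roughly, the HTTP Request Defaults fields end up holding the following (field labels vary slightly between JMeter versions):

    Server Name or IP:  ${host}
    Port Number:        ${port}
    Protocol:           http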

[1:53] Now, I'm going to add a Thread Group. A thread group is a way to organize our requests; think of it like a subfolder. I'm going to give it a name, and then we'll come back to some of these other options later on in the lesson. For right now, within our thread group, we're going to add an HTTP Request from the Sampler menu.

[2:19] We have options to set the server name, port number, and protocol here, but because we set those in our request defaults, we can just ignore them. We'll go ahead and give this one a name: Get All ToDos. That request is going to hit the root endpoint of the server, so we don't need to set anything else here.

[2:37] Now, from the Assertions menu, I can choose a Response Assertion. Think of it like a TDD-style assertion, where we can check the response we get back from the server and ensure that it matches what we expect. We can listen for a specific response code. In this case, we want to check for an HTTP 200 response.
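Roughly, the Response Assertion is configured like this (again, field labels vary a little by JMeter version):

    Field to Test:     Response Code
    Patterns to Test:  200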

[3:09] Then I'm going to add a View Results Tree from the Listener menu. This is going to allow us to see the results. I can run this by hitting the little green play button.

[3:21] In the View Results Tree, it shows the test that ran, the sampler results, the request that was sent, and the response data. Specifically, what we're looking for is this little green checkmark showing us that the response assertion passed.

[3:42] As a matter of fact, if I stop my Node server right now, I can rerun that test and it turns red, because it failed. In the response data for the HTTP request, you can see the error we got was connection refused.

[4:00] In the response assertion section, it shows that it expected the response to contain the code 200, and that did not happen. I'm going to add another HTTP request from the Sampler menu. This time, I'm going to make it a POST. We're going to post to that root endpoint again.

[4:18] Then under the Body Data, we're going to provide the JSON object that creates a new ToDo. I'll create that JSON object. The first thing that we're going to add is our ToDo ID.

[4:29] Then I'm going to use a JMeter function called __Random and specify that it creates a random number between 99000 and 99999, which isn't going to conflict with anything in my Elasticsearch database. The ToDo we'll call "test from JMeter". Our author is JMeter.

[4:53] Completed is going to be false. Then I'll also change the name of the sampler so that it makes sense. Then underneath this, from the Post Processors menu, I'm going to add a JSON Path post-processor.
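Roughly, the Body Data ends up looking something like this (the field names here are assumptions based on the narration; match them to whatever your ToDo schema actually uses):

    {
      "todoId": ${__Random(99000,99999)},
      "todo": "test from JMeter",
      "author": "JMeter",
      "completed": false
    }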

[5:08] The JSON Path post-processor is going to allow me to capture the result from that HTTP POST and save it as a variable. Let me show you what that's going to look like real quick. I'm going to do a curl with a POST operation. I need to specify my headers, and then for the data, we'll use our JSON object.

[5:28] I'm just going to make up a number right now. You can see in the response I get back a JSON object that has the ID number of the ToDo that was inserted. We're going to capture that and use it later in our tests. I want that ID saved in a variable named ToDo.
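A sketch of that curl call, assuming the API is listening on localhost:10010 and accepting POSTs on the root path as described above (the body field names are the same assumptions as before):

    # POST a new ToDo; the response is a JSON object whose ID field
    # identifies the document that was inserted
    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{"todoId": 99123, "todo": "test from curl", "author": "curl", "completed": false}' \
      http://localhost:10010/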

[5:48] To get it, I'm going to use the JSON path expression $.ID, where the dollar sign represents the response and .ID is the field that I want from that JSON object. Now we can add another sampler that's an HTTP request.

[6:07] We're going to use the DELETE method to actually delete the ToDo that we just created. The path for that is going to be todo, followed by the ID number of that ToDo. We can reference that ID using the dollar-sign, curly-bracket notation with the ToDo variable name.
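So the Path field of the DELETE request ends up looking roughly like this (the exact route prefix is an assumption based on the narration):

    /todo/${ToDo}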

[6:25] We want to add a response assertion to that request as well. The delete endpoint returns the results from Elasticsearch, so what we're going to look for is the string "successful" and the number 1. If we run that now, we see that all three worked correctly. Get All ToDos returned all of our ToDos.

[6:46] The post ToDo returned the ID number, and the ID was within the range that we specified with our __Random function. The delete ToDo returned the Elasticsearch results of the delete operation. If I go back to the thread group, there was a section we skipped over.

[7:07] That's the number of threads, the ramp-up period, and the loop count. We can scale this up to say that we want to simulate 10 users, ramped up over a five-second window. For each user, we want to execute the tests in the thread group, let's say, 10 times.
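Roughly, the thread group fields end up as below. With three samplers in the group, that works out to 10 users x 10 loops x 3 requests = 300 requests in total.

    Number of Threads (users):  10
    Ramp-Up Period (seconds):   5
    Loop Count:                 10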

[7:25] You can see that's where thread groups really come into play: you can group certain operations and control the number of times they run, to simulate a realistic load similar to what your production servers see.

[7:38] If we click on our results tree and run it, you can see all of those operations going through. Then, just so you get a complete picture, I'm running my Node application using nodejs-dashboard here. I've restarted the JMeter test in the background, off-screen.

[7:56] You can see it going through here, logging out the results from the Node application. It's showing the timing of the functions as they run.

[8:07] If we were actually generating any significant CPU load or impacting the event loop, you would see that showing up on the graphs here as well. That just allows you to see both sides of the equation when ramping up the load using JMeter.

[8:19] Then monitoring your server on the back end to see when you start having performance-impacting problems.