Yonatan Kra: 0:00 This is an Express application with two routes: the test API, which uses one request handler, and the home page API, which uses a different one. The sequential request handler calls two async methods one after the other, and the concurrent request handler calls the same two async functions concurrently.
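The two handler strategies can be sketched as plain async functions. This is a minimal sketch, not the lesson's actual code: the function names are illustrative, and the lesson's multi-second delays are shortened to milliseconds.

```javascript
// A helper that resolves with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Sequential: each call is awaited before the next starts,
// so the total time is the SUM of the two delays.
async function sequentialHandler() {
  const a = await delay(50, 'first');
  const b = await delay(30, 'second');
  return [a, b];
}

// Concurrent: both calls start immediately and are awaited together,
// so the total time is roughly the LONGER of the two delays.
async function concurrentHandler() {
  const [a, b] = await Promise.all([delay(50, 'first'), delay(30, 'second')]);
  return [a, b];
}

// In the Express app each of these would be wrapped in a route handler,
// e.g. app.get('/', async (req, res) => res.json(await concurrentHandler())).
```

With the lesson's timings (async functions of roughly five seconds and one second), the sequential version takes about six seconds and the concurrent version about five.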
0:25 We're going to use a tool called autocannon to see the difference between the two API calls. I'll start by installing autocannon globally: npm install autocannon -g. Once the installation is complete, we can use autocannon.
0:45 The basic usage is to call autocannon with the API URL, in this case, localhost on port 3000. By default, autocannon opens 10 connections that call the API for 10 seconds.
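The install and benchmark steps from the lesson look like this on the command line (the -c and -d flags just spell out autocannon's defaults of 10 connections for 10 seconds; the route path is illustrative):

```shell
# Install autocannon globally.
npm install -g autocannon

# Run 10 concurrent connections against the API for 10 seconds.
# These are also the defaults, made explicit here with -c and -d.
autocannon -c 10 -d 10 http://localhost:3000/
```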
1:02 These are the results for the sequential handler. We can see that the latency per request is around six seconds. This is the expected result, because the sum of the two async functions' run times is six seconds.
1:19 Let's run the same command again, this time against the concurrent handler. Autocannon again opens 10 connections for 10 seconds. Now that it's finished, we can see that, as expected, the concurrent handler took around five seconds, which is the run time of the longer async function.
1:42 As you can see, autocannon gives you many statistics about your HTTP API. The most commonly used is the median latency, which means that 50 percent of the requests were faster and 50 percent were slower than this number. You might also want to pay attention to the average latency.
2:01 Two more parameters are requests per second and bytes per second passed between the client and the server, which can give you more information depending on your use case.
2:12 To summarize, we installed autocannon globally and called it from the command line with the route of the API under test. In our case, we saw the difference between running async processes sequentially and concurrently.