
Managing Memory and Garbage Collection in Node.js

8:40 Node.js lesson

In this lesson, you will learn how to view and interpret the garbage collection activity of your Node.js app via the console. You will also learn how to take heapdump snapshots and analyze them with Chrome Developer Tools to identify possible memory leaks. A sample app with a known memory leak is provided as part of the walk-through to illustrate how to use the tools in your own environment.



Mickey

Windows Issue

Good example, but I only found out at the end of the lesson that server.js had changed, and I had initially been using the server.js created earlier, which didn't have the memory leak. So for someone following along with the lesson, it didn't work and it wasn't clear why.

As I'm following along on a Windows machine, I found that I was unable to install the heapdump package, so that put the final kibosh on it. It would have been nice to end the course on a high, but it wasn't to be.

In reply to egghead.io
Avatar
Will

Thanks for your comments, Mickey!
I'll take a look at the discrepancy; thank you for pointing it out. Regarding using heapdump and other packages from Windows: you may find it useful to run a small Vagrant server or Docker image locally. Doing so lets you simulate the behavior you will see on your production servers, since many of them likely run a Linux distro.

In reply to Mickey
Avatar
John

I ran this lesson on Ubuntu 14.04, Node version 6.2.2. There was no memory leak activity reported.

output

john@john-aspire-7741:~/Documents/fenestra/development/gids-web/node-egghead/http$ node --trace_gc server-heapdump.js
[30883:0x28a71b0]       11 ms: Scavenge 2.1 (37.0) -> 2.0 (37.0) MB, 0.9 / 0 ms [allocation failure].
[30883:0x28a71b0]       16 ms: Scavenge 2.1 (37.0) -> 2.1 (38.0) MB, 1.0 / 0 ms [allocation failure].
[30883:0x28a71b0]      109 ms: Scavenge 4.0 (41.0) -> 3.7 (41.0) MB, 1.0 / 0 ms [allocation failure].
received request
received request
received request
received request

received request
received request
received request
received request
received request
received request
^C
john@john-aspire-7741:~/Documents/fenestra/development/gids-web/node-egghead/http$ node --version
v6.2.2

server-heapdump.js

const http = require('http')
// heapdump is never called directly; requiring it registers a SIGUSR2
// handler that writes a heap snapshot to the current working directory
const heapdump = require('heapdump')

// note: mem is a local variable, not attached to this, so the big string
// becomes unreachable as soon as the constructor returns
function BigData() {
    var mem = Array(1000000).join("a")
}

const leak = []
const server = http.createServer((request, response) => {
    if (request.url === '/') {
        leak.push(new BigData())
        console.log('received request')
        response.writeHead(200, {"Content-Type" : "text/plain"})
        response.end('Hello world')
    }
})

server.listen(3000)
Will

Yup, I was able to duplicate the results you show.
The lesson is still accurate for Node versions 4.x and lower, but I'll need to update it for v6.
Thanks for pointing this out!

In reply to John
Vishwas

I see you are requiring the heapdump module, but it's not referenced anywhere in the code. Where is the heap dump file created?

We're going to start a Node application server and pass the --trace_gc flag to it, which causes it to output garbage collection activity to the console. In my other console I'm going to use a for loop to hit that Node endpoint 10 times, and when we do that, in the other console here we can see the garbage collection activity output to the console. We have two things going on here.

We have scavenge operations and we have mark-sweep operations, and that's quite a bit of activity for only 10 GET requests. Before we dig too deep into that, let's cover what each of those operations means and how memory is handled inside of Node. The memory space managed by Node is known as the heap, and it's divided into several different spaces, also referred to as generations. The ones we're concerned with are called the new space and the old space.

New space is where objects are initially created, and it's designed to be small and fast to garbage collect. Memory in new space is managed by a pointer: when a new object is created, it's written at the address indicated by the pointer, and the pointer is moved to the next free slot. When the pointer reaches the end of the space allocated for new space, a scavenge operation runs.

The scavenge operation identifies all of the objects in memory that are no longer referenced, removes them, compacts the memory, and resets the pointer for the next available memory allocation. If we examine the scavenge operations that happened in our application here, the first thing we see is the timestamp, which is the number of milliseconds that have elapsed since the Node server was started.

This is useful whenever you're looking back through these to see how frequently garbage collections are occurring. Next we see the type of garbage collection that occurred: we have a scavenge operation here, and we have a mark-sweep operation here. Then we get to our memory statistics. These tell us that there were 12 megabytes allocated to new space when the garbage collection operation started, and the total heap size was 47.6 megabytes.

After the garbage collection operation, the new heap size was still 12 megabytes, and the total heap allocation increased to 48.6 megabytes. The final piece of information we can get from this is the garbage collection duration, which took 0.5 milliseconds. One thing to note here is that a single garbage collection line isn't really good or bad on its own; it's simply a baseline. Every application has its own memory footprint, and it's only by comparing against that baseline over time that you can tell how the app is performing.
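For reference, here is how those fields map onto one of the scavenge lines from John's output earlier in the comments (the second line is my annotation of the V8 trace format):

```
[30883:0x28a71b0]   109 ms: Scavenge 4.0 (41.0) -> 3.7 (41.0) MB, 1.0 / 0 ms [allocation failure]
[pid:isolate]   elapsed   GC type  used (total heap) before -> after, GC duration  [trigger]
```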

In this case, as we look through the garbage collection cycles, we can see that the memory allocation, even after garbage collection, continues to increase with the number of GET requests. We can see the same thing happening with our heap size, as it continues to grow with every operation. As scavenge garbage collections happen, objects that survive two scavenge operations are promoted into old space.

Old space is much larger than new space, and as a result it employs a different garbage collection algorithm: the one you see here, known as mark-sweep. You could do a real deep dive into how mark-sweep garbage collection works, but the main thing to know is that the garbage collector runs through the memory and marks all of the objects that are still referenced.

This means anything that's not marked is unreachable and is subject to garbage collection, so the sweep process kicks in and removes those objects from memory, and then the process starts all over. One really important thing to note is that during this garbage collection cycle, your Node process is in a stop-the-world state. Because the garbage collector is marking everything that's reachable in memory, it can't allow anything to change; otherwise you risk removing data that is still actively in use.

So it actually stops the Node process while this occurs. If you have a lot of garbage collection going on, this can really start to impact performance: you end up with an application that's garbage collecting a lot, which is not very performant, and on top of that, you're stopping the process to do that garbage collection, which creates a cascading negative effect on the performance of your server.

One other difference you can see in the console between the mark-sweep and scavenge operations is that the mark-sweep was done in three steps since the start of marking. Because the collector is actively marking everything it knows is still in use, it doesn't necessarily have to sweep it all at the same time, so it breaks the work up into cycles to minimize the garbage collection impact on your application. So now we know how garbage collection works, but that leaves the question: what do we do about it?

The answer lies in heap dumps. A heap dump is a snapshot of the memory allocation at the time of the dump, and much like garbage collection stats, a single data point may not reveal much; they work better when you compare multiple samples over time. So in my console window I'm going to find the process ID for my Node application. I get two results returned: one is the grep command that I used to find the process, and the other is my Node server itself.

So I'm going to issue a kill -USR2 with the process ID. If I do a directory listing, you can see that created a heap snapshot that we'll view in just a minute. Now I'm going to run a significant load against this server so I get a really nice comparison between the two snapshots. You can see, just as it starts to go here, garbage collection in the left-hand console is starting to go through the roof.

One other thing to note here is that as the output scrolls past in the right window, you're starting to see empty replies from the server. That's a great leading indicator from your Node application that it's starting to fall over. My load test has completed, so I'm going to take one more heap snapshot. I've switched to the Chrome Developer Tools and, using the Profiles tab, I'm going to load the two snapshots I created.

I can select the second one and switch from the summary view to the comparison view; if you happen to have multiple snapshots listed over here, you can choose which one to compare against using this drop-down menu. If we sort by the number of new objects, you can see that there were almost 10,000 new objects created by the BigData constructor, and none of them were deleted.

So right away that's a pretty big indicator that this might be the source of our leak. If we switch over to our code view, you can see that with every GET request we're pushing a new instance of this BigData function onto an array, and at the same time we're never actually using it anywhere. So let's see how our performance changes if we stop doing that.

I'm going to restart the Node application, find my process ID again, take an initial snapshot, and now repeat my load test and already you can see a big difference.

We have garbage collection happening in the left-hand console, but at a much lower frequency. The load test has completed, so let's take one more snapshot. I loaded the new snapshots up in the Chrome Developer Tools, and when we compare the two snapshots from the beginning and the end of the load test, you can see that some new objects were created, but they were also being destroyed as they were no longer used, and there doesn't appear to be a clear runaway memory object like we had initially.

One last thing I want to point out is that taking these snapshots requires the heapdump npm module, so you'll need to make sure it is included in any apps you will be profiling. That leads me to the key takeaways from this module. First, memory management is a key component of scaling. Without it, your only option is to throw hardware at the problem until someone accidentally finds the leak.

Second, it takes planning. As you saw, a single garbage collection statement or a single heap dump doesn't necessarily show you where the problem is. It wasn't until we started comparing things over time that we got a clear indicator of what was going on. Finally, baselines are extremely helpful. Knowing how your app performs under a known load provides a great reference and can help you accurately forecast how your app will perform under future loads.
