Avoid JavaScript optimization killers and ensure that Chrome and Node.js execute your code as efficiently as possible. This video demonstrates several limitations of the V8 JavaScript engine's optimizing compiler and inliner, information that is not only interesting but can also prove very useful.
[00:00] I'd like to start this video with a couple of disclaimers. First, the material I'm about to cover applies to the V8 JavaScript engine only. Second, while this material is interesting and can be very useful, it must be balanced against other factors, such as readability and maintainability.
[00:15] That being said, the V8 JavaScript engine used by Chrome and Node.js is fast. Two big reasons it's as fast as it is are its optimizing compiler and its ability to inline function bodies.
[00:27] All JavaScript executed by V8 is compiled using its generic compiler. Only after a particular function has run a number of times will it get flagged for compilation using the optimizing compiler.
[00:37] The optimizing compiler compiles entire functions at once. If anything in the function cannot be optimized, then the entire function cannot be optimized either.
[00:49] Here's a somewhat contrived example to highlight the differences between optimized and unoptimized code. I have two functions that calculate Fibonacci numbers. The first, fibonacciNever, has been flagged to never be optimized.
[01:02] The second, fibonacciPre, has been flagged for optimization on its next call. I'll then calculate the same Fibonacci number using both functions and measure how much time each takes.
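A minimal sketch of that comparison, assuming the fibonacciNever and fibonacciPre names from the narration; the %-prefixed calls are V8 natives and require the --allow-natives-syntax flag:

    // fib.js -- run with: node --allow-natives-syntax fib.js
    function fibonacciNever(n) {
      return n < 2 ? n : fibonacciNever(n - 1) + fibonacciNever(n - 2);
    }

    function fibonacciPre(n) {
      return n < 2 ? n : fibonacciPre(n - 1) + fibonacciPre(n - 2);
    }

    // flag one function to never be optimized
    %NeverOptimizeFunction(fibonacciNever);

    // warm the other up, then flag it for optimization on its next call
    fibonacciPre(10);
    %OptimizeFunctionOnNextCall(fibonacciPre);

    console.time('fibonacciNever');
    fibonacciNever(30);
    console.timeEnd('fibonacciNever');

    console.time('fibonacciPre');
    fibonacciPre(30);
    console.timeEnd('fibonacciPre');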
[01:18] When run, it's pretty clear that the optimized function runs significantly faster. If I remove the calls to %NeverOptimizeFunction and %OptimizeFunctionOnNextCall and run the script again, both functions are optimized by the time it's completed.
[01:31] That's great to know, but what can prevent a function from being optimized? Surprisingly enough, there are a number of things, some even considered idiomatic JavaScript, that can prevent a function from being optimized. Let's take a look at some of them.
[01:48] The first four have to do with the arguments object. The argPass, argReturn, and argScope functions all leak the arguments object to another scope, preventing V8 from properly optimizing them. Leaking most often occurs when passing the arguments object to another function to convert it to an array for later use.
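The bodies below are my own sketches of the three leaks described here; only the function names come from the narration:

    function argPass() {
      // passing arguments to another function leaks it
      return Array.prototype.slice.call(arguments);
    }

    function argReturn() {
      // returning arguments leaks it to the caller's scope
      return arguments;
    }

    function argScope() {
      // capturing arguments in a closure leaks it to another scope
      var args = arguments;
      return function () {
        return args.length;
      };
    }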
[02:10] The fourth function, argReassign, mentions the arguments object and then conditionally reassigns one of its arguments. This function can't be optimized because of the combination of the arguments object being used and an argument being reassigned.
[02:25] If this function were to do its checks directly on options or callback, instead of using arguments.length, the function could be optimized. Clearly, care should be taken when using the arguments object at all.
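A sketch of that pattern and its fix, assuming argReassign has the usual optional-options signature:

    // unoptimizable: mentions arguments *and* reassigns a parameter
    function argReassign(options, callback) {
      if (arguments.length < 2) {
        callback = options;
        options = {};
      }
      callback(options);
    }

    // optimizable: checks the parameters directly instead of arguments.length
    function argReassignFixed(options, callback) {
      if (callback === undefined) {
        callback = options;
        options = {};
      }
      callback(options);
    }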
[02:37] Perhaps most surprisingly, the use of try-catch and try-finally will prevent a function from being optimized. Even including them after the return statement, where they can never execute, makes the containing function unoptimizable.
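For example, a sketch of an otherwise simple function that stays unoptimized solely because of the dead try/catch after its return:

    function sum(values) {
      var total = 0;
      for (var i = 0; i < values.length; i++) {
        total += values[i];
      }
      return total;

      // never executed, but still makes sum() unoptimizable
      try {
        total = 0;
      } catch (e) {}
    }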
[02:51] V8 also refuses to optimize a function containing a with statement. That should be easy to avoid since the use of with has always been discouraged. Including a debugger statement will prevent optimization of the entire containing function. Make sure you remove them before deploying your code to a production environment, which is a best practice anyway.
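Two small sketches of those statements in place:

    function usesWith(obj) {
      with (obj) {   // with blocks optimization of the whole function
        return value;
      }
    }

    function usesDebugger(n) {
      debugger;      // a leftover debugger statement does the same
      return n * 2;
    }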
[03:12] Including an object in your function that uses __proto__, or get or set accessors, prevents optimization. Seeing this behavior with __proto__ isn't surprising, since it was never a standard feature. The get and set accessors, though, are part of the ECMAScript 5 specification, so I would hope to see them become optimizable in the future.
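A sketch of an object literal that triggers all three cases inside a function:

    function makeThing(proto) {
      return {
        __proto__: proto,                          // never a standard feature
        get label() { return this._label; },       // ES5 accessors, still
        set label(value) { this._label = value; }  // unoptimizable here
      };
    }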
[03:35] For...in statements are also a bit touchy. First, if the key from a for...in is not a local variable, the function won't optimize. In my first example, the key is leaked out of the function scope. In my second, it comes from a higher scope.
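Sketches of the two cases, modeled on the bodies described in the narration:

    function keyLeaksOut(obj) {
      for (var key in obj) { /* ... */ }
      // key escapes the function scope via the returned closure
      return function () {
        return key;
      };
    }

    var outerKey;
    function keyFromHigherScope(obj) {
      // the key lives in a higher scope rather than being a local variable
      for (outerKey in obj) { /* ... */ }
    }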
[03:50] Second, if the for...in is operating over an object that is in hash table mode instead of being a simple enumerable, the compiler won't optimize the function. Using the delete statement to remove a property from the object is one way to drop the object into hash table mode.
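For example, a minimal sketch of delete forcing an object into hash table (dictionary) mode before the loop:

    function sumDictionary() {
      var obj = { a: 1, b: 2, c: 3 };
      delete obj.a;                 // obj drops into hash table mode
      var total = 0;
      for (var key in obj) {
        total += obj[key];
      }
      return total;
    }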
[04:07] The third way for...in can cause a function to become unoptimizable is by using it with an array. Many of the new ES6 or ES2015 language features are also currently unoptimizable. Of the features I tested, generator functions, functions containing for...of, class definitions, and tagged template strings all failed to be optimized by the optimizing compiler.
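Sketches of the array case and of the ES2015 constructs I tested:

    function sumWithForIn(arr) {
      var total = 0;
      for (var i in arr) {          // for...in over an array
        total += arr[i];
      }
      return total;
    }

    function* counter() { yield 1; }              // generator function

    function logAll(items) {
      for (const item of items) {                 // for...of
        console.log(item);
      }
    }

    class Thing {}                                // class definition

    function tag(strings, value) {
      return strings[0] + value;
    }
    function greet(name) {
      return tag`hello ${name}`;                  // tagged template string
    }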
[04:31] It's safe to assume that many of the ES2015 inefficiencies will be corrected over time as features become used more often. Let's take a quick look at how I tested these functions.
[04:46] This file exports the list of test functions as an array, which is required by my testOpt.js file. It loops through the functions, executing them a couple of times so that V8 learns how they are used. Then, it tells V8 to optimize the function the next time it's called.
[05:05] I call the function one last time, then print its name and its optimization status. The app.js file, where the optimize-on-next-call and get-status helpers live, uses a number of native V8 calls, all of which are prefixed by a percent symbol.
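A minimal sketch of that harness; the helper names and status handling are assumptions based on the narration, and everything is run with node --allow-natives-syntax:

    // app.js -- thin wrappers around the %-prefixed V8 natives
    exports.optimizeOnNextCall = function (fn) {
      %OptimizeFunctionOnNextCall(fn);
    };

    exports.getStatus = function (fn) {
      // returns a numeric status; the meaning of the codes varies by V8 version
      return %GetOptimizationStatus(fn);
    };

    // testOpt.js
    var v8 = require('./app');
    var tests = require('./tests'); // the array of test functions

    tests.forEach(function (fn) {
      fn(); fn();                    // run a couple of times so V8 learns how fn is used
      v8.optimizeOnNextCall(fn);     // ask for optimization on the next call
      fn();
      console.log(fn.name, v8.getStatus(fn));
    });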
[05:22] When testOpt is run, it will print the function's name and its optimization status to the console. Let's take a look. Surprisingly, none of the functions were optimized.
[05:36] Another problem with unoptimizable functions is that they cannot be inlined into another block of code by V8. Inlining is the practice of taking the body of a function and placing it where it is used, removing the need to actually call the function at all. The impact of any single call is small, especially on contrived examples like this.
[05:55] To see the impact inlining makes here, I need to call a simple function millions of times. To compare inlined versus non-inlined code, I'll use two optimizable functions that calculate the square of a given number.
[06:08] One of the functions, squareBig, has a large comment that increases the size of the function body to just past the size threshold V8 checks for inlinability. Yup, you heard me correctly.
[06:19] V8 won't inline a function if it is too big. That threshold seems to be 594 characters of source length: if the function body is 595 characters, the function won't be inlined.
[06:32] I'll measure the time it takes to execute each function 10 million times and print the result. When run with inline tracing enabled via the --trace-inlining flag, you can see that the square function was inlined and the squareBig function was not.
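A sketch of that comparison, assuming the square and squareBig names from the narration; the large comment is elided here, and the script is run with node --trace-inlining:

    function square(n) {
      return n * n;
    }

    function squareBig(n) {
      // (in the real test, a large comment here pushes the source length
      // just past the ~594-character inlining threshold)
      return n * n;
    }

    function runSquare() {
      var total = 0;
      for (var i = 0; i < 1e7; i++) { total += square(i); }
      return total;
    }

    function runSquareBig() {
      var total = 0;
      for (var i = 0; i < 1e7; i++) { total += squareBig(i); }
      return total;
    }

    console.time('square (inlined)');
    runSquare();
    console.timeEnd('square (inlined)');

    console.time('squareBig (not inlined)');
    runSquareBig();
    console.timeEnd('squareBig (not inlined)');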
[06:51] We also see that the inlined version runs several times faster than the non-inlined version. Not only does it run faster, it uses a lot less memory. To demonstrate, I'll comment out the run of square and print out the heap usage.
[07:08] Just running the test for squareBig uses about seven megs of memory. If I make the squareBig function just one character smaller to bring it under the inlining threshold and run it again, it now uses about five-and-a-half megs of memory.
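One way to print that heap figure, as an assumption about how the measurement was taken:

    // report the current V8 heap usage in megabytes
    var used = process.memoryUsage().heapUsed;
    console.log((used / 1024 / 1024).toFixed(1) + ' MB heap used');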
[07:23] Keep in mind, these kinds of micro benchmarks are a poor indicator of how your app will actually perform during real-world usage. If you have performance concerns, benchmark your app and look for ways to optimize your business logic or your algorithms before looking to micro-optimizations.