The optimizing compiler compiles entire functions at once. If anything in a function can't be optimized, the entire function is left unoptimized as well.
Here's a somewhat contrived example to highlight the difference between optimized and unoptimized code. I have two functions that calculate Fibonacci numbers. The first has been flagged to never be optimized; the second has been flagged for optimization on its next call. I'll then calculate the same Fibonacci number using both functions and measure how much time each takes.
When run, it's pretty clear that the optimized function runs significantly faster. If I remove the calls to %NeverOptimizeFunction and %OptimizeFunctionOnNextCall and run the script again, both functions are optimized by the time it completes.
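The script itself isn't shown here, but a minimal sketch might look like the following. The function names and the Fibonacci implementation are my assumptions; the V8 natives require running node with the --allow-natives-syntax flag, so they're guarded here to fail quietly without it.

```javascript
function tryNative(src, fn) {
  // Natives like %OptimizeFunctionOnNextCall only parse when node is
  // started with --allow-natives-syntax; otherwise swallow the error.
  try { new Function('fn', src)(fn); } catch (e) {}
}

function fibNever(n) { return n < 2 ? n : fibNever(n - 1) + fibNever(n - 2); }
function fibPre(n)   { return n < 2 ? n : fibPre(n - 1) + fibPre(n - 2); }

tryNative('%NeverOptimizeFunction(fn)', fibNever);
fibPre(10);                                    // warm up: let V8 collect type feedback
tryNative('%OptimizeFunctionOnNextCall(fn)', fibPre);

console.time('fibNever'); fibNever(25); console.timeEnd('fibNever');
console.time('fibPre');   fibPre(25);   console.timeEnd('fibPre');
```

With the two tryNative calls removed, both functions eventually get optimized on their own once they've run enough times.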
The first four have to do with the arguments object. The argPass, argReturn, and argScope functions all leak the arguments object to another scope, preventing V8 from optimizing them. Leaking most often occurs when passing the arguments object to another function to convert it to an array for later use.
The fourth function, argReassign, mentions the arguments object and then conditionally reassigns an argument. This function can't be optimized because of that combination: the arguments object is used and an argument is reassigned.
If this function did its checks directly on options or callback instead of using arguments.length, it could be optimized. Clearly, care should be taken whenever the arguments object is used at all.
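The bodies of those four functions aren't visible here, so these are sketches of the patterns just described, with the function names taken from the transcript and the bodies assumed:

```javascript
function toArray(args) { return Array.prototype.slice.call(args); }

function argPass() { return toArray(arguments); }   // leaks: passed to another function
function argReturn() { return arguments; }          // leaks: returned to the caller
function argScope() {
  const args = arguments;
  return function () { return args.length; };       // leaks: captured by a closure
}
function argReassign(options, callback) {
  if (arguments.length === 1) {                     // mentions arguments...
    callback = options;                             // ...and reassigns an argument
    options = {};
  }
  return { options, callback };
}

// The safe alternative: check the parameters directly.
function safeReassign(options, callback) {
  if (callback === undefined) { callback = options; options = {}; }
  return { options, callback };
}
```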
Perhaps most surprisingly, the use of try-catch and try-finally will prevent a function from being optimized. Even placing them after the return statement, where they can never execute, makes the function unoptimizable.
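As a quick sketch of that behavior, and of the usual workaround, consider the following. The helper-function pattern isn't from the recording, but it's the common way to keep a hot function free of try-catch:

```javascript
function unreachableTry(n) {
  return n * 2;
  try { JSON.parse('{'); } catch (e) {}   // never runs, yet still blocks optimization
}

// Workaround: isolate the try-catch in a small helper so the
// hot function itself stays optimizable.
function tryParse(json) {
  try { return JSON.parse(json); } catch (e) { return null; }
}
function hotPath(json) {
  const parsed = tryParse(json);          // no try-catch in the hot function
  return parsed ? parsed.value : 0;
}
```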
V8 also refuses to optimize a function containing a with statement. That should be easy to avoid, since the use of with has always been discouraged. Including a debugger statement will likewise prevent optimization of the entire containing function. Make sure you remove debugger statements before deploying your code to a production environment, which is a best practice anyway.
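Both cases fit in a few lines. One note on the sketch: with is a parse error in strict mode, so it's built dynamically here; in a plain sloppy-mode script you could write it directly.

```javascript
// A function body created via new Function is sloppy mode, so `with` parses.
const usesWith = new Function('obj', 'with (obj) { return x + 1; }');

function usesDebugger(n) {
  debugger;        // a no-op unless a debugger is attached, but its mere
  return n + 1;    // presence keeps V8 from optimizing the whole function
}
```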
Including an object literal in your function that uses __proto__, get, or set prevents optimization. This behavior isn't surprising for __proto__, since it was never a standard feature, but the get and set accessors are part of the ECMAScript 5 specification, so I would hope to see them become optimizable in the future.
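Sketches of both kinds of literal, assuming minimal bodies:

```javascript
function literalProto() {
  return { __proto__: null, a: 1 };               // __proto__ in an object literal
}

function literalAccessors() {
  return { get a() { return 1; }, set a(v) {} };  // ES5 get/set accessors
}
```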
For...in statements are also a bit touchy. First, if the key of a for...in is not a local variable, the function won't optimize. In my first example, the key leaks out of the function scope; in my second, it comes from a higher scope.
Second, if the for...in is iterating over an object that is in hash table mode instead of being a simple enumerable object, the compiler won't optimize the function. Using the delete operator to remove a property from the object is one way to drop the object into hash table mode.
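The examples from the recording aren't shown here, but both for...in problems can be sketched like this (variable and function names are my own):

```javascript
function countKeys(obj) {
  let count = 0;
  for (const key in obj) count++;   // key is a local variable: this part is fine
  return count;
}

let leakedKey;
function leakKey(obj) {
  for (leakedKey in obj) {}         // key is not local: unoptimizable
  return leakedKey;
}

const fast = { a: 1, b: 2, c: 3 };  // a simple enumerable object
const slow = { a: 1, b: 2, c: 3 };
delete slow.b;                      // delete drops slow into hash table mode

console.log(countKeys(fast), countKeys(slow));  // 3 2
```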
The third way for...in can make a function unoptimizable is by using it with an array. Many of the new ES6 (ES2015) language features are also currently unoptimizable. Of the features I tested, generator functions, functions containing for...of, class definitions, and tagged template strings all failed to be optimized by the optimizing compiler.
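For reference, one minimal instance of each tested feature; these are stand-ins for the transcript's test functions, which aren't shown:

```javascript
function* gen() { yield 1; yield 2; }              // generator function

function sumOf(arr) {                              // contains for...of
  let sum = 0;
  for (const v of arr) sum += v;
  return sum;
}

class Point {                                      // class definition
  constructor(x) { this.x = x; }
}

function tag(strings, value) { return strings[0] + value; }
const tagged = tag`answer: ${42}`;                 // tagged template string

console.log(sumOf([...gen()]), new Point(5).x, tagged);  // 3 5 answer: 42
```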
It's safe to assume that many of the ES2015 inefficiencies will be corrected over time as features become used more often. Let's take a quick look at how I tested these functions.
This file exports the list of test functions as an array, which is required by my testOpt.js file. testOpt loops through the functions, executing each a couple of times so that V8 learns how they are used. Then it tells V8 to optimize the function the next time it's called.
I call the function one last time, then print its name and its optimization status. The app.js file, where the optimizeOnNextCall and getStatus functions live, uses a number of native V8 calls, all of which are prefixed with a percent sign.
When testOpt is run, it prints each function's name and optimization status to the console. Let's take a look. Sure enough, none of the functions were optimized.
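A condensed sketch of that harness, folding the app.js helpers into one file: the function list here is a two-entry stand-in, the natives are guarded so the script still runs without --allow-natives-syntax, and the status-code names are the Crankshaft-era values from V8's runtime sources, which may differ in other V8 versions.

```javascript
const functions = [
  function argReturn() { return arguments; },
  function unreachableTry(n) { return n; try {} catch (e) {} },
];

function native(src, fn) {
  try { return new Function('fn', 'return ' + src)(fn); }
  catch (e) { return undefined; }   // natives need --allow-natives-syntax
}

function statusName(code) {
  // Crankshaft-era %GetOptimizationStatus codes.
  return { 1: 'optimized', 2: 'not optimized', 3: 'always optimized',
           4: 'never optimized', 6: 'maybe deoptimized' }[code] || 'unknown';
}

for (const fn of functions) {
  fn(1); fn(2);                                  // let V8 learn how it's used
  native('%OptimizeFunctionOnNextCall(fn)', fn); // flag it for optimization
  fn(3);                                         // the call that would trigger it
  const status = native('%GetOptimizationStatus(fn)', fn);
  console.log(fn.name, '-',
    status === undefined ? 'natives unavailable' : statusName(status));
}
```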
Another problem with unoptimizable functions is that they can't be inlined into another block of code by V8. Inlining is the practice of taking the body of a function and placing it where the function is called, removing the need to actually make the call at all. The overall impact is small, especially on contrived examples like this.
To see the impact inlining makes here, I need to call a simple function millions of times. To compare inlined versus non-inlined code, I'll use two optimizable functions that calculate the square of a given number.
One of the functions, squareBig, has a large comment that pushes the size of the function body just past the threshold V8 checks for inlinability. Yup, you heard me correctly.
V8 won't inline a function if it is too big. That threshold seems to be 594 characters: if the function body is 595 characters, the function won't be inlined.
I'll measure the time it takes to execute each function 10 million times and print the result. When run with inline tracing enabled via the --trace-inlining flag, you can see that the square function was inlined and the squareBig function was not.
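A sketch of that benchmark, with squareBig's padding comment abbreviated; imagine it pushing the body past the inlining threshold:

```javascript
function square(n) { return n * n; }

function squareBig(n) {
  // ...imagine roughly 600 characters of comment here, pushing the
  // function body past the inlining size threshold...
  return n * n;
}

function run(fn, label) {
  let sum = 0;
  console.time(label);
  for (let i = 0; i < 1e7; i++) sum += fn(i % 1000);
  console.timeEnd(label);
  return sum;
}

run(square, 'square');        // traced as inlined under --trace-inlining
run(squareBig, 'squareBig');  // too big to inline
```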
We also see that the inlined version runs several times faster than the non-inlined version. Not only does it run faster, it uses a lot less memory. To demonstrate, I'll comment out the run of square and print out the heap usage.
Just running the test for squareBig uses about seven megs of memory. If I make the squareBig function just one character smaller, bringing it under the inlining threshold, and run it again, it now uses about five and a half megs of memory.
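The heap measurement itself can be sketched with node's process.memoryUsage(), whose heapUsed field reports the bytes currently allocated on the V8 heap; exact numbers will vary by node version and machine:

```javascript
function squareBig(n) {
  // ...imagine the large padding comment from before...
  return n * n;
}

let sum = 0;
for (let i = 0; i < 1e7; i++) sum += squareBig(i % 1000);

const megs = process.memoryUsage().heapUsed / 1024 / 1024;
console.log(`heap used: ${megs.toFixed(1)} megs`);
```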
Keep in mind, these kinds of micro-benchmarks are a poor indicator of how your app will actually perform under real-world usage. If you have performance concerns, benchmark your app and look for ways to optimize your business logic or your algorithms before reaching for micro-optimizations.