This talk is designed to cover some of the most common buzzwords that you might be asked about in your next interview!
[1:14] My hope is that I can prepare you for some of the common buzzword questions that get thrown at every one of us whenever we interview, even though they're not 100 percent necessary for day-to-day development work, where you can Google things on Stack Overflow. The one thing I want you to take away is some quick answers.
[1:31] I want you to be prepared, proactive, and to believe in yourself from here on out, whenever you go into interviews. To reiterate, this isn't going to be a very in-depth conversation on some of these buzzword topics. I highly suggest that you look into these concepts yourselves afterwards.
[1:49] Your best teacher is going to be yourself. I even suggest teaching these concepts to somebody around you, so that you can solidify what you're learning. Let's get into it.
[2:02] The first thing I want to talk about is a closure. This question gets asked so often. In my career of interviewing and working as a developer, I get asked what a closure is so many times. I like to break it down to three bullet points. This clearly defines what a closure is. It's an inner function that's enclosed by another function.
[2:26] If we think about the mechanics of what that looks like, I have the code example here: it's a function that's been enclosed by another function. This is used as a data privacy technique for objects, especially in object-oriented languages like C# and Java. It also acts as state for a function. This is what the code looks like. At the core, a closure is an inner function within another function.
[2:52] Here, we have a parent function that's returning another function, and that returned function is the closure. You'll notice that closure functions have access to at least three scopes: the global scope, which is outside the parent function; the parent scope, where it's accessing the a and b params; and its own variables inside its own scope.
[3:49] A quick recap: a closure is an inner function that has access to at least three scopes. Because its scope is nested underneath the parent's, it's hiding values within its own scope that can't be accessed anywhere else, as long as you're not returning them from the inner closure function.
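To make that concrete, here's a minimal sketch of the closure pattern. The `makeCounter` name and the counting example are my own, not from the talk:

```javascript
// Parent function: its `count` variable is hidden from the outside world.
function makeCounter() {
  let count = 0; // private state, only reachable through the closure

  // Inner function (the closure). It has access to its own scope,
  // the parent scope (`count`), and the global scope.
  return function increment() {
    count += 1;
    return count;
  };
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2 -- `count` persisted between calls

// `count` itself is not reachable out here; only the closure can touch it.
```

Each call to `makeCounter` creates a fresh, private `count`, which is exactly the data privacy technique described above.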
[4:10] Number two, what's a curried function? As you saw in that Tweet that I mentioned in the beginning of this talk, currying is a very common question that I've been asked whenever I've been interviewed for job applications.
[4:24] At its core, a curried function is a function that's returning another function. More specifically, a curried function is one that takes multiple arguments one at a time. Looking at this code example here, const func = (a, b) => a + b. It's returning a + b. That's a normal function.
[4:43] You invoke this with one and two, and you're going to get a return value of three. A curried function is going to be functions that are returning themselves, taking one argument at a time. You invoke that function, as you see at the bottom there, by enclosing your value in parens as your argument.
[5:00] Then, right next to it, doing an open-close paren with the next set of arguments. What you're doing here is invoking the functions one at a time, passing the arguments to each function that's returned. The end result here will be three, as well.
[5:15] Two main things to take away from currying: currying is a function that returns a function, and it's the art of passing arguments, one at a time, to functions that are returning themselves. As you learned with closures, those curried functions that are being returned are going to have access to all of their parents' arguments.
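A sketch of the two versions side by side (the `add`/`curriedAdd` names are mine):

```javascript
// A normal two-argument function.
const add = (a, b) => a + b;

// The curried version: each function takes one argument and returns
// the next function, until all the arguments have been collected.
const curriedAdd = (a) => (b) => a + b;

console.log(add(1, 2));        // 3
console.log(curriedAdd(1)(2)); // 3 -- same result, one argument at a time
```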
[5:34] a and b are going to be passed down. This last function will have access to all of those arguments, so you can do your addition there. Taking what we've learned about closures and currying, we're going to apply that to our third talking point today: what are partially applied functions?
[5:52] This is a very common question as well. It combines multiple fundamental principles into one. You'll see partially applied functions happen when only some of a function's arguments have been applied. Taking a look at a code example of what this means, check out this curried function here.
[6:13] We've got a function of a returning a function of b. Then at the very end, we're returning a plus b. A partially applied function would be just invoking this first function with an argument of one. The result that's been assigned to this variable is now a partially applied function, because it is a function, and it has a fixed value of one.
[6:40] We can invoke partiallyApplied anywhere else in the context, and it will always have that value of one. We can later on call the partiallyApplied function, passing in two. You'll notice that we still get three here. It's no longer partially applied, and the function is complete.
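Sketched out, using the same curried add shape (variable names are my own):

```javascript
// Curried add, as before.
const curriedAdd = (a) => (b) => a + b;

// Applying only the first argument gives a partially applied function:
// `a` is now fixed at 1 inside the closure.
const partiallyApplied = curriedAdd(1);

// Wherever we call it later, that fixed value of 1 is still there.
console.log(partiallyApplied(2));  // 3
console.log(partiallyApplied(10)); // 11
```

This is the closure at work: the returned function remembers `a` long after `curriedAdd` has finished running.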
[6:59] Hopefully, those didn't go way over your head, because closures, curried, and partially applied functions are actually very fundamental. You probably use them every day in your code already. There's a lot to love with these fundamental principles. For closures, as we talked about, you're going to have access to three scopes. You can hide away data in enclosures.
[7:19] When you curry functions, you're probably again doing this in your code every day, but it makes your code more modular. Imagine a function that takes six different params and does everything in one large body, versus smaller functions that can each be executed with one param and do one thing with it.
[7:39] You can move them around and, again, make your code more composable. Partially applied functions build off the same principles as closures and curried functions. You can use them to fix one value to a function, so you don't need to thread that value through other functions. You can just pass that one function, with its applied logic, around inside of your code.
[8:01] Now let's talk about a pure function. These are simple. Pure functions, when given the same input, you're going to get the same output every time. Looking at this basic example, every time I put in a two, I'm always going to get a seven every single time I call that function. This is a pure function.
[8:21] What are not pure functions are things that do something at random, or have side effects that rely on things like network calls or Date calls. In this case, Math.random is going to give me a different value every time I call that function. This is not a pure function.
[8:39] As I said, pure functions produce no side effects. Even though I'm going to get the same value in return every time I call add five, notice that I'm doing something inside of my add five function. I'm mutating the state object. Even though I return the same value, pure functions, they don't leave their scope. They don't go and mutate other objects or data properties around them.
[9:08] What's nice about pure functions is that they're deterministic. They're testable, and they're easy to write tests for. They're not going to make flaky, easily breakable tests, because they're going to be straightforward. You can think of them as Legos, building blocks. You should favor pure functions because, again, you can test them, and you can rely on their behavior.
[9:28] They work independent of outside state. One last time, because they are self enclosed, they're great to unit test, they're deterministic, and they don't rely on anything that's happening inside of your code.
[9:44] Let's talk about recursion. Recursion is a function that directly or indirectly calls itself. Looking at this example, we've got a function counter that's going to take an argument of n. Based on that n, we're going to do a for loop that's going to console.log its values.
[10:02] Here, we have let i = n. While i is less than or equal to 10, we're going to i++. This console.log is going to go from n through 10. Pretty self-explanatory.
[10:13] How could we do this with a function that calls itself? How could we create this iteration with a function that's calling itself? Looking at this example, what if we do return counter(n + 1)? Now here, we're going to call our function with 0. We're going to go from 0 to infinity, and we're going to have a stack overflow, because the function is constantly calling itself on each loop.
[10:40] This is going to overflow, and it's going to throw an error. One thing about recursion is that it's a function that calls itself, and you can do things with the value that you're getting back, but you always have to remember an escape clause. Recursion works on a call stack: last-in, first-out. There always needs to be a base case, or it will throw an error or lock up with a stack overflow.
[11:09] This is how we would do a recursive function that counts from 0 to 10. Counter again is taking in an argument. We're going to console.log that before we do an if check, which is our escape clause. Here, we're saying if n === 10, then we're going to return undefined. This is what's going to break the recursion calling itself. Otherwise, we're going to return counter(n + 1).
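A runnable sketch of that recursive counter:

```javascript
// Recursive counter with an escape clause.
function counter(n) {
  console.log(n);
  if (n === 10) {
    return; // base case: stop the recursion here
  }
  return counter(n + 1); // recursive step
}

counter(0); // logs 0 through 10, then stops
```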
[11:37] We are going to go from 0 to 10 because as soon as we get to 10, we're going to return on that line of the if. That's recursion in a nutshell. It's a complicated topic because it's the idea of a function calling itself. We're going to recap more about this later on. Next up, let's talk about big O notation.
[11:57] The quick answer to what big O notation is: it's the speed at which an algorithm grows. It's not going to tell you how fast, in seconds or in time, your algorithm will run to completion. It tells you how fast the algorithm will grow in the worst-case scenario. Let's look at a couple of examples and figure out the notation of a function.
[12:20] Here we have a function called notation. It's taking an argument of n, and it's returning n[2], the item at the third location. We're going to pass notation an array: [0, 1, 2, 3]. If you look at this, this is going to be a notation of O(1), because it goes right to that index and pulls it up. It's not doing any kind of looping or recursion, nothing like that.
[12:44] It's just going right to a specific location and pulling it up. This is going to be a constant time. It's always going to be O(1). Regardless of the size of the array, because it's not a linked list, because we're not stepping through, and it's just going to one location, this is O(1). It's constant.
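A sketch of that constant-time lookup (the `notation` name follows the talk; the large-array check is my own addition):

```javascript
// O(1): jump straight to one location, no looping involved.
const notation = (n) => n[2];

console.log(notation([0, 1, 2, 3])); // 2

// Still a single step even on a much larger array.
const big = Array.from({ length: 1_000_000 }, (_, i) => i);
console.log(notation(big)); // 2
```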
[13:02] Looking at this example, we do have a loop inside of this. Notation is now getting an n, and it's got a for loop, looping over everything inside of n and then console.logging it. This is going to be linear time. However large your array is, that's the number of steps it will take.
[13:24] The notation will look like this, O(n), basically saying that our notation depends on n. It will grow linearly. If you have 1, 5, 10, 20, it's going to count one at a time until it gets there.
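Sketched out, that linear-time version looks like this:

```javascript
// O(n): one step per element, so the work grows linearly with input size.
function notation(n) {
  for (const item of n) {
    console.log(item); // this line runs once per element
  }
}

notation([0, 1, 2, 3]); // four steps for four items
```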
[13:39] Let's talk about a more complicated example. Binary search is the idea of searching through a sorted list to find an item. Take a sorted array and look at the middle item. If the item you're looking for is smaller, you ignore the second half, the sorted, greater values, and only look at that first, smaller half. You grab all the items in there and continue to halve the list each time, until you get down to the item that you're looking for.
[14:11] If it's larger, you do the same thing, but in that larger list. Each time, you're going to the middle of your list that's been sorted again. Then, you work down or up, until you find the item you're looking for.
[14:27] This is going to be logarithmic time, or log time. It's the idea of jumping to the middle of a list and repeating until you find what you're looking for. The big O notation for this is O(log n). This is faster than working in linear time. If you are given an array that's not sorted, you're going to have to go through the entire list one item at a time looking for your item. That's going to be O(n).
[14:51] The worst-case scenario is that it's the last item in your array of, let's say, 1,000 items. That's going to take 1,000 steps to get there, versus using logarithmic time, or binary search, where you could easily get there a lot faster. You can cut that list of 1,000 in half, and now you're only looking at 500. You cut the 500 in half, and so on, until you get to the very end.
[15:13] The worst-case scenario using log time, it's going to take you a max of 10 times to find that last item in the array of 1,000 items versus worst case, it being 1,000 with the linear time.
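A sketch of binary search, with a step counter of my own added so you can see the growth rate the talk describes:

```javascript
// Binary search over a sorted array: halve the search space each time.
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  let steps = 0;

  while (low <= high) {
    steps += 1;
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return { index: mid, steps };
    if (sorted[mid] < target) low = mid + 1;  // ignore the smaller half
    else high = mid - 1;                      // ignore the larger half
  }
  return { index: -1, steps }; // not found
}

const thousand = Array.from({ length: 1000 }, (_, i) => i + 1);
console.log(binarySearch(thousand, 1000).steps); // 10 steps, not 1,000
```

Finding the last of 1,000 items takes 10 halvings, which matches the worst-case max of about log2(1000) ≈ 10 mentioned above.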
[15:30] Let's talk about what happens when you have loops inside of loops and how that affects your big O notation. Looking at this function, you can see that we've got a for loop, and it itself has a for loop. What you do is multiply by each loop that you're nesting: you can think of it as O(n) * O(n). With each outer loop, you're doing another series of looping, and you start over again.
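That nested-loop pattern can be sketched like this, with a step counter (the `pairs` name and counter are my own) to make the n * n growth visible:

```javascript
// Nested loops: for every one of n outer steps, we do n inner steps,
// so the total work is n * n = O(n^2).
function pairs(n) {
  let steps = 0;
  for (let i = 0; i < n.length; i++) {
    for (let j = 0; j < n.length; j++) {
      steps += 1; // this line runs n * n times
    }
  }
  return steps;
}

console.log(pairs([1, 2, 3, 4])); // 16, i.e. 4 squared
```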
[15:57] That gives you O(n^2). With every additional loop that's nested within, you're going to n to the power of however many loops you have. There are lots of different big O complexities out there for each function, depending on the loops you have, the recursion you do, and what kind of data structures you're working with.
[16:19] You're going to have vastly different notations. Once you understand how those work, you're going to write your code efficiently. As we saw at the beginning, O(1) and that logarithmic time here, those are going to be the quickest. They're going to run to completion the fastest, because they're not going to grow very fast.
[16:41] O(n) is the linear time. If you have 1,000 items, it's going to linearly grow to 1,000. There's nothing interesting there. As you get more complex because you start adding recursion, you start having nested loops, your complexity chart grows quickly. This is something to keep in mind as you work on your code.
[17:01] In most cases, if you're working just on UIs, you're not going to run into this problem. It's interesting to know, though, because in preparing for your interviews, you might be asked that question. Perfect.
[17:34] Before we dive deeper into the difference between class and prototypal inheritance, let's first talk about inheritance. By its nature, inheritance is a code reuse pattern. Instead of recreating the same functionality over and over again, inheritance mitigates that by letting you create the functionality only once and then pass it around, through inheritance, between instances.
[18:11] When you're working with object-oriented languages that use a class-based, parent-to-child relationship, you're going to see a scenario that's similar to the gorilla and banana scenario, which is: if you're looking for a banana, you're probably going to end up getting the gorilla holding the banana, and the entire jungle with it.
[18:30] With the parent-child relationship, you're going to inherit a whole bunch of other stuff with it, with that banana that you originally asked for.
[18:42] Let's talk about this relationship in classical inheritance. You can think of a class as like a blueprint. You define a blueprint of a house, and every time you want to create a new instance, you're stamping out that blueprint and creating a new house. You're going to see that classes are invoked with the new keyword. This is consistent across languages.
[19:04] They're going to have constructors and instances from the blueprints. They're going to have a parent-to-child relationship. Similar to a child, they're going to inherit DNA from their parents.
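A sketch of that classical style in JavaScript's class syntax (the `Animal`/`Gorilla` names are mine, riffing on the gorilla-and-banana scenario):

```javascript
// The class is the blueprint; `new` stamps out instances;
// `extends` sets up the parent-to-child relationship.
class Animal {
  constructor(name) {
    this.name = name;
  }
  eat() {
    return `${this.name} is eating`;
  }
}

class Gorilla extends Animal {
  grabBanana() {
    return `${this.name} grabbed a banana`;
  }
}

const koko = new Gorilla('Koko');
console.log(koko.grabBanana()); // its own behavior
console.log(koko.eat());        // inherited "DNA" from the parent
```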
[19:32] Prototypal inheritance, on the other hand, is a link between objects. Whenever you dot into an object and look up functionality, it's going through that object, looking at its automatically created prototype property, then looking at the next object, the next prototype, to see if that functionality exists there.
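That object-to-object link can be sketched with `Object.create` (the `canEat`/`gorilla` names are my own):

```javascript
// Prototypal inheritance: plain objects linked together.
const canEat = {
  eat() {
    return `${this.name} is eating`;
  },
};

// `Object.create` makes a new object whose prototype is `canEat`.
const gorilla = Object.create(canEat);
gorilla.name = 'Koko';

// `gorilla` has no `eat` of its own; the lookup walks up the
// prototype chain and finds it on `canEat`.
console.log(gorilla.eat()); // "Koko is eating"
console.log(Object.getPrototypeOf(gorilla) === canEat); // true
```

There's no blueprint here at all, just one object delegating lookups to another, which is the core difference from the classical model.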
[20:06] To recap, today we talked about closures, curried functions, partially applied functions, pure functions, recursion, big O notation, and the difference between classical and prototypal inheritance. Again, I barely scratched the surface on these topics. I challenge you to go and look up some of this stuff yourself. What's been nice for me, as someone who comes from a self-taught background, is taking a real interest in these topics.