 # Sum of the most common algorithmic arithmetic series

Published 5 years ago
Updated 5 months ago

There is a particular arithmetic series that shows up time and time again in the analysis of naïve implementations of many common programming challenges.

In this lesson we show why its time complexity is quadratic, so you don't have to keep re-deriving it over and over.

Quite commonly in algorithms, you're given items one by one, and the task at hand can easily be solved by looking at all the items you've been given thus far. A classic example of such a task is sorting. Let's say we have the numbers 2, 4, 6, 7, 8, 9. We're not told that they are already sorted, and our task is to sort them.

Of course, we have discussed many excellent algorithms for this, but let's come up with another nice solution. We look at these input numbers one by one, copying them over to their rightful sorted place in an output array.

In iteration one, we get the number two and just pop it into the start; we needed to look at zero existing numbers. In iteration two, we pick the next number, four, and compare it to two; since it's larger, we pop it at the end. In total, we needed to look at one existing number.

In iteration three, we get the number six and compare it to two; since it's larger, we next compare it to four, and since it's larger again, we pop it at the end. We needed to look at two existing numbers. Similarly, in iteration four, in the worst case we might have to look at three existing items. And so on, until in iteration n we might have to look at n-1 existing items.
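The walkthrough above can be sketched in Python. The function name and the lookup counter are illustrative, not part of the lesson; this is a minimal sketch, assuming we scan the output array from the front to find each item's place.

```python
def wasteful_sort(items):
    """Sort by scanning the output list from the front for each new
    item's place. Returns (sorted list, total lookups at existing items)."""
    output = []
    lookups = 0
    for item in items:
        pos = 0
        # Walk past every existing item that is smaller than the new one.
        while pos < len(output) and output[pos] < item:
            lookups += 1
            pos += 1
        output.insert(pos, item)
    return output, lookups

# On the already-sorted input from the lesson, each new item is larger
# than everything before it, so iteration i looks at all i existing items:
wasteful_sort([2, 4, 6, 7, 8, 9])  # lookups: 0+1+2+3+4+5 = 15
```

Already-sorted input is the worst case for this scan order, which is exactly the scenario the iterations above describe.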

Now, the total work across n iterations for this wasteful sorting algorithm is simply the sum of the lookups in each iteration.

Let's call this sum A = 0 + 1 + 2 + ... + (n-1). Another way of writing the same sum is as a descending series, where we continually look at fewer and fewer remaining items until we arrive at zero: A = (n-1) + (n-2) + ... + 1 + 0. Now, if we add these two ways of writing the same sum term by term, we get a new equation with n terms, all of which are n-1.
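Stacking the two orderings of the series makes the pairing visible; each column sums to n-1, and there are n columns:

```
 A =     0  +     1  + ... + (n-2) + (n-1)
 A = (n-1)  + (n-2)  + ... +     1 +     0
------------------------------------------
2A = (n-1)  + (n-1)  + ... + (n-1) + (n-1)  =  n(n-1)
```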

So 2A = n(n-1), and thus the total work is A = n(n-1)/2. You'll see this formula quite commonly in textbooks, but as algorithm designers we are generally interested in the big-O complexity, which is determined by the highest power of n: here that is n squared, so the algorithm is O(n²).
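A quick sanity check of the closed form, as a sketch (the helper names are made up for illustration):

```python
def total_lookups(n):
    """Brute-force sum of lookups: 0 + 1 + ... + (n-1)."""
    return sum(range(n))

def closed_form(n):
    """The closed form derived above: n(n-1)/2."""
    return n * (n - 1) // 2

# The two agree for every n we try.
for n in range(100):
    assert total_lookups(n) == closed_form(n)
```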

Remember: whenever you're looping n times and looking at all existing items, or looping n times and decreasing the amount of work by only one in each iteration, the final algorithm will be at least quadratic in nature.
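The telltale shape of this pattern is an inner loop over everything seen so far. As a generic sketch (the function is hypothetical, just to show the loop structure):

```python
def count_inner_steps(items):
    """Counts how much work an 'inner loop over all items seen so far'
    pattern does in total; this is the series from the lesson."""
    steps = 0
    for i in range(len(items)):
        for _ in range(i):  # iteration i looks at i existing items
            steps += 1
    return steps

count_inner_steps(list(range(10)))  # 0+1+...+9 = 45 = 10*9/2
```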