
October 12th, 2021

How Big O Notation Works

The Big O Notation is an important concept within programming. We’ll cover 3 of the more common Big O Notation examples in this article: constant time, linear time and quadratic time.

Maybe you’re completely unfamiliar with the concept, or maybe you’ve heard of it in passing but never got around to taking a closer look at what it actually is and what it’s actually meant to measure or describe.

Regardless, let’s take a look at how Big O Notation works and how this understanding can be applied to our development work.

Let’s begin!

So what is the Big O Notation?

In simple terms, Big O Notation is used to describe the performance of an algorithm. By using the notation, we can describe how efficiently a given algorithm performs as its input grows.

This is useful for comparing one implementation (or solution to a problem) to another, for example.

In case you are wondering, the “O” in Big O Notation stands for “order”, as in the order of growth of the function.

To be more specific, an algorithm is:

… a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

Oxford Languages

As developers, algorithms are part of our day to day life. Even if you’re unfamiliar with the technicalities and the nuts and bolts in this regard, algorithms are still very much at the heart of what we do in some respects.

Algorithms in practice

Some examples of algorithm-based functionality you’re likely familiar with:

  • Searching an array for the smallest or largest integer
  • Searching a string to find the position of a specific character
  • Checking if two arrays are equal
  • Finding the last element of an array

And so on.

These are fairly common tasks or functionalities that most developers will be familiar with implementing in some form.

Basically, any type of operation where logic is applied to arrive at a specific, desired result utilizes some sort of algorithm under the hood.

The algorithm is simply the process that delivers the required solution in each of these cases.
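To make this concrete, here’s a minimal sketch (my own example, not from the list above verbatim) of one of those tasks: finding the position of a specific character in a string. In practice you’d reach for the built-in indexOf, but spelling out the steps makes the underlying algorithm visible.

```javascript
// Find the position of a specific character in a string,
// returning -1 if the character isn't present
const findCharPosition = (text, char) => {
    for (let i = 0; i < text.length; i++) {
        if (text[i] === char) {
            return i;
        }
    }
    return -1;
}

console.log(findCharPosition("banana", "n")); // 2
```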

Back to how Big O Notation works

Now we’ve clarified in basic terms what algorithms actually are, we can return to looking at how Big O Notation works and how we can utilize this understanding in our favour.

The first important thing to remember is that Big O Notation measures how the time taken by an algorithm grows relative to its input size.

We’ll go through 3 of the most basic time complexities, starting with the least complex.

Big O Notation examples

These Big O Notation examples are written in JavaScript, but the same principles apply regardless of the programming language used.

1. O(1) – or “constant” time

An algorithm that runs in constant time is said to have the same performance regardless of the inputs it is operating upon: the time it takes to run does not change with the supplied input.

Here’s a very simplistic example of this:

const getDefaultAge = () => {
    return 22;
}

The function returns a static, unchanging value each time.

And here’s another example:

const isEven = (number) => {
    return number % 2 === 0;
}

If we pass in a smaller number or a larger number, the number of operations that the function will execute remains exactly the same. In this case, there is one operation: using the modulo operator (%) to determine whether the number is odd or even.

That’s one step, and one step only, regardless of input.

Here’s another example, whereby there are multiple inputs in the form of an array:

const firstItemInArrayIsNull = (someArray) => {
    return someArray[0] === null
}

The array we pass to our firstItemInArrayIsNull function can be of any length — the time taken by our function is consistent in all cases. Only one step is required each time; this is independent of the size of the array that we pass in to the function.

2. O(n) – or “linear” time

The performance of an algorithm that runs in linear time is said to grow in a linear fashion based on the inputs it is operating upon.

By this, we are saying that the performance is directly relative and proportional to the size of the inputs supplied.

Consider this example:

const uppercaseFruits = []
const fruits = ["apple", "orange", "pear", "banana", "plum"]

fruits.forEach(fruit => {
    uppercaseFruits.push(fruit.toUpperCase())
})

Here, we’re using JavaScript’s forEach function to explain linear time.

The forEach demonstrated here performs an operation on every item in the supplied array. As you can see, it simply converts the given string to uppercase.

But what is linear time?

If our fruits array only had one item, this uppercase conversion would be performed only once.

If our fruits array had a lot more items, for instance, if it had 1000 items — the operation would be performed 1000 times.

And so on.

This is what we mean by “linear” time: the time taken correlates directly with the supplied inputs. In this case, the number of inputs (the number of items in the array) determines how many times the operation is performed.

So to clarify, the number of items in the array in this example has a direct impact on how this algorithm performs and the time it takes to complete. Unlike a constant time algorithm, the more items passed in, the greater the time taken becomes.

The easiest way to envisage this is: if we double the number of inputs, we double the time taken by this function.
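As another O(n) sketch (my own example), consider finding the largest integer in an array, one of the tasks mentioned earlier. Every item is inspected exactly once, so doubling the array’s length doubles the number of comparisons:

```javascript
// Find the largest integer in an array by checking each item once
const findLargest = (numbers) => {
    let largest = numbers[0];
    for (const number of numbers) {
        if (number > largest) {
            largest = number;
        }
    }
    return largest;
}

console.log(findLargest([3, 9, 1, 7])); // 9
```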

How Big O Notation considers the worst case scenario

In our linear example above, it’s clear that our forEach will always run x number of times – where x is the number of items in the supplied array.

However, there is something important to note regarding the Big O Notation. This can be best demonstrated via this example:

const countries = ["england", "ireland", "scotland", "wales"]

for (const country of countries) {
    if (country === "ireland") {
        break;
    }
    console.log(country)
}

As you can see, this time it’s possible for our functionality to terminate early. So it won’t necessarily be the case that we loop through every item in the array every time we invoke some functionality like this.

If we had an array of 1000 items, it’s possible the relevant condition may be met only 1 or 2 iterations in. So we’d break in this case, and the for loop will thus terminate.

But how does that affect the Big O Notation?

The answer is: it doesn’t.

That’s because the Big O Notation considers the “worst” case scenario.

In our example above, it’s possible for the functionality to terminate after a single iteration (had “ireland” been the first item in the array). This would be considered to be the “best” case scenario.

Equally, it’s possible for the functionality to terminate in 4 iterations. Or in other words, it’d operate on every item in the supplied array before terminating. This is considered to be the “worst” case scenario, as the functionality will execute the maximum amount of possible times (and thus take longer to complete).

So in short, the Big O Notation will always consider the worst case scenario — even if it’s possible for the functionality to sometimes return early.
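To illustrate, here’s a sketch of a linear search (my own example, not from the article’s snippet). It can return early, but its Big O classification is still O(n), because the worst case visits every item:

```javascript
// Linear search: return the index of the target, or -1 if absent
const linearSearch = (items, target) => {
    for (let i = 0; i < items.length; i++) {
        if (items[i] === target) {
            return i; // best case: found on the very first iteration
        }
    }
    return -1; // worst case: every single item was checked
}

console.log(linearSearch(["england", "ireland", "scotland", "wales"], "england")); // 0
console.log(linearSearch(["england", "ireland", "scotland", "wales"], "france"));  // -1
```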

3. O(n^2) – or “quadratic” time

Algorithms falling within this bracket are more complex and require more time than the previous examples.

Here’s a demonstration of some functionality that can be considered to have O(n^2) complexity:

const checkForDuplicates = (items) => {
    items.forEach(outerItem => {
       items.forEach(innerItem => {
           // assume some duplicate checking logic...
           console.log(outerItem + " - " + innerItem)
       })
    })
}

checkForDuplicates([1]) // expect 1 log (or 1^2)
checkForDuplicates([1, 2, 3]) // expect 9 logs (or 3^2)
checkForDuplicates([1, 2, 3, 4, 5]) // expect 25 logs (or 5^2)

In this example, our checkForDuplicates function receives an array of items.

Assume we want to check for duplicates within this array; that’s one possible use-case for a nested forEach like this. I’ve omitted this logic in the pseudo-example, but we can easily demonstrate quadratic time regardless.

So there are two nested forEach loops here. This is a typical scenario (nesting) where we may expect to encounter an algorithm falling under the O(n^2) bracket.

But why is this function considered to be O(n^2) complexity?

Simply put, this can be considered to be of O(n^2) complexity because the number of operations performed equates to the square of the number of inputs received.

You can copy/paste that snippet yourself and let it run.

You’ll see that the number of console.logs increases quadratically with the number of items within your array: the number of logs will be the number of items squared. Or the number of items ^ 2. Or more simply, n^2.

If our initial array contained 10 items, we’d log exactly 100 times (10*10, or 10^2).

If our initial array contained more than 10 items, the number of logs we would expect to see grows quadratically. As stated, the number of logs will be the square of the number of inputs.

Given this, we can see that algorithms such as this one are more complex than the constant time and the linear time examples above. That’s because the time this function takes to complete increases drastically based on the amount of inputs passed in, as demonstrated.
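One way to verify this quadratic growth yourself is to count the operations directly. The countOperations helper below is my own illustration, not part of the article’s example:

```javascript
// Count how many inner operations two nested forEach loops perform
const countOperations = (items) => {
    let operations = 0;
    items.forEach(() => {
        items.forEach(() => {
            operations += 1; // one operation per (outer, inner) pair
        });
    });
    return operations;
}

console.log(countOperations([1]));              // 1  (1^2)
console.log(countOperations([1, 2, 3]));        // 9  (3^2)
console.log(countOperations([1, 2, 3, 4, 5]));  // 25 (5^2)
```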

The Big O Notation explained – in summary…

The Big O Notation examples above are fairly straightforward, but let’s recap and compare them one last time for the sake of clarity:

  • O(1) or constant time – the number of inputs is irrelevant; the function performs consistently each time
  • O(n) or linear time – the time taken correlates directly with the number of inputs received
  • O(n^2) or quadratic time – the time taken grows quadratically with the number of inputs (i.e. the square of the number of inputs)

This is a fairly trivial summary, but you should get the idea of how Big O can be used to describe the performance of different algorithms.

How Big O Notation (and knowledge thereof) can benefit your code

Now that you hopefully have a basic grasp of how Big O Notation works, we can start to discuss how that can be applied to your own work as a developer.

Measure and describe the time complexity of your own code

In short, any time you are writing a function or a piece of code, you’ll be able to analyse its effectiveness or efficiency in a more technical manner through the use of Big O Notation.

Having the awareness in this regard can be a huge benefit when analysing your own code as well as that of other developers.
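As a sketch of what this awareness can buy you (my own example, reusing the duplicate-checking idea from earlier): the same problem can often be solved at a lower complexity. Here the nested-loop version is O(n^2), while the Set-based version is O(n):

```javascript
// O(n^2): compare every pair of items
const hasDuplicatesQuadratic = (items) => {
    for (let i = 0; i < items.length; i++) {
        for (let j = i + 1; j < items.length; j++) {
            if (items[i] === items[j]) {
                return true;
            }
        }
    }
    return false;
}

// O(n): a Set remembers the items we've already seen
const hasDuplicatesLinear = (items) => {
    const seen = new Set();
    for (const item of items) {
        if (seen.has(item)) {
            return true;
        }
        seen.add(item);
    }
    return false;
}

console.log(hasDuplicatesQuadratic([1, 2, 3, 2])); // true
console.log(hasDuplicatesLinear([1, 2, 3, 2]));    // true
```

Both functions give the same answer; the linear version simply trades a little memory (the Set) for far fewer operations on large inputs.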

In conclusion

I hope you’ve enjoyed this article, and I hope you’ve got a clearer idea now of how Big O Notation works, what it represents, and how it can benefit you as a developer.

It’s worth noting that there is a lot more to the Big O than what we’ve covered in this brief article. It can definitely become fairly complex and intricate!

The Big O Notation examples provided in this article are as lean as possible, and most of all, easy to digest whilst still demonstrating the topic at hand.

However, if you’re wanting to advance your knowledge in this domain, there’s definitely a wealth of information out there covering more advanced scenarios.

Thanks for reading!

Have any questions? Ping me over at @justacodingblog