In mathematics and computer science, Big O notation is a way of comparing the rates of growth of different functions. It is often used to compare the efficiency of different algorithms by estimating how much memory and how much time an algorithm needs as the size of its input grows.
Big O notation is often used to identify how complex a problem is, also known as the problem's complexity class. The mathematician Paul Bachmann (1837-1920) was the first to use this notation, in the second edition of his book "Analytische Zahlentheorie", in 1896. Edmund Landau (1877-1938) made the notation popular. For this reason, when people talk about Landau symbols, they are referring to this notation.
Big O notation is named after the term "order of the function", which refers to the growth of functions. Big O notation is used to find the upper bound (the highest possible amount) of a function's growth rate. When the function describes an algorithm's running time, this bound gives the longest time the algorithm could take to turn its input into the output. This means algorithms can be grouped by how long they take in a worst-case scenario, where the slowest path through the algorithm is taken every time.
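As an illustration of a worst case, here is a small sketch (not taken from the article; the function `linear_search` and its comparison counter are written here only for demonstration). A linear search does the most work when the item is missing, so its running time is in $O(n)$:

```python
# Illustrative sketch: counting comparisons in a linear search
# to see where its worst case happens.

def linear_search(items, target):
    """Return (index of target or -1, number of comparisons made)."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1000))

# Best case: the target is the first element -> 1 comparison.
print(linear_search(data, 0)[1])    # 1

# Worst case: the target is missing -> n comparisons, so the search is O(n).
print(linear_search(data, -1)[1])   # 1000
```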
More specifically, given two positive functions $f(n)$ and $g(n)$, we say that $f$ is in the big O of $g$ (written $f(n) = O(g(n))$) if $f(n) \le c \cdot g(n)$ for large enough $n$, for some constant $c > 0$.[1][2][3]
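As a worked instance of this definition (an illustrative example, not part of the original text), the polynomial $3n^2 + 5n$ is in $O(n^2)$:

```latex
% Worked example (illustrative, not from the article): 3n^2 + 5n is in O(n^2).
% For every n >= 1 we have 5n <= 5n^2, so
\[
    3n^2 + 5n \;\le\; 3n^2 + 5n^2 \;=\; 8n^2 \qquad \text{for all } n \ge 1 .
\]
% Taking f(n) = 3n^2 + 5n, g(n) = n^2 and the constant c = 8 in the
% definition above gives f(n) <= c * g(n), hence 3n^2 + 5n = O(n^2).
```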
Big O describes the worst-case running time of an algorithm, showing how efficient it is without having to run the program on a computer. This is also useful because different computers have different hardware, and therefore need different amounts of time to finish the same program. Because Big O looks at the worst case and ignores these hardware-dependent constant factors, it gives a consistent measure of speed: regardless of the hardware, an algorithm whose running time grows more slowly (for example, one in $O(n)$) will always finish faster than one whose running time grows more quickly (for example, one in $O(n^2)$) once the input is large enough, because they have different levels of efficiency.
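A minimal sketch of this idea, assuming we measure abstract "steps" instead of seconds so the comparison does not depend on any particular hardware (the helper functions below are hypothetical, written only for illustration):

```python
# Illustrative sketch: comparing an O(n) loop with an O(n^2) double loop
# by counting steps rather than measuring wall-clock time.

def count_linear_steps(n):
    """An O(n) loop: one step per element."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def count_quadratic_steps(n):
    """An O(n^2) double loop: one step per pair of elements."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

for n in (10, 100, 1000):
    print(n, count_linear_steps(n), count_quadratic_steps(n))

# As n grows, the O(n) count stays far below the O(n^2) count,
# no matter which machine runs the code.
```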