Disclaimer: The definitions given here are very informal. More formal definitions can be found in the references and other material available on the Internet.


Big-O notation is a way of describing the performance or complexity of an algorithm. It is commonly used to describe the time complexity and space complexity of an algorithm.

These analyses are done as a function of the input size, so if the input size changes, the complexity may change as well. Big-O notation gives an upper bound on the complexity of an algorithm as the input size grows asymptotically. For example, if we say that a function f has a time complexity of O(n), it means that as the input size n grows, the time to execute f grows at the same rate, i.e., linearly. If a function g has a time complexity of O(1), it means that no matter how large the input is, the function always takes a constant amount of time to execute. As mentioned before, big-O gives an upper bound, so when describing the complexity of a function with big-O notation, you can be sure that the complexity is not worse than that.
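To make the contrast concrete, here is a small sketch of the two growth rates just mentioned. The function names (first, sumAll) are illustrative, not from the text:

```go
package main

import "fmt"

// first returns the first element of the slice. It performs a single
// read no matter how long the slice is — constant time, O(1).
func first(arr []int) int {
	return arr[0]
}

// sumAll visits every element once, so the amount of work grows
// linearly with the input size — O(n).
func sumAll(arr []int) int {
	total := 0
	for _, v := range arr {
		total += v
	}
	return total
}

func main() {
	small := []int{1, 2, 3}
	large := make([]int, 1000)
	for i := range large {
		large[i] = 1
	}
	// first does the same single read for both slices,
	// while sumAll does 3 additions vs. 1000 additions.
	fmt.Println(first(small), first(large))
	fmt.Println(sumAll(small), sumAll(large))
}
```

Doubling the input doubles the work done by sumAll, but leaves first unchanged.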

To estimate how much time a function will take as its input size grows, the number of instructions executed is counted. Let's consider the following function as an example:

// Returns the sum of all values in the array
func sum(arr []int) int {
    result := 0
    for i := 0; i < len(arr); i++ {
        result += arr[i]
    }
    return result
}

If we consider that reading a value from memory, assigning a value to a variable, incrementing a value, adding two values, etc., each take one instruction, we could say that the function sum above takes:

- result := 0 → 1 instruction (assignment)
- i := 0 → 1 instruction (assignment)
- i < len(arr) → 2 instructions (reading the length and comparing)
- i++ → 1 instruction (increment)
- result += arr[i] → 2 instructions (reading the value and adding it to result)

Note that some operations are executed multiple times inside the for loop. The comparison with the array length, the increment of the variable i, and the addition of the array value to result all execute proportionally to the number of elements in arr. Instead of 1 + 1 + 2 + 1 + 2 = 7 instructions, this function actually executes 1 + 1 + (2 + 1 + 2) * n, which is 2 + 5n instructions, where n is the input size. In big-O notation, we drop the constants: no matter how big the input is, the two instructions that initialize the variables are executed only once and have no impact on how the running time grows. The same reasoning lets us reduce 5n to simply n. We therefore say that the function sum is O(n).
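The counting argument above can be checked mechanically. The sketch below mirrors sum but tallies the simplified per-operation costs used in the text (the name sumCounted and the counting scheme are illustrative):

```go
package main

import "fmt"

// sumCounted behaves like sum but also tallies instructions using
// the simplified model from the text: 1 each for the two
// initializations, then 2 + 1 + 2 = 5 per loop iteration.
func sumCounted(arr []int) (result, ops int) {
	ops++ // result := 0
	ops++ // i := 0
	for i := 0; i < len(arr); i++ {
		ops += 2 // i < len(arr): read the length, compare
		ops++    // i++
		ops += 2 // result += arr[i]: read the value, add it
		result += arr[i]
	}
	return result, ops
}

func main() {
	for _, n := range []int{1, 10, 100} {
		arr := make([]int, n)
		_, ops := sumCounted(arr)
		fmt.Printf("n=%d ops=%d (2+5n=%d)\n", n, ops, 2+5*n)
	}
}
```

For every n, the tally matches 2 + 5n exactly; the constant 2 and the factor 5 are what big-O discards.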

Big Omega and Big Theta

Similar to Big-O notation, we have the Big-Omega (Ω) and Big-Theta (Θ) notations.

Ω describes a lower bound on the complexity of an algorithm, and Θ describes a tight bound, meaning that the complexity is neither better nor worse (up to constant factors) than what Θ describes.
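A hypothetical linear search makes the distinction visible (indexOf is an illustrative example, not from the text): its best case finds the target at the first position, while its worst case scans the whole slice, so its running time is Ω(1) and O(n), and no single Θ bound tighter than these applies to all inputs. By contrast, sum above always performs n additions, so it is Θ(n).

```go
package main

import "fmt"

// indexOf scans left to right and returns the index of target,
// or -1 if it is absent. Best case: one comparison (target is
// first). Worst case: n comparisons (target is absent or last).
func indexOf(arr []int, target int) int {
	for i, v := range arr {
		if v == target {
			return i
		}
	}
	return -1
}

func main() {
	arr := []int{7, 3, 9, 1}
	fmt.Println(indexOf(arr, 7)) // best case: found at index 0
	fmt.Println(indexOf(arr, 5)) // worst case: scans all 4 elements
}
```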