How to Analyze Algorithms?

Let's figure out what algorithmic complexity is and how to estimate time and space complexity.

Time to read: 9 min

Briefly

The computational complexity of an algorithm describes how the amount of work performed by the algorithm depends on the size of the input data.

Typically, algorithms have two types of complexity:

  • time complexity — how the number of operations performed during the algorithm's execution relates to the volume of input data;
  • space complexity — how the amount of memory required by the algorithm relates to the size of the input data.

In both cases, we evaluate how the resources used by the algorithm (time or memory) relate to the amount of input data. It may seem that an algorithm always runs slower and consumes more memory as the input grows. This is not always the case. For example, the runtime of the function const doNothing = (...asManyDataAsYouLike) => { } does not depend on the number of arguments passed to it. The complexity estimate of this function is O(1). Let's try to understand what this means.

Methods of Evaluating Complexity (Notations)

There are several methods for evaluating the complexity of algorithms. Their main idea is to obtain a bound for a function that connects the size of the input data with the number of operations or the amount of memory. It is not necessary to define this function exactly; an estimate is enough.

Now, let's discuss some ways to evaluate the complexity of an algorithm.

O (Big O)

O, pronounced "Big O", describes an upper bound on complexity. That is, the maximum number of operations that the algorithm can perform in the worst case. The function that bounds the complexity is written in parentheses after O. For example, O(n) means that the algorithm's complexity grows at most linearly: the execution time increases in direct proportion to the size of the input data (if a list of 10 items takes a certain amount of time, a list of 20 items will take about twice as long). The exact constant of proportionality is not important. Let's look at a few examples.

The computational complexity of this algorithm is O(n). We process each element once. If our array has n elements, we will execute the body of the reduce function n times.

        
          
const sum = (someArray) => someArray.reduce((sum, value) => sum + value, 0)

The computational complexity of this algorithm will also be O(n). We process each element twice. If our array has n elements, then we will execute the body of the reduce function 2 × n times — n times for summation and n times for multiplication. This is described by the formula O(k × n) = O(n): the constant coefficient k does not matter, since it does not depend on the size of the input data.

        
          
const sumAndProd = (someArray) => {
  const sum = someArray.reduce((sum, value) => sum + value, 0)
  const prod = someArray.reduce((prod, value) => prod * value, 1)
  return sum * prod
}

Ω (Omega)

Ω, pronounced "Omega" or "Big Omega", describes a lower bound on complexity. That is, the minimum number of operations that the algorithm will perform in the best case. The function that bounds the complexity is written in parentheses after Ω. For example, Ω(n) means that the complexity grows at least linearly. For instance, quadratic complexity n × n is also Ω(n).
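To make the best and worst cases concrete, here is a small sketch of a linear search (a hypothetical helper, not code from the article): if the target is the very first element, a single comparison suffices, so the best case is constant; if the target is absent, all n elements are checked.

```javascript
// Sketch: linear search.
// Best case: the target is at index 0 — one comparison.
// Worst case: the target is missing — n comparisons.
function find(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i
  }
  return -1
}

console.log(find([5, 2, 9], 5)) // 0 — found immediately, best case
console.log(find([5, 2, 9], 7)) // -1 — scanned the whole array, worst case
```

This search is O(n) and Ω(1): the worst case is linear, while the best case is a single comparison.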

Θ (Theta)

Θ, pronounced as "Theta" or "Big Theta", describes a tight bound on the algorithm's complexity. The function that bounds the complexity both above and below is indicated in parentheses after Θ. Let's consider the previous example:

        
          
const sumAndProd = (someArray) => {
  const sum = someArray.reduce((sum, value) => sum + value, 0)
  const prod = someArray.reduce((prod, value) => prod * value, 1)
  return sum * prod
}

The algorithm performs 2 × n operations — n additions and n multiplications. The number of operations is bounded above by a linear function (2 × n <= 3 × n) and below by a linear function (2 × n >= n), so the complexity is Θ(n).

For completeness, let’s present the exact formulations for each of the definitions.

  • O — f(n) = O(g(n)). There exist positive constants c and n0 such that 0 <= f(n) <= c × g(n) for all n >= n0.
  • Omega — f(n) = Ω(g(n)). There exist positive constants c and n0 such that 0 <= c × g(n) <= f(n) for all n >= n0.
  • Theta — f(n) = Θ(g(n)). There exist positive constants c1, c2 and n0 such that 0 <= c1 × g(n) <= f(n) <= c2 × g(n) for all n >= n0.
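As a quick sanity check of the Theta definition, here is a sketch of my own (not from the article) that numerically verifies f(n) = 2 × n is squeezed between c1 × g(n) and c2 × g(n) for g(n) = n, with c1 = 1, c2 = 3 and n0 = 1:

```javascript
// f(n) = 2n is Θ(n): pick c1 = 1, c2 = 3, n0 = 1
const f = (n) => 2 * n
const g = (n) => n

let holds = true
for (let n = 1; n <= 1000; n++) {
  // Check c1 × g(n) <= f(n) <= c2 × g(n)
  if (!(1 * g(n) <= f(n) && f(n) <= 3 * g(n))) holds = false
}
console.log(holds) // true
```

Of course, a loop over finite n is only an illustration; the definition itself requires the inequality for all n >= n0.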

There are also algorithms with O(1) space complexity. They are called in-place algorithms. They may use additional memory, but its amount does not depend on the size of the input data. For example, to compute the sum of all numbers, we use a single variable for the running total. This variable takes space, but that space does not depend on how many numbers are being summed.
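The running-total example above can be sketched like this (a minimal illustration, not code from the article):

```javascript
// O(1) extra space: a single accumulator, regardless of arr.length
function sum(arr) {
  let total = 0
  for (const value of arr) {
    total += value
  }
  return total
}

console.log(sum([1, 2, 3, 4])) // 10
```

The time complexity is O(n), but the extra memory is one number, so the space complexity is O(1).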

Examples

A few more examples. Imagine we need to change the logo, and we choose the designer with the most experience. We have an array of numbers indicating years of experience. We need to find the maximum number:

        
          
function findMax(arr) {
  let max = arr[0]
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) {
      max = arr[i]
    }
  }
  return max
}

This solution has a time complexity of O(n). The body of the for loop executes once for each element of the array except the first. For example, for an array of three elements the comparison runs twice.

Now suppose we have an array of designers' experience that is sorted in ascending order. We still want to find the most experienced one:

        
          
function findMax(arr) {
  return arr[arr.length - 1]
}

This code has a time complexity of Θ(1) because it performs only one operation, regardless of the size of the array.

Finally, let's say we have an array of numbers representing designers' experience, and we sort these numbers in ascending order:

        
          
function sortArray(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] > arr[j]) {
        let temp = arr[i]
        arr[i] = arr[j]
        arr[j] = temp
      }
    }
  }
  return arr
}

The time complexity of this code is Θ(n ^ 2). For each element, the inner loop performs on the order of n comparisons, roughly n × n in total. If the array consists of ten elements, the code performs on the order of 100 operations (the exact count is n × (n − 1) / 2 = 45 comparisons).
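To check the operation count, here is a small helper of my own (not part of the article) that counts the comparisons the nested loops make:

```javascript
// Counts the comparisons made by the nested loops for an array of length n
function countComparisons(n) {
  let count = 0
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      count++
    }
  }
  return count
}

console.log(countComparisons(10)) // 45, i.e. 10 × 9 / 2 — on the order of n²
```

Dropping the constant factor 1/2, as Big O notation allows, leaves Θ(n ^ 2).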

Such algorithms should not be used to sort real arrays. There are more efficient sorting algorithms with computational complexity n × log(n), which is the minimum possible for sorting based on comparing two elements of an array.
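In practice you would rely on the built-in Array.prototype.sort, whose typical implementations run in O(n × log(n)). A short sketch with the designers' experience example:

```javascript
const experience = [7, 2, 10, 4]

// Spread into a copy so the original array is left untouched;
// the comparator (a, b) => a - b sorts numbers in ascending order
const sorted = [...experience].sort((a, b) => a - b)

console.log(sorted) // [2, 4, 7, 10]
console.log(experience) // [7, 2, 10, 4] — unchanged
```

Note that without a comparator, sort compares elements as strings, so (a, b) => a - b is required for numeric sorting.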

Now, consider an example of an in-place algorithm. Suppose we have a basket of apples and a basket of oranges, and we want to swap their first items:

        
          
let basket1 = ["apple", "apple", "apple"]
let basket2 = ["orange", "orange", "orange"]

// The leading semicolon keeps this line from merging with the previous statement
;[basket1[0], basket2[0]] = [basket2[0], basket1[0]]

console.log(basket1) // ["orange", "apple", "apple"]
console.log(basket2) // ["apple", "orange", "orange"]

In this example, we swap the first elements of the two arrays basket1 and basket2. Destructuring assignment lets us exchange the values without declaring an explicit temporary variable, so the swap happens in place with constant extra memory.
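The same destructuring pattern can swap the whole baskets, not just single items. It is still in place: no memory that grows with the input is used. A small sketch of my own:

```javascript
let basket1 = ["apple", "apple", "apple"]
let basket2 = ["orange", "orange", "orange"]

// The leading semicolon keeps this line from merging with the previous statement
;[basket1, basket2] = [basket2, basket1]

console.log(basket1) // ["orange", "orange", "orange"]
console.log(basket2) // ["apple", "apple", "apple"]
```

Only the two variable bindings change; the arrays themselves are not copied, so the space complexity stays O(1).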