Tag: Big O

  • Big O Notation

    What is Big O notation?

    Big O notation is a mathematical notation used in computer science to describe the time or space complexity of an algorithm, that is, how its running time or memory use grows as the input size grows. It provides a standardized way to compare the efficiency of different algorithms, most commonly in terms of their worst-case growth rate.

    Purpose of Big O Notation

    1. Measure Efficiency: Helps analyze how an algorithm scales with input size.
    2. Compare Algorithms: Provides a standard to evaluate and compare different solutions.
    3. Worst-Case Analysis: Describes an upper bound on an algorithm’s performance, so the guarantee holds even on the worst inputs.

    Common Big O Notations

    1. O(1) – Constant Time
      • The runtime does not depend on the size of the input.
      • Example: Accessing an element in an array by index (see the first sketch after this list).
    2. O(log n) – Logarithmic Time
      • The runtime grows logarithmically with the input size.
      • Example: Binary search (second sketch below).
    3. O(n) – Linear Time
      • The runtime grows in direct proportion to the input size.
      • Example: Traversing an array (also covered in the first sketch below).
    4. O(n log n) – Quasilinear Time
      • Often associated with divide-and-conquer algorithms.
      • Example: Merge sort (third sketch below); quicksort (average case).
    5. O(n²) – Quadratic Time / O(n³) – Cubic Time
      • The runtime grows quadratically or cubically with the input size.
      • Example: Nested loops, such as in bubble sort (also in the first sketch below).
    6. O(2ⁿ) – Exponential Time
      • The runtime doubles with each additional input element.
      • Example: Naive recursive computation of the Fibonacci sequence (fourth sketch below).
    7. O(n!) – Factorial Time
      • Extremely inefficient; arises in algorithms that enumerate all permutations.
      • Example: Solving the traveling salesman problem by brute force (final sketch below).
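
    A minimal Python sketch of the constant, linear, and quadratic cases above (the function names are illustrative, not from any library):

      def constant_access(arr, i):
          # O(1): indexing a list does not depend on len(arr).
          return arr[i]

      def linear_traversal(arr):
          # O(n): visits each element exactly once.
          total = 0
          for x in arr:
              total += x
          return total

      def bubble_sort(arr):
          # O(n^2): two nested loops over the input (bubble sort).
          a = list(arr)
          n = len(a)
          for i in range(n):
              for j in range(n - 1 - i):
                  if a[j] > a[j + 1]:
                      a[j], a[j + 1] = a[j + 1], a[j]
          return a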
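    Binary search is the standard O(log n) example: each iteration halves the remaining range. A minimal sketch, assuming the input list is already sorted:

      def binary_search(sorted_arr, target):
          # O(log n): the search range shrinks by half each iteration.
          lo, hi = 0, len(sorted_arr) - 1
          while lo <= hi:
              mid = (lo + hi) // 2
              if sorted_arr[mid] == target:
                  return mid
              elif sorted_arr[mid] < target:
                  lo = mid + 1
              else:
                  hi = mid - 1
          return -1  # target not present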
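    Merge sort illustrates the O(n log n) divide-and-conquer pattern: roughly log n levels of splitting, with an O(n) merge at each level. A minimal sketch:

      def merge_sort(arr):
          # O(n log n): split in half (about log n levels), merge in O(n) per level.
          if len(arr) <= 1:
              return list(arr)
          mid = len(arr) // 2
          left = merge_sort(arr[:mid])
          right = merge_sort(arr[mid:])
          merged, i, j = [], 0, 0
          while i < len(left) and j < len(right):
              if left[i] <= right[j]:
                  merged.append(left[i])
                  i += 1
              else:
                  merged.append(right[j])
                  j += 1
          merged.extend(left[i:])   # one half is exhausted here;
          merged.extend(right[j:])  # append whatever remains of the other.
          return merged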
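    Naive recursive Fibonacci is the classic exponential case: each call spawns two more, so the call tree roughly doubles per level:

      def fib_naive(n):
          # Roughly O(2^n): the same subproblems are recomputed many times.
          if n < 2:
              return n
          return fib_naive(n - 1) + fib_naive(n - 2)

    Memoizing the recursion (for example with functools.lru_cache) brings this down to O(n), which is why the exponential cost belongs to the naive version specifically.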
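    Finally, brute-force traveling salesman enumerates all (n - 1)! orderings of the cities. The dist argument below is an assumed input format (an n x n matrix of pairwise distances), not a fixed API:

      from itertools import permutations

      def tsp_brute_force(dist):
          # O(n!): tries every possible tour. `dist` is assumed to be an
          # n x n matrix where dist[a][b] is the distance from a to b.
          n = len(dist)
          best_cost, best_tour = float("inf"), None
          for perm in permutations(range(1, n)):  # fix city 0 as the start
              tour = (0,) + perm + (0,)
              cost = sum(dist[tour[k]][tour[k + 1]] for k in range(n))
              if cost < best_cost:
                  best_cost, best_tour = cost, tour
          return best_cost, best_tour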

    Why Is Big O Important?

    1. Scalability: Determines how well an algorithm performs with large inputs.
    2. Optimization: Helps identify bottlenecks and improve performance.
    3. Decision Making: Guides the choice of the right algorithm for a given problem.