Introduction
Algorithmic complexity is concerned with how fast or slow a particular algorithm performs. The ability to measure and compare the complexity of algorithms is therefore quite useful: it lets us estimate how fast or slow an algorithm will run depending on the amount of input data.
For example, if we have just created an algorithm for a web application that runs in 1 second with 1,000 users, it is fairly important to know how long it will take with 10,000 users. Depending on its complexity, it may run 10 times slower or stay just as fast as before.
- Time complexity – a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. "Time" can mean the number of memory accesses performed, the number of comparisons between integers, the number of times some inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take.
- Space complexity – a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. We often speak of "extra" memory needed, not counting the memory needed to store the input itself. Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as important an issue as time.
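To make both definitions concrete, here is a minimal C++ sketch (the helper name sumArray is hypothetical, not something from the text above): it performs an amount of work proportional to the input size while using only a constant amount of extra memory.

```cpp
#include <iostream>
#include <vector>

// Hypothetical example: summing n elements.
// Time:  one addition per element, so work grows linearly with the input size.
// Space: a single extra variable regardless of n, so constant *extra* memory
//        (the memory holding the input itself is not counted).
long long sumArray(const std::vector<int>& data) {
    long long total = 0;          // O(1) extra space
    for (int value : data) {      // loop body executes n times -> O(n) time
        total += value;
    }
    return total;
}

int main() {
    std::vector<int> numbers = {1, 2, 3, 4, 5};
    std::cout << sumArray(numbers) << '\n';   // prints 15
    return 0;
}
```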
Notation for asymptotic growth
@see Know Thy Complexities!
letter | bound | growth |
---|---|---|
(theta) Θ | upper and lower, tight[1] | equal[2] |
(big-oh) O | upper, tightness unknown | less than or equal[3] |
(small-oh) o | upper, not tight | less than |
(big omega) Ω | lower, tightness unknown | greater than or equal |
(small omega) ω | lower, not tight | greater than |
[1] Big O is the upper bound, while Omega is the lower bound. Theta requires both Big O and Omega, which is why it's referred to as a tight bound (it must be both the upper and lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time but has no upper limit. An algorithm taking Theta(n log n) is far preferable since it takes AT LEAST n log n (Omega n log n) and NO MORE THAN n log n (Big O n log n).
[2] f(n) = Θ(g(n)) means f (the running time of the algorithm) grows exactly like g when n (the input size) gets larger. In other words, the growth rate of f(n) is asymptotically proportional to g(n).
[3] Same idea, except that here the growth rate is no faster than g(n). Big-oh is the most useful because it represents the worst-case behavior.
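For reference, the bounds in the table and footnotes above can be written out formally. This is a compact sketch of the standard textbook definitions (c, c1, c2 are positive constants and n0 is a threshold; they are the usual quantifiers, not anything taken from the table):

```latex
% Standard asymptotic bound definitions (sketch)
\begin{align*}
f(n) = O(g(n))      &\iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n)                 &&\text{for all } n \ge n_0\\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n)                       &&\text{for all } n \ge n_0\\
f(n) = \Theta(g(n)) &\iff \exists\, c_1, c_2 > 0,\ n_0 : c_1\,g(n) \le f(n) \le c_2\,g(n) &&\text{for all } n \ge n_0
\end{align*}
```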
In short, if an algorithm is __ then its performance is __
algorithm | performance |
---|---|
o(n) | < n |
O(n) | ≤ n |
Θ(n) | = n |
Ω(n) | ≥ n |
ω(n) | > n |
Big O Complexity chart
Algorithms
Awesome
Constant time: O(1)
Runs in constant time regardless of the input size.
- accessing an array element by index (e.g. arr[27]);
- pushing and popping on a Stack;
- insertion and removal from a Queue;
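A minimal C++ sketch of the first two items above (the concrete values are arbitrary): array indexing and stack push/pop take the same time no matter how many elements the container holds; note that std::stack's push is amortized constant time.

```cpp
#include <iostream>
#include <stack>
#include <vector>

int main() {
    std::vector<int> arr(100, 7);
    // Index access: the element's address is computed directly,
    // so the cost does not depend on how many elements exist.
    int x = arr[27];               // O(1)

    std::stack<int> s;
    s.push(42);                    // O(1) amortized: push on top
    s.pop();                       // O(1): pop from top

    std::cout << x << '\n';
    return 0;
}
```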
Normal
Linear time: O(n)
Its execution time is directly proportional to the input size, i.e. time grows linearly as the input size increases.
- traversing an Array or Linked List;
- linear search;
- deletion of a specific element in a Linked List;
- comparing two strings;
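A minimal C++ sketch of linear search from the list above (linearSearch is an illustrative helper name): in the worst case every element is examined once, so the work grows in direct proportion to n.

```cpp
#include <iostream>
#include <vector>

// Linear search: worst case inspects every element exactly once -> O(n).
int linearSearch(const std::vector<int>& data, int target) {
    for (std::size_t i = 0; i < data.size(); ++i) {
        if (data[i] == target) return static_cast<int>(i);
    }
    return -1;   // not found
}

int main() {
    std::vector<int> data = {4, 8, 15, 16, 23, 42};
    std::cout << linearSearch(data, 23) << '\n';   // prints 4
    return 0;
}
```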
Good
Logarithmic time: O(log(n))
Its execution time is proportional to the logarithm of the input size.
- binary search in a sorted array;
- calculating Fibonacci numbers via fast doubling / matrix exponentiation;
- search, insertion and deletion in a balanced binary search tree;
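A minimal C++ sketch of binary search on a sorted vector (binarySearch is an illustrative helper name): each iteration halves the remaining range, so roughly log2(n) iterations are enough in the worst case.

```cpp
#include <iostream>
#include <vector>

// Binary search on a sorted vector: halve the search range each step -> O(log n).
int binarySearch(const std::vector<int>& sorted, int target) {
    int lo = 0, hi = static_cast<int>(sorted.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;        // avoids overflow of lo + hi
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else                      hi = mid - 1;
    }
    return -1;   // not found
}

int main() {
    std::vector<int> data = {1, 3, 5, 7, 9, 11, 13};
    std::cout << binarySearch(data, 9) << '\n';   // prints 4
    return 0;
}
```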
Fast Enough
Linearithmic time: O(n log(n))
Its execution time is proportional to the input size multiplied by the logarithm of the input size.
- Quick sort (on average);
- Merge sort;
- Heap sort;
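A minimal C++ sketch of merge sort from the list above (illustrative, not optimized): the input is halved about log2(n) times, and every level of merging touches all n elements, which gives the n · log(n) total.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Merge sort on the half-open range [lo, hi):
// log2(n) levels of splitting, n work of merging per level -> O(n log n).
void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo < 2) return;                 // 0 or 1 element: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);
    mergeSort(a, mid, hi);

    std::vector<int> merged;                 // merge the two sorted halves
    merged.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi)
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi)  merged.push_back(a[j++]);
    std::copy(merged.begin(), merged.end(), a.begin() + lo);
}

int main() {
    std::vector<int> data = {5, 2, 9, 1, 7, 3};
    mergeSort(data, 0, static_cast<int>(data.size()));
    for (int v : data) std::cout << v << ' ';   // 1 2 3 5 7 9
    std::cout << '\n';
    return 0;
}
```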
Poor
Quadratic time: O(n^2)
Its execution time is proportional to the square of the input size.
- Bubble sort;
- Insertion sort;
- Selection sort;
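A minimal C++ sketch of bubble sort from the list above: a loop within a loop, each bounded by n, so the number of comparisons grows with the square of the input size.

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Bubble sort: nested loops over up to n elements -> O(n^2) comparisons.
void bubbleSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {
        for (std::size_t j = 0; j + 1 < a.size() - i; ++j) {
            if (a[j] > a[j + 1]) std::swap(a[j], a[j + 1]);
        }
    }
}

int main() {
    std::vector<int> data = {5, 2, 9, 1, 7, 3};
    bubbleSort(data);
    for (int v : data) std::cout << v << ' ';   // 1 2 3 5 7 9
    std::cout << '\n';
    return 0;
}
```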
Rules of thumb
Asymptotic behavior – keeping the fastest-growing term and dropping all terms that grow more slowly.
@see A Gentle Introduction to Algorithm Complexity Analysis
- Simple programs can be analyzed by counting the nested loops of the program. A single loop over n items yields f( n ) = n. A loop within a loop yields f( n ) = n^2. A loop within a loop within a loop yields f( n ) = n^3. (A small sketch follows this list.)
- Programs with a bigger Θ run slower than programs with a smaller Θ.
- While all the symbols O, o, Ω, ω and Θ are useful at times, O is the one used more commonly, as it's easier to determine than Θ and more practically useful than Ω.
- In C++, you can get a rough estimate of how fast your program will run by expecting it to perform about 1,000,000 operations per second, where the operations you count are given by the asymptotic behavior function describing your algorithm. For example, a Θ( n ) algorithm takes about a second to process the input for n = 1,000,000.
- Improving the asymptotic running time of a program often tremendously increases its performance, much more than any smaller "technical" optimizations such as using a faster programming language.
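As a sketch of the first rule above (counting nested loops), the two hypothetical functions below perform f( n ) = n and f( n ) = n^2 units of work respectively.

```cpp
#include <cstddef>
#include <iostream>

// One loop over n items: f(n) = n, i.e. Θ(n).
long long singleLoop(std::size_t n) {
    long long ops = 0;
    for (std::size_t i = 0; i < n; ++i) ++ops;
    return ops;
}

// A loop within a loop: f(n) = n * n, i.e. Θ(n^2).
long long doubleLoop(std::size_t n) {
    long long ops = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) ++ops;
    return ops;
}

int main() {
    std::cout << singleLoop(1000) << ' ' << doubleLoop(1000) << '\n';  // 1000 1000000
    return 0;
}
```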