Algorithms complexity and big-O notation
for (int i=0; i<n; i++) c++;
- 1 for "i=0"
- n+1 for "i<n"
- 2n for "i++" (i++ is equivalent to i = i+1 — which consists of two elementary operations)
- 2n for "c++"
One more example before we go to Big-O notation.
int i = 0; while (i < n) { for (int j = 0; j < n; j++) c++; i++; }
Let's count the complexity:
- i = 0 will be executed once.
- i < n in the while loop will be executed n + 1 times.
- i++ will be executed n times (2 elementary operations each, as above).
- The for-loop will be started n times, and each time:
  - j = 0 will be executed once;
  - j < n will be checked n + 1 times;
  - j++ will be executed n times (2 elementary operations each);
  - c++ will be executed n times (2 elementary operations each).
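The same instrumentation trick double-checks the nested case. Again, the "ops" counter and the sample n below are my own additions, and the per-statement costs follow the convention above (i++, j++ and c++ count as 2 elementary operations each); the printed number matches the closed form derived next.

#include <iostream>

int main() {
    long long n = 10;   // sample input size (hypothetical)
    long long ops = 0;  // counter of elementary operations (my own addition)
    long long c = 0;

    ops++;                               // "i = 0"
    long long i = 0;
    while (true) {
        ops++;                           // "i < n" : n + 1 checks overall
        if (!(i < n)) break;
        ops++;                           // "j = 0" : once per outer pass
        for (long long j = 0; ; ) {
            ops++;                       // "j < n" : n + 1 checks per outer pass
            if (!(j < n)) break;
            c = c + 1; ops += 2;         // "c++"
            j = j + 1; ops += 2;         // "j++"
        }
        i = i + 1; ops += 2;             // "i++"
    }

    std::cout << "n = " << n << ", counted ops = " << ops
              << ", 5n^2 + 5n + 2 = " << 5 * n * n + 5 * n + 2 << std::endl;
    return 0;
}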
So in total the number of elementary operations is T(n) = 1 + (n + 1) + 2n + n·(1 + (n + 1) + 2n + 2n) = 5n² + 5n + 2.
For real code samples the complexity may be quite a complicated function! :-) And it is easy to make a mistake in such a calculation, for instance to forget some summand. Moreover, there are many reasons not to bother calculating the precise form of the complexity function:
- Some operations are more expensive than others. The complexity function doesn't take that into account.
- The cost of one operation may differ between CPUs.
- The cost of one operation may differ between execution contexts (CPU caching, multithreading context switches, ...).
- Some costs are hidden when code is written in a high-level language (memory allocation, garbage collection, just-in-time compilation, etc.).
That's why engineers usually use a so-called asymptotic complexity estimation instead of the complexity function itself.
To make a long story short: you can just drop all summands except the one with the fastest growth rate, and then drop all constant factors. The result is called an asymptotic complexity estimation.
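For example, take the total T(n) = 5n² + 5n + 2 computed above: the fastest-growing summand is 5n², and after dropping the constant factor 5 the estimate is simply n². A small sketch (the loop over sample sizes and the printed ratio are just my way of illustrating that the lower-order terms become negligible):

#include <iostream>

int main() {
    // Compare the exact count T(n) = 5n^2 + 5n + 2 with its fastest-growing
    // summand 5n^2: the lower-order terms quickly become negligible.
    for (long long n = 10; n <= 1000000; n *= 10) {
        long long exact    = 5 * n * n + 5 * n + 2;
        long long dominant = 5 * n * n;
        std::cout << "n = " << n
                  << "  T(n) = " << exact
                  << "  5n^2 = " << dominant
                  << "  T(n) / 5n^2 = "
                  << static_cast<double>(exact) / static_cast<double>(dominant)
                  << "\n";
    }
    return 0;
}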
When estimating complexity, engineers usually use Big-O notation, written in the form O(n²) or Θ(n·log(n)). The strict formal definitions of this symbolic notation are below:
Definition 1. We say "g(n) = O(f(n))" ⇔ ∃c>0 ∃n₀: ∀n>n₀ → g(n) ≤ c·f(n)
Definition 2. We say "g(n) = Θ(f(n))" ⇔ g(n) = O(f(n)) and f(n) = O(g(n))
(Θ is the Greek capital letter theta.)
Examples: 5n² + 5n + 2 = Θ(n²); n = O(n²), but n ≠ Θ(n²); n·log(n) = O(n²), but n·log(n) ≠ Θ(n²).
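To connect the first of these examples with the definitions: for g(n) = 5n² + 5n + 2 and f(n) = n², the witnesses c = 6 and n₀ = 5 satisfy Definition 1 (because n² − 5n − 2 > 0 for every n ≥ 6), and in the other direction f(n) ≤ 1·g(n) holds for every n, so Definition 2 gives g(n) = Θ(n²). The concrete witnesses and the checked range below are my own choices; this is just a brute-force sanity check, not a proof:

#include <iostream>

// g(n) = 5n^2 + 5n + 2 (the operation count from the example above), f(n) = n^2.
long long g(long long n) { return 5 * n * n + 5 * n + 2; }
long long f(long long n) { return n * n; }

int main() {
    // Definition 1 in both directions, with concrete witnesses (my choice):
    //   g(n) <= 6 * f(n) for all n > 5   =>  g = O(f)
    //   f(n) <= 1 * g(n) for all n > 0   =>  f = O(g)
    // Together (Definition 2): g(n) = Theta(n^2).
    bool ok = true;
    for (long long n = 6; n <= 100000; ++n) {
        if (g(n) > 6 * f(n)) ok = false;   // would violate g = O(f) with c = 6, n0 = 5
        if (f(n) > 1 * g(n)) ok = false;   // would violate f = O(g) with c = 1
    }
    std::cout << (ok ? "5n^2 + 5n + 2 = Theta(n^2) holds on the checked range"
                     : "counterexample found")
              << std::endl;
    return 0;
}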
Treating these definitions informally: f = O(g) means "f grows no faster than g (ignoring constants)", while f = Θ(g) means "f grows at the same rate as g (ignoring constants)".
As you can see, Θ is a more precise and therefore more preferable estimate than O. But people very often use the O-symbol in the sense of the Θ-symbol, especially in informal conversations and especially in writing (maybe because not many people know how to type the Θ symbol). So, if you want to be a wet blanket, you should correct everybody who mistakenly uses O-notation instead of Θ-notation. ;-)