On 05/13/2011 01:44 PM, Merrill Oveson wrote:
> big-O?

The Wikipedia article makes it sound complicated.  Here it is in a nutshell.

Let's say you have two algorithms that do the same thing but in a 
different way, and you need to choose one.  Big-O notation is a good way 
to compare the theoretical scalability of algorithms.

Let's say an algorithm iterates over a list, and for each element in the 
list, iterates *again* over the entire list.  This is O(n^2) behavior, 
and it's usually a sign of a naive implementation.
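Here's a minimal Python sketch of that O(n^2) pattern, using a hypothetical duplicate-finding task as the example:

```python
def has_duplicates_naive(items):
    """O(n^2): for each element, scan the entire list again."""
    for i, a in enumerate(items):
        for j, b in enumerate(items):
            if i != j and a == b:
                return True
    return False

print(has_duplicates_naive([1, 2, 3, 2]))  # True
print(has_duplicates_naive([1, 2, 3]))     # False
```

For a list of 10,000 elements, that inner loop runs 100 million times.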

Another algorithm might use hashes and buckets.  It still has to iterate 
over every element of the list, but for each element, it only does a 
cheap hash lookup into a relatively small set of buckets.  This kind of 
algorithm has roughly O(n) average-case behavior (a sort-based approach 
would be O(n log n)), and either one scales a lot better than O(n^2).
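The same hypothetical duplicate-finding task, sketched with Python's built-in hash-based set: each membership check is an average O(1) hash lookup into the buckets, so the whole pass averages O(n) and scales far better than nested iteration:

```python
def has_duplicates_hashed(items):
    """Average O(n): one pass, with an O(1) average hash lookup per element."""
    seen = set()
    for item in items:
        if item in seen:   # hash lookup into the buckets
            return True
        seen.add(item)
    return False

print(has_duplicates_hashed([1, 2, 3, 2]))  # True
print(has_duplicates_hashed([1, 2, 3]))     # False
```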

Everyone loves an O(1) algorithm, which means that the algorithm takes 
the same amount of time regardless of the number of inputs.  Selecting a 
random item from an array-backed list is an O(1) operation, because 
indexing is constant time.
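In Python, for example, list indexing is constant time, so picking a random element costs the same whether the list has ten items or ten million:

```python
import random

def random_item(items):
    """O(1): one random index computation plus one constant-time list index."""
    return items[random.randrange(len(items))]

print(random_item([10, 20, 30]))  # prints one of 10, 20, 30
```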

In real life, sometimes an O(1) algorithm takes longer than an O(n^2) 
algorithm: for small inputs, constant factors dominate and the 
asymptotic behavior never kicks in.  Also, sometimes the scalability of 
some particular operation is not very important, and the "bad" algorithm 
that is much easier to implement saves a lot of time and money.  Still, 
big-O analysis is very important, and developers should be thinking 
about it in *everything* they write.

Shane

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
