On 19/06/17 00:32, Richard Kuebbing wrote:
Is a binary search ever appropriate on machines like the Z where the
penalty for not being in the cache is so high?

Searching a 1GB table (say, 1 million 1kB entries) via sequential
search would require loading 500MB of data into the cache on average.

Using binary search, only about 20 blocks of memory need to be loaded
into the cache: one per probe, and log2 of one million is roughly 20.
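To make the block-count comparison concrete, here is a minimal C sketch of
a binary search over a sorted in-memory table of fixed-size entries. The
1kB entry size and 16-byte leading key are my own assumptions for
illustration, not anything from the scenario above; the point is only that
each probe touches one entry, so at most ~20 entries are pulled into cache
for a million-entry table, versus half a million on average for a scan.

#include <stddef.h>
#include <string.h>

/* Assumed record layout for illustration: 1 kB entries, sorted on a
   16-byte key stored at the start of each entry. */
#define ENTRY_SIZE 1024
#define KEY_SIZE   16

/* Binary search over n sorted entries: touches at most about log2(n)
   entries (~20 for one million), versus n/2 on average for a
   sequential scan. */
static const char *find_entry(const char *table, size_t n, const char *key)
{
    size_t lo = 0, hi = n;              /* half-open search range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        const char *entry = table + mid * ENTRY_SIZE;
        int cmp = memcmp(key, entry, KEY_SIZE);
        if (cmp == 0)
            return entry;               /* found after ~20 probes at most */
        if (cmp < 0)
            hi = mid;                   /* key sorts before this entry */
        else
            lo = mid + 1;               /* key sorts after this entry */
    }
    return NULL;                        /* not present in the table */
}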

As processors get faster and memory gets larger, our programs can handle
much larger data sets, so the choice of the right algorithm becomes
*more* important, not less! The ratio between O(n) and O(log n),
or between O(n^2) and O(n log n), increases as n gets larger.
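To put a number on that widening gap, here is a tiny, illustrative C
program (the table sizes are my own picks, nothing from the thread) that
prints the ratio n / log2(n) for a few values of n:

#include <math.h>
#include <stdio.h>

/* Illustrative only: how the O(n) / O(log n) ratio grows with n. */
int main(void)
{
    double sizes[] = { 1e3, 1e6, 1e9 };
    for (int i = 0; i < 3; i++) {
        double n = sizes[i];
        printf("n = %.0e: n / log2(n) = %.0f\n", n, n / log2(n));
    }
    return 0;
}

/* Prints roughly 100, 50 thousand and 33 million: the advantage of
   the O(log n) algorithm keeps widening as n grows. */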

For O(n) versus O(log n) (e.g. sequential search vs binary search),
uncached memory speed has not increased in line with memory size,
so a fast modern mainframe takes *longer* to sequentially search
its entire memory space than an old, slow mainframe did.

For O(n^2) versus O(n log n) (e.g. insertion sort vs quicksort),
the gap widens even if memory speed keeps pace
with increasing memory size.

--
                        Martin

Dr Martin Ward STRL Principal Lecturer & Reader in Software Engineering
mar...@gkc.org.uk  http://www.cse.dmu.ac.uk/~mward/  Erdos number: 4
G.K.Chesterton web site: http://www.cse.dmu.ac.uk/~mward/gkc/
Mirrors:  http://www.gkc.org.uk  and  http://www.gkc.org.uk/gkc
