At 10:52 AM 2/16/2006, Ron wrote:
At 09:48 AM 2/16/2006, Martijn van Oosterhout wrote:

Where this does become interesting is where we can convert a datum to
an integer such that if f(A) > f(B) then A > B. Then we can sort on
f(X) first with just integer comparisons and then do a full tuple
comparison only if f(A) = f(B). This would be much more cache-coherent
and make these algorithms much more applicable in my mind.
In fact we can do better.
Using hash codes or the like to map datums to keys, sorting just the keys and pointers to those datums, and then optionally making a final pass to do the actual data movement is also a standard technique for handling large data structures.
I thought some follow up might be in order here.

Let's pretend that we have a typical DB table where rows are ~2-4KB apiece. 1TB of storage will let us have 256M-512M (2^28 to 2^29) rows in such a table.

A 32b key can be assigned to each row such that if key(A) < key(B) then row A sorts before row B. (With 2^28+ rows a 32b code cannot guarantee that only equal rows collide, so equal keys simply mean "do the full comparison", which is Martijn's f(A) = f(B) case above.)
A 32b pointer can locate any of the 256M-512M rows.

Now instead of sorting 1TB of data we can sort 2^28 to 2^29 entries of 32b key + 32b pointer = 64b (8B) apiece, i.e. 2-4GB of pointers+keys, followed by an optional pass to rearrange the actual rows if we so wish (sketched further below).
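
In C, the sort itself might look something like this. It's only a sketch, not PostgreSQL code: ROWSIZE, row_store, and rowcmp() are hypothetical stand-ins for wherever the real rows live and however they really compare.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define ROWSIZE 2048                    /* pretend rows are 2KB apiece */

static const unsigned char *row_store;  /* base of the 1TB of rows, e.g. mmap'd */

typedef struct
{
    uint32_t key;   /* order-preserving prefix: key(A) < key(B) => A < B */
    uint32_t row;   /* row number: 32b addresses all 2^28-2^29 rows */
} SortEntry;

/* Stand-in for the real full-tuple comparison. */
static int
rowcmp(uint32_t a, uint32_t b)
{
    return memcmp(row_store + (size_t) a * ROWSIZE,
                  row_store + (size_t) b * ROWSIZE,
                  ROWSIZE);
}

static int
entrycmp(const void *pa, const void *pb)
{
    const SortEntry *a = pa;
    const SortEntry *b = pb;

    if (a->key < b->key)
        return -1;
    if (a->key > b->key)
        return 1;
    /* Keys tie: this is Martijn's f(A) = f(B) case, so fall back
     * to the full tuple comparison. */
    return rowcmp(a->row, b->row);
}

void
sort_entries(SortEntry *entries, size_t nrows)
{
    /* 8B per entry: 2^28 entries = 2GB, 2^29 entries = 4GB. */
    qsort(entries, nrows, sizeof(SortEntry), entrycmp);
}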

We get the same result while examining and manipulating only 1/512 to 1/256 as much data during the sort (2-4GB instead of 1TB).
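
For completeness, the optional rearrangement pass mentioned above is just one gather driven by the sorted entries. Again a sketch, reusing SortEntry and ROWSIZE from above; out and rows are hypothetical buffers standing in for the real files:

void
rearrange_rows(const SortEntry *entries, size_t nrows,
               unsigned char *out, const unsigned char *rows)
{
    for (size_t i = 0; i < nrows; i++)
        memcpy(out + i * ROWSIZE,
               rows + (size_t) entries[i].row * ROWSIZE,
               ROWSIZE);
}

The row reads are random, but each row is touched exactly once and the output is written purely sequentially.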

If we want to spend more CPU time in order to save more space, we can compress the key+pointer representation. That can often reduce the data to be manipulated to ~1/4 of the raw key+pointer form, i.e. ~512MB-1GB worth of compressed pointers+keys, or ~1/2000 - ~1/1000 of the original 1TB.
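
As one illustrative sketch of the compression idea (reusing SortEntry from above): once entries are sorted, consecutive key deltas are small, so a delta + varint coding of a sorted run (e.g. between the passes of an external merge) packs most keys into 1-2 bytes. I'm not claiming this particular coder hits 4:1 by itself; squeezing the row numbers too takes bit-packing or similar, and the ratio is data-dependent.

/* Append v as a LEB128-style varint; returns bytes written. */
static size_t
put_varint(unsigned char *buf, uint32_t v)
{
    size_t n = 0;

    while (v >= 0x80)
    {
        buf[n++] = (unsigned char) (v | 0x80);  /* low 7 bits + continuation */
        v >>= 7;
    }
    buf[n++] = (unsigned char) v;
    return n;
}

/* Encode a sorted run of entries: keys as varint deltas, row numbers
 * as plain varints.  Returns the number of bytes used in out. */
size_t
compress_run(const SortEntry *entries, size_t nrows, unsigned char *out)
{
    size_t   pos = 0;
    uint32_t prev_key = 0;

    for (size_t i = 0; i < nrows; i++)
    {
        pos += put_varint(out + pos, entries[i].key - prev_key);
        pos += put_varint(out + pos, entries[i].row);
        prev_key = entries[i].key;
    }
    return pos;
}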

Either representation is small enough to fit within RAM rather than requiring HD IO, so we solve the HD IO bottleneck in the best possible way: we avoid ever doing it.

Ron

