On Thu, Sep 29, 2005 at 03:28:27PM +0200, Zeugswetter Andreas DAZ SD wrote:
> > In my original example, a sequential scan of the 1TB of 2KB 
> > or 4KB records, => 250M or 500M records of data, being sorted 
> > on a binary value key will take ~1000x more time than reading 
> > in the ~1GB Btree I described that used a Key+RID (plus node 
> > pointers) representation of the data.
> Imho you seem to ignore the final step your algorithm needs of
> collecting the data rows. After you sorted the keys, the collect step
> will effectively access the tuples in random order (given a
> sufficiently large key range). This random access is bad. It
> effectively allows a competing algorithm to read the whole data at
> least 40 times sequentially, or write the set 20 times sequentially.
> (Those are the random/sequential ratios of modern discs.)

True, but there is a compromise: avoid shuffling full tuples around when
sorting in memory. Do the sort on pointers to the tuples, then write the
full tuples out to 'tape' in sorted order if a spill is needed.

Of course, the other issue here is that as the correlation between key
order and physical heap order improves, pointer-based sorting becomes
better and better, because the final collect step approaches sequential
access.
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
