On Sun, Mar 18, 2012 at 8:56 AM, Greg Stark <st...@mit.edu> wrote:
> On Sun, Mar 18, 2012 at 3:25 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>>> No, it's about reducing the number of comparisons needed to maintain
>>>> the heap property.
>>
>>> That sounds very interesting.  I didn't know it was even theoretically
>>> possible to do that.
>>
>> So has somebody found a hole in the n log n lower bound on the cost of
>> comparison-based sorting?  I thought that had been proven pretty
>> rigorously.
>
> I think what he's describing is, for example, say you have a heap of
> 1,000 elements and that lets you do the entire sort in a single pass
> -- i.e. it generates a single sorted run after the first pass.
> Increasing the heap size to 2,000 will just mean doing an extra
> comparison for each tuple to sift up.  And increasing the heap size
> further will just do more and more work for nothing.  It's still
> n log n, but the constant factor is slightly higher.  Of course, to
> optimize this you would need to be able to see the future.
>
> I always thought the same behaviour held if the heap was larger than
> necessary to do N merge passes: that is, making the heap larger might
> generate fewer tapes, but still enough to require the same number of
> passes.  However, trying to work out an example, I'm not too sure.  If
> you generate fewer tapes, then the heap you use to do the merge is
> smaller, so it might work out the same (or more likely, you might come
> out ahead).

Except for rounding effects, it does come out the same. Every extra
layer on the tuple-heap during the initial run building causes a reduced
layer on the tape-heap during the merge, so they wash.

I haven't analyzed the exact rounding effects in detail, but just by
instrumenting the code I found that a huge tuple-heap with a two-tape
merge at the end used less than one percent more comparisons, yet was
>40% slower, than a lower-memory sort which used a modest-sized
tuple-heap followed by a modest-sized tape-heap merge. My conclusion is
that it isn't the number of comparisons that drove the difference, but
the number of on-chip cache misses: two modest-sized heaps are more
cache friendly than one giant heap plus one tiny heap.

Of course, if the data is partially sorted in a way that is apparent
over large ranges but not short ranges, the larger initial heap will
capture that during the initial run construction while the smaller one
will not.

Cheers,

Jeff

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
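[Editor's note: a toy Python sketch, not PostgreSQL's tuplesort, illustrating
the "wash" described above. Assumptions: replacement-selection run generation
(runs average about 2x the heap size on random input) followed by a single
multi-way merge; the names `Item`, `replacement_selection`, and
`external_sort` are invented for this sketch. Per-element cost is roughly
log2(heap_size) + log2(num_runs) ~ log2(N/2), so total comparisons come out
nearly the same for very different heap sizes.]

```python
import heapq
import random

class Item:
    """Value wrapper that counts comparisons (illustrative only)."""
    count = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Item.count += 1
        return self.v < other.v

def replacement_selection(data, heap_size):
    """Produce sorted runs with a heap of `heap_size` entries.
    Heap entries are (run_number, Item); each run comes out sorted, and on
    random input a run averages about 2 * heap_size elements."""
    src = iter(data)
    heap = []
    for _ in range(heap_size):
        try:
            heap.append((0, Item(next(src))))
        except StopIteration:
            break
    heapq.heapify(heap)
    runs = [[]]
    while heap:
        run, item = heapq.heappop(heap)
        if run == len(runs):
            runs.append([])
        runs[run].append(item)
        try:
            x = next(src)
        except StopIteration:
            continue
        # If x can still be emitted in the current run, keep it there;
        # otherwise it must wait for the next run.
        nxt = run if x >= item.v else run + 1
        heapq.heappush(heap, (nxt, Item(x)))
    return runs

def external_sort(data, heap_size):
    """Run generation plus one multi-way merge.
    Returns (sorted_values, comparison_count, number_of_runs)."""
    Item.count = 0
    runs = replacement_selection(data, heap_size)
    merged = [it.v for it in heapq.merge(*runs)]
    return merged, Item.count, len(runs)

random.seed(1)
data = [random.random() for _ in range(20000)]
for m in (16, 256):
    out, comps, nruns = external_sort(data, m)
    assert out == sorted(data)
    print(f"heap={m:4d}  runs={nruns:4d}  comparisons={comps}")
```

A 16x larger run-generation heap yields roughly 16x fewer runs to merge, so
the comparison totals stay within a few percent of each other, matching the
instrumented observation that comparisons are nearly constant while cache
behavior, which this sketch does not model, drives the real timing gap.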