On Sun, Mar 18, 2012 at 3:25 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> No, it's about reducing the number of comparisons needed to maintain
>>> the heap property.
>
>> That sounds very interesting. I didn't know it was even theoretically
>> possible to do that.
>
> So has somebody found a hole in the n log n lower bound on the cost of
> comparison-based sorting? I thought that had been proven pretty
> rigorously.

I think what he's describing is, for example: say you have a heap of 1,000
elements and that lets you do the entire sort in a single pass -- i.e. it
generates a single sorted run after the first pass. Increasing the heap size
to 2,000 just means doing an extra comparison for each tuple as it sifts up,
and increasing the heap size further does more and more work for nothing.
It's still n log n, but the constant factor is slightly higher. Of course,
to optimize this you would need to be able to see the future.

I always thought the same behaviour held if the heap was larger than
necessary to do N merge passes -- that is, making the heap larger might
generate fewer tapes, but still enough tapes to require the same number of
passes. However, trying to work out an example, I'm not so sure. If you
generate fewer tapes, then the heap you use to do the merge is smaller, so
it might work out the same (or, more likely, you might come out ahead).

-- 
greg

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
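[Editor's note: the run-generation behaviour discussed above -- a heap of k
elements producing sorted runs, with a large enough heap yielding a single
run -- can be sketched with classic replacement selection. This is a
simplified illustration, not PostgreSQL's actual tuplesort code; the
function name and structure are invented for the example.]

```python
import heapq

def replacement_selection_runs(items, heap_size):
    """Sketch of replacement selection: build sorted runs from `items`
    using a heap of at most `heap_size` elements. An element smaller
    than the one just emitted cannot join the current run, so it is
    set aside for the next run."""
    it = iter(items)
    heap = []
    for x in it:
        heap.append(x)
        if len(heap) == heap_size:
            break
    heapq.heapify(heap)

    runs, run, next_run = [], [], []
    while heap:
        smallest = heapq.heappop(heap)
        run.append(smallest)
        try:
            incoming = next(it)
        except StopIteration:
            incoming = None
        if incoming is not None:
            if incoming >= smallest:
                # Still fits in the current run: replace the popped element.
                heapq.heappush(heap, incoming)
            else:
                # Too small for the current run: defer to the next run.
                next_run.append(incoming)
        if not heap:
            # Current run is exhausted; start the next one.
            runs.append(run)
            run = []
            heap = next_run
            next_run = []
            heapq.heapify(heap)
    if run:
        runs.append(run)
    return runs
```

With `heap_size >= len(items)` this produces a single sorted run (the
single-pass case above); shrinking the heap produces more, shorter runs,
which is what drives the number of tapes and merge passes.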