On Sat, 2005-10-01 at 02:01 -0400, Tom Lane wrote:
> "Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
> > I think the largest speedup will be to dump the multiphase merge and
> > merge all tapes in one pass, no matter how large M. Currently M is
> > capped at 6, so a sort of 60GB with 1GB sort memory needs 13 passes over
> > the tape. It could be done in a single pass heap merge with N*log(M)
> > comparisons, and, more importantly, far less input and output.
> I had more or less despaired of this thread yielding any usable ideas
> :-( but I think you have one here. The reason the current code uses a
> six-way merge is that Knuth's figure 70 (p. 273 of volume 3 first
> edition) shows that there's not much incremental gain from using more
> tapes ... if you are in the regime where number of runs is much greater
> than number of tape drives. But if you can stay in the regime where
> only one merge pass is needed, that is obviously a win.
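The single-pass merge above is just a binary min-heap over the front tuple of
each run; here's a hypothetical sketch in C (integers standing in for tuples,
not the actual tuplesort code), where emitting each element costs O(log M)
comparisons for M runs:

```c
#include <stdio.h>
#include <stdlib.h>

/* One heap entry: the current front tuple of a run, plus which run owns it. */
typedef struct { int value; int run; } HeapEntry;

/* A sorted run: its data, length, and current read position. */
typedef struct { const int *data; int len; int pos; } Run;

static void sift_down(HeapEntry *heap, int n, int i)
{
    for (;;) {
        int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && heap[l].value < heap[smallest].value) smallest = l;
        if (r < n && heap[r].value < heap[smallest].value) smallest = r;
        if (smallest == i) break;
        HeapEntry tmp = heap[i]; heap[i] = heap[smallest]; heap[smallest] = tmp;
        i = smallest;
    }
}

/* Merge nruns sorted runs into out in a single pass; returns elements
 * written.  Each emitted element costs one sift_down, i.e. O(log nruns)
 * comparisons, so the whole merge is O(total_tuples * log nruns). */
static int merge_runs(Run *runs, int nruns, int *out)
{
    HeapEntry *heap = malloc(nruns * sizeof(HeapEntry));
    int n = 0, written = 0;

    for (int i = 0; i < nruns; i++)
        if (runs[i].len > 0)
            heap[n++] = (HeapEntry){ runs[i].data[0], i };
    for (int i = n / 2 - 1; i >= 0; i--)
        sift_down(heap, n, i);

    while (n > 0) {
        Run *r = &runs[heap[0].run];
        out[written++] = heap[0].value;
        if (++r->pos < r->len)
            heap[0].value = r->data[r->pos];  /* pull next tuple from same run */
        else
            heap[0] = heap[--n];              /* run exhausted: shrink heap */
        sift_down(heap, n, 0);
    }
    free(heap);
    return written;
}
```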
> I don't believe we can simply legislate that there be only one merge
> pass. That would mean that, if we end up with N runs after the initial
> run-forming phase, we need to fit N tuples in memory --- no matter how
> large N is, or how small work_mem is. But it seems like a good idea to
> try to use an N-way merge where N is as large as work_mem will allow.
> We'd not have to decide on the value of N until after we've completed
> the run-forming phase, at which time we've already seen every tuple
> once, and so we can compute a safe value for N as work_mem divided by
> largest_tuple_size. (Tape I/O buffers would have to be counted too
> of course.)
> It's been a good while since I looked at the sort code, and so I don't
> recall if there are any fundamental reasons for having a compile-time-
> constant value of the merge order rather than choosing it at runtime.
> My guess is that any inefficiencies added by making it variable would
> be well repaid by the potential savings in I/O.
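Computing N at runtime as you describe is simple arithmetic; a hypothetical
sketch (the names and byte counts below are illustrative, not PostgreSQL's
real accounting):

```c
/* Compute a safe merge order at runtime: how many runs can each keep one
 * tuple plus one tape I/O buffer resident within work_mem?  The parameters
 * and sizes here are made up for illustration. */
static int safe_merge_order(long work_mem_bytes,
                            long largest_tuple_bytes,
                            long tape_buffer_bytes)
{
    long per_run = largest_tuple_bytes + tape_buffer_bytes;
    long n = work_mem_bytes / per_run;
    return n < 2 ? 2 : (int) n;   /* a merge needs at least two inputs */
}
```

So with, say, 1MB of work_mem, 200-byte tuples, and 8KB tape buffers, we
could afford roughly a 124-way merge rather than a 6-way one.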
Well, perhaps Knuth is not untouchable!
So we merge R runs with a variable merge order N rather than the fixed N=6.
Pick N so that 6 <= N <= R, with N limited by memory and kept small enough
that each tape still gets a buffer large enough for long sequential reads
from the temp file.
Looking at the code, in selectnewtape() we decide on the connection
between run number and tape number. This gets executed during the
writing of initial runs, which was OK when the run->tape mapping was
known ahead of time because of fixed N.
To do this it sounds like we'd do better to write each run out to its own
logical tape, on the assumption that N can be very large. Then, when all
runs are built, assign run numbers to tapes for the merge. That is likely to
be a trivial (identity) mapping, unless memory limits N to fewer tapes than
there are runs. That should be easily possible because the tape numbers were
abstract anyway.
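To illustrate why it matters whether N can reach the number of runs, here's a
hypothetical back-of-envelope count of how many merge passes a fixed merge
order forces (not code from the tree, just the arithmetic):

```c
/* How many merge passes does it take to reduce nruns runs to one, given a
 * fixed merge order?  With merge_order >= nruns a single pass suffices,
 * which is the payoff of choosing N at runtime. */
static int merge_passes(int nruns, int merge_order)
{
    int passes = 0;
    while (nruns > 1) {
        /* each pass combines up to merge_order runs into one */
        nruns = (nruns + merge_order - 1) / merge_order;
        passes++;
    }
    return passes;
}
```

With 60 runs, a 6-way merge needs 3 passes over the data; a 60-way merge
needs only 1.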
Right now, I can't see any inefficiencies from doing this. It uses memory
better, and Knuth shows that using more tapes is better anyhow. Keeping
track of more tapes isn't too costly, even for hundreds or thousands of
runs/tapes.
Tom, it's your idea, so you have first dibs. I'm happy to code this up if
you choose not to, once I've done my other immediate chores.
That just leaves these issues for a later time:
- CPU and I/O interleaving
- CPU cost of abstract data type comparison operator invocation
Best Regards, Simon Riggs