Kevin Brown <[EMAIL PROTECTED]> writes:
> One problem I've been running into is the merge join spilling to disk
> because sort_mem isn't big enough.  The problem isn't that this is
> happening, it's that I think the planner is underestimating the impact
> that doing this will have on the time the merge join takes.  Does the
> planner even account for the possibility that a sort or join will spill
> to disk?

Yes it does.  I thought it was making a pretty good estimate, actually.
The only obvious hole in the assumptions is

 * The disk traffic is assumed to be half sequential and half random
 * accesses (XXX can't we refine that guess?)

Because of the way that tuplesort.c works, the first merge pass should
be pretty well sequential, but I think the passes after that might be
mostly random from the kernel's viewpoint :-(.  Possibly the I/O cost
should be adjusted depending on how many merge passes we expect.
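A rough sketch of that adjustment, in Python. Everything here is illustrative rather than the planner's actual cost model: the function name, the default merge order of 6, and the replacement-selection assumption that initial runs average about twice sort_mem are all stand-ins. The point is just that charging the first merge pass at sequential rates and later passes at random rates makes the cost grow with the number of passes:

```python
import math

def estimated_merge_io_cost(input_pages, sort_mem_pages, merge_order=6,
                            seq_page_cost=1.0, random_page_cost=4.0):
    """Hypothetical sketch of external-sort I/O cost: first merge pass
    charged at sequential rates, later passes at random rates.  Not
    PostgreSQL's real model; all parameters are illustrative."""
    if input_pages <= sort_mem_pages:
        return 0.0  # fits in sort_mem, so no disk traffic at all
    # Replacement selection produces initial runs averaging ~2x sort_mem.
    n_runs = max(1, math.ceil(input_pages / (2 * sort_mem_pages)))
    # Each merge pass reduces the run count by a factor of merge_order.
    passes = max(1, math.ceil(math.log(n_runs, merge_order)))
    # Every pass reads and writes the whole input once.
    pages_per_pass = 2 * input_pages
    cost = pages_per_pass * seq_page_cost                     # first pass
    cost += (passes - 1) * pages_per_pass * random_page_cost  # later passes
    return cost
```

For example, 10000 pages of input with 100 pages of sort_mem gives 50 initial runs and three merge passes under these assumptions, so two of the three passes get charged at the random rate.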


> In any case, one thing that none of this really accounts for is that
> it's better to set random_page_cost too low than too high.

That depends on what you are doing, although I will concede that a lot
of people are doing things where indexscans should be favored.
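To see why the setting tilts that choice, here is a toy model (nothing like the real planner, which also accounts for CPU costs, correlation, caching, etc.): a seqscan reads every page sequentially, while an indexscan fetches roughly selectivity * table_pages pages at random. The function name and the model itself are illustrative assumptions:

```python
def simplified_scan_costs(table_pages, selectivity, random_page_cost,
                          seq_page_cost=1.0):
    """Toy cost comparison, not the real planner: seqscan touches every
    page at the sequential rate; indexscan touches the selected fraction
    of pages at the random rate."""
    seqscan_cost = table_pages * seq_page_cost
    indexscan_cost = table_pages * selectivity * random_page_cost
    return seqscan_cost, indexscan_cost
```

With 1000 pages and 30% selectivity, random_page_cost = 4.0 makes the seqscan look cheaper (1000 vs 1200), while random_page_cost = 2.0 flips the decision (1000 vs 600) — which is why setting it too low systematically favors indexscans.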

                        regards, tom lane
