Simon Riggs <[EMAIL PROTECTED]> writes:
> Since we know the predicted size of the sort set prior to starting the
> sort node, could we not use that information to allocate memory
> appropriately? i.e. if sort size is predicted to be more than twice the
> size of work_mem, then just move straight to the external sort algorithm
> and set the work_mem down at the lower limit?

Have you actually read the sort code?

During the run-forming phase it's definitely useful to eat all the
memory you can: that translates directly to longer initial runs and
hence fewer merge passes.  During the run-merging phase it's possible
that using less memory would not hurt performance any, but as already
stated, I don't think it will actually end up cutting the backend's
memory footprint --- the sbrk point (the high-water mark of the process
heap) will be established during the run-forming phase, and it's unlikely
to move back much until transaction end.
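
To put numbers on the run-forming point: below is a back-of-the-envelope
sketch in C (hypothetical code, not tuplesort.c; the 2M average run
length for replacement selection is the textbook rule of thumb for
randomly ordered input, per Knuth vol. 3).  With a heap of M tuples the
initial runs average about 2M, and an F-way merge then needs roughly
ceil(log_F(number of runs)) passes:

    #include <math.h>
    #include <stdio.h>

    /*
     * Hypothetical illustration (not the real tuplesort.c logic):
     * estimate merge passes for an external sort of n_tuples, given a
     * run-formation heap of mem_tuples and an F-way merge.  Replacement
     * selection yields initial runs of about 2 * mem_tuples on randomly
     * ordered input, so more memory means fewer, longer runs.
     */
    static int
    estimate_merge_passes(double n_tuples, double mem_tuples, double fanin)
    {
        double      n_runs = ceil(n_tuples / (2.0 * mem_tuples));

        if (n_runs <= 1.0)
            return 0;           /* input fit in one run: no merging */
        return (int) ceil(log(n_runs) / log(fanin));
    }

    int
    main(void)
    {
        /* 100M tuples, 6-way merge: quadrupling memory saves a pass */
        printf("passes with 100k-tuple heap: %d\n",
               estimate_merge_passes(1e8, 1e5, 6.0));   /* 500 runs -> 4 */
        printf("passes with 400k-tuple heap: %d\n",
               estimate_merge_passes(1e8, 4e5, 6.0));   /* 125 runs -> 3 */
        return 0;
    }

Since every merge pass reads and writes the entire data set once,
trimming run-formation memory directly buys you extra passes over the
data.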

Also, if I recall the development of that code correctly, the reason for
using more than the minimum memory during the merge phase is that writing or
reading lots of tuples at once improves sequentiality of access to the
temp files.  So I'm not sure that cutting down the memory wouldn't hurt
performance.
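
To illustrate the read side of that: here's a rough sketch of per-tape
buffering (hypothetical code; the struct and function names are
invented, not logtape.c's).  Each input tape gets a large private
buffer refilled in one big sequential read, so the merge seeks between
temp files once per buffer-load rather than once per tuple:

    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Hypothetical per-tape read buffer (not the real logtape.c code).
     * Refilling a large buffer in one fread() keeps access to each temp
     * file sequential; with tuple-at-a-time reads across K tapes the
     * kernel instead sees a nearly random, interleaved pattern.
     */
    typedef struct TapeBuffer
    {
        FILE       *fp;
        char       *buf;
        size_t      bufsize;    /* bytes per refill: memory vs. seeks */
        size_t      nbytes;     /* valid bytes currently in buf */
        size_t      pos;        /* next unread byte in buf */
    } TapeBuffer;

    /* Grab the next fixed-size item, issuing one big sequential read
     * each time the buffer runs dry.  (Assumes itemsize divides bufsize;
     * real code must cope with items split across reads.) */
    static void *
    tape_next(TapeBuffer *t, size_t itemsize)
    {
        if (t->pos + itemsize > t->nbytes)
        {
            t->nbytes = fread(t->buf, 1, t->bufsize, t->fp);
            t->pos = 0;
            if (t->nbytes < itemsize)
                return NULL;    /* this tape is exhausted */
        }
        t->pos += itemsize;
        return t->buf + t->pos - itemsize;
    }

    int
    main(void)
    {
        TapeBuffer  t = {0};
        int         i;

        /* Fake "tape": a temp file holding 100000 ints */
        t.fp = tmpfile();
        for (i = 0; i < 100000; i++)
            fwrite(&i, sizeof(i), 1, t.fp);
        rewind(t.fp);

        t.bufsize = 64 * 1024;  /* each refill covers 16384 ints */
        t.buf = malloc(t.bufsize);

        while (tape_next(&t, sizeof(int)) != NULL)
            ;                   /* a handful of big reads, not 100000 */

        free(t.buf);
        fclose(t.fp);
        return 0;
    }

Shrinking bufsize is exactly the trade-off in question: it cuts the
memory footprint, but it turns a few large sequential reads per tape
into many small reads interleaved across all the tapes.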

                        regards, tom lane
