From: Simon Riggs <[EMAIL PROTECTED]>
Sent: Sep 23, 2005 5:37 AM
Subject: [PERFORM] Releasing memory during External sorting?
>I have concerns about whether we are overallocating memory for use in
>external sorts. (All code relating to this is in tuplesort.c)
A decent external sorting algorithm, say a Merge Sort + Radix (or
Distribution Counting) hybrid with appropriate optimizations for small
subfiles, should become more effective and efficient the more RAM you give it.
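To make the shape of the discussion concrete, here is a toy Python sketch of the classic two-pass external merge sort (in-memory lists stand in for on-disk run files; this is an illustration, not tuplesort.c's implementation):

```python
import heapq

def external_sort(records, mem_limit):
    """Toy two-pass external sort: split the input into sorted runs of
    at most mem_limit records (pass 1), then k-way merge all runs
    (pass 2). In a real implementation each run would live on disk."""
    runs, buf = [], []
    for r in records:
        buf.append(r)
        if len(buf) >= mem_limit:
            runs.append(sorted(buf))  # one "run file" written to disk
            buf = []
    if buf:
        runs.append(sorted(buf))
    # Pass 2: heapq.merge streams the runs, touching one record at a
    # time per run -- this is where sequential read bandwidth matters.
    return list(heapq.merge(*runs))
```

The more RAM pass 1 gets, the fewer and longer the runs, and the cheaper the merge.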
>The external sort algorithm benefits from some memory but not much.
That's probably an artifact of the PostgreSQL external sorting code and _not_
due to some fundamental property of external sorting.
>Knuth says that the amount of memory required is very low, with a value
>typically less than 1 kB.
"Required" means the external sort can operate on that little memory. How
Much memory is required for optimal performance is another matter.
>I/O overheads mean that there is benefit from having longer sequential
>writes, so the optimum is much larger than that. I've not seen any data
>that indicates that a setting higher than 16 MB adds any value at all to a
>large external sort.
It should. A first-pass upper bound would be the amount of RAM needed for
Replacement Selection to create a single run (i.e., sort) of the whole file.
That should be roughly the amount of RAM needed to hold half the file during a
Replacement Selection pass. At the simplest, for any file over 32 MB the
optimum should be more than 16 MB.
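Replacement Selection's "runs average twice the memory" behavior is easy to see in a sketch. This toy Python version (names and structure are mine, not tuplesort.c's) produces one run on already-sorted input, heap-sized runs on reversed input, and runs averaging ~2x the heap size on random input:

```python
import heapq

def replacement_selection_runs(records, heap_size):
    """Replacement selection: keep a min-heap of heap_size records; pop
    the smallest into the current run and refill from the input. A new
    record smaller than the one just emitted cannot join this run, so
    it is held back for the next one."""
    it = iter(records)
    heap = []
    for _ in range(heap_size):
        try:
            heap.append(next(it))
        except StopIteration:
            break
    heapq.heapify(heap)
    runs, current, pending = [], [], []
    while heap:
        smallest = heapq.heappop(heap)
        current.append(smallest)
        try:
            nxt = next(it)
            if nxt >= smallest:
                heapq.heappush(heap, nxt)  # still fits in this run
            else:
                pending.append(nxt)        # must wait for the next run
        except StopIteration:
            pass
        if not heap and pending:           # current run exhausted
            runs.append(current)
            current, heap, pending = [], pending, []
            heapq.heapify(heap)
    if current:
        runs.append(current)
    return runs
```

With enough RAM to hold half the file, a random input typically collapses into a single run, and the merge pass becomes a pure sequential read.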
> I have some indications from private tests that very high memory settings
>may actually hinder performance of the sorts, though I cannot explain that
>and wonder whether it is the performance tests themselves that have issues.
Hmmm. Are you talking about amounts so high that you are throwing the OS
into paging and swapping thrash behavior? If not, then the above is weird.
>Does anyone have any clear data that shows the value of large settings
>of work_mem when the data to be sorted is much larger than memory? (I am
>well aware of the value of setting work_mem higher for smaller sorts, so
>any performance data needs to reflect only very large sorts).
This is not PostgreSQL specific, but it does prove the point that the
performance of external sorts benefits greatly from large amounts of RAM being
available:
Looking at the particulars of the algorithms listed there should shed a lot of
light on what a "good" external sorting algorithm looks like:
1= HD IO matters the most.
1a= Seeking behavior is the largest factor in poor performance.
2= No optimal external sorting algorithm should use more than 2 passes.
3= Optimal external sorting algorithms should use 1 pass if at all possible.
4= Use as much RAM as possible, and use it as efficiently as possible.
5= The amount of RAM needed to hide the latency of a HD subsystem goes up as
the _square_ of the difference between the bandwidth of the HD subsystem and
that of the RAM.
6= Be cache friendly.
7= For large numbers of records whose sorting key is substantially smaller than
the record itself, use a pointer + compressed key representation and write the
records to HD in sorted order (replace HD seeks with RAM seeks; minimize RAM
seeks).
8= Since your performance will be constrained by HD IO first and RAM IO second,
up to a point it is worth it to spend more CPU cycles to save on IO.
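Point 7 above can be illustrated with a toy sketch: sort small (key prefix, record index) pairs instead of shuffling whole records, then walk the sorted pairs to emit each record exactly once in output order. The function name and key_len parameter are illustrative only:

```python
def sort_by_key_prefix(records, key_len=8):
    """Sort large records by moving only small (key prefix, index)
    pairs. The full record is consulted only as a tiebreaker when
    prefixes collide, so most comparisons touch key_len bytes, not
    whole records."""
    tagged = [(rec[:key_len], i) for i, rec in enumerate(records)]
    tagged.sort(key=lambda t: (t[0], records[t[1]]))
    # Emit records in sorted order -- in an external sort this is the
    # single sequential write to disk.
    return [records[i] for _, i in tagged]
```

The same idea underlies the "pointer + abbreviated key" layouts used by real sorters: the hot data the sort shuffles stays small and cache-resident, addressing points 6 and 7 together.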
Given the large and growing gap between CPU IO, RAM IO, and HD IO, these issues
are becoming more important for _internal_ sorts as well.
>Best Regards, Simon Riggs
Hope this is useful,