More work_mem lets the sort run in memory instead of spilling to disk, so it
can be much faster even with the same query plan.
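A quick way to see this effect, sketched with a hypothetical table `events` (table and column names are my own for illustration; actual plans and numbers will vary): compare the `Sort Method` line that `EXPLAIN ANALYZE` prints before and after raising work_mem for the session.

```sql
-- Illustrative only: "events" and "user_id" are hypothetical names.
-- With a small work_mem, a large sort spills to disk; EXPLAIN ANALYZE
-- then reports something like "Sort Method: external merge  Disk: ...kB".
EXPLAIN ANALYZE
SELECT user_id, count(*) FROM events
GROUP BY user_id ORDER BY count(*) DESC;

-- Raise work_mem for this session only, then re-run the same query.
-- If the sort now fits, the line changes to "Sort Method: quicksort
-- Memory: ...kB" even though the plan shape itself is unchanged.
SET work_mem = '500MB';
EXPLAIN ANALYZE
SELECT user_id, count(*) FROM events
GROUP BY user_id ORDER BY count(*) DESC;
```

Plain `EXPLAIN` (without ANALYZE) can look identical in both cases, which would match what was observed below: same plan, much faster execution.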


On Wed, Aug 20, 2008 at 12:54 AM, Moritz Onken <[EMAIL PROTECTED]>wrote:

>
> Am 19.08.2008 um 17:23 schrieb Moritz Onken:
>
>
>> Am 19.08.2008 um 16:49 schrieb Scott Carey:
>>
>>  What is your work_mem set to?  The default?
>>>
>>> Try increasing it significantly if you have the RAM and seeing if that
>>> affects the explain plan.  You may even want to set it to a number larger
>>> than the RAM you have just to see what happens.  In all honesty, it may be
>>> faster to overflow to OS swap space than sort too many rows, but ONLY if it
>>> changes the plan to a significantly more efficient one.
>>>
>>> Simply type
>>> SET work_mem = '500MB';
>>> before running your explain.  Set it to even more RAM if you have the
>>> space for this experiment.
>>>
>>> In my experience the performance of aggregates on large tables is
>>> significantly affected by work_mem, and the optimizer will choose poorly
>>> without enough of it.  It will rule out plans that may be fast enough even
>>> when overflowing to disk, in preference to colossal sorts (which likely
>>> also overflow to disk but take hours or days).
>>>
>>
>> Thanks for that advice, but the explain output didn't change :-(
>>
>> moritz
>>
>> --
>>
>
> Hi,
>
> I started the query with work_mem set to 3000MB. The explain output didn't
> change, but the query now runs much faster (about 10 times). Swap isn't
> being used. How can you explain that?
>
>
> moritz
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>