Larry,

 

Considering these recommendations, let's try setting shared_buffers to 2GB
and work_mem to 16MB. The catch is that work_mem is allocated per sort or hash
operation, so each connection can use at least one work_mem-sized chunk, and
if we get too aggressive and we get a lot of simultaneous users, we can
potentially eat up a lot of memory.

 

So, with max_connections = 100, that's 2GB + (100 * 16MB) = roughly 3.6GB of
RAM eaten up under peak load for these two values alone, assuming each
connection uses a single work_mem allocation at a time.
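
In postgresql.conf terms, that would look something like this - just a sketch
of the values discussed above, and shared_buffers only takes effect after a
restart:

  # postgresql.conf - proposed starting values (sketch based on the numbers above)
  shared_buffers = 2GB         # one shared pool per server; change requires a restart
  work_mem = 16MB              # per sort/hash operation, per connection
  max_connections = 100        # worst case roughly 2GB + (100 * 16MB) = ~3.6GB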

 

If we want to get more aggressive, we can raise work_mem.

 

Carlo

 

-----Original Message-----
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: November 1, 2007 5:39 PM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to avoid hashjoin and mergejoin

 

On 11/1/07, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:

> I am comparing the same query on two different PG 8.2 servers, one Linux
> (8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPUs.
>
> The Windows postgresql.conf is pretty well tuned, but it looks like
> someone had wiped out the Linux config so the default one was re-installed.
> All performance-related memory allocation values seem to be set to the
> defaults, but mods have been made: max_connections = 100 and
> shared_buffers = 32MB.
>
> The performance for this query is terrible on the Linux server, and good on
> the Windows server - presumably because the original Linux PG config has
> been lost. This query requires that "set enable_seqscan to 'off';"

 

Have you run analyze on the server yet?
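
If not, refreshing the planner statistics is a one-liner; ANALYZE with no
arguments covers every table in the current database (the table name in the
second form is just a placeholder):

  -- update planner statistics for all tables in the current database
  ANALYZE;
  -- or a single table, with progress output ("orders" is a placeholder name)
  ANALYZE VERBOSE orders;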

 

A few general points on performance tuning.  With 8.2 you should set
shared_buffers to a pretty big chunk of memory on Linux, up to 25% of RAM or
so.  That means 32MB of shared_buffers is REAL low for a Linux server.
Try running anywhere from 512MB up to 1GB for starters and see if
that helps too.  Also turn up work_mem to something like 16 to 32MB,
then restart the server after making these changes.
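
A quick way to confirm what the Linux box is actually running with, before and
after editing postgresql.conf (nothing here is site-specific, these are the
standard settings):

  -- show the values currently in effect on the server
  SHOW shared_buffers;
  SHOW work_mem;
  SHOW max_connections;
  -- a new shared_buffers value will only show up here after a full restart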

 

Then give us the explain analyze output with all the enable_xxx set to ON.
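
In other words, something along these lines once the old session override is
gone (the SELECT is only a placeholder for the real query):

  -- put the planner back to its defaults for this session
  SET enable_seqscan = on;
  SET enable_hashjoin = on;
  SET enable_mergejoin = on;
  -- then capture the plan with actual timings ("my_table" is a placeholder)
  EXPLAIN ANALYZE SELECT * FROM my_table;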

 

summary: analyze, increase shared_buffers and work_mem, give us explain
analyze.
