Re: [PERFORM] Performance 8.4.0

2009-08-02 Thread Robert Haas
On Fri, Jul 31, 2009 at 12:22 AM, Chris Dunn <chris.d...@bigredsky.com> wrote:
 constraint_exclusion = on

This is critical if you need it, but a waste of CPU time if you don't.
Other than that your parameters look good.  Are you using the default
page cost settings?  I see you have 12 GB RAM; how big is your
database?
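
To illustrate when it pays off (the table names here are made up),
constraint exclusion lets the planner skip child tables whose CHECK
constraints rule them out:

CREATE TABLE logs (logdate date NOT NULL, msg text);
CREATE TABLE logs_2009_07 (CHECK (logdate >= DATE '2009-07-01'
                             AND logdate < DATE '2009-08-01')) INHERITS (logs);
CREATE TABLE logs_2009_08 (CHECK (logdate >= DATE '2009-08-01'
                             AND logdate < DATE '2009-09-01')) INHERITS (logs);
-- with constraint_exclusion = on, the plan omits logs_2009_07 entirely:
EXPLAIN SELECT * FROM logs WHERE logdate >= DATE '2009-08-15';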

...Robert



FW: [PERFORM] Performance 8.4.0

2009-08-02 Thread Chris Dunn
The database is 8 GB currently. It used to be a lot bigger, but we moved all
the large objects out and built file-server storage for them. I am using the
default page costs on 8.4; I did have them changed on 8.1.4.

-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Sunday, 2 August 2009 11:26 PM
To: Chris Dunn
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance 8.4.0

On Fri, Jul 31, 2009 at 12:22 AM, Chris Dunn <chris.d...@bigredsky.com> wrote:
 constraint_exclusion = on

This is critical if you need it, but a waste of CPU time if you don't.
Other than that your parameters look good.  Are you using the default
page cost settings?  I see you have 12 GB RAM; how big is your
database?

...Robert



Re: FW: [PERFORM] Performance 8.4.0

2009-08-02 Thread Robert Haas
On Sun, Aug 2, 2009 at 10:04 PM, Chris Dunn <chris.d...@bigredsky.com> wrote:
 The database is 8 GB currently. It used to be a lot bigger, but we moved
 all the large objects out and built file-server storage for them. I am
 using the default page costs on 8.4; I did have them changed on 8.1.4.

You might want to play with lowering them.  The default page costs
make page accesses expensive relative to per-tuple operations, which
is appropriate if you are I/O-bound but not so much if you are CPU
bound, and especially if the whole database is memory resident.  I'd
try something like random_page_cost = seq_page_cost = 0.1 for
starters, or whatever values were working for you in 8.1, but the
sweet spot may be higher or lower.
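
A safe way to experiment (a sketch only; "orders" stands in for one of
your own large tables) is to lower the costs in a single session and
compare EXPLAIN ANALYZE plans before touching postgresql.conf:

SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
-- rerun a representative slow query; "orders" is a hypothetical table
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;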

...Robert



[PERFORM] Performance 8.4.0

2009-07-30 Thread Chris Dunn
Hi,

I would like to know if my configuration is OK. We run a web application
with a high transaction rate, and on Mondays/Tuesdays the database machine
is always at 100% CPU with no I/O wait. The machine is a dual quad-core
Xeon with 12 GB RAM and 4 Gb/s Fibre Channel to a NetApp SAN, with pg_xlog
on a separate LUN.
Could you please provide some feedback on the configuration?

maintenance_work_mem = 704MB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effective_cache_size = 8GB
work_mem = 72MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 2816MB
max_connections = 32

I have limited connections to 32 because anything higher sends the machine's
load average through the roof and decreases performance even further.
I am in the process of looking at a 4 x 6-core AMD Opteron machine with
32 GB RAM to replace this one if I cannot get any more performance out of it.

Kind Regards
Christopher Dunn




Re: [PERFORM] Performance 8.4.0

2009-07-30 Thread Greg Smith
Your settings look reasonable. I'd bump up checkpoint_segments to at least
double what you've got it set to now to lower general overhead a bit, though
I doubt that will help you much.
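
In postgresql.conf terms, that would be something like (32 is simply double
your current 16, not a tuned value):

checkpoint_segments = 32   # was 16; spaces checkpoints out, uses more WAL disk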


If you're at 100% CPU with no I/O wait, typically that means you have some 
heavy queries running that are gobbling up your CPU time.  Finding and 
improving those will probably buy you more than adjusting server 
parameters given that you already have everything in the right general 
ballpark.


Suggestions:

-Set log_min_duration_statement and analyze the results.  See 
http://wiki.postgresql.org/wiki/Logging_Difficult_Queries for more 
information about tools that might help.
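
For example, in postgresql.conf (the one-second threshold is just a
starting point, not a recommendation):

log_min_duration_statement = 1000   # log any statement taking 1000 ms or more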


-Capture snapshots of what the system is doing when it gets bogged down. 
I like to run the following periodically:


top -c -b -n 1
psql -c "select * from pg_stat_activity"

top -c will show you which processes are gobbling CPU time, and then you 
can see more detail about what those processes are doing by matching them 
up with the corresponding lines in pg_stat_activity.
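
If you'd rather capture only the active queries, sorted by how long they've
been running, a variant like this works against 8.4's pg_stat_activity
(procpid and current_query were renamed in later releases):

SELECT procpid, now() - query_start AS runtime, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>'
 ORDER BY runtime DESC;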


If you want more performance out of the hardware you've already got, finding
the worst queries and seeing if you can speed them up is where I'd start in
your case.  It only takes one badly written query to drag the whole system
to a crawl if a couple of clients get caught up executing it at the same
time.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD
