Re: [PERFORM] Need advice on postgresql.conf settings

2004-11-10 Thread Jeff
On Nov 9, 2004, at 2:01 PM, Shane | SkinnyCorp wrote:
> Thanks in advance for anything you can do to help.
>
> The real issue is this, we have THE SAME queries taking anywhere from
> .001 - 90.0 seconds... the server is using 98% of the available RAM at
> all times (because of the persistent connections via php), and I don't
> know what to do.  Every time I change a [...]
I'd strongly recommend ditching pconnect in favor of pgpool + regular
connect. It's a terrific combination that provides pooled connections
the way you'd think they should work (a pool of N connections to PG
shared by Y processes, instead of a 1:1 mapping).
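
As a sketch (assuming pgpool listens on its default port 9999 on the
same box; the dbname/user values here are placeholders), the PHP side
goes from pg_pconnect() straight at the database to a plain
pg_connect() aimed at pgpool:

    <?php
    // Before: persistent connection straight to PostgreSQL (port 5432)
    // $db = pg_pconnect("host=localhost port=5432 dbname=app user=web");

    // After: an ordinary connection through pgpool; pgpool maintains
    // the actual pool of backend connections for us
    $db = pg_connect("host=localhost port=9999 dbname=app user=web");
    if (!$db) {
        die("connection to pgpool failed\n");
    }
    ?>

Queries should pass through pgpool unmodified, so this is usually the
only change the scripts need.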

Curiously, have you noticed any pattern to the slowdown?
It could be induced by a checkpoint or a vacuum.
Are you swapping at all?
Are your PHP scripts leaking memory, etc.?
Your load average is high; how does your CPU idle look?  (If load is
high and the CPUs are mostly idle, that's an indicator of being
IO-bound.)
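
A quick way to check both (Linux commands; the flags are the usual
procps ones, adjust for your platform):

    $ free -m     # any swap in use at all?
    $ vmstat 5    # nonzero si/so columns => actively swapping
    $ top         # high load + high %idle in the CPU line => IO-bound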

good luck.
--
Jeff Trout <[EMAIL PROTECTED]>
http://www.jefftrout.com/
http://www.stuarthamm.net/


Re: [PERFORM] Need advice on postgresql.conf settings

2004-11-09 Thread Sean Chittenden
>> The real issue is this, we have THE SAME queries taking anywhere
>> from .001 - 90.0 seconds... the server is using 98% of the available
>> RAM at all times (because of the persistent connections via php),
>> and I don't know what to do.
>
> Another possible line of attack is to use persistent (pooled)
> connections to cut down the number of live backend processes you
> need.  However, depending on what your application software is, that
> might take more time/effort (= money) than dropping in some more RAM.
This particular feature is pure evilness.  Using all of my fingers and 
toes, I can't count the number of times I've had a client do this and 
get themselves into a world of hurt.  Somewhere in the PHP 
documentation, there should be a big warning wrapped in the blink tag 
that steers people away from setting this.  The extra time necessary to 
set up a TCP connection is less than the performance drag induced on the 
backend when persistent connections are enabled.  Reread that last 
sentence until it sinks in.  On a local network, this is premature 
optimization that's hurting you.
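
If changing the application code has to wait, PHP can be told to refuse
persistent connections outright in php.ini (pgsql.allow_persistent is a
real directive; the other two lines are optional belt-and-suspenders,
and the values are only examples):

    ; php.ini
    pgsql.allow_persistent = Off  ; pg_pconnect() then behaves like pg_connect()
    pgsql.max_persistent = 0      ; cap persistent links per process
    pgsql.max_links = 50          ; cap total links per process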

>> max_files_per_process = 3052   # min 25
>
> You really have your kernel set to support 3052 * 75 simultaneously
> open files?  Back this off.  I doubt values beyond a couple hundred
> buy anything except headaches.
This, on the other hand, has made a large difference for me.  The time 
needed to complete open(2) calls can be expensive, especially when the 
database is poorly designed and a backend is touching many different 
parts of the database spread across multiple files.  3000 is high, but 
I've found 500 to be vastly too low in some cases... in others, it's 
just fine.  My rule of thumb has become: if you're doing lots of 
aggregate functions (e.g., SUM(), COUNT()) more than once in the 
lifetime of a backend, increasing this value helps; otherwise it buys 
you little (and when it does help, 1500 is generally sufficient).  
Faster IO, however, is what will really save you here.  If you can, 
increase your disk caching in the OS.  On FreeBSD, increase your 
KVA_PAGES and NBUF kernel options.  Since you've freed up more RAM by 
disabling persistent connections, this shouldn't be a problem.  -sc
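
For reference, that FreeBSD tuning looks roughly like this in the
kernel config (the option names are real; the values are illustrative
and depend on how much RAM the box has):

    # i386 kernel config
    options  KVA_PAGES=512   # kernel virtual address space, in 4 MB
                             # units: 512 = 2 GB (the default is 256)
    options  NBUF=8192       # more filesystem buffer headers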

--
Sean Chittenden


Re: [PERFORM] Need advice on postgresql.conf settings

2004-11-09 Thread Tom Lane
"Shane | SkinnyCorp" <[EMAIL PROTECTED]> writes:
> The real issue is this, we have THE SAME queries taking anywhere from .001 - 
> 90.0 seconds... the server is using 98% of the available RAM at all times 
> (because of the persistent connections via php), and I don't know what to 
> do.

I have a feeling that the answer is going to boil down to "buy more RAM"
--- it sounds a lot like you're just overstressing your server.  The
more active backends you have, the more RAM goes to process-local
memory, and the less is available for kernel disk cache.  Even if you
don't go into outright swapping, the amount of disk I/O needed goes up
the smaller the kernel disk cache gets.

Another possible line of attack is to use persistent (pooled)
connections to cut down the number of live backend processes you need.
However, depending on what your application software is, that might
take more time/effort (= money) than dropping in some more RAM.

You can investigate this theory by watching "top" output (the first few
lines about memory usage, not the process listing) as well as "vmstat"
output.
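
Concretely (Linux procps syntax; adjust for your platform):

    $ top -b -n 1 | head -5   # batch mode: just the summary header.
                              # On Linux the Mem:/Swap: lines include a
                              # "cached" figure -- that's the kernel disk
                              # cache; watch whether the backends are
                              # squeezing it smaller over time
    $ vmstat 5                # per-interval swap (si/so) and disk (bi/bo)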

> uptime: 12:23:08  up 132 days, 19:16,  2 users,  load average: 19.75, 
> 17.34, 18.86

Load averages approaching 20 are not good either ... what sort of box
are you running on anyway?

As for the postgresql.conf settings, the only ones I'd seriously question
are

max_fsm_pages = 20000   # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 1000   # min 100, ~50 bytes each

These are the defaults, and are probably too small for a DB exceeding a
hundred meg or so.

max_files_per_process = 3052   # min 25

You really have your kernel set to support 3052 * 75 simultaneously open
files?  Back this off.  I doubt values beyond a couple hundred buy
anything except headaches.

wal_buffers = 192 

This is an order-of-magnitude overkill too, especially if your
transactions are mostly small.  I know it's only a megabyte or two,
but you evidently need that RAM more elsewhere.

enable_seqscan = false

I don't think this is a good idea in general.
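
Pulled together, the postgresql.conf edits would look something like
this (the setting names are the real 7.4-era ones; the values are only
hedged starting points, not a tuned recommendation):

    # free space map: the defaults are too small for a DB over ~100 MB
    max_fsm_pages = 100000          # 6 bytes each
    max_fsm_relations = 5000        # ~50 bytes each

    # back way off from 3052; a couple hundred is plenty
    max_files_per_process = 200

    # an order of magnitude smaller; 8 is the default
    wal_buffers = 8

    # let the planner use sequential scans when they're cheaper
    enable_seqscan = true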

regards, tom lane
