Having said that, there are a few general suggestions:
Set up some shared memory (about a tenth of your available RAM), and configure shared_buffers and max_connections correctly. I've used the following formula, picked up from somewhere on the net. It's not entirely accurate, as other settings steal a little shared memory, but it works for the most part:
((1024*RAM_SIZE) - (14.2 * max_connections) - 250) / 8.2
As I say, it should get you a good value; otherwise lower it bit by bit if you have trouble starting your db.
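As a sketch, the formula above can be wrapped in a small helper (the function name is my own, and I'm assuming RAM_SIZE is in megabytes, so 1024*RAM_SIZE converts it to kilobytes):

```python
def shared_buffers_pages(ram_mb: int, max_connections: int) -> int:
    """Rule-of-thumb estimate for shared_buffers, using the formula
    from the post: ((1024*RAM) - (14.2*max_connections) - 250) / 8.2.
    Lower the result bit by bit if the db won't start."""
    return int(((1024 * ram_mb) - (14.2 * max_connections) - 250) / 8.2)

# e.g. a 1 GB box allowing 100 connections
print(shared_buffers_pages(1024, 100))
```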
Increase effective_cache_size (50%-70% of available RAM) and sort_mem (about 1/20th of RAM), and lower your random_page_cost to around 2 or less (as low as 0.3) if you have fast SCSI drives in a RAID10 set-up - this was a big speedup ;)
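For illustration, on a hypothetical 2 GB box those settings might look something like this in postgresql.conf (values are rough examples only; in the config versions that still have sort_mem, effective_cache_size is in 8 KB pages and sort_mem is in KB):

```
effective_cache_size = 131072   # ~1 GB in 8 KB pages (~50% of RAM)
sort_mem = 102400               # ~100 MB in KB (~1/20th of RAM)
random_page_cost = 2            # try lower still on fast SCSI RAID10
```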
This might not be the answer, though. The values detailed above are for tuning an already stable setup.
Perhaps you need to look at your system resource usage. If performance is degrading over time, it sounds to me like you are slowly running out of memory and swap?
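A quick way to check that on Linux is a point-in-time look at memory and swap (a sketch; tool output varies by distro):

```shell
# Current RAM and swap usage in MB; a steadily shrinking "free"
# plus growing swap "used" over days suggests a slow leak.
free -m
# Swap totals straight from the kernel, for cross-checking
grep -i swap /proc/meminfo
```

Running `vmstat 5` for a while and watching the si/so (swap-in/out) columns will also show whether the box is actively thrashing.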
Generally, if I take something over, I'll try and get it onto my terms. Have you tried importing the DB into a fresh installation, one where you know sensible defaults are set, so you aren't inheriting any cruft from the previous sysadmin?
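A minimal sketch of that migration, assuming the old cluster is on the default port 5432 and the fresh initdb'd one is listening on 5433 (ports and the dump path are illustrative):

```shell
# Dump every database, role and setting from the old cluster
pg_dumpall -p 5432 > /tmp/old_cluster.sql
# Load the lot into the fresh cluster with known-sane defaults
psql -p 5433 -f /tmp/old_cluster.sql postgres
```

Doing it this way also gives you a plain-SQL copy you can inspect for any odd objects or settings the previous sysadmin left behind.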
To be honest though, I've never run pg so badly that it actually shut down - I just wouldn't think it would do that.
Rob Fielding [EMAIL PROTECTED]
www.dsvr.co.uk Development Designer Servers Ltd
---------------------------(end of broadcast)--------------------------- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match