I concur with most of what was already posted.  Some additions below.

>>> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> On Fri, Jun 6, 2008 at 12:30 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
>
>> vacuum_cost_delay = 750
>> autovacuum = true
>> autovacuum_naptime = 3600
>> autovacuum_vacuum_threshold = 1000
>> autovacuum_analyze_threshold = 500
>> autovacuum_vacuum_scale_factor = 0.4
>> autovacuum_analyze_scale_factor = 0.2
>> autovacuum_vacuum_cost_delay = -1
>> autovacuum_vacuum_cost_limit = -1
>> max_fsm_pages = 5000000
>> max_fsm_relations = 2000
>
> These are terrible settings for a busy database.  A cost delay
> anything over 10 or 20 is usually WAY too big, and will make vacuums
> take nearly forever.  Naptime of 3600 is 1 hour, right?  That's also
> far too long to be napping between just checking to see if you should
> run another vacuum.
>
> I'd recommend:
> vacuum_cost_delay = 20
> autovacuum = true
> autovacuum_naptime = 300 # 5 minutes.

I would also reduce the autovacuum thresholds and scale factors; many
small vacuums are more efficient than a few big ones.  Being more
aggressive here also gives the hint-bit writes a better chance of
coalescing with the initial write of the page.

I'd probably go all the way down to a vacuum cost delay of 10 and then
see whether you need to go higher.  That has worked best for us in a
write-heavy environment with hundreds of millions of rows.

A nightly database vacuum is good if it can complete off-hours and
doesn't interfere with the application; otherwise, set up a regular
schedule, by table.

It's hard to give more advice without more specifics.

-Kevin
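
Putting those suggestions together, a starting point in postgresql.conf
might look something like the sketch below.  The specific threshold and
scale factor numbers are only illustrative, since the advice above is
simply to reduce them; treat the whole thing as a baseline to tune
against your own workload:

  vacuum_cost_delay = 10                  # start at 10; raise toward 20 only if vacuum I/O hurts
  autovacuum = true
  autovacuum_naptime = 300                # check for work every 5 minutes
  autovacuum_vacuum_threshold = 250       # illustrative; lower than the original 1000
  autovacuum_analyze_threshold = 125      # illustrative; lower than the original 500
  autovacuum_vacuum_scale_factor = 0.1    # illustrative; lower than the original 0.4
  autovacuum_analyze_scale_factor = 0.05  # illustrative; lower than the original 0.2
  max_fsm_pages = 5000000
  max_fsm_relations = 2000

If you do end up vacuuming on a per-table schedule, that can be driven
from outside the database, for example a cron job that runs
"vacuumdb --analyze --table <tablename>" against the busiest tables
during quiet hours.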