Hi chaps,

Our legacy apps have some permanent tables that they use for temporary data
and constantly clear out. I've kicked the developers, and I intend to
eradicate them eventually (the tables, not the developers).

These tables are constantly being autovacuumed, approximately once a minute.
It's not causing any problems and seems to be keeping them vacuumed, but I'm
constantly re-assessing our autovacuum settings to make sure they're
adequate, and no matter how much I read up on autovacuum I still feel like
I'm missing something.

I just wondered what people's opinions were on handling this sort of
vacuuming? Is once a minute too often?

The general autovacuum settings, tuned more with our central tables in mind,
are threshold 500 and scale_factor 0.2. I guess I could set specific
settings for these tables in pg_autovacuum, or I could exclude them there
and run a vacuum from cron once a day or something; a rough sketch of both
is below.
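
If I went the pg_autovacuum route, I imagine the per-table override would
look something like this (the 10000/0.5 thresholds are just placeholder
numbers to make it fire less often, and -1 means fall back to the
server-wide default for that setting):

    -- Loosen autovacuum for one of the scratch tables.
    INSERT INTO pg_autovacuum
        (vacrelid, enabled,
         vac_base_thresh, vac_scale_factor,
         anl_base_thresh, anl_scale_factor,
         vac_cost_delay, vac_cost_limit,
         freeze_min_age, freeze_max_age)
    VALUES
        ('reports.online'::regclass, true,
         10000, 0.5,
         -1, -1,
         -1, -1,
         -1, -1);

    -- Or switch autovacuum off for it entirely (enabled = false) and run
    -- something like "vacuumdb --table=reports.online TEMP" from cron
    -- once a night instead.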

Here's a typical log message:

2008-09-19 11:40:10 BST [12917]: [1-1]: [user=]: [host=]: [db=]:: LOG:  
automatic vacuum of table "TEMP.reports.online": index scans: 1
        pages: 21 removed, 26 remain
        tuples: 2356 removed, 171 remain
        system usage: CPU 0.00s/0.00u sec elapsed 0.08 sec

Any comments would be appreciated.



