Matthew T. O'Connor wrote:
Lending, Rune wrote:
I am managing a large database with lots of transactions on many different
tables. The largest tables have around 5-6 million tuples and see around
50,000-60,000 inserts and maybe 20,000 updates per day, while the smallest
tables have only a few tuples and a few updates/inserts per day. In
addition we have small tables with many updates/inserts. So what I am
saying is that there are all kinds of tables and uses of tables in our
database.
This, I think, makes it difficult to set up pg_autovacuum. I am now
running vacuum jobs on different tables from cron.
What should I consider when setting up the base and threshold values
in pg_autovacuum? Since the running of vacuum and analyze is relative
to the table size, as it must be, I think it is difficult to cover all
tables.
One of the biggest problems with the version of pg_autovacuum in 7.4
contrib is that you can only specify one set of thresholds, which often
isn't flexible enough. That said, the thresholds do scale with table
size, since you specify both a base value and a scaling factor: with
pg_autovacuum -v 1000 -V 1 the vacuum threshold is base + factor * rows,
so a table with 100 rows is vacuumed every 1,100 updates, but a table
with 1,000,000 rows only every 1,001,000 updates.
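To make that concrete, here is a small shell sketch of how those numbers fall out, assuming the 7.4 contrib formula threshold = base (-v) + scaling factor (-V) * reltuples (the row counts are just the examples above):

```shell
#!/bin/sh
# Sketch of the 7.4 contrib pg_autovacuum vacuum threshold,
# assuming: threshold = base (-v) + scaling_factor (-V) * reltuples.
base=1000    # -v 1000
scale=1      # -V 1
for reltuples in 100 1000000; do
    threshold=$(( base + scale * reltuples ))
    echo "table of $reltuples rows: vacuumed roughly every $threshold updates"
done
```

The point of the scaling factor is that big tables are vacuumed after proportionally more updates, so small busy tables and large tables can't both be tuned well with one global setting.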
Is there anyone who has some thoughts on this?
Basically, you should be able to use pg_autovacuum to do most of the
vacuuming; if there are a few tables that aren't getting vacuumed often
enough, you can add a vacuum command to cron for those specific tables.
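For example, such cron entries might look like the following sketch (database and table names are placeholders; vacuumdb is the wrapper that ships with PostgreSQL):

```shell
# crontab fragment: supplement pg_autovacuum for specific hot tables.
# VACUUM ANALYZE one busy table every night at 02:15
15 2 * * *  vacuumdb --analyze --table=busy_table mydb
# full-database VACUUM ANALYZE once a week, Sunday 03:30
30 3 * * 0  vacuumdb --all --analyze
```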
And in version 7.5^H^H^H8.0 ( Tom Lane docet :-) ) I think it is possible
to specify those thresholds per table...