I am managing a large database with a lot of transaction activity spread across many tables.
The largest tables have around 5-6 million tuples and see roughly 50000-60000 inserts and maybe 20000 updates per day.
The smallest tables, on the other hand, have only a few tuples and a few updates/inserts per day. We also have small tables with many updates/inserts. In short, there is every kind of table and usage pattern in our database.
This, I think, makes it difficult to configure pg_autovacuum. At the moment I run vacuum jobs against the different tables from cron.
What should I consider when setting the base and threshold values in pg_autovacuum? Since the decision to run vacuum and analyze is relative to the table size, as it must be, I find it difficult to pick values that cover all tables.
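To show what I mean with some numbers: if I understand the pg_autovacuum threshold formula correctly, a table is vacuumed once the number of obsolete tuples exceeds base + scale_factor * reltuples (I believe the defaults are in the neighbourhood of -v 1000 and -V 2, but please correct me if that is wrong). A quick sketch of the two extremes in our database:

```python
def vacuum_threshold(reltuples, base=1000, scale=2.0):
    """Obsolete tuples needed before pg_autovacuum triggers a vacuum.

    Formula and default values are my reading of the pg_autovacuum
    docs (base + scale * reltuples); treat them as assumptions.
    """
    return base + scale * reltuples

big_table = 5_500_000   # one of our 5-6 million tuple tables, ~20000 updates/day
small_table = 100       # a small but busy table

print(vacuum_threshold(big_table))    # 11_001_000 obsolete tuples needed
print(vacuum_threshold(small_table))  # only 1_200 needed
```

With those defaults the big table would need over 500 days of updates at our rate before it ever got vacuumed, while the busy small tables would be vacuumed constantly, which is why one set of values seems impossible to get right for everything.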
Does anyone have some thoughts on this?