Re: [PERFORM] pg_autovacuum parameters

2004-08-03 Thread Gaetano Mendola
Matthew T. O'Connor wrote:
Lending, Rune wrote:
Hello all.
 
I am managing a large database with lots of transactions in different 
tables.
The largest tables have around 5-6 million tuples and around 
5-6 inserts and maybe 2 updates per day.
While the smallest tables have only a few tuples and a few 
updates/inserts per day. In addition we have small tables with many 
updates/inserts. So what I am saying is that there are all kinds of 
tables and uses of tables in our database.
This, I think, makes it difficult to set up pg_autovacuum. I am now 
running vacuum jobs on different tables in cron.
 
What things should I consider when setting the base and threshold 
values in pg_autovacuum? Since the running of vacuum and analyze is 
relative to the table size, as it must be, I think it is difficult to 
cover all tables.

One of the biggest problems with the version of pg_autovacuum in 7.4 
contrib is that you can only specify one set of thresholds, which often 
isn't flexible enough.  That said, the thresholds are scaled by table 
size, since you specify both a base value and a scaling factor: so 
pg_autovacuum -v 100 -V 1 will vacuum a table with 100 rows every 200 
updates, but will vacuum a table with 1,000,000 rows every 1,000,100 
updates.
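The rule being described is linear: updates required before a vacuum = base value (-v) plus scaling factor (-V) times the table's row count. A minimal sketch of that arithmetic (the function name and variables are illustrative, not taken from the contrib source):

```python
# Sketch of the 7.4 contrib pg_autovacuum scaling rule described above:
# a table is vacuumed once the updates/deletes since the last vacuum
# exceed base (-v) plus scale (-V) times the table's row count.

def vacuum_threshold(base: int, scale: float, reltuples: int) -> float:
    """Updates needed before pg_autovacuum vacuums a table of `reltuples` rows."""
    return base + scale * reltuples

# With a base of 100 and a scaling factor of 1, matching the numbers above:
print(vacuum_threshold(100, 1.0, 100))        # 200.0 for the 100-row table
print(vacuum_threshold(100, 1.0, 1_000_000))  # 1000100.0 for the 1M-row table
```

The point of the scaling factor is that big tables tolerate proportionally more dead tuples before a vacuum is worthwhile, which is why a single base/scale pair struggles to fit both tiny, hot tables and huge, mostly-static ones.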

Does anyone have any thoughts on this?

Basically, you should be able to use pg_autovacuum to do most of the 
vacuuming; if there are a few tables that aren't getting vacuumed often 
enough, you can add a vacuum command to cron for those specific 
tables.
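Such a per-table cron job can use the standard vacuumdb client; the table and database names here (busy_queue, mydb) are placeholders, not from the thread:

```shell
# VACUUM ANALYZE one heavily-updated table on demand; busy_queue and
# mydb are hypothetical names, substitute your own.
vacuumdb --analyze --table busy_queue mydb

# The same command as an hourly crontab entry:
#   0 * * * *  vacuumdb --analyze --table busy_queue mydb
```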
And in version 7.5^H^H^H8.0 ( Tom Lane docet :-) ) I think it is possible
to specify those thresholds per table...
Regards
Gaetano Mendola


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] pg_autovacuum parameters

2004-08-03 Thread Matthew T. O'Connor
Lending, Rune wrote:
Hello all.
 
I am managing a large database with lots of transactions in different 
tables.
The largest tables have around 5-6 million tuples and around 
5-6 inserts and maybe 2 updates per day.
While the smallest tables have only a few tuples and a few 
updates/inserts per day. In addition we have small tables with many 
updates/inserts. So what I am saying is that there are all kinds of 
tables and uses of tables in our database.
This, I think, makes it difficult to set up pg_autovacuum. I am now 
running vacuum jobs on different tables in cron.
 
What things should I consider when setting the base and threshold values 
in pg_autovacuum? Since the running of vacuum and analyze is relative to 
the table size, as it must be, I think it is difficult to cover all tables.
One of the biggest problems with the version of pg_autovacuum in 7.4 
contrib is that you can only specify one set of thresholds, which often 
isn't flexible enough.  That said, the thresholds are scaled by table 
size, since you specify both a base value and a scaling factor: so 
pg_autovacuum -v 100 -V 1 will vacuum a table with 100 rows every 200 
updates, but will vacuum a table with 1,000,000 rows every 1,000,100 
updates.

Does anyone have any thoughts on this?
Basically, you should be able to use pg_autovacuum to do most of the 
vacuuming; if there are a few tables that aren't getting vacuumed often 
enough, you can add a vacuum command to cron for those specific tables.

Matthew



[PERFORM] pg_autovacuum parameters

2004-08-03 Thread Lending, Rune
Hello all.
 
I am managing a large database with lots of transactions in different 
tables. The largest tables have around 5-6 million tuples and around 
5-6 inserts and maybe 2 updates per day. While the smallest tables 
have only a few tuples and a few updates/inserts per day. In addition 
we have small tables with many updates/inserts. So what I am saying is 
that there are all kinds of tables and uses of tables in our database.
This, I think, makes it difficult to set up pg_autovacuum. I am now 
running vacuum jobs on different tables in cron.
 
What things should I consider when setting the base and threshold 
values in pg_autovacuum? Since the running of vacuum and analyze is 
relative to the table size, as it must be, I think it is difficult to 
cover all tables.
 
Does anyone have any thoughts on this?
 
Regards
Rune