On Friday May 7 2004 9:09, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > I guess the activity just totally outran the ability of autovac to keep
> > up.
>
> Could you have been bit by autovac's bug with misreading '3e6' as '3'?
> If you don't have a recent version it's likely to fail to vacuum large
> tables often enough.

No, our autovac logs the number of changes (upd+del for vacuum, upd+ins+del 
for analyze) on each round of checks, and we can see it was running 
routinely, as expected.  The number of updates/deletes simply far exceeded 
the thresholds: the vacuum threshold was 2000, while at times there were 
300,000 outstanding changes in the 10-30 minutes between vacuums.

Given the gradual performance degradation we saw over a period of days, if 
not weeks, and the extremely high number of unused tuples, I'm wondering 
whether we're seeing something like data fragmentation, where we have to 
read many disk pages to get just a few live tuples from each page.  This 
cluster has 3 databases (2 nearly idle) with a total of 600 tables (about 
300 in the active database).  Gzipped dumps are 1.7GB.  
max_fsm_relations = 1000 and max_fsm_pages = 10000.  The workload is a 
continuous stream of inserts, sequential-scan selects, and deletes.
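For what it's worth, those free space map settings look low for a cluster 
of this size: if the FSM overflows, vacuum cannot record all the space it 
reclaims, and tables bloat even when vacuum runs on schedule, which would 
produce exactly this pattern of sequential scans reading mostly-dead pages. 
A sketch of raised values in postgresql.conf (the numbers are illustrative 
guesses, not tuned recommendations; the actual page counts can be taken 
from the totals VACUUM VERBOSE reports):

    # postgresql.conf -- illustrative values only
    max_fsm_relations = 1000      # should cover all tables and indexes
                                  # across every database in the cluster
    max_fsm_pages = 500000        # should cover all pages with reclaimable
                                  # free space, cluster-wide

Both settings require a server restart to take effect.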

