Excerpts from Tom Lane's message of Wed Jun 08 21:50:22 -0400 2011:
> Robert Haas <robertmh...@gmail.com> writes:
> > I think it'd be really useful to expose some more data in this area
> > though.  One random idea is - remember the time at which a table was
> > first observed to need vacuuming. Clear the timestamp when it gets
> > vacuumed.  Then you can do:
> 
> As far as I recall that logic, there is no delay between when we know
> that a table needs vacuumed and when we do it.  I don't see the point of
> introducing any such delay, either.

Autovacuum checks each table twice.  When it first connects to a
database it grabs a complete list of relations needing vacuum.  Then it
starts vacuuming, and before processing each relation, it rechecks.
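
For readers not familiar with that code path, here is a minimal,
self-contained C sketch of the two-pass pattern; the helper names and
the sample data are invented for illustration, and the real logic (in
src/backend/postmaster/autovacuum.c) is considerably more involved:

#include <stdbool.h>
#include <stdio.h>

#define NREL 4

/* invented sample data: which relations currently need vacuum */
static bool needs_vacuum[NREL] = {true, false, true, true};

static bool
relation_needs_vacuum(int relid)
{
    return needs_vacuum[relid];
}

static void
vacuum_one_relation(int relid)
{
    printf("vacuuming relation %d\n", relid);
    needs_vacuum[relid] = false;
}

int
main(void)
{
    int candidates[NREL];
    int ncand = 0;

    /* First check: on connecting to the database, collect everything
     * that currently appears to need vacuum. */
    for (int relid = 0; relid < NREL; relid++)
        if (relation_needs_vacuum(relid))
            candidates[ncand++] = relid;

    /* Second check: recheck each candidate right before processing it.
     * The gap between the two checks is the delay discussed below; it
     * grows with the time spent vacuuming the earlier tables. */
    for (int i = 0; i < ncand; i++)
        if (relation_needs_vacuum(candidates[i]))
            vacuum_one_relation(candidates[i]);

    return 0;
}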

So there *is* a delay: for any given relation, it corresponds to how long
it took to process the tables that preceded it in the list.  Robert's
suggestion would seem to make sense.  I'm not sure how to implement it:
do we want some more (highly volatile) data points in pgstat?  Do we need
some other mechanism?  This seems like a use case for pg_class_nt (see
http://archives.postgresql.org/pgsql-patches/2006-06/msg00114.php).
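
To make Robert's idea concrete, here is a hypothetical sketch of what the
state could look like (the struct and function names below are invented;
nothing like this exists in pgstat today).  The point is just that the
timestamp is set only on the first observation and cleared by vacuum, so
its age measures how long the table has been waiting:

#include <stdio.h>
#include <time.h>

/* hypothetical per-table entry; not an existing pgstat structure */
typedef struct TableVacWait
{
    time_t first_observed;      /* 0 means "not currently flagged" */
} TableVacWait;

/* called whenever a check finds the table needs vacuuming */
static void
mark_needs_vacuum(TableVacWait *entry, time_t now)
{
    if (entry->first_observed == 0)
        entry->first_observed = now;    /* keep only the first sighting */
}

/* called when the table actually gets vacuumed */
static void
mark_vacuumed(TableVacWait *entry)
{
    entry->first_observed = 0;
}

/* "how long has this table been waiting?" */
static double
seconds_waiting(const TableVacWait *entry, time_t now)
{
    return entry->first_observed ? difftime(now, entry->first_observed) : 0.0;
}

int
main(void)
{
    TableVacWait t = {0};

    mark_needs_vacuum(&t, time(NULL) - 30);   /* pretend: flagged 30s ago */
    printf("waiting for %.0f seconds\n", seconds_waiting(&t, time(NULL)));
    mark_vacuumed(&t);
    printf("after vacuum: %.0f seconds\n", seconds_waiting(&t, time(NULL)));
    return 0;
}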


In any case, given the "rebalancing" feature of vacuum_cost_delay (which
increases the delay the more workers there are), the only "solution" to
the problem of falling behind is reducing the delay parameter.  If you
just add more workers, they start working more slowly.
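
To put rough numbers on that, taking the stock vacuum_cost_limit = 200
and the 20ms autovacuum_vacuum_cost_delay as an example (adjust for your
actual settings):

    total I/O budget:  200 cost units / 20 ms  ~ 10000 cost units/sec
    1 worker        -> ~10000 units/sec for that one worker
    4 workers       ->  ~2500 units/sec each (same total, each slower)

The aggregate throughput is capped by limit/delay no matter how many
workers are running; only lowering the delay (or raising the limit)
raises that cap.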

-- 
Álvaro Herrera <alvhe...@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
