On Jun 27, 2012, at 22:00, Josh Berkus <j...@agliodbs.com> wrote:

> Folks,
> 
> Yeah, I can't believe I'm calling for *yet another* configuration
> variable either.  Suggested workaround fixes very welcome.
> 
> The basic issue is that autovacuum_max_workers is set by most users
> based on autovac's fairly lightweight action most of the time: analyze,
> vacuuming only pages not marked all-visible in the visibility map, etc.
> However, when XID
> wraparound kicks in, then autovac starts reading entire tables from disk
> ... and those tables may be very large.
> 
> This becomes a downtime issue if you've set autovacuum_max_workers to,
> say, 5 and several large tables hit the wraparound threshold at the same
> time (as they tend to do if you're using the default settings).  Then
> you have 5 autovacuum processes concurrently doing heavy IO and getting
> in each other's way.
> 
> I've seen this at two sites now, and my conclusion is that a single
> autovacuum_max_workers isn't sufficient to cover the case of
> wraparound vacuum.  Nor can we just single-thread the wraparound vacuum
> (i.e. just one worker) since that would hurt users who have thousands of
> small tables.
> 
> 
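
For reference, the tables heading toward a forced anti-wraparound vacuum
can be listed with something along these lines, comparing the age of
relfrozenxid against autovacuum_freeze_max_age.  With default settings the
ages of large, rarely-vacuumed tables tend to advance in lockstep, which is
why several of them tend to cross the threshold together:

    SELECT c.relname,
           age(c.relfrozenxid) AS xid_age,
           current_setting('autovacuum_freeze_max_age')::int AS freeze_max_age
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY age(c.relfrozenxid) DESC;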

Would there be enough benefit to setting up separate small/medium(?)/large
thresholds, with user-changeable table-size boundaries, so that you could
configure 6 workers where 3 handle small tables, 2 handle medium tables, and
1 handles large tables?  Alternatively, a small-table worker could consume 1,
a medium 2, and a large 3 'units' from whatever size pool has been defined,
so you could have 6 small tables or 2 large tables in progress
simultaneously.
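
To make the unit arithmetic concrete, here is a rough sketch of the
bucketing (the 1 GB / 10 GB boundaries and the 1/2/3 unit weights are just
placeholder numbers, not existing settings):

    SELECT c.relname,
           pg_total_relation_size(c.oid) AS total_bytes,
           CASE
             WHEN pg_total_relation_size(c.oid) < 1  * 1024^3 THEN 1  -- small:  1 unit
             WHEN pg_total_relation_size(c.oid) < 10 * 1024^3 THEN 2  -- medium: 2 units
             ELSE 3                                                   -- large:  3 units
           END AS worker_units
    FROM pg_class c
    WHERE c.relkind = 'r'
    ORDER BY total_bytes DESC;

With a pool of 6 units that works out to 6 small tables, 3 mediums, 2 larges,
or a mix such as 1 large + 1 medium + 1 small in progress at once.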

David J.