On Mar 26, 2007, at 2:01 AM, Galy Lee wrote:
Now that autovacuum has multiple workers, the semantics of
autovacuum_cost_limit also need to be redefined.

Currently, autovacuum_cost_limit is the accumulated cost that causes
a single vacuuming worker process to sleep; it restricts the I/O
consumption of one vacuum worker. With N workers, the total I/O
consumption of the autovacuum workers can therefore increase by a
factor of N (for example, with autovacuum_cost_limit = 200 and three
active workers, the aggregate cost budget is effectively 600). This
makes the I/O consumption of multiple autovacuum workers
unpredictable.
One simple idea is to set each worker's cost limit to
autovacuum_cost_limit / max_autovacuum_workers. But in scenarios with
fewer active workers, that is obviously unfair to the workers that
are running. A better way is to set the cost limit of every active
worker to autovacuum_cost_limit / autovacuum_active_workers. This
keeps the total I/O consumption of autovacuum stable.
Each worker can be extended to have its own cost_limit in shared
memory. Whenever a worker starts up or finishes its work, the
launcher recalculates

    worker_cost_limit = autovacuum_cost_limit / autovacuum_active_workers

and sets the new value for each active worker.
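A minimal sketch of the launcher-side rebalancing, in the spirit of
the proposal (the struct, field, and function names here are
illustrative assumptions, not actual autovacuum.c symbols):

    #include <stdbool.h>

    /* Hypothetical per-worker slot in shared memory. */
    typedef struct WorkerSlot
    {
        bool    active;         /* is this slot running a vacuum? */
        int     cost_limit;     /* cost limit assigned by the launcher */
    } WorkerSlot;

    /*
     * Divide autovacuum_cost_limit evenly among the active workers,
     * so the total cost budget stays constant.  The launcher would
     * call this whenever a worker starts or finishes.
     */
    static void
    rebalance_cost_limits(WorkerSlot *slots, int max_workers,
                          int autovacuum_cost_limit)
    {
        int     nactive = 0;
        int     i;

        for (i = 0; i < max_workers; i++)
            if (slots[i].active)
                nactive++;

        if (nactive == 0)
            return;             /* nothing to rebalance */

        for (i = 0; i < max_workers; i++)
            if (slots[i].active)
                slots[i].cost_limit = autovacuum_cost_limit / nactive;
    }

In a real implementation this loop would run while holding the lock
that protects the worker array, so a worker never reads a
half-updated value.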
The above approach requires that the launcher be able to change the
cost delay settings of workers on the fly. This can be achieved by
having VACUUM re-read the cost settings from its worker's shared
memory at every vacuum_delay_point.
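On the worker side, the refresh could look roughly like this at the
top of vacuum_delay_point (MyWorkerSlot is an assumed pointer to this
worker's shared-memory slot; VacuumCostLimit is the existing
backend-local cost variable):

    /*
     * Sketch: before the usual cost-accounting and sleep logic,
     * pick up any cost limit the launcher has rebalanced into this
     * worker's shared-memory slot.
     */
    void
    vacuum_delay_point(void)
    {
        if (MyWorkerSlot != NULL)
            VacuumCostLimit = MyWorkerSlot->cost_limit;

        /* ... existing cost-based delay logic follows ... */
    }

Reading a single int from shared memory at each delay point is cheap,
so the overhead should be negligible.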
Any comments or suggestions?
Well, ideally we'd set cost limit settings on a per-tablespace
basis... but I agree that what you propose is probably the best bet
for multiple daemons, short of doing per-tablespace stuff.
--
Jim Nasby [EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)