Simon Riggs wrote:
> The bad thing about having multiple autovacuum daemons active is that
> you can get two large VACUUMs running at the same time. This gives you
> the same small-VACUUM-starvation problem we had before, but now the
> effects of two VACUUMs kill performance even more. I would suggest that
> we look at ways of queueing, so that multiple large VACUUMs cannot
> occur. Setting vacuum_cost_delay will still allow multiple large VACUUMs
> but will make the starvation problem even worse as well. If we allow
> that situation to occur, I think I'd rather stick to autovac_workers=1.
> We will still have this potential problem even with HOT.
> 
> Potential solution: Each autovac worker gets a range of table sizes they
> are allowed to VACUUM. This is set with an additional parameter which is
> an array of gating values (i.e. one less gating value than number of
> autovac workers). That way small VACUUMs are never starved out by large
> ones. This is the same as having a Small:Medium:Large style queueing
> system. We can work out how to make the queueing system self-tune by
> observation of autovacuum frequency.
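
A minimal sketch of the gating-value idea Simon describes above: with N workers and N-1 size thresholds, each worker only takes tables in its own size band, so a small table can never be queued behind a large VACUUM. Everything here (function name, threshold values) is illustrative, not actual PostgreSQL code or parameters.

```python
def worker_for_table(table_size_pages, gates):
    """Pick which autovac worker handles a table of the given size.

    `gates` is a sorted list of table-size thresholds (in pages),
    one fewer than the number of workers -- i.e. the "array of
    gating values" from the proposal above.
    """
    for i, gate in enumerate(gates):
        if table_size_pages < gate:
            return i          # worker i handles this size band
    return len(gates)         # biggest band goes to the last worker

# Example: 3 workers, gates at ~100 MB and ~10 GB (8 kB pages assumed)
gates = [100 * 1024 // 8, 10 * 1024 * 1024 // 8]
print(worker_for_table(500, gates))        # small table -> worker 0
print(worker_for_table(200_000, gates))    # medium table -> worker 1
print(worker_for_table(5_000_000, gates))  # large table -> worker 2
```

Under this scheme two large VACUUMs can still coincide only within the largest band, which is the queueing behaviour the proposal is after.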

The default autovac_workers is 3, so wouldn't you need three, not two,
large VACUUMs to starve a smaller table?

Instead of queuing, how about increasing autovac_workers if starvation
is a concern?

I'd like to set a default autovacuum_vacuum_cost_delay anyway. Without
it, autovacuum is a performance hit when it kicks in, even if there's
only one of them running, and even if it only lasts for a short time.
It's an unpleasant surprise for someone who's new to PostgreSQL and
doesn't yet understand how vacuum and autovacuum work.
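
For illustration, that would amount to something like the following in postgresql.conf. The parameter names are the real GUCs; the values are just plausible examples for this discussion, not proposed defaults:

```
# Throttle autovacuum I/O so it doesn't swamp foreground queries.
# A nonzero delay makes the worker sleep after accumulating
# autovacuum_vacuum_cost_limit worth of page-access cost.
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = 200
```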

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
