On 09/14/2010 08:06 PM, Robert Haas wrote:
> One idea I had was to have autovacuum workers stick around for a
> period of time after finishing their work.  When we need to autovacuum
> a database, first check whether there's an existing worker that we can
> use, and if so use him.  If not, start a new one.  If that puts us
> over the max number of workers, kill off the one that's been waiting
> the longest.  But workers will exit anyway if not reused after a
> certain period of time.

That's pretty close to how bgworkers are implemented now, except for the need to terminate after a certain period of time. What is that timeout intended to be good for?
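
Just to illustrate, the policy described above boils down to roughly the
following. This is only a sketch with invented names (Worker, pool,
get_worker, MAX_WORKERS, IDLE_TIMEOUT, ...), not code from the bgworkers
patch or from core:

#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_WORKERS  3           /* stand-in for autovacuum_max_workers */
#define IDLE_TIMEOUT 60          /* seconds an unused worker may linger */

typedef struct Worker
{
    char    dbname[64];          /* database this worker is attached to */
    int     in_use;              /* currently running a job? */
    time_t  idle_since;          /* when it last finished a job */
} Worker;

static Worker pool[MAX_WORKERS];
static int    nworkers = 0;

/*
 * Reuse an idle worker for dbname; else start a new one; if that would
 * exceed MAX_WORKERS, evict the worker that has been idle the longest.
 */
static Worker *
get_worker(const char *dbname)
{
    int     i,
            oldest = -1;

    for (i = 0; i < nworkers; i++)
        if (!pool[i].in_use && strcmp(pool[i].dbname, dbname) == 0)
        {
            pool[i].in_use = 1;
            return &pool[i];     /* reuse, saving the startup costs */
        }

    if (nworkers < MAX_WORKERS)
    {
        Worker *w = &pool[nworkers++];

        snprintf(w->dbname, sizeof(w->dbname), "%s", dbname);
        w->in_use = 1;
        return w;                /* start a fresh worker */
    }

    for (i = 0; i < nworkers; i++)
        if (!pool[i].in_use &&
            (oldest < 0 || pool[i].idle_since < pool[oldest].idle_since))
            oldest = i;
    if (oldest < 0)
        return NULL;             /* all workers busy: caller must wait */

    /* evict the longest-waiting idle worker and reuse its slot */
    snprintf(pool[oldest].dbname, sizeof(pool[oldest].dbname), "%s", dbname);
    pool[oldest].in_use = 1;
    return &pool[oldest];
}

/* Mark a worker idle once its job is done. */
static void
release_worker(Worker *w)
{
    w->in_use = 0;
    w->idle_since = time(NULL);
}

/* The part I'm questioning: exit anyway after IDLE_TIMEOUT. */
static void
reap_idle_workers(void)
{
    time_t  now = time(NULL);
    int     i;

    for (i = 0; i < nworkers; i++)
        if (!pool[i].in_use && now - pool[i].idle_since > IDLE_TIMEOUT)
        {
            pool[i] = pool[--nworkers];     /* drop it from the pool */
            i--;                            /* re-check the moved entry */
        }
}

int
main(void)
{
    Worker *w = get_worker("sales");

    release_worker(w);
    printf("reused: %s\n", get_worker("sales") == w ? "yes" : "no");
    return 0;
}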

Especially considering that the avlauncher/coordinator knows the current amount of work (number of jobs) per database.
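
In sketch form, again with invented names (DatabaseQueue, pending_jobs)
rather than actual symbols from either tree, the coordinator could drive
that decision off its job queue instead of a timer:

/*
 * Hypothetical coordinator-side bookkeeping: jobs queued per database.
 * With this at hand, an idle worker can be told to exit as soon as its
 * database's queue drains, making a fixed IDLE_TIMEOUT unnecessary.
 */
typedef struct DatabaseQueue
{
    const char *dbname;          /* database these counters refer to */
    int         pending_jobs;    /* jobs still queued for this database */
} DatabaseQueue;

static int
keep_worker_parked(const DatabaseQueue *q)
{
    /* Park the worker only while there is queued work it could pick up. */
    return q->pending_jobs > 0;
}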

> The idea here would be to try to avoid all the backend startup costs:
> process creation, priming the caches, etc.  But I'm not really sure
> it's worth the effort.  I think we need to look for ways to further
> reduce the overhead of vacuuming, but this doesn't necessarily seem
> like the thing that would have the most bang for the buck.

Well, the pressure to avoid that per-job startup overhead has simply been bigger for Postgres-R.

It should be possible to benchmark this with Postgres-R by comparing against a max_idle_background_workers = 0 configuration, which forces termination and re-connection for every remote transaction to be applied. However, that's not going to say anything about whether or not it's worth it for autovacuum.

Regards

Markus Wanner
