This is how I think autovacuum should change with an eye towards being
able to run multiple vacuums simultaneously:

1. There will be two kinds of processes, "autovacuum launcher" and
"autovacuum worker".

2. The launcher will be in charge of scheduling and will tell workers
what to do

3. The workers will be similar to what autovacuum does today: each
starts when told to, processes a single item (be it a table or a
database), and terminates

4. Launcher will be a continuously-running process, akin to bgwriter;
connected to shared memory

5. Workers will be direct postmaster children; so postmaster will get
SIGCHLD when worker dies

6. Launcher will start a worker using the following protocol:
   - Set up information in shared memory on what to run
   - Invoke SendPostmasterSignal(PMSIGNAL_START_AUTOVAC_WORKER)
   - Postmaster will react by starting a worker and registering it very
     similarly to a regular backend, so it can be shut down easily when
     necessary.  (Thus the launcher will not be informed right away when
     a worker dies.)
   - Worker will examine shared memory to learn what to do, clear the
     request, and send a signal to the launcher
   - Launcher wakes up and can start another worker if appropriate

Does this raise any red flags?  It seems straightforward enough to me;
I'll submit a patch implementing this, so that scheduling will continue
to work as it does today.  The scheduling discussions are thus deferred
until they can actually be useful and implementable.

Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
