Hi,

On Fri, Jul 18, 2025 at 2:43 AM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> On Mon, Jul 14, 2025 at 3:49 AM Daniil Davydov <3daniss...@gmail.com> wrote:
> >
> > This log level is used only "for messages about parallel workers launched".
> > I think that such logs relate more to the parallel workers module than
> > autovacuum itself. Moreover, if we emit a "planned vs. launched" log each
> > time, it will simplify the task of selecting the optimal value of the
> > 'autovacuum_max_parallel_workers' parameter. What do you think?
>
> INFO level is normally not sent to the server log. And regarding
> autovacuums, they don't write any log entry mentioning that they started.
> If we want to write planned vs. launched, I think it's better to gather
> these statistics during execution and write them together with other
> existing logs.
>
> >
> > About "%svacuum" - I guess we need to clarify what exactly the workers
> > were launched for. I'll add an errhint to this log, but I don't know
> > whether such an approach is acceptable.
>
> I'm not sure errhint is an appropriate place. If we write such
> information together with other existing autovacuum logs as I
> suggested above, I think we don't need to add such information to this
> log message.
>

I thought about it for some time and came up with this idea:
1)
When gathering such statistics, we need to take into account that users
might not want autovacuum to log anything. Thus, we should collect the
statistics at a "higher" level that knows about log_min_duration.
2)
By analogy with the rest of the statistics, we can simply accumulate the
total number of planned and launched parallel workers (see the sketch
below). Alternatively, we could build an array (one element per index scan)
of "planned vs. launched" counts, but that would make the code "dirty", and
I'm not sure it would be useful.

This may be a discussion point, so I will separate it into its own .patch file.
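
To make point 2 concrete, here is a rough sketch of the accumulation
approach (the struct and the update site mirror what the second patch below
calls PVWorkersUsage; treat it as an illustration rather than the final
shape):

    /* Totals accumulated over all index scans of one table. */
    typedef struct PVWorkersUsage
    {
        int     nlaunched;      /* parallel workers actually launched */
        int     nplanned;       /* parallel workers we planned to launch */
    } PVWorkersUsage;

    /* After each LaunchParallelWorkers() call, if the caller asked for it: */
    if (wusage != NULL)
    {
        wusage->nlaunched += pvs->pcxt->nworkers_launched;
        wusage->nplanned += nworkers;
    }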

> I've reviewed v7 patch and here are some comments:
>
> +   {
> +       {
> +           "parallel_autovacuum_workers",
> +           "Maximum number of parallel autovacuum workers that can be
> taken from bgworkers pool for processing this table. "
> +           "If value is 0 then parallel degree will computed based on
> number of indexes.",
> +           RELOPT_KIND_HEAP,
> +           ShareUpdateExclusiveLock
> +       },
> +       -1, -1, 1024
> +   },
>
> Many autovacuum related reloptions have the prefix "autovacuum". So
> how about renaming it to autovacuum_parallel_worker (change
> check_parallel_av_gucs() name too accordingly)?
>

I have no objections.

> ---
> +bool
> +check_autovacuum_max_parallel_workers(int *newval, void **extra,
> +                                     GucSource source)
> +{
> +   if (*newval >= max_worker_processes)
> +       return false;
> +   return true;
> +}
>
> I think we don't need to strictly check the
> autovacuum_max_parallel_workers value. Instead, we can accept any
> integer value but internally cap by max_worker_processes.
>

I don't think that such a limitation is excessive, but I don't see similar
behavior in other "max_parallel_..." GUCs, so I think we can get rid of it.
I'll replace the "check hook" with an "assign hook" that caps
autovacuum_max_parallel_workers.
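
Roughly, the assign hook would just clamp the value instead of rejecting it,
something like (this matches assign_autovacuum_max_parallel_workers in the
attached patch):

    void
    assign_autovacuum_max_parallel_workers(int newval, void *extra)
    {
        /* Silently cap the value rather than raising an error. */
        autovacuum_max_parallel_workers = Min(newval, max_worker_processes);
    }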

> ---
> +/*
> + * Make sure that number of available parallel workers corresponds to the
> + * autovacuum_max_parallel_workers parameter (after it was changed).
> + */
> +static void
> +check_parallel_av_gucs(int prev_max_parallel_workers)
> +{
>
> I think this function doesn't just check the value but does adjust the
> number of available workers, so how about
> adjust_free_parallel_workers() or something along these lines?

I agree, it's better this way.

>
> ---
> +       /*
> +        * Number of available workers must not exceed limit.
> +        *
> +        * Note, that if some parallel autovacuum workers are running at this
> +        * moment, available workers number will not exceed limit after
> +        * releasing them (see ParallelAutoVacuumReleaseWorkers).
> +       */
> +       AutoVacuumShmem->av_freeParallelWorkers =
> +           autovacuum_max_parallel_workers;
>
> I think the comment refers to the following code in
> AutoVacuumReleaseParallelWorkers():
>
> +   /*
> +    * If autovacuum_max_parallel_workers variable was reduced during parallel
> +    * autovacuum execution, we must cap available workers number by its new
> +    * value.
> +    */
> +   if (AutoVacuumShmem->av_freeParallelWorkers >
> +       autovacuum_max_parallel_workers)
> +   {
> +       AutoVacuumShmem->av_freeParallelWorkers =
> +           autovacuum_max_parallel_workers;
> +   }
>
> After the autovacuum launcher decreases av_freeParallelWorkers, it's
> not guaranteed that the autovacuum worker already reflects the new
> value from the config file when executing
> AutoVacuumReleaseParallelWorkers(), which leads to skipping the above
> code. For example, suppose autovacuum_max_parallel_workers is 10 and 3
> parallel workers are being run by one autovacuum worker (i.e.,
> av_freeParallelWorkers = 7 now). If the user changes
> autovacuum_max_parallel_workers to 5, the autovacuum launcher adjusts
> av_freeParallelWorkers to 5. However, if the worker doesn't reload the
> config file before executing AutoVacuumReleaseParallelWorkers(), it
> increases av_freeParallelWorkers to 8 and skips the adjusting logic.
> I've not tested this scenario, so I might be missing something though.
>

Yes, this is a possible scenario. I'll rework the av_freeParallelWorkers
calculation. The main change is that the a/v worker now checks whether a
config reload is pending before releasing workers, so it always sees the
current value of the autovacuum_max_parallel_workers parameter.
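
Concretely, the release path now looks roughly like this (simplified from
AutoVacuumReleaseParallelWorkers in the first patch; the Min() keeps
av_freeParallelWorkers consistent even if the launcher has already adjusted
it):

    /* Pick up a pending autovacuum_max_parallel_workers change first. */
    CHECK_FOR_INTERRUPTS();
    if (ConfigReloadPending)
    {
        ConfigReloadPending = false;
        ProcessConfigFile(PGC_SIGHUP);
    }

    LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
    AutoVacuumShmem->av_freeParallelWorkers =
        Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
            autovacuum_max_parallel_workers);
    LWLockRelease(AutovacuumLock);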

Thank you very much for your comments! Please see the v8 patches:
1) Rename the table option.
2) Replace check_hook with assign_hook for autovacuum_max_parallel_workers.
3) Simplify and correct the logic for handling changes to the
autovacuum_max_parallel_workers parameter (a simplified sketch of the
reservation side follows this list).
4) Rework the logic for "planned vs. launched" statistics for autovacuum
(see the second patch file).
5) Get rid of the "sandbox" - I don't see the point in continuing to drag it
along.
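
For reference, the reservation side mentioned in point 3 boils down to
taking a Min() under AutovacuumLock (simplified from
AutoVacuumReserveParallelWorkers in the first patch):

    LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
    /* Provide as many workers as we can; possibly fewer than requested. */
    can_launch = Min(AutoVacuumShmem->av_freeParallelWorkers, nworkers);
    AutoVacuumShmem->av_freeParallelWorkers -= can_launch;
    LWLockRelease(AutovacuumLock);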

--
Best regards,
Daniil Davydov
From 27b2c7d0dfb193aadd9d0199647e5909de3ac0aa Mon Sep 17 00:00:00 2001
From: Daniil Davidov <d.davy...@postgrespro.ru>
Date: Sun, 20 Jul 2025 23:26:13 +0700
Subject: [PATCH v8 2/2] Logging for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c  | 26 ++++++++++++++++++++++++--
 src/backend/commands/vacuumparallel.c | 20 ++++++++++++++------
 src/include/commands/vacuum.h         | 16 ++++++++++++++--
 3 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 14036c27e87..11dc2c48a7e 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -348,6 +348,11 @@ typedef struct LVRelState
 
 	/* Instrumentation counters */
 	int			num_index_scans;
+	/*
+	 * Total number of planned and actually launched parallel workers across
+	 * all index scans, or NULL if these statistics are not being collected
+	 */
+	PVWorkersUsage *workers_usage;
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
 	int64		tuples_frozen;	/* # newly frozen */
@@ -688,6 +693,16 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 		indnames = palloc(sizeof(char *) * vacrel->nindexes);
 		for (int i = 0; i < vacrel->nindexes; i++)
 			indnames[i] = pstrdup(RelationGetRelationName(vacrel->indrels[i]));
+
+		/*
+		 * Allocate space for worker usage statistics, making it explicit
+		 * that such statistics must be accumulated.
+		 * For now, only the autovacuum leader worker uses this, because it
+		 * must log the totals at the end of table processing.
+		 */
+		vacrel->workers_usage = AmAutoVacuumWorkerProcess() ?
+			(PVWorkersUsage *) palloc0(sizeof(PVWorkersUsage)) :
+			NULL;
 	}
 
 	/*
@@ -1012,6 +1027,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 vacrel->relnamespace,
 							 vacrel->relname,
 							 vacrel->num_index_scans);
+			if (vacrel->workers_usage)
+				appendStringInfo(&buf,
+								 _("parallel worker usage across all index scans: launched in total = %d, planned in total = %d\n"),
+								 vacrel->workers_usage->nlaunched,
+								 vacrel->workers_usage->nplanned);
 			appendStringInfo(&buf, _("pages: %u removed, %u remain, %u scanned (%.2f%% of total), %u eagerly scanned\n"),
 							 vacrel->removed_pages,
 							 new_rel_pages,
@@ -2634,7 +2654,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	{
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
-											vacrel->num_index_scans);
+											vacrel->num_index_scans,
+											vacrel->workers_usage);
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3047,7 +3068,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
-											estimated_count);
+											estimated_count,
+											vacrel->workers_usage);
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 38cd6f68105..831cc64b529 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -227,7 +227,7 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum);
+												bool vacuum, PVWorkersUsage *wusage);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -510,7 +510,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans)
+									int num_index_scans, PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -521,7 +521,7 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true, wusage);
 }
 
 /*
@@ -529,7 +529,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
  */
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, bool estimated_count)
+									int num_index_scans, bool estimated_count,
+									PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -541,7 +542,7 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false, wusage);
 }
 
 /*
@@ -626,7 +627,7 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum)
+									bool vacuum, PVWorkersUsage *wusage)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -750,6 +751,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 									 "launched %d parallel vacuum workers for index cleanup (planned: %d)",
 									 pvs->pcxt->nworkers_launched),
 							pvs->pcxt->nworkers_launched, nworkers)));
+
+		/* Remember these values if the caller asked us to. */
+		if (wusage != NULL)
+		{
+			wusage->nlaunched += pvs->pcxt->nworkers_launched;
+			wusage->nplanned += nworkers;
+		}
 	}
 
 	/* Vacuum the indexes that can be processed by only leader process */
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 14eeccbd718..64b23687506 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -295,6 +295,16 @@ typedef struct VacDeadItemsInfo
 	int64		num_items;		/* current # of entries */
 } VacDeadItemsInfo;
 
+/*
+ * PVWorkersUsage stores the total number of launched and planned parallel
+ * workers during a parallel vacuum.
+ */
+typedef struct PVWorkersUsage
+{
+	int		nlaunched;
+	int		nplanned;
+}		PVWorkersUsage;
+
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
 extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -389,11 +399,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
 extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
-												int num_index_scans);
+												int num_index_scans,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												bool estimated_count);
+												bool estimated_count,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
-- 
2.43.0

From 74329dfbaebff1878c443d70b45aa1b5f7f2ef74 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <d.davy...@postgrespro.ru>
Date: Sun, 20 Jul 2025 23:03:57 +0700
Subject: [PATCH v8 1/2] Parallel index autovacuum

---
 src/backend/access/common/reloptions.c        |  12 ++
 src/backend/commands/vacuumparallel.c         |  46 ++++++-
 src/backend/postmaster/autovacuum.c           | 120 +++++++++++++++++-
 src/backend/utils/init/globals.c              |   1 +
 src/backend/utils/misc/guc_tables.c           |  10 ++
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/include/miscadmin.h                       |   1 +
 src/include/postmaster/autovacuum.h           |   4 +
 src/include/utils/guc_hooks.h                 |   1 +
 src/include/utils/rel.h                       |   2 +
 10 files changed, 190 insertions(+), 8 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 50747c16396..54abe7f21f5 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -222,6 +222,16 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Maximum number of parallel autovacuum workers that can be taken from the bgworkers pool to process this table. "
+			"If the value is 0, the parallel degree is computed based on the number of indexes.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		-1, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1872,6 +1882,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 0feea1d30ec..38cd6f68105 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" refers to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -34,6 +36,7 @@
 #include "executor/instrument.h"
 #include "optimizer/paths.h"
 #include "pgstat.h"
+#include "postmaster/autovacuum.h"
 #include "storage/bufmgr.h"
 #include "tcop/tcopprot.h"
 #include "utils/lsyscache.h"
@@ -373,8 +376,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -435,6 +439,8 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 void
 parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
 {
+	int nlaunched_workers;
+
 	Assert(!IsParallelWorker());
 
 	/* Copy the updated statistics */
@@ -453,7 +459,13 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
 
 	TidStoreDestroy(pvs->dead_items);
 
+	nlaunched_workers = pvs->pcxt->nworkers_launched; /* remember this value */
 	DestroyParallelContext(pvs->pcxt);
+
+	/* Release all launched (i.e. reserved) parallel autovacuum workers. */
+	if (AmAutoVacuumWorkerProcess())
+		AutoVacuumReleaseParallelWorkers(nlaunched_workers);
+
 	ExitParallelMode();
 
 	pfree(pvs->will_parallel_vacuum);
@@ -553,12 +565,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_parallel_workers;
+
+	max_parallel_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_parallel_workers == 0)
 		return 0;
 
 	/*
@@ -597,8 +614,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_parallel_workers);
 
 	return parallel_workers;
 }
@@ -646,6 +663,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/*
+	 * Also reserve workers in the global autovacuum state. Note that we may
+	 * be given fewer workers than we requested.
+	 */
+	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+		nworkers = AutoVacuumReserveParallelWorkers(nworkers);
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -690,6 +714,16 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		LaunchParallelWorkers(pvs->pcxt);
 
+		if (AmAutoVacuumWorkerProcess() &&
+			pvs->pcxt->nworkers_launched < nworkers)
+		{
+			/*
+			 * Tell autovacuum that we could not launch all the previously
+			 * reserved workers.
+			 */
+			AutoVacuumReleaseParallelWorkers(nworkers - pvs->pcxt->nworkers_launched);
+		}
+
 		if (pvs->pcxt->nworkers_launched > 0)
 		{
 			/*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 9474095f271..61a50c9eca8 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -285,6 +285,7 @@ typedef struct AutoVacuumWorkItem
  * av_workItems		work item array
  * av_nworkersForBalance the number of autovacuum workers to use when
  * 					calculating the per worker cost limit
+ * av_freeParallelWorkers the number of free parallel autovacuum workers
  *
  * This struct is protected by AutovacuumLock, except for av_signal and parts
  * of the worker list (see above).
@@ -299,6 +300,7 @@ typedef struct
 	WorkerInfo	av_startingWorker;
 	AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
 	pg_atomic_uint32 av_nworkersForBalance;
+	uint32 av_freeParallelWorkers;
 } AutoVacuumShmemStruct;
 
 static AutoVacuumShmemStruct *AutoVacuumShmem;
@@ -354,6 +356,7 @@ static void autovac_report_workitem(AutoVacuumWorkItem *workitem,
 static void avl_sigusr2_handler(SIGNAL_ARGS);
 static bool av_worker_available(void);
 static void check_av_worker_gucs(void);
+static void adjust_free_parallel_workers(int prev_max_parallel_workers);
 
 
 
@@ -753,6 +756,8 @@ ProcessAutoVacLauncherInterrupts(void)
 	if (ConfigReloadPending)
 	{
 		int			autovacuum_max_workers_prev = autovacuum_max_workers;
+		int			autovacuum_max_parallel_workers_prev =
+			autovacuum_max_parallel_workers;
 
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
@@ -769,6 +774,14 @@ ProcessAutoVacLauncherInterrupts(void)
 		if (autovacuum_max_workers_prev != autovacuum_max_workers)
 			check_av_worker_gucs();
 
+		/*
+		 * If autovacuum_max_parallel_workers changed, we must adjust the
+		 * number of available parallel autovacuum workers in shmem.
+		 */
+		if (autovacuum_max_parallel_workers_prev !=
+			autovacuum_max_parallel_workers)
+			adjust_free_parallel_workers(autovacuum_max_parallel_workers_prev);
+
 		/* rebuild the list in case the naptime changed */
 		rebuild_database_list(InvalidOid);
 	}
@@ -2847,8 +2860,12 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether the table's indexes need to be processed in parallel. */
+		tab->at_params.nworkers = avopts
+			? avopts->autovacuum_parallel_workers
+			: -1;
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -3329,6 +3346,68 @@ AutoVacuumRequestWork(AutoVacuumWorkItemType type, Oid relationId,
 	return result;
 }
 
+/*
+ * In order to respect the 'autovacuum_max_parallel_workers' limit, the leader
+ * worker must call this function. It returns the number of parallel workers
+ * that can actually be launched and reserves those workers (if any) in the
+ * global autovacuum state.
+ *
+ * NOTE: We try to provide as many workers as requested, even if the caller
+ * ends up occupying all available workers.
+ */
+int
+AutoVacuumReserveParallelWorkers(int nworkers)
+{
+	int can_launch;
+
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/* Provide as many workers as we can. */
+	can_launch = Min(AutoVacuumShmem->av_freeParallelWorkers, nworkers);
+	AutoVacuumShmem->av_freeParallelWorkers -= can_launch;
+
+	LWLockRelease(AutovacuumLock);
+	return can_launch;
+}
+
+/*
+ * When parallel autovacuum workers die, the leader worker must call this
+ * function in order to refresh the global autovacuum state, so that other
+ * leaders can use these workers.
+ *
+ * 'nworkers' - how many workers the caller wants to release.
+ */
+void
+AutoVacuumReleaseParallelWorkers(int nworkers)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess() && !IsParallelWorker());
+
+	/* Pick up any pending change to the autovacuum_max_parallel_workers parameter */
+	CHECK_FOR_INTERRUPTS();
+	if (ConfigReloadPending)
+	{
+		ConfigReloadPending = false;
+		ProcessConfigFile(PGC_SIGHUP);
+	}
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * If the autovacuum_max_parallel_workers parameter was reduced during
+	 * parallel autovacuum execution, we must cap the number of available
+	 * workers at its new value.
+	 */
+	AutoVacuumShmem->av_freeParallelWorkers =
+		Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
+			autovacuum_max_parallel_workers);
+
+	LWLockRelease(AutovacuumLock);
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3389,6 +3468,8 @@ AutoVacuumShmemInit(void)
 		Assert(!found);
 
 		AutoVacuumShmem->av_launcherpid = 0;
+		AutoVacuumShmem->av_freeParallelWorkers =
+			autovacuum_max_parallel_workers;
 		dclist_init(&AutoVacuumShmem->av_freeWorkers);
 		dlist_init(&AutoVacuumShmem->av_runningWorkers);
 		AutoVacuumShmem->av_startingWorker = NULL;
@@ -3439,6 +3520,12 @@ check_autovacuum_work_mem(int *newval, void **extra, GucSource source)
 	return true;
 }
 
+void
+assign_autovacuum_max_parallel_workers(int newval, void *extra)
+{
+	autovacuum_max_parallel_workers = Min(newval, max_worker_processes);
+}
+
 /*
  * Returns whether there is a free autovacuum worker slot available.
  */
@@ -3470,3 +3557,32 @@ check_av_worker_gucs(void)
 				 errdetail("The server will only start up to \"autovacuum_worker_slots\" (%d) autovacuum workers at a given time.",
 						   autovacuum_worker_slots)));
 }
+
+/*
+ * Make sure that the number of free parallel workers corresponds to the
+ * autovacuum_max_parallel_workers parameter (after it was changed).
+ */
+static void
+adjust_free_parallel_workers(int prev_max_parallel_workers)
+{
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * Cap the number of free workers at the new parameter value, if needed.
+	 */
+	AutoVacuumShmem->av_freeParallelWorkers =
+		Min(AutoVacuumShmem->av_freeParallelWorkers,
+			autovacuum_max_parallel_workers);
+
+	if (autovacuum_max_parallel_workers > prev_max_parallel_workers)
+	{
+		/*
+		 * If the user increased the number of parallel autovacuum workers, we
+		 * must increase the number of free workers accordingly.
+		 */
+		AutoVacuumShmem->av_freeParallelWorkers +=
+			(autovacuum_max_parallel_workers - prev_max_parallel_workers);
+	}
+
+	LWLockRelease(AutovacuumLock);
+}
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index d31cb45a058..977644978c1 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 0;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index d14b1678e7f..4941ad976df 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3604,6 +3604,16 @@ struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"autovacuum_max_parallel_workers", PGC_SIGHUP, VACUUM_AUTOVACUUM,
+			gettext_noop("Maximum number of parallel autovacuum workers that can be taken from the bgworkers pool."),
+			gettext_noop("This parameter is capped by \"max_worker_processes\" (not by \"autovacuum_max_workers\")."),
+		},
+		&autovacuum_max_parallel_workers,
+		0, 0, MAX_BACKENDS,
+		NULL, assign_autovacuum_max_parallel_workers, NULL
+	},
+
 	{
 		{"max_parallel_maintenance_workers", PGC_USERSET, RESOURCES_WORKER_PROCESSES,
 			gettext_noop("Sets the maximum number of parallel processes per maintenance operation."),
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index a9d8293474a..bbf5307000f 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -683,6 +683,7 @@
 autovacuum_worker_slots = 16	# autovacuum worker slots to allocate
 					# (change requires restart)
 #autovacuum_max_workers = 3		# max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 0	# disabled by default and limited by max_worker_processes
 #autovacuum_naptime = 1min		# time between autovacuum runs
 #autovacuum_vacuum_threshold = 50	# min number of row updates before
 					# vacuum
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 1bef98471c3..85926415657 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -177,6 +177,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index e8135f41a1c..863d206f2bd 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -64,6 +64,10 @@ pg_noreturn extern void AutoVacWorkerMain(const void *startup_data, size_t start
 extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 								  Oid relationId, BlockNumber blkno);
 
+/* parallel autovacuum stuff */
+extern int AutoVacuumReserveParallelWorkers(int nworkers);
+extern void AutoVacuumReleaseParallelWorkers(int nworkers);
+
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
 extern void AutoVacuumShmemInit(void);
diff --git a/src/include/utils/guc_hooks.h b/src/include/utils/guc_hooks.h
index 82ac8646a8d..04833b4f147 100644
--- a/src/include/utils/guc_hooks.h
+++ b/src/include/utils/guc_hooks.h
@@ -31,6 +31,7 @@ extern void assign_application_name(const char *newval, void *extra);
 extern const char *show_archive_command(void);
 extern bool check_autovacuum_work_mem(int *newval, void **extra,
 									  GucSource source);
+extern void assign_autovacuum_max_parallel_workers(int newval, void *extra);
 extern bool check_vacuum_buffer_usage_limit(int *newval, void **extra,
 											GucSource source);
 extern bool check_backtrace_functions(char **newval, void **extra,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index b552359915f..377000199d7 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,8 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+	int			autovacuum_parallel_workers; /* max number of parallel
+												autovacuum workers */
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0
