Hi,

On Tue, Mar 3, 2026 at 5:26 AM Masahiko Sawada <[email protected]> wrote:
>
> On Sun, Mar 1, 2026 at 6:46 AM Daniil Davydov <[email protected]> wrote:
> >
> > Thus, a/v leader cannot launch any workers if max_parallel_workers is set 
> > to 0.
>
> Right. But this fact would actually support that limiting
> autovacuum_max_parallel_workers by max_parallel_workers is more
> appropriate, no?
>

av_max_parallel_workers is really limited by max_parallel_workers only
during shmem init. After that we can raise it above max_parallel_workers,
and nothing bad will happen (obviously).

So, my point was: why should we have this explicit limitation if it
1) doesn't guard us against anything bad and 2) can be violated at any time
(via ALTER SYSTEM SET ...)?
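
For example, nothing prevents something like this from taking effect after a
reload, even if max_parallel_workers is, say, only 8:

    ALTER SYSTEM SET autovacuum_max_parallel_workers = 32;
    SELECT pg_reload_conf();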

Now it seems to me that limiting our parameter by max_parallel_workers is
more about grouping logically related parameters than about practical necessity.

> > Even if there is a bug in the code and a/v leader cannot release parallel
> > workers due to occured error, one day it will finish vacuuming and call
> > "proc_exit". During "proc_exit" the "before_shmem_exit_hook" along with
> > the "ReleaseAllParallelWorkers" will be called.
>
> What bugs are you concerned about in this case? I'm not sure what you
> meant by "a/v leader cannot release parallel workers due to occured
> error". It sounds like you mentioned a case where there is a bug in
> AutoVacuumReleaseParallelWorkers() but if there is the bug and the
> leader failed to release parallel workers, we would end up not writing
> these elogs in either case.
>

Not precisely. I mean a bug that causes the a/v leader to not call
AutoVacuumReleaseParallelWorkers in the try/catch block.
I'll continue my thoughts below.

> > I suppose to do the same as we did for try/catch block - add logging inside
> > the "autovacuum_worker_before_shmem_exit" with some unique message.
> > Thus, we will be sure that the workers are released precisely in the
> > "before_shmem_exit_hook".
> >
> > The alternative is to pass some additional information to the
> > "ReleaseAllParallelWorkers" function (to supplement the log it emits), but 
> > it
> > doesn't seem like a good solution to me.
>
> I'm not sure if it's important to check how
> AutoVacuumReleaseAllParallelWorkers() has been called (either in
> PG_CATCH() block or by autovacuum_worker_before_shmem_exit()). We
> would end up having to add a unique message to each caller of
> AutoVacuumReleaseAllParallelWorkers() in the future. I guess it's more
> important to make sure that all workers have been released in the end.
>
> In that sense, it would make more sense to check that all workers have
> actually been released (i.e., checking by
> get_parallel_autovacuum_free_workers()) after a parallel vacuum
> instead of checking workers being released by debug logs. That is, we
> can check at each test end if get_parallel_autovacuum_free_workers()
> returns the expected number after disabling parallel autovacuum.
>

Sure, at first we want to check whether all workers have been
released. But the ability to release them precisely in the try/catch
block is also important, because if that doesn't happen, the a/v worker
can "hold" these workers until it finishes vacuuming the other tables in
its list (which can take a lot of time). Such a situation would surely
degrade performance, so I think we must check that workers can be released
precisely during ERROR handling. Do you agree?
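
Roughly, the shape I mean is what do_autovacuum() already does in the attached
patches (just a sketch here, not the exact code):

    /* per-table loop inside do_autovacuum() */
    PG_TRY();
    {
        /* vacuum one table; parallel workers may be reserved here */
    }
    PG_CATCH();
    {
        /*
         * Release the reserved parallel workers right away, so they are not
         * held while we continue with the remaining tables in our list.
         */
        AutoVacuumReleaseAllParallelWorkers();

        /* abort the transaction and proceed with the next table */
    }
    PG_END_TRY();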

I understand your concerns about adding a unique log message for each
ReleaseAll call. But I cannot imagine a new situation in which we would
need to release workers in an emergency. If you think it might be possible,
I can propose adding a new optional parameter to the "ReleaseAll" function -
something like "char *context_msg" - which would be appended to the elog
emitted inside this function.
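
A rough sketch of what I mean (the name and message text are just suggestions):

    void
    AutoVacuumReleaseAllParallelWorkers(char *context_msg)
    {
        int     nreleased = av_nworkers_reserved;

        /* ... release all reserved workers, as the function does now ... */

        if (nreleased > 0)
            elog(DEBUG2, "released %d parallel autovacuum workers (%s)",
                 nreleased,
                 context_msg ? context_msg : "unknown context");
    }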

> On second thoughts on the "planned" and "reserved", can we consider
> what the patch implemented as "reserved" as the "planned" in
> autovacuum cases? That is, in autovacuum cases, the "planned" number
> considers the number of parallel degrees based on the number of
> indexes (or autovacuum_parallel_workers value) as well as the number
> of workers that have actually been reserved. In cases of
> autovacuum_max_parallel_workers shortage, users would notice by seeing
> logs that enough workers are not planned in the first place against
> the number of indexes on the table. That might be less confusing for
> users rather than introducing a new "reserved" concept in the vacuum
> logs. Also, it slightly helps simplify the codes.

Yeah, it sounds tempting. But in this case we're shifting more responsibility
to the user. For instance:
If av_max_workers = 5 and there are two a/v leaders, each trying to launch
3 parallel workers, we will see logs like "3 planned, 3 launched" and
"2 planned, 2 launched". IMHO, such logs don't imply that there is a
shortage of workers. I.e., it becomes the user's responsibility to notice
that the second a/v leader could have used more than 2 workers when
processing a table with (N + 2) indexes.
In this case even our previous version of the logging would give more
information to the user: "3 planned, 3 launched" and "3 planned, 2 launched".

If we don't want to create a new "reserved" concept, maybe we can rename
it to something more intuitive? For example, "n_abandoned" - the number of
workers that we were unable to launch due to an av_max_parallel_workers
shortage. If n_abandoned is 0 and n_launched < n_planned, the user can
conclude that they should increase the max_parallel_workers parameter.
And vice versa, if n_launched == n_planned and n_abandoned > 0, the
user can conclude that they should increase the
autovacuum_max_parallel_workers parameter.

What do you think?

**Comments on the 0001 patch**

>   * of the worker list (see above).
> @@ -299,6 +308,8 @@ typedef struct
>         WorkerInfo      av_startingWorker;
>         AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
>         pg_atomic_uint32 av_nworkersForBalance;
> +       uint32          av_freeParallelWorkers;
> +       uint32          av_maxParallelWorkers;
>  } AutoVacuumShmemStruct;
>
> We should use int32 instead of uint32.

I don't mind, but I don't quite understand the reason. We assume that the
minimum value for both variables is 0. Why shouldn't we use an unsigned
data type?

**Comments on the 0003 patch**

> I've attached the proposed changes to the 0003 patch, which includes:
>
> - removal of VacuumCostParams as it's not necessary.
> - comment updates.
> - other cosmetic updates.

Thank you! Most of the proposals look good to me, but I'll edit a few comments.

**Comments on the 0004 patch**

> +#ifdef USE_INJECTION_POINTS
> +   /*
> +    * If we are parallel autovacuum worker, we can consume delay parameters
> +    * during index processing (via vacuum_delay_point call). This logging
> +    * allows tests to ensure this.
> +    */
> +   if (shared->is_autovacuum)
> +       elog(DEBUG2,
> +            "parallel autovacuum worker cost params: cost_limit=%d,
> cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d,
> cost_page_hit=%d",
> +            vacuum_cost_limit,
> +            vacuum_cost_delay,
> +            VacuumCostPageMiss,
> +            VacuumCostPageDirty,
> +            VacuumCostPageHit);
> +#endif
>
> While it's true that we use these logs only during the regression
> tests that are enabled only when injection points are also enabled,
> these logs themselves are not related to the injection points. I'd
> recommend writing these logs when the worker refreshes its local delay
> parameters (i.e., in parallel_vacuum_update_shared_delay_params()).
>

I agree (I thought about it too).

> +$node->append_conf('postgresql.conf', qq{
> +   max_worker_processes = 20
> +   max_parallel_workers = 20
> +   max_parallel_maintenance_workers = 20
> +   autovacuum_max_parallel_workers = 20
> +   log_min_messages = debug2
> +   log_autovacuum_min_duration = 0
> +   autovacuum_naptime = '1s'
> +   min_parallel_index_scan_size = 0
> +   shared_preload_libraries=test_autovacuum
> +});
>
> It would be better to set log_autovacuum_min_duration = 0 to the
> specific table instead of setting globally.
>

I agree.

> +   uint32      nfree_workers;
> +
> +#ifndef USE_INJECTION_POINTS
> +   ereport(ERROR, errmsg("injection points not supported"));
> +#endif
> +
> +   nfree_workers = AutoVacuumGetFreeParallelWorkers();
> +
> +   PG_RETURN_UINT32(nfree_workers);
> +}
>
> As I commented above, I think we should use int32 for the number of
> parallel free workers. So let's change it here too.

No problem. But again, why do we avoid unsigned integers?

> +PG_FUNCTION_INFO_V1(get_parallel_autovacuum_free_workers);
> +Datum
> +get_parallel_autovacuum_free_workers(PG_FUNCTION_ARGS)
> +{
> +   uint32      nfree_workers;
> +
> +#ifndef USE_INJECTION_POINTS
> +   ereport(ERROR, errmsg("injection points not supported"));
> +#endif
> +
>
> I think we don't necessarily need to check the USE_INJECTION_POINTS in
> this function as we already have the check in the tap tests. The
> function itself is actually workable even without injection points.
>

I agree. It is left over from the previous test implementation.

> +# Copyright (c) 2024-2025, PostgreSQL Global Development Group
> +
>
> Please update the copyright year here too.

I keep forgetting about the meson file, sorry.


Thank you very much for the review!
Please see the updated set of patches.

--
Best regards,
Daniil Davydov
From df141f9e4588ca45e8430d3accf55f4cfe3d3a9f Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:08:14 +0700
Subject: [PATCH v24 4/5] Tests for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c          |   9 +
 src/backend/commands/vacuumparallel.c         |  22 ++
 src/backend/postmaster/autovacuum.c           |  38 +++
 src/include/postmaster/autovacuum.h           |   1 +
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/test_autovacuum/.gitignore   |   2 +
 src/test/modules/test_autovacuum/Makefile     |  28 ++
 src/test/modules/test_autovacuum/meson.build  |  36 +++
 .../t/001_parallel_autovacuum.pl              | 299 ++++++++++++++++++
 .../test_autovacuum/test_autovacuum--1.0.sql  |  12 +
 .../modules/test_autovacuum/test_autovacuum.c |  31 ++
 .../test_autovacuum/test_autovacuum.control   |   3 +
 13 files changed, 483 insertions(+)
 create mode 100644 src/test/modules/test_autovacuum/.gitignore
 create mode 100644 src/test/modules/test_autovacuum/Makefile
 create mode 100644 src/test/modules/test_autovacuum/meson.build
 create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.c
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.control

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 2bcdbdcfcf3..4a3b826dde5 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -152,6 +152,7 @@
 #include "storage/latch.h"
 #include "storage/lmgr.h"
 #include "storage/read_stream.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_rusage.h"
 #include "utils/timestamp.h"
@@ -872,6 +873,14 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	lazy_check_wraparound_failsafe(vacrel);
 	dead_items_alloc(vacrel, params.nworkers);
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * Trigger the injection point if parallel autovacuum is about to start.
+	 */
+	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
 	/*
 	 * Call lazy_scan_heap to perform all required heap pruning, index
 	 * vacuuming, and heap vacuuming (plus related processing)
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 13304c40b59..82618ab3ac5 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -47,6 +47,7 @@
 #include "storage/bufmgr.h"
 #include "storage/proc.h"
 #include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 
@@ -653,6 +654,14 @@ parallel_vacuum_update_shared_delay_params(void)
 	VacuumUpdateCosts();
 
 	shared_params_generation_local = params_generation;
+
+	elog(DEBUG2,
+		 "parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
 }
 
 /*
@@ -919,6 +928,19 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 							pvs->pcxt->nworkers_launched, nworkers)));
 	}
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * To be able to check that all reserved parallel workers are released
+	 * even on error, allow injection points to trigger a failure at this
+	 * point.
+	 *
+	 * This injection point is also used to wait until parallel workers
+	 * finish their part of index processing.
+	 */
+	if (nworkers > 0)
+		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
+
 	/* Vacuum the indexes that can be processed by only leader process */
 	parallel_vacuum_process_unsafe_indexes(pvs);
 
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index cc3456e205d..1c51210883e 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1409,7 +1409,18 @@ avl_sigusr2_handler(SIGNAL_ARGS)
 static void
 autovacuum_worker_before_shmem_exit(int code, Datum arg)
 {
+	int nreserved_old = av_nworkers_reserved;
+
 	AutoVacuumReleaseAllParallelWorkers();
+
+	if (nreserved_old > 0)
+	{
+		elog(DEBUG2,
+			 ngettext("autovacuum worker before_shmem_exit: %d parallel worker has been released",
+					  "autovacuum worker before_shmem_exit: %d parallel workers have been released",
+						nreserved_old - av_nworkers_reserved),
+			 nreserved_old - av_nworkers_reserved);
+	}
 }
 
 /*
@@ -2495,12 +2506,20 @@ do_autovacuum(void)
 		}
 		PG_CATCH();
 		{
+			int	nreserved_workers = av_nworkers_reserved;
+
 			/*
 			 * Parallel autovacuum can reserve parallel workers. Make sure
 			 * that all reserved workers are released.
 			 */
 			AutoVacuumReleaseAllParallelWorkers();
 
+			if (nreserved_workers > 0)
+				ereport(DEBUG2,
+						(errmsg("%d parallel autovacuum workers have been released after an error occurred",
+								nreserved_workers),
+						 errhidecontext(true)));
+
 			/*
 			 * Abort the transaction, start a new one, and proceed with the
 			 * next table in our list.
@@ -3465,6 +3484,21 @@ AutoVacuumReleaseAllParallelWorkers(void)
 	Assert(av_nworkers_reserved == 0);
 }
 
+/*
+ * Get number of free autovacuum parallel workers.
+ */
+int32
+AutoVacuumGetFreeParallelWorkers(void)
+{
+	int32		nfree_workers;
+
+	LWLockAcquire(AutovacuumLock, LW_SHARED);
+	nfree_workers = AutoVacuumShmem->av_freeParallelWorkers;
+	LWLockRelease(AutovacuumLock);
+
+	return nfree_workers;
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3633,5 +3667,9 @@ adjust_free_parallel_workers(int prev_max_parallel_workers)
 	AutoVacuumShmem->av_freeParallelWorkers = Max(nfree_workers, 0);
 	AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
 
+	elog(DEBUG2,
+		 "number of free parallel autovacuum workers is set to %u due to config reload",
+		 AutoVacuumShmem->av_freeParallelWorkers);
+
 	LWLockRelease(AutovacuumLock);
 }
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index f3783afb51b..d60010a43b4 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -66,6 +66,7 @@ extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 extern void	AutoVacuumReserveParallelWorkers(int *nworkers);
 extern void AutoVacuumReleaseParallelWorkers(int nworkers);
 extern void AutoVacuumReleaseAllParallelWorkers(void);
+extern int32 AutoVacuumGetFreeParallelWorkers(void);
 
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 4ac5c84db43..01fe0041c97 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
 		  plsample \
 		  spgist_name_ops \
 		  test_aio \
+		  test_autovacuum \
 		  test_binaryheap \
 		  test_bitmapset \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index e2b3eef4136..9dcdc68bc87 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
 subdir('test_aio')
+subdir('test_autovacuum')
 subdir('test_binaryheap')
 subdir('test_bitmapset')
 subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..32254c53a5d
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,28 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+MODULE_big = test_autovacuum
+OBJS = \
+	$(WIN32RES) \
+	test_autovacuum.o
+
+EXTENSION = test_autovacuum
+DATA = test_autovacuum--1.0.sql
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..969af8bd52a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,36 @@
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+test_autovacuum_sources = files(
+  'test_autovacuum.c',
+)
+
+if host_system == 'windows'
+  test_autovacuum_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'test_autovacuum',
+    '--FILEDESC', 'test_autovacuum - test code for parallel autovacuum',])
+endif
+
+test_autovacuum = shared_module('test_autovacuum',
+  test_autovacuum_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += test_autovacuum
+
+test_install_data += files(
+  'test_autovacuum.control',
+  'test_autovacuum--1.0.sql',
+)
+
+tests += {
+  'name': 'test_autovacuum',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'env': {
+       'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+    },
+    'tests': [
+      't/001_parallel_autovacuum.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..7f8b5a7b4d3
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,299 @@
+# Test parallel autovacuum behavior
+
+use strict;
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+	plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test we should disable autovacuum for 'test_autovac' table and
+# generate some dead tuples in it.
+
+sub prepare_for_next_test
+{
+	my ($node, $test_number) = @_;
+
+	$node->safe_psql('postgres', qq{
+		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+	});
+
+	$node->safe_psql('postgres', qq{
+		UPDATE test_autovac SET col_1 = $test_number;
+	});
+}
+
+
+my $psql_out;
+my $log_start = 0;
+
+my $node = PostgreSQL::Test::Cluster->new('node1');
+$node->init;
+
+# Configure postgres so that it can launch parallel autovacuum workers, log
+# all the information we are interested in, and run autovacuum frequently
+$node->append_conf('postgresql.conf', qq{
+	max_worker_processes = 20
+	max_parallel_workers = 20
+	max_parallel_maintenance_workers = 20
+	autovacuum_max_parallel_workers = 20
+	log_min_messages = debug2
+	autovacuum_naptime = '1s'
+	min_parallel_index_scan_size = 0
+	shared_preload_libraries=test_autovacuum
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+	plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql('postgres', qq{
+	CREATE EXTENSION test_autovacuum;
+	CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 4;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql('postgres', qq{
+	CREATE TABLE test_autovac (
+		id SERIAL PRIMARY KEY,
+		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+			log_autovacuum_min_duration = 0);
+
+	INSERT INTO test_autovac
+	SELECT
+		g AS col1,
+		g + 1 AS col2,
+		g + 2 AS col3,
+		g + 3 AS col4
+	FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql('postgres', qq{
+	DO \$\$
+	DECLARE
+		i INTEGER;
+	BEGIN
+		FOR i IN 1..$indexes_num LOOP
+			EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+		END LOOP;
+	END \$\$;
+});
+
+# Test 1 :
+# Our table has enough indexes and appropriate reloptions, so autovacuum must
+# be able to process it in parallel mode. Just check if it can.
+# Also check whether all requested workers:
+# 	1) launched
+# 	2) correctly released
+
+prepare_for_next_test($node, 1);
+
+$node->safe_psql('postgres', qq{
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table is completed. At the same
+# time, check that the required number of parallel workers has been launched.
+$log_start = $node->wait_for_log(
+	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
+	$log_start
+);
+
+$psql_out = $node->safe_psql('postgres', qq{
+	SELECT get_parallel_autovacuum_free_workers();
+});
+is($psql_out, 20, 'All parallel workers have been released by the leader');
+
+# Test 2:
+# Check whether parallel autovacuum leader can propagate cost-based parameters
+# to parallel workers.
+
+prepare_for_next_test($node, 2);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is initialized
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-start-parallel-vacuum'
+);
+
+# Reload config - the leader must update its own parameters during index
+# processing
+$node->safe_psql('postgres', qq{
+	ALTER SYSTEM SET vacuum_cost_limit = 500;
+	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+	ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+	SELECT pg_reload_conf();
+});
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+
+# Now wait until the parallel autovacuum leader completes processing the
+# table (i.e. it is guaranteed to call vacuum_delay_point) and launches the
+# parallel worker.
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+# Check whether parallel worker successfully updated all parameters during
+# index processing
+$log_start = $node->wait_for_log(
+	qr/parallel autovacuum worker cost params: cost_limit=500, cost_delay=2, / .
+	qr/cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+});
+
+# Test 3:
+# Test adjustment of the number of free parallel workers when changing the
+# autovacuum_max_parallel_workers parameter
+
+prepare_for_next_test($node, 3);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+$node->safe_psql('postgres', qq{
+	ALTER SYSTEM SET autovacuum_max_parallel_workers = 1;
+	SELECT pg_reload_conf();
+});
+
+# Since 2 parallel workers have already been launched and will be released
+# later, we expect that:
+# 1) the number of free workers will be '0' after the config reload
+# 2) the number of free workers will be '1' after the workers are released
+
+# Check statement (1)
+$log_start = $node->wait_for_log(
+	qr/number of free parallel autovacuum workers is set to 0 due to config reload/,
+	$log_start
+);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+# Wait until the end of parallel processing
+$log_start = $node->wait_for_log(
+	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
+	$log_start
+);
+
+# Check statement (2)
+$psql_out = $node->safe_psql('postgres', qq{
+	SELECT get_parallel_autovacuum_free_workers();
+});
+is($psql_out, 1, 'Number of free parallel workers is consistent');
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+	ALTER SYSTEM SET autovacuum_max_parallel_workers = 10;
+	SELECT pg_reload_conf();
+});
+
+# Test 4:
+# We want parallel autovacuum workers to be released even if the leader gets
+# an error. First, simulate the situation where the leader exits due to an
+# ERROR.
+
+prepare_for_next_test($node, 4);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'error');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$log_start = $node->wait_for_log(
+	qr/error triggered for injection point / .
+	qr/autovacuum-leader-before-indexes-processing/,
+	$log_start
+);
+
+$log_start = $node->wait_for_log(
+	qr/2 parallel autovacuum workers have been released after an error occurred/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+# Test 5:
+# Same as the test above, but simulate the situation where the leader exits
+# due to FATAL.
+
+prepare_for_next_test($node, 5);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until parallel workers are reserved by the autovacuum leader, then
+# kill the leader
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+my $av_pid = $node->safe_psql('postgres', qq{
+	SELECT pid FROM pg_stat_activity
+	WHERE backend_type = 'autovacuum worker'
+	  AND wait_event = 'autovacuum-leader-before-indexes-processing'
+	LIMIT 1;
+});
+
+$node->safe_psql('postgres', qq{
+	SELECT pg_terminate_backend('$av_pid');
+});
+
+$log_start = $node->wait_for_log(
+	qr/autovacuum worker before_shmem_exit: 2 parallel workers have been released/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
diff --git a/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
new file mode 100644
index 00000000000..e5646e0def5
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
@@ -0,0 +1,12 @@
+/* src/test/modules/test_autovacuum/test_autovacuum--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION test_autovacuum" to load this file. \quit
+
+/*
+ * Functions for inspecting shared autovacuum state
+ */
+
+CREATE FUNCTION get_parallel_autovacuum_free_workers()
+RETURNS INTEGER STRICT
+AS 'MODULE_PATHNAME' LANGUAGE C;
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.c b/src/test/modules/test_autovacuum/test_autovacuum.c
new file mode 100644
index 00000000000..dd5c839e851
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.c
@@ -0,0 +1,31 @@
+/*-------------------------------------------------------------------------
+ *
+ * test_autovacuum.c
+ *		Helpers to write tests for parallel autovacuum
+ *
+ * Copyright (c) 2020-2026, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/test/modules/test_autovacuum/test_autovacuum.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "postmaster/autovacuum.h"
+#include "utils/injection_point.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(get_parallel_autovacuum_free_workers);
+Datum
+get_parallel_autovacuum_free_workers(PG_FUNCTION_ARGS)
+{
+	int32		nfree_workers;
+
+	nfree_workers = AutoVacuumGetFreeParallelWorkers();
+
+	PG_RETURN_INT32(nfree_workers);
+}
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.control b/src/test/modules/test_autovacuum/test_autovacuum.control
new file mode 100644
index 00000000000..1b7fad258f0
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.control
@@ -0,0 +1,3 @@
+comment = 'Test code for parallel autovacuum'
+default_version = '1.0'
+module_pathname = '$libdir/test_autovacuum'
-- 
2.43.0

From 1b99783b4be5909cd5d168f5e019a5d3e2a2118c Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 15 Jan 2026 23:15:48 +0700
Subject: [PATCH v24 3/5] Cost based parameters propagation for parallel
 autovacuum

---
 src/backend/commands/vacuum.c         |  21 +++-
 src/backend/commands/vacuumparallel.c | 157 ++++++++++++++++++++++++++
 src/backend/postmaster/autovacuum.c   |   2 +-
 src/include/commands/vacuum.h         |   2 +
 src/tools/pgindent/typedefs.list      |   1 +
 5 files changed, 180 insertions(+), 3 deletions(-)

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index b9840637783..5fba48d0536 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2434,8 +2434,19 @@ vacuum_delay_point(bool is_analyze)
 	/* Always check for interrupts */
 	CHECK_FOR_INTERRUPTS();
 
-	if (InterruptPending ||
-		(!VacuumCostActive && !ConfigReloadPending))
+	if (InterruptPending)
+		return;
+
+	if (IsParallelWorker())
+	{
+		/*
+		 * Update cost-based vacuum delay parameters for a parallel autovacuum
+		 * worker if any changes are detected.
+		 */
+		parallel_vacuum_update_shared_delay_params();
+	}
+
+	if (!VacuumCostActive && !ConfigReloadPending)
 		return;
 
 	/*
@@ -2449,6 +2460,12 @@ vacuum_delay_point(bool is_analyze)
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
 		VacuumUpdateCosts();
+
+		/*
+		 * Propagate cost-based vacuum delay parameters to shared memory if
+		 * any of them have changed during the config reload.
+		 */
+		parallel_vacuum_propagate_shared_delay_params();
 	}
 
 	/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 643849b2fb8..13304c40b59 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -18,6 +18,13 @@
  * the parallel context is re-initialized so that the same DSM can be used for
  * multiple passes of index bulk-deletion and index cleanup.
  *
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
  * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
@@ -54,6 +61,31 @@
 #define PARALLEL_VACUUM_KEY_WAL_USAGE		4
 #define PARALLEL_VACUUM_KEY_INDEX_STATS		5
 
+/*
+ * Struct for cost-based vacuum delay related parameters to share among an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+	/*
+	 * The generation counter is incremented by the leader process each time
+	 * it updates the shared cost-based vacuum delay parameters. Paralell
+	 * it updates the shared cost-based vacuum delay parameters. Parallel
+	 * vacuum workers compare it with their local generation,
+	 * their local parameters.
+	 */
+	pg_atomic_uint32 generation;
+
+	slock_t		mutex;			/* protects all fields below */
+
+	/* Parameters to share with parallel workers */
+	double		cost_delay;
+	int			cost_limit;
+	int			cost_page_dirty;
+	int			cost_page_hit;
+	int			cost_page_miss;
+} PVSharedCostParams;
+
 /*
  * Shared information among parallel workers.  So this is allocated in the DSM
  * segment.
@@ -123,6 +155,18 @@ typedef struct PVShared
 
 	/* Statistics of shared dead items */
 	VacDeadItemsInfo dead_items_info;
+
+	/*
+	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
+	 * running parallel maintenance VACUUM.
+	 */
+	bool		is_autovacuum;
+
+	/*
+	 * Struct for syncing cost-based vacuum delay parameters between the
+	 * leader and its supporting parallel autovacuum workers.
+	 */
+	PVSharedCostParams cost_params;
 } PVShared;
 
 /* Status used during parallel index vacuum or cleanup */
@@ -225,6 +269,11 @@ struct ParallelVacuumState
 	PVIndVacStatus status;
 };
 
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/* See comments in the PVSharedCostParams for the details */
+static uint32 shared_params_generation_local = 0;
+
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
@@ -236,6 +285,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation
 static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
 
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
@@ -396,6 +446,21 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
 
+	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+	/*
+	 * Initialize shared cost-based vacuum delay parameters if it's for
+	 * autovacuum.
+	 */
+	if (shared->is_autovacuum)
+	{
+		parallel_vacuum_set_cost_parameters(&shared->cost_params);
+		pg_atomic_init_u32(&shared->cost_params.generation, 0);
+		SpinLockInit(&shared->cost_params.mutex);
+
+		pv_shared_cost_params = &(shared->cost_params);
+	}
+
 	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
 	pvs->shared = shared;
 
@@ -540,6 +605,95 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 										&wusage->cleanup);
 }
 
+/*
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+	params->cost_delay = vacuum_cost_delay;
+	params->cost_limit = vacuum_cost_limit;
+	params->cost_page_dirty = VacuumCostPageDirty;
+	params->cost_page_hit = VacuumCostPageHit;
+	params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
+ *
+ * For non-autovacuum parallel workers this function has no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+	uint32		params_generation;
+
+	Assert(IsParallelWorker());
+
+	/* Quick return if the worker is not running for autovacuum */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+	Assert(shared_params_generation_local <= params_generation);
+
+	/* Return if the parameters have not changed in the leader */
+	if (params_generation == shared_params_generation_local)
+		return;
+
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	VacuumCostDelay = pv_shared_cost_params->cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	VacuumUpdateCosts();
+
+	shared_params_generation_local = params_generation;
+}
+
+/*
+ * Store the cost-based vacuum delay parameters in the shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/*
+	 * Quick return if the leader process is not sharing the delay parameters.
+	 */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	/*
+	 * Check if any delay parameters have changed. We can read them without
+	 * locks as only the leader can modify them.
+	 */
+	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+		vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+		VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+		VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+		VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
+		return;
+
+	/* Update the shared delay parameters */
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	/*
+	 * Increment the generation of the parameters, i.e. let parallel workers
+	 * know that they should re-read shared cost params.
+	 */
+	pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
 /*
  * Compute the number of parallel worker processes to request.  Both index
  * vacuum and index cleanup can be executed with parallel workers.
@@ -1109,6 +1263,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = &(shared->cost_params);
+
 	/* Set parallel vacuum state */
 	pvs.indrels = indrels;
 	pvs.nindexes = nindexes;
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 267fdcbe1a8..cc3456e205d 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1690,7 +1690,7 @@ VacuumUpdateCosts(void)
 	}
 	else
 	{
-		/* Must be explicit VACUUM or ANALYZE */
+		/* Must be explicit VACUUM or ANALYZE or parallel autovacuum worker */
 		vacuum_cost_delay = VacuumCostDelay;
 		vacuum_cost_limit = VacuumCostLimit;
 	}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 1b1fb625cb2..4bfeba8264d 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -434,6 +434,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												int num_index_scans,
 												bool estimated_count,
 												PVWorkersUsage *wusage);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 536237ff546..1120646f2c8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2070,6 +2070,7 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVSharedCostParams
 PVWorkersUsage
 PVWorkersStats
 PX_Alias
-- 
2.43.0

From ff36d0daf6abb1d74370111a18762643e417aba8 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:07:47 +0700
Subject: [PATCH v24 2/5] Logging for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c  | 54 ++++++++++++++++++++++++++-
 src/backend/commands/vacuumparallel.c | 32 +++++++++++++---
 src/include/commands/vacuum.h         | 39 ++++++++++++++++++-
 src/tools/pgindent/typedefs.list      |  2 +
 4 files changed, 117 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 5b6f2441f6b..2bcdbdcfcf3 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -342,6 +342,13 @@ typedef struct LVRelState
 	int			num_index_scans;
 	int			num_dead_items_resets;
 	Size		total_dead_items_bytes;
+
+	/*
+	 * Total numbers of planned, reserved, and actually launched parallel
+	 * workers for index vacuuming and cleanup.
+	 */
+	PVWorkersUsage workers_usage;
+
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
 	int64		tuples_frozen;	/* # newly frozen */
@@ -780,6 +787,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	vacrel->new_all_visible_all_frozen_pages = 0;
 	vacrel->new_all_frozen_pages = 0;
 
+	vacrel->workers_usage.vacuum.nlaunched = 0;
+	vacrel->workers_usage.vacuum.nplanned = 0;
+	vacrel->workers_usage.cleanup.nlaunched = 0;
+	vacrel->workers_usage.cleanup.nplanned = 0;
+
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
 	 * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze.  Then determine
@@ -1122,6 +1134,42 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 orig_rel_pages == 0 ? 100.0 :
 							 100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
 							 vacrel->lpdead_items);
+			if (vacrel->workers_usage.vacuum.nplanned > 0)
+			{
+				if (AmAutoVacuumWorkerProcess())
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index vacuum: %d planned, %d reserved, %d launched in total\n"),
+									 vacrel->workers_usage.vacuum.nplanned,
+									 vacrel->workers_usage.vacuum.nreserved,
+									 vacrel->workers_usage.vacuum.nlaunched);
+				}
+				else
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
+									 vacrel->workers_usage.vacuum.nplanned,
+									 vacrel->workers_usage.vacuum.nlaunched);
+				}
+			}
+			if (vacrel->workers_usage.cleanup.nplanned > 0)
+			{
+				if (AmAutoVacuumWorkerProcess())
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index cleanup: %d planned, %d reserved, %d launched\n"),
+									 vacrel->workers_usage.cleanup.nplanned,
+									 vacrel->workers_usage.cleanup.nreserved,
+									 vacrel->workers_usage.cleanup.nlaunched);
+				}
+				else
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index cleanup: %d planned, %d launched\n"),
+									 vacrel->workers_usage.cleanup.nplanned,
+									 vacrel->workers_usage.cleanup.nlaunched);
+				}
+			}
 			for (int i = 0; i < vacrel->nindexes; i++)
 			{
 				IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2668,7 +2716,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	{
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
-											vacrel->num_index_scans);
+											vacrel->num_index_scans,
+											&vacrel->workers_usage);
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3102,7 +3151,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
-											estimated_count);
+											estimated_count,
+											&vacrel->workers_usage);
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 806a7f48326..643849b2fb8 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -228,7 +228,7 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum);
+												bool vacuum, PVWorkersStats *wstats);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -503,7 +503,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans)
+									int num_index_scans, PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -514,7 +514,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true,
+										&wusage->vacuum);
 }
 
 /*
@@ -522,7 +523,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
  */
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, bool estimated_count)
+									int num_index_scans, bool estimated_count,
+									PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -534,7 +536,8 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false,
+										&wusage->cleanup);
 }
 
 /*
@@ -616,10 +619,13 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 /*
  * Perform index vacuum or index cleanup with parallel workers.  This function
  * must be used by the parallel vacuum leader process.
+ *
+ * If wstats is not NULL, the statistics it stores will be updated according
+ * to what happens during function execution.
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum)
+									bool vacuum, PVWorkersStats *wstats)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -656,13 +662,23 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/* Remember this value if we were asked to */
+	if (wstats != NULL && nworkers > 0)
+		wstats->nplanned += nworkers;
+
 	/*
 	 * Reserve workers in autovacuum global state. Note that we may be given
 	 * fewer workers than we requested.
 	 */
 	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+	{
 		AutoVacuumReserveParallelWorkers(&nworkers);
 
+		/* Remember this value if we were asked to */
+		if (wstats != NULL)
+			wstats->nreserved += nworkers;
+	}
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -729,6 +745,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 			/* Enable shared cost balance for leader backend */
 			VacuumSharedCostBalance = &(pvs->shared->cost_balance);
 			VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+			/* Remember this value if we were asked to */
+			if (wstats != NULL)
+				wstats->nlaunched += pvs->pcxt->nworkers_launched;
 		}
 
 		if (vacuum)
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index e885a4b9c77..1b1fb625cb2 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -300,6 +300,39 @@ typedef struct VacDeadItemsInfo
 	int64		num_items;		/* current # of entries */
 } VacDeadItemsInfo;
 
+/*
+ * Helper for the PVWorkersUsage structure (see below), to avoid repetition.
+ */
+typedef struct PVWorkersStats
+{
+	/* Number of parallel workers we planned to launch */
+	int			nplanned;
+
+	/*
+	 * Number of parallel workers we have managed to reserve.
+	 *
+	 * Note that we collect these stats only for parallel *autovacuum*,
+	 * since it must reserve workers in shared state before actually
+	 * trying to launch them (in order to respect the
+	 * autovacuum_max_parallel_workers limit). A manual VACUUM (PARALLEL),
+	 * by contrast, doesn't need to reserve workers.
+	 */
+	int			nreserved;
+
+	/* Number of launched parallel workers */
+	int			nlaunched;
+} PVWorkersStats;
+
+/*
+ * PVWorkersUsage stores the total numbers of planned, reserved and launched
+ * workers during parallel vacuum (for both index vacuum and index cleanup).
+ */
+typedef struct PVWorkersUsage
+{
+	PVWorkersStats vacuum;
+	PVWorkersStats cleanup;
+} PVWorkersUsage;
+
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
 extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -394,11 +427,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
 extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
-												int num_index_scans);
+												int num_index_scans,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												bool estimated_count);
+												bool estimated_count,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 77e3c04144e..536237ff546 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2070,6 +2070,8 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVWorkersUsage
+PVWorkersStats
 PX_Alias
 PX_Cipher
 PX_Combo
-- 
2.43.0

From 84d78c58932bb1d9f1bf01319a583e68278e7bca Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 02:32:44 +0700
Subject: [PATCH v24 5/5] Documentation for parallel autovacuum

---
 doc/src/sgml/config.sgml           | 17 +++++++++++++++++
 doc/src/sgml/maintenance.sgml      | 12 ++++++++++++
 doc/src/sgml/ref/create_table.sgml | 20 ++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index f670e2d4c31..07139ec7ff2 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2918,6 +2918,7 @@ include_dir 'conf.d'
         <para>
          When changing this value, consider also adjusting
          <xref linkend="guc-max-parallel-workers"/>,
+         <xref linkend="guc-autovacuum-max-parallel-workers"/>,
          <xref linkend="guc-max-parallel-maintenance-workers"/>, and
          <xref linkend="guc-max-parallel-workers-per-gather"/>.
         </para>
@@ -9380,6 +9381,22 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+        <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+        <indexterm>
+         <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+         <secondary>configuration parameter</secondary>
+        </indexterm>
+        </term>
+        <listitem>
+         <para>
+          Sets the maximum number of parallel autovacuum workers that
+          can be used for parallel index vacuuming at one time. The value is
+          capped by <xref linkend="guc-max-parallel-workers"/>. The default is 2.
+         </para>
+        </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
 
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 7c958b06273..c9f9163c551 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -926,6 +926,18 @@ HINT:  Execute a database-wide VACUUM in that database.
     autovacuum workers' activity.
    </para>
 
+   <para>
+    If an autovacuum worker process comes across a table with the
+    <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+    enabled, it will launch parallel workers in order to vacuum the indexes
+    of this table in parallel. Parallel workers are taken from the pool of
+    processes established by <xref linkend="guc-max-worker-processes"/>,
+    limited by <xref linkend="guc-max-parallel-workers"/>.
+    The total number of parallel autovacuum workers that can be active at one
+    time is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+    configuration parameter.
+   </para>
+
    <para>
     If several large tables all become eligible for vacuuming in a short
     amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 982532fe725..4894de021cd 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1718,6 +1718,26 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+  <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+    <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+    <indexterm>
+     <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+    </indexterm>
+    </term>
+    <listitem>
+     <para>
+      Sets the maximum number of parallel autovacuum workers that can process
+      the indexes of this table.
+      The default value is -1, which disables parallel index vacuuming for
+      this table. If the value is 0, the parallel degree is computed based on
+      the number of indexes.
+      Note that the computed number of workers may not actually be available
+      at run time. If this occurs, autovacuum will run with fewer workers
+      than expected.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
     <term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
     <indexterm>
-- 
2.43.0

From 3222e8734acb39452f9f2e8c96960cfac99dff5d Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:03:24 +0700
Subject: [PATCH v24 1/5] Parallel autovacuum

---
 src/backend/access/common/reloptions.c        |  11 ++
 src/backend/commands/vacuumparallel.c         |  42 ++++-
 src/backend/postmaster/autovacuum.c           | 164 +++++++++++++++++-
 src/backend/utils/init/globals.c              |   1 +
 src/backend/utils/misc/guc.c                  |   8 +-
 src/backend/utils/misc/guc_parameters.dat     |   8 +
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/bin/psql/tab-complete.in.c                |   1 +
 src/include/miscadmin.h                       |   1 +
 src/include/postmaster/autovacuum.h           |   5 +
 src/include/utils/rel.h                       |   8 +
 11 files changed, 240 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 237ab8d0ed9..9459a010cc3 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -235,6 +235,15 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		-1, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1968,6 +1977,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 279108ca89f..806a7f48326 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" will refer to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -34,6 +36,7 @@
 #include "executor/instrument.h"
 #include "optimizer/paths.h"
 #include "pgstat.h"
+#include "postmaster/autovacuum.h"
 #include "storage/bufmgr.h"
 #include "storage/proc.h"
 #include "tcop/tcopprot.h"
@@ -374,8 +377,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -554,12 +558,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_workers;
+
+	max_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_workers == 0)
 		return 0;
 
 	/*
@@ -598,8 +607,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_workers);
 
 	return parallel_workers;
 }
@@ -647,6 +656,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/*
+	 * Reserve workers in autovacuum global state. Note that we may be given
+	 * fewer workers than we requested.
+	 */
+	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+		AutoVacuumReserveParallelWorkers(&nworkers);
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -691,6 +707,16 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		LaunchParallelWorkers(pvs->pcxt);
 
+		/*
+		 * Tell autovacuum that we could not launch all the previously
+		 * reserved workers.
+		 */
+		if (AmAutoVacuumWorkerProcess() &&
+			pvs->pcxt->nworkers_launched < nworkers)
+		{
+			AutoVacuumReleaseParallelWorkers(nworkers - pvs->pcxt->nworkers_launched);
+		}
+
 		if (pvs->pcxt->nworkers_launched > 0)
 		{
 			/*
@@ -739,6 +765,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
 			InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+		/* Release all the reserved parallel workers for autovacuum */
+		if (AmAutoVacuumWorkerProcess() && pvs->pcxt->nworkers_launched > 0)
+			AutoVacuumReleaseAllParallelWorkers();
 	}
 
 	/*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 6fde740465f..267fdcbe1a8 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -151,6 +151,13 @@ int			Log_autoanalyze_min_duration = 600000;
 static double av_storage_param_cost_delay = -1;
 static int	av_storage_param_cost_limit = -1;
 
+/*
+ * Tracks the number of parallel workers currently reserved by the
+ * autovacuum worker. This is non-zero only for the parallel autovacuum
+ * leader process.
+ */
+static int	av_nworkers_reserved = 0;
+
 /* Flags set by signal handlers */
 static volatile sig_atomic_t got_SIGUSR2 = false;
 
@@ -285,6 +292,8 @@ typedef struct AutoVacuumWorkItem
  * av_workItems		work item array
  * av_nworkersForBalance the number of autovacuum workers to use when
  * 					calculating the per worker cost limit
+ * av_freeParallelWorkers the number of free parallel autovacuum workers
+ * av_maxParallelWorkers the maximum number of parallel autovacuum workers
  *
  * This struct is protected by AutovacuumLock, except for av_signal and parts
  * of the worker list (see above).
@@ -299,6 +308,8 @@ typedef struct
 	WorkerInfo	av_startingWorker;
 	AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
 	pg_atomic_uint32 av_nworkersForBalance;
+	int32		av_freeParallelWorkers;
+	int32		av_maxParallelWorkers;
 } AutoVacuumShmemStruct;
 
 static AutoVacuumShmemStruct *AutoVacuumShmem;
@@ -361,6 +372,7 @@ static void autovac_report_workitem(AutoVacuumWorkItem *workitem,
 static void avl_sigusr2_handler(SIGNAL_ARGS);
 static bool av_worker_available(void);
 static void check_av_worker_gucs(void);
+static void adjust_free_parallel_workers(int prev_max_parallel_workers);
 
 
 
@@ -759,6 +771,8 @@ ProcessAutoVacLauncherInterrupts(void)
 	if (ConfigReloadPending)
 	{
 		int			autovacuum_max_workers_prev = autovacuum_max_workers;
+		int			autovacuum_max_parallel_workers_prev =
+			autovacuum_max_parallel_workers;
 
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
@@ -775,6 +789,15 @@ ProcessAutoVacLauncherInterrupts(void)
 		if (autovacuum_max_workers_prev != autovacuum_max_workers)
 			check_av_worker_gucs();
 
+		/*
+		 * If autovacuum_max_parallel_workers changed, we must adjust the
+		 * number of available parallel autovacuum workers in shmem
+		 * accordingly.
+		 */
+		if (autovacuum_max_parallel_workers_prev !=
+			autovacuum_max_parallel_workers)
+			adjust_free_parallel_workers(autovacuum_max_parallel_workers_prev);
+
 		/* rebuild the list in case the naptime changed */
 		rebuild_database_list(InvalidOid);
 	}
@@ -1379,6 +1402,16 @@ avl_sigusr2_handler(SIGNAL_ARGS)
  *					  AUTOVACUUM WORKER CODE
  ********************************************************************/
 
+/*
+ * Make sure that all reserved workers are released, even if the parallel
+ * autovacuum leader is exiting due to a FATAL error.
+ */
+static void
+autovacuum_worker_before_shmem_exit(int code, Datum arg)
+{
+	AutoVacuumReleaseAllParallelWorkers();
+}
+
 /*
  * Main entry point for autovacuum worker processes.
  */
@@ -2275,6 +2308,12 @@ do_autovacuum(void)
 										  "Autovacuum Portal",
 										  ALLOCSET_DEFAULT_SIZES);
 
+	/*
+	 * Parallel autovacuum can reserve parallel workers. Make sure that all
+	 * reserved workers are released even after a FATAL error.
+	 */
+	before_shmem_exit(autovacuum_worker_before_shmem_exit, 0);
+
 	/*
 	 * Perform operations on collected tables.
 	 */
@@ -2456,6 +2495,12 @@ do_autovacuum(void)
 		}
 		PG_CATCH();
 		{
+			/*
+			 * Parallel autovacuum can reserve parallel workers. Make sure
+			 * that all reserved workers are released.
+			 */
+			AutoVacuumReleaseAllParallelWorkers();
+
 			/*
 			 * Abort the transaction, start a new one, and proceed with the
 			 * next table in our list.
@@ -2856,8 +2901,12 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether we need to process the table's indexes in parallel. */
+		tab->at_params.nworkers = avopts
+			? avopts->autovacuum_parallel_workers
+			: -1;
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -3334,6 +3383,88 @@ AutoVacuumRequestWork(AutoVacuumWorkItemType type, Oid relationId,
 	return result;
 }
 
+/*
+ * Reserves parallel workers for autovacuum.
+ *
+ * nworkers is an in/out parameter: on input, the number of parallel workers
+ * the caller requests to reserve; on output, the actual number reserved.
+ *
+ * The caller must call AutoVacuumRelease[All]ParallelWorkers() to release the
+ * reserved workers.
+ *
+ * NOTE: We will try to provide as many workers as requested, even if the
+ * caller ends up occupying all available workers.
+ */
+void
+AutoVacuumReserveParallelWorkers(int *nworkers)
+{
+	/* Only leader autovacuum worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/* The worker must not have any reserved workers yet */
+	Assert(av_nworkers_reserved == 0);
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/* Provide as many workers as we can. */
+	*nworkers = Min(AutoVacuumShmem->av_freeParallelWorkers, *nworkers);
+	AutoVacuumShmem->av_freeParallelWorkers -= *nworkers;
+
+	LWLockRelease(AutovacuumLock);
+
+	/* Remember how many workers we have reserved. */
+	av_nworkers_reserved = *nworkers;
+}
+
+/*
+ * Releases the reserved parallel workers for autovacuum.
+ *
+ * This function should be used to release the parallel workers that an
+ * autovacuum worker reserved by AutoVacuumReserveParallelWorkers(). nworkers
+ * is the number of workers to release, which must not be greater than the
+ * number of workers currently reserved, av_nworkers_reserved.
+ */
+void
+AutoVacuumReleaseParallelWorkers(int nworkers)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/* Cannot release more workers than reserved */
+	Assert(nworkers <= av_nworkers_reserved);
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * If the maximum number of parallel workers was reduced during execution,
+	 * we must cap the number of available workers at its new value.
+	 */
+	AutoVacuumShmem->av_freeParallelWorkers =
+		Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
+			AutoVacuumShmem->av_maxParallelWorkers);
+
+	LWLockRelease(AutovacuumLock);
+
+	/* Don't have to remember these workers anymore. */
+	av_nworkers_reserved -= nworkers;
+}
+
+/*
+ * Same as above, but this function releases all the parallel workers that
+ * this autovacuum worker reserved.
+ */
+void
+AutoVacuumReleaseAllParallelWorkers(void)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	if (av_nworkers_reserved > 0)
+		AutoVacuumReleaseParallelWorkers(av_nworkers_reserved);
+
+	Assert(av_nworkers_reserved == 0);
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3394,6 +3525,10 @@ AutoVacuumShmemInit(void)
 		Assert(!found);
 
 		AutoVacuumShmem->av_launcherpid = 0;
+		AutoVacuumShmem->av_maxParallelWorkers =
+			Min(autovacuum_max_parallel_workers, max_parallel_workers);
+		AutoVacuumShmem->av_freeParallelWorkers =
+			AutoVacuumShmem->av_maxParallelWorkers;
 		dclist_init(&AutoVacuumShmem->av_freeWorkers);
 		dlist_init(&AutoVacuumShmem->av_runningWorkers);
 		AutoVacuumShmem->av_startingWorker = NULL;
@@ -3475,3 +3610,28 @@ check_av_worker_gucs(void)
 				 errdetail("The server will only start up to \"autovacuum_worker_slots\" (%d) autovacuum workers at a given time.",
 						   autovacuum_worker_slots)));
 }
+
+/*
+ * Adjust the number of free parallel workers to correspond to the new
+ * autovacuum_max_parallel_workers value.
+ */
+static void
+adjust_free_parallel_workers(int prev_max_parallel_workers)
+{
+	int	nfree_workers;
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * Cap or increase the number of free parallel workers according to the
+	 * parameter change.
+	 */
+	nfree_workers =
+		autovacuum_max_parallel_workers - prev_max_parallel_workers +
+		AutoVacuumShmem->av_freeParallelWorkers;
+
+	AutoVacuumShmem->av_freeParallelWorkers = Max(nfree_workers, 0);
+	AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
+
+	LWLockRelease(AutovacuumLock);
+}
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 2;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index d77502838c4..4a5c73a9e33 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3326,9 +3326,13 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 *
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
-	 * Other changes might need to affect other workers, so forbid them.
+	 * Other changes might need to affect other workers, so forbid them. Note
+	 * that the parallel autovacuum leader is an exception, because only the
+	 * cost-based delay parameters need to be propagated to parallel vacuum
+	 * workers, and we handle that elsewhere if appropriate.
 	 */
-	if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+		action != GUC_ACTION_SAVE &&
 		(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index 9507778415d..92b69c65e83 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,14 @@
   max => '2000000000',
 },
 
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+  short_desc => 'Maximum number of parallel autovacuum workers that can be taken from the bgworkers pool.',
+  variable => 'autovacuum_max_parallel_workers',
+  boot_val => '2',
+  min => '0',
+  max => 'MAX_BACKENDS',
+},
+
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
   variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index f938cc65a3a..ef8126f3790 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -710,6 +710,7 @@
 #autovacuum_worker_slots = 16           # autovacuum worker slots to allocate
                                         # (change requires restart)
 #autovacuum_max_workers = 3             # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2    # limited by max_parallel_workers
 #autovacuum_naptime = 1min              # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                         # vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 905c076763c..31ec2f51753 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1423,6 +1423,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 5aa0f3a8ac1..f3783afb51b 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -62,6 +62,11 @@ pg_noreturn extern void AutoVacWorkerMain(const void *startup_data, size_t start
 extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 								  Oid relationId, BlockNumber blkno);
 
+/* parallel autovacuum stuff */
+extern void	AutoVacuumReserveParallelWorkers(int *nworkers);
+extern void AutoVacuumReleaseParallelWorkers(int nworkers);
+extern void AutoVacuumReleaseAllParallelWorkers(void);
+
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
 extern void AutoVacuumShmemInit(void);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..11dd3aebc6c 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,14 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	/*
+	 * Target number of parallel autovacuum workers. The default of -1
+	 * disables parallel vacuum during autovacuum. 0 means choose the
+	 * parallel degree based on the number of indexes.
+	 */
+	int			autovacuum_parallel_workers;
+
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0

From d6add90f5146fe0acae78fbcf72d9559b21c9305 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Wed, 4 Mar 2026 13:39:03 +0700
Subject: [PATCH] fixes for 0004

---
 src/backend/commands/vacuumparallel.c         | 24 +++++++------------
 src/backend/postmaster/autovacuum.c           |  4 ++--
 src/include/postmaster/autovacuum.h           |  2 +-
 src/test/modules/test_autovacuum/meson.build  |  2 +-
 .../t/001_parallel_autovacuum.pl              |  4 ++--
 .../modules/test_autovacuum/test_autovacuum.c |  8 ++-----
 6 files changed, 16 insertions(+), 28 deletions(-)

diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 828844ffc67..414a465d99f 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -654,6 +654,14 @@ parallel_vacuum_update_shared_delay_params(void)
 	VacuumUpdateCosts();
 
 	shared_params_generation_local = params_generation;
+
+	elog(DEBUG2,
+		 "parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
 }
 
 /*
@@ -1311,22 +1319,6 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	/* Process indexes to perform vacuum/cleanup */
 	parallel_vacuum_process_safe_indexes(&pvs);
 
-#ifdef USE_INJECTION_POINTS
-	/*
-	 * If we are parallel autovacuum worker, we can consume delay parameters
-	 * during index processing (via vacuum_delay_point call). This logging
-	 * allows tests to ensure this.
-	 */
-	if (shared->is_autovacuum)
-		elog(DEBUG2,
-			 "parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
-			 vacuum_cost_limit,
-			 vacuum_cost_delay,
-			 VacuumCostPageMiss,
-			 VacuumCostPageDirty,
-			 VacuumCostPageHit);
-#endif
-
 	/* Report buffer/WAL usage during parallel execution */
 	buffer_usage = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_BUFFER_USAGE, false);
 	wal_usage = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_WAL_USAGE, false);
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index ee8d9ba0428..1c51210883e 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -3487,10 +3487,10 @@ AutoVacuumReleaseAllParallelWorkers(void)
 /*
  * Get number of free autovacuum parallel workers.
  */
-uint32
+int32
 AutoVacuumGetFreeParallelWorkers(void)
 {
-	uint32		nfree_workers;
+	int32		nfree_workers;
 
 	LWLockAcquire(AutovacuumLock, LW_SHARED);
 	nfree_workers = AutoVacuumShmem->av_freeParallelWorkers;
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 52be260e15f..d60010a43b4 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -66,7 +66,7 @@ extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 extern void	AutoVacuumReserveParallelWorkers(int *nworkers);
 extern void AutoVacuumReleaseParallelWorkers(int nworkers);
 extern void AutoVacuumReleaseAllParallelWorkers(void);
-extern uint32 AutoVacuumGetFreeParallelWorkers(void);
+extern int32 AutoVacuumGetFreeParallelWorkers(void);
 
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
index 75b24814b13..969af8bd52a 100644
--- a/src/test/modules/test_autovacuum/meson.build
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -1,4 +1,4 @@
-# Copyright (c) 2024-2025, PostgreSQL Global Development Group
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
 
 test_autovacuum_sources = files(
   'test_autovacuum.c',
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
index edfbde73aac..7f8b5a7b4d3 100644
--- a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -40,7 +40,6 @@ $node->append_conf('postgresql.conf', qq{
 	max_parallel_maintenance_workers = 20
 	autovacuum_max_parallel_workers = 20
 	log_min_messages = debug2
-	log_autovacuum_min_duration = 0
 	autovacuum_naptime = '1s'
 	min_parallel_index_scan_size = 0
 	shared_preload_libraries=test_autovacuum
@@ -70,7 +69,8 @@ $node->safe_psql('postgres', qq{
 	CREATE TABLE test_autovac (
 		id SERIAL PRIMARY KEY,
 		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
-	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+			log_autovacuum_min_duration = 0);
 
 	INSERT INTO test_autovac
 	SELECT
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.c b/src/test/modules/test_autovacuum/test_autovacuum.c
index 195a6149a5d..dd5c839e851 100644
--- a/src/test/modules/test_autovacuum/test_autovacuum.c
+++ b/src/test/modules/test_autovacuum/test_autovacuum.c
@@ -23,13 +23,9 @@ PG_FUNCTION_INFO_V1(get_parallel_autovacuum_free_workers);
 Datum
 get_parallel_autovacuum_free_workers(PG_FUNCTION_ARGS)
 {
-	uint32		nfree_workers;
-
-#ifndef USE_INJECTION_POINTS
-	ereport(ERROR, errmsg("injection points not supported"));
-#endif
+	int32		nfree_workers;
 
 	nfree_workers = AutoVacuumGetFreeParallelWorkers();
 
-	PG_RETURN_UINT32(nfree_workers);
+	PG_RETURN_INT32(nfree_workers);
 }
-- 
2.43.0

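For reference, once the test_autovacuum module is preloaded as in the TAP
test, the free-worker accounting can also be inspected from SQL. This is a
sketch; it assumes the module installs the SQL-level declaration for the C
function shown above:

    -- Expected to report autovacuum_max_parallel_workers when no parallel
    -- autovacuum is running, and a smaller number while workers are reserved.
    SELECT get_parallel_autovacuum_free_workers();
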
From 848d628a56d78b38b21b5a83d1f63e03075171af Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Wed, 4 Mar 2026 02:50:39 +0700
Subject: [PATCH] fixes for 0003

---
 src/backend/commands/vacuum.c         |  10 +-
 src/backend/commands/vacuumparallel.c | 132 ++++++++++++--------------
 src/tools/pgindent/typedefs.list      |   1 -
 3 files changed, 67 insertions(+), 76 deletions(-)

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index e94e35481a2..5fba48d0536 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2440,10 +2440,8 @@ vacuum_delay_point(bool is_analyze)
 	if (IsParallelWorker())
 	{
 		/*
-		 * Possibly update cost-based delay parameters.
-		 *
-		 * Do it before checking VacuumCostActive, because its value might be
-		 * changed after calling this function.
+		 * Update cost-based vacuum delay parameters for a parallel autovacuum
+		 * worker if any changes are detected.
 		 */
 		parallel_vacuum_update_shared_delay_params();
 	}
@@ -2464,8 +2462,8 @@ vacuum_delay_point(bool is_analyze)
 		VacuumUpdateCosts();
 
 		/*
-		 * If we are parallel autovacuum leader and some of cost-based
-		 * parameters had changed, let other parallel workers know.
+		 * Propagate cost-based vacuum delay parameters to shared memory if
+		 * any of them have changed during the config reload.
 		 */
 		parallel_vacuum_propagate_shared_delay_params();
 	}
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 80b57bf9da3..13304c40b59 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -18,6 +18,13 @@
  * the parallel context is re-initialized so that the same DSM can be used for
  * multiple passes of index bulk-deletion and index cleanup.
  *
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
  * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
@@ -54,26 +61,6 @@
 #define PARALLEL_VACUUM_KEY_WAL_USAGE		4
 #define PARALLEL_VACUUM_KEY_INDEX_STATS		5
 
-/*
- * Helper for the PVSharedCostParams structure (see below), to avoid
- * repetition.
- */
-typedef struct VacuumCostParams
-{
-	double		cost_delay;
-	int			cost_limit;
-	int			cost_page_dirty;
-	int			cost_page_hit;
-	int			cost_page_miss;
-} VacuumCostParams;
-
-#define	FillVacCostParams(cost_params) \
-	(cost_params)->cost_delay = vacuum_cost_delay; \
-	(cost_params)->cost_limit = vacuum_cost_limit; \
-	(cost_params)->cost_page_dirty = VacuumCostPageDirty; \
-	(cost_params)->cost_page_hit = VacuumCostPageHit; \
-	(cost_params)->cost_page_miss = VacuumCostPageMiss
-
 /*
  * Struct for cost-based vacuum delay related parameters to share among an
  * autovacuum worker and its parallel vacuum workers.
@@ -81,23 +68,22 @@ typedef struct VacuumCostParams
 typedef struct PVSharedCostParams
 {
 	/*
-	 * Each time leader worker updates its parameters, it must increase
-	 * generation. Every parallel worker keeps the generation
-	 * (shared_params_local_generation) at which it had last time received
-	 * parameters from the leader.
-	 *
-	 * It is enough for worker to compare it's local_generation with the field
-	 * below to determine whether it needs to receive new parameters' values.
+	 * The generation counter is incremented by the leader process each time
+	 * it updates the shared cost-based vacuum delay parameters. Parallel
+	 * vacuum workers compare it with their local generation,
+	 * shared_params_generation_local, to detect whether they need to refresh
+	 * their local parameters.
 	 */
 	pg_atomic_uint32 generation;
 
 	slock_t		mutex;			/* protects all fields below */
 
-	/*
-	 * Copies of the corresponding cost-based vacuum delay parameters from
-	 * autovacuum leader process.
-	 */
-	VacuumCostParams params_data;
+	/* Parameters to share with parallel workers */
+	double		cost_delay;
+	int			cost_limit;
+	int			cost_page_dirty;
+	int			cost_page_hit;
+	int			cost_page_miss;
 } PVSharedCostParams;
 
 /*
@@ -285,7 +271,7 @@ struct ParallelVacuumState
 
 static PVSharedCostParams *pv_shared_cost_params = NULL;
 
-/* See comments for the PVSharedCostParams structure for the explanation. */
+/* See the comments in PVSharedCostParams for details. */
 static uint32 shared_params_generation_local = 0;
 
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
@@ -299,6 +285,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation
 static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
 
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
@@ -461,9 +448,13 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 
 	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
 
+	/*
+	 * Initialize shared cost-based vacuum delay parameters if it's for
+	 * autovacuum.
+	 */
 	if (shared->is_autovacuum)
 	{
-		FillVacCostParams(&shared->cost_params.params_data);
+		parallel_vacuum_set_cost_parameters(&shared->cost_params);
 		pg_atomic_init_u32(&shared->cost_params.generation, 0);
 		SpinLockInit(&shared->cost_params.mutex);
 
@@ -615,10 +606,21 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 }
 
 /*
- * If we are parallel *autovacuum* worker, check whether related to cost-based
- * vacuum delay parameters had changed in the leader worker. If so,
- * corresponding parameters will be updated to the values which leader worker
- * is operating on.
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+	params->cost_delay = vacuum_cost_delay;
+	params->cost_limit = vacuum_cost_limit;
+	params->cost_page_dirty = VacuumCostPageDirty;
+	params->cost_page_hit = VacuumCostPageHit;
+	params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
  *
  * For non-autovacuum parallel worker this function will have no effect.
  */
@@ -629,7 +631,7 @@ parallel_vacuum_update_shared_delay_params(void)
 
 	Assert(IsParallelWorker());
 
-	/* Check whether we are running parallel autovacuum */
+	/* Quick return if the worker is not running a parallel autovacuum */
 	if (pv_shared_cost_params == NULL)
 		return;
 
@@ -641,13 +643,11 @@ parallel_vacuum_update_shared_delay_params(void)
 		return;
 
 	SpinLockAcquire(&pv_shared_cost_params->mutex);
-
-	VacuumCostDelay = pv_shared_cost_params->params_data.cost_delay;
-	VacuumCostLimit = pv_shared_cost_params->params_data.cost_limit;
-	VacuumCostPageDirty = pv_shared_cost_params->params_data.cost_page_dirty;
-	VacuumCostPageHit = pv_shared_cost_params->params_data.cost_page_hit;
-	VacuumCostPageMiss = pv_shared_cost_params->params_data.cost_page_miss;
-
+	VacuumCostDelay = pv_shared_cost_params->cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
 	SpinLockRelease(&pv_shared_cost_params->mutex);
 
 	VacuumUpdateCosts();
@@ -656,46 +656,40 @@ parallel_vacuum_update_shared_delay_params(void)
 }
 
 /*
- * Function to be called from parallel autovacuum leader in order to propagate
- * some cost-based vacuum delay parameters to the supportive workers.
+ * Store the cost-based vacuum delay parameters in shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
  */
 void
 parallel_vacuum_propagate_shared_delay_params(void)
 {
-	VacuumCostParams *params_data;
-
 	Assert(AmAutoVacuumWorkerProcess());
 
-	/* Check whether we are running parallel autovacuum */
+	/*
+	 * Quick return if the leader process is not sharing the delay parameters.
+	 */
 	if (pv_shared_cost_params == NULL)
 		return;
 
 	/*
-	 * Only leader worker can modify this shared structure, so we can read it
-	 * without acquiring a lock.
+	 * Check if any delay parameters have changed. We can read them without
+	 * locks as only the leader can modify them.
 	 */
-	params_data = &pv_shared_cost_params->params_data;
-
-	if (vacuum_cost_delay == params_data->cost_delay &&
-		vacuum_cost_limit == params_data->cost_limit &&
-		VacuumCostPageDirty == params_data->cost_page_dirty &&
-		VacuumCostPageHit == params_data->cost_page_hit &&
-		VacuumCostPageMiss == params_data->cost_page_miss)
-	{
-		/*
-		 * We don't need to update shared cost-based vacuum delay params if
-		 * they haven't changed.
-		 */
+	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+		vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+		VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+		VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+		VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
 		return;
-	}
 
+	/* Update the shared delay parameters */
 	SpinLockAcquire(&pv_shared_cost_params->mutex);
-	FillVacCostParams(&pv_shared_cost_params->params_data);
+	parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
 	SpinLockRelease(&pv_shared_cost_params->mutex);
 
 	/*
-	 * Increase generation of the parameters, i.e. let parallel workers know
-	 * that they should re-read shared cost params.
+	 * Increment the generation of the parameters, i.e. let parallel workers
+	 * know that they should re-read shared cost params.
 	 */
 	pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
 }
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index de9f576e0f3..1120646f2c8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -3250,7 +3250,6 @@ VacAttrStatsP
 VacDeadItemsInfo
 VacErrPhase
 VacOptValue
-VacuumCostParams
 VacuumParams
 VacuumRelation
 VacuumStmt
-- 
2.43.0

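The propagation path reworked above is exercised whenever the cost-based
delay settings change while a parallel autovacuum is in progress, for
example via a plain configuration reload (values are illustrative):

    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '10ms';
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;
    SELECT pg_reload_conf();
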
From e2c7a74a110941ff86e7aabb85aa23fccbcfde5b Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Wed, 4 Mar 2026 02:19:03 +0700
Subject: [PATCH] fixes for 0001

---
 src/backend/postmaster/autovacuum.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index f40abe90ed5..267fdcbe1a8 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -308,8 +308,8 @@ typedef struct
 	WorkerInfo	av_startingWorker;
 	AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
 	pg_atomic_uint32 av_nworkersForBalance;
-	uint32		av_freeParallelWorkers;
-	uint32		av_maxParallelWorkers;
+	int32		av_freeParallelWorkers;
+	int32		av_maxParallelWorkers;
 } AutoVacuumShmemStruct;
 
 static AutoVacuumShmemStruct *AutoVacuumShmem;
-- 
2.43.0
