Hi,

On Thu, Mar 12, 2026 at 2:05 AM Masahiko Sawada <[email protected]> wrote:
>
> BTW this discussion made me think to change av_max_parallel_workers to
> control the number of workers per-autovacuum worker instead (with
> renaming it to say max_parallel_workers_per_autovacuum_worker). Users
> can compute the maximum number of parallel workers the system requires
> by (autovacuum_worker_slots *
> max_parallel_workers_per_autovacuum_worker). We would no longer need
> the reservation and release logic. I'd like to hear your opinion.
>

IIUC, one of autovacuum's main goals is to be "inconspicuous" to the
rest of the system. That is, it should not try to vacuum all tables as
fast as possible; instead, it should interfere with other backends as
little as possible and avoid high resource consumption (assuming there
is no hazard of wraparound).

I propose to reason from the case in which parallel a/v will actually
be used:
Suppose we have 3 tables, each with 80+ indexes, that require parallel
a/v. Ideally, each of these tables should be processed with 20 parallel
workers. This is a real example that can be encountered in various
production systems, where such tables take up about half of all the
data in the database.

How would parallel a/v handle such a situation?
1. Our current implementation
We can set av_max_parallel_workers to 60 and autovacuum_parallel_workers
reloption to 20 for each table.
2. Proposed idea
We can set max_parallel_workers_per_av_worker to 20 and
autovacuum_parallel_workers reloption to 20 for each table.
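Concretely, the two setups could be sketched like this (GUC names as
used in this thread; max_parallel_workers_per_av_worker does not exist
yet, it is only the proposed name, and big_table is a placeholder):

```sql
-- 1. Current implementation: one cluster-wide pool of parallel a/v workers
ALTER SYSTEM SET av_max_parallel_workers = 60;

-- 2. Proposed idea: a per-a/v-worker cap; the cluster-wide ceiling becomes
--    autovacuum_worker_slots * max_parallel_workers_per_av_worker
ALTER SYSTEM SET max_parallel_workers_per_av_worker = 20;

-- In both cases, each of the three big tables gets the reloption:
ALTER TABLE big_table SET (autovacuum_parallel_workers = 20);
```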

In both cases we have a guarantee that all tables will be processed
with the desired number of parallel workers. And both cases allow us to
limit CPU consumption, either by reducing the "av_max_parallel_workers"
parameter (current implementation) or by reducing the
"autovacuum_parallel_workers" reloption for each table (proposed idea).
So basically I don't see that the current approach has any big
advantage over the idea you proposed.

I also asked a friend of mine who has worked for many years with
clients running large production systems. He said that it is very
important to process such huge tables with maximum "intensity", i.e.
each a/v worker should be able to launch as many parallel workers as
required. I guess this is an argument in favor of your idea.

The only argument against this idea that I could come up with is that
some users may abuse the parallel a/v feature. For instance, a user
might set the "autovacuum_parallel_workers" reloption not only for
large tables, but also for many smaller ones. In that case,
max_parallel_workers_per_av_worker must be pretty large (in order to
process the huge tables). Thus, the user can face a situation where all
a/v workers are launching additional parallel workers, leading to high
CPU consumption and possibly a max_parallel_workers shortage. The only
way to deal with it is to go through a large number of smaller tables
and reduce the "autovacuum_parallel_workers" reloption for each of
them. IMHO, this is a pretty unpleasant experience for the user. On the
other hand, the user is to blame for getting into such a situation in
the first place.
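That cleanup chore at least doesn't have to be done from memory: the
affected tables can be found with a catalog query. A rough sketch (the
reloption name is the one used in this thread):

```sql
-- List tables that have the autovacuum_parallel_workers reloption set
SELECT c.oid::regclass AS table_name, c.reloptions
FROM pg_class c
WHERE EXISTS (SELECT 1
              FROM unnest(c.reloptions) AS opt
              WHERE opt LIKE 'autovacuum_parallel_workers=%');
```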

Let's summarize.
The proposed idea has several strong advantages over the current
implementation. The only disadvantage I came up with can be avoided by
documenting recommendations on how to use this feature. So, if I didn't
mess anything up and you don't have any doubts, I would rather
implement the proposed idea.

> > 2)
> > I suggest adding a separate log that will be emitted every time we are
> > unable to start workers due to a shortage of av_max_parallel_workers.
>
> For (2), do you mean that the worker writes these logs regardless of
> log_autovacuum_min_duration setting? I'm concerned that the server
> logs would be flooded with these logs especially when multiple
> autovacuum workers are working very actively and the system is facing
> a shortage of av_max_parallel_workers.

Oh, I didn't take that into account. But this is not a problem - we can
accumulate such statistics just as we do now for the "nreserved" ones,
and then log this value together with all the other stats.

> > Possibly we can introduce a new injection point, or a new log for it.
> > But I assume that the subject of discussion in patch 0002 is the
> > "nreserved" logic, and "nlaunched/nplanned" logic does not raise any
> > questions.
> >
> > I suggest splitting the 0002 patch into two parts : 1) basic logic and
> > 2) additional logic with nreserved or something else. The second part can be
> > discussed in isolation from the patch set. If we do this, we may not have to
> > change the tests. What do you think?
>
> Assuming the basic logic means nlaunched/nplanned logic, yes, it would
> be a nice idea. I think user-facing logging stuff can be developed as
> an improvement independent from the main parallel autovacuum patch.
> It's ideal if we can implement the main patch (with tests) without
> relying on the user-facing logging.

OK, we can actually do that.



Thank you very much for the review!
Please see the attached patches. The changes are:
1) Fixed segfault with accessing outdated pv_shared_cost_params pointer.
2) "Logging for autovacuum" has been split into two patches: basic logging
(nplanned/nlaunched) and advanced logging (nreserved).
3) Tests are now independent of logging.

So far I haven't tried to change the core logic; I think we first need
to agree on the use of the new GUC parameter.

--
Best regards,
Daniil Davydov
From 8b6fd4254165d8551a15f84b0e2973e134ce3938 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Mon, 16 Mar 2026 19:09:01 +0700
Subject: [PATCH v26 6/6] Advanced logging for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c  | 40 +++++++++++++++++++++------
 src/backend/commands/vacuumparallel.c |  6 ++++
 src/include/commands/vacuum.h         | 15 ++++++++--
 3 files changed, 51 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 4f97baced2b..df709afe622 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -791,8 +791,10 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 
 	vacrel->workers_usage.vacuum.nlaunched = 0;
 	vacrel->workers_usage.vacuum.nplanned = 0;
+	vacrel->workers_usage.vacuum.nreserved = 0;
 	vacrel->workers_usage.cleanup.nlaunched = 0;
 	vacrel->workers_usage.cleanup.nplanned = 0;
+	vacrel->workers_usage.cleanup.nreserved = 0;
 
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
@@ -1146,17 +1148,39 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 vacrel->lpdead_items);
 			if (vacrel->workers_usage.vacuum.nplanned > 0)
 			{
-				appendStringInfo(&buf,
-								 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
-								 vacrel->workers_usage.vacuum.nplanned,
-								 vacrel->workers_usage.vacuum.nlaunched);
+				if (AmAutoVacuumWorkerProcess())
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index vacuum: %d planned, %d reserved, %d launched in total\n"),
+									 vacrel->workers_usage.vacuum.nplanned,
+									 vacrel->workers_usage.vacuum.nreserved,
+									 vacrel->workers_usage.vacuum.nlaunched);
+				}
+				else
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
+									 vacrel->workers_usage.vacuum.nplanned,
+									 vacrel->workers_usage.vacuum.nlaunched);
+				}
 			}
 			if (vacrel->workers_usage.cleanup.nplanned > 0)
 			{
-				appendStringInfo(&buf,
-								 _("parallel workers: index cleanup: %d planned, %d launched\n"),
-								 vacrel->workers_usage.cleanup.nplanned,
-								 vacrel->workers_usage.cleanup.nlaunched);
+				if (AmAutoVacuumWorkerProcess())
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index cleanup: %d planned, %d reserved, %d launched\n"),
+									 vacrel->workers_usage.cleanup.nplanned,
+									 vacrel->workers_usage.cleanup.nreserved,
+									 vacrel->workers_usage.cleanup.nlaunched);
+				}
+				else
+				{
+					appendStringInfo(&buf,
+									 _("parallel workers: index cleanup: %d planned, %d launched\n"),
+									 vacrel->workers_usage.cleanup.nplanned,
+									 vacrel->workers_usage.cleanup.nlaunched);
+				}
 			}
 			for (int i = 0; i < vacrel->nindexes; i++)
 			{
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 650060871d3..5105137ce3b 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -837,8 +837,14 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 * fewer workers than we requested.
 	 */
 	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+	{
 		AutoVacuumReserveParallelWorkers(&nworkers);
 
+		/* Remember this value if the caller asked us to */
+		if (wstats != NULL)
+			wstats->nreserved += nworkers;
+	}
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index cf0c3c9dbf7..4bfeba8264d 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -308,13 +308,24 @@ typedef struct PVWorkersStats
 	/* Number of parallel workers we are planned to launch */
 	int			nplanned;
 
+	/*
+	 * Number of parallel workers we have managed to reserve.
+	 *
+	 * Note that we collect these stats only for parallel *autovacuum*,
+	 * since it must reserve workers in shared state before actually
+	 * trying to launch them (in order to respect the
+	 * autovacuum_max_parallel_workers limit). Manual VACUUM (PARALLEL),
+	 * by contrast, doesn't need to reserve workers.
+	 */
+	int			nreserved;
+
 	/* Number of launched parallel workers */
 	int			nlaunched;
 } PVWorkersStats;
 
 /*
- * PVWorkersUsage stores information about total number of launched and
- * planned workers during parallel vacuum (both for vacuum and cleanup).
+ * PVWorkersUsage stores information about total number of launched, reserved
+ * and planned workers during parallel vacuum (both for vacuum and cleanup).
  */
 typedef struct PVWorkersUsage
 {
-- 
2.43.0

From d6a096cfaddf01ef75786e35cabf1a009d652c69 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:08:14 +0700
Subject: [PATCH v26 4/6] Tests for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c          |   9 +
 src/backend/commands/vacuumparallel.c         |  22 ++
 src/backend/postmaster/autovacuum.c           |  25 ++
 src/include/postmaster/autovacuum.h           |   1 +
 src/test/modules/Makefile                     |   1 +
 src/test/modules/meson.build                  |   1 +
 src/test/modules/test_autovacuum/.gitignore   |   2 +
 src/test/modules/test_autovacuum/Makefile     |  28 ++
 src/test/modules/test_autovacuum/meson.build  |  36 +++
 .../t/001_parallel_autovacuum.pl              | 299 ++++++++++++++++++
 .../test_autovacuum/test_autovacuum--1.0.sql  |  12 +
 .../modules/test_autovacuum/test_autovacuum.c |  31 ++
 .../test_autovacuum/test_autovacuum.control   |   3 +
 13 files changed, 470 insertions(+)
 create mode 100644 src/test/modules/test_autovacuum/.gitignore
 create mode 100644 src/test/modules/test_autovacuum/Makefile
 create mode 100644 src/test/modules/test_autovacuum/meson.build
 create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.c
 create mode 100644 src/test/modules/test_autovacuum/test_autovacuum.control

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index cccaee5b620..4f97baced2b 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -152,6 +152,7 @@
 #include "storage/latch.h"
 #include "storage/lmgr.h"
 #include "storage/read_stream.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/pg_rusage.h"
 #include "utils/timestamp.h"
@@ -873,6 +874,14 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	lazy_check_wraparound_failsafe(vacrel);
 	dead_items_alloc(vacrel, params.nworkers);
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * Trigger the injection point if a parallel autovacuum is about to start.
+	 */
+	if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+		INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
 	/*
 	 * Call lazy_scan_heap to perform all required heap pruning, index
 	 * vacuuming, and heap vacuuming (plus related processing)
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 2cad6b15517..650060871d3 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -47,6 +47,7 @@
 #include "storage/bufmgr.h"
 #include "storage/proc.h"
 #include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 
@@ -656,6 +657,14 @@ parallel_vacuum_update_shared_delay_params(void)
 	VacuumUpdateCosts();
 
 	shared_params_generation_local = params_generation;
+
+	elog(DEBUG2,
+		 "parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+		 vacuum_cost_limit,
+		 vacuum_cost_delay,
+		 VacuumCostPageMiss,
+		 VacuumCostPageDirty,
+		 VacuumCostPageHit);
 }
 
 /*
@@ -916,6 +925,19 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 							pvs->pcxt->nworkers_launched, nworkers)));
 	}
 
+#ifdef USE_INJECTION_POINTS
+	/*
+	 * To be able to check that all reserved parallel workers are released
+	 * in any case, allow injection points to trigger a failure at this
+	 * point.
+	 *
+	 * This injection point is also used to wait until parallel workers
+	 * finish their part of index processing.
+	 */
+	if (nworkers > 0)
+		INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
+
 	/* Vacuum the indexes that can be processed by only leader process */
 	parallel_vacuum_process_unsafe_indexes(pvs);
 
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index a0c020fa1a7..7e994e88853 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -3448,6 +3448,12 @@ AutoVacuumReleaseParallelWorkers(int nworkers)
 
 	/* Don't have to remember these workers anymore. */
 	av_nworkers_reserved -= nworkers;
+
+	elog(DEBUG2,
+		 ngettext("autovacuum worker: %d parallel worker has been released",
+				  "autovacuum worker: %d parallel workers has been released",
+				  nworkers),
+		 nworkers);
 }
 
 /*
@@ -3466,6 +3472,21 @@ AutoVacuumReleaseAllParallelWorkers(void)
 	Assert(av_nworkers_reserved == 0);
 }
 
+/*
+ * Get number of free autovacuum parallel workers.
+ */
+int32
+AutoVacuumGetFreeParallelWorkers(void)
+{
+	int32		nfree_workers;
+
+	LWLockAcquire(AutovacuumLock, LW_SHARED);
+	nfree_workers = AutoVacuumShmem->av_freeParallelWorkers;
+	LWLockRelease(AutovacuumLock);
+
+	return nfree_workers;
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3634,5 +3655,9 @@ adjust_free_parallel_workers(int prev_max_parallel_workers)
 	AutoVacuumShmem->av_freeParallelWorkers = Max(nfree_workers, 0);
 	AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
 
+	elog(DEBUG2,
+		 "number of free parallel autovacuum workers is set to %u due to config reload",
+		 AutoVacuumShmem->av_freeParallelWorkers);
+
 	LWLockRelease(AutovacuumLock);
 }
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index f3783afb51b..d60010a43b4 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -66,6 +66,7 @@ extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 extern void	AutoVacuumReserveParallelWorkers(int *nworkers);
 extern void AutoVacuumReleaseParallelWorkers(int nworkers);
 extern void AutoVacuumReleaseAllParallelWorkers(void);
+extern int32 AutoVacuumGetFreeParallelWorkers(void);
 
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 4ac5c84db43..01fe0041c97 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
 		  plsample \
 		  spgist_name_ops \
 		  test_aio \
+		  test_autovacuum \
 		  test_binaryheap \
 		  test_bitmapset \
 		  test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index e2b3eef4136..9dcdc68bc87 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
 subdir('spgist_name_ops')
 subdir('ssl_passphrase_callback')
 subdir('test_aio')
+subdir('test_autovacuum')
 subdir('test_binaryheap')
 subdir('test_bitmapset')
 subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..32254c53a5d
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,28 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+MODULE_big = test_autovacuum
+OBJS = \
+	$(WIN32RES) \
+	test_autovacuum.o
+
+EXTENSION = test_autovacuum
+DATA = test_autovacuum--1.0.sql
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..969af8bd52a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,36 @@
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+test_autovacuum_sources = files(
+  'test_autovacuum.c',
+)
+
+if host_system == 'windows'
+  test_autovacuum_sources += rc_lib_gen.process(win32ver_rc, extra_args: [
+    '--NAME', 'test_autovacuum',
+    '--FILEDESC', 'test_autovacuum - test code for parallel autovacuum',])
+endif
+
+test_autovacuum = shared_module('test_autovacuum',
+  test_autovacuum_sources,
+  kwargs: pg_test_mod_args,
+)
+test_install_libs += test_autovacuum
+
+test_install_data += files(
+  'test_autovacuum.control',
+  'test_autovacuum--1.0.sql',
+)
+
+tests += {
+  'name': 'test_autovacuum',
+  'sd': meson.current_source_dir(),
+  'bd': meson.current_build_dir(),
+  'tap': {
+    'env': {
+       'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+    },
+    'tests': [
+      't/001_parallel_autovacuum.pl',
+    ],
+  },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..34f11193dfd
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,299 @@
+# Test parallel autovacuum behavior
+
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+	plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test we should disable autovacuum for the 'test_autovac' table
+# and generate some dead tuples in it.
+
+sub prepare_for_next_test
+{
+	my ($node, $test_number) = @_;
+
+	$node->safe_psql('postgres', qq{
+		ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+	});
+
+	$node->safe_psql('postgres', qq{
+		UPDATE test_autovac SET col_1 = $test_number;
+	});
+}
+
+
+my $psql_out;
+
+my $node = PostgreSQL::Test::Cluster->new('node1');
+$node->init;
+
+# Configure postgres so that it can launch parallel autovacuum workers, log
+# all the information we are interested in, and run autovacuum frequently
+$node->append_conf('postgresql.conf', qq{
+	max_worker_processes = 20
+	max_parallel_workers = 20
+	max_parallel_maintenance_workers = 20
+	autovacuum_max_parallel_workers = 20
+	log_min_messages = debug2
+	autovacuum_naptime = '1s'
+	min_parallel_index_scan_size = 0
+	shared_preload_libraries=test_autovacuum
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+	plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql('postgres', qq{
+	CREATE EXTENSION test_autovacuum;
+	CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 4;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql('postgres', qq{
+	CREATE TABLE test_autovac (
+		id SERIAL PRIMARY KEY,
+		col_1 INTEGER,  col_2 INTEGER,  col_3 INTEGER,  col_4 INTEGER
+	) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+			log_autovacuum_min_duration = 0);
+
+	INSERT INTO test_autovac
+	SELECT
+		g AS col1,
+		g + 1 AS col2,
+		g + 2 AS col3,
+		g + 3 AS col4
+	FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql('postgres', qq{
+	DO \$\$
+	DECLARE
+		i INTEGER;
+	BEGIN
+		FOR i IN 1..$indexes_num LOOP
+			EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+		END LOOP;
+	END \$\$;
+});
+
+# Test 1 :
+# Our table has enough indexes and appropriate reloptions, so autovacuum must
+# be able to process it in parallel mode. Just check if it can.
+# Also check whether all requested workers:
+# 	1) launched
+# 	2) correctly released
+
+prepare_for_next_test($node, 1);
+
+$node->safe_psql('postgres', qq{
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table is completed. At the same
+# time, check that the required number of parallel workers has been launched.
+$log_start = $node->wait_for_log(
+	qr/autovacuum worker: 2 parallel workers has been released/,
+	$log_start
+);
+
+$psql_out = $node->safe_psql('postgres', qq{
+	SELECT get_parallel_autovacuum_free_workers();
+});
+is($psql_out, 20, 'All parallel workers have been released by the leader');
+
+# Test 2:
+# Check whether parallel autovacuum leader can propagate cost-based parameters
+# to parallel workers.
+
+prepare_for_next_test($node, 2);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum is initialized
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-start-parallel-vacuum'
+);
+
+# Reload config - the leader must update its own parameters during index
+# processing
+$node->safe_psql('postgres', qq{
+	ALTER SYSTEM SET vacuum_cost_limit = 500;
+	ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+	ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+	ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+	SELECT pg_reload_conf();
+});
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+
+# Now wait until the parallel autovacuum leader finishes processing the table
+# (i.e. is guaranteed to call vacuum_delay_point) and launches a parallel worker.
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+# Check whether the parallel worker successfully updated all parameters
+# during index processing
+$log_start = $node->wait_for_log(
+	qr/parallel autovacuum worker cost params: cost_limit=500, cost_delay=2, / .
+	qr/cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+
+	SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+
+	ALTER TABLE test_autovac SET (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+});
+
+# Test 3:
+# Test adjustment of the number of free parallel workers when the
+# autovacuum_max_parallel_workers parameter changes
+
+prepare_for_next_test($node, 3);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+$node->safe_psql('postgres', qq{
+	ALTER SYSTEM SET autovacuum_max_parallel_workers = 1;
+	SELECT pg_reload_conf();
+});
+
+# Since 2 parallel workers have already been launched and will be released
+# later, we expect that:
+# 1) number of free workers will be '0' after config reload
+# 2) number of free workers will be '1' after releasing workers
+
+# Check statement (1)
+$log_start = $node->wait_for_log(
+	qr/number of free parallel autovacuum workers is set to 0 due to config reload/,
+	$log_start
+);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+});
+
+# Wait until the end of parallel processing
+$log_start = $node->wait_for_log(
+	qr/autovacuum worker: 2 parallel workers has been released/,
+	$log_start
+);
+
+# Check statement (2)
+$psql_out = $node->safe_psql('postgres', qq{
+	SELECT get_parallel_autovacuum_free_workers();
+});
+is($psql_out, 1, 'Number of free parallel workers is consistent');
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+	ALTER SYSTEM SET autovacuum_max_parallel_workers = 10;
+	SELECT pg_reload_conf();
+});
+
+# Test 4:
+# We want parallel autovacuum workers to be released even if the leader hits
+# an error. First, simulate the situation where the leader exits due to an ERROR.
+
+prepare_for_next_test($node, 4);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'error');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+$log_start = $node->wait_for_log(
+	qr/error triggered for injection point / .
+	qr/autovacuum-leader-before-indexes-processing/,
+	$log_start
+);
+
+$log_start = $node->wait_for_log(
+	qr/autovacuum worker: 2 parallel workers has been released/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+# Test 5:
+# Same as the above test, but simulate the situation where the leader exits due to FATAL.
+
+prepare_for_next_test($node, 5);
+
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+	ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until parallel workers are reserved by autovacuum, then kill the leader
+$node->wait_for_event(
+	'autovacuum worker',
+	'autovacuum-leader-before-indexes-processing'
+);
+
+my $av_pid = $node->safe_psql('postgres', qq{
+	SELECT pid FROM pg_stat_activity
+	WHERE backend_type = 'autovacuum worker'
+	  AND wait_event = 'autovacuum-leader-before-indexes-processing'
+	LIMIT 1;
+});
+
+$node->safe_psql('postgres', qq{
+	SELECT pg_terminate_backend('$av_pid');
+});
+
+$log_start = $node->wait_for_log(
+	qr/autovacuum worker: 2 parallel workers has been released/,
+	$log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+	SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+});
+
+$node->stop;
+done_testing();
diff --git a/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
new file mode 100644
index 00000000000..e5646e0def5
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum--1.0.sql
@@ -0,0 +1,12 @@
+/* src/test/modules/test_autovacuum/test_autovacuum--1.0.sql */
+
+-- complain if script is sourced in psql, rather than via CREATE EXTENSION
+\echo Use "CREATE EXTENSION test_autovacuum" to load this file. \quit
+
+/*
+ * Functions for inspecting shared autovacuum state
+ */
+
+CREATE FUNCTION get_parallel_autovacuum_free_workers()
+RETURNS INTEGER STRICT
+AS 'MODULE_PATHNAME' LANGUAGE C;
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.c b/src/test/modules/test_autovacuum/test_autovacuum.c
new file mode 100644
index 00000000000..dd5c839e851
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.c
@@ -0,0 +1,31 @@
+/*-------------------------------------------------------------------------
+ *
+ * test_autovacuum.c
+ *		Helpers to write tests for parallel autovacuum
+ *
+ * Copyright (c) 2020-2026, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/test/modules/test_autovacuum/test_autovacuum.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "postmaster/autovacuum.h"
+#include "utils/injection_point.h"
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(get_parallel_autovacuum_free_workers);
+Datum
+get_parallel_autovacuum_free_workers(PG_FUNCTION_ARGS)
+{
+	int32		nfree_workers;
+
+	nfree_workers = AutoVacuumGetFreeParallelWorkers();
+
+	PG_RETURN_INT32(nfree_workers);
+}
diff --git a/src/test/modules/test_autovacuum/test_autovacuum.control b/src/test/modules/test_autovacuum/test_autovacuum.control
new file mode 100644
index 00000000000..1b7fad258f0
--- /dev/null
+++ b/src/test/modules/test_autovacuum/test_autovacuum.control
@@ -0,0 +1,3 @@
+comment = 'Test code for parallel autovacuum'
+default_version = '1.0'
+module_pathname = '$libdir/test_autovacuum'
-- 
2.43.0

From 51590d883e666192c650a5a4b88507ee4c54b76f Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Mon, 16 Mar 2026 19:01:05 +0700
Subject: [PATCH v26 2/6] Logging for parallel autovacuum

---
 src/backend/access/heap/vacuumlazy.c  | 32 +++++++++++++++++++++++++--
 src/backend/commands/vacuumparallel.c | 26 +++++++++++++++++-----
 src/include/commands/vacuum.h         | 28 +++++++++++++++++++++--
 src/tools/pgindent/typedefs.list      |  2 ++
 4 files changed, 78 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 82c5b28e0ad..cccaee5b620 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -343,6 +343,13 @@ typedef struct LVRelState
 	int			num_index_scans;
 	int			num_dead_items_resets;
 	Size		total_dead_items_bytes;
+
+	/*
+	 * Total number of planned and actually launched parallel workers for
+	 * index scans.
+	 */
+	PVWorkersUsage workers_usage;
+
 	/* Counters that follow are only for scanned_pages */
 	int64		tuples_deleted; /* # deleted from table */
 	int64		tuples_frozen;	/* # newly frozen */
@@ -781,6 +788,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 	vacrel->new_all_visible_all_frozen_pages = 0;
 	vacrel->new_all_frozen_pages = 0;
 
+	vacrel->workers_usage.vacuum.nlaunched = 0;
+	vacrel->workers_usage.vacuum.nplanned = 0;
+	vacrel->workers_usage.cleanup.nlaunched = 0;
+	vacrel->workers_usage.cleanup.nplanned = 0;
+
 	/*
 	 * Get cutoffs that determine which deleted tuples are considered DEAD,
 	 * not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze.  Then determine
@@ -1123,6 +1135,20 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
 							 orig_rel_pages == 0 ? 100.0 :
 							 100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
 							 vacrel->lpdead_items);
+			if (vacrel->workers_usage.vacuum.nplanned > 0)
+			{
+				appendStringInfo(&buf,
+								 _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
+								 vacrel->workers_usage.vacuum.nplanned,
+								 vacrel->workers_usage.vacuum.nlaunched);
+			}
+			if (vacrel->workers_usage.cleanup.nplanned > 0)
+			{
+				appendStringInfo(&buf,
+								 _("parallel workers: index cleanup: %d planned, %d launched\n"),
+								 vacrel->workers_usage.cleanup.nplanned,
+								 vacrel->workers_usage.cleanup.nlaunched);
+			}
 			for (int i = 0; i < vacrel->nindexes; i++)
 			{
 				IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2669,7 +2695,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
 	{
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
-											vacrel->num_index_scans);
+											vacrel->num_index_scans,
+											&vacrel->workers_usage);
 
 		/*
 		 * Do a postcheck to consider applying wraparound failsafe now.  Note
@@ -3103,7 +3130,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
 		/* Outsource everything to parallel variant */
 		parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
 											vacrel->num_index_scans,
-											estimated_count);
+											estimated_count,
+											&vacrel->workers_usage);
 	}
 
 	/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 806a7f48326..2d583435696 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -228,7 +228,7 @@ struct ParallelVacuumState
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-												bool vacuum);
+												bool vacuum, PVWorkersStats *wstats);
 static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
 static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -503,7 +503,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
  */
 void
 parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans)
+									int num_index_scans, PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -514,7 +514,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = true;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, true,
+										&wusage->vacuum);
 }
 
 /*
@@ -522,7 +523,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
  */
 void
 parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
-									int num_index_scans, bool estimated_count)
+									int num_index_scans, bool estimated_count,
+									PVWorkersUsage *wusage)
 {
 	Assert(!IsParallelWorker());
 
@@ -534,7 +536,8 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 	pvs->shared->reltuples = num_table_tuples;
 	pvs->shared->estimated_count = estimated_count;
 
-	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+	parallel_vacuum_process_all_indexes(pvs, num_index_scans, false,
+										&wusage->cleanup);
 }
 
 /*
@@ -616,10 +619,13 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 /*
  * Perform index vacuum or index cleanup with parallel workers.  This function
  * must be used by the parallel vacuum leader process.
+ *
+ * If wstats is not NULL, the statistics it stores are updated with the
+ * numbers of parallel workers planned and launched during execution.
  */
 static void
 parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
-									bool vacuum)
+									bool vacuum, PVWorkersStats *wstats)
 {
 	int			nworkers;
 	PVIndVacStatus new_status;
@@ -656,6 +662,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/* Remember this value, if we were asked to */
+	if (wstats != NULL && nworkers > 0)
+		wstats->nplanned += nworkers;
+
 	/*
 	 * Reserve workers in autovacuum global state. Note that we may be given
 	 * fewer workers than we requested.
@@ -729,6 +739,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 			/* Enable shared cost balance for leader backend */
 			VacuumSharedCostBalance = &(pvs->shared->cost_balance);
 			VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+			/* Remember this value, if we were asked to */
+			if (wstats != NULL)
+				wstats->nlaunched += pvs->pcxt->nworkers_launched;
 		}
 
 		if (vacuum)
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index e885a4b9c77..1d820915d71 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -300,6 +300,28 @@ typedef struct VacDeadItemsInfo
 	int64		num_items;		/* current # of entries */
 } VacDeadItemsInfo;
 
+/*
+ * Helper for the PVWorkersUsage structure (see below), to avoid repetition.
+ */
+typedef struct PVWorkersStats
+{
+	/* Number of parallel workers we planned to launch */
+	int			nplanned;
+
+	/* Number of launched parallel workers */
+	int			nlaunched;
+} PVWorkersStats;
+
+/*
+ * PVWorkersUsage stores the total numbers of parallel workers planned and
+ * launched during parallel vacuum (for both index vacuum and cleanup).
+ */
+typedef struct PVWorkersUsage
+{
+	PVWorkersStats vacuum;
+	PVWorkersStats cleanup;
+} PVWorkersUsage;
+
 /* GUC parameters */
 extern PGDLLIMPORT int default_statistics_target;	/* PGDLLIMPORT for PostGIS */
 extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -394,11 +416,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
 extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
 extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
-												int num_index_scans);
+												int num_index_scans,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												long num_table_tuples,
 												int num_index_scans,
-												bool estimated_count);
+												bool estimated_count,
+												PVWorkersUsage *wusage);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 52f8603a7be..a67d54e1819 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2088,6 +2088,8 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVWorkersStats
+PVWorkersUsage
 PX_Alias
 PX_Cipher
 PX_Combo
-- 
2.43.0
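The counters added in patch 2 accumulate across index passes and are only reported at the end of heap_vacuum_rel(). A minimal standalone sketch of that bookkeeping (the struct shapes mirror PVWorkersStats/PVWorkersUsage; the helper functions are mine, for illustration only):

```c
#include <assert.h>

typedef struct WorkersStats
{
	int			nplanned;		/* workers the leader planned to launch */
	int			nlaunched;		/* workers actually launched */
} WorkersStats;

typedef struct WorkersUsage
{
	WorkersStats vacuum;		/* index bulk-deletion passes */
	WorkersStats cleanup;		/* index cleanup pass */
} WorkersUsage;

/*
 * Accumulate one pass worth of counters, as the leader does on each call
 * to parallel_vacuum_process_all_indexes().
 */
static void
accumulate_pass(WorkersStats *stats, int planned, int launched)
{
	stats->nplanned += planned;
	stats->nlaunched += launched;
}

/* The log line is emitted only when at least one worker was planned. */
static int
should_report(const WorkersStats *stats)
{
	return stats->nplanned > 0;
}
```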

From eb1449d0d4570059f1e271ecd81dac8c26a62db0 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 15 Jan 2026 23:15:48 +0700
Subject: [PATCH v26 3/6] Cost based parameters propagation for parallel
 autovacuum

---
 src/backend/commands/vacuum.c         |  21 +++-
 src/backend/commands/vacuumparallel.c | 163 ++++++++++++++++++++++++++
 src/backend/postmaster/autovacuum.c   |   2 +-
 src/include/commands/vacuum.h         |   2 +
 src/tools/pgindent/typedefs.list      |   1 +
 5 files changed, 186 insertions(+), 3 deletions(-)

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index bce3a2daa24..1b5ba3ce1ef 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2435,8 +2435,19 @@ vacuum_delay_point(bool is_analyze)
 	/* Always check for interrupts */
 	CHECK_FOR_INTERRUPTS();
 
-	if (InterruptPending ||
-		(!VacuumCostActive && !ConfigReloadPending))
+	if (InterruptPending)
+		return;
+
+	if (IsParallelWorker())
+	{
+		/*
+		 * Update cost-based vacuum delay parameters for a parallel autovacuum
+		 * worker if any changes are detected.
+		 */
+		parallel_vacuum_update_shared_delay_params();
+	}
+
+	if (!VacuumCostActive && !ConfigReloadPending)
 		return;
 
 	/*
@@ -2450,6 +2461,12 @@ vacuum_delay_point(bool is_analyze)
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
 		VacuumUpdateCosts();
+
+		/*
+		 * Propagate cost-based vacuum delay parameters to shared memory if
+		 * any of them have changed during the config reload.
+		 */
+		parallel_vacuum_propagate_shared_delay_params();
 	}
 
 	/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 2d583435696..2cad6b15517 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -18,6 +18,13 @@
  * the parallel context is re-initialized so that the same DSM can be used for
  * multiple passes of index bulk-deletion and index cleanup.
  *
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
  * Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
@@ -54,6 +61,31 @@
 #define PARALLEL_VACUUM_KEY_WAL_USAGE		4
 #define PARALLEL_VACUUM_KEY_INDEX_STATS		5
 
+/*
+ * Struct for cost-based vacuum delay related parameters to share among an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+	/*
+	 * The generation counter is incremented by the leader process each time
+	 * it updates the shared cost-based vacuum delay parameters. Paralell
+	 * it updates the shared cost-based vacuum delay parameters. Parallel
+	 * vacuum workers compare it with their local generation,
+	 * their local parameters.
+	 */
+	pg_atomic_uint32 generation;
+
+	slock_t		mutex;			/* protects all fields below */
+
+	/* Parameters to share with parallel workers */
+	double		cost_delay;
+	int			cost_limit;
+	int			cost_page_dirty;
+	int			cost_page_hit;
+	int			cost_page_miss;
+} PVSharedCostParams;
+
 /*
  * Shared information among parallel workers.  So this is allocated in the DSM
  * segment.
@@ -123,6 +155,18 @@ typedef struct PVShared
 
 	/* Statistics of shared dead items */
 	VacDeadItemsInfo dead_items_info;
+
+	/*
+	 * If 'true' then we are running parallel autovacuum. Otherwise, we are
+	 * running a parallel maintenance VACUUM.
+	 */
+	bool		is_autovacuum;
+
+	/*
+	 * Struct for syncing cost-based vacuum delay parameters between the
+	 * leader and its parallel autovacuum workers.
+	 */
+	PVSharedCostParams cost_params;
 } PVShared;
 
 /* Status used during parallel index vacuum or cleanup */
@@ -225,6 +269,11 @@ struct ParallelVacuumState
 	PVIndVacStatus status;
 };
 
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/* See comments in the PVSharedCostParams for the details */
+static uint32 shared_params_generation_local = 0;
+
 static int	parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 											bool *will_parallel_vacuum);
 static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
@@ -236,6 +285,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation
 static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
 												   bool vacuum);
 static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
 
 /*
  * Try to enter parallel mode and create a parallel context.  Then initialize
@@ -396,6 +446,21 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	pg_atomic_init_u32(&(shared->active_nworkers), 0);
 	pg_atomic_init_u32(&(shared->idx), 0);
 
+	shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+	/*
+	 * Initialize shared cost-based vacuum delay parameters if it's for
+	 * autovacuum.
+	 */
+	if (shared->is_autovacuum)
+	{
+		parallel_vacuum_set_cost_parameters(&shared->cost_params);
+		pg_atomic_init_u32(&shared->cost_params.generation, 0);
+		SpinLockInit(&shared->cost_params.mutex);
+
+		pv_shared_cost_params = &(shared->cost_params);
+	}
+
 	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
 	pvs->shared = shared;
 
@@ -461,6 +526,9 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
 	DestroyParallelContext(pvs->pcxt);
 	ExitParallelMode();
 
+	if (AmAutoVacuumWorkerProcess())
+		pv_shared_cost_params = NULL;
+
 	pfree(pvs->will_parallel_vacuum);
 	pfree(pvs);
 }
@@ -540,6 +608,95 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
 										&wusage->cleanup);
 }
 
+/*
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+	params->cost_delay = vacuum_cost_delay;
+	params->cost_limit = vacuum_cost_limit;
+	params->cost_page_dirty = VacuumCostPageDirty;
+	params->cost_page_hit = VacuumCostPageHit;
+	params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
+ *
+ * For non-autovacuum parallel workers this function has no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+	uint32		params_generation;
+
+	Assert(IsParallelWorker());
+
+	/* Quick return if the worker is not running for autovacuum */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+	Assert(shared_params_generation_local <= params_generation);
+
+	/* Return if the parameters have not changed in the leader */
+	if (params_generation == shared_params_generation_local)
+		return;
+
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	VacuumCostDelay = pv_shared_cost_params->cost_delay;
+	VacuumCostLimit = pv_shared_cost_params->cost_limit;
+	VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+	VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+	VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	VacuumUpdateCosts();
+
+	shared_params_generation_local = params_generation;
+}
+
+/*
+ * Store the cost-based vacuum delay parameters in the shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/*
+	 * Quick return if the leader process is not sharing the delay parameters.
+	 */
+	if (pv_shared_cost_params == NULL)
+		return;
+
+	/*
+	 * Check if any delay parameters have changed. We can read them without
+	 * locks as only the leader can modify them.
+	 */
+	if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+		vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+		VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+		VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+		VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
+		return;
+
+	/* Update the shared delay parameters */
+	SpinLockAcquire(&pv_shared_cost_params->mutex);
+	parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
+	SpinLockRelease(&pv_shared_cost_params->mutex);
+
+	/*
+	 * Increment the generation of the parameters, i.e. let parallel workers
+	 * know that they should re-read shared cost params.
+	 */
+	pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
 /*
  * Compute the number of parallel worker processes to request.  Both index
  * vacuum and index cleanup can be executed with parallel workers.
@@ -1103,6 +1260,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	VacuumSharedCostBalance = &(shared->cost_balance);
 	VacuumActiveNWorkers = &(shared->active_nworkers);
 
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = &(shared->cost_params);
+
 	/* Set parallel vacuum state */
 	pvs.indrels = indrels;
 	pvs.nindexes = nindexes;
@@ -1152,6 +1312,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	vac_close_indexes(nindexes, indrels, RowExclusiveLock);
 	table_close(rel, ShareUpdateExclusiveLock);
 	FreeAccessStrategy(pvs.bstrategy);
+
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = NULL;
 }
 
 /*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index e1c995dd2ea..a0c020fa1a7 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1691,7 +1691,7 @@ VacuumUpdateCosts(void)
 	}
 	else
 	{
-		/* Must be explicit VACUUM or ANALYZE */
+		/* Must be an explicit VACUUM or ANALYZE, or a parallel autovacuum worker */
 		vacuum_cost_delay = VacuumCostDelay;
 		vacuum_cost_limit = VacuumCostLimit;
 	}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 1d820915d71..cf0c3c9dbf7 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -423,6 +423,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
 												int num_index_scans,
 												bool estimated_count,
 												PVWorkersUsage *wusage);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
 extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a67d54e1819..15b8c966bf8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2088,6 +2088,7 @@ PVIndStats
 PVIndVacStatus
 PVOID
 PVShared
+PVSharedCostParams
 PVWorkersStats
 PVWorkersUsage
 PX_Alias
-- 
2.43.0
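The generation-counter handshake that patch 3 builds into PVSharedCostParams can be sketched in isolation with C11 atomics. This is a simplified single-field model (names are mine); unlike the patch it elides the spinlock that protects the parameter fields and shows only the publish/poll protocol:

```c
#include <assert.h>
#include <stdatomic.h>

typedef struct SharedDelayParams
{
	atomic_uint	generation;		/* bumped by the leader on every change */
	double		cost_delay;		/* spinlock-protected in the actual patch */
} SharedDelayParams;

/* Per-worker local state, as in shared_params_generation_local */
static unsigned local_generation = 0;
static double local_cost_delay = 0.0;

/* Leader side: publish a new value, then advertise the change. */
static void
leader_publish(SharedDelayParams *shared, double new_delay)
{
	shared->cost_delay = new_delay;
	atomic_fetch_add(&shared->generation, 1);
}

/*
 * Worker side: refresh the local copy only if the generation moved since
 * the last poll; return 1 if a refresh happened, 0 otherwise.
 */
static int
worker_poll(SharedDelayParams *shared)
{
	unsigned	gen = atomic_load(&shared->generation);

	if (gen == local_generation)
		return 0;				/* nothing changed, cheap fast path */
	local_cost_delay = shared->cost_delay;
	local_generation = gen;
	return 1;
}
```

The fast path is a single atomic read per vacuum_delay_point() call, which is why polling from such a hot function stays cheap.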

From 89ee0b3f80678164c6bd4b9117c126fc15bd1328 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 02:32:44 +0700
Subject: [PATCH v26 5/6] Documentation for parallel autovacuum

---
 doc/src/sgml/config.sgml           | 17 +++++++++++++++++
 doc/src/sgml/maintenance.sgml      | 12 ++++++++++++
 doc/src/sgml/ref/create_table.sgml | 20 ++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 8cdd826fbd3..73f839b6a8d 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2918,6 +2918,7 @@ include_dir 'conf.d'
         <para>
          When changing this value, consider also adjusting
          <xref linkend="guc-max-parallel-workers"/>,
+         <xref linkend="guc-autovacuum-max-parallel-workers"/>,
          <xref linkend="guc-max-parallel-maintenance-workers"/>, and
          <xref linkend="guc-max-parallel-workers-per-gather"/>.
         </para>
@@ -9395,6 +9396,22 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
        </listitem>
       </varlistentry>
 
+      <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+        <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+        <indexterm>
+         <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+         <secondary>configuration parameter</secondary>
+        </indexterm>
+        </term>
+        <listitem>
+         <para>
+          Sets the maximum number of parallel autovacuum workers that can be
+          used for parallel index vacuuming at one time. This value is capped
+          by <xref linkend="guc-max-parallel-workers"/>. The default is 2.
+         </para>
+        </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
 
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 7c958b06273..c9f9163c551 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -926,6 +926,18 @@ HINT:  Execute a database-wide VACUUM in that database.
     autovacuum workers' activity.
    </para>
 
+   <para>
+    If an autovacuum worker process encounters a table whose
+    <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+    is set, it launches parallel workers to vacuum that table's indexes
+    in parallel. Parallel workers are taken from the pool of processes
+    established by <xref linkend="guc-max-worker-processes"/>, limited by
+    <xref linkend="guc-max-parallel-workers"/>.
+    The total number of parallel autovacuum workers that can be active at one
+    time is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+    configuration parameter.
+   </para>
+
    <para>
     If several large tables all become eligible for vacuuming in a short
     amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 982532fe725..4894de021cd 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1718,6 +1718,26 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
     </listitem>
    </varlistentry>
 
+  <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+    <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+    <indexterm>
+     <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+    </indexterm>
+    </term>
+    <listitem>
+     <para>
+      Sets the maximum number of parallel autovacuum workers that can process
+      indexes of this table.
+      The default value is -1, which disables parallel index vacuuming for
+      this table. If the value is 0, the parallel degree is computed based
+      on the number of indexes.
+      Note that the computed number of workers may not actually be available
+      at run time. If this occurs, autovacuum will run with fewer workers
+      than expected.
+     </para>
+    </listitem>
+   </varlistentry>
+
    <varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
     <term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
     <indexterm>
-- 
2.43.0

From db49a7776c6faba67a7c5f340f91da5c0e264e83 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Sun, 23 Nov 2025 01:03:24 +0700
Subject: [PATCH v26 1/6] Parallel autovacuum

---
 src/backend/access/common/reloptions.c        |  11 ++
 src/backend/commands/vacuumparallel.c         |  42 ++++-
 src/backend/postmaster/autovacuum.c           | 164 +++++++++++++++++-
 src/backend/utils/init/globals.c              |   1 +
 src/backend/utils/misc/guc.c                  |   8 +-
 src/backend/utils/misc/guc_parameters.dat     |   8 +
 src/backend/utils/misc/postgresql.conf.sample |   1 +
 src/bin/psql/tab-complete.in.c                |   1 +
 src/include/miscadmin.h                       |   1 +
 src/include/postmaster/autovacuum.h           |   5 +
 src/include/utils/rel.h                       |   8 +
 11 files changed, 240 insertions(+), 10 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 237ab8d0ed9..9459a010cc3 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -235,6 +235,15 @@ static relopt_int intRelOpts[] =
 		},
 		SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
 	},
+	{
+		{
+			"autovacuum_parallel_workers",
+			"Maximum number of parallel autovacuum workers that can be used for processing this table.",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		-1, -1, 1024
+	},
 	{
 		{
 			"autovacuum_vacuum_threshold",
@@ -1968,6 +1977,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
 		{"autovacuum_enabled", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+		{"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
 		{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
 		offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
 		{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 279108ca89f..806a7f48326 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
 /*-------------------------------------------------------------------------
  *
  * vacuumparallel.c
- *	  Support routines for parallel vacuum execution.
+ *	  Support routines for parallel vacuum and autovacuum execution. In the
+ *	  comments below, the word "vacuum" refers to both vacuum and
+ *	  autovacuum.
  *
  * This file contains routines that are intended to support setting up, using,
  * and tearing down a ParallelVacuumState.
@@ -34,6 +36,7 @@
 #include "executor/instrument.h"
 #include "optimizer/paths.h"
 #include "pgstat.h"
+#include "postmaster/autovacuum.h"
 #include "storage/bufmgr.h"
 #include "storage/proc.h"
 #include "tcop/tcopprot.h"
@@ -374,8 +377,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
 	shared->queryid = pgstat_get_my_query_id();
 	shared->maintenance_work_mem_worker =
 		(nindexes_mwm > 0) ?
-		maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
-		maintenance_work_mem;
+		vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+		vac_work_mem;
+
 	shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
 
 	/* Prepare DSA space for dead items */
@@ -554,12 +558,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	int			nindexes_parallel_bulkdel = 0;
 	int			nindexes_parallel_cleanup = 0;
 	int			parallel_workers;
+	int			max_workers;
+
+	max_workers = AmAutoVacuumWorkerProcess() ?
+		autovacuum_max_parallel_workers :
+		max_parallel_maintenance_workers;
 
 	/*
 	 * We don't allow performing parallel operation in standalone backend or
 	 * when parallelism is disabled.
 	 */
-	if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+	if (!IsUnderPostmaster || max_workers == 0)
 		return 0;
 
 	/*
@@ -598,8 +607,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
 	parallel_workers = (nrequested > 0) ?
 		Min(nrequested, nindexes_parallel) : nindexes_parallel;
 
-	/* Cap by max_parallel_maintenance_workers */
-	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+	/* Cap by GUC variable */
+	parallel_workers = Min(parallel_workers, max_workers);
 
 	return parallel_workers;
 }
@@ -647,6 +656,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 	 */
 	nworkers = Min(nworkers, pvs->pcxt->nworkers);
 
+	/*
+	 * Reserve workers in autovacuum global state. Note that we may be given
+	 * fewer workers than we requested.
+	 */
+	if (AmAutoVacuumWorkerProcess() && nworkers > 0)
+		AutoVacuumReserveParallelWorkers(&nworkers);
+
 	/*
 	 * Set index vacuum status and mark whether parallel vacuum worker can
 	 * process it.
@@ -691,6 +707,16 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		LaunchParallelWorkers(pvs->pcxt);
 
+		/*
+		 * Tell autovacuum that we could not launch all the previously
+		 * reserved workers.
+		 */
+		if (AmAutoVacuumWorkerProcess() &&
+			pvs->pcxt->nworkers_launched < nworkers)
+		{
+			AutoVacuumReleaseParallelWorkers(nworkers - pvs->pcxt->nworkers_launched);
+		}
+
 		if (pvs->pcxt->nworkers_launched > 0)
 		{
 			/*
@@ -739,6 +765,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
 
 		for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
 			InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+		/* Release all the reserved parallel workers for autovacuum */
+		if (AmAutoVacuumWorkerProcess() && pvs->pcxt->nworkers_launched > 0)
+			AutoVacuumReleaseAllParallelWorkers();
 	}
 
 	/*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 695e187ba11..e1c995dd2ea 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -152,6 +152,13 @@ int			Log_autoanalyze_min_duration = 600000;
 static double av_storage_param_cost_delay = -1;
 static int	av_storage_param_cost_limit = -1;
 
+/*
+ * Tracks the number of parallel workers currently reserved by the
+ * autovacuum worker. This is non-zero only for the parallel autovacuum
+ * leader process.
+ */
+static int	av_nworkers_reserved = 0;
+
 /* Flags set by signal handlers */
 static volatile sig_atomic_t got_SIGUSR2 = false;
 
@@ -286,6 +293,8 @@ typedef struct AutoVacuumWorkItem
  * av_workItems		work item array
  * av_nworkersForBalance the number of autovacuum workers to use when
  * 					calculating the per worker cost limit
+ * av_freeParallelWorkers the number of free parallel autovacuum workers
+ * av_maxParallelWorkers the maximum number of parallel autovacuum workers
  *
  * This struct is protected by AutovacuumLock, except for av_signal and parts
  * of the worker list (see above).
@@ -300,6 +309,8 @@ typedef struct
 	WorkerInfo	av_startingWorker;
 	AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];
 	pg_atomic_uint32 av_nworkersForBalance;
+	int32		av_freeParallelWorkers;
+	int32		av_maxParallelWorkers;
 } AutoVacuumShmemStruct;
 
 static AutoVacuumShmemStruct *AutoVacuumShmem;
@@ -362,6 +373,7 @@ static void autovac_report_workitem(AutoVacuumWorkItem *workitem,
 static void avl_sigusr2_handler(SIGNAL_ARGS);
 static bool av_worker_available(void);
 static void check_av_worker_gucs(void);
+static void adjust_free_parallel_workers(int prev_max_parallel_workers);
 
 
 
@@ -760,6 +772,8 @@ ProcessAutoVacLauncherInterrupts(void)
 	if (ConfigReloadPending)
 	{
 		int			autovacuum_max_workers_prev = autovacuum_max_workers;
+		int			autovacuum_max_parallel_workers_prev =
+			autovacuum_max_parallel_workers;
 
 		ConfigReloadPending = false;
 		ProcessConfigFile(PGC_SIGHUP);
@@ -776,6 +790,15 @@ ProcessAutoVacLauncherInterrupts(void)
 		if (autovacuum_max_workers_prev != autovacuum_max_workers)
 			check_av_worker_gucs();
 
+		/*
+		 * If autovacuum_max_parallel_workers changed, we must take care of
+		 * the correct value of available parallel autovacuum workers in
+		 * shmem.
+		 */
+		if (autovacuum_max_parallel_workers_prev !=
+			autovacuum_max_parallel_workers)
+			adjust_free_parallel_workers(autovacuum_max_parallel_workers_prev);
+
 		/* rebuild the list in case the naptime changed */
 		rebuild_database_list(InvalidOid);
 	}
@@ -1380,6 +1403,16 @@ avl_sigusr2_handler(SIGNAL_ARGS)
  *					  AUTOVACUUM WORKER CODE
  ********************************************************************/
 
+/*
+ * Make sure that all reserved workers are released, even if the parallel
+ * autovacuum leader is exiting due to a FATAL error.
+ */
+static void
+autovacuum_worker_before_shmem_exit(int code, Datum arg)
+{
+	AutoVacuumReleaseAllParallelWorkers();
+}
+
 /*
  * Main entry point for autovacuum worker processes.
  */
@@ -2276,6 +2309,12 @@ do_autovacuum(void)
 										  "Autovacuum Portal",
 										  ALLOCSET_DEFAULT_SIZES);
 
+	/*
+	 * Parallel autovacuum can reserve parallel workers. Make sure that all
+	 * reserved workers are released even after a FATAL error.
+	 */
+	before_shmem_exit(autovacuum_worker_before_shmem_exit, 0);
+
 	/*
 	 * Perform operations on collected tables.
 	 */
@@ -2457,6 +2496,12 @@ do_autovacuum(void)
 		}
 		PG_CATCH();
 		{
+			/*
+			 * Parallel autovacuum can reserve parallel workers. Make sure
+			 * that all reserved workers are released.
+			 */
+			AutoVacuumReleaseAllParallelWorkers();
+
 			/*
 			 * Abort the transaction, start a new one, and proceed with the
 			 * next table in our list.
@@ -2857,8 +2902,12 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		 */
 		tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
 		tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
-		/* As of now, we don't support parallel vacuum for autovacuum */
-		tab->at_params.nworkers = -1;
+
+		/* Decide whether we need to process the table's indexes in parallel. */
+		tab->at_params.nworkers = avopts
+			? avopts->autovacuum_parallel_workers
+			: -1;
+
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
@@ -3335,6 +3384,88 @@ AutoVacuumRequestWork(AutoVacuumWorkItemType type, Oid relationId,
 	return result;
 }
 
+/*
+ * Reserves parallel workers for autovacuum.
+ *
+ * nworkers is an in/out parameter: on entry, the number of parallel workers
+ * the caller requests; on exit, the number actually reserved.
+ *
+ * The caller must call AutoVacuumRelease[All]ParallelWorkers() to release the
+ * reserved workers.
+ *
+ * NOTE: We provide as many workers as requested, even if the caller would
+ * thereby occupy all available workers.
+ */
+void
+AutoVacuumReserveParallelWorkers(int *nworkers)
+{
+	/* Only leader autovacuum worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/* The worker must not have any reserved workers yet */
+	Assert(av_nworkers_reserved == 0);
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/* Provide as many workers as we can. */
+	*nworkers = Min(AutoVacuumShmem->av_freeParallelWorkers, *nworkers);
+	AutoVacuumShmem->av_freeParallelWorkers -= *nworkers;
+
+	LWLockRelease(AutovacuumLock);
+
+	/* Remember how many workers we have reserved. */
+	av_nworkers_reserved = *nworkers;
+}
+
+/*
+ * Releases the reserved parallel workers for autovacuum.
+ *
+ * This function should be used to release the parallel workers that an
+ * autovacuum worker reserved by AutoVacuumReserveParallelWorkers(). nworkers
+ * is the number of workers to release, which must not be greater than the
+ * number of workers currently reserved, av_nworkers_reserved.
+ */
+void
+AutoVacuumReleaseParallelWorkers(int nworkers)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	/* Cannot release more workers than reserved */
+	nworkers = Min(nworkers, av_nworkers_reserved);
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * If the maximum number of parallel workers was reduced during execution,
+	 * we must cap the number of available workers at the new maximum.
+	 */
+	AutoVacuumShmem->av_freeParallelWorkers =
+		Min(AutoVacuumShmem->av_freeParallelWorkers + nworkers,
+			AutoVacuumShmem->av_maxParallelWorkers);
+
+	LWLockRelease(AutovacuumLock);
+
+	/* Don't have to remember these workers anymore. */
+	av_nworkers_reserved -= nworkers;
+}
+
+/*
+ * Same as above, but this function releases all the parallel workers that
+ * this autovacuum worker reserved.
+ */
+void
+AutoVacuumReleaseAllParallelWorkers(void)
+{
+	/* Only leader worker can call this function. */
+	Assert(AmAutoVacuumWorkerProcess());
+
+	if (av_nworkers_reserved > 0)
+		AutoVacuumReleaseParallelWorkers(av_nworkers_reserved);
+
+	Assert(av_nworkers_reserved == 0);
+}
+
 /*
  * autovac_init
  *		This is called at postmaster initialization.
@@ -3395,6 +3526,10 @@ AutoVacuumShmemInit(void)
 		Assert(!found);
 
 		AutoVacuumShmem->av_launcherpid = 0;
+		AutoVacuumShmem->av_maxParallelWorkers =
+			Min(autovacuum_max_parallel_workers, max_parallel_workers);
+		AutoVacuumShmem->av_freeParallelWorkers =
+			AutoVacuumShmem->av_maxParallelWorkers;
 		dclist_init(&AutoVacuumShmem->av_freeWorkers);
 		dlist_init(&AutoVacuumShmem->av_runningWorkers);
 		AutoVacuumShmem->av_startingWorker = NULL;
@@ -3476,3 +3611,28 @@ check_av_worker_gucs(void)
 				 errdetail("The server will only start up to \"autovacuum_worker_slots\" (%d) autovacuum workers at a given time.",
 						   autovacuum_worker_slots)));
 }
+
+/*
+ * Adjust the number of free parallel workers to match the new
+ * autovacuum_max_parallel_workers value.
+ */
+static void
+adjust_free_parallel_workers(int prev_max_parallel_workers)
+{
+	int	nfree_workers;
+
+	LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);
+
+	/*
+	 * Cap or increase the number of free parallel workers according to the
+	 * parameter change.
+	 */
+	nfree_workers =
+		autovacuum_max_parallel_workers - prev_max_parallel_workers +
+		AutoVacuumShmem->av_freeParallelWorkers;
+
+	AutoVacuumShmem->av_freeParallelWorkers = Max(nfree_workers, 0);
+	AutoVacuumShmem->av_maxParallelWorkers = autovacuum_max_parallel_workers;
+
+	LWLockRelease(AutovacuumLock);
+}
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int			NBuffers = 16384;
 int			MaxConnections = 100;
 int			max_worker_processes = 8;
 int			max_parallel_workers = 8;
+int			autovacuum_max_parallel_workers = 2;
 int			MaxBackends = 0;
 
 /* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index d77502838c4..4a5c73a9e33 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3326,9 +3326,13 @@ set_config_with_handle(const char *name, config_handle *handle,
 	 *
 	 * Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
 	 *
-	 * Other changes might need to affect other workers, so forbid them.
+	 * Other changes might need to affect other workers, so forbid them. Note
+	 * that the parallel autovacuum leader is an exception: only cost-based
+	 * delay settings need to be propagated to parallel vacuum workers, and
+	 * we handle that elsewhere if appropriate.
 	 */
-	if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+	if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+		action != GUC_ACTION_SAVE &&
 		(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
 	{
 		ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index a5a0edf2534..c2395cf6638 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,14 @@
   max => '2000000000',
 },
 
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+  short_desc => 'Maximum number of parallel autovacuum workers that can be taken from the bgworkers pool.',
+  variable => 'autovacuum_max_parallel_workers',
+  boot_val => '2',
+  min => '0',
+  max => 'MAX_BACKENDS',
+},
+
 { name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
   short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
   variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index e686d88afc4..5e1c62d616c 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -710,6 +710,7 @@
 #autovacuum_worker_slots = 16           # autovacuum worker slots to allocate
                                         # (change requires restart)
 #autovacuum_max_workers = 3             # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2    # limited by max_parallel_workers
 #autovacuum_naptime = 1min              # time between autovacuum runs
 #autovacuum_vacuum_threshold = 50       # min number of row updates before
                                         # vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 5bdbf1530a2..29171efbc1b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1432,6 +1432,7 @@ static const char *const table_storage_parameters[] = {
 	"autovacuum_multixact_freeze_max_age",
 	"autovacuum_multixact_freeze_min_age",
 	"autovacuum_multixact_freeze_table_age",
+	"autovacuum_parallel_workers",
 	"autovacuum_vacuum_cost_delay",
 	"autovacuum_vacuum_cost_limit",
 	"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
 extern PGDLLIMPORT int MaxConnections;
 extern PGDLLIMPORT int max_worker_processes;
 extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
 
 extern PGDLLIMPORT int commit_timestamp_buffers;
 extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 5aa0f3a8ac1..f3783afb51b 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -62,6 +62,11 @@ pg_noreturn extern void AutoVacWorkerMain(const void *startup_data, size_t start
 extern bool AutoVacuumRequestWork(AutoVacuumWorkItemType type,
 								  Oid relationId, BlockNumber blkno);
 
+/* parallel autovacuum stuff */
+extern void	AutoVacuumReserveParallelWorkers(int *nworkers);
+extern void AutoVacuumReleaseParallelWorkers(int nworkers);
+extern void AutoVacuumReleaseAllParallelWorkers(void);
+
 /* shared memory stuff */
 extern Size AutoVacuumShmemSize(void);
 extern void AutoVacuumShmemInit(void);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..11dd3aebc6c 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,14 @@ typedef struct ForeignKeyCacheInfo
 typedef struct AutoVacOpts
 {
 	bool		enabled;
+
+	/*
+	 * Target number of parallel autovacuum workers. The default of -1
+	 * disables parallel vacuum during autovacuum; 0 means choose the
+	 * parallel degree based on the number of indexes.
+	 */
+	int			autovacuum_parallel_workers;
+
 	int			vacuum_threshold;
 	int			vacuum_max_threshold;
 	int			vacuum_ins_threshold;
-- 
2.43.0

From 0326bcc5f0d059ed4b4f1ec3ad48d9ce0d8405aa Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Mon, 16 Mar 2026 18:57:04 +0700
Subject: [PATCH 2/2] fixes for 0004 patch

---
 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
index e5dacf59980..34f11193dfd 100644
--- a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -109,7 +109,7 @@ $node->safe_psql('postgres', qq{
 # Wait until the parallel autovacuum on table is completed. At the same time,
 # we check that the required number of parallel workers has been started.
 $log_start = $node->wait_for_log(
-	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
+	qr/autovacuum worker: 2 parallel workers has been released/,
 	$log_start
 );
 
@@ -214,7 +214,7 @@ $node->safe_psql('postgres', qq{
 
 # Wait until the end of parallel processing
 $log_start = $node->wait_for_log(
-	qr/parallel workers: index vacuum: 2 planned, 2 reserved, 2 launched/,
+	qr/autovacuum worker: 2 parallel workers has been released/,
 	$log_start
 );
 
-- 
2.43.0

From dde5f3806a70befb0d08d2c8afee423b27a02c7d Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Mon, 16 Mar 2026 18:56:50 +0700
Subject: [PATCH 1/2] fixes for 0003 patch

---
 src/backend/commands/vacuumparallel.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 82618ab3ac5..5105137ce3b 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -527,6 +527,9 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
 	DestroyParallelContext(pvs->pcxt);
 	ExitParallelMode();
 
+	if (AmAutoVacuumWorkerProcess())
+		pv_shared_cost_params = NULL;
+
 	pfree(pvs->will_parallel_vacuum);
 	pfree(pvs);
 }
@@ -1337,6 +1340,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
 	vac_close_indexes(nindexes, indrels, RowExclusiveLock);
 	table_close(rel, ShareUpdateExclusiveLock);
 	FreeAccessStrategy(pvs.bstrategy);
+
+	if (shared->is_autovacuum)
+		pv_shared_cost_params = NULL;
 }
 
 /*
-- 
2.43.0
