Hi,
On Mon, Mar 16, 2026 at 11:46 PM Masahiko Sawada <[email protected]> wrote:
>
> While both ideas can achieve our goal of this feature in general, the
> new idea doesn't require an additional layer of reserve/release logic
> on top of the existing bgworker pool, which is good. I've not tried
> coding this idea but I believe the patch can be simplified very much.
> So I agree to move to this idea.
>
OK, let's do it!
Please see the updated set of patches. The main changes are:
0001 patch - removed all logic related to reserving parallel workers.
0002 patch - no changes since v26.
0003 patch - no changes since v26.
0004 patch - removed everything related to the "test_autovacuum" extension.
             Also removed the 3rd, 4th and 5th tests, because they were
             related only to the worker reservation logic.
0005 patch - minor changes reflecting the new GUC parameter's purpose.
I have kept the tests independent of the user-facing logging.
Instead of the "nworkers released" logs, I have added a single log at the
end of one round of parallel processing:
"autovacuum worker: finished parallel index processing with N parallel workers".
This is the only code that I added, rather than deleted, within the 0001 patch.
I hope I didn't miss anything.
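For anyone who wants to try the series, usage is roughly as follows (a
sketch based on the GUC and reloption introduced in these patches;
"big_table" is just a placeholder name):

```sql
-- Cluster-wide cap on parallel workers per autovacuum worker
-- (SIGHUP-context GUC, further capped by max_parallel_workers):
ALTER SYSTEM SET autovacuum_max_parallel_workers = 4;
SELECT pg_reload_conf();

-- Opt a table into parallel index vacuuming. -1 (the default) disables
-- it; 0 computes the parallel degree from the number of indexes:
ALTER TABLE big_table SET (autovacuum_parallel_workers = 2);
```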
--
Best regards,
Daniil Davydov
From 923f6f3d758edb1f64eadef1f5bb1dfb873f4b21 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 03:23:38 +0700
Subject: [PATCH v27 5/5] Documentation for parallel autovacuum
---
doc/src/sgml/config.sgml | 18 ++++++++++++++++++
doc/src/sgml/maintenance.sgml | 12 ++++++++++++
doc/src/sgml/ref/create_table.sgml | 20 ++++++++++++++++++++
3 files changed, 50 insertions(+)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 8cdd826fbd3..7741796c6b0 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2918,6 +2918,7 @@ include_dir 'conf.d'
<para>
When changing this value, consider also adjusting
<xref linkend="guc-max-parallel-workers"/>,
+ <xref linkend="guc-autovacuum-max-parallel-workers"/>,
<xref linkend="guc-max-parallel-maintenance-workers"/>, and
<xref linkend="guc-max-parallel-workers-per-gather"/>.
</para>
@@ -9395,6 +9396,23 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
</listitem>
</varlistentry>
+ <varlistentry id="guc-autovacuum-max-parallel-workers" xreflabel="autovacuum_max_parallel_workers">
+ <term><varname>autovacuum_max_parallel_workers</varname> (<type>integer</type>)
+ <indexterm>
+ <primary><varname>autovacuum_max_parallel_workers</varname></primary>
+ <secondary>configuration parameter</secondary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Sets the maximum number of parallel workers that a single autovacuum
+ worker can use for parallel index vacuuming at one time. This value
+ is capped by <xref linkend="guc-max-parallel-workers"/>.
+ The default is 2.
+ </para>
+ </listitem>
+ </varlistentry>
+
</variablelist>
</sect2>
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 7c958b06273..f2a280db569 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -926,6 +926,18 @@ HINT: Execute a database-wide VACUUM in that database.
autovacuum workers' activity.
</para>
+ <para>
+ If an autovacuum worker process encounters a table whose
+ <xref linkend="reloption-autovacuum-parallel-workers"/> storage parameter
+ is set, it launches parallel workers to vacuum the indexes of this table
+ in parallel. Parallel workers are taken from the pool of processes
+ established by <xref linkend="guc-max-worker-processes"/>, limited by
+ <xref linkend="guc-max-parallel-workers"/>.
+ The number of parallel workers that can be taken from the pool by a single
+ autovacuum worker is limited by the <xref linkend="guc-autovacuum-max-parallel-workers"/>
+ configuration parameter.
+ </para>
+
<para>
If several large tables all become eligible for vacuuming in a short
amount of time, all autovacuum workers might become occupied with
diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 982532fe725..4894de021cd 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1718,6 +1718,26 @@ WITH ( MODULUS <replaceable class="parameter">numeric_literal</replaceable>, REM
</listitem>
</varlistentry>
+ <varlistentry id="reloption-autovacuum-parallel-workers" xreflabel="autovacuum_parallel_workers">
+ <term><literal>autovacuum_parallel_workers</literal> (<type>integer</type>)
+ <indexterm>
+ <primary><varname>autovacuum_parallel_workers</varname> storage parameter</primary>
+ </indexterm>
+ </term>
+ <listitem>
+ <para>
+ Sets the maximum number of parallel autovacuum workers that can process
+ the indexes of this table.
+ The default value is -1, which means no parallel index vacuuming for
+ this table. If the value is 0, the parallel degree is computed from the
+ number of indexes.
+ Note that the computed number of workers may not actually be available
+ at run time. If this occurs, autovacuum will run with fewer workers
+ than expected.
+ </para>
+ </listitem>
+ </varlistentry>
+
<varlistentry id="reloption-autovacuum-vacuum-threshold" xreflabel="autovacuum_vacuum_threshold">
<term><literal>autovacuum_vacuum_threshold</literal>, <literal>toast.autovacuum_vacuum_threshold</literal> (<type>integer</type>)
<indexterm>
--
2.43.0
From 4219b2cf4869c3bab130642fb243441af26906ad Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 02:50:23 +0700
Subject: [PATCH v27 4/5] Tests for parallel autovacuum
---
src/backend/access/heap/vacuumlazy.c | 9 +
src/backend/commands/vacuumparallel.c | 25 +++
src/test/modules/Makefile | 1 +
src/test/modules/meson.build | 1 +
src/test/modules/test_autovacuum/.gitignore | 2 +
src/test/modules/test_autovacuum/Makefile | 20 +++
src/test/modules/test_autovacuum/meson.build | 15 ++
.../t/001_parallel_autovacuum.pl | 169 ++++++++++++++++++
8 files changed, 242 insertions(+)
create mode 100644 src/test/modules/test_autovacuum/.gitignore
create mode 100644 src/test/modules/test_autovacuum/Makefile
create mode 100644 src/test/modules/test_autovacuum/meson.build
create mode 100644 src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index cccaee5b620..4f97baced2b 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -152,6 +152,7 @@
#include "storage/latch.h"
#include "storage/lmgr.h"
#include "storage/read_stream.h"
+#include "utils/injection_point.h"
#include "utils/lsyscache.h"
#include "utils/pg_rusage.h"
#include "utils/timestamp.h"
@@ -873,6 +874,14 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
lazy_check_wraparound_failsafe(vacrel);
dead_items_alloc(vacrel, params.nworkers);
+#ifdef USE_INJECTION_POINTS
+ /*
+ * Trigger the injection point if parallel autovacuum is about to start.
+ */
+ if (AmAutoVacuumWorkerProcess() && ParallelVacuumIsActive(vacrel))
+ INJECTION_POINT("autovacuum-start-parallel-vacuum", NULL);
+#endif
+
/*
* Call lazy_scan_heap to perform all required heap pruning, index
* vacuuming, and heap vacuuming (plus related processing)
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index f4fceb96874..89eaceba55c 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -46,6 +46,7 @@
#include "storage/bufmgr.h"
#include "storage/proc.h"
#include "tcop/tcopprot.h"
+#include "utils/injection_point.h"
#include "utils/lsyscache.h"
#include "utils/rel.h"
@@ -655,6 +656,14 @@ parallel_vacuum_update_shared_delay_params(void)
VacuumUpdateCosts();
shared_params_generation_local = params_generation;
+
+ elog(DEBUG2,
+ "parallel autovacuum worker cost params: cost_limit=%d, cost_delay=%g, cost_page_miss=%d, cost_page_dirty=%d, cost_page_hit=%d",
+ vacuum_cost_limit,
+ vacuum_cost_delay,
+ VacuumCostPageMiss,
+ VacuumCostPageDirty,
+ VacuumCostPageHit);
}
/*
@@ -898,6 +907,15 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
pvs->pcxt->nworkers_launched, nworkers)));
}
+#ifdef USE_INJECTION_POINTS
+ /*
+ * This injection point is used to wait until parallel autovacuum workers
+ * finish their part of index processing.
+ */
+ if (nworkers > 0)
+ INJECTION_POINT("autovacuum-leader-before-indexes-processing", NULL);
+#endif
+
/* Vacuum the indexes that can be processed by only leader process */
parallel_vacuum_process_unsafe_indexes(pvs);
@@ -918,6 +936,13 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
for (int i = 0; i < pvs->pcxt->nworkers_launched; i++)
InstrAccumParallelQuery(&pvs->buffer_usage[i], &pvs->wal_usage[i]);
+
+ if (AmAutoVacuumWorkerProcess())
+ elog(DEBUG2,
+ ngettext("autovacuum worker: finished parallel index processing with %d parallel worker",
+ "autovacuum worker: finished parallel index processing with %d parallel workers",
+ nworkers),
+ nworkers);
}
/*
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 4ac5c84db43..01fe0041c97 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -16,6 +16,7 @@ SUBDIRS = \
plsample \
spgist_name_ops \
test_aio \
+ test_autovacuum \
test_binaryheap \
test_bitmapset \
test_bloomfilter \
diff --git a/src/test/modules/meson.build b/src/test/modules/meson.build
index e2b3eef4136..9dcdc68bc87 100644
--- a/src/test/modules/meson.build
+++ b/src/test/modules/meson.build
@@ -16,6 +16,7 @@ subdir('plsample')
subdir('spgist_name_ops')
subdir('ssl_passphrase_callback')
subdir('test_aio')
+subdir('test_autovacuum')
subdir('test_binaryheap')
subdir('test_bitmapset')
subdir('test_bloomfilter')
diff --git a/src/test/modules/test_autovacuum/.gitignore b/src/test/modules/test_autovacuum/.gitignore
new file mode 100644
index 00000000000..716e17f5a2a
--- /dev/null
+++ b/src/test/modules/test_autovacuum/.gitignore
@@ -0,0 +1,2 @@
+# Generated subdirectories
+/tmp_check/
diff --git a/src/test/modules/test_autovacuum/Makefile b/src/test/modules/test_autovacuum/Makefile
new file mode 100644
index 00000000000..188ec9f96a2
--- /dev/null
+++ b/src/test/modules/test_autovacuum/Makefile
@@ -0,0 +1,20 @@
+# src/test/modules/test_autovacuum/Makefile
+
+PGFILEDESC = "test_autovacuum - test code for parallel autovacuum"
+
+TAP_TESTS = 1
+
+EXTRA_INSTALL = src/test/modules/injection_points
+
+export enable_injection_points
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/test_autovacuum
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
diff --git a/src/test/modules/test_autovacuum/meson.build b/src/test/modules/test_autovacuum/meson.build
new file mode 100644
index 00000000000..86e392bc0de
--- /dev/null
+++ b/src/test/modules/test_autovacuum/meson.build
@@ -0,0 +1,15 @@
+# Copyright (c) 2024-2026, PostgreSQL Global Development Group
+
+tests += {
+ 'name': 'test_autovacuum',
+ 'sd': meson.current_source_dir(),
+ 'bd': meson.current_build_dir(),
+ 'tap': {
+ 'env': {
+ 'enable_injection_points': get_option('injection_points') ? 'yes' : 'no',
+ },
+ 'tests': [
+ 't/001_parallel_autovacuum.pl',
+ ],
+ },
+}
diff --git a/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
new file mode 100644
index 00000000000..9ad87d48b96
--- /dev/null
+++ b/src/test/modules/test_autovacuum/t/001_parallel_autovacuum.pl
@@ -0,0 +1,169 @@
+# Test parallel autovacuum behavior
+
+use warnings FATAL => 'all';
+use PostgreSQL::Test::Cluster;
+use PostgreSQL::Test::Utils;
+use Test::More;
+
+if ($ENV{enable_injection_points} ne 'yes')
+{
+ plan skip_all => 'Injection points not supported by this build';
+}
+
+# Before each test, disable autovacuum for the 'test_autovac' table and
+# generate some dead tuples in it.
+
+sub prepare_for_next_test
+{
+ my ($node, $test_number) = @_;
+
+ $node->safe_psql('postgres', qq{
+ ALTER TABLE test_autovac SET (autovacuum_enabled = false);
+ UPDATE test_autovac SET col_1 = $test_number;
+ });
+}
+
+
+my $log_start;
+
+my $node = PostgreSQL::Test::Cluster->new('node1');
+$node->init;
+
+# Configure the server so that it can launch parallel autovacuum workers,
+# logs the information we are interested in, and runs autovacuum frequently.
+$node->append_conf('postgresql.conf', qq{
+ max_worker_processes = 20
+ max_parallel_workers = 20
+ autovacuum_max_parallel_workers = 4
+ log_min_messages = debug2
+ autovacuum_naptime = '1s'
+ min_parallel_index_scan_size = 0
+});
+$node->start;
+
+# Check if the extension injection_points is available, as it may be
+# possible that this script is run with installcheck, where the module
+# would not be installed by default.
+if (!$node->check_extension('injection_points'))
+{
+ plan skip_all => 'Extension injection_points not installed';
+}
+
+# Create all functions needed for testing
+$node->safe_psql('postgres', qq{
+ CREATE EXTENSION injection_points;
+});
+
+my $indexes_num = 4;
+my $initial_rows_num = 10_000;
+my $autovacuum_parallel_workers = 2;
+
+# Create table and fill it with some data
+$node->safe_psql('postgres', qq{
+ CREATE TABLE test_autovac (
+ id SERIAL PRIMARY KEY,
+ col_1 INTEGER, col_2 INTEGER, col_3 INTEGER, col_4 INTEGER
+ ) WITH (autovacuum_parallel_workers = $autovacuum_parallel_workers,
+ log_autovacuum_min_duration = 0);
+
+ INSERT INTO test_autovac
+ SELECT
+ g AS col1,
+ g + 1 AS col2,
+ g + 2 AS col3,
+ g + 3 AS col4
+ FROM generate_series(1, $initial_rows_num) AS g;
+});
+
+# Create specified number of b-tree indexes on the table
+$node->safe_psql('postgres', qq{
+ DO \$\$
+ DECLARE
+ i INTEGER;
+ BEGIN
+ FOR i IN 1..$indexes_num LOOP
+ EXECUTE format('CREATE INDEX idx_col_\%s ON test_autovac (col_\%s);', i, i);
+ END LOOP;
+ END \$\$;
+});
+
+# Test 1:
+# The table has enough indexes and the appropriate reloptions, so autovacuum
+# must be able to process it in parallel. Just check that it does.
+
+prepare_for_next_test($node, 1);
+
+$node->safe_psql('postgres', qq{
+ ALTER TABLE test_autovac SET (autovacuum_enabled = true);
+});
+
+# Wait until the parallel autovacuum on the table has completed. At the same
+# time, check that the required number of parallel workers was launched.
+$log_start = $node->wait_for_log(
+ qr/autovacuum worker: finished parallel index processing with 2 parallel workers/,
+ $log_start
+);
+
+# Test 2:
+# Check whether the parallel autovacuum leader can propagate cost-based
+# parameters to the parallel workers.
+
+prepare_for_next_test($node, 2);
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_attach('autovacuum-start-parallel-vacuum', 'wait');
+ SELECT injection_points_attach('autovacuum-leader-before-indexes-processing', 'wait');
+
+ ALTER TABLE test_autovac SET (autovacuum_parallel_workers = 1, autovacuum_enabled = true);
+});
+
+# Wait until parallel autovacuum is initialized
+$node->wait_for_event(
+ 'autovacuum worker',
+ 'autovacuum-start-parallel-vacuum'
+);
+
+# Reload the config - the leader must update its own parameters during index
+# processing
+$node->safe_psql('postgres', qq{
+ ALTER SYSTEM SET vacuum_cost_limit = 500;
+ ALTER SYSTEM SET vacuum_cost_page_miss = 10;
+ ALTER SYSTEM SET vacuum_cost_page_dirty = 10;
+ ALTER SYSTEM SET vacuum_cost_page_hit = 10;
+ SELECT pg_reload_conf();
+});
+
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_wakeup('autovacuum-start-parallel-vacuum');
+});
+
+# Now wait until the parallel autovacuum leader finishes processing the table
+# (guaranteeing a vacuum_delay_point call) and launches a parallel worker.
+$node->wait_for_event(
+ 'autovacuum worker',
+ 'autovacuum-leader-before-indexes-processing'
+);
+
+# Check whether the parallel worker successfully updated all parameters
+# during index processing
+$log_start = $node->wait_for_log(
+ qr/parallel autovacuum worker cost params: cost_limit=500, cost_delay=2, / .
+ qr/cost_page_miss=10, cost_page_dirty=10, cost_page_hit=10/,
+ $log_start
+);
+
+# Cleanup
+$node->safe_psql('postgres', qq{
+ SELECT injection_points_wakeup('autovacuum-leader-before-indexes-processing');
+
+ SELECT injection_points_detach('autovacuum-start-parallel-vacuum');
+ SELECT injection_points_detach('autovacuum-leader-before-indexes-processing');
+
+ ALTER TABLE test_autovac SET (autovacuum_parallel_workers = $autovacuum_parallel_workers);
+});
+
+# We were able to get to this point, so everything is fine.
+ok(1);
+
+$node->stop;
+done_testing();
--
2.43.0
From 5bccf2adb52f57e1ab9ac0616ad2a52a7cf125cd Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Mon, 16 Mar 2026 19:01:05 +0700
Subject: [PATCH v27 2/5] Logging for parallel autovacuum
---
src/backend/access/heap/vacuumlazy.c | 32 +++++++++++++++++++++++++--
src/backend/commands/vacuumparallel.c | 26 +++++++++++++++++-----
src/include/commands/vacuum.h | 28 +++++++++++++++++++++--
src/tools/pgindent/typedefs.list | 2 ++
4 files changed, 78 insertions(+), 10 deletions(-)
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 82c5b28e0ad..cccaee5b620 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -343,6 +343,13 @@ typedef struct LVRelState
int num_index_scans;
int num_dead_items_resets;
Size total_dead_items_bytes;
+
+ /*
+ * Total number of planned and actually launched parallel workers for
+ * index scans.
+ */
+ PVWorkersUsage workers_usage;
+
/* Counters that follow are only for scanned_pages */
int64 tuples_deleted; /* # deleted from table */
int64 tuples_frozen; /* # newly frozen */
@@ -781,6 +788,11 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
vacrel->new_all_visible_all_frozen_pages = 0;
vacrel->new_all_frozen_pages = 0;
+ vacrel->workers_usage.vacuum.nlaunched = 0;
+ vacrel->workers_usage.vacuum.nplanned = 0;
+ vacrel->workers_usage.cleanup.nlaunched = 0;
+ vacrel->workers_usage.cleanup.nplanned = 0;
+
/*
* Get cutoffs that determine which deleted tuples are considered DEAD,
* not just RECENTLY_DEAD, and which XIDs/MXIDs to freeze. Then determine
@@ -1123,6 +1135,20 @@ heap_vacuum_rel(Relation rel, const VacuumParams params,
orig_rel_pages == 0 ? 100.0 :
100.0 * vacrel->lpdead_item_pages / orig_rel_pages,
vacrel->lpdead_items);
+ if (vacrel->workers_usage.vacuum.nplanned > 0)
+ {
+ appendStringInfo(&buf,
+ _("parallel workers: index vacuum: %d planned, %d launched in total\n"),
+ vacrel->workers_usage.vacuum.nplanned,
+ vacrel->workers_usage.vacuum.nlaunched);
+ }
+ if (vacrel->workers_usage.cleanup.nplanned > 0)
+ {
+ appendStringInfo(&buf,
+ _("parallel workers: index cleanup: %d planned, %d launched\n"),
+ vacrel->workers_usage.cleanup.nplanned,
+ vacrel->workers_usage.cleanup.nlaunched);
+ }
for (int i = 0; i < vacrel->nindexes; i++)
{
IndexBulkDeleteResult *istat = vacrel->indstats[i];
@@ -2669,7 +2695,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)
{
/* Outsource everything to parallel variant */
parallel_vacuum_bulkdel_all_indexes(vacrel->pvs, old_live_tuples,
- vacrel->num_index_scans);
+ vacrel->num_index_scans,
+ &vacrel->workers_usage);
/*
* Do a postcheck to consider applying wraparound failsafe now. Note
@@ -3103,7 +3130,8 @@ lazy_cleanup_all_indexes(LVRelState *vacrel)
/* Outsource everything to parallel variant */
parallel_vacuum_cleanup_all_indexes(vacrel->pvs, reltuples,
vacrel->num_index_scans,
- estimated_count);
+ estimated_count,
+ &vacrel->workers_usage);
}
/* Reset the progress counters */
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index cafa0a4d494..5dea4374ec7 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -227,7 +227,7 @@ struct ParallelVacuumState
static int parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
bool *will_parallel_vacuum);
static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
- bool vacuum);
+ bool vacuum, PVWorkersStats *wstats);
static void parallel_vacuum_process_safe_indexes(ParallelVacuumState *pvs);
static void parallel_vacuum_process_unsafe_indexes(ParallelVacuumState *pvs);
static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation indrel,
@@ -502,7 +502,7 @@ parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs)
*/
void
parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
- int num_index_scans)
+ int num_index_scans, PVWorkersUsage *wusage)
{
Assert(!IsParallelWorker());
@@ -513,7 +513,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
pvs->shared->reltuples = num_table_tuples;
pvs->shared->estimated_count = true;
- parallel_vacuum_process_all_indexes(pvs, num_index_scans, true);
+ parallel_vacuum_process_all_indexes(pvs, num_index_scans, true,
+ &wusage->vacuum);
}
/*
@@ -521,7 +522,8 @@ parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs, long num_table_tup
*/
void
parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tuples,
- int num_index_scans, bool estimated_count)
+ int num_index_scans, bool estimated_count,
+ PVWorkersUsage *wusage)
{
Assert(!IsParallelWorker());
@@ -533,7 +535,8 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
pvs->shared->reltuples = num_table_tuples;
pvs->shared->estimated_count = estimated_count;
- parallel_vacuum_process_all_indexes(pvs, num_index_scans, false);
+ parallel_vacuum_process_all_indexes(pvs, num_index_scans, false,
+ &wusage->cleanup);
}
/*
@@ -615,10 +618,13 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
/*
* Perform index vacuum or index cleanup with parallel workers. This function
* must be used by the parallel vacuum leader process.
+ *
+ * If wstats is not NULL, the statistics it stores will be updated according
+ * to what happens during function execution.
*/
static void
parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
- bool vacuum)
+ bool vacuum, PVWorkersStats *wstats)
{
int nworkers;
PVIndVacStatus new_status;
@@ -655,6 +661,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
*/
nworkers = Min(nworkers, pvs->pcxt->nworkers);
+ /* Remember this value, if we were asked to */
+ if (wstats != NULL && nworkers > 0)
+ wstats->nplanned += nworkers;
+
/*
* Set index vacuum status and mark whether parallel vacuum worker can
* process it.
@@ -711,6 +721,10 @@ parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scan
/* Enable shared cost balance for leader backend */
VacuumSharedCostBalance = &(pvs->shared->cost_balance);
VacuumActiveNWorkers = &(pvs->shared->active_nworkers);
+
+ /* Remember this value, if we were asked to */
+ if (wstats != NULL)
+ wstats->nlaunched += pvs->pcxt->nworkers_launched;
}
if (vacuum)
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index e885a4b9c77..1d820915d71 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -300,6 +300,28 @@ typedef struct VacDeadItemsInfo
int64 num_items; /* current # of entries */
} VacDeadItemsInfo;
+/*
+ * Helper for the PVWorkersUsage structure (see below), to avoid repetition.
+ */
+typedef struct PVWorkersStats
+{
+ /* Number of parallel workers we planned to launch */
+ int nplanned;
+
+ /* Number of launched parallel workers */
+ int nlaunched;
+} PVWorkersStats;
+
+/*
+ * PVWorkersUsage stores the total number of planned and launched workers
+ * during parallel vacuum (for both the vacuum and cleanup phases).
+ */
+typedef struct PVWorkersUsage
+{
+ PVWorkersStats vacuum;
+ PVWorkersStats cleanup;
+} PVWorkersUsage;
+
/* GUC parameters */
extern PGDLLIMPORT int default_statistics_target; /* PGDLLIMPORT for PostGIS */
extern PGDLLIMPORT int vacuum_freeze_min_age;
@@ -394,11 +416,13 @@ extern TidStore *parallel_vacuum_get_dead_items(ParallelVacuumState *pvs,
extern void parallel_vacuum_reset_dead_items(ParallelVacuumState *pvs);
extern void parallel_vacuum_bulkdel_all_indexes(ParallelVacuumState *pvs,
long num_table_tuples,
- int num_index_scans);
+ int num_index_scans,
+ PVWorkersUsage *wusage);
extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
long num_table_tuples,
int num_index_scans,
- bool estimated_count);
+ bool estimated_count,
+ PVWorkersUsage *wusage);
extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
/* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 52f8603a7be..a67d54e1819 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2088,6 +2088,8 @@ PVIndStats
PVIndVacStatus
PVOID
PVShared
+PVWorkersStats
+PVWorkersUsage
PX_Alias
PX_Cipher
PX_Combo
--
2.43.0
From 041b867e07a61f5163ec35a1fb5fdd6fbe26b431 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Tue, 17 Mar 2026 02:18:09 +0700
Subject: [PATCH v27 1/5] Parallel autovacuum
---
src/backend/access/common/reloptions.c | 11 ++++++++++
src/backend/commands/vacuumparallel.c | 20 +++++++++++++------
src/backend/postmaster/autovacuum.c | 8 ++++++--
src/backend/utils/init/globals.c | 1 +
src/backend/utils/misc/guc.c | 8 ++++++--
src/backend/utils/misc/guc_parameters.dat | 8 ++++++++
src/backend/utils/misc/postgresql.conf.sample | 1 +
src/bin/psql/tab-complete.in.c | 1 +
src/include/miscadmin.h | 1 +
src/include/utils/rel.h | 8 ++++++++
10 files changed, 57 insertions(+), 10 deletions(-)
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 237ab8d0ed9..9459a010cc3 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -235,6 +235,15 @@ static relopt_int intRelOpts[] =
},
SPGIST_DEFAULT_FILLFACTOR, SPGIST_MIN_FILLFACTOR, 100
},
+ {
+ {
+ "autovacuum_parallel_workers",
+ "Maximum number of parallel autovacuum workers that can be used for processing this table.",
+ RELOPT_KIND_HEAP,
+ ShareUpdateExclusiveLock
+ },
+ -1, -1, 1024
+ },
{
{
"autovacuum_vacuum_threshold",
@@ -1968,6 +1977,8 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
{"fillfactor", RELOPT_TYPE_INT, offsetof(StdRdOptions, fillfactor)},
{"autovacuum_enabled", RELOPT_TYPE_BOOL,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, enabled)},
+ {"autovacuum_parallel_workers", RELOPT_TYPE_INT,
+ offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, autovacuum_parallel_workers)},
{"autovacuum_vacuum_threshold", RELOPT_TYPE_INT,
offsetof(StdRdOptions, autovacuum) + offsetof(AutoVacOpts, vacuum_threshold)},
{"autovacuum_vacuum_max_threshold", RELOPT_TYPE_INT,
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 279108ca89f..cafa0a4d494 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -1,7 +1,9 @@
/*-------------------------------------------------------------------------
*
* vacuumparallel.c
- * Support routines for parallel vacuum execution.
+ * Support routines for parallel vacuum and autovacuum execution. In the
+ * comments below, the word "vacuum" will refer to both vacuum and
+ * autovacuum.
*
* This file contains routines that are intended to support setting up, using,
* and tearing down a ParallelVacuumState.
@@ -374,8 +376,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
shared->queryid = pgstat_get_my_query_id();
shared->maintenance_work_mem_worker =
(nindexes_mwm > 0) ?
- maintenance_work_mem / Min(parallel_workers, nindexes_mwm) :
- maintenance_work_mem;
+ vac_work_mem / Min(parallel_workers, nindexes_mwm) :
+ vac_work_mem;
+
shared->dead_items_info.max_bytes = vac_work_mem * (size_t) 1024;
/* Prepare DSA space for dead items */
@@ -554,12 +557,17 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
int nindexes_parallel_bulkdel = 0;
int nindexes_parallel_cleanup = 0;
int parallel_workers;
+ int max_workers;
+
+ max_workers = AmAutoVacuumWorkerProcess() ?
+ autovacuum_max_parallel_workers :
+ max_parallel_maintenance_workers;
/*
* We don't allow performing parallel operation in standalone backend or
* when parallelism is disabled.
*/
- if (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)
+ if (!IsUnderPostmaster || max_workers == 0)
return 0;
/*
@@ -598,8 +606,8 @@ parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
parallel_workers = (nrequested > 0) ?
Min(nrequested, nindexes_parallel) : nindexes_parallel;
- /* Cap by max_parallel_maintenance_workers */
- parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+ /* Cap by GUC variable */
+ parallel_workers = Min(parallel_workers, max_workers);
return parallel_workers;
}
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 219673db930..f153d0343c8 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2858,8 +2858,12 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
*/
tab->at_params.index_cleanup = VACOPTVALUE_UNSPECIFIED;
tab->at_params.truncate = VACOPTVALUE_UNSPECIFIED;
- /* As of now, we don't support parallel vacuum for autovacuum */
- tab->at_params.nworkers = -1;
+
+ /* Decide whether the table's indexes need to be processed in parallel. */
+ tab->at_params.nworkers = avopts
+ ? avopts->autovacuum_parallel_workers
+ : -1;
+
tab->at_params.freeze_min_age = freeze_min_age;
tab->at_params.freeze_table_age = freeze_table_age;
tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 36ad708b360..8265a82b639 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -143,6 +143,7 @@ int NBuffers = 16384;
int MaxConnections = 100;
int max_worker_processes = 8;
int max_parallel_workers = 8;
+int autovacuum_max_parallel_workers = 2;
int MaxBackends = 0;
/* GUC parameters for vacuum */
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index d77502838c4..534e58a398c 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3326,9 +3326,13 @@ set_config_with_handle(const char *name, config_handle *handle,
*
* Also allow normal setting if the GUC is marked GUC_ALLOW_IN_PARALLEL.
*
- * Other changes might need to affect other workers, so forbid them.
+ * Other changes might need to affect other workers, so forbid them. Note
+ * that the parallel autovacuum leader is an exception: only cost-based
+ * delay parameters need to be propagated to parallel autovacuum workers,
+ * and that is handled elsewhere where appropriate.
*/
- if (IsInParallelMode() && changeVal && action != GUC_ACTION_SAVE &&
+ if (IsInParallelMode() && !AmAutoVacuumWorkerProcess() && changeVal &&
+ action != GUC_ACTION_SAVE &&
(record->flags & GUC_ALLOW_IN_PARALLEL) == 0)
{
ereport(elevel,
diff --git a/src/backend/utils/misc/guc_parameters.dat b/src/backend/utils/misc/guc_parameters.dat
index a5a0edf2534..12393c1214b 100644
--- a/src/backend/utils/misc/guc_parameters.dat
+++ b/src/backend/utils/misc/guc_parameters.dat
@@ -154,6 +154,14 @@
max => '2000000000',
},
+{ name => 'autovacuum_max_parallel_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
+ short_desc => 'Sets the maximum number of parallel workers that a single autovacuum worker can take from the bgworker pool.',
+ variable => 'autovacuum_max_parallel_workers',
+ boot_val => '2',
+ min => '0',
+ max => 'MAX_BACKENDS',
+},
+
{ name => 'autovacuum_max_workers', type => 'int', context => 'PGC_SIGHUP', group => 'VACUUM_AUTOVACUUM',
short_desc => 'Sets the maximum number of simultaneously running autovacuum worker processes.',
variable => 'autovacuum_max_workers',
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index e686d88afc4..5e1c62d616c 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -710,6 +710,7 @@
#autovacuum_worker_slots = 16 # autovacuum worker slots to allocate
# (change requires restart)
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
+#autovacuum_max_parallel_workers = 2 # limited by max_parallel_workers
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
diff --git a/src/bin/psql/tab-complete.in.c b/src/bin/psql/tab-complete.in.c
index 5bdbf1530a2..29171efbc1b 100644
--- a/src/bin/psql/tab-complete.in.c
+++ b/src/bin/psql/tab-complete.in.c
@@ -1432,6 +1432,7 @@ static const char *const table_storage_parameters[] = {
"autovacuum_multixact_freeze_max_age",
"autovacuum_multixact_freeze_min_age",
"autovacuum_multixact_freeze_table_age",
+ "autovacuum_parallel_workers",
"autovacuum_vacuum_cost_delay",
"autovacuum_vacuum_cost_limit",
"autovacuum_vacuum_insert_scale_factor",
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index f16f35659b9..00190c67ecf 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -178,6 +178,7 @@ extern PGDLLIMPORT int MaxBackends;
extern PGDLLIMPORT int MaxConnections;
extern PGDLLIMPORT int max_worker_processes;
extern PGDLLIMPORT int max_parallel_workers;
+extern PGDLLIMPORT int autovacuum_max_parallel_workers;
extern PGDLLIMPORT int commit_timestamp_buffers;
extern PGDLLIMPORT int multixact_member_buffers;
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 236830f6b93..11dd3aebc6c 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -311,6 +311,14 @@ typedef struct ForeignKeyCacheInfo
typedef struct AutoVacOpts
{
bool enabled;
+
+ /*
+ * Target number of parallel autovacuum workers. The default of -1
+ * disables parallel vacuum during autovacuum; 0 means choose the
+ * parallel degree based on the number of indexes.
+ */
+ int autovacuum_parallel_workers;
+
int vacuum_threshold;
int vacuum_max_threshold;
int vacuum_ins_threshold;
--
2.43.0
From 9f21c0e081d8a36fd19acee75cfa47dbf74e19e2 Mon Sep 17 00:00:00 2001
From: Daniil Davidov <[email protected]>
Date: Thu, 15 Jan 2026 23:15:48 +0700
Subject: [PATCH v27 3/5] Cost based parameters propagation for parallel
autovacuum
---
src/backend/commands/vacuum.c | 21 +++-
src/backend/commands/vacuumparallel.c | 163 ++++++++++++++++++++++++++
src/backend/postmaster/autovacuum.c | 2 +-
src/include/commands/vacuum.h | 2 +
src/tools/pgindent/typedefs.list | 1 +
5 files changed, 186 insertions(+), 3 deletions(-)
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index bce3a2daa24..1b5ba3ce1ef 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -2435,8 +2435,19 @@ vacuum_delay_point(bool is_analyze)
/* Always check for interrupts */
CHECK_FOR_INTERRUPTS();
- if (InterruptPending ||
- (!VacuumCostActive && !ConfigReloadPending))
+ if (InterruptPending)
+ return;
+
+ if (IsParallelWorker())
+ {
+ /*
+ * Update cost-based vacuum delay parameters for a parallel autovacuum
+ * worker if any changes are detected.
+ */
+ parallel_vacuum_update_shared_delay_params();
+ }
+
+ if (!VacuumCostActive && !ConfigReloadPending)
return;
/*
@@ -2450,6 +2461,12 @@ vacuum_delay_point(bool is_analyze)
ConfigReloadPending = false;
ProcessConfigFile(PGC_SIGHUP);
VacuumUpdateCosts();
+
+ /*
+ * Propagate cost-based vacuum delay parameters to shared memory if
+ * any of them have changed during the config reload.
+ */
+ parallel_vacuum_propagate_shared_delay_params();
}
/*
diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c
index 5dea4374ec7..f4fceb96874 100644
--- a/src/backend/commands/vacuumparallel.c
+++ b/src/backend/commands/vacuumparallel.c
@@ -18,6 +18,13 @@
* the parallel context is re-initialized so that the same DSM can be used for
* multiple passes of index bulk-deletion and index cleanup.
*
+ * For parallel autovacuum, we need to propagate cost-based vacuum delay
+ * parameters from the leader to its workers, as the leader's parameters can
+ * change even while processing a table (e.g., due to a config reload).
+ * The PVSharedCostParams struct manages these parameters using a
+ * generation counter. Each parallel worker polls this shared state and
+ * refreshes its local delay parameters whenever a change is detected.
+ *
* Portions Copyright (c) 1996-2026, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
@@ -53,6 +60,31 @@
#define PARALLEL_VACUUM_KEY_WAL_USAGE 4
#define PARALLEL_VACUUM_KEY_INDEX_STATS 5
+/*
+ * Struct for cost-based vacuum delay related parameters to share among an
+ * autovacuum worker and its parallel vacuum workers.
+ */
+typedef struct PVSharedCostParams
+{
+ /*
+ * The generation counter is incremented by the leader process each time
+ * it updates the shared cost-based vacuum delay parameters. Parallel
+ * vacuum workers compare it with their local generation,
+ * shared_params_generation_local, to detect whether they need to refresh
+ * their local parameters.
+ */
+ pg_atomic_uint32 generation;
+
+ slock_t mutex; /* protects all fields below */
+
+ /* Parameters to share with parallel workers */
+ double cost_delay;
+ int cost_limit;
+ int cost_page_dirty;
+ int cost_page_hit;
+ int cost_page_miss;
+} PVSharedCostParams;
+
/*
* Shared information among parallel workers. So this is allocated in the DSM
* segment.
@@ -122,6 +154,18 @@ typedef struct PVShared
/* Statistics of shared dead items */
VacDeadItemsInfo dead_items_info;
+
+ /*
+ * If 'true' then we are running parallel autovacuum. Otherwise, we are
+ * running a parallel maintenance VACUUM.
+ */
+ bool is_autovacuum;
+
+ /*
+ * Struct for syncing cost-based vacuum delay parameters between the
+ * leader and its parallel autovacuum workers.
+ */
+ PVSharedCostParams cost_params;
} PVShared;
/* Status used during parallel index vacuum or cleanup */
@@ -224,6 +268,11 @@ struct ParallelVacuumState
PVIndVacStatus status;
};
+static PVSharedCostParams *pv_shared_cost_params = NULL;
+
+/* See comments in the PVSharedCostParams for the details */
+static uint32 shared_params_generation_local = 0;
+
static int parallel_vacuum_compute_workers(Relation *indrels, int nindexes, int nrequested,
bool *will_parallel_vacuum);
static void parallel_vacuum_process_all_indexes(ParallelVacuumState *pvs, int num_index_scans,
@@ -235,6 +284,7 @@ static void parallel_vacuum_process_one_index(ParallelVacuumState *pvs, Relation
static bool parallel_vacuum_index_is_parallel_safe(Relation indrel, int num_index_scans,
bool vacuum);
static void parallel_vacuum_error_callback(void *arg);
+static inline void parallel_vacuum_set_cost_parameters(PVSharedCostParams *params);
/*
* Try to enter parallel mode and create a parallel context. Then initialize
@@ -395,6 +445,21 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,
pg_atomic_init_u32(&(shared->active_nworkers), 0);
pg_atomic_init_u32(&(shared->idx), 0);
+ shared->is_autovacuum = AmAutoVacuumWorkerProcess();
+
+ /*
+ * Initialize shared cost-based vacuum delay parameters if it's for
+ * autovacuum.
+ */
+ if (shared->is_autovacuum)
+ {
+ parallel_vacuum_set_cost_parameters(&shared->cost_params);
+ pg_atomic_init_u32(&shared->cost_params.generation, 0);
+ SpinLockInit(&shared->cost_params.mutex);
+
+ pv_shared_cost_params = &(shared->cost_params);
+ }
+
shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
pvs->shared = shared;
@@ -460,6 +525,9 @@ parallel_vacuum_end(ParallelVacuumState *pvs, IndexBulkDeleteResult **istats)
DestroyParallelContext(pvs->pcxt);
ExitParallelMode();
+ if (AmAutoVacuumWorkerProcess())
+ pv_shared_cost_params = NULL;
+
pfree(pvs->will_parallel_vacuum);
pfree(pvs);
}
@@ -539,6 +607,95 @@ parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs, long num_table_tup
&wusage->cleanup);
}
+/*
+ * Fill in the given structure with cost-based vacuum delay parameter values.
+ */
+static inline void
+parallel_vacuum_set_cost_parameters(PVSharedCostParams *params)
+{
+ params->cost_delay = vacuum_cost_delay;
+ params->cost_limit = vacuum_cost_limit;
+ params->cost_page_dirty = VacuumCostPageDirty;
+ params->cost_page_hit = VacuumCostPageHit;
+ params->cost_page_miss = VacuumCostPageMiss;
+}
+
+/*
+ * Updates the cost-based vacuum delay parameters for parallel autovacuum
+ * workers.
+ *
+ * For non-autovacuum parallel workers, this function has no effect.
+ */
+void
+parallel_vacuum_update_shared_delay_params(void)
+{
+ uint32 params_generation;
+
+ Assert(IsParallelWorker());
+
+ /* Quick return if the worker is not running on behalf of autovacuum */
+ if (pv_shared_cost_params == NULL)
+ return;
+
+ params_generation = pg_atomic_read_u32(&pv_shared_cost_params->generation);
+ Assert(shared_params_generation_local <= params_generation);
+
+ /* Return if the parameters have not changed in the leader */
+ if (params_generation == shared_params_generation_local)
+ return;
+
+ SpinLockAcquire(&pv_shared_cost_params->mutex);
+ VacuumCostDelay = pv_shared_cost_params->cost_delay;
+ VacuumCostLimit = pv_shared_cost_params->cost_limit;
+ VacuumCostPageDirty = pv_shared_cost_params->cost_page_dirty;
+ VacuumCostPageHit = pv_shared_cost_params->cost_page_hit;
+ VacuumCostPageMiss = pv_shared_cost_params->cost_page_miss;
+ SpinLockRelease(&pv_shared_cost_params->mutex);
+
+ VacuumUpdateCosts();
+
+ shared_params_generation_local = params_generation;
+}
+
+/*
+ * Store the cost-based vacuum delay parameters in the shared memory so that
+ * parallel vacuum workers can consume them (see
+ * parallel_vacuum_update_shared_delay_params()).
+ */
+void
+parallel_vacuum_propagate_shared_delay_params(void)
+{
+ Assert(AmAutoVacuumWorkerProcess());
+
+ /*
+ * Quick return if the leader process is not sharing the delay parameters.
+ */
+ if (pv_shared_cost_params == NULL)
+ return;
+
+ /*
+ * Check if any delay parameters have changed. We can read them without
+ * locks as only the leader can modify them.
+ */
+ if (vacuum_cost_delay == pv_shared_cost_params->cost_delay &&
+ vacuum_cost_limit == pv_shared_cost_params->cost_limit &&
+ VacuumCostPageDirty == pv_shared_cost_params->cost_page_dirty &&
+ VacuumCostPageHit == pv_shared_cost_params->cost_page_hit &&
+ VacuumCostPageMiss == pv_shared_cost_params->cost_page_miss)
+ return;
+
+ /* Update the shared delay parameters */
+ SpinLockAcquire(&pv_shared_cost_params->mutex);
+ parallel_vacuum_set_cost_parameters(pv_shared_cost_params);
+ SpinLockRelease(&pv_shared_cost_params->mutex);
+
+ /*
+ * Increment the parameters' generation, i.e., let parallel workers know
+ * that they should re-read the shared cost parameters.
+ */
+ pg_atomic_fetch_add_u32(&pv_shared_cost_params->generation, 1);
+}
+
/*
* Compute the number of parallel worker processes to request. Both index
* vacuum and index cleanup can be executed with parallel workers.
@@ -1081,6 +1238,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
VacuumSharedCostBalance = &(shared->cost_balance);
VacuumActiveNWorkers = &(shared->active_nworkers);
+ if (shared->is_autovacuum)
+ pv_shared_cost_params = &(shared->cost_params);
+
/* Set parallel vacuum state */
pvs.indrels = indrels;
pvs.nindexes = nindexes;
@@ -1130,6 +1290,9 @@ parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
vac_close_indexes(nindexes, indrels, RowExclusiveLock);
table_close(rel, ShareUpdateExclusiveLock);
FreeAccessStrategy(pvs.bstrategy);
+
+ if (shared->is_autovacuum)
+ pv_shared_cost_params = NULL;
}
/*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index f153d0343c8..f35acf3d75a 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -1659,7 +1659,7 @@ VacuumUpdateCosts(void)
}
else
{
- /* Must be explicit VACUUM or ANALYZE */
+ /* Must be explicit VACUUM or ANALYZE, or a parallel autovacuum worker */
vacuum_cost_delay = VacuumCostDelay;
vacuum_cost_limit = VacuumCostLimit;
}
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 1d820915d71..cf0c3c9dbf7 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -423,6 +423,8 @@ extern void parallel_vacuum_cleanup_all_indexes(ParallelVacuumState *pvs,
int num_index_scans,
bool estimated_count,
PVWorkersUsage *wusage);
+extern void parallel_vacuum_update_shared_delay_params(void);
+extern void parallel_vacuum_propagate_shared_delay_params(void);
extern void parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
/* in commands/analyze.c */
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index a67d54e1819..15b8c966bf8 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -2088,6 +2088,7 @@ PVIndStats
PVIndVacStatus
PVOID
PVShared
+PVSharedCostParams
PVWorkersUsage
PVWorkersStats
PX_Alias
--
2.43.0