On Tue, Mar 26, 2019 at 10:19 AM Haribabu Kommi
<kommi.harib...@gmail.com> wrote:
>
>
> On Fri, Mar 22, 2019 at 4:06 PM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>>
>>
>> Attached the updated version patch. The 0001 patch allows a boolean
>> argument for all existing vacuum options. The 0002 patch introduces
>> parallel lazy vacuum. The 0003 patch adds a -P (--parallel) option to
>> the vacuumdb command.
>
>
> Thanks for sharing the updated patches.

Thank you for reviewing the patch.

>
> 0001 patch:
>
> +    PARALLEL [ <replaceable class="parameter">N</replaceable> ]
>
> But this patch contains the PARALLEL syntax with no explanation; I saw that
> it is explained in 0002. It is not a problem, just mentioning it.

Oops, removed it from 0001.

>
> +      Specifies parallel degree for <literal>PARALLEL</literal> option. The
> +      value must be at least 1. If the parallel degree
> +      <replaceable class="parameter">integer</replaceable> is omitted, then
> +      <command>VACUUM</command> decides the number of workers based on 
> number of
> +      indexes on the relation which further limited by
> +      <xref linkend="guc-max-parallel-workers-maintenance"/>.
>
> Can we add some more details about backend participation also? Parallel
> workers will come into the picture only when there are at least 2 indexes
> on the table.

Agreed. I've added the description, and since such behavior might change
in a future release I've noted that in the documentation as well.
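
For reference, the logic that decides the worker count in the 0002 patch
boils down to the following (condensed from compute_parallel_workers() in
the attached patch):

static int
compute_parallel_workers(Relation onerel, int nrequested, int nindexes)
{
	int		parallel_workers;

	/* With fewer than two indexes there is nothing to parallelize */
	if (nindexes <= 1)
		return 0;

	if (nrequested > 0)
		parallel_workers = Min(nrequested, nindexes - 1);
	else
		parallel_workers = nindexes - 1;	/* degree based on the index count */

	/* Cap by max_parallel_maintenance_workers */
	return Min(parallel_workers, max_parallel_maintenance_workers);
}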

>
> + /*
> + * Do post-vacuum cleanup and statistics update for each index if
> + * we're not in parallel lazy vacuum. If in parallel lazy vacuum, do
> + * only post-vacum cleanup and then update statistics after exited
> + * from parallel mode.
> + */
> + lazy_vacuum_all_indexes(vacrelstats, Irel, nindexes, indstats,
> + lps, true);
>
> How about renaming the above function, as it does the cleanup also?
> lazy_vacuum_or_cleanup_all_indexes?

Agreed. I think lazy_vacuum_or_cleanup_indexes would be better. The same
applies to lazy_vacuum_indexes_for_workers(). Fixed.

>
>
> + if (!IsInParallelVacuum(lps))
> + {
> + /*
> + * Update index statistics. If in parallel lazy vacuum, we will
> + * update them after exited from parallel mode.
> + */
> + lazy_update_index_statistics(Irel[idx], stats[idx]);
> +
> + if (stats[idx])
> + pfree(stats[idx]);
> + }
>
> The above check in lazy_vacuum_all_indexes can be combined with the outer
> if check where the memcpy is happening. I still feel that the logic around
> the stats makes it a little bit complex.

Hmm, the memcpy is needed in both the index vacuuming and the index cleanup
cases, but updating the index statistics is needed only in the index cleanup
case. I've split the parallel index vacuuming code out of
lazy_vacuum_or_cleanup_indexes. Also, I've changed the patch so that both the
leader process and the worker processes use the same code for index vacuuming
and index cleanup. I hope the code is now simpler and easier to understand.
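
To illustrate, the leader and the workers now run the same loop to claim
indexes, condensed from do_parallel_vacuum_or_cleanup_indexes() in the
attached patch:

for (;;)
{
	/* Atomically claim the next unprocessed index */
	int		idx = pg_atomic_fetch_add_u32(&(lvshared->nprocessed), 1);

	if (idx >= nindexes)
		break;

	if (!lvshared->for_cleanup)
		lazy_vacuum_index(Irel[idx], &stats[idx], lvshared->reltuples,
						  dead_tuples);
	else
		lazy_cleanup_index(Irel[idx], &stats[idx], lvshared->reltuples,
						   lvshared->estimated_count);

	/* The bulk-deletion result is then copied into the index's DSM slot */
}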

>
> + if (IsParallelWorker())
> + msg = "scanned index \"%s\" to remove %d row versions by parallel vacuum 
> worker";
> + else
> + msg = "scanned index \"%s\" to remove %d row versions";
>
> I feel this way of building the error message may not be picked up for
> translation. Is there any problem if we duplicate the entire ereport() with
> the changed message?

No, but I'd like to avoid writing the same arguments in multiple
ereport()s. I've added gettext_noop() per the comment from Horiguchi-san.
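
This way the ereport() and its arguments are written only once and just the
format string varies, as in lazy_vacuum_index() of the attached patch:

	if (IsParallelWorker())
		msgfmt = gettext_noop("scanned index \"%s\" to remove %d row versions by parallel vacuum worker");
	else
		msgfmt = gettext_noop("scanned index \"%s\" to remove %d row versions");

	ereport(elevel,
			(errmsg(msgfmt,
					RelationGetRelationName(indrel),
					dead_tuples->num_tuples),
			 errdetail_internal("%s", pg_rusage_show(&ru0))));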

>
> + for (i = 0; i < nindexes; i++)
> + {
> + LVIndStats *s = &(copied_indstats[i]);
> +
> + if (s->updated)
> + lazy_update_index_statistics(Irel[i], &(s->stats));
> + }
> +
> + pfree(copied_indstats);
>
> Why can't we use the shared memory directly to update the stats once all
> the workers are finished, instead of copying them to local memory?

Since vac_update_relstats() calls heap_inplace_update(), which we cannot use
during parallel mode, I copied the stats. Would it be safe if we destroyed
the parallel context *after* exiting parallel mode?
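
For now the leader does roughly the following, condensed from
lazy_end_parallel() in the attached patch:

	/* Copy the index statistics out of the DSM segment first ... */
	copied_indstats = palloc(sizeof(LVIndStats) * nindexes);
	memcpy(copied_indstats, lps->lvshared->indstats,
		   sizeof(LVIndStats) * nindexes);

	/* ... because the DSM segment goes away with the parallel context ... */
	WaitForParallelWorkersToFinish(lps->pcxt);
	DestroyParallelContext(lps->pcxt);
	ExitParallelMode();

	/* ... and only now is vac_update_relstats() (heap_inplace_update) allowed */
	for (i = 0; i < nindexes; i++)
		if (copied_indstats[i].updated)
			lazy_update_index_statistics(Irel[i], &(copied_indstats[i].stats));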

>
> + tab->at_params.nworkers = 0; /* parallel lazy autovacuum is not supported */
>
> The user is not required to provide the number of workers for parallel
> vacuum to work, so just setting the above parameter doesn't stop the
> parallel workers; the user must also pass the PARALLEL option. So mentioning
> that will be helpful later when we start supporting it, or for someone who
> is reading the code.
>

We should set at_params.nworkers to -1 here to disable parallel lazy
vacuum. Also, this review comment got me thinking that VACOPT_PARALLEL can
be folded into the nworkers field of VacuumParams, so I've removed
VACOPT_PARALLEL; passing nworkers >= 0 now enables parallel lazy vacuum.
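
After this change the whole PARALLEL state is carried in
VacuumParams.nworkers; condensed from ExecVacuum() in the attached patch:

	/*
	 * params.nworkers after parsing:
	 *   -1 : parallel lazy vacuum disabled (the default)
	 *    0 : PARALLEL specified without a degree; the number of workers is
	 *        computed from the number of indexes at the start of lazy vacuum
	 *   >0 : user-requested parallel degree
	 */
	params.nworkers = -1;		/* default: parallel lazy vacuum disabled */

	/* in the options loop */
	else if (strcmp(opt->defname, "parallel") == 0)
	{
		if (opt->arg == NULL)
			params.nworkers = 0;	/* degree decided at the start of lazy vacuum */
		else
			params.nworkers = defGetInt32(opt);	/* user-requested degree */
	}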






Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
From f243cf3b1b22136cad8fe981cf3b51f3cc44dfc5 Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Tue, 26 Mar 2019 22:13:53 +0900
Subject: [PATCH v20 1/3] All VACUUM command options allow an argument.

All existing VACUUM command options now allow a boolean argument, like the
EXPLAIN command options.
---
 doc/src/sgml/ref/vacuum.sgml  | 26 ++++++++++++++++++++------
 src/backend/commands/vacuum.c | 13 +++++++------
 src/backend/parser/gram.y     | 10 ++++++++--
 src/bin/psql/tab-complete.c   |  2 ++
 4 files changed, 37 insertions(+), 14 deletions(-)

diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml
index fd911f5..906d0c2 100644
--- a/doc/src/sgml/ref/vacuum.sgml
+++ b/doc/src/sgml/ref/vacuum.sgml
@@ -26,12 +26,12 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
 
 <phrase>where <replaceable class="parameter">option</replaceable> can be one of:</phrase>
 
-    FULL
-    FREEZE
-    VERBOSE
-    ANALYZE
-    DISABLE_PAGE_SKIPPING
-    SKIP_LOCKED
+    FULL [ <replaceable class="parameter">boolean</replaceable> ]
+    FREEZE [ <replaceable class="parameter">boolean</replaceable> ]
+    VERBOSE [ <replaceable class="parameter">boolean</replaceable> ]
+    ANALYZE [ <replaceable class="parameter">boolean</replaceable> ]
+    DISABLE_PAGE_SKIPPING [ <replaceable class="parameter">boolean</replaceable> ]
+    SKIP_LOCKED [ <replaceable class="parameter">boolean</replaceable> ]
 
 <phrase>and <replaceable class="parameter">table_and_columns</replaceable> is:</phrase>
 
@@ -182,6 +182,20 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">boolean</replaceable></term>
+    <listitem>
+     <para>
+      Specifies whether the selected option should be turned on or off.
+      You can write <literal>TRUE</literal>, <literal>ON</literal>, or
+      <literal>1</literal> to enable the option, and <literal>FALSE</literal>,
+      <literal>OFF</literal>, or <literal>0</literal> to disable it.  The
+      <replaceable class="parameter">boolean</replaceable> value can also
+      be omitted, in which case <literal>TRUE</literal> is assumed.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">table_name</replaceable></term>
     <listitem>
      <para>
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index f0afeaf..72f140e 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -36,6 +36,7 @@
 #include "catalog/pg_inherits.h"
 #include "catalog/pg_namespace.h"
 #include "commands/cluster.h"
+#include "commands/defrem.h"
 #include "commands/vacuum.h"
 #include "miscadmin.h"
 #include "nodes/makefuncs.h"
@@ -97,9 +98,9 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 
 		/* Parse common options for VACUUM and ANALYZE */
 		if (strcmp(opt->defname, "verbose") == 0)
-			params.options |= VACOPT_VERBOSE;
+			params.options |= defGetBoolean(opt) ? VACOPT_VERBOSE : 0;
 		else if (strcmp(opt->defname, "skip_locked") == 0)
-			params.options |= VACOPT_SKIP_LOCKED;
+			params.options |= defGetBoolean(opt) ? VACOPT_SKIP_LOCKED : 0;
 		else if (!vacstmt->is_vacuumcmd)
 			ereport(ERROR,
 					(errcode(ERRCODE_SYNTAX_ERROR),
@@ -108,13 +109,13 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 
 		/* Parse options available on VACUUM */
 		else if (strcmp(opt->defname, "analyze") == 0)
-				params.options |= VACOPT_ANALYZE;
+			params.options |= defGetBoolean(opt) ? VACOPT_ANALYZE : 0;
 		else if (strcmp(opt->defname, "freeze") == 0)
-				params.options |= VACOPT_FREEZE;
+			params.options |= defGetBoolean(opt) ? VACOPT_FREEZE : 0;
 		else if (strcmp(opt->defname, "full") == 0)
-			params.options |= VACOPT_FULL;
+			params.options |= defGetBoolean(opt) ? VACOPT_FULL : 0;
 		else if (strcmp(opt->defname, "disable_page_skipping") == 0)
-			params.options |= VACOPT_DISABLE_PAGE_SKIPPING;
+			params.options |= defGetBoolean(opt) ? VACOPT_DISABLE_PAGE_SKIPPING : 0;
 		else
 			ereport(ERROR,
 					(errcode(ERRCODE_SYNTAX_ERROR),
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 0a48228..5af91aa 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -309,6 +309,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <str>		vac_analyze_option_name
 %type <defelt>	vac_analyze_option_elem
 %type <list>	vac_analyze_option_list
+%type <node>	vac_analyze_option_arg
 %type <boolean>	opt_or_replace
 				opt_grant_grant_option opt_grant_admin_option
 				opt_nowait opt_if_exists opt_with_data
@@ -10539,9 +10540,9 @@ analyze_keyword:
 		;
 
 vac_analyze_option_elem:
-			vac_analyze_option_name
+			vac_analyze_option_name vac_analyze_option_arg
 				{
-					$$ = makeDefElem($1, NULL, @1);
+					$$ = makeDefElem($1, $2, @1);
 				}
 		;
 
@@ -10550,6 +10551,11 @@ vac_analyze_option_name:
 			| analyze_keyword						{ $$ = "analyze"; }
 		;
 
+vac_analyze_option_arg:
+			opt_boolean_or_string					{ $$ = (Node *) makeString($1); }
+			| /* EMPTY */		 					{ $$ = NULL; }
+		;
+
 opt_analyze:
 			analyze_keyword							{ $$ = true; }
 			| /*EMPTY*/								{ $$ = false; }
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 3ba3498..36e20fb 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3432,6 +3432,8 @@ psql_completion(const char *text, int start, int end)
 		if (ends_with(prev_wd, '(') || ends_with(prev_wd, ','))
 			COMPLETE_WITH("FULL", "FREEZE", "ANALYZE", "VERBOSE",
 						  "DISABLE_PAGE_SKIPPING", "SKIP_LOCKED");
+		else if (TailMatches("FULL|FREEZE|ANALYZE|VERBOSE|DISABLE_PAGE_SKIPPING|SKIP_LOCKED"))
+			COMPLETE_WITH("ON", "OFF");
 	}
 	else if (HeadMatches("VACUUM") && TailMatches("("))
 		/* "VACUUM (" should be caught above, so assume we want columns */
-- 
1.8.3.1

From ab8f1a80d95f13a216e4faf054444603a3b8f640 Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Fri, 22 Mar 2019 10:31:31 +0900
Subject: [PATCH v20 2/3] Add parallel option to VACUUM command

In parallel vacuum, we perform both index vacuuming and index cleanup
with parallel workers. Each individual index is processed by one vacuum
process, so parallel vacuum can be used when the table has more than
one index.

Parallel vacuum is requested by specifying, for example,
VACUUM (PARALLEL 2) tbl, which performs the vacuum with 2 parallel
worker processes. Specifying only PARALLEL means that the degree of
parallelism is determined based on the number of indexes the table
has.

The parallel vacuum degree is limited by both the number of
indexes the table has and max_parallel_maintenance_workers.
---
 doc/src/sgml/config.sgml              |  14 +-
 doc/src/sgml/ref/vacuum.sgml          |  31 ++
 src/backend/access/heap/vacuumlazy.c  | 872 +++++++++++++++++++++++++++++-----
 src/backend/access/transam/parallel.c |   4 +
 src/backend/commands/vacuum.c         |  27 ++
 src/backend/parser/gram.y             |   1 +
 src/backend/postmaster/autovacuum.c   |   1 +
 src/bin/psql/tab-complete.c           |   3 +-
 src/include/access/heapam.h           |   2 +
 src/include/commands/vacuum.h         |   5 +
 src/test/regress/expected/vacuum.out  |  10 +-
 src/test/regress/sql/vacuum.sql       |   3 +
 12 files changed, 857 insertions(+), 116 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d383de2..3ca3ae8 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -2226,13 +2226,13 @@ include_dir 'conf.d'
        <listitem>
         <para>
          Sets the maximum number of parallel workers that can be
-         started by a single utility command.  Currently, the only
-         parallel utility command that supports the use of parallel
-         workers is <command>CREATE INDEX</command>, and only when
-         building a B-tree index.  Parallel workers are taken from the
-         pool of processes established by <xref
-         linkend="guc-max-worker-processes"/>, limited by <xref
-         linkend="guc-max-parallel-workers"/>.  Note that the requested
+         started by a single utility command.  Currently, the parallel
+         utility commands that support the use of parallel workers are
+         <command>CREATE INDEX</command> only when building a B-tree index,
+         and <command>VACUUM</command> without the <literal>FULL</literal>
+         option.  Parallel workers are taken from the pool of processes
+         established by <xref linkend="guc-max-worker-processes"/>, limited
+         by <xref linkend="guc-max-parallel-workers"/>.  Note that the requested
          number of workers may not actually be available at run time.
          If this occurs, the utility operation will run with fewer
          workers than expected.  The default value is 2.  Setting this
diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml
index 906d0c2..d3fe0f6 100644
--- a/doc/src/sgml/ref/vacuum.sgml
+++ b/doc/src/sgml/ref/vacuum.sgml
@@ -32,6 +32,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
     ANALYZE [ <replaceable class="parameter">boolean</replaceable> ]
     DISABLE_PAGE_SKIPPING [ <replaceable class="parameter">boolean</replaceable> ]
     SKIP_LOCKED [ <replaceable class="parameter">boolean</replaceable> ]
+    PARALLEL [ <replaceable class="parameter">integer</replaceable> ]
 
 <phrase>and <replaceable class="parameter">table_and_columns</replaceable> is:</phrase>
 
@@ -143,6 +144,22 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
    </varlistentry>
 
    <varlistentry>
+    <term><literal>PARALLEL</literal></term>
+    <listitem>
+     <para>
+      Perform the index vacuum and index cleanup phases of <command>VACUUM</command>
+      in parallel using <replaceable class="parameter">integer</replaceable> background
+      workers (for details of each vacuum phase, please refer to
+      <xref linkend="vacuum-phases"/>).  Only one worker can be used per index,
+      so parallel workers are launched only when there are at least <literal>2</literal>
+      indexes in the table.  Workers are launched before the start of each phase
+      and exit at the end of the phase.  These behaviors might change in a future
+      release.  This option cannot be used with the <literal>FULL</literal> option.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><literal>DISABLE_PAGE_SKIPPING</literal></term>
     <listitem>
      <para>
@@ -196,6 +213,20 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class="paramet
    </varlistentry>
 
    <varlistentry>
+    <term><replaceable class="parameter">integer</replaceable></term>
+    <listitem>
+     <para>
+      Specifies the parallel degree for the <literal>PARALLEL</literal> option.
+      The value must be at least 1.  If the parallel degree
+      <replaceable class="parameter">integer</replaceable> is omitted, then
+      <command>VACUUM</command> decides the number of workers based on the
+      number of indexes on the relation, which is further limited by
+      <xref linkend="guc-max-parallel-workers-maintenance"/>.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><replaceable class="parameter">table_name</replaceable></term>
     <listitem>
      <para>
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 5c554f9..a864d18 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -22,6 +22,19 @@
  * of index scans performed.  So we don't use maintenance_work_mem memory for
  * the TID array, just enough to hold as many heap tuples as fit on one page.
  *
+ * Lazy vacuum supports parallel execution with parallel worker processes.  In
+ * parallel lazy vacuum, we perform both index vacuuming and index cleanup in
+ * parallel.  Each individual index is processed by one vacuum process.  At the
+ * beginning of lazy vacuum (in lazy_scan_heap) we prepare the parallel context
+ * and initialize the DSM segment that contains shared information as well as
+ * the memory space for dead tuples.  When starting either index vacuuming or
+ * index cleanup, we launch parallel worker processes.  Once all indexes are
+ * processed, the parallel worker processes exit and the leader process
+ * re-initializes the DSM segment.  Note that the parallel workers live only
+ * during one round of index vacuuming or index cleanup, but the leader process
+ * neither exits parallel mode nor destroys the parallel context.  Since no
+ * updates are allowed during parallel mode, we update the index statistics
+ * only after exiting parallel mode.
  *
  * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
@@ -41,8 +54,10 @@
 #include "access/heapam_xlog.h"
 #include "access/htup_details.h"
 #include "access/multixact.h"
+#include "access/parallel.h"
 #include "access/transam.h"
 #include "access/visibilitymap.h"
+#include "access/xact.h"
 #include "access/xlog.h"
 #include "catalog/storage.h"
 #include "commands/dbcommands.h"
@@ -55,6 +70,7 @@
 #include "storage/bufmgr.h"
 #include "storage/freespace.h"
 #include "storage/lmgr.h"
+#include "tcop/tcopprot.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/pg_rusage.h"
@@ -110,6 +126,92 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * DSM keys for parallel lazy vacuum. Since we don't need to worry about DSM
+ * keys conflicting with plan_node_id we can use small integers.
+ */
+#define PARALLEL_VACUUM_KEY_SHARED			1
+#define PARALLEL_VACUUM_KEY_DEAD_TUPLES		2
+#define PARALLEL_VACUUM_KEY_QUERY_TEXT		3
+
+/*
+ * Are we in a parallel lazy vacuum? If so, we're in parallel mode and
+ * have prepared the DSM segments.
+ */
+#define IsInParallelVacuum(lps) (((LVParallelState *) (lps)) != NULL)
+
+/*
+ * Struct for an index bulk-deletion statistic that is used for parallel
+ * lazy vacuum.  This is allocated in a DSM segment.
+ */
+typedef struct LVIndStats
+{
+	bool updated;	/* are the stats updated? */
+	IndexBulkDeleteResult stats;
+} LVIndStats;
+
+/*
+ * LVDeadTuples stores the dead tuple TIDs collected during heap scan.
+ * This is allocated in a DSM segment in parallel lazy vacuum mode,
+ * or in local memory otherwise.
+ */
+typedef struct LVDeadTuples
+{
+	int			max_tuples;	/* # slots allocated in array */
+	int			num_tuples;	/* current # of entries */
+	/* List of TIDs of tuples we intend to delete */
+	/* NB: this list is ordered by TID address */
+	ItemPointerData itemptrs[FLEXIBLE_ARRAY_MEMBER];	/* array of ItemPointerData */
+} LVDeadTuples;
+#define SizeOfLVDeadTuples offsetof(LVDeadTuples, itemptrs) + sizeof(ItemPointerData)
+
+/*
+ * Shared information among parallel workers.  It is therefore allocated
+ * in a DSM segment.
+ */
+typedef struct LVShared
+{
+	/*
+	 * Target table relid and vacuum settings. These fields are not modified
+	 * during the lazy vacuum.
+	 */
+	Oid		relid;
+	int		elevel;
+
+	/*
+	 * Tells vacuum workers whether they should do index vacuuming or
+	 * index cleanup.
+	 */
+	bool	for_cleanup;
+
+	/*
+	 * Fields for both index vacuuming and index cleanup.
+	 *
+	 * reltuples is the total number of input heap tuples.  We set it to the old
+	 * live tuples for index vacuuming or to the new live tuples for index cleanup.
+	 *
+	 * estimated_count is true if reltuples is an estimated value.
+	 */
+	double	reltuples;
+	bool	estimated_count;
+
+	/*
+	 * Variables to control parallel index vacuuming.  The variable-sized
+	 * field 'indstats' must come last.
+	 */
+	pg_atomic_uint32	nprocessed;
+	LVIndStats			indstats[FLEXIBLE_ARRAY_MEMBER];
+} LVShared;
+#define SizeOfLVShared offsetof(LVShared, indstats) + sizeof(LVIndStats)
+
+/* Struct for parallel lazy vacuum */
+typedef struct LVParallelState
+{
+	ParallelContext	*pcxt;
+	LVShared		*lvshared;
+	int				nworkers_requested;	/* user-requested parallel degree */
+} LVParallelState;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -128,17 +230,12 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
-	/* List of TIDs of tuples we intend to delete */
-	/* NB: this list is ordered by TID address */
-	int			num_dead_tuples;	/* current # of entries */
-	int			max_dead_tuples;	/* # slots allocated in array */
-	ItemPointer dead_tuples;	/* array of ItemPointerData */
+	LVDeadTuples *dead_tuples;
 	int			num_index_scans;
 	TransactionId latestRemovedXid;
 	bool		lock_waiter_detected;
 } LVRelStats;
 
-
 /* A few variables that don't seem worth passing around as parameters */
 static int	elevel = -1;
 
@@ -150,17 +247,18 @@ static BufferAccessStrategy vac_strategy;
 
 
 /* non-export function prototypes */
-static void lazy_scan_heap(Relation onerel, int options,
+static void lazy_scan_heap(Relation onerel, VacuumParams *params,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
 			   bool aggressive);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats, BlockNumber nblocks);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  IndexBulkDeleteResult **stats,
-				  LVRelStats *vacrelstats);
+				  double reltuples,
+				  LVDeadTuples	*dead_tuples);
 static void lazy_cleanup_index(Relation indrel,
-				   IndexBulkDeleteResult *stats,
-				   LVRelStats *vacrelstats);
+				   IndexBulkDeleteResult **stats,
+				   double reltuples, bool estimated_count);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
@@ -168,12 +266,35 @@ static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
-static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
-					   ItemPointer itemptr);
+static void lazy_record_dead_tuple(LVDeadTuples *dead_tuples, ItemPointer itemptr);
 static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 						 TransactionId *visibility_cutoff_xid, bool *all_frozen);
+static void lazy_update_index_statistics(Relation indrel, IndexBulkDeleteResult *stats);
+static LVParallelState *lazy_prepare_parallel(LVRelStats *vacrelstats, Oid relid,
+											  BlockNumber nblocks, int nindexes,
+											  int nrequested);
+static void lazy_end_parallel(LVParallelState *lps, Relation *Irel, int nindexes);
+static void lazy_begin_parallel_vacuum_index(LVParallelState *lps, LVRelStats *vacrelstats,
+											 bool for_cleanup);
+static void lazy_end_parallel_vacuum_index(LVParallelState *lps, bool reinitialize);
+static void lazy_vacuum_or_cleanup_indexes(LVRelStats *vacrelstats, Relation *Irel,
+										   int nindexes,
+										   IndexBulkDeleteResult **stats,
+										   LVParallelState *lps, bool for_cleanup);
+static void lazy_parallel_vacuum_or_cleanup_indexes(LVRelStats *vacrelstats,
+													Relation *Irel,
+													int nindexes,
+													IndexBulkDeleteResult **stats,
+													LVParallelState *lps,
+													bool for_cleanup);
+static void do_parallel_vacuum_or_cleanup_indexes(Relation *Irel, int nindexes,
+												  IndexBulkDeleteResult **stats,
+												  LVShared *lvshared,
+												  LVDeadTuples *dead_tuples);
+static int compute_parallel_workers(Relation onerel, int nrequested, int nindexes);
+static long compute_max_dead_tuples(BlockNumber relblocks, bool hasindex);
 
 
 /*
@@ -261,7 +382,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, params->options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, params, vacrelstats, Irel, nindexes, aggressive);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -464,14 +585,28 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		dead-tuple TIDs, invoke vacuuming of indexes and call lazy_vacuum_heap
  *		to reclaim dead line pointers.
  *
+ *		If the table has more than one index and parallel lazy vacuum is requested,
+ *		we execute both index vacuuming and index cleanup with parallel workers.
+ *		When allocating the space for the lazy heap scan, we enter parallel mode,
+ *		create the parallel context and initialize a DSM segment for dead tuples.
+ *		dead_tuples points either to a DSM segment in the parallel lazy vacuum case
+ *		or to local memory in the single-process vacuum case.  Before starting
+ *		parallel index vacuuming and parallel index cleanup we launch parallel
+ *		workers.  All parallel workers exit after processing all indexes, and the
+ *		leader process re-initializes the parallel context and re-launches them at
+ *		the next execution.  The index statistics are updated by the leader after
+ *		exiting parallel mode, since no writes are allowed during parallel mode.
+ *
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
  */
 static void
-lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
+lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 			   Relation *Irel, int nindexes, bool aggressive)
 {
+	LVParallelState *lps = NULL;
+	LVDeadTuples *dead_tuples;
 	BlockNumber nblocks,
 				blkno;
 	HeapTupleData tuple;
@@ -494,6 +629,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	bool		skipping_blocks;
 	xl_heap_freeze_tuple *frozen;
 	StringInfoData buf;
+	int			parallel_workers = 0;
 	const int	initprog_index[] = {
 		PROGRESS_VACUUM_PHASE,
 		PROGRESS_VACUUM_TOTAL_HEAP_BLKS,
@@ -529,13 +665,34 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	/* Compute the number of parallel vacuum workers to request */
+	if (params->nworkers >= 0)
+		parallel_workers = compute_parallel_workers(onerel,
+													params->nworkers,
+													nindexes);
+
+	if (parallel_workers > 0)
+	{
+		/* enter parallel mode and prepare parallel lazy vacuum */
+		lps = lazy_prepare_parallel(vacrelstats,
+									RelationGetRelid(onerel),
+									nblocks, nindexes,
+									parallel_workers);
+		lps->nworkers_requested = params->nworkers;
+	}
+	else
+	{
+		/* Allocate the memory space for dead tuples locally */
+		lazy_space_alloc(vacrelstats, nblocks);
+	}
+
+	dead_tuples = vacrelstats->dead_tuples;
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
 	initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP;
 	initprog_val[1] = nblocks;
-	initprog_val[2] = vacrelstats->max_dead_tuples;
+	initprog_val[2] = dead_tuples->max_tuples;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
 	/*
@@ -583,7 +740,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	 * be replayed on any hot standby, where it can be disruptive.
 	 */
 	next_unskippable_block = 0;
-	if ((options & VACOPT_DISABLE_PAGE_SKIPPING) == 0)
+	if ((params->options & VACOPT_DISABLE_PAGE_SKIPPING) == 0)
 	{
 		while (next_unskippable_block < nblocks)
 		{
@@ -638,7 +795,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		{
 			/* Time to advance next_unskippable_block */
 			next_unskippable_block++;
-			if ((options & VACOPT_DISABLE_PAGE_SKIPPING) == 0)
+			if ((params->options & VACOPT_DISABLE_PAGE_SKIPPING) == 0)
 			{
 				while (next_unskippable_block < nblocks)
 				{
@@ -713,8 +870,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if ((dead_tuples->max_tuples - dead_tuples->num_tuples) < MaxHeapTuplesPerPage &&
+			dead_tuples->num_tuples > 0)
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -742,10 +899,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 										 PROGRESS_VACUUM_PHASE_VACUUM_INDEX);
 
 			/* Remove index entries */
-			for (i = 0; i < nindexes; i++)
-				lazy_vacuum_index(Irel[i],
-								  &indstats[i],
-								  vacrelstats);
+			lazy_vacuum_or_cleanup_indexes(vacrelstats, Irel, nindexes,
+										   indstats, lps, false);
 
 			/*
 			 * Report that we are now vacuuming the heap.  We also increase
@@ -765,7 +920,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * not to reset latestRemovedXid since we want that value to be
 			 * valid.
 			 */
-			vacrelstats->num_dead_tuples = 0;
+			dead_tuples->num_tuples = 0;
 			vacrelstats->num_index_scans++;
 
 			/*
@@ -961,7 +1116,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		has_dead_tuples = false;
 		nfrozen = 0;
 		hastup = false;
-		prev_dead_count = vacrelstats->num_dead_tuples;
+		prev_dead_count = dead_tuples->num_tuples;
 		maxoff = PageGetMaxOffsetNumber(page);
 
 		/*
@@ -1000,7 +1155,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 */
 			if (ItemIdIsDead(itemid))
 			{
-				lazy_record_dead_tuple(vacrelstats, &(tuple.t_self));
+				lazy_record_dead_tuple(dead_tuples, &(tuple.t_self));
 				all_visible = false;
 				continue;
 			}
@@ -1140,7 +1295,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 			if (tupgone)
 			{
-				lazy_record_dead_tuple(vacrelstats, &(tuple.t_self));
+				lazy_record_dead_tuple(dead_tuples, &(tuple.t_self));
 				HeapTupleHeaderAdvanceLatestRemovedXid(tuple.t_data,
 													   &vacrelstats->latestRemovedXid);
 				tups_vacuumed += 1;
@@ -1209,8 +1364,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If there are no indexes then we can vacuum the page right now
 		 * instead of doing a second scan.
 		 */
-		if (nindexes == 0 &&
-			vacrelstats->num_dead_tuples > 0)
+		if (nindexes == 0 && dead_tuples->num_tuples > 0)
 		{
 			/* Remove tuples from heap */
 			lazy_vacuum_page(onerel, blkno, buf, 0, vacrelstats, &vmbuffer);
@@ -1221,7 +1375,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * not to reset latestRemovedXid since we want that value to be
 			 * valid.
 			 */
-			vacrelstats->num_dead_tuples = 0;
+			dead_tuples->num_tuples = 0;
 			vacuumed_pages++;
 
 			/*
@@ -1337,7 +1491,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * page, so remember its free space as-is.  (This path will always be
 		 * taken if there are no indexes.)
 		 */
-		if (vacrelstats->num_dead_tuples == prev_dead_count)
+		if (dead_tuples->num_tuples == prev_dead_count)
 			RecordPageWithFreeSpace(onerel, blkno, freespace, nblocks);
 	}
 
@@ -1371,7 +1525,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (dead_tuples->num_tuples > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1387,10 +1541,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 									 PROGRESS_VACUUM_PHASE_VACUUM_INDEX);
 
 		/* Remove index entries */
-		for (i = 0; i < nindexes; i++)
-			lazy_vacuum_index(Irel[i],
-							  &indstats[i],
-							  vacrelstats);
+		lazy_vacuum_or_cleanup_indexes(vacrelstats, Irel, nindexes,
+										   indstats, lps, false);
 
 		/* Report that we are now vacuuming the heap */
 		hvp_val[0] = PROGRESS_VACUUM_PHASE_VACUUM_HEAP;
@@ -1416,9 +1568,21 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,
 								 PROGRESS_VACUUM_PHASE_INDEX_CLEANUP);
 
-	/* Do post-vacuum cleanup and statistics update for each index */
-	for (i = 0; i < nindexes; i++)
-		lazy_cleanup_index(Irel[i], indstats[i], vacrelstats);
+	/*
+	 * Do post-vacuum cleanup and statistics update for each index if
+	 * we're not in parallel lazy vacuum.  If in parallel lazy vacuum, do
+	 * only post-vacuum cleanup here and update the statistics after
+	 * exiting parallel mode.
+	 */
+	lazy_vacuum_or_cleanup_indexes(vacrelstats, Irel, nindexes,
+									   indstats, lps, true);
+
+	/*
+	 * If we're in parallel lazy vacuum, end parallel lazy vacuum and
+	 * update index statistics.
+	 */
+	if (IsInParallelVacuum(lps))
+		lazy_end_parallel(lps, Irel, nindexes);
 
 	/* If no indexes, make log report that lazy_vacuum_heap would've made */
 	if (vacuumed_pages)
@@ -1485,7 +1649,7 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats, BlockNumber nblocks)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	while (tupindex < vacrelstats->dead_tuples->num_tuples)
 	{
 		BlockNumber tblk;
 		Buffer		buf;
@@ -1494,7 +1658,7 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats, BlockNumber nblocks)
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples->itemptrs[tupindex]);
 		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
 								 vac_strategy);
 		if (!ConditionalLockBufferForCleanup(buf))
@@ -1542,6 +1706,7 @@ static int
 lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer)
 {
+	LVDeadTuples	*dead_tuples = vacrelstats->dead_tuples;
 	Page		page = BufferGetPage(buffer);
 	OffsetNumber unused[MaxOffsetNumber];
 	int			uncnt = 0;
@@ -1552,16 +1717,16 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 	START_CRIT_SECTION();
 
-	for (; tupindex < vacrelstats->num_dead_tuples; tupindex++)
+	for (; tupindex < dead_tuples->num_tuples; tupindex++)
 	{
 		BlockNumber tblk;
 		OffsetNumber toff;
 		ItemId		itemid;
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+		tblk = ItemPointerGetBlockNumber(&dead_tuples->itemptrs[tupindex]);
 		if (tblk != blkno)
 			break;				/* past end of tuples for this block */
-		toff = ItemPointerGetOffsetNumber(&vacrelstats->dead_tuples[tupindex]);
+		toff = ItemPointerGetOffsetNumber(&dead_tuples->itemptrs[tupindex]);
 		itemid = PageGetItemId(page, toff);
 		ItemIdSetUnused(itemid);
 		unused[uncnt++] = toff;
@@ -1682,6 +1847,151 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Vacuum or clean up indexes with parallel workers.  This function must be
+ * used only by the parallel vacuum leader process.
+ */
+static void
+lazy_parallel_vacuum_or_cleanup_indexes(LVRelStats *vacrelstats, Relation *Irel,
+										int nindexes, IndexBulkDeleteResult **stats,
+										LVParallelState *lps, bool for_cleanup)
+{
+	Assert(!IsParallelWorker());
+	Assert(lps != NULL);
+	Assert(nindexes > 0);
+
+	/* Launch parallel vacuum workers if we're ready */
+	lazy_begin_parallel_vacuum_index(lps, vacrelstats,
+									 for_cleanup);
+
+	/*
+	 * Do index vacuuming or index cleanup with parallel workers.  If no
+	 * workers were launched, the leader process alone does the work.
+	 */
+	do_parallel_vacuum_or_cleanup_indexes(Irel, nindexes, stats,
+										  lps->lvshared,
+										  vacrelstats->dead_tuples);
+
+	lazy_end_parallel_vacuum_index(lps, !for_cleanup);
+}
+
+/*
+ * Index vacuuming and index cleanup routine for both the leader process
+ * and worker processes.  Unlike single-process vacuum, we don't update
+ * index statistics after index cleanup, since that's not allowed during
+ * parallel mode; instead, we copy the index bulk-deletion results from
+ * local memory to the DSM segment.
+ */
+static void
+do_parallel_vacuum_or_cleanup_indexes(Relation *Irel, int nindexes,
+									  IndexBulkDeleteResult **stats,
+									  LVShared *lvshared,
+									  LVDeadTuples *dead_tuples)
+{
+	int idx = 0;
+
+	for (;;)
+	{
+		idx = pg_atomic_fetch_add_u32(&(lvshared->nprocessed), 1);
+
+		/* Done for all indexes? */
+		if (idx >= nindexes)
+			break;
+
+		/*
+		 * Update the local pointer to the corresponding bulk-deletion result
+		 * if someone already updated it.
+		 */
+		if (lvshared->indstats[idx].updated &&
+			stats[idx] == NULL)
+			stats[idx] = &(lvshared->indstats[idx].stats);
+
+		/* Vacuum or clean up one index */
+		if (!lvshared->for_cleanup)
+			lazy_vacuum_index(Irel[idx], &stats[idx], lvshared->reltuples,
+							  dead_tuples);
+		else
+			lazy_cleanup_index(Irel[idx], &stats[idx], lvshared->reltuples,
+							   lvshared->estimated_count);
+
+		/*
+		 * We copy the index bulk-deletion results returned from ambulkdelete
+		 * and amvacuumcleanup to the DSM segment because they allocate the
+		 * results locally and it's possible that an index will be vacuumed
+		 * by a different vacuum process the next time.  Copying the result
+		 * normally happens only after the first round of index vacuuming;
+		 * from the second round onward, we pass the result stored in the DSM
+		 * segment so that it is updated directly.
+		 *
+		 * Since each vacuum worker writes its bulk-deletion result to a
+		 * different slot, we can write them without locking.
+		 */
+		if (!lvshared->indstats[idx].updated &&
+			stats[idx] != NULL)
+		{
+			memcpy(&(lvshared->indstats[idx].stats),
+				   stats[idx], sizeof(IndexBulkDeleteResult));
+			lvshared->indstats[idx].updated = true;
+
+			/*
+			 * We no longer need the locally allocated result; stats[idx]
+			 * now points into the DSM segment.
+			 */
+			pfree(stats[idx]);
+			stats[idx] = &(lvshared->indstats[idx].stats);
+		}
+	}
+}
+
+/*
+ * Vacuum or clean up indexes.  If parallel lazy vacuum is active, this is
+ * performed with parallel workers, so this function must be used only by
+ * the parallel vacuum leader process.
+ */
+static void
+lazy_vacuum_or_cleanup_indexes(LVRelStats *vacrelstats, Relation *Irel,
+								   int nindexes, IndexBulkDeleteResult **stats,
+								   LVParallelState *lps, bool for_cleanup)
+{
+	int		idx;
+
+	Assert(!IsParallelWorker());
+
+	/* Nothing to do if the table has no indexes */
+	if (nindexes <= 0)
+		return;
+
+	/* Do parallel lazy index vacuuming or cleanup if we're ready */
+	if (IsInParallelVacuum(lps))
+	{
+		lazy_parallel_vacuum_or_cleanup_indexes(vacrelstats, Irel,
+												nindexes, stats,
+												lps, for_cleanup);
+		return;
+	}
+
+	for (idx = 0; idx < nindexes; idx++)
+	{
+		/* Vacuum or clean up one index */
+		if (!for_cleanup)
+			lazy_vacuum_index(Irel[idx], &stats[idx], vacrelstats->old_live_tuples,
+							  vacrelstats->dead_tuples);
+		else
+		{
+			lazy_cleanup_index(Irel[idx], &stats[idx], vacrelstats->new_rel_tuples,
+							   vacrelstats->tupcount_pages < vacrelstats->rel_pages);
+
+			/*
+			 * Update index statistics.  (In parallel lazy vacuum, they are
+			 * updated after exiting parallel mode instead.)
+			 */
+			lazy_update_index_statistics(Irel[idx], stats[idx]);
+
+			if (stats[idx])
+				pfree(stats[idx]);
+		}
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1690,11 +2000,11 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  *		vacrelstats->dead_tuples, and update running statistics.
  */
 static void
-lazy_vacuum_index(Relation indrel,
-				  IndexBulkDeleteResult **stats,
-				  LVRelStats *vacrelstats)
+lazy_vacuum_index(Relation indrel, IndexBulkDeleteResult **stats,
+				  double reltuples, LVDeadTuples *dead_tuples)
 {
 	IndexVacuumInfo ivinfo;
+	char		*msgfmt;
 	PGRUsage	ru0;
 
 	pg_rusage_init(&ru0);
@@ -1703,18 +2013,22 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.analyze_only = false;
 	ivinfo.estimated_count = true;
 	ivinfo.message_level = elevel;
-	/* We can only provide an approximate value of num_heap_tuples here */
-	ivinfo.num_heap_tuples = vacrelstats->old_live_tuples;
+	ivinfo.num_heap_tuples = reltuples;
 	ivinfo.strategy = vac_strategy;
 
 	/* Do bulk deletion */
 	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+							   lazy_tid_reaped, (void *) dead_tuples);
+
+	if (IsParallelWorker())
+		msgfmt = gettext_noop("scanned index \"%s\" to remove %d row versions by parallel vacuum worker");
+	else
+		msgfmt = gettext_noop("scanned index \"%s\" to remove %d row versions");
 
 	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
+			(errmsg(msgfmt,
 					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
+					dead_tuples->num_tuples),
 			 errdetail_internal("%s", pg_rusage_show(&ru0))));
 }
 
@@ -1722,60 +2036,65 @@ lazy_vacuum_index(Relation indrel,
  *	lazy_cleanup_index() -- do post-vacuum cleanup for one index relation.
  */
 static void
-lazy_cleanup_index(Relation indrel,
-				   IndexBulkDeleteResult *stats,
-				   LVRelStats *vacrelstats)
+lazy_cleanup_index(Relation indrel, IndexBulkDeleteResult **stats,
+				   double reltuples, bool estimated_count)
 {
 	IndexVacuumInfo ivinfo;
+	char		*msgfmt;
 	PGRUsage	ru0;
 
 	pg_rusage_init(&ru0);
 
 	ivinfo.index = indrel;
 	ivinfo.analyze_only = false;
-	ivinfo.estimated_count = (vacrelstats->tupcount_pages < vacrelstats->rel_pages);
+	ivinfo.estimated_count = estimated_count;
 	ivinfo.message_level = elevel;
-
-	/*
-	 * Now we can provide a better estimate of total number of surviving
-	 * tuples (we assume indexes are more interested in that than in the
-	 * number of nominally live tuples).
-	 */
-	ivinfo.num_heap_tuples = vacrelstats->new_rel_tuples;
+	ivinfo.num_heap_tuples = reltuples;
 	ivinfo.strategy = vac_strategy;
 
-	stats = index_vacuum_cleanup(&ivinfo, stats);
+	*stats = index_vacuum_cleanup(&ivinfo, *stats);
 
-	if (!stats)
+	if (!(*stats))
 		return;
 
-	/*
-	 * Now update statistics in pg_class, but only if the index says the count
-	 * is accurate.
-	 */
-	if (!stats->estimated_count)
-		vac_update_relstats(indrel,
-							stats->num_pages,
-							stats->num_index_tuples,
-							0,
-							false,
-							InvalidTransactionId,
-							InvalidMultiXactId,
-							false);
+	if (IsParallelWorker())
+		msgfmt = gettext_noop("index \"%s\" now contains %.0f row versions in %u pages, reported by parallel vacuum worker");
+	else
+		msgfmt = gettext_noop("index \"%s\" now contains %.0f row versions in %u pages");
 
 	ereport(elevel,
-			(errmsg("index \"%s\" now contains %.0f row versions in %u pages",
+			(errmsg(msgfmt,
 					RelationGetRelationName(indrel),
-					stats->num_index_tuples,
-					stats->num_pages),
+					(*stats)->num_index_tuples,
+					(*stats)->num_pages),
 			 errdetail("%.0f index row versions were removed.\n"
 					   "%u index pages have been deleted, %u are currently reusable.\n"
 					   "%s.",
-					   stats->tuples_removed,
-					   stats->pages_deleted, stats->pages_free,
+					   (*stats)->tuples_removed,
+					   (*stats)->pages_deleted, (*stats)->pages_free,
 					   pg_rusage_show(&ru0))));
+}
+
+/*
+ * Update index statistics in pg_class, but only if the index says the count
+ * is accurate.
+ */
+static void
+lazy_update_index_statistics(Relation indrel, IndexBulkDeleteResult *stats)
+{
+	Assert(!IsInParallelMode());
+
+	if (!stats || stats->estimated_count)
+		return;
 
-	pfree(stats);
+	vac_update_relstats(indrel,
+						stats->num_pages,
+						stats->num_index_tuples,
+						0,
+						false,
+						InvalidTransactionId,
+						InvalidMultiXactId,
+						false);
 }
 
 /*
@@ -2080,19 +2399,17 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
- * lazy_space_alloc - space allocation decisions for lazy vacuum
- *
- * See the comments at the head of this file for rationale.
+ * Return the maximum number of dead tuples we can record.
  */
-static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+static long
+compute_max_dead_tuples(BlockNumber relblocks, bool hasindex)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
 	autovacuum_work_mem != -1 ?
 	autovacuum_work_mem : maintenance_work_mem;
 
-	if (vacrelstats->hasindex)
+	if (hasindex)
 	{
 		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
 		maxtuples = Min(maxtuples, INT_MAX);
@@ -2106,34 +2423,49 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 		maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
 	}
 	else
-	{
 		maxtuples = MaxHeapTuplesPerPage;
-	}
 
-	vacrelstats->num_dead_tuples = 0;
-	vacrelstats->max_dead_tuples = (int) maxtuples;
-	vacrelstats->dead_tuples = (ItemPointer)
-		palloc(maxtuples * sizeof(ItemPointerData));
+	return maxtuples;
+}
+
+/*
+ * lazy_space_alloc - space allocation decisions for lazy vacuum
+ *
+ * See the comments at the head of this file for rationale.
+ */
+static void
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+{
+	LVDeadTuples	*dead_tuples = NULL;
+	long		maxtuples;
+
+	maxtuples = compute_max_dead_tuples(relblocks, vacrelstats->hasindex);
+
+	dead_tuples = (LVDeadTuples *)
+		palloc(SizeOfLVDeadTuples + maxtuples * sizeof(ItemPointerData));
+	dead_tuples->num_tuples = 0;
+	dead_tuples->max_tuples = (int) maxtuples;
+
+	vacrelstats->dead_tuples = dead_tuples;
 }
 
 /*
  * lazy_record_dead_tuple - remember one deletable tuple
  */
 static void
-lazy_record_dead_tuple(LVRelStats *vacrelstats,
-					   ItemPointer itemptr)
+lazy_record_dead_tuple(LVDeadTuples *dead_tuples, ItemPointer itemptr)
 {
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
+	if (dead_tuples->num_tuples < dead_tuples->max_tuples)
 	{
-		vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
-		vacrelstats->num_dead_tuples++;
+		dead_tuples->itemptrs[dead_tuples->num_tuples] = *itemptr;
+		dead_tuples->num_tuples++;
 		pgstat_progress_update_param(PROGRESS_VACUUM_NUM_DEAD_TUPLES,
-									 vacrelstats->num_dead_tuples);
+									 dead_tuples->num_tuples);
 	}
 }
 
@@ -2147,12 +2479,12 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
 static bool
 lazy_tid_reaped(ItemPointer itemptr, void *state)
 {
-	LVRelStats *vacrelstats = (LVRelStats *) state;
+	LVDeadTuples	*dead_tuples = (LVDeadTuples *) state;
 	ItemPointer res;
 
 	res = (ItemPointer) bsearch((void *) itemptr,
-								(void *) vacrelstats->dead_tuples,
-								vacrelstats->num_dead_tuples,
+								(void *) dead_tuples->itemptrs,
+								dead_tuples->num_tuples,
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
@@ -2300,3 +2632,331 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 
 	return all_visible;
 }
+
+/*
+ * Compute the number of parallel worker processes to request.  Both index
+ * vacuuming and index cleanup can be executed with parallel workers if the
+ * table has more than one index.  The relation sizes of the table and indexes
+ * do not affect the parallel degree for now.
+ */
+static int
+compute_parallel_workers(Relation onerel, int nrequested, int nindexes)
+{
+	int parallel_workers;
+
+	Assert(nrequested >= 0);
+
+	if (nindexes <= 1)
+		return 0;
+
+	if (nrequested > 0)
+		parallel_workers = Min(nrequested, nindexes - 1);
+	else
+	{
+		/*
+		 * The parallel degree is not requested. Compute it based on the
+		 * number of indexes.
+		 */
+		parallel_workers = nindexes - 1;
+	}
+
+	/* cap by max_parallel_maintenance_workers */
+	parallel_workers = Min(parallel_workers, max_parallel_maintenance_workers);
+
+	return parallel_workers;
+}
+
+/*
+ * Enter parallel mode, allocate and initialize a DSM segment.
+ */
+static LVParallelState *
+lazy_prepare_parallel(LVRelStats *vacrelstats, Oid relid, BlockNumber nblocks,
+					  int nindexes, int nrequested)
+{
+	LVParallelState *lps = (LVParallelState *) palloc(sizeof(LVParallelState));
+	LVShared	*shared;
+	ParallelContext *pcxt;
+	LVDeadTuples	*tidmap;
+	long	maxtuples;
+	char	*sharedquery;
+	Size	est_shared;
+	Size	est_deadtuples;
+	int		querylen;
+	int		keys = 0;
+
+	Assert(nrequested > 0);
+	Assert(nindexes > 0);
+
+	EnterParallelMode();
+	pcxt = CreateParallelContext("postgres", "heap_parallel_vacuum_main",
+								 nrequested);
+	lps->pcxt = pcxt;
+	Assert(pcxt->nworkers > 0);
+
+	/* Estimate size for shared information -- PARALLEL_VACUUM_KEY_SHARED */
+	est_shared = MAXALIGN(add_size(SizeOfLVShared,
+								   mul_size(sizeof(LVIndStats), nindexes)));
+	shm_toc_estimate_chunk(&pcxt->estimator, est_shared);
+	keys++;
+
+	/* Estimate size for dead tuples -- PARALLEL_VACUUM_KEY_DEAD_TUPLES */
+	maxtuples = compute_max_dead_tuples(nblocks, true);
+	est_deadtuples = MAXALIGN(add_size(sizeof(LVDeadTuples),
+									   mul_size(sizeof(ItemPointerData), maxtuples)));
+	shm_toc_estimate_chunk(&pcxt->estimator, est_deadtuples);
+	keys++;
+
+	shm_toc_estimate_keys(&pcxt->estimator, keys);
+
+	/* Finally, estimate VACUUM_KEY_QUERY_TEXT space */
+	querylen = strlen(debug_query_string);
+	shm_toc_estimate_chunk(&pcxt->estimator, querylen + 1);
+	shm_toc_estimate_keys(&pcxt->estimator, 1);
+
+	InitializeParallelDSM(pcxt);
+
+	/* prepare shared information */
+	shared = (LVShared *) shm_toc_allocate(pcxt->toc, est_shared);
+	shared->relid = relid;
+	shared->elevel = elevel;
+	pg_atomic_init_u32(&(shared->nprocessed), 0);
+	MemSet(shared->indstats, 0, sizeof(LVIndStats) * nindexes);
+	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_SHARED, shared);
+	lps->lvshared = shared;
+
+	/* prepare the dead tuple space */
+	tidmap = (LVDeadTuples *) shm_toc_allocate(pcxt->toc, est_deadtuples);
+	tidmap->max_tuples = maxtuples;
+	tidmap->num_tuples = 0;
+	MemSet(tidmap->itemptrs, 0, sizeof(ItemPointerData) * maxtuples);
+	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_DEAD_TUPLES, tidmap);
+	vacrelstats->dead_tuples = tidmap;
+
+	/* Store query string for workers */
+	sharedquery = (char *) shm_toc_allocate(pcxt->toc, querylen + 1);
+	memcpy(sharedquery, debug_query_string, querylen + 1);
+	sharedquery[querylen] = '\0';
+	shm_toc_insert(pcxt->toc, PARALLEL_VACUUM_KEY_QUERY_TEXT, sharedquery);
+
+	lps->nworkers_requested = 0;
+
+	return lps;
+}
+
+/*
+ * Shut down workers, destroy the parallel context, and end parallel mode.
+ * Then update the index statistics after exiting parallel mode.
+ */
+static void
+lazy_end_parallel(LVParallelState *lps, Relation *Irel, int nindexes)
+{
+	LVIndStats *copied_indstats = NULL;
+	int i;
+
+	Assert(!IsParallelWorker());
+	Assert(Irel != NULL && nindexes > 0);
+
+	/* copy the index statistics to a temporary space */
+	copied_indstats = palloc(sizeof(LVIndStats) * nindexes);
+	memcpy(copied_indstats, lps->lvshared->indstats,
+		   sizeof(LVIndStats) * nindexes);
+
+	/* Shutdown worker processes and destroy the parallel context */
+	WaitForParallelWorkersToFinish(lps->pcxt);
+	DestroyParallelContext(lps->pcxt);
+	ExitParallelMode();
+
+	for (i = 0; i < nindexes; i++)
+	{
+		LVIndStats *s = &(copied_indstats[i]);
+
+		if (s->updated)
+			lazy_update_index_statistics(Irel[i], &(s->stats));
+	}
+
+	pfree(copied_indstats);
+}
+
+/*
+ * Begin parallel index vacuuming or index cleanup.  Set up the shared
+ * information and launch parallel worker processes.
+ */
+static void
+lazy_begin_parallel_vacuum_index(LVParallelState *lps, LVRelStats *vacrelstats,
+								 bool for_cleanup)
+{
+	StringInfoData buf;
+
+	Assert(!IsParallelWorker());
+
+	/* Request workers to do either vacuuming indexes or cleaning indexes */
+	lps->lvshared->for_cleanup = for_cleanup;
+
+	if (!for_cleanup)
+	{
+		/* We can only provide an approximate value of num_heap_tuples here */
+		lps->lvshared->reltuples = vacrelstats->old_live_tuples;
+		lps->lvshared->estimated_count = true;
+	}
+	else
+	{
+		/*
+		 * Now we can provide a better estimate of total number of surviving
+		 * tuples (we assume indexes are more interested in that than in the
+		 * number of nominally live tuples).
+		 */
+		lps->lvshared->reltuples = vacrelstats->new_rel_tuples;
+		lps->lvshared->estimated_count =
+			(vacrelstats->tupcount_pages < vacrelstats->rel_pages);
+
+	}
+
+	LaunchParallelWorkers(lps->pcxt);
+
+	initStringInfo(&buf);
+
+	/*
+	 * If no workers were launched, the leader process vacuums all indexes
+	 * alone.  Since there is hope that we can launch workers at the next
+	 * execution, we don't want to end parallel mode yet.
+	 */
+	if (lps->pcxt->nworkers_launched == 0)
+	{
+		if (lps->nworkers_requested > 0)
+			appendStringInfo(&buf,
+							 gettext_noop("could not launch parallel vacuum worker (planned: %d, requested: %d)"),
+							 lps->pcxt->nworkers, lps->nworkers_requested);
+		else
+			appendStringInfo(&buf,
+							 gettext_noop("could not launch parallel vacuum worker (planned: %d)"),
+							 lps->pcxt->nworkers);
+		ereport(elevel, (errmsg("%s", buf.data)));
+
+		lazy_end_parallel_vacuum_index(lps, !for_cleanup);
+		return;
+	}
+
+	/* Report parallel vacuum worker information */
+	if (for_cleanup)
+	{
+		if (lps->nworkers_requested > 0)
+			appendStringInfo(&buf,
+							 ngettext("launched %d parallel vacuum worker for index cleanup (planned: %d, requested %d)",
+									  "launched %d parallel vacuum workers for index cleanup (planned: %d, requested %d)",
+									  lps->pcxt->nworkers_launched),
+							 lps->pcxt->nworkers_launched,
+							 lps->pcxt->nworkers,
+							 lps->nworkers_requested);
+		else
+			appendStringInfo(&buf,
+							 ngettext("launched %d parallel vacuum worker for index cleanup (planned: %d)",
+									  "launched %d parallel vacuum workers for index cleanup (planned: %d)",
+									  lps->pcxt->nworkers_launched),
+							 lps->pcxt->nworkers_launched,
+							 lps->pcxt->nworkers);
+	}
+	else
+	{
+		if (lps->nworkers_requested > 0)
+			appendStringInfo(&buf,
+							 ngettext("launched %d parallel vacuum worker for index vacuuming (planned: %d, requested %d)",
+									  "launched %d parallel vacuum workers for index vacuuming (planned: %d, requested %d)",
+									  lps->pcxt->nworkers_launched),
+							 lps->pcxt->nworkers_launched,
+							 lps->pcxt->nworkers,
+							 lps->nworkers_requested);
+		else
+			appendStringInfo(&buf,
+							 ngettext("launched %d parallel vacuum worker for index vacuuming (planned: %d)",
+									  "launched %d parallel vacuum workers for index vacuuming (planned: %d)",
+									  lps->pcxt->nworkers_launched),
+							 lps->pcxt->nworkers_launched,
+							 lps->pcxt->nworkers);
+	}
+	ereport(elevel, (errmsg("%s", buf.data)));
+}
+
+/*
+ * Wait for all worker processes to finish and reinitialize DSM for
+ * the next execution.
+ */
+static void
+lazy_end_parallel_vacuum_index(LVParallelState *lps, bool reinitialize)
+{
+	Assert(!IsParallelWorker());
+
+	WaitForParallelWorkersToFinish(lps->pcxt);
+
+	if (reinitialize)
+	{
+		/* Reset the processing count */
+		pg_atomic_write_u32(&(lps->lvshared->nprocessed), 0);
+
+		/*
+		 * Reinitialize the parallel context to relaunch parallel workers
+		 * for the next execution.
+		 */
+		ReinitializeParallelDSM(lps->pcxt);
+	}
+}
+
+/*
+ * Perform work within a launched parallel process.
+ *
+ * Parallel vacuum worker processes don't report the vacuum progress
+ * information.
+ */
+void
+heap_parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)
+{
+	Relation	onerel;
+	Relation	*indrels;
+	LVShared	*lvshared;
+	LVDeadTuples	*dead_tuples;
+	int			nindexes;
+	char		*sharedquery;
+	IndexBulkDeleteResult **stats;
+
+	lvshared = (LVShared *) shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_SHARED,
+										   false);
+	elevel = lvshared->elevel;
+
+	ereport(DEBUG1,
+			(errmsg("starting parallel lazy vacuum worker for %s",
+					lvshared->for_cleanup ? "cleanup" : "vacuuming")));
+
+	/* Open relations */
+	onerel = heap_open(lvshared->relid, ShareUpdateExclusiveLock);
+
+	/* indrels are sorted in order by OID */
+	vac_open_indexes(onerel, RowExclusiveLock, &nindexes, &indrels);
+	Assert(nindexes > 0);
+
+	/* Set debug_query_string for individual workers */
+	sharedquery = shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_QUERY_TEXT, true);
+
+	/* Report the query string from leader */
+	debug_query_string = sharedquery;
+	pgstat_report_activity(STATE_RUNNING, debug_query_string);
+
+	/* Set dead tuple space within worker */
+	dead_tuples = (LVDeadTuples *) shm_toc_lookup(toc, PARALLEL_VACUUM_KEY_DEAD_TUPLES,
+												  false);
+
+	/* Set cost-based vacuum delay */
+	VacuumCostActive = (VacuumCostDelay > 0);
+	VacuumCostBalance = 0;
+	VacuumPageHit = 0;
+	VacuumPageMiss = 0;
+	VacuumPageDirty = 0;
+
+	stats = (IndexBulkDeleteResult **)
+		palloc0(nindexes * sizeof(IndexBulkDeleteResult *));
+
+	/* Do either index vacuuming or index cleanup */
+	do_parallel_vacuum_or_cleanup_indexes(indrels, nindexes, stats,
+										  lvshared, dead_tuples);
+
+	vac_close_indexes(nindexes, indrels, RowExclusiveLock);
+	heap_close(onerel, ShareUpdateExclusiveLock);
+}
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index 55d129a..86511b2 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -14,6 +14,7 @@
 
 #include "postgres.h"
 
+#include "access/heapam.h"
 #include "access/nbtree.h"
 #include "access/parallel.h"
 #include "access/session.h"
@@ -140,6 +141,9 @@ static const struct
 	},
 	{
 		"_bt_parallel_build_main", _bt_parallel_build_main
+	},
+	{
+		"heap_parallel_vacuum_main", heap_parallel_vacuum_main
 	}
 };
 
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 72f140e..044138a 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -90,6 +90,7 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 	ListCell	*lc;
 
 	params.options = vacstmt->is_vacuumcmd ? VACOPT_VACUUM : VACOPT_ANALYZE;
+	params.nworkers = -1;
 
 	/* Parse options list */
 	foreach(lc, vacstmt->options)
@@ -116,6 +117,27 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 			params.options |= defGetBoolean(opt) ? VACOPT_FULL : 0;
 		else if (strcmp(opt->defname, "disable_page_skipping") == 0)
 			params.options |= defGetBoolean(opt) ? VACOPT_DISABLE_PAGE_SKIPPING : 0;
+		else if (strcmp(opt->defname, "parallel") == 0)
+		{
+			if (opt->arg == NULL)
+			{
+				/*
+				 * Parallel lazy vacuum was requested but the user didn't
+				 * specify a parallel degree; it will be determined at the
+				 * start of lazy vacuum.
+				 */
+				params.nworkers = 0;
+			}
+			else
+			{
+				params.nworkers = defGetInt32(opt);
+				if (params.nworkers <= 0)
+					ereport(ERROR,
+							(errcode(ERRCODE_SYNTAX_ERROR),
+							 errmsg("parallel vacuum degree must be at least 1"),
+							 parser_errposition(pstate, opt->location)));
+			}
+		}
 		else
 			ereport(ERROR,
 					(errcode(ERRCODE_SYNTAX_ERROR),
@@ -147,6 +169,11 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)
 		}
 	}
 
+	if ((params.options & VACOPT_FULL) && params.nworkers >= 0)
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot specify FULL option with PARALLEL option")));
+
 	/*
 	 * All freeze ages are zero if the FREEZE option is given; otherwise pass
 	 * them as -1 which means to use the default values.
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 5af91aa..effc85b 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -10553,6 +10553,7 @@ vac_analyze_option_name:
 
 vac_analyze_option_arg:
 			opt_boolean_or_string					{ $$ = (Node *) makeString($1); }
+			| NumericOnly							{ $$ = (Node *) $1; }
 			| /* EMPTY */		 					{ $$ = NULL; }
 		;
 
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index fa875db..010a49c 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2886,6 +2886,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
 			(!wraparound ? VACOPT_SKIP_LOCKED : 0);
+		tab->at_params.nworkers = -1;	/* parallel lazy autovacuum is not supported */
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 36e20fb..685bfa3 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3431,7 +3431,8 @@ psql_completion(const char *text, int start, int end)
 		 */
 		if (ends_with(prev_wd, '(') || ends_with(prev_wd, ','))
 			COMPLETE_WITH("FULL", "FREEZE", "ANALYZE", "VERBOSE",
-						  "DISABLE_PAGE_SKIPPING", "SKIP_LOCKED");
+						  "DISABLE_PAGE_SKIPPING", "SKIP_LOCKED",
+						  "PARALLEL");
 		else if (TailMatches("FULL|FREEZE|ANALYZE|VERBOSE|DISABLE_PAGE_SKIPPING|SKIP_LOCKED"))
 			COMPLETE_WITH("ON", "OFF");
 	}
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 3773a4d..4fcbfcc 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -14,6 +14,7 @@
 #ifndef HEAPAM_H
 #define HEAPAM_H
 
+#include "access/parallel.h"
 #include "access/relation.h"	/* for backward compatibility */
 #include "access/relscan.h"
 #include "access/sdir.h"
@@ -195,6 +196,7 @@ extern Size SyncScanShmemSize(void);
 struct VacuumParams;
 extern void heap_vacuum_rel(Relation onerel,
 				struct VacuumParams *params, BufferAccessStrategy bstrategy);
+extern void heap_parallel_vacuum_main(dsm_segment *seg, shm_toc *toc);
 
 /* in heap/heapam_visibility.c */
 extern bool HeapTupleSatisfiesVisibility(HeapTuple stup, Snapshot snapshot,
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 77086f3..c4b355a 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -167,6 +167,11 @@ typedef struct VacuumParams
 	int			log_min_duration;	/* minimum execution threshold in ms at
 									 * which  verbose logs are activated, -1
 									 * to use default */
+	/*
+	 * The number of parallel vacuum workers: -1 (the default) disables
+	 * parallel vacuum, and 0 means the degree is chosen based on the
+	 * number of indexes.
+	 */
+	int			nworkers;
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/test/regress/expected/vacuum.out b/src/test/regress/expected/vacuum.out
index 07d0703..973bb33 100644
--- a/src/test/regress/expected/vacuum.out
+++ b/src/test/regress/expected/vacuum.out
@@ -80,6 +80,12 @@ CONTEXT:  SQL function "do_analyze" statement 1
 SQL function "wrap_do_analyze" statement 1
 VACUUM FULL vactst;
 VACUUM (DISABLE_PAGE_SKIPPING) vaccluster;
+VACUUM (PARALLEL) vaccluster;
+VACUUM (PARALLEL 2) vaccluster;
+VACUUM (PARALLEL 0) vaccluster; -- error
+ERROR:  parallel vacuum degree must be at least 1
+LINE 1: VACUUM (PARALLEL 0) vaccluster;
+                ^
 -- partitioned table
 CREATE TABLE vacparted (a int, b char) PARTITION BY LIST (a);
 CREATE TABLE vacparted1 PARTITION OF vacparted FOR VALUES IN (1);
@@ -116,9 +122,9 @@ ERROR:  column "does_not_exist" of relation "vacparted" does not exist
 ANALYZE (VERBOSE) does_not_exist;
 ERROR:  relation "does_not_exist" does not exist
 ANALYZE (nonexistent-arg) does_not_exist;
-ERROR:  syntax error at or near "-"
+ERROR:  syntax error at or near "arg"
 LINE 1: ANALYZE (nonexistent-arg) does_not_exist;
-                            ^
+                             ^
 ANALYZE (nonexistentarg) does_not_exit;
 ERROR:  unrecognized ANALYZE option "nonexistentarg"
 LINE 1: ANALYZE (nonexistentarg) does_not_exit;
diff --git a/src/test/regress/sql/vacuum.sql b/src/test/regress/sql/vacuum.sql
index 81f3822..d0c209a 100644
--- a/src/test/regress/sql/vacuum.sql
+++ b/src/test/regress/sql/vacuum.sql
@@ -61,6 +61,9 @@ VACUUM FULL vaccluster;
 VACUUM FULL vactst;
 
 VACUUM (DISABLE_PAGE_SKIPPING) vaccluster;
+VACUUM (PARALLEL) vaccluster;
+VACUUM (PARALLEL 2) vaccluster;
+VACUUM (PARALLEL 0) vaccluster; -- error
 
 -- partitioned table
 CREATE TABLE vacparted (a int, b char) PARTITION BY LIST (a);
-- 
1.8.3.1
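
For anyone who wants to try the new syntax quickly, here is a minimal SQL
sketch of the three nworkers states wired up above, reusing the vaccluster
table from the regression test. The degree actually used when it is omitted
still depends on the number of indexes and the maintenance worker limit, so
treat this only as an illustration:

-- explicit degree: params.nworkers = 2
VACUUM (PARALLEL 2) vaccluster;
-- degree omitted: params.nworkers = 0, chosen from the number of indexes
VACUUM (PARALLEL) vaccluster;
-- rejected: the degree must be at least 1, and PARALLEL conflicts with FULL
VACUUM (PARALLEL 0) vaccluster;
VACUUM (FULL, PARALLEL 2) vaccluster;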

From cafb7a75636345017e1a8d9c499751aea4b7d8be Mon Sep 17 00:00:00 2001
From: Masahiko Sawada <sawada.mshk@gmail.com>
Date: Wed, 23 Jan 2019 16:07:53 +0900
Subject: [PATCH v20 3/3] Add --parallel, -P option to vacuumdb command

---
 doc/src/sgml/ref/vacuumdb.sgml    | 16 +++++++++++++
 src/bin/scripts/t/100_vacuumdb.pl | 10 +++++++-
 src/bin/scripts/vacuumdb.c        | 49 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/ref/vacuumdb.sgml b/doc/src/sgml/ref/vacuumdb.sgml
index 41c7f3d..da65177 100644
--- a/doc/src/sgml/ref/vacuumdb.sgml
+++ b/doc/src/sgml/ref/vacuumdb.sgml
@@ -227,6 +227,22 @@ PostgreSQL documentation
      </varlistentry>
 
      <varlistentry>
+      <term><option>-P <replaceable class="parameter">workers</replaceable></option></term>
+      <term><option>--parallel=<replaceable class="parameter">workers</replaceable></option></term>
+      <listitem>
+       <para>
+        Execute parallel vacuum using
+        <replaceable class="parameter">workers</replaceable> background workers.
+       </para>
+       <para>
+        This option requires background workers, so make sure that your
+        <xref linkend="guc-max-parallel-workers-maintenance"/> setting allows
+        the requested number of workers.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry>
       <term><option>-q</option></term>
       <term><option>--quiet</option></term>
       <listitem>
diff --git a/src/bin/scripts/t/100_vacuumdb.pl b/src/bin/scripts/t/100_vacuumdb.pl
index 7f3a9b1..5ab87f3 100644
--- a/src/bin/scripts/t/100_vacuumdb.pl
+++ b/src/bin/scripts/t/100_vacuumdb.pl
@@ -3,7 +3,7 @@ use warnings;
 
 use PostgresNode;
 use TestLib;
-use Test::More tests => 44;
+use Test::More tests => 48;
 
 program_help_ok('vacuumdb');
 program_version_ok('vacuumdb');
@@ -48,6 +48,14 @@ $node->issues_sql_like(
 $node->command_fails(
 	[ 'vacuumdb', '--analyze-only', '--disable-page-skipping', 'postgres' ],
 	'--analyze-only and --disable-page-skipping specified together');
+$node->issues_sql_like(
+	[ 'vacuumdb', '-P2', 'postgres' ],
+	qr/statement: VACUUM \(PARALLEL 2\).*;/,
+	'vacuumdb -P2');
+$node->issues_sql_like(
+	[ 'vacuumdb', '-P', 'postgres' ],
+	qr/statement: VACUUM \(PARALLEL\).*;/,
+	'vacuumdb -P');
 $node->command_ok([qw(vacuumdb -Z --table=pg_am dbname=template1)],
 	'vacuumdb with connection string');
 
diff --git a/src/bin/scripts/vacuumdb.c b/src/bin/scripts/vacuumdb.c
index 5ac41ea..6be3f8f 100644
--- a/src/bin/scripts/vacuumdb.c
+++ b/src/bin/scripts/vacuumdb.c
@@ -45,6 +45,8 @@ typedef struct vacuumingOptions
 	bool		skip_locked;
 	int			min_xid_age;
 	int			min_mxid_age;
+	int			parallel_workers;	/* -1 disables, 0 for choosing based on the
+									 * number of indexes */
 } vacuumingOptions;
 
 
@@ -111,6 +113,7 @@ main(int argc, char *argv[])
 		{"full", no_argument, NULL, 'f'},
 		{"verbose", no_argument, NULL, 'v'},
 		{"jobs", required_argument, NULL, 'j'},
+		{"parallel", optional_argument, NULL, 'P'},
 		{"maintenance-db", required_argument, NULL, 2},
 		{"analyze-in-stages", no_argument, NULL, 3},
 		{"disable-page-skipping", no_argument, NULL, 4},
@@ -140,6 +143,7 @@ main(int argc, char *argv[])
 
 	/* initialize options to all false */
 	memset(&vacopts, 0, sizeof(vacopts));
+	vacopts.parallel_workers = -1;
 
 	progname = get_progname(argv[0]);
 
@@ -147,7 +151,7 @@ main(int argc, char *argv[])
 
 	handle_help_version_opts(argc, argv, "vacuumdb", help);
 
-	while ((c = getopt_long(argc, argv, "h:p:U:wWeqd:zZFat:fvj:", long_options, &optindex)) != -1)
+	while ((c = getopt_long(argc, argv, "h:p:P::U:wWeqd:zZFat:fvj:", long_options, &optindex)) != -1)
 	{
 		switch (c)
 		{
@@ -214,6 +218,25 @@ main(int argc, char *argv[])
 					exit(1);
 				}
 				break;
+			case 'P':
+				{
+					int parallel_workers = 0;
+
+					if (optarg != NULL)
+					{
+						parallel_workers = atoi(optarg);
+						if (parallel_workers <= 0)
+						{
+							fprintf(stderr, _("%s: number of parallel workers must be at least 1\n"),
+									progname);
+							exit(1);
+						}
+					}
+
+					/* 0 means PARALLEL without an explicit parallel degree */
+					vacopts.parallel_workers = parallel_workers;
+					break;
+				}
 			case 2:
 				maintenance_db = pg_strdup(optarg);
 				break;
@@ -288,9 +311,22 @@ main(int argc, char *argv[])
 					progname, "disable-page-skipping");
 			exit(1);
 		}
+		if (vacopts.parallel_workers >= 0)
+		{
+			fprintf(stderr, _("%s: cannot use the \"%s\" option when performing only analyze\n"),
+					progname, "parallel");
+			exit(1);
+		}
 		/* allow 'and_analyze' with 'analyze_only' */
 	}
 
+	if (vacopts.full && vacopts.parallel_workers >= 0)
+	{
+		fprintf(stderr, _("%s: cannot use the \"%s\" option with the \"%s\" option\n"),
+				progname, "full", "parallel");
+		exit(1);
+	}
+
 	setup_cancel_handler();
 
 	/* Avoid opening extra connections. */
@@ -895,6 +931,16 @@ prepare_vacuum_command(PQExpBuffer sql, int serverVersion,
 				appendPQExpBuffer(sql, "%sANALYZE", sep);
 				sep = comma;
 			}
+			if (vacopts->parallel_workers > 0)
+			{
+				appendPQExpBuffer(sql, "%sPARALLEL %d", sep, vacopts->parallel_workers);
+				sep = comma;
+			}
+			if (vacopts->parallel_workers == 0)
+			{
+				appendPQExpBuffer(sql, "%sPARALLEL", sep);
+				sep = comma;
+			}
 			if (sep != paren)
 				appendPQExpBufferChar(sql, ')');
 		}
@@ -1227,6 +1273,7 @@ help(const char *progname)
 	printf(_("  -j, --jobs=NUM                  use this many concurrent connections to vacuum\n"));
 	printf(_("      --min-mxid-age=MXID_AGE     minimum multixact ID age of tables to vacuum\n"));
 	printf(_("      --min-xid-age=XID_AGE       minimum transaction ID age of tables to vacuum\n"));
+	printf(_("  -P, --parallel[=NUM]            use NUM background workers for parallel vacuum\n"));
 	printf(_("  -q, --quiet                     don't write any messages\n"));
 	printf(_("      --skip-locked               skip relations that cannot be immediately locked\n"));
 	printf(_("  -t, --table='TABLE[(COLUMNS)]'  vacuum specific table(s) only\n"));
-- 
1.8.3.1
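
As a quick cross-check against the TAP test above, this is roughly what
vacuumdb is expected to send for the new option (whole-database form shown;
with --table the relation name is appended after the option list as usual):

-- vacuumdb -P2 postgres
VACUUM (PARALLEL 2);
-- vacuumdb -P postgres
VACUUM (PARALLEL);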
