I lost track of this, sorry. Here is an updated patch responding to
review feedback.
From 5011b88902c0f316be4aa877d35b73dcd6463728 Mon Sep 17 00:00:00 2001
From: Maciek Sakrejda <[email protected]>
Date: Fri, 26 Sep 2025 17:17:41 -0700
Subject: [PATCH v2] Improve EXPLAIN docs

 - clarify fractional rows in loops
 - add some basic info on parallel execution
---
 doc/src/sgml/perform.sgml | 64 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index 5f6f1db0467..b197b48143c 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -756,13 +756,69 @@ WHERE t1.unique1 &lt; 10 AND t1.unique2 = t2.unique2;
     <literal>loops</literal> value reports the
     total number of executions of the node, and the actual time and rows
     values shown are averages per-execution.  This is done to make the numbers
-    comparable with the way that the cost estimates are shown.  Multiply by
-    the <literal>loops</literal> value to get the total time actually spent in
-    the node.  In the above example, we spent a total of 0.030 milliseconds
-    executing the index scans on <literal>tenk2</literal>.
+    comparable with the way that the cost estimates are shown.  Multiply by the
+    <literal>loops</literal> value to get the total time actually spent in the
+    node, as well as the total number of rows it processed across all
+    executions.  In the above example, we spent a total of 0.030
+    milliseconds executing the index scans on <literal>tenk2</literal>, and
+    they handled a total of 10 rows.
    </para>
 
    <para>
+    Parallel query execution can also cause nodes to be executed more than
+    once; this is likewise reflected in the <literal>loops</literal> value.
+    We can change some settings to make the planner choose a parallel plan
+    for the above query:
+   </para>
+
+<screen>
+SET min_parallel_table_scan_size = 0;
+SET parallel_tuple_cost = 0;
+SET parallel_setup_cost = 0;
+
+EXPLAIN ANALYZE SELECT *
+FROM tenk1 t1, tenk2 t2
+WHERE t1.unique1 &lt; 10 AND t1.unique2 = t2.unique2;
+                                                                QUERY PLAN                                                                
+------------------------------------------------------------------------------------------------------------------------------------------
+ Gather  (cost=4.65..70.96 rows=10 width=488) (actual time=1.161..11.655 rows=10.00 loops=1)
+   Workers Planned: 2
+   Workers Launched: 2
+   Buffers: shared hit=78 read=6
+   ->  Nested Loop  (cost=4.65..70.96 rows=4 width=488) (actual time=0.247..0.317 rows=3.33 loops=3)
+         Buffers: shared hit=78 read=6
+         ->  Parallel Bitmap Heap Scan on tenk1 t1  (cost=4.36..39.31 rows=4 width=244) (actual time=0.228..0.249 rows=3.33 loops=3)
+               Recheck Cond: (unique1 &lt; 10)
+               Heap Blocks: exact=10
+               Buffers: shared hit=54
+               ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.36 rows=10 width=0) (actual time=0.438..0.439 rows=10.00 loops=1)
+                     Index Cond: (unique1 &lt; 10)
+                     Index Searches: 1
+                     Buffers: shared hit=2
+         ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.29..7.90 rows=1 width=244) (actual time=0.016..0.017 rows=1.00 loops=10)
+               Index Cond: (unique2 = t1.unique2)
+               Index Searches: 10
+               Buffers: shared hit=24 read=6
+ Planning:
+   Buffers: shared hit=327 read=3
+ Planning Time: 4.781 ms
+ Execution Time: 11.858 ms
+(22 rows)
+</screen>
+
+   <para>
+    The parallel bitmap heap scan was executed three separate times in
+    parallel: once in the leader (since
+    <xref linkend="guc-parallel-leader-participation"/> is on by default),
+    and once in each of the two launched workers.  As with repeated
+    sequential executions, the rows and actual time values shown are
+    per-worker averages.  Multiply by the <literal>loops</literal> value to
+    get the total number of rows processed by the node across all workers;
+    here, 3.33 rows per worker times 3 workers is about 10 rows in total.
+    The total time spent in all workers can be calculated in the same way,
+    but because the workers run concurrently, that total is not the same as
+    the elapsed time spent in the node.
+   </para>
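+
+   <para>
+    When we are done with this example, the planner settings changed above
+    can be restored with <command>RESET</command>:
+   </para>
+
+<screen>
+RESET min_parallel_table_scan_size;
+RESET parallel_tuple_cost;
+RESET parallel_setup_cost;
+</screen>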
+
+   <para>
     In some cases <command>EXPLAIN ANALYZE</command> shows additional execution
     statistics beyond the plan node execution times and row counts.
     For example, Sort and Hash nodes provide extra information:
-- 
2.43.0
