Good suggestion. The results are...interesting:

test0=# SET min_parallel_index_scan_size =0;
SET
Time: 0.172 ms
test0=# EXPLAIN ANALYZE SELECT t0.id0, t1.val FROM tab0 AS t0 JOIN tab1 AS t1 ON (t0.id0 = t1.id0) WHERE t0.id0 < 5;
   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1097.91..40206.84 rows=5000 width=98) (actual time=0.362..5.565 rows=3500.00 loops=1)
   Workers Planned: 1
   Workers Launched: 1
   Buffers: shared hit=114
   ->  Nested Loop  (cost=97.91..38706.84 rows=2941 width=98) (actual time=0.034..0.479 rows=1750.00 loops=2)
         Buffers: shared hit=114
         ->  Parallel Index Only Scan using tab0_pkey on tab0 t0  (cost=0.29..4.36 rows=3 width=4) (actual time=0.008..0.009 rows=2.50 loops=2)
               Index Cond: (id0 < 5)
               Heap Fetches: 0
               Index Searches: 1
               Buffers: shared hit=3
         ->  Bitmap Heap Scan on tab1 t1  (cost=97.61..12867.78 rows=3305 width=98) (actual time=0.036..0.140 rows=700.00 loops=5)
               Recheck Cond: (t0.id0 = id0)
               Heap Blocks: exact=97
               Buffers: shared hit=111
               ->  Bitmap Index Scan on tab1_id0_hash  (cost=0.00..96.79 rows=3305 width=0) (actual time=0.030..0.030 rows=700.00 loops=5)
                     Index Cond: (id0 = t0.id0)
                     Index Searches: 5
                     Buffers: shared hit=14
 Planning:
   Buffers: shared hit=222
 Planning Time: 0.763 ms
 Execution Time: 5.716 ms
(23 rows)

Time: 7.248 ms
test0=# SET max_parallel_workers_per_gather=0;
SET
Time: 0.131 ms
test0=# EXPLAIN ANALYZE SELECT t0.id0, t1.val FROM tab0 AS t0 JOIN tab1 AS t1 ON (t0.id0 = t1.id0) WHERE t0.id0 < 5;
 QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=97.91..64508.51 rows=5000 width=98) (actual time=0.044..0.903 rows=3500.00 loops=1)
   Buffers: shared hit=113
   ->  Index Only Scan using tab0_pkey on tab0 t0  (cost=0.29..4.38 rows=5 width=4) (actual time=0.003..0.005 rows=5.00 loops=1)
         Index Cond: (id0 < 5)
         Heap Fetches: 0
         Index Searches: 1
         Buffers: shared hit=3
   ->  Bitmap Heap Scan on tab1 t1  (cost=97.61..12867.78 rows=3305 width=98) (actual time=0.028..0.132 rows=700.00 loops=5)
         Recheck Cond: (t0.id0 = id0)
         Heap Blocks: exact=97
         Buffers: shared hit=110
         ->  Bitmap Index Scan on tab1_id0_hash  (cost=0.00..96.79 rows=3305 width=0) (actual time=0.021..0.021 rows=700.00 loops=5)
               Index Cond: (id0 = t0.id0)
               Index Searches: 5
               Buffers: shared hit=13
 Planning:
   Buffers: shared hit=9
 Planning Time: 0.190 ms
 Execution Time: 1.025 ms
(19 rows)

Time: 1.459 ms
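
For what it's worth, a quick way to sanity-check the index-size theory
(just a sketch, comparing tab0_pkey against the GUC):

    SELECT pg_size_pretty(pg_relation_size('tab0_pkey'));  -- physical size of the index
    SHOW min_parallel_index_scan_size;                      -- 512kB by default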

However, disabling gather workers gets a much better plan. I can switch the child index to a btree and rerun if you think that would be significant - something like the sketch below:
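
    -- Swap the hash index on the child table for a btree on the same
    -- column (tab1_id0_hash is from the plans above; tab1_id0_btree is
    -- just an illustrative name):
    DROP INDEX tab1_id0_hash;
    CREATE INDEX tab1_id0_btree ON tab1 (id0);  -- btree is the default AM
    ANALYZE tab1;                               -- refresh planner statistics

Best wishes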

Mark

On 08/01/2026 17:14, David Rowley wrote:
> On Thu, 8 Jan 2026 at 17:03, Mark Kirkwood <[email protected]> wrote:
>> I don't think so - while the case I posted used a hash index on the
>> child table, exactly the same behaviour happens if it is a btree (I
>> probably should have mentioned that, sorry). The background is that I
>> discovered this while playing about with hash indexes... which, I must
>> say, someone has done excellent work on, as in these *particular cases*
>> they are getting me better query performance!
> Ok, it seems related to the min_parallel_index_scan_size GUC. If you
> zero that, do you get a better plan?
>
> I think the problem is that because the best form of plan for joining
> this tiny set of rows to the huge table is a parameterised nested
> loop, to parallelise that loop, you need a Parallel node on the outer
> side of the Nested Loop. If the index's size is below
> min_parallel_index_scan_size then we won't build a partial path for
> it.
>
> David

