Dear Team,

With reference to the ongoing conversation in message ID
c562dc2a-6e36-46f3-a5ea-cd42eebd7118, I am writing to express my interest
in contributing to the ongoing work on fixing the performance regression
related to "Adding skip scan (including MDAM style range skip scan) to
nbtree".

I tried to replicate the performance regression reported earlier in this
thread by running pgbench with the same setup (pgbench scale=1, 100
partitions, an extra index on bid, and a single-count query). I built both
the pre-skip-scan (commit 3ba2cdaa454) and post-skip-scan (commit
92fe23d93aa) versions and compared the throughput.
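
For reference, the setup was roughly the following (a sketch only; the
database name, script file name, and run duration are placeholders, and the
exact query text and invocation used earlier in the thread may differ
slightly):

pgbench -i -s 1 --partitions=100 bench
psql bench -c "CREATE INDEX ON pgbench_accounts (bid);"

-- count.sql (the single-count query)
SELECT count(*) FROM pgbench_accounts WHERE bid = 1;

pgbench -n -f count.sql -M simple   -c 1  -j 1  -T 60 bench
pgbench -n -f count.sql -M prepared -c 32 -j 32 -T 60 bench
(and similarly for the other protocol/client combinations)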

--- BEFORE (3ba2cdaa454) ---
Mode=simple   Clients=1   tps = 23890
Mode=simple   Clients=4   tps = 82791
Mode=simple   Clients=32  tps = 129877
Mode=prepared Clients=1   tps = 26404
Mode=prepared Clients=4   tps = 87116
Mode=prepared Clients=32  tps = 140881

--- AFTER (92fe23d93aa) ---
Mode=simple   Clients=1   tps = 22551
Mode=simple   Clients=4   tps = 76844
Mode=simple   Clients=32  tps = 129445
Mode=prepared Clients=1   tps = 25880
Mode=prepared Clients=4   tps = 84876
Mode=prepared Clients=32  tps = 137812

In my environment the regression is smaller than Tomas originally
observed (~5-8% vs. ~50%), but it still shows up consistently, most
clearly with the simple query protocol at 1 and 4 clients.

This suggests that the extra malloc/free activity in the skip scan code
path is indeed introducing overhead, though the impact seems to vary
depending on glibc/memory allocator behavior.
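
For anyone who wants to double-check the malloc/free angle on their own
setup, a generic way to do so is to profile one of the backends while the
benchmark is running and see whether malloc/free show up near the top of
the profile, e.g. (attach to any busy backend PID):

perf record -g -p <backend_pid> -- sleep 10
perf report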

Proposal:

- For PG18, a safe short-term fix could be to remove the unused "options"
  support function, as Peter suggested, or to replace it with a lighter
  path that avoids repeated allocations.

- Longer term, we may want to revisit skip scan memory management (e.g.,
  static allocation, a memory pool, or reducing per-call overhead) so that
  the optimization does not regress performance in micro-benchmarks; a
  rough sketch of the idea follows below.
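
To make the second point a bit more concrete, here is a generic sketch of
the kind of change I have in mind (illustration only -- this is not actual
nbtree code, and all names are made up): allocate a reusable workspace once
per scan and grow it on demand, instead of paying for a malloc/free pair on
every call.

#include <stdlib.h>

/* Hypothetical per-scan workspace, kept alive for the whole scan. */
typedef struct SkipScanWorkspace
{
    int    *ranges;         /* reusable buffer for skip-scan ranges */
    int     nallocated;     /* current capacity, in elements */
} SkipScanWorkspace;

/* Set up the workspace once, when the scan begins. */
static void
workspace_init(SkipScanWorkspace *ws, int initial_capacity)
{
    ws->ranges = malloc(initial_capacity * sizeof(int));
    ws->nallocated = initial_capacity;
}

/*
 * Called on every skip step: grow the buffer only when it is too small,
 * instead of allocating and freeing it on each call.  (Error handling
 * omitted for brevity.)
 */
static int *
workspace_get(SkipScanWorkspace *ws, int needed)
{
    if (needed > ws->nallocated)
    {
        ws->ranges = realloc(ws->ranges, needed * sizeof(int));
        ws->nallocated = needed;
    }
    return ws->ranges;
}

In the real code this would presumably hang off the per-scan state and use
the scan's memory context rather than raw malloc, but the shape of the
change would be the same.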

I am currently working on these approaches and will continue experimenting
so that I can follow up with further results and possible patches.

Regards,
Athiyaman
