Hi Peter,

On 2024-11-20 04:06, Peter Geoghegan wrote:
Hi Masahiro,

On Tue, Nov 19, 2024 at 3:30 AM Masahiro Ikeda <ikeda...@oss.nttdata.com> wrote:
Apologies for the delayed response. I've confirmed that the costing is significantly improved for multicolumn indexes in the case I provided. Thanks!
https://www.postgresql.org/message-id/TYWPR01MB10982A413E0EC4088E78C0E11B1A62%40TYWPR01MB10982.jpnprd01.prod.outlook.com

Great! I made it one of my private/internal test cases for the
costing. Your test case was quite helpful.

Attached is v15. It works through your feedback.

Importantly, v15 has a new patch which has a fix for your test.sql
case -- which is the most important outstanding problem with the patch
(and has been for a long time now). I've broken those changes out into
a separate patch because they're still experimental, and have some
known minor bugs. But it works well enough for you to assess how close
I am to satisfactorily fixing the known regressions, so it seems worth
posting quickly.

Thanks for your quick response!

IIUC, why not add it to the documentation? It would clearly help users
understand how to tune their queries using the counter, and it would
also show that the counter is not just for developers.

The documentation definitely needs more work. I have a personal TODO
item about that.

Changes to the documentation can be surprisingly contentious, so I
often work on it last, when we have the clearest picture of how to
talk about the feature. For example, Matthias said something that's
approximately the opposite of what you said about it (though I agree
with you about it).

OK, I understood.

From the perspective of consistency, wouldn't it be better to align the naming between the EXPLAIN output and pg_stat_all_indexes.idx_scan, even though the documentation states they refer to the same concept?

I personally prefer something like "search" instead of "scan", as "scan" is commonly associated with node names like Index Scan and similar terms. To maintain consistency, how about renaming pg_stat_all_indexes.idx_scan to pg_stat_all_indexes.idx_search?

I suspect that other hackers will reject that proposal on
compatibility grounds, even though it would make sense in a "green
field" situation.

Honestly, discussions about UI/UX details such as EXPLAIN ANALYZE
always tend to result in unproductive bikeshedding. What I really want
is something that will be acceptable to all parties. I don't have any
strong opinions of my own about it -- I just think that it's important
to show *something* like "Index Searches: N" to make skip scan user
friendly.

OK, I agree.

Although it's not an optimal solution and would only reduce the degree of performance degradation, how about introducing a per-page threshold for switching from skip scan to a full index scan?

The approach to fixing these regressions from the new experimental
patch doesn't need to use any such threshold. It is effective both
with simple "WHERE id2 = 100" cases (like the queries from your
test.sql test case), as well as more complicated "WHERE id2 BETWEEN 99
AND 101" inequality cases.

What do you think? The regressions are easily under 5% with the new
patch applied, which is in the noise.

That idea hadn't occurred to me. At first glance, it seems good for all cases.

Actually, test.sql now shows a performance improvement: the patched scan is almost as fast as master's seqscan. To be precise, master is 10-20% faster than the v15 patch because its seqscan runs in parallel; the v15 patch is twice as fast as the seqscan when it does not run in parallel.

However, I found that there is still a problematic case when reading your patch. IIUC, beyondskip becomes true only if the tuple's id2 is greater than the scan key value.
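To spell out my reading, here is a minimal sketch with hypothetical names (not the actual patch code):

    #include <stdbool.h>

    /*
     * Fallback condition as I understand the v15 experimental patch:
     * skipping is abandoned only once the scan sees a tuple whose
     * id2 is already greater than the scan key value.
     */
    static bool
    beyond_skip(int tuple_id2, int scankey_id2)
    {
        return tuple_id2 > scankey_id2;   /* sets beyondskip */
    }

    /*
     * With "WHERE id2 = 1000000" -- the maximum id2 in the table --
     * no tuple ever satisfies tuple_id2 > 1000000, so beyondskip
     * never becomes true and the scan keeps paying the skip-scan
     * overhead, even though it reads the same number of buffers.
     */

Therefore, the following query (see the attached test_for_v15.sql) still regresses.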

-- build without '--enable-debug' '--enable-cassert' 'CFLAGS=-O0'
-- SET skipscan_prefix_cols=32;
Index Only Scan using t_idx on public.t (cost=0.42..3535.75 rows=1 width=8) (actual time=65.767..65.770 rows=1 loops=1)
   Output: id1, id2
   Index Cond: (t.id2 = 1000000)
   Index Searches: 1
   Heap Fetches: 0
   Buffers: shared hit=2736
 Settings: effective_cache_size = '7932MB', work_mem = '15MB'
 Planning Time: 0.058 ms
 Execution Time: 65.782 ms
(9 rows)

-- SET skipscan_prefix_cols=0;
Index Only Scan using t_idx on public.t (cost=0.42..3535.75 rows=1 width=8) (actual time=17.276..17.278 rows=1 loops=1)
   Output: id1, id2
   Index Cond: (t.id2 = 1000000)
   Index Searches: 1
   Heap Fetches: 0
   Buffers: shared hit=2736
 Settings: effective_cache_size = '7932MB', work_mem = '15MB'
 Planning Time: 0.044 ms
 Execution Time: 17.290 ms
(9 rows)

I’m reporting the above result, though you might already be aware of the issue.

At the same time, we're just as capable of skipping whenever the scan
encounters a large group of skipped-prefix-column duplicates. For
example, if I take your test.sql test case and add another insert that
adds such a group (e.g., "INSERT INTO t SELECT 55, i FROM
generate_series(-1000000, 1000000) i;" ), and then re-run the query,
the scan is exactly as fast as before -- it just skips to get over the
newly inserted "55" group of tuples. Obviously, this also makes the
master branch far, far slower.

As I've said many times already, the need to be flexible and offer
robust performance in cases where skipping is either very effective or
very ineffective *during the same index scan* seems very important to
me. This "55" variant of your test.sql test case is a great example of
the kinds of cases I was thinking about.

Yes, I agree. So even when I can't think of a way to prevent a regression, or can only think of fixes that would significantly sacrifice the benefits of skip scan, I would still like to report any regression cases that I find.

There may be a better way, such as the new idea you suggested, and I think there is room for discussion about how far we should go in handling regressions -- whether we accept them or sacrifice some of skip scan's benefits to address them.

Regards,
--
Masahiro Ikeda
NTT DATA CORPORATION
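
-- test_for_v15.sql (attachment referenced above)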
DROP TABLE IF EXISTS t;
CREATE TABLE t (id1 int, id2 int);
INSERT INTO t (SELECT i, i FROM generate_series(1,1_000_000) s(i));
CREATE INDEX t_idx on t (id1, id2);
VACUUM FREEZE ANALYZE;
-- fast case
SET skipscan_prefix_cols=32;
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT * FROM t
WHERE id2 = 1; -- warm-up run to populate the cache
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT * FROM t 
WHERE id2 = 1;
SET skipscan_prefix_cols=0;
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT * FROM t 
WHERE id2 = 1;

-- slow case (the regression reported above)
SET skipscan_prefix_cols=32;
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT * FROM t
WHERE id2 = 1_000_000; -- warm-up run to populate the cache
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT * FROM t 
WHERE id2 = 1_000_000;
SET skipscan_prefix_cols=0;
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT * FROM t 
WHERE id2 = 1_000_000;
DROP INDEX t_idx;
