Hi Alexander.
On 9/13/18 9:01 AM, Alexander Kuzmenkov wrote:
> While testing this patch
Thanks for the review!
> I noticed that the current implementation doesn't
> perform well when we have lots of small groups of equal values. Here is
> the execution time of index skip scan vs. unique over index scan, in ms,
> depending on the group size. The benchmark script is attached.
>
> group size      skip    unique
>          1  2,293.85    132.55
>          5    464.40    106.59
>         10    239.61    102.02
>         50     56.59     98.74
>        100     32.56    103.04
>        500      6.08     97.09
Yes, this doesn't look good. Using your test case I'm seeing that unique
is chosen when the group size is below 34, and skip above that. This is
with the standard initdb configuration; did you change anything else, or
did you force the default plan?
> So, the current implementation can lead to performance regression, and
> the choice of the plan depends on the notoriously unreliable ndistinct
> statistics.
Yes, Peter mentioned this, which I'm still looking at.
> The regression is probably because skip scan always does
> _bt_search to find the next unique tuple.
Very likely.
> I think we can improve this,
> and the skip scan can be strictly faster than index scan regardless of
> the data. As a first approximation, imagine that we somehow skipped
> equal tuples inside _bt_next instead of sending them to the parent
> Unique node. This would already be marginally faster than Unique +
> Index Scan. A more practical implementation would be to remember our
> position in the tree (that is, the BTStack returned by _bt_search) and
> use it to skip pages in bulk. This looks straightforward to implement
> for a tree that does not change, but I'm not sure how to make it work
> with concurrent modifications. Still, this looks like a worthwhile
> direction to me, because if we have a strictly faster skip scan, we can
> just use it always and not worry about our unreliable statistics. What
> do you think?
This is something to look at -- maybe there is a way to use
btpo_next/btpo_prev as well, or instead, in order to speed things up.
At the moment we just have the scan key in BTScanOpaqueData. I'll take
a look after my upcoming vacation; feel free to contribute those
changes in the meantime, of course.
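For intuition, the "skip equal tuples" idea above can be modeled outside
the B-tree entirely. Below is a toy Python comparison (all names are
illustrative, not PostgreSQL internals): the Unique + Index Scan path
steps over every tuple, while the skip path jumps to the next distinct
key, with a binary search standing in for the _bt_search descent, so its
cost grows with the number of groups rather than the number of tuples.

```python
import bisect

def unique_via_scan(tuples):
    """Model of Unique + Index Scan: visit every tuple, emit on key change."""
    out = []
    for t in tuples:
        if not out or t != out[-1]:
            out.append(t)
    return out

def unique_via_skip(tuples):
    """Model of skip scan: after emitting a key, jump past the whole group.

    bisect_right stands in for _bt_search here: an O(log n) descent to
    the first tuple strictly greater than the current key.
    """
    out = []
    pos, n = 0, len(tuples)
    while pos < n:
        key = tuples[pos]
        out.append(key)
        pos = bisect.bisect_right(tuples, key, pos)
    return out

# Five groups of three equal keys each.
data = [k for k in range(5) for _ in range(3)]
print(unique_via_scan(data))  # [0, 1, 2, 3, 4]
print(unique_via_skip(data))  # [0, 1, 2, 3, 4]
```

With large groups the skip variant does far fewer comparisons; with
groups of size 1 it degenerates to one search per tuple, which mirrors
the regression visible in the table above.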
Thanks again!
Best regards,
Jesper