Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-04 Thread Shulgin, Oleksandr
On Apr 5, 2016 00:31, "Tom Lane"  wrote:
>
> Alex Shulgin  writes:
> > On Mon, Apr 4, 2016 at 1:06 AM, Tom Lane  wrote:
> >> I'm inclined to
> >> revert the aspect of 3d3bf62f3 that made us work from "d" (the observed
> >> number of distinct values in the sample) rather than stadistinct (the
> >> extrapolated estimate for the table).  On reflection I think that that's
> >> inconsistent with the theory behind the old MCV-cutoff rule.  It wouldn't
> >> matter if we were going to replace the cutoff rule with something else,
> >> but it's beginning to sound like that won't happen for 9.6.
>
> > Please feel free to do what you think is in the best interest of the people
> > preparing 9.6 for the freeze.  I'm not all that familiar with the process,
> > but I guess reverting this early might save some head-scratching if some
> > interesting interactions of this change combined with some others are going
> > to show up.
>
> I've reverted that bit; so we still have the improvements associated with
> ignoring nulls, but nothing else at the moment.  I'll set this commitfest
> item back to Waiting on Author, just in case you are able to make some
> more progress before the end of the week.

OK, though it's unlikely that I'll get productive again before next week.
Maybe someone else who has been following this thread wants to step in?

Thanks.
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-04 Thread Tom Lane
Alex Shulgin  writes:
> On Mon, Apr 4, 2016 at 1:06 AM, Tom Lane  wrote:
>> I'm inclined to
>> revert the aspect of 3d3bf62f3 that made us work from "d" (the observed
>> number of distinct values in the sample) rather than stadistinct (the
>> extrapolated estimate for the table).  On reflection I think that that's
>> inconsistent with the theory behind the old MCV-cutoff rule.  It wouldn't
>> matter if we were going to replace the cutoff rule with something else,
>> but it's beginning to sound like that won't happen for 9.6.

> Please feel free to do what you think is in the best interest of the people
> preparing 9.6 for the freeze.  I'm not all that familiar with the process,
> but I guess reverting this early might save some head-scratching if some
> interesting interactions of this change combined with some others are going
> to show up.

I've reverted that bit; so we still have the improvements associated with
ignoring nulls, but nothing else at the moment.  I'll set this commitfest
item back to Waiting on Author, just in case you are able to make some
more progress before the end of the week.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Alex Shulgin
On Mon, Apr 4, 2016 at 1:06 AM, Tom Lane  wrote:

> Alex Shulgin  writes:
> > On Sun, Apr 3, 2016 at 10:53 PM, Tom Lane  wrote:
> >> The reason for checking toowide_cnt is that if it's greater than zero,
> >> then in fact the track list does NOT include all values seen, and it's
> >> flat-out wrong to claim that it is an exhaustive set of values.
>
> > But do we really state that with the short path?
>
> Well, yes: the point of the short path is that we're hypothesizing that
> the track list contains all values in the table, and that they should all
> be considered MCVs.  Another way to think about it is that if we didn't
> have the width-limit implementation restriction, those values would appear
> in the track list, almost certainly with count 1, and so we would not
> have taken the short path anyway.
>
> Now you can argue that the long path would have accepted all the real
> track-list entries as MCVs, and have rejected all these hypothetical
> count-1 entries for too-wide values, and so the end result would be the
> same.  But that gets back to the fact that that's not necessarily how
> the long path behaves, either today or with the proposed patch.
>

 Agreed.

> The design intention was that the short path would handle columns
> with a finite, small set of values (think booleans or enums) where the
> ideal thing is that the table population is completely represented by
> the MCV list.  As soon as we have good reason to think that the MCV
> list won't represent the table contents fully, we should switch over
> to a different approach where we're trying to identify which sample
> values are common enough to justify putting in the MCV list.


This is a precious detail that I unfortunately couldn't find in any of the
sources of information available to me online. :-)

I don't have a habit of hanging out on IRC channels, but now I wonder how
likely it is that I could learn this by just asking around on #postgresql
(or by mailing you directly as the committer of this early implementation--is
that OK at all?)

Again, having this type of design decision documented in the code might
save some time and confusion for the sociopath^W introvert-type of folks
like myself. ;-)

> In that situation there are good reasons to not blindly fill the MCV list
> all the way to the stats-target limit, but to try to cut it off at the
> point of diminishing returns, so that the planner isn't saddled with
> a huge MCV list that doesn't really contain a lot of useful information.
>

This came to be my understanding also at some point.

So that's the logic behind there being two code paths with discontinuous
> behavior.  I'm not sure whether we need to try to narrow the discontinuity
> or whether it's OK to act that way and we just need to refine the decision
> rule about which path to take.  But anyway, comparisons of frequencies
> of candidate MCVs seem to me to make sense in a large-ndistinct scenario
> (where we have to be selective) but not a small-ndistinct scenario
> (where we should just take 'em all).
>

Yeah, this seems to be an open question, and a totally new one to me in
light of the recent revelations.

> >> The point of the original logic was to try to decide whether the
> >> values in the sample are significantly more common than typical values
> >> in the whole table population.  I think we may have broken that with
> >> 3d3bf62f3: you can't make any such determination if you consider only
> >> what's in the sample without trying to estimate what is not in the
> >> sample.
>
> > Speaking of rabbit holes...
> > I'm out of ideas, unfortunately.  We badly need more eyes/brainpower on
> > this, which is why I have submitted a talk proposal on this topic to
> > PGDay.ru this summer in St. Petersburg, fearing that it might be too late
> > to commit a satisfactory version during the current dev cycle for 9.6, and
> > in hope to draw at least some attention to it.
>
> If you're thinking it's too late to get more done for 9.6,


Not necessarily, but given the time constraints and some personal issues
that just keep popping up, I'm not as optimistic as I was just 24h ago.


> I'm inclined to
> revert the aspect of 3d3bf62f3 that made us work from "d" (the observed
> number of distinct values in the sample) rather than stadistinct (the
> extrapolated estimate for the table).  On reflection I think that that's
> inconsistent with the theory behind the old MCV-cutoff rule.  It wouldn't
> matter if we were going to replace the cutoff rule with something else,
> but it's beginning to sound like that won't happen for 9.6.
>

Please feel free to do what you think is in the best interest of the people
preparing 9.6 for the freeze.  I'm not all that familiar with the process,
but I guess reverting this early might save some head-scratching if some
interesting interactions of this change combined with some others are going
to show up.

Cheers!
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Tom Lane
Alex Shulgin  writes:
> On Sun, Apr 3, 2016 at 10:53 PM, Tom Lane  wrote:
>> The reason for checking toowide_cnt is that if it's greater than zero,
>> then in fact the track list does NOT include all values seen, and it's
>> flat-out wrong to claim that it is an exhaustive set of values.

> But do we really state that with the short path?

Well, yes: the point of the short path is that we're hypothesizing that
the track list contains all values in the table, and that they should all
be considered MCVs.  Another way to think about it is that if we didn't
have the width-limit implementation restriction, those values would appear
in the track list, almost certainly with count 1, and so we would not
have taken the short path anyway.

Now you can argue that the long path would have accepted all the real
track-list entries as MCVs, and have rejected all these hypothetical
count-1 entries for too-wide values, and so the end result would be the
same.  But that gets back to the fact that that's not necessarily how
the long path behaves, either today or with the proposed patch.

The design intention was that the short path would handle columns
with a finite, small set of values (think booleans or enums) where the
ideal thing is that the table population is completely represented by
the MCV list.  As soon as we have good reason to think that the MCV
list won't represent the table contents fully, we should switch over
to a different approach where we're trying to identify which sample
values are common enough to justify putting in the MCV list.  In that
situation there are good reasons to not blindly fill the MCV list all
the way to the stats-target limit, but to try to cut it off at the
point of diminishing returns, so that the planner isn't saddled with
a huge MCV list that doesn't really contain a lot of useful information.

So that's the logic behind there being two code paths with discontinuous
behavior.  I'm not sure whether we need to try to narrow the discontinuity
or whether it's OK to act that way and we just need to refine the decision
rule about which path to take.  But anyway, comparisons of frequencies
of candidate MCVs seem to me to make sense in a large-ndistinct scenario
(where we have to be selective) but not a small-ndistinct scenario
(where we should just take 'em all).
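To restate the structure under discussion, a simplified sketch of the two
paths (the short-path condition is the one quoted later in this thread; the
long-path helper is a name invented here for illustration, not a function in
analyze.c):

/*
 * Illustrative sketch only, not the analyze.c source.  The short path is
 * taken when we believe the sample has seen every value in the column and
 * they all fit, so the whole track list becomes the MCV list; otherwise a
 * cutoff rule decides how many of the tracked values are "common enough".
 */
if (track_cnt == ndistinct && toowide_cnt == 0 &&
    stats->stadistinct > 0 &&
    track_cnt <= num_mcv)
{
    /* short path: track list includes all values seen, and all will fit */
    num_mcv = track_cnt;
}
else
{
    /* long path: keep only the values judged significantly more common
     * than the ones we omit (hypothetical helper, for illustration) */
    num_mcv = apply_mcv_cutoff_rule(track, track_cnt, num_mcv);
}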

>> The point of the original logic was to try to decide whether the
>> values in the sample are significantly more common than typical values
>> in the whole table population.  I think we may have broken that with
>> 3d3bf62f3: you can't make any such determination if you consider only
>> what's in the sample without trying to estimate what is not in the
>> sample.

> Speaking of rabbit holes...
> I'm out of ideas, unfortunately.  We badly need more eyes/brainpower on
> this, which is why I have submitted a talk proposal on this topic to
> PGDay.ru this summer in St. Petersburg, fearing that it might be too late
> to commit a satisfactory version during the current dev cycle for 9.6, and
> in hope to draw at least some attention to it.

If you're thinking it's too late to get more done for 9.6, I'm inclined to
revert the aspect of 3d3bf62f3 that made us work from "d" (the observed
number of distinct values in the sample) rather than stadistinct (the
extrapolated estimate for the table).  On reflection I think that that's
inconsistent with the theory behind the old MCV-cutoff rule.  It wouldn't
matter if we were going to replace the cutoff rule with something else,
but it's beginning to sound like that won't happen for 9.6.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Alex Shulgin
On Sun, Apr 3, 2016 at 10:53 PM, Tom Lane  wrote:

> Alex Shulgin  writes:
> > This recalled observation can now also explain to me why in the regression
> > you've seen, the short path was not followed: my bet is that stadistinct
> > appeared negative.
>
> Yes, I think that's right.  The table under consideration had just a few
> live rows (I think 3), so that even though there was only one value in
> the sample, the "if (stats->stadistinct > 0.1 * totalrows)" condition
> succeeded.
>

Yeah, this part of the logic can be really surprising at times.

> > Given that we change the logic in the complex path substantially, the
> > assumptions that lead to the "Take all MCVs" condition above might no
> > longer hold, and I see it as a pretty compelling argument to remove the
> > extra checks, thus keeping the only one: track_cnt == ndistinct.  This
> > should also bring the patch's effect closer to the thread's topic,
> > which is "More stable query plans".
>
> The reason for checking toowide_cnt is that if it's greater than zero,
> then in fact the track list does NOT include all values seen, and it's
> flat-out wrong to claim that it is an exhaustive set of values.
>

But do we really state that with the short path?

If there were only one too-wide value, it might be the only thing left
for the histogram in the end and would be discarded anyway, so from the
end-result perspective there is no difference.

If there are multiple too-wide values, they will be effectively discarded
by the histogram calculation part as well, so again no difference from the
perspective of the end result.

> The reason for the track_cnt <= num_mcv condition is that if that's not
> true, the track list has to be trimmed to meet the statistics target.
> Again, that's not optional.
>

Yes, but we only need this check in compute_distinct_stats(), and we are
talking about compute_scalar_stats() now, where track_cnt is always less
than or equal to num_mcv (again, please see the bottom of the
thread-starting email) -- or is my analysis broken on this part?

> I think the reasoning for having the stats->stadistinct > 0 test in there
> was that if we'd set it negative, then we think that the set of distinct
> values will grow --- which again implies that the set of values actually
> seen should not be considered exhaustive.


This is actually very neat.  So the idea here, as I get it, is that if we
have enough distinct values to suspect that more unique ones will be added
later as the table grows (which is a natural tendency for most tables
anyway), then by the time the statistics we produce are actually used by
the planner, it is likely that the MCV list no longer covers all the
distinct values, right?

I would *love* to see that documented in code comments at the least.
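For reference, the convention being alluded to is that a positive stadistinct
stores an absolute number of distinct values, while a negative one stores
minus the fraction of rows that are distinct, so the estimate scales as the
table grows.  A minimal sketch of how such a stored value is resolved,
assuming that convention (an illustration, not a quote from the sources):

/* Sketch only: resolving a stored stadistinct against the current row
 * count.  A positive value is an absolute count of distinct values; a
 * negative value is -(distinct fraction), e.g. -1 means all rows are
 * unique; 0 means "unknown". */
static double
resolve_ndistinct(double stadistinct, double reltuples)
{
    if (stadistinct > 0.0)
        return stadistinct;
    if (stadistinct < 0.0)
        return -stadistinct * reltuples;
    return 0.0;
}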

> Of course, with a table as
> small as that regression-test example, we have little evidence to support
> either that conclusion or its opposite.
>

I think it might be possible to record historical ndistinct values between
ANALYZE runs and use those as better evidence that the number of distinct
values is actually growing, rather than basing that decision on the
hard-coded 10% limit rule.  What do you think?

We do not support migration of the pg_statistic system table during major
version upgrades (yet), so if we somehow achieve what I've just described,
it might not even be a compatibility-breaking change.

> It's possible that what we should do to eliminate the sudden change
> of behaviors is to drop the "track list includes all values seen, and all
> will fit" code path entirely, and always go through the track list
> one-at-a-time.
>

That could also be an option, one that I had considered initially.  Now that
I've read your explanation of each check, I'm not so sure anymore.

> If we do, though, the currently-proposed filter rules aren't going to
> be too satisfactory: if we have a relatively small group of roughly
> equally common MCVs, this logic would reject all of them, which is
> surely not what we want.
>

Indeed. :-(


> The point of the original logic was to try to decide whether the
> values in the sample are significantly more common than typical values
> in the whole table population.  I think we may have broken that with
> 3d3bf62f3: you can't make any such determination if you consider only
> what's in the sample without trying to estimate what is not in the
> sample.
>

Speaking of rabbit holes...

I'm out of ideas, unfortunately.  We badly need more eyes/brainpower on
this, which is why I have submitted a talk proposal on this topic to
PGDay.ru this summer in St. Petersburg, fearing that it might be too late
to commit a satisfactory version during the current dev cycle for 9.6, and
in hope to draw at least some attention to it.

Regards,
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Tom Lane
Alex Shulgin  writes:
> This recalled observation can now also explain to me why in the regression
> you've seen, the short path was not followed: my bet is that stadistinct
> appeared negative.

Yes, I think that's right.  The table under consideration had just a few
live rows (I think 3), so that even though there was only one value in
the sample, the "if (stats->stadistinct > 0.1 * totalrows)" condition
succeeded.

> Given that we change the logic in the complex path substantially, the
> assumptions that lead to the "Take all MCVs" condition above might no
> longer hold, and I see it as a pretty compelling argument to remove the
> extra checks, thus keeping the only one: track_cnt == ndistinct.  This
> should also bring the patch's effect closer to the thread's topic,
> which is "More stable query plans".

The reason for checking toowide_cnt is that if it's greater than zero,
then in fact the track list does NOT include all values seen, and it's
flat-out wrong to claim that it is an exhaustive set of values.

The reason for the track_cnt <= num_mcv condition is that if that's not
true, the track list has to be trimmed to meet the statistics target.
Again, that's not optional.

I think the reasoning for having the stats->stadistinct > 0 test in there
was that if we'd set it negative, then we think that the set of distinct
values will grow --- which again implies that the set of values actually
seen should not be considered exhaustive.  Of course, with a table as
small as that regression-test example, we have little evidence to support
either that conclusion or its opposite.

It's possible that what we should do to eliminate the sudden change
of behaviors is to drop the "track list includes all values seen, and all
will fit" code path entirely, and always go through the track list
one-at-a-time.

If we do, though, the currently-proposed filter rules aren't going to
be too satisfactory: if we have a relatively small group of roughly
equally common MCVs, this logic would reject all of them, which is
surely not what we want.

The point of the original logic was to try to decide whether the
values in the sample are significantly more common than typical values
in the whole table population.  I think we may have broken that with
3d3bf62f3: you can't make any such determination if you consider only
what's in the sample without trying to estimate what is not in the
sample.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Alex Shulgin
On Sun, Apr 3, 2016 at 8:24 AM, Alex Shulgin  wrote:
>
> On Sun, Apr 3, 2016 at 7:49 AM, Tom Lane  wrote:
>>
>> Alex Shulgin  writes:
>> > On Sun, Apr 3, 2016 at 7:18 AM, Tom Lane  wrote:
>> >> Well, we have to do *something* with the last (possibly only) value.
>> >> Neither "include always" nor "omit always" seem sane to me.  What other
>> >> decision rule do you want there?
>>
>> > Well, what implies that the last value is somehow special?  I would think
>> > we should just do with it whatever we do with the rest of the candidate
>> > MCVs.
>>
>> Sure, but both of the proposed decision rules break down when there are no
>> values after the one under consideration.  We need to do something sane
>> there.
>
>
> Hm... There is indeed a case where it would be beneficial to have at
> least 2 values in the histogram (to have at least the low/high bounds for
> inequality comparison selectivity) instead of taking both into the MCV list
> or taking one into the MCVs and having to discard the other.

I was thinking about this in the background...

The popularity of the last sample value (when it is not the only one) can be:

a) As high as 50%, in case we have an even division between the only two
values in the sample.  Quite obviously, we should take this one into the
MCV list (well, unless the user has specified a statistics_target of 1 for
some bizarre reason, but that should not be our problem).

b) As low as 2/(statistics_target*300), which, with the target set to the
maximum allowed value of 10,000, amounts to 2/(10,000*300) = 1 in
1,500,000 (the arithmetic is sketched right after this list).  This seems
like a really tiny number, but if your table has some tens of billions of
rows, for example, seeing such a value at least twice means that it might
correspond to some thousands of rows in the table, whereas seeing a value
only once might mean just that: it's a unique value.

In this case, putting such a duplicate value in the MCV list will allow a
much better selectivity estimate for equality comparison, as I've mentioned
earlier.  It also allows for a better estimate with inequality comparison,
since MCVs are also consulted in that case.  I see no good reason to
discard such a value.

c) Or anything in between the above figures.
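To make the arithmetic in point b) concrete, a tiny sketch of the numbers
involved (illustrative only; the 300-rows-per-target-unit sample size is the
one ANALYZE uses):

/* Sketch: the lowest sample frequency a duplicated value can have.
 * ANALYZE samples 300 * statistics_target rows, and a value must be seen
 * at least twice to be tracked, so with the maximum target of 10,000 the
 * floor is 2 / 3,000,000, i.e. 1 in 1,500,000. */
#include <stdio.h>

int
main(void)
{
    const double rows_per_target = 300.0;
    const double statistics_target = 10000.0;
    double      sample_rows = rows_per_target * statistics_target;
    double      min_dup_freq = 2.0 / sample_rows;

    printf("sample rows: %.0f, minimum duplicated-value frequency: 1 in %.0f\n",
           sample_rows, 1.0 / min_dup_freq);
    return 0;
}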

In my opinion that amounts to "include always" being the sane option.  Do
you see anything else as a problem here?

> Obviously, we need a fresh idea on how to handle this.

On reflection, the case where we have a duplicate value in the track list
which is not followed by any other sample should be covered by the short
path, where we put all the tracked values in the MCV list, so there should
be no point in even considering all of the above!

But the exact short path condition is formulated like this:

if (track_cnt == ndistinct && toowide_cnt == 0 &&
stats->stadistinct > 0 &&
track_cnt <= num_mcv)
{
/* Track list includes all values seen, and all will fit */

So the execution path here additionally depends on two factors: whether
we've seen at least one too-wide sample, and whether the distinct estimate
came out higher than 10% of the estimated total table size (which is yet
another arbitrary limit, but that's not in the scope of this patch).

I've been puzzled by these conditions a lot, as I mentioned in the last
section of this thread's starting email, and I could not find anything that
would hint at why they exist -- not in the documentation, the code comments,
or the emails on hackers leading to the introduction of analyze.c in the
form we know it today.  Probably we will never know, unless Tom still has
some notes on this topic from 15 years ago. ;-)

This recalled observation can now also explain to me why in the regression
you've seen, the short path was not followed: my bet is that stadistinct
appeared negative.

Given that we change the logic in the complex path substantially, the
assumptions that lead to the "Take all MCVs" condition above might no
longer hold, and I see it as a pretty compelling argument to remove the
extra checks, thus keeping the only one: track_cnt == ndistinct.  This
should also bring the patch's effect closer to the thread's topic,
which is "More stable query plans".

Regards,
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Alex Shulgin
On Sun, Apr 3, 2016, 18:40 Tom Lane  wrote:

> Alex Shulgin  writes:
>
> > Well, if it's the only value it will be accepted simply because we are
> > checking that special case already and don't even bother to loop through
> > the track list.
>
> That was demonstrably not the case in the failing regression test.
> I forget what aspect of the test case allowed it to get past the short
> circuit, but it definitely got into the scan-the-track-list code.
>

Hm, I'll have to see that for myself, probably there was something more to
it.

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Tom Lane
Alex Shulgin  writes:
> On Sun, Apr 3, 2016 at 7:49 AM, Tom Lane  wrote:
>> If there is only one value, it will have 100% of the samples, so it would
>> get included under just about any decision rule (other than "more than
>> 100% of this value plus following values").  I don't think making sure
>> this case works is sufficient to get us to a reasonable rule --- it's
>> a necessary case, but not a sufficient case.

> Well, if it's the only value it will be accepted simply because we are
> checking that special case already and don't even bother to loop through
> the track list.

That was demonstrably not the case in the failing regression test.
I forget what aspect of the test case allowed it to get past the short
circuit, but it definitely got into the scan-the-track-list code.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-03 Thread Alex Shulgin
On Sun, Apr 3, 2016 at 7:49 AM, Tom Lane  wrote:

> Alex Shulgin  writes:
> > On Sun, Apr 3, 2016 at 7:18 AM, Tom Lane  wrote:
> >> Well, we have to do *something* with the last (possibly only) value.
> >> Neither "include always" nor "omit always" seem sane to me.  What other
> >> decision rule do you want there?
>
> > Well, what implies that the last value is somehow special?  I would think
> > we should just do with it whatever we do with the rest of the candidate
> > MCVs.
>
> Sure, but both of the proposed decision rules break down when there are no
> values after the one under consideration.  We need to do something sane
> there.
>

Hm... There is indeed a case where it would be beneficial to have at least
2 values in the histogram (to have at least the low/high bounds for
inequality comparison selectivity) instead of taking both into the MCV list
or taking one into the MCVs and having to discard the other.

Obviously, we need a fresh idea on how to handle this.

> For "the only value" case: we cannot build a histogram out of a single
> > value, so omitting it from MCVs is not a good strategy, ISTM.
> > From my point of view that amounts to "include always".
>
> If there is only one value, it will have 100% of the samples, so it would
> get included under just about any decision rule (other than "more than
> 100% of this value plus following values").  I don't think making sure
> this case works is sufficient to get us to a reasonable rule --- it's
> a necessary case, but not a sufficient case.
>

Well, if it's the only value it will be accepted simply because we are
checking that special case already and don't even bother to loop through
the track list.

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-02 Thread Tom Lane
Alex Shulgin  writes:
> On Sun, Apr 3, 2016 at 7:18 AM, Tom Lane  wrote:
>> Well, we have to do *something* with the last (possibly only) value.
>> Neither "include always" nor "omit always" seem sane to me.  What other
>> decision rule do you want there?

> Well, what implies that the last value is somehow special?  I would think
> we should just do with it whatever we do with the rest of the candidate
> MCVs.

Sure, but both of the proposed decision rules break down when there are no
values after the one under consideration.  We need to do something sane
there.

> For "the only value" case: we cannot build a histogram out of a single
> value, so omitting it from MCVs is not a good strategy, ISTM.
> From my point of view that amounts to "include always".

If there is only one value, it will have 100% of the samples, so it would
get included under just about any decision rule (other than "more than
100% of this value plus following values").  I don't think making sure
this case works is sufficient to get us to a reasonable rule --- it's
a necessary case, but not a sufficient case.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-02 Thread Alex Shulgin
On Sun, Apr 3, 2016 at 7:18 AM, Tom Lane  wrote:

> Alex Shulgin  writes:
> > On Sun, Apr 3, 2016 at 3:43 AM, Alex Shulgin  wrote:
> >> I'm not sure yet about the 1% rule for the last value, but would also love
> >> to see if we can avoid the arbitrary limit here.  What happens with a last
> >> value which is less than 1% popular in the current code anyway?
>
> > Now that I think about it, I don't really believe this arbitrary heuristic
> > is any good either, sorry.
>
> Yeah, it was just a placeholder to produce a working patch.
>
> Maybe we could base this cutoff on the stats target for the column?
> That is, "1%" would be the right number if stats target is 100,
> otherwise scale appropriately.
>
> > What was your motivation to introduce some limit at the bottom anyway?
>
> Well, we have to do *something* with the last (possibly only) value.
> Neither "include always" nor "omit always" seem sane to me.  What other
> decision rule do you want there?
>

Well, what implies that the last value is somehow special?  I would think
we should just do with it whatever we do with the rest of the candidate
MCVs.

For "the only value" case: we cannot build a histogram out of a single
value, so omitting it from MCVs is not a good strategy, ISTM.

From my point of view that amounts to "include always".  What problems do
you see with this approach exactly?

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-02 Thread Tom Lane
Alex Shulgin  writes:
> On Sun, Apr 3, 2016 at 3:43 AM, Alex Shulgin  wrote:
>> I'm not sure yet about the 1% rule for the last value, but would also love
>> to see if we can avoid the arbitrary limit here.  What happens with a last
>> value which is less than 1% popular in the current code anyway?

> Now that I think about it, I don't really believe this arbitrary heuristic
> is any good either, sorry.

Yeah, it was just a placeholder to produce a working patch.

Maybe we could base this cutoff on the stats target for the column?
That is, "1%" would be the right number if stats target is 100,
otherwise scale appropriately.

> What was your motivation to introduce some limit at the bottom anyway?

Well, we have to do *something* with the last (possibly only) value.
Neither "include always" nor "omit always" seem sane to me.  What other
decision rule do you want there?

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-02 Thread Alex Shulgin
On Sun, Apr 3, 2016 at 3:43 AM, Alex Shulgin  wrote:

>
> I'm not sure yet about the 1% rule for the last value, but would also love
> to see if we can avoid the arbitrary limit here.  What happens with a last
> value which is less than 1% popular in the current code anyway?
>

Tom,

Now that I think about it, I don't really believe this arbitrary heuristic
is any good either, sorry.  What if you have a value that is just a bit
under 1% popular, but is used in 50% of your queries in a WHERE clause
with an equality comparison?  Without this value in the MCV list the planner
will likely use a SeqScan instead of an IndexScan that might be more
appropriate here.  I think we are much better off if we don't touch this
aspect of the current code.

What was your motivation for introducing some limit at the bottom anyway?  If
it was to prevent accidental division by zero, then an explicit check that the
denominator is not 0 seems to me like a better safeguard than this.

Regards.
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-02 Thread Alex Shulgin
On Sat, Apr 2, 2016 at 8:57 PM, Shulgin, Oleksandr <
oleksandr.shul...@zalando.de> wrote:
> On Apr 2, 2016 18:38, "Tom Lane"  wrote:
>
>> I did not like the fact that the compute_scalar_stats logic
>> would allow absolutely anything into the MCV list once num_hist falls
>> below 2. I think it's important that we continue to reject values that
>> are only seen once in the sample, because there's no very good reason to
>> think that they are MCVs and not just infrequent values that by luck
>> appeared in the sample.
>
> In my understanding we only put a value in the track list if we've seen it
> at least twice, no?

This is actually the case for compute_scalar_stats, but not for
compute_distinct_stats.  In the latter case we can still have
track[i].count == 1, but we can also break out of the loop if we see the
first tracked item like that.

>> Before I noticed the regression failure, I'd been thinking that maybe it'd
>> be better if the decision rule were not "at least 100+x% of the average
>> frequency of this value and later ones", but "at least 100+x% of the
>> average frequency of values after this one".
>
> Hm, sounds pretty similar to what I wanted to achieve, but better
> formalized.
>
>> With that formulation, we're
>> not constrained as to the range of x.  Now, if there are *no* values after
>> this one, then this way needs an explicit special case in order not to
>> compute 0/0; but the preceding para shows that we need a special case for
>> the last value anyway.
>>
>> So, attached is a patch rewritten along these lines.  I used 50% rather
>> than 25% as the new cutoff percentage --- obviously it should be higher
>> in this formulation than before, but I have no idea if that particular
>> number is good or we should use something else.  Also, the rule for the
>> last value is "at least 1% of the non-null samples".  That's a pure guess
>> as well.
>>
>> I do not have any good corpuses of data to try this on.  Can folks who
>> have been following this thread try it on their data and see how it
>> does?  Also please try some other multipliers besides 1.5, so we can
>> get a feeling for where that cutoff should be placed.
>
> Expect me to run it on my pet db early next week. :-)

I was trying to come up with some examples where 50% could be a good or a
bad choice, and then I noticed that we might be able to turn it the other
way round: instead of inventing an arbitrary limit on the MCV frequency we
could use the histogram as the criterion for a candidate MCV to be
considered "common enough".  If we can prove that the value would produce
duplicates in the histogram, we should rather put it in the MCV list
(unless the list is already fully occupied, in which case we can't do anything).

A value is guaranteed to produce a duplicate if it has appeared at least
2*hfreq+1 times in the sample (hfreq from your version of the patch, which
is recalculated on every loop iteration).  I could produce an updated patch
on Monday or anyone else following this discussion should be able to do
that.
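A rough sketch of the test being proposed (illustrative only; hfreq and the
function name are assumptions, not code from any posted patch):

/* Sketch: if each histogram bin would cover hfreq sample rows, a value
 * occupying at least 2*hfreq + 1 consecutive positions in the sorted
 * sample must straddle two bin boundaries and would therefore appear as
 * a duplicated histogram bound. */
#include <stdbool.h>

static bool
would_duplicate_histogram_bound(int value_count, int hfreq)
{
    return value_count >= 2 * hfreq + 1;
}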

This approach would be a huge win in my opinion, because this way we can
avoid all the arbitrariness of that .25 / .50 multiplier.  Otherwise there
might be (valid) complaints that for some data .40 (or .60) is a better
fit, but we would have already hard-coded something and there would be no
easy way to improve the situation for some users without breaking it for the
rest (unless we introduce a per-attribute configurable parameter like
statistics_target for this multiplier, which I'd like to avoid even
thinking about ;-))

While we don't (well, can't) build a histogram in the
compute_distinct_stats variant, we could also apply the above mind trick
there, for the same reason and to make the output of both functions more
consistent (and to reduce the maintenance burden between the variants).  And
anyway, it would be rather surprising if, depending on the presence of an
order operator for the type, the resulting MCV lists after ANALYZE were
different (I mean not only due to the random nature of the sample).

I'm not sure yet about the 1% rule for the last value, but would also love
to see if we can avoid the arbitrary limit here.  What happens with a last
value which is less than 1% popular in the current code anyway?

Cheers!
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-02 Thread Shulgin, Oleksandr
On Apr 2, 2016 18:38, "Tom Lane"  wrote:
>
> "Shulgin, Oleksandr"  writes:
> > On Apr 1, 2016 23:14, "Tom Lane"  wrote:
> >> Haven't looked at 0002 yet.
>
> > [crosses fingers] hope you'll have a chance to do that before feature
> > freeze for 9.6
>
> I studied this patch for awhile after rebasing it onto yesterday's
> commits.

Fantastic! I could not hope for a better reply :-)

> I did not like the fact that the compute_scalar_stats logic
> would allow absolutely anything into the MCV list once num_hist falls
> below 2. I think it's important that we continue to reject values that
> are only seen once in the sample, because there's no very good reason to
> think that they are MCVs and not just infrequent values that by luck
> appeared in the sample.

In my understanding we only put a value in the track list if we've seen it
at least twice, no?

> However, after I rearranged the tests there so
> that "if (num_hist >= 2)" only controlled whether to apply the 1/K limit,
> one of the regression tests started to fail:

Uh-oh.

> there's a place in
> rowsecurity.sql that expects that if a column contains nothing but several
> instances of a single value, that value will be recorded as a lone MCV.
> Now this isn't a particularly essential thing for that test, but it still
> seems like a good property for ANALYZE to have.

No objection here.

> The reason it's failing,
> of course, is that the test as written cannot possibly accept the last
> (or only) value.

Yeah, this I would expect from such a change.

> Before I noticed the regression failure, I'd been thinking that maybe it'd
> be better if the decision rule were not "at least 100+x% of the average
> frequency of this value and later ones", but "at least 100+x% of the
> average frequency of values after this one".

Hm, sounds pretty similar to what I wanted to achieve, but better
formalized.

> With that formulation, we're
> not constrained as to the range of x.  Now, if there are *no* values after
> this one, then this way needs an explicit special case in order not to
> compute 0/0; but the preceding para shows that we need a special case for
> the last value anyway.
>
> So, attached is a patch rewritten along these lines.  I used 50% rather
> than 25% as the new cutoff percentage --- obviously it should be higher
> in this formulation than before, but I have no idea if that particular
> number is good or we should use something else.  Also, the rule for the
> last value is "at least 1% of the non-null samples".  That's a pure guess
> as well.
>
> I do not have any good corpuses of data to try this on.  Can folks who
> have been following this thread try it on their data and see how it
> does?  Also please try some other multipliers besides 1.5, so we can
> get a feeling for where that cutoff should be placed.

Expect me to run it on my pet db early next week. :-)

Many thanks!
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-02 Thread Tom Lane
"Shulgin, Oleksandr"  writes:
> On Apr 1, 2016 23:14, "Tom Lane"  wrote:
>> Haven't looked at 0002 yet.

> [crosses fingers] hope you'll have a chance to do that before feature
> freeze for 9.6

I studied this patch for awhile after rebasing it onto yesterday's
commits.  I did not like the fact that the compute_scalar_stats logic
would allow absolutely anything into the MCV list once num_hist falls
below 2.  I think it's important that we continue to reject values that
are only seen once in the sample, because there's no very good reason to
think that they are MCVs and not just infrequent values that by luck
appeared in the sample.  However, after I rearranged the tests there so
that "if (num_hist >= 2)" only controlled whether to apply the 1/K limit,
one of the regression tests started to fail: there's a place in
rowsecurity.sql that expects that if a column contains nothing but several
instances of a single value, that value will be recorded as a lone MCV.
Now this isn't a particularly essential thing for that test, but it still
seems like a good property for ANALYZE to have.  The reason it's failing,
of course, is that the test as written cannot possibly accept the last
(or only) value.

Before I noticed the regression failure, I'd been thinking that maybe it'd
be better if the decision rule were not "at least 100+x% of the average
frequency of this value and later ones", but "at least 100+x% of the
average frequency of values after this one".  With that formulation, we're
not constrained as to the range of x.  Now, if there are *no* values after
this one, then this way needs an explicit special case in order not to
compute 0/0; but the preceding para shows that we need a special case for
the last value anyway.

So, attached is a patch rewritten along these lines.  I used 50% rather
than 25% as the new cutoff percentage --- obviously it should be higher
in this formulation than before, but I have no idea if that particular
number is good or we should use something else.  Also, the rule for the
last value is "at least 1% of the non-null samples".  That's a pure guess
as well.
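A simplified sketch of the rule described above (illustrative only, not the
attached patch; the bookkeeping parameters and the function name are
assumptions):

/* Sketch of the proposed cutoff test for a candidate MCV.  count_after is
 * the number of sample rows belonging to values ranked after this one,
 * ndistinct_after is how many distinct values they represent, and
 * nonnull_cnt is the number of non-null sample rows.  Accept the value if
 * it is at least 50% more common than the average of the values after it;
 * for the very last value, require more than one occurrence and at least
 * 1% of the non-null samples. */
#include <stdbool.h>

static bool
candidate_is_common_enough(int this_count, int count_after,
                           int ndistinct_after, int nonnull_cnt)
{
    if (ndistinct_after > 0)
    {
        double  avgcount = (double) count_after / ndistinct_after;

        return this_count >= avgcount * 1.5;
    }

    return this_count > 1 && this_count >= nonnull_cnt * 0.01;
}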

I do not have any good corpuses of data to try this on.  Can folks who
have been following this thread try it on their data and see how it
does?  Also please try some other multipliers besides 1.5, so we can
get a feeling for where that cutoff should be placed.

regards, tom lane

diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 44a4b3f..a2c606b 100644
*** a/src/backend/commands/analyze.c
--- b/src/backend/commands/analyze.c
*** compute_distinct_stats(VacAttrStatsP sta
*** 2120,2128 
  		 * we are able to generate a complete MCV list (all the values in the
  		 * sample will fit, and we think these are all the ones in the table),
  		 * then do so.  Otherwise, store only those values that are
! 		 * significantly more common than the (estimated) average. We set the
! 		 * threshold rather arbitrarily at 25% more than average, with at
! 		 * least 2 instances in the sample.
  		 */
  		if (track_cnt < track_max && toowide_cnt == 0 &&
  			stats->stadistinct > 0 &&
--- 2120,2138 
  		 * we are able to generate a complete MCV list (all the values in the
  		 * sample will fit, and we think these are all the ones in the table),
  		 * then do so.  Otherwise, store only those values that are
! 		 * significantly more common than the ones we omit.  We determine that
! 		 * by considering the values in frequency order, and accepting each
! 		 * one if it is at least 50% more common than the average among the
! 		 * values after it.  The 50% threshold is somewhat arbitrary.
! 		 *
! 		 * Note that the 50% rule will never accept a value with count 1,
! 		 * since all the values have count at least 1; this is a property we
! 		 * desire, since there's no very good reason to assume that a
! 		 * single-occurrence value is an MCV and not just a random non-MCV.
! 		 *
! 		 * We need a special rule for the very last value.  If we get to it,
! 		 * we'll accept it if it's at least 1% of the non-null samples and has
! 		 * count more than 1.
  		 */
  		if (track_cnt < track_max && toowide_cnt == 0 &&
  			stats->stadistinct > 0 &&
*** compute_distinct_stats(VacAttrStatsP sta
*** 2133,2153 
  		}
  		else
  		{
! 			/* d here is the same as d in the Haas-Stokes formula */
  			int			d = nonnull_cnt - summultiple + nmultiple;
! 			double		avgcount,
! 		mincount;
  
! 			/* estimate # occurrences in sample of a typical nonnull value */
! 			avgcount = (double) nonnull_cnt / (double) d;
! 			/* set minimum threshold count to store a value */
! 			mincount = avgcount * 1.25;
! 			if (mincount < 2)
! mincount = 2;
  			if (num_mcv > track_cnt)
  num_mcv = track_cnt;
  			for (i = 0; i < num_mcv; i++)
  			{
  if (track[i].count < mincount)
  {
  	num_mcv = i;
--- 2143,2181 
  		}
  	

Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-01 Thread Shulgin, Oleksandr
On Apr 1, 2016 23:14, "Tom Lane"  wrote:
>
> "Shulgin, Oleksandr"  writes:
> > Alright.  I'm attaching the latest version of this patch split in two
> > parts: the first one is NULLs-related bugfix and the second is the
> > "improvement" part, which applies on top of the first one.
>
> I've applied the first of these patches,

Great news, thank you!

> broken into two parts first
> because it seemed like there were two issues and second because Tomas
> deserved primary credit for one part, ie realizing we were using the
> Haas-Stokes formula wrong.
>
> As for the other part, I committed it with one non-cosmetic change:
> I do not think it is right to omit "too wide" values when considering
> the threshold for MCVs.  As submitted, the patch was inconsistent on
> that point anyway since it did it differently in compute_distinct_stats
> and compute_scalar_stats.  But the larger picture here is that we define
> the MCV population to exclude nulls, so it's reasonable to consider a
> value as an MCV even if it's greatly outnumbered by nulls.  There is
> no such exclusion for "too wide" values; those things are just an
> implementation limitation in analyze.c, not something that is part of
> the pg_statistic definition.  If there are a lot of "too wide" values
> in the sample, we don't know whether any of them are duplicates, but
> we do know that the frequencies of the normal-width values have to be
> discounted appropriately.

Okay.

> Haven't looked at 0002 yet.

[crosses fingers] hope you'll have a chance to do that before feature
freeze for 9.6…

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-04-01 Thread Tom Lane
"Shulgin, Oleksandr"  writes:
> Alright.  I'm attaching the latest version of this patch split in two
> parts: the first one is NULLs-related bugfix and the second is the
> "improvement" part, which applies on top of the first one.

I've applied the first of these patches, broken into two parts first
because it seemed like there were two issues and second because Tomas
deserved primary credit for one part, ie realizing we were using the
Haas-Stokes formula wrong.
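For context, the Haas-Stokes ("Duj1") estimator described in the analyze.c
comments is n*d / (n - f1 + f1*n/N), and the NULLs-related fix discussed in
this thread amounts to subtracting the null counts before applying it.  A
sketch of the formula (an illustration, not the committed code):

/* Sketch of the Duj1 estimator: n is the number of (non-null) sample
 * rows, N the estimated (non-null) total row count, d the number of
 * distinct values seen in the sample, and f1 how many of them were seen
 * exactly once. */
static double
estimate_ndistinct(double n, double N, int d, int f1)
{
    double  numer = n * (double) d;
    double  denom = (n - f1) + (double) f1 * n / N;

    return numer / denom;
}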

As for the other part, I committed it with one non-cosmetic change:
I do not think it is right to omit "too wide" values when considering
the threshold for MCVs.  As submitted, the patch was inconsistent on
that point anyway since it did it differently in compute_distinct_stats
and compute_scalar_stats.  But the larger picture here is that we define
the MCV population to exclude nulls, so it's reasonable to consider a
value as an MCV even if it's greatly outnumbered by nulls.  There is
no such exclusion for "too wide" values; those things are just an
implementation limitation in analyze.c, not something that is part of
the pg_statistic definition.  If there are a lot of "too wide" values
in the sample, we don't know whether any of them are duplicates, but
we do know that the frequencies of the normal-width values have to be
discounted appropriately.

Haven't looked at 0002 yet.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-29 Thread Shulgin, Oleksandr
On Tue, Mar 29, 2016 at 6:24 PM, Tom Lane  wrote:

> "Shulgin, Oleksandr"  writes:
> > I've just seen that this patch doesn't have a reviewer assigned
> anymore...
>
> I took my name off it because I was busy with other things and didn't
> want to discourage other people from reviewing it meanwhile.


I wanted to write that this should not be stopping anyone else from
attempting a review, but then learned that you have removed your name. ;-)


>   I do hope
> to get to it eventually but there's a lot of stuff on my to-do list.
>

Completely understood.

Thank you.
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-29 Thread Tom Lane
"Shulgin, Oleksandr"  writes:
> I've just seen that this patch doesn't have a reviewer assigned anymore...

I took my name off it because I was busy with other things and didn't
want to discourage other people from reviewing it meanwhile.  I do hope
to get to it eventually but there's a lot of stuff on my to-do list.

> I would welcome any review.

Me too.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-29 Thread Shulgin, Oleksandr
On Tue, Mar 15, 2016 at 4:47 PM, Shulgin, Oleksandr <
oleksandr.shul...@zalando.de> wrote:

> On Wed, Mar 9, 2016 at 5:28 PM, Tom Lane  wrote:
>
>> "Shulgin, Oleksandr"  writes:
>> > Yes, I now recall that my actual concern was that sample_cnt may calculate
>> > to 0 due to the latest condition above, but that also implies track_cnt ==
>> > 0, and then we have a for loop there which will not run at all due to this,
>> > so I figured we can avoid calculating avgcount and running the loop
>> > altogether with that check.  I'm not opposed to changing the condition if
>> > that makes the code easier to understand (or dropping it altogether if
>> > calculating 0/0 is believed to be harmless anyway).
>>
>> Avoiding intentional zero divides is good.  It might happen to work
>> conveniently on your machine, but I wouldn't think it's portable.
>>
>
> Tom,
>
> Thank you for volunteering to review this patch!
>
> Are you waiting on me to produce an updated version with more comments
> about NULL-handling in the distinct estimator, or do you have something
> cooking already?
>

I've just seen that this patch doesn't have a reviewer assigned anymore...

I would welcome any review.  If we don't commit even the first part
(bugfix) now, is it going to be 9.7-only material?..

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-15 Thread Shulgin, Oleksandr
On Wed, Mar 9, 2016 at 5:28 PM, Tom Lane  wrote:

> "Shulgin, Oleksandr"  writes:
> > Yes, I now recall that my actual concern was that sample_cnt may calculate
> > to 0 due to the latest condition above, but that also implies track_cnt ==
> > 0, and then we have a for loop there which will not run at all due to this,
> > so I figured we can avoid calculating avgcount and running the loop
> > altogether with that check.  I'm not opposed to changing the condition if
> > that makes the code easier to understand (or dropping it altogether if
> > calculating 0/0 is believed to be harmless anyway).
>
> Avoiding intentional zero divides is good.  It might happen to work
> conveniently on your machine, but I wouldn't think it's portable.
>

Tom,

Thank you for volunteering to review this patch!

Are you waiting on me to produce an updated version with more comments
about NULL-handling in the distinct estimator, or do you have something
cooking already?

--
Regards,
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Tom Lane
"Shulgin, Oleksandr"  writes:
> Yes, I now recall that my actual concern was that sample_cnt may calculate
> to 0 due to the latest condition above, but that also implies track_cnt ==
> 0, and then we have a for loop there which will not run at all due to this,
> so I figured we can avoid calculating avgcount and running the loop
> altogether with that check.  I'm not opposed to changing the condition if
> that makes the code easier to understand (or dropping it altogether if
> calculating 0/0 is believed to be harmless anyway).

Avoiding intentional zero divides is good.  It might happen to work
conveniently on your machine, but I wouldn't think it's portable.

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Shulgin, Oleksandr
On Wed, Mar 9, 2016 at 1:33 PM, Tomas Vondra 
wrote:

> Hi,
>
> On Wed, 2016-03-09 at 11:23 +0100, Shulgin, Oleksandr wrote:
> > On Tue, Mar 8, 2016 at 8:16 PM, Alvaro Herrera
> >  wrote:
> >
> > Also, I can't quite figure out why the "else" now in line 2131
> > is now "else if track_cnt != 0".  What happens if track_cnt is
> > zero?
> > The comment above the "if" block doesn't provide any guidance.
> >
> >
> > It is there only to avoid potentially dividing zero by zero when
> > calculating avgcount (which will not be used after that anyway).  I
> > agree it deserves a comment.
>
> That definitely deserves a comment. It's not immediately clear why
> (track_cnt != 0) would prevent division by zero in the code. The only
> way such an error could happen is if ndistinct==0, because that's the
> actual denominator. Which means this
>
> ndistinct = ndistinct * sample_cnt
>
> would have to evaluate to 0. But ndistinct==0 can't happen as we're in
> the (nonnull_cnt > 0) branch, and that guarantees (stadistinct != 0).
>
> Thus the only possibility seems to be (nonnull_cnt==toowide_cnt). Why
> not use this condition instead?
>

Yes, I now recall that my actual concern was that sample_cnt may calculate
to 0 due to the latest condition above, but that also implies track_cnt ==
0, and then we have a for loop there which will not run at all due to this,
so I figured we can avoid calculating avgcount and running the loop
altogether with that check.  I'm not opposed to changing the condition if
that makes the code easier to understand (or dropping it altogether if
calculating 0/0 is believed to be harmless anyway).
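A minimal sketch of the situation being described (names follow the
discussion, not the actual patch): the guard simply skips the branch when
nothing was tracked, which in particular covers the case where every non-null
sample value was too wide and the average would otherwise be 0/0.

/* Sketch only, not analyze.c code. */
#include <stdio.h>

static void
mcv_cutoff_sketch(int nonnull_cnt, int toowide_cnt, int track_cnt,
                  double ndistinct)
{
    int     sample_cnt = nonnull_cnt - toowide_cnt;

    if (track_cnt != 0)
    {
        /* safe: a non-empty track list implies sample_cnt > 0 and
         * ndistinct > 0, so this is never 0/0 */
        double  avgcount = (double) sample_cnt / ndistinct;

        printf("avgcount = %g\n", avgcount);
        /* the loop over the track list would follow here */
    }
    else
        printf("nothing tracked, skip the cutoff logic entirely\n");
}

int
main(void)
{
    mcv_cutoff_sketch(10, 10, 0, 0.0);  /* all sample values too wide */
    mcv_cutoff_sketch(10, 2, 3, 5.0);
    return 0;
}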

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Tom Lane
Tomas Vondra  writes:
> On Wed, 2016-03-09 at 12:02 -0300, Alvaro Herrera wrote:
>> Tomas Vondra wrote:
>>> FWIW while looking at the code I noticed that we skip wide varlena
>>> values but not cstrings. Seems a bit suspicious.

>> Uh, can you actually have columns of cstring type?  I don't think you
>> can ...

> Yeah, but then why do we handle that in compute_scalar_stats?

If you're looking at what I think you're looking at, we aren't bothering
because we assume that cstrings won't be very wide.  Since they're not
toastable or compressible, they certainly won't exceed BLCKSZ.
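For reference, a sketch of the kind of width check under discussion, assuming
the usual WIDTH_THRESHOLD-style cutoff in analyze.c (an illustration, not a
verbatim quote):

/* Sketch: ANALYZE counts but otherwise skips varlena values whose raw
 * (detoasted) size exceeds a fixed threshold; cstrings get no such check
 * because they cannot be toasted or compressed. */
#include <stdbool.h>
#include <stddef.h>

#define WIDTH_THRESHOLD 1024    /* the cutoff used by analyze.c */

static bool
value_is_too_wide(bool is_varlena, size_t raw_size)
{
    return is_varlena && raw_size > WIDTH_THRESHOLD;
}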

regards, tom lane




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Tomas Vondra
On Wed, 2016-03-09 at 12:02 -0300, Alvaro Herrera wrote:
> Tomas Vondra wrote:
> 
> > FWIW while looking at the code I noticed that we skip wide varlena
> > values but not cstrings. Seems a bit suspicious.
> 
> Uh, can you actually have columns of cstring type?  I don't think you
> can ...

Yeah, but then why do we handle that in compute_scalar_stats?

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Alvaro Herrera
Tomas Vondra wrote:

> FWIW while looking at the code I noticed that we skip wide varlena
> values but not cstrings. Seems a bit suspicious.

Uh, can you actually have columns of cstring type?  I don't think you
can ...

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Tomas Vondra
Hi,

On Wed, 2016-03-09 at 11:23 +0100, Shulgin, Oleksandr wrote:
> On Tue, Mar 8, 2016 at 8:16 PM, Alvaro Herrera
>  wrote:
> Shulgin, Oleksandr wrote:
> 
> > Alright.  I'm attaching the latest version of this patch
> split in two
> > parts: the first one is NULLs-related bugfix and the second
> is the
> > "improvement" part, which applies on top of the first one.
> 
> I went over patch 0001 and it seems pretty reasonable.  It's
> missing
> some comment updates -- at least the large comments that talk
> about Duj1
> should be modified to indicate why the code is now subtracting
> the null
> count.
> 
> 
> Good point.
>  
> 
> Also, I can't quite figure out why the "else" now in line 2131
> is now "else if track_cnt != 0".  What happens if track_cnt is
> zero?
> The comment above the "if" block doesn't provide any guidance.
> 
> 
> It is there only to avoid potentially dividing zero by zero when
> calculating avgcount (which will not be used after that anyway).  I
> agree it deserves a comment.

That definitely deserves a comment. It's not immediately clear why
(track_cnt != 0) would prevent division by zero in the code. The only
way such error could happen is if ndistinct==0, because that's the
actual denominator. Which means this

ndistinct = ndistinct * sample_cnt

would have to evaluate to 0. But ndistinct==0 can't happen as we're in
the (nonnull_cnt > 0) branch, and that guarantees (stadistinct != 0).

Thus the only possibility seems to be (nonnull_cnt==toowide_cnt). Why
not use this condition instead?

FWIW while looking at the code I noticed that we skip wide varlena
values but not cstrings. Seems a bit suspicious.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Tomas Vondra
Hi,

On Wed, 2016-03-09 at 10:58 +0100, Shulgin, Oleksandr wrote:
> On Tue, Mar 8, 2016 at 9:10 PM, Joel Jacobson 
> wrote:
> On Wed, Mar 9, 2016 at 1:25 AM, Shulgin, Oleksandr
>  wrote:
> > Thank you for spending your time to run these :-)
> 
> n/p, it took like 30 seconds :-)
> 
> 
> Great!  I'm glad to hear it was as easy to use as I hoped for :-)
> 
> 
> > I don't want to be asking for too much here, but is there a
> chance you could
> > try the effects of the proposed patch on an offline copy of
> your database?
> 
> Yes, I think that should be possible.
> 
> > Do you envision or maybe have experienced problems with
> query plans
> > referring to the columns that are near the top of the above
> hist_ratio
> > report?  In other words: what are the practical implications
> for you with
> > the values being duplicated rather badly throughout the
> histogram like in
> > the example you shown?
> 
> I don't know much about the internals of query planner,
> I just read the "57.1. Row Estimation Examples" to get a basic
> understanding.
> 
> If I understand it correctly, if the histogram_bounds contains
> a lot
> of duplicated values,
> then the row estimation will be inaccurate, which in turn will
> trick
> the query planner
> into a sub-optimal plan?
> 
> 
> Yes, basically it should matter the most for the equality comparison
> operator, such that an MCV entry would provide a more accurate
> selectivity estimate (and the histogram is not used at all in this
> case anyway).  For the "less/greater-than" comparison both MCV list
> and histogram are used, so the drawback of having repeated values in
> the histogram, in my understanding is the same: less accurate
> selectivity estimates for the values that could fall precisely into a
> bin which didn't make it into the histogram.
> 
> 
> We've had some problems lately with the query planner, or
> actually we've always
> had them but never noticed them nor cared about them, but now
> during peak times
> we've had short periods where we haven't been able to fully
> cope up
> with the traffic.
> 
> I tracked down the most self_time-consuming functions and
> quickly saw
> how to optimize them.
> Many of them were of the form:
> SELECT .. FROM SomeBigTable WHERE Col1 = [some dynamic value]
> AND Col2
> = [some constant value] AND Col3 = [some other constant value]
> The number of rows matching the WHERE clause were very tiny,
> perfect
> match for a partial index:
> CREATE INDEX .. ON SomeBigTable USING btree (Col1) WHERE Col2
> = [some
> constant value] AND Col3 = [some other constant value];
> 
> Even though the new partial index matched the query perfectly,
> the
> query planner didn't want to use it. Instead it continued to
> use some
> other sub-optimal index.
> 
> The only way to force it to use the correct index was to use
> the
> "+0"-trick which I recently learned from one of my colleagues:
> SELECT .. FROM SomeBigTable WHERE Col1 = [some dynamic value]
> AND
> Col2+0 = [some constant value] AND Col3+0 = [some other
> constant
> value]
> CREATE INDEX .. ON SomeBigTable USING btree (Col1) WHERE Col2
> +0 =
> [some constant value] AND Col3+0 = [some other constant
> value];
> 
> By adding +0 to the columns, the query planner will as I
> understand it
> be extremely motivated to use the correct index, as otherwise
> it would
> have to do a seq scan on the entire big table, which would be
> very
> costly.
> 
> I'm glad the trick worked, now the system is fast again.
> 
> We're still on 9.1, so maybe these problems will go away once
> we upgrade to 9.5.
> 
> 
> Hm... sounds like a planner bug to me.  I'm not exceptionally aware of
> the changes in partial index handling that were made after 9.1, though
> grepping the commit log for "partial index" produces a number of hits
> after the date of 9.1 release.

My first guess would be this is related to the costing bug addressed in 

https://commitfest.postgresql.org/9/299/

I.e. the planner is not accounting for the index predicate correctly,
and ends up choosing the full index. It'd be interesting to see if the
patch makes the optimizer choose the right index in your example.

The fact that +0 fixes the issue seems a bit contradictory, though.
That forces the planner to use

Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Shulgin, Oleksandr
On Tue, Mar 8, 2016 at 8:16 PM, Alvaro Herrera 
wrote:

> Shulgin, Oleksandr wrote:
>
> > Alright.  I'm attaching the latest version of this patch split in two
> > parts: the first one is NULLs-related bugfix and the second is the
> > "improvement" part, which applies on top of the first one.
>
> I went over patch 0001 and it seems pretty reasonable.  It's missing
> some comment updates -- at least the large comments that talk about Duj1
> should be modified to indicate why the code is now subtracting the null
> count.


Good point.


> Also, I can't quite figure out why the "else" now in line 2131
> is now "else if track_cnt != 0".  What happens if track_cnt is zero?
> The comment above the "if" block doesn't provide any guidance.
>

It is there only to avoid potentially dividing zero by zero when
calculating avgcount (which will not be used after that anyway).  I agree
it deserves a comment.

Thank you!
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-09 Thread Shulgin, Oleksandr
On Tue, Mar 8, 2016 at 9:10 PM, Joel Jacobson  wrote:

> On Wed, Mar 9, 2016 at 1:25 AM, Shulgin, Oleksandr
>  wrote:
> > Thank you for spending your time to run these :-)
>
> n/p, it took like 30 seconds :-)
>

Great!  I'm glad to hear it was as easy to use as I hoped for :-)

> I don't want to be asking for too much here, but is there a chance you
> could
> > try the effects of the proposed patch on an offline copy of your
> database?
>
> Yes, I think that should be possible.
>
> > Do you envision or maybe have experienced problems with query plans
> > referring to the columns that are near the top of the above hist_ratio
> > report?  In other words: what are the practical implications for you with
> > the values being duplicated rather badly throughout the histogram like in
> > the example you shown?
>
> I don't know much about the internals of query planner,
> I just read the "57.1. Row Estimation Examples" to get a basic
> understanding.
>
> If I understand it correctly, if the histogram_bounds contains a lot
> of duplicated values,
> then the row estimation will be inaccurate, which in turn will trick
> the query planner
> into a sub-optimal plan?
>

Yes, basically it should matter the most for the equality comparison
operator, such that an MCV entry would provide a more accurate selectivity
estimate (and the histogram is not used at all in this case anyway).  For
the "less/greater-than" comparison both MCV list and histogram are used, so
the drawback of having repeated values in the histogram, in my
understanding is the same: less accurate selectivity estimates for the
values that could fall precisely into a bin which didn't make it into the
histogram.

We've had some problems lately with the query planner, or actually we've
> always
> had them but never noticed them nor cared about them, but now during peak
> times
> we've had short periods where we haven't been able to fully cope up
> with the traffic.
>
> I tracked down the most self_time-consuming functions and quickly saw
> how to optimize them.
> Many of them were of the form:
> SELECT .. FROM SomeBigTable WHERE Col1 = [some dynamic value] AND Col2
> = [some constant value] AND Col3 = [some other constant value]
> The number of rows matching the WHERE clause were very tiny, perfect
> match for a partial index:
> CREATE INDEX .. ON SomeBigTable USING btree (Col1) WHERE Col2 = [some
> constant value] AND Col3 = [some other constant value];
>
> Even though the new partial index matched the query perfectly, the
> query planner didn't want to use it. Instead it continued to use some
> other sub-optimal index.
>
> The only way to force it to use the correct index was to use the
> "+0"-trick which I recently learned from one of my colleagues:
> SELECT .. FROM SomeBigTable WHERE Col1 = [some dynamic value] AND
> Col2+0 = [some constant value] AND Col3+0 = [some other constant
> value]
> CREATE INDEX .. ON SomeBigTable USING btree (Col1) WHERE Col2+0 =
> [some constant value] AND Col3+0 = [some other constant value];
>
> By adding +0 to the columns, the query planner will as I understand it
> be extremely motivated to use the correct index, as otherwise it would
> have to do a seq scan on the entire big table, which would be very
> costly.
>
> I'm glad the trick worked, now the system is fast again.
>
> We're still on 9.1, so maybe these problems will go away once we upgrade
> to 9.5.
>

Hm... sounds like a planner bug to me.  I'm not exceptionally aware of the
changes in partial index handling that were made after 9.1, though grepping
the commit log for "partial index" produces a number of hits after the date
of 9.1 release.

I don't know if these problems I described can be fixed by your patch,
> but I wanted to share this story since I know our systems (Trustly's
> and Zalando's) are quite similar in design,
> so maybe you have experienced something similar.
>

I would not expect this type of problem to be affected by the patch in any
way, though maybe I'm missing the complete picture here.

Also, I'm not aware of similar problems in our systems, but I can ask
around. :-)

Thank you.
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-08 Thread Tom Lane
Robert Haas  writes:
> On Wed, Jan 20, 2016 at 5:09 PM, Tom Lane  wrote:
>> Um, I would like to review it, but I doubt I'll find time before the end
>> of the month.

> Tom, can you pick this up?

Yes, now that I've gotten out from under the pathification thing,
I have cycles for patch review.  I'll take this one.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-08 Thread Alvaro Herrera
Shulgin, Oleksandr wrote:

> Alright.  I'm attaching the latest version of this patch split in two
> parts: the first one is NULLs-related bugfix and the second is the
> "improvement" part, which applies on top of the first one.

I went over patch 0001 and it seems pretty reasonable.  It's missing
some comment updates -- at least the large comments that talk about Duj1
should be modified to indicate why the code is now subtracting the null
count.  Also, I can't quite figure out why the "else" now in line 2131
is now "else if track_cnt != 0".  What happens if track_cnt is zero?
The comment above the "if" block doesn't provide any guidance.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-08 Thread Joel Jacobson
On Wed, Mar 9, 2016 at 1:25 AM, Shulgin, Oleksandr
 wrote:
> Thank you for spending your time to run these :-)

n/p, it took like 30 seconds :-)

> I don't want to be asking for too much here, but is there a chance you could
> try the effects of the proposed patch on an offline copy of your database?

Yes, I think that should be possible.

> Do you envision or maybe have experienced problems with query plans
> referring to the columns that are near the top of the above hist_ratio
> report?  In other words: what are the practical implications for you with
> the values being duplicated rather badly throughout the histogram like in
> the example you shown?

I don't know much about the internals of query planner,
I just read the "57.1. Row Estimation Examples" to get a basic understanding.

If I understand it correctly, if the histogram_bounds contains a lot
of duplicated values,
then the row estimation will be inaccurate, which in turn will trick
the query planner
into a sub-optimal plan?

We've had some problems lately with the query planner, or actually we've always
had them but never noticed them nor cared about them, but now during peak times
we've had short periods where we haven't been able to fully cope up
with the traffic.

I tracked down the most self_time-consuming functions and quickly saw
how to optimize them.
Many of them were of the form:
SELECT .. FROM SomeBigTable WHERE Col1 = [some dynamic value] AND Col2
= [some constant value] AND Col3 = [some other constant value]
The number of rows matching the WHERE clause were very tiny, perfect
match for a partial index:
CREATE INDEX .. ON SomeBigTable USING btree (Col1) WHERE Col2 = [some
constant value] AND Col3 = [some other constant value];

Even though the new partial index matched the query perfectly, the
query planner didn't want to use it. Instead it continued to use some
other sub-optimal index.

The only way to force it to use the correct index was to use the
"+0"-trick which I recently learned from one of my colleagues:
SELECT .. FROM SomeBigTable WHERE Col1 = [some dynamic value] AND
Col2+0 = [some constant value] AND Col3+0 = [some other constant
value]
CREATE INDEX .. ON SomeBigTable USING btree (Col1) WHERE Col2+0 =
[some constant value] AND Col3+0 = [some other constant value];

By adding +0 to the columns, the query planner will as I understand it
be extremely motivated to use the correct index, as otherwise it would
have to do a seq scan on the entire big table, which would be very
costly.
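
For concreteness, a minimal sketch of the trick (the table, column and
constant names here are invented, not from our actual schema):

CREATE TABLE some_big_table (col1 int, col2 int, col3 int);

-- partial index whose predicate uses the same +0 expressions:
CREATE INDEX some_big_table_col1_partial
    ON some_big_table USING btree (col1)
    WHERE col2 + 0 = 42 AND col3 + 0 = 7;

-- query written so that only the partial index predicate can match
-- the col2/col3 conditions:
SELECT *
  FROM some_big_table
 WHERE col1 = 12345
   AND col2 + 0 = 42
   AND col3 + 0 = 7;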

I'm glad the trick worked, now the system is fast again.

We're still on 9.1, so maybe these problems will go away once we upgrade to 9.5.

I don't know if these problems I described can be fixed by your patch,
but I wanted to share this story since I know our systems (Trustly's
and Zalando's) are quite similar in design,
so maybe you have experienced something similar.

(Side note: My biggest wish would be some way to specify explicitly on
a per top-level function level a list of indexes the query planner is
allowed to consider or is NOT allowed to consider.)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-08 Thread Robert Haas
On Wed, Jan 20, 2016 at 5:09 PM, Tom Lane  wrote:
> Alvaro Herrera  writes:
>> Tom Lane wrote:
>>> "Shulgin, Oleksandr"  writes:
 This post summarizes a few weeks of research of ANALYZE statistics
 distribution on one of our bigger production databases with some real-world
 data and proposes a patch to rectify some of the oddities observed.
>
>>> Please add this to the 2016-01 commitfest ...
>
>> Tom, are you reviewing this for the current commitfest?
>
> Um, I would like to review it, but I doubt I'll find time before the end
> of the month.

Tom, can you pick this up?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-08 Thread Shulgin, Oleksandr
On Tue, Mar 8, 2016 at 3:36 PM, Joel Jacobson  wrote:

> Hi Alex,
>
> Thanks for excellent research.
>

Joel,

Thank you for spending your time to run these :-)

I've run your queries against Trustly's production database and I can
> confirm your findings, the results are similar:
>
> WITH ...
> SELECT count(1),
>min(hist_ratio)::real,
>avg(hist_ratio)::real,
>max(hist_ratio)::real,
>stddev(hist_ratio)::real
>   FROM stats2
>  WHERE histogram_bounds IS NOT NULL;
>
> -[ RECORD 1 ]
> count  | 2814
> min| 0.193548
> avg| 0.927357
> max| 1
> stddev | 0.164134
>
>
> WHERE distinct_hist < num_hist
> -[ RECORD 1 ]
> count  | 624
> min| 0.193548
> avg| 0.672407
> max| 0.990099
> stddev | 0.194901
>
>
> WITH ..
> SELECT schemaname ||'.'|| tablename ||'.'|| attname || (CASE inherited
> WHEN TRUE THEN ' (inherited)' ELSE '' END) AS columnname,
>n_distinct, null_frac,
>num_mcv, most_common_vals, most_common_freqs,
>mcv_frac, (mcv_frac / (1 - null_frac))::real AS nonnull_mcv_frac,
>distinct_hist, num_hist, hist_ratio,
>histogram_bounds
>   FROM stats2
>  ORDER BY hist_ratio
>  LIMIT 1;
>
> -[ RECORD 1 ]-+-
> columnname| public.x.y
> n_distinct| 103
> null_frac | 0
> num_mcv   | 10
> most_common_vals  | {0,1,2,3,4,5,6,7,8,9}
> most_common_freqs |
>
> {0.4765,0.141733,0.1073,0.0830667,0.0559667,0.037,0.0251,0.0188,0.0141,0.0113667}
> mcv_frac  | 0.971267
> nonnull_mcv_frac  | 0.971267
> distinct_hist | 18
> num_hist  | 93
> hist_ratio| 0.193548387096774
> histogram_bounds  |
>
> {10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,12,12,12,12,12,12,12,12,12,12,12,12,13,13,13,13,13,13,13,13,13,13,14,14,14,14,14,15,15,15,15,16,16,16,16,21,23,5074,5437,5830,6049,6496,7046,7784,14629,21285}
>

I don't want to be asking for too much here, but is there a chance you
could try the effects of the proposed patch on an offline copy of your
database?

Do you envision or maybe have experienced problems with query plans
referring to the columns that are near the top of the above hist_ratio
report?  In other words: what are the practical implications for you with
the values being duplicated rather badly throughout the histogram like in
the example you shown?

Thank you!
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-08 Thread Joel Jacobson
Hi Alex,

Thanks for excellent research.

I've run your queries against Trustly's production database and I can
confirm your findings, the results are similar:

WITH ...
SELECT count(1),
   min(hist_ratio)::real,
   avg(hist_ratio)::real,
   max(hist_ratio)::real,
   stddev(hist_ratio)::real
  FROM stats2
 WHERE histogram_bounds IS NOT NULL;

-[ RECORD 1 ]
count  | 2814
min| 0.193548
avg| 0.927357
max| 1
stddev | 0.164134


WHERE distinct_hist < num_hist
-[ RECORD 1 ]
count  | 624
min| 0.193548
avg| 0.672407
max| 0.990099
stddev | 0.194901


WITH ..
SELECT schemaname ||'.'|| tablename ||'.'|| attname || (CASE inherited
WHEN TRUE THEN ' (inherited)' ELSE '' END) AS columnname,
   n_distinct, null_frac,
   num_mcv, most_common_vals, most_common_freqs,
   mcv_frac, (mcv_frac / (1 - null_frac))::real AS nonnull_mcv_frac,
   distinct_hist, num_hist, hist_ratio,
   histogram_bounds
  FROM stats2
 ORDER BY hist_ratio
 LIMIT 1;

 -[ RECORD 1 ]-+-
columnname| public.x.y
n_distinct| 103
null_frac | 0
num_mcv   | 10
most_common_vals  | {0,1,2,3,4,5,6,7,8,9}
most_common_freqs |
{0.4765,0.141733,0.1073,0.0830667,0.0559667,0.037,0.0251,0.0188,0.0141,0.0113667}
mcv_frac  | 0.971267
nonnull_mcv_frac  | 0.971267
distinct_hist | 18
num_hist  | 93
hist_ratio| 0.193548387096774
histogram_bounds  |
{10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,12,12,12,12,12,12,12,12,12,12,12,12,13,13,13,13,13,13,13,13,13,13,14,14,14,14,14,15,15,15,15,16,16,16,16,21,23,5074,5437,5830,6049,6496,7046,7784,14629,21285}
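
(For reference: hist_ratio above is just distinct_hist / num_hist, i.e.
18/93 ~ 0.1935 here -- only about 19% of the histogram bounds are distinct
values.)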



On Mon, Jan 18, 2016 at 4:46 PM, Shulgin, Oleksandr
 wrote:
> On Wed, Dec 2, 2015 at 10:20 AM, Shulgin, Oleksandr
>  wrote:
>>
>> On Tue, Dec 1, 2015 at 7:00 PM, Tom Lane  wrote:
>>>
>>> "Shulgin, Oleksandr"  writes:
>>> > This post summarizes a few weeks of research of ANALYZE statistics
>>> > distribution on one of our bigger production databases with some
>>> > real-world
>>> > data and proposes a patch to rectify some of the oddities observed.
>>>
>>> Please add this to the 2016-01 commitfest ...
>>
>>
>> Added: https://commitfest.postgresql.org/8/434/
>
>
> It would be great if some folks could find a moment to run the queries I was
> showing on their data to confirm (or refute) my findings, or to contribute
> to the picture in general.
>
> As I was saying, the queries were designed in such a way that even
> an unprivileged user can run them (the results will be limited to the stats
> data available to that user, obviously; and for custom-tailored statstarget
> one still needs superuser to join the pg_statistic table directly).  Also,
> on the scale of ~30k attribute statistics records, the queries take only a
> few seconds to finish.
>
> Cheers!
> --
> Alex
>



-- 
Joel Jacobson

Mobile: +46703603801
Trustly.com | Newsroom | LinkedIn | Twitter


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-08 Thread Shulgin, Oleksandr
On Mon, Mar 7, 2016 at 6:02 PM, Jeff Janes  wrote:

> On Mon, Mar 7, 2016 at 3:17 AM, Shulgin, Oleksandr
>  wrote:
> >
> > They might get that different plan when they upgrade to the latest major
> > version anyway.  Is it set somewhere that minor version upgrades should
> > never affect the planner?  I doubt so.
>
> People with meticulous standards are expected to re-validate their
> application, including plans and performance, before doing major
> version updates into production. They can continue to use a *fully
> patched* server from a previous major release while they do that.
>
> This is not the case for minor version updates.  We do not want to put
> people in the position where getting a security or corruption-risk
> update forces them to also accept changes which may destroy the
> performance of their system.
>
> I don't know if it is set out somewhere else, but there are many
> examples in this list of us declining to back-patch performance bug
> fixes which might negatively impact some users.  The only times we
> have done it that I can think of are when there is almost no
> conceivable way it could have a meaningful negative effect, or if the
> bug was tied in with security or stability bugs that needed to be
> fixed anyway and couldn't be separated.
>

The necessity to perform security upgrades is indeed a valid argument
against back-patching this, since this is not a bug that causes incorrect
results or data corruption, etc.

Thank you all for the thoughtful replies!
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-07 Thread Jeff Janes
On Mon, Mar 7, 2016 at 3:17 AM, Shulgin, Oleksandr
 wrote:
> On Fri, Mar 4, 2016 at 7:27 PM, Robert Haas  wrote:
>>
>> On Thu, Mar 3, 2016 at 2:48 AM, Shulgin, Oleksandr
>>  wrote:
>> > On Wed, Mar 2, 2016 at 7:33 PM, Alvaro Herrera
>> > 
>> > wrote:
>> >> Shulgin, Oleksandr wrote:
>> >>
>> >> > Alright.  I'm attaching the latest version of this patch split in two
>> >> > parts: the first one is NULLs-related bugfix and the second is the
>> >> > "improvement" part, which applies on top of the first one.
>> >>
>> >> So is this null-related bugfix supposed to be backpatched?  (I assume
>> >> it's not because it's very likely to change existing plans).
>> >
>> > For the good, because cardinality estimations will be more accurate in
>> > these
>> > cases, so yes I would expect it to be back-patchable.
>>
>> -1.  I think the cost of changing existing query plans in back
>> branches is too high.  The people who get a better plan never thank
>> us, but the people who (by bad luck) get a worse plan always complain.
>
>
> They might get that different plan when they upgrade to the latest major
> version anyway.  Is it set somewhere that minor version upgrades should
> never affect the planner?  I doubt so.

People with meticulous standards are expected to re-validate their
application, including plans and performance, before doing major
version updates into production. They can continue to use a *fully
patched* server from a previous major release while they do that.

This is not the case for minor version updates.  We do not want to put
people in the position where getting a security or corruption-risk
update forces them to also accept changes which may destroy the
performance of their system.

I don't know if it is set out somewhere else, but there are many
examples in this list of us declining to back-patch performance bug
fixes which might negatively impact some users.  The only times we
have done it that I can think of are when there is almost no
conceivable way it could have a meaningful negative effect, or if the
bug was tied in with security or stability bugs that needed to be
fixed anyway and couldn't be separated.

Cheers,

Jeff


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-07 Thread Tomas Vondra
Hi,


On Mon, 2016-03-07 at 12:17 +0100, Shulgin, Oleksandr wrote:
> On Fri, Mar 4, 2016 at 7:27 PM, Robert Haas 
> wrote:
> On Thu, Mar 3, 2016 at 2:48 AM, Shulgin, Oleksandr
>  wrote:
> > On Wed, Mar 2, 2016 at 7:33 PM, Alvaro Herrera
> 
> > wrote:
> >> Shulgin, Oleksandr wrote:
> >>
> >> > Alright.  I'm attaching the latest version of this patch
> split in two
> >> > parts: the first one is NULLs-related bugfix and the
> second is the
> >> > "improvement" part, which applies on top of the first
> one.
> >>
> >> So is this null-related bugfix supposed to be backpatched?
> (I assume
> >> it's not because it's very likely to change existing
> plans).
> >
> > For the good, because cardinality estimations will be more
> accurate in these
> > cases, so yes I would expect it to be back-patchable.
> 
> -1.  I think the cost of changing existing query plans in back
> branches is too high.  The people who get a better plan never
> thank
> us, but the people who (by bad luck) get a worse plan always
> complain.
> 
> 
> They might get that different plan when they upgrade to the latest
> major version anyway.  Is it set somewhere that minor version upgrades
> should never affect the planner?  I doubt so.

Major versions are supposed to add features, which may easily result in
plan changes. Moreover people are expected to do more thorough testing
on major version upgrade, so they're more likely to spot them.

OTOH minor versions are bugfix-only releases, and sometimes the fixes are
security related and people are supposed to install them ASAP. So many
people simply upgrade them without much additional testing and while we
can't promise any of the fixes won't change the plans, we kinda try to
minimize such cases.

That being said, I don't have a clear opinion whether to backpatch this.
I think that it's clearly a bug (especially the first part dealing with
NULL values), and it'd be good to backpatch that. OTOH I can't really
quantify the risks of changing some plans to worse ones.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-07 Thread Shulgin, Oleksandr
On Fri, Mar 4, 2016 at 7:27 PM, Robert Haas  wrote:

> On Thu, Mar 3, 2016 at 2:48 AM, Shulgin, Oleksandr
>  wrote:
> > On Wed, Mar 2, 2016 at 7:33 PM, Alvaro Herrera  >
> > wrote:
> >> Shulgin, Oleksandr wrote:
> >>
> >> > Alright.  I'm attaching the latest version of this patch split in two
> >> > parts: the first one is NULLs-related bugfix and the second is the
> >> > "improvement" part, which applies on top of the first one.
> >>
> >> So is this null-related bugfix supposed to be backpatched?  (I assume
> >> it's not because it's very likely to change existing plans).
> >
> > For the good, because cardinality estimations will be more accurate in
> these
> > cases, so yes I would expect it to be back-patchable.
>
> -1.  I think the cost of changing existing query plans in back
> branches is too high.  The people who get a better plan never thank
> us, but the people who (by bad luck) get a worse plan always complain.
>

They might get that different plan when they upgrade to the latest major
version anyway.  Is it set somewhere that minor version upgrades should
never affect the planner?  I doubt so.

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-04 Thread Robert Haas
On Thu, Mar 3, 2016 at 2:48 AM, Shulgin, Oleksandr
 wrote:
> On Wed, Mar 2, 2016 at 7:33 PM, Alvaro Herrera 
> wrote:
>> Shulgin, Oleksandr wrote:
>>
>> > Alright.  I'm attaching the latest version of this patch split in two
>> > parts: the first one is NULLs-related bugfix and the second is the
>> > "improvement" part, which applies on top of the first one.
>>
>> So is this null-related bugfix supposed to be backpatched?  (I assume
>> it's not because it's very likely to change existing plans).
>
> For the good, because cardinality estimations will be more accurate in these
> cases, so yes I would expect it to be back-patchable.

-1.  I think the cost of changing existing query plans in back
branches is too high.  The people who get a better plan never thank
us, but the people who (by bad luck) get a worse plan always complain.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-02 Thread Shulgin, Oleksandr
On Wed, Mar 2, 2016 at 7:33 PM, Alvaro Herrera 
wrote:

> Shulgin, Oleksandr wrote:
>
> > Alright.  I'm attaching the latest version of this patch split in two
> > parts: the first one is NULLs-related bugfix and the second is the
> > "improvement" part, which applies on top of the first one.
>
> So is this null-related bugfix supposed to be backpatched?  (I assume
> it's not because it's very likely to change existing plans).
>

For the good, because cardinality estimations will be more accurate in
these cases, so yes I would expect it to be back-patchable.

-- 
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-02 Thread Alvaro Herrera
Shulgin, Oleksandr wrote:

> Alright.  I'm attaching the latest version of this patch split in two
> parts: the first one is NULLs-related bugfix and the second is the
> "improvement" part, which applies on top of the first one.

So is this null-related bugfix supposed to be backpatched?  (I assume
it's not because it's very likely to change existing plans).

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-02 Thread Shulgin, Oleksandr
On Wed, Mar 2, 2016 at 5:46 PM, David Steele  wrote:

> On 3/2/16 11:10 AM, Shulgin, Oleksandr wrote:
> > On Wed, Feb 24, 2016 at 12:30 AM, Tomas Vondra
> > >
> wrote:
> >
> > I think it'd be useful not to have all the changes in one lump, but
> > structure this as a patch series with related changes in separate
> > chunks. I doubt we'd like to mix the changes in a single commit, and
> > it makes the reviews and reasoning easier. So those NULL-handling
> > fixes should be in one patch, the MCV patches in another one.
> >
> >
> > OK, such a split would make sense to me.  Though, I'm a bit late as the
> > commitfest is already closed to new patches, I guess asking the CF
> > manager to split this might work (assuming I produce the patch files)?
>
> If the patch is broken into two files that gives the review/committer
> more options but I don't think it requires another CF entry.
>

Alright.  I'm attaching the latest version of this patch split in two
parts: the first one is NULLs-related bugfix and the second is the
"improvement" part, which applies on top of the first one.

--
Alex
From a6b1cb866f9374cdc893e9a318959eccaa5bfbc9 Mon Sep 17 00:00:00 2001
From: Oleksandr Shulgin 
Date: Wed, 2 Mar 2016 18:18:36 +0100
Subject: [PATCH 1/2] Account for NULLs in ANALYZE more strictly

Previously the ndistinct and avgcount calculation (for MCV list) could
be affected greatly by high fraction of NULLs in the sample.  Account
for that by subtracting the number of NULLs we've seen from the total
sample size explicitly.

At the same time, values that are considered "too wide" are accounted
for in ndistinct, but removed from sample size for MCV list
calculation.  In compute_distinct_stats() we need to do that manually,
in compute_scalar_stats() the value_cnt is already holding the number
of non-null, not too-wide values.
---
 src/backend/commands/analyze.c | 42 --
 1 file changed, 24 insertions(+), 18 deletions(-)

diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 8a5f07c..f05b496 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -2085,17 +2085,21 @@ compute_distinct_stats(VacAttrStatsP stats,
 		denom,
 		stadistinct;
 
-			numer = (double) samplerows *(double) d;
+			double		samplerows_nonnull = samplerows - null_cnt;
+			double		totalrows_nonnull
+			= totalrows * (1.0 - stats->stanullfrac);
 
-			denom = (double) (samplerows - f1) +
-(double) f1 *(double) samplerows / totalrows;
+			numer = samplerows_nonnull * (double) d;
+
+			denom = (samplerows_nonnull - f1) +
+(double) f1 * samplerows_nonnull / totalrows_nonnull;
 
 			stadistinct = numer / denom;
 			/* Clamp to sane range in case of roundoff error */
 			if (stadistinct < (double) d)
 stadistinct = (double) d;
-			if (stadistinct > totalrows)
-stadistinct = totalrows;
+			if (stadistinct > totalrows_nonnull)
+stadistinct = totalrows_nonnull;
 			stats->stadistinct = floor(stadistinct + 0.5);
 		}
 
@@ -2124,16 +2128,17 @@ compute_distinct_stats(VacAttrStatsP stats,
 			/* Track list includes all values seen, and all will fit */
 			num_mcv = track_cnt;
 		}
-		else
+		else if (track_cnt != 0)
 		{
+			int			sample_cnt = nonnull_cnt - toowide_cnt;
 			double		ndistinct = stats->stadistinct;
 			double		avgcount,
 		mincount;
 
 			if (ndistinct < 0)
-ndistinct = -ndistinct * totalrows;
+ndistinct = -ndistinct * sample_cnt;
 			/* estimate # of occurrences in sample of a typical value */
-			avgcount = (double) samplerows / ndistinct;
+			avgcount = (double) sample_cnt / ndistinct;
 			/* set minimum threshold count to store a value */
 			mincount = avgcount * 1.25;
 			if (mincount < 2)
@@ -2434,17 +2439,21 @@ compute_scalar_stats(VacAttrStatsP stats,
 		denom,
 		stadistinct;
 
-			numer = (double) samplerows *(double) d;
+			double		samplerows_nonnull = samplerows - null_cnt;
+			double		totalrows_nonnull
+			= totalrows * (1.0 - stats->stanullfrac);
+
+			numer = samplerows_nonnull * (double) d;
 
-			denom = (double) (samplerows - f1) +
-(double) f1 *(double) samplerows / totalrows;
+			denom = (samplerows_nonnull - f1) +
+(double) f1 * samplerows_nonnull / totalrows_nonnull;
 
 			stadistinct = numer / denom;
 			/* Clamp to sane range in case of roundoff error */
 			if (stadistinct < (double) d)
 stadistinct = (double) d;
-			if (stadistinct > totalrows)
-stadistinct = totalrows;
+			if (stadistinct > totalrows_nonnull)
+stadistinct = totalrows_nonnull;
 			stats->stadistinct = floor(stadistinct + 0.5);
 		}
 
@@ -2480,21 +2489,18 @@ compute_scalar_stats(VacAttrStatsP stats,
 		}
 		else
 		{
-			double		ndistinct = stats->stadistinct;
 			double		avgcount,
 		mincount,
 		maxmincount;
 
-			if (ndistinct < 0)
-ndistinct = 

Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-02 Thread David Steele
On 3/2/16 11:10 AM, Shulgin, Oleksandr wrote:
> On Wed, Feb 24, 2016 at 12:30 AM, Tomas Vondra
> > wrote:
> 
> I think it'd be useful not to have all the changes in one lump, but
> structure this as a patch series with related changes in separate
> chunks. I doubt we'd like to mix the changes in a single commit, and
> it makes the reviews and reasoning easier. So those NULL-handling
> fixes should be in one patch, the MCV patches in another one.
> 
> 
> OK, such a split would make sense to me.  Though, I'm a bit late as the
> commitfest is already closed to new patches, I guess asking the CF
> manager to split this might work (assuming I produce the patch files)?

If the patch is broken into two files that gives the review/committer
more options but I don't think it requires another CF entry.

-- 
-David
da...@pgmasters.net





Re: [HACKERS] More stable query plans via more predictable column statistics

2016-03-02 Thread Shulgin, Oleksandr
On Wed, Feb 24, 2016 at 12:30 AM, Tomas Vondra  wrote:

> Hi,
>
> On 02/08/2016 03:01 PM, Shulgin, Oleksandr wrote:
> >
> ...
>
>>
>> I've incorporated this fix into the v2 of my patch, I think it is
>> related closely enough.  Also, added corresponding changes to
>> compute_distinct_stats(), which doesn't produce a histogram.
>>
>
> I think it'd be useful not to have all the changes in one lump, but
> structure this as a patch series with related changes in separate chunks. I
> doubt we'd like to mix the changes in a single commit, and it makes the
> reviews and reasoning easier. So those NULL-handling fixes should be in one
> patch, the MCV patches in another one.


OK, such a split would make sense to me.  Though, I'm a bit late as the
commitfest is already closed to new patches, I guess asking the CF manager
to split this might work (assuming I produce the patch files)?

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-02-23 Thread Tomas Vondra

Hi,

On 02/08/2016 03:01 PM, Shulgin, Oleksandr wrote:
>
...


I've incorporated this fix into the v2 of my patch, I think it is
related closely enough.  Also, added corresponding changes to
compute_distinct_stats(), which doesn't produce a histogram.


I think it'd be useful not to have all the changes in one lump, but 
structure this as a patch series with related changes in separate 
chunks. I doubt we'd like to mix the changes in a single commit, and it 
makes the reviews and reasoning easier. So those NULL-handling fixes 
should be in one patch, the MCV patches in another one.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-02-08 Thread Shulgin, Oleksandr
On Mon, Jan 25, 2016 at 5:11 PM, Shulgin, Oleksandr <
oleksandr.shul...@zalando.de> wrote:
>
> On Sat, Jan 23, 2016 at 11:22 AM, Tomas Vondra <
tomas.von...@2ndquadrant.com> wrote:
>>
>>
>> Overall, I think this is really about deciding when to cut-off the MCV,
so that it does not grow needlessly large - as Robert pointed out, the
larger the list, the more expensive the estimation (and thus planning).
>>
>> So even if we could fit the whole sample into the MCV list (i.e. we
believe we've seen all the values and we can fit them into the MCV list),
it may not make sense to do so. The ultimate goal is to estimate
conditions, and if we can do that reasonably even after cutting of the
least frequent values from the MCV list, then why not?
>>
>> From this point of view, the analysis concentrates just on the
ANALYZE part and does not discuss the estimation counter-part at all.
>
>
> True, this aspect still needs verification.  As stated, my primary
motivation was to improve the plan stability for relatively short MCV lists.
>
> Longer MCV lists might be a different story, but see "Increasing stats
target" section of the original mail: increasing the target doesn't give
quite the expected results with unpatched code either.

To address this concern I've run my queries again on the same dataset, now
focusing on how the number of MCV items changes with the patched code
(using the CTEs from my original mail):

WITH ...

SELECT count(1),
   min(num_mcv)::real,
   avg(num_mcv)::real,
   max(num_mcv)::real,
   stddev(num_mcv)::real

  FROM stats2

 WHERE num_mcv IS NOT NULL;

(ORIGINAL)
count  | 27452
min| 1
avg| 32.7115
max| 100
stddev | 40.6927

(PATCHED)
count  | 27527
min| 1
avg| 38.4341
max| 100
stddev | 43.3596

A significant portion of the MCV lists is occupying all 100 slots available
with the default statistics target, so it is also interesting to look at the
stats that have "underfilled" MCV lists (by changing the condition of the
WHERE clause to read "num_mcv < 100"):

(<100 ORIGINAL)
count  | 20980
min| 1
avg| 11.9541
max| 99
stddev | 18.4132

(<100 PATCHED)
count  | 19329
min| 1
avg| 12.3222
max| 99
stddev | 19.6959

As one can see, with the patched code the average length of the MCV lists
doesn't change all that dramatically, while the patch still provides all of
the improvements described in the original mail.
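
(Comparing the averages above: the MCV lists grow from ~32.7 to ~38.4
entries on average, i.e. only about 17% longer.)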

>> After fixing the estimator to consider fraction of NULLs, the estimates
look like this:
>>
>> statistics target |   master  |  patched
>>--
>>   100 | 1302  | 5356
>>  1000 | 6022  | 6791
>>
>> So this seems to significantly improve the ndistinct estimate (patch
attached).
>
>
> Hm... this looks correct.  And compute_distinct_stats() needs the same
treatment, obviously.

I've incorporated this fix into the v2 of my patch, I think it is related
closely enough.  Also, added corresponding changes to
compute_distinct_stats(), which doesn't produce a histogram.

I'm adding this to the next CommitFest.  Further reviews are very much
appreciated!

--
Alex
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 070df29..cbf3538 100644
*** a/src/backend/commands/analyze.c
--- b/src/backend/commands/analyze.c
***
*** 2079,2095 
  		denom,
  		stadistinct;
  
! 			numer = (double) samplerows *(double) d;
  
! 			denom = (double) (samplerows - f1) +
! (double) f1 *(double) samplerows / totalrows;
  
  			stadistinct = numer / denom;
  			/* Clamp to sane range in case of roundoff error */
  			if (stadistinct < (double) d)
  stadistinct = (double) d;
! 			if (stadistinct > totalrows)
! stadistinct = totalrows;
  			stats->stadistinct = floor(stadistinct + 0.5);
  		}
  
--- 2079,2099 
  		denom,
  		stadistinct;
  
! 			double		samplerows_nonnull = samplerows - null_cnt;
! 			double		totalrows_nonnull
! 			= totalrows * (1.0 - stats->stanullfrac);
  
! 			numer = samplerows_nonnull * (double) d;
! 
! 			denom = (samplerows_nonnull - f1) +
! (double) f1 * samplerows_nonnull / totalrows_nonnull;
  
  			stadistinct = numer / denom;
  			/* Clamp to sane range in case of roundoff error */
  			if (stadistinct < (double) d)
  stadistinct = (double) d;
! 			if (stadistinct > totalrows_nonnull)
! stadistinct = totalrows_nonnull;
  			stats->stadistinct = floor(stadistinct + 0.5);
  		}
  
***
*** 2120,2146 
  		}
  		else
  		{
  			double		ndistinct = stats->stadistinct;
- 			double		avgcount,
- 		mincount;
  
  			if (ndistinct < 0)
! ndistinct = -ndistinct * totalrows;
! 			/* estimate # of occurrences in sample of a typical value */
! 			avgcount = (double) samplerows / ndistinct;
! 			/* set minimum threshold count to store a value */
! 			mincount = avgcount * 1.25;
! 			if (mincount < 2)
! mincount = 2;
  			if (num_mcv > track_cnt)

Re: [HACKERS] More stable query plans via more predictable column statistics

2016-01-25 Thread Shulgin, Oleksandr
On Sat, Jan 23, 2016 at 11:22 AM, Tomas Vondra  wrote:

> Hi,
>
> On 01/20/2016 10:49 PM, Alvaro Herrera wrote:
>
>>
>> Tom, are you reviewing this for the current commitfest?
>>
>
> While I'm not the right Tom, I've been looking at the patch recently, so
> let me post the review here ...
>

Thank you for the review!

2) mincount = 1.25 * avgcount
> -
>
> While I share the dislike of arbitrary constants (like the 1.25 here), I
> do think we better keep this, otherwise we can just get rid of the mincount
> entirely I think - the most frequent value will always be above the
> (remaining) average count, making the threshold pointless.
>

Correct.

It might have impact in the original code, but in the new one it's quite
> useless (see the next point), unless I'm missing some detail.
>
>
> 3) modifying avgcount threshold inside the loop
> ---
>
> The comment was extended with this statement:
>
>  * We also decrease ndistinct in the process such that going forward
>  * it refers to the number of distinct values left for the histogram.
>
> and indeed, that's what's happening - at the end of each loop, we do this:
>
> /* Narrow our view of samples left for the histogram */
> sample_cnt -= track[i].count;
> ndistinct--;
>
> but that immediately lowers the avgcount, as that's re-evaluated within
> the same loop
>
> avgcount = (double) sample_cnt / (double) ndistinct;
>
> which means it's continuously decreasing and lowering the threshold,
> although that's partially mitigated by keeping the 1.25 coefficient.
>

I was going to write "not necessarily lowering", but this is actually
accurate.  The following holds due to track[i].count > avgcount (=
sample_cnt / ndistinct):

  sample_cnt     sample_cnt - track[i].count
 ------------ > -----------------------------
  ndistinct           ndistinct - 1
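
(Cross-multiplying -- assuming ndistinct >= 2, so both denominators are
positive -- this reduces to ndistinct * track[i].count > sample_cnt, which
is exactly the premise track[i].count > avgcount.)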

It however makes reasoning about the algorithm much more complicated.
>

Unfortunately, yes.

4) for (i = 0; /* i < num_mcv */; i++)
> ---
>
> The way the loop is coded seems rather awkward, I guess. Not only is there
> an unexpected comment in the "for" clause, but the condition also says this
>
> /* Another way to say "while (i < num_mcv)" */
> if (i >= num_mcv)
> break;
>
> Why not to write it as a while loop, then? Possibly including the
> (num_hist >= 2) condition, like this:
>
> while ((i < num_mcv) && (num_hist >= 2))
> {
> ...
> }
>
> In any case, the loop would deserve a comment explaining why we think
> computing the thresholds like this makes sense.
>

This is partially explained by a comment inside the loop:

! for (i = 0; /* i < num_mcv */; i++)
  {
! /*
! * We have to put this before the loop condition, otherwise
! * we'll have to repeat this code before the loop and after
! * decreasing ndistinct.
! */
! num_hist = ndistinct;
! if (num_hist > num_bins)
! num_hist = num_bins + 1;

I guess this is a case where code duplication can be traded for more
apparent control flow, i.e:

!  num_hist = ndistinct;
!  if (num_hist > num_bins)
!  num_hist = num_bins + 1;

!  for (i = 0; i < num_mcv && num_hist >= 2; i++)
   {
...
+ /* Narrow our view of samples left for the histogram */
+ sample_cnt -= track[i].count;
+ ndistinct--;
+
+  num_hist = ndistinct;
+  if (num_hist > num_bins)
+  num_hist = num_bins + 1;
   }

Summary
> ---
>
> Overall, I think this is really about deciding when to cut-off the MCV, so
> that it does not grow needlessly large - as Robert pointed out, the larger
> the list, the more expensive the estimation (and thus planning).
>
> So even if we could fit the whole sample into the MCV list (i.e. we
> believe we've seen all the values and we can fit them into the MCV list),
> it may not make sense to do so. The ultimate goal is to estimate
> conditions, and if we can do that reasonably even after cutting of the
> least frequent values from the MCV list, then why not?
>
> From this point of view, the analysis concentrates just on the
> ANALYZE part and does not discuss the estimation counter-part at all.
>

True, this aspect still needs verification.  As stated, my primary
motivation was to improve the plan stability for relatively short MCV lists.

Longer MCV lists might be a different story, but see "Increasing stats
target" section of the original mail: increasing the target doesn't give
quite the expected results with unpatched code either.

5) ndistinct estimation vs. NULLs
> -
>
> While looking at the patch, I started realizing whether we're actually
> handling NULLs correctly when estimating ndistinct. Because that part also
> uses samplerows directly and entirely ignores NULLs, as it does this:
>
> numer = (double) samplerows *(double) d;
>
> denom = (double) (samplerows - f1) +
> (double) f1 *(double) samplerows / totalrows;
>
> ...
> if 

Re: [HACKERS] More stable query plans via more predictable column statistics

2016-01-23 Thread Tomas Vondra

Hi,

On 01/20/2016 10:49 PM, Alvaro Herrera wrote:

Tom Lane wrote:

"Shulgin, Oleksandr"  writes:

This post summarizes a few weeks of research of ANALYZE statistics
distribution on one of our bigger production databases with some real-world
data and proposes a patch to rectify some of the oddities observed.


Please add this to the 2016-01 commitfest ...


Tom, are you reviewing this for the current commitfest?


While I'm not the right Tom, I've been looking at the patch recently,
so let me post the review here ...


Firstly, I'd like to appreciate the level of detail of the analysis. I 
may disagree with some of the conclusions, but I wish all my patch 
submissions were of such high quality.


Regarding the patch itself, I think there's a few different points 
raised, so let me discuss them one by one:



1) NULLs vs. MCV threshold
--

I agree that this seems like a bug, and that we should really compute 
the threshold only using non-NULL values. I think the analysis rather 
conclusively proves this, and I also think there are places where we do 
the same mistake (more on that at the end of the review).
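
To illustrate with made-up numbers: in a 30,000-row sample where 90% of the
rows are NULL and the column has about 100 distinct non-NULL values, the
current code computes avgcount = 30000 / 100 = 300 and mincount = 375, so a
value has to cover 12.5% of the 3,000 non-NULL rows to make it into the MCV
list. Computed over the non-NULL rows only, the threshold would be
avgcount = 30 and mincount = 37.5, i.e. 1.25x the actual average frequency.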



2) mincount = 1.25 * avgcount
-

While I share the dislike of arbitrary constants (like the 1.25 here), I 
do think we better keep this, otherwise we can just get rid of the 
mincount entirely I think - the most frequent value will always be above 
the (remaining) average count, making the threshold pointless.


It might have impact in the original code, but in the new one it's quite 
useless (see the next point), unless I'm missing some detail.



3) modifying avgcount threshold inside the loop
---

The comment was extended with this statement:

 * We also decrease ndistinct in the process such that going forward
 * it refers to the number of distinct values left for the histogram.

and indeed, that's what's happening - at the end of each loop, we do this:

/* Narrow our view of samples left for the histogram */
sample_cnt -= track[i].count;
ndistinct--;

but that immediately lowers the avgcount, as that's re-evaluated within 
the same loop


avgcount = (double) sample_cnt / (double) ndistinct;

which means it's continuously decreasing and lowering the threshold, 
although that's partially mitigated by keeping the 1.25 coefficient.


It however makes reasoning about the algorithm much more complicated.


4) for (i = 0; /* i < num_mcv */; i++)
---

The way the loop is coded seems rather awkward, I guess. Not only is
there an unexpected comment in the "for" clause, but the condition
also says this


/* Another way to say "while (i < num_mcv)" */
if (i >= num_mcv)
break;

Why not to write it as a while loop, then? Possibly including the 
(num_hist >= 2) condition, like this:


while ((i < num_mcv) && (num_hist >= 2))
{
...
}

In any case, the loop would deserve a comment explaining why we think 
computing the thresholds like this makes sense.



Summary
---

Overall, I think this is really about deciding when to cut-off the MCV, 
so that it does not grow needlessly large - as Robert pointed out, the 
larger the list, the more expensive the estimation (and thus planning).


So even if we could fit the whole sample into the MCV list (i.e. we 
believe we've seen all the values and we can fit them into the MCV 
list), it may not make sense to do so. The ultimate goal is to estimate 
conditions, and if we can do that reasonably even after cutting of the 
least frequent values from the MCV list, then why not?


From this point of view, the analysis concentrates just on the
ANALYZE part and does not discuss the estimation counter-part at all.



5) ndistinct estimation vs. NULLs
-

While looking at the patch, I started realizing whether we're actually 
handling NULLs correctly when estimating ndistinct. Because that part 
also uses samplerows directly and entirely ignores NULLs, as it does this:


numer = (double) samplerows *(double) d;

denom = (double) (samplerows - f1) +
(double) f1 *(double) samplerows / totalrows;

...
if (stadistinct > totalrows)
stadistinct = totalrows;
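
(For reference, this is the Duj1 estimator of Haas & Stokes, with
n = samplerows, N = totalrows, d = the number of distinct values in the
sample and f1 = the number of values seen exactly once:

    stadistinct = n * d / (n - f1 + f1 * n / N)

so a large NULL fraction inflates n without contributing anything to d
or f1.)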

For tables with large fraction of NULLs, this seems to significantly 
underestimate the ndistinct value - for example consider a trivial table 
with 95% of NULL values and ~10k distinct values with skewed distribution:


create table t (id int);

insert into t
select (case when random() < 0.05 then (1 * random() * random())
 else null end) from generate_series(1,100) s(i);

In practice, there are 8325 distinct values in my sample:

test=# select count(distinct id) from t;
 count
---
  8325
(1 row)

But after ANALYZE with default statistics target (100), ndistinct is 
estimated to be 

Re: [HACKERS] More stable query plans via more predictable column statistics

2016-01-20 Thread Tom Lane
Alvaro Herrera  writes:
> Tom Lane wrote:
>> "Shulgin, Oleksandr"  writes:
>>> This post summarizes a few weeks of research of ANALYZE statistics
>>> distribution on one of our bigger production databases with some real-world
>>> data and proposes a patch to rectify some of the oddities observed.

>> Please add this to the 2016-01 commitfest ...

> Tom, are you reviewing this for the current commitfest?

Um, I would like to review it, but I doubt I'll find time before the end
of the month.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-01-20 Thread Alvaro Herrera
Tom Lane wrote:
> "Shulgin, Oleksandr"  writes:
> > This post summarizes a few weeks of research of ANALYZE statistics
> > distribution on one of our bigger production databases with some real-world
> > data and proposes a patch to rectify some of the oddities observed.
> 
> Please add this to the 2016-01 commitfest ...

Tom, are you reviewing this for the current commitfest?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2016-01-18 Thread Shulgin, Oleksandr
On Wed, Dec 2, 2015 at 10:20 AM, Shulgin, Oleksandr <
oleksandr.shul...@zalando.de> wrote:

> On Tue, Dec 1, 2015 at 7:00 PM, Tom Lane  wrote:
>
>> "Shulgin, Oleksandr"  writes:
>> > This post summarizes a few weeks of research of ANALYZE statistics
>> > distribution on one of our bigger production databases with some
>> real-world
>> > data and proposes a patch to rectify some of the oddities observed.
>>
>> Please add this to the 2016-01 commitfest ...
>>
>
> Added: https://commitfest.postgresql.org/8/434/
>

It would be great if some folks could find a moment to run the queries I
was showing on their data to confirm (or refute) my findings, or to
contribute to the picture in general.

As I was saying, the queries were designed in such a way that even an
unprivileged user can run them (the results will be limited to the stats
data available to that user, obviously; and for custom-tailored statstarget
one still needs superuser to join the pg_statistic table directly).  Also,
on the scale of ~30k attribute statistics records, the queries take only a
few seconds to finish.

Cheers!
--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2015-12-08 Thread Robert Haas
On Fri, Dec 4, 2015 at 12:53 PM, Tom Lane  wrote:
> Robert Haas  writes:
>> Still, maybe we should try to sneak at least this much into
>> 9.5 RSN, because I have to think this is going to help people with
>> mostly-NULL (or mostly-really-wide) columns.
>
> Please no.  We are trying to get to release, not destabilize things.

Well, OK, but I don't really see how that particular bit is anything
other than a bug fix.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2015-12-07 Thread Shulgin, Oleksandr
On Fri, Dec 4, 2015 at 6:48 PM, Robert Haas  wrote:

> On Tue, Dec 1, 2015 at 10:21 AM, Shulgin, Oleksandr
>  wrote:
> >
> > What I have found is that in a significant percentage of instances, when
> a
> > duplicate sample value is *not* put into the MCV list, it does produce
> > duplicates in the histogram_bounds, so it looks like the MCV cut-off
> happens
> > too early, even though we have enough space for more values in the MCV
> list.
> >
> > In the extreme cases I've found completely empty MCV lists and histograms
> > full of duplicates at the same time, with only about 20% of distinct
> values
> > in the histogram (as it turns out, this happens due to high fraction of
> > NULLs in the sample).
>
> Wow, this is very interesting work.  Using values_cnt rather than
> samplerows to compute avgcount seems like a clear improvement.  It
> doesn't make any sense to raise the threshold for creating an MCV
> based on the presence of additional nulls or too-wide values in the
> table.  I bet compute_distinct_stats needs a similar fix.


Yes, and there's also the magic 1.25 multiplier in that code.  I think it
would make sense to agree first on what exactly the patch for
compute_scalar_stats() should look like, then port the relevant bits to
compute_distinct_stats().
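
For anyone skimming the thread, a toy illustration of why the samplerows
-> values_cnt change matters for that threshold.  The numbers are
invented, and I'm simplifying the cut-off to just "a value must appear
more than 1.25 times the average count per distinct value":

#include <stdio.h>

int
main(void)
{
    /* invented: 30000 sampled rows, 95% NULLs, 1000 distinct values */
    double  samplerows = 30000;
    double  values_cnt = 1500;      /* non-null, not-too-wide values */
    double  ndistinct = 1000;

    double  mincount_old = 1.25 * samplerows / ndistinct;  /* ~37.5 */
    double  mincount_new = 1.25 * values_cnt / ndistinct;  /* ~1.9 */

    printf("threshold based on samplerows: %.1f occurrences\n", mincount_old);
    printf("threshold based on values_cnt: %.1f occurrences\n", mincount_new);

    /*
     * With only ~1500 non-null values in the sample, requiring ~38
     * occurrences means a value must cover ~2.5% of all non-null values
     * to make it into the MCV list - which is how we end up with empty
     * MCV lists and histograms full of duplicates.
     */
    return 0;
}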


> But for
> plan stability considerations, I'd say we should back-patch this all
> the way, but those considerations might mitigate for a more restrained
> approach.  Still, maybe we should try to sneak at least this much into
> 9.5 RSN, because I have to think this is going to help people with
> mostly-NULL (or mostly-really-wide) columns.
>

I'm not sure.  Likely people would have complained or found this out on
their own if they were seriously affected.

What I would be interested in is people running the queries I've shown on
their data to see if there are any interesting/unexpected patterns.

> As far as the rest of the fix, your code seems to remove the handling
> for ndistinct < 0.  That seems unlikely to be correct, although it's
> possible that I am missing something.


The difference here is that ndistinct at this scope in the original code
did hide a variable from an outer scope.  That one could be < 0, but in my
code there is no inner-scope ndistinct; we are referring to the outer-scope
variable, which cannot be < 0.
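
In other words, it was plain variable shadowing.  A contrived stand-alone
illustration of that situation (not the actual analyze.c code):

#include <stdio.h>

int
main(void)
{
    int     ndistinct = 42;         /* outer scope: a count, never < 0 */

    {
        double  ndistinct = -0.5;   /* inner scope: hides the outer variable */

        if (ndistinct < 0)          /* refers to the shadowing variable */
            printf("inner ndistinct is negative: %g\n", ndistinct);
    }

    printf("outer ndistinct is still %d\n", ndistinct);
    return 0;
}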


> Aside from that, the rest of
> this seems like a policy change, and I'm not totally sure off-hand
> whether it's the right policy.  Having more MCVs can increase planning
> time noticeably, and the point of the existing cutoff is to prevent us
> from choosing MCVs that aren't actually "C".  I think this change
> significantly undermines those protections.  It seems to me that it
> might be useful to evaluate the effects of this part of the patch
> separately from the samplerows -> values_cnt change.
>

Yes, that's why I was wondering if a frequency cut-off approach might be
helpful here.  At the very least, I'm going to have a deeper look at the
array type's typanalyze implementation.

--
Alex


Re: [HACKERS] More stable query plans via more predictable column statistics

2015-12-04 Thread Tom Lane
Robert Haas  writes:
> Still, maybe we should try to sneak at least this much into
> 9.5 RSN, because I have to think this is going to help people with
> mostly-NULL (or mostly-really-wide) columns.

Please no.  We are trying to get to release, not destabilize things.

I think this is fine work for leisurely review and incorporation into
9.6.  It's not appropriate to rush it into 9.5 at the RC stage after
minimal review.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2015-12-04 Thread Robert Haas
On Tue, Dec 1, 2015 at 10:21 AM, Shulgin, Oleksandr
 wrote:
> Hi Hackers!
>
> This post summarizes a few weeks of research of ANALYZE statistics
> distribution on one of our bigger production databases with some real-world
> data and proposes a patch to rectify some of the oddities observed.
>
>
> Introduction
> 
>
> We have observed that for certain data sets the distribution of samples
> between most_common_vals and histogram_bounds can be unstable: so that it
> may change dramatically with the next ANALYZE run, thus leading to radically
> different plans.
>
> I was revisiting the following performance thread and I've found some
> interesting details about statistics in our environment:
>
>
> http://www.postgresql.org/message-id/flat/CAMkU=1zxynmn11yl8g7agf7k5u4zhvjn0dqcc_eco1qs49u...@mail.gmail.com#CAMkU=1zxynmn11yl8g7agf7k5u4zhvjn0dqcc_eco1qs49u...@mail.gmail.com
>
> My initial interest was in evaluation if distribution of samples could be
> made more predictable and less dependent on the factor of luck, thus leading
> to more stable execution plans.
>
>
> Unexpected findings
> ===
>
> What I have found is that in a significant percentage of instances, when a
> duplicate sample value is *not* put into the MCV list, it does produce
> duplicates in the histogram_bounds, so it looks like the MCV cut-off happens
> too early, even though we have enough space for more values in the MCV list.
>
> In the extreme cases I've found completely empty MCV lists and histograms
> full of duplicates at the same time, with only about 20% of distinct values
> in the histogram (as it turns out, this happens due to high fraction of
> NULLs in the sample).

Wow, this is very interesting work.  Using values_cnt rather than
samplerows to compute avgcount seems like a clear improvement.  It
doesn't make any sense to raise the threshold for creating an MCV
based on the presence of additional nulls or too-wide values in the
table.  I bet compute_distinct_stats needs a similar fix.  But for
plan stability considerations, I'd say we should back-patch this all
the way, but those considerations might mitigate for a more restrained
approach.  Still, maybe we should try to sneak at least this much into
9.5 RSN, because I have to think this is going to help people with
mostly-NULL (or mostly-really-wide) columns.

As far as the rest of the fix, your code seems to remove the handling
for ndistinct < 0.  That seems unlikely to be correct, although it's
possible that I am missing something.  Aside from that, the rest of
this seems like a policy change, and I'm not totally sure off-hand
whether it's the right policy.  Having more MCVs can increase planning
time noticeably, and the point of the existing cutoff is to prevent us
from choosing MCVs that aren't actually "C".  I think this change
significantly undermines those protections.  It seems to me that it
might be useful to evaluate the effects of this part of the patch
separately from the samplerows -> values_cnt change.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More stable query plans via more predictable column statistics

2015-12-02 Thread Shulgin, Oleksandr
On Tue, Dec 1, 2015 at 7:00 PM, Tom Lane  wrote:

> "Shulgin, Oleksandr"  writes:
> > This post summarizes a few weeks of research of ANALYZE statistics
> > distribution on one of our bigger production databases with some
> real-world
> > data and proposes a patch to rectify some of the oddities observed.
>
> Please add this to the 2016-01 commitfest ...
>

Added: https://commitfest.postgresql.org/8/434/


Re: [HACKERS] More stable query plans via more predictable column statistics

2015-12-01 Thread Tom Lane
"Shulgin, Oleksandr"  writes:
> This post summarizes a few weeks of research of ANALYZE statistics
> distribution on one of our bigger production databases with some real-world
> data and proposes a patch to rectify some of the oddities observed.

Please add this to the 2016-01 commitfest ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers