Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-04-09 Thread Tomas Vondra

On Tue, Apr 09, 2019 at 12:14:47PM -0400, Alvaro Herrera wrote:

On 2019-Mar-27, Tomas Vondra wrote:


Attached are the remaining parts of this patch series - the multivariate
histograms, and also a new patch tweaking regression tests for the old
statistic types (ndistinct, dependencies) to adopt the function-based
approach instead of the regular EXPLAIN.

But those are clearly a matter for the future (well, maybe it'd make sense
to commit the regression test change now).


IMO it makes sense to get the test patch pushed now.



OK, I'll take care of that soon.


cheers

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-04-09 Thread Alvaro Herrera
On 2019-Mar-27, Tomas Vondra wrote:

> Attached are the remaining parts of this patch series - the multivariate
> histograms, and also a new patch tweaking regression tests for the old
> statistic types (ndistinct, dependencies) to adopt the function-based
> approach instead of the regular EXPLAIN.
> 
> But those are clearly a matter for the future (well, maybe it'd make sense
> to commit the regression test change now).

IMO it makes sense to get the test patch pushed now.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-04-08 Thread Michael Paquier
On Sat, Mar 30, 2019 at 09:13:01PM +0100, Tomas Vondra wrote:
> Hmmm, what's the right status in the CF app when a part of a patch was
> committed and the rest should be moved to the next CF? Committed, Moved
> to next CF, or something else?

This stuff has been around for nine commit fests, and you have been
able to finish the basic work.  So I think that Committed is most
appropriate, so that you can start later on with a new concept, new patch
sets, perhaps a new thread, and surely a new CF entry.
--
Michael


signature.asc
Description: PGP signature


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-30 Thread Tomas Vondra

On Sun, Mar 31, 2019 at 08:50:53AM +0800, John Naylor wrote:

I believe I found a typo in mcv.c, fix attached.



Thanks, pushed.

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-30 Thread John Naylor
I believe I found a typo in mcv.c, fix attached.

-- 
John Naylor    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


mcv-comment-fix.patch
Description: Binary data


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-30 Thread Tomas Vondra

On Wed, Mar 27, 2019 at 08:55:07PM +0100, Tomas Vondra wrote:

Hi,

I've now committed the MCV part, ...


Hmmm, what's the right status in the CF app when a part of a patch was
committed and the rest should be moved to the next CF? Committed, Moved
to next CF, or something else?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 





Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-27 Thread Petr Jelinek
On 27/03/2019 20:55, Tomas Vondra wrote:
> Hi,
> 
> I've now committed the MCV part, after addressing the last two issues
> raised by Dean:
> 

Congrats!

-- 
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-27 Thread Tomas Vondra

Hi,

I've now committed the MCV part, after addressing the last two issues
raised by Dean:

* The MCV build now always uses the mincount to decide which of the
items to keep in the list.

* Both the MCV build and deserialization now use the same maximum number
of list items (10k).

Unfortunately, I forgot to merge these two fixes before pushing, so I had
to commit them separately. Sorry about that :/

Attached are the remaining parts of this patch series - the multivariate
histograms, and also a new patch tweaking regression tests for the old
statistic types (ndistinct, dependencies) to adopt the function-based
approach instead of the regular EXPLAIN.

But those are clearly a matter for the future (well, maybe it'd make sense
to commit the regression test change now).


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



0001-adopt-regression-tests.patch.gz
Description: application/gzip


0002-multivariate-histograms.patch.gz
Description: application/gzip


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-26 Thread Tomas Vondra

On Tue, Mar 26, 2019 at 01:37:33PM +, Dean Rasheed wrote:

On Tue, 26 Mar 2019 at 11:59, Dean Rasheed  wrote:


On Mon, 25 Mar 2019 at 23:36, Tomas Vondra  wrote:
>
> Attached is an updated patch...

I just looked through the latest set of changes and I have a couple of
additional review comments:



I just spotted another issue while reading the code:

It's possible to build an MCV list with more than
STATS_MCVLIST_MAX_ITEMS = 8192 items, which then causes an error when
the code tries to read it back in:

create temp table foo(a int, b int);
insert into foo select x,x from generate_series(1,10000) g(x);
insert into foo select x,x from generate_series(1,10000) g(x);
alter table foo alter column a set statistics 10000;
alter table foo alter column b set statistics 10000;
create statistics s (mcv) on a,b from foo;
analyse foo;
select * from foo where a=1 and b=1;

ERROR:  invalid length (10000) item array in MCVList

So this needs to be checked when building the MCV list.

In fact, the stats targets for table columns can be as large as 10000
(a hard-coded constant in tablecmds.c, which is pretty ugly, but
that's a different matter), so I think STATS_MCVLIST_MAX_ITEMS
probably ought to match that.

There are also a couple of comments that refer to the 8k limit, which
would need updating, if you change it.



I think we can simply ditch this separate limit, and rely on the
statistics target. The one issue with it is that if we ever allow the
statistics target to exceed 65535, we may end up overflowing uint16 (which
is used in the serialized representation). But I think it's OK to just
check that in an assert, or so.
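Something like this in the serialization code should be enough (a sketch
only; the variable names are assumptions based on the discussion above):

    /*
     * The serialized representation stores item counts and value indexes
     * as uint16, so make sure the statistics target cannot overflow that.
     */
    Assert(mcvlist->nitems <= PG_UINT16_MAX);
    Assert(info[dim].nvalues <= PG_UINT16_MAX);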

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-26 Thread Tomas Vondra

On Tue, Mar 26, 2019 at 11:59:56AM +, Dean Rasheed wrote:

On Mon, 25 Mar 2019 at 23:36, Tomas Vondra  wrote:


Attached is an updated patch, fixing all the issues pointed out so far.
Unless there are some objections, I plan to commit the 0001 part by the
end of this CF. Part 0002 is a matter for PG13, as previously agreed.



Yes, I think that's reasonable. It looks to be in pretty good shape. I
have reviewed most of the actual code, but note that I haven't
reviewed the docs changes and I didn't spend much time reading code
comments. It might benefit from a quick once-over comment tidy up.

I just looked through the latest set of changes and I have a couple of
additional review comments:

In the comment about WIDTH_THRESHOLD, s/pg_statistic/pg_statistic_ext/.

In statext_mcv_build(), I'm not convinced by the logic around keeping
the whole MCV list if it fits. Suppose there were a small number of
very common values, and then a bunch of uniformly distributed less
common values. The sample might consist of all the common values, plus
one or two instances of some of the uncommon ones, leading to a list
that would fit, but it would not be appropriate to keep the uncommon
values on the basis of having seen them only one or two times. The
fact that the list of items seen fits doesn't by itself mean that
they're all common enough to justify being kept. In the per-column
stats case, there are a bunch of other checks that have to pass, which
are intended to test not just that the list fits, but that it believes
that those are all the items in the table. For MV stats, you don't
have that, and so I think it would be best to just remove that test
(the "if (ngroups > nitems)" test) and *always* call
get_mincount_for_mcv_list() to determine how many MCV items to keep.
Otherwise there is a risk of keeping too many MCV items, with the ones
at the tail end of the list producing poor estimates.



I need to think about it a bit, but I think you're right we may not need
to keep those items. The reason why I decided to keep them is that they
may represent cases where the (actual frequency << base frequency). But
now that I think about it, that can probably be handled as

 ((1 - total_sel) / ndistinct)

So yeah, I think I'll just change the code to always use the mincount.
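That is, roughly this when estimating a clause not covered by the MCV
list (a sketch, with assumed variable names):

    /*
     * Correction for values not in the MCV list: spread the probability
     * mass not covered by the list uniformly across the distinct groups.
     */
    other_sel = (1.0 - total_sel) / ndistinct;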


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-26 Thread Dean Rasheed
On Tue, 26 Mar 2019 at 11:59, Dean Rasheed  wrote:
>
> On Mon, 25 Mar 2019 at 23:36, Tomas Vondra  
> wrote:
> >
> > Attached is an updated patch...
>
> I just looked through the latest set of changes and I have a couple of
> additional review comments:
>

I just spotted another issue while reading the code:

It's possible to build an MCV list with more than
STATS_MCVLIST_MAX_ITEMS = 8192 items, which then causes an error when
the code tries to read it back in:

create temp table foo(a int, b int);
insert into foo select x,x from generate_series(1,10000) g(x);
insert into foo select x,x from generate_series(1,10000) g(x);
alter table foo alter column a set statistics 10000;
alter table foo alter column b set statistics 10000;
create statistics s (mcv) on a,b from foo;
analyse foo;
select * from foo where a=1 and b=1;

ERROR:  invalid length (10000) item array in MCVList

So this needs to be checked when building the MCV list.
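A minimal sketch of such a check (the variable names are assumptions):

    /* never keep more items than deserialization can read back in */
    if (nitems > STATS_MCVLIST_MAX_ITEMS)
        nitems = STATS_MCVLIST_MAX_ITEMS;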

In fact, the stats targets for table columns can be as large as 10000
(a hard-coded constant in tablecmds.c, which is pretty ugly, but
that's a different matter), so I think STATS_MCVLIST_MAX_ITEMS
probably ought to match that.

There are also a couple of comments that refer to the 8k limit, which
would need updating, if you change it.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-26 Thread Dean Rasheed
On Mon, 25 Mar 2019 at 23:36, Tomas Vondra  wrote:
>
> Attached is an updated patch, fixing all the issues pointed out so far.
> Unless there are some objections, I plan to commit the 0001 part by the
> end of this CF. Part 0002 is a matter for PG13, as previously agreed.
>

Yes, I think that's reasonable. It looks to be in pretty good shape. I
have reviewed most of the actual code, but note that I haven't
reviewed the docs changes and I didn't spend much time reading code
comments. It might benefit from a quick once-over comment tidy up.

I just looked through the latest set of changes and I have a couple of
additional review comments:

In the comment about WIDTH_THRESHOLD, s/pg_statistic/pg_statistic_ext/.

In statext_mcv_build(), I'm not convinced by the logic around keeping
the whole MCV list if it fits. Suppose there were a small number of
very common values, and then a bunch of uniformly distributed less
common values. The sample might consist of all the common values, plus
one or two instances of some of the uncommon ones, leading to a list
that would fit, but it would not be appropriate to keep the uncommon
values on the basis of having seen them only one or two times. The
fact that the list of items seen fits doesn't by itself mean that
they're all common enough to justify being kept. In the per-column
stats case, there are a bunch of other checks that have to pass, which
are intended to test not just that the list fits, but that it believes
that those are all the items in the table. For MV stats, you don't
have that, and so I think it would be best to just remove that test
(the "if (ngroups > nitems)" test) and *always* call
get_mincount_for_mcv_list() to determine how many MCV items to keep.
Otherwise there is a risk of keeping too many MCV items, with the ones
at the tail end of the list producing poor estimates.
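Roughly like this, sketched with assumed variable names (assuming the
groups are sorted by descending count):

    /* always apply the cutoff, even if all sampled groups would fit */
    mincount = get_mincount_for_mcv_list(numrows, totalrows);

    nitems = 0;
    while (nitems < ngroups && groups[nitems].count >= mincount)
        nitems++;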

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-25 Thread Tomas Vondra



On 3/24/19 8:36 AM, Dean Rasheed wrote:
> On Sun, 24 Mar 2019 at 00:17, David Rowley  
> wrote:
>>
>> On Sun, 24 Mar 2019 at 12:41, Tomas Vondra  
>> wrote:
>>>
>>> On 3/21/19 4:05 PM, David Rowley wrote:
>>
 29. Looking at the tests I see you're testing that you get bad
 estimates without extended stats.  That does not really seem like
 something that should be done in tests that are meant for extended
 statistics.

>>>
>>> True, it might be a bit unnecessary. Initially the tests were meant to
>>> show old/new estimates for development purposes, but it might not be
>>> appropriate for regression tests. I don't think it's a big issue, it's
>>> not like it'd slow down the tests significantly. Opinions?
>>
>> My thoughts were that if someone did something to improve non-MV
>> stats, then is it right for these tests to fail? What should the
>> developer do in the case? update the expected result? remove the test?
>> It's not so clear.
>>
> 
> I think the tests are fine as they are. Don't think of these as "good"
> and "bad" estimates. They should both be "good" estimates, but under
> different assumptions -- one assuming no correlation between columns,
> and one taking into account the relationship between the columns. If
> someone does do something to "improve" the non-MV stats, then the
> former tests ought to tell us whether it really was an improvement. If
> so, then the test result can be updated and perhaps whatever was done
> ought to be factored into the MV-stats' calculation of base
> frequencies. If not, the test is providing valuable feedback that
> perhaps it wasn't such a good improvement after all.
> 

Yeah, I agree. I'm sure there are ways to further simplify (or otherwise
improve) the tests, but I think those tests are useful to demonstrate
what the "baseline" estimates are.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-25 Thread Tomas Vondra
Hi,

Attached is an updated patch, fixing all the issues pointed out so far.
Unless there are some objections, I plan to commit the 0001 part by the
end of this CF. Part 0002 is a matter for PG13, as previously agreed.


On 3/24/19 1:17 AM, David Rowley wrote:
> On Sun, 24 Mar 2019 at 12:41, Tomas Vondra  
> wrote:
>>
>> On 3/21/19 4:05 PM, David Rowley wrote:
>>> 11. In get_mincount_for_mcv_list() it's probably better to have the
>>> numerical literals of 0.0 instead of just 0.
>>
>> Why?
> 
> Isn't it what we do for float and double literals?
> 

OK, fixed.

>>
>>> 12. I think it would be better if you modified build_attnums_array()
>>> to add an output argument that sets the size of the array. It seems
>>> that most places you call this function you perform bms_num_members()
>>> to determine the array size.
>>
>> Hmmm. I've done this, but I'm not sure I like it very much - there's no
protection that the value passed in is the right one, so the array might be
>> allocated either too small or too large. I think it might be better to
>> make it work the other way, i.e. pass the value out instead.
> 
> When I said "that sets the size", I meant "that gets set to the size",
> sorry for the confusion.  I mean, if you're doing bms_num_members()
> inside build_attnums_array() anyway, then this will save you from
> having to do that in the callers too.
> 

OK, I've done this now, and I'm fairly happy with it.
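For the record, it now looks roughly like this (a sketch of the approach,
not the committed code; the MaxAttrNumber bound check is an addition here):

static AttrNumber *
build_attnums_array(Bitmapset *attrs, int *numattrs)
{
    int         i = 0;
    int         j = -1;
    AttrNumber *attnums;

    *numattrs = bms_num_members(attrs);
    attnums = (AttrNumber *) palloc(sizeof(AttrNumber) * (*numattrs));

    while ((j = bms_next_member(attrs, j)) >= 0)
    {
        /* bitmapset members are non-negative; 0 would be a system attnum */
        Assert(AttrNumberIsForUserDefinedAttr(j));
        /* bitmapsets store ints, so check the value fits into int16 */
        Assert(j <= MaxAttrNumber);

        attnums[i++] = (AttrNumber) j;
    }

    return attnums;
}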

>>> 28. Can you explain what this is?
>>>
>>> uint32 type; /* type of MCV list (BASIC) */
>>>
>>> I see: #define STATS_MCV_TYPE_BASIC 1 /* basic MCV list type */
>>>
>>> but it's not really clear to me what else could exist. Maybe the
>>> "type" comment can explain there's only one type for now, but more
>>> might exist in the future?
>>>
>>
>> It's the same idea as for dependencies/mvdistinct stats, i.e.
>> essentially a version number for the data structure so that we can
>> perhaps introduce some improved version of the data structure in the future.
>>
>> But now that I think about it, it seems a bit pointless. We would only
>> do that in a major version anyway, and we don't keep statistics during
>> upgrades. So we could just as well introduce the version/flag/... if
>> needed. We can't do this for regular persistent data, but for stats it
>> does not matter.
>>
>> So I propose we just remove this thingy from both the existing stats and
>> this patch.
> 
> I see. I wasn't aware that existed for the other types.   It certainly
> gives some wiggle room if some mistakes were discovered after the
> release, but thinking about it, we could probably just change the
> "magic" number and add new code in that branch only to ignore the old
> magic number, perhaps with a WARNING to analyze the table again. The
> magic field seems sufficiently early in the struct that we could do
> that.  In the master branch we'd just error if the magic number didn't
> match, since we wouldn't have to deal with stats generated by the
> previous version's bug.
> 

OK. I've decided to keep the field for now, for sake of consistency with
the already existing statistic types. I think we can rethink that in the
future, if needed.

>>> 29. Looking at the tests I see you're testing that you get bad
>>> estimates without extended stats.  That does not really seem like
>>> something that should be done in tests that are meant for extended
>>> statistics.
>>>
>>
>> True, it might be a bit unnecessary. Initially the tests were meant to
>> show old/new estimates for development purposes, but it might not be
>> appropriate for regression tests. I don't think it's a big issue, it's
>> not like it'd slow down the tests significantly. Opinions?
> 
> My thoughts were that if someone did something to improve non-MV
> stats, then is it right for these tests to fail? What should the
> developer do in the case? update the expected result? remove the test?
> It's not so clear.
> 

I think such changes would affect a number of other places in regression
tests (changing plans, ...), so I don't see why fixing these tests would
be any different.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-multivariate-MCV-lists-20190325.patch.gz
Description: application/gzip


0002-multivariate-histograms-20190325.patch.gz
Description: application/gzip


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-24 Thread Dean Rasheed
On Sun, 24 Mar 2019 at 00:17, David Rowley  wrote:
>
> On Sun, 24 Mar 2019 at 12:41, Tomas Vondra  
> wrote:
> >
> > On 3/21/19 4:05 PM, David Rowley wrote:
>
> > > 29. Looking at the tests I see you're testing that you get bad
> > > estimates without extended stats.  That does not really seem like
> > > something that should be done in tests that are meant for extended
> > > statistics.
> > >
> >
> > True, it might be a bit unnecessary. Initially the tests were meant to
> > show old/new estimates for development purposes, but it might not be
> > appropriate for regression tests. I don't think it's a big issue, it's
> > not like it'd slow down the tests significantly. Opinions?
>
> My thoughts were that if someone did something to improve non-MV
> stats, then is it right for these tests to fail? What should the
> developer do in the case? update the expected result? remove the test?
> It's not so clear.
>

I think the tests are fine as they are. Don't think of these as "good"
and "bad" estimates. They should both be "good" estimates, but under
different assumptions -- one assuming no correlation between columns,
and one taking into account the relationship between the columns. If
someone does do something to "improve" the non-MV stats, then the
former tests ought to tell us whether it really was an improvement. If
so, then the test result can be updated and perhaps whatever was done
ought to be factored into the MV-stats' calculation of base
frequencies. If not, the test is providing valuable feedback that
perhaps it wasn't such a good improvement after all.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-23 Thread David Rowley
On Sun, 24 Mar 2019 at 12:41, Tomas Vondra  wrote:
>
> On 3/21/19 4:05 PM, David Rowley wrote:
> > 11. In get_mincount_for_mcv_list() it's probably better to have the
> > numerical literals of 0.0 instead of just 0.
>
> Why?

Isn't it what we do for float and double literals?

>
> > 12. I think it would be better if you modified build_attnums_array()
> > to add an output argument that sets the size of the array. It seems
> > that most places you call this function you perform bms_num_members()
> > to determine the array size.
>
> Hmmm. I've done this, but I'm not sure I like it very much - there's no
> protection that the value passed in is the right one, so the array might be
> allocated either too small or too large. I think it might be better to
> make it work the other way, i.e. pass the value out instead.

When I said "that sets the size", I meant "that gets set to the size",
sorry for the confusion.  I mean, if you're doing bms_num_members()
inside build_attnums_array() anyway, then this will save you from
having to do that in the callers too.

> > 21. Do you think it would be better to declare
> > pg_stats_ext_mcvlist_items() to accept the oid of the pg_statistic_ext
> > row rather than the stxmcv column? (However, I do see you have a mcv
> > type, so perhaps you might want other types in the future?)
> >
>
> I don't think so, I don't see what advantages would it have.

Okay. I just wanted to ask the question.  When I thought of it I had
in mind that it might be possible to carefully craft some bytea value
to have the function crash, but when I tried to I discovered that the
input function for the pg_mcv_list just errors, so it's impossible to
cast a bytea value to pg_mcv_list.

> > 28. Can you explain what this is?
> >
> > uint32 type; /* type of MCV list (BASIC) */
> >
> > I see: #define STATS_MCV_TYPE_BASIC 1 /* basic MCV list type */
> >
> > but it's not really clear to me what else could exist. Maybe the
> > "type" comment can explain there's only one type for now, but more
> > might exist in the future?
> >
>
> It's the same idea as for dependencies/mvdistinct stats, i.e.
> essentially a version number for the data structure so that we can
> perhaps introduce some improved version of the data structure in the future.
>
> But now that I think about it, it seems a bit pointless. We would only
> do that in a major version anyway, and we don't keep statistics during
> upgrades. So we could just as well introduce the version/flag/... if
> needed. We can't do this for regular persistent data, but for stats it
> does not matter.
>
> So I propose we just remove this thingy from both the existing stats and
> this patch.

I see. I wasn't aware that existed for the other types.   It certainly
gives some wiggle room if some mistakes were discovered after the
release, but thinking about it, we could probably just change the
"magic" number and add new code in that branch only to ignore the old
magic number, perhaps with a WARNING to analyze the table again. The
magic field seems sufficiently early in the struct that we could do
that.  In the master branch we'd just error if the magic number didn't
match, since we wouldn't have to deal with stats generated by the
previous version's bug.
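Something along these lines in the back branch, I imagine (a sketch; the
constant name and the message wording are assumptions):

    if (mcvlist->magic != STATS_MCV_MAGIC)
    {
        /* stats were built by the buggy previous version: ignore them */
        ereport(WARNING,
                (errmsg("ignoring MCV statistics with unexpected magic number, please re-run ANALYZE")));
        return NULL;
    }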

> > 29. Looking at the tests I see you're testing that you get bad
> > estimates without extended stats.  That does not really seem like
> > something that should be done in tests that are meant for extended
> > statistics.
> >
>
> True, it might be a bit unnecessary. Initially the tests were meant to
> show old/new estimates for development purposes, but it might not be
> appropriate for regression tests. I don't think it's a big issue, it's
> not like it'd slow down the tests significantly. Opinions?

My thoughts were that if someone did something to improve non-MV
stats, then is it right for these tests to fail? What should the
developer do in the case? update the expected result? remove the test?
It's not so clear.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-23 Thread Tomas Vondra
On 3/21/19 4:05 PM, David Rowley wrote:
> On Mon, 18 Mar 2019 at 02:18, Tomas Vondra  
> wrote:
>> Yes, it was using the toasted value directly. The attached patch
>> detoasts the value explicitly, similarly to the per-column stats, and it
>> also removes the 1MB limit.
> 
> I just made a pass over 0001 and 0002.
> 
> 0002 is starting to look pretty good, but I did note down a few things
> while looking.  Some things below might just me being unclear on how
> something works. Perhaps that means more comments are needed, but it
> might also mean I need a brain upgrade. I'm hoping it's the former.
> 

That's good to hear. Thanks for the review.

> 0001:
> 
> 1. Could you write a full commit message for this patch. Without
> reading the backlog on this ticket it's not all that obvious what the
> patch aims to fix. (I have read the backlog, so I know, but the next
> person might not have)
> 
> 2. Should all the relpages variables be BlockNumber rather than double?
> 

Probably. But I think the conclusion from the discussion with Dean was
that tweaking the relpages/reltuples reset should really be a matter for
a separate patch. So I've removed it from this patch series and the
tests were modified to check the stats are still there.

> 0002:
> 
> 3. I'm not sure what the following is trying to say:
> 
> * Estimate selectivity on any clauses applicable by stats tracking
> * actual values first, then apply functional dependencies on the
> * remaining clauses.
> 
> can you reword it?
> 

It was supposed to say we first try to apply the more complicated stats
(those that track dependencies between values) before applying the
simpler ones that only track dependencies between columns. I've reworked
and simplified comments in this part of the code.

> 4. This seems out of date:
> 
> * clauses that we've already estimated for.  Each selectivity
> * function will set the appropriate bit in the bitmapset to mark that
> * no further estimation is required for that list item.
> 
> We're only passing estimatedclauses to 1 function before
> clauselist_selectivity_simple is called for the remainder.
> 

True. I've simplified/reworded this. The old wording was mostly a
residue of how this worked in previous patch versions.

> 5. In build_attnums_array there's
> Assert(AttrNumberIsForUserDefinedAttr(j)); I just wanted to point out
> that this could only possibly trigger of the bitmapset had a 0 member.
> It cannot have negative members.  Maybe it would be worth adding a
> comment to acknowledge that as it looks a bit misguided otherwise.
> 

Right. I've added an explanation, and another assert checking the
maximum value (because bitmaps store integers, but we only expect
attnums here).

> 6. In build_attnums_array(), what's the reason to return int *, rather
> than an AttrNumber * ? Likewise in the code that calls that function.
> 

Laziness, I guess. Also, bitmaps work with int members, so it was kinda
natural. But you're right AttrNumber is a better choice, so fixed.

> 7. Not properly indented. Should be two tabs.
> 
>  * build sorted array of SortItem with values from rows
> 
> Should also be "a sorted array"
> 

Fixed.

> 8. This comment seems to duplicate what is just mentioned in the
> header comment for the function.
> 
> /*
> * We won't allocate the arrays for each item independenly, but in one
> * large chunk and then just set the pointers. This allows the caller to
> * simply pfree the return value to release all the memory.
> */
> 
> Also, typo "independenly" -> "independently"
> 

Fixed. I've removed this comment, the function comment is enough.

> 9. Not properly indented:
> 
> /*
>  * statext_is_compatible_clause_internal
>  * Does the heavy lifting of actually inspecting the clauses for
>  * statext_is_compatible_clause. It needs to be split like this because
>  * of recursion.  The attnums bitmap is an input/output parameter collecting
>  * attribute numbers from all compatible clauses (recursively).
>  */
> 

Fixed. It might be a tad too similar to statext_is_compatible_clause
comment, though.

> 10. Header comment for get_mincount_for_mcv_list() ends with
> *-- but does not start with that.
> 

Fixed.

> 11. In get_mincount_for_mcv_list() it's probably better to have the
> numerical literals of 0.0 instead of just 0.
> 

Why?

> 12. I think it would be better if you modified build_attnums_array()
> to add an output argument that sets the size of the array. It seems
> that most places you call this function you perform bms_num_members()
> to determine the array size.
> 

Hmmm. I've done this, but I'm not sure I like it very much - there's no
protection that the value passed in is the right one, so the array might be
allocated either too small or too large. I think it might be better to
make it work the other way, i.e. pass the value out instead.

> 13. This comment seems to be having a fight with itself:
> 
> * Preallocate Datum/isnull arrays (not as a single chunk, as we will
> * pass the result outside and thus it needs to be easy to pfree().

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-21 Thread David Rowley
On Mon, 18 Mar 2019 at 02:18, Tomas Vondra  wrote:
> Yes, it was using the toasted value directly. The attached patch
> detoasts the value explicitly, similarly to the per-column stats, and it
> also removes the 1MB limit.

I just made a pass over 0001 and 0002.

0002 is starting to look pretty good, but I did note down a few things
while looking.  Some things below might just me being unclear on how
something works. Perhaps that means more comments are needed, but it
might also mean I need a brain upgrade. I'm hoping it's the former.

0001:

1. Could you write a full commit message for this patch. Without
reading the backlog on this ticket it's not all that obvious what the
patch aims to fix. (I have read the backlog, so I know, but the next
person might not have)

2. Should all the relpages variables be BlockNumber rather than double?

0002:

3. I'm not sure what the following is trying to say:

* Estimate selectivity on any clauses applicable by stats tracking
* actual values first, then apply functional dependencies on the
* remaining clauses.

can you reword it?

4. This seems out of date:

* clauses that we've already estimated for.  Each selectivity
* function will set the appropriate bit in the bitmapset to mark that
* no further estimation is required for that list item.

We're only passing estimatedclauses to 1 function before
clauselist_selectivity_simple is called for the remainder.

5. In build_attnums_array there's
Assert(AttrNumberIsForUserDefinedAttr(j)); I just wanted to point out
that this could only possibly trigger if the bitmapset had a 0 member.
It cannot have negative members.  Maybe it would be worth adding a
comment to acknowledge that as it looks a bit misguided otherwise.

6. In build_attnums_array(), what's the reason to return int *, rather
than an AttrNumber * ? Likewise in the code that calls that function.

7. Not properly indented. Should be two tabs.

 * build sorted array of SortItem with values from rows

Should also be "a sorted array"

8. This comment seems to duplicate what is just mentioned in the
header comment for the function.

/*
* We won't allocate the arrays for each item independenly, but in one
* large chunk and then just set the pointers. This allows the caller to
* simply pfree the return value to release all the memory.
*/

Also, typo "independenly" -> "independently"

9. Not properly indented:

/*
 * statext_is_compatible_clause_internal
 * Does the heavy lifting of actually inspecting the clauses for
 * statext_is_compatible_clause. It needs to be split like this because
 * of recursion.  The attnums bitmap is an input/output parameter collecting
 * attribute numbers from all compatible clauses (recursively).
 */

10. Header comment for get_mincount_for_mcv_list() ends with
*-- but does not start with that.

11. In get_mincount_for_mcv_list() it's probably better to have the
numerical literals of 0.0 instead of just 0.

12. I think it would be better if you modified build_attnums_array()
to add an output argument that sets the size of the array. It seems
that most places you call this function you perform bms_num_members()
to determine the array size.

13. This comment seems to be having a fight with itself:

* Preallocate Datum/isnull arrays (not as a single chunk, as we will
* pass the result outside and thus it needs to be easy to pfree().
*
* XXX On second thought, we're the only ones dealing with MCV lists,
* so we might allocate everything as a single chunk to reduce palloc
* overhead (chunk headers, etc.) without significant risk. Not sure
* it's worth it, though, as we're not re-building stats very often.

14. The following might be easier to read if you used a local variable
instead of counts[dim].

for (i = 0; i < mcvlist->nitems; i++)
{
/* skip NULL values - we don't need to deduplicate those */
if (mcvlist->items[i]->isnull[dim])
continue;

values[dim][counts[dim]] = mcvlist->items[i]->values[dim];
counts[dim] += 1;
}

Then just assign the value of the local variable to counts[dim] at the end.

15. Why does this not use stats[dim]->attrcollid ?

ssup[dim].ssup_collation = DEFAULT_COLLATION_OID;

16. The following:

else if (info[dim].typlen == -2) /* cstring */
{
info[dim].nbytes = 0;
for (i = 0; i < info[dim].nvalues; i++)
{
values[dim][i] = PointerGetDatum(PG_DETOAST_DATUM(values[dim][i]));
info[dim].nbytes += strlen(DatumGetCString(values[dim][i]));
}
}

seems to conflict with:

else if (info[dim].typlen == -2) /* cstring */
{
memcpy(data, DatumGetCString(v), strlen(DatumGetCString(v)) + 1);
data += strlen(DatumGetCString(v)) + 1; /* terminator */
}

It looks like you'll reserve 1 byte too few for each cstring value.

(Might also be nicer to assign the strlen to a local variable rather
than leave it up to the compiler to optimize out the 2nd strlen call
in the latter of the two code fragments above.)
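That is, the sizing loop presumably needs to count the terminator too,
something like this (a sketch, reworking the first fragment above):

else if (info[dim].typlen == -2)    /* cstring */
{
    info[dim].nbytes = 0;
    for (i = 0; i < info[dim].nvalues; i++)
    {
        Size        len;

        values[dim][i] = PointerGetDatum(PG_DETOAST_DATUM(values[dim][i]));

        /* count the '\0' terminator too, matching the copying code */
        len = strlen(DatumGetCString(values[dim][i])) + 1;
        info[dim].nbytes += len;
    }
}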

17. I wonder if some compilers will warn about this:

ITEM_INDEXES(item)[dim] = (value - values[dim]);

Probably a cast to uint16 would silence that.

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-17 Thread Tomas Vondra
Hi,

On 3/17/19 12:47 PM, Dean Rasheed wrote:
> On Sat, 16 Mar 2019 at 23:44, Tomas Vondra  
> wrote:
>>
>>> 28). I just spotted the 1MB limit on the serialised MCV list size. I
>>> think this is going to be too limiting. For example, if the stats
>>> target is at its maximum of 10000, that only leaves around 100 bytes
>>> for each item's values, which is easily exceeded. In fact, I think
>>> this approach for limiting the MCV list size isn't a good one --
>>> consider what would happen if there were lots of very large values.
>>> Would it run out of memory before getting to that test? Even if not,
>>> it would likely take an excessive amount of time.
>>>
>>
>> True. I don't have a very good argument for a specific value, or even
>> having an explicit limit at all. I've initially added it mostly as a
>> safety for development purposes, but I think you're right we can just
>> get rid of it. I don't think it'd run out of memory before hitting the
>> limit, but I haven't tried very hard (but I recall running into the 1MB
>> limit in the past).
>>
> 
> I've just been playing around a little with this and found that it
> isn't safely dealing with toasted values. For example, consider the
> following test:
> 
> create or replace function random_string(x int) returns text
> as $$
>   select substr(string_agg(md5(random()::text), ''), 1, x)
> from generate_series(1,(x+31)/32);
> $$ language sql;
> 
> drop table if exists t;
> create table t(a int, b text);
> insert into t values (1, random_string(10000000));
> create statistics s (mcv) on a,b from t;
> analyse t;
> 
> select length(b), left(b,5), right(b,5) from t;
> select length(stxmcv), length((m.values::text[])[2]),
>left((m.values::text[])[2], 5), right((m.values::text[])[2],5)
>   from pg_statistic_ext, pg_mcv_list_items(stxmcv) m
>  where stxrelid = 't'::regclass;
> 
> The final query returns the following:
> 
>  length |  length  | left  | right
> --------+----------+-------+-------
>     250 | 10000000 | c2667 | 71492
> (1 row)
> 
> suggesting that there's something odd about the stxmcv value. Note,
> also, that it doesn't hit the 1MB limit, even though the value is much
> bigger than that.
> 
> If I then delete the value from the table, without changing the stats,
> and repeat the final query, it falls over:
> 
> delete from t where a=1;
> select length(stxmcv), length((m.values::text[])[2]),
>left((m.values::text[])[2], 5), right((m.values::text[])[2],5)
>   from pg_statistic_ext, pg_mcv_list_items(stxmcv) m
>  where stxrelid = 't'::regclass;
> 
> ERROR:  unexpected chunk number 5008 (expected 0) for toast value
> 16486 in pg_toast_16480
> 
> So I suspect it was using the toast data from the table t, although
> I've not tried to investigate further.
> 

Yes, it was using the toasted value directly. The attached patch
detoasts the value explicitly, similarly to the per-column stats, and it
also removes the 1MB limit.
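The gist of the fix, for reference (a sketch mirroring what the per-column
stats code does; the variable names are assumptions):

    /*
     * Force a detoasted, in-line copy of varlena values, so the serialized
     * MCV list does not keep pointers into the table's TOAST storage.
     */
    if (info[dim].typlen == -1)     /* varlena */
        values[dim][i] = PointerGetDatum(PG_DETOAST_DATUM(values[dim][i]));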

> 
>>> I think this part of the patch needs a bit of a rethink. My first
>>> thought is to do something similar to what happens for per-column
>>> MCVs, and set an upper limit on the size of each value that is ever
>>> considered for inclusion in the stats (c.f. WIDTH_THRESHOLD and
>>> toowide_cnt in analyze.c). Over-wide values should be excluded early
>>> on, and it will need to track whether or not any such values were
>>> excluded, because then it wouldn't be appropriate to treat the stats
>>> as complete and keep the entire list, without calling
>>> get_mincount_for_mcv_list().
>>>
>> Which part? Serialization / deserialization? Or how we handle long
>> values when building the MCV list?
>>
> 
> I was thinking (roughly) of something like the following:
> 
> * When building the values array for the MCV list, strip out rows with
> values wider than some threshold (probably something like the
> WIDTH_THRESHOLD = 1024 from analyze.c would be reasonable).
> 
> * When building the MCV list, if some over-wide values were previously
> stripped out, always go into the get_mincount_for_mcv_list() block,
> even if nitems == ngroups (for the same reason a similar thing happens
> for per-column stats -- if some items were stripped out, we're already
> saying that not all items will go in the MCV list, and it's not safe
> to assume that the remaining items are common enough to give accurate
> estimates).
> 

Yes, that makes sense I guess.

> * In the serialisation code, remove the size limit entirely. We know
> that each value is now at most 1024 bytes, and there are at most 10000
> items, and at most 8 columns, so the total size is already reasonably
> well bounded. In the worst case, it might be around 80MB, but in
> practice, it's always likely to be much much smaller than that.
> 

Yep, I've already removed the limit from the current patch.


cheers

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-17 Thread Tomas Vondra



On 3/17/19 1:14 PM, Dean Rasheed wrote:
> On Sat, 16 Mar 2019 at 23:44, Tomas Vondra  
> wrote:
>>>
>>> 16). This regression test fails for me:
>>>
>>> @@ -654,11 +654,11 @@
>>>  -- check change of unrelated column type does not reset the MCV statistics
>>>  ALTER TABLE mcv_lists ALTER COLUMN d TYPE VARCHAR(64);
>>>  SELECT * FROM check_estimated_rows('SELECT * FROM mcv_lists WHERE a =
>>> 1 AND b = ''1''');
>>>   estimated | actual
>>>  -----------+--------
>>> -        50 |     50
>>> +        11 |     50
>>>  (1 row)
>>>
>>> Maybe that's platform-dependent, given what you said about
>>> reltuples/relpages being reset. An easy workaround for this would be
>>> to modify this test (and perhaps the one that follows) to just query
>>> pg_statistic_ext to see if the MCV statistics have been reset.
>>>
>>
>> Ah, sorry for not explaining this bit - the failure is expected, due to
>> the reset of relpages/reltuples I mentioned. We do keep the extended
>> stats, but the relsize estimate changes a bit. It surprised me a bit,
>> and this test made the behavior apparent. The last patchset included a
>> piece that changes that - if we decide not to change this, I think we
>> can simply accept the actual output.
>>
> 
> I don't think changing the way reltuples is reset ought to be within
> the scope of this patch. There might be good reasons for it being the
> way it is. Perhaps open a discussion on a separate thread?
> 

Agreed, will do.

> As far as this test goes, how about just doing this:
> 
> -- check change of unrelated column type does not reset the MCV statistics
> ALTER TABLE mcv_lists ALTER COLUMN d TYPE VARCHAR(64);
> SELECT stxmcv IS NOT NULL AS has_mcv
>   FROM pg_statistic_ext WHERE stxrelid = 'mcv_lists'::regclass;
> 
> -- check change of column type resets the MCV statistics
> ALTER TABLE mcv_lists ALTER COLUMN c TYPE numeric;
> SELECT stxmcv IS NOT NULL AS has_mcv
>   FROM pg_statistic_ext WHERE stxrelid = 'mcv_lists'::regclass;
> 

OK, that's probably the best thing we can do.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-17 Thread Dean Rasheed
On Sat, 16 Mar 2019 at 23:44, Tomas Vondra  wrote:
> >
> > 16). This regression test fails for me:
> >
> > @@ -654,11 +654,11 @@
> >  -- check change of unrelated column type does not reset the MCV statistics
> >  ALTER TABLE mcv_lists ALTER COLUMN d TYPE VARCHAR(64);
> >  SELECT * FROM check_estimated_rows('SELECT * FROM mcv_lists WHERE a =
> > 1 AND b = ''1''');
> >   estimated | actual
> >  -----------+--------
> > -        50 |     50
> > +        11 |     50
> >  (1 row)
> >
> > Maybe that's platform-dependent, given what you said about
> > reltuples/relpages being reset. An easy workaround for this would be
> > to modify this test (and perhaps the one that follows) to just query
> > pg_statistic_ext to see if the MCV statistics have been reset.
> >
>
> Ah, sorry for not explaining this bit - the failure is expected, due to
> the reset of relpages/reltuples I mentioned. We do keep the extended
> stats, but the relsize estimate changes a bit. It surprised me a bit,
> and this test made the behavior apparent. The last patchset included a
> piece that changes that - if we decide not to change this, I think we
> can simply accept the actual output.
>

I don't think changing the way reltuples is reset ought to be within
the scope of this patch. There might be good reasons for it being the
way it is. Perhaps open a discussion on a separate thread?

As far as this test goes, how about just doing this:

-- check change of unrelated column type does not reset the MCV statistics
ALTER TABLE mcv_lists ALTER COLUMN d TYPE VARCHAR(64);
SELECT stxmcv IS NOT NULL AS has_mcv
  FROM pg_statistic_ext WHERE stxrelid = 'mcv_lists'::regclass;

-- check change of column type resets the MCV statistics
ALTER TABLE mcv_lists ALTER COLUMN c TYPE numeric;
SELECT stxmcv IS NOT NULL AS has_mcv
  FROM pg_statistic_ext WHERE stxrelid = 'mcv_lists'::regclass;

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-17 Thread Dean Rasheed
On Sat, 16 Mar 2019 at 23:44, Tomas Vondra  wrote:
>
> > 28). I just spotted the 1MB limit on the serialised MCV list size. I
> > think this is going to be too limiting. For example, if the stats
> > target is at its maximum of 10000, that only leaves around 100 bytes
> > for each item's values, which is easily exceeded. In fact, I think
> > this approach for limiting the MCV list size isn't a good one --
> > consider what would happen if there were lots of very large values.
> > Would it run out of memory before getting to that test? Even if not,
> > it would likely take an excessive amount of time.
> >
>
> True. I don't have a very good argument for a specific value, or even
> having an explicit limit at all. I've initially added it mostly as a
> safety for development purposes, but I think you're right we can just
> get rid of it. I don't think it'd run out of memory before hitting the
> limit, but I haven't tried very hard (but I recall running into the 1MB
> limit in the past).
>

I've just been playing around a little with this and found that it
isn't safely dealing with toasted values. For example, consider the
following test:

create or replace function random_string(x int) returns text
as $$
  select substr(string_agg(md5(random()::text), ''), 1, x)
from generate_series(1,(x+31)/32);
$$ language sql;

drop table if exists t;
create table t(a int, b text);
insert into t values (1, random_string(10000000));
create statistics s (mcv) on a,b from t;
analyse t;

select length(b), left(b,5), right(b,5) from t;
select length(stxmcv), length((m.values::text[])[2]),
   left((m.values::text[])[2], 5), right((m.values::text[])[2],5)
  from pg_statistic_ext, pg_mcv_list_items(stxmcv) m
 where stxrelid = 't'::regclass;

The final query returns the following:

 length |  length  | left  | right
--------+----------+-------+-------
    250 | 10000000 | c2667 | 71492
(1 row)

suggesting that there's something odd about the stxmcv value. Note,
also, that it doesn't hit the 1MB limit, even though the value is much
bigger than that.

If I then delete the value from the table, without changing the stats,
and repeat the final query, it falls over:

delete from t where a=1;
select length(stxmcv), length((m.values::text[])[2]),
   left((m.values::text[])[2], 5), right((m.values::text[])[2],5)
  from pg_statistic_ext, pg_mcv_list_items(stxmcv) m
 where stxrelid = 't'::regclass;

ERROR:  unexpected chunk number 5008 (expected 0) for toast value
16486 in pg_toast_16480

So I suspect it was using the toast data from the table t, although
I've not tried to investigate further.


> > I think this part of the patch needs a bit of a rethink. My first
> > thought is to do something similar to what happens for per-column
> > MCVs, and set an upper limit on the size of each value that is ever
> > considered for inclusion in the stats (c.f. WIDTH_THRESHOLD and
> > toowide_cnt in analyze.c). Over-wide values should be excluded early
> > on, and it will need to track whether or not any such values were
> > excluded, because then it wouldn't be appropriate to treat the stats
> > as complete and keep the entire list, without calling
> > get_mincount_for_mcv_list().
> >
> Which part? Serialization / deserialization? Or how we handle long
> values when building the MCV list?
>

I was thinking (roughly) of something like the following:

* When building the values array for the MCV list, strip out rows with
values wider than some threshold (probably something like the
WIDTH_THRESHOLD = 1024 from analyze.c would be reasonable).

* When building the MCV list, if some over-wide values were previously
stripped out, always go into the get_mincount_for_mcv_list() block,
even if nitems == ngroups (for the same reason a similar thing happens
for per-column stats -- if some items were stripped out, we're already
saying that not all items will go in the MCV list, and it's not safe
to assume that the remaining items are common enough to give accurate
estimates).

* In the serialisation code, remove the size limit entirely. We know
that each value is now at most 1024 bytes, and there are at most 10000
items, and at most 8 columns, so the total size is already reasonably
well bounded. In the worst case, it might be around 80MB, but in
practice, it's always likely to be much much smaller than that.
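A sketch of the first point, borrowing from the per-column code (the
variable names are assumptions):

    /* ignore rows containing an over-wide value, but remember we did so */
    if (is_varlena && toast_raw_datum_size(value) > WIDTH_THRESHOLD)
    {
        toowide_cnt++;          /* the MCV list is no longer complete */
        continue;
    }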

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-17 Thread Dean Rasheed
On Sat, 16 Mar 2019 at 23:44, Tomas Vondra  wrote:
>
> > 21). For consistency with other bms_ functions, I think the name of
> > the Bitmapset argument for bms_member_index() should just be called
> > "a". Nitpicking, I'd also put bms_member_index() immediately after
> > bms_is_member() in the source, to match the header.
>
> I think I've already done the renames in the last patch I submitted (are
> you looking at an older version of the code, perhaps?). I've moved it
> right after bms_is_member - good idea.
>

Ah OK, I was on the 20190315 patch yesterday. I've just updated to the
20190317 patch.
It looks like you forgot to update the argument name in the header file though.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-16 Thread Tomas Vondra


On 3/16/19 10:26 PM, Dean Rasheed wrote:
> On Fri, 15 Mar 2019 at 00:06, Tomas Vondra  
> wrote:
>> ... attached patch ...
> 
> Some more review comments, carrying on from where I left off:
> 
> 16). This regression test fails for me:
> 
> @@ -654,11 +654,11 @@
>  -- check change of unrelated column type does not reset the MCV statistics
>  ALTER TABLE mcv_lists ALTER COLUMN d TYPE VARCHAR(64);
>  SELECT * FROM check_estimated_rows('SELECT * FROM mcv_lists WHERE a =
> 1 AND b = ''1''');
>   estimated | actual
>  -----------+--------
> -        50 |     50
> +        11 |     50
>  (1 row)
> 
> Maybe that's platform-dependent, given what you said about
> reltuples/relpages being reset. An easy workaround for this would be
> to modify this test (and perhaps the one that follows) to just query
> pg_statistic_ext to see if the MCV statistics have been reset.
> 

Ah, sorry for not explaining this bit - the failure is expected, due to
the reset of relpages/reltuples I mentioned. We do keep the extended
stats, but the relsize estimate changes a bit. It surprised me a bit,
and this test made the behavior apparent. The last patchset included a
piece that changes that - if we decide not to change this, I think we
can simply accept the actual output.

> 17). I'm definitely preferring the new style of tests because they're
> much neater and easier to read, and to directly see the effect of the
> extended statistics. One thing I'd consider adding is a query of
> pg_statistic_ext using pg_mcv_list_items() after creating the MCV
> stats, both to test that function, and to show that the MCV lists have
> the expected contents (provided that output isn't too large).
> 

OK, will do.

> 18). Spurious whitespace added to src/backend/statistics/mvdistinct.c.
> 

fixed

> 19). In the function comment for statext_mcv_clauselist_selectivity(),
> the name at the top doesn't match the new function name. Also, I think
> it should mention MCV in the initial description. I.e., instead of
> 
> +/*
> + * mcv_clauselist_selectivity
> + *Estimate clauses using the best multi-column statistics.
> 
> it should say:
> 
> +/*
> + * statext_mcv_clauselist_selectivity
> + *Estimate clauses using the best multi-column MCV statistics.
> 

fixed

> 20). Later in the same comment, this part should now be deleted:
> 
> + *
> + * So (simple_selectivity - base_selectivity) may be seen as a correction for
> + * the part not covered by the MCV list.
> 

fixed

> 21). For consistency with other bms_ functions, I think the name of
> the Bitmapset argument for bms_member_index() should just be called
> "a". Nitpicking, I'd also put bms_member_index() immediately after
> bms_is_member() in the source, to match the header.
> 

I think I've already done the renames in the last patch I submitted (are
you looking at an older version of the code, perhaps?). I've moved it
right after bms_is_member - good idea.

> 22). mcv_get_match_bitmap() should really use an array of bool rather
> than an array of char. Note that a bool is guaranteed to be of size 1,
> so it won't make things any less efficient, but it will allow some
> code to be made neater. E.g., all clauses like "matches[i] == false"
> and "matches[i] != false" can just be made "!matches[i]" or
> "matches[i]". Also the Min/Max expressions on those match flags can be
> replaced with the logical operators && and ||.
> 

fixed

> 23). Looking at this code in statext_mcv_build():
> 
> /* store info about data type OIDs */
> i = 0;
> j = -1;
> while ((j = bms_next_member(attrs, j)) >= 0)
> {
> VacAttrStats *colstat = stats[i];
> 
> mcvlist->types[i] = colstat->attrtypid;
> i++;
> }
> 
> it isn't actually making use of the attribute numbers (j) from attrs,
> so this could be simplified to:
> 
> /* store info about data type OIDs */
> for (i = 0; i < numattrs; i++)
> mcvlist->types[i] = stats[i]->attrtypid;
> 

yep, fixed

> 24). Later in that function, the following comment doesn't appear to
> make sense. Is this possibly from an earlier version of the code?
> 
> /* copy values from the _previous_ group (last item of) */
> 

yep, seems like a residue from an older version, fixed

> 25). As for (23), in build_mss(), the loop over the Bitmapset of
> attributes never actually uses the attribute numbers (j), so that
> could just be a loop from i=0 to numattrs-1, and then that function
> doesn't need to be passed the Bitmapset at all -- it could just be
> passed the integer numattrs.
> 

fixed

> 26). build_distinct_groups() looks like it makes an implicit
> assumption that the counts of the items passed in are all zero. That
> is indeed the case, if they've come from build_sorted_items(), because
> that does a palloc0(), but that feels a little fragile. I think it
> would be better if build_distinct_groups() explicitly set the count
> each time it detects a new group.
> 


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-16 Thread Dean Rasheed
On Fri, 15 Mar 2019 at 00:06, Tomas Vondra  wrote:
> ... attached patch ...

Some more review comments, carrying on from where I left off:

16). This regression test fails for me:

@@ -654,11 +654,11 @@
 -- check change of unrelated column type does not reset the MCV statistics
 ALTER TABLE mcv_lists ALTER COLUMN d TYPE VARCHAR(64);
 SELECT * FROM check_estimated_rows('SELECT * FROM mcv_lists WHERE a =
1 AND b = ''1''');
  estimated | actual
 -----------+--------
-        50 |     50
+        11 |     50
 (1 row)

Maybe that's platform-dependent, given what you said about
reltuples/relpages being reset. An easy workaround for this would be
to modify this test (and perhaps the one that follows) to just query
pg_statistic_ext to see if the MCV statistics have been reset.

17). I'm definitely preferring the new style of tests because they're
much neater and easier to read, and to directly see the effect of the
extended statistics. One thing I'd consider adding is a query of
pg_statistic_ext using pg_mcv_list_items() after creating the MCV
stats, both to test that function, and to show that the MCV lists have
the expected contents (provided that output isn't too large).

18). Spurious whitespace added to src/backend/statistics/mvdistinct.c.

19). In the function comment for statext_mcv_clauselist_selectivity(),
the name at the top doesn't match the new function name. Also, I think
it should mention MCV in the initial description. I.e., instead of

+/*
+ * mcv_clauselist_selectivity
+ *Estimate clauses using the best multi-column statistics.

it should say:

+/*
+ * statext_mcv_clauselist_selectivity
+ *Estimate clauses using the best multi-column MCV statistics.

20). Later in the same comment, this part should now be deleted:

+ *
+ * So (simple_selectivity - base_selectivity) may be seen as a correction for
+ * the part not covered by the MCV list.

21). For consistency with other bms_ functions, I think the name of
the Bitmapset argument for bms_member_index() should just be called
"a". Nitpicking, I'd also put bms_member_index() immediately after
bms_is_member() in the source, to match the header.

22). mcv_get_match_bitmap() should really use an array of bool rather
than an array of char. Note that a bool is guaranteed to be of size 1,
so it won't make things any less efficient, but it will allow some
code to be made neater. E.g., all clauses like "matches[i] == false"
and "matches[i] != false" can just be made "!matches[i]" or
"matches[i]". Also the Min/Max expressions on those match flags can be
replaced with the logical operators && and ||.
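That is, roughly this (a sketch; is_or and clause_match stand in for
whatever the patch actually uses):

    bool   *matches = (bool *) palloc(sizeof(bool) * mcvlist->nitems);

    /* start from "all match" for AND semantics, "none match" for OR */
    memset(matches, !is_or, sizeof(bool) * mcvlist->nitems);

    /* ... then, for each clause and each MCV item i ... */
    if (is_or)
        matches[i] = matches[i] || clause_match;    /* instead of Max() */
    else
        matches[i] = matches[i] && clause_match;    /* instead of Min() */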

23). Looking at this code in statext_mcv_build():

/* store info about data type OIDs */
i = 0;
j = -1;
while ((j = bms_next_member(attrs, j)) >= 0)
{
VacAttrStats *colstat = stats[i];

mcvlist->types[i] = colstat->attrtypid;
i++;
}

it isn't actually making use of the attribute numbers (j) from attrs,
so this could be simplified to:

/* store info about data type OIDs */
for (i = 0; i < numattrs; i++)
mcvlist->types[i] = stats[i]->attrtypid;

24). Later in that function, the following comment doesn't appear to
make sense. Is this possibly from an earlier version of the code?

/* copy values from the _previous_ group (last item of) */

25). As for (23), in build_mss(), the loop over the Bitmapset of
attributes never actually uses the attribute numbers (j), so that
could just be a loop from i=0 to numattrs-1, and then that function
doesn't need to be passed the Bitmapset at all -- it could just be
passed the integer numattrs.

26). build_distinct_groups() looks like it makes an implicit
assumption that the counts of the items passed in are all zero. That
is indeed the case, if they've come from build_sorted_items(), because
that does a palloc0(), but that feels a little fragile. I think it
would be better if build_distinct_groups() explicitly set the count
each time it detects a new group.
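
As an illustration, the group-detection loop could initialize the count
explicitly whenever a new group starts (a sketch only; the variable
names are assumed, not taken from the patch):

    if (i == 0 || multi_sort_compare(&items[i], &items[i - 1], mss) != 0)
    {
        /* new group starts here - set the count, don't rely on palloc0 */
        groups[ngroups].values = items[i].values;
        groups[ngroups].count = 0;
        ngroups++;
    }
    groups[ngroups - 1].count++;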

27). In statext_mcv_serialize(), the TODO comment says

 * TODO: Consider packing boolean flags (NULL) for each item into a single char
 * (or a longer type) instead of using an array of bool items.

A more efficient way to save space might be to do away with the
boolean null flags entirely, and just use a special index value like
0xFFFF to signify a NULL value.
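
To sketch the idea (the macro and helper names here are invented for
illustration): with uint16 indexes, one value such as 0xFFFF could be
reserved as a NULL sentinel, making the separate bool array redundant:

    #define MCV_NULL_INDEX  0xFFFF      /* assumed sentinel value */

    /* serialize: store the sentinel instead of a dictionary index */
    indexes[dim] = item->isnull[dim] ? MCV_NULL_INDEX
                                     : value_index(dict, item->values[dim]);

    /* deserialize: recover the null flag from the sentinel */
    item->isnull[dim] = (indexes[dim] == MCV_NULL_INDEX);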

28). I just spotted the 1MB limit on the serialised MCV list size. I
think this is going to be too limiting. For example, if the stats
target is at its maximum of 10000, that only leaves around 100 bytes
for each item's values, which is easily exceeded. In fact, I think
this approach for limiting the MCV list size isn't a good one --
consider what would happen if there were lots of very large values.
Would it run out of memory before getting to that test? Even if not,
it would likely take an excessive amount of time.

I think this part of the patch needs a bit of a rethink. My first

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-16 Thread Tomas Vondra


On 3/16/19 11:55 AM, Dean Rasheed wrote:
> On Fri, 15 Mar 2019 at 00:06, Tomas Vondra  
> wrote:
>> I've noticed an annoying thing when modifying the type of a column not
>> included in a statistics object...
>>
>> That is, we don't remove the statistics, but the estimate still changes.
>> But that's because the ALTER TABLE also resets reltuples/relpages:
>>
>> That's a bit unfortunate, and it kinda makes the whole effort to not
>> drop the statistics unnecessarily kinda pointless :-(
>>
> 
> Well not entirely. Repeating that test with 100,000 rows, I get an
> initial estimate of 9850 (actual 10,000), which then drops to 2451
> after altering the column. But if you drop the dependency statistics,
> the estimate drops to 241, so clearly there is some benefit in keeping
> them in that case.
> 

Sure. What I meant is that to correct the relpages/reltuples estimates
you need to do ANALYZE, which rebuilds the statistics anyway. Although
VACUUM also fixes the estimates, without the stats rebuild.

> Besides, I thought there was no extra effort in keeping the extended
> statistics in this case -- isn't it just using the column
> dependencies, so in this case UpdateStatisticsForTypeChange() never
> gets called anyway?
> 

Yes, it does not get called at all. My point was that I was a little bit
confused because the test says "check change of unrelated column type
does not reset the MCV statistics" yet the estimates do actually change.

I wonder why we reset the relpages/reltuples to 0, instead of retaining
the original values, though. That would likely give us better density
estimates in estimate_rel_size, I think.

So I've tried doing that, and I've included it as 0001 into the patch
series. It seems to work, but I suppose the reset is there for a reason.
In any case, this is a preexisting issue, independent of what this patch
does or changes.

I've discovered another issue, though. Currently, clauselist_selectivity
has this at the very beginning:

/*
 * If there's exactly one clause, just go directly to
 * clause_selectivity(). None of what we might do below is relevant.
 */
if (list_length(clauses) == 1)
return clause_selectivity(root, (Node *) linitial(clauses),
  varRelid, jointype, sjinfo);

Which however fails with queries like this:

WHERE (a = 1 OR b = 1)

because clauselist_selectivity sees it as a single clause, passes it to
clause_selectivity and the OR-clause handling simply relies on

(s1 + s2 - s1 * s2)

which entirely ignores the multivariate stats. The other similar places
in clause_selectivity() simply call clauselist_selectivity() so that's
OK, but OR-clauses don't do that.

For functional dependencies this is not a huge issue because those apply
only to AND-clauses. But there were proposals to maybe apply them to
other types of clauses, in which case it might become an issue.

I think the best fix is moving the optimization after the multivariate
stats are applied. The only alternative I can think of is modifying
clauselist_selectivity so that it can be executed on OR-clauses. But
that seems much more complicated than the former option, for almost no
additional advantage.
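
A rough sketch of the first option (illustrative only; it assumes
statext_clauselist_selectivity() reports the clauses it estimated via
an 'estimatedclauses' bitmap, mirroring what
dependencies_clauselist_selectivity() already does):

    Selectivity s1 = 1.0;
    Bitmapset  *estimatedclauses = NULL;

    /* give extended stats the whole list first, even a single OR-clause */
    if (rel && rel->rtekind == RTE_RELATION && rel->statlist != NIL)
        s1 *= statext_clauselist_selectivity(root, clauses, varRelid,
                                             jointype, sjinfo, rel,
                                             &estimatedclauses);

    /* apply the single-clause shortcut only to what remains unestimated */
    if (list_length(clauses) == 1 && bms_is_empty(estimatedclauses))
        return s1 * clause_selectivity(root, (Node *) linitial(clauses),
                                       varRelid, jointype, sjinfo);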

I've also changed how statext_is_compatible_clause_internal() handles
the attnums bitmapset - you were right in your 3/10 message that we can
just pass the value, without creating a local bitmapset. So I've just
done that.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-retain-reltuples-relpages-on-ALTER-TABLE.patch.gz
Description: application/gzip


0002-multivariate-MCV-lists.patch.gz
Description: application/gzip


0003-multivariate-histograms.patch.gz
Description: application/gzip


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-16 Thread Dean Rasheed
On Fri, 15 Mar 2019 at 00:06, Tomas Vondra  wrote:
> I've noticed an annoying thing when modifying the type of a column not
> included in a statistics object...
>
> That is, we don't remove the statistics, but the estimate still changes.
> But that's because the ALTER TABLE also resets reltuples/relpages:
>
> That's a bit unfortunate, and it kinda makes the whole effort to not
> drop the statistics unnecessarily kinda pointless :-(
>

Well not entirely. Repeating that test with 100,000 rows, I get an
initial estimate of 9850 (actual 10,000), which then drops to 2451
after altering the column. But if you drop the dependency statistics,
the estimate drops to 241, so clearly there is some benefit in keeping
them in that case.

Besides, I thought there was no extra effort in keeping the extended
statistics in this case -- isn't it just using the column
dependencies, so in this case UpdateStatisticsForTypeChange() never
gets called anyway?

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-15 Thread Robert Haas
On Thu, Mar 14, 2019 at 7:50 AM Kyotaro HORIGUCHI
 wrote:
> Thank you for the kind explanation. I'm not sure but I understand
> this as '"lists" is an extension' turned into 'lists are an
> extension'. That is, the "lists" expresses a concept rather than
> the plurality. (But I haven't got a gut feeling..)

The idea that it expresses a concept rather than the plurality is
exactly right -- so apparently you DO have a gut feeling!

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-14 Thread Tomas Vondra


On 3/14/19 12:56 PM, Kyotaro HORIGUCHI wrote:
> At Wed, 13 Mar 2019 19:37:45 +1300, David Rowley 
>  wrote in 
> 
>> On Wed, 13 Mar 2019 at 17:20, Kyotaro HORIGUCHI
>>  wrote:
>>> bms_member_index seems working differently than maybe expected.
>>>
>>>  bms_member_index((2, 4), 0) => 0, (I think) should be -1
>>>  bms_member_index((2, 4), 1) => 0, should be -1
>>>  bms_member_index((2, 4), 2) => 0, should be 0
>>>  bms_member_index((2, 4), 3) => 1, should be -1
>>>  bms_member_index((2, 4), 4) => 1, should be 1
>>>  bms_member_index((2, 4), 5) => 2, should be -1
>>>  bms_member_index((2, 4), 6) => 2, should be -1
>>> ...
>>>  bms_member_index((2, 4), 63) => 2, should be -1
>>>  bms_member_index((2, 4), 64) => -1, correct
>>>
>>> It works correctly only when x is a member - the way the function
>>> is maybe actually used in this patch -, or needs to change the
>>> specification (or the comment) of the function.
>>
>> Looks like:
>>
>> + if (wordnum >= a->nwords)
>> + return -1;
>>
>> should be:
>>
>> + if (wordnum >= a->nwords ||
>> + (a->word[wordnum] & ((bitmapword) 1 << bitnum)) == 0)
>> + return -1;
> 
> Yeah, seems right.
> 

Yep, that was broken. The attached patch fixes this by simply calling
bms_is_member, instead of copying the checks into bms_member_index.

I've also reworked the regression tests to use a function extracting the
cardinality estimates, as proposed by Dean and David. I have not reduced
the size of data sets yet, so the tests are not much faster, but we no
longer check the exact query plan. That's probably a good idea anyway.
Actually - the tests are a bit faster because it allows removing indexes
that were used for the query plans.

FWIW I've noticed an annoying thing when modifying type of column not
included in a statistics. Consider this:

create table t (a int, b int, c text);
insert into t select mod(i,10), mod(i,10), ''
  from generate_series(1,10000) s(i);
create statistics s (dependencies) on a,b from t;
analyze t;

explain analyze select * from t where a = 1 and b = 1;

QUERY PLAN
---------------------------------------------------------------
 Seq Scan on t  (cost=0.00..205.00 rows=1000 width=9)
(actual time=0.014..1.910 rows=1000 loops=1)
   Filter: ((a = 1) AND (b = 1))
   Rows Removed by Filter: 9000
 Planning Time: 0.119 ms
 Execution Time: 2.234 ms
(5 rows)

alter table t alter c type varchar(61);

explain analyze select * from t where a = 1 and b = 1;

  QUERY PLAN
---------------------------------------------------------------
 Seq Scan on t  (cost=0.00..92.95 rows=253 width=148)
(actual time=0.020..2.420 rows=1000 loops=1)
   Filter: ((a = 1) AND (b = 1))
   Rows Removed by Filter: 9000
 Planning Time: 0.128 ms
 Execution Time: 2.767 ms
(5 rows)

select stxdependencies from pg_statistic_ext;

 stxdependencies
----------------------------------
 {"1 => 2": 1.00, "2 => 1": 1.00}
(1 row)

That is, we don't remove the statistics, but the estimate still changes.
But that's because the ALTER TABLE also resets reltuples/relpages:

select relpages, reltuples from pg_class where relname = 't';

 relpages | reltuples
----------+-----------
        0 |         0
(1 row)

That's a bit unfortunate, and it kinda makes the whole effort to not
drop the statistics unnecessarily kinda pointless :-(


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-multivariate-MCV-lists-20190315.patch.gz
Description: application/gzip


0002-multivariate-histograms-20190315.patch.gz
Description: application/gzip


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-14 Thread Kyotaro HORIGUCHI
At Wed, 13 Mar 2019 19:37:45 +1300, David Rowley  
wrote in 
> On Wed, 13 Mar 2019 at 17:20, Kyotaro HORIGUCHI
>  wrote:
> > bms_member_index seems working differently than maybe expected.
> >
> >  bms_member_index((2, 4), 0) => 0, (I think) should be -1
> >  bms_member_index((2, 4), 1) => 0, should be -1
> >  bms_member_index((2, 4), 2) => 0, should be 0
> >  bms_member_index((2, 4), 3) => 1, should be -1
> >  bms_member_index((2, 4), 4) => 1, should be 1
> >  bms_member_index((2, 4), 5) => 2, should be -1
> >  bms_member_index((2, 4), 6) => 2, should be -1
> > ...
> >  bms_member_index((2, 4), 63) => 2, should be -1
> >  bms_member_index((2, 4), 64) => -1, correct
> >
> > It works correctly only when x is a member - the way the function
> > is maybe actually used in this patch -, or needs to change the
> > specification (or the comment) of the function.
> 
> Looks like:
> 
> + if (wordnum >= a->nwords)
> + return -1;
> 
> should be:
> 
> + if (wordnum >= a->nwords ||
> + (a->word[wordnum] & ((bitmapword) 1 << bitnum)) == 0)
> + return -1;

Yeah, seems right.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-14 Thread Kyotaro HORIGUCHI
At Wed, 13 Mar 2019 12:39:30 -0400, Robert Haas  wrote 
in 
> On Wed, Mar 13, 2019 at 12:20 AM Kyotaro HORIGUCHI
>  wrote:
> > +Multivariate MCV (most-common values) lists are a straightforward 
> > extension of
> >
> >   "lists are *a*" is wrong?
> 
> No, that's correct.  Not sure exactly what your concern is, but it's
> probably related to the fact that the first part of the sentence
> (before "are") is plural and the second part is singular.  It does

Exactly, with some doubt on my reading.

> seem a little odd that you can say "lists are an extension," mixing
> singular and plural, but English lets you do stuff like that.

Thank you for the kind explanation. I'm not sure but I understand
this as '"lists" is an extension' turned into 'lists are an
extension'. That is, the "lists" expresses a concept rather than
the plurality. (But I haven't got a gut feeling..)

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-13 Thread Robert Haas
On Wed, Mar 13, 2019 at 12:20 AM Kyotaro HORIGUCHI
 wrote:
> +Multivariate MCV (most-common values) lists are a straightforward extension 
> of
>
>   "lists are *a*" is wrong?

No, that's correct.  Not sure exactly what your concern is, but it's
probably related to the fact that the first part of the sentence
(before "are") is plural and the second part is singular.  It does
seem a little odd that you can say "lists are an extension," mixing
singular and plural, but English lets you do stuff like that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-13 Thread David Rowley
On Wed, 13 Mar 2019 at 17:20, Kyotaro HORIGUCHI
 wrote:
> bms_member_index seems working differently than maybe expected.
>
>  bms_member_index((2, 4), 0) => 0, (I think) should be -1
>  bms_member_index((2, 4), 1) => 0, should be -1
>  bms_member_index((2, 4), 2) => 0, should be 0
>  bms_member_index((2, 4), 3) => 1, should be -1
>  bms_member_index((2, 4), 4) => 1, should be 1
>  bms_member_index((2, 4), 5) => 2, should be -1
>  bms_member_index((2, 4), 6) => 2, should be -1
> ...
>  bms_member_index((2, 4), 63) => 2, should be -1
>  bms_member_index((2, 4), 64) => -1, correct
>
> It works correctly only when x is a member - the way the function
> is maybe actually used in this patch -, or needs to change the
> specification (or the comment) of the function.

Looks like:

+ if (wordnum >= a->nwords)
+ return -1;

should be:

+ if (wordnum >= a->nwords ||
+ (a->word[wordnum] & ((bitmapword) 1 << bitnum)) == 0)
+ return -1;

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-12 Thread Kyotaro HORIGUCHI
Hello.

At Wed, 13 Mar 2019 02:25:40 +0100, Tomas Vondra  
wrote in <19f76496-dcf3-ccea-dd82-26fbed57b...@2ndquadrant.com>
> Hi,
> 
> attached is an updated version of the patch, addressing most of the
> issues raised in the recent reviews. There are two main exceptions:
> 
> 1) I haven't reworked the regression tests to use a function to check
> cardinality estimates and making them faster.
> 
> 2) Review handling of bitmap in statext_is_compatible_clause_internal
> when processing AND/OR/NOT clauses.
> 
> I plan to look into those items next, but I don't want to block review of
> other parts of the patch unnecessarily.

I briefly looked it and have some comments.

0001-multivariate-MCV-lists-20190312.patch

+/*
+ * bms_member_index
+ * determine 0-based index of the varattno in the bitmap
+ *
+ * Returns (-1) when the value is not a member.

I think the comment should be more generic.

"determine 0-based index of member x among the bitmap members"
" Returns -1 when x is not a member."


(cont'd)
+   if (a == NULL)
+   return 0;

Isn't this the "not a member" case?

bms_member_index seems working differently than maybe expected.

 bms_member_index((2, 4), 0) => 0, (I think) should be -1
 bms_member_index((2, 4), 1) => 0, should be -1
 bms_member_index((2, 4), 2) => 0, should be 0
 bms_member_index((2, 4), 3) => 1, should be -1
 bms_member_index((2, 4), 4) => 1, should be 1
 bms_member_index((2, 4), 5) => 2, should be -1
 bms_member_index((2, 4), 6) => 2, should be -1
...
 bms_member_index((2, 4), 63) => 2, should be -1
 bms_member_index((2, 4), 64) => -1, correct

It works correctly only when x is a member - the way the function
is maybe actually used in this patch -, or needs to change the
specification (or the comment) of the function.




+   if (rel && rel->rtekind == RTE_RELATION && rel->statlist != NIL)
+   {
+   /*
+* Estimate selectivity on any clauses applicable by stats 
tracking
+* actual values first, then apply functional dependencies on 
the
+* remaining clauses.

The comment doesn't seem needed since it is mentioning the detail
of statext_clauselist_selectivity() called just below.



+   if (statext_is_kind_built(htup, STATS_EXT_MCV))
+   {
+   StatisticExtInfo *info = makeNode(StatisticExtInfo);
+
+   info->statOid = statOid;
+   info->rel = rel;
+   info->kind = STATS_EXT_MCV;
+   info->keys = bms_copy(keys);
+
+   stainfos = lcons(info, stainfos);
+   }


We are going to have four kinds of extended statistics, so at worst we
have a list containing four StatisticExtInfos with the same
statOid, rel and keys, differing only in kind. Couldn't we reverse
the structure so that StatisticExtInfo becomes something like:

>  struct StatisticExtInfo
>  {
> NodeTag   type;
> OidstatOid;
> RelOptInfo *rel;
! char   kind[8];  /* arbitrary.. */
> Bitmapset *keys; 



+OBJS = extended_stats.o dependencies.o mcv.o mvdistinct.o

The module for MV distinctness is named 'mvdistinct', but mcv
doesn't have the prefix. I'm not sure we need to unify the
names, though.


+Multivariate MCV (most-common values) lists are a straightforward extension of

  "lists are *a*" is wrong?



@@ -223,26 +220,16 @@ dependency_degree(int numrows, HeapTuple *rows, int k, 
AttrNumber *dependency,

I haven't read it in detail, but why does the MV-MCV patch contain (maybe)
an improvement of the functional dependency code?



+int
+compare_scalars_simple(const void *a, const void *b, void *arg)

Seems to need a comment. "compare_scalars without tupnoLink maintenance"?


+int
+compare_datums_simple(Datum a, Datum b, SortSupport ssup)
+{
+   return ApplySortComparator(a, false, b, false, ssup);
+}

This wrapper function doesn't seem to me required.



+/* simple counterpart to qsort_arg */
+void *
+bsearch_arg(const void *key, const void *base, size_t nmemb, size_t size,

We have some functions named *_bsearch. If it is really
qsort_arg's bsearch version, it might be better placed in
qsort_arg.c or a new file bsearch_arg.c?


+int *
+build_attnums_array(Bitmapset *attrs)

If attrs is not offset-adjusted, I'd prefer it to be named
differently, say, attrs_nooffset or something.


+   int i,
+   j,
+   len;

I'm not sure, but does this follow our coding convention?


+   items[i].values[j] = heap_getattr(rows[i],

items is needed by qsort_arg and as the return value. It seems to me
that using just values[] and isnull[] makes the code simpler there.


+   /* Look inside any binary-compatible relabeling (as in 
examine_variable) */
+   if (IsA(clause, RelabelType))
+   clause = (Node *) ((RelabelType *) clause)->arg;

This is quite a common locution so it's enough that the 

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-12 Thread Tomas Vondra
Hi,

attached is an updated version of the patch, addressing most of the
issues raised in the recent reviews. There are two main exceptions:

1) I haven't reworked the regression tests to use a function to check
cardinality estimates and making them faster.

2) Review handling of bitmap in statext_is_compatible_clause_internal
when processing AND/OR/NOT clauses.

I plan to look into those items next, but I don't want to block review of
other parts of the patch unnecessarily.


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-multivariate-MCV-lists-20190312.patch.gz
Description: application/gzip


0002-multivariate-histograms-20190312.patch.gz
Description: application/gzip


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-11 Thread Dean Rasheed
On Sun, 10 Mar 2019 at 22:28, David Rowley  wrote:
>
> On Mon, 11 Mar 2019 at 06:36, Tomas Vondra  
> wrote:
> >
> > On 3/9/19 7:33 PM, Dean Rasheed wrote:
> > > I wonder if it's possible to write smaller, more targeted tests.
> > > Currently "stats_ext" is by far the slowest test in its group, and I'm
> > > not sure that some of those tests add much. It ought to be possible to
> > > write a function that calls EXPLAIN and returns a query's row
> > > estimate, and then you could write tests to confirm the effect of the
> > > new stats by verifying the row estimates change as expected.
> >
> > Sure, if we can write more targeted tests, that would be good. But it's
> > not quite clear to me how wrapping EXPLAIN in a function makes those
> > tests any faster?
>
> I've not looked at the tests in question, but if they're executing an
> inferior plan when no extended stats exist, then maybe that's
> why they're slow.
>
> I think Dean might mean to create a function similar to
> explain_parallel_append() in partition_prune.sql then write tests that
> check the row estimate with EXPLAIN (COSTS ON) but strip out the other
> costing stuff instead of validating that the poor plan was chosen.
>

Yeah that's the sort of thing I was thinking of. I think it might be
possible to write simpler and faster tests by inserting far fewer rows
and relying on ANALYSE having sampled everything, so the row estimates
should be predictable. It may be the case that, with just a handful of
rows, the extended stats don't affect the plan, but you'd still see a
difference in the row estimates, and that could be a sufficient test I
think.

> > On 3/10/19 2:09 PM, Dean Rasheed wrote:
> > > 12). bms_member_index() should surely be in bitmapset.c. It could be
> > > more efficient by just traversing the bitmap words and making use of
> > > bmw_popcount(). Also, its second argument should be of type 'int' for
> > > consistency with other bms_* functions.
> >
> > Yes, moving to bitmapset.c definitely makes sense. I don't see how it
> > could use bmw_popcount() though.
>
> I think it could be done by first checking if the parameter is a
> member of the set, and then if so, count all the bits that come on and
> before that member. You can use bmw_popcount() for whole words before
> the specific member's word then just bitwise-and a bit mask of a
> bitmapword that has all bits set for all bits on and before your
> parameter's BITNUM(), and add the bmw_popcount of the final word
> bitwise-anding the mask. bms_add_range() has some masking code you
> could copy.

Yep, that's what I was imagining. Except I think that to get a 0-based
index result you'd want the mask to have all bits set for bits
*before* the parameter's BITNUM(), rather than on and before. So I
think the mask would simply be

  ((bitmapword) 1 << bitnum) - 1
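
Putting those pieces together, a sketch of the whole function might look
like this (assuming the usual bitmapset.c internals - WORDNUM, BITNUM and
the words[] array - plus bmw_popcount(); illustrative, not the final
code):

    int
    bms_member_index(Bitmapset *a, int x)
    {
        int         i;
        int         result = 0;
        int         wordnum = WORDNUM(x);
        int         bitnum = BITNUM(x);

        /* return -1 when x is not a member of the set */
        if (!bms_is_member(x, a))
            return -1;

        /* count the set bits in all words preceding x's word */
        for (i = 0; i < wordnum; i++)
            result += bmw_popcount(a->words[i]);

        /* count the set bits before x within its own word */
        result += bmw_popcount(a->words[wordnum] &
                               (((bitmapword) 1 << bitnum) - 1));

        return result;
    }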

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-11 Thread Dean Rasheed
On Sun, 10 Mar 2019 at 17:36, Tomas Vondra  wrote:
> On 3/10/19 2:09 PM, Dean Rasheed wrote:
> > 14). The attnums Bitmapset passed to
> > statext_is_compatible_clause_internal() is an input/output argument
> > that it updates. That should probably be documented. When it calls
> > itself recursively for AND/OR/NOT clauses, it could just pass the
> > original Bitmapset through to be updated (rather than creating a new
> > one and merging it), as it does for other types of clause.
>
> I don't think it's really possible, because the AND/OR/NOT clause is
> considered compatible only when all the pieces are compatible. So we
> can't tweak the original bitmapset directly in case the incompatible
> clause is not the very first one.
>

In the case where the overall clause is incompatible, you don't
actually care about the attnums returned. Right now it will return an
empty set (NULL). With this change it would return all the attnums
encountered before the incompatible piece, but that wouldn't matter.
In fact, you could easily preserve the current behaviour just by
having the outer statext_is_compatible_clause() function set attnums
back to NULL if the result is false.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-10 Thread Tomas Vondra



On 3/10/19 11:27 PM, David Rowley wrote:
> On Mon, 11 Mar 2019 at 06:36, Tomas Vondra  
> wrote:
>>
>> On 3/9/19 7:33 PM, Dean Rasheed wrote:
>>> I wonder if it's possible to write smaller, more targeted tests.
>>> Currently "stats_ext" is by far the slowest test in its group, and I'm
>>> not sure that some of those tests add much. It ought to be possible to
>>> write a function that calls EXPLAIN and returns a query's row
>>> estimate, and then you could write tests to confirm the effect of the
>>> new stats by verifying the row estimates change as expected.
>>
>> Sure, if we can write more targeted tests, that would be good. But it's
>> not quite clear to me how wrapping EXPLAIN in a function makes those
>> tests any faster?
> 
> I've not looked at the tests in question, but if they're executing an
> inferior plan when no extended stats exist, then maybe that's
> why they're slow.
> 

I don't think the tests are executing any queries - the tests merely
generate execution plans, without executing them.

> I think Dean might mean to create a function similar to
> explain_parallel_append() in partition_prune.sql then write tests that
> check the row estimate with EXPLAIN (COSTS ON) but strip out the other
> costing stuff instead of validating that the poor plan was chosen.
> 

I'm not opposed to doing that, of course. I'm just not sure it's a way
to make the tests faster. Will investigate.

>> On 3/10/19 2:09 PM, Dean Rasheed wrote:
>>> 12). bms_member_index() should surely be in bitmapset.c. It could be
>>> more efficient by just traversing the bitmap words and making use of
>>> bmw_popcount(). Also, its second argument should be of type 'int' for
>>> consistency with other bms_* functions.
>>
>> Yes, moving to bitmapset.c definitely makes sense. I don't see how it
>> could use bmw_popcount() though.
> 
> I think it could be done by first checking if the parameter is a
> member of the set, and then if so, count all the bits that come on and
> before that member. You can use bmw_popcount() for whole words before
> the specific member's word then just bitwise-and a bit mask of a
> bitmapword that has all bits set for all bits on and before your
> parameter's BITNUM(), and add the bmw_popcount of the final word
> bitwise-anding the mask. bms_add_range() has some masking code you
> could copy.
> 

Ah, right - that would work.


cheers

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-10 Thread David Rowley
On Mon, 11 Mar 2019 at 06:36, Tomas Vondra  wrote:
>
> On 3/9/19 7:33 PM, Dean Rasheed wrote:
> > I wonder if it's possible to write smaller, more targeted tests.
> > Currently "stats_ext" is by far the slowest test in its group, and I'm
> > not sure that some of those tests add much. It ought to be possible to
> > write a function that calls EXPLAIN and returns a query's row
> > estimate, and then you could write tests to confirm the effect of the
> > new stats by verifying the row estimates change as expected.
>
> Sure, if we can write more targeted tests, that would be good. But it's
> not quite clear to me how wrapping EXPLAIN in a function makes those
> tests any faster?

I've not looked at the tests in question, but if they're executing an
inferior plan when no extended stats exist, then maybe that's
why they're slow.

I think Dean might mean to create a function similar to
explain_parallel_append() in partition_prune.sql then write tests that
check the row estimate with EXPLAIN (COSTS ON) but strip out the other
costing stuff instead of validating that the poor plan was chosen.

> On 3/10/19 2:09 PM, Dean Rasheed wrote:
> > 12). bms_member_index() should surely be in bitmapset.c. It could be
> > more efficient by just traversing the bitmap words and making use of
> > bmw_popcount(). Also, its second argument should be of type 'int' for
> > consistency with other bms_* functions.
>
> Yes, moving to bitmapset.c definitely makes sense. I don't see how it
> could use bmw_popcount() though.

I think it could be done by first checking if the parameter is a
member of the set, and then if so, count all the bits that come on and
before that member. You can use bmw_popcount() for whole words before
the specific member's word then just bitwise-and a bit mask of a
bitmapword that has all bits set for all bits on and before your
parameter's BITNUM(), and add the bmw_popcount of the final word
bitwise-anding the mask. bms_add_range() has some masking code you
could copy.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-10 Thread Tomas Vondra
Hi Dean,

Thanks for the review. I'll post a patch fixing most of the stuff soon,
but a few comments/questions regarding some of the issues:

On 3/9/19 7:33 PM, Dean Rasheed wrote:
> 5). It's not obvious what some of the new test cases in the
> "stats_ext" tests are intended to show. For example, the first test
> creates a table with 5000 rows and a couple of indexes, does a couple
> of queries, builds some MCV stats, and then repeats the queries, but
> the results seem to be the same with and without the stats.
>

Hmmm. I thought those tests are testing that we get the right plan, but
maybe I broke that somehow during the rebases. Will check.

> I wonder if it's possible to write smaller, more targeted tests.
> Currently "stats_ext" is by far the slowest test in its group, and I'm
> not sure that some of those tests add much. It ought to be possible to
> write a function that calls EXPLAIN and returns a query's row
> estimate, and then you could write tests to confirm the effect of the
> new stats by verifying the row estimates change as expected.

Sure, if we can write more targeted tests, that would be good. But it's
not quite clear to me how wrapping EXPLAIN in a function makes those
tests any faster?

On 3/10/19 2:09 PM, Dean Rasheed wrote:
> 12). bms_member_index() should surely be in bitmapset.c. It could be
> more efficient by just traversing the bitmap words and making use of
> bmw_popcount(). Also, its second argument should be of type 'int' for
> consistency with other bms_* functions.

Yes, moving to bitmapset.c definitely makes sense. I don't see how it
could use bmw_popcount() though.

On 3/10/19 2:09 PM, Dean Rasheed wrote:
> 14). The attnums Bitmapset passed to
> statext_is_compatible_clause_internal() is an input/output argument
> that it updates. That should probably be documented. When it calls
> itself recursively for AND/OR/NOT clauses, it could just pass the
> original Bitmapset through to be updated (rather than creating a new
> one and merging it), as it does for other types of clause.
>

I don't think it's really possible, because the AND/OR/NOT clause is
considered compatible only when all the pieces are compatible. So we
can't tweak the original bitmapset directly in case the incompatible
clause is not the very first one.

> On the other hand, the outer function statext_is_compatible_clause()
> does need to return a new bitmap, which may or may not be used by its
> caller, so it would be cleaner to make that a strictly "out" parameter
> and initialise it to NULL in that function, rather than in its caller.


On 3/10/19 2:09 PM, Dean Rasheed wrote:
> 15). As I said yesterday, I don't think that there is a clean
> separator of concerns between the functions clauselist_selectivity(),
> statext_clauselist_selectivity(),
> dependencies_clauselist_selectivity() and
> mcv_clauselist_selectivity(), I think things could be re-arranged as
> follows:
>
> statext_clauselist_selectivity() - as the name suggests - should take
> care of *all* extended stats estimation, not just MCVs and histograms.
> So make it a fairly small function, calling
> mcv_clauselist_selectivity() and
> dependencies_clauselist_selectivity(), and histograms when that gets
> added.
>
> Most of the current code in statext_clauselist_selectivity() is really
> MCV-specific, so move that to mcv_clauselist_selectivity(). Amongst
> other things, that would move the call to choose_best_statistics() to
> mcv_clauselist_selectivity() (just as
> dependencies_clauselist_selectivity() calls choose_best_statistics()
> to get the best dependencies statistics). Then, when histograms are
> added later, you won't have the problem pointed out before where it
> can't apply both MCV and histogram stats if they're on different
> STATISTICS objects.

I agree clauselist_selectivity() shouldn't care about various types of
extended statistics (MCV vs. functional dependencies). But I'm not sure
the approach you suggested (moving stuff to mcv_clauselist_selectivity)
would work particularly well because most of it is not specific to MCV
lists. It'll also need to care about histograms, for example.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-10 Thread Dean Rasheed
On Sun, 10 Mar 2019 at 13:09, Dean Rasheed  wrote:
> Here are some more comments:
>

One more thing --- the comment for statext_clauselist_selectivity() says:

 * So (simple_selectivity - base_selectivity) may be seen as a correction for
 * the part not covered by the MCV list.

That's not quite right. It should really say that (simple_selectivity
- base_selectivity) is an estimate for the part not covered by the MCV
list, or that (mcv_selectivity - base_selectivity) is a correction for
the part covered by the MCV list. Those 2 statements are actually
equivalent, and different from what you wrote.

Perhaps the easiest way to see it is to work through a simple example:

Suppose you had the following clauses:

  a = 1 AND b >= 0 AND b <= 10

The per-column stats might be expected to give reasonable independent
estimates for the following 2 things:

  P(a = 1)

  P(b >= 0 AND b <= 10)  -- in general, greater than P(b >= 0) * P(b <= 10)

but the overall estimate produced by clauselist_selectivity_simple()
would then just be the product of those 2 things:

  simple_sel = P(a = 1) * P(b >= 0 AND b <= 10)

which might not be so good if the columns were correlated.

Now suppose you had MCV stats, which included MCV items for the
following specific values:

  (a=1,b=1), (a=1,b=2), (a=1,b=3)

but no other relevant MCV entries. (There might be lots of other MCV
items that don't match the original clauses, but they're irrelevant
for this discussion.) That would mean that we could get reasonable
estimates for the following 2 quantities:

  mcv_sel = P(a = 1 AND b IN (1,2,3))
  = P(a = 1 AND b = 1) + P(a = 1 AND b = 2) + P(a = 1 AND b = 3)

  mcv_basesel = base_freq(a = 1 AND b IN (1,2,3))
  = P(a = 1) * (P(b = 1) + P(b = 2) + P(b = 3))

So how is that useful? Well, returning to the quantity that we're
actually trying to compute, it can be split into MCV and non-MCV
parts, and since they're mutually exclusive possibilities, their
probabilities just add up. Thus we can write:

  P(a = 1 AND b >= 0 AND b <= 10)

= P(a = 1 AND b IN (1,2,3)) -- MCV part
+ P(a = 1 AND b >= 0 AND b <= 10 AND b NOT IN (1,2,3))  -- non-MCV part

= mcv_sel + other_sel

So the first term is easy -- it's just mcv_sel, from above. The second
term is trickier though, since we have no information about the
correlation between a and b in the non-MCV region. Just about the best
we can do is assume that they're independent, which gives:

  other_sel = P(a = 1 AND b >= 0 AND b <= 10 AND b NOT IN (1,2,3))
   ~= P(a = 1) * P(b >= 0 AND b <= 10 AND b NOT IN (1,2,3))

and that can now be written in terms of things that we know

  other_sel ~= P(a = 1) * P(b >= 0 AND b <= 10 AND b NOT IN (1,2,3))

 = P(a = 1) * P(b >= 0 AND b <= 10)
 - P(a = 1) * P(b IN (1,2,3))  -- mutually exclusive possibilities

 = simple_sel - mcv_basesel

So, as I said above, (simple_selectivity - base_selectivity) is an
estimate for the part not covered by the MCV list.


Another way to look at it is to split the original per-column estimate
up into MCV and non-MCV parts, and correct the MCV part using the MCV
stats:

  simple_sel = P(a = 1) * P(b >= 0 AND b <= 10)

 = P(a = 1) * P(b IN (1,2,3))
 + P(a = 1) * P(b >= 0 AND b <= 10 AND b NOT IN (1,2,3))

The first term is just mcv_basesel, so we can define other_sel to be
the other term, giving

  simple_sel = mcv_basesel  -- MCV part
             + other_sel    -- non-MCV part

Clearly mcv_basesel isn't the best estimate for the MCV part, and it
should really be mcv_sel, so we can improve upon simple_sel by
applying a correction of (mcv_sel - basesel) to it:

  better estimate = simple_sel + (mcv_sel - mcv_basesel)
  = mcv_sel + other_sel

(where other_sel = simple_sel - mcv_basesel)

Of course, that's totally equivalent, but looking at it this way
(mcv_selectivity - base_selectivity) can be seen as a correction for
the part covered by the MCV list.
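
To make that concrete with some invented numbers (purely illustrative,
not from any real table):

  P(a = 1) = 0.5,  P(b >= 0 AND b <= 10) = 0.4
    =>  simple_sel = 0.5 * 0.4 = 0.2

  mcv_sel = 0.15,  P(b IN (1,2,3)) = 0.12
    =>  mcv_basesel = 0.5 * 0.12 = 0.06

  estimate = mcv_sel + (simple_sel - mcv_basesel)
           = 0.15 + (0.2 - 0.06) = 0.29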


All of that generalises to arbitrary clauses, because the matching
items in the MCV list are independent possibilities that sum up, and
the MCV and non-MCV parts are mutually exclusive. That's also why the
basesel calculation in mcv_clauselist_selectivity() must only include
matching MCV items, and the following XXX comment is wrong:

+   for (i = 0; i < mcv->nitems; i++)
+   {
+   *totalsel += mcv->items[i]->frequency;
+
+   if (matches[i] != STATS_MATCH_NONE)
+   {
+   /* XXX Shouldn't the basesel be outside the if condition? */
+   *basesel += mcv->items[i]->base_frequency;
+   s += mcv->items[i]->frequency;
+   }
+   }

So I believe that that code is correct, as written.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-10 Thread Dean Rasheed
On Sat, 9 Mar 2019 at 18:33, Dean Rasheed  wrote:
>
> On Thu, 28 Feb 2019 at 19:56, Tomas Vondra  
> wrote:
> > Attached is an updated version of this patch series.
>
> Here are some random review comments. I'll add more later, but I'm out
> of energy for today.
>

Here are some more comments:

11). In dependency_degree():

-/* sort the items so that we can detect the groups */
-qsort_arg((void *) items, numrows, sizeof(SortItem),
-  multi_sort_compare, mss);
+/*
+ * build an array of SortItem(s) sorted using the multi-sort support
+ *
+ * XXX This relies on all stats entries pointing to the same tuple
+ * descriptor. Not sure if that might not be the case.
+ */
+items = build_sorted_items(numrows, rows, stats[0]->tupDesc,
+   mss, k, attnums_dep);

That XXX comment puzzled me for a while. Actually it's OK though,
unless/until we try to support stats across multiple relations, which
will require a much larger refactoring of this code. For now though,
the stats entries all point to the same tuple descriptor from the
onerel passed to BuildRelationExtStatistics(), so it's OK to just use
the first tuple descriptor in this way. The comment should be updated
to explain that.

12). bms_member_index() should surely be in bitmapset.c. It could be
more efficient by just traversing the bitmap words and making use of
bmw_popcount(). Also, its second argument should be of type 'int' for
consistency with other bms_* functions.

13). estimate_ndistinct() has been moved from mvdistinct.c to
extended_stats.c and changed from static to extern, but it is only
called from mvdistinct.c, so that change is unnecessary (at least as
far as this patch is concerned).

14). The attnums Bitmapset passed to
statext_is_compatible_clause_internal() is an input/output argument
that it updates. That should probably be documented. When it calls
itself recursively for AND/OR/NOT clauses, it could just pass the
original Bitmapset through to be updated (rather than creating a new
one and merging it), as it does for other types of clause.

On the other hand, the outer function statext_is_compatible_clause()
does need to return a new bitmap, which may or may not be used by its
caller, so it would be cleaner to make that a strictly "out" parameter
and initialise it to NULL in that function, rather than in its caller.

15). As I said yesterday, I don't think that there is a clean
separator of concerns between the functions clauselist_selectivity(),
statext_clauselist_selectivity(),
dependencies_clauselist_selectivity() and
mcv_clauselist_selectivity(), I think things could be re-arranged as
follows:

statext_clauselist_selectivity() - as the name suggests - should take
care of *all* extended stats estimation, not just MCVs and histograms.
So make it a fairly small function, calling
mcv_clauselist_selectivity() and
dependencies_clauselist_selectivity(), and histograms when that gets
added.

Most of the current code in statext_clauselist_selectivity() is really
MCV-specific, so move that to mcv_clauselist_selectivity(). Amongst
other things, that would move the call to choose_best_statistics() to
mcv_clauselist_selectivity() (just as
dependencies_clauselist_selectivity() calls choose_best_statistics()
to get the best dependencies statistics). Then, when histograms are
added later, you won't have the problem pointed out before where it
can't apply both MCV and histogram stats if they're on different
STATISTICS objects.

Most of the comments for statext_clauselist_selectivity() are also
MCV-related. Those too would move to mcv_clauselist_selectivity().

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-09 Thread Dean Rasheed
On Thu, 28 Feb 2019 at 19:56, Tomas Vondra  wrote:
> Attached is an updated version of this patch series.

Here are some random review comments. I'll add more later, but I'm out
of energy for today.

1). src/test/regress/expected/type_sanity.out has bit-rotted.

2). Duplicate OIDs (3425).

3). It looks a bit odd that clauselist_selectivity() calls
statext_clauselist_selectivity(), which does MCV stats and will do
histograms, but it doesn't do dependencies, so
clauselist_selectivity() has to then separately call
dependencies_clauselist_selectivity(). It would seem neater if
statext_clauselist_selectivity() took care of calling
dependencies_clauselist_selectivity(), since dependencies are just
another kind of extended stats.

4). There are no tests for pg_mcv_list_items(). Given a table with a
small enough amount of data, so that it's all sampled, it ought to be
possible to get predictable MCV stats.

5). It's not obvious what some of the new test cases in the
"stats_ext" tests are intended to show. For example, the first test
creates a table with 5000 rows and a couple of indexes, does a couple
of queries, builds some MCV stats, and then repeats the queries, but
the results seem to be the same with and without the stats.

I wonder if it's possible to write smaller, more targeted tests.
Currently "stats_ext" is by far the slowest test in its group, and I'm
not sure that some of those tests add much. It ought to be possible to
write a function that calls EXPLAIN and returns a query's row
estimate, and then you could write tests to confirm the effect of the
new stats by verifying the row estimates change as expected.

6). This enum isn't needed for MCVs:

/*
 * Degree of how much MCV item matches a clause.
 * This is then considered when computing the selectivity.
 */
#define STATS_MATCH_NONE    0   /* no match at all */
#define STATS_MATCH_PARTIAL 1   /* partial match */
#define STATS_MATCH_FULL    2   /* full match */

STATS_MATCH_PARTIAL is never used for MCVs, so you may as well just
use booleans instead of this enum. If those are needed for histograms,
they can probably be made local to histogram.c.

7). estimate_num_groups_simple() isn't needed in this patch.

8). In README.mcv,
s/clauselist_mv_selectivity_mcvlist/mcv_clauselist_selectivity/.

9). In the list of supported clause types that follows, (e) is the same
as (c), but with a more general description.

10). It looks like most of the subsequent description of the algorithm
is out of date and needs rewriting. All the stuff about full matches
and the use of ndistinct is now obsolete.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-03-02 Thread David Rowley
On Fri, 1 Mar 2019 at 08:56, Tomas Vondra  wrote:
> Attached is an updated version of this patch series.

I made a quick pass over the 0001 patch. I edited a few small things
along the way; patch attached.

I'll try to do a more in-depth review soon.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


mcv_stats_small_fixups_for_0001.patch
Description: Binary data


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-28 Thread Tomas Vondra
Hi,

Attached is an updated version of this patch series. I've decided to
rebase and send both parts (MCV and histograms), although we've agreed
to focus on the MCV part for now. I don't want to leave the histogram part
lagging behind, because (a) it'd then be much more work to update it, and
(b) I think it provides useful feedback about likely future changes.

This should address most of the issues pointed out by David in his
recent reviews. Briefly:

1) It fixes/updates a number of comments and docs on various places,
removes redundant comments etc. In most cases I've simply adopted the
wording proposed by David, with minor tweaks in a couple of cases.

2) Reverts changes that exposed analyze_mcv_list - this was a leftover
from the attempt to reuse the single-column algorithm, but we've since
agreed it's not the right approach. So this change is unnecessary.

3) I've tweaked the code to accept RelabelType nodes as supported,
similarly to what examine_variable() does. Previously I concluded we
can't support RelabelType, but it seems that reasoning was bogus. I've
slightly tweaked the regression tests by changing one of the columns to
varchar, so that the queries actually trigger this.

4) I've tweaked a couple of places (UpdateStatisticsForTypeChange,
statext_clauselist_selectivity and estimate_num_groups_simple) per
David's suggestions. Those were fairly straightforward simplifications.

5) I've removed mcv_counts from statext_mcv_build(). As David pointed
out, it was not actually needed - it was another remnant of the
attempt to re-use analyze_mcv_list(), which needs such an array. But without
it we can access the groups directly.

6) One of the review questions was about the purpose of this code:

  for (i = 0; i < nitems; i++)
  {
  if (groups[i].count < mincount)
  {
  nitems = i;
  break;
  }
  }

It's quite simple - we want to include groups with at least mincount
occurrences, and the groups are sorted by count (in descending order).
So we simply find the first group with count below mincount, and the
index is the number of groups to keep. I've tried to explain that in a
comment.

7) I've fixed a bunch of format patterns in statext_mcv_deserialize(),
particularly those that confused %d and %u. We can't however use %d for
VARSIZE_ANY_EXHDR, because that macro expands into offsetof() etc. So
that would trigger compiler warnings.

8) Yeah, pg_stats_ext_mcvlist_items was broken. The issue was that one
of the output parameters is defined as boolean[], but the function was
building just a string. Originally it used BuildTupleFromCStrings(), but
then it switched to heap_form_tuple() without building a valid array.

I've decided to simply revert back to BuildTupleFromCStrings(). It's not
going to be used very frequently, so the small performance difference is
not important.

I've also fixed the formatting issues, pointed out by David.


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-multivariate-MCV-lists-20190228.patch.gz
Description: application/gzip


0002-multivariate-histograms-20190228.patch.gz
Description: application/gzip


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-06 Thread Dean Rasheed
On Wed, 6 Feb 2019 at 23:44, Tomas Vondra  wrote:
>
> On 2/6/19 10:59 PM, David Rowley wrote:
> > On Thu, 7 Feb 2019 at 03:16, Alvaro Herrera  
> > wrote:
> >> I wonder what should we be doing with this series -- concretely, should
> >> the effort concentrate on one of the two patches, and leave the other
> >> for pg13, to increase the chances of the first one being in pg12?  I
> >> would favor that approach, since it's pretty late in the cycle by now
> >> and it seems dubious that both will be ready.
> >
> > I mostly have been reviewing the MCV patch with the thoughts that one
> > is better than none in PG12.  I don't see any particular reason that
> > we need both in the one release.
> >
>
> I agree with that, although most of the complexity likely lies in
> integrating the stats into the selectivity estimation - if we get that
> right for the MCV patch, adding histograms seems comparatively simpler.
>
> But yeah, let's focus on the MCV part.
>

Agreed. I think the overall approach of the MCV patch is sound and
it's getting closer to being committable. David's review comments were
excellent. I'll try to review it as well when you post your next
update.

I have some more fundamental doubts about the histogram patch, to do
with the way it integrates with selectivity estimation, and some vague
half-formed ideas about how that could be improved, but nothing clear
enough that I can express right now.

So yes, let's focus on the MCV patch for now.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-06 Thread Tomas Vondra
On 2/6/19 10:59 PM, David Rowley wrote:
> On Thu, 7 Feb 2019 at 03:16, Alvaro Herrera  wrote:
>> I wonder what should we be doing with this series -- concretely, should
>> the effort concentrate on one of the two patches, and leave the other
>> for pg13, to increase the chances of the first one being in pg12?  I
>> would favor that approach, since it's pretty late in the cycle by now
>> and it seems dubious that both will be ready.
> 
> I mostly have been reviewing the MCV patch with the thoughts that one
> is better than none in PG12.  I don't see any particular reason that
> we need both in the one release.
> 

I agree with that, although most of the complexity likely lies in
integrating the stats into the selectivity estimation - if we get that
right for the MCV patch, adding histograms seems comparatively simpler.

But yeah, let's focus on the MCV part.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-06 Thread David Rowley
On Thu, 7 Feb 2019 at 03:16, Alvaro Herrera  wrote:
> I wonder what should we be doing with this series -- concretely, should
> the effort concentrate on one of the two patches, and leave the other
> for pg13, to increase the chances of the first one being in pg12?  I
> would favor that approach, since it's pretty late in the cycle by now
> and it seems dubious that both will be ready.

I mostly have been reviewing the MCV patch with the thoughts that one
is better than none in PG12.  I don't see any particular reason that
we need both in the one release.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-06 Thread Alvaro Herrera
On 2019-Feb-04, Tomas Vondra wrote:

> On 2/4/19 5:53 AM, Michael Paquier wrote:

> > Moved the patch to next CF for now, waiting on author as the last
> > review happened not so long ago.
> 
> Thanks. Yes, I intend to send a new patch version soon.

I wonder what should we be doing with this series -- concretely, should
the effort concentrate on one of the two patches, and leave the other
for pg13, to increase the chances of the first one being in pg12?  I
would favor that approach, since it's pretty late in the cycle by now
and it seems dubious that both will be ready.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-04 Thread Tomas Vondra



On 2/4/19 5:53 AM, Michael Paquier wrote:
> On Sun, Feb 03, 2019 at 02:43:24AM -0800, Andres Freund wrote:
>> Are you planning to update the patch, or should the entry be marked as
>> RWF?
> 
> Moved the patch to next CF for now, waiting on author as the last
> review happened not so long ago.

Thanks. Yes, I intend to send a new patch version soon.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-03 Thread Michael Paquier
On Sun, Feb 03, 2019 at 02:43:24AM -0800, Andres Freund wrote:
> Are you planning to update the patch, or should the entry be marked as
> RWF?

Moved the patch to next CF for now, waiting on author as the last
review happened not so long ago.
--
Michael


signature.asc
Description: PGP signature


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-02-03 Thread Andres Freund
Hi Tomas,

On 2019-01-24 14:59:50 +1300, David Rowley wrote:
> On Wed, 23 Jan 2019 at 12:46, David Rowley  
> wrote:
> > (Stopped in statext_mcv_build(). Need to take a break)
>
> Continuing...

Are you planning to update the patch, or should the entry be marked as
RWF?

- Andres



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-23 Thread David Rowley
On Wed, 23 Jan 2019 at 12:46, David Rowley  wrote:
> (Stopped in statext_mcv_build(). Need to take a break)

Continuing...

27. statext_mcv_build() could declare the int j,k variables in the
scope where they're required.

28. "an array"

 * build array of SortItems for distinct groups and counts matching items

29. No need to set isnull to false in statext_mcv_load()

30. I'm wondering why, in statext_mcv_serialize(), you're
not passing the column's collation when sorting the array.

You have:

ssup[dim].ssup_collation = DEFAULT_COLLATION_OID;

should it not be:

ssup[dim].ssup_collation = stats[dim]->attrcollid;
?

31. uint32 should use %u, not %d:

if (mcvlist->magic != STATS_MCV_MAGIC)
elog(ERROR, "invalid MCV magic %d (expected %d)",
mcvlist->magic, STATS_MCV_MAGIC);

and

if (mcvlist->type != STATS_MCV_TYPE_BASIC)
elog(ERROR, "invalid MCV type %d (expected %d)",
mcvlist->type, STATS_MCV_TYPE_BASIC);

and

ereport(ERROR,
(errcode(ERRCODE_DATA_CORRUPTED),
errmsg("invalid length (%d) item array in MCVList",
mcvlist->nitems)));

I don't think %ld is the correct format for VARSIZE_ANY_EXHDR. %u or
%d seem more suited. I see that value is quite often assigned to int,
so probably can't argue much with %d.

elog(ERROR, "invalid MCV size %ld (expected %zu)",
VARSIZE_ANY_EXHDR(data), expected_size);

32. I think the format is wrong here too:

elog(ERROR, "invalid MCV size %ld (expected %ld)",
VARSIZE_ANY_EXHDR(data), expected_size);

I'd expect "invalid MCV size %d (expected %zu)"

33. How do you allocate a single chunk non-densely?

* Allocate one large chunk of memory for the intermediate data, needed
* only for deserializing the MCV list (and allocate densely to minimize
* the palloc overhead).

34. I thought I saw a few issues with pg_stats_ext_mcvlist_items() so
I tried to test it:

create table ab (a int, b int);
insert into ab select x,x from generate_serieS(1,10)x;
create statistics ab_ab_stat (mcv) on a,b from ab;
analyze ab;
select pg_mcv_list_items(stxmcv) from pg_Statistic_ext where stxmcv is not null;
ERROR:  cache lookup failed for type 2139062143

The issues I saw were:

You do:
appendStringInfoString(, "{");
appendStringInfoString(, "{");

but never append '}' after building the string.

(can use appendStringInfoChar() BTW)

also:

if (i == 0)
{
appendStringInfoString(, ", ");
appendStringInfoString(, ", ");
}

I'd have expected you to append the ", " only when i > 0.
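
For illustration, a sketch of how the separators and closing braces
could be handled, following the StringInfo pattern already used in the
function (the variable names here are made up):

StringInfoData itemValues;
StringInfoData itemFreqs;

initStringInfo(&itemValues);
initStringInfo(&itemFreqs);

appendStringInfoChar(&itemValues, '{');
appendStringInfoChar(&itemFreqs, '{');

for (i = 0; i < nvalues; i++)
{
    if (i > 0)
    {
        /* separator before every element except the first */
        appendStringInfoString(&itemValues, ", ");
        appendStringInfoString(&itemFreqs, ", ");
    }
    /* ... append the i-th value and its frequency ... */
}

appendStringInfoChar(&itemValues, '}');
appendStringInfoChar(&itemFreqs, '}');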

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-22 Thread David Rowley
On Wed, 23 Jan 2019 at 03:43, David Rowley  wrote:
> I made another pass over the 0001 patch. I've not read through mcv.c
> again yet. Will try to get to that soon.
>
> 0001-multivariate-MCV-lists-20190117.patch

I started on mcv.c this morning. I'm still trying to build myself a
picture of how it works, but I have noted a few more things while I'm
reading.

24. These macros are still missing parenthesis around the arguments:

#define ITEM_INDEXES(item) ((uint16*)item)
#define ITEM_NULLS(item,ndims) ((bool*)(ITEM_INDEXES(item) + ndims))
#define ITEM_FREQUENCY(item,ndims) ((double*)(ITEM_NULLS(item,ndims) + ndims))

While I don't see any reason to put parentheses around the macro's
argument when passing it to another macro (since that macro should do
it), there is a good reason to have the additional parentheses when the
argument is not passed to another macro.

Also, there are a number of places, including these macros, where the
whitespace does not conform to project standard, e.g. ((uint16*)item)
should be ((uint16 *) (item)) (including fixing the missing
parentheses).
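
For illustration, a fully parenthesised version of the three macros
might look like this (same memory layout assumptions as the originals):

#define ITEM_INDEXES(item)          ((uint16 *) (item))
#define ITEM_NULLS(item, ndims)     ((bool *) (ITEM_INDEXES(item) + (ndims)))
#define ITEM_FREQUENCY(item, ndims) ((double *) (ITEM_NULLS(item, ndims) + (ndims)))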

25. In statext_mcv_build() I'm trying to figure out what the for loop
does below the comment:

* If we can fit all the items onto the MCV list, do that. Otherwise
* use get_mincount_for_mcv_list to decide which items to keep in the
* MCV list, based on the number of occurences in the sample.

The comment explains only as far as the get_mincount_for_mcv_list()
call so the following is completely undocumented:

for (i = 0; i < nitems; i++)
{
if (mcv_counts[i] < mincount)
{
nitems = i;
break;
}
}

I was attempting to figure out if the break should be there, or if the
code should continue and find the 'i' for the smallest mcv_counts, but
I don't really understand what the code is meant to be doing.

Also:  occurences  -> occurrences

26. Again in statext_mcv_build(), I'm a bit puzzled as to why mcv_counts
needs to exist at all. It's built from:

mcv_counts = (int *) palloc(sizeof(int) * nitems);

for (i = 0; i < nitems; i++)
mcv_counts[i] = groups[i].count;

Then only used in the loop mentioned in #25 above.  Can't you just use
groups[i].count?
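
For reference, the loop collapsed onto groups[i].count directly (a
sketch; whether the break is correct depends on the groups array being
sorted by descending count, so that everything after the first miss is
below the threshold too):

/* keep only items that appear often enough in the sample */
for (i = 0; i < nitems; i++)
{
    if (groups[i].count < mincount)
    {
        nitems = i;
        break;
    }
}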

(Stopped in statext_mcv_build(). Need to take a break)

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-22 Thread David Rowley
On Fri, 18 Jan 2019 at 10:03, Tomas Vondra  wrote:
> thanks for the review. The attached patches address most of the issues
> mentioned in the past several messages, both in the MCV and histogram parts.

I made another pass over the 0001 patch. I've not read through mcv.c
again yet. Will try to get to that soon.

0001-multivariate-MCV-lists-20190117.patch

1. The following mentions "multiple functions", but lists just 1 function.

   
To inspect statistics defined using CREATE STATISTICS
command, PostgreSQL provides multiple functions.
   

2. There's a mix of usages of MCV and
MCV around the docs. Should these be the same?

3. analyze_mcv_list() is modified to make it an external function, but
it's not used anywhere outside of analyze.c

4. The following can be simplified further:

* We can also leave the record as it is if there are no statistics
* including the datum values, like for example MCV lists.
*/
if (statext_is_kind_built(oldtup, STATS_EXT_MCV))
reset_stats = true;

/*
* If we can leave the statistics as it is, just do minimal cleanup
* and we're done.
*/
if (!reset_stats)
{
ReleaseSysCache(oldtup);
return;
}

to just:

/*
* When none of the defined statistics types contain datum values
* from the table's columns then there's no need to reset the stats.
* Functional dependencies and ndistinct stats should still hold true.
*/
if (!statext_is_kind_built(oldtup, STATS_EXT_MCV))
{
ReleaseSysCache(oldtup);
return;
}

5. "so that we can ignore them below." seems misplaced now since
you've moved all the code below into clauselist_selectivity_simple().
   Maybe you can change it to "so that we can inform
clauselist_selectivity_simple about clauses that it should ignore" ?

* filled with the 0-based list positions of clauses used that way, so
* that we can ignore them below.

6. README.mcv: multi-variate -> multivariate

are large the list may be quite large. This is especially true for multi-variate

7. README.mcv: similar -> a similar

it impossible to use anyarrays. It might be possible to produce similar

8. I saw you added IS NOT NULL to README.mcv, but README just mentions:

(b) MCV lists - equality and inequality clauses (AND, OR, NOT), IS NULL

Should that mention IS NOT NULL too?

9. The header comment for build_attnums_array() claims that it
"transforms an array of AttrNumber values into a bitmap", but it does
the opposite.

 * Transforms an array of AttrNumber values into a bitmap.

10. The following Assert is not entirely useless.  The bitmapset could
have a 0 member, but it can't store negative values.

while ((j = bms_next_member(attrs, j)) >= 0)
{
/* user-defined attributes only */
Assert(AttrNumberIsForUserDefinedAttr(j));

Just checking you thought of that when you added it?

11. XXX comments are normally reserved for things we may wish to
reconsider later, but the following seems more like a "Note:"

 * XXX All the memory is allocated in a single chunk, so that the caller
 * can simply pfree the return value to release all of it.

12. In statext_is_compatible_clause_internal() there's still a comment
that mentions "Var op Const", but Const op Var is also okay too.

13. This is not fall-through.  Generally, such a comment is reserved
to confirm that the "break;" is meant to be missing.

default:
/* fall-through */
return false;

https://developers.redhat.com/blog/2017/03/10/wimplicit-fallthrough-in-gcc-7/
mentions various comment patterns that are used for that case.  Your
comment seems misplaced since it's right above a return, and not before
another case.

14. The header comment for statext_is_compatible_clause() is not
accurate.  It mentions only opexprs with equality operations are
allowed, but none of those are true.

 * Only OpExprs with two arguments using an equality operator are supported.
 * When returning True attnum is set to the attribute number of the Var within
 * the supported clause.

15. statext_clauselist_selectivity(): "a number" -> "the number" ?

 * Selects the best extended (multi-column) statistic on a table (measured by
 * a number of attributes extracted from the clauses and covered by it), and

16. I understand you're changing this to a bitmask in the 0002 patch,
but int is the wrong type here;
/* we're interested in MCV lists */
int types = STATS_EXT_MCV;

Maybe just pass the STATS_EXT_MCV directly, or at least make it a char.

17. bms_membership(clauses_attnums) != BMS_MULTIPLE seems better here.
It can stop once it finds 2. No need to count them all.

/* We need at least two attributes for MCV lists. */
if (bms_num_members(clauses_attnums) < 2)
return 1.0;
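
i.e. something like this sketch:

/* We need at least two attributes for MCV lists. */
if (bms_membership(clauses_attnums) != BMS_MULTIPLE)
    return 1.0;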

18. The following comment in statext_is_compatible_clause_internal()
does not seem to be true.  I see OpExprs are supported and NULL test,
including others too.

/* We only support plain Vars for now */

19. The header comment for clauselist_selectivity_simple() does not
mention what estimatedclauses is for.

20. New line. Also, missing "the" before "maximum"

+ * We
+ * iteratively search for 

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-21 Thread David Rowley
 Thanks for making those changes.

On Fri, 18 Jan 2019 at 10:03, Tomas Vondra  wrote:
> A couple of items remains:
>
> > 15. I see you broke out the remainder of the code from
> > clauselist_selectivity() into clauselist_selectivity_simple().  The
> > comment looks like just a copy and paste from the original.  That
> > seems like quite a bit of duplication. Is it better to maybe trim down
> > the original one?
>
> I don't follow - where do you see the code duplication? Essentially, we
> have clauselist_selectivity and clauselist_selectivity_simple, but the
> first one calls the second one. The "simple" version is needed because
> in some cases we need to perform estimation without multivariate stats
> (e.g. to prevent infinite loop due to recursion).

It was the comment duplication that I was complaining about. I think
clauselist_selectivity()'s comment can be simplified to mention it
attempts to apply extended statistics and applies
clauselist_selectivity_simple on any stats that remain. Plus any
details that are specific to extended statistics.  That way if anyone
wants further detail on what happens to the remaining clauses they can
look at the comment above clauselist_selectivity_simple().

> > 18. In dependencies_clauselist_selectivity() there seem to be a new
> > bug introduced.  We do:
> >
> > /* mark this one as done, so we don't touch it again. */
> > *estimatedclauses = bms_add_member(*estimatedclauses, listidx);
> >
> > but the bms_is_member() check that skipped these has been removed.
> >
> > It might be easier to document if we just always do:
> >
> >  if (bms_is_member(listidx, *estimatedclauses))
> > continue;
> >
> > at the start of both loops. list_attnums can just be left unset for
> > the originally already estimatedclauses.
>
> This was already discussed - I don't think there's any bug, but I'll
> look into refactoring the code somehow to make it clear.

On looking at this a bit more, it seems that since the estimated attr
is removed from the clauses_attnums Bitmapset,
find_strongest_dependency() will no longer find a dependency for that
clause, and dependency_implies_attribute() will just return false where
the bms_is_member(listidx, *estimatedclauses) check would have skipped
it previously. It'll mean we could get more calls of
dependency_implies_attribute(), but that function is even cheaper than
bms_is_member(), so I guess there's no harm in this change.

> > 25. Does statext_is_compatible_clause_internal() need to skip over
> > RelabelTypes?
>
> I don't think it should, because while RelabelType nodes represent casts
> to binary-compatible types, there's no guarantee the semantics actually
> is compatible.

The code that looks through RelabelTypes for normal stats is in
examine_variable(). This code allows the following to estimate 4 rows.
I guess if we didn't use that then we'd just need to treat it like
some unknown expression and use DEFAULT_NUM_DISTINCT.

create table a (t varchar);
insert into a select v.v from (values('One'),('Two'),('Three')) as
v(v), generate_series(1,4);
analyze a;
explain (summary off, timing off, analyze) select * from a where t = 'One';
   QUERY PLAN
-
 Seq Scan on a  (cost=0.00..1.15 rows=4 width=4) (actual rows=4 loops=1)
   Filter: ((t)::text = 'One'::text)
   Rows Removed by Filter: 8
(3 rows)

Why do you think it's okay for the normal stats to look through
RelabelTypes but not the new code you're adding?
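
For reference, the pattern in examine_variable() is roughly this (a
sketch of the existing selfuncs.c behaviour, not the patch's code):

/* Look inside any binary-compatible relabeling */
if (IsA(node, RelabelType))
    basenode = (Node *) ((RelabelType *) node)->arg;
else
    basenode = node;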

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-17 Thread Tomas Vondra
Hi,

thanks for the review. The attached patches address most of the issues
mentioned in the past several messages, both in the MCV and histogram parts.

A couple of items remains:

> 15. I see you broke out the remainder of the code from
> clauselist_selectivity() into clauselist_selectivity_simple().  The
> comment looks like just a copy and paste from the original.  That
> seems like quite a bit of duplication. Is it better to maybe trim down
> the original one?

I don't follow - where do you see the code duplication? Essentially, we
have clauselist_selectivity and clauselist_selectivity_simple, but the
first one calls the second one. The "simple" version is needed because
in some cases we need to perform estimation without multivariate stats
(e.g. to prevent infinite loop due to recursion).

> 18. In dependencies_clauselist_selectivity() there seem to be a new
> bug introduced.  We do:
>
> /* mark this one as done, so we don't touch it again. */
> *estimatedclauses = bms_add_member(*estimatedclauses, listidx);
>
> but the bms_is_member() check that skipped these has been removed.
>
> It might be easier to document if we just always do:
>
>  if (bms_is_member(listidx, *estimatedclauses))
> continue;
>
> at the start of both loops. list_attnums can just be left unset for
> the originally already estimatedclauses.

This was already discussed - I don't think there's any bug, but I'll
look into refactoring the code somehow to make it clear.

> 21. Comment does not really explain what the function does or what the
> arguments mean:
>
> /*
>  * statext_is_compatible_clause_internal
>  * Does the heavy lifting of actually inspecting the clauses for
>  * statext_is_compatible_clause.
>  */

Isn't it explained in the statext_is_compatible_clause comment?

> 25. Does statext_is_compatible_clause_internal() need to skip over
> RelabelTypes?

I don't think it should, because while RelabelType nodes represent casts
to binary-compatible types, there's no guarantee the semantics actually
is compatible. So for example if you do this:

  create table t (a int, b int);
  insert into t select mod(i,100), mod(i,100)
  from generate_series(1,100) s(i);
  create statistics s (mcv) on a, b from t;
  analyze t;
  explain analyze select * from t where a = 1::oid and b = 1::oid;

then there will be RelabelType nodes casting each column from int4 to
oid. So the estimation will be made following oid semantics. But the MCV
list contains int4 values, and is built using int4-specific operators.

I admit this int4/oid example is fairly trivial, but it's not clear to
me we can assume all RelabelType nodes will behave like that. The types
may be binary-coercible, but may use vastly different operators - think about
citext vs. text, for example.

> 35. The evaluation order of this macro is wrong.
>
> #define ITEM_SIZE(ndims) \
> (ndims * (sizeof(uint16) + sizeof(bool)) + 2 * sizeof(double))
>

Nope, as mentioned by Dean, it's actually correct.

> 36. Could do with some comments in get_mincount_for_mcv_list(). What's
> magic about 0.04?

That was copied from another patch, but I'd removed the comment
explaining the details - I've now added it back, which I think should be
more than enough.

> 40. The comment in the above item seems to indicate the condition for
> when all items can fit in the number of groups, but the if condition
> does not seem to allow for an exact match?
>
> if (ngroups > nitems)
>
> if you want to check if the number of items can fit in the number of
> groups should it be: if (ngroups >= nitems) or if (nitems <= ngroups)
> ? Perhaps I've misunderstood. The comment is a little confusing as I'm
> not sure where the "Otherwise" code is located.

No, the whole point of that block is to decide how many groups to keep
if there are more groups than we have space for (based on stats target).
So if (ngroups == nitems) or (ngroups < nitems) then we can keep all of
them.

> 41. I don't think palloc0() is required here. palloc() should be fine
> since you're initialising each element in the loop.
>
> ...
>
> I think I agree with the comment above that chunk about reducing the
> number of pallocs, even if it's just allocating the initial array as
> MCVItems instead of pointers to MCVItems

I've left this as it is for now. The number of extra pallocs() is fairly
low anyway, so I don't think it's worth the extra complexity.

> 47. Wondering about the logic behind the variation between elog() and
> ereport() in statext_mcv_deserialize(). They all look like "can't
> happen" type errors.

That's mostly random, I'll review and fix that. All "can't happen" cases
should use elog().

> 3. I've not really got into understanding how the new statistics types
> are applied yet, but I found this:
>
> * If asked to build both MCV and histogram, first build the MCV part
> * and then histogram on the remaining rows.
>
> I guess that means we'll get different estimates with:
>
> create statistic a_stats (mcv,histogram) on a,b 

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-17 Thread David Rowley
I've started looking over 0002.  Here are a few things so far:

1. I think this should be pg_statistic_ext.stxhistogram?

 Values of the pg_histogram can be obtained only from the
 pg_statistic.stxhistogram column.

2. I don't think this bms_copy is needed anymore. I think it was
previously since there were possibly multiple StatisticExtInfo objects
per pg_statistic_ext row, but now it's 1 for 1.

+ info->keys = bms_copy(keys);

naturally, the bms_free() will need to go too.

3. I've not really got into understanding how the new statistics types
are applied yet, but I found this:

* If asked to build both MCV and histogram, first build the MCV part
* and then histogram on the remaining rows.

I guess that means we'll get different estimates with:

create statistic a_stats (mcv,histogram) on a,b from t;

vs

create statistic a_stats1 (mcv) on a,b from t;
create statistic a_stats2 (histogram) on a,b from t;

Is that going to be surprising to people?

4. I guess you can replace "(histogram == NULL);" with "false". The
compiler would likely do it anyway, but...

if (histogram != NULL)
{
/* histogram already is a bytea value, not need to serialize */
nulls[Anum_pg_statistic_ext_stxhistogram - 1] = (histogram == NULL);
values[Anum_pg_statistic_ext_stxhistogram - 1] = PointerGetDatum(histogram);
}

but, hmm. Shouldn't you serialize this, like you are with the others?
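
Leaving the serialization question aside, folding the constant in would
give something like this sketch:

if (histogram != NULL)
{
    /* histogram is already a bytea value, no need to serialize */
    nulls[Anum_pg_statistic_ext_stxhistogram - 1] = false;
    values[Anum_pg_statistic_ext_stxhistogram - 1] = PointerGetDatum(histogram);
}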

5. serialize_histogram() and statext_histogram_deserialize(), should
these follow the same function naming format?

6. IIRC some compilers may warn about this:

if (stat->kinds & requiredkinds)

making it:

if ((stat->kinds & requiredkinds))

should fix that.

UPDATE: Tried to make a few compilers warn about this and failed.
Perhaps I've misremembered.

7. Comment claims function has a parameter named 'requiredkind', but
it no longer does. The comment also needs updated to mention that it
finds statistics with any of the required kinds.

 * choose_best_statistics
 * Look for and return statistics with the specified 'requiredkind' which
 * have keys that match at least two of the given attnums.  Return NULL if
 * there's no match.
 *
 * The current selection criteria is very simple - we choose the statistics
 * object referencing the most of the requested attributes, breaking ties
 * in favor of objects with fewer keys overall.
 *
 * XXX If multiple statistics objects tie on both criteria, then which object
 * is chosen depends on the order that they appear in the stats list. Perhaps
 * further tiebreakers are needed.
 */
StatisticExtInfo *
choose_best_statistics(List *stats, Bitmapset *attnums, int requiredkinds)

8. Looking at statext_clauselist_selectivity() I see it calls
choose_best_statistics() passing requiredkinds as STATS_EXT_INFO_MCV |
STATS_EXT_INFO_HISTOGRAM, do you think the function now needs to
attempt to find the best match plus the one with the most statistics
kinds?

It might only matter if someone had:

create statistic a_stats1 (mcv) on a,b from t;
create statistic a_stats2 (histogram) on a,b from t;
create statistic a_stats3 (mcv,histogram) on a,b from t;

Is it fine to just return a_stats1 and ignore the fact that a_stats3
is probably better? Or too corner case to care?

9. examine_equality_clause() assumes it'll get a Var. I see we should
only allow clauses that pass statext_is_compatible_clause_internal(),
so maybe it's worth an Assert(IsA(var, Var)) along with a comment to
mention anything else could not have been allowed.

10. Does examine_equality_clause need 'root' as an argument?

11. UINT16_MAX -> PG_UINT16_MAX

/* make sure we fit into uint16 */
Assert(count <= UINT16_MAX);

(Out of energy for today.)

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-17 Thread Dean Rasheed
On Thu, 17 Jan 2019 at 03:42, David Rowley  wrote:
> 39. I don't see analyze_mcv_list() being used anywhere around this comment:
>
> * If we can fit all the items onto the MCV list, do that. Otherwise use
> * analyze_mcv_list to decide how many items to keep in the MCV list, just
> * like for the single-dimensional MCV list.
>

Right. Also, analyze_mcv_list() is no longer being used anywhere
outside of analyze.c, so it can go back to being static.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-16 Thread Dean Rasheed
On Thu, 17 Jan 2019 at 03:42, David Rowley  wrote:
> 35. The evaluation order of this macro is wrong.
>
> #define ITEM_SIZE(ndims) \
> (ndims * (sizeof(uint16) + sizeof(bool)) + 2 * sizeof(double))
>
> You'd probably want ITEM_SIZE(10) to return 170, but:
>
> select (10 * (2 + 1) + 2 * 8);
>  ?column?
> --
>46
>
> Unsure why this does not cause a crash.
>

No, the code is actually correct, as explained in the comment above
it. Each item contains (ndims) copies of the uint16 index and the
boolean, but it always contains exactly 2 doubles, independent of
ndims.

> ndims should also have parenthesis around it in case someone does
> ITEM_SIZE(x + y), likewise for the other ITEM_* macros.
>

+1 on that point.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-16 Thread David Rowley
On Thu, 17 Jan 2019 at 01:56, David Rowley  wrote:
> At this stage I'm trying to get to know the patch.  I read a lot of
> discussing between you and Dean ironing out how the stats should be
> used to form selectivities.  At the time I'd not read the patch yet,
> so most of it went over my head.
>
> I did note down a few things on my read.  I've included them below.
> Hopefully, they're useful.
>
> MCV list review

Part 2:


35. The evaluation order of this macro is wrong.

#define ITEM_SIZE(ndims) \
(ndims * (sizeof(uint16) + sizeof(bool)) + 2 * sizeof(double))

You'd probably want ITEM_SIZE(10) to return 170, but:

select (10 * (2 + 1) + 2 * 8);
 ?column?
--
   46

Unsure why this does not cause a crash.

ndims should also have parenthesis around it in case someone does
ITEM_SIZE(x + y), likewise for the other ITEM_* macros.

36. Could do with some comments in get_mincount_for_mcv_list(). What's
magic about 0.04?

37. I think statext_mcv_build() needs some comments to detail out the
arguments.  For example can attrs be empty? Must it contain at least 2
members? etc.

38. Too many "it"s

* we simply treat it as a special item in the MCV list (it it makes it).

39. I don't see analyze_mcv_list() being used anywhere around this comment:

* If we can fit all the items onto the MCV list, do that. Otherwise use
* analyze_mcv_list to decide how many items to keep in the MCV list, just
* like for the single-dimensional MCV list.

40. The comment in the above item seems to indicate the condition for
when all items can fit in the number of groups, but the if condition
does not seem to allow for an exact match?

if (ngroups > nitems)

if you want to check if the number of items can fit in the number of
groups should it be: if (ngroups >= nitems) or if (nitems <= ngroups)
? Perhaps I've misunderstood. The comment is a little confusing as I'm
not sure where the "Otherwise" code is located.

41. I don't think palloc0() is required here. palloc() should be fine
since you're initialising each element in the loop.

mcvlist->items = (MCVItem **) palloc0(sizeof(MCVItem *) * nitems);

for (i = 0; i < nitems; i++)
{
mcvlist->items[i] = (MCVItem *) palloc(sizeof(MCVItem));
mcvlist->items[i]->values = (Datum *) palloc(sizeof(Datum) * numattrs);
mcvlist->items[i]->isnull = (bool *) palloc(sizeof(bool) * numattrs);
}

I think I agree with the comment above that chunk about reducing the
number of pallocs, even if it's just allocating the initial array as
MCVItems instead of pointers to MCVItems
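
For illustration, a sketch of the denser variant (note it changes items
from an array of MCVItem pointers to a plain array of MCVItem, so uses
elsewhere would need adjusting):

/* single allocation for all the items themselves */
mcvlist->items = (MCVItem *) palloc(sizeof(MCVItem) * nitems);

for (i = 0; i < nitems; i++)
{
    mcvlist->items[i].values = (Datum *) palloc(sizeof(Datum) * numattrs);
    mcvlist->items[i].isnull = (bool *) palloc(sizeof(bool) * numattrs);
}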


42. I don't think palloc0() is required in build_distinct_groups().
palloc() should be ok.

Maybe it's worth an Assert(j + 1 == ngroups) to ensure
count_distinct_groups got them all?

43. You're assuming size_t and long are the same size here.

elog(ERROR, "serialized MCV list exceeds 1MB (%ld)", total_length);

I know at least one platform where that's not true.

44. Should use DatumGetCString() instead of DatumGetPointer().

else if (info[dim].typlen == -2) /* cstring */
{
memcpy(data, DatumGetPointer(v), strlen(DatumGetPointer(v)) + 1);
data += strlen(DatumGetPointer(v)) + 1; /* terminator */
}
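
i.e. something like this sketch (DatumGetCString is just a cast, so the
behaviour is unchanged):

else if (info[dim].typlen == -2) /* cstring */
{
    char   *str = DatumGetCString(v);

    memcpy(data, str, strlen(str) + 1); /* include the null terminator */
    data += strlen(str) + 1;
}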

45. No need to set this to NULL.

Datum*v = NULL;

Is "value" a better name than "v"?

46. What's the extra 'd' for in:

elog(ERROR, "invalid MCV magic %d (expected %dd)",

and

elog(ERROR, "invalid MCV type %d (expected %dd)",

47. Wondering about the logic behind the variation between elog() and
ereport() in statext_mcv_deserialize(). They all look like "can't
happen" type errors.

48. format assumes size_t is the same size as long.

elog(ERROR, "invalid MCV size %ld (expected %ld)",
VARSIZE_ANY_EXHDR(data), expected_size);

49. palloc0() followed by memset(). Can just use palloc().

matches = palloc0(sizeof(char) * mcvlist->nitems);
memset(matches, (is_or) ? STATS_MATCH_NONE : STATS_MATCH_FULL,
   sizeof(char) * mcvlist->nitems);

50. The coding style in mcv_get_match_bitmap I think needs to be
postgresqlified. We normally declare all our variables in a chunk then
start setting them, unless the assignment is very simple. I don't
recall places in the code where have a comment when declaring a
variable, for example.

FmgrInfo gtproc;
Var*var = (varonleft) ? linitial(expr->args) : lsecond(expr->args);
Const*cst = (varonleft) ? lsecond(expr->args) : linitial(expr->args);
bool isgt = (!varonleft);

TypeCacheEntry *typecache
= lookup_type_cache(var->vartype, TYPECACHE_GT_OPR);

/* match the attribute to a dimension of the statistic */
int idx = bms_member_index(keys, var->varattno);
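
For illustration, one way the declarations could be postgresqlified (a
sketch keeping the same logic):

FmgrInfo    gtproc;
Var        *var;
Const      *cst;
bool        isgt;
TypeCacheEntry *typecache;
int         idx;

var = (varonleft) ? linitial(expr->args) : lsecond(expr->args);
cst = (varonleft) ? lsecond(expr->args) : linitial(expr->args);
isgt = (!varonleft);
typecache = lookup_type_cache(var->vartype, TYPECACHE_GT_OPR);

/* match the attribute to a dimension of the statistic */
idx = bms_member_index(keys, var->varattno);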

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-16 Thread David Rowley
On Thu, 17 Jan 2019 at 14:19, Tomas Vondra  wrote:
>  > 12.  Should we be referencing the source from the docs?
>  >
>  > See mcv_clauselist_selectivity
>  >  in src/backend/statistics/mcv.c for details.
>  >
>  > hmm. I see it's not the first going by: git grep -E "\w+\.c\<"
> Hmm, that does not return anything to me - do you actually see any
> references to .c files in the sgml docs? I agree that probably is not a
> good idea, so I'll remove that.

Yeah, I see quite a few. I shouldn't have escaped the <

>  > 18. In dependencies_clauselist_selectivity() there seem to be a new
>  > bug introduced.  We do:
>  >
>  > /* mark this one as done, so we don't touch it again. */
>  > *estimatedclauses = bms_add_member(*estimatedclauses, listidx);
>  >
>  > but the bms_is_member() check that skipped these has been removed.
>  >
>  > It might be easier to document if we just always do:
>  >
>  >if (bms_is_member(listidx, *estimatedclauses))
>  > continue;
>  >
>  > at the start of both loops. list_attnums can just be left unset for
>  > the originally already estimatedclauses.
>  >
> It's probably not as clear as it should be, but if the clause is already
> estimated (or incompatible), then the list_attnums[] entry will be
> InvalidAttrNumber. Which is what we check in the second loop.

hmm. what about the items that should be skipped when you do the
*estimatedclauses = bms_add_member(*estimatedclauses, listidx); in the
2nd loop. You'll need to either also do list_attnums[listidx] =
InvalidAttrNumber; for them, or put back the bms_is_member() check,
no?  I admit to not having debugged it to find an actual bug, it just
looks suspicious.

>  > 25. Does statext_is_compatible_clause_internal() need to skip over
>  > RelabelTypes?
>  >
> I believe it does, based on what I've observed during development. Why
> do you think it's not necessary?

The other way around. I thought it was necessary, but the code does not do it.

>  > 26. In statext_is_compatible_clause_internal() you mention: /* We only
>  > support plain Vars for now */, but I see nothing that ensures that
>  > only Vars are allowed in the is_opclause() condition.
>  >
>  > /* see if it actually has the right */
>  > ok = (NumRelids((Node *) expr) == 1) &&
>  > (is_pseudo_constant_clause(lsecond(expr->args)) ||
>  > (varonleft = false,
>  > is_pseudo_constant_clause(linitial(expr->args))));
>  >
>  > the above would allow var+var == const through.
>  >
> But then we call statext_is_compatible_clause_internal on it again, and
> that only allows Vars and "Var op Const" expressions. Maybe there's a
> way around that?

True, I missed that. Drop that one.

>  > 33. "ndimentions"? There's no field in the struct by that name. I'd
>  > assume it's the same size as the isnull array above it?
>  >
>  > Datum*values; /* variable-length (ndimensions) */
>  >
> Yes, that's the case.

If it relates to the ndimensions field from the struct below, maybe
it's worth crafting that into the comment somehow.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-16 Thread Tomas Vondra




On 1/16/19 7:56 AM, David Rowley wrote:
> On Tue, 15 Jan 2019 at 08:21, Tomas Vondra  wrote:

>> Turns out you were right - the attribute_referenced piece was quite
>> unnecessary. So I've removed it. I've also extended the regression tests
>> to verify changing type of another column does not reset the stats.
>
> (Trying to find my feet over here)
>
> I've read over the entire thread, and apart from missing the last two
> emails and therefore the latest patch, I managed to read over most of
> the MCV patch. I didn't quite get to reading mcv.c and don't quite
> have the energy to take that on now.
>
Thanks for looking!

> At this stage I'm trying to get to know the patch.  I read a lot of
> discussion between you and Dean ironing out how the stats should be
> used to form selectivities.  At the time I'd not read the patch yet,
> so most of it went over my head.
>
> I did note down a few things on my read.  I've included them below.
> Hopefully, they're useful.
>
> MCV list review
>
> 1. In mvc.c there's Assert(ndistinct <= UINT16_MAX); This should be
> PG_UINT16_MAX
>
Yep. Will fix.

> 2. math.h should be included just after postgres.h
>
Yep. Will fix.

> 3. Copyright is still -2017 in mcv.c.  Hopefully, if you change it to
> 2019, you'll never have to bump it ever again! :-)
>
Optimist ;-)

> 4. Looking at pg_stats_ext_mcvlist_items() I see you've coded the
> string building manually. The way it's coded I'm finding a little
> strange.  It means the copying becomes quadratic due to
>
> snprintf(buff, 1024, format, values[1], DatumGetPointer(valout));
> strncpy(values[1], buff, 1023);
>
> So basically, generally, here you're building a new string with
> values[1] followed by a comma, then followed by valout. On the next
> line you then copy that new buffer back into values[1].  I understand
> this part is likely not performance critical, but I see no reason to
> write the code this way.
>
> Are you limiting the strings to 1024 bytes on purpose?  I don't see
> any comment mentioning you want to truncate strings.
>
> Would it not be better to do this part using a
> AppendStringInfoString()? and just manually add a '{', ',' or '}' as
> and when required?
>
> DatumGetPointer(valout) should really be using DatumGetCString(valout).
>
> Likely you can also use heap_form_tuple.  This will save you having to
> convert ints into strings then only to have BuildTupleFromCStrings()
> do the reverse.
>
I agree. I admit all of this is a residue of an initial hackish version 
of the function, and should be changed to StringInfo. Will fix.


> 5. individiaul -> individual
>
>  lists.  This allows very accurate estimates for individiaul columns, but
>
>  litst -> lists
>
>  litst on combinations of columns.  Similarly to functional dependencies
>
Will fix.

> 6. Worth mentioning planning cycles too?
>
>   "It's advisable to create MCV statistics objects only
>   on combinations of columns that are actually used in conditions together,
>   and for which misestimation of the number of groups is resulting in bad
>   plans.  Otherwise, the ANALYZE cycles are just wasted."
>
Makes sense. Although that's what we say about the existing stats, so
perhaps we should tweak that too.


> 7. straight-forward -> straightforward
>
> (most-common values) lists, a straight-forward extension of the per-column
>
> 8. adresses -> addresses
>
> statistics adresses the limitation by storing individual values, but it
>
Will fix. Thanks for proof-reading.

> 9. Worth mentioning ANALYZE time?
>
>  This section introduces multivariate variant of MCV
>  (most-common values) lists, a straight-forward extension of the per-column
>  statistics described in <xref linkend="row-estimation-examples"/>. This
>  statistics adresses the limitation by storing individual values, but it
>  is naturally more expensive, both in terms of storage and planning time.
>
Yeah.

> 10. low -> a low
>
>  with low number of distinct values. Before looking at the second query,
>
> 11. them -> then
>
>  on items in the MCV list, and them sums the frequencies
>
Will fix.

> 12.  Should we be referencing the source from the docs?
>
> See mcv_clauselist_selectivity
>  in src/backend/statistics/mcv.c for details.
>
> hmm. I see it's not the first going by: git grep -E "\w+\.c\<"
Hmm, that does not return anything to me - do you actually see any 
references to .c files in the sgml docs? I agree that probably is not a 
good idea, so I'll remove that.


> 13. Pretty minor, but the following loop in
> UpdateStatisticsForTypeChange() could use a break;
>
> attribute_referenced = false;
> for (i = 0; i < staForm->stxkeys.dim1; i++)
> if (attnum == staForm->stxkeys.values[i])
> attribute_referenced = true;
>
> UPDATE: If I'd reviewed the correct patch I'd have seen that you'd
> removed this already
>
;-)

> 14. Again in UpdateStatisticsForTypeChange(), would it not be better
> to do the 

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-16 Thread David Rowley
On Tue, 15 Jan 2019 at 08:21, Tomas Vondra  wrote:
> Turns out you were right - the attribute_referenced piece was quite
> unnecessary. So I've removed it. I've also extended the regression tests
> to verify changing type of another column does not reset the stats.

(Trying to find my feet over here)

I've read over the entire thread, and apart from missing the last two
emails and therefore the latest patch, I managed to read over most of
the MCV patch. I didn't quite get to reading mcv.c and don't quite
have the energy to take that on now.

At this stage I'm trying to get to know the patch.  I read a lot of
discussion between you and Dean ironing out how the stats should be
used to form selectivities.  At the time I'd not read the patch yet,
so most of it went over my head.

I did note down a few things on my read.  I've included them below.
Hopefully, they're useful.

MCV list review

1. In mvc.c there's Assert(ndistinct <= UINT16_MAX); This should be
PG_UINT16_MAX

2. math.h should be included just after postgres.h

3. Copyright is still -2017 in mcv.c.  Hopefully, if you change it to
2019, you'll never have to bump it ever again! :-)

4. Looking at pg_stats_ext_mcvlist_items() I see you've coded the
string building manually. The way it's coded I'm finding a little
strange.  It means the copying becomes quadratic due to

snprintf(buff, 1024, format, values[1], DatumGetPointer(valout));
strncpy(values[1], buff, 1023);

So basically, generally, here you're building a new string with
values[1] followed by a comma, then followed by valout. On the next
line you then copy that new buffer back into values[1].  I understand
this part is likely not performance critical, but I see no reason to
write the code this way.

Are you limiting the strings to 1024 bytes on purpose?  I don't see
any comment mentioning you want to truncate strings.

Would it not be better to do this part using a
AppendStringInfoString()? and just manually add a '{', ',' or '}' as
and when required?

DatumGetPointer(valout) should really be using DatumGetCString(valout).

Likely you can also use heap_form_tuple.  This will save you having to
convert ints into strings then only to have BuildTupleFromCStrings()
do the reverse.

5. individiaul -> individual
lists.  This allows very accurate estimates for individiaul columns, but

litst -> lists

litst on combinations of columns.  Similarly to functional dependencies

6. Worth mentioning planning cycles too?

 "It's advisable to create MCV statistics objects only
 on combinations of columns that are actually used in conditions together,
 and for which misestimation of the number of groups is resulting in bad
 plans.  Otherwise, the ANALYZE cycles are just wasted."

7. straight-forward -> straightforward

(most-common values) lists, a straight-forward extension of the per-column

8. adresses -> addresses

statistics adresses the limitation by storing individual values, but it

9. Worth mentioning ANALYZE time?

This section introduces multivariate variant of MCV
(most-common values) lists, a straight-forward extension of the per-column
statistics described in <xref linkend="row-estimation-examples"/>. This
statistics adresses the limitation by storing individual values, but it
is naturally more expensive, both in terms of storage and planning time.

10. low -> a low

with low number of distinct values. Before looking at the second query,

11. them -> then

on items in the MCV list, and them sums the frequencies

12.  Should we be referencing the source from the docs?

See mcv_clauselist_selectivity
in src/backend/statistics/mcv.c for details.

hmm. I see it's not the first going by: git grep -E "\w+\.c\<"

13. Pretty minor, but the following loop in
UpdateStatisticsForTypeChange() could use a break;

attribute_referenced = false;
for (i = 0; i < staForm->stxkeys.dim1; i++)
if (attnum == staForm->stxkeys.values[i])
attribute_referenced = true;

UPDATE: If I'd reviewed the correct patch I'd have seen that you'd
removed this already

14. Again in UpdateStatisticsForTypeChange(), would it not be better
to do the statext_is_kind_built(oldtup, STATS_EXT_MCV) check before
checking if the stats contain this column?  This gets rid of your
reset_stats variable.

I also don't quite understand the additional check for
statext_is_kind_built(oldtup, STATS_EXT_MCV): if that's false, then
why do we do the dummy update on the tuple?

Have you just coded this so that you can support other stats types
later without too much modification? If so, I'm finding it a bit
confusing to read, so maybe it's worth only coding it that way if
there's more than one stats type to reset for.

UPDATE: If I'd reviewed the correct patch I'd have seen that you'd
removed this already

15. I see you broke out the remainder of the code from
clauselist_selectivity() into clauselist_selectivity_simple().  The
comment looks like just a copy and paste from the original.  That
seems like quite a bit of 

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-14 Thread Tomas Vondra
On 1/14/19 4:31 PM, Tomas Vondra wrote:
> 
> On 1/14/19 12:20 PM, Dean Rasheed wrote:
>> (Removing Adrien from the CC list, because messages to that address
>> keep bouncing)
>>
>> On Sun, 13 Jan 2019 at 00:31, Tomas Vondra  
>> wrote:
>>>
>>> On 1/10/19 6:09 PM, Dean Rasheed wrote:

 In the previous discussion around UpdateStatisticsForTypeChange(), the
 consensus appeared to be that we should just unconditionally drop all
 extended statistics when ALTER TABLE changes the type of an included
 column (just as we do for per-column stats), since such a type change
 can rewrite the data in arbitrary ways, so there's no reason to assume
 that the old stats are still valid. I think it makes sense to extract
 that as a separate patch to be committed ahead of these ones, and I'd
 also argue for back-patching it.
>>>
>>> Wasn't the agreement to keep stats that don't include column values
>>> (functional dependencies and ndistinct coefficients), and reset only
>>> more complex stats? That's what happens in master and how it's extended
>>> by the patch for MCV lists and histograms.
>>>
>>
>> Ah OK, I misremembered the exact conclusion reached last time. In that
>> case the logic in UpdateStatisticsForTypeChange() looks wrong:
>>
>> /*
>>  * If we can leave the statistics as it is, just do minimal cleanup
>>  * and we're done.
>>  */
>> if (!attribute_referenced && reset_stats)
>> {
>> ReleaseSysCache(oldtup);
>> return;
>> }
>>
>> That should be "|| !reset_stats", or have more parentheses.
> 
> Yeah, it should have been
> 
> if (!(attribute_referenced && reset_stats))
> 
> i.e. there's a parenthesis missing. Thanks for noticing this. I guess a
> regression test for this would be useful.
> 
>> In fact, I think that computing attribute_referenced is unnecessary
>> because the dependency information includes the columns that the
>> stats are for and ATExecAlterColumnType() uses that, so
>> attribute_referenced will always be true.
> Hmmm. I'm pretty sure I came to the conclusion it's in fact necessary,
> but I might be wrong. Will check.
> 

Turns out you were right - the attribute_referenced piece was quite
unnecessary. So I've removed it. I've also extended the regression tests
to verify changing type of another column does not reset the stats.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-multivariate-MCV-lists-20190114.patch.gz
Description: application/gzip


0002-multivariate-histograms-20190114.patch.gz
Description: application/gzip


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-14 Thread Tomas Vondra


On 1/14/19 12:20 PM, Dean Rasheed wrote:
> (Removing Adrien from the CC list, because messages to that address
> keep bouncing)
> 
> On Sun, 13 Jan 2019 at 00:31, Tomas Vondra  
> wrote:
>>
>> On 1/10/19 6:09 PM, Dean Rasheed wrote:
>>>
>>> In the previous discussion around UpdateStatisticsForTypeChange(), the
>>> consensus appeared to be that we should just unconditionally drop all
>>> extended statistics when ALTER TABLE changes the type of an included
>>> column (just as we do for per-column stats), since such a type change
>>> can rewrite the data in arbitrary ways, so there's no reason to assume
>>> that the old stats are still valid. I think it makes sense to extract
>>> that as a separate patch to be committed ahead of these ones, and I'd
>>> also argue for back-patching it.
>>
>> Wasn't the agreement to keep stats that don't include column values
>> (functional dependencies and ndistinct coefficients), and reset only
>> more complex stats? That's what happens in master and how it's extended
>> by the patch for MCV lists and histograms.
>>
> 
> Ah OK, I misremembered the exact conclusion reached last time. In that
> case the logic in UpdateStatisticsForTypeChange() looks wrong:
> 
> /*
>  * If we can leave the statistics as it is, just do minimal cleanup
>  * and we're done.
>  */
> if (!attribute_referenced && reset_stats)
> {
> ReleaseSysCache(oldtup);
> return;
> }
> 
> That should be "|| !reset_stats", or have more parentheses.

Yeah, it should have been

if (!(attribute_referenced && reset_stats))

i.e. there's a parenthesis missing. Thanks for noticing this. I guess a
regression test for this would be useful.

> In fact, I think that computing attribute_referenced is unnecessary
> because the dependency information includes the columns that the
> stats are for and ATExecAlterColumnType() uses that, so
> attribute_referenced will always be true.
Hmmm. I'm pretty sure I came to the conclusion it's in fact necessary,
but I might be wrong. Will check.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-14 Thread Dean Rasheed
(Removing Adrien from the CC list, because messages to that address
keep bouncing)

On Sun, 13 Jan 2019 at 00:31, Tomas Vondra  wrote:
>
> On 1/10/19 6:09 PM, Dean Rasheed wrote:
> >
> > In the previous discussion around UpdateStatisticsForTypeChange(), the
> > consensus appeared to be that we should just unconditionally drop all
> > extended statistics when ALTER TABLE changes the type of an included
> > column (just as we do for per-column stats), since such a type change
> > can rewrite the data in arbitrary ways, so there's no reason to assume
> > that the old stats are still valid. I think it makes sense to extract
> > that as a separate patch to be committed ahead of these ones, and I'd
> > also argue for back-patching it.
>
> Wasn't the agreement to keep stats that don't include column values
> (functional dependencies and ndistinct coefficients), and reset only
> more complex stats? That's what happens in master and how it's extended
> by the patch for MCV lists and histograms.
>

Ah OK, I misremembered the exact conclusion reached last time. In that
case the logic in UpdateStatisticsForTypeChange() looks wrong:

/*
 * If we can leave the statistics as it is, just do minimal cleanup
 * and we're done.
 */
if (!attribute_referenced && reset_stats)
{
ReleaseSysCache(oldtup);
return;
}

That should be "|| !reset_stats", or have more parentheses. In fact, I
think that computing attribute_referenced is unnecessary because the
dependency information includes the columns that the stats are for and
ATExecAlterColumnType() uses that, so attribute_referenced will always
be true.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-14 Thread Dean Rasheed
On Sun, 13 Jan 2019 at 00:04, Tomas Vondra  wrote:
> On 1/12/19 8:49 AM, Dean Rasheed wrote:
> > A possible refinement would be to say that if there are more than
> > stats_target items more common than this mincount threshold, rather than
> > excluding the least common ones to get the target number of items,
> > exclude the ones closest to their base frequencies, on the grounds that
> > those are the ones for which the MCV stats will make the least
> > difference. That might complicate the code somewhat though -- I don't
> > have it in front of me, so I can't remember if it even tracks more than
> > stats_target items.
>
> Yes, the patch does limit the number of items to stats_target (a maximum
> of per-attribute stattarget values, to be precise). IIRC that's a piece
> you've added sometime last year ;-)
>
> I've been experimenting with removing items closest to base frequencies
> today, and I came to the conclusion that it's rather tricky for a couple
> of reasons.
>
> 1) How exactly do you measure "closeness" to base frequency? I've tried
> computing the error in different ways, including:
>
>   * Max(freq/base, base/freq)
>   * abs(freq - base)
>
> but this does not seem to affect the behavior very much, TBH.
>
> 2) This necessarily reduces mcv_totalsel, i.e. it increases the part not
> covered by MCV. And estimates on this part are rather crude.
>
> 3) It does nothing for "impossible" items, i.e. combinations that do not
> exist at all. Clearly, those won't be part of the sample, and so can't
> be included in the MCV no matter which error definition we pick. And for
> very rare combinations it might lead to sudden changes, depending on
> whether the group gets sampled or not.
>
> So IMHO it's better to stick to the simple RSE approach for now.
>

OK, that makes sense.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-12 Thread Tomas Vondra
On 1/10/19 6:09 PM, Dean Rasheed wrote:
> On Wed, 26 Dec 2018 at 22:09, Tomas Vondra  
> wrote:
>>
>> Attached is an updated version of the patch - rebased and fixing the
>> warnings reported by Thomas Munro.
>>
> 
> Here are a few random review comments based on what I've read so far:
> 
> 
> On the CREATE STATISTICS doc page, the syntax in the new examples
> added to the bottom of the page is incorrect. E.g., instead of
> 
> CREATE STATISTICS s2 WITH (mcv) ON (a, b) FROM t2;
> 
> it should read
> 
> CREATE STATISTICS s2 (mcv) ON a, b FROM t2;
> 

Fixed.

> I think perhaps there should be also be a short explanatory sentence
> after each example (as in the previous one) just to explain what the
> example is intended to demonstrate. E.g., for the new MCV example,
> perhaps say
> 
>These statistics give the planner more detailed information about the
>specific values that commonly appear in the table, as well as an upper
>bound on the selectivities of combinations of values that do not appear in
>the table, allowing it to generate better estimates in both cases.
> 
> I don't think there's a need for too much detail there, since it's
> explained more fully elsewhere, but it feels like it needs a little
> more just to explain the purpose of the example.
> 

I agree, this part of docs can be quite terse. I've adopted the wording
you proposed, and I've done something similar for the histogram patch,
which needs to add something too. It's a bit repetitive, though.

> 
> There is additional documentation in perform.sgml that needs updating
> -- about what kinds of stats the planner keeps. Those docs are
> actually quite similar to the ones on planstats.sgml. It seems the
> former focus more on what stats the planner stores, while the latter
> focus on how the planner uses those stats.
> 

OK, I've expanded this part a bit too.

> 
> In func.sgml, the docs for pg_mcv_list_items need extending to include
> the base frequency column. Similarly for the example query in
> planstats.sgml.
> 

Fixed.

> 
> Tab-completion for the CREATE STATISTICS statement should be extended
> for the new kinds.
> 

Fixed.

> 
> Looking at mcv_update_match_bitmap(), it's called 3 times (twice
> recursively from within itself), and I think the pattern for calling
> it is a bit messy. E.g.,
> 
> /* by default none of the MCV items matches the clauses */
> bool_matches = palloc0(sizeof(char) * mcvlist->nitems);
> 
> if (or_clause(clause))
> {
> /* OR clauses assume nothing matches, initially */
> memset(bool_matches, STATS_MATCH_NONE, sizeof(char) *
> mcvlist->nitems);
> }
> else
> {
> /* AND clauses assume everything matches, initially */
> memset(bool_matches, STATS_MATCH_FULL, sizeof(char) *
> mcvlist->nitems);
> }
> 
> /* build the match bitmap for the OR-clauses */
> mcv_update_match_bitmap(root, bool_clauses, keys,
> mcvlist, bool_matches,
> or_clause(clause));
> 
> the comment for the AND case directly contradicts the initial comment,
> and the final comment is wrong because it could be an AND clause. For
> a NOT clause it does:
> 
> /* by default none of the MCV items matches the clauses */
> not_matches = palloc0(sizeof(char) * mcvlist->nitems);
> 
> /* NOT clauses assume nothing matches, initially */
> memset(not_matches, STATS_MATCH_FULL, sizeof(char) *
> mcvlist->nitems);
> 
> /* build the match bitmap for the NOT-clause */
> mcv_update_match_bitmap(root, not_args, keys,
> mcvlist, not_matches, false);
> 
> so the second comment is wrong. I understand the evolution that led
> to this function existing in this form, but I think that it can now be
> refactored into a "getter" function rather than an "update" function.
> I.e., something like mcv_get_match_bitmap() which first allocates the
> array to be returned and initialises it based on the passed-in value
> of is_or. That way, all the calling sites can be simplified to
> one-liners like
> 
> /* get the match bitmap for the AND/OR clause */
> bool_matches = mcv_get_match_bitmap(root, bool_clauses, keys,
> mcvlist, or_clause(clause));
> 

Yes, I agree. I've reworked the function per your proposal, and I've
done the same for the histogram too.
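
As a rough outline, the getter then has this shape (simplified, with
the clause-walking details elided):

static char *
mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
                     Bitmapset *keys, MCVList *mcvlist, bool is_or)
{
    /* AND starts from "everything matches", OR from "nothing matches" */
    char   *matches = palloc(sizeof(char) * mcvlist->nitems);

    memset(matches, is_or ? STATS_MATCH_NONE : STATS_MATCH_FULL,
           sizeof(char) * mcvlist->nitems);

    /* ... walk the clauses and update the match bitmap ... */

    return matches;
}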

> 
> In the previous discussion around UpdateStatisticsForTypeChange(), the
> consensus appeared to be that we should just unconditionally drop all
> extended statistics when ALTER TABLE changes the type of an included
> column (just as we do for per-column stats), since such a type change
> can rewrite the data in arbitrary ways, so there's no reason to assume
> that the old stats are still valid. I 

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-12 Thread Tomas Vondra



On 1/12/19 8:49 AM, Dean Rasheed wrote:
> On Fri, 11 Jan 2019, 21:18 Tomas Vondra   wrote:
> 
> 
> On 1/10/19 4:20 PM, Dean Rasheed wrote:
> > ...
> >
> > So perhaps what we should do for multivariate stats is simply use the
> > relative standard error approach (i.e., reuse the patch in [2] with a
> > 20% RSE cutoff). That had a lot of testing at the time, against a wide
> > range of data distributions, and proved to be very good, not to
> > mention being very simple.
> >
> > That approach would encompass both groups more and less common than
> > the base frequency, because it relies entirely on the group appearing
> > enough times in the sample to infer that any errors on the resulting
> > estimates will be reasonably well controlled. It wouldn't actually
> > look at the base frequency at all in deciding which items to keep.
> >
> 
> I've been looking at this approach today, and I'm a bit puzzled. That
> patch essentially uses RSE to compute mincount like this:
> 
>     mincount = n*(N-n) / (N-n+0.04*n*(N-1))
> 
> and then includes all items more common than this threshold.
> 
> 
> Right.
> 
> How could
> that handle items significantly less common than the base frequency?
> 
> 
> Well what I meant was that it will *allow* items significantly less
> common than the base frequency, because it's not even looking at the
> base frequency. For example, if the table size were N=100,000 and we
> sampled n=10,000 rows from that, mincount would work out as 22. So it's
> easy to construct allowed items more common than that and still
> significantly less common than their base frequency.
> 

OK, understood. I agree that's a sensible yet simple approach, so I've
adopted it in the next version of the patch.

> A possible refinement would be to say that if there are more than
> stats_target items more common than this mincount threshold, rather than
> excluding the least common ones to get the target number of items,
> exclude the ones closest to their base frequencies, on the grounds that
> those are the ones for which the MCV stats will make the least
> difference. That might complicate the code somewhat though -- I don't
> have it in front of me, so I can't remember if it even tracks more than
> stats_target items.
> 

Yes, the patch does limit the number of items to stats_target (a maximum
of per-attribute stattarget values, to be precise). IIRC that's a piece
you've added sometime last year ;-)

I've been experimenting with removing items closest to base frequencies
today, and I came to the conclusion that it's rather tricky for a couple
of reasons.

1) How exactly do you measure "closeness" to base frequency? I've tried
computing the error in different ways, including:

  * Max(freq/base, base/freq)
  * abs(freq - base)

but this does not seem to affect the behavior very much, TBH.

2) This necessarily reduces mcv_totalsel, i.e. it increases the part not
covered by MCV. And estimates on this part are rather crude.

3) It does nothing for "impossible" items, i.e. combinations that do not
exist at all. Clearly, those won't be part of the sample, and so can't
be included in the MCV no matter which error definition we pick. And for
very rare combinations it might lead to sudden changes, depending on
whether the group gets sampled or not.

So IMHO it's better to stick to the simple RSE approach for now.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-11 Thread Dean Rasheed
On Fri, 11 Jan 2019, 21:18 Tomas Vondra wrote:
> On 1/10/19 4:20 PM, Dean Rasheed wrote:
> > ...
> >
> > So perhaps what we should do for multivariate stats is simply use the
> > relative standard error approach (i.e., reuse the patch in [2] with a
> > 20% RSE cutoff). That had a lot of testing at the time, against a wide
> > range of data distributions, and proved to be very good, not to
> > mention being very simple.
> >
> > That approach would encompass both groups more and less common than
> > the base frequency, because it relies entirely on the group appearing
> > enough times in the sample to infer that any errors on the resulting
> > estimates will be reasonably well controlled. It wouldn't actually
> > look at the base frequency at all in deciding which items to keep.
> >
>
> I've been looking at this approach today, and I'm a bit puzzled. That
> patch essentially uses SRE to compute mincount like this:
>
> mincount = n*(N-n) / (N-n+0.04*n*(N-1))
>
> and then includes all items more common than this threshold.


Right.

How could
> that handle items significantly less common than the base frequency?
>

Well what I meant was that it will *allow* items significantly less common
than the base frequency, because it's not even looking at the base
frequency. For example, if the table size were N=100,000 and we sampled
n=10,000 rows from that, mincount would work out as 22. So it's easy to
construct allowed items more common than that and still significantly less
common than their base frequency.
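
For reference, a minimal standalone sketch of that cutoff (not the actual
patch code), which reproduces the numbers above:

    #include <stdio.h>

    /* RSE-based cutoff: keep sample groups whose count exceeds this
     * threshold; 0.04 is the squared 20% relative standard error */
    static double
    mincount(double n, double N)
    {
        return n * (N - n) / (N - n + 0.04 * n * (N - 1));
    }

    int
    main(void)
    {
        printf("%.1f\n", mincount(10000, 100000));  /* prints 22.4 */
        return 0;
    }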

A possible refinement would be to say that if there are more than
stats_target items more common than this mincount threshold, rather than
excluding the least common ones to get the target number of items, exclude
the ones closest to their base frequencies, on the grounds that those are
the ones for which the MCV stats will make the least difference. That might
complicate the code somewhat though -- I don't have it in front of me, so I
can't remember if it even tracks more than stats_target items.

Regards,
Dean


Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-11 Thread Tomas Vondra


On 1/10/19 4:20 PM, Dean Rasheed wrote:
> ...
>
> So perhaps what we should do for multivariate stats is simply use the
> relative standard error approach (i.e., reuse the patch in [2] with a
> 20% RSE cutoff). That had a lot of testing at the time, against a wide
> range of data distributions, and proved to be very good, not to
> mention being very simple.
> 
> That approach would encompass both groups more and less common than
> the base frequency, because it relies entirely on the group appearing
> enough times in the sample to infer that any errors on the resulting
> estimates will be reasonably well controlled. It wouldn't actually
> look at the base frequency at all in deciding which items to keep.
> 

I've been looking at this approach today, and I'm a bit puzzled. That
patch essentially uses SRE to compute mincount like this:

mincount = n*(N-n) / (N-n+0.04*n*(N-1))

and then includes all items more common than this threshold. How could
that handle items significantly less common than the base frequency?

Or did you mean to use the SRE, but in some different way?

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-10 Thread Dean Rasheed
On Wed, 26 Dec 2018 at 22:09, Tomas Vondra  wrote:
>
> Attached is an updated version of the patch - rebased and fixing the
> warnings reported by Thomas Munro.
>

Here are a few random review comments based on what I've read so far:


On the CREATE STATISTICS doc page, the syntax in the new examples
added to the bottom of the page is incorrect. E.g., instead of

CREATE STATISTICS s2 WITH (mcv) ON (a, b) FROM t2;

it should read

CREATE STATISTICS s2 (mcv) ON a, b FROM t2;

I think perhaps there should be also be a short explanatory sentence
after each example (as in the previous one) just to explain what the
example is intended to demonstrate. E.g., for the new MCV example,
perhaps say

   These statistics give the planner more detailed information about the
   specific values that commonly appear in the table, as well as an upper
   bound on the selectivities of combinations of values that do not appear in
   the table, allowing it to generate better estimates in both cases.

I don't think there's a need for too much detail there, since it's
explained more fully elsewhere, but it feels like it needs a little
more just to explain the purpose of the example.


There is additional documentation in perform.sgml that needs updating
-- about what kinds of stats the planner keeps. Those docs are
actually quite similar to the ones on planstats.sgml. It seems the
former focus more one what stats the planner stores, while the latter
focus on how the planner uses those stats.


In func.sgml, the docs for pg_mcv_list_items need extending to include
the base frequency column. Similarly for the example query in
planstats.sgml.
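
Perhaps something along these lines (a sketch only -- I'm assuming the
names of the catalog column and the function's output columns here):

    -- hypothetical example; column names assumed
    SELECT m.index, m.values, m.frequency, m.base_frequency
      FROM pg_statistic_ext s,
           pg_mcv_list_items(s.stxmcv) m
     WHERE s.stxname = 's2';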


Tab-completion for the CREATE STATISTICS statement should be extended
for the new kinds.


Looking at mcv_update_match_bitmap(), it's called 3 times (twice
recursively from within itself), and I think the pattern for calling
it is a bit messy. E.g.,

    /* by default none of the MCV items matches the clauses */
    bool_matches = palloc0(sizeof(char) * mcvlist->nitems);

    if (or_clause(clause))
    {
        /* OR clauses assume nothing matches, initially */
        memset(bool_matches, STATS_MATCH_NONE, sizeof(char) * mcvlist->nitems);
    }
    else
    {
        /* AND clauses assume everything matches, initially */
        memset(bool_matches, STATS_MATCH_FULL, sizeof(char) * mcvlist->nitems);
    }

    /* build the match bitmap for the OR-clauses */
    mcv_update_match_bitmap(root, bool_clauses, keys,
                            mcvlist, bool_matches, or_clause(clause));

the comment for the AND case directly contradicts the initial comment,
and the final comment is wrong because it could be an AND clause. For
a NOT clause it does:

    /* by default none of the MCV items matches the clauses */
    not_matches = palloc0(sizeof(char) * mcvlist->nitems);

    /* NOT clauses assume nothing matches, initially */
    memset(not_matches, STATS_MATCH_FULL, sizeof(char) * mcvlist->nitems);

    /* build the match bitmap for the NOT-clause */
    mcv_update_match_bitmap(root, not_args, keys,
                            mcvlist, not_matches, false);

so the second comment is wrong. I understand the evolution that led
to this function existing in this form, but I think that it can now be
refactored into a "getter" function rather than an "update" function.
I.e., something like mcv_get_match_bitmap() which first allocates the
array to be returned and initialises it based on the passed-in value
of is_or. That way, all the calling sites can be simplified to
one-liners like

    /* get the match bitmap for the AND/OR clause */
    bool_matches = mcv_get_match_bitmap(root, bool_clauses, keys,
                                        mcvlist, or_clause(clause));
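
Roughly this shape, I think (a sketch of the idea, not actual patch code;
the argument types are borrowed from the existing function):

    static char *
    mcv_get_match_bitmap(PlannerInfo *root, List *clauses,
                         Bitmapset *keys, MCVList *mcvlist, bool is_or)
    {
        /* OR starts from "nothing matches", AND from "everything matches" */
        char   *matches = palloc(sizeof(char) * mcvlist->nitems);

        memset(matches, is_or ? STATS_MATCH_NONE : STATS_MATCH_FULL,
               sizeof(char) * mcvlist->nitems);

        /*
         * ... walk the clauses and update the bitmap, exactly as the
         * existing mcv_update_match_bitmap() does ...
         */

        return matches;
    }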


In the previous discussion around UpdateStatisticsForTypeChange(), the
consensus appeared to be that we should just unconditionally drop all
extended statistics when ALTER TABLE changes the type of an included
column (just as we do for per-column stats), since such a type change
can rewrite the data in arbitrary ways, so there's no reason to assume
that the old stats are still valid. I think it makes sense to extract
that as a separate patch to be committed ahead of these ones, and I'd
also argue for back-patching it.


That's it for now. I'll try to keep reviewing if time permits.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-10 Thread Tomas Vondra
On 1/10/19 4:20 PM, Dean Rasheed wrote:
> On Wed, 9 Jan 2019 at 15:40, Tomas Vondra  
> wrote:
>> On 1/8/19 3:18 PM, Dean Rasheed wrote:
>>> So actually, the estimate for a group of values will be either the MCV
>>> item's frequency (if the MCV item is kept), or (roughly) the MCV
>>> item's base_frequency (if the MCV item is not kept). That suggests
>>> that we should simply keep items that are significantly more or less
>>> common than the item's base frequency -- i.e., keep rule (b) and ditch
>>> rule (a).
>>>
>>
>> Hmmm, but won't that interfere with how we extrapolate the
>> MCV estimate to the non-MCV part? Currently the patch does what you
>> proposed, i.e.
>>
>> other_sel = simple_sel - mcv_basesel;
>>
>> I'm worried that if we only include the items that are significantly
>> more or less common than the base frequency, it may skew the other_sel
>> estimate.
>>
> 
> I don't see how that would skew other_sel. Items close to the base
> frequency would also tend to be close to simple_sel, making other_sel
> approximately zero, so excluding them should have little effect.

Oh, I see. You're right those items should contribute very little to
other_sel, I should have realized that.

> However...
> 
> Re-reading the thread where we enhanced the per-column MCV stats last
> year [1], it was actually the case that an algorithm based on just
> looking at the relative standard error worked pretty well for a very
> wide range of data distributions.
> 
> The final algorithm chosen in analyze_mcv_list() was only a marginal
> improvement on that, and was directly based upon the fact that, in the
> univariate statistics case, all the values not included in the MCV
> list are assigned the same selectivity. However, that's not the case
> for multivariate stats, because each group not included in the
> multivariate MCV list gets assigned a different selectivity based on
> its per-column stats.
> 
> So perhaps what we should do for multivariate stats is simply use the
> relative standard error approach (i.e., reuse the patch in [2] with a
> 20% RSE cutoff). That had a lot of testing at the time, against a wide
> range of data distributions, and proved to be very good, not to
> mention being very simple.
> 
> That approach would encompass both groups more and less common than
> the base frequency, because it relies entirely on the group appearing
> enough times in the sample to infer that any errors on the resulting
> estimates will be reasonably well controlled. It wouldn't actually
> look at the base frequency at all in deciding which items to keep.
> 
> Moreover, if the group appears sufficiently often in the sample to
> justify being kept, each of the individual column values must also
> appear at least that often as well, which means that the errors on the
> base frequency estimate are also well controlled. That was one of my
> concerns about other algorithms such as "keep items significantly more
> or less common than the base frequency" -- in the less common case,
> there's no lower bound on the number of occurrences seen, and so no
> guarantee that the errors are kept under control.
> 

Yep, that looks like a great approach. Simple and tested. I'll try
tweaking the patch accordingly over the weekend.

Thanks!

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-10 Thread Dean Rasheed
On Wed, 9 Jan 2019 at 15:40, Tomas Vondra  wrote:
> On 1/8/19 3:18 PM, Dean Rasheed wrote:
> > So actually, the estimate for a group of values will be either the MCV
> > item's frequency (if the MCV item is kept), or (roughly) the MCV
> > item's base_frequency (if the MCV item is not kept). That suggests
> > that we should simply keep items that are significantly more or less
> > common than the item's base frequency -- i.e., keep rule (b) and ditch
> > rule (a).
> >
>
> Hmmm, but won't that interfere with how we extrapolate the
> MCV estimate to the non-MCV part? Currently the patch does what you
> proposed, i.e.
>
> other_sel = simple_sel - mcv_basesel;
>
> I'm worried that if we only include the items that are significantly
> more or less common than the base frequency, it may skew the other_sel
> estimate.
>

I don't see how that would skew other_sel. Items close to the base
frequency would also tend to be close to simple_sel, making other_sel
approximately zero, so excluding them should have little effect.
However...

Re-reading the thread where we enhanced the per-column MCV stats last
year [1], it was actually the case that an algorithm based on just
looking at the relative standard error worked pretty well for a very
wide range of data distributions.

The final algorithm chosen in analyze_mcv_list() was only a marginal
improvement on that, and was directly based upon the fact that, in the
univariate statistics case, all the values not included in the MCV
list are assigned the same selectivity. However, that's not the case
for multivariate stats, because each group not included in the
multivariate MCV list gets assigned a different selectivity based on
its per-column stats.

So perhaps what we should do for multivariate stats is simply use the
relative standard error approach (i.e., reuse the patch in [2] with a
20% RSE cutoff). That had a lot of testing at the time, against a wide
range of data distributions, and proved to be very good, not to
mention being very simple.

That approach would encompass both groups more and less common than
the base frequency, because it relies entirely on the group appearing
enough times in the sample to infer that any errors on the resulting
estimates will be reasonably well controlled. It wouldn't actually
look at the base frequency at all in deciding which items to keep.

Moreover, if the group appears sufficiently often in the sample to
justify being kept, each of the individual column values must also
appear at least that often as well, which means that the errors on the
base frequency estimate are also well controlled. That was one of my
concerns about other algorithms such as "keep items significantly more
or less common than the base frequency" -- in the less common case,
there's no lower bound on the number of occurrences seen, and so no
guarantee that the errors are kept under control.

Regards,
Dean

[1] 
https://www.postgresql.org/message-id/flat/CAMkU%3D1yvdGvW9TmiLAhz2erFnvnPFYHbOZuO%2Ba%3D4DVkzpuQ2tw%40mail.gmail.com

[2] 
https://www.postgresql.org/message-id/CAEZATCUEmHCZeOHJN8JO5O9LK_VuFeCecy_AxTk7S_2SmLXeyw%40mail.gmail.com



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-09 Thread Tomas Vondra



On 1/8/19 3:18 PM, Dean Rasheed wrote:
> On Mon, 7 Jan 2019 at 00:45, Tomas Vondra  
> wrote:
>>
>> FWIW the main unsolved issue (at least on the MCV part) is how it
>> decides which items to keep in the list.
>>
>> As explained in [1], in the multivariate case we can't simply look at
>> the group frequency and compare it to the average frequency (of the
>> non-MCV items), which is what analyze_mcv_list() does in the
>> single-column case. In the multivariate case we also case about observed
>> vs. base frequency, i.e. we want the MCV list to include groups that are
>> present singificantly more/less than product of per-column stats.
>>
>> I've repeatedly tried to come up with a criteria that would address
>> that, but it seems rather difficult because we can't abandon the other
>> criteria either. So the MCV list should include groups that match both
>>
>> (a) items that are statistically more common than the non-MCV part (i.e.
>> the rule from per-column analyze_mcv_list)
>>
>> (b) items that are statistically more/less common than estimated from
>> per-column stats (i.e. the new rule)
> 
> Thinking about this some more, I think that it probably isn't
> appropriate to use analyze_mcv_list() directly because that's making
> specific assumptions about how items not in the list will be estimated
> that aren't actually true for groups of values in multivariate stats.
> If a group of values isn't in the MCV list, it gets estimated based on
> the product of the selectivities from the per-column stats (modulo the
> additional logic preventing the selectivity from exceeding the total
> non-MCV selectivity).
> 
> So actually, the estimate for a group of values will be either the MCV
> item's frequency (if the MCV item is kept), or (roughly) the MCV
> item's base_frequency (if the MCV item is not kept). That suggests
> that we should simply keep items that are significantly more or less
> common than the item's base frequency -- i.e., keep rule (b) and ditch
> rule (a).
> 

Hmmm, but won't that interfere with how we extrapolate the
MCV estimate to the non-MCV part? Currently the patch does what you
proposed, i.e.

other_sel = simple_sel - mcv_basesel;

I'm worried that if we only include the items that are significantly
more or less common than the base frequency, it may skew the other_sel
estimate.

>> Enforcing rule (a) seems reasonable because it ensures the MCV list
>> includes all items more frequent than the last one. Without it, it's
>> difficult to know whether the absent item is very common (but
>> close to base frequency) or very uncommon (so less frequent than the
>> last MCV item).
> 
> I'm not sure there's much we can do about that. Keeping the item will
> result in keeping a frequency that we know is close to the base
> frequency, and not keeping the item will result in per-column stats
> being used that we expect to also give an estimate close to the base
> frequency. So either way, the result is about the same, and it's
> probably better to discard it, leaving more room for other items about
> which we may have more information.
> 
> That said, there is a separate benefit to keeping items in the list
> even if their frequency is close to the base frequency -- the more
> items kept, the larger their total selectivity will be, giving a
> better cap on the non-MCV selectivities. So if, after keeping all
> items satisfying rule (b), there are free slots available, perhaps
> they should be used for the most common remaining values satisfying
> rule (a).
> 

Hmm, so essentially we'd use (b) first to bootstrap the MCV list, and
then we could do what analyze_mcv_list() does. That could work, I guess.

The question is how to define "significantly different from base freq"
though. Any ideas?

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-08 Thread Dean Rasheed
On Mon, 7 Jan 2019 at 00:45, Tomas Vondra  wrote:
>
> FWIW the main unsolved issue (at least on the MCV part) is how it
> decides which items to keep in the list.
>
> As explained in [1], in the multivariate case we can't simply look at
> the group frequency and compare it to the average frequency (of the
> non-MCV items), which is what analyze_mcv_list() does in the
> single-column case. In the multivariate case we also case about observed
> vs. base frequency, i.e. we want the MCV list to include groups that are
> present singificantly more/less than product of per-column stats.
>
> I've repeatedly tried to come up with a criteria that would address
> that, but it seems rather difficult because we can't abandon the other
> criteria either. So the MCV list should include groups that match both
>
> (a) items that are statistically more common than the non-MCV part (i.e.
> the rule from per-column analyze_mcv_list)
>
> (b) items that are statistically more/less common than estimated from
> per-column stats (i.e. the new rule)

Thinking about this some more, I think that it probably isn't
appropriate to use analyze_mcv_list() directly because that's making
specific assumptions about how items not in the list will be estimated
that aren't actually true for groups of values in multivariate stats.
If a group of values isn't in the MCV list, it gets estimated based on
the product of the selectivities from the per-column stats (modulo the
additional logic preventing the selectivity from exceeding the total
non-MCV selectivity).

So actually, the estimate for a group of values will be either the MCV
item's frequency (if the MCV item is kept), or (roughly) the MCV
item's base_frequency (if the MCV item is not kept). That suggests
that we should simply keep items that are significantly more or less
common than the item's base frequency -- i.e., keep rule (b) and ditch
rule (a).

> Enforcing rule (a) seems reasonable because it ensures the MCV list
> includes all items more frequent than the last one. Without it, it's
> difficult to know whether the absent item is very common (but
> close to base frequency) or very uncommon (so less frequent than the
> last MCV item).

I'm not sure there's much we can do about that. Keeping the item will
result in keeping a frequency that we know is close to the base
frequency, and not keeping the item will result in per-column stats
being used that we expect to also give an estimate close to the base
frequency. So either way, the result is about the same, and it's
probably better to discard it, leaving more room for other items about
which we may have more information.

That said, there is a separate benefit to keeping items in the list
even if their frequency is close to the base frequency -- the more
items kept, the larger their total selectivity will be, giving a
better cap on the non-MCV selectivities. So if, after keeping all
items satisfying rule (b), there are free slots available, perhaps
they should be used for the most common remaining values satisfying
rule (a).

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2019-01-06 Thread Tomas Vondra
FWIW the main unsolved issue (at least on the MCV part) is how it
decides which items to keep in the list.

As explained in [1], in the multivariate case we can't simply look at
the group frequency and compare it to the average frequency (of the
non-MCV items), which is what analyze_mcv_list() does in the
single-column case. In the multivariate case we also care about observed
vs. base frequency, i.e. we want the MCV list to include groups that are
present significantly more/less often than the product of per-column stats.

I've repeatedly tried to come up with a criterion that would address
that, but it seems rather difficult because we can't abandon the other
criteria either. So the MCV list should include groups that match both

(a) items that are statistically more common than the non-MCV part (i.e.
the rule from per-column analyze_mcv_list)

(b) items that are statistically more/less common than estimated from
per-column stats (i.e. the new rule)

Enforcing rule (a) seems reasonable because it ensures the MCV list
includes all items more frequent than the last one. Without it, it's
difficult to know whether the absent item is very common (but
close to base frequency) or very uncommon (so less frequent than the
last MCV item).

So it's not clear to me how to best marry these two things. So far the
only thing I came up with is looking for the last item where the
frequency and base frequency are very different (not sure how exactly to
decide when the difference becomes statistically significant), include
all items with higher frequencies, and then do analyze_mcv_list() to
also enforce (a). But it seems a bit cumbersome :-(
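
In pseudo-C the idea would look roughly like this (both helpers are
hypothetical placeholders, not existing functions):

    /* items assumed sorted by descending frequency */
    int     i;
    int     nkeep;

    /* find the last item whose frequency differs significantly from
     * its base frequency ... */
    for (i = nitems - 1; i >= 0; i--)
    {
        if (significantly_different(items[i].frequency,
                                    items[i].base_frequency))
            break;
    }

    /* ... keep everything above it (rule (b)) ... */
    nkeep = i + 1;

    /* ... and then apply the analyze_mcv_list() criterion on top, to
     * also enforce rule (a) */
    nkeep = analyze_mcv_list_like(items, nkeep);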

[1]
https://www.postgresql.org/message-id/8ac8bd94-478d-215d-e6bd-339f1f20a74c%402ndquadrant.com


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-11-26 Thread Tomas Vondra
On 11/26/18 11:29 PM, Thomas Munro wrote:
> On Mon, Sep 3, 2018 at 11:17 AM Tomas Vondra
>  wrote:
>> Attached is an updated version of the patch series, adopting a couple of
>> improvements - both for MCV lists and histograms.
> 
> Hello Tomas,
> 
> FYI, here are a couple of warnings from GCC (I just noticed because I
> turned on -Werror on cfbot so your patch turned red):
> 
> extended_stats.c: In function ‘statext_clauselist_selectivity’:
> extended_stats.c:1227:6: error: ‘other_sel’ may be used uninitialized
> in this function [-Werror=maybe-uninitialized]
>   sel = mcv_sel + other_sel;
>   ^
> extended_stats.c:1091:5: note: ‘other_sel’ was declared here
>  other_sel,
>  ^
> extended_stats.c:1227:6: error: ‘mcv_sel’ may be used uninitialized in
> this function [-Werror=maybe-uninitialized]
>   sel = mcv_sel + other_sel;
>   ^
> extended_stats.c:1087:5: note: ‘mcv_sel’ was declared here
>  mcv_sel,
>  ^
> 

Thanks, I'll fix that in the next version of the patch I'm working on.


cheers

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-11-26 Thread Thomas Munro
On Mon, Sep 3, 2018 at 11:17 AM Tomas Vondra
 wrote:
> Attached is an updated version of the patch series, adopting a couple of
> improvements - both for MCV lists and histograms.

Hello Tomas,

FYI, here are a couple of warnings from GCC (I just noticed because I
turned on -Werror on cfbot so your patch turned red):

extended_stats.c: In function ‘statext_clauselist_selectivity’:
extended_stats.c:1227:6: error: ‘other_sel’ may be used uninitialized
in this function [-Werror=maybe-uninitialized]
  sel = mcv_sel + other_sel;
  ^
extended_stats.c:1091:5: note: ‘other_sel’ was declared here
 other_sel,
 ^
extended_stats.c:1227:6: error: ‘mcv_sel’ may be used uninitialized in
this function [-Werror=maybe-uninitialized]
  sel = mcv_sel + other_sel;
  ^
extended_stats.c:1087:5: note: ‘mcv_sel’ was declared here
 mcv_sel,
 ^

-- 
Thomas Munro
http://www.enterprisedb.com



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-09-10 Thread Tomas Vondra
Hi,

On 07/18/2018 09:32 AM, Konstantin Knizhnik wrote:
> 
> 
> On 18.07.2018 02:58, Tomas Vondra wrote:
>> On 07/18/2018 12:41 AM, Konstantin Knizhnik wrote:
>>> ...
>>>
>>> Teodor Sigaev has proposed an alternative approach for calculating
>>> selectivity of multicolumn join or compound index search.
>>> Usually a DBA creates compound indexes which can be used by the optimizer
>>> to build an efficient query execution plan based on index search.
>>> We can store statistics for compound keys of such indexes (as is done
>>> now for functional indexes) and use them to estimate the selectivity of
>>> clauses. I have implemented this idea and will publish a patch in a
>>> separate thread soon.
>>> Now I just want to share some results for Tomas's examples.
>>>
>>> So for vanilla Postgres without extra statistics the estimated number of
>>> rows is about 4 times smaller than the real one.
>>>
>> Can you please post plans with parallelism disabled, and perhaps without
>> the aggregate? Both make reading the plans unnecessarily difficult ...
> 
> 
> Sorry, below are plans with parallel execution disabled on a simpler
> query (a=1 and b=1):
> 
> explain analyze SELECT count(*) FROM foo WHERE a=1 and b=1;
> 
> 
> 
> Vanilla:
> 
>  Aggregate  (cost=11035.86..11035.87 rows=1 width=8) (actual
> time=22.746..22.746 rows=1 loops=1)
>    ->  Bitmap Heap Scan on foo  (cost=291.35..11001.97 rows=13553
> width=0) (actual time=9.055..18.711 rows=5 loops=1)
>  Recheck Cond: ((a = 1) AND (b = 1))
>  Heap Blocks: exact=222
>  ->  Bitmap Index Scan on foo_a_b_idx  (cost=0.00..287.96
> rows=13553 width=0) (actual time=9.005..9.005 rows=5 loops=1)
>    Index Cond: ((a = 1) AND (b = 1))
> 
> 
> --
> 
> Vanilla + extra statistic (create statistics ab on a,b from foo):
> 
>  Aggregate  (cost=12693.35..12693.36 rows=1 width=8) (actual
> time=22.747..22.748 rows=1 loops=1)
>    ->  Bitmap Heap Scan on foo  (cost=1490.08..12518.31 rows=70015
> width=0) (actual time=9.399..18.636 rows=5 loops=1)
>  Recheck Cond: ((a = 1) AND (b = 1))
>  Heap Blocks: exact=222
>  ->  Bitmap Index Scan on foo_a_b_idx (cost=0.00..1472.58
> rows=70015 width=0) (actual time=9.341..9.341 rows=5 loops=1)
>    Index Cond: ((a = 1) AND (b = 1))
> 
> --
> 
> Multicolumn index statistic:
> 
>  Aggregate  (cost=11946.35..11946.36 rows=1 width=8) (actual
> time=25.117..25.117 rows=1 loops=1)
>    ->  Bitmap Heap Scan on foo  (cost=1080.47..11819.51 rows=50736
> width=0) (actual time=11.568..21.362 rows=5 loops=1)
>  Recheck Cond: ((a = 1) AND (b = 1))
>  Heap Blocks: exact=222
>  ->  Bitmap Index Scan on foo_a_b_idx (cost=0.00..1067.79
> rows=50736 width=0) (actual time=11.300..11.300 rows=5 loops=1)
>    Index Cond: ((a = 1) AND (b = 1))
> 

I wonder what happened to this alternative approach, relying on stats
from multicolumn indexes ...

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-09-09 Thread Tomas Vondra
Hi,

On 09/04/2018 04:16 PM, Dean Rasheed wrote:
> On 3 September 2018 at 00:17, Tomas Vondra  
> wrote:
>> Hi,
>>
>> Attached is an updated version of the patch series, adopting a couple of
>> improvements - both for MCV lists and histograms.
>>
>>
>> MCV
>> ---
>>
>> For the MCV list part, I've adopted the approach proposed by Dean, using
>> base selectivity and using it to correct the non-MCV part. I agree the
>> simplicity of the approach is a nice feature, and it seems to produce
>> better estimates. I'm not sure I understand the approach perfectly, but
>> I've tried to add comments explaining how it works etc.
>>
> 
> Cool. Looking at this afresh after some time away, it still looks like
> a reasonable approach, and the test results are encouraging.
> 
> In mcv_clauselist_selectivity(), you raise the following question:
> 
> if (matches[i] != STATS_MATCH_NONE)
> {
> /* XXX Shouldn't the basesel be outside the if condition? */
> *basesel += mcv->items[i]->base_frequency;
> s += mcv->items[i]->frequency;
> }
> 
> The reason it needs to be inside the "if" block is that what it's
> computing is the base (independent) selectivity of the clauses found
> to match the MCV stats, so that then in
> statext_clauselist_selectivity() it can be used in the following
> computation:
> 
> /* Estimated selectivity of values not covered by MCV matches */
> other_sel = simple_sel - mcv_basesel;
> 
> to give an estimate for the other clauses that aren't covered by the
> MCV stats. So I think the code is correct as it stands, but if you
> think it isn't clear enough, maybe a comment update is in order.
> 
> The assumption being made is that mcv_basesel will cancel out the part
> of simple_sel that is due to clauses matching the MCV stats, so that
> we can then just add on the MCV selectivity. Of course that's only an
> approximation, and it won't be exact -- partly due to the fact that
> simple_sel and mcv_basesel are potentially computed using different
> sample rows, and so will differ in the MCV region, and partly because
> of second-order effects arising from the way that selectivities are
> combined in clauselist_selectivity_simple(). Maybe that's something
> that can be improved upon, but it doesn't seem like a bad initial
> approximation.
> 

Thanks for the clarification. It's one of the comments I added while
reworking the patch, with still a very limited understanding of the
approach at that point in time. I'll replace it with a comment
explaining the reasoning in the next version.

> 
>> I've also changed how we build the MCV lists, particularly how we decide
>> how many / which items to store in the MCV list. In the previous version
>> I've adopted the same algorithm we use for per-column MCV lists, but in
>> certain cases that turned out to be too restrictive.
>>
>> Consider for example a table with multiple perfectly correlated columns,
>> with very few combinations. That is, something like this:
>>
>> CREATE TABLE t (a int, b int);
>>
>> INSERT INTO t SELECT mod(i,50), mod(i,50)
>>   FROM generate_series(1,1e6) s(i);
>>
>> CREATE STATISTICS s (mcv) ON a,b FROM t;
>>
>> Now, the data distribution is very simple - uniform, with 50 distinct
>> combinations, each representing 2% of data (and the random sample should
>> be pretty close to that).
>>
>> In these cases, analyze_mcv_list decides it does not need any MCV list,
>> because the frequency for each value is pretty much 1/ndistinct. For
>> a single column that's reasonable, but for multiple correlated columns
>> it's rather problematic. We might use the same ndistinct approach
>> (assuming we have the ndistinct coefficients), but that still does not
>> allow us to decide which combinations are "valid" with respect to the
>> data. For example we can't decide (1,10) does not appear in the data.
>>
>> So I'm not entirely sure about adopting the same analyze_mcv_list
>> algorithm for both single-column and multi-column stats. It may make
>> sense to keep more items in the multi-column case for reasons that are
>> not really valid for a single column.
>>
>> For now I've added a trivial condition to simply keep all the groups
>> when possible. This probably needs more thought.
>>
> 
> Ah, this is a good point. I think I see the problem here.
> 
> analyze_mcv_list() works by keeping those MCV entries that are
> statistically significantly more frequent than the selectivity that
> would have otherwise been assigned to the values, which is based on
> ndistinct and nullfrac. That's not really right for multivariate stats
> though, because the selectivity that would be assigned to a
> multi-column value if it weren't in the multivariate MCV list is
> actually calculated using the product of individual column
> selectivities. Fortunately we now calculate this (base_frequency), so
> actually I think what's needed is a custom version of
analyze_mcv_list() that keeps MCV entries if the observed frequency is
statistically significantly larger than the base frequency, not the
ndistinct-based frequency.

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-09-04 Thread Dean Rasheed
On 3 September 2018 at 00:17, Tomas Vondra  wrote:
> Hi,
>
> Attached is an updated version of the patch series, adopting a couple of
> improvements - both for MCV lists and histograms.
>
>
> MCV
> ---
>
> For the MCV list part, I've adopted the approach proposed by Dean, using
> base selectivity and using it to correct the non-MCV part. I agree the
> simplicity of the approach is a nice feature, and it seems to produce
> better estimates. I'm not sure I understand the approach perfectly, but
> I've tried to add comments explaining how it works etc.
>

Cool. Looking at this afresh after some time away, it still looks like
a reasonable approach, and the test results are encouraging.

In mcv_clauselist_selectivity(), you raise the following question:

if (matches[i] != STATS_MATCH_NONE)
{
/* XXX Shouldn't the basesel be outside the if condition? */
*basesel += mcv->items[i]->base_frequency;
s += mcv->items[i]->frequency;
}

The reason it needs to be inside the "if" block is that what it's
computing is the base (independent) selectivity of the clauses found
to match the MCV stats, so that then in
statext_clauselist_selectivity() it can be used in the following
computation:

/* Estimated selectivity of values not covered by MCV matches */
other_sel = simple_sel - mcv_basesel;

to give an estimate for the other clauses that aren't covered by the
MCV stats. So I think the code is correct as it stands, but if you
think it isn't clear enough, maybe a comment update is in order.

The assumption being made is that mcv_basesel will cancel out the part
of simple_sel that is due to clauses matching the MCV stats, so that
we can then just add on the MCV selectivity. Of course that's only an
approximation, and it won't be exact -- partly due to the fact that
simple_sel and mcv_basesel are potentially computed using different
sample rows, and so will differ in the MCV region, and partly because
of second-order effects arising from the way that selectivities are
combined in clauselist_selectivity_simple(). Maybe that's something
that can be improved upon, but it doesn't seem like a bad initial
approximation.
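
Schematically, the combination being discussed is roughly this (a
simplified sketch, not the exact patch code):

    /*
     * simple_sel   - estimate from per-column stats only
     * mcv_sel      - summed frequencies of matching MCV items
     * mcv_basesel  - summed base frequencies of matching MCV items
     * mcv_totalsel - summed frequencies of all MCV items
     */
    other_sel = simple_sel - mcv_basesel;
    CLAMP_PROBABILITY(other_sel);

    /* the non-MCV part cannot exceed 1 - mcv_totalsel */
    other_sel = Min(other_sel, 1.0 - mcv_totalsel);

    sel = mcv_sel + other_sel;
    CLAMP_PROBABILITY(sel);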


> I've also changed how we build the MCV lists, particularly how we decide
> how many / which items to store in the MCV list. In the previous version
> I've adopted the same algorithm we use for per-column MCV lists, but in
> certain cases that turned out to be too restrictive.
>
> Consider for example a table with multiple perfectly correlated columns,
> with very few combinations. That is, something like this:
>
> CREATE TABLE t (a int, b int);
>
> INSERT INTO t SELECT mod(i,50), mod(i,50)
>   FROM generate_series(1,1e6) s(i);
>
> CREATE STATISTICS s (mcv) ON a,b FROM t;
>
> Now, the data distribution is very simple - uniform, with 50 distinct
> combinations, each representing 2% of data (and the random sample should
> be pretty close to that).
>
> In these cases, analyze_mcv_list decides it does not need any MCV list,
> because the frequency for each value is pretty much 1/ndistinct. For
> a single column that's reasonable, but for multiple correlated columns
> it's rather problematic. We might use the same ndistinct approach
> (assuming we have the ndistinct coefficients), but that still does not
> allow us to decide which combinations are "valid" with respect to the
> data. For example we can't decide (1,10) does not appear in the data.
>
> So I'm not entirely sure about adopting the same analyze_mcv_list
> algorithm for both single-column and multi-column stats. It may make
> sense to keep more items in the multi-column case for reasons that are
> not really valid for a single column.
>
> For now I've added a trivial condition to simply keep all the groups
> when possible. This probably needs more thought.
>

Ah, this is a good point. I think I see the problem here.

analyze_mcv_list() works by keeping those MCV entries that are
statistically significantly more frequent than the selectivity that
would have otherwise been assigned to the values, which is based on
ndistinct and nullfrac. That's not really right for multivariate stats
though, because the selectivity that would be assigned to a
multi-column value if it weren't in the multivariate MCV list is
actually calculated using the product of individual column
selectivities. Fortunately we now calculate this (base_frequency), so
actually I think what's needed is a custom version of
analyze_mcv_list() that keeps MCV entries if the observed frequency is
statistically significantly larger than the base frequency, not the
ndistinct-based frequency.
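
One possible form of such a test (an assumption on my part, not patch
code) would treat the sampled frequency as approximately normal:

    #include <math.h>
    #include <stdbool.h>

    /* keep the item when its observed frequency exceeds the base
     * frequency by more than two standard errors of the sampled
     * proportion (the factor 2.0 is an arbitrary choice here) */
    static bool
    keep_mcv_item(double freq, double base_freq, double samplerows)
    {
        double  se = sqrt(freq * (1.0 - freq) / samplerows);

        return (freq - base_freq) > 2.0 * se;
    }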

It might also be worthwhile doing a little more work to make the
base_frequency values more consistent with the way individual column
selectivities are actually calculated -- currently the patch always
uses the observed single-column frequencies to calculate the base
frequencies, but actually the univariate stats would only do that for
a subset of the 

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-08-06 Thread Tomas Vondra




On 08/03/2018 04:24 PM, Dean Rasheed wrote:

On 17 July 2018 at 14:03, Tomas Vondra  wrote:

For equalities it's going to be hard. The only thing I can think of at the
moment is checking if there are any matching buckets at all, and using that
to decide whether to extrapolate the MCV selectivity to the non-MCV part or
not (or perhaps to what part of the non-MCV part).



So I decided to play a little more with this, experimenting with a
much simpler approach -- this is for MCV's only at the moment, see the
attached (very much WIP) patch (no doc or test updates, and lots of
areas for improvement).

The basic idea when building the MCV stats is to not just record the
frequency of each combination of values, but also what I'm calling the
"base frequency" -- that is the frequency that that combination of
values would have if the columns were independent (i.e., the product
of each value's individual frequency).

The reasoning then, is that if we find an MCV entry matching the query
clauses, the difference (frequency - base_frequency) can be viewed as
a correction to be applied to the selectivity returned by
clauselist_selectivity_simple(). If all possible values were covered
by matching MCV entries, the sum of the base frequencies of the
matching MCV entries would approximately cancel out with the simple
selectivity, and only the MCV frequencies would be left (ignoring
second order effects arising from the fact that
clauselist_selectivity_simple() doesn't just sum up disjoint
possibilities). For partial matches, it will use what multivariate
stats are available to improve upon the simple selectivity.

I wondered about just storing the difference (frequency -
base_frequency) in the stats, but it's actually useful to have both
values, because then the total of all the MCV frequencies can be used
to set an upper bound on the non-MCV part.

The advantage of this approach is that it is very simple, and in
theory ought to be reasonably applicable to arbitrary combinations of
clauses. Also, it naturally falls back to the univariate-based
estimate when there are no matching MCV entries. In fact, even when
there are no matching MCV entries, it can still improve upon the
univariate estimate by capping it to 1-total_mcv_sel.
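
The construction can be sketched like this (assumed names;
column_frequency() is a hypothetical helper, not patch code):

    /* frequency: observed fraction of the sample in this group */
    item->frequency = group_count / (double) numrows;

    /* base frequency: what the group's frequency would be if the
     * columns were fully independent */
    item->base_frequency = 1.0;
    for (dim = 0; dim < ndims; dim++)
        item->base_frequency *= column_frequency(dim, item->values[dim]);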

I tested it with the same data posted previously and a few simple
queries, and the initial results are quite encouraging. Where the
previous patch sometimes gave noticeable over- or under-estimates,
this patch generally did better:


Query  Actual rows  Est (HEAD)  Est (24 Jun patch)  Est (new patch)
   Q1            5       12625               48631            49308
   Q2            4        9375               40739            38710
   Q3            9       21644              172688            88018
   Q4           14       52048              267528           138228
   Q5           14       52978              267528           138228
   Q6           14       52050              267528           138228
   Q7       829942      777806              149886           822788
   Q8       749942      748302              692686           747922
   Q9        15000       40989               27595            14131
  Q10        15997       49853               27595            23121

Q1: a=1 and b=1
Q2: a=1 and b=2
Q3: a=1 and (b=1 or b=2)
Q4: (a=1 or a=2) and (b=1 or b=2)
Q5: (a=1 or a=2) and (b<=2)
Q6: (a=1 or a=2 or a=4) and (b=1 or b=2)
Q7: (a=1 or a=2) and not (b=2)
Q8: (a=1 or a=2) and not (b=1 or b=2)
Q9: a=3 and b>0 and b<3
Q10: a=3 and b>0 and b<1000



Interesting idea, and the improvements certainly seem encouraging.

I wonder what a counter-example would look like - I think the MCV and 
non-MCV parts would have to behave very differently (one perfectly 
dependent, the other perfectly independent). But that does not seem very
likely, and even if it were, there's not much we can do about such cases.




I've not tried anything with histograms. Possibly the histograms could
be used as-is, to replace the non-MCV part (other_sel). Or, a similar
approach could be used, recording the base frequency of each histogram
bucket, and then using that to refine the other_sel estimate. Either
way, I think it would be necessary to exclude equality clauses from
the histograms, otherwise MCVs might end up being double-counted.



I do have an idea about histograms. I didn't have time to hack on it 
yet, but I think it could work in combination with your MCV algorithm.


Essentially there are two related issues with histograms:


1) equality conditions

Histograms work nicely with inequalities, not that well for equalities. 
For equality clauses, we can estimate the selectivity as 1/ndistinct, 
similarly to what we do in 1-D cases (we can use ndistinct coefficients 
if we have them, and MCV tracking the common combinations).


If there are both equalities and inequalities, we can then use the 
equality clauses merely as a condition (to limit the set of buckets), and 
evaluate the inequalities for those buckets only. Essentially compute


P(equals + inequals) = P(equals) * P(inequals | equals)

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-18 Thread Konstantin Knizhnik




On 18.07.2018 02:58, Tomas Vondra wrote:

On 07/18/2018 12:41 AM, Konstantin Knizhnik wrote:

...

Teodor Sigaev has proposed an alternative approach for calculating
selectivity of multicolumn join or compound index search.
Usually a DBA creates compound indexes which can be used by the optimizer
to build an efficient query execution plan based on index search.
We can store statistics for compound keys of such indexes (as is done
now for functional indexes) and use them to estimate the selectivity of
clauses. I have implemented this idea and will publish a patch in a
separate thread soon.
Now I just want to share some results for Tomas's examples.

So for vanilla Postgres without extra statistics the estimated number of
rows is about 4 times smaller than the real one.


Can you please post plans with parallelism disabled, and perhaps without
the aggregate? Both make reading the plans unnecessarily difficult ...



Sorry, below are plans with parallel execution disabled on a simpler
query (a=1 and b=1):


explain analyze SELECT count(*) FROM foo WHERE a=1 and b=1;



Vanilla:

 Aggregate  (cost=11035.86..11035.87 rows=1 width=8) (actual 
time=22.746..22.746 rows=1 loops=1)
   ->  Bitmap Heap Scan on foo  (cost=291.35..11001.97 rows=13553 
width=0) (actual time=9.055..18.711 rows=5 loops=1)

 Recheck Cond: ((a = 1) AND (b = 1))
 Heap Blocks: exact=222
 ->  Bitmap Index Scan on foo_a_b_idx  (cost=0.00..287.96 
rows=13553 width=0) (actual time=9.005..9.005 rows=5 loops=1)

   Index Cond: ((a = 1) AND (b = 1))


--

Vanilla + extra statistic (create statistics ab on a,b from foo):

 Aggregate  (cost=12693.35..12693.36 rows=1 width=8) (actual 
time=22.747..22.748 rows=1 loops=1)
   ->  Bitmap Heap Scan on foo  (cost=1490.08..12518.31 rows=70015 
width=0) (actual time=9.399..18.636 rows=5 loops=1)

 Recheck Cond: ((a = 1) AND (b = 1))
 Heap Blocks: exact=222
 ->  Bitmap Index Scan on foo_a_b_idx (cost=0.00..1472.58 
rows=70015 width=0) (actual time=9.341..9.341 rows=5 loops=1)

   Index Cond: ((a = 1) AND (b = 1))

--

Multicolumn index statistic:

 Aggregate  (cost=11946.35..11946.36 rows=1 width=8) (actual 
time=25.117..25.117 rows=1 loops=1)
   ->  Bitmap Heap Scan on foo  (cost=1080.47..11819.51 rows=50736 
width=0) (actual time=11.568..21.362 rows=5 loops=1)

 Recheck Cond: ((a = 1) AND (b = 1))
 Heap Blocks: exact=222
 ->  Bitmap Index Scan on foo_a_b_idx (cost=0.00..1067.79 
rows=50736 width=0) (actual time=11.300..11.300 rows=5 loops=1)

   Index Cond: ((a = 1) AND (b = 1))




Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-17 Thread Tomas Vondra


On 07/18/2018 12:41 AM, Konstantin Knizhnik wrote:
> ...
> 
> Teodor Sigaev has proposed an alternative approach for calculating
> selectivity of multicolumn join or compound index search.
> Usually a DBA creates compound indexes which can be used by the optimizer
> to build an efficient query execution plan based on index search.
> We can store statistics for compound keys of such indexes (as is done
> now for functional indexes) and use them to estimate the selectivity of
> clauses. I have implemented this idea and will publish a patch in a
> separate thread soon.
> Now I just want to share some results for Tomas's examples.
> 
> So for vanilla Postgres without extra statistics the estimated number of
> rows is about 4 times smaller than the real one.
> 

Can you please post plans with parallelism disabled, and perhaps without
the aggregate? Both make reading the plans unnecessarily difficult ...

>  postgres=# explain analyze SELECT count(*) FROM foo WHERE a=1 and (b=1
> or b=2);
> QUERY PLAN
> 
> 
> 
>  Aggregate  (cost=10964.76..10964.77 rows=1 width=8) (actual
> time=49.251..49.251 rows=1 loops=1)
>    ->  Bitmap Heap Scan on foo  (cost=513.60..10906.48 rows=23310
> width=0) (actual time=17.368..39.928 rows=9 loops=1)

ok, 23k vs. 90k

> 
> If we add statistic for a and b  columns:
> 
>  create statistics ab on a,b from foo;
>  analyze foo;
> 
> then the expected result is about 30% larger than the real one: 120k vs 90k:
> 

Eh? The plan however says 9k vs. 30k ...

>    ->  Parallel Bitmap Heap Scan on foo
> (cost=2561.33..13424.24 rows=9063 width=0) (actual time=15.551..26.057
> rows=3 loops=3)

...

> With compound index statistic estimation is almost equal to real value:
> 
>    ->  Bitmap Heap Scan on foo  (cost=1880.20..13411.66 rows=23310
> width=0) (actual time=38.776..61.050 rows=9 loops=1)

Well, this says 23k vs. 90k.


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-17 Thread Konstantin Knizhnik




On 16.07.2018 23:55, Tomas Vondra wrote:


On 07/16/2018 02:54 PM, Dean Rasheed wrote:

On 16 July 2018 at 13:23, Tomas Vondra  wrote:

The top-level clauses allow us to make such deductions, with deeper
clauses it's much more difficult (perhaps impossible). Because for
example with (a=1 AND b=1) there can be just a single match, so if we
find it in MCV we're done. With clauses like ((a=1 OR a=2) AND (b=1 OR
b=2)) it's not that simple, because there may be multiple combinations
and so a match in MCV does not guarantee anything.

Actually, it guarantees a lower bound on the overall selectivity, and
maybe that's the best that we can do in the absence of any other
stats.


Hmmm, is that actually true? Let's consider a simple example, with two
columns, each with just 2 values, and a "perfect" MCV list:

 a | b | frequency
---+---+-----------
 1 | 1 | 0.5
 2 | 2 | 0.5

And let's estimate sel(a=1 & b=2).

OK. In this case, there are no MCV matches, so there is no lower bound (it's 0).

What we could do though is also impose an upper bound, based on the
sum of non-matching MCV frequencies. In this case, the upper bound is
also 0, so we could actually say the resulting selectivity is 0.


Hmmm, it's not very clear to me how we would decide which of these cases
applies, because in most cases we don't have an MCV list covering 100% of
the rows.

Anyway, I've been thinking about how to modify the code to work the way
you proposed (in a way sufficient for a PoC). But after struggling with
it for a while it occurred to me it might be useful to do it on paper
first, to verify how it would work on your examples.

So let's use this data

create table foo(a int, b int);
insert into foo select 1,1 from generate_series(1,5);
insert into foo select 1,2 from generate_series(1,4);
insert into foo select 1,x/10 from generate_series(30,25) g(x);
insert into foo select 2,1 from generate_series(1,3);
insert into foo select 2,2 from generate_series(1,2);
insert into foo select 2,x/10 from generate_series(30,50) g(x);
insert into foo select 3,1 from generate_series(1,1);
insert into foo select 3,2 from generate_series(1,5000);
insert into foo select 3,x from generate_series(3,60) g(x);
insert into foo select x,x/10 from generate_series(4,75) g(x);

Assuming we have perfectly exact statistics, we have these MCV lists
(both univariate and multivariate):

   select a, count(*), round(count(*) /2254937.0, 4) AS frequency
 from foo group by a order by 2 desc;

   a   | count  | frequency
-------+--------+-----------
     3 | 614998 |    0.2727
     2 | 549971 |    0.2439
     1 | 339971 |    0.1508
  1014 |      1 |    0.0000
 57220 |      1 |    0.0000
 ...

   select b, count(*), round(count(*) /2254937.0, 4) AS frequency
 from foo group by b order by 2 desc;

  b  | count | frequency
-----+-------+-----------
   1 | 90010 |    0.0399
   2 | 65010 |    0.0288
   3 |    31 |    0.0000
   7 |    31 |    0.0000
 ...

   select a, b, count(*), round(count(*) /2254937.0, 4) AS frequency
 from foo group by a, b order by 3 desc;

  a  |   b   | count | frequency
-----+-------+-------+-----------
   1 |     1 |     5 |    0.0222
   1 |     2 |     4 |    0.0177
   2 |     1 |     3 |    0.0133
   2 |     2 |     2 |    0.0089
   3 |     1 |     1 |    0.0044
   3 |     2 |  5000 |    0.0022
   2 | 12445 |    10 |    0.0000
 ...

And let's estimate the two queries with complex clauses, where the
multivariate stats gave 2x overestimates.

SELECT * FROM foo WHERE a=1 and (b=1 or b=2);
-- actual 9, univariate: 24253, multivariate: 181091

univariate:

  sel(a=1) = 0.1508
  sel(b=1) = 0.0399
  sel(b=2) = 0.0288
  sel(b=1 or b=2) = 0.0673

multivariate:
  sel(a=1 and (b=1 or b=2)) = 0.0399 (0.0770)

The second multivariate estimate comes from assuming only the first 5
items make it to the multivariate MCV list (covering 6.87% of the data)
and extrapolating the selectivity to the non-MCV data too.

(Notice it's about 2x the actual selectivity, so this extrapolation due
to not realizing the MCV already contains all the matches is pretty much
responsible for the whole over-estimate).

So, how would the proposed algorithm work? Let's start with "a=1":

sel(a=1) = 0.1508

I don't see much point in applying the two "b" clauses independently (or
how it would be done, as it's effectively a single clause):

sel(b=1 or b=2) = 0.0673

And we get

total_sel = sel(a=1) * sel(b=1 or b=2) = 0.0101

From the multivariate MCV we get

mcv_sel = 0.0399

And finally

total_sel = Max(total_sel, mcv_sel) = 0.0399

Which is great, but I don't see how that actually helped anything? We
still only have the estimate for the ~7% covered by the MCV list, and
not the remaining non-MCV part.

I could do the same thing for the second query, but the problem there is
actually exactly the same - extrapolation from MCV to non-MCV part
roughly doubles the estimate.

So unless I'm applying your algorithm incorrectly, this does not seem
like a very promising direction :-(

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-17 Thread Tomas Vondra

On 07/17/2018 11:09 AM, Dean Rasheed wrote:

On 16 July 2018 at 21:55, Tomas Vondra  wrote:



...
>>

So, how would the proposed algorithm work? Let's start with "a=1":

sel(a=1) = 0.1508

I don't see much point in applying the two "b" clauses independently (or
how it would be done, as it's effectively a single clause):

sel(b=1 or b=2) = 0.0673

And we get

total_sel = sel(a=1) * sel(b=1 or b=2) = 0.0101

From the multivariate MCV we get

mcv_sel = 0.0399

And finally

total_sel = Max(total_sel, mcv_sel) = 0.0399

Which is great, but I don't see how that actually helped anything? We
still only have the estimate for the ~7% covered by the MCV list, and
not the remaining non-MCV part.



Right. If these are the only stats available, and there are just 2
top-level clauses like this, it either returns the MCV estimate, or
the old univariate estimate (whichever is larger). It avoids
over-estimating, but will almost certainly under-estimate when the MCV
matches are incomplete.



Yeah :-(

In my experience under-estimates tend to have much worse consequences 
(say a nested loop chosen by under-estimate vs. hash join chosen by 
over-estimate). This certainly influenced some of the choices I've made 
in this patch (extrapolation to non-MCV part is an example of that), but 
I agree it's not a particularly scientific approach and I'd very much want 
something better.





I could do the same thing for the second query, but the problem there is
actually exactly the same - extrapolation from MCV to non-MCV part
roughly doubles the estimate.

So unless I'm applying your algorithm incorrectly, this does not seem
like a very promising direction :-(



You could be right. Actually it's the order dependence with more than
2 top-level clauses that bothers me most about this algorithm. It's
also not entirely obvious how to include histogram stats in this
scheme.



I think for inequalities that's fairly simple - histograms work pretty 
well for that, and I have a hunch that replacing the 0.5 estimate for 
partially-matching buckets with convert_to_scalar-like logic and the 
geometric mean (as you proposed) will work well enough.
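
Something like this, perhaps (a sketch with assumed names;
bucket_fraction() is a hypothetical helper):

    /* combine the per-dimension fractions of a partially matched
     * bucket using the geometric mean, instead of a flat 0.5 */
    double  frac = 1.0;
    int     dim;

    for (dim = 0; dim < ndims; dim++)
        frac *= bucket_fraction(bucket, dim, clause);

    frac = pow(frac, 1.0 / ndims);      /* pow() from math.h */

    bucket_sel = bucket->frequency * frac;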


For equalities it's going to be hard. The only thing I can think of at 
the moment is checking if there are any matching buckets at all, and 
using that to decide whether to extrapolate the MCV selectivity to the 
non-MCV part or not (or perhaps to what part of the non-MCV part).



A different approach that I have been thinking about is, in
mcv_update_match_bitmap(), to attempt to work out the probability of all
the clauses matching and it not being an MCV value. For example, given
a clause like a=1 whose univariate estimate we know (0.1508 in the
above example), knowing that the MCV values account for 0.0222+0.0177
of that, the remainder must be from non-MCV values. So we could say
that the probability that a=1 and it not being an MCV is
0.1508-0.0222-0.0177 = 0.1109. So then the question is: could we
combine these non-MCV probabilities to give an estimate of how many
non-MCV values we need to worry about? I've not fully thought that
through, but it might be useful.


Could we use it to derive some upper boundaries on the non-MCV part?


The problem is, it's still likely to
over-estimate when the MCV matches fully cover all possibilities, and
I still don't see a way to reliably detect that case.



I guess we can use a histogram to limit the over-estimates, as explained 
above. It may not be 100% reliable as it depends on the sample and how 
exactly the buckets are formed, but it might help.


But are these over-estimates really such a serious issue? When you
already get matches in an MCV, the number of matching rows is going to
be pretty significant. If you over-estimate 2x, so what? IMHO that's
still a pretty accurate estimate.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-17 Thread Dean Rasheed
On 16 July 2018 at 21:55, Tomas Vondra  wrote:
>
>
> On 07/16/2018 02:54 PM, Dean Rasheed wrote:
>> On 16 July 2018 at 13:23, Tomas Vondra  wrote:
> The top-level clauses allow us to make such deductions, with deeper
> clauses it's much more difficult (perhaps impossible). Because for
> example with (a=1 AND b=1) there can be just a single match, so if we
> find it in MCV we're done. With clauses like ((a=1 OR a=2) AND (b=1 OR
> b=2)) it's not that simple, because there may be multiple combinations
> and so a match in MCV does not guarantee anything.

 Actually, it guarantees a lower bound on the overall selectivity, and
 maybe that's the best that we can do in the absence of any other
 stats.

>>> Hmmm, is that actually true? Let's consider a simple example, with two
>>> columns, each with just 2 values, and a "perfect" MCV list:
>>>
>>> a | b | frequency
>>>---
>>> 1 | 1 | 0.5
>>> 2 | 2 | 0.5
>>>
>>> And let's estimate sel(a=1 & b=2).
>>
>> OK. In this case, there are no MCV matches, so there is no lower bound (it's 
>> 0).
>>
>> What we could do though is also impose an upper bound, based on the
>> sum of non-matching MCV frequencies. In this case, the upper bound is
>> also 0, so we could actually say the resulting selectivity is 0.
>>
>
> Hmmm, it's not very clear to me how we would decide which of these cases
> applies, because in most cases we don't have MCV covering 100% of rows.
>
> Anyways, I've been thinking about how to modify the code to work the way
> you proposed (in a way sufficient for a PoC). But after struggling with
> it for a while it occurred to me it might be useful to do it on paper
> first, to verify how it would work on your examples.
>
> So let's use this data
>
> create table foo(a int, b int);
> insert into foo select 1,1 from generate_series(1,50000);
> insert into foo select 1,2 from generate_series(1,40000);
> insert into foo select 1,x/10 from generate_series(30,250000) g(x);
> insert into foo select 2,1 from generate_series(1,30000);
> insert into foo select 2,2 from generate_series(1,20000);
> insert into foo select 2,x/10 from generate_series(30,500000) g(x);
> insert into foo select 3,1 from generate_series(1,10000);
> insert into foo select 3,2 from generate_series(1,5000);
> insert into foo select 3,x from generate_series(3,600000) g(x);
> insert into foo select x,x/10 from generate_series(4,750000) g(x);
>
> Assuming we have perfectly exact statistics, we have these MCV lists
> (both univariate and multivariate):
>
>   select a, count(*), round(count(*) /2254937.0, 4) AS frequency
> from foo group by a order by 2 desc;
>
>  a| count  | frequency
>   ++---
> 3 | 614998 |0.2727
> 2 | 549971 |0.2439
> 1 | 339971 |0.1508
>  1014 |  1 |0.0000
> 57220 |  1 |0.0000
> ...
>
>   select b, count(*), round(count(*) /2254937.0, 4) AS frequency
> from foo group by b order by 2 desc;
>
>  b| count | frequency
>   +---+---
> 1 | 90010 |0.0399
> 2 | 65010 |0.0288
> 3 |31 |0.0000
> 7 |31 |0.0000
>...
>
>   select a, b, count(*), round(count(*) /2254937.0, 4) AS frequency
> from foo group by a, b order by 3 desc;
>
>  a|   b| count | frequency
>   ++---+---
> 1 |  1 | 50000 |0.0222
> 1 |  2 | 40000 |0.0177
> 2 |  1 | 30000 |0.0133
> 2 |  2 | 20000 |0.0089
> 3 |  1 | 10000 |0.0044
> 3 |  2 |  5000 |0.0022
> 2 |  12445 |10 |0.0000
> ...
>
> And let's estimate the two queries with complex clauses, where the
> multivariate stats gave 2x overestimates.
>
> SELECT * FROM foo WHERE a=1 and (b=1 or b=2);
> -- actual 90000, univariate: 24253, multivariate: 181091
>
>univariate:
>
>  sel(a=1) = 0.1508
>  sel(b=1) = 0.0399
>  sel(b=2) = 0.0288
>  sel(b=1 or b=2) = 0.0673
>
>multivariate:
>  sel(a=1 and (b=1 or b=2)) = 0.0399 (0.0770)
>
> The second multivariate estimate comes from assuming only the first 5
> items make it to the multivariate MCV list (covering 6.87% of the data)
> and extrapolating the selectivity to the non-MCV data too.
>
> (Notice it's about 2x the actual selectivity, so this extrapolation due
> to not realizing the MCV already contains all the matches is pretty much
> responsible for the whole over-estimate).
>

Agreed. I think the actual MCV stats I got included the first 6
entries, but yes, that's only around 7% of the data.


> So, how would the proposed algorithm work? Let's start with "a=1":
>
>sel(a=1) = 0.1508
>
> I don't see much point in applying the two "b" clauses independently (or
> how would it be done, as it's effectively a single clause):
>
>sel(b=1 or b=2) = 0.0673
>
> And we get
>
>    total_sel = sel(a=1) * sel(b=1 or b=2) = 0.0101

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-16 Thread Tomas Vondra



On 07/16/2018 02:54 PM, Dean Rasheed wrote:
> On 16 July 2018 at 13:23, Tomas Vondra  wrote:
 The top-level clauses allow us to make such deductions, with deeper
 clauses it's much more difficult (perhaps impossible). Because for
 example with (a=1 AND b=1) there can be just a single match, so if we
 find it in MCV we're done. With clauses like ((a=1 OR a=2) AND (b=1 OR
 b=2)) it's not that simple, because there may be multiple combinations
 and so a match in MCV does not guarantee anything.
>>>
>>> Actually, it guarantees a lower bound on the overall selectivity, and
>>> maybe that's the best that we can do in the absence of any other
>>> stats.
>>>
>> Hmmm, is that actually true? Let's consider a simple example, with two
>> columns, each with just 2 values, and a "perfect" MCV list:
>>
>> a | b | frequency
>>---
>> 1 | 1 | 0.5
>> 2 | 2 | 0.5
>>
>> And let's estimate sel(a=1 & b=2).
> 
> OK. In this case, there are no MCV matches, so there is no lower bound (it's 
> 0).
> 
> What we could do though is also impose an upper bound, based on the
> sum of non-matching MCV frequencies. In this case, the upper bound is
> also 0, so we could actually say the resulting selectivity is 0.
> 

Hmmm, it's not very clear to me how we would decide which of these cases
applies, because in most cases we don't have MCV covering 100% of rows.

Anyways, I've been thinking about how to modify the code to work the way
you proposed (in a way sufficient for a PoC). But after struggling with
it for a while it occurred to me it might be useful to do it on paper
first, to verify how it would work on your examples.

So let's use this data

create table foo(a int, b int);
insert into foo select 1,1 from generate_series(1,50000);
insert into foo select 1,2 from generate_series(1,40000);
insert into foo select 1,x/10 from generate_series(30,250000) g(x);
insert into foo select 2,1 from generate_series(1,30000);
insert into foo select 2,2 from generate_series(1,20000);
insert into foo select 2,x/10 from generate_series(30,500000) g(x);
insert into foo select 3,1 from generate_series(1,10000);
insert into foo select 3,2 from generate_series(1,5000);
insert into foo select 3,x from generate_series(3,600000) g(x);
insert into foo select x,x/10 from generate_series(4,750000) g(x);

Assuming we have perfectly exact statistics, we have these MCV lists
(both univariate and multivariate):

  select a, count(*), round(count(*) /2254937.0, 4) AS frequency
from foo group by a order by 2 desc;

 a| count  | frequency
  ++---
3 | 614998 |0.2727
2 | 549971 |0.2439
1 | 339971 |0.1508
 1014 |  1 |0.0000
57220 |  1 |0.0000
...

  select b, count(*), round(count(*) /2254937.0, 4) AS frequency
from foo group by b order by 2 desc;

 b| count | frequency
  +---+---
1 | 90010 |0.0399
2 | 65010 |0.0288
3 |31 |0.0000
7 |31 |0.0000
   ...

  select a, b, count(*), round(count(*) /2254937.0, 4) AS frequency
from foo group by a, b order by 3 desc;

 a|   b| count | frequency
  ++---+---
 1 |  1 | 50000 |0.0222
 1 |  2 | 40000 |0.0177
 2 |  1 | 30000 |0.0133
 2 |  2 | 20000 |0.0089
 3 |  1 | 10000 |0.0044
 3 |  2 |  5000 |0.0022
 2 |  12445 |10 |0.0000
...

And let's estimate the two queries with complex clauses, where the
multivariate stats gave 2x overestimates.

SELECT * FROM foo WHERE a=1 and (b=1 or b=2);
-- actual 90000, univariate: 24253, multivariate: 181091

   univariate:

 sel(a=1) = 0.1508
 sel(b=1) = 0.0399
 sel(b=2) = 0.0288
 sel(b=1 or b=2) = 0.0673

   multivariate:
 sel(a=1 and (b=1 or b=2)) = 0.0399 (0.0770)

The second multivariate estimate comes from assuming only the first 5
items make it to the multivariate MCV list (covering 6.87% of the data)
and extrapolating the selectivity to the non-MCV data too.

(Notice it's about 2x the actual selectivity, so this extrapolation due
to not realizing the MCV already contains all the matches is pretty much
responsible for the whole over-estimate).

So, how would the proposed algorithm work? Let's start with "a=1":

   sel(a=1) = 0.1508

I don't see much point in applying the two "b" clauses independently (or
how would it be done, as it's effectively a single clause):

   sel(b=1 or b=2) = 0.0673

And we get

   total_sel = sel(a=1) * sel(b=1 or b=2) = 0.0101

From the multivariate MCV we get

   mcv_sel = 0.0399

And finally

   total_sel = Max(total_sel, mcv_sel) = 0.0399

Which is great, but I don't see how that actually helped anything? We
still only have the estimate for the ~7% covered by the MCV list, and
not the remaining non-MCV part.

I could do the same thing for the second query, but the problem there is
actually exactly the same - extrapolation from MCV to non-MCV part
roughly doubles the estimate.

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-16 Thread Dean Rasheed
On 16 July 2018 at 13:23, Tomas Vondra  wrote:
>>> The top-level clauses allow us to make such deductions, with deeper
>>> clauses it's much more difficult (perhaps impossible). Because for
>>> example with (a=1 AND b=1) there can be just a single match, so if we
>>> find it in MCV we're done. With clauses like ((a=1 OR a=2) AND (b=1 OR
>>> b=2)) it's not that simple, because there may be multiple combinations
>>> and so a match in MCV does not guarantee anything.
>>
>> Actually, it guarantees a lower bound on the overall selectivity, and
>> maybe that's the best that we can do in the absence of any other
>> stats.
>>
> Hmmm, is that actually true? Let's consider a simple example, with two
> columns, each with just 2 values, and a "perfect" MCV list:
>
> a | b | frequency
>---
> 1 | 1 | 0.5
> 2 | 2 | 0.5
>
> And let's estimate sel(a=1 & b=2).

OK. In this case, there are no MCV matches, so there is no lower bound (it's 0).

What we could do though is also impose an upper bound, based on the
sum of non-matching MCV frequencies. In this case, the upper bound is
also 0, so we could actually say the resulting selectivity is 0.
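
In code, the two bounds could look roughly like this self-contained sketch
— the MCV item struct and the hard-coded clauses (a=1 AND b=2) are
invented for the example:

#include <stdbool.h>
#include <stdio.h>

typedef struct
{
    int     a;
    int     b;
    double  freq;
} McvItem;

int
main(void)
{
    /* the "perfect" two-item MCV list from the example */
    McvItem mcv[] = {{1, 1, 0.5}, {2, 2, 0.5}};
    int     nitems = 2;
    double  match_sum = 0.0;
    double  nonmatch_sum = 0.0;

    /* clauses: a=1 AND b=2 */
    for (int i = 0; i < nitems; i++)
    {
        bool matches = (mcv[i].a == 1) && (mcv[i].b == 2);

        if (matches)
            match_sum += mcv[i].freq;
        else
            nonmatch_sum += mcv[i].freq;
    }

    double lower = match_sum;           /* matches we know about           */
    double upper = 1.0 - nonmatch_sum;  /* everything not explicitly ruled out */

    printf("selectivity in [%g, %g]\n", lower, upper);  /* [0, 0] here */
    return 0;
}

With this list the lower and upper bounds coincide at 0, which is exactly
the "we could actually say the resulting selectivity is 0" case.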

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-16 Thread Tomas Vondra




On 07/15/2018 11:36 AM, Dean Rasheed wrote:

On 13 July 2018 at 18:27, Tomas Vondra  wrote:

I'm not so sure. The issue is that a lot of the MCV deductions depend
on whether we can answer questions like "Is there a single match?" or
"If we got a match in MCV, do we need to look at the non-MCV part?" This
is not very different from the single-column estimates, except of course
here we need to look at multiple columns.

The top-level clauses allow us to make such deductions, with deeper
clauses it's much more difficult (perhaps impossible). Because for
example with (a=1 AND b=1) there can be just a single match, so if we
find it in MCV we're done. With clauses like ((a=1 OR a=2) AND (b=1 OR
b=2)) it's not that simple, because there may be multiple combinations
and so a match in MCV does not guarantee anything.


Actually, it guarantees a lower bound on the overall selectivity, and
maybe that's the best that we can do in the absence of any other
stats.



Hmmm, is that actually true? Let's consider a simple example, with two 
columns, each with just 2 values, and a "perfect" MCV list:


a | b | frequency
   ---
1 | 1 | 0.5
2 | 2 | 0.5

And let's estimate sel(a=1 & b=2). Your proposed algorithm does this:

1) sel(a=1) = 0.5
2) sel(b=2) = 0.5
3) total_sel = sel(a=1) * sel(b=2) = 0.25
4) mcv_sel = 0.0
5) total_sel = Max(total_sel, mcv_sel) = 0.25

How is that a lower bound? Or what is it lower than?


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-16 Thread Tomas Vondra




On 07/16/2018 12:16 AM, Tomas Vondra wrote:


On 07/15/2018 04:43 PM, Dean Rasheed wrote:

On 15 July 2018 at 14:29, Tomas Vondra  wrote:

It's quite unclear to me how this algorithm could reliably end up with
hist_sel=0 (in cases where we already don't end up with that). I mean,
if a bucket matches the conditions, then the only way to eliminate it is by
deducing that MCV already contains all the matches - and that's rather
difficult for complex clauses ...


Ah, I didn't realise that you were using histograms for equality
clauses as well. I had assumed that they would only use the MCV stats,
as in the univariate case. Using histograms for equality seems
problematic -- if bucket_contains_value() returns STATS_MATCH_PARTIAL,
as things stand that would end up with an estimate of half the
bucket's frequency, which seems excessive.


Yeah, I think you're right - this is likely to produce over-estimates
and needs rethinking. With top-level equality clauses it should be
fairly trivial to use an approach similar to the univariate case, i.e.
computing ndistinct and using

 (1 - mcv_totalsel) / ndistinct

If there are ndistinct coefficients this might be a pretty good estimate
of the non-MCV part, I think. But it only works for top-level
equalities, not for complex clauses as in your examples.
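
Roughly, the per-group non-MCV estimate would then look like this sketch
(all numbers made up):

#include <stdio.h>

int
main(void)
{
    double mcv_totalsel = 0.0687;  /* summed frequency of the MCV items (made up) */
    double ndistinct    = 7595.0;  /* distinct (a,b) combinations (made up)       */

    /* an equality combination not found in the MCV list: spread the
     * remaining frequency mass uniformly over the distinct groups */
    double other_sel = (1.0 - mcv_totalsel) / ndistinct;

    printf("per-group non-MCV selectivity: %g\n", other_sel);
    return 0;
}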



On further thought, it's a bit more complicated, actually. Firstly, we 
already do that when there's no histogram (as in your example), and 
clearly it does not help. I initially thought it was a mistake to use the 
histogram in this case, but I can think of cases where it helps a lot.


1) when the equality clauses match nothing

In this case we may not find any buckets possibly matching the 
combination of values, producing a selectivity estimate of 0.0, while 
using 1/ndistinct would give us something else.


2) when there are equality and inequality clauses

Similarly to the previous case, the equality clauses are useful in 
eliminating some of the buckets.


Now, I agree estimating equality clauses using a histogram is tricky, so 
perhaps what we should do is use them as "conditions" to eliminate 
histogram buckets, but use ndistinct to estimate the selectivity. That 
is something like this:


  P(a=1 & b=1 & c<10 & d>=100)
   = P(a=1 & b=1) * P(c<10 & d>=100 | a=1 & b=1)
   = 1/ndistinct(a,b) * P(c<10 & d>=100 | a=1 & b=1)

where the second part is estimated using the histogram.

Of course, this still only works for the top-level equality clauses :-(
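
As a rough sketch of that decomposition, with the histogram part stubbed
out and all names and numbers invented:

#include <stdio.h>

/* stub: fraction of rows satisfying c<10 AND d>=100 within the histogram
 * buckets compatible with a=1 AND b=1 -- in reality this would be
 * computed from the multivariate histogram */
static double
hist_conditional_sel(void)
{
    return 0.25;    /* made-up value for the example */
}

int
main(void)
{
    double ndistinct_ab = 1000.0;   /* ndistinct on (a,b), made up */

    /* P(a=1 & b=1 & c<10 & d>=100)
     *   = P(a=1 & b=1) * P(c<10 & d>=100 | a=1 & b=1) */
    double sel = (1.0 / ndistinct_ab) * hist_conditional_sel();

    printf("estimated selectivity: %g\n", sel);
    return 0;
}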

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-15 Thread Tomas Vondra


On 07/15/2018 04:43 PM, Dean Rasheed wrote:
> On 15 July 2018 at 14:29, Tomas Vondra  wrote:
>> It's quite unclear to me how this algorithm could reliably end up with
>> hist_sel=0 (in cases where we already don't end up with that). I mean,
>> if a bucket matches the conditions, then the only way to eliminate it is by
>> deducing that MCV already contains all the matches - and that's rather
>> difficult for complex clauses ...
> 
> Ah, I didn't realise that you were using histograms for equality
> clauses as well. I had assumed that they would only use the MCV stats,
> as in the univariate case. Using histograms for equality seems
> problematic -- if bucket_contains_value() returns STATS_MATCH_PARTIAL,
> as things stand that would end up with an estimate of half the
> bucket's frequency, which seems excessive.

Yeah, I think you're right - this is likely to produce over-estimates
and needs rethinking. With top-level equality clauses it should be
fairly trivial to use an approach similar to the univariate case, i.e.
computing ndistinct and using

(1 - mcv_totalsel) / ndistinct

If there are ndistinct coefficients this might be a pretty good estimate
of the non-MCV part, I think. But it only works for top-level
equalities, not for complex clauses as in your examples.

While looking at the statext_clauselist_selectivity code I think I see
two more bugs:

1) the histogram_clauselist_selectivity() should probably use
'stat_clauses' and not 'clauses'

2) the early returns fail to estimate the neqclauses

It's a bit too late here but I'll look at it tomorrow.

> Also, if I'm reading it correctly, the code for histograms with
> not-equals will return STATS_MATCH_PARTIAL for all but one of the
> buckets, which isn't great either.
> 

Ummm, why?

> 
>> I don't know, really. I'll have to try hacking on this a bit I guess.
>> But there's one obvious question - in what order should we add the
>> clauses? Does it matter at all, or what is the optimal order? We don't
>> need to worry about it now, because we simply consider all clauses at
>> once, but I guess the proposed algorithm is more sensitive to this.
> 
> I don't know. That's definitely one of the least satisfactory parts of
> that idea.
> 
> The alternative seems to be to improve the match tracking in your
> current algorithm so that it keeps more detailed information about the
> kinds of matches seen at each level, and combines them appropriately.
> Maybe that's possible, but I'm struggling to see exactly how. Counting
> equality clauses seen on each column might be a start. But it would
> also need to track inequalities, with min/max values or fractions of
> the non-MCV total, or some such thing.
> 

Yeah, I agree, I'm not happy with this part either. But I'm grateful
there's someone else thinking about the issues and proposing alternative
approaches. Thanks for doing that.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-15 Thread Dean Rasheed
On 15 July 2018 at 14:29, Tomas Vondra  wrote:
> It's quite unclear to me how this algorithm could reliably end up with
> hist_sel=0 (in cases where we already don't end up with that). I mean,
> if a bucket matches the conditions, then the only way to eliminate it is by
> deducing that MCV already contains all the matches - and that's rather
> difficult for complex clauses ...

Ah, I didn't realise that you were using histograms for equality
clauses as well. I had assumed that they would only use the MCV stats,
as in the univariate case. Using histograms for equality seems
problematic -- if bucket_contains_value() returns STATS_MATCH_PARTIAL,
as things stand that would end up with an estimate of half the
bucket's frequency, which seems excessive. Also, if I'm reading it
correctly, the code for histograms with not-equals will return
STATS_MATCH_PARTIAL for all but one of the buckets, which isn't great
either.


> I don't know, really. I'll have to try hacking on this a bit I guess.
> But there's one obvious question - in what order should we add the
> clauses? Does it matter at all, or what is the optimal order? We don't
> need to worry about it now, because we simply consider all clauses at
> once, but I guess the proposed algorithm is more sensitive to this.

I don't know. That's definitely one of the least satisfactory parts of
that idea.

The alternative seems to be to improve the match tracking in your
current algorithm so that it keeps more detailed information about the
kinds of matches seen at each level, and combines them appropriately.
Maybe that's possible, but I'm struggling to see exactly how. Counting
equality clauses seen on each column might be a start. But it would
also need to track inequalities, with min/max values or fractions of
the non-MCV total, or some such thing.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-15 Thread Tomas Vondra



On 07/15/2018 11:36 AM, Dean Rasheed wrote:
> ...
>
> What I'm considering is an algorithm where we simultaneously compute 3 things:
> 
> simple_sel - The result we would get without multivariate stats (*)
> mcv_sel - The multivariate MCV result
> hist_sel - The multivariate histogram result
> 
> (*) except that at each stage where we add a new clause to the
> simple_sel value, we improve upon that estimate by factoring in a
> lower bound from the multivariate stats so far, so that even if the
> multivariate stats fail to generate anything at the end, we've managed
> to account for some of the non-independence of the columns.
> 
> If the MCV matches represented all the matches, then at the end we
> would have simple_sel = mcv_sel, and hist_sel = 0, and we'd be done.
> 

It's quite unclear to me how this algorithm could reliably end up with
hist_sel=0 (in cases where we already don't end up with that). I mean,
if a bucket matches the conditions, then the only way to eliminate it is by
deducing that MCV already contains all the matches - and that's rather
difficult for complex clauses ...

> Otherwise, we'd have simple_sel >= mcv_sel, and a possibly non-zero
> hist_sel, but if we had managed to factor in both mcv_sel and hist_sel
> to simple_sel at each stage as we went along, then simple_sel is the
> best overall estimate we can return.
> 

Hmm. I may not be understanding the algorithm yet, but I find it hard to
believe applying the stats incrementally is going to produce reliably
better estimates than looking at all clauses at once. I understand
deducing upper/lower boundaries is useful, but I wonder if we could do
that somehow with the current algorithm.

> Perhaps this is not so very different from what you're currently
> doing, except that with this approach we might also end up with
> mcv_sel = 0 and hist_sel = 0, but still have an improved simple_sel
> estimate that had accounted for some of the multivariate stats available
> along the way.
> 

I don't know, really. I'll have to try hacking on this a bit I guess.
But there's one obvious question - in what order should we add the
clauses? Does it matter at all, or what is the optimal order? We don't
need to worry about it now, because we simply consider all clauses at
once, but I guess the proposed algorithm is more sensitive to this.

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-15 Thread Dean Rasheed
On 13 July 2018 at 18:27, Tomas Vondra  wrote:
> I'm not so sure. The issue is that a lot of the MCV deductions depend
> on whether we can answer questions like "Is there a single match?" or
> "If we got a match in MCV, do we need to look at the non-MCV part?" This
> is not very different from the single-column estimates, except of course
> here we need to look at multiple columns.
>
> The top-level clauses allow us to make such deductions, with deeper
> clauses it's much more difficult (perhaps impossible). Because for
> example with (a=1 AND b=1) there can be just a single match, so if we
> find it in MCV we're done. With clauses like ((a=1 OR a=2) AND (b=1 OR
> b=2)) it's not that simple, because there may be multiple combinations
> and so a match in MCV does not guarantee anything.

Actually, it guarantees a lower bound on the overall selectivity, and
maybe that's the best that we can do in the absence of any other
stats.

I did wonder if maybe we could do better by tracking allowed value
counts. E.g., with a clause like ((a=1 OR a=2) AND (b=1 OR b=2)) it
would be fairly simple to see that there are 2 allowed values of a,
and 2 allowed values of b, so 4 allowed values overall. If we had,
say, 3 MCV matches, we'd then know to factor in something extra for
the 1 non-MCV match. I'm not sure what to do with non-equality clauses
though.
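
A sketch of that counting idea for ((a=1 OR a=2) AND (b=1 OR b=2)), with
every number invented:

#include <stdio.h>

int
main(void)
{
    int     allowed_a = 2;          /* a=1 OR a=2 */
    int     allowed_b = 2;          /* b=1 OR b=2 */
    int     allowed_combos = allowed_a * allowed_b;     /* at most 4 */

    int     mcv_matches = 3;        /* combinations found in the MCV list */
    double  mcv_match_sel = 0.06;   /* their summed frequency (made up)   */
    double  other_sel = 0.0001;     /* per-combination non-MCV estimate (made up) */

    /* factor in something extra for the combinations not in the MCV */
    double sel = mcv_match_sel + (allowed_combos - mcv_matches) * other_sel;

    printf("estimate: %g\n", sel);
    return 0;
}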


>> I think perhaps it might be better not to attempt to combine the
>> *overall* selectivity returned by mcv_clauselist_selectivity() with
>> that returned by clauselist_selectivity(), but rather merge the logic
>> of these two functions together into a single traversal of the
>> clause-tree. That way, the various individual selectivities can be
>> combined on a clause-by-clause basis to give the best running total
>> based on the available information. Even when the multivariate stats
>> are incomplete, they may still provide a useful lower bound on the
>> selectivity.
>
> I don't follow. The example you presented above showed multivariate
> stats producing over-estimates, so how would it be helpful to use that
> as lower boundary for anything?

No, the multivariate MCV stats were producing good estimates, even for
the complex clauses, because they were all common values in my
example. The problem was that the good MCV estimate was then being
ruined by adding on extra factors because at the top-level it didn't
appear to be a full match.


>> If/when all MCV columns have been matched exactly, that
>> lower bound might turn into the appropriate overall result, if there
>> is a matching MCV entry.
>
> Isn't the problem illustrated by the examples that we don't know if the
> MCV matches represent all matches, or if there may be matches in the
> histogram?

The example illustrated a case where the MCV matches represented all
the matches, but we failed to recognise that. Now we could fix that to
reliably detect cases where the MCV matches represented all the
matches, but it's still not entirely obvious what to do when they
don't.

What I'm considering is an algorithm where we simultaneously compute 3 things:

simple_sel - The result we would get without multivariate stats (*)
mcv_sel - The multivariate MCV result
hist_sel - The multivariate histogram result

(*) except that at each stage where we add a new clause to the
simple_sel value, we improve upon that estimate by factoring in a
lower bound from the multivariate stats so far, so that even if the
multivariate stats fail to generate anything at the end, we've managed
to account for some of the non-independence of the columns.

If the MCV matches represented all the matches, then at the end we
would have simple_sel = mcv_sel, and hist_sel = 0, and we'd be done.

Otherwise, we'd have simple_sel >= mcv_sel, and a possibly non-zero
hist_sel, but if we had managed to factor in both mcv_sel and hist_sel
to simple_sel at each stage as we went along, then simple_sel is the
best overall estimate we can return.

Perhaps this is not so very different from what you're currently
doing, except that with this approach we might also end up with
mcv_sel = 0 and hist_sel = 0, but still have an improved simple_sel
estimate that had accounted for some of the multivariate stats available
along the way.

Regards,
Dean



Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-13 Thread Tomas Vondra



On 07/13/2018 01:19 PM, Dean Rasheed wrote:
> On 24 June 2018 at 20:45, Tomas Vondra  wrote:
>> Attached is a rebased version of this patch series, mostly just fixing
>> the breakage caused by reworked format of initial catalog data.
>>
>> Aside from that, the MCV building now adopts the logic introduced by
>> commit b5db1d93d2 for single-column MCV lists. The new algorithm seems
>> pretty good and I don't see why multi-column MCV lists should use
>> something special.
> 
> Agreed.
> 
> 
>> I'm sure there are plenty of open questions to discuss, particularly
>> stuff related to combining the various types of statistics to the final
>> estimate (a lot of that was already improved based on Dean's reviews).
> 
> Yes, that's definitely one of the trickier parts of this. I don't
> think that the current algorithm is ideal as it stands. In particular,
> the way that it attempts to handle complex combinations of clauses
> doesn't look right. I think mcv_clauselist_selectivity() and
> histogram_clauselist_selectivity() are plausibly correct, but the way
> that the resulting selectivities are combined in
> statext_clauselist_selectivity() doesn't seem right. In particular,
> the use of estimate_equality_groups() to count "nmatches" and
> "fullmatch" only takes into account top-level equality clauses, so it
> will fail to recognise other cases like (a=1 AND (b=1 OR b=2)) which
> might be fully covered by, say, the MCV stats. Consider, for example,
> the following simple test case:
> 
> 
> create table foo(a int, b int);
> insert into foo select 1,1 from generate_series(1,50000);
> insert into foo select 1,2 from generate_series(1,40000);
> insert into foo select 1,x/10 from generate_series(30,250000) g(x);
> insert into foo select 2,1 from generate_series(1,30000);
> insert into foo select 2,2 from generate_series(1,20000);
> insert into foo select 2,x/10 from generate_series(30,500000) g(x);
> insert into foo select 3,1 from generate_series(1,10000);
> insert into foo select 3,2 from generate_series(1,5000);
> insert into foo select 3,x from generate_series(3,600000) g(x);
> insert into foo select x,x/10 from generate_series(4,750000) g(x);
> 
> create statistics foo_mcv_ab (mcv) on a,b from foo;
> analyse foo;
> 
> explain analyse select * from foo where a=1 and b=1;
>  -- Actual rows: 50000, Estimated: 52690 (14149 without MV-stats)
> 
> explain analyse select * from foo where a=1 and b=2;
>  -- Actual rows: 40000, Estimated: 41115 (10534 without MV-stats)
> 
> explain analyse select * from foo where a=1 and (b=1 or b=2);
>  -- Actual rows: 90000, Estimated: 181091 (24253 without MV-stats)
> 
> explain analyse select * from foo where (a=1 or a=2) and (b=1 or b=2);
>  -- Actual rows: 140000, Estimated: 276425 (56716 without MV-stats)
> 
> 
> In the first 2 queries the multivariate MCV stats help a lot and give
> good estimates, but in the last 2 queries the estimates are around
> twice as large as they should be, even though good MCV stats are
> available on those specific values.
> 

I'm not so sure. The issue is that a lot of the MCV deductions depend
on whether we can answer questions like "Is there a single match?" or
"If we got a match in MCV, do we need to look at the non-MCV part?" This
is not very different from the single-column estimates, except of course
here we need to look at multiple columns.

The top-level clauses allow us to make such deductions, with deeper
clauses it's much more difficult (perhaps impossible). Because for
example with (a=1 AND b=1) there can be just a single match, so if we
find it in MCV we're done. With clauses like ((a=1 OR a=2) AND (b=1 OR
b=2)) it's not that simple, because there may be multiple combinations
and so a match in MCV does not guarantee anything.

I don't think there's a way around this. The single-dimensional case
does not have this issue, of course.

> The tricky thing is to work out how to correctly combine the various
> stats that are available. In the above examples, the selectivity
> returned by mcv_clauselist_selectivity() would have been basically
> correct, but since it will not have been identified as a fullmatch and
> some non-equality clauses will have been seen at the top-level (the OR
> clauses), it ends up adding on additional selectivities from
> clauselist_selectivity().
> 
> I think perhaps it might be better not to attempt to combine the
> *overall* selectivity returned by mcv_clauselist_selectivity() with
> that returned by clauselist_selectivity(), but rather merge the logic
> of these two functions together into a single traversal of the
> clause-tree. That way, the various individual selectivities can be
> combined on a clause-by-clause basis to give the best running total
> based on the available information. Even when the multivariate stats
> are incomplete, they may still provide a useful lower bound on the
> selectivity.

I don't follow. The example you presented above showed multivariate
stats producing over-estimates, so how would it be helpful to use that
as lower boundary for anything?

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-07-13 Thread Dean Rasheed
On 24 June 2018 at 20:45, Tomas Vondra  wrote:
> Attached is a rebased version of this patch series, mostly just fixing
> the breakage caused by reworked format of initial catalog data.
>
> Aside from that, the MCV building now adopts the logic introduced by
> commit b5db1d93d2 for single-column MCV lists. The new algorithm seems
> pretty good and I don't see why multi-column MCV lists should use
> something special.

Agreed.


> I'm sure there are plenty of open questions to discuss, particularly
> stuff related to combining the various types of statistics to the final
> estimate (a lot of that was already improved based on Dean's reviews).

Yes, that's definitely one of the trickier parts of this. I don't
think that the current algorithm is ideal as it stands. In particular,
the way that it attempts to handle complex combinations of clauses
doesn't look right. I think mcv_clauselist_selectivity() and
histogram_clauselist_selectivity() are plausibly correct, but the way
that the resulting selectivities are combined in
statext_clauselist_selectivity() doesn't seem right. In particular,
the use of estimate_equality_groups() to count "nmatches" and
"fullmatch" only takes into account top-level equality clauses, so it
will fail to recognise other cases like (a=1 AND (b=1 OR b=2)) which
might be fully covered by, say, the MCV stats. Consider, for example,
the following simple test case:


create table foo(a int, b int);
insert into foo select 1,1 from generate_series(1,50000);
insert into foo select 1,2 from generate_series(1,40000);
insert into foo select 1,x/10 from generate_series(30,250000) g(x);
insert into foo select 2,1 from generate_series(1,30000);
insert into foo select 2,2 from generate_series(1,20000);
insert into foo select 2,x/10 from generate_series(30,500000) g(x);
insert into foo select 3,1 from generate_series(1,10000);
insert into foo select 3,2 from generate_series(1,5000);
insert into foo select 3,x from generate_series(3,600000) g(x);
insert into foo select x,x/10 from generate_series(4,750000) g(x);

create statistics foo_mcv_ab (mcv) on a,b from foo;
analyse foo;

explain analyse select * from foo where a=1 and b=1;
 -- Actual rows: 50000, Estimated: 52690 (14149 without MV-stats)

explain analyse select * from foo where a=1 and b=2;
 -- Actual rows: 40000, Estimated: 41115 (10534 without MV-stats)

explain analyse select * from foo where a=1 and (b=1 or b=2);
 -- Actual rows: 90000, Estimated: 181091 (24253 without MV-stats)

explain analyse select * from foo where (a=1 or a=2) and (b=1 or b=2);
 -- Actual rows: 140000, Estimated: 276425 (56716 without MV-stats)


In the first 2 queries the multivariate MCV stats help a lot and give
good estimates, but in the last 2 queries the estimates are around
twice as large as they should be, even though good MCV stats are
available on those specific values.

The tricky thing is to work out how to correctly combine the various
stats that are available. In the above examples, the selectivity
returned by mcv_clauselist_selectivity() would have been basically
correct, but since it will not have been identified as a fullmatch and
some non-equality clauses will have been seen at the top-level (the OR
clauses), it ends up adding on additional selectivities from
clauselist_selectivity().

I think perhaps it might be better not to attempt to combine the
*overall* selectivity returned by mcv_clauselist_selectivity() with
that returned by clauselist_selectivity(), but rather merge the logic
of these two functions together into a single traversal of the
clause-tree. That way, the various individual selectivities can be
combined on a clause-by-clause basis to give the best running total
based on the available information. Even when the multivariate stats
are incomplete, they may still provide a useful lower bound on the
selectivity. If/when all MCV columns have been matched exactly, that
lower bound might turn into the appropriate overall result, if there
is a matching MCV entry. For example, suppose that there are MCV stats
on 3 columns a,b,c and a WHERE clause like (a=1 AND b=2 AND c=3). You
might process that something like:

* Get sel(a=1) using the normal univariate stats
* Update the multivariate MCV stats match bitmap based on a=1
* Get sel(b=2) using the normal univariate stats
* Compute the total selectivity assuming independence:
total_sel = sel(a=1)*sel(b=2)
* Update the multivariate MCV stats match bitmap based on b=2
* Compute the multivariate MCV selectivity so far:
mcv_sel = Sum of MCV frequencies that match so far
* Use that as a lower bound:
total_sel = Max(total_sel, mcv_sel)
* Get sel(c=3) using the normal univariate stats
* Compute the new total selectivity assuming independence:
total_sel *= sel(c=3)
* Update the multivariate MCV stats match bitmap based on c=3
* Compute the new multivariate MCV selectivity:
mcv_sel = Sum of MCV frequencies that now match
* Use that as a new lower bound:
total_sel = Max(total_sel, mcv_sel)
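
Strung together, the running computation above might look like this
sketch, where the match bitmap and the per-clause univariate estimates
are faked with constants:

#include <stdio.h>

static double
dmax(double x, double y)
{
    return (x > y) ? x : y;
}

int
main(void)
{
    /* per-clause univariate selectivities, all made up */
    double sel_a = 0.02, sel_b = 0.03, sel_c = 0.05;

    /* summed MCV frequencies after applying each clause to the
     * match bitmap, also made up */
    double mcv_after_ab = 0.0009;
    double mcv_after_abc = 0.0008;

    double total_sel;

    total_sel = sel_a * sel_b;                  /* independence so far */
    total_sel = dmax(total_sel, mcv_after_ab);  /* MCV lower bound     */

    total_sel *= sel_c;
    total_sel = dmax(total_sel, mcv_after_abc);

    printf("final estimate: %g\n", total_sel);  /* 0.0008, from the bound */
    return 0;
}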

Re: [HACKERS] PATCH: multivariate histograms and MCV lists

2018-06-24 Thread Tomas Vondra
Hi all,

Attached is a rebased version of this patch series, mostly just fixing
the breakage caused by reworked format of initial catalog data.

Aside from that, the MCV building now adopts the logic introduced by
commit b5db1d93d2 for single-column MCV lists. The new algorithm seems
pretty good and I don't see why multi-column MCV lists should use
something special.

I'm sure there are plenty of open questions to discuss, particularly
stuff related to combining the various types of statistics to the final
estimate (a lot of that was already improved based on Dean's reviews).

One thing that occurred to me while comparing the single-column logic (as
implemented in selfuncs.c) and the new multi-column stuff, is dealing
with partially-matching histogram buckets.

In the single-column case, we pretty much assume uniform distribution in
each bucket, and linearly interpolate the selectivity. So for a bucket
with boundaries [0, 10] and condition "x <= 5" we return 0.5, for "x <
7" we return 0.7 and so on. This is what convert_to_scalar() does.

In the multi-column case, we simply count each matching bucket as 0.5,
without any attempts to linearly interpolate. It would not be difficult
to call "convert_to_scalar" for each condition (essentially repeating
the linear interpolation for each column), but then what? We could
simply compute a product of those results, of course, but that only
works assuming independence. And that's exactly the wrong thing to
assume here, considering the extended statistics are meant for cases
where the columns are not independent.

So I'd argue the 0.5 estimate for partially-matching buckets is the
right thing to do here, as it's minimizing the average error.
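
That claim is easy to check numerically: if the true matched fraction of
a partial bucket is treated as uniform on [0,1], the constant guess 0.5
minimizes the mean absolute error. A small self-contained check (not
patch code):

#include <math.h>
#include <stdio.h>

int
main(void)
{
    /* the constant guess minimizing mean absolute error against a
     * uniform [0,1] fraction is the median, i.e. 0.5 */
    for (double guess = 0.3; guess <= 0.71; guess += 0.1)
    {
        double  err = 0.0;
        int     n = 10000;

        for (int i = 0; i < n; i++)
        {
            double f = (i + 0.5) / n;   /* uniform grid over [0,1] */

            err += fabs(f - guess);
        }
        printf("guess %.1f -> mean abs error %.4f\n", guess, err / n);
    }
    return 0;
}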


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


0001-multivariate-MCV-lists.patch.gz
Description: application/gzip


0002-multivariate-histograms.patch.gz
Description: application/gzip

