Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-06 Thread Tomas Vondra

On 06/06/2016 07:40 PM, Tom Lane wrote:

Tomas Vondra  writes:

On 06/06/2016 06:15 PM, Tom Lane wrote:

This checks that you found a joinclause mentioning foreignrel.  But
foreignrel need have nothing to do with the foreign key; it could be any
table in the query.



I don't follow. How could it have 'nothing to do with the foreign key'?


Precisely that: clauselist_join_selectivity iterates over every table in
the join as a potential foreignrel, and you explicitly refuse to check
that that table has anything to do with the foreign key's referenced side.

Here's an example:

drop table if exists t1, t2, t3;
create table t1(f1 int, f2 int, primary key(f1,f2));
insert into t1 select x,x from generate_series(1,10) x;
create table t2 (f1 int, f2 int, foreign key(f1,f2) references t1);
insert into t2 select (x+10)/10,(x+10)/10 from generate_series(1,10) x;
create table t3(f1 int, f2 int);
insert into t3 select (x+10)/10,(x+10)/10 from generate_series(1,10) x;
analyze t1;
analyze t2;
analyze t3;
explain select * from t1 join t2 on t1.f1=t2.f1 and t1.f2=t2.f2;
explain select * from t3 join t2 on t3.f1=t2.f1 and t3.f2=t2.f2;

9.5 estimates the first query as producing 1 row, the second as producing
100 rows.  Both of those estimates suck, of course, but it's what you'd
expect from treating the joinclauses as uncorrelated.  HEAD estimates them
both at 10 rows, which is correct for the first query but a pure
flight of fancy for the second query.  Tracing through this shows that
it's accepting t2's FK as a reason to make the estimate, even though
t1 doesn't even appear in that query!


D'oh!

Clearly we need to check confrelid somewhere, not just varno/varattno. I 
think this should do the trick:


rte = planner_rt_fetch(var->varno, root);

if (foreignrel->relid == var->varno &&
    fkinfo->confrelid == rte->relid &&
    fkinfo->confkeys[i] == var->varattno)
    foundvarmask |= 1;

It seems to resolve the issue (the estimate is now just 100), but 
I'm not going to claim it's 100% correct.


In any case, thanks for pointing this out.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-06 Thread Tom Lane
Tomas Vondra  writes:
> When it comes to improving multiple (multi-column) foreign keys, I think 
> it may get way more complicated than it might seem. What if the foreign 
> keys overlap, for example? Or what if the keys go in opposite directions 
> (forming a cycle)? And so on ...

I think you can group all FKs referencing the same table and discard
all their matched join clauses in favor of a single 1/N estimate
(and when I say "discard", that means you don't match those clauses
against later FKs, which should take care of the reciprocal-FK issue).
This is clearly correct if no nulls are involved.  We need to make some
estimate of how much to de-rate that figure for nulls, but I don't see
that it's any harder than what's already required for a single multicol
FK.
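
For illustration, a minimal sketch of that scenario with hypothetical
tables (not from this thread): two separate single-column FKs from "a"
to the same referenced table "b".

create table b (k int primary key, y int unique);
create table a (j int references b (k),
                x int references b (y));

-- Under the grouping approach described above, both join clauses matched
-- by FKs pointing at "b" would be discarded in favor of a single 1/N
-- estimate (N = number of rows in "b"), rather than multiplying their
-- selectivities as if they were independent:
explain select * from a join b on a.j = b.k and a.x = b.y;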

regards, tom lane




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-06 Thread Tom Lane
Tomas Vondra  writes:
> On 06/06/2016 06:15 PM, Tom Lane wrote:
>> This checks that you found a joinclause mentioning foreignrel.  But
>> foreignrel need have nothing to do with the foreign key; it could be any
>> table in the query.

> I don't follow. How could it have 'nothing to do with the foreign key'? 

Precisely that: clauselist_join_selectivity iterates over every table in
the join as a potential foreignrel, and you explicitly refuse to check
that that table has anything to do with the foreign key's referenced side.

Here's an example:

drop table if exists t1, t2, t3;
create table t1(f1 int, f2 int, primary key(f1,f2));
insert into t1 select x,x from generate_series(1,10) x;
create table t2 (f1 int, f2 int, foreign key(f1,f2) references t1);
insert into t2 select (x+10)/10,(x+10)/10 from generate_series(1,10) x;
create table t3(f1 int, f2 int);
insert into t3 select (x+10)/10,(x+10)/10 from generate_series(1,10) x;
analyze t1;
analyze t2;
analyze t3;
explain select * from t1 join t2 on t1.f1=t2.f1 and t1.f2=t2.f2;
explain select * from t3 join t2 on t3.f1=t2.f1 and t3.f2=t2.f2;

9.5 estimates the first query as producing 1 row, the second as producing
100 rows.  Both of those estimates suck, of course, but it's what you'd
expect from treating the joinclauses as uncorrelated.  HEAD estimates them
both at 10 rows, which is correct for the first query but a pure
flight of fancy for the second query.  Tracing through this shows that
it's accepting t2's FK as a reason to make the estimate, even though
t1 doesn't even appear in that query!

If we made the modifications previously discussed to throw away FKs
that don't connect two tables mentioned in the query, this particular
example would stop failing.  But it would only take a three-table
query involving t1, t2, and t3 to bring the bug back to life, whether
or not the join conditions actually match the FK.

regards, tom lane




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-06 Thread Tomas Vondra

On 06/06/2016 06:34 PM, Tom Lane wrote:

... BTW, another thought occurred to me yesterday: it seems like the
existing code hasn't thought through its behavior for multiple foreign
keys very carefully.  That is, suppose we have both "A.J references B.K"
and "A.X references B.Y", as separate FKs not a single multicolumn FK.
The current code goes to some lengths to decide that one of these is
better than the other and then ignore the other.  Why?  Seems to me
that in such a case you want to behave more nearly as you would for a
multicolumn FK, that is discard all the join quals matched to either FK
in favor of a single selectivity estimate based on the number of rows in
the referenced table.  Discarding only the A.J = B.K clause and then
multiplying by the independent selectivity of A.X = B.Y is surely just as
wrong as what we've historically done for multicolumn FKs.  (Correcting
for nulls would take a bit of thought, but I wouldn't be surprised if it
ends up being the same as for the multicolumn-FK case, at least to within
the precision we can hope to get with the available stats for nulls.)


Yes, that can be improved. The plan was to improve the common case 
first, and then look at the more complicated cases. It might seem like 
hand-waving, but I'd bet 99% of tables are joined on a single FK, so this 
seems like a reasonable approach.


When it comes to improving multiple (multi-column) foreign keys, I think 
it may get way more complicated than it might seem. What if the foreign 
keys overlap, for example? Or what if the keys go in opposite directions 
(forming a cycle)? And so on ...



regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-06 Thread Tomas Vondra

On 06/06/2016 06:15 PM, Tom Lane wrote:

Tomas Vondra  writes:

On 06/04/2016 08:15 PM, Tom Lane wrote:

* Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
not just constraint OIDs.  It's insane that the relcache scans
pg_constraint to collect those OIDs and then the planner re-reads all
those same rows on every planning cycle.



That seems like a fairly straightforward change, and I'm not opposed to
doing that. However RelationGetFKeyList is basically a modified copy of
RelationGetIndexList, so it shares the same general behavior, including
caching of OIDs and then constructing IndexOptInfo objects on each
get_relation_info() call. Why is it 'insane' for foreign keys but not
for indexes? Or what am I missing?


I would not be in favor of migrating knowledge of IndexOptInfo into the
relcache; it's too planner-specific.  Also, it mostly copies info from
the index's relcache entry, not the parent relation's (which for one
thing would imply locking hazards if we tried to cache that info in the
parent rel).  But for foreign keys, we can cache an image of certain
well-defined fields of certain well-defined rows of pg_constraint, and
that seems like a reasonably arm's-length definition of a responsibility
to give the relcache, especially when it has to visit all and only those
same rows to construct what it's caching now.


would be okay if you checked after identifying a matching eclass member
that it belonged to the FK's referenced table, but AFAICS there is no such
check anywhere.



Perhaps I'm missing something, but I thought this is checked by these
conditions in quals_match_foreign_key():



1) with ECs (line 3990)



if (foreignrel->relid == var->varno &&
    fkinfo->confkeys[i] == var->varattno)
    foundvarmask |= 1;


This checks that you found a joinclause mentioning foreignrel.  But
foreignrel need have nothing to do with the foreign key; it could be any
table in the query.  That comparison of confkeys[] and varattno is thus
checking for column-number equality of two columns that might be from
different relations.  That is, if we have an FK "A.X references B.Y",
and the query contains "A.X = C.Z", this code could match the FK to that
clause if Y and Z happen to have the same data types and column numbers.


I don't follow. How could it have 'nothing to do with the foreign key'? 
We're explicitly checking both varno and varattno on both sides of the 
foreign key, and the column is only considered matched if both checks pass.


I've tried to come up with an example triggering the issue, but 
unsuccessfully ...



regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-06 Thread Tom Lane
... BTW, another thought occurred to me yesterday: it seems like the
existing code hasn't thought through its behavior for multiple foreign
keys very carefully.  That is, suppose we have both "A.J references B.K"
and "A.X references B.Y", as separate FKs not a single multicolumn FK.
The current code goes to some lengths to decide that one of these is
better than the other and then ignore the other.  Why?  Seems to me
that in such a case you want to behave more nearly as you would for a
multicolumn FK, that is discard all the join quals matched to either FK
in favor of a single selectivity estimate based on the number of rows in
the referenced table.  Discarding only the A.J = B.K clause and then
multiplying by the independent selectivity of A.X = B.Y is surely just as
wrong as what we've historically done for multicolumn FKs.  (Correcting
for nulls would take a bit of thought, but I wouldn't be surprised if it
ends up being the same as for the multicolumn-FK case, at least to within
the precision we can hope to get with the available stats for nulls.)

regards, tom lane




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-06 Thread Tom Lane
Tomas Vondra  writes:
> On 06/04/2016 08:15 PM, Tom Lane wrote:
>> * Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
>> not just constraint OIDs.  It's insane that the relcache scans
>> pg_constraint to collect those OIDs and then the planner re-reads all
>> those same rows on every planning cycle.

> That seems like a fairly straightforward change, and I'm not opposed to 
> doing that. However RelationGetFKeyList is basically a modified copy of 
> RelationGetIndexList, so it shares the same general behavior, including 
> caching of OIDs and then constructing IndexOptInfo objects on each 
> get_relation_info() call. Why is it 'insane' for foreign keys but not 
> for indexes? Or what am I missing?

I would not be in favor of migrating knowledge of IndexOptInfo into the
relcache; it's too planner-specific.  Also, it mostly copies info from
the index's relcache entry, not the parent relation's (which for one
thing would imply locking hazards if we tried to cache that info in the
parent rel).  But for foreign keys, we can cache an image of certain
well-defined fields of certain well-defined rows of pg_constraint, and
that seems like a reasonably arm's-length definition of a responsibility
to give the relcache, especially when it has to visit all and only those
same rows to construct what it's caching now.

>> would be okay if you checked after identifying a matching eclass member
>> that it belonged to the FK's referenced table, but AFAICS there is no such
>> check anywhere.

> Perhaps I'm missing something, but I thought this is checked by these 
> conditions in quals_match_foreign_key():

> 1) with ECs (line 3990)

>   if (foreignrel->relid == var->varno &&
>       fkinfo->confkeys[i] == var->varattno)
>       foundvarmask |= 1;

This checks that you found a joinclause mentioning foreignrel.  But
foreignrel need have nothing to do with the foreign key; it could be any
table in the query.  That comparison of confkeys[] and varattno is thus
checking for column-number equality of two columns that might be from
different relations.  That is, if we have an FK "A.X references B.Y",
and the query contains "A.X = C.Z", this code could match the FK to that
clause if Y and Z happen to have the same data types and column numbers.

regards, tom lane




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-05 Thread Tomas Vondra

Hi,

While this thread was effectively superseded by the 'new design' thread 
[1], I'd like to address a few points raised here, as they are relevant 
for the new design (at least I believe so).


[1] https://www.postgresql.org/message-id/31041.1465069...@sss.pgh.pa.us

On 06/04/2016 08:15 PM, Tom Lane wrote:
...


* Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
not just constraint OIDs.  It's insane that the relcache scans
pg_constraint to collect those OIDs and then the planner re-reads all
those same rows on every planning cycle.  Allow the relcache to return
a pointer to the list in-cache, and require the planner to copy that
data before making any more cache requests.  The planner would run
through the list, skipping single-key entries and entries leading to
irrelevant tables, and copy out just the items that are useful for
the current query (probably adding query-specific table RTE indexes
at the same time).


That seems like a fairly straightforward change, and I'm not opposed to 
doing that. However RelationGetFKeyList is basically a modified copy of 
RelationGetIndexList, so it shares the same general behavior, including 
caching of OIDs and then constructing IndexOptInfo objects on each 
get_relation_info() call. Why is it 'insane' for foreign keys but not 
for indexes? Or what am I missing?




* Personally I'd probably handle the "ignore irrelevant tables" test
with a list_member_oid test on a list of relation OIDs, not a
hashtable. It's unlikely that there are enough tables in the query to
justify building a hashtable.


OK



* All of the above should happen only if the query contains multiple
tables; there is no reason to expend even one cycle on loading FK data
in a simple single-table query.


OK



... snip the part about nested loops (new design thread) ...

Also, there are serious bugs remaining, even without considering planning
speed.  An example I just noticed is that quals_match_foreign_key actually
fails entirely to ensure that it's found a match to the FK, because there
is no check on whether foreignrel has anything to do with the FK.  That
is, this bit:

 * We make no attempt in checking that this foreign key actually
 * references 'foreignrel', the reasoning here is that we may be able
 * to match the foreign key to an eclass member Var of a RestrictInfo
 * that's in qualslist, this Var may belong to some other relation.

would be okay if you checked after identifying a matching eclass member
that it belonged to the FK's referenced table, but AFAICS there is no such
check anywhere.



Perhaps I'm missing something, but I thought this is checked by these 
conditions in quals_match_foreign_key():


1) with ECs (line 3990)

if (foreignrel->relid == var->varno &&
    fkinfo->confkeys[i] == var->varattno)
    foundvarmask |= 1;

2) without ECs (line 4019)


if ((foreignrel->relid == leftvar->varno) &&
    (fkrel->relid == rightvar->varno) &&
    (fkinfo->confkeys[i] == leftvar->varattno) &&
    (fkinfo->conkeys[i] == rightvar->varattno))
{
    ...
}
else if ((foreignrel->relid == rightvar->varno) &&
         (fkrel->relid == leftvar->varno) &&
         (fkinfo->confkeys[i] == rightvar->varattno) &&
         (fkinfo->conkeys[i] == leftvar->varattno))
{
    ...
}

Or does that fail for some reason?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-04 Thread Tom Lane
David Rowley  writes:
> I think your wires are crossed as to what this patch actually does. A
> unique index could only prove that no more than 1 row exists. This
> goes further and proves that exactly 1 exists, then reduces that estimate by
> any other join conditions which were not matched to a foreign key.

BTW, I thought some more about this, and I believe the patch is a few
bricks shy of a load in this respect.  An FK constraint will enforce that
a referencing row matches exactly one referenced row only if all the
referencing columns are marked NOT NULL.  If any nulls are allowed, then
we are back to the unique-index situation, ie we can only conclude that
there is at most one matching row.

I do not think this means that we must dial the patch back to only
considering FKs that have NOT NULL on all their columns.  If we could
estimate the fraction of referencing rows that have any nulls, we could
still arrive at a selectivity estimate that's better than we get when
disregarding the FK altogether: instead of 1/num_referenced_rows it'd be
fraction-of-referencing-rows-without-nulls/num_referenced_rows, since the
rows containing nulls are guaranteed to have 0 matches rather than 1.
However, since the statistics we have at hand only tell us the fraction of
nulls in each column separately, making a fraction-with-any-nulls estimate
for a multi-column FK is going to be pretty spongy.
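
For illustration, a minimal sketch of the null behavior with hypothetical
tables (not from this thread): a referencing row with a null in either FK
column matches zero referenced rows rather than one.

create table b (x int, y int, primary key (x, y));
create table a (x int, y int, foreign key (x, y) references b);
insert into b values (1, 1);
insert into a values (1, 1), (null, 1), (1, null);

-- Only the all-non-null row of "a" finds a partner; the two rows
-- containing nulls match nothing:
select count(*) from a join b using (x, y);   -- returns 1, not 3

-- So the adjusted selectivity suggested above would be roughly
-- (fraction of "a" rows with no null FK columns) / (rows in "b"),
-- i.e. (1/3) / 1 here, instead of a flat 1 / 1.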

regards, tom lane




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-04 Thread Tom Lane
Robert Haas  writes:
> FYI, I spoke to Tom Lane about this at PGCon and suggested that he
> look at the proposed patch as I requested in
> https://www.postgresql.org/message-id/CA+TgmobPqrAVXOBMHTcpDq8hX7gCzcVhoUvC8s9V=d09+bt...@mail.gmail.com
> and see whether that would address his concerns, and he said that he
> would do that.  It may, however, have slipped his mind.

> My opinion is that something needs to be done about this patch.  It
> needs to be improved or reverted.  Improved would be ideal in my mind,
> but reverted is an outcome I can live with.

I've taken a look at fkest_fixes_v2.patch.  I do not like
mark_useful_foreign_keys one bit: it adds *more* overhead to *every*
query, whether or not there's anything to be saved.  Also, adding a
flag to the ForeignKeyOptInfo entries doesn't save having to iterate
over the irrelevant entries.  We should arrange for them not to be
in the list in the first place.

The direction I would go in to get rid of irrelevant FKs is more
like this:

* Make RelationGetFKeyList cache a list of ForeignKeyOptInfo structs,
not just constraint OIDs.  It's insane that the relcache scans
pg_constraint to collect those OIDs and then the planner re-reads all
those same rows on every planning cycle.  Allow the relcache to return
a pointer to the list in-cache, and require the planner to copy that
data before making any more cache requests.  The planner would run
through the list, skipping single-key entries and entries leading to
irrelevant tables, and copy out just the items that are useful for
the current query (probably adding query-specific table RTE indexes
at the same time).

* Personally I'd probably handle the "ignore irrelevant tables" test
with a list_member_oid test on a list of relation OIDs, not a hashtable.
It's unlikely that there are enough tables in the query to justify
building a hashtable.

* All of the above should happen only if the query contains multiple
tables; there is no reason to expend even one cycle on loading FK data
in a simple single-table query.

But more generally, this entire approach does very little to improve
the problem of nested loops leading to probable poor big-O behavior.
I previously complained that the patch adds code that is executed

> * at least once per join relation created (hence, significantly more than
>   the number of rels in the query; see also get_joinrel_parampathinfo)
>   * for each inner relation in the initial input joinrel pair
>     * for each outer relation in the initial joinrel pair
>       * for each foreign key constraint on this inner and outer rel
>         * for each key column in that FK
>           * for each join qual for the current input joinrel pair
>             * for each member of the relevant EquivalenceClass

and only the fourth of those seven nested loops is significantly shortened
by this patch.  It's still about as brute-force as can be.  I think that
fkest_fixes_v2.patch really only makes things significantly better for
cases where a table in the query has a huge number of FKs leading to
tables not in the query, and TBH I think that's an artificial test
condition that will seldom have much to do with real-world performance.

It's also pretty bad that "for each foreign key constraint on this inner
and outer rel" means "for each FK leading out of either the inner or outer
rel, whether or not it leads to the other side".  The existing code has
to do that in order to handle cases where the particular representative
clause generated from an EC doesn't match the FK but another clause could
have.  I still think that some more careful thought about relating FKs
to ECs could result in elimination of a couple of these looping levels
altogether.

This is certainly all attackable, and I'm even willing to help work on it,
but it's going to lead to a rather larger patch than we usually like to
apply in beta.  And there are still all the non-planning-speed cleanup
issues to be dealt with.  fkest_fixes_v2.patch addresses some of them
but I still feel that a lot of work is needed on the comments.

Also, there are serious bugs remaining, even without considering planning
speed.  An example I just noticed is that quals_match_foreign_key actually
fails entirely to ensure that it's found a match to the FK, because there
is no check on whether foreignrel has anything to do with the FK.  That
is, this bit:

 * We make no attempt in checking that this foreign key actually
 * references 'foreignrel', the reasoning here is that we may be able
 * to match the foreign key to an eclass member Var of a RestrictInfo
 * that's in qualslist, this Var may belong to some other relation.

would be okay if you checked after identifying a matching eclass member
that it belonged to the FK's referenced table, but AFAICS there is no such
check anywhere.

We need to decide whether we want to put a significant amount of time
into this patch over the next couple of weeks, to the exclusi

Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-02 Thread Tom Lane
Robert Haas  writes:
> FYI, I spoke to Tom Lane about this at PGCon and suggested that he
> look at the proposed patch as I requested in
> https://www.postgresql.org/message-id/CA+TgmobPqrAVXOBMHTcpDq8hX7gCzcVhoUvC8s9V=d09+bt...@mail.gmail.com
> and see whether that would address his concerns, and he said that he
> would do that.  It may, however, have slipped his mind.

Not entirely, though I've been wrapped up in pg_dump fixes for the last
little while.  I'll get to this soon.

regards, tom lane




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-06-02 Thread Robert Haas
On Wed, Jun 1, 2016 at 9:29 PM, Noah Misch  wrote:
> This PostgreSQL 9.6 open item is past due for your status update.  Kindly send
> a status update within 24 hours, and include a date for your subsequent status
> update.  Refer to the policy on open item ownership:
> http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com

FYI, I spoke to Tom Lane about this at PGCon and suggested that he
look at the proposed patch as I requested in
https://www.postgresql.org/message-id/CA+TgmobPqrAVXOBMHTcpDq8hX7gCzcVhoUvC8s9V=d09+bt...@mail.gmail.com
and see whether that would address his concerns, and he said that he
would do that.  It may, however, have slipped his mind.

My opinion is that something needs to be done about this patch.  It
needs to be improved or reverted.  Improved would be ideal in my mind,
but reverted is an outcome I can live with.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-08 Thread Simon Riggs
On 9 May 2016 at 00:24, Tomas Vondra  wrote:

> Hi,
>
> Attached is a minor revision of the patch posted by David a few days ago,
> rebased on the current master (which already includes 68d704 fixing the
> segfault that started this thread).
>
> The modifications are fairly small:
>
> * The 'possibleRef' flag is renamed to 'use_for_estimation' which I think
> better describes the purpose of the flag.
>
> * The mark_useful_foreign_keys now skips foreign keys on a single column,
> as those are not useful for the patch at all. This should further reduce
> performance impact in the common case.
>

Good, thanks.


> * I've slightly reworded some of the comments, hopefully for the better.



> But now the bad news - while reviewing David's fixup patch, I've
> noticed this comment
>
> /* XXX do we need to add entries for the append_rel_list here? */
>
> and I've realized that the estimation does not quite work with partitioned
> tables, as it only checks foreign keys on the parent tables, but the child
> tables may not use the foreign keys at all, or even use different foreign
> keys (a bit bizarre, but possible).
>
> Obviously the simplest solution would be to simply stop considering
> foreign keys with partitioned tables - that seems a bit unfortunate, but
> AFAIK we don't inspect child tables during planning anyway, and in the
> worst case we fall back to the current estimate.
>
> It might be possible to improve this by checking that all child tables
> have a matching foreign key (referencing the same table), which would allow
> us to handle the case with one side partitioned. And maybe some other (more
> complex) cases, like equi-partitioned tables. But all of this would require
> a fair amount of new code, far more than we should commit in beta mode.
>
>
> To summarize all of this, I think David's patch marking usable foreign
> keys greatly reduces the overhead compared to the committed version. The
> 'skip 1-column FKs' significantly reduces (or even eliminates) the overhead
> for the common case where only few FKs use multiple columns.
>
> Not handling the inheritance properly is clearly a serious oversight,
> though. Tom is clearly right that this got committed a bit too early.
>
> If the conclusion is that the current performance impact is still not
> acceptable, or that we need a better solution to the inheritance issue than
> simply disabling the FK estimates, then I think reverting the patch is the
> only possible solution at this point.


I disagree.

The purpose of the patch is to improve planning in simple and important
common cases. It does that. The fact that it does not cover all cases is
understood and we will make more progress in future releases. The fact that
we don't handle the inheritance case well in many ways is not this patch's
fault, or problem.

There's no point putting years of effort into parallel query if we can't
work out when to use it sensibly. This patch makes major gains in that area.

-- 
Simon Riggs   http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-08 Thread Tomas Vondra

Hi,

Attached is a minor revision of the patch posted by David a few days 
ago, rebased on the current master (which already includes 68d704 fixing 
the segfault that started this thread).


The modifications are fairly small:

* The 'possibleRef' flag is renamed to 'use_for_estimation' which I 
think better describes the purpose of the flag.


* The mark_useful_foreign_keys now skips foreign keys on a single 
column, as those are not useful for the patch at all. This should 
further reduce performance impact in the common case.


* I've slightly reworded some of the comments, hopefully for the better.


But now the bad news - while reviewing David's fixup patch, I've 
noticed this comment


/* XXX do we need to add entries for the append_rel_list here? */

and I've realized that the estimation does not quite work with 
partitioned tables, as it only checks foreign keys on the parent tables, 
but the child tables may not use the foreign keys at all, or even use 
different foreign keys (a bit bizarre, but possible).
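
For illustration, a minimal sketch with hypothetical tables (not from this
thread): with 9.6-style inheritance partitioning, a foreign key on the
parent is not inherited, so a child table may have no matching FK at all.

create table ref (a int, b int, primary key (a, b));
create table parent (a int, b int,
                     foreign key (a, b) references ref);
create table child () inherits (parent);  -- inherits columns, not the FK

-- A scan of "parent" includes "child"'s rows, yet only the parent's
-- foreign keys are examined during planning, so the FK-based join
-- estimate need not hold for the inherited data:
explain select * from parent p join ref r on p.a = r.a and p.b = r.b;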


Obviously the simplest solution would be to simply stop considering 
foreign keys with partitioned tables - that seems a bit unfortunate, but 
AFAIK we don't inspect child tables during planning anyway, and in the 
worst case we fall back to the current estimate.


It might be possible to improve this by checking that all child tables 
have a matching foreign key (referencing the same table), which would 
allow us to handle the case with one side partitioned. And maybe some 
other (more complex) cases, like equi-partitioned tables. But all of 
this would require a fair amount of new code, far more than we should 
commit in beta mode.



To summarize all of this, I think David's patch marking usable foreign 
keys greatly reduces the overhead compared to the committed version. The 
'skip 1-column FKs' significantly reduces (or even eliminates) the 
overhead for the common case where only few FKs use multiple columns.


Not handling the inheritance properly is clearly a serious oversight, 
though. Tom is clearly right that this got committed a bit too early.



If the conclusion is that the current performance impact is still not 
acceptable, or that we need a better solution to the inheritance issue 
than simply disabling the FK estimates, then I think reverting the patch 
is the only possible solution at this point.



regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


fkest_fixes_v2.patch
Description: binary/octet-stream



Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-07 Thread Simon Riggs
On 6 May 2016 at 01:00, Tomas Vondra  wrote:


> I think we can further limit the impact by ignoring foreign keys on a
> single column, because the primary goal of the patch is improving estimates
> with multi-column FKs (and matching joins). I'd argue that 99% of the FKs
> in practice are single-column ones, but sure - if you have a database with
> many multi-column FKs this won't help.
>
> I think it's also important to point out that whatever solution we choose,
> it should probably allow us to relax some of the restrictions in the
> future. For example, with a FK on 3 columns, the current patch only handles
> a "full match" joins, with conditions on all three columns. But it may be
> possible to also improve estimates on just two columns - the current patch
> does not do that, as that definitely needs further thought and
> discussion.
>

The context of the patch is important. If we can save this for 9.6, we
should.

Multi-column joins are currently badly estimated. Multivariate stats
would be useful here also, but I chose not to commit that patch since it
was much larger. The current patch is small, isolated and yet very
effective in solving the estimation problem in some cases. Committing and
possibly reverting the feature was/is possible without collateral damage.

The current patch works only for multi-column joins, matching them against
multi-column FKs. I see no reason why that should cause a massive negative
performance impact, because that is a small subset of cases; I did test that
to check for regressions, but it's possible I missed something that does
cause additional run-time in real-world cases.

Clearly, an optimizer test suite would be helpful in analyzing the effect
of new patches.

I'll continue to look at this and comment further.

-- 
Simon Riggs   http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-07 Thread Simon Riggs
On 3 May 2016 at 16:10, Tomas Vondra  wrote:


> While this in itself is about a two-line fix, after reviewing
>> 137805f89acb3611 I'm pretty unhappy that it got committed at all.
>> I think this obvious bug is a good sign that it wasn't ready.
>> Other unfinished aspects like invention of an undocumented GUC
>> don't leave a good impression either.
>>
>
> The GUC is meant to make testing during development easier. I hadn't
> realized it got committed, but I assume Simon plans to remove it before the
> final release.


Yes, the GUC was for testing, as mentioned in the commit message.


> Can't check right now with Simon, though, as he is out of office this
> week.
>

Am back and reading.

-- 
Simon Riggs   http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-06 Thread Robert Haas
On Thu, May 5, 2016 at 10:48 AM, David Rowley
 wrote:
> On 5 May 2016 at 16:04, David Rowley  wrote:
>> I've started making some improvements to this, but need to talk to
>> Tomas. It's currently in the middle of his night, but will try to
>> catch him in his morning to discuss this with him.
>
> Ok, so I spoke to Tomas about this briefly, and he's asked me to send
> in this patch. He didn't get time to look over it due to some other
> commitments he has today.
>
> I do personally feel that if the attached is not good enough, or not
> very close to good enough then probably the best course of action is
> to revert the whole thing.

Tom, what do you think about this patch?  Is it good enough, or should
we revert the whole thing?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-05 Thread David Rowley
On 6 May 2016 at 02:48, David Rowley  wrote:
> In the attached I've left the GUC remaining. The reason for the GUC is
> for testing purposes and it should be removed before release. It
> should likely be documented though, even if we're planning to remove
> it later. I've not gotten around to that in this patch though.

I've attached a patch which is the bare minimum fix for the crash
reported by Stefan. I don't think there needs to be any debate about
whether this goes in.

I also included removing that bogus function declaration from paths.h
as it was upsetting me a bit, and also there should be no debate on
that.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


fkest_crash_fix.patch
Description: Binary data



Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-05 Thread Tomas Vondra

Hi,

On 05/05/2016 04:48 PM, David Rowley wrote:

On 5 May 2016 at 16:04, David Rowley  wrote:

I've started making some improvements to this, but need to talk to
Tomas. It's currently in the middle of his night, but will try to
catch him in his morning to discuss this with him.


Ok, so I spoke to Tomas about this briefly, and he's asked me to
send in this patch. He didn't get time to look over it due to some
other commitments he has today.



Thanks!

>

I do personally feel that if the attached is not good enough, or not
very close to good enough then probably the best course of action is
to revert the whole thing. I'd much rather have this out than to
feel some responsibility for someone's performance problems with
this feature.

>

I share this opinion. If the approach proposed here is deemed not good 
enough, then +1 to revert.


I think we can further limit the impact by ignoring foreign keys on a 
single column, because the primary goal of the patch is improving 
estimates with multi-column FKs (and matching joins). I'd argue that 99% 
of the FKs in practice are single-column ones, but sure - if you have a 
database with many multi-column FKs this won't help.


I think it's also important to point out that whatever solution we 
choose, it should probably allow us to relax some of the restrictions in 
the future. For example, with a FK on 3 columns, the current patch only 
handles "full match" joins, with conditions on all three columns. But 
it may be possible to also improve estimates on just two columns - the 
current patch does not do that, as that definitely needs further 
thought and discussion.
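
For illustration, a minimal sketch of the "full match" restriction with
hypothetical tables (not from this thread):

create table pk3 (a int, b int, c int, primary key (a, b, c));
create table fk3 (a int, b int, c int,
                  foreign key (a, b, c) references pk3);

-- Full match: join conditions on all three FK columns, so the patch can
-- use the foreign key for the estimate.
explain select * from fk3 join pk3 using (a, b, c);

-- Partial match: conditions on only two of the three columns, so the FK
-- is ignored and the clauses are estimated as independent, as before.
explain select * from fk3 join pk3 on fk3.a = pk3.a and fk3.b = pk3.b;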


>

But I also do feel that the patch allows a solution to a big problem
that we currently have with PostgreSQL, for which there are no easy 
workarounds... that's multi-column join estimates.



In a sense, yes. FKs allow us to improve estimates for a fairly narrow 
case of multi-column estimates - joins matching multi-column FKs. And 
for this (not entirely narrow) case they are actually more accurate and 
way cheaper than e.g. histograms or MCV lists (at least in the 
multi-column case).


FWIW the current multivariate stats patch does not even try to mess with 
joins, as it's quite complicated in the multi-column case.




In the attached I've left the GUC remaining. The reason for the GUC
is for testing purposes and it should be removed before release. It
should likely be documented though, even if we're planning to remove
it later. I've not gotten around to that in this patch though.



Yes, removal of the GUC should go to the Open Items list.


I'd also like to make it clear that all the shitty parts of the patch 
are my fault, not David's. His name is on the patch as he provided a lot 
of useful ideas, pieces of code and feedback in general. But it was up 
to me to craft the final patch - and I clearly failed to do so. I think 
David still feels responsible for the patch as he made a lot of effort 
to salvage it over the last few days, so I'd like to set the record 
straight.



regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-05 Thread David Rowley
On 5 May 2016 at 16:04, David Rowley  wrote:
> I've started making some improvements to this, but need to talk to
> Tomas. It's currently in the middle of his night, but will try to
> catch him in his morning to discuss this with him.

Ok, so I spoke to Tomas about this briefly, and he's asked me to send
in this patch. He didn't get time to look over it due to some other
commitments he has today.

I do personally feel that if the attached is not good enough, or not
very close to good enough then probably the best course of action is
to revert the whole thing. I'd much rather have this out than to feel
some responsibility for someone's performance problems with this
feature. But I also do feel that the patch allows a solution to a big
problem that we currently have with PostgreSQL, for which there are no easy
workarounds... that's multi-column join estimates.

In the attached I've left the GUC remaining. The reason for the GUC is
for testing purposes and it should be removed before release. It
should likely be documented though, even if we're planning to remove
it later. I've not gotten around to that in this patch though.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


fkest_fixes_david.patch
Description: Binary data



Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread David Rowley
On 5 May 2016 at 06:54, Tom Lane  wrote:
> David Rowley  writes:
>> On 4 May 2016 at 09:18, David Rowley  wrote:
>>> On 4 May 2016 at 02:10, Tomas Vondra  wrote:
 There are probably a few reasonably simple things we could do - e.g. ignore
 foreign keys with just a single column, as the primary goal of the patch is
 improving estimates with multi-column foreign keys. I believe that covers a
 vast majority of foreign keys in the wild.
>
>> I've spent a few hours looking at this and I've come up with the
>> attached patch, which flags each ForeignKeyOptInfo to say whether its
>> possible to be referenced in any join condition, with the logic that
>> if the referenced relation is in the simple_rte_array, then it could
>> be referenced.
>
> My fundamental problem with the FK-selectivity patch is that it looks
> an awful lot like a preliminary proof-of-concept that got committed.

I don't disagree that there were some mistakes made. The last time I
saw this work was when I proposed some changes to Tomas' patch, which
was quite a long time ago now. I'd not gotten to look at it since
then.

> I do not like the basic design: it's about as brute-force as could
> possibly be.  It adds code that is executed:
>
> * at least once per join relation created (hence, significantly more than
>   the number of rels in the query; see also get_joinrel_parampathinfo)
>   * for each inner relation in the initial input joinrel pair
>     * for each outer relation in the initial joinrel pair
>       * for each foreign key constraint on this inner and outer rel
>         * for each key column in that FK
>           * for each join qual for the current input joinrel pair
>             * for each member of the relevant EquivalenceClass
>
> This is at least O(N^3) in the number of baserels in the query, not to
> mention the other multipliers.  I'm not very impressed by tests that scale
> only one of the multipliers (like the number of FK constraints); where the
> pain is going to come in is when all of these factors are large.

Yes, it's quite a lot of nesting. The patch I sent yesterday helps to
reduce the number of foreign keys considered. The part I'm not all
that happy with now is the fact that quite a bit of repeat work gets
done during the join search. It would be nice to cache some of this
but nothing came to mind, as we need to record the position of each
joinqual so that we only estimate the non-matched ones with the
standard estimation technique. Neither the order of nor the items in that
list will be fixed as more relations are added to the mix during the join
search.

> I spent some time trying to make a test case that was impossibly slow,
> without any really impressive result: I saw at most about a 25% growth in
> planning time, for a 27-way join with one or two foreign keys per table.
> I noted however that with a simple FROM list of tables, you don't really
> get the full force of the combinatorial explosion, because
> join_search_one_level prefers to generate left-deep trees first, and so
> the first creation of a given joinrel is always N left-side rels against 1
> right-side rel, causing the second level of looping to always iterate just
> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
> devise join order constraints that would result in a lot of high-level
> joinrels being formed with many relations on both sides, which would cause
> both of the second and third levels to iterate O(N) times not just once.
> I didn't have much luck, but I have zero confidence that our users won't
> find such cases.

Did you test that with or without my patch from yesterday?

> The reason it's so brute force is that it's rediscovering the same facts
> over and over.  In all simple (non-outer-join) cases, the only clauses
> that are of any interest are derived from EquivalenceClasses, and all
> that you really need to know is "are the vars mentioned on each side
> of this FK present in the same EquivalenceClass?".  ISTM that could be
> determined once per FK per query and cached in or near the EC, not
> expensively rediscovered at each level of joining.  I'm not sure whether
> it'd be worth considering non-EC-derived equalities (ie, outer join
> clauses) at all, and note that the current patch fails to do so anyway;
> see below.

That's interesting, but requires a good bit of thought as to how it might work.

> My other design-level complaint is that basing this on foreign keys is
> fundamentally the wrong thing.  What actually matters is the unique index
> underlying the FK; that is, if we have "a.x = b.y" and there's a
> compatible unique index on b.y, we can conclude that no A row will match
> more than one B row, whether or not an explicit FK relationship has been
> declared.  So we should drive this off unique indexes instead of FKs,
> first because we will find more cases, and second because the planner
> already examines indexes and doesn't need any additional catalog lookups
> to g

Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Robert Haas
On Wed, May 4, 2016 at 2:54 PM, Tom Lane  wrote:
> I spent some time trying to make a test case that was impossibly slow,
> without any really impressive result: I saw at most about a 25% growth in
> planning time, for a 27-way join with one or two foreign keys per table.
> I noted however that with a simple FROM list of tables, you don't really
> get the full force of the combinatorial explosion, because
> join_search_one_level prefers to generate left-deep trees first, and so
> the first creation of a given joinrel is always N left-side rels against 1
> right-side rel, causing the second level of looping to always iterate just
> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
> devise join order constraints that would result in a lot of high-level
> joinrels being formed with many relations on both sides, which would cause
> both of the second and third levels to iterate O(N) times not just once.
> I didn't have much luck, but I have zero confidence that our users won't
> find such cases.

Have you looked at the patch David Rowley proposed to fix this by
doing some caching?  I am not crazy about accepting even a 25% growth
in planning time on a 27-way join, although maybe 27-way joins are
rare enough and vulnerable enough to bad plans that it would be worth
it if we could convince ourselves that plan quality would go up.  But
if that patch drops it to some much lesser number, we should consider
that as a possible fix.

> Bugs in quals_match_foreign_key():
>
> * Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
> cross-type operators, ie, what's in the clause might be int2 = int4
> while the conpfeqop is int4 = int2.
>
> * parent_ec check isn't really the right thing, since EC-derived clauses
> might not have that set.  I think it may be adequate given that you only
> deal with simple Vars, but at least a comment about that would be good.
>
> * Come to think of it, you could probably dodge the commutator operator
> problem altogether for clauses with nonnull parent_ec, because they must
> contain a relevant equality operator.  (Although if it's redesigned as I
> suggest above, the code path for a clause with parent_ec set would look
> totally different from this anyway.)
>
> * Maintaining the fkmatches bitmapset is useless expense, just use a
> counter of matched keys.  Or for that matter, why not just fail
> immediately if i'th key is not found?

Technically, only the first of these is a clear bug, IMHO.  But it
seems like they should all be fixed.

> find_best_foreign_key_quals():
>
> * Test on enable_fkey_estimates should be one call level up to dodge the
> useless doubly-nested loop in clauselist_join_selectivity (which would
> make the "fast path" exit here pretty much useless)

Yes, that's pretty stupid, and should be fixed.  Coding style is not
per project spec, either.  Also, the header comment for
find_best_foreign_key_quals and in fact the name of the function look
pretty poor.  It seems that the return value is the number of columns
in the foreign key, and that there is an out parameter, joinqualsbitmap, whose
exact meaning doesn't seem to be documented in any comment anywhere in
the patch.

> clauselist_join_selectivity():
>
> * "either could be zero, but not both" is a pretty unhelpful comment given
> the if-test just above it.  What *could* have used some explanation is
> what the next two dozen lines are doing, because they're as opaque as can
> be; and the XXX comment doesn't leave a warm feeling that the author
> understands it either.  I'm not prepared to opine on whether this segment
> of code is correct at all without better commentary.

I'm pretty baffled by this code, too.  I think what the overlap stuff
is doing is trying to calculate selectivity when we match multiple
foreign key constraints, but it doesn't look very principled.
find_best_foreign_key_quals discards "shorter" matches entirely,
picking arbitrarily among longer ones, but then we try to deal with
all of the ones that survive that stage even if they overlap.  It's
hard to judge whether any of this makes sense without more
explanation.

> calc_joinrel_size_estimate():
>
> * why is clauselist_join_selectivity applied to pushed-down clauses and
> not local ones in an outer join?  If that's not an oversight, it at least
> requires an explanatory comment.  Note that this means we are not applying
> FK knowledge for "a left join b on x = y", though it seems like we could.
>
> compute_semi_anti_join_factors isn't using clauselist_join_selectivity
> either.  I don't know whether it should be, but if not, a comment about
> why not seems appropriate.  More generally, I wonder why this logic
> wasn't just folded into clauselist_selectivity.

Good questions.

> guc.c:
> undocumented GUCs are not acceptable

Agreed.

> paths.h:
> patch introduces an extern that's referenced noplace

Oops.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Robert Haas
On Wed, May 4, 2016 at 5:51 PM, Robert Haas  wrote:
> On Wed, May 4, 2016 at 2:54 PM, Tom Lane  wrote:
>> I spent some time trying to make a test case that was impossibly slow,
>> without any really impressive result: I saw at most about a 25% growth in
>> planning time, for a 27-way join with one or two foreign keys per table.
>> I noted however that with a simple FROM list of tables, you don't really
>> get the full force of the combinatorial explosion, because
>> join_search_one_level prefers to generate left-deep trees first, and so
>> the first creation of a given joinrel is always N left-side rels against 1
>> right-side rel, causing the second level of looping to always iterate just
>> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
>> devise join order constraints that would result in a lot of high-level
>> joinrels being formed with many relations on both sides, which would cause
>> both of the second and third levels to iterate O(N) times not just once.
>> I didn't have much luck, but I have zero confidence that our users won't
>> find such cases.
>
> I'

Well, that wasn't my best-ever email to the list.  Sigh.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Robert Haas
On Wed, May 4, 2016 at 2:54 PM, Tom Lane  wrote:
> I spent some time trying to make a test case that was impossibly slow,
> without any really impressive result: I saw at most about a 25% growth in
> planning time, for a 27-way join with one or two foreign keys per table.
> I noted however that with a simple FROM list of tables, you don't really
> get the full force of the combinatorial explosion, because
> join_search_one_level prefers to generate left-deep trees first, and so
> the first creation of a given joinrel is always N left-side rels against 1
> right-side rel, causing the second level of looping to always iterate just
> once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
> devise join order constraints that would result in a lot of high-level
> joinrels being formed with many relations on both sides, which would cause
> both of the second and third levels to iterate O(N) times not just once.
> I didn't have much luck, but I have zero confidence that our users won't
> find such cases.

I'

>
> The reason it's so brute force is that it's rediscovering the same facts
> over and over.  In all simple (non-outer-join) cases, the only clauses
> that are of any interest are derived from EquivalenceClasses, and all
> that you really need to know is "are the vars mentioned on each side
> of this FK present in the same EquivalenceClass?".  ISTM that could be
> determined once per FK per query and cached in or near the EC, not
> expensively rediscovered at each level of joining.  I'm not sure whether
> it'd be worth considering non-EC-derived equalities (ie, outer join
> clauses) at all, and note that the current patch fails to do so anyway;
> see below.
>
> My other design-level complaint is that basing this on foreign keys is
> fundamentally the wrong thing.  What actually matters is the unique index
> underlying the FK; that is, if we have "a.x = b.y" and there's a
> compatible unique index on b.y, we can conclude that no A row will match
> more than one B row, whether or not an explicit FK relationship has been
> declared.  So we should drive this off unique indexes instead of FKs,
> first because we will find more cases, and second because the planner
> already examines indexes and doesn't need any additional catalog lookups
> to get the required data.  (IOW, the relcache additions that were made in
> this patch series should go away too.)
>
> Aside from the design being basically wrong, the code quality seems pretty
> low.  Aside from the missing IsA check that started this thread, I found
> the following problems in a quick once-over of 137805f89:
>
> Bugs in quals_match_foreign_key():
>
> * Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
> cross-type operators, ie, what's in the clause might be int2 = int4
> while the conpfeqop is int4 = int2.
>
> * parent_ec check isn't really the right thing, since EC-derived clauses
> might not have that set.  I think it may be adequate given that you only
> deal with simple Vars, but at least a comment about that would be good.
>
> * Come to think of it, you could probably dodge the commutator operator
> problem altogether for clauses with nonnull parent_ec, because they must
> contain a relevant equality operator.  (Although if it's redesigned as I
> suggest above, the code path for a clause with parent_ec set would look
> totally different from this anyway.)
>
> * Maintaining the fkmatches bitmapset is useless expense, just use a
> counter of matched keys.  Or for that matter, why not just fail
> immediately if i'th key is not found?
>
> find_best_foreign_key_quals():
>
> * Test on enable_fkey_estimates should be one call level up to dodge the
> useless doubly-nested loop in clauselist_join_selectivity (which would
> make the "fast path" exit here pretty much useless)
>
> clauselist_join_selectivity():
>
> * "either could be zero, but not both" is a pretty unhelpful comment given
> the if-test just above it.  What *could* have used some explanation is
> what the next two dozen lines are doing, because they're as opaque as can
> be; and the XXX comment doesn't leave a warm feeling that the author
> understands it either.  I'm not prepared to opine on whether this segment
> of code is correct at all without better commentary.
>
> calc_joinrel_size_estimate():
>
> * why is clauselist_join_selectivity applied to pushed-down clauses and
> not local ones in an outer join?  If that's not an oversight, it at least
> requires an explanatory comment.  Note that this means we are not applying
> FK knowledge for "a left join b on x = y", though it seems like we could.
>
> compute_semi_anti_join_factors isn't using clauselist_join_selectivity
> either.  I don't know whether it should be, but if not, a comment about
> why not seems appropriate.  More generally, I wonder why this logic
> wasn't just folded into clauselist_selectivity.
>
> guc.c:
> undocumented GUCs are not acceptable
>
> paths.h:
> patch introduces an extern that's refer

Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Robert Haas
On Wed, May 4, 2016 at 5:02 PM, Tom Lane  wrote:
> Very good point, but unless I'm missing something, that is not what the
> current patch does.  I'm not sure offhand whether that's an important
> estimation failure mode currently, or if it is whether it would be
> sensible to try to implement that rule entirely separately from the "at
> most one" aspect, or if it isn't sensible, whether that's a sufficiently
> strong reason to confine the "at most one" logic to working only with FKs
> and not with bare unique indexes.

Tomas seems to feel that that is what the current patch does, and
indeed that it's the main point of the current patch, but you seem to
think that it doesn't do that.  Either I'm misinterpreting what one of
you is saying, or you are missing something, or his patch fails to
accomplish its intended purpose.  It seems important to figure out
which of those things is true.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Tomas Vondra

Hi,

On 05/04/2016 11:02 PM, Tom Lane wrote:

Robert Haas  writes:

On Wed, May 4, 2016 at 2:54 PM, Tom Lane  wrote:

My other design-level complaint is that basing this on foreign keys is
fundamentally the wrong thing.  What actually matters is the unique index
underlying the FK; that is, if we have "a.x = b.y" and there's a
compatible unique index on b.y, we can conclude that no A row will match
more than one B row, whether or not an explicit FK relationship has been
declared.  So we should drive this off unique indexes instead of FKs,
first because we will find more cases, and second because the planner
already examines indexes and doesn't need any additional catalog lookups
to get the required data.  (IOW, the relcache additions that were made in
this patch series should go away too.)



Without prejudice to anything else in this useful and detailed review,
I have a question about this.  A unique index proves that no A row
will match more than one B row, and I agree that deriving that from
unique indexes is sensible.  However, ISTM that an FK provides
additional information: we know that, modulo filter conditions on B,
every A row will match *exactly* one B row, which can prevent us
from *underestimating* the size of the join product.  A unique index
can't do that.


Very good point, but unless I'm missing something, that is not what the
current patch does.  I'm not sure offhand whether that's an important
estimation failure mode currently, or if it is whether it would be
sensible to try to implement that rule entirely separately from the "at
most one" aspect, or if it isn't sensible, whether that's a sufficiently
strong reason to confine the "at most one" logic to working only with FKs
and not with bare unique indexes.


FWIW it's a real-world problem with multi-column FKs. As David pointed 
out upthread, a nice example of this issue is Q9 in the TPC-H benchmark, 
where the underestimate leads the planner to pick a HashAggregate, which 
then fails with OOM.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Tom Lane
Robert Haas  writes:
> On Wed, May 4, 2016 at 2:54 PM, Tom Lane  wrote:
>> My other design-level complaint is that basing this on foreign keys is
>> fundamentally the wrong thing.  What actually matters is the unique index
>> underlying the FK; that is, if we have "a.x = b.y" and there's a
>> compatible unique index on b.y, we can conclude that no A row will match
>> more than one B row, whether or not an explicit FK relationship has been
>> declared.  So we should drive this off unique indexes instead of FKs,
>> first because we will find more cases, and second because the planner
>> already examines indexes and doesn't need any additional catalog lookups
>> to get the required data.  (IOW, the relcache additions that were made in
>> this patch series should go away too.)

> Without prejudice to anything else in this useful and detailed review,
> I have a question about this.  A unique index proves that no A row
> will match more than one B row, and I agree that deriving that from
> unique indexes is sensible.  However, ISTM that an FK provides
> additional information: we know that, modulo filter conditions on B,
> every A row will match *exactly* one B row, which can prevent us
> from *underestimating* the size of the join product.  A unique index
> can't do that.

Very good point, but unless I'm missing something, that is not what the
current patch does.  I'm not sure offhand whether that's an important
estimation failure mode currently, or if it is whether it would be
sensible to try to implement that rule entirely separately from the "at
most one" aspect, or if it isn't sensible, whether that's a sufficiently
strong reason to confine the "at most one" logic to working only with FKs
and not with bare unique indexes.

On the whole, that seems like another argument why this needs more time.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Tomas Vondra

Hi,

On 05/04/2016 08:54 PM, Tom Lane wrote:

David Rowley  writes:

On 4 May 2016 at 09:18, David Rowley  wrote:

On 4 May 2016 at 02:10, Tomas Vondra  wrote:

There are probably a few reasonably simple things we could do - e.g. ignore
foreign keys with just a single column, as the primary goal of the patch is
improving estimates with multi-column foreign keys. I believe that covers a
vast majority of foreign keys in the wild.



I've spent a few hours looking at this and I've come up with the
attached patch, which flags each ForeignKeyOptInfo to say whether its
possible to be referenced in any join condition, with the logic that
if the referenced relation is in the simple_rte_array, then it could
be referenced.


My fundamental problem with the FK-selectivity patch is that it looks
an awful lot like a preliminary proof-of-concept that got committed.

I do not like the basic design: it's about as brute-force as could
possibly be.  It adds code that is executed:

  * at least once per join relation created (hence, significantly more than
    the number of rels in the query; see also get_joinrel_parampathinfo)
    * for each inner relation in the initial input joinrel pair
      * for each outer relation in the initial joinrel pair
        * for each foreign key constraint on this inner and outer rel
          * for each key column in that FK
            * for each join qual for the current input joinrel pair
              * for each member of the relevant EquivalenceClass

This is at least O(N^3) in the number of baserels in the query, not
to mention the other multipliers. I'm not very impressed by tests
that scale only one of the multipliers (like the number of FK
constraints); where the pain is going to come in is when all of these
factors are large.

I spent some time trying to make a test case that was impossibly
slow, without any really impressive result: I saw at most about a 25%
growth in planning time, for a 27-way join with one or two foreign
keys per table. I noted however that with a simple FROM list of
tables, you don't really get the full force of the combinatorial
explosion, because join_search_one_level prefers to generate
left-deep trees first, and so the first creation of a given joinrel
is always N left-side rels against 1 right-side rel, causing the
second level of looping to always iterate just once. (GEQO behaves
likewise, I think.) I spent a little time trying to devise join order
constraints that would result in a lot of high-level joinrels being
formed with many relations on both sides, which would cause both of
the second and third levels to iterate O(N) times not just once. I
didn't have much luck, but I have zero confidence that our users
won't find such cases.


Don't know. We haven't found such an extreme example either.



The reason it's so brute force is that it's rediscovering the same
facts over and over. In all simple (non-outer-join) cases, the only
clauses that are of any interest are derived from EquivalenceClasses,
and all that you really need to know is "are the vars mentioned on
each side of this FK present in the same EquivalenceClass?". ISTM
that could be determined once per FK per query and cached in or near
the EC, not expensively rediscovered at each level of joining. I'm
not sure whether it'd be worth considering non-EC-derived equalities
(ie, outer join clauses) at all, and note that the current patch
fails to do so anyway; see below.


I'm not sure it's that simple, as it also depends on the join order, so 
if we only detect that once per query we'll get incorrect estimates for 
the intermediate results. I think the cache approach proposed by David a 
few days ago is the best option.




My other design-level complaint is that basing this on foreign keys is
fundamentally the wrong thing.  What actually matters is the unique index
underlying the FK; that is, if we have "a.x = b.y" and there's a
compatible unique index on b.y, we can conclude that no A row will match
more than one B row, whether or not an explicit FK relationship has been
declared.  So we should drive this off unique indexes instead of FKs,
first because we will find more cases, and second because the planner
already examines indexes and doesn't need any additional catalog lookups
to get the required data.  (IOW, the relcache additions that were made in
this patch series should go away too.)


No, that's not what the patch does, and it can't use unique indexes 
instead. The patch improves estimation with multi-column foreign keys, 
when the join matches the constraint. Currently we treat the conditions 
as independent and multiply the estimated selectivities, completely 
ignoring the guarantee provided by the FK, which leads to significant 
under-estimates.


So when you have:

CREATE TABLE t1 (a1 INT, a2 INT, primary key (a1,a2));
CREATE TABLE t2 (b1 INT, b2 INT,
 FOREIGN KEY (b1,b2) REFERENCES t1(a1,a2));

and do

   SELECT * FROM t1, t2 WHERE a1=b1 AND a2=b2;

the patch

Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Robert Haas
On Wed, May 4, 2016 at 2:54 PM, Tom Lane  wrote:
> My other design-level complaint is that basing this on foreign keys is
> fundamentally the wrong thing.  What actually matters is the unique index
> underlying the FK; that is, if we have "a.x = b.y" and there's a
> compatible unique index on b.y, we can conclude that no A row will match
> more than one B row, whether or not an explicit FK relationship has been
> declared.  So we should drive this off unique indexes instead of FKs,
> first because we will find more cases, and second because the planner
> already examines indexes and doesn't need any additional catalog lookups
> to get the required data.  (IOW, the relcache additions that were made in
> this patch series should go away too.)

Without prejudice to anything else in this useful and detailed review,
I have a question about this.  A unique index proves that no A row
will match more than one B row, and I agree that deriving that from
unique indexes is sensible.  However, ISTM that an FK provides
additional information: we know that, modulo filter conditions on B,
every A row will match *exactly* one B row, which can prevent us
from *underestimating* the size of the join product.  A unique index
can't do that.
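
To make that concrete, here is a minimal SQL sketch of the difference
(not taken from the thread; table names are invented here):

CREATE TABLE b (y int PRIMARY KEY);
INSERT INTO b SELECT g FROM generate_series(1, 100) g;

-- unique index only: each a_nofk row matches at most one b row, but the
-- rows with x in 101..200 match none, so the join returns 100 rows
CREATE TABLE a_nofk (x int);
INSERT INTO a_nofk SELECT g FROM generate_series(1, 200) g;

-- declared FK: every a_fk row is guaranteed to match exactly one b row
-- (modulo filters on b), so the join returns exactly 200 rows
CREATE TABLE a_fk (x int REFERENCES b (y));
INSERT INTO a_fk SELECT (g % 100) + 1 FROM generate_series(1, 200) g;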

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-04 Thread Tom Lane
David Rowley  writes:
> On 4 May 2016 at 09:18, David Rowley  wrote:
>> On 4 May 2016 at 02:10, Tomas Vondra  wrote:
>>> There are probably a few reasonably simple things we could do - e.g. ignore
>>> foreign keys with just a single column, as the primary goal of the patch is
>>> improving estimates with multi-column foreign keys. I believe that covers a
>>> vast majority of foreign keys in the wild.

> I've spent a few hours looking at this and I've come up with the
> attached patch, which flags each ForeignKeyOptInfo to say whether it's
> possible to be referenced in any join condition, with the logic that
> if the referenced relation is in the simple_rte_array, then it could
> be referenced.

My fundamental problem with the FK-selectivity patch is that it looks
an awful lot like a preliminary proof-of-concept that got committed.

I do not like the basic design: it's about as brute-force as could
possibly be.  It adds code that is executed:

  * at least once per join relation created (hence, significantly more than
    the number of rels in the query; see also get_joinrel_parampathinfo)
    * for each inner relation in the initial input joinrel pair
      * for each outer relation in the initial joinrel pair
        * for each foreign key constraint on this inner and outer rel
          * for each key column in that FK
            * for each join qual for the current input joinrel pair
              * for each member of the relevant EquivalenceClass

This is at least O(N^3) in the number of baserels in the query, not to
mention the other multipliers.  I'm not very impressed by tests that scale
only one of the multipliers (like the number of FK constraints); where the
pain is going to come in is when all of these factors are large.

I spent some time trying to make a test case that was impossibly slow,
without any really impressive result: I saw at most about a 25% growth in
planning time, for a 27-way join with one or two foreign keys per table.
I noted however that with a simple FROM list of tables, you don't really
get the full force of the combinatorial explosion, because
join_search_one_level prefers to generate left-deep trees first, and so
the first creation of a given joinrel is always N left-side rels against 1
right-side rel, causing the second level of looping to always iterate just
once.  (GEQO behaves likewise, I think.)  I spent a little time trying to
devise join order constraints that would result in a lot of high-level
joinrels being formed with many relations on both sides, which would cause
both of the second and third levels to iterate O(N) times not just once.
I didn't have much luck, but I have zero confidence that our users won't
find such cases.
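
A rough illustration of the kind of shape meant here (not something tried
in the thread; tables t1..t8 are assumed): forcing the written join
nesting with a low join_collapse_limit yields joinrels with several
relations on both sides.

SET join_collapse_limit = 1;   -- keep the explicit JOIN nesting as written
EXPLAIN
SELECT *
FROM ((t1 JOIN t2 USING (a)) JOIN (t3 JOIN t4 USING (a)) USING (a))
     JOIN
     ((t5 JOIN t6 USING (a)) JOIN (t7 JOIN t8 USING (a)) USING (a))
     USING (a);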

The reason it's so brute force is that it's rediscovering the same facts
over and over.  In all simple (non-outer-join) cases, the only clauses
that are of any interest are derived from EquivalenceClasses, and all
that you really need to know is "are the vars mentioned on each side
of this FK present in the same EquivalenceClass?".  ISTM that could be
determined once per FK per query and cached in or near the EC, not
expensively rediscovered at each level of joining.  I'm not sure whether
it'd be worth considering non-EC-derived equalities (ie, outer join
clauses) at all, and note that the current patch fails to do so anyway;
see below.

My other design-level complaint is that basing this on foreign keys is
fundamentally the wrong thing.  What actually matters is the unique index
underlying the FK; that is, if we have "a.x = b.y" and there's a
compatible unique index on b.y, we can conclude that no A row will match
more than one B row, whether or not an explicit FK relationship has been
declared.  So we should drive this off unique indexes instead of FKs,
first because we will find more cases, and second because the planner
already examines indexes and doesn't need any additional catalog lookups
to get the required data.  (IOW, the relcache additions that were made in
this patch series should go away too.)

Aside from the design being basically wrong, the code quality seems pretty
low.  Aside from the missing IsA check that started this thread, I found
the following problems in a quick once-over of 137805f89:

Bugs in quals_match_foreign_key():

* Test for clause->opno == fkinfo->conpfeqop[i] fails to consider
cross-type operators, ie, what's in the clause might be int2 = int4
while the conpfeqop is int4 = int2.

* parent_ec check isn't really the right thing, since EC-derived clauses
might not have that set.  I think it may be adequate given that you only
deal with simple Vars, but at least a comment about that would be good.

* Come to think of it, you could probably dodge the commutator operator
problem altogether for clauses with nonnull parent_ec, because they must
contain a relevant equality operator.  (Although if it's redesigned as I
suggest above, the code path for a clause with parent_ec set would look
totally different from this anyway.)

Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-03 Thread David Rowley
On 4 May 2016 at 09:18, David Rowley  wrote:
> On 4 May 2016 at 02:10, Tomas Vondra  wrote:
>> There are probably a few reasonably simple things we could do - e.g. ignore
>> foreign keys with just a single column, as the primary goal of the patch is
>> improving estimates with multi-column foreign keys. I believe that covers a
>> vast majority of foreign keys in the wild.
>>
>> If that's deemed insufficient, we'll have to resort to a more complex
>> improvement, perhaps something akin to the cache proposed in the unijoin
>> patch. But if that's required, that's 9.7 material.
>
> I had thought that if we had a hashtable of rel OIDs which belong to
> relations with has_eclass_joins == true, then we could just skip
> foreign keys where the confrelid is not in the hashtable. Perhaps that
> could be optimised a bit more and we could have something akin to what
> predOK is for IndexOptInfo in ForeignKeyOptInfo which just gets set to
> true if the relation referenced by the foreign key is in the
> simple_rel_array. It's quite likely that if many foreign keys were
> used, then the query would have a great number of joins, and planning
> would be slow anyway.

I've spent a few hours looking at this and I've come up with the
attached patch, which flags each ForeignKeyOptInfo to say whether it's
possible to be referenced in any join condition, with the logic that
if the referenced relation is in the simple_rte_array, then it could
be referenced.

I ran some of the tests Tomas posted with 1000 FKs and a 4-way join,
with 2 join columns.

Query:
explain analyze
select * from f1
inner join f2 on f1.a = f2.a and f1.b = f2.b
inner join f3 on f1.a = f3.a and f1.b = f3.b
inner join f4 on f1.a = f4.a and f1.b = f4.b;

SET enable_fkey_estimates = on;
duration: 30 s
number of transactions actually processed: 8173
latency average: 3.671 ms
tps = 272.420508 (including connections establishing)
tps = 272.586329 (excluding connections establishing)

SET enable_fkey_estimates = off;
duration: 30 s
number of transactions actually processed: 9153
latency average: 3.278 ms
tps = 305.098852 (including connections establishing)
tps = 305.286072 (excluding connections establishing)

So there's still a 10% planner slowdown for this worst case test, but
it's far less than what it was previously with the same test case.

I just also want to add that the aim of this patch was to fix a very
real-world problem which also manifests itself in TPC-H Q9: the join
to partsupp is drastically underestimated due to the 2-column join
condition. In our test cases that caused the GROUP BY to perform a
Hash Aggregate rather than a Sort/Group Aggregate, and since we don't
spill HashAggs to disk, we get OOM for a large-scale test on
large-scale hardware.

Here's some sample EXPLAIN output from the query in question, at a
smaller scale than the 3TB test where we had issues, but it still
demonstrates the problem:

Hash Join  (cost=74686.00..597734.90 rows=2400 width=23) (actual
time=564.038..11645.047 rows=11997996 loops=1)
  Hash Cond: ((lineitem.l_suppkey = partsupp.ps_suppkey) AND
(lineitem.l_partkey = partsupp.ps_partkey))

Here the estimate is off 5000x.

The attached patch is intended to assist discussion at the moment.
Likely some naming could be better, and the code would need to find a
better home.

The patch also fixes the missing IsA OpExpr test.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


mark_useful_foreignkeys.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-03 Thread David Rowley
On 4 May 2016 at 02:10, Tomas Vondra  wrote:
> There are probably a few reasonably simple things we could do - e.g. ignore
> foreign keys with just a single column, as the primary goal of the patch is
> improving estimates with multi-column foreign keys. I believe that covers a
> vast majority of foreign keys in the wild.
>
> If that's deemed insufficient, we'll have to resort to a more complex
> improvement, perhaps something akin to the cache proposed in the unijoin
> patch. But if that's required, that's 9.7 material.

I had thought that if we had a hashtable of rel OIDs which belong to
relations with has_eclass_joins == true, then we could just skip
foreign keys where the confrelid is not in the hashtable. Perhaps that
could be optimised a bit more and we could have something akin to what
predOK is for IndexOptInfo in ForeignKeyOptInfo which just gets set to
true if the relation referenced by the foreign key is in the
simple_rel_array. It's quite likely that if many foreign keys were
used, then the query would have a great number of joins, and planning
would be slow anyway.
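
A rough sketch of that kind of pre-filtering (the fkeylist field, the
possibly_useful flag and the exact placement are assumptions here, not
the later patch):

    ListCell   *lc;

    /* Mark an FK as worth considering only if its referenced relation
     * actually appears in the query's range table; everything else can
     * be skipped before the per-join matching loops run. */
    foreach(lc, rel->fkeylist)
    {
        ForeignKeyOptInfo *fkinfo = (ForeignKeyOptInfo *) lfirst(lc);
        int         rti;

        fkinfo->possibly_useful = false;

        for (rti = 1; rti < root->simple_rel_array_size; rti++)
        {
            RangeTblEntry *rte = root->simple_rte_array[rti];

            if (rte != NULL &&
                rte->rtekind == RTE_RELATION &&
                rte->relid == fkinfo->confrelid)
            {
                fkinfo->possibly_useful = true;
                break;
            }
        }
    }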


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-03 Thread Tomas Vondra

Hi,

On 05/02/2016 09:18 PM, Robert Haas wrote:

On Sat, Apr 30, 2016 at 1:35 PM, Tom Lane  wrote:

Julien Rouhaud  writes:

On 29/04/2016 18:05, Tom Lane wrote:

Julien Rouhaud  writes:

The segfault is caused by quals_match_foreign_key() calling get_leftop()
and get_rightop() on a ScalarArrayOpExpr node.

I'm not sure that assuming this compatibility is the right way to fix
this though.



It certainly isn't.



Agreed. New attached patch handles explicitly each node tag.


No, this is completely nuts.  The problem is that quals_match_foreign_key
is simply assuming that every clause in the list it's given is an OpExpr,
which is quite wrong.  It should just ignore non-OpExpr quals, since it
cannot do anything useful with them anyway.  There's a comment claiming
that non-OpExpr quals were already rejected:

 * Here since 'usefulquals' only contains bitmap indexes for quals
 * of type "var op var" we can safely skip checking this.

but that doesn't appear to have anything to do with current reality.

While this in itself is about a two-line fix, after reviewing
137805f89acb3611 I'm pretty unhappy that it got committed at all.
I think this obvious bug is a good sign that it wasn't ready.
Other unfinished aspects like invention of an undocumented GUC
don't leave a good impression either.

Moreover, it looks to me like this will add quite a lot of overhead,
probably far more than is justified, because clauselist_join_selectivity
is basically O(N^2) in the relation-footprint of the current join --- and
not with a real small multiplier, either, as the functions it calls
contain about four levels of nested loops themselves.  Maybe that's
unmeasurable on trivial test cases but it's going to be disastrous in
large queries, or for relations with large numbers of foreign keys.

I think this should be reverted and pushed out to 9.7 development.
It needs a significant amount of rewriting to fix the performance
issue, and now's not the time to be doing that.


If this gets reverted, then probably
015e88942aa50f0d419ddac00e63bb06d6e62e86 should also be reverted.


Probably. I think the code is fine / correct, but as the FK estimation 
patch was the only user, there's not much point in keeping it if that 
gets reverted.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-03 Thread Tomas Vondra

Hi,

On 04/30/2016 07:35 PM, Tom Lane wrote:

Julien Rouhaud  writes:

On 29/04/2016 18:05, Tom Lane wrote:

Julien Rouhaud  writes:

The segfault is caused by quals_match_foreign_key() calling get_leftop()
and get_rightop() on a ScalarArrayOpExpr node.

I'm not sure that assuming this compatibility is the right way to fix
this though.



It certainly isn't.



Agreed. New attached patch handles explicitly each node tag.


No, this is completely nuts.  The problem is that quals_match_foreign_key
is simply assuming that every clause in the list it's given is an OpExpr,
which is quite wrong.  It should just ignore non-OpExpr quals, since it
cannot do anything useful with them anyway.  There's a comment claiming
that non-OpExpr quals were already rejected:

 * Here since 'usefulquals' only contains bitmap indexes for quals
 * of type "var op var" we can safely skip checking this.

but that doesn't appear to have anything to do with current reality.


Yes, I agree - there should be a check that the node is OpExpr in 
quals_match_foreign_key. This is clearly a bug :-(
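
The minimal fix along those lines would presumably be something like this
at the top of the qual-scanning loop (variable names assumed, not the
committed code):

    /* Ignore anything that is not a simple binary OpExpr; only those can
     * be matched against an FK column pair via get_leftop()/get_rightop(). */
    if (!IsA(clause, OpExpr))
        continue;
    if (list_length(((OpExpr *) clause)->args) != 2)
        continue;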




While this in itself is about a two-line fix, after reviewing
137805f89acb3611 I'm pretty unhappy that it got committed at all.
I think this obvious bug is a good sign that it wasn't ready.
Other unfinished aspects like invention of an undocumented GUC
don't leave a good impression either.


The GUC is meant to make testing during development easier. I hadn't 
realized it got committed, but I assume Simon plans to remove it before 
the final release. Can't check with Simon right now, though, as he's 
out of the office this week.


In any case, I do agree the GUC should have been documented better, and 
we should probably put it on the TODO so that it gets removed before the 
actual release.




Moreover, it looks to me like this will add quite a lot of overhead,
probably far more than is justified, because
clauselist_join_selectivity is basically O(N^2) in the
relation-footprint of the current join --- and not with a real small
multiplier, either, as the functions it calls contain about four
levels of nested loops themselves. Maybe that's unmeasurable on
trivial test cases but it's going to be disastrous in large queries,
or for relations with large numbers of foreign keys.

>

That depends on a lot of factors, I guess. Attached are two SQL scripts 
that create 5 tables (f1, f2, f3, f4, f5) and then create N foreign keys 
on each of them. The foreign keys reference other tables, which means 
the loops won't terminate early (after finding the first match) when 
joining the first 5 tables, which makes this the worst case. And then we 
join different numbers of tables (2, 3, 4 or 5) and do explain analyze 
to measure planning time.


The first script (fk-test.sql) does joins on 2 columns, fk-test-6.sql 
does joins on 6 columns (so that we also know how the number of columns 
affects the planning time).
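
For readers without the attachments, a minimal sketch of a setup in that
spirit (layout and names assumed here, not the actual fk-test.sql):

    -- five join tables, each with N foreign keys pointing at unrelated
    -- dummy tables, so the FK-matching loops never exit early
    DO $$
    BEGIN
        FOR i IN 1..5 LOOP
            EXECUTE format('CREATE TABLE f%s (a int, b int)', i);
        END LOOP;
        FOR j IN 1..100 LOOP                              -- N = 100
            EXECUTE format(
                'CREATE TABLE dummy%s (a int, b int, PRIMARY KEY (a, b))', j);
            FOR i IN 1..5 LOOP
                EXECUTE format(
                    'ALTER TABLE f%s ADD FOREIGN KEY (a, b) REFERENCES dummy%s',
                    i, j);
            END LOOP;
        END LOOP;
    END $$;

    EXPLAIN ANALYZE SELECT * FROM f1 JOIN f2 USING (a, b) JOIN f3 USING (a, b);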

Sum of planning time for 100 queries (in milliseconds) with 2 columns, 
where "2/on" means "join on 2 tables with enable_fk_estimates=on":


N      2/on  2/off  3/on  3/off  4/on  4/off  5/on   5/off
10        6      4     9      8    22     20    77      68
100      15     11    26     19    64     36   177      93
1000     97     84   217    133   516    233  1246     342

and comparison of the timings:

N       2     3     4     5
10    155%  115%  114%  113%
100   142%  138%  177%  190%
1000  116%  163%  221%  364%

And with the 6 columns:

N      2/on  2/off  3/on  3/off  4/on  4/off  5/on   5/off
10       25      7    23     20    96     82   344     297
100      21     14    65     33   233    104   735     328
1000    151     99   492    153  1603    320  4627     593

and comparison:

N       2     3     4     5
10    371%  120%  117%  116%
100   149%  196%  224%  224%
1000  152%  322%  502%  780%

So yes, the overhead is clearly measurable, no doubt about that.



I think this should be reverted and pushed out to 9.7 development.
It needs a significant amount of rewriting to fix the performance
issue, and now's not the time to be doing that.



There are probably a few reasonably simple things we could do - e.g. 
ignore foreign keys with just a single column, as the primary goal of 
the patch is improving estimates with multi-column foreign keys. I 
believe that covers a vast majority of foreign keys in the wild.
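
In code form that first idea is presumably just an early bail-out in the
FK-matching code, something like (the nkeys field name is an assumption):

    /* Single-column FKs gain little from this machinery: the standard
     * per-column join estimate is usually reasonable there, and they are
     * the vast majority of FKs, so skipping them avoids most of the
     * matching work. */
    if (fkinfo->nkeys < 2)
        continue;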


If that's deemed insufficient, we'll have to resort to a more complex 
improvement, perhaps something akin to the cache proposed in the 
unijoin patch. But if that's required, that's 9.7 material.



regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


fk-test-6.sql
Description: application/sql


fk-test.sql
Description: application/sql

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-05-02 Thread Robert Haas
On Sat, Apr 30, 2016 at 1:35 PM, Tom Lane  wrote:
> Julien Rouhaud  writes:
>> On 29/04/2016 18:05, Tom Lane wrote:
>>> Julien Rouhaud  writes:
 The segfault is caused by quals_match_foreign_key() calling get_leftop()
 and get_rightop() on a ScalarArrayOpExpr node.

 I'm not sure that assuming this compatibility is the right way to fix
 this though.
>
>>> It certainly isn't.
>
>> Agreed. New attached patch handles explicitly each node tag.
>
> No, this is completely nuts.  The problem is that quals_match_foreign_key
> is simply assuming that every clause in the list it's given is an OpExpr,
> which is quite wrong.  It should just ignore non-OpExpr quals, since it
> cannot do anything useful with them anyway.  There's a comment claiming
> that non-OpExpr quals were already rejected:
>
>  * Here since 'usefulquals' only contains bitmap indexes for quals
>  * of type "var op var" we can safely skip checking this.
>
> but that doesn't appear to have anything to do with current reality.
>
> While this in itself is about a two-line fix, after reviewing
> 137805f89acb3611 I'm pretty unhappy that it got committed at all.
> I think this obvious bug is a good sign that it wasn't ready.
> Other unfinished aspects like invention of an undocumented GUC
> don't leave a good impression either.
>
> Moreover, it looks to me like this will add quite a lot of overhead,
> probably far more than is justified, because clauselist_join_selectivity
> is basically O(N^2) in the relation-footprint of the current join --- and
> not with a real small multiplier, either, as the functions it calls
> contain about four levels of nested loops themselves.  Maybe that's
> unmeasurable on trivial test cases but it's going to be disastrous in
> large queries, or for relations with large numbers of foreign keys.
>
> I think this should be reverted and pushed out to 9.7 development.
> It needs a significant amount of rewriting to fix the performance
> issue, and now's not the time to be doing that.

If this gets reverted, then probably
015e88942aa50f0d419ddac00e63bb06d6e62e86 should also be reverted.

We need to make some decisions here PDQ.  I haven't had time to look
at this issue in any technical detail yet.  Simon, anyone else, please
weigh in.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-30 Thread Tom Lane
Julien Rouhaud  writes:
> On 29/04/2016 18:05, Tom Lane wrote:
>> Julien Rouhaud  writes:
>>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>>> and get_rightop() on a ScalarArrayOpExpr node.
>>>
>>> I'm not sure that assuming this compatibility is the right way to fix
>>> this though.

>> It certainly isn't.

> Agreed. New attached patch handles explicitly each node tag.

No, this is completely nuts.  The problem is that quals_match_foreign_key
is simply assuming that every clause in the list it's given is an OpExpr,
which is quite wrong.  It should just ignore non-OpExpr quals, since it
cannot do anything useful with them anyway.  There's a comment claiming
that non-OpExpr quals were already rejected:

 * Here since 'usefulquals' only contains bitmap indexes for quals
 * of type "var op var" we can safely skip checking this.

but that doesn't appear to have anything to do with current reality.

While this in itself is about a two-line fix, after reviewing 
137805f89acb3611 I'm pretty unhappy that it got committed at all.
I think this obvious bug is a good sign that it wasn't ready.
Other unfinished aspects like invention of an undocumented GUC
don't leave a good impression either.

Moreover, it looks to me like this will add quite a lot of overhead,
probably far more than is justified, because clauselist_join_selectivity
is basically O(N^2) in the relation-footprint of the current join --- and
not with a real small multiplier, either, as the functions it calls
contain about four levels of nested loops themselves.  Maybe that's
unmeasurable on trivial test cases but it's going to be disastrous in
large queries, or for relations with large numbers of foreign keys.

I think this should be reverted and pushed out to 9.7 development.
It needs a significant amount of rewriting to fix the performance
issue, and now's not the time to be doing that.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Julien Rouhaud
On 29/04/2016 18:05, Tom Lane wrote:
> Julien Rouhaud  writes:
>> The segfault is caused by quals_match_foreign_key() calling get_leftop()
>> and get_rightop() on a ScalarArrayOpExpr node.
> 
>> Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
>> beginning of the struct so the get_*op() work with either (as in
>> attached patch) fixes the issue.
> 
>> I'm not sure that assuming this compatibility is the right way to fix
>> this though.
> 
> It certainly isn't.
> 

Agreed. New attached patch handles explicitly each node tag.

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/src/backend/optimizer/util/clauses.c b/src/backend/optimizer/util/clauses.c
index 759566a..3b9715b 100644
--- a/src/backend/optimizer/util/clauses.c
+++ b/src/backend/optimizer/util/clauses.c
@@ -100,6 +100,7 @@ typedef struct
 	bool		allow_restricted;
 } has_parallel_hazard_arg;
 
+static List *get_opargs(const Expr *clause);
 static bool aggregates_allow_partial_walker(Node *node,
 							partial_agg_context *context);
 static bool contain_agg_clause_walker(Node *node, void *context);
@@ -197,6 +198,19 @@ make_opclause(Oid opno, Oid opresulttype, bool opretset,
return (Expr *) expr;
 }
 
+static List *
+get_opargs(const Expr *clause)
+{
+   if (IsA(clause, OpExpr))
+   return ((OpExpr *) clause)->args;
+
+   if (IsA(clause, ScalarArrayOpExpr))
+   return ((ScalarArrayOpExpr *) clause)->args;
+
+   elog(ERROR, "unrecognized node type: %d",
+   (int) nodeTag(clause));
+}
+
 /*
  * get_leftop
  *
@@ -206,10 +220,10 @@ make_opclause(Oid opno, Oid opresulttype, bool opretset,
 Node *
 get_leftop(const Expr *clause)
 {
-   const OpExpr *expr = (const OpExpr *) clause;
+   const List *args = get_opargs(clause);
 
-   if (expr->args != NIL)
-   return linitial(expr->args);
+   if (args != NIL)
+   return linitial(args);
else
return NULL;
 }
@@ -223,10 +237,10 @@ get_leftop(const Expr *clause)
 Node *
 get_rightop(const Expr *clause)
 {
-   const OpExpr *expr = (const OpExpr *) clause;
+   const List *args = get_opargs(clause);
 
-   if (list_length(expr->args) >= 2)
-   return lsecond(expr->args);
+   if (list_length(args) >= 2)
+   return lsecond(args);
else
return NULL;
 }

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Tom Lane
Julien Rouhaud  writes:
> The segfault is caused by quals_match_foreign_key() calling get_leftop()
> and get_rightop() on a ScalarArrayOpExpr node.

> Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
> beginning of the struct so the get_*op() work with either (as in
> attached patch) fixes the issue.

> I'm not sure that assuming this compatibility is the right way to fix
> this though.

It certainly isn't.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Julien Rouhaud
On 29/04/2016 13:20, Michael Paquier wrote:
> On Fri, Apr 29, 2016 at 7:25 PM, Stefan Huehner  wrote:
>> If you need any more info or testing done just let me know.
> 
> The information you are providing is sufficient to reproduce the
> problem, thanks! I have added this bug to the list of open items for
> 9.6.
> 

The segfault is caused by quals_match_foreign_key() calling get_leftop()
and get_rightop() on a ScalarArrayOpExpr node.

Reordering the common fields of OpExpr and ScalarArrayOpExpr at the
beginning of the struct so the get_*op() work with either (as in
attached patch) fixes the issue.

I'm not sure that assuming this compatibility is the right way to fix
this though.

-- 
Julien Rouhaud
http://dalibo.com - http://dalibo.org
diff --git a/src/include/nodes/primnodes.h b/src/include/nodes/primnodes.h
index 1ffc0a1..dffe129 100644
--- a/src/include/nodes/primnodes.h
+++ b/src/include/nodes/primnodes.h
@@ -468,12 +468,12 @@ typedef struct OpExpr
 	Expr		xpr;
 	Oid			opno;			/* PG_OPERATOR OID of the operator */
 	Oid			opfuncid;		/* PG_PROC OID of underlying function */
-	Oid			opresulttype;	/* PG_TYPE OID of result value */
-	bool		opretset;		/* true if operator returns set */
-	Oid			opcollid;		/* OID of collation of result */
 	Oid			inputcollid;	/* OID of collation that operator should use */
 	List	   *args;			/* arguments to the operator (1 or 2) */
 	int			location;		/* token location, or -1 if unknown */
+	Oid			opresulttype;	/* PG_TYPE OID of result value */
+	bool		opretset;		/* true if operator returns set */
+	Oid			opcollid;		/* OID of collation of result */
 } OpExpr;
 
 /*
@@ -511,10 +511,10 @@ typedef struct ScalarArrayOpExpr
 	Expr		xpr;
 	Oid			opno;			/* PG_OPERATOR OID of the operator */
 	Oid			opfuncid;		/* PG_PROC OID of underlying function */
-	bool		useOr;			/* true for ANY, false for ALL */
 	Oid			inputcollid;	/* OID of collation that operator should use */
 	List	   *args;			/* the scalar and array operands */
 	int			location;		/* token location, or -1 if unknown */
+	bool		useOr;			/* true for ANY, false for ALL */
 } ScalarArrayOpExpr;
 
 /*

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Michael Paquier
On Fri, Apr 29, 2016 at 7:25 PM, Stefan Huehner  wrote:
> If you need any more info or testing done just let me know.

The information you are providing is sufficient to reproduce the
problem, thanks! I have added this bug to the list of open items for
9.6.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] pg9.6 segfault using simple query (related to use fk for join estimates)

2016-04-29 Thread Stefan Huehner
Hello,
@Tomas: put you in CC as it looks related to the work on fk -> join estimates.

I did a tiny bit of testing of our software against the nightly postgresql-9.6 
debs from apt.postgresql.org

Specifically against:
ii  postgresql-9.6      9.6~~devel~20160428.1605-1~664.git23b09e1.pgdg+1  amd64  object-relational SQL database, version 9.6 server
ii  postgresql-9.6-dbg  9.6~~devel~20160428.1605-1~664.git23b09e1.pgdg+1  amd64  debug symbols for postgresql-9.6

so autobuilt from last night.

I can consistently get postgres to segfault using the following query (trimmed 
down to the shortest example that still triggers the crash):

SELECT 1
FROM ad_model_object mo
LEFT JOIN ad_menu m ON mo.ad_process_id = m.ad_process_id
AND mo.action IN ('P', 'R');

With the trigger being a FK definition from ad_menu.ad_process_id to 
ad_process.ad_process_id.

Dropping that fk makes the crash go away.

See attached files for trimmed down table definition to directly reproduce.

Backtrace ends in:
#0  get_leftop (clause=clause@entry=0x5652932e2d98)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:212
#1  0x565291ec6ba0 in quals_match_foreign_key (root=0x7fca9b3bcba0, 
fkrel=0x5652932ab980, foreignrel=0x5652932e77b8, 
joinquals=0x7fca9b3bcba0, fkinfo=0x5652932e6ce8)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:3961

so probably related to the 'Use Foreign keys to improve joins estimates' 
project from Tomas

If you need any more info or testing done just let me know.

Regards,
Stefan


pg9.6-2016-04-29-crash-reproducer.sql
Description: application/sql
Program received signal SIGSEGV, Segmentation fault.
get_leftop (clause=clause@entry=0x5652932e2d98)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:212
212 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:
 No such file or directory.
(gdb) bt
#0  get_leftop (clause=clause@entry=0x5652932e2d98)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/clauses.c:212
#1  0x565291ec6ba0 in quals_match_foreign_key (root=0x7fca9b3bcba0, 
fkrel=0x5652932ab980, foreignrel=0x5652932e77b8, 
joinquals=0x7fca9b3bcba0, fkinfo=0x5652932e6ce8)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:3961
#2  find_best_foreign_key_quals (fkrel=fkrel@entry=0x5652932ab980, 
foreignrel=foreignrel@entry=0x5652932e77b8, 
joinquals=joinquals@entry=0x7fca9b3bcba0, 
joinqualsbitmap=joinqualsbitmap@entry=0x7ffdd3f0d848, root=0x5652932b0e78)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:4110
#3  0x565291ec6e6d in clauselist_join_selectivity 
(root=root@entry=0x5652932b0e78, joinquals=joinquals@entry=0x7fca9b3bcba0, 
jointype=jointype@entry=JOIN_LEFT, sjinfo=sjinfo@entry=0x7fca9b3b0108)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:4170
#4  0x565291ec743e in calc_joinrel_size_estimate (root=0x5652932b0e78, 
outer_rows=1, inner_rows=127, sjinfo=0x7fca9b3b0108, 
restrictlist=)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:4292
#5  0x565291ecb621 in set_joinrel_size_estimates 
(root=root@entry=0x5652932b0e78, rel=rel@entry=0x7fca9b3b3a20, 
outer_rel=outer_rel@entry=0x5652932ab980, 
inner_rel=inner_rel@entry=0x5652932e77b8, sjinfo=sjinfo@entry=0x7fca9b3b0108, 
restrictlist=restrictlist@entry=0x7fca9b3bca80)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/costsize.c:3840
#6  0x565291f00d8a in build_join_rel (root=root@entry=0x5652932b0e78, 
joinrelids=joinrelids@entry=0x7fca9b3bc8d0, 
outer_rel=outer_rel@entry=0x5652932ab980, 
inner_rel=inner_rel@entry=0x5652932e77b8, sjinfo=0x7fca9b3b0108, 
restrictlist_ptr=restrictlist_ptr@entry=0x7ffdd3f0d998)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/util/relnode.c:502
#7  0x565291ed508d in make_join_rel (root=root@entry=0x5652932b0e78, 
rel1=rel1@entry=0x5652932ab980, rel2=rel2@entry=0x5652932e77b8)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/joinrels.c:714
#8  0x565291ed5abb in make_rels_by_clause_joins (other_rels=, old_rel=, root=)
at 
/build/postgresql-9.6-8aVkeq/postgresql-9.6-9.6~~devel~20160428.1605/build/../src/backend/optimizer/path/joinrels.c:274
#9  join_search_one_level (root=root@entry=0x5652932b0e78, level=lev