On Mon, Jul 7, 2014 at 11:46 PM, Greg Stark <st...@mit.edu> wrote:
> On Mon, Jul 7, 2014 at 3:07 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> > I doubt it. The extra code isn't the problem so much, it's the extra
> > planning cycles spent trying to make proofs that will usually fail.
> > What you propose will create a combinatorial explosion in the number
> > of proof paths to be considered.
> Well, not necessarily. You only need to consider constraints on
> matching columns and only when there's a join column on those columns.
> So you could imagine, for example, sorting all the constraints by the
> eclass for the non-const side of their expression, then going through
> them linearly to see where you have multiple constraints on the same
> eclass.
> For what it's worth I think there is a case where this is a common
> optimization. When you have multiple tables partitioned the same way.
> Then you ideally want to be able to turn that from a join of multiple
> appends into an append of multiple joins. This is even more important
> when it comes to a parallelizing executor since it lets you do the
> joins in parallel.
Ah, right. It also matters if the foreign tables come under the
inheritance hierarchy and we want to push joins down to the foreign
servers.
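To make the earlier linear-scan idea concrete: a rough sketch, in Python
rather than planner C, of sorting constraints by the eclass of their
non-const side and then scanning once so proof attempts are only made
where an eclass actually carries more than one constraint. All names here
(`Constraint`, `candidate_proof_pairs`) are invented for illustration and
do not correspond to real Postgres structures.

```python
# Hypothetical sketch: restrict predicate-proof attempts to eclasses
# that have at least two constraints, found with one sort + one linear
# scan instead of comparing every constraint pair.
from dataclasses import dataclass
from itertools import groupby
from operator import attrgetter

@dataclass(frozen=True)
class Constraint:
    eclass: int   # equivalence class of the non-const side
    lo: int       # constant range implied by the constraint
    hi: int

def candidate_proof_pairs(constraints):
    """Return (eclass, [constraints]) only for eclasses with >= 2
    constraints, i.e. where a proof attempt could actually pay off."""
    by_eclass = sorted(constraints, key=attrgetter("eclass"))
    out = []
    for ec, group in groupby(by_eclass, key=attrgetter("eclass")):
        members = list(group)
        if len(members) > 1:
            out.append((ec, members))
    return out

cons = [Constraint(1, 0, 10), Constraint(1, 20, 30), Constraint(2, 0, 5)]
pairs = candidate_proof_pairs(cons)
# only eclass 1 has two constraints worth comparing against each other
```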
> However to get from here to there I guess you would need to turn the
> join of the appends into NxM joins of every pair of subscans and then
> figure out which ones to exclude. That would be pretty nuts. To do it
> reasonably we probably need the partitioning infrastructure we've been
> talking about where Postgres would know what the partitioning key is
> and the order and range of the partitions so it can directly generate
> the matching subjoins in less than n^2 time.
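Right. Once the planner knows the partition key and the sorted bounds on
both sides, pairing the partitions becomes a merge scan rather than an
NxM search. A toy sketch, assuming non-overlapping ranges sorted by lower
bound (the tuple layout and names are illustrative, not real planner
data structures):

```python
# Sketch: generate the matching subjoins of two partitioned relations
# in O(n + m) by merging the two sorted range lists, instead of testing
# all n * m partition pairs.

def matching_subjoins(parts_a, parts_b):
    """parts_a/parts_b: lists of (lo, hi, name) sorted by lo, with
    non-overlapping half-open ranges [lo, hi).  Emit (name_a, name_b)
    pairs whose key ranges intersect."""
    pairs = []
    i = j = 0
    while i < len(parts_a) and j < len(parts_b):
        lo_a, hi_a, name_a = parts_a[i]
        lo_b, hi_b, name_b = parts_b[j]
        if lo_a < hi_b and lo_b < hi_a:   # ranges overlap: one subjoin
            pairs.append((name_a, name_b))
        # advance whichever partition ends first
        if hi_a <= hi_b:
            i += 1
        else:
            j += 1
    return pairs

a = [(0, 10, "a1"), (10, 20, "a2"), (20, 30, "a3")]
b = [(0, 10, "b1"), (10, 20, "b2"), (20, 30, "b3")]
# matching_subjoins(a, b) -> [("a1","b1"), ("a2","b2"), ("a3","b3")]
```

With identically-partitioned tables this yields exactly one subjoin per
partition, which is also the shape a parallelizing executor would want.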