On Fri, Jan 27, 2017 at 2:00 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> On Fri, Jan 27, 2017 at 1:39 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> Um ... what's that got to do with the point at hand?
>> So I assumed from that that the issue was that you'd have to wait for
>> the first time the irrelevant-joinqual got satisfied before the
>> optimization kicked in.
> No, the problem is that that needs to happen for *each* outer row,
> and there's only one chance for it to happen. Given the previous
> select ... from t1 left join t2 on t1.x = t2.x and t1.y < t2.y
> once we've found an x match for a given outer row, there aren't going to
> be any more and we should move on to the next outer row. But as the patch
> stands, we only recognize that if t1.y < t2.y happens to be true for that
> particular row pair. Otherwise we'll keep searching and we'll never find
> another match for that outer row. So if the y condition is, say, 50%
> selective then the optimization only wins for 50% of the outer rows
> (that have an x partner in the first place).
> Now certainly that's better than a sharp stick in the eye, and
> maybe we should just press forward anyway. But it feels like
> this is leaving a lot more on the table than I originally thought.
> Especially for the inner-join case, where *all* the WHERE conditions
> get a chance to break the optimization this way.
OK, now I understand why you were concerned. Given the size of some of
the speedups David's reported on this thread, I'd be tempted to press
forward even if no solution to this part of the problem presents
itself, but I also agree with you that it's leaving quite a bit on the
table if we can't do better.
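To make the cost concrete, here is a small simulation (hypothetical data, not PostgreSQL source) of the nested-loop behavior Tom describes: with a unique match on x, the ideal rule skips to the next outer row as soon as the x match is seen, while the patch as described skips only when the full joinqual (including t1.y < t2.y) passes, so outer rows whose y condition fails scan the whole inner relation for nothing.

```python
# Sketch: nested loop for "t1 LEFT JOIN t2 ON t1.x = t2.x AND t1.y < t2.y"
# where x is unique in t2. Compare inner tuples examined under the
# patch's skip rule (full joinqual must pass) vs. the ideal rule
# (skip once the unique x match is found).

t2 = [(x, 5) for x in range(10)]  # inner rel: x unique, y always 5

def inner_tuples_examined(t1_rows, skip_on_x_match):
    """Count inner tuples visited by the nested loop."""
    examined = 0
    for (x1, y1) in t1_rows:
        for (x2, y2) in t2:
            examined += 1
            # Ideal rule: the x match alone lets us advance.
            # Patch rule: advance only if the extra qual also holds.
            if x1 == x2 and (skip_on_x_match or y1 < y2):
                break  # move on to the next outer row
    return examined

# Outer rows chosen so the y condition is 50% selective:
# for each x, one row with y=0 (qual passes) and one with y=9 (fails).
t1 = [(x, y) for x in range(10) for y in (0, 9)]

patch = inner_tuples_examined(t1, skip_on_x_match=False)
ideal = inner_tuples_examined(t1, skip_on_x_match=True)
print(patch, ideal)  # the patch rule examines noticeably more tuples
```

With these (made-up) numbers, half the outer rows never trigger the early exit and pay a full inner scan each, which is exactly the 50%-selectivity effect described above.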