Since you have two redundant tests, the selectivity is being
double-counted, leading to a rows estimate that is too small and a
poor choice of join plan.
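
To make the effect concrete, here is a minimal sketch (the table and
column names are invented for illustration). With the same join test
written twice, the planner estimates each copy's selectivity
independently and multiplies them, squaring the selectivity and
shrinking the row estimate:

    -- hypothetical schema; the duplicated ON condition is the point
    EXPLAIN ANALYZE
    SELECT *
    FROM orders o
    LEFT JOIN customers c
           ON o.customer_id = c.id
          AND o.customer_id = c.id;  -- same test, counted twice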

I see, thanks for the explanation. I did notice, though, that in the second case, with one redundant test removed, the estimate is still low:

"Hash Left Join (cost=1449.99..2392.68 rows=2 width=714) (actual time=24.257..25.292 rows=553 loops=1)"

In that case the prediction is 2 rows, only one row more than in the previous case, yet the plan is much better and performance improved dramatically. Is there an explanation for that?

FWIW, CVS HEAD does get rid of the duplicate conditions for the common
case of mergejoinable equality operators. It's not explicitly looking
for duplicate conditions; rather, this falls out of a new method for
making transitive equality deductions.
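
As a hypothetical sketch of what such a deduction looks like (table
and column names invented): given a.x = b.y and b.y = 42, the planner
can deduce a.x = 42, and a duplicated copy of a.x = b.y simply merges
into the same set of known-equal expressions instead of being counted
a second time:

    -- hypothetical schema
    EXPLAIN
    SELECT *
    FROM a
    JOIN b ON a.x = b.y
          AND a.x = b.y   -- duplicate folds into the same deduction
    WHERE b.y = 42;       -- a.x = 42 can be inferred transitively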

This sounds very interesting, Tom. Is there documentation somewhere where I can read about this new method?
