Ok - wow.

Adding that index, I get the same estimate of 1 row, but a runtime of
~450ms, down from ~23800ms. That's roughly a 23000ms improvement.
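
For reference, the index is the composite one you suggested; a minimal
sketch of the DDL (I've let Postgres pick the index name):

    -- Composite index so the nested loop's inner indexscan can enforce
    -- o.po_id = p.po_id itself, instead of the Join Filter doing it.
    CREATE INDEX ON branch_purchase_order_products (po_id, product_code);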

http://explain.depesz.com/s/TzF8h

This is great. So as a general rule of thumb: if I see a Join Filter
removing an excessive number of rows, I should check whether that filter
condition can be pushed into an index on the table that is already being
scanned on the inner side of the join.
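
In case it helps anyone searching the archives later, a minimal sketch
of that check (the query shape, outer table, and WHERE clause are made
up for illustration; only branch_purchase_order_products, po_id, and
product_code come from the plan quoted below):

    -- Run with full instrumentation; BUFFERS shows the shared-buffer
    -- traffic that blew up in the original plan.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT p.product_code
    FROM branch_purchase_orders o          -- hypothetical outer table
    JOIN branch_purchase_order_products p
      ON p.po_id = o.po_id
    WHERE o.created_at > now() - interval '7 days';  -- hypothetical filter

    -- The red flag in the output is a Nested Loop with:
    --   Join Filter: (o.po_id = p.po_id)
    --   Rows Removed by Join Filter: <some huge number>
    -- i.e. the join condition is checked after the inner scan rather
    -- than being used by it; that's when the index above helps.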

Thanks for this!

On 10 November 2015 at 17:05, Tom Lane <t...@sss.pgh.pa.us> wrote:

>
> But taking a step back, it seems like the core problem in your explain
> output is here:
>
> >>    ->  Nested Loop  (cost=1.29..83263.71 rows=1 width=24) (actual
> time=0.196..23799.930 rows=53595 loops=1)
> >>          Join Filter: (o.po_id = p.po_id)
> >>          Rows Removed by Join Filter: 23006061
> >>          Buffers: shared hit=23217993 dirtied=1
>
> That's an awful lot of rows being formed by the join only to be rejected.
> You should try creating an index on
> branch_purchase_order_products(po_id, product_code)
> so that the po_id condition could be enforced at the inner indexscan
> instead of the join.
