David Osborne writes:
> We have 3 different ways we have to do the final X join condition (we use 3
> subqueries UNIONed together), but the one causing the issues is:
> (o.branch_code || o.po_number = replace(ss.order_no,' ',''))
> ... So we can see straight away that the
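[The indexes being discussed would presumably cover the two sides of that join expression; a sketch, assuming the alias `o` is branch_purchase_order and `ss` is stocksales_ib (the two tables named later in the thread), with hypothetical index names:]

```sql
-- Expression indexes matching the quoted join condition.
-- PostgreSQL only collects statistics on an expression if it is indexed,
-- which is what gives the planner anything to estimate with.
CREATE INDEX bpo_concat_idx ON branch_purchase_order ((branch_code || po_number));
CREATE INDEX ss_order_no_idx ON stocksales_ib ((replace(order_no, ' ', '')));
```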
Thanks very much Tom.
Doesn't seem to quite do the trick. I created both those indexes (or the
missing one at least)
Then I ran analyse on stocksales_ib and branch_purchase_order.
I checked there were stats held in pg_stats for both indexes, which there
were.
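[A check along these lines is presumably what's meant — note that expression-index statistics appear in pg_stats under the *index* name, not the table name; the index names here are hypothetical:]

```sql
-- Rows appear only after ANALYZE has run on the parent tables.
SELECT tablename, attname, n_distinct
  FROM pg_stats
 WHERE tablename IN ('bpo_concat_idx', 'ss_order_no_idx');
```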
But the query plan still predicts 1 row.
David Osborne writes:
> Doesn't seem to quite do the trick. I created both those indexes (or the
> missing one at least)
> Then I ran analyse on stocksales_ib and branch_purchase_order.
> I checked there were stats held in pg_stats for both indexes, which there
> were.
> But
Ok - wow.
Adding that index, I get the same estimate of 1 row, but a runtime of
~450ms.
A 23000ms improvement.
http://explain.depesz.com/s/TzF8h
This is great. So as a general rule of thumb, if I see a Join Filter
removing an excessive number of rows, I can check if that condition can be
added as an index?
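[That pattern shows up in EXPLAIN (ANALYZE) output as a large "Rows Removed by Join Filter" count; a minimal sketch using the quoted condition:]

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
  FROM branch_purchase_order o
  JOIN stocksales_ib ss
    ON o.branch_code || o.po_number = replace(ss.order_no, ' ', '');

-- A plan node like the following (counts illustrative):
--   Join Filter: ((o.branch_code || o.po_number) = replace(ss.order_no, ' ', ''))
--   Rows Removed by Join Filter: <large number>
-- is the hint that an expression index on each side may help.
```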
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of David Osborne
Sent: Tuesday, November 10, 2015 12:32 PM
To: Tom Lane
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow 3 Table Join with v bad row estimate
Hi,
We're using Postgres 9.3.10 on Amazon RDS and running into some strange
behavior that has been tough to track down and debug (partially due to the
limited admin access from RDS).
We're running a read-only query that normally takes ~10-15 min., but also
runs concurrently with several other queries.
We're hoping to get some suggestions as to improving the performance of a 3
table join we're carrying out.
(I've stripped out some schema info to try to keep this post from getting
too convoluted - if something doesn't make sense, it may be that I've
erroneously taken out something significant)
The 3