On 11/6/17, 9:21 AM, "Justin Pryzby" <pry...@telsasoft.com> wrote:
>     see if statistics improve:
>     SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV,
>     tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv
>     FROM pg_stats WHERE tablename~'customers_customer' AND attname='id'
>     GROUP BY 1,2,3,4,5 ORDER BY 1;

On Mon, Nov 06, 2017 at 09:12:01PM +0000, Adam Torres wrote:
> I changed the statistics on av.customer_id as suggested and the number
> returned by pg_stats went from 202,333 to 904,097.

Do you mean n_distinct?  It'd be useful to see the output of that query on
pg_stats.  Also, I don't think we've seen \d output for the tables (or at least
the joined columns), or the full query.

> There are 11.2 million distinct customer_ids on the 14.8 million vehicle
> records.

If there are that many distinct ids, updating stats won't help the rowcount
estimate (and could even hurt): the MCV list can store at most 10000
most-common values, which covers only a tiny fraction of 11.2 million.

Are there as many distinct values for cc.id ?
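A quick way to check both the planner's idea and the real number; "cc" and
"id" here are just the aliases from this thread, substitute your actual
table/column names:

```sql
-- Planner's estimate from the last ANALYZE.  Negative values mean a
-- fraction of the rowcount (e.g. -1 means "all values distinct"):
SELECT n_distinct FROM pg_stats
WHERE tablename = 'cc' AND attname = 'id';

-- Exact count for comparison (can be slow on a large table):
SELECT count(DISTINCT id) FROM cc;
```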

I would try to reproduce the rowcount problem with a minimal query:

explain analyze SELECT FROM av JOIN cc ON av.customer_id=cc.id; --WHERE ...

Maybe the rows estimate is okay for some values and not for others, so you may
need to try various WHERE clauses (joining additional tables if need be... but
without reimplementing the whole query).

I just noticed there are two conditions on dealer_id, one from table av and one
from table cc_1.  It seems likely those are correlated, non-independent
conditions, but postgres probably doesn't know that (unless it's using the
PG 9.6 foreign-key join estimation, or PG 10 multivariate statistics).
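Note that PG 10's CREATE STATISTICS only covers correlations between columns
of a single table, so it wouldn't directly help with av.dealer_id vs
cc_1.dealer_id; for reference, it would look something like this if the two
correlated columns lived in one table (statistics name here is made up):

```sql
-- PG 10+: tell the planner two columns of ONE table are dependent
CREATE STATISTICS av_dealer_dep_stats (dependencies)
    ON dealer_id, customer_id FROM av;
ANALYZE av;
```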

As a test, you could try dropping one of those conditions, or a hacky change
like ROW(av.dealer_id, cc_1.dealer_id)=ROW('EC000079', 'EC000079'), which
postgres estimates as no more selective than a single equality test.  BTW, the
selectivity logic this is based on is all in
src/backend/utils/adt/selfuncs.c.
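Concretely, the rewrite would be a fragment like this (a sketch only; plug it
into your real query):

```sql
-- instead of two independent conditions, which the planner multiplies:
--   WHERE av.dealer_id = 'EC000079' AND cc_1.dealer_id = 'EC000079'
-- use a single row comparison, charged as one equality test:
WHERE ROW(av.dealer_id, cc_1.dealer_id) = ROW('EC000079', 'EC000079')
```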


Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)