Re: [PERFORM] Performance loss upgrading from 9.3 to 9.6

2017-11-06 Thread Justin Pryzby
On 11/6/17, 9:21 AM, "Justin Pryzby" wrote: > see if statistics improve: > SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, > tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv > FROM pg_stats WHERE
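A minimal sketch of the statistics check quoted above. The original message is cut off in this preview, so the WHERE filter and the table/column names below are placeholders rather than what was actually posted:

    SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) AS frac_mcv,
           tablename,
           attname,
           n_distinct,
           array_length(most_common_vals, 1) AS n_mcv
    FROM   pg_stats
    WHERE  tablename = 'vehicles'      -- placeholder table name
    AND    attname = 'customer_id';    -- placeholder column name

frac_mcv here is the fraction of rows covered by the column's most-common-values list; if it is low and n_distinct is badly underestimated, raising the column's statistics target is the usual next step.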

Re: [PERFORM] Performance loss upgrading from 9.3 to 9.6

2017-11-06 Thread Adam Brusselback
> It has now been decided to try upgrading to 9.4 as that is the minimum to > support Django 1.11 (which we are trying to upgrade a backend service to). > The hope is that whatever feature we have not configured properly in 9.6 is not > present in 9.4. It's entirely possible whatever is causing your

Re: [PERFORM] Performance loss upgrading from 9.3 to 9.6

2017-11-06 Thread Adam Torres
Justin, Thanks for the reply. I changed the statistics on av.customer_id as suggested and the number returned by pg_stats went from 202,333 to 904,097. There are 11.2 million distinct customer_ids on the 14.8 million vehicle records. Rerunning the query showed no significant change in time
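For reference, a sketch of the kind of change described above ("changed the statistics on av.customer_id"): raise the per-column statistics target and re-ANALYZE so pg_stats is rebuilt from a larger sample. "av" is only a query alias, so the table name below is a placeholder, and 1000 is just an example target:

    ALTER TABLE vehicles ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE vehicles;

    SELECT n_distinct, array_length(most_common_vals, 1) AS n_mcv
    FROM   pg_stats
    WHERE  tablename = 'vehicles'
    AND    attname = 'customer_id';

With 11.2 million distinct values in 14.8 million rows, even the improved estimate (904,097) is still roughly an order of magnitude low, which would be consistent with the plan, and therefore the runtime, not changing.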

Re: [PERFORM] Index-Advisor Tools

2017-11-06 Thread Baron Schwartz
On Tue, Oct 31, 2017 at 8:06 PM Julien Rouhaud wrote: > On Tue, Oct 31, 2017 at 8:25 PM, Alexandre de Arruda Paes > wrote: > > I will be very happy with a tool (or a stats table) that shows the most > > searched values from a table (since a statistic
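This is not the per-value statistic being asked for, but a rough starting point is pg_stat_statements (if installed and preloaded), which shows which normalized queries touch a table most often and so narrows down where an index advisor would need to look. The table name in the filter is a placeholder:

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    SELECT calls,
           round(total_time::numeric, 1) AS total_ms,
           query
    FROM   pg_stat_statements
    WHERE  query ILIKE '%vehicles%'   -- placeholder table of interest
    ORDER  BY calls DESC
    LIMIT  20;

Statistics on the actual values being filtered on, and how often, need a separate extension such as pg_qualstats rather than anything in core.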

Re: [PERFORM] Performance loss upgrading from 9.3 to 9.6

2017-11-06 Thread Justin Pryzby
On Mon, Nov 06, 2017 at 01:18:00PM +, Adam Torres wrote: > Good morning all, > > We have a problem with performance after upgrading from 9.3 to 9.6 where > certain queries take 9 times longer to run. On our initial attempt to > upgrade, we noticed the system as a whole was taking longer to
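For context, the usual first diagnostic on this list is to capture the plan with timing and buffer counts on both the old and new clusters so the row estimates can be compared side by side. A sketch, with a stand-in query since the real one is cut off in this preview:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM   vehicles v
    JOIN   customers c ON c.id = v.customer_id;   -- placeholder for the slow query

The interesting differences are usually in estimated versus actual row counts and in the join strategy chosen (nested loop versus hash or merge join).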

[PERFORM] Performance loss upgrading from 9.3 to 9.6

2017-11-06 Thread Adam Torres
Good morning all, We have a problem with performance after upgrading from 9.3 to 9.6 where certain queries take 9 times longer to run. On our initial attempt to upgrade, we noticed the system as a whole was taking longer to run through normal daily processes. The query with the largest run
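One common cause of exactly this pattern after an upgrade: pg_upgrade does not carry planner statistics over to the new cluster, so until ANALYZE has been run (pg_upgrade generates an analyze_new_cluster.sh script for this, and vacuumdb --all --analyze-in-stages does the same), queries are planned against default estimates and can pick much worse plans. A quick fix and check, per database:

    -- Rebuild planner statistics for the current database.
    ANALYZE;

    -- Tables never analyzed on the new cluster sort first.
    SELECT relname, last_analyze, last_autoanalyze
    FROM   pg_stat_user_tables
    ORDER  BY greatest(last_analyze, last_autoanalyze) NULLS FIRST
    LIMIT  20;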