Gary Warner <g...@cis.uab.edu> writes:
> Recently my database stopped respecting one of my indexes, which took a query 
> that should run in "subsecond response time" and turned it into something 
> that with small data sets runs in the 7-10 minute range and with large data 
> sets runs in the 30 minute - eternity range.

> Explain Analyze tells me that what used to be an Index Scan has become a Seq 
> Scan, doing a full table scan through 140 million records.

> Any thoughts on why that happens?

I'd bet it has a lot to do with the nigh-three-orders-of-magnitude
overestimates of the numbers of matching rows.  You might find that
increasing the statistics targets for the indexed columns helps ---
I'm guessing that these particular key values are out in the long
tail of a highly skewed distribution, and the planner needs a larger MCV
list to convince it that non-MCV values will not occur very many times.
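For reference, the per-column statistics target can be raised like this (a
sketch only -- the table and column names are placeholders, not taken from
the original report):

```sql
-- Raise the statistics target for the indexed column (the default is
-- typically 100).  A larger target enlarges the MCV list and histogram
-- the planner consults when estimating row counts.
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 1000;

-- Re-gather statistics so the new target takes effect.
ANALYZE mytable;
```

After the ANALYZE, re-run EXPLAIN ANALYZE on the problem query and compare
the estimated row counts against the actual ones.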

If that is an accurate guess, then trying to force the matter with
something like enable_seqscan = off is not a good production solution,
because it will result in horrid plans whenever you decide to query
a not-so-infrequent value.
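(It can still be useful as a one-off diagnostic, scoped to a single session,
to confirm the index plan is even still available to the planner -- the query
below is a placeholder for the actual problem query:)

```sql
-- Diagnostic only, not for production: discourage seqscans in this
-- session so the planner shows what the index plan would cost.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT ...;   -- the problem query goes here
RESET enable_seqscan;
```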

                        regards, tom lane

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
