"Ed L." <[EMAIL PROTECTED]> writes:
> So, does this sound like we just happened to get repeatedly 
> horribly unrepresentative random samples with stats target at 
> 10?  Are we at the mercy of randomness here?  Or is there a 
> better preventive procedure we can follow to systematically 
> identify this kind of situation?

I think the real issue is that stats target 10 is too small for large
tables: the samples are just not large enough to support a decent
numdistinct estimate, which is the critical stat for cases such as this
(ie, estimating the number of hits on a value that's not in the
most-common-values list).
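(As a sketch of what to look at — table and column names here are placeholders — the planner's distinct-count estimate can be compared against reality directly:

```sql
-- The planner's estimate, as stored by ANALYZE.  A positive n_distinct
-- is an absolute count; a negative value is minus a fraction of the
-- table's rows.
SELECT n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'big_table' AND attname = 'some_col';

-- The actual number of distinct values (can be slow on a large table).
SELECT count(DISTINCT some_col) FROM big_table;
```

A large gap between the two is a good sign the sample was too small.)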

The reason the default is currently 10 is just conservatism: it was
already an order of magnitude better than what it replaced (a *single*
representative value) and I didn't feel I had the evidence to justify
higher values.  It's become clear that the default ought to be higher,
but I've still got no good fix on a more reasonable default.  100 might
be too much, or then again maybe not.

I encourage you to play around with default_statistics_target and see
what you can learn about quality of estimates vs. planning time.
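(For instance — again with placeholder names — you can raise the target either session-wide or per column before re-running ANALYZE:

```sql
-- Session-wide, affects subsequent ANALYZE runs in this session:
SET default_statistics_target = 100;
ANALYZE big_table;

-- Or per column; this setting is stored and honored by later ANALYZEs:
ALTER TABLE big_table ALTER COLUMN some_col SET STATISTICS 100;
ANALYZE big_table;

-- Then compare estimated vs. actual row counts for a problem query:
EXPLAIN ANALYZE SELECT * FROM big_table WHERE some_col = 'rare-value';
```

Watch both the estimate quality in EXPLAIN ANALYZE output and how much longer ANALYZE and planning take at the higher setting.)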

                        regards, tom lane
