Greg Smith <g...@2ndquadrant.com> writes:
> Right, and the only thing that makes this case less painful is that you 
> don't really need the stats to be updated quite as often in situations 
> with that much data.  If, say, your stats say there's 2B rows in the 
> table but there's actually 2.5B, that's a big error, but unlikely to 
> change the types of plans you get.  Once there are millions of distinct 
> values it takes a big change for plans to shift, etc.

Normally, yeah.  I think Josh's problem is that he's got
performance-critical queries that are touching the "moving edge" of the
data set, and so the part of the stats that are relevant to them is
changing fast, even though in an overall sense the table contents might
not be changing much.
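
To put that concretely (big_table and its created_at column are made-up
names here): the row-count estimate Greg mentions lives in
pg_class.reltuples, while pg_stats.histogram_bounds shows where the
recorded distribution ends, which is exactly the edge that queries on
freshly inserted rows fall past:

    -- How far off is the stored row-count estimate?  A 25% error here
    -- rarely changes plans on a table this size.
    SELECT reltuples::bigint AS estimated_rows
      FROM pg_class
     WHERE relname = 'big_table';

    SELECT count(*) AS actual_rows
      FROM big_table;

    -- The moving-edge case: rows inserted since the last ANALYZE lie
    -- beyond the last histogram bound for created_at, so a range query
    -- over recent data gets a drastically low row estimate even though
    -- the total-count error above is small.
    SELECT histogram_bounds
      FROM pg_stats
     WHERE tablename = 'big_table'
       AND attname = 'created_at';

    EXPLAIN
    SELECT * FROM big_table
     WHERE created_at > now() - interval '1 hour';

Whether the estimate actually goes wrong depends on when autovacuum last
got around to ANALYZE-ing the table, of course.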

                        regards, tom lane
