On Mar 24, 2011, at 5:23 PM, Claudio Freire wrote:
> I routinely have to work around query inefficiencies because GEQO does
> something odd - and since postgres gives me too few tools to tweak
> plans (increase statistics, use subqueries, rephrase joins, no direct
> tool before CTEs which are rather new), it becomes an art form, and it
> becomes very unpredictable and an administrative burden. Out of the
> blue, statistics change, queries that worked fine start to perform
> poorly, and sites go down.
> 
> If GEQO could detect unsafe plans and work around them automatically,
> it would be a major improvement.

This isn't limited to GEQO queries either. Every few months we'll have what 
should be a very fast query suddenly become far slower. Still on the order of 
seconds, but when you're running several of those a second and they normally 
take fractions of a second, this kind of performance degradation can easily 
bring a server to its knees. Every time this has happened, the solution has 
been to re-analyze a fairly large table; even with a default stats target of 1000, 
it's very easy for one bad ANALYZE to ruin your day. 
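For anyone hitting the same problem, the usual stopgap is to raise the per-column statistics target on the skewed column and re-analyze so the planner sees a finer histogram. A minimal sketch below; the table and column names (orders, customer_id) are hypothetical, and the target value is just an example:

```sql
-- Raise the per-column statistics target above the cluster-wide
-- default_statistics_target, then rebuild the stats for that table.
-- (orders / customer_id are placeholder names for illustration.)
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- Check when the table was last analyzed (manually or by autovacuum):
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
```

Of course, as noted above, this only reduces the odds of a bad sample; it doesn't make a pathological ANALYZE impossible.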
--
Jim C. Nasby, Database Architect                   j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)