On 29/06/10 04:48, Tom Lane wrote:
"Ross J. Reedstrom"<reeds...@rice.edu>  writes:
Hmm, I'm suddenly struck by the idea of having a max_cost parameter,
that refuses to run (or delays?) queries that have "too high" a cost.
That's been suggested before, and shot down on the grounds that the
planner's cost estimates are not trustworthy enough to rely on for
purposes of outright-failing a query.  If you didn't want random
unexpected failures, you'd have to set the limit so much higher than
your regular queries cost that it'd be pretty much useless.


I wrote something along these lines for Greenplum (it is probably still available in the Bizgres CVS). Yes, cost is not an ideal metric for bounding workload, but it was perhaps better than nothing at all for the case it was intended to cover.

One difficulty with gating on statement cost is that all the requisite locks have already been taken by the time you have a plan. If you then delay execution, those locks are still held, so the likelihood of deadlock increases - unless you release the locks for waiters and re-acquire them later, in which case you may need to restart the executor from scratch to cope with possible table or schema changes in the meantime.
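
To make the idea concrete, here is a minimal sketch of that sort of cost gate as a loadable module using the ExecutorStart hook. The module name, the cost_gate.max_statement_cost GUC, and the refuse-outright behaviour are illustrative assumptions (this is not the Bizgres/Greenplum code), and the hook and GUC signatures follow recent PostgreSQL releases rather than the tree current when this thread was written.

/* cost_gate.c -- illustrative sketch only */
#include "postgres.h"

#include <float.h>

#include "executor/executor.h"
#include "fmgr.h"
#include "utils/guc.h"

PG_MODULE_MAGIC;

/* 0 disables the gate */
static double max_statement_cost = 0.0;

static ExecutorStart_hook_type prev_ExecutorStart = NULL;

static void
cost_gate_ExecutorStart(QueryDesc *queryDesc, int eflags)
{
    double      cost = queryDesc->plannedstmt->planTree->total_cost;

    /* Skip plain EXPLAIN; only gate plans that will actually run. */
    if ((eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0 &&
        max_statement_cost > 0 &&
        cost > max_statement_cost)
        ereport(ERROR,
                (errcode(ERRCODE_INSUFFICIENT_RESOURCES),
                 errmsg("estimated plan cost %.0f exceeds cost_gate.max_statement_cost (%.0f)",
                        cost, max_statement_cost)));

    if (prev_ExecutorStart)
        prev_ExecutorStart(queryDesc, eflags);
    else
        standard_ExecutorStart(queryDesc, eflags);
}

void
_PG_init(void)
{
    DefineCustomRealVariable("cost_gate.max_statement_cost",
                             "Refuses queries whose estimated plan cost exceeds this value (0 disables).",
                             NULL,
                             &max_statement_cost,
                             0.0,       /* boot value: disabled */
                             0.0,       /* min */
                             DBL_MAX,   /* max */
                             PGC_SUSET,
                             0,         /* flags */
                             NULL, NULL, NULL);

    prev_ExecutorStart = ExecutorStart_hook;
    ExecutorStart_hook = cost_gate_ExecutorStart;
}

Refusing at ExecutorStart rather than delaying at least sidesteps the lock-holding problem above, since the ERROR aborts the transaction and releases the locks - but at the price of Tom's objection: the estimate has to be trusted enough to fail the query outright.
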

> Maybe it'd be all right if it were just used to delay launching the
> query a bit, but I'm not entirely sure I see the point of that.

I recall handling this with a configurable option that let such queries run only if nothing else was running. Clearly, to turn that option on you would have to be confident that no single query could bring the system down.
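
As a sketch of that variant, a hypothetical helper the hook above could call in place of its inline check is shown below. CountDBBackends() counts sessions connected to the database, including idle ones and ourselves, so it is only a crude stand-in for "nothing else is running"; a real implementation would want to look at active queries instead.

#include "postgres.h"

#include "miscadmin.h"          /* MyDatabaseId */
#include "storage/procarray.h"  /* CountDBBackends() */

/*
 * Hypothetical replacement for the inline check in the hook above: an
 * over-limit plan is still allowed through when this backend is the only
 * one connected to the database.
 */
static bool
cost_gate_should_reject(double plan_cost, double cost_limit)
{
    if (cost_limit <= 0 || plan_cost <= cost_limit)
        return false;           /* gate disabled, or plan cheap enough */

    /*
     * CountDBBackends() includes ourselves and idle sessions, hence the
     * "> 1" test and the crudeness of the approximation.
     */
    return CountDBBackends(MyDatabaseId) > 1;
}
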

Cheers

Mark
