Josh Berkus wrote:
Tom,

Wasn't this exact proposal discussed and rejected a while back?

We rejected Greenplum's much more invasive resource manager, because it created a large performance penalty on small queries whether or not it was turned on. However, I don't remember any rejection of an idea as simple as a cost limit rejection.

This would, IMHO, be very useful for production instances of PostgreSQL. The penalty for mis-rejection of a poorly costed query is much lower than the penalty for having a bad query eat all your CPU.

Greenplum introduced a way to create a cost "threshold", a bit like the way Simon was going to do "shared" work_mem. It did two things (sketched below):

1/ Counted the cost of an about-to-be-run query against the threshold, and made the query wait if running it would exhaust the threshold
2/ Aborted the query if its cost alone was greater than the threshold
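
To make the mechanics concrete, here is a minimal sketch of the admission logic. This is my own illustration, not the actual Bizgres code - the names (cost_threshold, cost_in_use, res_check_cost) and the exact threshold semantics are hypothetical:

#include <stdio.h>

/* Hypothetical knobs -- illustrative, not the real GUCs. */
static double cost_threshold = 1000.0;  /* shared cost budget */
static double cost_in_use = 0.0;        /* cost of currently running queries */

typedef enum { RES_RUN, RES_WAIT, RES_REJECT } ResDecision;

/*
 * Given a planner-estimated query cost:
 *  - reject it outright if it alone exceeds the threshold (point 2),
 *  - make it wait if admitting it now would exhaust the shared budget
 *    (point 1),
 *  - otherwise admit it and charge its cost against the budget.
 */
static ResDecision
res_check_cost(double query_cost)
{
    if (query_cost > cost_threshold)
        return RES_REJECT;
    if (cost_in_use + query_cost > cost_threshold)
        return RES_WAIT;
    cost_in_use += query_cost;
    return RES_RUN;
}

int
main(void)
{
    double costs[] = {400.0, 500.0, 200.0, 5000.0};
    const char *verdict[] = {"run", "wait", "reject"};
    int i;

    /* 400 and 500 run, 200 would exhaust the budget so it waits,
     * 5000 exceeds the threshold on its own so it is rejected. */
    for (i = 0; i < (int) (sizeof(costs) / sizeof(costs[0])); i++)
        printf("cost %.0f: %s\n", costs[i], verdict[res_check_cost(costs[i])]);
    return 0;
}

In the real scheduler the budget and wait queue of course live in shared memory, with waiters woken as running queries complete - the sketch above only shows the decision logic.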

Initially there was quite a noticeable performance penalty with it enabled, but as the guy working on it (me) redid bits and pieces, the penalty decreased massively. Note that in all cases, disabling the feature meant there was no penalty.

The latest variant of the code is in the Bizgres repository (src/backend/utils/resscheduler, I think) - some bits might be worth looking at!

Best wishes

Mark

P.S.: I'm not working for Greenplum now.
