On Aug 4, 2008, at 1:04 PM, daveg wrote:

> Ok, that is a different use case where an error seems very useful.
> What about slightly extending the proposal to have the severity of
> exceeding the limit configurable too. Something like:
>
>   costestimate_limit = 1000000000         # default 0 to ignore limit
>   costestimate_limit_severity = error     # debug, notice, warning, error

I very much like this idea, and I would definitely use something like this on our production OLTP app. We had a case recently where a query joining two large tables was very fast 99.9% of the time (a few ms), but for certain rare key combinations the planner would make a poor choice and turn it into a multi-minute monster. It ran longer than the web server timeout, and the client was programmed to retry on error, essentially causing a database DoS.

The monster version of the plan had an outrageous cost estimate, many orders of magnitude higher than that of any regular app query, and would have been easy to catch with even a crudely chosen limit value.
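
(To put a number on it: the planner already prints its estimate in the top line of EXPLAIN output, which is presumably what such a limit would be checked against. Table and key names below are made up, but the output shape is the standard one:

  EXPLAIN
  SELECT *
  FROM   big_table_a a
  JOIN   big_table_b b ON b.a_id = a.id
  WHERE  a.some_key = 12345;

  -- top plan node looks something like:
  --   Nested Loop  (cost=0.00..123456789.01 rows=42 width=64)
  -- the second number after "cost=" is the total estimated cost,
  -- i.e. the value a costestimate_limit would be compared to)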

The problem was first mitigated by setting a query timeout slightly longer than the web server timeout (since results are discarded for anything that runs longer anyway), but even that was not a real fix: the client would retry on timeout and still keep the database too busy. The real solution was to not run the query at all, and it would have been far better to identify that via ERRORs in the logs than by the database becoming saturated in the middle of the day.
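
(For the record, the stopgap was nothing exotic, just the standard statement_timeout setting scoped to the web application's role; the role name and value here are invented for illustration:

  -- apply only to the web app's connections
  ALTER ROLE webapp SET statement_timeout = '35s';

  -- or, cluster-wide in postgresql.conf:
  --   statement_timeout = 35s

A per-role setting at least keeps ad-hoc and maintenance sessions out of it.)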

For our application it is far better for an expensive query to be rejected outright than to attempt to run it in vain. Just thought I'd throw that out as anecdotal support.

-Casey

