On Wed, 2006-03-01 at 11:47 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > A new parameter that allows the administrator to place sensible limits
> > on the size of queries executed.
>
> As I said when the idea was floated originally, I don't think this is a
> very good idea at all. The planner's estimates are sufficiently often
> wrong that refusing to execute queries on the strength of an estimated
> cost is going to burn you in both directions.
That depends upon your view of risk. Some admins would rather wrongly abort a few queries in under a second than risk having a query run for hours before being cancelled by statement_timeout. Most end-users would agree with this, because if the answer is "no" they want to hear it quickly, so they can correct their mistake and continue.

But I don't think the estimates are sufficiently wrong to make a big difference. People with a 100GB+ table can set the limit accurately enough to avoid pointless attempts to sort that table, for example.

> Even if it were a good idea, the proposed location of the test is 100%
> wrong, as you are only guarding one path of query submission. Or were
> you intending that the restriction be trivial to subvert?

The main idea was to guard the path by which ad-hoc queries would come, but you might want to set it on a dev server also, for example. It's a discussion point whether we'd want it the way I've coded it, or whether to block other routes as well. I can see things both ways on that and have no problem changing the behaviour if that is the consensus; that change would be fairly quick.

Best Regards, Simon Riggs

---------------------------(end of broadcast)---------------------------
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
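[Editor's note: the tradeoff debated above — refusing a query up front on its estimated cost, versus letting it run and relying on statement_timeout to cancel it later — can be sketched as follows. This is an illustrative sketch only; the names (PlannedQuery, max_query_cost, check_cost_limit) are hypothetical and do not reflect PostgreSQL's actual implementation of the proposed patch.]

```python
from dataclasses import dataclass


@dataclass
class PlannedQuery:
    """A planned query together with the planner's cost estimate."""
    sql: str
    estimated_cost: float  # planner cost estimate, in arbitrary cost units


def check_cost_limit(query: PlannedQuery, max_query_cost: float) -> None:
    """Reject the query before execution if its estimate exceeds the
    admin-set ceiling. A limit of 0 disables the check entirely.

    Unlike a timeout, this fails fast: the user hears "no" in well under
    a second rather than after hours of wasted work.
    """
    if max_query_cost > 0 and query.estimated_cost > max_query_cost:
        raise RuntimeError(
            f"estimated cost {query.estimated_cost:.0f} exceeds "
            f"max_query_cost ({max_query_cost:.0f})"
        )


# An accidental unqualified sort of a huge table is refused immediately:
big_sort = PlannedQuery("SELECT * FROM huge ORDER BY col", estimated_cost=5e9)
try:
    check_cost_limit(big_sort, max_query_cost=1e6)
except RuntimeError as e:
    print(e)
```

The risk Tom points out is visible in the sketch too: since `estimated_cost` comes from the planner, a bad estimate can reject a cheap query or wave through an expensive one.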