On Mon, Aug 04, 2008 at 11:59:03AM -0700, Josh Berkus wrote:
> Greg,
> 
> >Well that's going to depend on the application.... But I suppose there's
> >nothing wrong with having options which aren't always a good idea to use. 
> >The
> >real question I guess is whether there's ever a situation where it would 
> >be a
> >good idea to use this. I'm not 100% sure.
> 
> I can think of *lots*.   Primarily, simple web applications, where 
> queries are never supposed to take more than 50ms.  If a query turns up 
> with an estimated cost of 10000000000, then you know something's wrong; 
> in the statistics if not in the query.  In either case, that query has a 
> good chance of dragging down the whole system.
> 
> In such a production application, it is better to have false positives 
> and reject otherwise-OK queries because their costing is wrong, than to 
> let a single cartesian join bog down an application serving 5000 
> simultaneous users.  Further, with a SQL error, this would allow the 
> query rejection to be handled in a user-friendly way from the UI 
> ("Search too complex.  Try changing search terms.") rather than timing 
> out, which is very difficult to handle well.
> 
> The usefulness of this feature for interactive sessions is 
> limited to nonexistent.  It's for production applications.

Ok, that is a different use case where an error seems very useful. What
about extending the proposal slightly so that the severity of exceeding
the limit is configurable too? Something like:

   costestimate_limit = 1000000000         # default 0 to ignore limit
   costestimate_limit_severity = error     # debug, notice, warning, error

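To make the intended semantics concrete, here is a minimal sketch of the
check the two settings would drive. This is purely illustrative Python,
not PostgreSQL source: the real feature would live in the planner, and
the function and exception names here are invented for the example.

```python
# Conceptual sketch of the proposed costestimate_limit behavior.
# A limit of 0 disables the check; otherwise the configured severity
# decides whether the plan is rejected outright or merely reported.

class CostLimitExceeded(Exception):
    """Raised when severity is 'error' and the estimated cost is too high."""

def check_cost(estimated_cost, limit=0, severity="error"):
    """Return any emitted messages; raise when severity is 'error'."""
    if limit <= 0 or estimated_cost <= limit:
        return []  # limit disabled, or plan is cheap enough
    msg = (f"estimated cost {estimated_cost:g} exceeds "
           f"costestimate_limit {limit:g}")
    if severity == "error":
        raise CostLimitExceeded(msg)
    # debug / notice / warning: let the query run, but report it
    return [f"{severity.upper()}: {msg}"]
```

With severity set to error, an application could catch the rejection and
show Josh's "Search too complex" message instead of waiting on a timeout;
with warning, the same check only flags suspicious plans in the logs.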
-dg

-- 
David Gould       [EMAIL PROTECTED]      510 536 1443    510 282 0869
If simplicity worked, the world would be overrun with insects.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers