On Sun, 6 Mar 2016 11:39:38 +0000
Paul Sanderson <sandersonforensics at gmail.com> wrote:

> I understand this - but, there's always a but, I still would like to
> do something. Applying the limit anyway and then telling them the
> query has been limited might be a solution.
> 
> Time is usually not an issue, but as the results are loaded into a
> grid for display, memory can be.

So execute the query, and start displaying the rows.  When you reach
"too many", ask the user if he wants to continue.  Or light up the
"next page" button, or whatever.  
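Something along those lines, sketched with Python's sqlite3 module (names
like fetch_paged and the page size are mine, purely illustrative): fetch
the rows in batches and let the caller -- a "next page" button, a prompt,
whatever -- decide after each batch whether to pull the next one.

```python
import sqlite3

PAGE = 3  # rows per batch; a real GUI would use the grid's page size

def fetch_paged(conn, sql, params=(), page=PAGE):
    """Yield batches of rows; the caller decides after each batch
    whether to fetch the next one (e.g. via a 'next page' button)."""
    cur = conn.execute(sql, params)
    while True:
        batch = cur.fetchmany(page)
        if not batch:
            return
        yield batch
```

The point being: nothing is materialized beyond the current batch, so a
runaway cross join costs you one page of memory, not the whole result.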

(That doesn't exempt you, btw, from providing for the user to cancel
long-running queries.  But you know that.)

ISTM what you really want (other than a crystal ball) is tabular access
to the query optimizer's estimations, so you could do something like

        select "estimated rows" from (
                estimate query select ... 
        ) as A;
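SQLite comes halfway, for what it's worth: prefix a statement with
EXPLAIN QUERY PLAN and the plan comes back as ordinary result rows
through the API, queryable like anything else -- though it's the plan
text you get, not a cardinality estimate in its own column. A sketch:

```python
import sqlite3

def query_plan(conn, sql, params=()):
    """Return the optimizer's plan as plain rows: EXPLAIN QUERY PLAN
    output has columns (id, parent, notused, detail)."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("create table t(a integer primary key, b)")
for row in query_plan(conn, "select * from t where a = ?", (1,)):
    print(row)  # detail column describes the SEARCH/SCAN chosen
```

You'd still have to parse the detail text to guess at output size, which
is exactly the kind of second-guessing argued against below.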

I've never seen a system that provides queryable optimizer metadata.  I
don't remember ever having read a paper on the idea, either.  But
that's not too surprising: the purpose of externalized query optimizer
output is normally to help the user understand what the optimizer will
do, not to let the program decide whether or not to do it.  

I still think you're barking up the wrong tree.  You suppose that you
can vet the queries to prevent accidental cross joins not by finding
cross joins directly, but indirectly, by finding inordinately large
outputs.  I suggest that will lead to a lot of false positives,
because mistakes like that are actually pretty rare.  You and your user
are better off if you don't second-guess him.  Just execute the query
and show him the output in a controlled way.  He'll deal with
long-running queries or supervoluminous output the way we all have since
DBMSs were invented: cancel the query, and start looking into what
happened.  

--jkl
