Hi Chris,
There may be a better way to do this,
but many developers run a SQL COUNT()
over the desired statement to determine
how many records it would return before
actually executing the statement and
fetching all of the records.
The developer should have an understanding
of the network resources being used, and
set performance ranges accordingly.
If your LAN can handle queries that
return 1000-record recordsets,
then when the count is 1000-2500 records,
advise the user that performance
will be poor.
When the count is 2500-10000, warn that it
will be extremely poor. When the count is
over 10000, reject the query outright.
Obviously these numbers are only for
conversation's sake; you will need
to determine the best ranges for your situation.
This may not be feasible in your situation;
I do not know.
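The count-first pattern above might look something like this. This is only a
minimal sketch in Python, using an in-memory SQLite database to stand in for
your real server; the table, the thresholds, and the screen_query() helper are
all illustrative, not anything from the thread:

```python
import sqlite3

# Illustrative thresholds only; tune them for your own LAN and server.
WARN_AT, SEVERE_AT, REJECT_AT = 1000, 2500, 10000

def screen_query(conn, sql):
    """Count the rows a query would return, then decide whether to run it.

    `sql` is the power-user's arbitrary SELECT (as in the scenario from
    the thread), so it is wrapped directly in a COUNT(*) subquery.
    """
    count = conn.execute(f"SELECT COUNT(*) FROM ({sql})").fetchone()[0]
    if count > REJECT_AT:
        return "reject", count
    if count > SEVERE_AT:
        return "warn-severe", count
    if count > WARN_AT:
        return "warn", count
    return "ok", count

# Demo against an in-memory database with 3000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(3000)])

verdict, count = screen_query(conn, "SELECT * FROM t")
print(verdict, count)  # prints: warn-severe 3000
```

Note the COUNT(*) itself still has to scan whatever the query touches, so
this adds a round trip; it saves you the result-transfer cost, not the
server-side work.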
Hope This Helps,
Bob
----- Original Message -----
From: Chris Mungall
Sent: 4/5/2004 2:11:36 PM
To: [EMAIL PROTECTED]
Subject: preventing killer queries
>
> I would like to provide power-users with a CGI interface to a database;
> this interface would allow arbitrary SQL. I would also like to be able to
> throttle queries that take too long to stop the db server getting backed
> up with killer queries.
>
> Right now I am doing this:
>
> my $tag = "kill$$"."TAG";
> my $tagf = "/tmp/$tag";
> system("touch $tagf && chmod 777 $tagf && sleep 15 &&
> test -f $tagf && kill -9 $$ && rm $tagf &");
>
> # execute query
> $results = $dbh->do(...)
>
> # inactivate killer
> system("rm $tagf &");
>
> # print results....
>
> However, this isn't much good as it just kills the CGI process, the killer
> query carries on running on the db server.
>
> I have thought about extending my hacky shell kill pipe to also run
> mysqladmin -kill and whatever the postgresql equivalent is, but then
> thought:
>
> - way too messy and hacky
> - someone must have solved this one already
>
> However, I couldn't find any module for dealing with this. Any
> suggestions?
>
> Cheers
> Chris
>
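P.S. On the original problem of the query carrying on server-side after the
client dies: the cleaner shape is a watchdog that cancels the query on the
database side rather than killing the CGI process. A sketch in Python, using
sqlite3's Connection.interrupt() (which is documented as safe to call from
another thread) to stand in for the server-side cancel; for MySQL the analogous
call is KILL QUERY, and for PostgreSQL pg_cancel_backend(). The
run_with_timeout() helper is illustrative:

```python
import sqlite3
import threading

def run_with_timeout(conn, sql, seconds):
    """Run a query, cancelling it database-side if it exceeds `seconds`.

    The timer thread calls conn.interrupt(), which aborts the statement
    inside the database engine itself, not just the client process.
    """
    timer = threading.Timer(seconds, conn.interrupt)
    timer.start()
    try:
        return conn.execute(sql).fetchall()
    finally:
        timer.cancel()  # query finished in time; disarm the watchdog

conn = sqlite3.connect(":memory:")
rows = run_with_timeout(conn, "SELECT 1 + 1", 5.0)
print(rows)  # prints: [(2,)]
```

A query that overruns the timeout raises sqlite3.OperationalError
("interrupted"), which the CGI script can catch and report to the user.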