>The database is structured so that what we expect will be the
>most common queries will run as quickly as possible.
>Unfortunately, some of the queries which are expected to be run
>less often are also the longer-running queries which get data
>from lots of different tables.
Used to be the case, yes... ;)
>We're expecting 6,000 - 80,000 additional sets of data to be
>added to the database each day. Well, one of the longer searches
>does involve a freetext search - with a "LIKE '%???%'" no less.
>(God help the poor fool who searches for 'a' once the database
>has grown.) However, it's the only one that does that sort of
>thing.
I take it you are using MS SQL Server 7.0? If so, have you
considered using full-text indexing instead of wildcard searches
on these fields?
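For illustration, the difference might look like the sketch below.
(The table and column names Documents, DocID and Body are
hypothetical, and CONTAINS requires that a full-text catalog and
index have first been created for the table.)

```sql
-- A leading-wildcard LIKE cannot use a normal index, so the
-- server has to scan every row:
SELECT DocID FROM Documents WHERE Body LIKE '%invoice%'

-- With full-text indexing set up on Documents(Body), CONTAINS can
-- answer the same kind of question from the full-text index:
SELECT DocID FROM Documents WHERE CONTAINS(Body, 'invoice')
```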
>As for Anyhow2, I don't think the server has managed to hit its
>swap yet. I haven't been watching it too closely, but we'd expect
>to see a bit more HD activity on the Performance Monitor if it
>were. Instead it seems to hover around the same level of HD
>activity as it does during the search portion of the faster
>queries.
OK, I'll take your word for it then.
>As for the Finally, these longer queries can often start to monopolize SQL
>Server to the point that EVERY query gets significantly slower.
Yes, I know the case; that's why I like to use cursors instead if
I know the query will be running during interactive hours. The
benefit of a cursor is that it has a somewhat softer impact on the
server, since you can break a huge query down into several small
queries inside the database.
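As a rough sketch of that idea (Orders, OrderDetails and the date
range are hypothetical names, not your schema), a cursor walks the
driving table and runs one small, index-friendly query per key
instead of one monolithic join:

```sql
DECLARE @id INT
DECLARE order_cur CURSOR FOR
    SELECT OrderID FROM Orders
    WHERE OrderDate BETWEEN '2001-01-01' AND '2001-02-01'
OPEN order_cur
FETCH NEXT FROM order_cur INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- One small query per key instead of one giant statement.
    SELECT * FROM OrderDetails WHERE OrderID = @id
    FETCH NEXT FROM order_cur INTO @id
END
CLOSE order_cur
DEALLOCATE order_cur
```

Each small SELECT finishes quickly, so other users' queries get
scheduled in between instead of the server being monopolized by
one big statement.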
>We're trying to allow a user who has realized that their search is
>taking MUCH longer than expected to stop their search so it doesn't
>tie up the system for so long. The server in question is a Dual PIII
>733 with 512MB of RAM,
I think you should at least double the amount of memory in the
server. It seems a little too low, considering what you have told
me so far. The RDBMS I use is equipped with 1 or 2 GB (not really
sure about the exact amount :).
>and software mirrored 10000RPM LVD SCSI drives. I had to push through a
>fair bit of inertia to get the company to look at a dual-processor
>configuration, and haven't quite gotten anybody in a purchase-approval
>position to go for hardware RAID yet. Before the dual-processor
>configuration it was a single processor 866 (I think), and SQL
Hmmm, with those requirements I would have recommended them a
quad board, and 2 GB of RAM for the system. I consider a dual
system the lower limit in cases like the one you describe. If they
upgrade the CPUs, then they should also populate the server with
as much memory as they can ever squeeze into the machine.
Btw: NT allocates the CPUs in a specific way; I don't remember if
it is the first or the last one. Let's say it is the last: that
CPU is dedicated to network traffic.
Btw2: I ran a crunch benchmark against our MS SQL Server 6.5 at my
former job, a dual system, to see how the SQL server utilized the
CPUs. In theory you should see 200% CPU usage (100% on each CPU),
but in fact the CPU load stayed constant at 100%. And the amazing
thing was that this was the total sum of the load on BOTH CPUs:
if the load sank on one CPU, it rose on the other to keep this
100% total constant. What you could then observe was a "signal" on
both CPUs with its "zero" at around 50% load on each and an
amplitude of 10%; that is, both CPUs oscillated around 50% load,
rising and falling between 40% and 60%. When one CPU was at its
minimum, the other was at its maximum, and vice versa.
>Server would regularly monopolize the processor to the point that NOBODY
>else could access the application until a query was done running.
That's why batch jobs should run during night time ;) - but try
explaining that to a customer who believes they need the answers
because their customer in turn needs them. They just can't accept
waiting, and there we are again: they want one to build a
perpetual motion machine... ;)
You should consider using one (or several) cursors in this case
instead of a single big query...
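And on the original cancel question, here is an untried sketch of
the "ticket" idea (the QueryTickets table and @ticket variable are
hypothetical): record the connection's SPID just before the long
query starts, then kill exactly that SPID from a separate admin
connection.

```sql
-- On the connection about to run the long query: register a
-- ticket that maps this query to this server process.
INSERT INTO QueryTickets (Ticket, Spid, StartedAt)
VALUES (@ticket, @@SPID, GETDATE())
-- ... the long-running query then executes on this connection ...

-- On a separate admin connection, when the user presses 'cancel':
-- look up the ticket and kill only that server process. KILL
-- does not accept a variable, hence the dynamic EXEC.
DECLARE @spid INT, @sql VARCHAR(30)
SELECT @spid = Spid FROM QueryTickets WHERE Ticket = @ticket
SET @sql = 'KILL ' + CONVERT(VARCHAR(10), @spid)
EXEC (@sql)
```

Note that this only helps if each long query runs on its own
connection, so the SPID identifies a single query rather than the
shared web-server connection you described.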
> - Theo
>
>-----Original Message-----
>From: Svensson, B.A.T. [mailto:[EMAIL PROTECTED]]
>Sent: Thursday, May 17, 2001 4:29 PM
>To: Brinkman, Theodore; [EMAIL PROTECTED]
>Subject: RE: [PHP-WIN] Canceling a Query
>
>
>Dear Brinkman,
>
>Without having seen the design, my objections below might be
>misplaced, but: has the database been designed with your queries
>in mind? Even though the designer might have optimized the
>database for a specific purpose, your queries might fall outside
>that scope. This is just a first reflection on your answer.
>
>Secondly: which normal form is the database in? If he has pushed
>the "academic way" too far, the design itself might very well be
>"bad" with respect to speed. A little well-chosen redundancy here
>and there might very well speed things up dramatically.
>
>Also, introducing abstract attributes into the tables (with the
>sole purpose of being used as indexed attributes), splitting the
>data of one domain into smaller subdomains, might also help to
>increase the speed.
>
>For example, I am building a database system in which a "small"
>table holds 3.5 million rows of raw data, and it will grow to
>contain about 10 million rows (I work in "bio-informatics", and
>it will not be too long before we deal with data in the
>Terror-byte :) range, so DBMS speed is of great concern to me
>too). Anyhow, searches in this table take just milliseconds, and
>I have no reason to believe that will change much as I continue
>to populate the database. What limits the response time is not
>the RDBMS but the transfer rate of the data over the network.
>
>Anyhow:
>
>But since I haven't seen the design, it is of course very hard
>for me to track down the actual problem. I don't even know what
>kind of queries you run! For example, do they involve freetext
>searches?
>
>Anyhow2:
>
>I can identify one problem you have: a huge amount of information
>is gathered by the queries. The natural question to ask then is:
>does the server provide enough memory to handle this data in main
>memory, or does it need to swap to disk to handle the amount of
>data?
>
>However, you are dealing with an interesting problem that I
>wouldn't mind having a closer look into - unfortunately I am
>fully occupied with my database design work... ;)
>
>Finally:
>
>Maybe one also has to realize that some queries by their nature
>ARE long-running, and that a perpetual motion machine is actually
>not possible to construct. Is it possible to introduce the idea
>to the end users that some queries actually *will* take a long
>time to run?
>
>
>I hope it will work out for you in the end!
>
>Cheers,
>
> /Anders
>
>>-----Original Message-----
>>From: Brinkman, Theodore
>>[mailto:[EMAIL PROTECTED]]
>>Sent: Thursday, May 17, 2001 9:34 PM
>>To: [EMAIL PROTECTED]
>>Cc: Svensson, B.A.T.
>>Subject: RE: [PHP-WIN] Canceling a Query
>>
>>
>>Well, the queries are taking a long time because they're
>>searching through 8GB+ of database doing joins across multiple
>>tables. The guy who did the database design and indexing has
>>gotten the database to the fastest point he knows how. Any other
>>indexes he's tried so far have actually SLOWED the queries. The
>>most interesting thing is that the index which sped things up
>>the most was on a field which isn't even a key.
>>
>>Other queries exacerbate the issue, because to return the
>>desired data they MUST walk entire tables, so indexing doesn't
>>buy anything there. (For example, the 'list accounts' query.)
>>
>>The main problem is that it's an interactive application where
>>the database can grow to immense sizes, and the most common
>>searches can return huge amounts of information. The first clue
>>a user may have that they've accidentally entered the wrong
>>parameter (not invalid, just not what they meant to) is when a
>>search seems to be taking A LOT longer than usual. (i.e.: you
>>mean to run the search against items between 1/1/2001 and
>>2/1/2001, but accidentally run it between 1/1/2000 and
>>12/1/2001). We can't limit the range, because we know there are
>>perfectly valid reasons to do that same search across 7 years or
>>more, but they just aren't that common.
>>
>>We've found a way to do it by going to asynchronous queries, but
>>that requires changes to quite a few pages, and that's something
>>I'm hoping to avoid if possible.
>>
>> - Theo
>>
>>-----Original Message-----
>>From: Svensson, B.A.T. [mailto:[EMAIL PROTECTED]]
>>Sent: Thursday, May 17, 2001 3:10 PM
>>To: Brinkman, Theodore; [EMAIL PROTECTED]
>>Subject: RE: [PHP-WIN] Canceling a Query
>>
>>
>>Dear Brinkman,
>>
>>First I wonder one thing: why are the queries long-running in
>>the first place? Might it be because of improper indexing of the
>>attributes in the tables? It might be worth having a look at
>>this to see if you can increase performance. You might very well
>>be able to speed up the execution of your queries by rewriting
>>them and/or adding a few new indexes to the tables.
>>
>>I know that within SQL Server one is able to track user
>>processes, create new jobs, and delete jobs. A thought around
>>this - which is untried by me - might be to use these facilities
>>provided by SQL Server to build a kind of tracking system that
>>keeps track of the users' queries with a kind of "ticket". When
>>a user sends a cancel request, you simply forward this cancel
>>message with the proper "ticket" to SQL Server, and then you let
>>the RDBMS do the killing of the long-running process.
>>
>>However, I haven't done this myself, and I don't even know if it
>>is possible, but it might very well be worth it in your case to
>>have a look at it. In the SQL help, you might start out by
>>checking out sp_add_job and sp_delete_job.
>>
>>Cheers,
>> /Anders
>>
>>>-----Original Message-----
>>>From: Brinkman, Theodore
>>>[mailto:[EMAIL PROTECTED]]
>>>Sent: Wednesday, May 16, 2001 5:45 PM
>>>To: [EMAIL PROTECTED]
>>>Subject: [PHP-WIN] Canceling a Query
>>>
>>>
>>>I'm working on an application for work where users will be
>>>querying the database to get information back about processed
>>>documents. I'm trying to set up a way for a user to cancel a
>>>query which is taking too long. I'm using MS SQL Server 2000.
>>>The interface for the cancel is easy enough: a form containing
>>>a button that says 'cancel' and a value or set of values
>>>identifying the query being run.
>>>
>>>I've found 'KILL <SPID>', but the problem is that all the
>>>connections to the
>>>database are done through the web-server so they get the same
>>>SPID, which
>>>means that EVERY query being run by EVERY user gets killed.
>>>(Obviously not
>>>an acceptable solution.) Does anybody out there know of a way to
>>>specifically kill a single query/stored procedure? We're
>>>willing to track
>>>as many values as necessary to do it.
>>>
>>>Thanks.
>>>
>>> - Theo
>>>
>>
>
--
PHP Windows Mailing List (http://www.php.net/)
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
To contact the list administrators, e-mail: [EMAIL PROTECTED]