> I'm probably not good enough, technically, to contribute much here,
> but nevertheless I would like to share my experiences. We have been
> running Classic (still 1.5! in the process of migrating to 2.5) for
> many years now, serving 6 databases concurrently used by some 50 users
> and 3 different applications.
>
> Database sizes are not too large, up to 1 GB; the biggest queries
> process some 200,000 records, and some stored procedures process some
> 4-5 million records.
>
> That's up to 6*50*3 == 900 concurrent processes on the CS (yes, plus
> the mother process, plus 2 for gbak at times, plus a couple of
> zombies). A realistic average is some 300 concurrent processes at any
> time.
>
> Still on 1.5, my options for hunting down the regular slowdowns are
> quite limited in comparison with the wonderful things possible today.
>
> But I have been through this often enough, using SysMon tools on the
> server (Windows Server 2003).
>
> Almost always, the cause of a slowdown is 1 or 2 users running complex
> reports. At such times, those 1 or 2 users are using up to 80% of the
> CPU (50% max per process) and causing a thousand times as many disk
> writes as all the other 298 processes together.
>
> But while the queries of these 2 also run quite slowly, the effect on
> all the other connections is much more drastic. You might say that the
> server is just not idle enough to serve the 298 other connections in a
> decent way, because of just those 1 or 2.
>
> The effects are worse if the database has not been backed up and
> restored for some time (2 months), and/or if "the GAP" is unusually big.
>
> Over the years I successfully managed to improve the situation:
> - revised all read-only queries to indeed run read-only instead of
> read/write
> - close connections as soon as possible
> - avoid rollbacks as much as possible
> - rethought and redesigned the transaction management in the
> applications
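
The first item maps directly onto Firebird's transaction parameters. A minimal sketch of running a report in a read-only, read-committed transaction (the SELECT is a hypothetical report query; from Firebird 2.0 on, such transactions are "pre-committed" and do not hold back garbage collection, so they cannot widen "the GAP" mentioned above):

```sql
-- Start the report in a read-only, read-committed transaction.
-- From Firebird 2.0 on, such a transaction is pre-committed and does
-- not hold up the Oldest Active Transaction while the report runs.
SET TRANSACTION READ ONLY
    ISOLATION LEVEL READ COMMITTED RECORD_VERSION;

-- Hypothetical long-running report query:
SELECT CUSTOMER_ID, SUM(AMOUNT)
FROM ORDERS
GROUP BY CUSTOMER_ID;

COMMIT;
```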

Besides SQL tuning for short-term improvements, proper client transaction 
management is a MUST for every Firebird-based client application and the 
top hitter for mid/long-term improvements.

Thus, the Firebird 2013 tour dedicated to Firebird transactions is a 
pretty good move, IMHO:
http://www.firebirdsql.org/en/firebird-conference-tour-2013/


> In addition, using new technologies for the applications made it
> possible to reduce overall server load dramatically (e.g. ADO.NET).
>
> I changed one long-running report only recently, and now it runs in
> 40 seconds instead of 10 minutes. The reduced time is not only good
> for the person running the report, but also for the server and thereby
> for all other users.
>
> Somewhere along the way, I *also* tried to change the architecture to
> SuperServer, but I can assure you that this only made things much
> worse.
>
> Today, we only very rarely still have problems with slowdowns. And I
> even have the database sweeping at normal intervals.
>
> What gives?
> If you want to analyse what's going on and what you need to do to
> improve it, don't just look at the queries which are slow, but most of
> all at those which are running concurrently! Rethinking transaction
> management and application technology might bring much, much more
> speed than tweaking any of the server parameters.
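
Since Firebird 2.1, the monitoring tables expose exactly that concurrent view from plain SQL, so after the 2.5 migration such slow spells can be inspected live. A sketch (column names per the MON$ tables; MON$STATE = 1 marks an actively running statement):

```sql
-- Who is running what, right now, and in which transaction.
SELECT a.MON$USER,
       s.MON$TRANSACTION_ID,
       s.MON$TIMESTAMP,
       s.MON$SQL_TEXT
FROM MON$STATEMENTS s
JOIN MON$ATTACHMENTS a ON a.MON$ATTACHMENT_ID = s.MON$ATTACHMENT_ID
WHERE s.MON$STATE = 1
ORDER BY s.MON$TIMESTAMP;
```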

Concurrent statement execution can also mean concurrent (random) I/O 
from a single rotating disk.


-- 
With regards,
Thomas Steinmaurer
http://www.upscene.com/

Professional Tools and Services for Firebird
FB TraceManager, IB LogManager, Database Health Check, Tuning etc.



