As Tom has noted, the most dramatic performance enhancements typically
come from a change in the strategy or algorithm used.  In my experience
you get better results by looking for ways to accomplish the end result
with fewer actions than by micro-optimizing the individual actions.


Very true. As others have pointed out, very often the problem is a
sequential search of tables assumed to be small that turn out to be
large in real situations.

My example is the DB2 CAF interface module DSNALI: it contains a loop
that checks whether a certain module has already been loaded by
sequentially searching the CDE list. This is done on every single DB2
action, for example on each fetch of a DB2 row. That works in a batch
environment where the number of modules is low. But we had a situation
at a customer's site where DSNALI was used in an environment with some
5000 modules in the CDE list. The CPU time spent in the DSNALI loop
added up to more than 5% of the overall CPU, even though this was a
region with a heavy math load; DSNALI should normally be invisible.
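
To illustrate the general pattern (not IBM's actual code, which is not
available to us): the sketch below contrasts a DSNALI-style linear scan
of the full module list on every call with checking once and caching
the answer. The module names and helper functions are hypothetical,
purely for illustration.

```python
# Hypothetical model of the problem: ~5000 entries scanned per call
# vs. scanning once and remembering the result.
modules = ["MOD%04d" % i for i in range(5000)]  # stand-in for the CDE list

def loaded_scan_every_call(name, n_calls):
    """Linear scan of the whole list on every DB2 action (DSNALI-style)."""
    hits = 0
    for _ in range(n_calls):
        if name in modules:        # O(len(modules)) each time
            hits += 1
    return hits

_cache = {}
def loaded_cached(name, n_calls):
    """Scan only on first use; later calls are a dictionary lookup."""
    hits = 0
    for _ in range(n_calls):
        if name not in _cache:
            _cache[name] = name in modules   # one O(n) scan, then cached
        if _cache[name]:
            hits += 1
    return hits
```

With n_calls in the millions (one per row fetched), the first version
does millions of 5000-entry scans while the second does exactly one,
which is why the per-call scan shows up so prominently in the CPU
profile.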

We talked to IBM about this, but IBM didn't fix it, and we were not
allowed to fix it ourselves at the customer's site (it's IBM software).
The solution in the end was to switch all those processes to RRSAF,
that is, DSNRLI; DSNRLI had no such problem.

Kind regards

Bernd



----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
