I saw a similar effect, but with DB2.

The customer was a highly regarded bank with some very clever
application programming techniques.  For example (pseudocode):

EXEC SQL SELECT version FROM application.table WHERE application =
'PAYROLL'
IF version > ...
{
   new function
}

So by setting a flag in a table they could "instantly" roll back an
application change. Bravo.

The problem was they did this in every CICS program, so it was issued
1,000 times per transaction, and at 1,000 transactions a second this SQL
query was being executed a million times a second.  The customer did not
believe us - until we showed them the CICS trace.  They changed the code
to store the version field in an anchor block, so the lookup took tens of
instructions, and the CPU usage dropped by 25%.
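The anchor-block fix amounts to caching the version in process storage and
only paying for the SQL lookup once. A minimal sketch in Java (all names
are mine, not the customer's; the DB2 call is a stand-in):

```java
// Hypothetical sketch: cache the application version in memory (the
// "anchor block") instead of issuing an SQL query on every call.
public class VersionCache {
    private static volatile Integer cachedVersion; // the anchor-block field

    // Stand-in for the expensive path, i.e. something like:
    //   EXEC SQL SELECT version FROM application.table
    //            WHERE application = 'PAYROLL'
    private static int queryVersionFromDb2() {
        return 2;
    }

    // After the first call this is tens of instructions,
    // not a full SQL round trip.
    public static int getVersion() {
        Integer v = cachedVersion;
        if (v == null) {
            v = queryVersionFromDb2();
            cachedVersion = v;
        }
        return v;
    }
}
```

The trade-off is the same one the bank accepted: a cached value is not
refreshed until the cache is invalidated, so the "instant" rollback becomes
"as fresh as the anchor block".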

I was also involved in a Java performance problem.
They had a method which opened a file and read a record.  When the method
exited, Java eventually garbage-collected the file handle and closed the
file.
They changed it to pass the handle back to the caller.  The caller then
checked whether the passed handle was null and, if so, opened the file.
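A minimal sketch of that pattern (method and variable names are my
invention): the handle is returned to the caller, and null means "not
opened yet", so the file is opened once rather than on every call.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReaderReuse {
    // The caller keeps the handle alive across calls;
    // null means the file has not been opened yet.
    static BufferedReader readRecord(BufferedReader handle, String path,
                                     StringBuilder out) throws IOException {
        if (handle == null) {
            handle = new BufferedReader(new FileReader(path)); // first call only
        }
        String line = handle.readLine();      // read one record
        if (line != null) {
            out.append(line);
        }
        return handle;                        // pass the handle back
    }
}
```

The caller is now responsible for closing the reader when it is finished,
which is the price of taking the close out of the garbage collector's hands.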

The original people are looking at the wrong problem.  They need to use
products like IBM's Application Performance Analyzer, or Strobe, which
give a profile of where the CPU is being used.  For one bank about to go
live for the first time, this showed that 40% of the CPU was in the printf
routine (they had all the debug code compiled in!) and 20% of the CPU in a
badly written SQL query.  These sampling products will also tell you which
files had the most I/O.
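The "debug code compiled in" trap has a cheap guard in most languages. A
sketch in Java (my example, not the bank's code): with a compile-time
constant flag, javac drops the guarded statements entirely, so production
spends no CPU formatting debug output.

```java
public class DebugGuard {
    // Compile-time constant: when false, the guarded calls below are
    // eliminated at compile time, not merely skipped at run time.
    static final boolean DEBUG = false;

    static long sum(long[] values) {
        long total = 0;
        for (long v : values) {
            if (DEBUG) {
                System.out.printf("adding %d%n", v); // costs nothing when DEBUG is false
            }
            total += v;
        }
        return total;
    }
}
```

In C the equivalent would be an #ifdef around the printf calls; either way,
a profiler is what tells you the guard is missing in the first place.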

Colin




On Sun, 19 Jan 2025 at 13:10, Robert Prins <
[email protected]> wrote:

> From LinkedIn:
>
> <quote>
> 2 weeks ago I received the analysis data from a new client that wanted to
> reduce their CPU consumption and improve their performance. They sent me
> the statistical data from their z16 10 LPARS. Information about 89,000+
> files. I analyzed their data and found 2,000+ files *that could be
> improved*
> and would save CPU when improved. *I pulled out 1 file to demonstrate a
> Proof of Concept (POC) for the client. I had the client run the POC and it
> showed a 29% reduction in CPU every time that file will be used. The 29%
> did not include 3 other major adjustments that would save an addition 14%
> CPU and cut the I/O by 75%.* This is just 1 file. The other files can save
> 3% to 52% of their CPU every time they are used in BATCH or ONLINE.
> </quote>
>
> I've been a programmer on IBM since 1985, and the above doesn't make any
> sense to me, how can changing just one file result in a 43% reduction in
> CPU usage?
>
> I've only ever been using PL/I, and using that I did manage to make some
> improvements to code, including reducing the CPU usage of a CRC routine by
> an even larger amount, 99.7% (Yes, ninety-nine-point-seven percent), but
> that was because the old V2.3.0 PL/I Optimizing compiler was absolute shite
> at handling unaligned bit-strings, but WTH can you change about a file to
> get the above reduction in CPU?
>
> Robert
> --
> Robert AH Prins
> robert(a)prino(d)org
> The hitchhiking grandfather <https://prino.neocities.org/index.html>
> Some REXX code for use on z/OS
> <https://prino.neocities.org/zOS/zOS-Tools.html>
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to [email protected] with the message: INFO IBM-MAIN
>
