Don't use zoned decimal for subscripts or counters; use indexes for
subscripts and binary for counter-type variables.  When using conditional
branching, try to code so that the branch is the exception rather than the
rule.  For large table lookups, use a binary search rather than a sequential
search.

These simple coding techniques can also reduce CPU time.



--- jcew...@acm.org wrote:

From:         "Joel C. Ewing" <jcew...@acm.org>
To:           IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Is there a source for detailed, instruction-level performance info?
Date:         Thu, 24 Dec 2015 15:53:42 -0600

On 12/24/2015 12:52 PM, Tom Brennan wrote:
> Farley, Peter x23353 wrote:
>> So what is an ordinary programmer to do?
>
> Years ago I guess I had nothing to do so I wrote a program that hooked
> into various LINK/LOAD SVC's and recorded the load module name (like
> Isogon and TADz do).  That huge pile of data ended up on a tape and I
> wrote some code to scan the tape for a particular module, to find out
> who was using it and how often.
>
> The scan took forever, so I worked quite a bit trying to make the main
> loop more efficient.  Co-worker Stuart Holland looked at my logic and
> quickly switched it to using a hashing lookup algorithm, making it run
> probably a thousand times faster.  Oops :)
>
As Tom has noted, the most dramatic performance enhancements typically
come from a change in strategy or algorithm.  In my experience you get
better results by looking for ways to accomplish the end result with
fewer actions than by concentrating on micro-optimizing the individual
actions.

You may not be able to predict how to micro-manage the mix of
instructions in a loop or a highly-used section of code to optimize its
performance, but if by changing strategy you can significantly reduce
the number of times the loop or section of code is executed, it is
reasonable to expect better performance.

Although obviously some instructions require more resources than others,
in general if you can reduce the total number of instructions executed
without dramatically changing the program's instruction mix, that too
must have a positive impact.  If you can reduce a program's references
to storage without also increasing the number of instructions the
program executes, that also always has a positive effect.

No matter how fast a processor is, I/O operations are always expensive
in both real time and CPU time.  An approach that requires significantly
fewer external records to be read or written is always a significant
improvement.  If you can't reduce the number of logical record
reads/writes, changing buffer management strategies can perhaps
significantly reduce the number of physical reads/writes and still yield
order-of-magnitude performance gains.

If writing a highly used section of code directly in assembler, you can
strive for a minimal number of instructions by choosing efficient data
representations for the task at hand, and minimize storage references by
using registers wisely.  Mostly that gets you close enough for local
code optimization without worrying about whether instruction X might
execute faster than instruction Y.

-- 
Joel C. Ewing,    Bentonville, AR       jcew...@acm.org 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



