On 1/12/2012 2:23 PM, John Gilmore wrote:
No one has ever claimed that the timing differences here are large,
significant ones; and the continuing preoccupation here with
suboptimizing of this sort is, I think, evidence of a pervasive
malaise, a retreat into the familiar that precludes consideration of
more, much more, important design issues.
On 2012-01-1 22:55 Gerhard Postpischil wrote:
In general I tend to agree with this, but I've worked or
consulted at installations that either had problems completing
overnight jobs in their assigned batch window, or just
processing large amounts of data.

John, Gerhard, right on! I have been even more radical!

First, I attempted to *eliminate* the need to have the code
*THERE* in the first place, in the most frequently executed
path. On one occasion I added an extra field to the record
(row) to avoid regenerating a key each time.
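(Purely as an illustration of that idea, in C rather than the
original assembler; the row layout and field names below are my
own invention, not the actual ones:)

/* Hypothetical row layout: the key is computed once when the
   record is stored, so the hot path only reads the field and
   never rebuilds the key. */
struct record {
    char name[30];
    char street[30];
    char key[16];      /* precomputed key, stored with the row */
};

/* Hot path: no key generation, just a field reference. */
static const char *record_key(const struct record *r)
{
    return r->key;
}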

Another comes to mind: scrapping a name-and-address
decompression routine and replacing its loops, bit shifts,
and translates with blank truncation and a table of the
65,000 most common city and street name "words" on file.
Printing an address became lightning fast: follow a chain
of 1-byte offsets to the next 3-byte placeholder, pick up
its 2-byte index into the table (a few RX instructions:
L, LA, IC), an SLL to convert the index to an offset, an
SR/JNP to detect and move any literal text between tokens,
and lastly an EX-MVC combo.
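(Again only a sketch, in C, of how such a token chain might be
expanded; the on-file layout, table shape, and names are my
assumptions, not the original 370 code:)

#include <string.h>
#include <stdint.h>

#define WORD_LEN 16                        /* assumed fixed entry width  */
static char word_table[65000][WORD_LEN];   /* the 65,000 common "words"  */

/* Assumed placeholder layout: byte 0 = offset to the next placeholder
   (0 = end of chain), bytes 1-2 = big-endian index into word_table.
   Any bytes beyond the 3-byte placeholder, up to the next one, are
   literal text and are copied as-is. */
static size_t expand(const uint8_t *in, char *out, size_t outsz)
{
    size_t o = 0;
    const uint8_t *p = in;

    for (;;) {
        uint16_t idx  = (uint16_t)((p[1] << 8) | p[2]); /* 2-byte index   */
        const char *w = word_table[idx];                /* index -> entry */
        size_t wl = strnlen(w, WORD_LEN);

        if (o + wl + 1 >= outsz)
            break;
        memcpy(out + o, w, wl);            /* emit the table word         */
        o += wl;
        out[o++] = ' ';                    /* separator (an assumption)   */

        uint8_t next = p[0];               /* 1-byte offset to next token */
        if (next == 0)
            break;                         /* end of chain                */
        if (next > 3) {                    /* literal text between tokens? */
            size_t lit = next - 3;
            if (o + lit >= outsz)
                break;
            memcpy(out + o, p + 3, lit);   /* copy it verbatim            */
            o += lit;
        }
        p += next;                         /* follow the chain            */
    }
    out[o] = '\0';
    return o;
}

The hot path is just a table index plus two short moves, which is
where the L/LA/IC, SLL, and EX-MVC above come from.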

For this elite group here, this post is really OFF-TOPIC.
You worry about picoseconds because your code runs a zillion
times per... If you weren't, the perennial EX topic would fit
in nicely with my TGIF post - let's have a great weekend!

Andreas F. Geissbuehler
AFG Consultants Inc.
http://www.afgc-inc.com/
