Thank you, Kirk, for reposting my "The Pointlessness of handwriting 'efficient' code" article! Sometimes I put these things out there and am deafened by the silence. It's nice to know that someone is listening.

Dave Cole
ColeSoft Marketing

At 1/30/2018 01:07 PM, Charles Mills wrote:
Amen!
Charles

-----Original Message-----
From: IBM Mainframe Assembler List [mailto:[email protected]] On Behalf Of Kirk Wolf
Sent: Tuesday, January 30, 2018 9:34 AM
To: [email protected]
Subject: Re: Fair comparison C vs HLASM
I'm not sure what "fair" means in the context of comparing HLASM and C/C++.
(IMO, all C programmers should be using C++ if possible, even if you choose to restrict yourself to a small subset like we do.) Is it fair that the z-arch has an order of magnitude more instructions than can be effectively used by mortal programmers? Or that with compilers you can just re-compile and target new machines with new instructions? Is it fair that IBM compilers use undocumented knowledge about which instruction patterns run fastest, and automatically inline routines and unroll loops? Do HLASM programmers have PDO (profile-directed optimization)? It seems decidedly unfair to me :-)
I don't buy several categories of arguments that I've seen:
1) My beautiful HLASM code is much faster, more maintainable, etc. than some crappy C code that I have seen
2) You can't do "x" in C, so it isn't viable.  (Like you can't have both)
3) The C (or C++) language has problems.  (And?)
For me, the most useful insights come from those who have spent many years and many thousands of KLOCS with both HLASM and C/C++. My favorite comments related to this subject came from Dave Cole on the Assembler-List last October, which I will re-post below because it deserves more bits of storage.
Kirk Wolf
Dovetailed Technologies
http://dovetail.com


"The Pointlessness of handwriting "efficient" code (was One Byte MVC Versus IC/STC)"
David Cole [email protected]
10/16/17
to ASSEMBLER-LIST
First, let me start by saying I am NOT talking about the kernel of sorting routines intended to sort records by the millions. Nor am I talking about any similar place where saving a few nanoseconds here and there might actually matter. If that's your concern, then this post is not for you. I am talking about typical logic whose execution frequency ranges from a handful per week all the way up to maybe a million times per hour. (Just guessing here, but it sounds good.) I'm also talking about hand-coded Assembler. If you want to write efficient code, use a compiled language. Use C. Use Cobol. Use whatever. But don't use Assembler. Assembler is probably the worst language to choose.
Why? Well, read on.


I started writing assembler back in the late 1960s. I've been writing assembler for nearly 50 years. I've written a LOT of assembler, and I still love writing it! Back when I started, one of my coding ethics was "efficiency", both in space and time. I wanted my programs to accomplish as much as they could with the fewest instructions, taking up the fewest bytes possible. (That, of course, led to some gawdawful code being written!) Well, back in that day, when a large machine had maybe half a meg of storage, and megabyte storage frames literally had to be wheeled in on trucks, program size actually did matter. And with storage access speeds measured in microseconds (and even milliseconds for LCS storage), speed mattered too. But those days are long gone, and I have long since grown out of my childish ways.

Yes, speed does matter, and IBM has invested an immense amount of expertise and creativity to come up with ways to leverage parallelism and pipelining and only god knows what all else to squeeze out every unneeded nanosecond of processing time it can. It is statistics-based and it is mind-numbingly complex. Any given workload will never behave exactly the same way from one run to the next. (Even though, statistically speaking, efficiencies will be repeatable.) But all these techniques for efficiency that IBM has created are not human-compatible. They are too complex, they are too messy, and they are not even the same techniques from one machine to the next. In fact, sometimes code written to be efficient on one machine will actually be inefficient on another! In other words, if you are using hand-coded Assembler, and you want to write the most efficient code possible, you will end up writing something...
  - That is messy,
  - That is ugly,
  - That will be difficult to read, follow and understand,
  - That will probably fail to be the most efficient possible,
  - And that you will probably have to rewrite when IBM comes out with its next machine.

So if there is anything that needs to be "laughed out of code review", it's feeble concern over such questions as one-byte MVC versus IC/STC (sketched below). As a prior gentleman commented, "rubbish!" The point is, with code written in any language (especially for production program development), one of the most important ethics is clarity, because without that, the code will not be maintainable over time.
(www.colesoft.com/files/presentations/commercialqualityprogramming.pdf)
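For readers outside the original thread, the one-byte question in the subject line comes down to a choice like the one sketched here. This is a minimal illustration only, not anyone's actual code from the thread: the labels, the register equate, and the data definitions are invented for the example, and it is not a complete, assemble-ready program.

* Option 1: move the byte directly, storage to storage
         MVC   OUTBYTE(1),INBYTE       Copy a single byte with MVC
*
* Option 2: pull the byte through a register
         IC    R1,INBYTE               Insert byte into low byte of R1
         STC   R1,OUTBYTE              Store that byte at the target
*
INBYTE   DC    C'A'                    Source byte (illustrative)
OUTBYTE  DS    C                       Target byte (illustrative)
R1       EQU   1

Whichever form you pick, the difference on a current machine is exactly the kind of thing the rest of this post argues is not worth a moment of a human's attention.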
Obscure code is what should be laughed out of code review, and serious attempts to write so-called "efficient" code (a) will fail to produce perceptible results and (b) will only end up obfuscating the code. So if all these wonderful efficiency techniques that IBM has come up with are too complex/obscure/ugly to use, then what's the point? COMPILERS! That's the point.
Let the compilers be concerned about efficiency. That's their job. That's what they do far, far better than humans. When IBM develops new pipelining techniques and new methods to achieve better parallelism, they don't do it in a vacuum. They get their compiler writers involved! There is a back-and-forth between the two teams, between the hardware developers and the compiler writers. Together, they hammer out what will work and what won't. In the end, the compilers are fitted to the hardware, and the hardware is fitted to the compilers.


Another thing... Did you know that as of the z14 machine, the Principles describes 2,024 separate machine instructions? Did you know that if you throw in extended mnemonics, that comes to 2,144? Here are my questions. If you are an Assembler programmer, how many of these do you use for more than just special occasions? (In my case, it's maybe a couple of hundred at most.) Do you really think that you're going to write the most efficient code possible using just the instructions you're most comfortable with?

Well, maybe you and I won't be using the full instruction set anytime soon, but you can be damn sure the compilers will! In recent years, IBM has gone to town creating new machine instructions, and if you glance over them at all, you will note that a lot of them were specifically developed to increase execution efficiency. (The several Compare-and-Trap and Load-and-Trap instructions are just one group that comes to mind as obvious examples of this.) So if you really want to write the most efficient code possible, you will use C, or you will use Cobol, but you will never use Assembler! Don't get me wrong. Assembler does have its uses, but contrary to what many people think, efficiency is no longer one of them.
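To make that concrete, here is a hedged sketch of the kind of check those instructions exist for: a test for a zero pointer written first the traditional way and then with COMPARE IMMEDIATE AND TRAP (CGIT). The labels and register are invented for the example, and the mask value and the data-exception behavior noted in the comments are from memory and should be verified against the current Principles of Operation.

* The traditional form: a test plus a branch to an error routine
         LTGR  R2,R2                   Test the 64-bit value in R2
         BZ    NULLERR                 Branch away if it is zero
*
* The same check folded into one trap instruction, no branch taken
         CGIT  R2,0,8                  Trap if R2 equals zero; mask 8
*                                      selects the "equal" condition,
*                                      and the trap is a data exception
*
NULLERR  DS    0H                      Illustrative error routine
R2       EQU   2

A compiler that knows the target machine has the facility can emit the second form everywhere it applies; a hand coder has to know the instruction exists, recall the mask encoding, and give up the readable branch to a named error label.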

Dave Cole
ColeSoft Marketing
414 Third Street, NE
Charlottesville, VA 22902
EADDRESS:    [email protected]
Home page:   www.colesoft.com
LinkedIn:    www.xdc.com
Facebook:    www.facebook.com/colesoftware
YouTube:     www.youtube.com/user/colesoftware
