Oh my gosh -- so many assumptions in there that I disagree with, it is hard to 
know where to start.

> it's [its] runtime overhead

I find the "runtime overhead" of C++ to be very acceptable. "My" (my 
employer's, for which I am responsible) C++ product receives SMF records out of 
a queue. It selects SMF fields one at a time, in a kind of interpreted fashion, 
sort of like a report generator. It converts the selected funky SMF fields to 
character form (several variations, depending on customer options), often 
elaborately such as by expanding bit flags to character names, and then 
translates the whole shebang from EBCDIC to UTF-8, and pushes it out the TCP 
stack. How much CPU time per record? About 1/20,000 of a second -- 50 CPU 
microseconds -- on a 12/13/14, depending on complexity. I find that very 
acceptable. When I look at the incredible machine code output by the compiler I 
doubt that a skilled assembler programmer would do as well. 

> C++ could easily change these to a single function call but still require the 
> programmer to make the correct choice

Not so, if I am understanding you correctly. The C++ compiler has incredibly 
good logic to pick the correct overload with no special programmer effort. Take 
a look at stream I/O (the >> style I/O) for an example. And you have not even 
touched on templates, which are like overloads on steroids.

> Imagine converting OPEN / DCB to C functions

They did. It's called fopen(), and has much of SVC 99 integrated in for good 
measure.
 
> Byte or char are the only data types with a known length

Really? For most modern, general-purpose implementations of C/C++, a short is 
16 bits, an int is 32 bits, and a long long is 64 bits (long is the one that 
varies: 32 or 64 bits depending on the data model). There is also int16_t, 
int32_t and int64_t if you want to pin down the length for certain 
across multiple compilers. Not to mention bool and float and double. Also, 
FWIW, I have never encountered a C with a data type of byte. Where are you 
getting your information? If you are going to pick and choose a subset of C/C++ 
features, and then say "see, the subset I chose is a pretty feature-poor 
language" then yes, that will work every time.

> High Level ASM is far more programmer friendly than C

De gustibus non est disputandum. I respectfully hold the opposite opinion. (I 
have 20 years off and on, mostly on, in which assembler was my primary 
language, and now about eight with C++ as my primary language, so I have a 
basis for opinions on both.) HLASM has its place, and I still use it on a 
regular basis. But I suspect I am 5 to 10 times more productive (hours per 
function point from wish list to distribution) with C++ for the major 
functionality than I would be in assembler. Not to mention much more bug free: 
you never have one of those "oh crap, I forgot that I needed to preserve R2" 
type errors, nor one like I just had in my assembler code, where I coded an LH 
on what was actually a fullword field.

Charles


-----Original Message-----
From: IBM Mainframe Assembler List [mailto:[email protected]] On 
Behalf Of Jon Perryman
Sent: Monday, December 11, 2017 3:33 PM
To: [email protected]
Subject: Re: Address of a Literal

If by "C's less powerful macro language", you actually mean abysmal 
pre-processor language then I totally agree. C programmers will use motivated 
reasoning to convince you that C is still the language of choice. The problem 
is that C hasn't really grown as a language to help programmers (C purists 
don't consider C++ true C because of it's runtime overhead). In the last 50 
years, standards (e.g. K&R, POSIX, ANSI, C89, C90, ...) mostly added functions. 
C programmers love to add functions and this is the problem. As an example 
consider C's 14 absolute value functions. They still haven't implemented 
function overload which would allow programmers to code the same function name 
for all data types. C++ could easily change these to a single function call but 
still require the programmer to make the correct choice. In HLASM, a macro 
could easily choose the correct function based on the data types. Imagine 
converting OPEN / DCB to C functions. Even IBM's fopen implementation is 
unusual in that it contains keyword arguments.
Both C and HLASM could learn from each other. If C implemented a macro language 
and function overloading, then it would be an exceptional language. HLASM on 
the other hand needs macros that return values and allow stacking macro calls 
on a single source statement.
Rarely will C programmers code more than basic C macros. I wish that C 
programmers thought integration / usability rather than C function 
implementation.
I'm confused by why you say assembler macro language is not suitable for C. It 
would make C great. Byte or char are the only data types with a known length. 
All others must be a minimum size but are not guaranteed to be that length. The 
C "sizeof" allows my program to access the size of any variable. Macro "if" 
statements don't understand "sizeof" so they cannot choose the proper function 
for the size (e.g. absolute value).
At this time, High Level ASM is far more programmer friendly than C. In z 
systems, HLASM macro language more than makes up for what I lose by not using C.
