[snipped due to line limit]

I think what we’re seeing is the evolution of compilers on z/OS.

Take the COBOL compiler: it used to be that the optimizer was a distinctly 
optional extra step. That is, NOOPT would generate a reasonable set of 
instructions, approximately what a person trying to write straightforward 
assembler would do. The optimizer would then apply more global optimizations, 
such as loop unrolling and common subexpression elimination.

The IBM COBOL for z/OS 5 & 6 compiler was rewritten, and it optimizes much 
more aggressively. It rearranges statement order, even to the extent of 
performing sections in a different order than coded.

I think the way the modern compiler works is that the front end decomposes the 
statements into a set of pseudo-operations, which are not optimized at all and 
don't directly map to z/Architecture instructions. Then the back end optimizes 
the pseudo-ops and generates the machine instructions necessary to implement 
them.

In the example case, Phil pointed out that the C code is really two operations: 
define a variable, and then move (initialize) a value to it. So, this could be 
two pseudo-ops.

My guess is that at OPT(0) the compiler is just emitting the generated 
instructions for the pseudo-ops directly, one for one. And since the pseudo-ops 
aren't directly equivalent to z/Architecture instructions, you end up with code 
that neither a human nor the older compiler would generate.

But I could be wrong.
