I had the opportunity a couple of years ago to do a deep dive into C/C++ (gcc) 
optimisation, resulting from an incidence of integer overflow in a z/TPF C 
segment.  For those who are unfamiliar with the C language standardisation 
backstory it seems that the original standards committee was tasked with 
writing a language definition that encompassed all the different ways that 
compilers of that time dealt with such matters as integer overflow.  They did 
this by the liberal use of such terms as "undefined" and "implementation 
dependent", a situation which has led to airline groundings and deaths, 
amongst other things.  Nothing quite so dramatic in our case, just a near 
strike by a group travel booking office.
After fixing the immediate issue I took the opportunity to investigate what 
could be done to remedy this situation more robustly, and implemented my own 
arithmetic functions using C and gcc's inline assembler.  As they stood, these 
were horribly inefficient, but gcc's level 3 optimisation stripped away 
all of the function call overhead, and what remained was only the arithmetic 
operation, the CC check, and a conditional branch, just as if we had coded 
the most optimal solution in line.
I came away with a deep respect for the optimisation routines. 
> but all hardware enhancements
> need to be translated into compiler code.
> Changing the compiler is tedious and laborious.
I did, out of curiosity, take a journey down the gcc optimisation pathway, but 
came away bloodied and confused.  It seemed, iirc, that there are both generic 
optimisers and architecture-specific optimisers/code generators.  In order to 
add enhancements it is only necessary to modify the latter (the gcc maintainers 
will do this) and then programs may be recompiled at leisure.  This seems a 
much better solution than hand-modifying and retesting one's entire code base.
I am no fan of the C language "portable assembler", but came away from this 
exercise mightily impressed with the capabilities of gcc.
(Should you want to know more, a write up, and a presentation deck, are 
available at https://ianw.quarto.pub/tech/posts/Integer-Safety/)

Best wishes / Mejores deseos /  Meilleurs vœux

Ian ... 

    On Wednesday, August 20, 2025 at 10:06:25 AM GMT+2, Abe Kornelis 
<a...@bixoft.nl> wrote:  
 
 Hi Rupert, all,

Choice of algorithm matters really very much.

Compilers claim to generate better code than 
handcrafted assembler. I'm not so sure about that.

There will definitely be cases where compilers do
better, but I've been with IBM's languages lab.
And although my sojourn there was rather brief
(the legals objected to a z390 volunteer being there)
I did spot various "opportunities to improve".

To me that indicates that at the very least it is
not nearly as black-and-white as the compiler
folks would make us believe. As usual, "it depends".

Compilers are probably better at optimising for
pipeline performance, but all hardware enhancements
need to be translated into compiler code.
Changing the compiler is tedious and laborious.

Bottom line: I'm still very happy to be an
 assembler programmer :-) 

Kind regards & happy programming!
Abe
===

    
