One of the downsides to such great optimization is the added difficulty in 
debugging.  

Programs will often have code that leaves footprints and saves various values 
in work areas for diagnostic reasons.  Many optimization algorithms will detect 
that these areas are "never referenced" after being set and eliminate the code 
that sets or stores those values.
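For example (a minimal C sketch; the names and the footprint layout are my own 
invention, not any particular product's), a dead-store elimination pass is free 
to throw away stores like these, because nothing in the program ever reads them 
back:

    #include <string.h>

    /* Hypothetical diagnostic footprint area, meant to be read in a dump. */
    static char last_step[8];

    int process(int x)
    {
        memcpy(last_step, "STEP1   ", 8);  /* footprint for the dump reader      */
        x = x * 2;
        memcpy(last_step, "STEP2   ", 8);  /* never referenced again in the code, */
        return x;                          /* so the optimizer may drop both stores */
    }

The values were only ever meant to be found in a dump, and the optimizer has no 
way of knowing that.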

Another optimization that makes debugging difficult is the inlining of 
subroutines that are called from only one place, to save the overhead of the 
linkage.  The result is that the mapping of source statements to the generated 
machine code/assembler no longer matches the original source.
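As a rough illustration (a small C sketch with invented names), a helper called 
from exactly one place is a prime candidate for inlining:

    static int scale(int x)        /* called from exactly one site...         */
    {
        return x * 16;
    }

    int compute(int x)
    {
        return scale(x) + 1;       /* ...so the compiler may inline it here   */
    }

Once inlined there is no call instruction and no separate body for scale(), so 
a breakpoint or a traceback entry for it has nothing to attach to.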

Sure, there are various tricks that can be used to prevent such optimization, 
but having to think about how to defeat the optimizer partially defeats the 
value of using a high level language in the first place.
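One such trick in C (shown only as a sketch, not a recommendation) is to declare 
the footprint area volatile, which obliges the compiler to emit every store:

    static volatile int trace_point;   /* hypothetical footprint word */

    int process(int x)
    {
        trace_point = 1;   /* volatile: the store must be emitted  */
        x = x * 2;
        trace_point = 2;   /* kept even though nothing reads it    */
        return x;
    }

It works, but now the source is littered with keywords whose only purpose is to 
fight the optimizer.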

When you spend a lot of time debugging problems occurring in customer 
production environments, life can be difficult.

Optimization is great until it isn't!

Keith Moe
Lead Developer
BMC Software, Inc.
--------------------------------------------
On Mon, 1/29/18, Martin Ward <[email protected]> wrote:

 Subject: Re: Fair comparison C vs HLASM
 To: [email protected]
 Date: Monday, January 29, 2018, 3:25 PM
 
 On 29/01/18 22:54, Jon Perryman wrote:
 > Is there a PL/X or C feature that could not be implemented in HLASM?
 
 I have already mentioned the automated application of loop optimisations
 such as strength reduction, code motion, fusion, fission, permutation,
 unrolling, inversion, splitting, peeling, tiling, unswitching, etc. etc.
 
 Also automated register allocation, procedure inlining and whole
 program optimisation.
 
 Also, local variables in function definitions (and functions,
 for that matter!)
 
 --
 Martin
 
 Dr Martin Ward | Email: [email protected] | http://www.gkc.org.uk
 G.K.Chesterton site: http://www.gkc.org.uk/gkc | Erdos number: 4
 
