There are different circumstances in which code sequences may be 'unreachable'.

Most commonly, in my experience, a sequence of instructions that was
once executed is left behind when maintenance changes are made, either
1) inadvertently or 2) because the path of control in a routine has
become so tangled that it is not easy to determine whether a
particular sequence will ever be executed.

Or again, a sequence that is already present but not yet used---one
that will be used later in a routine still under development---may be
deleted.

Unreferenced, 'static' constants are also sometimes deleted.  This
action by any optimizer that I am familiar with can be precluded by
defining that constant within a structure, other elements of which are
referenced.

In general---I do not know what exactly the new COBOL compiler
does---schemes for the deletion of such constants have become very
sophisticated because it has been necessary for them to be.  FORTRAN
had and has an EQUIVALENCE facility that permits a block of storage to
in effect be redefined, given an additional name, data type, and
organization; and deleting such a block because it is not referenced
using one such name/identifier when it is referenced using another is
inadmissible.

In languages that support the use of pointers much more complex
aliasing and data-type punning schemes are easy to construct, and in
the upshot pseudo-address references have largely supplanted
identifier references in schemes that attempt to detect and eliminate
unused data storage.

It is important that optimizers be understood for what they are.
Caricatured usefully, they are schemes for defining special cases
parametrically and generically and dealing with each of these
differently.  They are best thought of as machinery for dealing
systematically instead of ad hoc with instances of bad code (or
language inadequacies).

It is likely, all but certain, that additional schemata will be added
to the optimizing machinery the new COBOL compiler uses; and some of
those already in use may need modification.  Both of these sorts of
changes are easy to make.

So far, we do not appear to be dealing with defective optimizations
that make a routine unusable.  Instead we are dealing with situations
in which further optimizations are possible.

That being the case, some patience is in order.  This new compiler is,
on balance, much superior to its predecessors; and crotchets of this
sort do not constitute a legitimate argument for avoiding its use.

Until now I avoided contributing to this thread on the assumption that
IBM was well able to defend itself and did not need my help in doing
so.  This post reflects my changed view that some other posters have
misunderstood this 'deficiency' and the nature of optimization in
general.

John Gilmore, Ashland, MA 01721 - USA

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
