> On Jul 31, 2018, at 12:27 PM, Mark Pizzolato <m...@infocomm.com> wrote:
> 
> ...
> 
>> I'm not sure where SimH stands on compiler optimizations - you can always 
>> try compiling with the highest optimization level your compiler supports.  
>> But you may have to report some bugs...
> 
> The normal build compiles with the best optimizations we know to work.  When 
> failures are observed or reported, if we can't fix the failing case, 
> optimization levels are backed down.  A recent report, just last week, came 
> up on OS X where someone was running the HCORE diagnostic on the VAX750 
> simulator and a failure was now being reported for stuff that previously 
> worked.  Compiled in debug mode (unoptimized), the diagnostic passes.  
> Problems like this have come up a number of times as compiler 
> implementations evolve.  Some of the problems are implementation issues in 
> the VAX simulator, which Bob initially did without strict regard to unsigned 
> or signed integer values.  The original simulator code was once run through 
> the AXE test suite to verify conformance to the architecture details.  I've 
> taken a shot at rewriting things with consistent types, and the result runs 
> VMS and passes the basic HCORE diagnostics, but without the rigorous 
> verification that AXE can provide I don't feel comfortable committing this 
> to the master branch.  If anyone can dig up or provide a way that we can get 
> someone to run this for us, I can move forward with the new implementation.  
> This approach would certainly seem better than the 'whack-a-mole' approach 
> we've got now as compilers change.

One thing that happens with newer compilers is that they take more advantage of 
opportunities offered by the letter of the standard.  If you do something that 
is "undefined", the compiler can do whatever it wants with it.  If you have 
code that would do something undefined under certain conditions, the compiler 
is free to assume the undefined thing cannot happen, and therefore that those 
conditions never arise.

For example: 
        extern int foo[100];

        int f (int i, int a)
        {
                if (i < 0)
                        a += foo[i];    /* reached only when i < 0 */
                return a;
        }

The compiler is allowed to delete that code: foo[i] is reached only when i is 
negative, and subscripting an array with a negative index is undefined, so the 
compiler may assume that branch is never taken.  And modern compilers often 
will.

There is an excellent paper, I think from MIT, about the many undefined things 
in C that are not all that well understood by many programmers.  Unfortunately 
I've misplaced the reference.  It mentions that Linux turns off a whole pile 
of gcc optimizations with specific magic flags to avoid breaking things due to 
undefined behavior in its existing code base.
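I don't remember the exact list the paper gives, but the flags are along these 
lines (illustrative only -- these are real gcc options, not necessarily the 
exact set Linux uses):

        # Tell gcc not to optimize based on certain kinds of undefined
        # (or historically lax) behavior that old code tends to rely on.
        CFLAGS += -fno-strict-aliasing -fno-delete-null-pointer-checks -fwrapv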

For integer arithmetic, what you mentioned is a key point.  Unsigned is defined 
to have wraparound semantics.  With signed integers, overflow is "undefined".  
So, for example, if you want to emulate the PDP11 arithmetic operations, you 
have to use unsigned short (uint16_t).  Using signed short (int16_t) is 
incorrect because of the overflow rules.
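A sketch of what I mean (not the actual SimH code; the function is just for 
illustration):

        #include <stdint.h>

        /* PDP11 ADD: the real machine wraps modulo 2^16.  Unsigned
         * arithmetic is defined to wrap, so uint16_t gives the machine
         * behavior directly.  With int16_t you would be relying on
         * signed overflow / narrowing behavior that the standard does
         * not guarantee. */
        static uint16_t pdp11_add (uint16_t a, uint16_t b, int *carry)
        {
                uint16_t sum = (uint16_t) (a + b);      /* wraps mod 65536 */

                *carry = (sum < a);                     /* carry out of bit 15 */
                return sum;
        }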

More generally, in SIMH you probably want the rule that every integer variable 
is unsigned.  Or at least, make every variable unsigned unless you know there 
is a good reason why it needs to be signed, and no undefined behavior is 
possible for that particular variable.
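A concrete case (hypothetical, not taken from SimH) of how the signed version 
bites:

        /* Signed overflow is undefined, so the compiler may assume that
         * x + 1 never wraps and fold this test to 0 at -O2. */
        int wraps_signed (int x)
        {
                return x + 1 < x;
        }

        /* Unsigned arithmetic is defined to wrap, so this version
         * survives optimization; it is true exactly when x == UINT_MAX. */
        int wraps_unsigned (unsigned x)
        {
                return x + 1U < x;
        }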

If you compile (in gcc) with -Wall and get no warnings, that's a pretty good 
sign.  If you do get warnings, they almost certainly need to be fixed rather 
than disabled.
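For example (the file name is just a placeholder; -Werror keeps new warnings 
from creeping back in):

        cc -O2 -Wall -Wextra -Werror -c somefile.c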

        paul


_______________________________________________
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh
