Hi

Thank you Dave.
I also have the feeling that if we changed the Windows settings we could
get closer, but unfortunately I have never seen a statement like:
"if you set options xxxx in Windows and options yyyy in z/OS, you will
always get the same result"

As the change would affect a very, very large number of modules, nobody
has wanted to start this experiment so far.

On 3/25/2011 9:05 AM, Dave wrote:
-----Original Message-----
From: IBM Mainframe Assembler List
[mailto:[email protected]] On Behalf Of Thomas
David Rivers
Sent: 23 March 2011 17:28
To: [email protected]
Subject: Re: IEEE floating points



Miklos Szigetvari <[email protected]> wrote:

      Hi

Thank you.
Our problem is: which option, and where, do we change to get the same
result?

   Until now we have ignored these differences (in AFP documents, a
position difference of sometimes 1 pel), but some of our customers
say they would like to get the same result on all platforms.

  Hi Miklos,

   Yes... :-)

   The basic problem is that a floating-point value of type 'double'
is 64 bits wide when stored in memory; but by default, when the x86
floating-point hardware loads it into a register, the register is
80 bits wide.

   So, if you have any expressions involving the value, and the
compiler rather smartly keeps the values solely in registers, the
full 80-bit value participates in the computation. When the result
of the expression is eventually stored back into memory, it is
converted to a 64-bit value. The intermediate steps can have
different rounding behavior because of the 'extra bits' while the
values "lived" in registers.

   Thus, you need to look for options that prevent the values from
living in registers, or that otherwise cause the compiler to use
only 64-bit intermediate results during computations. This results
in slower-running code on the x86.

   You can imagine that in our C, C++ and Assembler tools we are
*required* to produce exactly the same result on many different
platforms as we would find on the mainframe. A very reliable way we
found to do this was to not depend on the native floating point at
all, and to use our own libraries. Many times a rather
straightforward emulation will suffice, but getting things exactly
right with this approach takes quite a bit of testing.

   But - if you want to try something quick-and-dirty, this article
tells you how to set the floating-point rounding mode on the x87
processor:

    http://www.network-theory.co.uk/docs/gccintro/gccintro_70.html

  That article is written in relation to GCC; I'm not sure of the
syntax for Microsoft Visual Studio. Basically, you want to use the
x87

    fldcw

  instruction (floating-point load control word) to set things up
  the way you need.
Microsoft seems to have its own fudges. This is from the Visual Studio
2010 Express compiler:

  /fp:<except[-]|fast|precise|strict>  choose floating-point model:
     except[-] - consider floating-point exceptions when generating code
     fast - "fast" floating-point model; results are less predictable
     precise - "precise" floating-point model; results are predictable
     strict - "strict" floating-point model (implies /fp:except)
  /Qfast_transcendentals generate inline FP intrinsics even with /fp:except

So what the heck it does, I don't know. I think I would try "/fp:strict"
and see if that makes a difference. As others have said, you may also need
to adjust the optimization levels; by default Visual C has optimizations
disabled.
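For what it's worth, GCC has analogous knobs for the same 80-bit intermediate problem (these are real GCC options; the file name prog.c is just a placeholder):

```sh
# Spill values to 64-bit memory at each assignment (slow, blunt):
gcc -O2 -ffloat-store prog.c

# Round intermediates as the C standard requires (GCC 4.5+, C mode):
gcc -O2 -fexcess-precision=standard prog.c

# Avoid the x87 entirely; do double math in 64-bit SSE registers:
gcc -O2 -msse2 -mfpmath=sse prog.c
```

The last form is the default on x86-64, which is one reason the same source can round differently when rebuilt 32-bit versus 64-bit.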


  This article:

      http://www.math.ucdavis.edu/~hass/doubbub.cpp

  has some examples (and references a paper that may discuss this
very issue). It appears to have been published some time ago, so I'm
not sure how pertinent it is to more current Visual Studio
environments.

         - Dave Rivers -

--
[email protected]                        Work: (919) 676-0847
Get your mainframe programming tools at http://www.dignus.com


