I've noticed that we changed the default floating-point precision setting
from extended precision (64-bit) to double precision (53-bit).
The comment in npx.h says:

  /*
   * The hardware default control word for i387's and later coprocessors is
   * 0x37F, giving:
   *
   *    round to nearest
   *    64-bit precision
   *    all exceptions masked.
   *
   * We modify the affine mode bit and precision bits in this to give:
   *
   *    affine mode for 287's (if they work at all) (1 in bitfield 1<<12)
   *    53-bit precision (2 in bitfield 3<<8)
   *
   * 64-bit precision often gives bad results with high level languages
   * because it makes the results of calculations depend on whether
   * intermediate values are stored in memory or in FPU registers.
   */
  #define       __INITIAL_NPXCW__       0x127F

Oddly, this causes problems with GNAT (Ada is a high-level language)
because it wants/expects 64-bit extended precision.  GNAT for
linux-i386 also appears to use 64-bit extended precision; the only
other GNAT i386 platform that doesn't is NT.

So is the above comment still valid?

-- 
Dan Eischen
