Re: Lack of real long double support (was Re: libstdc++ does not contain fabsl symbol)

2002-10-29 Thread Loren James Rittle
 Claiming 53 bits but supporting 64, and then not raising an exception
 and/or giving a NaN or INF result on overflow to the 54th bit is
 broken.  If you do this, you will fail runtime validation suites.

Huh?  The 53-bit quantity refers to the mantissa, not the exponent.
Unless I'm sorely confused, the only IEEE exception you could be
speaking about is inexact; Inf and NaN don't come into play.  In any
event:

As I outlined, in this case (where we advertised 53-bit precision but
the user forced 64-bit precision; "supporting" was not a precise word
choice, since it does not properly characterize the situation I was
speaking about), the user will have explicitly made a call to change
the default precision as conveyed in the system header.  At that
point, I don't know why the user should expect that the published
values in float.h are still valid.  Do these validation suites
typically run with a call to override the precision setting in the
IEEE FP control word?  Doubtful (or if they do, they know what they
are doing, just as application code that monkeys with the control
register would have to).  I.e. the test suite would be running against
the published float.h, which at the moment should be:

#define LDBL_MANT_DIG 53
#define LDBL_MIN_EXP (-16381)
#define LDBL_MAX_EXP 16384

(All other published LDBL_ constants are derivable from these.)

since they match the default FP control word setting on FreeBSD/i386.
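As a sanity check, the run-time epsilon can be measured directly and compared against the header.  A minimal C sketch (the function name is mine; volatile keeps the arithmetic out of the constant folder):

```c
#include <float.h>

/* Measure the machine epsilon of long double at run time: halve a
   candidate until adding it to 1 no longer changes the result.  The
   loop stops at the last eps for which 1 + eps is distinguishable
   from 1, i.e. 2^(1 - precision).  volatile blocks constant folding,
   so the result reflects the run-time FP control word, not the
   compiler's idea of the format.  */
long double
measured_ldbl_epsilon (void)
{
  volatile long double eps = 1.0L;
  while ((long double) (1.0L + eps / 2.0L) != 1.0L)
    eps /= 2.0L;
  return eps;
}
```

When float.h matches the hardware default, measured_ldbl_epsilon () equals LDBL_EPSILON; with the mismatched header under discussion here it would not.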

 I don't think that any amount of hand-waving will turn it into a
 compiler-only issue.

I am not trying to handwave anything.  The currently published
float.h (both before and after the recent patch) is what is attempting
to handwave something.  It doesn't match the hardware settings.  The
patch it just received fixed one thing (exponent range) but broke
another (precision/epsilon).  More than anything, I am attempting to
have that file written exactly to match the default hardware setup.
Since there is exactly one default hardware setup for any particular
booted system, it seems possible to do.

If I were doing what is claimed, I would have fixed the issue in gcc
mainline without consulting FreeBSD-side experts.  Perhaps we should
just leave it broken in the FSF tree and let an expert sort it out
when they import gcc 3.3 into FreeBSD.  However, it seemed reasonable
to me to fix the issues exposed by the gcc infrastructure change ASAP...

 I'd like to see Bruce's issues with the 64 bit support taken care
 of with long double (and the implicit cast that occurs in the one
 case that Bruce complained about in his email, where there is *too
 much* precision on the rvalue, which is a computation of double
 operands done in long double form, with a double result).

I think I now understand Bruce's point.  There are two issues: (a)
incorrect double rounding (once to the precision of extended double in
a register and again to double when moved to memory allocated for a
double); (b) non-exact comparisons where exactness might otherwise be
expected by programmers, even if not absolutely guaranteed by IEEE 754.

To anyone who wishes to understand it: start reading at page 249 of
this version of _What Every Computer Scientist Should Know About
Floating-Point Arithmetic_: http://www.validlab.com/goldberg/paper.ps

FYI, by default, FreeBSD/i386 uses technique 4 on page 260 to
circumvent these issues. (There is a proof earlier that fully explains
why double-rounding doesn't affect double-float conversions on i386.)
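A hedged illustration of the register-vs-memory half of the problem (the helper name is mine; whether it ever reports 1 depends on the FP control word and on how the compiler spills intermediates):

```c
#include <float.h>

/* Probe for excess intermediate precision: 1 + DBL_EPSILON/4 is not
   representable in 53 bits, but it is in 64.  If the compiler keeps
   the sum in an x87 register at 64-bit precision, the register copy
   and the copy forced through a double-sized memory cell compare
   unequal; with SSE arithmetic, or with the 53-bit control-word
   default that FreeBSD/i386 uses to circumvent double rounding, they
   compare equal.  */
int
excess_precision_seen (void)
{
  volatile double tiny = DBL_EPSILON / 4.0;
  double sum = 1.0 + tiny;       /* may be held in extended precision */
  volatile double stored = sum;  /* rounded to double by the store    */
  return sum != stored;
}
```

On hardware that evaluates double as double, this reports 0.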

Now, in case anyone cares, gcc 3.3 will expose the C99 FLT_EVAL_METHOD
macro (set to 2 for FreeBSD/i386) and related float_t and double_t
types (which will both map to long double to represent the destination
of intermediate results otherwise beyond the user's control).  Given
the extended range of the exponent of a double or float in an i386 HW
FP register, I believe that is correct.

Regards,
Loren

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-current in the body of the message



Re: Lack of real long double support (was Re: libstdc++ does not contain fabsl symbol)

2002-10-28 Thread Loren James Rittle
In article [EMAIL PROTECTED],
Bruce Evans [EMAIL PROTECTED] writes:

 If you really want, I can tell RTH that FreeBSD/i386 absolutely wants
 `long double' to be:

 53 mantissa bits
 1024 max exponent

 No need.  I prefer to keep the 53-bit precision for now, but fix the
 exponents.

OK.  This was the resolution that made sense to us on the gcc side
(RTH, quickly; myself, some time to get up to speed).  FYI, the system
header patch that was actually checked in claims 64-bit precision.  Oh
well, at least the exponents are now correct.  ;-)  FYI, here is the
exact test case which shows the new value for LDBL_EPSILON (derived
directly from LDBL_MANT_DIG) is wrong versus the hardware default:

#include <float.h>
#include <stdlib.h>

/* Force run-time computation of the math (beware, inlining oneld will defeat this). */
long double oneld () { return 1.0L; }

int
main (void)
{
  long double a = oneld ();
  long double b = oneld () + LDBL_EPSILON;
  if (a == b) abort ();
  return 0;
}

On ref5 against gcc 3.2.X (as installed as system compiler), with the
latest float.h, it aborts.  By specification, this is one FP test
involving exact equality that is guaranteed to work (the comparison
must evaluate false in this case).

 Hopefully the precision won't be hard-coded into gcc in such
 a way as to completely break changing the precision at runtime.  I think
 it should affect (only) constant folding.  The issues are very similar
 to the ones with changing the rounding mode at runtime (C99 compilers
 shouldn't do constant folding in #pragma STDC FENV_ACCESS ON sections
 if the correct result might depend on the FP environment).

This exchange has been quite useful.  I see that issue.
Unfortunately, changing precision of the FP hardware would seem to
change the value of epsilon that is exported, both in classic C
headers and C++ limits (the latter being how I originally took any
interest in this matter since a C++ test case started failing after
the new real.c code was installed).

gcc 3.3 will support a framework in which such changes would be easy
to convey at compile-time but, to my knowledge, there is no support
yet to obtain e.g. the precision setting at run-time.  I.e. float.h
is now completely dynamically created at compile-time based on the
exact knowledge within gcc of the FP hardware; but it is static
w.r.t. eventual run-time.  It does not know how to effectively export
a function a la FreeBSD/alpha's float.h:

#define FLT_ROUNDS  __flt_rounds()

One issue: the standard says that various macros related to float
limits are constant expressions (as may be used to influence the
preprocessor?).  The above construct doesn't conform, but I understand
the intent.

I will advise RTH about that type of issue.  Fortunately, in this
case, I think advertising 53-bit precision but really using 64-bit
precision (i.e. application code overrode system default) doesn't
invalidate the advertised epsilon, in terms of how it is used by the
application.

More generally, I will ask if gcc can support these details as gained
from the run-time environment instead of hard-coded defaults.  This
would be useful for all free OSes not just FreeBSD running on hardware
with such flexible FP hardware.

Any more comments, before I start work on the gcc mainline side of
things?

Thanks,
Loren




Re: Lack of real long double support (was Re: libstdc++ does not contain fabsl symbol)

2002-10-28 Thread Terry Lambert
Loren James Rittle wrote:
 I will advise RTH about that type of issue.  Fortunately, in this
 case, I think advertising 53-bit precision but really using 64-bit
 precision (i.e. application code overrode system default) doesn't
 invalidate the advertised epsilon, in terms of how it is used by the
 application.
 
 More generally, I will ask if gcc can support these details as gained
 from the run-time environment instead of hard-coded defaults.  This
 would be useful for all free OSes not just FreeBSD running on hardware
 with such flexible FP hardware.
 
 Any more comments, before I start work on the gcc mainline side of
 things?

Claiming 53 bits but supporting 64, and then not raising an exception
and/or giving a NaN or INF result on overflow to the 54th bit is
broken.  If you do this, you will fail runtime validation suites.

The C99 standard intentionally gets its information from float.h ...
rather than its float.h from the information, as you are doing ...
because this is a combination host OS *and* compiler issue.

I don't think that any amount of hand-waving will turn it into a
compiler-only issue.

I'd like to see Bruce's issues with the 64 bit support taken care
of with long double (and the implicit cast that occurs in the one
case that Bruce complained about in his email, where there is *too
much* precision on the rvalue, which is a computation of double
operands done in long double form, with a double result).

-- Terry




Re: Lack of real long double support (was Re: libstdc++ does not contain fabsl symbol)

2002-10-25 Thread Bruce Evans
On Thu, 24 Oct 2002, Loren James Rittle wrote:

  ...  Anyways, that work exposed some issues.
 ...
 It is easy to generate, with arithmetic, a long double value outside
 the *exponent* range above no matter how the precision is set; it is
 not truncated to Inf until it is actually cast to a double (as is
 currently done just before a long double is printed with stdio).  See
 below for a program that demonstrates this behavior.

  Yet, the FP hardware is actually configured by default to provide
  `long double' as:

  #define LDBL_MANT_DIG   53

  I think you mean 64 (the hardware default).

 No sir, I did mean 53: as configured in the system startup code, it is
 forced to 53 bits (that choice affects long double as well as double).
 But there are no IEEE control bits to limit the size of the exponent
 field, are there?  Thus, the following describes the exact size of the
 HW's exponent field of a long double as configured by default:

  #define LDBL_MIN_EXP (-16381)
  #define LDBL_MAX_EXP 16384

  gcc can't actually support the full range, since it doesn't control
  the runtime environment (it could issue a fninit before main() to
  change the default, but it shouldn't and doesn't).  The exponent
  range is lost long before printf() is reached.  E.g.,

  long double x = DBL_MAX;
  long double y = 2 * x;

  gives +Inf for y since the result doesn't fit in 53-bit precision.

 As you know, the selection of maximum precision is orthogonal to the
 number of bits used for the exponent.

 I do wish you were correct.  Have you looked at the raw memory of y?
 It is *not* the bit pattern for +Inf.  If y were +Inf, then based on
 the properties of IEEE math, the following would report Inf not
 DBL_MAX/2:

Oops.  I did look at the bits, but I looked more closely at gdb's display
of the value, which is broken (it says +inf).  Apparently gdb uses the
host's FP too much.

 #include <float.h>
 #include <stdio.h>
 int main (void)
 {
   long double x = LDBL_MAX;  // Same as DBL_MAX in current float.h
   long double y = 2e100 * x; // If LDBL_MAX were correct, we should latch Inf.
   long double z = y / 4e100;
   printf ("%Lg\n", z);
   return 0;
 }

 It does, in fact, report DBL_MAX/2 with the system compiler on FreeBSD
 4.  FYI, I reviewed the generated code to ensure it was using run-time
 calculations not compile-time calculations.  I'd call that a rather
 easy time getting a normalized long double much larger than
 LDBL_MAX/DBL_MAX.  This test alone proves in my mind that the
 float.h system header for i386 is itself wrong.

Yes, this example is as convincing as examining the (right :) bits.

  The system header correctly reports this default precision.  Any header
  generated by the gcc build should be no different, since the build should
  run in the target environment.

 I agree that the precision setting in the system header is fine.  The
 size of the exponent field is buggy.  I held the opinion you have but
 while trying to convince RTH (the guy that rewrote gcc FP support), I
 did enough research that also leads me to think that it is the header
 itself which is buggy.

 If you really want, I can tell RTH that FreeBSD/i386 absolutely wants
 `long double' to be:

 53 mantissa bits
 1024 max exponent

No need.  I prefer to keep the 53-bit precision for now, but fix the
exponents.  Hopefully the precision won't be hard-coded into gcc in such
a way as to completely break changing the precision at runtime.  I think
it should affect (only) constant folding.  The issues are very similar
to the ones with changing the rounding mode at runtime (C99 compilers
shouldn't do constant folding in #pragma STDC FENV_ACCESS ON sections
if the correct result might depend on the FP environment).

 It should use whatever is the default format for the host environment.
 It still has enquire.c for doing this automatically.  [...]

 I fear I didn't explain clearly enough.  enquire.c is completely
 *gone* in gcc 3.3.  This is why gcc needs to fully understand the
 exact FP default configuration.  As you noted, enquire.c was buggy.

Ah.  I was a little surprised to find it still in 3.2.  It is not so
much buggy as dysfunctional in a cross compiler.  It has to run on the
target somehow to work.

Bruce





Re: Lack of real long double support (was Re: libstdc++ does not contain fabsl symbol)

2002-10-24 Thread Bruce Evans
On Thu, 24 Oct 2002, Loren James Rittle wrote:

 ...  Anyways, that work exposed some issues.

 We have this in the system header:

 #define LDBL_MANT_DIG   DBL_MANT_DIG
 #define LDBL_MIN_EXP    DBL_MIN_EXP
 #define LDBL_MAX_EXP    DBL_MAX_EXP
 [...]

This seems to be correct.  Long double precision is not really supported
either at compile time or runtime.  The default precision (on i386's)
is 53 bits, so (normalized) long doubles have a hard time getting any
larger than DBL_MAX or any smaller than DBL_MIN (you can create them
as bits or using a hardware transcendental function, but any arithmetic
on them will convert them to double precision).

 Yet, the FP hardware is actually configured by default to provide
 `long double' as:

 #define LDBL_MANT_DIG   53

I think you mean 64 (the hardware default).

 #define LDBL_MIN_EXP (-16381)
 #define LDBL_MAX_EXP 16384

 Indeed, FP code using long double generated by gcc 2.95.X, installed
 as the FreeBSD 4 system compiler, uses the full exponent range of
 -16381 to 16384.  However, printf(), etc. lose on that exponent range
 and report Inf.  I suspect this is why the system header misreports
 the FP hardware, thus the header describes the largest printable long
 double value, not the largest that could be held in a calculation.

gcc can't actually support the full range, since it doesn't control
the runtime environment (it could issue a fninit before main() to
change the default, but it shouldn't and doesn't).  The exponent
range is lost long before printf() is reached.  E.g.,

long double x = DBL_MAX;
long double y = 2 * x;

gives +Inf for y since the result doesn't fit in 53-bit precision.
The system header correctly reports this default precision.  Any header
generated by the gcc build should be no different, since the build should
run in the target environment.

 Anyways, two questions for FreeBSD maintainers.  How should gcc, as
 provided from the FSF, describe the long double FP format for
 FreeBSD/i386 4.x?  Shall we assume that no changes for FreeBSD 5.x
 will occur?

It should use whatever is the default format for the host environment.
It still has enquire.c for doing this automatically.  I don't know
when enquire.c actually gets used.  The FreeBSD build just ignores it
and uses src/sys/${MACHINE_ARCH}/float.h.  My first encounter with this
bug suite was near enquire.c 12-14 years ago.  enquire gave different
results for -msoft-float and hardware FP because my soft-float libraries
only had 53-bit precision but hardware FP used 64-bit precision.

 ...
 However, that also brings to mind: Are we happy with the current
 default FP hardware setup especially if support for long double is to
 be improved?  At the moment we support long double in name only.
 There is no extra dynamic range over a double, as might be implied
 (and allowed by the hardware).

Not really.  Assembly is still required to get full control over precision.
I'm still waiting for (C) language support to catch up.  Been waiting for
about 14 years so far.  C99 clarified the semantics of floating point
precision but is not supported by gcc (yet?).

 Also, as an aside, having just learned everything there is to know
 about IEEE FP from a witty 80-slide presentation on the WWW by one of
 the original designers of the spec@Berkeley, I'd say that he would be
 sad that we default to use 53-bit precision mode.  I'd say that it is
 dumb to claim we even support long double unless we change that to
 64-bit precision mode...

I think he would be even unhappier with gcc's support for it.  We default
to 53-bit precision partly because Kahan's (his?) paranoia(1) is unhappy
with 64-bit precision.

 Regarding the comment in sys/i386/include/npx.h:

  * 64-bit precision often gives bad results with high level languages
  * because it makes the results of calculations depend on whether
  * intermediate values are stored in memory or in FPU registers.

 This is pure bunk when considered in broad context.  Using 53-bit
 precision mode with high level languages while making actual
 calculations yields more outright undetectable precision-related
 errors due to algorithm design by non-numerical analysts.  I know
 which error I'd rather *not* find in my naively-written and compiled
 FP code.

I think it makes no provably significant difference for naively-written
code.  I'd rather get the same (perhaps wrong) results on all machines
from naively-written code.

 People that write their FP code to correctly use epsilon (the one
 appropriate for the type of the float they are using) should never see
 this problem, no?

No.  They would see strange behaviour caused by compiler bugs.
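For what it's worth, "correctly using epsilon" in the sense above can be sketched like this (the helper names are mine, not from any FreeBSD header):

```c
#include <float.h>

/* Magnitude-relative comparison: treat two doubles as equal when they
   differ by at most `ulps` epsilon-sized steps relative to the larger
   operand.  Code written this way does not hinge on whether an
   intermediate happened to carry 53 or 64 bits of precision.  */
static double
absd (double x)
{
  return x < 0.0 ? -x : x;
}

static int
nearly_equal (double a, double b, double ulps)
{
  double scale = absd (a) > absd (b) ? absd (a) : absd (b);
  return absd (a - b) <= ulps * DBL_EPSILON * scale;
}
```

For example, nearly_equal (0.1 + 0.2, 0.3, 4.0) holds even though 0.1 + 0.2 == 0.3 does not.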

 I don't see why FreeBSD should cater to people that
 don't know how to write FP code properly (i.e. anyone that expected
 exact results across compilation switches, etc) at the expense of
 being able to write code that fully utilizes the best mode available
 from the CPU.

IEEE754 provides many guarantees about 

Re: Lack of real long double support (was Re: libstdc++ does not contain fabsl symbol)

2002-10-24 Thread Loren James Rittle
Thanks for the quick answer, Bruce.  Based on the statement: ``It
should use whatever is the default format for the host environment''
and the statements that make it clear gcc isn't changing its default
precision setting anytime soon, I think I now know how to make a
correct patch for the FSF gcc mainline.  More details in line below.

Regards,
Loren

 ...  Anyways, that work exposed some issues.

 We have this in the system header:

 #define LDBL_MANT_DIG   DBL_MANT_DIG
 #define LDBL_MIN_EXP    DBL_MIN_EXP
 #define LDBL_MAX_EXP    DBL_MAX_EXP
 [...]

 This seems to be correct.  Long double precision is not really supported
 either at compile time or runtime.  The default precision (on i386's)
 is 53 bits, so (normalized) long doubles have a hard time getting any
 larger than DBL_MAX or any smaller than DBL_MIN (you can create them
 as bits or using a hardware transcendental function, but any arithmetic
 on them will convert them to double precision).

It is easy to generate, with arithmetic, a long double value outside
the *exponent* range above no matter how the precision is set; it is
not truncated to Inf until it is actually cast to a double (as is
currently done just before a long double is printed with stdio).  See
below for a program that demonstrates this behavior.

 Yet, the FP hardware is actually configured by default to provide
 `long double' as:

 #define LDBL_MANT_DIG   53

 I think you mean 64 (the hardware default).

No sir, I did mean 53: as configured in the system startup code, it is
forced to 53 bits (that choice affects long double as well as double).
But there are no IEEE control bits to limit the size of the exponent
field, are there?  Thus, the following describes the exact size of the
HW's exponent field of a long double as configured by default:

 #define LDBL_MIN_EXP (-16381)
 #define LDBL_MAX_EXP 16384

 gcc can't actually support the full range, since it doesn't control
 the runtime environment (it could issue a fninit before main() to
 change the default, but it shouldn't and doesn't).  The exponent
 range is lost long before printf() is reached.  E.g.,

   long double x = DBL_MAX;
   long double y = 2 * x;

 gives +Inf for y since the result doesn't fit in 53-bit precision.

As you know, the selection of maximum precision is orthogonal to the
number of bits used for the exponent.

I do wish you were correct.  Have you looked at the raw memory of y?
It is *not* the bit pattern for +Inf.  If y were +Inf, then based on
the properties of IEEE math, the following would report Inf not
DBL_MAX/2:

#include <float.h>
#include <stdio.h>
int main (void)
{
  long double x = LDBL_MAX;  // Same as DBL_MAX in current float.h
  long double y = 2e100 * x; // If LDBL_MAX were correct, we should latch Inf.
  long double z = y / 4e100;
  printf ("%Lg\n", z);
  return 0;
}

It does, in fact, report DBL_MAX/2 with the system compiler on FreeBSD
4.  FYI, I reviewed the generated code to ensure it was using run-time
calculations not compile-time calculations.  I'd call that a rather
easy time getting a normalized long double much larger than
LDBL_MAX/DBL_MAX.  This test alone proves in my mind that the
float.h system header for i386 is itself wrong.

 The system header correctly reports this default precision.  Any header
 generated by the gcc build should be no different, since the build should
 run in the target environment.

I agree that the precision setting in the system header is fine.  The
size of the exponent field is buggy.  I held the opinion you have but
while trying to convince RTH (the guy that rewrote gcc FP support), I
did enough research that also leads me to think that it is the header
itself which is buggy.

If you really want, I can tell RTH that FreeBSD/i386 absolutely wants
`long double' to be:

53 mantissa bits
1024 max exponent

And we can let the chips fall where they may, but that is a lie versus
the hardware setup, just to match a buggy system header.  Attempting to
get that patch in was what triggered my research and first post.

 Anyways, two questions for FreeBSD maintainers.  How should gcc, as
 provided from the FSF, describe the long double FP format for
 FreeBSD/i386 4.x?  Shall we assume that no changes for FreeBSD 5.x
 will occur?

 It should use whatever is the default format for the host environment.
 It still has enquire.c for doing this automatically.  [...]

I fear I didn't explain clearly enough.  enquire.c is completely
*gone* in gcc 3.3.  This is why gcc needs to fully understand the
exact FP default configuration.  As you noted, enquire.c was buggy.

 I think he would be even unhappier with gcc's support for it.  We default
 to 53-bit precision partly because Kahan's (his?) paranoia(1) is unhappy
 with 64-bit precision.

I can't disagree with the first statement.  However, support for FP in
gcc has just been rewritten and the paranoia test was pulled into gcc.
Perhaps it is time to revisit some past assumptions related to FP when
gcc 3.3 is imported as the system compiler.  And of