On 13-07-02 01:26 PM, Gregory Maxwell wrote:

> $ gcc -o a.s -S a.c -m32
> $ cat a.s | grep fstpt
>         fstpt   (%eax)
> $ ./a
> 12
> 
> At least ten years ago, I knew people who were doing numerical work
> where they were still purchasing itanium hardware on the basis of long
> double performance.  I suppose now most of those people would spend
> more cycles getting their algorithms to behave well with doubles—
> considering the performance gaps... so it indeed may be irrelevant.
> But 80 bit floating point is certainly a real, if non-standard, thing!

Huh. My mistake! I thought 'long double' usually mapped to f128, with
softfp when it wasn't supported in hardware. Shows how much numerical
code I write.
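
(For anyone reproducing that: a.c wasn't included in the mail, so the
file below is my guess at something that produces the same output, not
Gregory's actual code.)

  /* a.c -- reconstructed, not the original */
  #include <stdio.h>

  /* Storing a long double through a pointer is what emits the
     80-bit x87 store, fstpt, under -m32. */
  void store(long double *p, long double v)
  {
      *p = v;
  }

  int main(void)
  {
      /* gcc -m32 pads the 80-bit format out to 12 bytes, hence "12". */
      printf("%d\n", (int)sizeof(long double));
      return 0;
  }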

  https://en.wikipedia.org/wiki/Long_double

Quite the variety of representations. I thought they never reified f80
into a first-class memory type. Bummer. Yup, there it is:

  FSTP mem32real D9 /3
  FSTP mem64real DD /3
  FSTP mem80real DB /7

How unfortunate.
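
As a sanity check on that zoo, the <float.h> macros at least make it
easy to see which representation a given target picked. Quick probe
below; the mantissa-digit classification is just my shorthand for the
common cases, not anything from the standard:

  #include <stdio.h>
  #include <float.h>

  int main(void)
  {
      printf("sizeof(long double) = %d\n", (int)sizeof(long double));
      printf("LDBL_MANT_DIG       = %d\n", LDBL_MANT_DIG);

      if (LDBL_MANT_DIG == 64)
          puts("x87 80-bit extended (the fstpt case)");
      else if (LDBL_MANT_DIG == 113)
          puts("IEEE binary128 (often softfp)");
      else if (LDBL_MANT_DIG == 106)
          puts("IBM double-double");
      else if (LDBL_MANT_DIG == 53)
          puts("plain double");
      else
          puts("something else entirely");
      return 0;
  }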

> (and absent people saying they need it, it isn't like it would be the
> sort of thing that would be hard to add to a later version of the
> language)

No, it's conceivable, just ... sigh. I would not want to go there. I
guess it also makes sense on pre-SSE x86 targets, however long those
live on.

Anyway, seeing as how you have more insight and experience here, perhaps
you can settle the broader question: does it actually ever make sense to
code "best effort" floating point and let the compiler pick the best
precision it can get on the target?
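
(The closest existing answer I know of is C99's FLT_EVAL_METHOD plus
the float_t / double_t typedefs in <math.h>, where the implementation
picks the width it prefers to evaluate in. A small sketch, only to make
the question concrete, nothing Rust-specific:)

  #include <stdio.h>
  #include <float.h>
  #include <math.h>

  int main(void)
  {
      /* FLT_EVAL_METHOD: 0 = each type at its own width,
         1 = float/double evaluated as double,
         2 = everything evaluated as long double (classic x87). */
      printf("FLT_EVAL_METHOD  = %d\n", FLT_EVAL_METHOD);
      printf("sizeof(float_t)  = %d\n", (int)sizeof(float_t));
      printf("sizeof(double_t) = %d\n", (int)sizeof(double_t));
      return 0;
  }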

-Graydon
